A Design Study of Co-Splitting as Situated in the Equipartitioning Learning Trajectory
ERIC Educational Resources Information Center
Corley, Andrew Kent
2013-01-01
The equipartitioning learning trajectory (Confrey, Maloney, Nguyen, Mojica & Myers, 2009) has been hypothesized and the proficiency levels have been validated through much prior work. This study solidifies understanding of the upper level of co-splitting, which has been redefined through further clinical interview work (Corley, Confrey &…
The two Faces of Equipartition
NASA Astrophysics Data System (ADS)
Sanchez-Sesma, F. J.; Perton, M.; Rodriguez-Castellanos, A.; Campillo, M.; Weaver, R. L.; Rodriguez, M.; Prieto, G.; Luzon, F.; McGarr, A.
2008-12-01
Equipartition is good. Beyond its philosophical implications, in many instances of statistical physics it implies that the available kinetic and potential elastic energy, in phase space, is distributed in the same fixed proportions among the possible "states". There are at least two distinct and complementary descriptions of such states in a diffuse elastic wave field u(r,t). One asserts that u may be represented as an incoherent isotropic superposition of incident plane waves of different polarizations. Each type of wave has an appropriate share of the available energy. This definition, introduced by Weaver, is similar to the room acoustics notion of a diffuse field, and it suffices to permit prediction of field correlations. The other description assumes that the degrees of freedom of the system, in this case the kinetic energy densities, are all incoherently excited with equal expected amplitude. This definition, introduced by Maxwell, is also familiar from room acoustics using the normal modes of vibration within an arbitrarily large body. Usually, to establish whether an elastic field is diffuse and equipartitioned, only the first description has been applied, which requires the separation of dilatational and shear waves using carefully designed experiments. When the medium is bounded by an interface, waves of other modes, for example Rayleigh waves, complicate the measurement of these energies. As a consequence, it can be advantageous to use the second description. Moreover, when an elastic field is diffuse and equipartitioned, each spatial component of the energy densities is linked to the corresponding component of the imaginary part of the Green function at the source. Accordingly, one can use the second description to retrieve the Green function and obtain more information about the medium. The equivalence between the two descriptions of equipartition is given for an infinite space and extended to the case of a half-space. These two descriptions are equivalent thanks to the…
ERIC Educational Resources Information Center
Confrey, Jere; Maloney, Alan
2015-01-01
Design research studies provide significant opportunities to study new innovations and approaches and how they affect the forms of learning in complex classroom ecologies. This paper reports on a two-week long design research study with twelve 2nd through 4th graders using curricular materials and a tablet-based diagnostic assessment system, both…
Observation of equipartition of seismic waves.
Hennino, R; Trégourès, N; Shapiro, N M; Margerin, L; Campillo, M; van Tiggelen, B A; Weaver, R L
2001-04-01
Equipartition is a first principle in wave transport, based on the tendency of multiple scattering to homogenize phase space. We report observations of this principle for seismic waves created by earthquakes in Mexico. We find qualitative agreement with an equipartition model that accounts for mode conversions at the Earth's surface.
Green's function calculation from equipartition theorem.
Perton, Mathieu; Sánchez-Sesma, Francisco José
2016-08-01
A method is presented to calculate the elastodynamic Green's functions by using the equipartition principle. The imaginary parts are calculated as the average cross correlations of the displacement fields generated by the incidence of body and surface waves with amplitudes weighted by partition factors. The real part is retrieved using the Hilbert transform. The calculation of the partition factors is discussed for several geometrical configurations in two dimensional space: the full-space, a basin in a half-space and for layered media. For the last case, it results in a fast computation of the full Green's functions. Additionally, if the contribution of only selected states is desired, as for instance the surface wave part, the computation is even faster. Its use for full waveform inversion may then be advantageous.
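The real-part retrieval via the Hilbert transform is a standard causality (Kramers-Kronig) step: for a response function analytic in the upper half-plane, the real part of the spectrum is minus the Hilbert transform of the imaginary part, in scipy's sign convention. A minimal numerical sketch of that step alone, using an assumed toy causal relaxation response G(ω) = 1/(γ − iω) rather than the paper's elastodynamic Green's functions:

```python
import numpy as np
from scipy.signal import hilbert

# Toy causal relaxation response G(w) = 1/(gamma - i*w) (assumed example):
# Re G = gamma/(gamma^2 + w^2), Im G = w/(gamma^2 + w^2).
gamma = 1.0
w = np.linspace(-200.0, 200.0, 2**14)
re_exact = gamma / (gamma**2 + w**2)
im_part = w / (gamma**2 + w**2)

# For a spectrum analytic in the upper half-plane, Re G = -H[Im G],
# where H is the Hilbert transform (scipy's FFT-based convention).
re_recovered = -np.imag(hilbert(im_part))

# Compare away from the grid edges, where FFT wrap-around errors live.
core = np.abs(w) < 5.0
max_err = np.max(np.abs(re_recovered[core] - re_exact[core]))
```

The sign in front of the Hilbert transform depends on the Fourier transform convention; with the opposite e^{-iωt} convention it flips.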
MODIFIED EQUIPARTITION CALCULATION FOR SUPERNOVA REMNANTS
Arbutina, B.; Urosevic, D.; Andjelic, M. M.; Pavlovic, M. Z.; Vukotic, B.
2012-02-10
Determination of the magnetic field strength in the interstellar medium is one of the more complex tasks of contemporary astrophysics. We can only estimate the order of magnitude of the magnetic field strength by using a few very limited methods. Besides the Zeeman effect and Faraday rotation, the equipartition or minimum-energy calculation is a widespread method for estimating magnetic field strength and energy contained in the magnetic field and cosmic-ray particles by using only the radio synchrotron emission. Despite its approximate character, it remains a useful tool, especially when there are no other data about the magnetic field in a source. In this paper, we give a modified calculation that we think is more appropriate for estimating magnetic field strengths and energetics in supernova remnants (SNRs). We present calculated estimates of the magnetic field strengths for all Galactic SNRs for which the necessary observational data are available. The Web application for calculation of the magnetic field strengths of SNRs is available at http://poincare.matf.bg.ac.rs/~arbo/eqp/.
The modified equipartition calculation for supernova remnants with the spectral index α = 0.5
NASA Astrophysics Data System (ADS)
Urošević, Dejan; Pavlović, Marko Z.; Arbutina, Bojan; Dobardžić, Aleksandra
2015-03-01
Recently, the modified equipartition calculation for supernova remnants (SNRs) has been derived by Arbutina et al. (2012). Their formulae can be used for SNRs with spectral indices 0.5 < α < 1. Here, by using approximately the same analytical method, we derive the equipartition formulae useful for SNRs with spectral index α = 0.5. These formulae represent the next upgrade of the Arbutina et al. (2012) derivation, because among the 30 Galactic SNRs with available observational parameters for the equipartition calculation, 16 have spectral index α = 0.5. For these 16 Galactic SNRs we calculated the magnetic field strengths, which are approximately 40 per cent higher than those calculated by using the Pacholczyk (1970) equipartition and similar to those obtained from the Beck & Krause (2005) calculation.
Turbulent equipartition theory of toroidal momentum pinch
Hahm, T. S.; Rewoldt, G.; Diamond, P. H.; Gurcan, O. D.
2008-05-15
The mode-independent part of the magnetic curvature driven turbulent convective (TurCo) pinch of the angular momentum density [Hahm et al., Phys. Plasmas 14, 072302 (2007)], which was originally derived from the gyrokinetic equation, can be interpreted in terms of the turbulent equipartition (TEP) theory. It is shown that the previous results can be obtained from the local conservation of 'magnetically weighted angular momentum density', nm_iU_∥R/B^2, and its homogenization due to turbulent flows. It is also demonstrated that the magnetic curvature modification of the parallel acceleration in the nonlinear gyrokinetic equation in the laboratory frame, which was shown to be responsible for the TEP part of the TurCo pinch of angular momentum density in the previous work, is closely related to the Coriolis drift coupling to the perturbed electric field. In addition, the origin of the diffusive flux in the rotating frame is highlighted. Finally, it is illustrated that there should be a difference in scalings between the momentum pinch originating from inherently toroidal effects and that coming from other mechanisms that exist in a simpler geometry.
Turbulent Equipartition Theory of Toroidal Momentum Pinch
T.S. Hahm, P.H. Diamond, O.D. Gurcan, and G. Rewoldt
2008-01-31
The mode-independent part of the magnetic curvature driven turbulent convective (TurCo) pinch of the angular momentum density [Hahm et al., Phys. Plasmas 14, 072302 (2007)], which was originally derived from the gyrokinetic equation, can be interpreted in terms of the turbulent equipartition (TEP) theory. It is shown that the previous results can be obtained from the local conservation of "magnetically weighted angular momentum density," nm_iU_∥R/B^2, and its homogenization due to turbulent flows. It is also demonstrated that the magnetic curvature modification of the parallel acceleration in the nonlinear gyrokinetic equation in the laboratory frame, which was shown to be responsible for the TEP part of the TurCo pinch of angular momentum density in the previous work, is closely related to the Coriolis drift coupling to the perturbed electric field. In addition, the origin of the diffusive flux in the rotating frame is highlighted. Finally, it is illustrated that there should be a difference in scalings between the momentum pinch originating from inherently toroidal effects and that coming from other mechanisms which exist in a simpler geometry.
NASA Astrophysics Data System (ADS)
Lee, Kurnchul; Venugopal, Vishnu; Girimaji, Sharath S.
2016-08-01
Return-to-isotropy and kinetic-potential energy equipartition are two fundamental pressure-moderated energy redistributive processes in anisotropic compressible turbulence. The pressure-strain correlation tensor redistributes energy among the various Reynolds stress components, and pressure-dilatation is responsible for energy reallocation between dilatational kinetic and potential energies. The competition and interplay between these pressure-based processes are investigated in this study. Direct numerical simulations (DNS) of low turbulent Mach number dilatational turbulence are performed employing the hybrid thermal lattice Boltzmann method (HTLBM). It is found that a tendency towards equipartition precedes the proclivity for isotropization. An evolution towards equipartition has a collateral but critical effect on return-to-isotropy. The preferential transfer of energy from strong (rather than weak) Reynolds stress components to potential energy accelerates the isotropization of dilatational fluctuations. Understanding of these pressure-based redistributive processes is critical for developing insight into the character of compressible turbulence.
Turbulent Equipartition Theory of Toroidal Momentum Pinch
NASA Astrophysics Data System (ADS)
Hahm, T. S.
2007-11-01
The turbulent convective flux (pinch) of the toroidal angular momentum density is derived using the nonlinear toroidal gyrokinetic equation which conserves phase space density and energy [1], and a novel pinch mechanism which originates from the symmetry breaking due to the magnetic field curvature is identified. A net parallel momentum transfer from the waves to the ion guiding centers is possible when the fluctuation intensity varies on the flux surface, resulting in imperfect cancellation of the curvature drift contribution to the parallel acceleration. This pinch velocity of the angular momentum density can also be understood as a manifestation of a tendency to homogenize the profile of ``magnetically weighted angular momentum density,'' nm_iU_∥R/B^2. This part of the pinch flux is mode-independent (whether it is TEM driven or ITG driven), and radially inward for fluctuations peaked at the low-B-field side, with a pinch velocity typically V^TEP_Ang ~ -2χ_φ/R_0. We compare and contrast the pinch of toroidal angular momentum with the now familiar ``turbulent equipartition'' (TEP) mechanism for the particle pinch [2], which exhibits some relevance in various L-mode plasmas in tokamaks. In our theoretical model [3], the TEP momentum pinch is shown to arise from the fact that, in a low-β tokamak equilibrium, B^2 u_E = cB×∇δφ is approximately incompressible, so that the magnetically weighted angular momentum density (m_inU_∥R/B^2) is locally advected by fluctuating E×B velocities, to the lowest order in O(a/R). As a consequence, m_inU_∥R/B^2 is mixed or homogenized, so that ∂_ψ⟨m_inU_∥R/B^2⟩ → 0. [1] T.S. Hahm, Phys. Fluids 31, 2670 (1988) [2] V.V. Yankov, JETP Lett. 60, 171 (1994); M.B. Isichenko et al., Phys. Rev. Lett. 74, 4436 (1995); X. Garbet et al., Phys. Plasmas 12, 082511 (2005). [3] T.S. Hahm, P.H. Diamond, O. Gurcan, and G. Rewoldt, Phys. Plasmas 14, 072302 (2007).
A novel look at energy equipartition in globular clusters
NASA Astrophysics Data System (ADS)
Bianchini, P.; van de Ven, G.; Norris, M. A.; Schinnerer, E.; Varri, A. L.
2016-06-01
Two-body interactions play a major role in shaping the structural and dynamical properties of globular clusters (GCs) over their long-term evolution. In particular, GCs evolve towards a state of partial energy equipartition that induces a mass dependence in their kinematics. By using a set of Monte Carlo cluster simulations evolved in quasi-isolation, we show that the stellar mass dependence of the velocity dispersion σ(m) can be described by an exponential function σ^2 ∝ exp(-m/m_eq), with the parameter m_eq quantifying the degree of partial energy equipartition of the systems. This simple parametrization successfully captures the behaviour of the velocity dispersion at lower as well as higher stellar masses, that is, the regime where the system is expected to approach full equipartition. We find a tight correlation between the degree of equipartition reached by a GC and its dynamical state, indicating that clusters that are more than about 20 core relaxation times old have reached a maximum degree of equipartition. This equipartition-dynamical state relation can be used as a tool to characterize the relaxation condition of a cluster with a kinematic measure of the m_eq parameter. Conversely, the mass dependence of the kinematics can be predicted knowing the relaxation time solely on the basis of photometric measurements. Moreover, any deviation from this tight relation could be used as a probe of a peculiar dynamical history of a cluster. Finally, our novel approach is important for the interpretation of state-of-the-art Hubble Space Telescope proper motion data, for which the mass dependence of the kinematics can now be measured, and for the application of modelling techniques that take into consideration multimass components and mass segregation.
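The σ²(m) parametrization is straightforward to fit to kinematic data. A minimal sketch on mock data (all numbers invented for illustration), recovering m_eq by nonlinear least squares:

```python
import numpy as np
from scipy.optimize import curve_fit

def sigma2(m, s0sq, meq):
    """Partial-equipartition model: sigma^2 = sigma0^2 * exp(-m/m_eq)."""
    return s0sq * np.exp(-m / meq)

rng = np.random.default_rng(42)
m = np.linspace(0.1, 1.6, 30)      # stellar masses in Msun (assumed range)
# Mock dispersions from true parameters s0sq = 25, m_eq = 1.5, with 3% noise.
data = sigma2(m, 25.0, 1.5) * (1 + 0.03 * rng.standard_normal(m.size))

popt, _ = curve_fit(sigma2, m, data, p0=(20.0, 1.0))
s0sq_fit, meq_fit = popt
```

In practice one would fit binned dispersions of proper-motion or radial-velocity samples, with appropriate error weighting.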
Comment on Turbulent Equipartition Theory of Toroidal Momentum Pinch
Hahm, T. S.; Diamond, P. H.; Gurcan, O. D.; Rewoldt, G.
2009-03-12
This response demonstrates that the comment by Peeters et al. contains an incorrect and misleading interpretation of our paper [Hahm et al., Phys. Plasmas 15, 055902 (2008)] regarding the density gradient dependence of momentum pinch and the turbulent equipartition (TEP) theory.
Mass segregation in star clusters is not energy equipartition
NASA Astrophysics Data System (ADS)
Parker, Richard J.; Goodwin, Simon P.; Wright, Nicholas J.; Meyer, Michael R.; Quanz, Sascha P.
2016-06-01
Mass segregation in star clusters is often thought to indicate the onset of energy equipartition, where the most massive stars impart kinetic energy to the lower-mass stars and brown dwarfs/free-floating planets. The predicted net result of this is that the centrally concentrated massive stars should have significantly lower velocities than fast-moving low-mass objects on the periphery of the cluster. We search for energy equipartition in initially spatially and kinematically substructured N-body simulations of star clusters with N = 1500 stars, evolved for 100 Myr. In clusters that show significant mass segregation we find no differences in the proper motions or radial velocities as a function of mass. The kinetic energies of all stars decrease as the clusters relax, but the kinetic energies of the most massive stars do not decrease faster than those of lower-mass stars. These results suggest that dynamical mass segregation - which is observed in many star clusters - is not a signature of energy equipartition from two-body relaxation.
NON-EQUIPARTITION OF ENERGY, MASSES OF NOVA EJECTA, AND TYPE Ia SUPERNOVAE
Shara, Michael M.; Yaron, Ofer; Prialnik, Dina; Kovetz, Attay
2010-04-01
The total masses ejected during classical nova (CN) eruptions are needed to answer two questions with broad astrophysical implications: can accreting white dwarfs be 'pushed over' the Chandrasekhar mass limit to yield type Ia supernovae? Are ultra-luminous red variables a new kind of astrophysical phenomenon, or merely extreme classical novae? We review the methods used to determine nova ejecta masses. Except for the unique case of BT Mon (nova 1939), all nova ejecta mass determinations depend on untested assumptions and multi-parameter modeling. The remarkably simple assumption of equipartition between kinetic and radiated energy (E_kin and E_rad, respectively) in nova ejecta has been invoked as a way around this conundrum for the ultra-luminous red variable in M31. The deduced mass is far larger than that produced by any CN model. Our nova eruption simulations show that radiation and kinetic energy in nova ejecta are very far from being in energy equipartition, with variations of 4 orders of magnitude in the ratio E_kin/E_rad being commonplace. The assumption of equipartition must not be used to deduce nova ejecta masses; any such 'determinations' can be overestimates by a factor of up to 10,000. We data-mined our extensive series of nova simulations to search for correlations that could yield nova ejecta masses. Remarkably, the mass ejected during a nova eruption is dependent only on (and is directly proportional to) E_rad. If we measure the distance to an erupting nova and its bolometric light curve, then E_rad and hence the mass ejected can be directly measured.
NASA Technical Reports Server (NTRS)
Freed, Alan D.
1996-01-01
There are many aspects to consider when designing a Rosenbrock-Wanner-Wolfbrandt (ROW) method for the numerical integration of ordinary differential equations (ODE's) solving initial value problems (IVP's). The process can be simplified by constructing ROW methods around good Runge-Kutta (RK) methods. The formulation of a new, simple, embedded, third-order, ROW method demonstrates this design approach.
RADIUS CONSTRAINTS AND MINIMAL EQUIPARTITION ENERGY OF RELATIVISTICALLY MOVING SYNCHROTRON SOURCES
Barniol Duran, Rodolfo; Piran, Tsvi; Nakar, Ehud
2013-07-20
A measurement of the synchrotron self-absorption flux and frequency provides tight constraints on the physical size of the source and a robust lower limit on its energy. This lower limit is also a good estimate of the magnetic field and electrons' energy, if the two components are at equipartition. This well-known method has been used for decades to study numerous astrophysical sources moving at non-relativistic (Newtonian) speeds. Here, we generalize the Newtonian equipartition theory to sources moving at relativistic speeds, including the effect of the deviation from spherical symmetry expected in such sources. As in the Newtonian case, minimization of the energy provides an excellent estimate of the emission radius and yields a useful lower limit on the energy. We find that the application of the Newtonian formalism to a relativistic source would yield a smaller emission radius, and would generally yield a larger lower limit on the energy (within the observed region). For sources where the synchrotron self-Compton component can be identified, the minimization of the total energy is not necessary and we present an unambiguous solution for the parameters of the system.
Do open star clusters evolve towards energy equipartition?
NASA Astrophysics Data System (ADS)
Spera, Mario; Mapelli, Michela; Jeffries, Robin D.
2016-07-01
We investigate whether open clusters (OCs) tend to energy equipartition, by means of direct N-body simulations with a broken power-law mass function. We find that the simulated OCs become strongly mass segregated, but the local velocity dispersion does not depend on the stellar mass for most of the mass range: the curve of the velocity dispersion as a function of mass is nearly flat even after several half-mass relaxation times, regardless of the adopted stellar evolution recipes and Galactic tidal field model. This result holds both if we start from virialized King models and if we use clumpy sub-virial initial conditions. The velocity dispersion of the most massive stars and stellar remnants tends to be higher than the velocity dispersion of the lighter stars. This trend is particularly evident in simulations without stellar evolution. We interpret this result as a consequence of the strong mass segregation, which leads to Spitzer's instability. Stellar winds delay the onset of the instability. Our simulations strongly support the result that OCs do not attain equipartition, for a wide range of initial conditions.
MODIFIED EQUIPARTITION CALCULATION FOR SUPERNOVA REMNANTS. CASES α = 0.5 AND α = 1
Arbutina, B.; Urošević, D.; Vučetić, M. M.; Pavlović, M. Z.; Vukotić, B.
2013-11-01
The equipartition or minimum energy calculation is a well-known procedure for estimating the magnetic field strength and the total energy in the magnetic field and cosmic ray particles by using only the radio synchrotron emission. In one of our previous papers, we have offered a modified equipartition calculation for supernova remnants (SNRs) with spectral indices 0.5 < α < 1. Here we extend the analysis to SNRs with α = 0.5 and α = 1.
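For orientation, the logic of the classical minimum-energy argument (the textbook Pacholczyk-style version, not the modified formulae of this paper) can be sketched numerically: at fixed synchrotron luminosity the cosmic-ray energy scales as B^-3/2, the magnetic energy is VB^2/8π, and the total is minimized over B. The constants below are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative constants (cgs, invented): C sets the cosmic-ray energy scale
# at fixed synchrotron luminosity; V is the emitting volume.
C = 1.0e30   # erg G^(3/2)
V = 1.0e57   # cm^3

def total_energy(B):
    """Particle energy (~B^-3/2 at fixed luminosity) plus magnetic energy."""
    return C * B**-1.5 + V * B**2 / (8.0 * np.pi)

res = minimize_scalar(total_energy, bounds=(1e-9, 1e-3),
                      method="bounded", options={"xatol": 1e-12})
B_numeric = res.x

# Setting dE/dB = 0 gives B_min = (6*pi*C/V)^(2/7) analytically.
B_analytic = (6.0 * np.pi * C / V) ** (2.0 / 7.0)

# At the minimum, magnetic energy is exactly 3/4 of the particle energy.
ratio = (V * B_numeric**2 / (8 * np.pi)) / (C * B_numeric**-1.5)
```

That fixed 3/4 ratio at the minimum is why "minimum energy" and "equipartition" are used nearly interchangeably in this context.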
Wilson, David G.; Robinett, III, Rush D.
2012-02-21
A control system design method and concomitant control system comprising representing a physical apparatus to be controlled as a Hamiltonian system, determining elements of the Hamiltonian system representation which are power generators, power dissipators, and power storage devices, analyzing stability and performance of the Hamiltonian system based on the results of the determining step and determining necessary and sufficient conditions for stability of the Hamiltonian system, creating a stable control system based on the results of the analyzing step, and employing the resulting control system to control the physical apparatus.
Aircraft digital control design methods
NASA Technical Reports Server (NTRS)
Powell, J. D.; Parsons, E.; Tashker, M. G.
1976-01-01
Variations in design methods for aircraft digital flight control are evaluated and compared. The methods fall into two categories: those where the design is done in the continuous domain (or s plane) and those where the design is done in the discrete domain (or z plane). Design method fidelity is evaluated by examining closed-loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps, except the uncompensated s-plane design method, which was acceptable above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
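The gap between s-plane and z-plane designs can be seen directly in the pole mapping: a zero-order-hold discretization places the discrete poles exactly at z = e^(sT), while a bilinear (Tustin) mapping only approximates that. A sketch using scipy, with an invented lightly damped second-order plant sampled at 20 cps:

```python
import numpy as np
from scipy.signal import cont2discrete

# Invented plant: x'' + 2*zeta*w0*x' + w0^2*x = u, lightly damped.
w0, zeta = 2 * np.pi, 0.1
A = np.array([[0.0, 1.0], [-w0**2, -2 * zeta * w0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
dt = 1 / 20.0  # 20 samples per second

Ad_zoh, *_ = cont2discrete((A, B, C, D), dt, method="zoh")
Ad_tus, *_ = cont2discrete((A, B, C, D), dt, method="bilinear")

s_poles = np.linalg.eigvals(A)
z_exact = np.sort_complex(np.exp(s_poles * dt))    # exact map z = exp(s*T)
z_zoh = np.sort_complex(np.linalg.eigvals(Ad_zoh))  # matches z_exact
z_tus = np.sort_complex(np.linalg.eigvals(Ad_tus))  # close, but not exact
```

At slower sample rates the Tustin poles drift further from z = e^(sT), which is the fidelity loss the study quantifies.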
On the Equipartition of Kinetic Energy in an Ideal Gas Mixture
ERIC Educational Resources Information Center
Peliti, L.
2007-01-01
A refinement of an argument due to Maxwell for the equipartition of translational kinetic energy in a mixture of ideal gases with different masses is proposed. The argument is elementary, yet it may work as an illustration of the role of symmetry and independence postulates in kinetic theory. (Contains 1 figure.)
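Maxwell's conclusion is easy to check numerically: random elastic collisions between two species of unequal mass drive the mean translational kinetic energy per particle to a common value. A minimal Monte Carlo sketch (particle counts, masses, and initial conditions invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000                      # particles per species
m1, m2 = 1.0, 4.0             # two masses (arbitrary units)
# Start far from equipartition: same velocity scale for both species,
# so the heavy species begins with 4x the mean kinetic energy.
v1 = rng.standard_normal((n, 3))
v2 = rng.standard_normal((n, 3))
e_before = 0.5 * m1 * (v1**2).sum() + 0.5 * m2 * (v2**2).sum()

def collide(va, vb, ma, mb):
    """Elastic collision: scatter the relative velocity to a random direction,
    conserving momentum and kinetic energy."""
    vcm = (ma * va + mb * vb) / (ma + mb)
    g = np.linalg.norm(va - vb)
    nhat = rng.standard_normal(3)
    nhat /= np.linalg.norm(nhat)
    return vcm + mb / (ma + mb) * g * nhat, vcm - ma / (ma + mb) * g * nhat

for _ in range(20 * n):       # random cross-species collisions
    i, j = rng.integers(n), rng.integers(n)
    v1[i], v2[j] = collide(v1[i], v2[j], m1, m2)

e_after = 0.5 * m1 * (v1**2).sum() + 0.5 * m2 * (v2**2).sum()
ke1 = 0.5 * m1 * (v1**2).sum(axis=1).mean()   # mean KE per light particle
ke2 = 0.5 * m2 * (v2**2).sum(axis=1).mean()   # mean KE per heavy particle
```

After roughly 20 collisions per particle the two per-particle mean kinetic energies agree to within statistical noise, while total energy is conserved exactly.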
Remarks on the Equipartition Rule and Thermodynamics of Reissner-Nordstrom Black Holes
NASA Astrophysics Data System (ADS)
Chen, Deyou
2014-07-01
In Verlinde's work, gravity is explained as an entropic force caused by changes in the information associated with the positions of material bodies. In this paper, we investigate the thermodynamic properties of Reissner-Nordstrom black holes from the equipartition rule and the holographic scenario. As a result, the first law of thermodynamics of the black holes is recovered.
NASA Astrophysics Data System (ADS)
Webb, Jeremy J.; Vesperini, Enrico
2016-10-01
We make use of N-body simulations to determine the relationship between two observable parameters that are used to quantify mass segregation and energy equipartition in star clusters. Mass segregation can be quantified by measuring how the slope of a cluster's stellar mass function α changes with clustercentric distance r, and then calculating δ_α = dα(r)/d ln(r/r_m), where r_m is the cluster's half-mass radius. The degree of energy equipartition in a cluster is quantified by η, which is a measure of how the stellar velocity dispersion σ depends on stellar mass m via σ(m) ∝ m^-η. Through a suite of N-body star cluster simulations with a range of initial sizes, binary fractions, orbits, black hole retention fractions, and initial mass functions, we present the co-evolution of δ_α and η. We find that measurements of the global η are strongly affected by the radial dependence of σ and the mean stellar mass, and that the relationship between η and δ_α depends mainly on the cluster's initial conditions and the tidal field. Within r_m, where these effects are minimized, we find that η and δ_α initially share a linear relationship. However, once the degree of mass segregation increases such that the radial dependence of σ and the mean stellar mass become a factor within r_m, or the cluster undergoes core collapse, the relationship breaks down. We propose a method for determining η within r_m from an observational measurement of δ_α. In cases where η and δ_α can be measured independently, this new method offers a way of measuring the cluster's dynamical state.
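The η measurement itself can be sketched on mock data: draw stars whose velocity dispersion follows σ(m) ∝ m^-η, bin by mass, and recover η as minus the log-log slope of the binned dispersions (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(7)
N, eta_true = 40000, 0.25
m = rng.uniform(0.2, 1.4, N)            # stellar masses (Msun, assumed range)
sig = 8.0 * m**(-eta_true)              # mass-dependent dispersion
v = rng.standard_normal(N) * sig        # one velocity component per star

# Bin stars by mass and measure the dispersion in each bin.
bins = np.linspace(0.2, 1.4, 13)
mid = 0.5 * (bins[1:] + bins[:-1])
sig_hat = np.array([v[(m >= lo) & (m < hi)].std()
                    for lo, hi in zip(bins[:-1], bins[1:])])

# eta is minus the slope of log(sigma) against log(m).
slope, _ = np.polyfit(np.log(mid), np.log(sig_hat), 1)
eta_fit = -slope
```

Real measurements additionally contend with the radial dependence of σ and mean stellar mass, which is exactly the complication the paper's within-r_m prescription addresses.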
Tsai, V.C.
2010-01-01
Recent derivations have shown that when noise in a physical system has its energy equipartitioned into the modes of the system, there is a convenient relationship between the cross correlation of time-series recorded at two points and the Green's function of the system. Here, we show that even when energy is not fully equipartitioned and modes are allowed to be degenerate, a similar (though less general) property holds for equations with wave equation structure. This property can be used to understand why certain seismic noise correlation measurements are successful despite known degeneracy and lack of equipartition on the Earth.
Modelling the structure of molecular clouds - I. A multiscale energy equipartition
NASA Astrophysics Data System (ADS)
Veltchev, Todor V.; Donkov, Sava; Klessen, Ralf S.
2016-07-01
We present a model for describing the general structure of molecular clouds (MCs) at early evolutionary stages in terms of their mass-size relationship. Sizes are defined through threshold levels at which equipartitions between gravitational, turbulent and thermal energy |W| ≈ f(E_kin + E_th) take place, adopting interdependent scaling relations of velocity dispersion and density and assuming a lognormal density distribution at each scale. Variations of the equipartition coefficient 1 ≤ f ≤ 4 allow for modelling of star-forming regions at scales within the size range of typical MCs (≳4 pc). Best fits are obtained for regions with low or no star formation (Pipe, Polaris) as well as for those with star-forming activity but with a nearly lognormal distribution of column density (Rosette). An additional numerical test of the model suggests its applicability to cloud evolutionary times prior to the formation of the first stars.
NASA Technical Reports Server (NTRS)
Parker, E. N.
1976-01-01
It has previously been inferred that small isolated flux tubes appearing in supergranule boundaries are compressed to 1500 gauss or more. This paper considers whether some dynamic condition within a flux tube exists which provides both stability and a 'mechanical advantage' so that a small force over a small period of time can accomplish the enormous compression from the weak-field to the strong-field state. It is found that the equipartition solutions to the hydromagnetic equations apparently may have the desired property of permitting an infinitesimal external pressure to convert a gentle flow of gas along a weak field into a very intense field through a succession of equipartition states. An illustrative example is presented, and field compression by convective forces is analyzed.
Stochastic Methods for Aircraft Design
NASA Technical Reports Server (NTRS)
Pelz, Richard B.; Ogot, Madara
1998-01-01
The global stochastic optimization method, simulated annealing (SA), was adapted and applied to various problems in aircraft design. The research was aimed at overcoming the problem of finding an optimal design in a space with multiple minima and the roughness ubiquitous to numerically generated nonlinear objective functions. SA was modified to reduce the number of objective function evaluations needed to reach an optimal design, historically the main criticism of stochastic methods. SA was applied to many CFD/MDO problems including: low sonic-boom bodies, minimum drag on supersonic fore-bodies, minimum drag on supersonic aeroelastic fore-bodies, minimum drag on HSCT aeroelastic wings, the FLOPS preliminary design code, another preliminary aircraft design study with vortex lattice aerodynamics, and complete HSR aircraft aerodynamics. In every case, SA provided a simple, robust and reliable optimization method which found optimal designs in roughly 100 objective function evaluations. Perhaps most importantly, from this academic/industrial project, technology has been successfully transferred; this method is the method of choice for optimization problems at Northrop Grumman.
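The core SA loop is compact. A minimal sketch on an invented one-dimensional objective with many local minima (not the aircraft-design objectives above): accept every downhill move, accept uphill moves with Boltzmann probability exp(-ΔE/T), and cool T geometrically.

```python
import math
import random

def simulated_annealing(f, x0, step=2.0, t0=2.0, cooling=0.999, iters=20000):
    """Minimize f(x) by random moves, accepting uphill steps with
    probability exp(-(f_new - f_old)/T) while T cools geometrically."""
    random.seed(3)            # deterministic for reproducibility
    x, fx, t = x0, f(x0), t0
    best_x, best_f = x, fx
    for _ in range(iters):
        cand = x + random.uniform(-step, step)
        fc = f(cand)
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling
    return best_x, best_f

# Rugged objective with many local minima; global minimum at x = 0.
rugged = lambda x: x * x + 10.0 * (1.0 - math.cos(3.0 * x))
xmin, fmin = simulated_annealing(rugged, x0=8.0)
```

A gradient-based local method started at x0 = 8 would stall in the nearest local basin; the thermal acceptance rule is what lets SA hop between basins before the temperature freezes the search.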
Design method of supercavitating pumps
NASA Astrophysics Data System (ADS)
Kulagin, V.; Likhachev, D.; Li, F. C.
2016-05-01
The problem of designing an effective supercavitating (SC) pump is solved, and the optimum load distribution along the radius of the blade is found, taking into account clearance, the degree of cavitation development, the influence of a finite number of blades, and centrifugal forces. Sufficient accuracy can be obtained by using the equivalent flat SC-grid for the design of any SC-mechanism, applying the “grid effect” coefficient and substituting the skewed flow calculated for grids of flat plates with infinite attached cavitation caverns. This article gives a universal design method and provides an example of SC-pump design.
Peeters, A. G.; Angioni, C.; Strintzi, D.
2009-03-15
The comment addresses questions raised on the derivation of the momentum pinch velocity due to the Coriolis drift effect [A. G. Peeters et al., Phys. Rev. Lett. 98, 265003 (2007)]. These concern the definition of the gradient, and the scaling with the density gradient length. It will be shown that the turbulent equipartition mechanism is included within the derivation using the Coriolis drift, with the density gradient scaling being the consequence of drift terms not considered in [T. S. Hahm et al., Phys. Plasmas 15, 055902 (2008)]. Finally the accuracy of the analytic models is assessed through a comparison with the full gyrokinetic solution.
Phenomenology treatment of magnetohydrodynamic turbulence with non-equipartition and anisotropy
Zhou, Y; Matthaeus, W H
2005-02-07
Magnetohydrodynamic (MHD) turbulence theory, often employed satisfactorily in astrophysical applications, has tended to focus on parameter ranges that imply nearly equal values of kinetic and magnetic energies and length scales. However, an MHD flow may have a disparate magnetic Prandtl number, dissimilar kinetic and magnetic Reynolds numbers, different kinetic and magnetic outer length scales, and strong anisotropy. Here a phenomenology for such ''non-equipartitioned'' MHD flow is discussed. Two conditions are proposed for an MHD flow to transition to strong turbulence, extensions of (1) Taylor's constant flux in an inertial range, and (2) Kolmogorov's scale separation between the large- and small-scale boundaries of an inertial range. For this analysis, detailed information on the turbulence structure is not needed. These two conditions for MHD transition are expected to provide consistent predictions and should be applicable to anisotropic MHD flows once the length scales are replaced by their corresponding perpendicular components. Second, it is stressed that the dynamics and anisotropy of MHD fluctuations are controlled by the relative strength between the straining effects of eddies of similar size and the sweeping action of the large eddies, or the propagation effect of the large-scale magnetic fields, on the small scales; analysis of this balance in principle also requires consideration of non-equipartition effects.
Brunt-Vaisala growth rate and the radial emergence of equipartition fields
NASA Astrophysics Data System (ADS)
D'Silva, S.
1995-04-01
It is believed that the dynamo operates in the overshoot region at the base of the solar convection zone (CZ), and that the magnetic features we see at the surface are formed when flux tubes rise through the CZ and appear at the photosphere. Studies of the dynamics of flux tubes have pointed out that 10 kG tubes, which are nearly in energy equipartition with the velocity field at the base of the CZ, are weakly buoyant and hence overwhelmed by the Coriolis force. They move parallel to the rotation axis and emerge at very high latitudes, well above the sunspot zone, which makes it difficult to explain the formation of sunspots. The influence of the Coriolis force was found to be overcome only if flux tubes were stronger than roughly 100 kG. The Brunt-Vaisala growth rate (defined as the square root of |N^2|, where N is the Brunt-Vaisala frequency) of the CZ plays an important role in the dynamics of rising flux tubes. In an isothermal rise, when the flux tube is in thermal equilibrium with its surroundings, |N^2| is shown to play a negligible role. However, in an adiabatic rise the role of |N^2| is dominant; if |N^2| is larger than roughly 10^-12 s^-2 in the lower CZ, magnetic buoyancy is shown to rise exponentially as the flux tube emerges. Further, if |N^2| is greater than 4 x 10^-11 s^-2, the exponential rise is sufficiently rapid to enable equipartition fields to overcome the influence of the Coriolis force and emerge rapidly. In the CZ of the solar model of Christensen-Dalsgaard, Proffitt, & Thompson (1993; model CPT), equipartition fields are found to emerge at high latitudes. However, an increase of |N^2| in the lower CZ, on average, roughly by a factor of 8 would make them emerge radially to sunspot latitudes. If this is possible, there would be no need for the dynamo to produce extraordinarily strong fields to explain the formation of sunspots. Conversely, if such a large
Equipartition of rotational and translational energy in a dense granular gas.
Nichol, Kiri; Daniels, Karen E
2012-01-01
Experiments quantifying the rotational and translational motion of particles in a dense, driven, 2D granular gas floating on an air table reveal that kinetic energy is divided equally between the two translational and one rotational degrees of freedom. This equipartition persists when the particle properties, confining pressure, packing density, or spatial ordering are changed. While the translational velocity distributions are the same for both large and small particles, the angular velocity distributions scale with the particle radius. The probability distributions of all particle velocities have approximately exponential tails. Additionally, we find that the system can be described with a granular Boyle's law with a van der Waals-like equation of state. These results demonstrate ways in which conventional statistical mechanics can unexpectedly apply to nonequilibrium systems. PMID:22304293
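The equipartition check described above amounts to comparing the mean kinetic energy in each degree of freedom. A minimal sketch with synthetic data (the particle mass, moment of inertia, and Gaussian velocity statistics are all illustrative assumptions, not the experimental values):

```python
import random
import statistics

def energy_per_dof(samples, m=1.0, I=0.5):
    """Mean kinetic energy in each DOF of a 2D gas of rotating disks.

    `samples` holds (vx, vy, omega) tuples; m and I are illustrative
    particle mass and moment of inertia."""
    ex = statistics.mean(0.5 * m * vx ** 2 for vx, _, _ in samples)
    ey = statistics.mean(0.5 * m * vy ** 2 for _, vy, _ in samples)
    er = statistics.mean(0.5 * I * w ** 2 for _, _, w in samples)
    return ex, ey, er

# Synthetic equilibrium-like sample with kT = 1: each velocity component
# is Gaussian with variance kT/m, and the spin with variance kT/I.
random.seed(0)
kT, m, I = 1.0, 1.0, 0.5
sample = [(random.gauss(0, (kT / m) ** 0.5),
           random.gauss(0, (kT / m) ** 0.5),
           random.gauss(0, (kT / I) ** 0.5)) for _ in range(20000)]
ex, ey, er = energy_per_dof(sample, m, I)
# Equipartition: each DOF carries roughly kT/2 = 0.5
```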
Iwata, Kazunori; Ikeda, Kazushi; Sakai, Hideaki
2006-01-01
We discuss an important property of empirical sequences in reinforcement learning called the asymptotic equipartition property. It states that the typical set of empirical sequences has probability nearly one, that all elements in the typical set are nearly equiprobable, and that the number of elements in the typical set is an exponential function of the sum of conditional entropies, provided the number of time steps is sufficiently large. This sum is referred to as the stochastic complexity. Using this property, we elucidate the fact that return maximization depends on two factors: the stochastic complexity and a quantity depending on the parameters of the environment. Here, return maximization means that the best sequences in terms of expected return have probability one. We also examine the sensitivity of the stochastic complexity, which serves as a qualitative guide for tuning the parameters of the action-selection strategy, and show a sufficient condition for return maximization in probability.
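The asymptotic equipartition property invoked above ties the size of the typical set to the entropy: a typical length-n sequence has probability near 2^(-nH), so the typical set holds about 2^(nH) elements. A toy illustration (the sequence and alphabet are invented for the example):

```python
import math
from collections import Counter

def empirical_entropy(seq):
    """Shannon entropy, in bits per symbol, of an observed sequence."""
    counts = Counter(seq)
    n = len(seq)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# Toy sequence over the alphabet {a, b}, each symbol appearing half the time
seq = "abab" * 250
H = empirical_entropy(seq)  # 1 bit/symbol for a uniform binary alphabet
# AEP reading: a typical length-n sequence has probability ~ 2**(-n * H),
# and the typical set contains ~ 2**(n * H) sequences.
```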
Experimental design methods for bioengineering applications.
Keskin Gündoğdu, Tuğba; Deniz, İrem; Çalışkan, Gülizar; Şahin, Erdem Sefa; Azbar, Nuri
2016-01-01
Experimental design is a form of process analysis in which certain factors are selected to obtain the desired responses of interest. It may also be used for the determination of the effects of various independent factors on a dependent factor. The bioengineering discipline includes many different areas of scientific interest, and each study area is affected and governed by many different factors. Briefly analyzing the important factors and selecting an experimental design for optimization are very effective tools for the design of any bioprocess under question. This review summarizes experimental design methods that can be used to investigate various factors relating to bioengineering processes. The experimental methods generally used in bioengineering are as follows: full factorial design, fractional factorial design, Plackett-Burman design, Taguchi design, Box-Behnken design and central composite design. These design methods are briefly introduced, and then the application of these design methods to study different bioengineering processes is analyzed.
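Of the designs listed, the full factorial is the simplest to write down: it enumerates every combination of factor levels. A sketch (the factors and levels below are hypothetical, chosen only to illustrate the enumeration):

```python
from itertools import product

def full_factorial(levels):
    """Enumerate every run of a full factorial design.

    `levels` maps factor name -> list of levels."""
    names = list(levels)
    return [dict(zip(names, combo))
            for combo in product(*(levels[n] for n in names))]

# Hypothetical 2 x 3 x 2 design for a bioprocess study
runs = full_factorial({
    "temperature_C": [30, 37],
    "pH": [6.0, 6.5, 7.0],
    "agitation_rpm": [150, 250],
})
# 2 * 3 * 2 = 12 experimental runs
```

Fractional factorial, Plackett-Burman, and the other designs named above reduce this run count by sacrificing some interaction information.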
Computational methods for stealth design
Cable, V. P.
1992-08-01
A review is presented of the utilization of computer models for stealth design toward the ultimate goal of designing and fielding an aircraft that remains undetected at any altitude and any range. Attention is given to the advancements achieved in computational tools and their utilization. Consideration is given to the development of supercomputers for large-scale scientific computing and the development of high-fidelity, 3D, radar-signature-prediction tools for complex shapes with nonmetallic and radar-penetrable materials.
Spacesuit Radiation Shield Design Methods
NASA Technical Reports Server (NTRS)
Wilson, John W.; Anderson, Brooke M.; Cucinotta, Francis A.; Ware, J.; Zeitlin, Cary J.
2006-01-01
Meeting radiation protection requirements during EVA is predominantly an operational issue with some potential considerations for temporary shelter. The issue of spacesuit shielding is mainly guided by the potential of accidental exposure when operational and temporary shelter considerations fail to maintain exposures within operational limits. In this case, very high exposure levels are possible which could result in observable health effects and even be life threatening. Under these assumptions, potential spacesuit radiation exposures have been studied using known historical solar particle events to gain insight on the usefulness of modification of spacesuit design in which the control of skin exposure is a critical design issue and reduction of blood forming organ exposure is desirable. Transition to a new spacesuit design including soft upper-torso and reconfigured life support hardware gives an opportunity to optimize the next generation spacesuit for reduced potential health effects during an accidental exposure.
Design Methods for Clinical Systems
Blum, B.I.
1986-01-01
This paper presents a brief introduction to the techniques, methods and tools used to implement clinical systems. It begins with a taxonomy of software systems, describes the classic approach to development, provides some guidelines for the planning and management of software projects, and finishes with a guide to further reading. The conclusions are that there is no single right way to develop software, that most decisions are based upon judgment built from experience, and that there are tools that can automate some of the better understood tasks.
Mixed Method Designs in Implementation Research
Aarons, Gregory A.; Horwitz, Sarah; Chamberlain, Patricia; Hurlburt, Michael; Landsverk, John
2010-01-01
This paper describes the application of mixed method designs in implementation research in 22 mental health services research studies published in peer-reviewed journals over the last 5 years. Our analyses revealed 7 different structural arrangements of qualitative and quantitative methods, 5 different functions of mixed methods, and 3 different ways of linking quantitative and qualitative data together. Complexity of design was associated with number of aims or objectives, study context, and phase of implementation examined. The findings provide suggestions for the use of mixed method designs in implementation research. PMID:20967495
Culture, Interface Design, and Design Methods for Mobile Devices
NASA Astrophysics Data System (ADS)
Lee, Kun-Pyo
Aesthetic differences and similarities among cultures are obviously among the important issues in cultural design. However, ever since products became knowledge-supporting tools, the visible elements of products have become more universal, so that the invisible parts of products, such as interface and interaction, are becoming more important. Therefore, cultural design should be extended beyond material and phenomenal culture to the invisible elements of culture, such as people's conceptual models. This chapter aims to explain how we address invisible cultural elements in interface design and design methods by exploring users' cognitive styles and communication patterns in different cultures. Regarding cultural interface design, we examined users' conceptual models while they interacted with mobile phone and website interfaces, and observed cultural differences in task performance and viewing patterns, which appeared to agree with the cultural cognitive styles known as holistic vs. analytic thought. Regarding design methods for culture, we explored how to localize design methods such as the focus group interview and the generative session for specific cultural groups; the results of comparative experiments revealed cultural differences in participants' behaviors and performance in each design method and led us to suggest how to conduct them in East Asian cultures. Mobile Observation Analyzer and Wi-Pro, user research tools we invented to capture user behaviors and needs especially in the mobile context, are also introduced.
Model reduction methods for control design
NASA Technical Reports Server (NTRS)
Dunipace, K. R.
1988-01-01
Several different model reduction methods are developed and detailed implementation information is provided for those methods. Command files to implement the model reduction methods in a proprietary control law analysis and design package are presented. A comparison and discussion of the various reduction techniques is included.
Mixed Methods Research Designs in Counseling Psychology
ERIC Educational Resources Information Center
Hanson, William E.; Creswell, John W.; Clark, Vicki L. Plano; Petska, Kelly S.; Creswell, David J.
2005-01-01
With the increased popularity of qualitative research, researchers in counseling psychology are expanding their methodologies to include mixed methods designs. These designs involve the collection, analysis, and integration of quantitative and qualitative data in a single or multiphase study. This article presents an overview of mixed methods…
Airbreathing hypersonic vehicle design and analysis methods
NASA Technical Reports Server (NTRS)
Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.
1996-01-01
The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.
Micarta propellers IV : technical methods of design
NASA Technical Reports Server (NTRS)
Caldwell, F W; Clay, N S
1924-01-01
A description is given of the methods used in design of Micarta propellers. The most direct method for working out the design of a Micarta propeller is to start with the diameter and blade angles of a wooden propeller suited for a particular installation and then to apply one of the plan forms suitable for Micarta propellers. This allows one to obtain the corresponding blade widths and to then use these angles and blade widths for an aerodynamic analysis.
Development of a hydraulic turbine design method
NASA Astrophysics Data System (ADS)
Kassanos, Ioannis; Anagnostopoulos, John; Papantonis, Dimitris
2013-10-01
In this paper a hydraulic turbine parametric design method is presented which is based on the combination of traditional methods and parametric surface modeling techniques. The blade of the turbine runner is described using Bezier surfaces for the definition of the meridional plane as well as the blade angle distribution, and a thickness distribution applied normal to the mean blade surface. In this way, it is possible to define parametrically the whole runner using a relatively small number of design parameters, compared to conventional methods. The above definition is then combined with a commercial CFD software and a stochastic optimization algorithm towards the development of an automated design optimization procedure. The process is demonstrated with the design of a Francis turbine runner.
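The Bezier description mentioned above reduces each blade curve to a handful of control points. A minimal evaluator using de Casteljau's algorithm (the control points below are a hypothetical blade-angle distribution, not data from the paper):

```python
def bezier_point(control, t):
    """Evaluate a Bezier curve at parameter t via de Casteljau's algorithm."""
    pts = [tuple(p) for p in control]
    while len(pts) > 1:
        # Repeated linear interpolation between consecutive control points
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Hypothetical blade-angle distribution: (meridional coordinate, angle in deg)
ctrl = [(0.0, 60.0), (0.4, 45.0), (0.7, 30.0), (1.0, 20.0)]
mid = bezier_point(ctrl, 0.5)  # smooth angle variation at mid-span
```

Moving a single control point reshapes the whole distribution smoothly, which is what makes such a parameterization convenient inside the stochastic optimization loop described above.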
Multidisciplinary Optimization Methods for Aircraft Preliminary Design
NASA Technical Reports Server (NTRS)
Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian
1994-01-01
This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.
ESD protection device design using statistical methods
NASA Astrophysics Data System (ADS)
Shigyo, N.; Kawashima, H.; Yasuda, S.
2002-12-01
This paper describes a design of an electrostatic discharge (ESD) protection device that minimizes its area Ap while maintaining the breakdown voltage VESD. Hypothesis tests using measured data were performed to find the severest applied surge condition and to select control factors for the design of experiments (DOE). Also, technology CAD (TCAD) was used to estimate VESD. An optimum device structure, in which a salicide block was employed, was found using statistical methods and TCAD.
Analysis Method for Quantifying Vehicle Design Goals
NASA Technical Reports Server (NTRS)
Fimognari, Peter; Eskridge, Richard; Martin, Adam; Lee, Michael
2007-01-01
A document discusses a method for using Design Structure Matrices (DSM), coupled with high-level tools representing important life-cycle parameters, to comprehensively conceptualize a flight/ground space transportation system design, dealing with such variables as performance, up-front costs, downstream operations costs, and reliability. The approach also weighs operational approaches based on their effect on upstream design variables, so that linkages between operations and these upstream variables can be established readily yet defensibly. To avoid the large range of problems that have defeated previous methods of dealing with the complexity of transportation design, and to cut down on the inefficient use of resources, the method identifies the areas of sufficient promise and provides a higher grade of analysis for those issues, as well as for the linkages between operations and other factors. Ultimately, the method is designed to save resources and time, and allows for the evolution of operable space transportation system technology and of design and conceptual system approach targets.
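A Design Structure Matrix is, at bottom, a dependency matrix over the system's elements; the feedback loops it exposes are where operations and upstream design variables are coupled. A toy sketch (the four elements and their dependencies are invented for illustration):

```python
def coupled_pairs(dsm):
    """Find mutually coupled element pairs in a Design Structure Matrix.

    `dsm[i][j] == 1` means element i depends on element j."""
    n = len(dsm)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if dsm[i][j] and dsm[j][i]]

# Hypothetical 4-element system:
# 0 performance, 1 up-front cost, 2 operations cost, 3 reliability
dsm = [[0, 1, 0, 0],   # performance depends on up-front cost
       [1, 0, 1, 0],   # up-front cost depends on performance and operations
       [0, 1, 0, 1],   # operations cost depends on up-front cost, reliability
       [0, 0, 0, 0]]   # reliability depends on nothing here
pairs = coupled_pairs(dsm)  # two-way couplings that require iteration
```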
Axisymmetric inlet minimum weight design method
NASA Technical Reports Server (NTRS)
Nadell, Shari-Beth
1995-01-01
An analytical method for determining the minimum weight design of an axisymmetric supersonic inlet has been developed. The goal of this method development project was to improve the ability to predict the weight of high-speed inlets in conceptual and preliminary design. The initial model was developed using information that was available from inlet conceptual design tools (e.g., the inlet internal and external geometries and pressure distributions). Stiffened shell construction was assumed. Mass properties were computed by analyzing a parametric cubic curve representation of the inlet geometry. Design loads and stresses were developed at analysis stations along the length of the inlet. The equivalent minimum structural thicknesses for both shell and frame structures required to support the maximum loads produced by various load conditions were then determined. Preliminary results indicated that inlet hammershock pressures produced the critical design load condition for a significant portion of the inlet. By improving the accuracy of inlet weight predictions, the method will improve the fidelity of propulsion and vehicle design studies and increase the accuracy of weight versus cost studies.
Optimization methods applied to hybrid vehicle design
NASA Technical Reports Server (NTRS)
Donoghue, J. F.; Burghart, J. H.
1983-01-01
The use of optimization methods as an effective design tool in the design of hybrid vehicle propulsion systems is demonstrated. Optimization techniques were used to select values for three design parameters (battery weight, heat engine power rating, and power split between the two on-board energy sources) such that various measures of vehicle performance (acquisition cost, life cycle cost, and petroleum consumption) were optimized. The approach produced designs which were often significant improvements over hybrid designs already reported in the literature. The principal conclusions are as follows. First, it was found that the strategy used to split the required power between the two on-board energy sources can have a significant effect on life cycle cost and petroleum consumption. Second, the optimization program should be constructed so that performance measures and design variables can be easily changed. Third, the vehicle simulation program has a significant effect on the computer run time of the overall optimization program; run time can be significantly reduced by proper design of the types of trips the vehicle takes in a one-year period. Fourth, care must be taken in designing the cost and constraint expressions used in the optimization so that they are relatively smooth functions of the design variables. Fifth, proper handling of constraints on battery weight and heat engine rating, variables which must be large enough to meet power demands, is particularly important for the success of an optimization study. Finally, the overall conclusion is that optimization methods provide a practical tool for carrying out the design of a hybrid vehicle propulsion system.
Standardized Radiation Shield Design Methods: 2005 HZETRN
NASA Technical Reports Server (NTRS)
Wilson, John W.; Tripathi, Ram K.; Badavi, Francis F.; Cucinotta, Francis A.
2006-01-01
Research conducted by the Langley Research Center through 1995, resulting in the HZETRN code, provides the current basis for shield design methods according to NASA STD-3000 (2005). With this new prominence, the database, basic numerical procedures, and algorithms are being re-examined, with new methods of verification and validation being implemented to capture a well-defined algorithm for engineering design processes to be used in this early development phase of the Bush initiative. This process provides the methodology to transform the 1995 HZETRN research code into the 2005 HZETRN engineering code to be made available for these early design processes. In this paper, we review the basic derivations, including new corrections to the codes to ensure improved numerical stability, and provide benchmarks for code verification.
MAST Propellant and Delivery System Design Methods
NASA Technical Reports Server (NTRS)
Nadeem, Uzair; Mc Cleskey, Carey M.
2015-01-01
A Mars Aerospace Taxi (MAST) concept and propellant storage and delivery case study is undergoing investigation by NASA's Element Design and Architectural Impact (EDAI) design and analysis forum. The MAST lander concept envisions landing with its ascent propellant storage tanks empty and supplying these reusable Mars landers with propellant that is generated and transferred while on the Mars surface. The report provides an overview of the data derived from modeling different methods of propellant line routing (or "lining") and differentiates the resulting design and operations complexity of fluid and gaseous paths based on a given set of fluid sources and destinations. The EDAI team desires a rough-order-of-magnitude algorithm for estimating the lining characteristics (i.e., the plumbing mass and complexity) associated with different numbers of vehicle propellant sources and destinations. This paper explores the feasibility of preparing a mathematically sound algorithm for this purpose, and offers a method for the EDAI team to implement.
New method of designing CCD driver
NASA Astrophysics Data System (ADS)
Yu, Wei; Yu, Daoyin; Zhang, Yimo
1993-04-01
A new method of designing CCD driver circuits is introduced in this paper. Some kinds of programmable logic device (PLD) chips including generic array logic (GAL) and EPROM are used to drive a CCD sensor. The driver runs stably and reliably. It is widely applied in many fields with its good interchangeability, small size, and low cost.
Acoustic Treatment Design Scaling Methods. Phase 2
NASA Technical Reports Server (NTRS)
Clark, L. (Technical Monitor); Parrott, T. (Technical Monitor); Jones, M. (Technical Monitor); Kraft, R. E.; Yu, J.; Kwan, H. W.; Beer, B.; Seybert, A. F.; Tathavadekar, P.
2003-01-01
The ability to design, build and test miniaturized acoustic treatment panels on scale model fan rigs representative of full scale engines provides not only cost savings, but also an opportunity to optimize the treatment by allowing multiple tests. To use scale model treatment as a design tool, the impedance of the sub-scale liner must be known with confidence. This study was aimed at developing impedance measurement methods for high frequencies. A normal incidence impedance tube method that extends the upper frequency range to 25,000 Hz without grazing flow effects was evaluated. The free field method was investigated as a potential high frequency technique. The potential of the two-microphone in-situ impedance measurement method was evaluated in the presence of grazing flow. Difficulties in achieving the high frequency goals were encountered in all methods. Results of developing a time-domain finite difference resonator impedance model indicated that a re-interpretation of the empirical fluid mechanical models used in the frequency domain model for nonlinear resistance and mass reactance may be required. A scale model treatment design that could be tested on the Universal Propulsion Simulator vehicle was proposed.
3.6 Simplified methods for design
Nickell, R.E.; Yahr, G.T.
1981-01-01
Simplified design analysis methods for elevated temperature construction are classified and reviewed. Because the major impetus for developing elevated temperature design methodology during the past ten years has been the LMFBR program, considerable emphasis is placed upon results from this source. The operating characteristics of the LMFBR are such that cycles of severe transient thermal stresses can be interspersed with normal elevated temperature operational periods of significant duration, leading to a combination of plastic and creep deformation. The various simplified methods are organized into two general categories, depending upon whether it is the material, or constitutive, model that is reduced, or the geometric modeling that is simplified. Because the elastic representation of material behavior is so prevalent, an entire section is devoted to elastic analysis methods. Finally, the validation of the simplified procedures is discussed.
Reliability Methods for Shield Design Process
NASA Technical Reports Server (NTRS)
Tripathi, R. K.; Wilson, J. W.
2002-01-01
Providing protection against the hazards of space radiation is a major challenge to the exploration and development of space. The great cost of added radiation shielding is a potential limiting factor in deep space operations. In this enabling technology, we have developed methods for optimized shield design over multi-segmented missions involving multiple work and living areas in the transport and duty phases of space missions. The total shield mass over all pieces of equipment and habitats is optimized subject to career dose and dose rate constraints. An important component of this technology is the estimation of the two most commonly identified uncertainties in radiation shield design: the shielding properties of the materials used, and the understanding of the biological response of the astronaut to the radiation leaking through the materials into the living space. The largest uncertainty, of course, is in the biological response to especially high charge and energy (HZE) ions of the galactic cosmic rays. These uncertainties are blended with the optimization design procedure to formulate reliability-based methods for shield design processes. The details of the methods will be discussed.
Optimization methods for alternative energy system design
NASA Astrophysics Data System (ADS)
Reinhardt, Michael Henry
An electric vehicle heating system and a solar thermal coffee dryer are presented as case studies in alternative energy system design optimization. Design optimization tools are compared using these case studies, including linear programming, integer programming, and fuzzy integer programming. Although most decision variables in the design of alternative energy systems are discrete (e.g., numbers of photovoltaic modules, thermal panels, or layers of glazing in windows), the literature shows that the optimization methods used historically for design employ continuous decision variables. Integer programming, used to find the optimal investment in conservation measures as a function of the life cycle cost of an electric vehicle heating system, is compared to linear programming, demonstrating the importance of accounting for the discrete nature of design variables. The electric vehicle study shows that conservation methods similar to those used in building design, which reduce the overall UA of a 22 ft electric shuttle bus from 488 to 202 Btu/hr-F, can eliminate the need for fossil fuel heating systems when operating in the northeast United States. Fuzzy integer programming is presented as a means of accounting for imprecise design constraints, such as being environmentally friendly, in the optimization process. The solar thermal coffee dryer study focuses on a deep-bed design using unglazed thermal collectors (UTC). Experimental data from parchment coffee drying are gathered, including drying constants and equilibrium moisture. In this case, fuzzy linear programming is presented as a means of optimizing experimental procedures to produce the most information under imprecise constraints. Graphical optimization is used to show that for every 1 m^2 deep-bed dryer of 0.4 m depth, a UTC array consisting of five 1.1 m^2 panels and a photovoltaic array consisting of one 0.25 m^2 panel produces the most dry coffee per dollar invested in the system. In general this study
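For a small number of 0/1 design decisions, the integer program this study advocates can even be solved by exhaustive enumeration, which makes the discrete-variable point concrete. A sketch (the conservation measures, costs, and UA reductions below are invented, not the bus data above):

```python
from itertools import product

def best_design(options, budget):
    """Exhaustive 0/1 integer program: choose the conservation measures
    that maximize heat-loss (UA) reduction under a cost budget."""
    best = None
    for choice in product((0, 1), repeat=len(options)):
        cost = sum(c for pick, (c, _) in zip(choice, options) if pick)
        saving = sum(s for pick, (_, s) in zip(choice, options) if pick)
        if cost <= budget and (best is None or saving > best[1]):
            best = (choice, saving, cost)
    return best

# Hypothetical measures: (cost in $, UA reduction in Btu/hr-F)
measures = [(1200, 120), (800, 90), (1500, 110), (400, 40)]
picked, ua_saved, spent = best_design(measures, budget=2500)
```

Rounding a relaxed linear-programming solution to the nearest integers can miss the true optimum, which is the study's argument for treating the design variables as discrete from the start.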
Waterflooding injectate design systems and methods
Brady, Patrick V.; Krumhansl, James L.
2014-08-19
A method of designing an injectate to be used in a waterflooding operation is disclosed. One aspect includes specifying data representative of chemical characteristics of a liquid hydrocarbon, a connate, and a reservoir rock, of a subterranean reservoir. Charged species at an interface of the liquid hydrocarbon are determined based on the specified data by evaluating at least one chemical reaction. Charged species at an interface of the reservoir rock are determined based on the specified data by evaluating at least one chemical reaction. An extent of surface complexation between the charged species at the interfaces of the liquid hydrocarbon and the reservoir rock is determined by evaluating at least one surface complexation reaction. The injectate is designed and is operable to decrease the extent of surface complexation between the charged species at interfaces of the liquid hydrocarbon and the reservoir rock. Other methods, apparatus, and systems are disclosed.
An improved design method for EPC middleware
NASA Astrophysics Data System (ADS)
Lou, Guohuan; Xu, Ran; Yang, Chunming
2014-04-01
To address the problems and difficulties that small and medium enterprises currently face when using the EPC (Electronic Product Code) ALE (Application Level Events) specification to implement middleware, an improved design method for EPC middleware is presented, based on an analysis of the principles of EPC middleware. The method leverages the powerful functionality of the MySQL database, using the database to connect reader-writers with the upper application system, instead of developing an ALE application programming interface, to achieve middleware with general functionality. The resulting structure is simple and easy to implement and maintain. Under this structure, different types of reader-writers can be added and configured conveniently, and the expandability of the system is improved.
Design methods of rhombic tensegrity structures
NASA Astrophysics Data System (ADS)
Feng, Xi-Qiao; Li, Yue; Cao, Yan-Ping; Yu, Shou-Wen; Gu, Yuan-Tong
2010-08-01
As a special type of novel flexible structures, tensegrity holds promise for many potential applications in such fields as materials science, biomechanics, civil and aerospace engineering. Rhombic systems are an important class of tensegrity structures, in which each bar constitutes the longest diagonal of a rhombus of four strings. In this paper, we address the design methods of rhombic structures based on the idea that many tensegrity structures can be constructed by assembling one-bar elementary cells. By analyzing the properties of rhombic cells, we first develop two novel schemes, namely, direct enumeration scheme and cell-substitution scheme. In addition, a facile and efficient method is presented to integrate several rhombic systems into a larger tensegrity structure. To illustrate the applications of these methods, some novel rhombic tensegrity structures are constructed.
Method of designing layered sound absorbing materials
NASA Astrophysics Data System (ADS)
Atalla, Youssef; Panneton, Raymond
2002-11-01
A widely used model for describing sound propagation in porous materials is the Johnson-Champoux-Allard model. This rigid-frame model is based on five geometrical properties of the porous medium: flow resistivity, porosity, tortuosity, and the viscous and thermal characteristic lengths. Using this model, and with knowledge of these properties for different absorbing materials, the design of a multiple-layered system can be optimized efficiently and rapidly. The overall impedance of the layered system can be calculated by repeated application of the single-layer impedance equation. Knowledge of the properties of the materials involved in the layered system, and of their physical meaning, allows a systematic computer evaluation of potential layer combinations, rather than an experimental one, which is time consuming and not always efficient. The final design of the layered materials can then be confirmed by suitable measurements. A method of designing the overall acoustic absorption of multiple layered porous materials is presented. Some aspects of designing a flat layered absorbing system, based on the material properties, are considered. Good agreement between measured and computed sound absorption coefficients has been obtained for the studied configurations. [Work supported by N.S.E.R.C. Canada, F.C.A.R. Quebec, and Bombardier Aerospace.]
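The repeated application of the single-layer impedance equation mentioned in the abstract can be sketched as a fold over the layer stack. The translation formula and sign convention below are one common textbook form, and the layer values in the test are invented; in practice each layer's characteristic impedance and wavenumber would come from the Johnson-Champoux-Allard model.

```python
import cmath

def surface_impedance(layers, z_back):
    """Fold the single-layer impedance translation formula over a stack of
    equivalent-fluid layers, given innermost (next to the backing) first.
    Each layer is (z_char, wavenumber, thickness); a rigid backing can be
    approximated by a very large z_back."""
    z = z_back
    for z_c, k, d in layers:
        cot = 1.0 / cmath.tan(k * d)
        z = z_c * (-1j * z * cot + z_c) / (z - 1j * z_c * cot)
    return z

def absorption_coeff(z_s, z0=413.0):
    """Normal-incidence absorption coefficient (z0: impedance of air)."""
    r = (z_s - z0) / (z_s + z0)
    return 1.0 - abs(r) ** 2
```

A useful sanity check: a lossless layer (real impedance and wavenumber) over a rigid backing absorbs nothing, while a lossy layer (complex wavenumber) yields an absorption coefficient strictly between 0 and 1.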
Methods for structural design at elevated temperatures
NASA Technical Reports Server (NTRS)
Ellison, A. M.; Jones, W. E., Jr.; Leimbach, K. R.
1973-01-01
A procedure which can be used to design elevated-temperature structures is discussed. The desired goal is to have the same confidence in the structural integrity at elevated temperature as the factor of safety gives on mechanical loads at room temperature. Methods of design and analysis for creep, creep rupture, and creep buckling are presented. Example problems are included to illustrate the analytical methods. Creep data for some common structural materials are presented. Appendix B contains a description, user's manual, and listing for the creep analysis program. The program predicts the time to a given creep strain or to creep rupture for a material subjected to a specified stress-temperature-time spectrum. Fatigue at elevated temperature is discussed. Methods of analysis for high stress-low cycle fatigue, fatigue below the creep range, and fatigue in the creep range are included. The interaction of thermal fatigue and mechanical loads is considered, and a detailed approach to fatigue analysis is given for structures operating below the creep range.
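Spectrum-based rupture prediction of the kind the abstract describes is often approximated by the classical life-fraction (Robinson) rule, sketched below. The Larson-Miller-style correlation and all of its constants are invented placeholders, not the report's material data.

```python
def robinson_damage(spectrum, rupture_time):
    """Life-fraction (Robinson) rule: accumulate t_i / t_r(stress_i, temp_i)
    over mission segments; rupture is predicted when the sum reaches 1."""
    return sum(t / rupture_time(s, temp) for s, temp, t in spectrum)

# Assumed Larson-Miller-style correlation (constants invented for the sketch):
def rupture_time_h(stress_mpa, temp_k):
    lmp = 30000.0 - 25.0 * stress_mpa       # invented linear LMP(stress) fit
    return 10 ** (lmp / temp_k - 20.0)      # LMP = T * (20 + log10 t_r)

mission = [(200.0, 900.0, 50.0), (150.0, 950.0, 20.0)]  # (MPa, K, hours)
damage = robinson_damage(mission, rupture_time_h)
```

With a constant rupture life of 100 h, a 50 h segment plus a 25 h segment accumulate a damage fraction of 0.75, i.e. three quarters of the creep life consumed.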
Direct optimization method for reentry trajectory design
NASA Astrophysics Data System (ADS)
Jallade, S.; Huber, P.; Potti, J.; Dutruel-Lecohier, G.
The software package called `Reentry and Atmospheric Transfer Trajectory' (RATT) was developed under ESA contract for the design of atmospheric trajectories. It includes four software TOP (Trajectory OPtimization) programs, which optimize reentry and aeroassisted transfer trajectories. 6FD and 3FD (6 and 3 degrees of freedom Flight Dynamics) are devoted to the simulation of the trajectory. SCA (Sensitivity and Covariance Analysis) performs covariance analysis on a given trajectory with respect to different uncertainties and error sources. TOP provides the optimum guidance law for a three-degree-of-freedom reentry or aeroassisted transfer (AAOT) trajectory. Deorbit and reorbit impulses (if necessary) can be taken into account in the optimization. A wide choice of cost functions is available to the user, such as the integrated heat flux, the sum of the velocity impulses, or a linear combination of both, for trajectory and vehicle design. The crossrange and the downrange can be maximized during the reentry trajectory. Path constraints are available on the load factor, the heat flux, and the dynamic pressure. Results on these proposed options are presented. TOPPHY is the part of the TOP software corresponding to the definition and the computation of the optimization problem physics. TOPPHY can interface with several optimizers with dynamic solvers: TOPOP and TROPIC, using direct collocation methods, and PROMIS, using a direct multiple shooting method. TOPOP was developed in the frame of this contract; it uses Hermite polynomials for the collocation method and the NPSOL optimizer from the NAG library. Both TROPIC and PROMIS were developed by the DLR (Deutsche Forschungsanstalt fuer Luft- und Raumfahrt) and use the SLSQP optimizer. For the dynamic equation resolution, TROPIC uses a collocation method with splines and PROMIS uses a multiple shooting method with finite differences. The three different optimizers including dynamics were tested on the reentry trajectory of the
Design analysis, robust methods, and stress classification
Bees, W.J.
1993-01-01
This special edition publication volume is comprised of papers presented at the 1993 ASME Pressure Vessels and Piping Conference, July 25--29, 1993 in Denver, Colorado. The papers were prepared for presentations in technical sessions developed under the auspices of the PVPD Committees on Computer Technology, Design and Analysis, Operations Applications and Components. The topics included are: Analysis of Pressure Vessels and Components; Expansion Joints; Robust Methods; Stress Classification; and Non-Linear Analysis. Individual papers have been processed separately for inclusion in the appropriate data bases.
A structural design decomposition method utilizing substructuring
NASA Technical Reports Server (NTRS)
Scotti, Stephen J.
1994-01-01
A new method of design decomposition for structural analysis and optimization is described. For this method, the structure is divided into substructures where each substructure has its structural response described by a structural-response subproblem, and its structural sizing determined from a structural-sizing subproblem. The structural responses of substructures that have rigid body modes when separated from the remainder of the structure are further decomposed into displacements that have no rigid body components, and a set of rigid body modes. The structural-response subproblems are linked together through forces determined within a structural-sizing coordination subproblem which also determines the magnitude of any rigid body displacements. Structural-sizing subproblems having constraints local to the substructures are linked together through penalty terms that are determined by a structural-sizing coordination subproblem. All the substructure structural-response subproblems are totally decoupled from each other, as are all the substructure structural-sizing subproblems, thus there is significant potential for use of parallel solution methods for these subproblems.
Research and Design of Rootkit Detection Method
NASA Astrophysics Data System (ADS)
Liu, Leian; Yin, Zuanxing; Shen, Yuli; Lin, Haitao; Wang, Hongjiang
Rootkits are one of the most important security issues in network communication systems, related to the security and privacy of Internet users. Because of back doors in the operating system, a hacker can use a rootkit to attack and invade other people's computers and thus easily capture passwords and message traffic to and from these computers. With the development of rootkit technology, its applications are more and more extensive and it is becoming increasingly difficult to detect. In addition, for various reasons, such as trade secrets and the difficulty of development, information on rootkit detection technology and effective tools are still relatively scarce. In this paper, based on an in-depth analysis of rootkit detection technology, a new kind of rootkit detection structure is designed and a new method (software), X-Anti, is proposed. Test results show that software designed based on the proposed structure is more efficient than other rootkit detection software.
Method for designing gas tag compositions
Gross, K.C.
1995-04-11
For use in the manufacture of gas tags such as employed in a nuclear reactor gas tagging failure detection system, a method for designing gas tagging compositions utilizes an analytical approach wherein the final composition of a first canister of tag gas as measured by a mass spectrometer is designated as node No. 1. Lattice locations of tag nodes in multi-dimensional space are then used in calculating the compositions of a node No. 2 and each subsequent node so as to maximize the distance of each node from any combination of tag components which might be indistinguishable from another tag composition in a reactor fuel assembly. Alternatively, the measured compositions of tag gas numbers 1 and 2 may be used to fix the locations of nodes 1 and 2, with the locations of nodes 3-N then calculated for optimum tag gas composition. A single sphere defining the lattice locations of the tag nodes may be used to define approximately 20 tag nodes, while concentric spheres can extend the number of tag nodes to several hundred. 5 figures.
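The patent's idea of placing each new tag node as far as possible from compositions that could be confused with existing ones can be sketched with a greedy farthest-point rule. This is a simplified stand-in for the lattice-based construction in the abstract; the candidate compositions here are arbitrary points in isotopic-ratio space.

```python
import math

def place_tag_nodes(candidates, n_nodes, fixed=()):
    """Greedy farthest-point selection: starting from the measured node(s),
    pick each new tag composition to maximize its minimum distance to the
    nodes chosen so far."""
    nodes = list(fixed)
    cand = [c for c in candidates if c not in nodes]
    if not nodes and cand:
        nodes.append(cand.pop(0))  # seed with an arbitrary first node
    while len(nodes) < n_nodes and cand:
        best = max(cand, key=lambda c: min(math.dist(c, n) for n in nodes))
        nodes.append(best)
        cand.remove(best)
    return nodes
```

Starting from one measured composition, the rule always grabs the most distant remaining candidate first, which is the behavior the patent's node-placement scheme formalizes on a lattice.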
Design Process Guide Method for Minimizing Loops and Conflicts
NASA Astrophysics Data System (ADS)
Koga, Tsuyoshi; Aoyama, Kazuhiro
We propose a new guide method for developing an easy-to-design process for product development, one that ensures fewer wasteful iterations and fewer multiple conflicts. The design process is modeled as a sequence of design decisions, where a design decision is defined as the determination of product attributes. A design task is represented as a calculation flow that depends on the product constraints between the product attributes. We also propose an automatic planning algorithm for the execution of the design task that minimizes design loops and design conflicts. Further, we validate the effectiveness of the proposed guide method with a prototype design system and a design example of piping for a power steering system, and find that the proposed method successfully minimizes design loops and design conflicts. This paper addresses (1) a design loop model, (2) a design conflict model, and (3) how to minimize design loops and design conflicts.
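Sequencing design decisions so that each attribute is determined after the attributes it depends on, with any leftover cycle flagged as a design loop, can be sketched with a topological sort. This is a generic illustration of the kind of planning involved, not the authors' actual algorithm.

```python
from collections import deque

def plan_design_order(attrs, depends_on):
    """Kahn's topological sort over attribute dependencies: returns an
    execution order plus the attributes trapped in constraint cycles,
    i.e. the design loops that would force iteration."""
    indeg = {a: len(depends_on.get(a, ())) for a in attrs}
    users = {a: [] for a in attrs}
    for a, deps in depends_on.items():
        for d in deps:
            users[d].append(a)
    ready = deque(a for a in attrs if indeg[a] == 0)
    order = []
    while ready:
        a = ready.popleft()
        order.append(a)
        for u in users[a]:
            indeg[u] -= 1
            if indeg[u] == 0:
                ready.append(u)
    loops = [a for a in attrs if a not in order]  # attributes caught in cycles
    return order, loops
```

For a piping example, "pipe diameter before flow rate before pump selection" comes out as a loop-free order, while mutually dependent attributes are reported as a loop to be broken by the designer.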
Adjoint methods for aerodynamic wing design
NASA Technical Reports Server (NTRS)
Grossman, Bernard
1993-01-01
A model inverse design problem is used to investigate the effect of flow discontinuities on the optimization process. The optimization involves finding the cross-sectional area distribution of a duct that produces velocities that closely match a targeted velocity distribution. Quasi-one-dimensional flow theory is used, and the target is chosen to have a shock wave in its distribution. The objective function which quantifies the difference between the targeted and calculated velocity distributions may become non-smooth due to the interaction between the shock and the discretization of the flowfield. This paper offers two techniques to resolve the resulting problems for the optimization algorithms. The first, shock-fitting, involves careful integration of the objective function through the shock wave. The second, coordinate straining with shock penalty, uses a coordinate transformation to align the calculated shock with the target and then adds a penalty proportional to the square of the distance between the shocks. The techniques are tested using several popular sensitivity and optimization methods, including finite differences and direct and adjoint discrete sensitivity methods. Two optimization strategies, Gauss-Newton and sequential quadratic programming (SQP), are used to drive the objective function to a minimum.
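The least-squares matching of a target velocity distribution, and the difference between finite-difference and adjoint-style sensitivities, can be illustrated on a drastically simplified quasi-one-dimensional model (incompressible and shock-free, so u = q/A); the shocked compressible case studied in the paper is precisely where this simple picture breaks down.

```python
def velocities(areas, q=1.0):
    """Quasi-1D incompressible velocity u = q/A (a deliberate simplification;
    the paper's shocked compressible case is far harder)."""
    return [q / a for a in areas]

def objective(areas, target):
    """Least-squares mismatch with the target velocity distribution."""
    return sum((u - t) ** 2 for u, t in zip(velocities(areas), target))

def fd_gradient(areas, target, h=1e-6):
    """Finite differences: one extra flow solve per design variable."""
    base = objective(areas, target)
    grad = []
    for i in range(len(areas)):
        pert = list(areas)
        pert[i] += h
        grad.append((objective(pert, target) - base) / h)
    return grad

def analytic_gradient(areas, target, q=1.0):
    """Closed-form sensitivity, standing in for what an adjoint solve gives
    at a cost independent of the number of design variables."""
    return [2 * (q / a - t) * (-q / a ** 2) for a, t in zip(areas, target)]
```

The finite-difference gradient needs a solve per variable, while the adjoint-style gradient comes at roughly fixed cost; the two agree to truncation error on this smooth model, which is exactly what a shock can spoil.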
Game Methodology for Design Methods and Tools Selection
ERIC Educational Resources Information Center
Ahmad, Rafiq; Lahonde, Nathalie; Omhover, Jean-françois
2014-01-01
Design process optimisation and intelligence are the key words of today's scientific community. A proliferation of methods has made design a convoluted area. Designers are usually afraid of selecting one method/tool over another and even expert designers may not necessarily know which method is the best to use in which circumstances. This…
An inverse design method for 2D airfoil
NASA Astrophysics Data System (ADS)
Liang, Zhi-Yong; Cui, Peng; Zhang, Gen-Bao
2010-03-01
Computational methods for the aerodynamic design of aircraft are applied more universally than before, and among them the design of an airfoil is a key problem. Most related papers discuss the forward problem, but the inverse method is more useful in practical design. In this paper, the inverse design of a 2D airfoil was investigated. A finite element method based on the variational principle was used to carry out the design. The simulation showed that the method is well suited to the design.
Translating Vision into Design: A Method for Conceptual Design Development
NASA Technical Reports Server (NTRS)
Carpenter, Joyce E.
2003-01-01
One of the most challenging tasks for engineers is the definition of design solutions that will satisfy high-level strategic visions and objectives. Even more challenging is the need to demonstrate how a particular design solution supports the high-level vision. This paper describes a process and set of system engineering tools that have been used at the Johnson Space Center to analyze and decompose high-level objectives for future human missions into design requirements that can be used to develop alternative concepts for vehicles, habitats, and other systems. Analysis and design studies of alternative concepts and approaches are used to develop recommendations for strategic investments in research and technology that support the NASA Integrated Space Plan. In addition to a description of system engineering tools, this paper includes a discussion of collaborative design practices for human exploration mission architecture studies used at the Johnson Space Center.
Using Software Design Methods in CALL
ERIC Educational Resources Information Center
Ward, Monica
2006-01-01
The phrase "software design" is not one that arouses the interest of many CALL practitioners, particularly those from a humanities background. However, software design essentials are simply logical ways of going about designing a system. The fundamentals include modularity, anticipation of change, generality and an incremental approach. While CALL…
Methods and Strategies: Derby Design Day
ERIC Educational Resources Information Center
Kennedy, Katheryn
2013-01-01
In this article the author describes the "Derby Design Day" project--a project that paired high school honors physics students with second-grade children for a design challenge and competition. The overall project goals were to discover whether collaboration in a design process would: (1) increase an interest in science; (2) enhance the…
An Efficient Inverse Aerodynamic Design Method For Subsonic Flows
NASA Technical Reports Server (NTRS)
Milholen, William E., II
2000-01-01
Computational Fluid Dynamics based design methods are maturing to the point that they are beginning to be used in the aircraft design process. Many design methods however have demonstrated deficiencies in the leading edge region of airfoil sections. The objective of the present research is to develop an efficient inverse design method which is valid in the leading edge region. The new design method is a streamline curvature method, and a new technique is presented for modeling the variation of the streamline curvature normal to the surface. The new design method allows the surface coordinates to move normal to the surface, and has been incorporated into the Constrained Direct Iterative Surface Curvature (CDISC) design method. The accuracy and efficiency of the design method are demonstrated using both two-dimensional and three-dimensional design cases.
Hou, Saing Paul; Haddad, Wassim M; Meskin, Nader; Bailey, James M
2015-12-01
With the advances in biochemistry, molecular biology, and neurochemistry there has been impressive progress in understanding the molecular properties of anesthetic agents. However, there has been little focus on how the molecular properties of anesthetic agents lead to the observed macroscopic property that defines the anesthetic state, that is, lack of responsiveness to noxious stimuli. In this paper, we use dynamical system theory to develop a mechanistic mean field model for neural activity to study the abrupt transition from consciousness to unconsciousness as the concentration of the anesthetic agent increases. The proposed synaptic drive firing-rate model predicts the conscious-unconscious transition as the applied anesthetic concentration increases, where excitatory neural activity is characterized by a Poincaré-Andronov-Hopf bifurcation with the awake state transitioning to a stable limit cycle and then subsequently to an asymptotically stable unconscious equilibrium state. Furthermore, we address the more general question of synchronization and partial state equipartitioning of neural activity without mean field assumptions. This is done by focusing on a postulated subset of inhibitory neurons that are not themselves connected to other inhibitory neurons. Finally, several numerical experiments are presented to illustrate the different aspects of the proposed theory. PMID:26438186
Design optimization method for Francis turbine
NASA Astrophysics Data System (ADS)
Kawajiri, H.; Enomoto, Y.; Kurosawa, S.
2014-03-01
This paper presents a design optimization system coupled with CFD. The optimization algorithm of the system employs particle swarm optimization (PSO). Blade shape design is carried out with one kind of NURBS curve defined by a series of control points. The system was applied to designing the stationary vanes and the runner of a higher-specific-speed Francis turbine. As the first step, single-objective optimization was performed on the stay vane profile; the second step was multi-objective optimization of the runner over a wide operating range. As a result, it was confirmed that the design system is useful for the development of hydro turbines.
Alternative methods for the design of jet engine control systems
NASA Technical Reports Server (NTRS)
Sain, M. K.; Leake, R. J.; Basso, R.; Gejji, R.; Maloney, A.; Seshadri, V.
1976-01-01
Various alternatives to linear quadratic design methods for jet engine control systems are discussed. The main alternatives are classified into two broad categories: nonlinear global mathematical programming methods and linear local multivariable frequency domain methods. Specific studies within these categories include model reduction, the eigenvalue locus method, the inverse Nyquist method, polynomial design, dynamic programming, and conjugate gradient approaches.
Demystifying Mixed Methods Research Design: A Review of the Literature
ERIC Educational Resources Information Center
Caruth, Gail D.
2013-01-01
Mixed methods research evolved in response to the observed limitations of both quantitative and qualitative designs and is a more complex method. The purpose of this paper was to examine mixed methods research in an attempt to demystify the design thereby allowing those less familiar with its design an opportunity to utilize it in future research.…
Computational Methods Applied to Rational Drug Design
Ramírez, David
2016-01-01
Due to the synergic relationship between medicinal chemistry, bioinformatics, and molecular simulation, the development of new, accurate computational tools for small-molecule drug design has been rising over the last years. The main result is the increased number of publications where computational techniques such as molecular docking, de novo design, and virtual screening have been used to estimate the binding mode, site, and energy of novel small molecules. In this work I review some tools which enable the study of biological systems at the atomistic level, providing relevant information and thereby enhancing the process of rational drug design. PMID:27708723
Supersonic biplane design via adjoint method
NASA Astrophysics Data System (ADS)
Hu, Rui
In developing the next generation supersonic transport airplane, two major challenges must be resolved. The fuel efficiency must be significantly improved, and the sonic boom propagating to the ground must be dramatically reduced. Both of these objectives can be achieved by reducing the shockwaves formed in supersonic flight. The Busemann biplane is famous for using favorable shockwave interaction to achieve nearly shock-free supersonic flight at its design Mach number. Its performance at off-design Mach numbers, however, can be very poor. This dissertation studies the performance of supersonic biplane airfoils at design and off-design conditions. The choked flow and flow-hysteresis phenomena of these biplanes are studied. These effects are due to the finite thickness of the airfoils and the non-uniqueness of the solution to the Euler equations, creating over an order of magnitude more wave drag than that predicted by supersonic thin airfoil theory. As a result, the off-design performance is the major barrier to the practical use of supersonic biplanes. The main contribution of this work is to drastically improve the off-design performance of supersonic biplanes by using an adjoint based aerodynamic optimization technique. The Busemann biplane is used as the baseline design, and its shape is altered to achieve optimal wave drag in a series of Mach numbers ranging from 1.1 to 1.7, during both acceleration and deceleration conditions. The optimized biplane airfoils dramatically reduce the effects of the choked flow and flow-hysteresis phenomena, while maintaining a certain degree of favorable shockwave interaction effects at the design Mach number. Compared to a diamond shaped single airfoil of the same total thickness, the wave drag of our optimized biplane is lower at almost all Mach numbers, and is significantly lower at the design Mach number. In addition, by performing a Navier-Stokes solution for the optimized airfoil, it is verified that the optimized biplane improves
Probabilistic Methods for Structural Design and Reliability
NASA Technical Reports Server (NTRS)
Chamis, Christos C.; Whitlow, Woodrow, Jr. (Technical Monitor)
2002-01-01
This report describes a formal method to quantify structural damage tolerance and reliability in the presence of a multitude of uncertainties in turbine engine components. The method is based at the material behavior level, where primitive variables with their respective scatter ranges are used to describe behavior. Computational simulation is then used to propagate the uncertainties to the structural scale, where damage tolerance and reliability are usually specified. Several sample cases are described to illustrate the effectiveness, versatility, and maturity of the method. Typical results from this method demonstrate that it is mature and that it can be used to probabilistically evaluate turbine engine structural components. It may be inferred from the results that the method is suitable for probabilistically predicting the remaining life in aging or deteriorating structures, for making strategic projections and plans, and for achieving better, cheaper, faster products that give competitive advantages in world markets.
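The propagation of primitive-variable scatter up to a reliability figure can be sketched with a plain Monte Carlo loop. The strength and load distributions below are invented placeholders; the report's method operates on turbine engine material variables and a more elaborate structural simulation.

```python
import random
import statistics

def propagate(n_samples=20000, seed=1):
    """Monte Carlo propagation of primitive-variable scatter to a response:
    reliability is the fraction of samples with positive margin.  The
    strength and load distributions are invented placeholders."""
    rng = random.Random(seed)
    margins = []
    for _ in range(n_samples):
        strength = rng.gauss(100.0, 5.0)  # assumed mean and scatter
        load = rng.gauss(70.0, 8.0)
        margins.append(strength - load)
    reliability = sum(m > 0 for m in margins) / n_samples
    return reliability, statistics.mean(margins)
```

The same loop structure scales to any number of primitive variables; only the per-sample response evaluation (here a trivial subtraction) becomes expensive in a real structural model.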
A comparison of digital flight control design methods
NASA Technical Reports Server (NTRS)
Powell, J. D.; Parsons, E.; Tashker, M. G.
1976-01-01
Many variations in design methods for aircraft digital flight control have been proposed in the literature. In general, the methods fall into two categories: those where the design is done in the continuous domain (or s-plane), and those where the design is done in the discrete domain (or z-plane). This paper evaluates several variations of each category and compares them for various flight control modes of the Langley TCV Boeing 737 aircraft. Design method fidelity is evaluated by examining closed loop root movement and the frequency response of the discretely controlled continuous aircraft. It was found that all methods provided acceptable performance for sample rates greater than 10 cps except the 'uncompensated s-plane design' method which was acceptable above 20 cps. A design procedure based on optimal control methods was proposed that provided the best fidelity at very slow sample rates and required no design iterations for changing sample rates.
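The paper's finding that s-plane designs lose fidelity at slow sample rates can be illustrated by comparing the bilinear (Tustin) discretization of a simple first-order lag with the exact pole mapping; the lag and rates below are illustrative, not taken from the TCV study.

```python
import math

def tustin_pole(a, T):
    """Discrete pole of the lag a/(s+a) after the bilinear (Tustin)
    substitution s -> (2/T)(z-1)/(z+1): a typical 'design in the s-plane,
    then discretize' step."""
    return (2 - a * T) / (2 + a * T)

def exact_pole(a, T):
    """Exact pole mapping z = exp(-aT)."""
    return math.exp(-a * T)
```

For a 10 rad/s lag at 50 samples per second the two poles nearly coincide; at 5 samples per second the Tustin pole collapses to 0 while the exact pole is about 0.135, mirroring the degradation at slow sample rates that the paper documents.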
Soft Computing Methods in Design of Superalloys
NASA Technical Reports Server (NTRS)
Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.
1996-01-01
Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.
The Triton: Design concepts and methods
NASA Technical Reports Server (NTRS)
Meholic, Greg; Singer, Michael; Vanryn, Percy; Brown, Rhonda; Tella, Gustavo; Harvey, Bob
1992-01-01
During the design of the C & P Aerospace Triton, a few problems were encountered that necessitated changes in the configuration. After the initial concept phase, the aspect ratio was increased from 7 to 7.6 to produce a greater lift to drag ratio (L/D = 13) which satisfied the horsepower requirements (118 hp using the Lycoming O-235 engine). The initial concept had a wing planform area of 134 sq. ft. Detailed wing sizing analysis enlarged the planform area to 150 sq. ft., without changing its layout or location. The most significant changes, however, were made just prior to inboard profile design. The fuselage external diameter was reduced from 54 to 50 inches to reduce drag to meet the desired cruise speed of 120 knots. Also, the nose was extended 6 inches to accommodate landing gear placement. Without the extension, the nosewheel received an unacceptable percentage (25 percent) of the landing weight. The final change in the configuration was made in accordance with the stability and control analysis. In order to reduce the static margin from 20 to 13 percent, the horizontal tail area was reduced from 32.02 to 25.0 sq. ft. The Triton meets all the specifications set forth in the design criteria. If time permitted another iteration of the calculations, two significant changes would be made. The vertical stabilizer area would be reduced to decrease the aircraft lateral stability slope since the current value was too high in relation to the directional stability slope. Also, the aileron size would be decreased to reduce the roll rate below the current 106 deg/second. Doing so would allow greater flap area (increasing CL(sub max)) and thus reduce the overall wing area. C & P would also recalculate the horsepower and drag values to further validate the 120 knot cruising speed.
A survey on methods of design features identification
NASA Astrophysics Data System (ADS)
Grabowik, C.; Kalinowski, K.; Paprocka, I.; Kempa, W.
2015-11-01
It is widely accepted that design features are one of the most attractive methods of integrating most fields of engineering activity, such as design modelling, process planning, or production scheduling. One of the most important tasks realized in the integration of design and planning functions is design translation, meaning the mapping of design data into data that matter from the point of view of process planning needs, i.e. manufacturing data. A design geometrical shape translation process can be realized with one of the following strategies: (i) designing with a previously prepared design features library, also known as the DBF (design by feature) method; (ii) interactive feature recognition (IFR); (iii) automatic feature recognition (AFR). In the DBF method the design geometrical shape is created with design features. There are two basic approaches to design modelling in the DBF method: classic, in which a part design is modelled from beginning to end with design features previously stored in a design features database, and hybrid, in which a part is partially created with standard predefined CAD system tools and the rest with suitable design features. Automatic feature recognition consists in an autonomous search of a product model, represented with a specific design representation method, in order to find those model features which might potentially be recognized as design features, manufacturing features, etc. This approach requires a searching algorithm that can carry out the whole recognition process without user supervision. Currently there are many AFR methods. These methods most often require the product model to be represented with a B-Rep representation, rarely CSG, and very rarely wireframe. In the IFR method potential features are recognized by a user, most often by pointing out those surfaces which seem to belong to a
A flexible layout design method for passive micromixers.
Deng, Yongbo; Liu, Zhenyu; Zhang, Ping; Liu, Yongshun; Gao, Qingyong; Wu, Yihui
2012-10-01
This paper discusses a flexible layout design method for passive micromixers based on the topology optimization of fluidic flows. Unlike the trial-and-error method, this method obtains the detailed layout of a passive micromixer according to the desired mixing performance by solving a topology optimization problem. The dependence on the experience of the designer is therefore weakened when this method is used to design a passive micromixer with acceptable mixing performance. Several design disciplines for passive micromixers are considered to demonstrate the flexibility of the layout design method. These design disciplines include the approximation of the real 3D micromixer, manufacturing feasibility, spatially periodic design, and the effects of the Péclet number and Reynolds number on the designs obtained by this layout design method. The capability of this design method is validated by several comparisons between the obtained layouts and the optimized designs in recently published literature, where the value of the mixing measurement is improved by up to 40.4% for one cycle of the micromixer. PMID:22736305
Method for designing and controlling compliant gripper
NASA Astrophysics Data System (ADS)
Spanu, A. R.; Besnea, D.; Avram, M.; Ciobanu, R.
2016-08-01
Compliant grippers are useful for high-accuracy grasping of small objects, with adaptive control of contact points along the active surfaces of the fingers. Spatial trajectories of the elements have become a must, due to the development of MEMS. The paper presents the solution for the compliant gripper designed by the authors, so both planar and spatial movements are discussed. At the beginning of the process, the gripper can work as a passive one, up to the moment when it has to reach the object surface; the forces exerted by the elements must avoid damaging the object. As part of the system, a camera takes pictures of the object in order to facilitate positioning of the system. Once contact is established, the mechanism acts as an active gripper, driven by an electrical stepper motor with controlled movement.
Design Methods and Optimization for Morphing Aircraft
NASA Technical Reports Server (NTRS)
Crossley, William A.
2005-01-01
This report provides a summary of accomplishments made during this research effort. The major accomplishments are in three areas. The first is the use of a multiobjective optimization strategy to help identify potential morphing features; it uses an existing aircraft sizing code to predict the weight, size and performance of several fixed-geometry aircraft that are Pareto-optimal based upon two competing aircraft performance objectives. The second area, titled morphing as an independent variable, formulates the sizing of a morphing aircraft as an optimization problem in which the amounts of geometric morphing for various aircraft parameters are included as design variables. This second effort consumed most of the overall effort on the project. The third area involved a more detailed sizing study of a commercial transport aircraft that would incorporate a morphing wing to possibly enable transatlantic point-to-point passenger service.
NASA Technical Reports Server (NTRS)
Patnaik, Surya N.; Pai, Shantaram S.; Coroneos, Rula M.
2010-01-01
Structural designs generated by the traditional method, the optimization method and the stochastic design concept are compared. In the traditional method, the constraints are manipulated to obtain the design and the weight is back-calculated. In design optimization, the weight of a structure becomes the merit function, with constraints imposed on failure modes, and an optimization algorithm is used to generate the solution. The stochastic design concept accounts for uncertainties in loads, material properties, and other parameters, and a solution is obtained by solving a design optimization problem for a specified reliability. Acceptable solutions were produced by all three methods. The variation in the weight calculated by the methods was modest, though some variation was noticed in the designs themselves; this may be attributed to structural indeterminacy. It is prudent to develop a design by all three methods prior to fabrication. The traditional design method can be improved when simplified sensitivities of the behavior constraints are used. Such sensitivities can reduce design calculations and may have the potential to unify the traditional and optimization methods. Weight versus reliability traced out an inverted-S-shaped graph. The center of the graph corresponded to the mean-valued design. A heavy design with weight approaching infinity could be produced for a near-zero rate of failure, while weight can be reduced to a small value for a most failure-prone design. Probabilistic modeling of load and material properties remained a challenge.
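The inverted-S trade-off between weight and reliability described above can be illustrated with a small, hypothetical sizing calculation (our own sketch, not the authors' code): a bar in tension is sized so that the probability of a normally distributed load exceeding a fixed material strength meets a target failure rate. All numbers (load, strength) are illustrative assumptions.

```python
# Hypothetical sizing sketch (illustrative numbers, not the authors' code):
# size a bar in tension so that P(load/A > strength) equals a target failure
# probability, with a normally distributed load and deterministic strength.
from statistics import NormalDist

def area_for_reliability(p_fail, load_mean=100e3, load_sd=10e3, strength=250e6):
    """Smallest cross-section area A (m^2) with P(load/A > strength) <= p_fail."""
    # load/A > strength  <=>  load > strength * A, so invert the load CDF.
    load_quantile = NormalDist(load_mean, load_sd).inv_cdf(1.0 - p_fail)
    return load_quantile / strength

a_mean = area_for_reliability(0.5)    # mean-valued design (center of the curve)
a_safe = area_for_reliability(1e-9)   # near-zero failure rate: much heavier
print(a_safe / a_mean)
```

As the target failure probability approaches zero, the required area (and hence weight) grows without bound, which is the steep end of the inverted-S curve.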
Equipartition gamma-ray blazars and the location of the gamma-ray emission site in 3C 279
Dermer, Charles D.; Cerruti, Matteo; Lott, Benoit
2014-02-20
Blazar spectral models generally have numerous unconstrained parameters, leading to ambiguous values for physical properties like Doppler factor δ{sub D} or fluid magnetic field B'. To help remedy this problem, a few modifications of the standard leptonic blazar jet scenario are considered. First, a log-parabola function for the electron distribution is used. Second, analytic expressions relating energy loss and kinematics to blazar luminosity and variability, written in terms of equipartition parameters, imply δ{sub D}, B', and the peak electron Lorentz factor γ{sub pk}{sup ′}. The external radiation field in a blazar is approximated by Lyα radiation from the broad-line region (BLR) and ≈0.1 eV infrared radiation from a dusty torus. When used to model 3C 279 spectral energy distributions from 2008 and 2009 reported by Hayashida et al., we derive δ{sub D} ∼ 20-30, B' ∼ few G, and total (IR + BLR) external radiation field energy densities u ∼ 10{sup –2}-10{sup –3} erg cm{sup –3}, implying an origin of the γ-ray emission site in 3C 279 at the outer edges of the BLR. This is consistent with the γ-ray emission site being located at a distance R ≲ Γ{sup 2} ct {sub var} ∼ 0.1(Γ/30){sup 2}(t {sub var}/10{sup 4} s) pc from the black hole powering 3C 279's jets, where t {sub var} is the variability timescale of the radiation in the source frame, and at farther distances for narrow-jet and magnetic-reconnection models. Excess ≳ 5 GeV γ-ray emission observed with Fermi LAT from 3C 279 challenges the model, opening the possibility of a second leptonic component or a hadronic origin of the emission. For low hadronic content, absolute jet powers of ≈10% of the Eddington luminosity are calculated.
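The distance scaling quoted above, R ≲ Γ² c t_var, is easy to check numerically; the following is our own back-of-envelope arithmetic, not code from the paper.

```python
# Back-of-envelope check (our arithmetic, not the paper's code) of the scaling
# R <~ Gamma^2 * c * t_var for the gamma-ray emission site distance in 3C 279.
C_CM_S = 2.998e10   # speed of light, cm/s
PC_CM = 3.086e18    # one parsec in cm

def emission_site_distance_pc(gamma=30.0, t_var_s=1.0e4):
    """Upper-limit distance R = Gamma^2 * c * t_var, in parsecs."""
    return gamma**2 * C_CM_S * t_var_s / PC_CM

# Gamma = 30 and t_var = 10^4 s reproduce the ~0.1 pc scale quoted above.
print(emission_site_distance_pc())
```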
Analytical techniques for instrument design - matrix methods
Robinson, R.A.
1997-09-01
We take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalisation to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, we discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix: diagonalisation (Moller-Nielsen method), coordinate changes (e.g. from ({Delta}k{sub I}, {Delta}k{sub F}) to ({Delta}E, {Delta}Q) plus 2 dummy variables), integration over one or more variables (e.g. over such dummy variables), integration subject to linear constraints (e.g. Bragg's Law for analysers), inversion to give the variance-covariance matrix, and so on. We show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. We argue that a generalised program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. We also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
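The quadratic-form manipulations described above can be sketched for a small Gaussian resolution function exp(-xᵀMx/2): marginalizing over a subset of variables reduces the precision matrix M to a Schur complement, and inverting M yields the variance-covariance matrix. The matrix values and function name below are our own illustrative choices, not the instrument code's.

```python
# Small Gaussian resolution function exp(-x^T M x / 2): integrating out a
# subset of variables replaces the precision matrix M by a Schur complement,
# and inverting M gives the variance-covariance matrix. Values illustrative.
import numpy as np

def integrate_out(M, keep, drop):
    """Precision matrix of the marginal over the 'drop' variables."""
    A = M[np.ix_(keep, keep)]
    B = M[np.ix_(keep, drop)]
    D = M[np.ix_(drop, drop)]
    return A - B @ np.linalg.inv(D) @ B.T   # Schur complement

M = np.array([[4.0, 1.0, 0.5],
              [1.0, 3.0, 0.2],
              [0.5, 0.2, 2.0]])
M_red = integrate_out(M, keep=[0, 1], drop=[2])
cov = np.linalg.inv(M)   # variance-covariance matrix of the full problem

# Consistency: marginalizing then inverting agrees with inverting then
# selecting the corresponding covariance block.
print(np.allclose(np.linalg.inv(M_red), cov[:2, :2]))
```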
Analytical techniques for instrument design -- Matrix methods
Robinson, R.A.
1997-12-31
The authors take the traditional Cooper-Nathans approach, as has been applied for many years for steady-state triple-axis spectrometers, and consider its generalization to other inelastic scattering spectrometers. This involves a number of simple manipulations of exponentials of quadratic forms. In particular, they discuss a toolbox of matrix manipulations that can be performed on the 6-dimensional Cooper-Nathans matrix. They show how these tools can be combined to solve a number of important problems, within the narrow-band limit and the gaussian approximation. They will argue that a generalized program that can handle multiple different spectrometers could (and should) be written in parallel to the Monte-Carlo packages that are becoming available. They also discuss the complementarity between detailed Monte-Carlo calculations and the approach presented here. In particular, Monte-Carlo methods traditionally simulate the real experiment as performed in practice, given a model scattering law, while the Cooper-Nathans method asks the inverse question: given that a neutron turns up in a particular spectrometer configuration (e.g. angle and time of flight), what is the probability distribution of possible scattering events at the sample? The Monte-Carlo approach could be applied in the same spirit to this question.
HEALTHY study rationale, design and methods
2009-01-01
The HEALTHY primary prevention trial was designed and implemented in response to the growing numbers of children and adolescents being diagnosed with type 2 diabetes. The objective was to moderate risk factors for type 2 diabetes. Modifiable risk factors measured were indicators of adiposity and glycemic dysregulation: body mass index ≥85th percentile, fasting glucose ≥5.55 mmol l-1 (100 mg per 100 ml) and fasting insulin ≥180 pmol l-1 (30 μU ml-1). A series of pilot studies established the feasibility of performing data collection procedures and tested the development of an intervention consisting of four integrated components: (1) changes in the quantity and nutritional quality of food and beverage offerings throughout the total school food environment; (2) physical education class lesson plans and accompanying equipment to increase both participation and number of minutes spent in moderate-to-vigorous physical activity; (3) brief classroom activities and family outreach vehicles to increase knowledge, enhance decision-making skills and support and reinforce youth in accomplishing goals; and (4) communications and social marketing strategies to enhance and promote changes through messages, images, events and activities. Expert study staff provided training, assistance, materials and guidance for school faculty and staff to implement the intervention components. A cohort of students was enrolled in sixth grade and followed to the end of eighth grade. They attended a health screening data collection at baseline and end of study that involved measurement of height, weight, blood pressure, waist circumference and a fasting blood draw. Height and weight were also collected at the end of the seventh grade. The study was conducted in 42 middle schools, six at each of seven locations across the country, with 21 schools randomized to receive the intervention and 21 to act as controls (data collection activities only). Middle school was the unit of sample size and
Method speeds tapered rod design for directional well
Hu Yongquan; Yuan Xiangzhong
1995-10-16
Determination of the minimum rod diameter from statistical relationships can decrease the time needed to design a sucker-rod string for a directional well. A tapered rod string design for a directional well is more complex than for a vertical well. Based on continuous beam-column theory, rod string design for a directional well is a trial-and-error process; the key to reducing the time to obtain a solution is to rapidly determine the minimum rod diameter, which can be done with a statistical relationship. The paper describes sucker rods, the design method, basic rod design analysis, and minimum rod diameter determination.
Inhalation exposure systems: design, methods and operation.
Wong, Brian A
2007-01-01
The respiratory system, the major route for entry of oxygen into the body, provides entry for external compounds, including pharmaceutic and toxic materials. These compounds (that might be inhaled under environmental, occupational, medical, or other situations) can be administered under controlled conditions during laboratory inhalation studies. Inhalation study results may be controlled or adversely affected by variability in four key factors: animal environment; exposure atmosphere; inhaled dose; and individual animal biological response. Three of these four factors can be managed through engineering processes. Variability in the animal environment is reduced by engineering control of temperature, humidity, oxygen content, waste gas content, and noise in the exposure facility. Exposure atmospheres are monitored and adjusted to assure a consistent and known exposure for each animal dose group. The inhaled dose, affected by changes in respiration physiology, may be controlled by exposure-specific monitoring of respiration. Selection of techniques and methods for the three factors affected by engineering allows the toxicologic pathologist to study the reproducibility of the fourth factor, the biological response of the animal. PMID:17325967
A new interval optimization method considering tolerance design
NASA Astrophysics Data System (ADS)
Jiang, C.; Xie, H. C.; Zhang, Z. G.; Han, X.
2015-12-01
This study considers the design variable uncertainty in the actual manufacturing process for a product or structure and proposes a new interval optimization method based on tolerance design, which can provide not only an optimal design but also the allowable maximal manufacturing errors that the design can bear. The design variables' manufacturing errors are depicted using the interval method, and an interval optimization model for the structure is constructed. A dimensionless design tolerance index is defined to describe the overall uncertainty of all design variables, and by combining the nominal objective function, a deterministic two-objective optimization model is built. The possibility degree of interval is used to represent the reliability of the constraints under uncertainty, through which the model is transformed to a deterministic optimization problem. Three numerical examples are investigated to verify the effectiveness of the present method.
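As a toy illustration of the tolerance-index idea above (our own construction, not the paper's algorithm), one can ask for the largest dimensionless tolerance t/x for which a simple constraint g(v) ≤ 0 holds over the whole manufacturing interval [x − t, x + t]:

```python
# Toy tolerance-design sketch (our construction, not the paper's algorithm):
# for a design value x manufactured anywhere in [x - t, x + t], find the
# largest dimensionless tolerance t/x keeping g(v) = v**2 - 2 <= 0 feasible.
def worst_case_g(x, t, g=lambda v: v * v - 2.0):
    # endpoint sampling suffices for this simple monotone example
    return max(g(x - t), g(x + t))

def max_feasible_tolerance(x, steps=10000):
    t, dt = 0.0, x / steps
    while worst_case_g(x, t + dt) <= 0.0:
        t += dt
    return t / x   # dimensionless tolerance index

print(max_feasible_tolerance(1.2))
```

The full method poses this as a two-objective problem (nominal performance vs. tolerance index); the sketch isolates only the feasibility-over-an-interval ingredient.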
An analytical method for designing low noise helicopter transmissions
NASA Technical Reports Server (NTRS)
Bossler, R. B., Jr.; Bowes, M. A.; Royal, A. C.
1978-01-01
The development and experimental validation of a method for analytically modeling the noise mechanism in the helicopter geared power transmission systems is described. This method can be used within the design process to predict interior noise levels and to investigate the noise reducing potential of alternative transmission design details. Examples are discussed.
What Can Mixed Methods Designs Offer Professional Development Program Evaluators?
ERIC Educational Resources Information Center
Giordano, Victoria; Nevin, Ann
2007-01-01
In this paper, the authors describe the benefits and pitfalls of mixed methods designs. They argue that mixed methods designs may be preferred when evaluating professional development programs for p-K-12 education given the new call for accountability in making data-driven decisions. They summarize and critique the studies in terms of limitations…
Turbine blade fixture design using kinematic methods and genetic algorithms
NASA Astrophysics Data System (ADS)
Bausch, John J., III
2000-10-01
The design of fixtures for turbine blades is a difficult problem even for experienced toolmakers. Turbine blades are characterized by complex 3D surfaces, high-performance materials that are difficult to manufacture, close-tolerance finish requirements, and high-precision machining accuracy. Tool designers typically rely on modified designs based on experience, but have no analytical tools to guide or even evaluate their designs. This paper examines the application of kinematic algorithms to the design of six-point-nest, seventh-point-clamp datum transfer fixtures for turbine blade production. The kinematic algorithms, based on screw coordinate theory, are computationally intensive; when used in a blind search mode, the time required to generate an actual design is unreasonable. In order to reduce the computation time, the kinematic methods are combined with genetic algorithms and a set of heuristic design rules to guide the search. The kinematic, genetic, and heuristic methods were integrated within a fixture design module as part of the Unigraphics CAD system used by Pratt and Whitney. The kinematic design module was used to generate a datum transfer fixture design for a standard production turbine blade. This design was then used to construct an actual fixture, which was compared to the existing production fixture for the same part. The positional accuracy of both designs was compared using a coordinate measurement machine (CMM). Based on the CMM data, the observed variation of the kinematic design was over two orders of magnitude less than that of the production design, resulting in greatly improved accuracy.
Probabilistic Methods for Uncertainty Propagation Applied to Aircraft Design
NASA Technical Reports Server (NTRS)
Green, Lawrence L.; Lin, Hong-Zong; Khalessi, Mohammad R.
2002-01-01
Three methods of probabilistic uncertainty propagation and quantification (the method of moments, Monte Carlo simulation, and a nongradient simulation search method) are applied to an aircraft analysis and conceptual design program to demonstrate design under uncertainty. The chosen example problems appear to have discontinuous design spaces and thus these examples pose difficulties for many popular methods of uncertainty propagation and quantification. However, specific implementation features of the first and third methods chosen for use in this study enable successful propagation of small uncertainties through the program. Input uncertainties in two configuration design variables are considered. Uncertainties in aircraft weight are computed. The effects of specifying required levels of constraint satisfaction with specified levels of input uncertainty are also demonstrated. The results show, as expected, that the designs under uncertainty are typically heavier and more conservative than those in which no input uncertainties exist.
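Two of the propagation methods named above, the method of moments and Monte Carlo simulation, can be compared on a toy weight model (our example, not the NASA program); the model W = a·S + b·AR^1.5 and all input statistics are illustrative assumptions.

```python
# Toy comparison (our example, not the NASA program) of two propagation
# methods on a smooth, hypothetical weight model W = a*S + b*AR**1.5 with
# small normal uncertainties on wing area S and aspect ratio AR.
import math, random

def weight(S, AR, a=50.0, b=120.0):
    return a * S + b * AR**1.5

S_mu, S_sd = 30.0, 0.6
AR_mu, AR_sd = 8.0, 0.16

# Method of moments (first order): push input variances through the gradient.
dW_dS = 50.0
dW_dAR = 120.0 * 1.5 * math.sqrt(AR_mu)
mom_sd = math.hypot(dW_dS * S_sd, dW_dAR * AR_sd)

# Monte Carlo simulation.
random.seed(0)
samples = [weight(random.gauss(S_mu, S_sd), random.gauss(AR_mu, AR_sd))
           for _ in range(200_000)]
mc_mean = sum(samples) / len(samples)
mc_sd = math.sqrt(sum((w - mc_mean) ** 2 for w in samples) / len(samples))

# For small input uncertainties the two estimates agree closely; discontinuous
# design spaces, as in the report above, are what break such agreement.
print(abs(mc_sd - mom_sd) / mom_sd < 0.02)
```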
Expanding color design methods for architecture and allied disciplines
NASA Astrophysics Data System (ADS)
Linton, Harold E.
2002-06-01
The color design processes of visual artists, architects, designers, and theoreticians included in this presentation reflect the practical role of color in architecture. What the color design professional brings to the architectural design team is an expertise and rich sensibility made up of a broad awareness and a finely tuned visual perception. This includes a knowledge of design and its history, expertise with industrial color materials and their methods of application, an awareness of design context and cultural identity, a background in physiology and psychology as they relate to human welfare, and an ability to problem-solve and respond creatively to design concepts with innovative ideas. The broadening of the definition of the colorist's role in architectural design provides architects, artists and designers with significant opportunities for continued professional and educational development.
Aerodynamic design optimization by using a continuous adjoint method
NASA Astrophysics Data System (ADS)
Luo, JiaQi; Xiong, JunTao; Liu, Feng
2014-07-01
This paper presents the fundamentals of a continuous adjoint method and the applications of this method to the aerodynamic design optimization of both external and internal flows. General formulation of the continuous adjoint equations and the corresponding boundary conditions are derived. With the adjoint method, the complete gradient information needed in the design optimization can be obtained by solving the governing flow equations and the corresponding adjoint equations only once for each cost function, regardless of the number of design parameters. An inverse design of airfoil is firstly performed to study the accuracy of the adjoint gradient and the effectiveness of the adjoint method as an inverse design method. Then the method is used to perform a series of single and multiple point design optimization problems involving the drag reduction of airfoil, wing, and wing-body configuration, and the aerodynamic performance improvement of turbine and compressor blade rows. The results demonstrate that the continuous adjoint method can efficiently and significantly improve the aerodynamic performance of the design in a shape optimization problem.
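The key property claimed above, that the adjoint method yields the complete gradient from one flow solve plus one adjoint solve per cost function regardless of the number of design parameters, can be shown with a discrete analogue (our sketch; the paper treats the continuous flow equations).

```python
# Discrete analogue of the adjoint argument (our sketch, not the paper's
# solver): for A(p) u = b with cost J = c^T u, a single adjoint solve
# A^T lam = c gives dJ/dp_i = -lam^T (dA/dp_i) u for every design parameter
# i, independent of how many parameters there are.
import numpy as np

def adjoint_gradient(A, dA_dp, b, c):
    u = np.linalg.solve(A, b)        # one "flow" solve
    lam = np.linalg.solve(A.T, c)    # one adjoint solve per cost function
    return np.array([-lam @ (dA @ u) for dA in dA_dp])

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([1.0, 0.0])
c = np.array([0.0, 1.0])
# Two hypothetical design parameters, each perturbing one diagonal entry.
dA_dp = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]

grad = adjoint_gradient(A, dA_dp, b, c)

# Finite-difference check of the first gradient component.
eps = 1e-6
J = lambda M: c @ np.linalg.solve(M, b)
fd = (J(A + eps * dA_dp[0]) - J(A)) / eps
print(abs(grad[0] - fd) < 1e-5)
```

Adding more design parameters only lengthens the list comprehension; no further linear solves are required, which is the efficiency argument made in the abstract.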
Single-Case Designs and Qualitative Methods: Applying a Mixed Methods Research Perspective
ERIC Educational Resources Information Center
Hitchcock, John H.; Nastasi, Bonnie K.; Summerville, Meredith
2010-01-01
The purpose of this conceptual paper is to describe a design that mixes single-case (sometimes referred to as single-subject) and qualitative methods, hereafter referred to as a single-case mixed methods design (SCD-MM). Minimal attention has been given to the topic of applying qualitative methods to SCD work in the literature. These two…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-11
Ambient Air Monitoring Reference and Equivalent Methods: Designation of a New Equivalent Method. AGENCY: Environmental Protection Agency. ACTION: Notice of the designation of a new equivalent method... The designation is made under the provisions of 40 CFR part 53, as amended on August 31, 2011 (76 FR 54326-54341)...
An artificial viscosity method for the design of supercritical airfoils
NASA Technical Reports Server (NTRS)
Mcfadden, G. B.
1979-01-01
A numerical technique is presented for the design of two-dimensional supercritical wing sections with low wave drag. The method is a design mode of the analysis code H which gives excellent agreement with experimental results and is widely used in the aircraft industry. Topics covered include the partial differential equations of transonic flow, the computational procedure and results; the design procedure; a convergence theorem; and description of the code.
Numerical methods for aerothermodynamic design of hypersonic space transport vehicles
NASA Astrophysics Data System (ADS)
Wanie, K. M.; Brenneis, A.; Eberle, A.; Heiss, S.
1993-04-01
The requirement that the design process for hypersonic vehicles predict the flow past entire configurations with wings, fins, flaps, and propulsion system represents one of the major challenges for aerothermodynamics. In this context computational fluid dynamics has emerged as a powerful tool to support the experimental work. Several numerical methods developed at MBB to fulfill the needs of the design process are described. The governing equations and fundamental details of the solution methods are briefly reviewed. Results are given for both geometrically simple test cases and realistic hypersonic configurations. Since there is still a considerable lack of experience with hypersonic flow calculations, extensive testing and verification are essential. This verification is done by comparing results with experimental data and other numerical methods. The results presented prove that the methods used are robust, flexible, and accurate enough to fulfill the strong needs of the design process.
New directions for Artificial Intelligence (AI) methods in optimum design
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1989-01-01
Developments and applications of artificial intelligence (AI) methods in the design of structural systems are reviewed. Principal shortcomings of the current approach are emphasized, and the need for some degree of formalism in the development environment for such design tools is underscored. Emphasis is placed on efforts to integrate algorithmic computations in expert systems.
Two-Method Planned Missing Designs for Longitudinal Research
ERIC Educational Resources Information Center
Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D.
2014-01-01
We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…
Investigating the Use of Design Methods by Capstone Design Students at Clemson University
ERIC Educational Resources Information Center
Miller, W. Stuart; Summers, Joshua D.
2013-01-01
The authors describe a preliminary study to understand the attitude of engineering students regarding the use of design methods in projects to identify the factors either affecting or influencing the use of these methods by novice engineers. A senior undergraduate capstone design course at Clemson University, consisting of approximately fifty…
New knowledge network evaluation method for design rationale management
NASA Astrophysics Data System (ADS)
Jing, Shikai; Zhan, Hongfei; Liu, Jihong; Wang, Kuan; Jiang, Hao; Zhou, Jingtao
2015-01-01
Current design rationale (DR) systems have not demonstrated the value of the approach in practice, since little attention has been paid to methods for evaluating DR knowledge. To systematize the knowledge management process for future computer-aided DR applications, a prerequisite is to provide a measure of DR knowledge. In this paper, a new knowledge network evaluation method for DR management is presented. The method characterizes the value of DR knowledge from four perspectives: design rationale structure scale, association knowledge and reasoning ability, degree of design justification support, and degree of knowledge representation conciseness. The comprehensive value of DR knowledge is also measured by the proposed method. To validate the proposed method, different styles of DR knowledge network and the performance of the proposed measures are discussed. The evaluation method has been applied in two realistic design cases and compared with structural measures. The research proposes a DR knowledge evaluation method that can provide an objective metric and a selection basis for DR knowledge reuse during the product design process. In addition, the method proves to give more effective guidance and support for the application and management of DR knowledge.
Design method for four-reflector type beam waveguide systems
NASA Technical Reports Server (NTRS)
Betsudan, S.; Katagi, T.; Urasaki, S.
1986-01-01
Discussed is a method for the design of four-reflector type beam waveguide feed systems, comprised of a conical horn and four focused reflectors, which are widely used as the primary reflector systems for communications satellite Earth station antennas. The design parameters for these systems are clarified, the relations between the parameters are derived based on beam mode expansion, and the independent design parameters are specified. The characteristics of these systems, namely spillover loss, crosspolarization components, and frequency characteristics, and their relation to the design parameters, are also shown. It is further indicated that the design parameters which determine the dimensions of the conical horn or the shape of the focused reflectors can be unambiguously established once the design standard for the system has been selected as either: (1) minimizing the crosspolarization component while keeping the spillover loss within acceptable limits, or (2) minimizing the spillover loss while maintaining the crosspolarization components below an acceptable level, and the independent design parameters, such as the respective sizes of the focused reflectors and the distances between them, have been established according to mechanical restrictions. A sample design is also shown. In addition to clarifying the effects of each of the design parameters on the system and improving insight into these systems, this design method also increases the efficiency of designing such systems.
Epidemiological designs for vaccine safety assessment: methods and pitfalls.
Andrews, Nick
2012-09-01
Three commonly used designs for vaccine safety assessment post licensure are cohort, case-control and self-controlled case series. These methods are often used with routine health databases and immunisation registries. This paper considers the issues that may arise when designing an epidemiological study, such as understanding the vaccine safety question, case definition and finding, limitations of data sources, uncontrolled confounding, and pitfalls that apply to the individual designs. The example of MMR and autism, where all three designs have been used, is presented to help consider these issues. PMID:21985898
XML-based product information processing method for product design
NASA Astrophysics Data System (ADS)
Zhang, Zhen Yu
2011-12-01
Design knowledge of modern mechatronics product is based on information processing as the center of the knowledge-intensive engineering, thus product design innovation is essentially the knowledge and information processing innovation. Analysis of the role of mechatronics product design knowledge and information management features, a unified model of XML-based product information processing method is proposed. Information processing model of product design includes functional knowledge, structural knowledge and their relationships. For the expression of product function element, product structure element, product mapping relationship between function and structure based on the XML model are proposed. The information processing of a parallel friction roller is given as an example, which demonstrates that this method is obviously helpful for knowledge-based design system and product innovation.
Novel parameter-based flexure bearing design method
NASA Astrophysics Data System (ADS)
Amoedo, Simon; Thebaud, Edouard; Gschwendtner, Michael; White, David
2016-06-01
A parameter study was carried out on the design variables of a flexure bearing to be used in a Stirling engine with a fixed axial displacement and a fixed outer diameter. A design method was developed in order to assist identification of the optimum bearing configuration. This was achieved through a parameter study of the bearing carried out with ANSYS®. The parameters varied were the number and the width of the arms, the thickness of the bearing, the eccentricity, the size of the starting and ending holes, and the turn angle of the spiral. Comparison was made between the different designs in terms of axial and radial stiffness, the natural frequency, and the maximum induced stresses. Moreover, the Finite Element Analysis (FEA) was compared to theoretical results for a given design. The results led to a graphical design method which assists the selection of flexure bearing geometrical parameters based on pre-determined geometric and material constraints.
The Design with Intent Method: a design tool for influencing user behaviour.
Lockton, Dan; Harrison, David; Stanton, Neville A
2010-05-01
Using product and system design to influence user behaviour offers potential for improving performance and reducing user error, yet little guidance is available at the concept generation stage for design teams briefed with influencing user behaviour. This article presents the Design with Intent Method, an innovation tool for designers working in this area, illustrated via application to an everyday human-technology interaction problem: reducing the likelihood of a customer leaving his or her card in an automatic teller machine. The example application results in a range of feasible design concepts which are comparable to existing developments in ATM design, demonstrating that the method has potential for development and application as part of a user-centred design process.
INNOVATIVE METHODS FOR THE OPTIMIZATION OF GRAVITY STORM SEWER DESIGN
The purpose of this paper is to describe a new method for optimizing the design of urban storm sewer systems. Previous efforts to optimize gravity sewers have met with limited success because classical optimization methods require that the problem be well behaved, e.g. describ...
Designing, Teaching, and Evaluating Two Complementary Mixed Methods Research Courses
ERIC Educational Resources Information Center
Christ, Thomas W.
2009-01-01
Teaching mixed methods research is difficult. This longitudinal explanatory study examined how two classes were designed, taught, and evaluated. Curriculum, Research, and Teaching (EDCS-606) and Mixed Methods Research (EDCS-780) used a research proposal generation process to highlight the importance of the purpose, research question and…
GAMMA-RAY BLAZARS NEAR EQUIPARTITION AND THE ORIGIN OF THE GeV SPECTRAL BREAK IN 3C 454.3
Cerruti, Matteo; Dermer, Charles D.; Lott, Benoit
2013-07-01
Observations performed with the Fermi-LAT telescope have revealed the presence of a spectral break in the GeV spectrum of flat-spectrum radio quasars (FSRQs) and other low- and intermediate-synchrotron peaked blazars. We propose that this feature can be explained by Compton scattering of broad-line region photons by a non-thermal population of electrons described by a log-parabolic function. We consider in particular a scenario in which the energy densities of particles, magnetic field, and soft photons in the emitting region are close to equipartition. We show that this model can satisfactorily account for the overall spectral energy distribution of the FSRQ 3C 454.3, reproducing the GeV spectral cutoff due to Klein-Nishina effects and a curving electron distribution.
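The log-parabolic electron distribution invoked here has the standard form N(γ) ∝ (γ/γ₀)^−(s + r·log₁₀(γ/γ₀)), i.e. a power law whose local index steepens linearly in log energy, which is what produces the curving spectrum. The parameter values below are illustrative, not fitted to 3C 454.3.

```python
import math

def log_parabola(gamma, n0=1.0, gamma0=1e3, s=2.0, r=0.5):
    """Log-parabolic particle spectrum: the local power-law index is
    s + r*log10(gamma/gamma0), steepening with energy (r > 0)."""
    x = math.log10(gamma / gamma0)
    return n0 * (gamma / gamma0) ** (-(s + r * x))
```

The steepening index is what lets a single smooth distribution reproduce both the low-energy power law and the GeV softening once Klein-Nishina suppression of the Compton cross section sets in.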
Optimal Input Signal Design for Data-Centric Estimation Methods
Deshpande, Sunil; Rivera, Daniel E.
2013-01-01
Data-centric estimation methods such as Model-on-Demand and Direct Weight Optimization form attractive techniques for estimating unknown functions from noisy data. These methods rely on generating a local function approximation from a database of regressors at the current operating point with the process repeated at each new operating point. This paper examines the design of optimal input signals formulated to produce informative data to be used by local modeling procedures. The proposed method specifically addresses the distribution of the regressor vectors. The design is examined for a linear time-invariant system under amplitude constraints on the input. The resulting optimization problem is solved using semidefinite relaxation methods. Numerical examples show the benefits in comparison to a classical PRBS input design. PMID:24317042
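The classical PRBS input used as the comparison baseline can be generated with a maximal-length linear-feedback shift register; the 7-bit register with taps at bits 7 and 6 (the standard PRBS7 polynomial x⁷ + x⁶ + 1, period 127) is one common configuration, not necessarily the one used in the paper.

```python
def prbs7(length, seed=0x7F):
    """Generate a +/-1 amplitude PRBS from a 7-bit maximal-length
    LFSR with taps at bits 7 and 6 (period 127)."""
    state = seed & 0x7F
    out = []
    for _ in range(length):
        bit = ((state >> 6) ^ (state >> 5)) & 1  # feedback bit
        state = ((state << 1) | bit) & 0x7F
        out.append(1.0 if bit else -1.0)
    return out
```

A PRBS is persistently exciting but distributes its regressors uniformly; the paper's point is that shaping the input distribution instead, subject to the same amplitude constraint, yields more informative data for local modeling.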
Test methods and design allowables for fibrous composites. Volume 2
NASA Technical Reports Server (NTRS)
Chamis, Christos C. (Editor)
1989-01-01
Topics discussed include extreme/hostile environment testing, establishing design allowables, and property/behavior specific testing. Papers are presented on environmental effects on the high strain rate properties of graphite/epoxy composite, the low-temperature performance of short-fiber reinforced thermoplastics, the abrasive wear behavior of unidirectional and woven graphite fiber/PEEK, test methods for determining design allowables for fiber reinforced composites, and statistical methods for calculating material allowables for MIL-HDBK-17. Attention is also given to a test method to measure the response of composite materials under reversed cyclic loads, a through-the-thickness strength specimen for composites, the use of torsion tubes to measure in-plane shear properties of filament-wound composites, the influence of test fixture design on the Iosipescu shear test for fiber composite materials, and a method for monitoring in-plane shear modulus in fatigue testing of composites.
Tradeoff methods in multiobjective insensitive design of airplane control systems
NASA Technical Reports Server (NTRS)
Schy, A. A.; Giesy, D. P.
1984-01-01
The latest results of an ongoing study of computer-aided design of airplane control systems are given. Constrained minimization algorithms are used, with the design objectives in the constraint vector. The concept of Pareto optimality is briefly reviewed, and it is shown how an experienced designer can use it to find designs which are well balanced in all objectives. The problem of finding designs which are insensitive to uncertainty in system parameters is then discussed, introducing a probabilistic vector definition of sensitivity which is consistent with the deterministic Pareto optimal problem. Insensitivity is important in any practical design, but it is particularly important in the design of feedback control systems, since it is considered the most important distinctive property of feedback control. Methods of tradeoff between deterministic and stochastic-insensitive (SI) design are described, and tradeoff design results are presented for the example of a Shuttle lateral stability augmentation system. This example is used because careful studies have been made of the uncertainty in Shuttle aerodynamics. Finally, since accurate statistics of uncertain parameters are usually not available, the effects of crude statistical models on SI designs are examined.
Computer method for design of acoustic liners for turbofan engines
NASA Technical Reports Server (NTRS)
Minner, G. L.; Rice, E. J.
1976-01-01
A design package is presented for the specification of acoustic liners for turbofans. An estimate of the noise generation was made based on modifications of existing noise correlations, for which the inputs are basic fan aerodynamic design variables; the method does not predict multiple pure tones. A target attenuation spectrum was calculated as the difference between the estimated generation spectrum and a flat annoyance-weighted goal spectrum. The target spectrum was combined with knowledge of acoustic liner performance as a function of the liner design variables to specify the acoustic design. The liner design method is at present limited to annular duct configurations. The detailed structure of the liner was specified by combining the required impedance (the result of the previous step) with a mathematical model relating impedance to the detailed structure. The design procedure was developed for a liner constructed of perforated sheet placed over honeycomb backing cavities. A sample calculation is carried through to demonstrate the design procedure, and experimental results presented show good agreement with the calculated results of the method.
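The target-spectrum step above can be sketched as a band-by-band subtraction; the band levels and the 90 dB goal in the usage line are made-up numbers for illustration, not values from the report.

```python
def target_attenuation(generated_db, goal_db):
    """Per-band target attenuation: estimated generated noise level
    minus the flat annoyance-weighted goal, floored at zero (a liner
    cannot provide negative attenuation)."""
    return [max(g - goal_db, 0.0) for g in generated_db]

# Hypothetical 1/3-octave band levels (dB) against a 90 dB goal.
targets = target_attenuation([95.0, 100.0, 85.0], 90.0)
```

Each band's target then drives the impedance requirement for that band in the subsequent liner-structure step.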
A new method of VLSI conform design for MOS cells
NASA Astrophysics Data System (ADS)
Schmidt, K. H.; Wach, W.; Mueller-Glaser, K. D.
An automated method for the design of specialized SSI/LSI-level MOS cells suitable for incorporation in VLSI chips is described. The method uses the symbolic-layout features of the CABBAGE computer program (Hsueh, 1979; De Man et al., 1982), but restricted by a fixed grid system to facilitate compaction procedures. The techniques used are shown to significantly speed the processes of electrical design, layout, design verification, and description for subsequent CAD/CAM application. In the example presented, a 211-transistor, parallel-load, synchronous 4-bit up/down binary counter cell was designed in 9 days, as compared to 30 days for a manually-optimized-layout version and 3 days for a larger, less efficient cell designed by a programmable logic array; the cell areas were 0.36, 0.21, and 0.79 sq mm, respectively. The primary advantage of the method is seen in the extreme ease with which the cell design can be adapted to new parameters or design rules imposed by improvements in technology.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-12
... March 6, 2009. The monitors are commercially available from the applicant, Thermo Fisher Scientific, Air... AGENCY Ambient Air Monitoring Reference and Equivalent Methods: Designation of Five New Equivalent... of the designation of five new equivalent methods for monitoring ambient air quality. SUMMARY:...
Method for Enzyme Design with Genetically Encoded Unnatural Amino Acids.
Hu, C; Wang, J
2016-01-01
We describe methodologies for the design of artificial enzymes with genetically encoded unnatural amino acids, which offer great promise for constructing enzymes with novel activities. In our studies, the design of an artificial enzyme is divided into two steps. First, the unnatural amino acid and the protein scaffold are considered separately: the scaffold is designed by traditional protein design methods, while the unnatural amino acids are inspired by natural structures and organic chemistry and synthesized by either organic chemistry methods or enzymatic conversion. Drawing on the growing number of published unnatural amino acids with various functions, we describe an unnatural amino acid toolkit containing metal chelators, redox mediators, and click chemistry reagents. This toolkit lets a researcher select appropriate unnatural amino acids for a study rather than design and synthesize them from scratch. In the second step, the model enzyme is optimized by computational methods and directed evolution. Lastly, we describe a general method for evolving aminoacyl-tRNA synthetases and incorporating unnatural amino acids into a protein. PMID:27586330
Developing Conceptual Hypersonic Airbreathing Engines Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Ferlemann, Shelly M.; Robinson, Jeffrey S.; Martin, John G.; Leonard, Charles P.; Taylor, Lawrence W.; Kamhawi, Hilmi
2000-01-01
Designing a hypersonic vehicle is a complicated process due to the multi-disciplinary synergy that is required. The greatest challenge involves propulsion-airframe integration. In the past, a two-dimensional flowpath was generated based on the engine performance required for a proposed mission. A three-dimensional CAD geometry was produced from the two-dimensional flowpath for aerodynamic analysis, structural design, and packaging. The aerodynamics, engine performance, and mass properties are inputs to the vehicle performance tool to determine if the mission goals were met. If the mission goals were not met, then a flowpath and vehicle redesign would begin. This design process might have to be performed several times to produce a "closed" vehicle. This paper describes an attempt to design a hypersonic cruise vehicle propulsion flowpath using a Design of Experiments method to reduce the resources necessary to produce a conceptual design with fewer iterations of the design cycle. These methods also allow for more flexible mission analysis and incorporation of additional design constraints at any point. A design system was developed using an object-based software package that would quickly generate each flowpath in the study given the values of the geometric independent variables. These flowpath geometries were put into a hypersonic propulsion code and the engine performance was generated. The propulsion results were loaded into statistical software to produce regression equations that were combined with an aerodynamic database to optimize the flowpath at the vehicle performance level. For this example, the design process was executed twice. The first pass was a cursory look at the independent variables selected to determine which variables are the most important and to test all of the inputs to the optimization process. The second cycle is a more in-depth study with more cases and higher-order equations representing the design space.
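The regression step (fitting response surface equations to the DOE results) can be sketched as an ordinary least-squares fit of a quadratic model in a single variable; a real study would use several geometric variables plus cross terms, and this tiny normal-equations solver is illustrative only.

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = a + b*x + c*x**2 via the normal
    equations, solved by Gaussian elimination with pivoting."""
    rows = [[1.0, x, x * x] for x in xs]
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(3)]
           for i in range(3)]
    aty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    m = [ata[i] + [aty[i]] for i in range(3)]  # augmented matrix
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):  # back substitution
        coef[i] = (m[i][3] - sum(m[i][j] * coef[j]
                                 for j in range(i + 1, 3))) / m[i][i]
    return coef  # [a, b, c]
```

Once such regression polynomials exist for engine performance and aerodynamics, the vehicle-level optimization evaluates the cheap polynomials instead of rerunning the propulsion code for every candidate flowpath.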
A decentralized linear quadratic control design method for flexible structures
NASA Technical Reports Server (NTRS)
Su, Tzu-Jeng; Craig, Roy R., Jr.
1990-01-01
A decentralized suboptimal linear quadratic control design procedure which combines substructural synthesis, model reduction, decentralized control design, subcontroller synthesis, and controller reduction is proposed for the design of reduced-order controllers for flexible structures. The procedure starts with a definition of the continuum structure to be controlled. An evaluation model of finite dimension is obtained by the finite element method. Then, the finite element model is decomposed into several substructures by using a natural decomposition called substructuring decomposition. Each substructure, at this point, still has too large a dimension and must be reduced to a size that is Riccati-solvable. Model reduction of each substructure can be performed by using any existing model reduction method, e.g., modal truncation, balanced reduction, Krylov model reduction, or mixed-mode method. Then, based on the reduced substructure model, a subcontroller is designed by an LQ optimal control method for each substructure independently. After all subcontrollers are designed, a controller synthesis method called substructural controller synthesis is employed to synthesize all subcontrollers into a global controller. The assembling scheme used is the same as that employed for the structure matrices. Finally, a controller reduction scheme, called the equivalent impulse response energy controller (EIREC) reduction algorithm, is used to reduce the global controller to a reasonable size for implementation. The EIREC reduced controller preserves the impulse response energy of the full-order controller and has the property of matching low-frequency moments and low-frequency power moments. An advantage of the substructural controller synthesis method is that it relieves the computational burden associated with dimensionality. Besides that, the SCS design scheme is also a highly adaptable controller synthesis method for structures with varying configuration, or varying mass
Rotordynamics and Design Methods of an Oil-Free Turbocharger
NASA Technical Reports Server (NTRS)
Howard, Samuel A.
1999-01-01
The feasibility of supporting a turbocharger rotor on air foil bearings is investigated based upon predicted rotordynamic stability, load accommodations, and stress considerations. It is demonstrated that foil bearings offer a plausible replacement for oil-lubricated bearings in diesel truck turbochargers. Also, two different rotor configurations are analyzed and the design is chosen which best optimizes the desired performance characteristics. The method of designing machinery for foil bearing use and the assumptions made are discussed.
Mixed methods research design for pragmatic psychoanalytic studies.
Tillman, Jane G; Clemence, A Jill; Stevens, Jennifer L
2011-10-01
Calls for more rigorous psychoanalytic studies have increased over the past decade. The field has been divided by those who assert that psychoanalysis is properly a hermeneutic endeavor and those who see it as a science. A comparable debate is found in research methodology, where qualitative and quantitative methods have often been seen as occupying orthogonal positions. Recently, Mixed Methods Research (MMR) has emerged as a viable "third community" of research, pursuing a pragmatic approach to research endeavors through integrating qualitative and quantitative procedures in a single study design. Mixed Methods Research designs and the terminology associated with this emerging approach are explained, after which the methodology is explored as a potential integrative approach to a psychoanalytic human science. Both qualitative and quantitative research methods are reviewed, as well as how they may be used in Mixed Methods Research to study complex human phenomena.
Scenario building as an ergonomics method in consumer product design.
Suri, J F; Marsh, M
2000-04-01
The role of human factors in design appears to have broadened from data analysis and interpretation into application of discovery and "user experience" design. The human factors practitioner is continually in search of ways to enhance and to better communicate their contributions, as well as to raise the prominence of the user at all stages of the design process. In work with design teams on the development of many consumer products, scenario building has proved to be a valuable addition to the repertoire of more traditional human factors methods. It is a powerful exploration, prototyping and communication tool, and is particularly useful early on in the product design process. This paper describes some advantages and potential pitfalls in using scenarios, and provides examples of how and where they can be usefully applied.
Design of large Francis turbine using optimal methods
NASA Astrophysics Data System (ADS)
Flores, E.; Bornard, L.; Tomas, L.; Liu, J.; Couston, M.
2012-11-01
Among a large number of Francis turbine references all over the world, covering the whole market range of heads, Alstom has been especially involved in the development and equipment of the largest power plants in the world: Three Gorges (China, 32×767 MW, 61 to 113 m), Itaipu (Brazil, 20×750 MW, 98.7 to 127 m) and Xiangjiaba (China, 8×812 MW, 82.5 to 113.6 m, under erection). Many new projects are under study to equip new power plants with Francis turbines in order to answer an increasing demand for renewable energy. In this context, Alstom Hydro is carrying out many developments to answer those needs, especially for jumbo units such as the planned 1 GW units in China. The turbine design for such units requires specific care, using the state of the art in computational methods and the latest technologies in model testing, as well as maximum feedback from jumbo plants already in operation. We present in this paper how a large Francis turbine can be designed using specific design methods, including global and local optimization methods. The spiral case, the tandem cascade profiles, the runner and the draft tube are designed with optimization loops involving a blade design tool, automatic meshing software and a Navier-Stokes solver, piloted by a genetic algorithm. These automated optimization methods, presented in different papers over the last decade, are nowadays widely used thanks to the growing computation capacity of HPC clusters: the intensive use of such optimization methods at the turbine design stage makes it possible to reach very high levels of performance, while the hydraulic flow characteristics are carefully studied over the whole water passage to avoid any unexpected hydraulic phenomena.
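The genetic-algorithm-driven optimization loop can be shown in miniature; the one-variable "efficiency" function below is a stand-in for the full blade-design/meshing/Navier-Stokes evaluation chain, and all parameter values are illustrative.

```python
import random

def genetic_optimize(fitness, bounds, pop_size=20, generations=60, seed=1):
    """Tiny real-coded GA (maximization): tournament selection,
    arithmetic crossover, Gaussian mutation, best-so-far elitism."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 3), key=fitness)  # tournament
            p2 = max(rng.sample(pop, 3), key=fitness)
            w = rng.random()
            child = w * p1 + (1.0 - w) * p2            # crossover
            child += rng.gauss(0.0, 0.05 * (hi - lo))  # mutation
            new_pop.append(min(max(child, lo), hi))    # clip to bounds
        pop = new_pop
        gen_best = max(pop, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return best

# Stand-in "hydraulic efficiency" objective peaked at x = 0.7.
best_x = genetic_optimize(lambda x: -(x - 0.7) ** 2, (0.0, 1.0))
```

In the industrial loop each fitness evaluation is a CFD run over a parameterized geometry, which is why HPC capacity is the enabling factor.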
Computational methods of robust controller design for aerodynamic flutter suppression
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1981-01-01
The development of Riccati iteration, a tool for the design and analysis of linear control systems is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth order random examples. A literature review of robust controller design methods follows which includes a number of methods for reducing the trajectory and performance index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
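The kind of Riccati solution iterated on above can be illustrated for a scalar system, where the algebraic Riccati equation 2aP − P²b²/r + q = 0 is solved by Kleinman (Newton) iteration: each step solves the Lyapunov equation for the current stabilizing gain, then updates the gain. This is a generic sketch, not the report's algorithm.

```python
def scalar_lqr(a, b, q, r, tol=1e-12, max_iter=100):
    """Kleinman iteration for the scalar CARE
    2*a*P - P**2 * b**2 / r + q = 0. Returns (P, gain K = b*P/r)."""
    # Initial stabilizing gain: closed-loop pole a - b*k must be negative.
    k = (a + 1.0) / b if a >= 0 else 0.0
    p = 0.0
    for _ in range(max_iter):
        # Lyapunov step: 2*(a - b*k)*P + q + r*k**2 = 0
        p_new = (q + r * k * k) / (2.0 * (b * k - a))
        if abs(p_new - p) < tol:
            p = p_new
            break
        p = p_new
        k = b * p / r  # gain update
    return p, b * p / r
```

For a = b = q = r = 1 the exact solution is P = 1 + √2, and the iteration converges to it in a handful of steps; the matrix case replaces each division with a Lyapunov solve.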
Improved method for transonic airfoil design-by-optimization
NASA Technical Reports Server (NTRS)
Kennelly, R. A., Jr.
1983-01-01
An improved method for use of optimization techniques in transonic airfoil design is demonstrated. FLO6QNM incorporates a modified quasi-Newton optimization package, and is shown to be more reliable and efficient than the method developed previously at NASA-Ames, which used the COPES/CONMIN optimization program. The design codes are compared on a series of test cases with known solutions, and the effects of problem scaling, proximity of initial point to solution, and objective function precision are studied. In contrast to the older method, well-converged solutions are shown to be attainable in the context of engineering design using computational fluid dynamics tools, a new result. The improvements are due to better performance by the optimization routine and to the use of problem-adaptive finite difference step sizes for gradient evaluation.
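The problem-adaptive finite-difference step-size idea can be sketched with a central-difference gradient whose step scales with each variable's magnitude; the cube-root-of-machine-epsilon rule below is the standard balance between truncation and round-off error for central differences, not necessarily FLO6QNM's exact rule.

```python
EPS = 2.0 ** -52  # double-precision machine epsilon

def central_gradient(f, x):
    """Central-difference gradient with per-component step
    h_i ~ EPS**(1/3) * max(|x_i|, 1)."""
    g = []
    for i in range(len(x)):
        h = EPS ** (1.0 / 3.0) * max(abs(x[i]), 1.0)
        xp = list(x); xp[i] += h
        xm = list(x); xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g
```

A step that is too small amplifies the noise of an iteratively converged flow solution, while one that is too large biases the gradient; tying h to the variable scale (and to objective-function precision) is what made the gradients reliable enough for quasi-Newton optimization.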
An uncertain multidisciplinary design optimization method using interval convex models
NASA Astrophysics Data System (ADS)
Li, Fangyi; Luo, Zhen; Sun, Guangyong; Zhang, Nong
2013-06-01
This article proposes an uncertain multi-objective multidisciplinary design optimization methodology, which employs the interval model to represent the uncertainties of uncertain-but-bounded parameters. The interval number programming method is applied to transform each uncertain objective function into two deterministic objective functions, and a satisfaction degree of intervals is used to convert both the uncertain inequality and equality constraints to deterministic inequality constraints. In doing so, an unconstrained deterministic optimization problem will be constructed in association with the penalty function method. The design will be finally formulated as a nested three-loop optimization, a class of highly challenging problems in the area of engineering design optimization. An advanced hierarchical optimization scheme is developed to solve the proposed optimization problem based on the multidisciplinary feasible strategy, which is a well-studied method able to reduce the dimensions of multidisciplinary design optimization problems by using the design variables as independent optimization variables. In the hierarchical optimization system, the non-dominated sorting genetic algorithm II, sequential quadratic programming method and Gauss-Seidel iterative approach are applied to the outer, middle and inner loops of the optimization problem, respectively. Typical numerical examples are used to demonstrate the effectiveness of the proposed methodology.
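The interval-number step, splitting one uncertain objective into two deterministic objectives (interval midpoint and radius), can be sketched directly; the dense-sampling bound estimate below is a simplification of formal interval arithmetic, and a single uncertain parameter stands in for the bounded parameter vector.

```python
def interval_objectives(f, x, uncertain_bounds, samples=101):
    """Approximate the interval [f_min, f_max] of f(x, p) as the
    uncertain-but-bounded parameter p sweeps its bounds, and return
    the two deterministic objectives of interval programming:
    the interval midpoint and the radius (half-width)."""
    lo, hi = uncertain_bounds
    vals = [f(x, lo + (hi - lo) * k / (samples - 1))
            for k in range(samples)]
    f_min, f_max = min(vals), max(vals)
    return 0.5 * (f_min + f_max), 0.5 * (f_max - f_min)
```

Minimizing the midpoint targets nominal performance while minimizing the radius targets robustness, which is what turns each uncertain objective into the pair of deterministic objectives described above.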
Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Phase 1
NASA Technical Reports Server (NTRS)
Kodiyalam, Srinivas
1998-01-01
The NASA Langley Multidisciplinary Design Optimization (MDO) method evaluation study seeks to arrive at a set of guidelines for using promising MDO methods by accumulating and analyzing computational data for such methods. The data are collected by conducting a series of reproducible experiments. This report documents all computational experiments conducted in Phase I of the study. This report is a companion to the paper titled Initial Results of an MDO Method Evaluation Study by N. M. Alexandrov and S. Kodiyalam (AIAA-98-4884).
Exploration of Advanced Probabilistic and Stochastic Design Methods
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.
2003-01-01
The primary objective of the three-year research effort was to explore advanced, non-deterministic aerospace system design methods that may have relevance to designers and analysts. The research pursued emerging areas in design methodology and leveraged current fundamental research in the areas of design decision-making, probabilistic modeling, and optimization. The specific focus of the three-year investigation was oriented toward methods to identify and analyze emerging aircraft technologies in a consistent and complete manner, and to explore means to make optimal decisions based on this knowledge in a probabilistic environment. The research efforts were classified into two main areas. First, Task A of the grant had the objective of conducting research into the relative merits of possible approaches that account for both multiple criteria and uncertainty in design decision-making. In particular, in the final year of research, the focus was on comparing and contrasting three methods researched: the Joint Probabilistic Decision-Making (JPDM) technique, Physical Programming, and Dempster-Shafer (D-S) theory. The next element of the research, as contained in Task B, was focused upon exploration of the Technology Identification, Evaluation, and Selection (TIES) methodology developed at ASDL, especially with regard to identification of research needs in the baseline method through implementation exercises. The end result of Task B was the documentation of the evolution of the method with time and a technology transfer to the sponsor regarding the method, such that an initial capability for execution could be obtained by the sponsor. Specifically, the result of year 3 efforts was the creation of a detailed tutorial for implementing the TIES method. Within the tutorial package, templates and detailed examples were created for learning and understanding the details of each step. For both research tasks, sample files and
A PDE Sensitivity Equation Method for Optimal Aerodynamic Design
NASA Technical Reports Server (NTRS)
Borggaard, Jeff; Burns, John
1996-01-01
The use of gradient based optimization algorithms in inverse design is well established as a practical approach to aerodynamic design. A typical procedure uses a simulation scheme to evaluate the objective function (from the approximate states) and its gradient, then passes this information to an optimization algorithm. Once the simulation scheme (CFD flow solver) has been selected and used to provide approximate function evaluations, there are several possible approaches to the problem of computing gradients. One popular method is to differentiate the simulation scheme and compute design sensitivities that are then used to obtain gradients. Although this black-box approach has many advantages in shape optimization problems, one must compute mesh sensitivities in order to compute the design sensitivity. In this paper, we present an alternative approach using the PDE sensitivity equation to develop algorithms for computing gradients. This approach has the advantage that mesh sensitivities need not be computed. Moreover, when it is possible to use the CFD scheme for both the forward problem and the sensitivity equation, then there are computational advantages. An apparent disadvantage of this approach is that it does not always produce consistent derivatives. However, for a proper combination of discretization schemes, one can show asymptotic consistency under mesh refinement, which is often sufficient to guarantee convergence of the optimal design algorithm. In particular, we show that when asymptotically consistent schemes are combined with a trust-region optimization algorithm, the resulting optimal design method converges. We denote this approach as the sensitivity equation method. The sensitivity equation method is presented, convergence results are given and the approach is illustrated on two optimal design problems involving shocks.
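The sensitivity-equation idea, differentiating the governing equation with respect to the design parameter and integrating the result alongside the state, can be shown on a scalar ODE instead of a PDE: for x' = −q·x, the sensitivity s = ∂x/∂q satisfies s' = −q·s − x with s(0) = 0. This is a toy analogue of the method, not the paper's CFD setting.

```python
def state_and_sensitivity(q, x0=1.0, t_end=1.0, steps=20000):
    """Forward-Euler integration of x' = -q*x together with its
    sensitivity equation s' = -q*s - x, where s = dx/dq."""
    dt = t_end / steps
    x, s = x0, 0.0  # sensitivity starts at zero: x(0) is independent of q
    for _ in range(steps):
        x, s = x + dt * (-q * x), s + dt * (-q * s - x)
    return x, s
```

The exact solution is x(t) = e^(−qt) and s(t) = −t·e^(−qt), so the numerical sensitivity can be checked directly; note that no re-meshing or mesh sensitivity enters, which is the advantage the paper highlights over differentiating the discretized solver.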
Taguchi method of experimental design in materials education
NASA Technical Reports Server (NTRS)
Weiser, Martin W.
1993-01-01
Some of the advantages and disadvantages of the Taguchi Method of experimental design as applied to Materials Science will be discussed. This is a fractional factorial method that employs the minimum number of experimental trials for the information obtained. The analysis is also very simple to use and teach, which is quite advantageous in the classroom. In addition, the Taguchi loss function can be easily incorporated to emphasize that improvements in reproducibility are often at least as important as optimization of the response. The disadvantages of the Taguchi Method include the fact that factor interactions are normally not accounted for, there are zero degrees of freedom if all of the possible factors are used, and randomization is normally not used to prevent environmental biasing. In spite of these disadvantages it is felt that the Taguchi Method is extremely useful for both teaching experimental design and as a research tool, as will be shown with a number of brief examples.
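The fractional-factorial bookkeeping behind the Taguchi Method can be shown with the standard L4 orthogonal array: three two-level factors studied in only four runs, with main effects computed by simple averaging (which is exactly why interactions are not accounted for). The response values in the test are made-up classroom numbers.

```python
# L4 orthogonal array: 3 two-level factors (coded 0/1) in 4 runs.
L4 = [(0, 0, 0),
      (0, 1, 1),
      (1, 0, 1),
      (1, 1, 0)]

def main_effects(responses):
    """Main effect of each factor: mean response at level 1 minus
    mean response at level 0, assuming no factor interactions."""
    effects = []
    for j in range(3):
        hi = [y for row, y in zip(L4, responses) if row[j] == 1]
        lo = [y for row, y in zip(L4, responses) if row[j] == 0]
        effects.append(sum(hi) / len(hi) - sum(lo) / len(lo))
    return effects
```

A full factorial would need 2³ = 8 runs; the L4 array halves that at the cost of confounding interactions with main effects, which is the tradeoff the abstract describes.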
Function combined method for design innovation of children's bike
NASA Astrophysics Data System (ADS)
Wu, Xiaoli; Qiu, Tingting; Chen, Huijuan
2013-03-01
As children mature, bike products for children develop with them, and market conditions are frequently updated. Certain problems occur in use, such as overlapping product cycles, duplicated functions, and short life cycles, which run counter to energy conservation, environmental protection, and the intensive design concept. In this paper, a rational multi-function design method based on functional superposition, transformation, and technical implementation is proposed. An organic combination of a frog-style scooter and a children's tricycle is developed using this multi-function method. From an ergonomic perspective, the paper elaborates on the body dimensions of children aged 5 to 12 and extracts the relevant data for a multi-function children's bike that can be used for both gliding and riding: by inverting the body, the handles and the pedals can be interchanged. Finally, the paper provides a detailed analysis of the components and structural design, body material, and processing technology of the bike. This study of industrial product innovation design provides an effective design method that solves the identified bicycle problems, extends product function, improves the product's market situation, and enhances energy saving while implementing intensive product development.
System Synthesis in Preliminary Aircraft Design using Statistical Methods
NASA Technical Reports Server (NTRS)
DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.
1996-01-01
This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to more efficiently search the design space for optimum configurations. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, which are more sophisticated than generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. Fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses to the conceptual aircraft synthesis and provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).
A Simple Method for High-Lift Propeller Conceptual Design
NASA Technical Reports Server (NTRS)
Patterson, Michael; Borer, Nick; German, Brian
2016-01-01
In this paper, we present a simple method for designing propellers that are placed upstream of the leading edge of a wing in order to augment lift. Because the primary purpose of these "high-lift propellers" is to increase lift rather than produce thrust, these props are best viewed as a form of high-lift device; consequently, they should be designed differently than traditional propellers. We present a theory that describes how these props can be designed to provide a relatively uniform axial velocity increase, which is hypothesized to be advantageous for lift augmentation based on a literature survey. Computational modeling indicates that such propellers can generate the same average induced axial velocity while consuming less power and producing less thrust than conventional propeller designs. For an example problem based on specifications for NASA's Scalable Convergent Electric Propulsion Technology and Operations Research (SCEPTOR) flight demonstrator, a propeller designed with the new method requires approximately 15% less power and produces approximately 11% less thrust than one designed for minimum induced loss. Higher-order modeling and/or wind tunnel testing are needed to verify the predicted performance.
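The uniform axial-velocity idea above can be related to thrust and ideal power with classical actuator-disk momentum theory. This is a hedged sketch of that textbook relation, not the paper's design method, and the density, speed, radius, and induced velocity below are illustrative values, not SCEPTOR data:

```python
import math

def momentum_theory(rho, V, R, w):
    """Classical actuator-disk estimate.
    rho: air density [kg/m^3], V: freestream speed [m/s],
    R: disk radius [m], w: induced axial velocity at the disk [m/s].
    Returns (thrust [N], ideal induced power [W])."""
    A = math.pi * R**2
    T = 2.0 * rho * A * w * (V + w)   # thrust from the momentum flux change
    P = T * (V + w)                   # ideal power: thrust times disk velocity
    return T, P

# Illustrative case: sea-level density, 30 m/s freestream, 0.3 m radius,
# a target 5 m/s uniform induced velocity at the disk.
T, P = momentum_theory(rho=1.225, V=30.0, R=0.3, w=5.0)
```

The trade described in the abstract (same average induced velocity at lower power and thrust) concerns how the induced-velocity distribution is shaped radially, which this uniform-disk estimate deliberately ignores.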
An interdisciplinary heuristic evaluation method for universal building design.
Afacan, Yasemin; Erbug, Cigdem
2009-07-01
This study highlights how heuristic evaluation as a usability evaluation method can feed into current building design practice to conform to universal design principles. It provides a definition of universal usability that is applicable to an architectural design context. It takes the seven universal design principles as a set of heuristics and applies an iterative sequence of heuristic evaluation in a shopping mall, aiming to achieve a cost-effective evaluation process. The evaluation was composed of three consecutive sessions. First, five evaluators from different professions were interviewed regarding the construction drawings in terms of universal design principles. Then, each evaluator was asked to perform the predefined task scenarios. In subsequent interviews, the evaluators were asked to re-analyze the construction drawings. The results showed that heuristic evaluation could successfully integrate universal usability into current building design practice in two ways: (i) it promoted an iterative evaluation process combined with multi-sessions rather than relying on one evaluator and on one evaluation session to find the maximum number of usability problems, and (ii) it highlighted the necessity of an interdisciplinary ad hoc committee regarding the heuristic abilities of each profession. A multi-session and interdisciplinary heuristic evaluation method can save both the project budget and the required time, while ensuring a reduced error rate for the universal usage of the built environments.
Comparison of optimal design methods in inverse problems
NASA Astrophysics Data System (ADS)
Banks, H. T.; Holm, K.; Kappel, F.
2011-07-01
Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009) and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77 De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68 Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
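A minimal sketch of how D- and E-optimality criteria compare two sampling schedules, built from the Fisher information matrix. This is a generic illustration, not the paper's Prohorov-metric framework; the model y(t) = a·exp(b·t), the nominal parameters, the unit-noise assumption, and both candidate schedules are assumptions for illustration:

```python
import numpy as np

a, b = 2.0, -0.5  # nominal parameter values (illustrative)

def fisher_information(times):
    """FIM for y(t) = a*exp(b*t) with unit observation noise:
    sum over sampling times of sensitivity outer products."""
    S = np.array([[np.exp(b * t), a * t * np.exp(b * t)] for t in times])
    return S.T @ S

def criteria(times):
    """Return (det(FIM), min eigenvalue of FIM): the D- and E-criteria."""
    F = fisher_information(times)
    return np.linalg.det(F), np.linalg.eigvalsh(F).min()

d1, e1 = criteria([0.5, 1.0, 1.5, 2.0])   # early, clustered samples
d2, e2 = criteria([0.5, 2.0, 4.0, 6.0])   # spread-out samples
# A D-optimal comparison prefers the schedule with the larger determinant;
# an E-optimal comparison, the one with the larger smallest eigenvalue.
```

The SE-optimal design introduced in the paper instead targets the standard errors of the parameter estimates directly; the determinant and minimum-eigenvalue criteria above are the traditional baselines it is compared against.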
New Methods and Transducer Designs for Ultrasonic Diagnostics and Therapy
NASA Astrophysics Data System (ADS)
Rybyanets, A. N.; Naumenko, A. A.; Sapozhnikov, O. A.; Khokhlova, V. A.
Recent advances in the field of physical acoustics, imaging technologies, piezoelectric materials, and ultrasonic transducer design have led to emerging of novel methods and apparatus for ultrasonic diagnostics, therapy and body aesthetics. The paper presents the results on development and experimental study of different high intensity focused ultrasound (HIFU) transducers. Technological peculiarities of the HIFU transducer design as well as theoretical and numerical models of such transducers and the corresponding HIFU fields are discussed. Several HIFU transducers of different design have been fabricated using different advanced piezoelectric materials. Acoustic field measurements for those transducers have been performed using a calibrated fiber optic hydrophone and an ultrasonic measurement system (UMS). The results of ex vivo experiments with different tissues as well as in vivo experiments with blood vessels are presented that prove the efficacy, safety and selectivity of the developed HIFU transducers and methods.
New displacement-based methods for optimal truss topology design
NASA Technical Reports Server (NTRS)
Bendsoe, Martin P.; Ben-Tal, Aharon; Haftka, Raphael T.
1991-01-01
Two alternate methods for maximum stiffness truss topology design are presented. The ground structure approach is used, and the problem is formulated in terms of displacements and bar areas. This large, nonconvex optimization problem can be solved by a simultaneous analysis and design approach. Alternatively, an equivalent, unconstrained, and convex problem in the displacements only can be formulated, and this problem can be solved by a nonsmooth, steepest descent algorithm. In both methods, the explicit solving of the equilibrium equations and the assembly of the global stiffness matrix are circumvented. A large number of examples have been studied, showing the attractive features of topology design as well as exposing interesting features of optimal topologies.
Obtaining Valid Response Rates: Considerations beyond the Tailored Design Method.
ERIC Educational Resources Information Center
Huang, Judy Y.; Hubbard, Susan M.; Mulvey, Kevin P.
2003-01-01
Reports on the use of the tailored design method (TDM) to achieve high survey response in two separate studies of the dissemination of Treatment Improvement Protocols (TIPs). Findings from these two studies identify six factors that may have influenced nonresponse, and show that use of TDM does not, in itself, guarantee a high response rate. (SLD)
14 CFR 161.9 - Designation of noise description methods.
Code of Federal Regulations, 2010 CFR
2010-01-01
... and methods prescribed under appendix A of 14 CFR part 150; and (b) use of computer models to create noise contours must be in accordance with the criteria prescribed under appendix A of 14 CFR part 150.
14 CFR 161.9 - Designation of noise description methods.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and methods prescribed under appendix A of 14 CFR part 150; and (b) use of computer models to create noise contours must be in accordance with the criteria prescribed under appendix A of 14 CFR part 150.
Impact design methods for ceramic components in gas turbine engines
NASA Technical Reports Server (NTRS)
Song, J.; Cuccio, J.; Kington, H.
1991-01-01
Methods currently under development to design ceramic turbine components with improved impact resistance are presented. Two different modes of impact damage are identified and characterized, i.e., structural damage and local damage. The entire computation is incorporated into the EPIC computer code. Model capability is demonstrated by simulating instrumented plate impact and particle impact tests.
Designs and Methods in School Improvement Research: A Systematic Review
ERIC Educational Resources Information Center
Feldhoff, Tobias; Radisch, Falk; Bischof, Linda Marie
2016-01-01
Purpose: The purpose of this paper is to focus on challenges faced by longitudinal quantitative analyses of school improvement processes and offers a systematic literature review of current papers that use longitudinal analyses. In this context, the authors assessed designs and methods that are used to analyze the relation between school…
Polypharmacology: in silico methods of ligand design and development.
McKie, Samuel A
2016-04-01
How to design a ligand to bind multiple targets, rather than a single target, is the focus of this review. Rational polypharmacology draws on knowledge that is both broad ranging and hierarchical. Computer-aided multitarget ligand design methods are described according to their nested knowledge level. Ligand-only and then receptor-ligand strategies are described first, followed by the metabolic network viewpoint. Subsequently, strategies that view infectious diseases as multigenomic targets are discussed, and finally the disease-level interpretation of medicinal therapy is considered. As yet there is no consensus on how best to proceed in designing a multitarget ligand. The current methodologies are brought together in an attempt to give a practical overview of how polypharmacology design might be best initiated. PMID:27105127
Guidance for using mixed methods design in nursing practice research.
Chiang-Hanisko, Lenny; Newman, David; Dyess, Susan; Piyakong, Duangporn; Liehr, Patricia
2016-08-01
The mixed methods approach purposefully combines both quantitative and qualitative techniques, enabling a multi-faceted understanding of nursing phenomena. The purpose of this article is to introduce three mixed methods designs (parallel; sequential; conversion) and highlight interpretive processes that occur with the synthesis of qualitative and quantitative findings. Real-world examples of research studies conducted by the authors demonstrate the processes leading to the merger of data. The examples include: research questions; data collection procedures; and analysis with a focus on synthesizing findings. Based on experience with mixed methods studies, the authors introduce two synthesis patterns (complementary; contrasting), considering application for practice and implications for research. PMID:27397810
Computational methods for aerodynamic design using numerical optimization
NASA Technical Reports Server (NTRS)
Peeters, M. F.
1983-01-01
Five methods to increase the computational efficiency of aerodynamic design using numerical optimization, by reducing the computer time required to perform gradient calculations, are examined. The most promising method consists of drastically reducing the size of the computational domain on which aerodynamic calculations are made during gradient calculations. Since a gradient calculation requires the solution of the flow about an airfoil whose geometry was slightly perturbed from a base airfoil, the flow about the base airfoil is used to determine boundary conditions on the reduced computational domain. This method worked well in subcritical flow.
ERIC Educational Resources Information Center
Collins, Kathleen M. T.; Onwuegbuzie, Anthony J.; Jiao, Qun G.
2007-01-01
A sequential design utilizing identical samples was used to classify mixed methods studies via a two-dimensional model, wherein sampling designs were grouped according to the time orientation of each study's components and the relationship of the qualitative and quantitative samples. A quantitative analysis of 121 studies representing nine fields…
Mixed methods research: a design for emergency care research?
Cooper, Simon; Porter, Jo; Endacott, Ruth
2011-08-01
This paper follows previous publications on generic qualitative approaches, qualitative designs and action research in emergency care by this group of authors. Contemporary views on mixed methods approaches are considered, with a particular focus on the design choice and the amalgamation of qualitative and quantitative data, emphasising the timing of data collection for each approach, their relative 'weight' and how they will be mixed. Mixed methods studies in emergency care are reviewed before the variety of methodological approaches and best practice considerations are presented. The use of mixed methods in clinical studies is increasing, aiming to answer questions such as 'how many' and 'why' in the same study; as such, mixed methods are an important and useful approach to many key questions in emergency care.
Tuning Parameters in Heuristics by Using Design of Experiments Methods
NASA Technical Reports Server (NTRS)
Arin, Arif; Rabadi, Ghaith; Unal, Resit
2010-01-01
With the growing complexity of today's large-scale problems, it has become more difficult to find optimal solutions by exact mathematical methods. The need to find near-optimal solutions in an acceptable time frame requires heuristic approaches. In many cases, however, heuristics have several parameters that need to be "tuned" before they can reach good results. The problem then becomes finding the best parameter setting for the heuristic to solve the problems efficiently and in a timely manner. The One-Factor-At-a-Time (OFAT) approach to parameter tuning neglects interactions between parameters. Design of Experiments (DOE) tools can instead be employed to tune the parameters more effectively. In this paper, we seek the best parameter setting for a Genetic Algorithm (GA) to solve the single machine total weighted tardiness problem, in which n jobs must be scheduled on a single machine without preemption and the objective is to minimize the total weighted tardiness. Benchmark instances for the problem are available in the literature. To fine-tune the GA parameters in the most efficient way, we compare multiple DOE models including 2-level (2^k) full factorial design, orthogonal array design, central composite design, D-optimal design, and signal-to-noise (S/N) ratios. In each DOE method, a mathematical model is created using regression analysis and solved to obtain the best parameter setting. After verification runs using the tuned parameter setting, optimal solutions for multiple instances were found efficiently.
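A minimal sketch of the 2^k full-factorial tuning idea described above. The factor names, the placeholder response function `run_heuristic`, and all numeric values are illustrative assumptions standing in for replicated GA runs on benchmark instances:

```python
import itertools
import numpy as np

# Three 2-level factors, coded -1/+1 (e.g. low/high population size,
# crossover rate, mutation rate) -> a 2^3 = 8-run full factorial design.
factors = ["pop_size", "crossover", "mutation"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

def run_heuristic(row):
    # Placeholder response surface standing in for actual GA runs; the
    # response plays the role of total weighted tardiness (to be minimized).
    x1, x2, x3 = row
    return 100 - 5*x1 - 3*x2 + 2*x3 + 1.5*x1*x2

y = np.array([run_heuristic(r) for r in design])

# First-order regression model y ~ b0 + sum(bi * xi), fitted by least squares.
X = np.column_stack([np.ones(len(design)), design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Because tardiness is minimized, set each factor to the level whose sign is
# opposite that of its main-effect coefficient.
best_setting = {f: (-1 if c > 0 else 1) for f, c in zip(factors, coef[1:])}
```

Because the full factorial is orthogonal, the main-effect coefficients are estimated independently even in the presence of the x1·x2 interaction, which is precisely the advantage over OFAT tuning noted in the abstract.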
Current methods of epitope identification for cancer vaccine design.
Cherryholmes, Gregory A; Stanton, Sasha E; Disis, Mary L
2015-12-16
The importance of the immune system in tumor development and progression has been emerging in many cancers. Previous cancer vaccines have not shown long-term clinical benefit, possibly because they were not designed to avoid eliciting regulatory T-cell responses that inhibit the anti-tumor immune response. This review examines different methods of identifying epitopes derived from tumor-associated antigens suitable for immunization, and the steps used to design and validate peptide epitopes to improve the efficacy of anti-tumor peptide-based vaccines. Focusing on in silico prediction algorithms, we survey the advantages and disadvantages of current cancer vaccine prediction tools.
Material Design, Selection, and Manufacturing Methods for System Sustainment
David Sowder, Jim Lula, Curtis Marshall
2010-02-18
This paper describes a material selection and validation process proven successful for manufacturing high-reliability, long-life products. The National Secure Manufacturing Center business unit of the Kansas City Plant (herein called KCP) designs and manufactures complex electrical and mechanical components used in extreme environments. The material manufacturing heritage is founded in the systems design-to-manufacturing practices that support the U.S. Department of Energy's National Nuclear Security Administration (DOE/NNSA). Material engineers at KCP work with the systems designers to recommend materials, develop test methods, analyze test data, define cradle-to-grave needs, and present final selections for fielding. The KCP material engineers typically maintain cost control by utilizing commercial products when possible, but have the resources to develop and produce unique formulations as necessary. This approach is currently being used to mature technologies to manufacture materials with improved characteristics using nano-composite filler materials that will enhance system design and production. For some products the engineers plan and carry out science-based life-cycle material surveillance processes. Recent examples of the approach include refurbished manufacturing of the high-voltage power supplies for cockpit displays in operational aircraft; dry-film lubricant application to improve bearing life for guided-munitions gyroscope gimbals; ceramic substrate design for electrical circuit manufacturing; and tailored polymeric materials for various systems. These examples show evidence of KCP concurrent design-to-manufacturing techniques used to achieve system solutions that satisfy or exceed demanding requirements.
Denoising Sparse Images from GRAPPA using the Nullspace Method (DESIGN)
Weller, Daniel S.; Polimeni, Jonathan R.; Grady, Leo; Wald, Lawrence L.; Adalsteinsson, Elfar; Goyal, Vivek K
2011-01-01
To accelerate magnetic resonance imaging using uniformly undersampled (nonrandom) parallel imaging beyond what is achievable with GRAPPA alone, the Denoising of Sparse Images from GRAPPA using the Nullspace method (DESIGN) is developed. The trade-off between denoising and smoothing the GRAPPA solution is studied for different levels of acceleration. Several brain images reconstructed from uniformly undersampled k-space data using DESIGN are compared against reconstructions using existing methods in terms of difference images (a qualitative measure), PSNR, and noise amplification (g-factors) as measured using the pseudo-multiple replica method. Effects of smoothing, including contrast loss, are studied in synthetic phantom data. In the experiments presented, the contrast loss and spatial resolution are competitive with existing methods. Results for several brain images demonstrate significant improvements over GRAPPA at high acceleration factors in denoising performance with limited blurring or smoothing artifacts. In addition, the measured g-factors suggest that DESIGN mitigates noise amplification better than both GRAPPA and L1 SPIR-iT (the latter limited here by uniform undersampling). PMID:22213069
Docking methods for structure-based library design.
Cavasotto, Claudio N; Phatak, Sharangdhar S
2011-01-01
The drug discovery process mainly relies on experimental high-throughput screening of huge compound libraries in the pursuit of new active compounds. However, spiraling research and development costs and unimpressive success rates have driven the development of more rational, efficient, and cost-effective methods. With the increasing availability of protein structural information, advancement in computational algorithms, and faster computing resources, in silico docking-based methods are increasingly used to design smaller and focused compound libraries in order to reduce screening efforts and costs and at the same time identify active compounds with a better chance of progressing through the optimization stages. This chapter is a primer on the various docking-based methods developed for the purpose of structure-based library design. Our aim is to elucidate some basic terms related to the docking technique and explain the methodology behind several docking-based library design methods. This chapter also aims to guide the novice computational practitioner by laying out the general steps involved for such an exercise. Selected successful case studies conclude this chapter. PMID:20981523
NASA Technical Reports Server (NTRS)
Venkatapathy, Ethiraj; Nystrom, G. A.; Bardina, J.; Lombard, C. K.
1987-01-01
This paper describes the application of the conservative supra-characteristics method (CSCM) to predict the flow around two-dimensional slot-injection-cooled cavities in hypersonic flow. Seven different numerical solutions are presented that model three different experimental designs. The calculations capture outer flow conditions including the effects of nozzle/lip geometry, angle of attack, nozzle inlet conditions, boundary- and shear-layer growth, and turbulence on the surrounding flow. The calculations were performed for analysis prior to wind tunnel testing, for sensitivity studies early in the design process. Qualitative and quantitative understanding of the flows for each of the cavity designs and design recommendations are provided. The present paper demonstrates the ability of numerical schemes, such as the CSCM method, to play a significant role in the design process.
COMPSIZE - PRELIMINARY DESIGN METHOD FOR FIBER REINFORCED COMPOSITE STRUCTURES
NASA Technical Reports Server (NTRS)
Eastlake, C. N.
1994-01-01
The Composite Structure Preliminary Sizing program, COMPSIZE, is an analytical tool which structural designers can use when doing approximate stress analysis to select or verify preliminary sizing choices for composite structural members. It is useful in the beginning stages of design concept definition, when it is helpful to have quick and convenient approximate stress analysis tools available so that a wide variety of structural configurations can be sketched out and checked for feasibility. At this stage of the design process the stress/strain analysis does not need to be particularly accurate because any configurations tentatively defined as feasible will later be analyzed in detail by stress analysis specialists. The emphasis is on fast, user-friendly methods so that rough but technically sound evaluation of a broad variety of conceptual designs can be accomplished. Analysis equations used are, in most cases, widely known basic structural analysis methods. All the equations used in this program assume elastic deformation only. The default material selection is intermediate strength graphite/epoxy laid up in a quasi-isotropic laminate. A general flat laminate analysis subroutine is included for analyzing arbitrary laminates. However, COMPSIZE should be sufficient for most users to presume a quasi-isotropic layup and use the familiar basic structural analysis methods for isotropic materials, after estimating an appropriate elastic modulus. Homogeneous materials can be analyzed as simplified cases. The COMPSIZE program is written in IBM BASICA. The program format is interactive. It was designed on an IBM Personal Computer operating under DOS with a central memory requirement of approximately 128K. It has been implemented on an IBM compatible with GW-BASIC under DOS 3.2. COMPSIZE was developed in 1985.
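The abstract's suggestion, to treat a quasi-isotropic layup as an effective isotropic material with an estimated modulus, can be sketched as below. The 3/8-5/8 weighting is a rough textbook approximation (not COMPSIZE's actual routine), and the lamina moduli are illustrative values for an intermediate-strength graphite/epoxy:

```python
def quasi_isotropic_modulus(E1, E2):
    """Rough approximation of the in-plane modulus of a quasi-isotropic
    laminate from the lamina longitudinal (E1) and transverse (E2) moduli.
    The 3/8 E1 + 5/8 E2 rule is a common quick estimate, adequate only for
    the feasibility-level sizing the abstract describes."""
    return 0.375 * E1 + 0.625 * E2

# Illustrative lamina moduli in GPa for a graphite/epoxy ply.
E_eff = quasi_isotropic_modulus(E1=140.0, E2=10.0)
```

With such an effective modulus in hand, the familiar isotropic structural formulas mentioned in the abstract (beam bending, buckling, and so on) can be applied directly for preliminary sizing.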
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-05
... 53, as amended on August 31, 2011 (76 FR 54326-54341). The new equivalent methods are automated... beta radiation attenuation. The newly designated equivalent methods are identified as follows: EQPM-0912-204, ``Teledyne Model 602 Beta PLUS Particle Measurement System'' and ``SWAM 5a Dual...
Preliminary demonstration of a robust controller design method
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1980-01-01
Alternative computational procedures for obtaining a feedback control law which yields a control signal based on measurable quantities are evaluated. The three methods evaluated are: (1) the standard linear quadratic regulator design model; (2) minimization of the norm of the feedback matrix K via nonlinear programming, subject to the constraint that the closed-loop eigenvalues lie in a specified domain in the complex plane; and (3) maximization of the angles between the closed-loop eigenvectors in combination with minimization of the norm of K, also via constrained nonlinear programming. The third, or robust, design method was chosen to yield a closed-loop system whose eigenvalues are insensitive to small changes in the A and B matrices. The relationship between orthogonality of closed-loop eigenvectors and the sensitivity of closed-loop eigenvalues is described. Computer programs are described.
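The eigenvector-orthogonality/eigenvalue-sensitivity relationship mentioned above can be illustrated with the standard eigenvalue condition number 1/|y*x|, where x and y are the unit right and left eigenvectors. This is a generic numerical sketch, not the paper's design procedure, and both matrices below are illustrative:

```python
import numpy as np

def eigenvalue_condition_numbers(A):
    """For each eigenvalue of A, return its condition number 1/|y*x|:
    near 1 when left/right eigenvectors align (insensitive eigenvalue),
    large when the eigenvectors of A are nearly parallel (sensitive)."""
    evals, V = np.linalg.eig(A)            # right eigenvectors (columns of V)
    Y = np.linalg.inv(V).conj().T          # left eigenvectors (columns of Y)
    conds = []
    for i in range(len(evals)):
        x = V[:, i] / np.linalg.norm(V[:, i])
        y = Y[:, i] / np.linalg.norm(Y[:, i])
        conds.append(1.0 / abs(np.vdot(y, x)))
    return evals, np.array(conds)

# A normal matrix has orthogonal eigenvectors: every condition number is 1.
_, c_normal = eigenvalue_condition_numbers(np.diag([-1.0, -2.0]))
# A highly non-normal matrix has nearly parallel eigenvectors and large
# condition numbers, i.e. eigenvalues sensitive to changes in A.
_, c_skewed = eigenvalue_condition_numbers(np.array([[-1.0, 100.0],
                                                     [0.0, -2.0]]))
```

Maximizing the angles between closed-loop eigenvectors, as in the third method, drives these condition numbers toward 1 and hence makes the closed-loop eigenvalues robust to perturbations in A and B.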
Helicopter flight-control design using an H(2) method
NASA Technical Reports Server (NTRS)
Takahashi, Marc D.
1991-01-01
Rate-command and attitude-command flight-control designs for a UH-60 helicopter in hover are presented and were synthesized using an H(2) method. Using weight functions, this method allows the direct shaping of the singular values of the sensitivity, complementary sensitivity, and control input transfer-function matrices to give acceptable feedback properties. The designs were implemented on the Vertical Motion Simulator, and four low-speed hover tasks were used to evaluate the control system characteristics. The pilot comments from the accel-decel, bob-up, hovering turn, and side-step tasks indicated good decoupling and quick response characteristics. However, an underlying roll PIO tendency was found to exist away from the hover condition, which was caused by a flap regressing mode with insufficient damping.
National Tuberculosis Genotyping and Surveillance Network: Design and Methods
Braden, Christopher R.; Schable, Barbara A.; Onorato, Ida M.
2002-01-01
The National Tuberculosis Genotyping and Surveillance Network was established in 1996 to perform a 5-year, prospective study of the usefulness of genotyping Mycobacterium tuberculosis isolates to tuberculosis control programs. Seven sentinel sites identified all new cases of tuberculosis, collected information on patients and contacts, and obtained patient isolates. Seven genotyping laboratories performed DNA fingerprinting analysis by the international standard IS6110 method. BioImage Whole Band Analyzer software was used to analyze patterns, and distinct patterns were assigned unique designations. Isolates with six or fewer bands on IS6110 patterns were also spoligotyped. Patient data and genotyping designations were entered in a relational database and merged with selected variables from the national surveillance database. In two related databases, we compiled the results of routine contact investigations and the results of investigations of the relationships of patients who had isolates with matching genotypes. We describe the methods used in the study. PMID:12453342
Optical design and active optics methods in astronomy
NASA Astrophysics Data System (ADS)
Lemaitre, Gerard R.
2013-03-01
Optical designs for astronomy involve implementation of active optics and adaptive optics from X-ray to the infrared. Developments and results of active optics methods for telescopes, spectrographs and coronagraph planet finders are presented. The high accuracy and remarkable smoothness of surfaces generated by active optics methods also allow elaborating new optical design types with high aspheric and/or non-axisymmetric surfaces. Depending on the goal and performance requested for a deformable optical surface analytical investigations are carried out with one of the various facets of elasticity theory: small deformation thin plate theory, large deformation thin plate theory, shallow spherical shell theory, weakly conical shell theory. The resulting thickness distribution and associated bending force boundaries can be refined further with finite element analysis.
A Requirements-Driven Optimization Method for Acoustic Treatment Design
NASA Technical Reports Server (NTRS)
Berton, Jeffrey J.
2016-01-01
Acoustic treatment designers have long been able to target specific noise sources inside turbofan engines. Facesheet porosity and cavity depth are key design variables of perforate-over-honeycomb liners that determine levels of noise suppression as well as the frequencies at which suppression occurs. Layers of these structures can be combined to create a robust attenuation spectrum that covers a wide range of frequencies. Looking to the future, rapidly-emerging additive manufacturing technologies are enabling new liners with multiple degrees of freedom, and new adaptive liners with variable impedance are showing promise. More than ever, there is greater flexibility and freedom in liner design. Subject to practical considerations, liner design variables may be manipulated to achieve a target attenuation spectrum. But characteristics of the ideal attenuation spectrum can be difficult to know. Many multidisciplinary system effects govern how engine noise sources contribute to community noise. Given a hardwall fan noise source to be suppressed, and using an analytical certification noise model to compute a community noise measure of merit, the optimal attenuation spectrum can be derived using multidisciplinary systems analysis methods. The subject of this paper is an analytical method that derives the ideal target attenuation spectrum that minimizes noise perceived by observers on the ground.
Subsonic panel method for designing wing surfaces from pressure distribution
NASA Technical Reports Server (NTRS)
Bristow, D. R.; Hawk, J. D.
1983-01-01
An iterative method has been developed for designing wing section contours corresponding to a prescribed subcritical distribution of pressure. The calculations are initialized by using a surface panel method to analyze a baseline wing or wing-fuselage configuration. A first-order expansion to the baseline panel method equations is then used to calculate a matrix containing the partial derivative of potential at each control point with respect to each unknown geometry parameter. In every iteration cycle, the matrix is used both to calculate the geometry perturbation and to analyze the perturbed geometry. The distribution of potential on the perturbed geometry is established by simple linear extrapolation from the baseline solution. The extrapolated potential is converted to pressure by Bernoulli's equation. Not only is the accuracy of the approach good for very large perturbations, but the computing cost of each complete iteration cycle is substantially less than one analysis solution by a conventional panel method.
Evaluation of Methods for Multidisciplinary Design Optimization (MDO). Part 2
NASA Technical Reports Server (NTRS)
Kodiyalam, Srinivas; Yuan, Charles; Sobieski, Jaroslaw (Technical Monitor)
2000-01-01
A new MDO method, BLISS, and two variants of the method, BLISS/RS and BLISS/S, have been implemented using iSIGHT's scripting language and are evaluated in this report on multidisciplinary problems. All of these methods are based on decomposing a modular system optimization into several subtask optimizations that may be executed concurrently, plus a system-level optimization that coordinates the subtask optimizations. The BLISS method and its variants are well suited to exploiting the concurrent-processing capabilities of a multiprocessor machine. Several steps, including local sensitivity analysis, local optimization, and response-surface construction and updates, are ideally suited for concurrent processing. Algorithms that can effectively exploit the concurrent-processing capabilities of compute servers will be a key requirement for solving large-scale industrial design problems, such as the automotive vehicle problem detailed in Section 3.4.
Improve emergency light design with lumens/sq ft method.
Sieron, R L
1981-05-01
In summary, the "Lumens/sq ft Method" outlined here is proposed as a guideline for designing emergency lighting systems such as in the accompanying examples. With this method, the total lumens delivered by the emergency lighting units in the area is divided by the floor area (in sq ft) to yield a figure of merit. The author proposes that a range from 0.25 to 1.0 lumens/sq ft be specified for emergency lighting. The lower value may be used for non-critical areas (for example, warehouses), while the higher value would be used for areas such as school corridors and hospitals.
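The arithmetic behind the proposed figure of merit is simple enough to sketch; the function names and the warehouse example below are illustrative, not from the article:

```python
def lumens_per_sqft(total_lumens, floor_area_sqft):
    """Figure of merit: total lumens from emergency units over floor area."""
    return total_lumens / floor_area_sqft

def meets_guideline(total_lumens, floor_area_sqft, critical=False):
    # Proposed range: 0.25 lm/sq ft for non-critical areas (e.g. warehouses),
    # 1.0 lm/sq ft for areas such as school corridors and hospitals.
    target = 1.0 if critical else 0.25
    return lumens_per_sqft(total_lumens, floor_area_sqft) >= target

# Hypothetical example: two 400-lumen units in a 2,000 sq ft warehouse.
figure = lumens_per_sqft(800, 2000)   # 0.4 lm/sq ft
```

At 0.4 lm/sq ft this hypothetical layout clears the non-critical threshold but would need more units for a hospital corridor.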
Synthesis of aircraft structures using integrated design and analysis methods
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Goetz, R. C.
1978-01-01
A systematic research is reported to develop and validate methods for structural sizing of an airframe designed with the use of composite materials and active controls. This research program includes procedures for computing aeroelastic loads, static and dynamic aeroelasticity, analysis and synthesis of active controls, and optimization techniques. Development of the methods is concerned with the most effective ways of integrating and sequencing the procedures in order to generate structural sizing and the associated active control system, which is optimal with respect to a given merit function constrained by strength and aeroelasticity requirements.
Bayesian methods for design and analysis of safety trials.
Price, Karen L; Xia, H Amy; Lakshminarayanan, Mani; Madigan, David; Manner, David; Scott, John; Stamey, James D; Thompson, Laura
2014-01-01
Safety assessment is essential throughout medical product development. There has been increased awareness of the importance of safety trials recently, in part due to recent US Food and Drug Administration guidance related to thorough assessment of cardiovascular risk in the treatment of type 2 diabetes. Bayesian methods provide great promise for improving the conduct of safety trials. In this paper, the safety subteam of the Drug Information Association Bayesian Scientific Working Group evaluates challenges associated with current methods for designing and analyzing safety trials and provides an overview of several suggested Bayesian opportunities that may increase efficiency of safety trials along with relevant case examples.
Asymmetric MRI magnet design using a hybrid numerical method.
Zhao, H; Crozier, S; Doddrell, D M
1999-12-01
This paper describes a hybrid numerical method for the design of asymmetric magnetic resonance imaging magnet systems. The problem is formulated as a field synthesis and the desired current density on the surface of a cylinder is first calculated by solving a Fredholm equation of the first kind. Nonlinear optimization methods are then invoked to fit practical magnet coils to the desired current density. The field calculations are performed using a semi-analytical method. A new type of asymmetric magnet is proposed in this work. The asymmetric MRI magnet allows the diameter spherical imaging volume to be positioned close to one end of the magnet. The main advantages of making the magnet asymmetric include the potential to reduce the perception of claustrophobia for the patient, better access to the patient by attending physicians, and the potential for reduced peripheral nerve stimulation due to the gradient coil configuration. The results highlight that the method can be used to obtain an asymmetric MRI magnet structure and a very homogeneous magnetic field over the central imaging volume in clinical systems of approximately 1.2 m in length. Unshielded designs are the focus of this work. This method is flexible and may be applied to magnets of other geometries.
Design Methods for Load-bearing Elements from Crosslaminated Timber
NASA Astrophysics Data System (ADS)
Vilguts, A.; Serdjuks, D.; Goremikins, V.
2015-11-01
Cross-laminated timber is an environmentally friendly material that exhibits a reduced level of anisotropy compared with solid and glued timber. It can be used for load-bearing walls and slabs of multi-storey timber buildings, as well as for decking structures of pedestrian and road bridges. Design methods for cross-laminated timber elements subjected to bending, and to compression with bending, were considered. The presented methods were experimentally validated and verified by FEM. Two cross-laminated timber slabs were tested under static load. Pine wood was chosen as the board material. The design scheme of the considered plates was a simply supported beam with a span of 1.9 m under a uniformly distributed load; the width of the plates was 1 m. The plates were also analysed by FEM. A comparison of the stresses acting in the edge fibres of the plates and the maximum vertical displacements shows that both considered methods can be used for engineering calculations; the difference between the experimental and analytical results ranges from 2 to 31%. The difference between the results obtained by the effective strength-and-stiffness method and the transformed-sections method was not significant.
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
A MODEL AND CONTROLLER REDUCTION METHOD FOR ROBUST CONTROL DESIGN.
YUE,M.; SCHLUETER,R.
2003-10-20
A bifurcation-subsystem-based model and controller reduction approach is presented. Using this approach, a robust μ-synthesis SVC control is designed for interarea oscillation and voltage control based on a small reduced-order bifurcation subsystem model of the full system. The control synthesis problem is posed by structured uncertainty modeling and control configuration formulation, using the bifurcation subsystem's knowledge of the nature of the interarea oscillation caused by a specific uncertainty parameter. The bifurcation subsystem method plays a key role in this paper because it provides (1) a bifurcation parameter for uncertainty modeling, (2) a criterion to reduce the order of the resulting MSVC control, and (3) a low-order model for a bifurcation-subsystem-based SVC (BMSVC) design. Using the bifurcation subsystem model to produce a low-order controller simplifies the control design and reduces the computational effort so significantly that robust μ-synthesis control can be applied to large systems where computation would otherwise make robust control design impractical. RGA analysis and time simulation show that the reduced BMSVC control design captures the center-manifold dynamics and uncertainty structure of the full system model and is capable of stabilizing the full system while achieving satisfactory control performance.
Towards Robust Designs Via Multiple-Objective Optimization Methods
NASA Technical Reports Server (NTRS)
Man Mohan, Rai
2006-01-01
A differential evolution (DE) method is first used to solve a relatively difficult problem in extended-surface heat transfer, wherein optimal fin geometries are obtained for different safe operating base temperatures. The objective of maximizing the safe operating base-temperature range is in direct conflict with the objective of maximizing fin heat transfer. This problem is a good example of achieving robustness in the context of changing operating conditions. The evolutionary method is then used to design a turbine airfoil, the two objectives being reduced sensitivity of the pressure distribution to small changes in the airfoil shape and maximization of the trailing-edge wedge angle, with the consequent increase in airfoil thickness and strength. This is a relevant example of achieving robustness to manufacturing tolerances and to wear and tear in the presence of other objectives.
Bayesian methods for the design and analysis of noninferiority trials.
Gamalo-Siebers, Margaret; Gao, Aijun; Lakshminarayanan, Mani; Liu, Guanghan; Natanegara, Fanni; Railkar, Radha; Schmidli, Heinz; Song, Guochen
2016-01-01
The gold standard for evaluating treatment efficacy of a medical product is a placebo-controlled trial. However, when the use of placebo is considered to be unethical or impractical, a viable alternative for evaluating treatment efficacy is through a noninferiority (NI) study where a test treatment is compared to an active control treatment. The minimal objective of such a study is to determine whether the test treatment is superior to placebo. An assumption is made that if the active control treatment remains efficacious, as was observed when it was compared against placebo, then a test treatment that has comparable efficacy with the active control, within a certain range, must also be superior to placebo. Because of this assumption, the design, implementation, and analysis of NI trials present challenges for sponsors and regulators. In designing and analyzing NI trials, substantial historical data are often required on the active control treatment and placebo. Bayesian approaches provide a natural framework for synthesizing the historical data in the form of prior distributions that can effectively be used in design and analysis of a NI clinical trial. Despite a flurry of recent research activities in the area of Bayesian approaches in medical product development, there are still substantial gaps in recognition and acceptance of Bayesian approaches in NI trial design and analysis. The Bayesian Scientific Working Group of the Drug Information Association provides a coordinated effort to target the education and implementation issues on Bayesian approaches for NI trials. In this article, we provide a review of both frequentist and Bayesian approaches in NI trials, and elaborate on the implementation for two common Bayesian methods including hierarchical prior method and meta-analytic-predictive approach. Simulations are conducted to investigate the properties of the Bayesian methods, and some real clinical trial examples are presented for illustration.
Improved Method of Design for Folding Inflatable Shells
NASA Technical Reports Server (NTRS)
Johnson, Christopher J.
2009-01-01
An improved method of designing complexly shaped inflatable shells to be assembled from gores was conceived for original application to the inflatable outer shell of a developmental habitable spacecraft module having a cylindrical mid-length section with toroidal end caps. The method is also applicable to inflatable shells of various shapes for terrestrial use. It addresses problems associated with the assembly, folding, transport, and deployment of inflatable shells that may comprise multiple layers and have complex shapes, including such doubly curved surfaces as toroids and spheres. One particularly difficult problem is that of mathematically defining fold lines on a gore pattern in a double-curvature region. Moreover, because fold lines in a double-curvature region tend to be curved, there is a practical problem of how to implement the folds. Another problem is that of modifying the basic gore shapes and sizes for the various layers so that, when folded as part of the integral structure, they do not mechanically interfere with each other at the fold lines. Heretofore, it has been common practice to design an inflatable shell to be assembled in the deployed configuration, without regard for the need to fold it into compact form. Typically, the result has been that folding is a difficult, time-consuming process.
A geometric design method for side-stream distillation columns
Rooks, R.E.; Malone, M.F.; Doherty, M.F.
1996-10-01
A side-stream distillation column may replace two simple columns for some applications, sometimes at considerable savings in energy and investment. This paper describes a geometric method for the design of side-stream columns; the method provides rapid estimates of equipment size and utility requirements. Unlike previous approaches, the geometric method is applicable to nonideal and azeotropic mixtures. Several example problems for both ideal and nonideal mixtures, including azeotropic mixtures containing distillation boundaries, are given. The authors make use of the fact that azeotropes or pure components whose classification in the residue curve map is a saddle can be removed as side-stream products. Significant process simplifications are found among some alternatives in example problems, leading to flow sheets with fewer units and a substantial savings in vapor rate.
Design of time interval generator based on hybrid counting method
NASA Astrophysics Data System (ADS)
Yao, Yuan; Wang, Zhaoqi; Lu, Houbing; Chen, Lian; Jin, Ge
2016-10-01
Time Interval Generators (TIGs) are frequently used for the characterization or timing operations of instruments in particle-physics experiments. Though some off-the-shelf TIGs can be employed, the need for a custom test or control system makes TIGs implemented in a programmable device desirable. The feasibility of using Field Programmable Gate Arrays (FPGAs) for particle-physics instrumentation has been validated in the design of Time-to-Digital Converters (TDCs) for precise time measurement. The FPGA-TDC technique is based on Tapped Delay Line (TDL) architectures, whose delay cells are down to a few tens of picoseconds. FPGA-based TIGs with a correspondingly fine delay step are therefore preferable, allowing customized particle-physics instrumentation and other utilities to be implemented on the same FPGA device. A hybrid counting method for designing TIGs with both high resolution and wide range is presented in this paper. The combination of two different counting methods to realize an integrable TIG is described in detail. A specially designed multiplexer for tap selection is introduced; its structure is devised to minimize the differing additional delays caused by the unpredictable routing from different taps to the output. A Kintex-7 FPGA is used for the hybrid-counting-based implementation of a TIG, providing a resolution of up to 11 ps and an interval range of up to 8 s.
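The hybrid idea, a coarse clock counter supplying the wide range and a delay-line tap supplying the fine step, can be sketched in integer picoseconds; the 4 ns clock period is an assumed illustrative value, while the 11 ps tap delay matches the reported resolution:

```python
def plan_interval(target_ps, clock_period_ps=4000, tap_delay_ps=11):
    """Split a target interval (integer picoseconds) into a coarse clock
    count plus a fine delay-line tap index.

    The 4 ns clock period is an assumed illustrative value; the 11 ps tap
    delay matches the resolution reported for the Kintex-7 implementation.
    """
    coarse, residue = divmod(target_ps, clock_period_ps)   # coarse counter
    fine = round(residue / tap_delay_ps)                   # TDL tap choice
    achieved_ps = coarse * clock_period_ps + fine * tap_delay_ps
    return coarse, fine, achieved_ps

coarse, fine, achieved = plan_interval(1_000_007)  # ~1 microsecond target
```

The achieved interval lands within half a tap delay of the target, which is the essence of combining a wide-range counter with a fine interpolator.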
Sequence design in lattice models by graph theoretical methods
NASA Astrophysics Data System (ADS)
Sanjeev, B. S.; Patra, S. M.; Vishveshwara, S.
2001-01-01
A general strategy has been developed based on graph theoretical methods, for finding amino acid sequences that take up a desired conformation as the native state. This problem of inverse design has been addressed by assigning topological indices for the monomer sites (vertices) of the polymer on a 3×3×3 cubic lattice. This is a simple design strategy, which takes into account only the topology of the target protein and identifies the best sequence for a given composition. The procedure allows the design of a good sequence for a target native state by assigning weights for the vertices on a lattice site in a given conformation. It is seen across a variety of conformations that the predicted sequences perform well both in sequence and in conformation space, in identifying the target conformation as native state for a fixed composition of amino acids. Although the method is tested in the framework of the HP model [K. F. Lau and K. A. Dill, Macromolecules 22, 3986 (1989)] it can be used in any context if proper potential functions are available, since the procedure derives unique weights for all the sites (vertices, nodes) of the polymer chain of a chosen conformation (graph).
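A minimal sketch of the vertex-weighting idea, using the count of non-bonded lattice contacts as a crude stand-in for the paper's topological indices (the function name and this simplification are assumptions):

```python
def design_hp_sequence(coords, n_h):
    """Assign H to the n_h most 'buried' sites of a lattice conformation.

    coords: integer (x, y, z) positions along the chain on a cubic lattice.
    A site's weight here is its number of non-bonded nearest-neighbour
    contacts -- a simple stand-in for the paper's topological indices.
    """
    occupied = {c: i for i, c in enumerate(coords)}

    def contacts(i):
        x, y, z = coords[i]
        nbrs = [(x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)]
        return sum(1 for n in nbrs
                   if n in occupied and abs(occupied[n] - i) > 1)

    ranked = sorted(range(len(coords)), key=contacts, reverse=True)
    seq = ['P'] * len(coords)
    for i in ranked[:n_h]:
        seq[i] = 'H'
    return ''.join(seq)

# A 4-residue square: only the chain ends touch a non-bonded neighbour.
print(design_hp_sequence([(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)], 2))  # HPPH
```

Fixing the composition (here two H, two P) and placing H at the highest-weight vertices mirrors the paper's strategy of deriving weights purely from the target conformation's topology.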
A kind of optimizing design method of progressive addition lenses
NASA Astrophysics Data System (ADS)
Tang, Yunhai; Qian, Lin; Wu, Quanying; Yu, Jingchi; Chen, Hao; Wang, Yuanyuan
2010-10-01
Progressive addition lenses are ophthalmic lenses with a freeform surface whose curvature varies gradually from a minimum in the upper, distance-viewing area to a maximum in the lower, near-viewing area. An optimization method for progressive addition lenses is proposed that improves optical quality by modifying the vector heights of the initially designed surface. The relationship among mean power, cylinder power, and the vector heights of the surface is derived, and an optimizing factor is obtained. The vector heights of the initially designed surface are used to calculate the mean-power and cylinder-power plots based on differential geometry. The mean-power plot is changed by adjusting the optimizing factor; alternatively, a new mean-power plot can be derived by shifting the mean power of one selected region to another and then interpolating and smoothing. An elliptic partial differential equation is formulated from the changed mean power and solved iteratively, yielding the optimized vector heights of the surface. Compared with the original lens, the region near the nasal side of the distance-vision portion in which the astigmatism is less than 0.5 D has become broader, and the clear regions of the distance-vision and near-vision portions are wider.
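The differential-geometry step, recovering mean-power and cylinder maps from the surface vector heights, can be sketched under a paraxial (small-slope) approximation, with the Hessian of the sag standing in for the full curvature tensor; the refractive index and grid details are assumptions, not values from the paper:

```python
import numpy as np

def power_maps(z, dx, n=1.53):
    """Mean-power and cylinder maps from a sag grid z[y, x] in metres.

    Paraxial sketch: principal curvatures are approximated by the Hessian
    eigenvalues of z (valid for small slopes); the refractive index n and
    the uniform grid spacing dx are assumptions for illustration.
    """
    zy, zx = np.gradient(z, dx)          # first derivatives
    zyy, _ = np.gradient(zy, dx)         # second derivatives
    zxy, zxx = np.gradient(zx, dx)
    mean = (n - 1.0) * (zxx + zyy) / 2.0                          # mean power (D)
    cyl = (n - 1.0) * np.sqrt((zxx - zyy) ** 2 + 4.0 * zxy ** 2)  # cylinder (D)
    return mean, cyl

# Sanity check: a paraboloidal cap of radius 0.1 m gives ~5.3 D, no cylinder.
x = np.linspace(-0.01, 0.01, 41)
X, Y = np.meshgrid(x, x)
mean, cyl = power_maps((X**2 + Y**2) / (2 * 0.1), x[1] - x[0])
```

Iterating such maps against a target mean-power plot is the shape of the optimization loop the abstract describes.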
A Probabilistic Design Method Applied to Smart Composite Structures
NASA Technical Reports Server (NTRS)
Shiao, Michael C.; Chamis, Christos C.
1995-01-01
A probabilistic design method is described and demonstrated using a smart composite wing. Probabilistic structural design incorporates naturally occurring uncertainties including those in constituent (fiber/matrix) material properties, fabrication variables, structure geometry and control-related parameters. Probabilistic sensitivity factors are computed to identify those parameters that have a great influence on a specific structural reliability. Two performance criteria are used to demonstrate this design methodology. The first criterion requires that the actuated angle at the wing tip be bounded by upper and lower limits at a specified reliability. The second criterion requires that the probability of ply damage due to random impact load be smaller than an assigned value. When the relationship between reliability improvement and the sensitivity factors is assessed, the results show that a reduction in the scatter of the random variable with the largest sensitivity factor (absolute value) provides the lowest failure probability. An increase in the mean of the random variable with a negative sensitivity factor will reduce the failure probability. Therefore, the design can be improved by controlling or selecting distribution parameters associated with random variables. This can be implemented during the manufacturing process to obtain maximum benefit with minimum alterations.
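A toy Monte Carlo sketch of the core idea, with a linear response in two random variables standing in for the smart-wing model, shows how shrinking the scatter of the most sensitive variable lowers the failure probability; the sensitivities, distributions, and limit below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_probability(means, stds, limit, n=100_000):
    """Monte Carlo estimate of P(response > limit) for a linear response
    in two random variables (an invented stand-in for the wing model)."""
    a, b = 2.0, 0.5  # assumed sensitivity factors: x1 dominates
    x1 = rng.normal(means[0], stds[0], n)
    x2 = rng.normal(means[1], stds[1], n)
    return float(np.mean(a * x1 + b * x2 > limit))

base = failure_probability((1.0, 1.0), (0.2, 0.2), limit=3.5)
# Shrinking the scatter of the most sensitive variable (x1) cuts the risk:
tight = failure_probability((1.0, 1.0), (0.1, 0.2), limit=3.5)
```

This reproduces, in miniature, the abstract's conclusion that reducing the scatter of the variable with the largest sensitivity factor yields the lowest failure probability.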
Modified method to improve the design of Petlyuk distillation columns
2014-01-01
Background: A response surface analysis was performed to study the effect of the composition and feed thermal conditions of ternary mixtures on the number of theoretical stages and the energy consumption of Petlyuk columns. A modification of the pre-design algorithm was necessary for this purpose. Results: The modified algorithm provided feasible results in 100% of the studied cases, compared with only 8.89% for the current algorithm. The proposed algorithm allowed us to attain the desired separations regardless of the type of mixture and the operating conditions in the feed stream, something that was not possible with the traditional pre-design method. The results showed that the type of mixture had a great influence on the number of stages and on energy consumption: a higher number of stages and lower energy consumption were attained with mixtures rich in the light component, while higher energy consumption occurred when the mixture was rich in the heavy component. Conclusions: The proposed strategy expands the search for an optimal design of Petlyuk columns within a feasible region, allowing us to find a feasible design that meets output specifications with low thermal loads. PMID:25061476
Rapid and simple method of qPCR primer design.
Thornton, Brenda; Basu, Chhandak
2015-01-01
Quantitative real-time polymerase chain reaction (qPCR) is a powerful tool for the analysis and quantification of gene expression. It is advantageous compared with the traditional gel-based PCR method because gene expression can be visualized in real time on a computer. In qPCR, a reporter dye system that interacts with the DNA region of interest is used to detect amplification. Popular reporter systems include Molecular Beacon(®), SYBR Green(®), and Taqman(®). However, the success of qPCR depends on optimal primer design. Considerations for primer design include GC content, primer self-dimer formation, and secondary-structure formation. Freely available software can be used for qPCR primer design. Here we show how to use some freely available web-based software programs (such as Primerquest(®), Unafold(®), and Beacon designer(®)) to design qPCR primers.
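Two of the named checks, GC content and self-dimer formation, can be screened with a few lines of code; this is a crude string-matching sketch, not a thermodynamic model like those in the cited tools:

```python
COMP = {'A': 'T', 'T': 'A', 'G': 'C', 'C': 'G'}

def gc_content(primer):
    """Fraction of G/C bases; roughly 0.4-0.6 is a common qPCR target."""
    p = primer.upper()
    return (p.count('G') + p.count('C')) / len(p)

def max_self_complementarity(primer):
    """Longest contiguous complementary run when the primer is slid against
    a reversed copy of itself -- a crude self-dimer screen, not a
    thermodynamic model."""
    p = primer.upper()
    q = p[::-1]
    best = 0
    for shift in range(-(len(p) - 1), len(p)):
        run = 0
        for i, base in enumerate(p):
            j = i + shift
            if 0 <= j < len(q) and COMP.get(base) == q[j]:
                run += 1
                best = max(best, run)
            else:
                run = 0
    return best
```

A palindromic primer such as "ATAT" scores its full length on the self-dimer screen, flagging exactly the kind of candidate the dedicated tools would reject.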
Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation
NASA Technical Reports Server (NTRS)
DePriest, Douglas; Morgan, Carolyn
2003-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicles (RLV) will require that rapid, high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase focused on assessing how well these models predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.
Collocation methods for distillation design. 1: Model description and testing
Huss, R.S.; Westerberg, A.W.
1996-05-01
Fast and accurate distillation design requires a model that significantly reduces the problem size while accurately approximating a full-order distillation column model. This collocation model builds on the concepts of past collocation models for the design of complex real-world separation systems. Two variable transformations make this method unique. Polynomials cannot accurately fit trajectories that flatten out; in columns, flat sections occur in the middle of large column sections or where concentrations approach 0 or 1. With an exponential transformation of the tray number, which maps anywhere from zero to an infinite number of trays onto the range 0-1, four collocation trays can accurately simulate a large column section. With a hyperbolic tangent transformation of the mole fractions, the model can simulate columns that reach high purities. Furthermore, the model uses multiple collocation elements per column section, which is more accurate than a single high-order collocation section.
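One plausible form of the two transformations can be sketched as follows; the scale constants are assumed tuning parameters, not values taken from the paper:

```python
import math

def tray_coordinate(n, scale=10.0):
    """Map tray number n in [0, inf) onto s in [0, 1) exponentially, so a
    handful of collocation points can resolve a long, flat column section.
    The scale constant is an assumed tuning parameter."""
    return 1.0 - math.exp(-n / scale)

def purity_coordinate(x, scale=6.0):
    """Stretch mole fractions near 0 and 1 with an inverse-tanh-style map so
    high-purity trajectories remain well resolved by low-order polynomials."""
    return math.atanh(2.0 * x - 1.0) / scale
```

Both maps trade uniform spacing for resolution where the trajectories actually vary, which is why a four-point collocation section can stand in for a much larger column section.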
Airfoil Design and Optimization by the One-Shot Method
NASA Technical Reports Server (NTRS)
Kuruvila, G.; Taasan, Shlomo; Salas, M. D.
1995-01-01
An efficient numerical approach for the design of optimal aerodynamic shapes is presented in this paper. The objective of any optimization problem is to find the optimum of a cost function subject to a certain state equation (governing equation of the flow field) and certain side constraints. As in classical optimal control methods, the present approach introduces a costate variable (Lagrange multiplier) to evaluate the gradient of the cost function. High efficiency in reaching the optimum solution is achieved by using a multigrid technique and updating the shape in a hierarchical manner such that smooth (low-frequency) changes are done separately from high-frequency changes. Thus, the design variables are changed on a grid where their changes produce nonsmooth (high-frequency) perturbations that can be damped efficiently by the multigrid. The cost of solving the optimization problem is approximately two to three times the cost of the equivalent analysis problem.
Collocation methods for distillation design. 2: Applications for distillation
Huss, R.S.; Westerberg, A.W.
1996-05-01
The authors present applications of a collocation method for modeling distillation columns that they developed in a companion paper. They discuss implementation of the model, including the ASCEND (Advanced System for Computations in ENgineering Design) system, which enables one to create complex models from simple building blocks and interactively learn to solve them. They first apply the model to compute minimum reflux for a given separation task, exactly solving nonsharp and approximately solving sharp-split minimum-reflux problems. They then illustrate the use of the collocation model to optimize the design of a single column capable of carrying out a prescribed set of separation tasks. The optimization picks the best column diameter and total number of trays, as well as the feed tray for each of the prescribed separations.
Property Exchange Method for Designing Computer-Based Learning Game
NASA Astrophysics Data System (ADS)
Umetsu, Takanobu; Hirashima, Tsukasa
Motivation is one of the most important factors in learning. Many researchers of learning environments therefore pay special attention to learning games as a promising approach to realizing highly motivated learning. However, making a learning game is not an easy task. Although there are several investigations into design methods for learning games, most of them only propose guidelines for the design or characteristics that learning games should have. Developers of learning games are therefore required to have substantial knowledge and experience of both learning and games in order to understand the guidelines or to address the characteristics. Consequently, it is very difficult for teachers to obtain learning games suited to their learning objectives.
A Method for Designing CDO Conformed to Investment Parameters
NASA Astrophysics Data System (ADS)
Nakae, Tatsuya; Moritsu, Toshiyuki; Komoda, Norihisa
We propose a method for designing CDOs (Collateralized Debt Obligations) that meet investor needs regarding CDO attributes. It is demonstrated that adjusting the attributes of a CDO (credit capability and issue amount) to investors' preferences creates a capital-loss risk borne by the agent. We formulate a CDO optimization problem by defining an objective function based on this risk and by setting constraints that arise from investor needs and the risk premium paid to the agent. Our prototype experiment, in which fictitious underlying obligations and investor needs are given, verifies that CDOs can be designed without opportunity loss or dead-stock loss, and that the capital loss is no more than one-thousandth of the annual payment under guarantee for small and medium-sized enterprises by a general credit guarantee institution.
Design of braided composite tubes by numerical analysis method
Hamada, Hiroyuki; Fujita, Akihiro; Maekawa, Zenichiro; Nakai, Asami; Yokoyama, Atsushi
1995-11-01
Conventional composite laminates have very poor through-thickness strength and are therefore limited in their application to structural parts with complex shapes. In this paper, a design method for braided composite tubes is proposed. An analysis model that spans scales from a micro model to a macro model is presented. The method is applied to predict the bending rigidity and the initial fracture stress under bending load of a braided tube. The proposed analytical procedure can be included as a unit in a CAE system for braided composites.
Methods to Design and Synthesize Antibody-Drug Conjugates (ADCs)
Yao, Houzong; Jiang, Feng; Lu, Aiping; Zhang, Ge
2016-01-01
Antibody-drug conjugates (ADCs) have become a promising targeted therapy strategy that combines the specificity, favorable pharmacokinetics and biodistributions of antibodies with the destructive potential of highly potent drugs. One of the biggest challenges in the development of ADCs is the application of suitable linkers for conjugating drugs to antibodies. Recently, the design and synthesis of linkers are making great progress. In this review, we present the methods that are currently used to synthesize antibody-drug conjugates by using thiols, amines, alcohols, aldehydes and azides. PMID:26848651
A Method of Trajectory Design for Manned Asteroids Exploration
NASA Astrophysics Data System (ADS)
Gan, Q. B.; Zhang, Y.; Zhu, Z. F.; Han, W. H.; Dong, X.
2014-11-01
A trajectory optimization method for nuclear-propulsion manned asteroid exploration is presented. For launches between 2035 and 2065, the Earth-departure and Earth-return phases are first searched based on the Lambert transfer orbit. The optimal flight trajectory within the feasible regions is then selected by pruning the flight sequences. Setting the nuclear-propulsion flight plan as propel-coast-propel, and taking minimal departure mass as the index, the nuclear-propulsion flight trajectory of each phase is optimized separately using a hybrid method. Using the optimized local parameters of the three phases as initial values, the global parameters are then jointly optimized. Finally, the minimal-departure-mass trajectory design result is given.
Novel computational methods to design protein-protein interactions
NASA Astrophysics Data System (ADS)
Zhou, Alice Qinhua; O'Hern, Corey; Regan, Lynne
2014-03-01
Despite the abundance of structural data, we still cannot accurately predict the structural and energetic changes resulting from mutations at protein interfaces. The inadequacy of current computational approaches to the analysis and design of protein-protein interactions has hampered the development of novel therapeutic and diagnostic agents. In this work, we apply a simple physical model that includes only a minimal set of geometrical constraints, excluded volume, and attractive van der Waals interactions to 1) rank the binding affinity of mutants of tetratricopeptide repeat proteins with their cognate peptides, 2) rank the energetics of binding of small designed proteins to the hydrophobic stem region of the influenza hemagglutinin protein, and 3) predict the stability of T4 lysozyme and staphylococcal nuclease mutants. This work will not only lead to a fundamental understanding of protein-protein interactions, but also to the development of efficient computational methods to rationally design protein interfaces with tunable specificity and affinity, and numerous applications in biomedicine. NSF DMR-1006537, PHY-1019147, Raymond and Beverly Sackler Institute for Biological, Physical and Engineering Sciences, and Howard Hughes Medical Institute.
Cox regression methods for two-stage randomization designs.
Lokhnygina, Yuliya; Helterbrand, Jeffrey D
2007-06-01
Two-stage randomization designs (TSRD) are becoming increasingly common in oncology and AIDS clinical trials as they make more efficient use of study participants to examine therapeutic regimens. In these designs patients are initially randomized to an induction treatment, followed by randomization to a maintenance treatment conditional on their induction response and consent to further study treatment. Broader acceptance of TSRDs in drug development may hinge on the ability to make appropriate intent-to-treat type inference within this design framework as to whether an experimental induction regimen is better than a standard induction regimen when maintenance treatment is fixed. Recently Lunceford, Davidian, and Tsiatis (2002, Biometrics 58, 48-57) introduced an inverse probability weighting based analytical framework for estimating survival distributions and mean restricted survival times, as well as for comparing treatment policies at landmarks in the TSRD setting. In practice Cox regression is widely used and in this article we extend the analytical framework of Lunceford et al. (2002) to derive a consistent estimator for the log hazard in the Cox model and a robust score test to compare treatment policies. Large sample properties of these methods are derived, illustrated via a simulation study, and applied to a TSRD clinical trial. PMID:17425633
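The inverse probability weighting idea underlying this analytical framework can be sketched on synthetic data. Everything below is an illustrative assumption (response probability, randomization probability, outcome model) and censoring is ignored; it is not the Cox-model estimator of this article, only the weighting principle.

```python
import numpy as np

# Synthetic-data sketch of inverse probability weighting (IPW) for a
# two-stage design: estimate the mean outcome under the policy
# "induction A, then maintenance M1 if responder".
rng = np.random.default_rng(0)
n = 10000
responder = rng.random(n) < 0.5        # induction response
p_m1 = 0.5                             # second-stage randomization prob.
m1 = rng.random(n) < p_m1              # maintenance assignment
y = (1.0 + 0.5 * responder + 0.5 * (responder & m1)
     + rng.normal(0.0, 0.1, n))        # outcome (true policy mean = 1.5)

# Policy-consistent patients: all non-responders, plus responders on M1.
# Consistent responders are up-weighted by 1/p_m1 to stand in for those
# randomized away from the policy.
consistent = ~responder | m1
w = np.where(responder, 1.0 / p_m1, 1.0) * consistent
policy_mean = np.sum(w * y) / np.sum(w)
```

The weighted average recovers the policy mean even though only a subset of patients followed the policy.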
An introduction to quantum chemical methods applied to drug design.
Stenta, Marco; Dal Peraro, Matteo
2011-06-01
The advent of molecular medicine allowed identifying the malfunctioning of subcellular processes as the source of many diseases. Since then, drugs are not only discovered, but actually designed to fulfill a precise task. Modern computational techniques, based on molecular modeling, play a relevant role both in target identification and drug lead development. By flanking and integrating standard experimental techniques, modeling has proven itself as a powerful tool across the drug design process. The success of computational methods depends on a balance between cost (computation time) and accuracy. Thus, the integration of innovative theories and more powerful hardware architectures allows molecular modeling to be used as a reliable tool for rationalizing the results of experiments and accelerating the development of new drug design strategies. We present an overview of the most common quantum chemistry computational approaches, providing for each one a general theoretical introduction to highlight limitations and strong points. We then discuss recent developments in software and hardware resources, which have allowed state-of-the-art of computational quantum chemistry to be applied to drug development.
Sensitivity method for integrated structure/active control law design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1987-01-01
The development is described of an integrated structure/active control law design methodology for aeroelastic aircraft applications. A short motivating introduction to aeroservoelasticity is given along with the need for integrated structures/controls design algorithms. Three alternative approaches to development of an integrated design method are briefly discussed with regard to complexity, coordination and tradeoff strategies, and the nature of the resulting solutions. This leads to the formulation of the proposed approach, which is based on the concepts of sensitivity of optimum solutions and multi-level decompositions. The concept of sensitivity of optimum is explained in more detail and compared with traditional sensitivity concepts of classical control theory. The analytical sensitivity expressions for the solution of the linear-quadratic-Gaussian (LQG) control problem are summarized in terms of the linear regulator solution and the Kalman filter solution. Numerical results for a state space aeroelastic model of the DAST ARW-II vehicle are given, showing the changes in aircraft responses to variations of a structural parameter, in this case the first wing bending natural frequency.
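The linear regulator half of the LQG solution referred to above can be sketched for a toy two-state system. The matrices are illustrative stand-ins (a single lightly damped mode), not the DAST ARW-II model; the Kalman filter solution is the dual problem solved with the same machinery.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# LQR sketch: solve the continuous-time Riccati equation and form the
# optimal state-feedback gain for one lightly damped structural mode.
A = np.array([[0.0, 1.0],
              [-4.0, -0.2]])      # illustrative mode dynamics
B = np.array([[0.0],
              [1.0]])             # control input
Q = np.eye(2)                     # state weighting
R = np.array([[1.0]])             # control weighting

P = solve_continuous_are(A, B, Q, R)     # Riccati solution
K = np.linalg.solve(R, B.T @ P)          # optimal regulator gain
Acl = A - B @ K                          # closed-loop dynamics
```

Sensitivity of the optimum would then differentiate P with respect to a structural parameter entering A, such as the bending frequency.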
Simplified design method for shear-valve magnetorheological dampers
NASA Astrophysics Data System (ADS)
Ding, Yang; Zhang, Lu; Zhu, Haitao; Li, Zhongxian
2014-12-01
Based on the Bingham parallel-plate model, a simplified design method of shear-valve magnetorheological (MR) dampers is proposed considering the magnetic circuit optimization. Correspondingly, a new MR damper with a full-length effective damping path is proposed. The prototype dampers are also fabricated and studied numerically and experimentally. According to the test results, the Bingham parallel-plate model is further modified to obtain a damping force prediction model of the proposed MR dampers. This prediction model considers the magnetic saturation phenomenon. The study indicates that the proposed simplified design method is simple, effective and reliable. The maximum damping force of the proposed MR dampers with a full-length effective damping path is at least twice as large as those of conventional MR dampers. The dynamic range of damping force increases by at least 70%. The proposed damping force prediction model considers the magnetic saturation phenomenon and it can realize the actual characteristic of MR fluids. The model is able to predict the actual damping force of MR dampers precisely.
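The idealized Bingham behavior underlying the design method can be written as a viscous term plus a field-controllable yield force. The coefficient values below are illustrative placeholders, not values from the paper, and the magnetic saturation modification is not included.

```python
import numpy as np

# Idealized Bingham force model for an MR damper: viscous term plus a
# controllable yield force (coefficients are illustrative only).
def bingham_force(v, c=1000.0, f_y=500.0):
    """Damping force [N]: c [N*s/m] viscous coefficient, f_y [N] yield force."""
    return c * v + f_y * np.sign(v)

# Dynamic range at a piston velocity of 0.1 m/s: field-on force versus
# the purely viscous field-off (f_y = 0) force.
f_on = bingham_force(0.1)
f_off = bingham_force(0.1, f_y=0.0)
```

The ratio f_on/f_off is the dynamic range the abstract refers to; increasing the effective damping path length raises f_y and hence this ratio.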
Development of Analysis Methods for Designing with Composites
NASA Technical Reports Server (NTRS)
Madenci, E.
1999-01-01
The project involved the development of new analysis methods to achieve efficient design of composite structures. We developed a complex variational formulation to analyze the in-plane and bending coupling response of an unsymmetrically laminated plate with an elliptical cutout subjected to arbitrary edge loading as shown in Figure 1. This formulation utilizes four independent complex potentials that satisfy the coupled in-plane and bending equilibrium equations, thus eliminating the area integrals from the strain energy expression. The solution to a finite geometry laminate under arbitrary loading is obtained by minimizing the total potential energy function and solving for the unknown coefficients of the complex potentials. The validity of this approach is demonstrated by comparison with finite element analysis predictions for a laminate with an inclined elliptical cutout under bi-axial loading. The geometry and loading of this laminate with a lay-up of [-45/45] are shown in Figure 2. The deformed configuration shown in Figure 3 reflects the presence of bending-stretching coupling. The validity of the present method is established by comparing the out-of-plane deflections along the boundary of the elliptical cutout from the present approach with those of the finite element method. The comparison shown in Figure 4 indicates remarkable agreement. The details of this method are described in a manuscript by Madenci et al. (1998).
A New Aerodynamic Data Dispersion Method for Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Pinier, Jeremy T.
2011-01-01
A novel method for implementing aerodynamic data dispersion analysis is herein introduced. A general mathematical approach combined with physical modeling tailored to the aerodynamic quantity of interest enables the generation of more realistically relevant dispersed data and, in turn, more reasonable flight simulation results. The method simultaneously allows for the aerodynamic quantities and their derivatives to be dispersed given a set of non-arbitrary constraints, which stresses the controls model in more ways than with the traditional bias up or down of the nominal data within the uncertainty bounds. The adoption and implementation of this new method within the NASA Ares I Crew Launch Vehicle Project has resulted in significant increases in predicted roll control authority, and lowered the induced risks for flight test operations. One direct impact on launch vehicles is a reduced size for auxiliary control systems, and the possibility of an increased payload. This technique has the potential of being applied to problems in multiple areas where nominal data together with uncertainties are used to produce simulations using Monte Carlo type random sampling methods. It is recommended that a tailored physics-based dispersion model be delivered with any aerodynamic product that includes nominal data and uncertainties, in order to make flight simulations more realistic and allow for leaner spacecraft designs.
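One way to picture dispersing a quantity and its derivative together, rather than biasing the nominal data up or down, is a smooth random perturbation held inside the uncertainty bounds. The curve shape, bounds, and mode count below are illustrative assumptions, not the Ares I aerodynamic model.

```python
import numpy as np

# Sketch: disperse a nominal aerodynamic coefficient within its
# uncertainty bounds using a smooth random perturbation, so that both
# the coefficient and its slope vary across Monte Carlo realizations.
rng = np.random.default_rng(1)
alpha = np.linspace(-10.0, 10.0, 41)        # angle of attack [deg]
c_nom = 0.1 * alpha                         # nominal coefficient
u = 0.05 * (1.0 + 0.02 * alpha**2)          # uncertainty bound vs. alpha

# Low-order random Fourier modes give a smooth, derivative-dispersing
# perturbation; normalizing and scaling keeps it inside the bounds.
amp = rng.uniform(-1.0, 1.0, 3)
phase = rng.uniform(0.0, 2.0 * np.pi, 3)
delta = sum(a * np.sin(2.0 * np.pi * k * (alpha / 20.0) + p)
            for k, (a, p) in enumerate(zip(amp, phase), start=1))
delta = delta / np.max(np.abs(delta)) * u   # constrained to +/- u
c_disp = c_nom + delta                      # one dispersed realization
```

Each Monte Carlo draw produces a different smooth realization, stressing the controls model in more ways than a uniform bias would.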
Nanobiological studies on drug design using molecular mechanic method
Ghaheh, Hooria Seyedhosseini; Mousavi, Maryam; Araghi, Mahmood; Rasoolzadeh, Reza; Hosseini, Zahra
2015-01-01
Background: Influenza H1N1 is very important worldwide, and point mutations that occur in the virus genes are a threat for the World Health Organization (WHO) and for druggists, since they could make the virus resistant to the existing antibiotics. Influenza epidemics cause severe respiratory illness in 30 to 50 million people and kill 250,000 to 500,000 people worldwide every year. Nowadays, drug design is not done through trial and error because of the cost and time involved; bioinformatics studies are therefore essential for designing drugs. Materials and Methods: This paper presents a study of the binding site of the neuraminidase (NA) enzyme (which is very important in drug design) at a temperature of 310 K and in different dielectrics, toward the best drug design. Information on the NA enzyme was extracted from the Protein Data Bank (PDB) and National Center for Biotechnology Information (NCBI) websites. The new N1 sequences were downloaded from the NCBI influenza virus sequence database. Drug binding sites were simulated and modeled by homology using ArgusLab 4.0, HyperChem 6.0 and Chem3D software. Their stability was assessed in different dielectrics and temperatures. Result: Measurements of the potential energy (kcal/mol) of the NA binding sites in different dielectrics at 310 K revealed that at time step size = 0 pSec the drug binding sites have the maximum energy level, and at time step size = 100 pSec they have maximum stability and minimum energy. Conclusions: Drug binding sites depend more on dielectric constants than on temperature, and the optimum dielectric constant is 39/78. PMID:26605248
IODC98 optical design problem: method of progressing from an achromatic to an apochromatic design
Seppala, L.G.
1998-07-20
A general method of designing an apochromatic lens using a triplet of special glasses and the buried-surfaces concept can be outlined. First, one chooses a starting point which is already achromatic. Second, a thick plate or shell is added to the design, where the plate or shell has an index of refraction of 1.62, which is similar to the average index of refraction of the special glass triplet (for example: PSK53A, KZFS1 and TIF6). Third, the lens is then reoptimized to an achromatic design. Fourth, the single element is replaced by the special glass triplet. Fifth, only the internal surfaces of the triplet are varied to correct all three wavelengths. Although this step will produce little improvement, it does serve to stabilize further optimization. Sixth and finally, all potential variables are used to fully optimize the apochromatic lens. Microscope objectives, for example, could be designed using this technique. The important concept is the use of multiple buried surfaces, each interface involving a special glass, after an achromatic design has been achieved. This extension relieves the restriction that all special glasses have a common index of refraction and allows a wider variety of special glasses to be used. However, it is still desirable to use glasses which form a large triangle on the P versus V diagram.
An analytical filter design method for guided wave phased arrays
NASA Astrophysics Data System (ADS)
Kwon, Hyu-Sang; Kim, Jin-Yeon
2016-12-01
This paper presents an analytical method for designing a spatial filter that processes the data from an array of two-dimensional guided wave transducers. An inverse problem is defined where the spatial filter coefficients are determined in such a way that a prescribed beam shape, i.e., a desired array output, is best approximated in the least-squares sense. Taking advantage of the 2π-periodicity of the generated wave field, a Fourier-series representation is used to derive closed-form expressions for the constituting matrix elements. Special cases in which the desired array output is an ideal delta function and a gate function are considered in a more explicit way. Numerical simulations are performed to examine the performance of the filters designed by the proposed method. It is shown that the proposed filters can significantly improve the beam quality in general. Most notable is that the proposed method does not trade off main-lobe width against sidelobe levels; i.e., a narrow main lobe and low sidelobes are achieved simultaneously. It is also shown that the proposed filter can compensate for the effects of nonuniform directivity and sensitivity of array elements by explicitly taking these into account in the formulation. From an example of detecting two separate targets, how much the angular resolution can be improved as compared to the conventional delay-and-sum filter is quantitatively illustrated. Lamb wave based imaging of localized defects in an elastic plate using a circular array is also presented as an example of practical applications.
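The least-squares inverse problem described above can be sketched numerically: pick weights so that the array response best matches a desired beam over all look directions. The circular geometry, element count, and kr value below are illustrative assumptions, and the fit uses a plain numerical least-squares solve rather than the paper's closed-form Fourier-series expressions.

```python
import numpy as np

# Least-squares spatial filter sketch: find weights w so the realized
# beam A @ w approximates a desired beam (a narrow "delta" at theta0).
N = 16                                   # elements on a circle
kr = 4.0                                 # wavenumber x array radius
phi = 2.0 * np.pi * np.arange(N) / N     # element angular positions
theta = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
theta0 = np.pi / 3                       # desired steering direction

# Steering matrix: element responses to a plane wave from each theta.
A = np.exp(1j * kr * np.cos(theta[:, None] - phi[None, :]))
# Desired beam: 1 in a narrow window around theta0, 0 elsewhere.
b = np.where(np.abs(np.angle(np.exp(1j * (theta - theta0)))) < 0.05, 1.0, 0.0)

w, *_ = np.linalg.lstsq(A, b, rcond=None)    # least-squares coefficients
beam = np.abs(A @ w)                         # realized beam pattern
```

The realized beam peaks at the prescribed direction; the least-squares criterion shapes the main lobe and sidelobes jointly instead of trading one for the other.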
Learning physics: A comparative analysis between instructional design methods
NASA Astrophysics Data System (ADS)
Mathew, Easow
The purpose of this research was to determine if there were differences in academic performance between students who participated in traditional versus collaborative problem-based learning (PBL) instructional design approaches to physics curricula. This study utilized a quantitative quasi-experimental design methodology to determine the significance of differences in pre- and posttest introductory physics exam performance between students who participated in traditional (i.e., control group) versus collaborative problem solving (PBL) instructional design (i.e., experimental group) approaches to physics curricula over a college semester in 2008. There were 42 student participants (N = 42) enrolled in an introductory physics course at the research site in the Spring 2008 semester who agreed to participate in this study after reading and signing informed consent documents. A total of 22 participants were assigned to the experimental group (n = 22) who participated in a PBL based teaching methodology along with traditional lecture methods. The other 20 students were assigned to the control group (n = 20) who participated in the traditional lecture teaching methodology. Both the courses were taught by experienced professors who have qualifications at the doctoral level. The results indicated statistically significant differences (p < .01) in academic performance between students who participated in traditional (i.e., lower physics posttest scores and lower differences between pre- and posttest scores) versus collaborative (i.e., higher physics posttest scores, and higher differences between pre- and posttest scores) instructional design approaches to physics curricula. Despite some slight differences in control group and experimental group demographic characteristics (gender, ethnicity, and age) there were statistically significant (p = .04) differences between female average academic improvement which was much higher than male average academic improvement (˜63%) in
Formal methods in the design of Ada 1995
NASA Technical Reports Server (NTRS)
Guaspari, David
1995-01-01
Formal, mathematical methods are most useful when applied early in the design and implementation of a software system--that, at least, is the familiar refrain. I will report on a modest effort to apply formal methods at the earliest possible stage, namely, in the design of the Ada 95 programming language itself. This talk is an 'experience report' that provides brief case studies illustrating the kinds of problems we worked on, how we approached them, and the extent (if any) to which the results proved useful. It also derives some lessons and suggestions for those undertaking future projects of this kind. Ada 95 is the first revision of the standard for the Ada programming language. The revision began in 1988, when the Ada Joint Programming Office first asked the Ada Board to recommend a plan for revising the Ada standard. The first step in the revision was to solicit criticisms of Ada 83. A set of requirements for the new language standard, based on those criticisms, was published in 1990. A small design team, the Mapping Revision Team (MRT), became exclusively responsible for revising the language standard to satisfy those requirements. The MRT, from Intermetrics, is led by S. Tucker Taft. The work of the MRT was regularly subject to independent review and criticism by a committee of distinguished Reviewers and by several advisory teams--for example, the two User/Implementor teams, each consisting of an industrial user (attempting to make significant use of the new language on a realistic application) and a compiler vendor (undertaking, experimentally, to modify its current implementation in order to provide the necessary new features). One novel decision established the Language Precision Team (LPT), which investigated language proposals from a mathematical point of view. The LPT applied formal mathematical analysis to help improve the design of Ada 95 (e.g., by clarifying the language proposals) and to help promote its acceptance (e.g., by identifying a
PARTIAL RESTRAINING FORCE INTRODUCTION METHOD FOR DESIGNING CONSTRUCTION COUNTERMEASURES BASED ON THE ΔB METHOD
NASA Astrophysics Data System (ADS)
Nishiyama, Taku; Imanishi, Hajime; Chiba, Noriyuki; Ito, Takao
Landslide or slope failure is a three-dimensional movement phenomenon, so a three-dimensional treatment makes stability easier to understand. The ΔB method (a simplified three-dimensional slope stability analysis method) is based on the limit equilibrium method and is equivalent to an approximate three-dimensional slope stability analysis that extends two-dimensional cross-section stability analysis results to assess stability. The analysis can be conducted using conventional spreadsheets or two-dimensional slope stability computational software. This paper describes the concept of the partial restraining force introduction method for designing construction countermeasures using the distribution of the restraining force found along survey lines, which is based on the distribution of survey-line safety factors derived from the above analysis. This paper also presents the transverse distribution method of restraining force used for planning ground stabilization, on the basis of an example analysis.
Designing arrays for modern high-resolution methods
Dowla, F.U.
1987-10-01
A bearing estimation study of seismic wavefields propagating from a strongly heterogeneous medium shows that, with the high-resolution MUSIC algorithm, the bias of the direction estimate can be reduced by adopting a smaller-aperture sub-array. Further, on this sub-array, the bias of the MUSIC algorithm is less than those of the MLM and Bartlett methods. On the full array, the performances of the three methods are comparable. The improvement in bearing estimation in MUSIC with a reduced aperture might be attributed to increased signal coherency in the array. For methods with less resolution, the improved signal coherency in the smaller array is possibly offset by severe loss of resolution and the presence of weak secondary sources. Building upon the characteristics of real seismic wavefields, a design language has been developed to generate, modify, and test other arrays. Eigenstructures of wavefields and arrays have been studied empirically by simulation of a variety of realistic signals. 6 refs., 5 figs.
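A minimal MUSIC bearing estimate on synthetic data shows the subspace idea the study relies on. The uniform linear array, sensor count, SNR, and single-source assumption below are illustrative choices, not the seismic arrays of the study.

```python
import numpy as np

# Minimal MUSIC bearing estimate on a synthetic uniform linear array.
rng = np.random.default_rng(0)
M, d, n_snap = 8, 0.5, 200           # sensors, spacing [wavelengths], snapshots
theta_true = 20.0                    # source bearing [deg]

def steer(th):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.deg2rad(th)))

s = rng.normal(size=n_snap) + 1j * rng.normal(size=n_snap)
noise = 0.1 * (rng.normal(size=(M, n_snap)) + 1j * rng.normal(size=(M, n_snap)))
x = np.outer(steer(theta_true), s) + noise

R = x @ x.conj().T / n_snap          # sample covariance
_, v = np.linalg.eigh(R)             # eigenvalues in ascending order
En = v[:, :-1]                       # noise subspace (one source assumed)

grid = np.arange(-90.0, 90.0, 0.1)
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(th)) ** 2
              for th in grid])       # MUSIC pseudospectrum
theta_hat = grid[int(np.argmax(p))]
```

Shrinking the aperture in the study corresponds to using fewer sensors M here: resolution drops, but coherency-induced bias can drop too.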
Basic research on design analysis methods for rotorcraft vibrations
NASA Technical Reports Server (NTRS)
Hanagud, S.
1991-01-01
The objective of the present work was to develop a method for identifying physically plausible finite element system models of airframe structures from test data. The assumed models were based on linear elastic behavior with general (nonproportional) damping. Physical plausibility of the identified system matrices was insured by restricting the identification process to designated physical parameters only and not simply to the elements of the system matrices themselves. For example, in a large finite element model the identified parameters might be restricted to the moduli for each of the different materials used in the structure. In the case of damping, a restricted set of damping values might be assigned to finite elements based on the material type and on the fabrication processes used. In this case, different damping values might be associated with riveted, bolted and bonded elements. The method itself is developed first, and several approaches are outlined for computing the identified parameter values. The method is applied first to a simple structure for which the 'measured' response is actually synthesized from an assumed model. Both stiffness and damping parameter values are accurately identified. The true test, however, is the application to a full-scale airframe structure. In this case, a NASTRAN model and actual measured modal parameters formed the basis for the identification of a restricted set of physically plausible stiffness and damping parameters.
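The restriction to designated physical parameters can be sketched on a tiny synthetic case, mirroring the paper's first test where the "measured" response is synthesized from an assumed model. The 4-DOF spring-mass chain and all values below are illustrative stand-ins, not an airframe model.

```python
import numpy as np

# Sketch: identify a physical modulus scale factor E so that model
# natural frequencies match "measured" ones, instead of adjusting
# stiffness-matrix entries freely.
def frequencies(E):
    k = E * 1.0e4                               # element stiffness ~ modulus
    K = k * (2.0 * np.eye(4) - np.eye(4, k=1) - np.eye(4, k=-1))
    K[-1, -1] = k                               # fixed-free chain, free end
    return np.sqrt(np.sort(np.linalg.eigvalsh(K)))   # unit masses (M = I)

E_true = 2.5
f_meas = frequencies(E_true)                    # synthesized "measurement"

# Frequencies scale as sqrt(E) in this model, so one least-squares
# update is exact; a general model would iterate this correction.
E0 = 1.0
E_hat = E0 * np.mean(f_meas / frequencies(E0)) ** 2
```

Because only the physical parameter E is adjusted, the identified model stays physically plausible by construction, which is the point of the method.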
AmiRNA Designer - new method of artificial miRNA design.
Mickiewicz, Agnieszka; Rybarczyk, Agnieszka; Sarzynska, Joanna; Figlerowicz, Marek; Blazewicz, Jacek
2016-01-01
MicroRNAs (miRNAs) are small non-coding RNAs that have been found in most eukaryotic organisms. They are involved in the regulation of gene expression at the post-transcriptional level in a sequence-specific manner. MiRNAs are produced from their precursors by the Dicer-dependent small RNA biogenesis pathway. The involvement of miRNAs in a wide range of biological processes makes them excellent candidates for studying gene function or for therapeutic applications. For this purpose, different RNA-based gene silencing techniques have been developed. Artificial miRNAs (amiRNAs) targeting one or several genes of interest represent one such technique and a potential tool in functional genomics. Here, we present a new approach to amiRNA design, implemented as the AmiRNA Designer software. Our method is based on the thermodynamic analysis of the native miRNA/miRNA* and miRNA/target duplexes. In contrast to the available automated tools, our program allows the user to perform analysis of natural miRNAs for the organism of interest and to create customized constraints for the design stage. It also provides filtering of the amiRNA candidates for potential off-targets. AmiRNA Designer is freely available at http://www.cs.put.poznan.pl/arybarczyk/AmiRNA/. PMID:26784022
Information processing systems, reasoning modules, and reasoning system design methods
Hohimer, Ryan E.; Greitzer, Frank L.; Hampton, Shawn D.
2016-08-23
Information processing systems, reasoning modules, and reasoning system design methods are described. According to one aspect, an information processing system includes working memory comprising a semantic graph which comprises a plurality of abstractions, wherein the abstractions individually include an individual which is defined according to an ontology and a reasoning system comprising a plurality of reasoning modules which are configured to process different abstractions of the semantic graph, wherein a first of the reasoning modules is configured to process a plurality of abstractions which include individuals of a first classification type of the ontology and a second of the reasoning modules is configured to process a plurality of abstractions which include individuals of a second classification type of the ontology, wherein the first and second classification types are different.
Computational methods in metabolic engineering for strain design.
Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L
2015-08-01
Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native enzymes, heterologous enzymes, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as to suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve runtime performance have also been developed, which allow more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms.
Unique Method for Generating Design Earthquake Time Histories
R. E. Spears
2008-07-01
A method has been developed which takes a seed earthquake time history and modifies it to produce given design response spectra. It is a multi-step process with an initial scaling step and then multiple refinement steps. It is unique in that both the acceleration and displacement response spectra are considered when performing the fit (which primarily improves the low-frequency acceleration response spectrum accuracy). Additionally, no matrix inversion is needed. The features include encouraging the code acceleration, velocity, and displacement ratios and attempting to fit the pseudo-velocity response spectrum. Also, “smoothing” is done to transition the modified time history to the seed time history at its start and end. This is done in the time history regions below a cumulative energy of 5% and above a cumulative energy of 95%. Finally, the modified acceleration, velocity, and displacement time histories are adjusted to start and end with an amplitude of zero (using Fourier transform techniques for integration).
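The 5%–95% cumulative-energy windowing described above can be sketched as follows. This is a minimal illustration of the idea, not Spears' implementation; the function names and the linear ramp outside the energy window are our assumptions:

```python
def cumulative_energy_fraction(accel):
    """Running fraction of total signal energy (sum of a^2 over samples)."""
    total = sum(a * a for a in accel)
    frac, running = [], 0.0
    for a in accel:
        running += a * a
        frac.append(running / total)
    return frac

def taper_weights(accel, lo=0.05, hi=0.95):
    """Blend weights for the modified record: 1 inside the 5%-95%
    cumulative-energy window, ramping linearly to 0 outside it, so the
    modified history transitions back to the seed record at its ends."""
    frac = cumulative_energy_fraction(accel)
    i0 = next(i for i, f in enumerate(frac) if f >= lo)
    i1 = next(i for i, f in enumerate(frac) if f >= hi)
    n = len(accel) - 1
    w = []
    for i in range(len(accel)):
        if i < i0:
            w.append(i / i0 if i0 else 1.0)     # ramp up before 5% energy
        elif i > i1:
            w.append((n - i) / (n - i1) if n > i1 else 1.0)  # ramp down after 95%
        else:
            w.append(1.0)                       # full modification in between
    return w
```

A final time history would then be `w*modified + (1-w)*seed`, sample by sample.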
Development of impact design methods for ceramic gas turbine components
NASA Technical Reports Server (NTRS)
Song, J.; Cuccio, J.; Kington, H.
1990-01-01
Impact damage prediction methods are being developed to aid in the design of ceramic gas turbine engine components with improved impact resistance. Two impact damage modes were characterized: local, near the impact site, and structural, usually fast fracture away from the impact site. Local damage to Si3N4 impacted by Si3N4 spherical projectiles consists of ring and/or radial cracks around the impact point. In a mechanistic model being developed, impact damage is characterized as microcrack nucleation and propagation. The extent of damage is measured as volume fraction of microcracks. Model capability is demonstrated by simulating late impact tests. Structural failure is caused by tensile stress during impact exceeding material strength. The EPIC3 code was successfully used to predict blade structural failures in different size particle impacts on radial and axial blades.
Design method of water jet pump towards high cavitation performances
NASA Astrophysics Data System (ADS)
Cao, L. L.; Che, B. X.; Hu, L. J.; Wu, D. Z.
2016-05-01
As one of the crucial components for power supply, the propulsion system is of great significance to the advance speed, noise performance, stability and other associated critical performances of underwater vehicles. Developing towards much higher advance speeds, underwater vehicles make more critical demands on the performance of the propulsion system. Basically, the increased advance speed requires a significantly raised rotation speed of the propulsion system, which would result in deteriorated cavitation performance and consequently limit the thrust and efficiency of the whole system. Compared with the traditional propeller, the water jet pump offers more favourable cavitation, propulsion efficiency and other associated performances. The present research focuses on the cavitation performance of the waterjet pump blade profile in expectation of enlarging its advantages in high-speed vehicle propulsion. Based on the specifications of a certain underwater vehicle, the design method of the waterjet blade with high cavitation performance was investigated in terms of numerical simulation.
Virtual Design Method for Controlled Failure in Foldcore Sandwich Panels
NASA Astrophysics Data System (ADS)
Sturm, Ralf; Fischer, S.
2015-12-01
For certification, novel fuselage concepts have to prove equivalent crashworthiness standards compared to the existing metal reference design. Due to the brittle failure behaviour of CFRP, this requirement can only be fulfilled by controlled progressive crash kinematics. Experiments showed that the failure of a twin-walled fuselage panel can be controlled by a local modification of the core through-thickness compression strength. For folded cores the required change in core properties can be integrated by a modification of the fold pattern. However, the complexity of folded cores requires a virtual design methodology for tailoring the fold pattern according to all static and crash relevant requirements. In this context a foldcore micromodel simulation method is presented to identify the structural response of a twin-walled fuselage panel with folded core under crash relevant loading conditions. The simulations showed that a high degree of correlation is required before simulation can replace expensive testing. In the presented studies, the necessary correlation quality could only be obtained by including imperfections of the core material in the micromodel simulation approach.
ERIC Educational Resources Information Center
Rowley, Kurt
2005-01-01
A multi-stage study of the practices of expert courseware designers was conducted with the final goal of identifying methods for assisting non-experts with the design of effective instructional systems. A total of 25 expert designers were involved in all stages of the inquiry. A model of the expert courseware design process was created, tested,…
The Method of Complex Characteristics for Design of Transonic Compressors.
NASA Astrophysics Data System (ADS)
Bledsoe, Margaret Randolph
We calculate shockless transonic flows past two-dimensional cascades of airfoils characterized by a prescribed speed distribution. The approach is to find solutions of the partial differential equation (c² − u²)Φ_xx − 2uvΦ_xy + (c² − v²)Φ_yy = 0 by the method of complex characteristics. Here Φ is the velocity potential, so ∇Φ = (u,v), and c is the local speed of sound. Our method consists in noting that the coefficients of the equation are analytic, so that we can use analytic continuation, conformal mapping, and a spectral method in the hodograph plane to determine the flow. After complex extension we obtain canonical equations for Φ and for the stream function ψ as well as an explicit map from the hodograph plane to complex characteristic coordinates. In the subsonic case, a new coordinate system is defined in which the flow region corresponds to the interior of an ellipse. We construct special solutions of the flow equations in these coordinates by solving characteristic initial value problems in the ellipse with initial data defined by the complete system of Chebyshev polynomials. The condition ψ = 0 on the boundary of the ellipse is used to determine the series representation of Φ and ψ. The map from the ellipse to the complex flow coordinates is found from data specifying the speed q as a function of the arc length s. The transonic problem for shockless flow becomes well posed after appropriate modifications of this procedure. The nonlinearity of the problem is handled by an iterative method that determines the boundary value problem in the ellipse and the map function in sequence. We have implemented this method as a computer code to design two-dimensional cascades of shockless compressor airfoils with gap-to-chord ratios as low as 0.5 and supersonic zones on both the upper and lower surfaces. The method may be extended to solve more general boundary value problems for second order partial
Shimada, Masato; Suzuki, Wataru; Yamada, Shuho; Inoue, Masato
2016-01-01
To achieve a Universal Design, designers must consider diverse users' physical and functional requirements for their products. However, satisfying these requirements and obtaining the information which is necessary for designing a universal product is very difficult. Therefore, we propose a new design method based on the concept of set-based design to solve these issues. This paper discusses the suitability of the proposed design method by applying it to a bicycle frame design problem. PMID:27534334
Design optimization methods for genomic DNA tiling arrays.
Bertone, Paul; Trifonov, Valery; Rozowsky, Joel S; Schubert, Falk; Emanuelsson, Olof; Karro, John; Kao, Ming-Yang; Snyder, Michael; Gerstein, Mark
2006-02-01
A recent development in microarray research entails the unbiased coverage, or tiling, of genomic DNA for the large-scale identification of transcribed sequences and regulatory elements. A central issue in designing tiling arrays is that of arriving at a single-copy tile path, as significant sequence cross-hybridization can result from the presence of non-unique probes on the array. Due to the fragmentation of genomic DNA caused by the widespread distribution of repetitive elements, the problem of obtaining adequate sequence coverage increases with the sizes of subsequence tiles that are to be included in the design. This becomes increasingly problematic when considering complex eukaryotic genomes that contain many thousands of interspersed repeats. The general problem of sequence tiling can be framed as finding an optimal partitioning of non-repetitive subsequences over a prescribed range of tile sizes, on a DNA sequence comprising repetitive and non-repetitive regions. Exact solutions to the tiling problem become computationally infeasible when applied to large genomes, but successive optimizations are developed that allow their practical implementation. These include an efficient method for determining the degree of similarity of many oligonucleotide sequences over large genomes, and two algorithms for finding an optimal tile path composed of longer sequence tiles. The first algorithm, a dynamic programming approach, finds an optimal tiling in linear time and space; the second applies a heuristic search to reduce the space complexity to a constant requirement. A Web resource has also been developed, accessible at http://tiling.gersteinlab.org, to generate optimal tile paths from user-provided DNA sequences.
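The dynamic-programming formulation described above can be illustrated with a simplified sketch: choose non-overlapping tiles within a prescribed length range that avoid repeat-masked positions, maximizing covered sequence. This is an illustration of the general idea only, under our own simplifying assumptions; the published algorithm also scores probe uniqueness across the genome:

```python
def optimal_tiling(is_repeat, lmin, lmax):
    """DP over sequence positions: best[i] is the maximum non-repetitive
    coverage achievable in the first i bases using tiles of length
    lmin..lmax that contain no repeat-masked position."""
    n = len(is_repeat)
    # prefix counts of repetitive bases allow O(1) repeat-free window checks
    pref = [0] * (n + 1)
    for i, r in enumerate(is_repeat):
        pref[i + 1] = pref[i] + (1 if r else 0)
    best = [0] * (n + 1)
    choice = [None] * (n + 1)   # length of a tile ending at i, if one is used
    for i in range(1, n + 1):
        best[i], choice[i] = best[i - 1], None
        for L in range(lmin, min(lmax, i) + 1):
            if pref[i] - pref[i - L] == 0 and best[i - L] + L > best[i]:
                best[i], choice[i] = best[i - L] + L, L
    # backtrack the chosen tile path as (start, end) half-open intervals
    tiles, i = [], n
    while i > 0:
        if choice[i] is None:
            i -= 1
        else:
            tiles.append((i - choice[i], i))
            i -= choice[i]
    return best[n], tiles[::-1]
```

The loop is O(n·(lmax−lmin)), linear in sequence length for a fixed tile-size range, mirroring the linear time and space claim in the abstract.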
The Convergence Insufficiency Treatment Trial: Design, Methods, and Baseline Data
2009-01-01
Objective This report describes the design and methodology of the Convergence Insufficiency Treatment Trial (CITT), the first large-scale, placebo-controlled, randomized clinical trial evaluating treatments for convergence insufficiency (CI) in children. We also report the clinical and demographic characteristics of patients. Methods We prospectively randomized children 9 to 17 years of age to one of four treatment groups: 1) home-based pencil push-ups, 2) home-based computer vergence/accommodative therapy and pencil push-ups, 3) office-based vergence/accommodative therapy with home reinforcement, 4) office-based placebo therapy. Outcome data on the Convergence Insufficiency Symptom Survey (CISS) score (primary outcome), near point of convergence (NPC), and positive fusional vergence were collected after 12 weeks of active treatment and again at 6 and 12 months post-treatment. Results The CITT enrolled 221 children with symptomatic CI with a mean age of 12.0 years (SD = ±2.3). The clinical profile of the cohort at baseline was 9Δ exophoria at near (±4.4) and 2Δ exophoria (±2.8) at distance, CISS score = 30 (±9.0), NPC = 14 cm (±7.5), and near positive fusional vergence break = 13Δ (±4.6). There were no statistically significant or clinically relevant differences between treatment groups with respect to baseline characteristics (p > 0.05). Conclusion Hallmark features of the study design include formal definitions of conditions and outcomes, standardized diagnostic and treatment protocols, a placebo treatment arm, masked outcome examinations, and the CISS score outcome measure. The baseline data reported herein define the clinical profile of those enrolled into the CITT. PMID:18300086
Visual Narrative Research Methods as Performance in Industrial Design Education
ERIC Educational Resources Information Center
Campbell, Laurel H.; McDonagh, Deana
2009-01-01
This article discusses teaching empathic research methodology as performance. The authors describe their collaboration in an activity to help undergraduate industrial design students learn empathy for others when designing products for use by diverse or underrepresented people. The authors propose that an industrial design curriculum would benefit…
The Chinese American Eye Study: Design and Methods
Varma, Rohit; Hsu, Chunyi; Wang, Dandan; Torres, Mina; Azen, Stanley P.
2016-01-01
Purpose To summarize the study design, operational strategies and procedures of the Chinese American Eye Study (CHES), a population-based assessment of the prevalence of visual impairment, ocular disease, and visual functioning in Chinese Americans. Methods This population-based, cross-sectional study included 4,570 Chinese, 50 years and older, residing in the city of Monterey Park, California. Each eligible participant completed a detailed interview and eye examination. The interview included an assessment of demographic, behavioral, and ocular risk factors and health-related and vision-related quality of life. The eye examination included measurements of visual acuity, intraocular pressure, visual fields, fundus and optic disc photography, a detailed anterior and posterior segment examination, and measurements of blood pressure, glycosylated hemoglobin levels, and blood glucose levels. Results The objectives of the CHES are to obtain prevalence estimates of visual impairment, refractive error, diabetic retinopathy, open-angle and angle-closure glaucoma, lens opacities, and age-related macular degeneration in Chinese-Americans. In addition, outcomes include effect estimates for risk factors associated with eye diseases. Lastly, CHES will investigate the genetic determinants of myopia and glaucoma. Conclusion The CHES will provide information about the prevalence and risk factors of ocular diseases in one of the fastest growing minority groups in the United States. PMID:24044409
Design and methods of the national Vietnam veterans longitudinal study.
Schlenger, William E; Corry, Nida H; Kulka, Richard A; Williams, Christianna S; Henn-Haase, Clare; Marmar, Charles R
2015-09-01
The National Vietnam Veterans Longitudinal Study (NVVLS) is the second assessment of a representative cohort of US veterans who served during the Vietnam War era, either in Vietnam or elsewhere. The cohort was initially surveyed in the National Vietnam Veterans Readjustment Study (NVVRS) from 1984 to 1988 to assess the prevalence, incidence, and effects of post-traumatic stress disorder (PTSD) and other post-war problems. The NVVLS sought to re-interview the cohort to assess the long-term course of PTSD. NVVLS data collection began July 3, 2012 and ended May 17, 2013, comprising three components: a mailed health questionnaire, a telephone health survey interview, and, for a probability sample of theater Veterans, a clinical diagnostic telephone interview administered by licensed psychologists. Excluding decedents, 78.8% completed the questionnaire and/or telephone survey, and 55.0% of selected living veterans participated in the clinical interview. This report provides a description of the NVVLS design and methods. Together, the NVVRS and NVVLS constitute a nationally representative longitudinal study of Vietnam veterans, and extend the NVVRS as a critical resource for scientific and policy analyses for Vietnam veterans, with policy relevance for Iraq and Afghanistan veterans.
A decision-based perspective for the design of methods for systems design
NASA Technical Reports Server (NTRS)
Mistree, Farrokh; Muster, Douglas; Shupe, Jon A.; Allen, Janet K.
1989-01-01
Organization of material, a definition of decision-based design, a hierarchy of decision-based design, the decision support problem technique, a conceptual model of design that can be manufactured and maintained, meta-design, computer-based design, action learning, and the characteristics of decisions are among the topics covered.
Applications of numerical optimization methods to helicopter design problems: A survey
NASA Technical Reports Server (NTRS)
Miura, H.
1984-01-01
A survey is presented of applications of mathematical programming methods used to improve the design of helicopters and their components. Applications of multivariable search techniques in the finite dimensional space are considered. Five categories of helicopter design problems are considered: (1) conceptual and preliminary design, (2) rotor-system design, (3) airframe structures design, (4) control system design, and (5) flight trajectory planning. Key technical progress in numerical optimization methods relevant to rotorcraft applications is summarized.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1998-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimization (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behavior by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw
1999-01-01
The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.
Stillbirth Collaborative Research Network: design, methods and recruitment experience.
Parker, Corette B; Hogue, Carol J R; Koch, Matthew A; Willinger, Marian; Reddy, Uma M; Thorsten, Vanessa R; Dudley, Donald J; Silver, Robert M; Coustan, Donald; Saade, George R; Conway, Deborah; Varner, Michael W; Stoll, Barbara; Pinar, Halit; Bukowski, Radek; Carpenter, Marshall; Goldenberg, Robert
2011-09-01
The Stillbirth Collaborative Research Network (SCRN) has conducted a multisite, population-based, case-control study, with prospective enrollment of stillbirths and livebirths at the time of delivery. This paper describes the general design, methods and recruitment experience. The SCRN attempted to enroll all stillbirths and a representative sample of livebirths occurring to residents of pre-defined geographical catchment areas delivering at 59 hospitals associated with five clinical sites. Livebirths <32 weeks gestation and women of African descent were oversampled. The recruitment hospitals were chosen to ensure access to at least 90% of all stillbirths and livebirths to residents of the catchment areas. Participants underwent a standardised protocol including maternal interview, medical record abstraction, placental pathology, biospecimen testing and, in stillbirths, post-mortem examination. Recruitment began in March 2006 and was completed in September 2008 with 663 women with a stillbirth and 1932 women with a livebirth enrolled, representing 69% and 63%, respectively, of the women identified. Additional surveillance for stillbirths continued until June 2009 and a follow-up of the case-control study participants was completed in December 2009. Among consenting women, there were high consent rates for the various study components. For the women with stillbirths, 95% agreed to a maternal interview, chart abstraction and a placental pathological examination; 91% of the women with a livebirth agreed to all of these components. Additionally, 84% of the women with stillbirths agreed to a fetal post-mortem examination. This comprehensive study is poised to systematically study a wide range of potential causes of, and risk factors for, stillbirths and to better understand the scope and incidence of the problem.
NASA Technical Reports Server (NTRS)
Merchant, D. H.
1976-01-01
Methods are presented for calculating design limit loads compatible with probabilistic structural design criteria. The approach is based on the concept that the desired limit load, defined as the largest load occurring in a mission, is a random variable having a specific probability distribution which may be determined from extreme-value theory. The design limit load, defined as a particular value of this random limit load, is the value conventionally used in structural design. Methods are presented for determining the limit load probability distributions from both time-domain and frequency-domain dynamic load simulations. Numerical demonstrations of the method are also presented.
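The extreme-value idea can be sketched numerically: fit a Gumbel (extreme-value type I) distribution to a sample of per-mission peak loads and read off the load at a chosen non-exceedance probability. This is an illustrative sketch, not the procedure of the report; the method-of-moments fit and the 0.99 probability level are our assumptions:

```python
import math

def fit_gumbel(maxima):
    """Method-of-moments fit of a Gumbel distribution to per-mission peak loads."""
    n = len(maxima)
    mean = sum(maxima) / n
    var = sum((x - mean) ** 2 for x in maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi      # scale parameter
    mu = mean - 0.5772156649 * beta            # location (Euler-Mascheroni constant)
    return mu, beta

def design_limit_load(maxima, p=0.99):
    """Load whose per-mission non-exceedance probability is p
    (the Gumbel quantile function)."""
    mu, beta = fit_gumbel(maxima)
    return mu - beta * math.log(-math.log(p))
```

Raising `p` moves the design limit load further into the tail of the fitted peak-load distribution, which is the trade-off a probabilistic design criterion makes explicit.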
Developing Baby Bag Design by Using Kansei Engineering Method
NASA Astrophysics Data System (ADS)
Janari, D.; Rakhmawati, A.
2016-01-01
Consumer preferences and market demand are essential factors for a product's success. Thus, to achieve success, a product should have a design that fulfills consumers' expectations. The purpose of this research is to develop a baby bag product as stipulated by Kansei. The Kansei words represented in the results are: neat, unique, comfortable, safe, modern, gentle, elegant, antique, attractive, simple, spacious, creative, colorful, durable, stylish, smooth and strong. The significance of the correlation for the durable attribute is 0.000 < 0.05, which means it is significant for the baby bag, while the regression coefficient value is 0.812 > 0.05, which means the durable attribute is insignificant for the baby bag. The final baby bag design, selected on the basis of questionnaire 3, combines elements of all the designs. The space for clothes, diaper space, shoulder grip, side grip, bottle-heater pocket and bottle pocket are derived from design 1; the top grip, space for clothes, shoulder grip, and side grip are derived from design 2. Other design elements taken are the space for clothes from design 3, and the diaper space and clothes space from design 4.
Teaching Improvement Model Designed with DEA Method and Management Matrix
ERIC Educational Resources Information Center
Montoneri, Bernard
2014-01-01
This study uses student evaluation of teachers to design a teaching improvement matrix based on teaching efficiency and performance by combining management matrix and data envelopment analysis. This matrix is designed to formulate suggestions to improve teaching. The research sample consists of 42 classes of freshmen following a course of English…
METHODS FOR INTEGRATING ENVIRONMENTAL CONSIDERATIONS INTO CHEMICAL PROCESS DESIGN DECISIONS
The objective of this cooperative agreement was to postulate a means by which an engineer could routinely include environmental considerations in day-to-day conceptual design problems; a means that could easily integrate with existing design processes, and thus avoid massive retr...
Development of Combinatorial Methods for Alloy Design and Optimization
Pharr, George M.; George, Easo P.; Santella, Michael L
2005-07-01
The primary goal of this research was to develop a comprehensive methodology for designing and optimizing metallic alloys by combinatorial principles. Because conventional techniques for alloy preparation are unavoidably restrictive in the range of alloy composition that can be examined, combinatorial methods promise to significantly reduce the time, energy, and expense needed for alloy design. Combinatorial methods can be developed not only to optimize existing alloys, but to explore and develop new ones as well. The scientific approach involved fabricating an alloy specimen with a continuous distribution of binary and ternary alloy compositions across its surface (an "alloy library") and then using spatially resolved probing techniques to characterize its structure, composition, and relevant properties. The three specific objectives of the project were: (1) to devise means by which simple test specimens with a library of alloy compositions spanning the range of interest can be produced; (2) to assess how well the properties of the combinatorial specimen reproduce those of the conventionally processed alloys; and (3) to devise screening tools which can be used to rapidly assess the important properties of the alloys. As proof of principle, the methodology was applied to the Fe-Ni-Cr ternary alloy system that constitutes many commercially important materials such as stainless steels and the H-series and C-series heat and corrosion resistant casting alloys. Three different techniques were developed for making alloy libraries: (1) vapor deposition of discrete thin films on an appropriate substrate and then alloying them together by solid-state diffusion; (2) co-deposition of the alloying elements from three separate magnetron sputtering sources onto an inert substrate; and (3) localized melting of thin films with a focused electron-beam welding system. Each of the techniques was found to have its own advantages and disadvantages. A new and very powerful technique for
Categorisation of visualisation methods to support the design of Human-Computer Interaction Systems.
Li, Katie; Tiwari, Ashutosh; Alcock, Jeffrey; Bermell-Garcia, Pablo
2016-07-01
During the design of Human-Computer Interaction (HCI) systems, the creation of visual artefacts forms an important part of design. On the one hand, producing a visual artefact has a number of advantages: it helps designers to externalise their thoughts and acts as a common language between different stakeholders. On the other hand, an inappropriate visualisation method could hinder the design process. To support the design of HCI systems, this paper reviews the categorisation of visualisation methods used in HCI. A keyword search is conducted to identify (a) current HCI design methods and (b) approaches to selecting these methods. The resulting design methods are filtered to create a list of visualisation methods only. These are then categorised using the approaches identified in (b). As a result, 23 HCI visualisation methods are identified and categorised under 5 selection approaches (The Recipient, Primary Purpose, Visual Archetype, Interaction Type, and The Design Process). PMID:26995039
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work, existing methods and problems in dual foil electron beam forming system design are presented. On this basis, a new method of designing these systems is introduced. The motivation behind this work is to eliminate the shortcomings of the existing design methods and improve the overall efficiency of the dual foil design process. The existing methods are based on approximate analytical models applied in an unrealistically simplified geometry. Designing a dual foil system with these methods is a rather labor-intensive task, as corrections to account for effects not included in the analytical models have to be calculated separately and accounted for in an iterative procedure. To eliminate these drawbacks, the new design method is based entirely on Monte Carlo modeling in a realistic geometry, using physics models that include all relevant processes. In our approach, an optimal configuration of the dual foil system is found by means of a systematic, automated scan of the system performance as a function of the foil parameters. The new method, while computationally intensive, minimizes the involvement of the designer and considerably shortens the overall design time. The results are of high quality, as all the relevant physics and geometry details are naturally accounted for. To demonstrate the feasibility of practical implementation of the new method, specialized software tools were developed and applied to solve a real-life design problem, as described in Part II of this work.
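The automated parameter scan at the heart of the method can be sketched in a few lines. The objective function below is a trivial analytic stand-in for a real Monte Carlo transport run (a code such as Geant4 or MCNP would take its place), and the grids and thicknesses are made-up numbers, not values from the paper:

```python
import itertools

def beam_uniformity(t1, t2):
    """Placeholder objective: stand-in for a Monte Carlo transport
    simulation scoring beam uniformity for foil thicknesses t1, t2 (mm).
    Hypothetical smooth response peaking near t1=0.1, t2=0.5."""
    return -((t1 - 0.1) ** 2 + (t2 - 0.5) ** 2)

def scan(t1_values, t2_values):
    """Systematic scan over the foil-parameter grid; returns the best pair."""
    return max(itertools.product(t1_values, t2_values),
               key=lambda p: beam_uniformity(*p))

t1_grid = [0.05 * i for i in range(1, 9)]   # first-foil thicknesses (mm)
t2_grid = [0.1 * i for i in range(1, 11)]   # second-foil thicknesses (mm)
print(scan(t1_grid, t2_grid))
```

In a real design tool each objective evaluation would be an expensive simulation, which is why the paper emphasizes that the scan is computationally intensive but designer-free.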
The cryogenic balance design and balance calibration methods
NASA Astrophysics Data System (ADS)
Ewald, B.; Polanski, L.; Graewe, E.
1992-07-01
The current status of a program aimed at the development of a cryogenic balance for the European Transonic Wind Tunnel is reviewed. In particular, attention is given to the cryogenic balance design philosophy, mechanical balance design, reliability and accuracy, cryogenic balance calibration concept, and the concept of an automatic calibration machine. It is shown that the use of the automatic calibration machine will improve the accuracy of calibration while reducing the man power and time required for balance calibration.
Advanced transonic fan design procedure based on a Navier-Stokes method
NASA Astrophysics Data System (ADS)
Rhie, C. M.; Zacharias, R. M.; Hobbs, D. E.; Sarathy, K. P.; Biederman, B. P.; Lejambre, C. R.; Spear, D. A.
1994-04-01
A fan performance analysis method based upon three-dimensional steady Navier-Stokes equations is presented in this paper. Its accuracy is established through extensive code validation effort. Validation data comparisons ranging from a two-dimensional compressor cascade to three-dimensional fans are shown in this paper to highlight the accuracy and reliability of the code. The overall fan design procedure using this code is then presented. Typical results of this design process are shown for a current engine fan design. This new design method introduces a major improvement over the conventional design methods based on inviscid flow and boundary layer concepts. Using the Navier-Stokes design method, fan designers can confidently refine their designs prior to rig testing. This results in reduced rig testing and cost savings as the bulk of the iteration between design and experimental verification is transferred to an iteration between design and computational verification.
Aircraft design for mission performance using nonlinear multiobjective optimization methods
NASA Technical Reports Server (NTRS)
Dovi, Augustine R.; Wrenn, Gregory A.
1990-01-01
A new technique which converts a constrained optimization problem to an unconstrained one where conflicting figures of merit may be simultaneously considered was combined with a complex mission analysis system. The method is compared with existing single and multiobjective optimization methods. A primary benefit from this new method for multiobjective optimization is the elimination of separate optimizations for each objective, which is required by some optimization methods. A typical wide body transport aircraft is used for the comparative studies.
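The conversion from a constrained multiobjective problem to a single unconstrained one is commonly done with an envelope function. The sketch below uses the Kreisselmeier-Steinhauser (KS) envelope, one standard technique of this kind; the abstract does not give the actual formulation, so the two objectives, the constraint, and all parameters are illustrative assumptions:

```python
import math

def ks(values, rho=50.0):
    """Kreisselmeier-Steinhauser envelope: a smooth upper bound on
    max(values); larger rho tracks the maximum more tightly."""
    m = max(values)  # factor out the max for numerical stability
    return m + math.log(sum(math.exp(rho * (v - m)) for v in values)) / rho

def unconstrained_objective(x):
    """Fold two conflicting figures of merit and one constraint into a
    single scalar (all three functions are hypothetical)."""
    f1 = (x - 1.0) ** 2          # hypothetical objective 1
    f2 = (x + 1.0) ** 2 / 4.0    # hypothetical objective 2
    g = x - 2.0                  # hypothetical constraint x <= 2
    return ks([f1, f2, g])

# Coarse 1-D search: the envelope has an interior minimum where the two
# objectives trade off (near x = 1/3 for these particular choices).
xs = [i / 100.0 for i in range(-300, 301)]
best_x = min(xs, key=unconstrained_objective)
print(best_x)
```

Because every objective and constraint enters the same scalar, a single unconstrained optimization replaces the separate per-objective optimizations the abstract says are eliminated.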
A Preliminary Rubric Design to Evaluate Mixed Methods Research
ERIC Educational Resources Information Center
Burrows, Timothy J.
2013-01-01
With the increase in frequency of the use of mixed methods, both in research publications and in externally funded grants there are increasing calls for a set of standards to assess the quality of mixed methods research. The purpose of this mixed methods study was to conduct a multi-phase analysis to create a preliminary rubric to evaluate mixed…
Statistical Methods for Rapid Aerothermal Analysis and Design Technology
NASA Technical Reports Server (NTRS)
Morgan, Carolyn; DePriest, Douglas; Thompson, Richard (Technical Monitor)
2002-01-01
The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to establish statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The research work was focused on establishing the suitable mathematical/statistical models for these purposes. It is anticipated that the resulting models can be incorporated into a software tool to provide rapid, variable-fidelity, aerothermal environments to predict heating along an arbitrary trajectory. This work will support development of an integrated design tool to perform automated thermal protection system (TPS) sizing and material selection.
A method for designing robust multivariable feedback systems
NASA Technical Reports Server (NTRS)
Milich, David Albert; Athans, Michael; Valavani, Lena; Stein, Gunter
1988-01-01
A new methodology is developed for the synthesis of linear, time-invariant (LTI) controllers for multivariable LTI systems. The aim is to achieve stability and performance robustness of the feedback system in the presence of multiple unstructured uncertainty blocks; i.e., to satisfy a frequency-domain inequality in terms of the structured singular value. The design technique is referred to as the Causality Recovery Methodology (CRM). Starting with an initial (nominally) stabilizing compensator, the CRM produces a closed-loop system whose performance-robustness is at least as good as, and hopefully superior to, that of the original design. The robustness improvement is obtained by solving an infinite-dimensional, convex optimization program. A finite-dimensional implementation of the CRM was developed, and it was applied to a multivariate design example.
A design method for an intuitive web site
Quinniey, M.L.; Diegert, K.V.; Baca, B.G.; Forsythe, J.C.; Grose, E.
1999-11-03
The paper describes a methodology for designing a web site for human factors engineers that is applicable to designing a web site for any group of people. Many web pages on the World Wide Web are not organized in a format that allows a user to efficiently find information. Often the information and hypertext links on web pages are not organized into intuitive groups. Intuition implies that a person is able to use their knowledge of a paradigm to solve a problem. Intuitive groups are categories that allow web page users to find information by using their intuition or mental models of categories. In order to improve human factors engineers' efficiency in finding information on the World Wide Web, research was performed to develop a web site that serves as a tool for finding information effectively. The paper describes a methodology for designing a web site for a group of people who perform similar tasks in an organization.
Advanced 3D inverse method for designing turbomachine blades
Dang, T.
1995-10-01
To meet the goal of 60% plant-cycle efficiency or better set in the ATS Program for baseload utility scale power generation, several critical technologies need to be developed. One such need is the improvement of component efficiencies. This work addresses the issue of improving the performance of turbo-machine components in gas turbines through the development of an advanced three-dimensional and viscous blade design system. This technology is needed to replace some elements in current design systems that are based on outdated technology.
NASA Technical Reports Server (NTRS)
Chen, Shu-cheng, S.
2009-01-01
For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction for modern high-performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces for performing preliminary design and off-design analysis of modern aircraft engine turbines. Two validation cases for design and off-design prediction using TD2-2 and AXOD, conducted on two existing high-efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program, the High Pressure Turbine (HPT; two stages, air-cooled) and the Low Pressure Turbine (LPT; five stages, uncooled), are provided in support of the analysis and discussion presented in this paper.
Convergence of controllers designed using state space methods
NASA Technical Reports Server (NTRS)
Morris, K. A.
1991-01-01
The convergence of finite dimensional controllers for infinite dimensional systems designed using approximations is examined. Stable coprime factorization theory is used to show that under the standard assumptions of uniform stabilizability/detectability, the controllers stabilize the original system for large enough model order. The controllers converge uniformly to an infinite dimensional controller, as does the closed loop response.
Improved Methods for Classification, Prediction and Design of Antimicrobial Peptides
Wang, Guangshun
2015-01-01
Peptides with diverse amino acid sequences, structures and functions are essential players in biological systems. The construction of well-annotated databases not only facilitates effective information management, search and mining, but also lays the foundation for developing and testing new peptide algorithms and machines. The antimicrobial peptide database (APD) is an original construction in terms of both database design and peptide entries. The host defense antimicrobial peptides (AMPs) registered in the APD cover the five kingdoms (bacteria, protists, fungi, plants, and animals) or three domains of life (bacteria, archaea, and eukaryota). This comprehensive database (http://aps.unmc.edu/AP) provides useful information on peptide discovery timeline, nomenclature, classification, glossary, calculation tools, and statistics. The APD enables effective search, prediction, and design of peptides with antibacterial, antiviral, antifungal, antiparasitic, insecticidal, spermicidal, anticancer activities, chemotactic, immune modulation, or anti-oxidative properties. A universal classification scheme is proposed herein to unify innate immunity peptides from a variety of biological sources. As an improvement, the upgraded APD makes predictions based on the database-defined parameter space and provides a list of the sequences most similar to natural AMPs. In addition, the powerful pipeline design of the database search engine laid a solid basis for designing novel antimicrobials to combat resistant superbugs, viruses, fungi or parasites. This comprehensive AMP database is a useful tool for both research and education. PMID:25555720
Designing green corrosion inhibitors using chemical computation methods
Singhl, W.P.; Lin, G.; Bockris, J.O.M.; Kang, Y.
1998-12-31
Green corrosion inhibitors have been designed by understanding the relationships between the structure of organic compounds and toxicity as well as corrosion inhibition efficiency. The estimation of aquatic toxicity as well as corrosion inhibition efficiency are made using QSAR techniques. The predicted structures with reduced toxicity and improved corrosion inhibition efficiency are then tested experimentally for these properties, thus leading to green inhibitors.
A Prospective Method to Guide Small Molecule Drug Design
ERIC Educational Resources Information Center
Johnson, Alan T.
2015-01-01
At present, small molecule drug design follows a retrospective path when considering what analogs are to be made around a current hit or lead molecule with the focus often on identifying a compound with higher intrinsic potency. What this approach overlooks is the simultaneous need to also improve the physicochemical (PC) and pharmacokinetic (PK)…
Library Design Analysis Using Post-Occupancy Evaluation Methods.
ERIC Educational Resources Information Center
James, Dennis C.; Stewart, Sharon L.
1995-01-01
Presents findings of a user-based study of the interior of Rodger's Science and Engineering Library at the University of Alabama. Compared facility evaluations from faculty, library staff, and graduate and undergraduate students. Features evaluated include: acoustics, aesthetics, book stacks, design, finishes/materials, furniture, lighting,…
A Pareto-optimal refinement method for protein design scaffolds.
Nivón, Lucas Gregorio; Moretti, Rocco; Baker, David
2013-01-01
Computational design of protein function involves a search for amino acids with the lowest energy subject to a set of constraints specifying function. In many cases a set of natural protein backbone structures, or "scaffolds", are searched to find regions where functional sites (an enzyme active site, ligand binding pocket, protein-protein interaction region, etc.) can be placed, and the identities of the surrounding amino acids are optimized to satisfy functional constraints. Input native protein structures almost invariably have regions that score very poorly with the design force field, and any design based on these unmodified structures may result in mutations away from the native sequence solely as a result of the energetic strain. Because the input structure is already a stable protein, it is desirable to keep the total number of mutations to a minimum and to avoid mutations resulting from poorly-scoring input structures. Here we describe a protocol using cycles of minimization with combined backbone/sidechain restraints that is Pareto-optimal with respect to RMSD to the native structure and energetic strain reduction. The protocol should be broadly useful in the preparation of scaffold libraries for functional site design.
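The Pareto-optimality criterion used to balance RMSD against energetic strain can be illustrated with a simple dominance filter; the candidate points below are invented numbers, not results from the paper:

```python
def pareto_front(points):
    """Return the subset of (rmsd, energy) pairs not dominated by any
    other point, where both coordinates are 'smaller is better'.
    Note: exact duplicate points would dominate each other and be dropped."""
    front = []
    for p in points:
        dominated = any(q[0] <= p[0] and q[1] <= p[1] and q != p
                        for q in points)
        if not dominated:
            front.append(p)
    return front

# Hypothetical relax-and-score results: (RMSD to native in Angstroms,
# force-field energy). Lower RMSD and lower energy are both desirable.
candidates = [(0.2, -95.0), (0.4, -120.0), (0.6, -118.0),
              (0.9, -150.0), (1.5, -149.0)]
print(pareto_front(candidates))
```

A protocol like the one described would pick restraint weights along this front, trading a small RMSD increase for a large reduction in energetic strain.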
Study of design and analysis methods for transonic flow
NASA Technical Reports Server (NTRS)
Murman, E. M.
1977-01-01
An airfoil design program and a boundary layer analysis were developed. Boundary conditions were derived for ventilated transonic wind tunnels and performing transonic windtunnel wall calculations. A computational procedure for rotational transonic flow in engine inlet throats was formulated. Results and conclusions are summarized.
Overview of control design methods for smart structural system
NASA Astrophysics Data System (ADS)
Rao, Vittal S.; Sana, Sridhar
2001-08-01
Smart structures are the result of effectively integrating control system design and signal processing with structural systems, so as to maximally exploit new advances in materials for structures, actuation, and sensing and to obtain the best performance for the application at hand. Research in smart structures is constantly driving toward the self-adaptive and diagnostic capabilities that biological systems possess, as manifested in a number of successful applications in many areas of engineering, such as aerospace, civil, and automotive systems. Instrumental in the development of such systems are smart materials, such as piezoelectric, shape memory alloy, electrostrictive, magnetostrictive, and fiber-optic materials, and various composite materials, for use as actuators, sensors, and structural members. The need to develop control systems that maximally utilize smart actuators and sensing materials to design highly distributed and highly adaptable controllers has spurred research in smart structural modeling, identification, actuator/sensor design and placement, and control systems design, including adaptive and robust controllers built with tools such as neural networks, fuzzy logic, genetic algorithms, and linear matrix inequalities, as well as electronics for controller implementation such as analog electronics, microcontrollers, digital signal processors (DSPs), field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), and multichip modules (MCMs). In this paper, we give a brief overview of the state of control in smart structures. Different aspects of the development of smart structures, such as applications, technology, and theoretical advances, especially in the area of control systems design and implementation, are covered.
Optimal reliability design method for remote solar systems
NASA Astrophysics Data System (ADS)
Suwapaet, Nuchida
A unique optimal reliability design algorithm is developed for remote communication systems. The algorithm either minimizes the unavailability of the system within a fixed cost or minimizes the cost of the system subject to an unavailability constraint. The unavailability of the system is a function of three possible failure occurrences: individual component breakdown, solar energy deficiency (loss of load probability), and satellite/radio transmission loss. The three mathematical models of component failure, solar power failure, and transmission failure are combined and formulated as a nonlinear programming optimization problem with binary decision variables, such as the number and type (or size) of photovoltaic modules, batteries, radios, antennas, and controllers. The three possible failures are identified and integrated in a computer algorithm to generate the parameters for the optimization algorithm. The optimization algorithm is implemented with a branch-and-bound technique in MS Excel Solver. The algorithm is applied to a case-study design for an actual system to be set up in remote mountainous areas of Peru. The automated algorithm is verified with independent calculations. The optimal results from minimizing the unavailability of the system under the cost constraint and from minimizing the total cost of the system under the unavailability constraint are consistent with each other. The tradeoff feature of the algorithm allows designers to observe the results of 'what-if' scenarios of relaxing constraint bounds, thus obtaining the most benefit from the optimization process. An example of this approach applied to an existing communication system in the Andes shows dramatic improvement in reliability for little increase in cost. The algorithm is a real design tool, unlike other existing simulation design tools. The algorithm should be useful for other stochastic systems where component reliability, random supply and demand, and communication are
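The shape of the optimization, minimizing system unavailability over integer component choices subject to a cost budget, can be sketched as follows. A plain exhaustive scan stands in for the branch-and-bound solver, and all component costs and failure probabilities are hypothetical numbers:

```python
import itertools

# Hypothetical per-unit data: (cost in $, failure probability) for PV
# modules, batteries, and radios.
UNITS = {"pv": (400.0, 0.10), "battery": (250.0, 0.05), "radio": (600.0, 0.02)}

def system_unavailability(counts):
    """Series system of parallel groups: the system is up only if every
    group has at least one working unit; a group of k units fails with
    probability p**k (independent failures assumed)."""
    avail = 1.0
    for name, k in counts.items():
        p = UNITS[name][1]
        avail *= 1.0 - p ** k
    return 1.0 - avail

def optimize(budget, max_per_group=3):
    """Exhaustive scan (a stand-in for branch-and-bound): minimize
    unavailability over unit counts within the cost budget."""
    best = None
    for ks in itertools.product(range(1, max_per_group + 1), repeat=len(UNITS)):
        counts = dict(zip(UNITS, ks))
        cost = sum(UNITS[n][0] * k for n, k in counts.items())
        if cost <= budget:
            u = system_unavailability(counts)
            if best is None or u < best[0]:
                best = (u, cost, counts)
    return best

print(optimize(budget=2500.0))
```

The dual problem (minimize cost subject to an unavailability bound) is the same scan with the objective and constraint swapped, which is why the abstract reports the two cases giving consistent answers.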
Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai
2016-01-01
Consumers' Kansei needs reflect their perception of a product and typically comprise a large number of adjectives. Reducing the dimensional complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers. Accordingly, this study employs a numerical design structure matrix (NDSM), built by parameterizing a conventional DSM, and integrates genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights between every two Kansei adjectives as cell values when constructing the NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. The process of the proposed method is presented, and its details are illustrated using the example of an electronic scooter. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709
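The NDSM-plus-genetic-algorithm clustering described above can be sketched in miniature. The six adjectives, the 4-point-scale link weights, and the thresholded fitness function are all illustrative assumptions, not data or formulas from the study:

```python
import random

random.seed(0)

WORDS = ["sporty", "dynamic", "fast", "elegant", "refined", "graceful"]
# Hypothetical symmetric NDSM of 4-point-scale link weights (0-3).
NDSM = [
    [0, 3, 3, 0, 1, 0],
    [3, 0, 2, 1, 0, 0],
    [3, 2, 0, 0, 0, 1],
    [0, 1, 0, 0, 3, 2],
    [1, 0, 0, 3, 0, 3],
    [0, 0, 1, 2, 3, 0],
]

def fitness(assign):
    """Within-cluster NDSM weight scored against a neutral threshold
    (1.5 on the 0-3 scale) so one giant cluster is not rewarded."""
    return sum(NDSM[i][j] - 1.5
               for i in range(len(assign))
               for j in range(i + 1, len(assign))
               if assign[i] == assign[j])

def ga(n_clusters=2, pop=30, gens=80, mut=0.3):
    """Tiny elitist genetic algorithm over cluster assignments."""
    popn = [[random.randrange(n_clusters) for _ in WORDS] for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=fitness, reverse=True)
        parents = popn[:pop // 2]          # keep the better half
        children = []
        while len(children) < pop - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(WORDS))
            child = a[:cut] + b[cut:]      # one-point crossover
            if random.random() < mut:      # point mutation
                child[random.randrange(len(WORDS))] = random.randrange(n_clusters)
            children.append(child)
        popn = parents + children
    best = max(popn, key=fitness)
    clusters = {c: [w for w, g in zip(WORDS, best) if g == c] for c in set(best)}
    return fitness(best), clusters

print(ga())
```

For this toy matrix the optimum separates the "sporty/dynamic/fast" words from the "elegant/refined/graceful" words, which is the kind of primary-word grouping the method aims at.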
Design component method for sensitivity analysis of built-up structures
NASA Technical Reports Server (NTRS)
Choi, Kyung K.; Seong, Hwai G.
1986-01-01
A 'design component method' that provides a unified and systematic organization of design sensitivity analysis for built-up structures is developed and implemented. Both conventional design variables, such as thickness and cross-sectional area, and shape design variables of components of built-up structures are considered. It is shown that design of components of built-up structures can be characterized and system design sensitivity expressions obtained by simply adding contributions from each component. The method leads to a systematic organization of computations for design sensitivity analysis that is similar to the way in which computations are organized within a finite element code.
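The additive structure of the method, system sensitivity obtained by summing per-component contributions, reduces to a very small sketch; the adjoint-style scalar formula and the numbers are illustrative, not from the paper:

```python
def component_sensitivity(stiffness_grad, adjoint, displacement):
    """Contribution of one component to the design sensitivity, written
    for scalar per-component quantities: -adjoint * (dK/db) * u."""
    return -adjoint * stiffness_grad * displacement

components = [
    # (dK/db, adjoint value, displacement value) -- illustrative numbers
    (2.0, 0.5, 1.2),
    (1.5, 0.8, 0.9),
]

# The system sensitivity is simply the sum over components, mirroring
# the element-by-element assembly of a finite element code.
system_sensitivity = sum(component_sensitivity(*c) for c in components)
print(system_sensitivity)
```

The point of the organization is visible even at this scale: adding a component adds one term, with no change to the others.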
Applications of Genetic Methods to NASA Design and Operations Problems
NASA Technical Reports Server (NTRS)
Laird, Philip D.
1996-01-01
We review four recent NASA-funded applications in which evolutionary/genetic methods are important. In the process we survey: the kinds of problems being solved today with these methods; techniques and tools used; problems encountered; and areas where research is needed. The presentation slides are annotated briefly at the top of each page.
Design and ergonomics. Methods for integrating ergonomics at hand tool design stage.
Marsot, Jacques; Claudon, Laurent
2004-01-01
As a marked increase in the number of musculoskeletal disorders was noted in many industrialized countries, and more specifically in companies that require the use of hand tools, the French National Research and Safety Institute (INRS) launched in 1999 a research project on integrating ergonomics into hand tool design, and more particularly into the design of a boning knife. After briefly recalling the difficulties of integrating ergonomics at the design stage, the present paper shows how three design methodology tools--Functional Analysis, Quality Function Deployment and TRIZ--have been applied to the design of a boning knife. Implementation of these tools enabled us to demonstrate the extent to which they are capable of responding to the difficulties of integrating ergonomics into product design. PMID:15028190
ERIC Educational Resources Information Center
Nogry, S.; Jean-Daubias, S.; Guin, N.
2012-01-01
This article deals with evaluating an interactive learning environment (ILE) during the iterative-design process. Various aspects of the system must be assessed and a number of evaluation methods are available. In designing the ILE Ambre-add, several techniques were combined to test and refine the system. In particular, we point out the merits of…
A hybrid nonlinear programming method for design optimization
NASA Technical Reports Server (NTRS)
Rajan, S. D.
1986-01-01
Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.
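The rule-based switching between optimization techniques can be sketched as follows; the stagnation rule, the fallback probe, and the test function are simplified assumptions, not the paper's actual rule set:

```python
def gradient_step(f, grad, x, lr=0.1):
    """Primary technique: one gradient-descent step."""
    return x - lr * grad(x)

def coordinate_probe(f, x, step=0.5):
    """Fallback technique: probe +/- step and keep the best point."""
    return min((x, x + step, x - step), key=f)

def hybrid_minimize(f, grad, x0, iters=100, tol=1e-8):
    """Rule-based switching: use gradient descent while it makes
    progress; on stagnation, switch to the probing move; if that also
    fails to improve, stop (convergence declared)."""
    x = x0
    for _ in range(iters):
        x_new = gradient_step(f, grad, x)
        if f(x_new) >= f(x) - tol:          # stagnation rule
            x_new = coordinate_probe(f, x)  # switch technique
            if f(x_new) >= f(x) - tol:
                break
        x = x_new
    return x

# Hypothetical design objective with its minimum at x = 3.
f = lambda x: (x - 3.0) ** 2
grad = lambda x: 2.0 * (x - 3.0)
print(hybrid_minimize(f, grad, 10.0))
```

In the actual system the "techniques" are full NLP solvers and the rules also watch for scaling and interface trouble, but the detect-diagnose-switch pattern is the same.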
Design Method of Fault Detector for Injection Unit
NASA Astrophysics Data System (ADS)
Ochi, Kiyoshi; Saeki, Masami
An injection unit is considered as a speed control system utilizing a reaction-force sensor. Our purpose is to design a fault detector that detects and isolates actuator and sensor faults under the condition that the system is disturbed by a reaction force. First described is the fault detector's general structure. In this system, a disturbance observer that estimates the reaction force is designed for the speed control system in order to obtain the residual signals, and then post-filters that separate the specific frequency elements from the residual signals are applied in order to generate the decision signals. Next, we describe a fault detector designed specifically for a model of the injection unit. It is shown that the disturbance imposed on the decision variables can be made significantly small by appropriate adjustments to the observer bandwidth, and that most of the sensor faults and actuator faults can be detected and some of them can be isolated in the frequency domain by setting the frequency characteristics of the post-filters appropriately. Our result is verified by experiments for an actual injection unit.
14 CFR 161.9 - Designation of noise description methods.
Code of Federal Regulations, 2014 CFR
2014-01-01
... and methods prescribed under appendix A of 14 CFR part 150; and (b) Use of computer models to create noise contours must be in accordance with the criteria prescribed under appendix A of 14 CFR part 150....
Advanced Control and Protection system Design Methods for Modular HTGRs
Ball, Sydney J; Wilson Jr, Thomas L; Wood, Richard Thomas
2012-06-01
The project supported the Nuclear Regulatory Commission (NRC) in identifying and evaluating the regulatory implications concerning the control and protection systems proposed for use in the Department of Energy's (DOE) Next-Generation Nuclear Plant (NGNP). The NGNP, using modular high-temperature gas-cooled reactor (HTGR) technology, is to provide commercial industries with electricity and high-temperature process heat for industrial processes such as hydrogen production. Process heat temperatures range from 700 to 950 C, and for the upper range of these operation temperatures, the modular HTGR is sometimes referred to as the Very High Temperature Reactor or VHTR. Initial NGNP designs are for operation in the lower temperature range. The defining safety characteristic of the modular HTGR is that its primary defense against serious accidents is to be achieved through its inherent properties of the fuel and core. Because of its strong negative temperature coefficient of reactivity and the capability of the fuel to withstand high temperatures, fast-acting active safety systems or prompt operator actions should not be required to prevent significant fuel failure and fission product release. The plant is designed such that its inherent features should provide adequate protection despite operational errors or equipment failure. Figure 1 shows an example modular HTGR layout (prismatic core version), where its inlet coolant enters the reactor vessel at the bottom, traversing up the sides to the top plenum, down-flow through an annular core, and exiting from the lower plenum (hot duct). This research provided NRC staff with (a) insights and knowledge about the control and protection systems for the NGNP and VHTR, (b) information on the technologies/approaches under consideration for use in the reactor and process heat applications, (c) guidelines for the design of highly integrated control rooms, (d) consideration for modeling of control and protection system designs for
Reducing Design Risk Using Robust Design Methods: A Dual Response Surface Approach
NASA Technical Reports Server (NTRS)
Unal, Resit; Yeniay, Ozgur; Lepsch, Roger A. (Technical Monitor)
2003-01-01
Space transportation system conceptual design is a multidisciplinary process containing considerable element of risk. Risk here is defined as the variability in the estimated (output) performance characteristic of interest resulting from the uncertainties in the values of several disciplinary design and/or operational parameters. Uncertainties from one discipline (and/or subsystem) may propagate to another, through linking parameters and the final system output may have a significant accumulation of risk. This variability can result in significant deviations from the expected performance. Therefore, an estimate of variability (which is called design risk in this study) together with the expected performance characteristic value (e.g. mean empty weight) is necessary for multidisciplinary optimization for a robust design. Robust design in this study is defined as a solution that minimizes variability subject to a constraint on mean performance characteristics. Even though multidisciplinary design optimization has gained wide attention and applications, the treatment of uncertainties to quantify and analyze design risk has received little attention. This research effort explores the dual response surface approach to quantify variability (risk) in critical performance characteristics (such as weight) during conceptual design.
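The dual response surface idea can be sketched with purely illustrative data (not the study's launch-vehicle models): one surface is fit to the mean response and a second to its standard deviation, and the robust design minimizes predicted variability subject to a mean constraint:

```python
import numpy as np

# Hypothetical replicated runs of a performance estimate (e.g. weight)
# at five settings of one design variable; all numbers are illustrative.
np.random.seed(0)
x = np.repeat(np.linspace(-2, 2, 5), 200)
y = 100 + 5 * x + 2 * x**2 + np.random.normal(0, 1 + 0.5 * np.abs(x), x.size)

levels = np.unique(x)
mean_y = np.array([y[x == v].mean() for v in levels])
std_y = np.array([y[x == v].std(ddof=1) for v in levels])

# Dual response surfaces: one quadratic for the mean, one for the spread
mean_fit = np.polyfit(levels, mean_y, 2)
std_fit = np.polyfit(levels, std_y, 2)

# Robust design: minimize predicted variability (risk) subject to a
# constraint on the mean performance characteristic
grid = np.linspace(-2, 2, 401)
feasible = grid[np.polyval(mean_fit, grid) <= 105.0]
x_robust = feasible[np.argmin(np.polyval(std_fit, feasible))]
```

Here the constraint value 105.0 stands in for a mean performance requirement; a multidisciplinary problem would repeat this over several linked design variables.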
Object-oriented design of preconditioned iterative methods
Bruaset, A.M.
1994-12-31
In this talk the author discusses how object-oriented programming techniques can be used to develop a flexible software package for preconditioned iterative methods. The ideas described have been used to implement the linear algebra part of Diffpack, a collection of C++ class libraries that provides high-level tools for the solution of partial differential equations. In particular, this software package is aimed at rapid development of PDE-based numerical simulators, primarily using finite element methods.
Genetic-evolution-based optimization methods for engineering design
NASA Technical Reports Server (NTRS)
Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.
1990-01-01
This paper presents the applicability of a biological model, based on genetic evolution, for engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.
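A minimal sketch of the reproduction/crossover/mutation loop follows, run on a stand-in discrete problem (the one-max benchmark). The operators and parameters are generic illustrations, not the paper's structural-optimization formulation:

```python
import random

def genetic_optimize(fitness, n_genes, pop_size=40, gens=60,
                     p_cross=0.8, p_mut=0.05):
    """Minimal genetic algorithm with the three operators named in the
    paper: reproduction (selection), crossover, and mutation."""
    random.seed(1)
    pop = [[random.randint(0, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=fitness, reverse=True)
        pop = [scored[0][:], scored[1][:]]     # elitist reproduction
        while len(pop) < pop_size:
            a, b = random.sample(scored[:pop_size // 2], 2)
            if random.random() < p_cross:      # one-point crossover
                cut = random.randint(1, n_genes - 1)
                child = a[:cut] + b[cut:]
            else:
                child = a[:]
            child = [1 - g if random.random() < p_mut else g
                     for g in child]           # bit-flip mutation
            pop.append(child)
    return max(pop, key=fitness)

# Toy discrete problem standing in for actuator placement: choose bits
# to maximize the number of ones
best = genetic_optimize(fitness=sum, n_genes=24)
```

For the paper's problems the bit string would encode member sizes or actuator locations and the fitness would call a structural analysis.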
Category's analysis and operational project capacity method of transformation in design
NASA Astrophysics Data System (ADS)
Obednina, S. V.; Bystrova, T. Y.
2015-10-01
The method of transformation is attracting widespread interest in fields such as contemporary design. In design theory, however, little attention has been paid to the categorical status of the term "transformation". This paper presents a conceptual analysis of transformation based on the theory of form employed in the influential essays of Aristotle and Thomas Aquinas. Transformation is explored as a method of shaping in design, and potential applications of the term in design are demonstrated.
A Proposed Model of Retransformed Qualitative Data within a Mixed Methods Research Design
ERIC Educational Resources Information Center
Palladino, John M.
2009-01-01
Most models of mixed methods research design provide equal emphasis of qualitative and quantitative data analyses and interpretation. Other models stress one method more than the other. The present article is a discourse about the investigator's decision to employ a mixed method design to examine special education teachers' advocacy and…
40 CFR 53.8 - Designation of reference and equivalent methods.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 6 2014-07-01 2014-07-01 false Designation of reference and equivalent... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.8 Designation of reference and equivalent methods. (a) A candidate method determined by the Administrator...
40 CFR 53.8 - Designation of reference and equivalent methods.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 6 2012-07-01 2012-07-01 false Designation of reference and equivalent... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.8 Designation of reference and equivalent methods. (a) A candidate method determined by the Administrator...
Mixing Qualitative and Quantitative Methods: Insights into Design and Analysis Issues
ERIC Educational Resources Information Center
Lieber, Eli
2009-01-01
This article describes and discusses issues related to research design and data analysis in the mixing of qualitative and quantitative methods. It is increasingly desirable to use multiple methods in research, but questions arise as to how best to design and analyze the data generated by mixed methods projects. I offer a conceptualization for such…
40 CFR 53.8 - Designation of reference and equivalent methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 5 2011-07-01 2011-07-01 false Designation of reference and equivalent... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS General Provisions § 53.8 Designation of reference and equivalent methods. (a) A candidate method determined by the Administrator...
Cathodic protection design using the regression and correlation method
Niembro, A.M.; Ortiz, E.L.G.
1997-09-01
A computerized statistical method which calculates the current demand requirement based on potential measurements for cathodic protection systems is introduced. The method uses the regression and correlation analysis of statistical measurements of current and potentials of the piping network. This approach involves four steps: field potential measurements, statistical determination of the current required to achieve full protection, installation of more cathodic protection capacity with distributed anodes around the plant and examination of the protection potentials. The procedure is described and recommendations for the improvement of the existing and new cathodic protection systems are given.
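The regression-and-correlation step can be illustrated as below, with invented potential/current readings and the common -850 mV (vs. Cu/CuSO4) protection criterion; the paper's actual field data and criteria may differ:

```python
# Hypothetical field data: applied CP current (A) vs. measured
# pipe-to-soil potential (mV vs. Cu/CuSO4). Values are illustrative only.
currents = [10.0, 20.0, 30.0, 40.0, 50.0]
potentials = [-620.0, -680.0, -730.0, -790.0, -840.0]

n = len(currents)
mx = sum(currents) / n
my = sum(potentials) / n
sxx = sum((x - mx) ** 2 for x in currents)
sxy = sum((x - mx) * (y - my) for x, y in zip(currents, potentials))
slope = sxy / sxx                  # mV of polarization per ampere
intercept = my - slope * mx
# correlation coefficient: how well the linear model fits the survey
r = sxy / (sxx ** 0.5 * (sum((y - my) ** 2 for y in potentials)) ** 0.5)

# Statistical determination of the current demand needed to reach the
# -850 mV full-protection criterion
target = -850.0
i_required = (target - intercept) / slope
```

The extrapolated `i_required` would then guide how much additional anode capacity to distribute around the plant, followed by re-examination of the potentials.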
Flight critical system design guidelines and validation methods
NASA Technical Reports Server (NTRS)
Holt, H. M.; Lupton, A. O.; Holden, D. G.
1984-01-01
Efforts being expended at NASA-Langley to define a validation methodology, techniques for comparing advanced systems concepts, and design guidelines for characterizing fault tolerant digital avionics are described with an emphasis on the capabilities of AIRLAB, an environmentally controlled laboratory. AIRLAB has VAX 11/750 and 11/780 computers with an aggregate of 22 Mb memory and over 650 Mb storage, interconnected at 256 kbaud. An additional computer is programmed to emulate digital devices. Ongoing work is easily accessed at user stations by either chronological or key word indexing. The CARE III program aids in analyzing the capabilities of test systems to recover from faults. An additional code, the semi-Markov unreliability program (SURE) generates upper and lower reliability bounds. The AIRLAB facility is mainly dedicated to research on designs of digital flight-critical systems which must have acceptable reliability before incorporation into aircraft control systems. The digital systems would be too costly to submit to a full battery of flight tests and must be initially examined with the AIRLAB simulation capabilities.
Computer control of large accelerators design concepts and methods
Beck, F.; Gormley, M.
1984-05-01
Unlike most of the specialities treated in this volume, control system design is still an art, not a science. These lectures are an attempt to produce a primer for prospective practitioners of this art. A large modern accelerator requires a comprehensive control system for commissioning, machine studies and day-to-day operation. Faced with the requirement to design a control system for such a machine, the control system architect has a bewildering array of technical devices and techniques at his disposal, and it is our aim in the following chapters to lead him through the characteristics of the problems he will have to face and the practical alternatives available for solving them. We emphasize good system architecture using commercially available hardware and software components, but in addition we discuss the actual control strategies which are to be implemented since it is at the point of deciding what facilities shall be available that the complexity of the control system and its cost are implicitly decided.
Haberland, M; Kim, S
2015-02-02
When millions of years of evolution suggest a particular design solution, we may be tempted to abandon traditional design methods and copy the biological example. However, biological solutions do not often translate directly into the engineering domain, and even when they do, copying eliminates the opportunity to improve. A better approach is to extract design principles relevant to the task of interest, incorporate them in engineering designs, and vet these candidates against others. This paper presents the first general framework for determining whether biologically inspired relationships between design input variables and output objectives and constraints are applicable to a variety of engineering systems. Using optimization and statistics to generalize the results beyond a particular system, the framework overcomes shortcomings observed in ad hoc methods, particularly those used in the challenging study of legged locomotion. The utility of the framework is demonstrated in a case study of the relative running efficiency of rotary-kneed and telescoping-legged robots.
Fault self-diagnosis designing method of the automotive electronic control system
NASA Astrophysics Data System (ADS)
Ding, Yangyan; Yang, Zhigang; Fu, Xiaolin
2005-12-01
The fault self-diagnosis system is an important component of an automotive electronic control system. Designers of automotive electronic control systems urgently need a complete understanding of self-diagnosis design methods in order to apply them in practice. To address this need, self-diagnosis design methods for sensors, the electronic control unit (ECU), and actuators, the three main parts of an automotive electronic control system, are discussed in this paper. Self-diagnosis design methods for sensors are discussed according to the fault types and characteristics of commonly used sensors. Fault diagnosis techniques for sensors based on signal detection and on analytical redundancy are then analyzed and summarized from the viewpoint of self-diagnosis design. Problems concerning ECU failure self-diagnosis are also analyzed. For different ECU fault types, a circuit-monitoring method and a hardware-circuit self-detection method are adopted, respectively. Using these two methods, a real-time, on-line failure self-diagnosis technique is presented, and the failure self-diagnosis design method for the ECU is summarized. Finally, common actuator faults are analyzed and a general design method for the failure self-diagnosis system is presented. Such self-diagnosis design methods can offer a useful approach to designers of automotive electronic control systems.
Power Analysis for Complex Mediational Designs Using Monte Carlo Methods
ERIC Educational Resources Information Center
Thoemmes, Felix; MacKinnon, David P.; Reiser, Mark R.
2010-01-01
Applied researchers often include mediation effects in applications of advanced methods such as latent variable models and linear growth curve models. Guidance on how to estimate statistical power to detect mediation for these models has not yet been addressed in the literature. We describe a general framework for power analyses for complex…
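A minimal Monte Carlo power analysis for a single-mediator model is sketched below, using illustrative path values and the joint-significance test; the article's framework covers far more complex latent-variable and growth-curve models:

```python
import math
import random

def path_z(pred, out):
    """z statistic for a simple-regression slope of out on pred."""
    n = len(pred)
    mp = sum(pred) / n
    mo = sum(out) / n
    sxx = sum((p - mp) ** 2 for p in pred)
    slope = sum((p - mp) * (o - mo) for p, o in zip(pred, out)) / sxx
    resid = [o - mo - slope * (p - mp) for p, o in zip(pred, out)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return slope / se

def mediation_power(a=0.3, b=0.3, n=200, reps=500, z_crit=1.96):
    """Monte Carlo power for the indirect effect a*b via the
    joint-significance test: both paths must be significant."""
    random.seed(42)
    hits = 0
    for _ in range(reps):
        x = [random.gauss(0, 1) for _ in range(n)]   # predictor
        m = [a * xi + random.gauss(0, 1) for xi in x]  # mediator
        y = [b * mi + random.gauss(0, 1) for mi in m]  # outcome
        if abs(path_z(x, m)) > z_crit and abs(path_z(m, y)) > z_crit:
            hits += 1
    return hits / reps

power = mediation_power()
```

Varying `n` until the estimated power crosses a target (say 0.80) gives the required sample size for the assumed path coefficients.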
The Use of Hermeneutics in a Mixed Methods Design
ERIC Educational Resources Information Center
von Zweck, Claudia; Paterson, Margo; Pentland, Wendy
2008-01-01
Combining methods in a single study is becoming a more common practice because of the limitations of using only one approach to fully address all aspects of a research question. Hermeneutics in this paper is discussed in relation to a large national study that investigated issues influencing the ability of international graduates to work as…
Application of Six Sigma Method to EMS Design
NASA Astrophysics Data System (ADS)
Rusko, Miroslav; Králiková, Ružena
2011-01-01
The Six Sigma method is a complex and flexible system of achieving, maintaining and maximizing the business success. Six Sigma is based mainly on understanding the customer needs and expectation, disciplined use of facts and statistics analysis, and responsible approach to managing, improving and establishing new business, manufacturing and service processes.
AI/OR computational model for integrating qualitative and quantitative design methods
NASA Technical Reports Server (NTRS)
Agogino, Alice M.; Bradley, Stephen R.; Cagan, Jonathan; Jain, Pramod; Michelena, Nestor
1990-01-01
A theoretical framework for integrating qualitative and numerical computational methods for optimally-directed design is described. The theory is presented as a computational model and features of implementations are summarized where appropriate. To demonstrate the versatility of the methodology we focus on four seemingly disparate aspects of the design process and their interaction: (1) conceptual design, (2) qualitative optimal design, (3) design innovation, and (4) numerical global optimization.
The research progress on Hodograph Method of aerodynamic design at Tsinghua University
NASA Technical Reports Server (NTRS)
Chen, Zuoyi; Guo, Jingrong
1991-01-01
Progress in the use of the Hodograph method of aerodynamic design is discussed. Although some restrictive conditions apply when Hodograph design is used for transonic turbine and compressor cascades, the method is suitable not only for transonic turbine cascades but also for transonic compressor cascades. A three-dimensional Hodograph method will be developed once the basic equations for the three-dimensional case are obtained. As an example, the use of the method to design a transonic turbine and a compressor cascade is discussed.
Finite Element Method Applied to Fuse Protection Design
NASA Astrophysics Data System (ADS)
Li, Sen; Song, Zhiquan; Zhang, Ming; Xu, Liuwei; Li, Jinchao; Fu, Peng; Wang, Min; Dong, Lin
2014-03-01
In a poloidal field (PF) converter module, fuse protection is of great importance to ensure the safety of the thyristors. The fuse is pre-selected in a traditional way and then verified by finite element analysis. A 3D physical model is built by ANSYS software to solve the thermal-electric coupled problem of transient process in case of external fault. The result shows that this method is feasible.
Bumpus, S.E.; Johnson, J.J.; Smith, P.D.
1980-05-01
The concept of how two techniques, Best Estimate Method and Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC) - seismic input, soil-structure interaction, major structural response, and subsystem response - are considered. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on that part of the seismic analysis and design which is familiar to the engineering community. An example applies the effects of three-dimensional excitations on a model of a nuclear power plant structure. The example demonstrates how conservatisms accrue by coupling two links in the SMC and comparing those results to the effects of one link alone. The utility of employing the Best Estimate Method vs the Evaluation Method is also demonstrated.
A New Automated Design Method Based on Machine Learning for CMOS Analog Circuits
NASA Astrophysics Data System (ADS)
Moradi, Behzad; Mirzaei, Abdolreza
2016-11-01
A new simulation-based automated CMOS analog circuit design method which applies a multi-objective non-Darwinian-type evolutionary algorithm based on the Learnable Evolution Model (LEM) is proposed in this article. The multi-objective property of this automated design of CMOS analog circuits is governed by a modified Strength Pareto Evolutionary Algorithm (SPEA) incorporated in the LEM algorithm presented here. LEM includes a machine learning method, such as decision trees, that makes a distinction between high- and low-fitness areas in the design space. The learning process can detect the right directions of the evolution and leads to large steps in the evolution of the individuals. The learning phase shortens the evolution process and markedly reduces the number of individual evaluations. The expert designer's knowledge of the circuit is applied in the design process in order to reduce the design space as well as the design time. Circuit evaluation is performed by the HSPICE simulator. In order to improve the design accuracy, the bsim3v3 CMOS transistor model is adopted in this design method. The proposed method is tested on three different operational amplifier circuits, and its performance is verified by comparing it with the evolutionary strategy algorithm and other similar methods.
Matching wind turbine rotors and loads: computational methods for designers
Seale, J.B.
1983-04-01
This report provides a comprehensive method for matching wind energy conversion system (WECS) rotors with the load characteristics of common electrical and mechanical applications. The user must supply: (1) turbine aerodynamic efficiency as a function of tipspeed ratio; (2) mechanical load torque as a function of rotation speed; (3) useful delivered power as a function of incoming mechanical power; (4) site average windspeed and, for maximum accuracy, distribution data. The description of the data includes governing limits consistent with the capacities of components. The report develops a step-by-step method for converting the data into useful results: (1) from turbine efficiency and load torque characteristics, turbine power is predicted as a function of windspeed; (2) a decision is made how turbine power is to be governed (it may self-govern) to insure safety of all components; (3) mechanical conversion efficiency comes into play to predict how useful delivered power varies with windspeed; (4) wind statistics come into play to predict long-term energy output. Most systems can be approximated by a graph-and-calculator approach: computer-generated families of coefficient curves provide data for algebraic scaling formulas. The method leads not only to energy predictions, but also to insight into the processes being modeled. Direct use of a computer program provides more sophisticated calculations where a highly unusual system is to be modeled, where accuracy is at a premium, or where error analysis is required. The analysis is fleshed out with in-depth case studies for induction generator and inverter utility systems; battery chargers; resistance heaters; positive displacement pumps, including three different load-compensation strategies; and centrifugal pumps with unregulated electric power transmission from turbine to pump.
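The four-step procedure can be sketched for a simple case. Every number below (rotor size, peak efficiency, conversion efficiency, governing limit, Rayleigh wind statistics) is an illustrative assumption rather than data from the report:

```python
import math

# Illustrative inputs: rotor geometry, a peak-Cp operating assumption,
# and a Rayleigh windspeed distribution for the site.
RHO, RADIUS = 1.225, 5.0               # air density (kg/m^3), rotor radius (m)
AREA = math.pi * RADIUS ** 2
CP_MAX = 0.42                          # assumed peak aerodynamic efficiency

def turbine_power(v):
    """Steps 1-2: rotor power at windspeed v, assuming the load holds the
    rotor near its optimal tip-speed ratio and power is governed for
    component safety above 25 kW."""
    p = 0.5 * RHO * AREA * CP_MAX * v ** 3
    return min(p, 25e3)

def delivered_power(p_mech, efficiency=0.85):
    """Step 3: useful power after mechanical/electrical conversion."""
    return efficiency * p_mech

def annual_energy(v_mean, dv=0.25):
    """Step 4: integrate delivered power over a Rayleigh windspeed pdf
    to predict long-term energy output (kWh per year)."""
    energy = 0.0
    v = dv / 2
    while v < 30.0:
        pdf = (math.pi * v / (2 * v_mean ** 2)) * math.exp(
            -math.pi * v ** 2 / (4 * v_mean ** 2))
        energy += delivered_power(turbine_power(v)) * pdf * dv
        v += dv
    return energy * 8760 / 1000.0

e_kwh = annual_energy(v_mean=6.0)
```

A real matching study would replace the constant-Cp assumption with the supplied Cp(tip-speed ratio) curve intersected against the load torque curve.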
Design of a Password-Based EAP Method
NASA Astrophysics Data System (ADS)
Manganaro, Andrea; Koblensky, Mingyur; Loreti, Michele
In recent years, amendments to IEEE standards for wireless networks added support for authentication algorithms based on the Extensible Authentication Protocol (EAP). Available solutions generally use digital certificates or pre-shared keys but the management of the resulting implementations is complex or unlikely to be scalable. In this paper we present EAP-SRP-256, an authentication method proposal that relies on the SRP-6 protocol and provides a strong password-based authentication mechanism. It is intended to meet the IETF security and key management requirements for wireless networks.
Defining Requirements and Related Methods for Designing Sensorized Garments
Andreoni, Giuseppe; Standoli, Carlo Emilio; Perego, Paolo
2016-01-01
Designing smart garments has strong interdisciplinary implications, specifically related to user and technical requirements, but also because of the very different applications they have: medicine, sport and fitness, lifestyle monitoring, workplace and job conditions analysis, etc. This paper aims to discuss some user, textile, and technical issues to be faced in sensorized clothes development. In relation to the user, the main requirements are anthropometric, gender-related, and aesthetical. In terms of these requirements, the user’s age, the target application, and fashion trends cannot be ignored, because they determine the compliance with the wearable system. Regarding textile requirements, functional factors—also influencing user comfort—are elasticity and washability, while more technical properties are the stability of the chemical agents’ effects for preserving the sensors’ efficacy and reliability, and assuring the proper duration of the product for the complete life cycle. From the technical side, the physiological issues are the most important: skin conductance, tolerance, irritation, and the effect of sweat and perspiration are key factors for reliable sensing. Other technical features such as battery size and duration, and the form factor of the sensor collector, should be considered, as they affect aesthetical requirements, which have proven to be crucial, as well as comfort and wearability. PMID:27240361
Simplified tornado depressurization design methods for nuclear power plants
Howard, N.M.; Krasnopoler, M.I.
1983-05-01
A simplified approach for the calculation of tornado depressurization effects on nuclear power plant structures and components is based on a generic computer depressurization analysis for an arbitrary single volume V connected to the atmosphere by an effective vent area A. For a given tornado depressurization transient, the maximum depressurization ΔP of the volume was found to depend on the parameter V/A. The relation between ΔP and V/A can be represented by a single monotonically increasing curve for each of the three design-basis tornadoes described in the U.S. Nuclear Regulatory Commission's Regulatory Guide 1.76. These curves can be applied to most multiple-volume nuclear power plant structures by considering each volume and its controlling vent area. Where several possible flow areas could be controlling, the maximum value of V/A can be used to estimate a conservative value for ΔP. This simplified approach was shown to yield reasonably conservative results when compared to detailed computer calculations of moderately complex geometries. Treatment of severely complicated geometries, heating and ventilation systems, and multiple blowout panel arrangements was found to be beyond the limitations of the simplified analysis.
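A lumped single-volume model of this kind can be sketched as follows. The ramp transient, discharge coefficient, and isothermal quasi-steady orifice flow are simplifying assumptions for illustration, not the Regulatory Guide 1.76 design-basis transients; the monotone dependence of the peak pressure difference on V/A nevertheless shows up directly:

```python
import math

P0 = 101325.0          # initial pressure, Pa
DP_TORNADO = 20000.0   # assumed total ambient pressure drop, Pa
T_DROP = 3.0           # assumed duration of the drop, s
RHO = 1.2              # air density, kg/m^3 (held constant)
CD = 0.6               # assumed vent discharge coefficient

def max_depressurization(v_over_a, dt=1e-3):
    """Peak inside-minus-outside pressure difference for a single volume
    V vented through area A, as a function of the ratio V/A (metres)."""
    p_in = P0
    dp_max = 0.0
    t = 0.0
    while t < 2 * T_DROP:
        p_amb = P0 - DP_TORNADO * min(t / T_DROP, 1.0)
        dp = p_in - p_amb
        dp_max = max(dp_max, dp)
        # quasi-steady orifice outflow velocity through the vent
        q = CD * math.copysign(math.sqrt(2 * abs(dp) / RHO), dp)
        # isothermal ideal gas: dp_in/dt = -p_in * q * A / V
        p_in -= p_in * q / v_over_a * dt
        t += dt
    return dp_max

dp_small = max_depressurization(50.0)     # generous venting
dp_large = max_depressurization(5000.0)   # large volume, small vent
```

A large V/A lets the interior track the falling ambient pressure only slowly, so the peak difference approaches the full tornado drop; a small V/A vents fast enough that the difference stays small.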
The ZInEP Epidemiology Survey: background, design and methods.
Ajdacic-Gross, Vladeta; Müller, Mario; Rodgers, Stephanie; Warnke, Inge; Hengartner, Michael P; Landolt, Karin; Hagenmuller, Florence; Meier, Magali; Tse, Lee-Ting; Aleksandrowicz, Aleksandra; Passardi, Marco; Knöpfli, Daniel; Schönfelder, Herdis; Eisele, Jochen; Rüsch, Nicolas; Haker, Helene; Kawohl, Wolfram; Rössler, Wulf
2014-12-01
This article introduces the design, sampling, field procedures and instruments used in the ZInEP Epidemiology Survey. This survey is one of six ZInEP projects (Zürcher Impulsprogramm zur nachhaltigen Entwicklung der Psychiatrie, i.e. the "Zurich Program for Sustainable Development of Mental Health Services"). It parallels the longitudinal Zurich Study with a sample comparable in age and gender, and with similar methodology, including identical instruments. Thus, it is aimed at assessing changes in the prevalence rates of common mental disorders and in the use of professional help and psychiatric services. Moreover, the current survey widens the spectrum of topics by including sociopsychiatric questionnaires on stigma, stress-related biological measures such as load and cortisol levels, electroencephalographic (EEG) and near-infrared spectroscopy (NIRS) examinations with various paradigms, and sociophysiological tests. The structure of the ZInEP Epidemiology Survey entails four subprojects: a short telephone screening using the SCL-27 (n of nearly 10,000), a comprehensive face-to-face interview based on the SPIKE (Structured Psychopathological Interview and Rating of the Social Consequences for Epidemiology: the main instrument of the Zurich Study) with a stratified sample (n = 1500), tests in the Center for Neurophysiology and Sociophysiology (n = 227), and a prospective study with up to three follow-up interviews and further measures (n = 157). In sum, the four subprojects of the ZInEP Epidemiology Survey deliver a large interdisciplinary database. PMID:24942564
NMR quantum computing: applying theoretical methods to designing enhanced systems.
Mawhinney, Robert C; Schreckenbach, Georg
2004-10-01
Density functional theory results for chemical shifts and spin-spin coupling constants are presented for compounds currently used in NMR quantum computing experiments. Specific design criteria were examined and numerical guidelines were assessed. Using a field strength of 7.0 T, protons require a coupling constant of 4 Hz with a chemical shift separation of 0.3 ppm, whereas carbon needs a coupling constant of 25 Hz for a chemical shift difference of 10 ppm, based on the minimal coupling approximation. Using these guidelines, it was determined that 2,3-dibromothiophene is limited to only two qubits; the three qubit system bromotrifluoroethene could be expanded to five qubits and the three qubit system 2,3-dibromopropanoic acid could also be used as a six qubit system. An examination of substituent effects showed that judiciously choosing specific groups could increase the number of available qubits by removing rotational degeneracies in addition to introducing specific conformational preferences that could increase (or decrease) the magnitude of the couplings. The introduction of one site of unsaturation can lead to a marked improvement in spectroscopic properties, even increasing the number of active nuclei.
A Computational Method for Materials Design of New Interfaces
NASA Astrophysics Data System (ADS)
Kaminski, Jakub; Ratsch, Christian; Weber, Justin; Haverty, Michael; Shankar, Sadasivan
2015-03-01
We propose a novel computational approach to explore the broad configurational space of possible interfaces formed from known crystal structures to find new heterostructure materials with potentially interesting properties. In a series of steps with increasing complexity and accuracy, the vast number of possible combinations is narrowed down to a limited set of the most promising and chemically compatible candidates. This systematic screening encompasses (i) establishing the geometrical compatibility along multiple crystallographic orientations of two materials, (ii) simple functions eliminating configurations with unfavorable interatomic steric conflicts, (iii) application of empirical and semi-empirical potentials estimating approximate energetics and structures, (iv) use of DFT based quantum-chemical methods to ascertain the final optimal geometry and stability of the interface in question. For efficient high-throughput screening we have developed a new method to calculate surface energies, which allows for fast and systematic treatment of materials terminated with non-polar surfaces. We show that our approach leads to a maximum error around 3% from the exact reference. The representative results from our search protocol will be presented for selected materials including semiconductors and oxides.
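Screening step (i), geometrical compatibility, can be illustrated as a pairwise lattice-mismatch filter. This is a sketch only: the lattice constants below and the 3% mismatch tolerance are assumptions for the example (the 3% figure in the abstract refers to the surface-energy error, not to a mismatch cutoff).

```python
def mismatch(a1, a2):
    """Relative mismatch between two in-plane lattice constants."""
    return abs(a1 - a2) / min(a1, a2)

def screen_pairs(materials, tol=0.03):
    """Keep only material pairs whose lattice mismatch is within tol."""
    names = sorted(materials)
    return [
        (m1, m2)
        for i, m1 in enumerate(names)
        for m2 in names[i + 1:]
        if mismatch(materials[m1], materials[m2]) <= tol
    ]

# Hypothetical in-plane lattice constants (angstroms) for the example:
lattices = {"Si": 5.431, "Ge": 5.658, "GaAs": 5.653, "MgO": 4.212}
```

Here only the nearly lattice-matched GaAs/Ge pair survives the geometric filter; later stages (steric checks, empirical potentials, DFT) would then refine the surviving candidates.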
A Comparison of Five Statistical Methods for Analyzing Pretest-Posttest Designs.
ERIC Educational Resources Information Center
Hendrix, Leland J.; And Others
1978-01-01
Five methods for analyzing data from pretest-posttest research designs are discussed. Analysis of gain scores, with the pretest as a covariate, is indicated as a superior method when the assumptions underlying covariance analysis are met. (Author/GDC)
Modified Fully Utilized Design (MFUD) Method for Stress and Displacement Constraints
NASA Technical Reports Server (NTRS)
Patnaik, Surya; Gendy, Atef; Berke, Laszlo; Hopkins, Dale
1997-01-01
The traditional fully stressed method performs satisfactorily for stress-limited structural design. When this method is extended to include displacement limitations in addition to stress constraints, it is known as the fully utilized design (FUD). Typically, the FUD produces an overdesign, which is the primary limitation of this otherwise elegant method. We have modified FUD in an attempt to alleviate the limitation. This new method, called the modified fully utilized design (MFUD) method, has been tested successfully on a number of designs that were subjected to multiple loads and had both stress and displacement constraints. The solutions obtained with MFUD compare favorably with the optimum results that can be generated by using nonlinear mathematical programming techniques. The MFUD method appears to have alleviated the overdesign condition and offers the simplicity of a direct, fully stressed type of design method that is distinctly different from optimization and optimality criteria formulations. The MFUD method is being developed for practicing engineers who favor traditional design methods rather than methods based on advanced calculus and nonlinear mathematical programming techniques. The Integrated Force Method (IFM) was found to be the appropriate analysis tool in the development of the MFUD method. In this paper, the MFUD method and its optimality are presented along with a number of illustrative examples.
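The fully stressed resizing rule that FUD and MFUD build on can be sketched in a few lines. This is a minimal sketch of the classical rule only, with a hypothetical two-member example; the MFUD extensions for displacement constraints and the Integrated Force Method analysis are not shown.

```python
def fsd_step(forces, areas, allowable):
    """One fully-stressed-design iteration: scale each member area by its
    stress ratio (stress / allowable), driving every member toward being
    exactly at the allowable stress. For a statically determinate truss
    (member forces independent of areas) one step suffices; indeterminate
    structures require re-analysis of the forces between iterations."""
    return [area * (abs(force) / area) / allowable
            for force, area in zip(forces, areas)]
```

For example, two members carrying 1000 and -500 force units with an allowable stress of 250 resize to areas of 4.0 and 2.0, each then carrying exactly the allowable stress.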
Design of a Variational Multiscale Method for Turbulent Compressible Flows
NASA Technical Reports Server (NTRS)
Diosady, Laslo Tibor; Murman, Scott M.
2013-01-01
A spectral-element framework is presented for the simulation of subsonic compressible high-Reynolds-number flows. The focus of the work is maximizing the efficiency of the computational schemes to enable unsteady simulations with a large number of spatial and temporal degrees of freedom. A collocation scheme is combined with optimized computational kernels to provide a residual evaluation with computational cost independent of order of accuracy up to 16th order. The optimized residual routines are used to develop a low-memory implicit scheme based on a matrix-free Newton-Krylov method. A preconditioner based on the finite-difference diagonalized ADI scheme is developed which maintains the low memory of the matrix-free implicit solver, while providing improved convergence properties. Emphasis on low memory usage throughout the solver development is leveraged to implement a coupled space-time DG solver which may offer further efficiency gains through adaptivity in both space and time.
The C8 Health Project: Design, Methods, and Participants
Frisbee, Stephanie J.; Brooks, A. Paul; Maher, Arthur; Flensborg, Patsy; Arnold, Susan; Fletcher, Tony; Steenland, Kyle; Shankar, Anoop; Knox, Sarah S.; Pollard, Cecil; Halverson, Joel A.; Vieira, Verónica M.; Jin, Chuanfang; Leyden, Kevin M.; Ducatman, Alan M.
2009-01-01
Background The C8 Health Project was created, authorized, and funded as part of the settlement agreement reached in the case of Jack W. Leach, et al. v. E.I. du Pont de Nemours & Company (no. 01-C-608 W.Va., Wood County Circuit Court, filed 10 April 2002). The settlement stemmed from the perfluorooctanoic acid (PFOA, or C8) contamination of drinking water in six water districts in two states near the DuPont Washington Works facility near Parkersburg, West Virginia. Objectives This study reports on the methods and results from the C8 Health Project, a population study created to gather data that would allow class members to know their own PFOA levels and permit subsequent epidemiologic investigations. Methods Final study participation was 69,030, enrolled over a 13-month period in 2005–2006. Extensive data were collected, including demographic data, medical diagnoses (both self-report and medical records review), clinical laboratory testing, and determination of serum concentrations of 10 perfluorocarbons (PFCs). Here we describe the processes used to collect, validate, and store these health data. We also describe survey participants and their serum PFC levels. Results The population geometric mean for serum PFOA was 32.91 ng/mL, 500% higher than previously reported for a representative American population. Serum concentrations for perfluorohexane sulfonate and perfluorononanoic acid were elevated 39% and 73% respectively, whereas perfluorooctanesulfonate was present at levels similar to those in the U.S. population. Conclusions This largest known population study of community PFC exposure permits new evaluations of associations between PFOA, in particular, and a range of health parameters. These will contribute to understanding of the biology of PFC exposure. The C8 Health Project also represents an unprecedented effort to gather basic data on an exposed population; its achievements and limitations can inform future legal settlements for populations exposed to
Matching Learning Style Preferences with Suitable Delivery Methods on Textile Design Programmes
ERIC Educational Resources Information Center
Sayer, Kate; Studd, Rachel
2006-01-01
Textile design is a subject that encompasses both design and technology; aesthetically pleasing patterns and forms must be set within technical parameters to create successful fabrics. When considering education methods in design programmes, identifying the most relevant learning approach is key to creating future successes. Yet are the most…
Assessing Adaptive Instructional Design Tools and Methods in ADAPT[IT].
ERIC Educational Resources Information Center
Eseryel, Deniz; Spector, J. Michael
ADAPT[IT] (Advanced Design Approach for Personalized Training - Interactive Tools) is a European project within the Information Society Technologies program that is providing design methods and tools to guide a training designer according to the latest cognitive science and standardization principles. ADAPT[IT] addresses users in two significantly…
Co-Designing and Co-Teaching Graduate Qualitative Methods: An Innovative Ethnographic Workshop Model
ERIC Educational Resources Information Center
Cordner, Alissa; Klein, Peter T.; Baiocchi, Gianpaolo
2012-01-01
This article describes an innovative collaboration between graduate students and a faculty member to co-design and co-teach a graduate-level workshop-style qualitative methods course. The goal of co-designing and co-teaching the course was to involve advanced graduate students in all aspects of designing a syllabus and leading class discussions in…
NASA Technical Reports Server (NTRS)
Leininger, G. G.
1981-01-01
Using nonlinear digital simulation as a representative model of the dynamic operation of the QCSEE turbofan engine, a feedback control system is designed by variable frequency design techniques. Transfer functions are generated for each of five power level settings covering the range of operation from approach power to full throttle (62.5% to 100% full power). These transfer functions are then used by an interactive control system design synthesis program to provide a closed loop feedback control using the multivariable Nyquist array and extensions to multivariable Bode diagrams and Nichols charts.
The Use of Qsar and Computational Methods in Drug Design
NASA Astrophysics Data System (ADS)
Bajot, Fania
The application of quantitative structure-activity relationships (QSARs) has significantly impacted the paradigm of drug discovery. Following the successful utilization of linear solvation free-energy relationships (LSERs), numerous 2D- and 3D-QSAR methods have been developed, most of them based on descriptors for hydrophobicity, polarizability, ionic interactions, and hydrogen bonding. QSAR models allow for the calculation of physicochemical properties (e.g., lipophilicity), the prediction of biological activity (or toxicity), as well as the evaluation of absorption, distribution, metabolism, and excretion (ADME). In pharmaceutical research, QSAR is of particular interest in the preclinical stages of drug discovery, where it can replace tedious and costly experimentation, filter large chemical databases, and select drug candidates. However, to be part of drug discovery and development strategies, QSARs need to meet different criteria (e.g., sufficient predictivity). This chapter describes the foundation of modern QSAR in drug discovery and presents some current challenges and applications for the discovery and optimization of drug candidates.
Unique Method for Generating Design Earthquake Time History Seeds
R. E. Spears
2008-07-01
A method has been developed which takes a single seed earthquake time history and produces multiple similar seed earthquake time histories. These new time histories possess important frequency and cumulative energy attributes of the original while having a correlation less than 30% (per ASCE/SEI 43-05, Section 2.4 [1]). They are produced by taking the fast Fourier transform of the original seed. The averaged amplitudes are then paired with random phase angles, and the inverse fast Fourier transform is taken to produce a new time history. The average amplitude through time is then adjusted to encourage a similar cumulative energy curve. Next, the displacement is modified to approximate the original curve using Fourier techniques. Finally, the correlation is checked to ensure it is less than 30%. This process does not guarantee that the correlation will be less than 30% for all of a given set of new curves, but it does provide a simple tool where a few additional iterations of the process should produce a set of seed earthquake time histories meeting the correlation criteria.
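The core amplitude-preserving, random-phase construction can be sketched as follows. This is a simplified illustration, not the author's implementation: a naive O(N²) DFT is used for self-containment, and the cumulative-energy adjustment, displacement correction, and formal 30% correlation screening described above are omitted.

```python
import cmath
import math
import random

def dft(x):
    """Naive discrete Fourier transform (O(N^2); fine for a sketch)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def random_phase_seed(x, rng):
    """New time history with the original Fourier amplitude spectrum but
    random phases; conjugate symmetry keeps the output real-valued."""
    n = len(x)
    amps = [abs(c) for c in dft(x)]
    phases = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(n // 2)]
    y = []
    for t in range(n):
        val = amps[0]  # zero-frequency (mean) term
        for k in range(1, n // 2):
            val += 2.0 * amps[k] * math.cos(2.0 * math.pi * k * t / n
                                            + phases[k])
        if n % 2 == 0:
            # Nyquist term: its phase is restricted to 0 or pi, so reuse
            # the otherwise-unused phases[0] draw as a random sign.
            sign = 1.0 if phases[0] < math.pi else -1.0
            val += sign * amps[n // 2] * math.cos(math.pi * t)
        y.append(val / n)
    return y
```

By construction the new series has exactly the original amplitude spectrum; its correlation with the original would then be checked (and the draw repeated if it exceeds 30%).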
A Computational Method for Materials Design of Interfaces
NASA Astrophysics Data System (ADS)
Kaminski, Jakub; Ratsch, Christian; Shankar, Sadasivan
2014-03-01
In the present work we propose a novel computational approach to explore the broad configurational space of possible interfaces formed from known crystal structures to find new heterostructure materials with potentially interesting properties. In a series of subsequent steps with increasing complexity and accuracy, the vast number of possible combinations is narrowed down to a limited set of the most promising and chemically compatible candidates. This systematic screening encompasses (i) establishing the geometrical compatibility along multiple crystallographic orientations of two (or more) materials, (ii) simple functions eliminating configurations with unfavorable interatomic steric conflicts, (iii) application of empirical and semi-empirical potentials estimating approximate energetics and structures, (iv) use of DFT-based quantum-chemical methods to ascertain the final optimal geometry and stability of the interface in question. We also demonstrate the flexibility and efficiency of our approach depending on the size of the investigated structures and the size of the search space. Representative results from our search protocol will be presented for selected materials including semiconductors, transition metal systems, and oxides.
Mixture design and treatment methods for recycling contaminated sediment.
Wang, Lei; Kwok, June S H; Tsang, Daniel C W; Poon, Chi-Sun
2015-01-01
Conventional marine disposal of contaminated sediment presents significant financial and environmental burden. This study aimed to recycle the contaminated sediment by assessing the roles and integration of binder formulation, sediment pretreatment, curing method, and waste inclusion in stabilization/solidification. The results demonstrated that the 28-d compressive strength of sediment blocks produced with coal fly ash and lime partially replacing cement at a binder-to-sediment ratio of 3:7 could be used as fill materials for construction. The X-ray diffraction analysis revealed that hydration products (calcium hydroxide) were difficult to form at high sediment content. Thermal pretreatment of sediment removed 90% of indigenous organic matter, significantly increased the compressive strength, and enabled reuse as non-load-bearing masonry units. Besides, 2-h CO2 curing accelerated early-stage carbonation inside the porous structure, sequestered 5.6% of CO2 (by weight) in the sediment blocks, and acquired strength comparable to 7-d curing. Thermogravimetric analysis indicated substantial weight loss corresponding to decomposition of poorly and well crystalline calcium carbonate. Moreover, partial replacement of contaminated sediment by various granular waste materials notably augmented the strength of sediment blocks. The metal leachability of sediment blocks was minimal and acceptable for reuse. These results suggest that contaminated sediment should be viewed as useful resources.
Design studies for the transmission simulator method of experimental dynamic substructuring.
Mayes, Randall Lee; Arviso, Michael
2010-05-01
In recent years, a successful method for generating experimental dynamic substructures has been developed using an instrumented fixture, the transmission simulator. The transmission simulator method solves many of the problems associated with experimental substructuring. These solutions effectively address: (1) rotation and moment estimation at connection points; (2) providing substructure Ritz vectors that adequately span the connection motion space; and (3) adequately addressing multiple and continuous attachment locations. However, the transmission simulator method may fail if the transmission simulator is poorly designed. Four areas of the design addressed here are: (1) designating response sensor locations; (2) designating force input locations; (3) physical design of the transmission simulator; and (4) modal test design. In addition to the transmission simulator design investigations, a review of the theory with an example problem is presented.
Reeder, Blaine; Turner, Anne M
2011-01-01
Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Methods: Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Results: Interview analysis identified twenty-five information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create twenty-five scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. Conclusion: The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design. PMID:21807120
Design of Aspirated Compressor Blades Using Three-dimensional Inverse Method
NASA Technical Reports Server (NTRS)
Dang, T. Q.; Rooij, M. Van; Larosiliere, L. M.
2003-01-01
A three-dimensional viscous inverse method is extended to allow blading design with full interaction between the prescribed pressure-loading distribution and a specified transpiration scheme. Transpiration on blade surfaces and endwalls is implemented as inflow/outflow boundary conditions, and the basic modifications to the method are outlined. This paper focuses on a discussion concerning an application of the method to the design and analysis of a supersonic rotor with aspiration. Results show that an optimum combination of pressure-loading tailoring with surface aspiration can lead to a minimization of the amount of sucked flow required for a net performance improvement at design and off-design operations.
Launch Vehicle Design and Optimization Methods and Priority for the Advanced Engineering Environment
NASA Technical Reports Server (NTRS)
Rowell, Lawrence F.; Korte, John J.
2003-01-01
NASA's Advanced Engineering Environment (AEE) is a research and development program that will improve collaboration among design engineers for launch vehicle conceptual design and provide the infrastructure (methods and framework) necessary to enable that environment. In this paper, three major technical challenges facing the AEE program are identified, and three specific design problems are selected to demonstrate how advanced methods can improve current design activities. References are made to studies that demonstrate these design problems and methods, and these studies will provide the detailed information and check cases to support incorporation of these methods into the AEE. This paper provides background and terminology for discussing the launch vehicle conceptual design problem so that the diverse AEE user community can participate in prioritizing the AEE development effort.
A method for designing fiberglass sucker-rod strings with API RP 11L
Jennings, J.W.; Laine, R.E.
1991-02-01
This paper presents a method for using the API recommended practice for the design of sucker-rod pumping systems with fiberglass composite rod strings. The API method is useful for obtaining quick, approximate, preliminary design calculations. Equations for calculating all the composite material factors needed in the API calculations are given.
Application of Skeleton Method in Interconnection of Cae Programs Used in Vehicle Design
NASA Astrophysics Data System (ADS)
Bucha, Jozef; Gavačová, Jana; Milesich, Tomáš
2014-12-01
This paper deals with the application of the skeleton method as the main element of interconnection of CAE programs involved in the process of vehicle design. This article focuses on the utilization of the skeleton method for mutual connection of CATIA V5 and ADAMS/CAR. Both programs can be used simultaneously during various stages of vehicle design.
Human Reliability Analysis for Design: Using Reliability Methods for Human Factors Issues
Ronald Laurids Boring
2010-11-01
This paper reviews the application of human reliability analysis methods to human factors design issues. An application framework is sketched in which aspects of modeling typically found in human reliability analysis are used in a complementary fashion to the existing human factors phases of design and testing. The paper provides best achievable practices for design, testing, and modeling. Such best achievable practices may be used to evaluate a human-system interface in the context of design safety certifications.
NASA Astrophysics Data System (ADS)
Sparks, Andrew W.; Olson, Craig; Theisen, Michael J.; Addiego, Chris J.; Hutchins, Tiffany G.; Goodman, Timothy D.
2016-05-01
Performance models for infrared imaging systems require image quality parameters; optical design engineers need image quality design goals; systems engineers develop image quality allocations to test imaging systems against. It is a challenge to maintain consistency and traceability amongst the various expressions of image quality. We present a method and parametric tool for generating and managing expressions of image quality during the system modeling, requirements specification, design, and testing phases of an imaging system design and development project.
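One conventional way to keep model, design, and test expressions of image quality consistent, sketched below, is to treat the system MTF at a reference spatial frequency as the cascade (product) of subsystem contributions and check designs against an allocation. This is a generic illustration, not the parametric tool described in the abstract; all names and numbers are assumptions.

```python
def system_mtf(subsystem_mtfs):
    """Cascade rule: system MTF at a given spatial frequency is the
    product of the subsystem MTF contributions at that frequency."""
    out = 1.0
    for m in subsystem_mtfs:
        out *= m
    return out

def meets_allocation(subsystem_mtfs, allocation):
    """True if the cascaded system MTF satisfies its budgeted allocation."""
    return system_mtf(subsystem_mtfs) >= allocation
```

For example, hypothetical optics, detector, and motion contributions of 0.8, 0.9, and 0.95 cascade to 0.684, which would pass a 0.6 allocation but fail a 0.7 one.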
NASA Technical Reports Server (NTRS)
Liu, A. F.
1974-01-01
A systematic approach for applying methods for fracture control in the structural components of space vehicles consists of four major steps. The first step is to define the primary load-carrying structural elements and the type of load, environment, and design stress levels acting upon them. The second step is to identify the potential fracture-critical parts by means of a selection logic flow diagram. The third step is to evaluate the safe-life and fail-safe capabilities of the specified part. The last step in the sequence is to apply the control procedures that will prevent damage to the fracture-critical parts. The fracture control methods discussed include fatigue design and analysis methods, methods for preventing crack-like defects, fracture mechanics analysis methods, and nondestructive evaluation methods. An example problem is presented for evaluation of the safe-crack-growth capability of the space shuttle crew compartment skin structure.
NASA Astrophysics Data System (ADS)
Chen, Enguo; Zhuang, Zhenfeng; Cai, Jin; Liu, Yan; Yu, Feihong
2012-10-01
This paper presents a segment and spline synthesis optimization method (SSS method) for freeform total-internal-reflection (TIR) lens design. Before the optimization starts, a series of discrete control points is used to describe the TIR lens profile. For the initial optimization, the segment method is applied to optimize a linear-segmented TIR lens. The final optimization is then achieved by the spline optimization method, after which the cubic-spline-modeled TIR lens, with the characteristics of low cost and easy fabrication, satisfies the target illumination requirements. The detailed design principle and optimization process of the SSS method are analyzed and compared in the paper. Complementing each other, the synthesis of the segment and spline optimization methods realizes the prescribed design and greatly improves design efficiency. As an example, a specially designed polymethyl methacrylate (PMMA) freeform TIR lens for LED general lighting demonstrates the effectiveness of the method: the uniformity of the lens increases significantly, from 67% to 88%, after the segment and spline optimizations, respectively, and a high light output efficiency (LOE) of 99.3% is achieved within the target illumination area for the final lens system. It is believed that the SSS method can be applied to the design of other freeform illumination optics.
NASA Technical Reports Server (NTRS)
Unger, Eric R.; Hager, James O.; Agrawal, Shreekant
1999-01-01
This paper is a discussion of the supersonic nonlinear point design optimization efforts at McDonnell Douglas Aerospace under the High-Speed Research (HSR) program. The baseline for these optimization efforts has been the M2.4-7A configuration which represents an arrow-wing technology for the High-Speed Civil Transport (HSCT). Optimization work on this configuration began in early 1994 and continued into 1996. Initial work focused on optimization of the wing camber and twist on a wing/body configuration and reductions of 3.5 drag counts (Euler) were realized. The next phase of the optimization effort included fuselage camber along with the wing and a drag reduction of 5.0 counts was achieved. Including the effects of the nacelles and diverters into the optimization problem became the next focus where a reduction of 6.6 counts (Euler W/B/N/D) was eventually realized. The final two phases of the effort included a large set of constraints designed to make the final optimized configuration more realistic and they were successful albeit with a loss of performance.
A Systematic Composite Service Design Modeling Method Using Graph-Based Theory
Elhag, Arafat Abdulgader Mohammed; Mohamad, Radziah; Aziz, Muhammad Waqar; Zeshan, Furkh
2015-01-01
The composite service design modeling is an essential process of the service-oriented software development life cycle, where the candidate services, composite services, operations and their dependencies are required to be identified and specified before their design. However, a systematic service-oriented design modeling method for composite services is still in its infancy as most of the existing approaches provide the modeling of atomic services only. For these reasons, a new method (ComSDM) is proposed in this work for modeling the concept of service-oriented design to increase the reusability and decrease the complexity of system while keeping the service composition considerations in mind. Furthermore, the ComSDM method provides the mathematical representation of the components of service-oriented design using the graph-based theory to facilitate the design quality measurement. To demonstrate that the ComSDM method is also suitable for composite service design modeling of distributed embedded real-time systems along with enterprise software development, it is implemented in the case study of a smart home. The results of the case study not only check the applicability of ComSDM, but can also be used to validate the complexity and reusability of ComSDM. This also guides the future research towards the design quality measurement such as using the ComSDM method to measure the quality of composite service design in service-oriented software system. PMID:25928358
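A graph-based representation of a composite service design can be sketched as a directed dependency graph, with a topological sort confirming that the composition is acyclic. This is an illustrative sketch only, not the actual ComSDM formalism, and the smart-home service names are hypothetical.

```python
from collections import deque

def topological_order(deps):
    """Kahn's algorithm over {service: [services it depends on]}.
    Returns a valid composition order, or raises on a dependency cycle."""
    indegree = {s: len(needs) for s, needs in deps.items()}
    dependents = {s: [] for s in deps}
    for svc, needs in deps.items():
        for d in needs:
            dependents[d].append(svc)
    queue = deque(s for s, d in indegree.items() if d == 0)
    order = []
    while queue:
        s = queue.popleft()
        order.append(s)
        for t in dependents[s]:
            indegree[t] -= 1
            if indegree[t] == 0:
                queue.append(t)
    if len(order) != len(deps):
        raise ValueError("dependency cycle: design is not composable")
    return order

# Hypothetical smart-home composite service and its dependencies:
services = {
    "temperature_sensor": [],
    "hvac_control": ["temperature_sensor"],
    "energy_report": ["hvac_control", "temperature_sensor"],
}
```

On this graph the order places the atomic sensor service first and the composite reporting service last; metrics such as coupling or reuse counts could then be read off the same structure.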
Design, fabrication, and beam commissioning of a continuous-wave four-rod rf quadrupole
NASA Astrophysics Data System (ADS)
Yin, X. J.; Yuan, Y. J.; Xia, J. W.; He, Y.; Zhao, H. W.; Zhang, X. H.; Du, H.; Li, Z. S.; Li, X. N.; Jiang, P. Y.; Yang, Y. Q.; Ma, L. Z.; Wu, J. X.; Xu, Z.; Sun, L. T.; Zhang, W.; Zhang, X. Z.; Meng, J.; Zhou, Z. Z.; Yao, Q. G.; Cai, G. Z.; Lu, W.; Wang, H. N.; Chen, W. J.; Zhang, Y.; Xu, X. W.; Xie, W. J.; Lu, Y. R.; Zhu, K.; Liu, G.; Yan, X. Q.; Gao, S. L.; Wang, Z.; Chen, J. E.
2016-01-01
A new heavy-ion linac with a continuous-wave (CW) 4-rod radio-frequency quadrupole (RFQ) was designed and constructed as the injector for the separated-sector cyclotron (SSC) at the Heavy Ion Research Facility in Lanzhou (HIRFL). In this paper, we present the development of and the beam commissioning results for the 53.667 MHz CW RFQ. In the beam dynamics design, the transverse phase advance at zero current, σ0⊥, is maintained at a relatively high level compared with the longitudinal phase advance (σ0∥) to avoid parametric resonance. A quasi-equipartitioning design strategy was applied to control emittance growth and beam loss. The installation error of the electrodes was checked using a FARO 3D measurement arm during the manufacturing procedure. This method represents a new approach to measuring the position shifts of electrodes in a laboratory environment and provides information regarding manufacturing quality. The results of rf measurements exhibited general agreement with simulation results obtained using the CST code. During on-line beam testing of the RFQ, two kinds of ion beams (40Ar8+ and 16O5+) were transported and accelerated to 142.8 keV/u. These results demonstrate that the SSC-Linac has made significant progress, and the design scheme and technological experience developed in this work can be applied to future CW RFQs.
NASA Astrophysics Data System (ADS)
Adrich, Przemysław
2016-05-01
In Part I of this work a new method for designing dual-foil electron beam forming systems was introduced. In this method, an optimal configuration of the dual-foil system is found by means of a systematic, automated scan of system performance as a function of its parameters. At each point of the scan, the Monte Carlo method is used to calculate the off-axis dose profile in water, taking into account the detailed and complete geometry of the system. The new method, while computationally intensive, minimizes the involvement of the designer. In this Part II paper, the feasibility of a practical implementation of the new method is demonstrated. To this end, prototype software tools were developed and applied to solve a real-life design problem. It is demonstrated that system optimization can be completed within a few hours using rather moderate computing resources. It is also demonstrated that, perhaps for the first time, the designer can gain deep insight into system behavior, such that the construction can be optimized simultaneously with respect to a number of functional characteristics besides the flatness of the off-axis dose profile. In the presented example, the system is optimized with respect to both the flatness of the off-axis dose profile and the beam transmission. A number of practical issues related to the application of the new method, as well as its possible extensions, are discussed.
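The structure of such a scan-based optimization can be sketched in a few lines. The dose model below is a toy analytical stand-in for the Monte Carlo calculation (every function name and coefficient is hypothetical); the point is the shape of the procedure: an exhaustive scan over both foil thicknesses, scored jointly on profile flatness and beam transmission.

```python
import itertools
import math

def dose_profile(foil1_um, foil2_um, radii):
    # Toy stand-in for the Monte Carlo dose calculation (illustrative only):
    # beam spread grows with total foil thickness.
    sigma = 5.0 + 0.8 * foil1_um + 0.3 * foil2_um
    return [math.exp(-(r / sigma) ** 2) for r in radii]

def flatness(profile):
    return min(profile) / max(profile)  # 1.0 = perfectly flat off-axis profile

def transmission(foil1_um, foil2_um):
    # Thicker foils scatter more beam out of the field (toy model).
    return math.exp(-0.02 * (foil1_um + foil2_um))

radii = [0.0, 2.0, 4.0, 6.0, 8.0]
best = max(
    itertools.product(range(1, 11), range(1, 11)),  # scan both thicknesses, in um
    key=lambda p: flatness(dose_profile(*p, radii)) * transmission(*p),
)
```

In the real method each `key` evaluation is a full Monte Carlo run, which is what makes the approach computationally intensive but designer-light.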
Design predictions and diagnostic test methods for hydronic heating systems in ASHRAE standard 152P
Andrews, J.W.
1996-04-01
A new method of test for residential thermal distribution efficiency is currently being developed under the auspices of the American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE). The initial version of this test method is expected to have two main approaches, or "pathways," designated Design and Diagnostic. The Design Pathway will use builder's information to predict thermal distribution efficiency in new construction. The Diagnostic Pathway will use simple tests to evaluate thermal distribution efficiency in a completed house. Both forced-air and hydronic systems are included in the test method. This report describes an approach to predicting and measuring thermal distribution efficiency for residential hydronic heating systems for use in the Design and Diagnostic Pathways of the test method. As written, it is designed for single-loop systems with any type of passive radiation/convection (baseboard or radiators). Multiloop capability may be added later.
NASA Astrophysics Data System (ADS)
Tsai, Chung-Yu
2016-07-01
A free-form (FF) surface design method is proposed for a nonaxial-symmetrical projector system comprising an FF reflector and a light source. The profile of the reflector is designed using a nonaxial-symmetrical FF (NFF) surface construction method such that each incident ray is directed in such a way as to form a user-specified image pattern on the target region of the image plane. The light ray paths within the projection system are analyzed using an exact analytical model and a skew-ray tracing approach. The validity of the proposed NFF design method is demonstrated by means of ZEMAX simulations. It is shown that the image pattern formed on the target region of the image plane is in good agreement with that specified by the user. The NFF method is mathematically straightforward and easily implemented in computer code. As such, it provides a useful tool for the design and analysis stages of optical systems design.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
A Method for the Constrained Design of Natural Laminar Flow Airfoils
NASA Technical Reports Server (NTRS)
Green, Bradford E.; Whitesides, John L.; Campbell, Richard L.; Mineck, Raymond E.
1996-01-01
A fully automated iterative design method has been developed by which an airfoil with a substantial amount of natural laminar flow can be designed while maintaining other aerodynamic and geometric constraints. Drag reductions have been realized using the design method over a range of Mach numbers, Reynolds numbers, and airfoil thicknesses. The strengths of the method are its ability to calculate a target N-Factor distribution that forces the flow to undergo transition at the desired location; the target-pressure-N-Factor relationship that is used to reduce the N-Factors in order to prolong transition; and its ability to design airfoils that meet lift, pitching-moment, thickness, and leading-edge-radius constraints in addition to the natural laminar flow constraint. The method uses several existing CFD codes and can design a new airfoil in only a few days using a Silicon Graphics IRIS workstation.
Optical design method of freeform lens for a high-power extended LED source
NASA Astrophysics Data System (ADS)
Wang, Hong; Du, Naifeng; Wu, Yuefeng; Huang, Huamao
2012-10-01
In view of the limitations of LED optical design methods that assume an ideal point source, a new uniform-illumination optical design method of a freeform lens for a high-power LED is presented in this paper. By establishing an energy correspondence between the extended LED source and the point illumination of the receiving surface, a freeform lens optical model achieving uniform illumination in the target plane is obtained. The simulated uniform light-intensity curve of the model is compared with that of a lens designed by an approximate point-source method. The results show that the new method can effectively overcome the shortcomings of the point-source design and controls more accurately the correspondence between light energy and the outgoing direction of light. The illumination uniformity of the freeform lens is greater than 75%, which meets the design requirements.
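Illumination-uniformity figures like the 75% quoted above are typically computed as the minimum illuminance over the average (or maximum) across sample points on the target plane. A minimal sketch of the min/average variant (the sample values are made up, and the abstract does not state which variant the authors used):

```python
def uniformity(illuminance):
    """Uniformity figure of merit: minimum over average illuminance
    across sample points on the target plane (1.0 = perfectly uniform)."""
    return min(illuminance) / (sum(illuminance) / len(illuminance))

# Hypothetical illuminance samples across the target plane:
u = uniformity([80.0, 90.0, 100.0, 90.0, 80.0])
```

For these sample values the figure is about 0.91, comfortably above the 0.75 design requirement cited in the abstract.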
A procedural method for the efficient implementation of full-custom VLSI designs
NASA Technical Reports Server (NTRS)
Belk, P.; Hickey, N.
1987-01-01
An embedded language system for the layout of very-large-scale integration (VLSI) circuits is examined. It is shown that, through judicious use of this system, a large variety of circuits can be designed with circuit density and performance comparable to traditional full-custom design methods, but with design costs closer to those of semi-custom design methods. The high performance of this methodology is attributable to the flexibility of procedural descriptions of VLSI layouts and to a number of automatic and semi-automatic tools within the system.
Design of subwavelength binary micro-optics using a gradient optimization method
NASA Astrophysics Data System (ADS)
Nesterenko, Dmitry V.; Kotlyar, Victor V.
2001-12-01
Various rigorous methods have been developed for the efficient analysis of diffractive optical elements (DOEs). We apply a gradient synthesis algorithm to design two-dimensional DOEs for the diffraction of TE-polarized electromagnetic waves, using a hybrid finite element - boundary element method. The hybrid method is capable of modeling inhomogeneous DOEs in unbounded free space in a computationally efficient manner. In this paper we discuss the application of the gradient optimization method to the matrix notation of the hybrid method. Such an application makes it possible to analyze DOE profiles with a large number of features and overcomes the limitation of calculation time growing with the number of DOE modifications. We use the gradient method to design binary-phase lenses with subwavelength features. Although we have considered only binary-phase lenses, the gradient method presented is also suitable for designing continuous-relief DOEs.
NASA Astrophysics Data System (ADS)
Cui, Ying; Baik, Kiho; Gleason, Bob; Tavassoli, Malahat
2006-10-01
Design Based Metrology (DBM) requires an integrated process from design to metrology, and the first and key step of this integration is to translate design CD lists into metrology measurement recipes. Design CD lists can come from different sources, such as design rule check, OPC validation, or yield analysis. These design CD lists cannot be used directly to create metrology tool recipes, since tool recipe makers usually require specific information for each CD site, or a measurement matrix. The manual process of identifying the measurement matrix for each design CD site can be very difficult, especially when the list runs to hundreds of sites or more. This paper addresses this issue and proposes a method to automate Design CD Identification (DCDI) using a new CD Pattern Vector (CDPV) library.
A Government/Industry Summary of the Design Analysis Methods for Vibrations (DAMVIBS) Program
NASA Technical Reports Server (NTRS)
Kvaternik, Raymond G. (Compiler)
1993-01-01
The NASA Langley Research Center in 1984 initiated a rotorcraft structural dynamics program, designated DAMVIBS (Design Analysis Methods for VIBrationS), with the objective of establishing the technology base needed by the rotorcraft industry for developing an advanced finite-element-based dynamics design analysis capability for vibrations. An assessment of the program showed that the DAMVIBS Program has resulted in notable technical achievements and major changes in industrial design practice, all of which have significantly advanced the industry's capability to use and rely on finite-element-based dynamics analyses during the design process.
NASA Technical Reports Server (NTRS)
English, Robert E; Cavicchi, Richard H
1951-01-01
Empirical methods of Ainley and of Kochendorfer and Nettles were used to predict the performance of nine turbine designs. Measured and predicted performances were compared, and appropriate values of the blade-loss parameter were determined for the method of Kochendorfer and Nettles. The measured design-point efficiencies were lower than predicted by as much as 0.09 (Ainley) and 0.07 (Kochendorfer and Nettles). For the method of Kochendorfer and Nettles, appropriate values of the blade-loss parameter ranged from 0.63 to 0.87, and the off-design performance was accurately predicted.
NASA Technical Reports Server (NTRS)
Zang, Thomas A.; Hemsch, Michael J.; Hilburger, Mark W.; Kenny, Sean P; Luckring, James M.; Maghami, Peiman; Padula, Sharon L.; Stroud, W. Jefferson
2002-01-01
This report consists of a survey of the state of the art in uncertainty-based design, together with recommendations for a base research activity in this area for the NASA Langley Research Center. The report identifies the needs and opportunities for computational and experimental methods that provide accurate, efficient solutions to nondeterministic multidisciplinary aerospace vehicle design problems. Barriers to the adoption of uncertainty-based design methods are identified, and the benefits of using such methods are explained. Particular research needs are listed.
Layer-by-layer design method for soft-X-ray multilayers
NASA Technical Reports Server (NTRS)
Yamamoto, Masaki; Namioka, Takeshi
1992-01-01
A new design method effective for a nontransparent system has been developed for soft-X-ray multilayers with the aid of graphic representation of the complex amplitude reflectance in a Gaussian plane. The method provides an effective means of attaining the absolute maximum reflectance on a layer-by-layer basis and also gives clear insight into the evolution of the amplitude reflectance on a multilayer as it builds up. An optical criterion is derived for the selection of a proper pair of materials needed for designing a high-reflectance multilayer. Some examples are given to illustrate the usefulness of this design method.
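The layer-by-layer evolution of the complex amplitude reflectance that the abstract describes can be sketched with the standard normal-incidence thin-film recursion (a generic Fresnel recursion, not the authors' graphical procedure; the refractive indices below are illustrative). Each call updates the reflectance after one more layer is deposited, which is exactly the quantity whose trajectory in the Gaussian (complex) plane the method tracks:

```python
import cmath

def fresnel_r(n_above, n_below):
    """Normal-incidence amplitude reflectance at a single interface."""
    return (n_above - n_below) / (n_above + n_below)

def add_layer(r_below, n_layer, n_above, thickness, wavelength):
    """Complex amplitude reflectance after depositing one more layer
    of index n_layer on a stack whose reflectance was r_below."""
    phase = cmath.exp(4j * cmath.pi * n_layer * thickness / wavelength)
    r_top = fresnel_r(n_above, n_layer)
    return (r_top + r_below * phase) / (1 + r_top * r_below * phase)

# Quarter-wave layer (n = 1.5) on a substrate (n = 2.0) in vacuum:
wl = 1.0
r_sub = fresnel_r(1.5, 2.0)                       # interface seen from inside the layer
r = add_layer(r_sub, 1.5, 1.0, wl / (4 * 1.5), wl)
```

Here |r| drops below the bare-substrate value |(1-2)/(1+2)| ≈ 0.333, the familiar antireflection effect; in the soft-X-ray regime the indices are complex and absorption dominates, which is the nontransparent case the layer-by-layer method targets.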
NASA Astrophysics Data System (ADS)
Ding, Xiaohong; Ji, Xuerong; Ma, Man; Hou, Jianyun
2013-11-01
The application of the adaptive growth method is limited because several key techniques during the design process need manual intervention by designers. Key techniques of the method, including ground structure construction and seed selection, are studied so as to improve the effectiveness and applicability of the adaptive growth method in stiffener layout design optimization of plates and shells. Three schemes of ground structures, comprising different shell and beam elements, are proposed. It is found that the main stiffener layouts resulting from different ground structures are almost the same, but the ground structure composed of 8-node shell elements together with 3-node and 2-node beam elements results in the clearest stiffener layout and has good adaptability and low computational cost. An automatic seed-selection approach is proposed, based on the rules that seeds should be positioned where the structural strain energy is large for the minimum compliance problem, and should satisfy a dispersion requirement. The adaptive growth method with the suggested key techniques is integrated into an ANSYS-based program, which provides a design tool for stiffener layout design optimization of plates and shells. Typical design examples are illustrated, including plate and shell structures designed for minimum compliance and maximum buckling stability. In addition, as a practical mechanical design example, the stiffener layout of an inlet structure for a large-scale electrostatic precipitator is also demonstrated. The design results show that the adaptive growth method integrated with the suggested key techniques can effectively and flexibly handle stiffener layout design problems for plates and shells with complex geometry and loading conditions to achieve various design objectives, thus providing a new solution method for engineering structural topology design optimization.
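The automatic seed-selection rule described above, high strain energy combined with a dispersion requirement, can be sketched as a greedy filter. The data layout and function name below are hypothetical (the authors' implementation is an ANSYS-based program):

```python
def select_seeds(candidates, min_spacing, n_seeds):
    """Greedy seed selection for stiffener growth.

    candidates: list of (x, y, strain_energy) tuples.
    Picks the highest-energy points first, rejecting any point closer
    than min_spacing to an already chosen seed (dispersion requirement).
    """
    seeds = []
    for x, y, e in sorted(candidates, key=lambda c: -c[2]):  # highest energy first
        if all((x - sx) ** 2 + (y - sy) ** 2 >= min_spacing ** 2
               for sx, sy, _ in seeds):
            seeds.append((x, y, e))
        if len(seeds) == n_seeds:
            break
    return seeds
```

For the minimum-compliance problem, the strain energy plays the role of the growth criterion, so the greedy ordering mirrors the rule quoted in the abstract.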
Small-Tip-Angle Spokes Pulse Design Using Interleaved Greedy and Local Optimization Methods
Grissom, William A.; Khalighi, Mohammad-Mehdi; Sacolick, Laura I.; Rutt, Brian K.; Vogel, Mika W.
2013-01-01
Current spokes pulse design methods can be grouped into methods based either on sparse approximation or on iterative local (gradient descent-based) optimization of the transverse-plane spatial frequency locations visited by the spokes. These two classes of methods have complementary strengths and weaknesses: sparse approximation-based methods perform an efficient search over a large swath of candidate spatial frequency locations but most are incompatible with off-resonance compensation, multifrequency designs, and target phase relaxation, while local methods can accommodate off-resonance and target phase relaxation but are sensitive to initialization and suboptimal local cost function minima. This article introduces a method that interleaves local iterations, which optimize the radiofrequency pulses, target phase patterns, and spatial frequency locations, with a greedy method to choose new locations. Simulations and experiments at 3 and 7 T show that the method consistently produces single- and multifrequency spokes pulses with lower flip angle inhomogeneity compared to current methods. PMID:22392822
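The interleaving idea, a greedy choice of each new spatial-frequency location alternating with local refinement of everything chosen so far, can be shown on a toy one-dimensional analogue. The pulse-design cost and gradient machinery are replaced here by a trivial sum-matching objective; nothing below is the authors' algorithm, only the control structure it shares:

```python
TARGET = 10.0
CANDIDATES = [1.0, 2.0, 3.0, 4.0, 5.0]   # stand-ins for candidate k-space locations

def score(xs):
    return -abs(sum(xs) - TARGET)         # toy cost: match a target sum

def local_refine(xs):
    diff = sum(xs) - TARGET               # exact correction for this toy cost,
    return [x - diff / len(xs) for x in xs]  # standing in for gradient descent

def interleaved(n_picks):
    chosen = []
    for _ in range(n_picks):
        # Greedy step: scan all candidates for the best addition.
        best = max(CANDIDATES, key=lambda c: score(chosen + [c]))
        # Local step: refine every location chosen so far.
        chosen = local_refine(chosen + [best])
    return chosen

result = interleaved(2)
```

The greedy step gives the broad search over candidate locations that sparse-approximation methods provide, while the local step supplies the initialization-robust refinement, matching the complementary strengths the abstract identifies.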
Hybrid airfoil design methods for full-scale ice accretion simulation
NASA Astrophysics Data System (ADS)
Saeed, Farooq
The objective of this thesis is to develop a design method, together with a design philosophy, that allows the design of "subscale" or "hybrid" airfoils that simulate full-scale ice accretions. These hybrid airfoils have full-scale leading edges and redesigned aft sections. A preliminary study to help develop a design philosophy for hybrid airfoils showed that they could be designed to simulate full-scale airfoil droplet-impingement characteristics and, therefore, ice accretion. The study showed that the primary objective in such a design should be to determine the aft-section profile that provides the circulation necessary for simulating full-scale airfoil droplet-impingement characteristics; the outcome, therefore, reveals circulation control as the main design variable. To best utilize this fact, this thesis describes two innovative airfoil design methods for the design of hybrid airfoils. Of the two, one uses a conventional flap system, while the other suggests boundary-layer control through slot suction on the airfoil upper surface as a possible alternative for circulation control. The formulation of each design method is described in detail, and the results from each are validated using wind-tunnel test data. The thesis demonstrates the capabilities of each method with specific design examples highlighting their application potential. In particular, the flap-system-based hybrid airfoil design method is used to demonstrate the design of a half-scale hybrid model of a full-scale airfoil that simulates full-scale ice accretion at both design and off-design conditions. The full-scale airfoil used is representative of a scaled modern business-jet main wing section. The study suggests some useful advantages of using hybrid airfoils as opposed to full-scale airfoils for a better understanding of the ice accretion process and related issues.
A New Method to Design Cam Used in Automobile Heating, Ventilating and Cooling System
NASA Astrophysics Data System (ADS)
Singh, B.; Singh, D.; Saini, J. S.
2012-10-01
With the automotive air-conditioning industry aiming at better quality, cost effectiveness, and short time to market, the need for simulation is at an all-time high. In the present study, the airflow control mechanism of an automotive heating, ventilating and cooling module, which opens various doors/dampers, was kinematically analyzed. A new method for cam design was developed which is faster and simpler than the existing oscillating-link method. The existing design was modified for the same output using the new cam design method. It is shown that the torque required in the modified design is less than that in the existing design, thus lowering the effort required to rotate the cam from the control panel.
Hydrodynamic Optimization Method and Design Code for Stall-Regulated Hydrokinetic Turbine Rotors
Sale, D.; Jonkman, J.; Musial, W.
2009-08-01
This report describes the adaptation of a wind turbine performance code for use in the development of a general use design code and optimization method for stall-regulated horizontal-axis hydrokinetic turbine rotors. This rotor optimization code couples a modern genetic algorithm and blade-element momentum performance code in a user-friendly graphical user interface (GUI) that allows for rapid and intuitive design of optimal stall-regulated rotors. This optimization method calculates the optimal chord, twist, and hydrofoil distributions which maximize the hydrodynamic efficiency and ensure that the rotor produces an ideal power curve and avoids cavitation. Optimizing a rotor for maximum efficiency does not necessarily create a turbine with the lowest cost of energy, but maximizing the efficiency is an excellent criterion to use as a first pass in the design process. To test the capabilities of this optimization method, two conceptual rotors were designed which successfully met the design objectives.
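The coupling of a genetic algorithm to a performance evaluation, as described for the rotor optimization code, can be sketched compactly. The hydrodynamic surrogate and cavitation constraint below are toy stand-ins for the blade-element momentum code (all names and coefficients are hypothetical); only the GA structure, selection, crossover, mutation, and constraint rejection, reflects the approach described:

```python
import random

random.seed(0)

def efficiency(chord, twist_deg):
    # Toy surrogate for the blade-element momentum evaluation (illustrative).
    return -((chord - 0.8) ** 2) - ((twist_deg - 12.0) / 10.0) ** 2

def cavitates(chord, twist_deg):
    return chord < 0.3           # stand-in for the cavitation constraint

def evolve(pop, generations=40):
    for _ in range(generations):
        pop.sort(key=lambda p: efficiency(*p), reverse=True)
        parents = pop[: len(pop) // 2]           # selection (elitist)
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            child = ((a[0] + b[0]) / 2 + random.gauss(0, 0.05),   # crossover +
                     (a[1] + b[1]) / 2 + random.gauss(0, 0.5))    # mutation
            if not cavitates(*child):            # reject cavitating designs
                children.append(child)
        pop = parents + children
    return max(pop, key=lambda p: efficiency(*p))

best = evolve([(random.uniform(0.3, 2.0), random.uniform(0.0, 30.0))
               for _ in range(20)])
```

In the real code each candidate carries full chord, twist, and hydrofoil distributions, and the fitness combines efficiency with power-curve and cavitation criteria rather than a single surrogate.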
NASA Technical Reports Server (NTRS)
2004-01-01
The grant closure report is organized into the following chapters. Chapter 1 describes the two research areas, design optimization and solid mechanics. Ten journal publications are listed in Chapter 2. Five highlights are the subject matter of Chapter 3. CHAPTER 1. The Design Optimization Test Bed CometBoards. CHAPTER 2. Solid Mechanics: Integrated Force Method of Analysis. CHAPTER 3. Five Highlights: Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft. Neural Network and Regression Soft Model Extended for PX-300 Aircraft Engine. Engine with Regression and Neural Network Approximators Designed. Cascade Optimization Strategy with Neural Network and Regression Approximations Demonstrated on a Preliminary Aircraft Engine Design. Neural Network and Regression Approximations Used in Aircraft Design.
Development of a conceptual flight vehicle design weight estimation method library and documentation
NASA Astrophysics Data System (ADS)
Walker, Andrew S.
The state of the art in estimating the volumetric size and mass of flight vehicles is held today by an elite group of engineers in the aerospace conceptual design industry; it is not a skill readily accessible or taught in academia. To estimate flight vehicle mass properties, many aerospace engineering students are encouraged to read the latest design textbooks, learn how to use a few basic statistical equations, and plunge into the details of parametric mass properties analysis. Specifications for, and a prototype of, a standardized engineering "tool-box" of conceptual and preliminary design weight estimation methods were developed to manage the growing and ever-changing body of weight estimation knowledge and to bridge the gap in mass properties education for aerospace engineering students. The Weight Method Library will also serve as a living document for future aerospace students. This "tool-box" consists of a weight estimation method bibliography containing unclassified, open-source literature for the conceptual and preliminary flight vehicle design phases. Transport aircraft validation cases have been applied to each entry in the AVD Weight Method Library in order to provide a sense of context and applicability to each method. The weight methodology validation results indicate consensus and agreement among the individual methods. This generic specification of a method library will be applicable for use by other disciplines within the AVD Lab, post-graduate design labs, or engineering design professionals.
Bumpus, S.E.; Johnson, J.J.; Smith, P.D.
1980-07-01
The concept of how two techniques, the Best Estimate Method and the Evaluation Method, may be applied to the traditional seismic analysis and design of a nuclear power plant is introduced. Only the four links of the seismic analysis and design methodology chain (SMC) are considered: seismic input, soil-structure interaction, major structural response, and subsystem response. The objective is to evaluate the compounding of conservatisms in the seismic analysis and design of nuclear power plants, to provide guidance for judgments in the SMC, and to concentrate the evaluation on the part of seismic analysis and design that is familiar to the engineering community. An example applies the effects of three-dimensional excitations to the model of a nuclear power plant structure. The example demonstrates how conservatisms accrue by coupling two links in the SMC and comparing the results to the effects of one link alone. The utility of employing the Best Estimate Method versus the Evaluation Method is also demonstrated.
Schurig, David
2007-02-21
I will explain how the transformation design method can yield a material specification that possesses the same electromagnetic behavior as a fairly general set of imagined space-time topologies. This method has been used to design invisibility cloaks, but the method is quite general and can be used to design a wide variety of interesting devices that guide, concentrate or shape electromagnetic fields in ways that would be difficult to manage with other design methodologies. Applications range from stealth to energy conversion and distribution to wireless communications to biomedical imaging. The drawback of the method is the complexity of the material specifications that it produces, which are in general anisotropic and inhomogeneous. Only with recent advances in the field of metamaterials can these specifications be realized. I will discuss how metamaterials accomplish this and what their limitations are, e.g. bandwidth, loss, frequency range etc. I will discuss in detail the recent implementation of an invisibility cloak in the microwave spectrum.
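The transformation design method rests on a compact rule: a coordinate transformation with Jacobian J maps free space (or any starting medium) into an equivalent material with permittivity and permeability ε' = J ε Jᵀ / det(J) (and likewise for μ). A minimal sketch of that rule (the function name is ours; the formula is the standard transformation-optics prescription):

```python
import numpy as np

def transformed_permittivity(jacobian, eps=np.eye(3)):
    """Transformation-optics material rule: eps' = J eps J^T / det(J).

    jacobian: 3x3 Jacobian of the coordinate transformation, evaluated
    at a point; eps: permittivity tensor of the starting medium.
    The same rule applies to the permeability tensor.
    """
    J = np.asarray(jacobian, dtype=float)
    return J @ eps @ J.T / np.linalg.det(J)

# Stretching x by a factor of 2 (x' = 2x) in vacuum:
eps_prime = transformed_permittivity(np.diag([2.0, 1.0, 1.0]))
```

The identity transformation returns free space unchanged, while any nontrivial Jacobian yields the anisotropic, inhomogeneous specifications the talk identifies as the method's drawback, hence the reliance on metamaterials.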
The DDBD Method In The A-Seismic Design of Anchored Diaphragm Walls
Manuela, Cecconi; Vincenzo, Pane; Sara, Vecchietti
2008-07-08
The development of displacement-based approaches for earthquake engineering design appears to be very useful and capable of providing improved reliability by directly comparing computed response with expected structural performance. In particular, the design procedure known as the Direct Displacement Based Design (DDBD) method, which has been developed in structural engineering over the past ten years in an attempt to mitigate some of the deficiencies of current force-based design methods, has been shown to be effective and promising ([1], [2]). The first attempts at applying the procedure to geotechnical engineering and, in particular, earth retaining structures are discussed in [3], [4] and [5]. In this field, however, the outcomes of the research still need to be investigated in many respects. The paper focuses on the application of the DDBD method to anchored diaphragm walls. The results of the DDBD method are discussed in detail and compared with those obtained from conventional pseudo-static analyses.
A new statistical method for design and analyses of component tolerance
NASA Astrophysics Data System (ADS)
Movahedi, Mohammad Mehdi; Khounsiavash, Mohsen; Otadi, Mahmood; Mosleh, Maryam
2016-09-01
Tolerancing conducted by design engineers to meet customers' needs is a prerequisite for producing high-quality products. Engineers use handbooks to conduct tolerancing. While the use of statistical methods for tolerancing is not new, engineers often assume known distributions, including the normal distribution. Yet, if the statistical distribution of the given variable is unknown, a new statistical method must be employed to design tolerances. In this paper, we use the generalized lambda distribution for the design and analysis of component tolerance, and the percentile method (PM) to estimate the distribution parameters. The findings indicate that, when the distribution of the component data is unknown, the proposed method can be used to expedite the design of component tolerances. Moreover, in the case of assembled sets, a more extensive tolerance for each component with the same target performance can be utilized.
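The core idea, setting tolerance limits from percentiles rather than from an assumed normal distribution, can be sketched without the generalized lambda machinery. The sketch below computes distribution-free limits directly from empirical percentiles (the function names are ours; the paper's percentile method instead fits the four lambda parameters to sample percentiles and reads the limits off the fitted distribution):

```python
def percentile(sorted_data, p):
    """Linear-interpolation percentile, p in [0, 100], on pre-sorted data."""
    k = (len(sorted_data) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(sorted_data) - 1)
    return sorted_data[lo] + (sorted_data[hi] - sorted_data[lo]) * (k - lo)

def tolerance_limits(data, coverage=0.99):
    """Distribution-free tolerance band covering the central `coverage`
    fraction of the sample, symmetric in probability."""
    s = sorted(data)
    tail = (1.0 - coverage) / 2.0 * 100.0
    return percentile(s, tail), percentile(s, 100.0 - tail)

# Toy measurement data: integer readings 0..100.
limits = tolerance_limits(list(range(101)))
```

Because no distributional form is assumed, the same two lines apply whether the component data are normal, skewed, or heavy-tailed, which is the situation the paper targets.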
The ROM Design with Half Grouping Compression Method for Chip Area and Power Consumption Reduction
NASA Astrophysics Data System (ADS)
Jung, Ki-Sang; Kim, Kang-Jik; Kim, Young-Eun; Chung, Jin-Gyun; Pyun, Ki-Hyun; Lee, Jong-Yeol; Jeong, Hang-Geun; Cho, Seong-Ik
In memory design, the key issues are smaller size and lower power. Most of the power used in a ROM is consumed by line capacitances such as address lines, word lines, bit lines, and the decoder. This paper presents a ROM design with a novel HG (Half Grouping) compression method that reduces the parasitic capacitance of the bit lines and the area of the row decoder, thereby reducing power consumption and chip area. The ROM design result for a 512-point FFT block shows that the proposed method reduces area by 40.6%, power by 42.12%, and transistor count by 37.82% in comparison with the conventional method. The ROM designed with the proposed method is implemented in a 0.35µm CMOS process. It consumes 5.8mW at 100MHz with a single 3.3V power supply.
The Importance of Adhering to Details of the Total Design Method (TDM) for Mail Surveys.
ERIC Educational Resources Information Center
Dillman, Don A.; And Others
1984-01-01
The empirical effects of adherence to details of the Total Design Method (TDM) approach to the design of mail surveys are discussed, based on the implementation of a common survey in 11 different states. The results suggest that greater adherence results in higher response rates, especially in the later stages of the TDM. (BW)
Paragogy and Flipped Assessment: Experience of Designing and Running a MOOC on Research Methods
ERIC Educational Resources Information Center
Lee, Yenn; Rofe, J. Simon
2016-01-01
This study draws on the authors' first-hand experience of designing, developing and delivering (3Ds) a massive open online course (MOOC) entitled "Understanding Research Methods" since 2014, largely but not exclusively for learners in the humanities and social sciences. The greatest challenge facing us was to design an assessment…
ERIC Educational Resources Information Center
Honebein, Peter C.; Honebein, Cass H.
2014-01-01
Instructional theory is intended to guide instructional designers in selecting the best instructional methods for a given situation. There have been numerous qualitative investigations into how instructional designers make decisions and the alignment of those decisions with theoretical influences. The purpose of this research is to more…
A Typology of Mixed Methods Sampling Designs in Social Science Research
ERIC Educational Resources Information Center
Onwuegbuzie, Anthony J.; Collins, Kathleen M. T.
2007-01-01
This paper provides a framework for developing sampling designs in mixed methods research. First, we present sampling schemes that have been associated with quantitative and qualitative research. Second, we discuss sample size considerations and provide sample size recommendations for each of the major research designs for quantitative and…
Rationale, Design, and Methods of the Preschool ADHD Treatment Study (PATS)
ERIC Educational Resources Information Center
Kollins, Scott; Greenhill, Laurence; Swanson, James; Wigal, Sharon; Abikoff, Howard; McCracken, James; Riddle, Mark; McGough, James; Vitiello, Benedetto; Wigal, Tim; Skrobala, Anne; Posner, Kelly; Ghuman, Jaswinder; Davies, Mark; Cunningham, Charles; Bauzo, Audrey
2006-01-01
Objective: To describe the rationale and design of the Preschool ADHD Treatment Study (PATS). Method: PATS was a National Institutes of Mental Health-funded, multicenter, randomized, efficacy trial designed to evaluate the short-term (5 weeks) efficacy and long-term (40 weeks) safety of methylphenidate (MPH) in preschoolers with…
Connecting Generations: Developing Co-Design Methods for Older Adults and Children
ERIC Educational Resources Information Center
Xie, Bo; Druin, Allison; Fails, Jerry; Massey, Sheri; Golub, Evan; Franckel, Sonia; Schneider, Kiki
2012-01-01
As new technologies emerge that can bring older adults together with children, little has been discussed by researchers concerning the design methods used to create these new technologies. Giving both children and older adults a voice in a shared design process comes with many challenges. This paper details an exploratory study focusing on…
Curiosity and Pedagogy: A Mixed-Methods Study of Student Experiences in the Design Studio
ERIC Educational Resources Information Center
Smith, Korydon H.
2010-01-01
Curiosity is often considered the foundation of learning. There is, however, little understanding of how (or if) pedagogy in higher education affects student curiosity, especially in the studio setting of architecture, interior design, and landscape architecture. This study used mixed-methods to investigate curiosity among design students in the…
An Empirical Comparison of Five Linear Equating Methods for the NEAT Design
ERIC Educational Resources Information Center
Suh, Youngsuk; Mroch, Andrew A.; Kane, Michael T.; Ripkey, Douglas R.
2009-01-01
In this study, a data base containing the responses of 40,000 candidates to 90 multiple-choice questions was used to mimic data sets for 50-item tests under the "nonequivalent groups with anchor test" (NEAT) design. Using these smaller data sets, we evaluated the performance of five linear equating methods for the NEAT design with five levels of…
A Comparison of Diary Method Variations for Enlightening Form Generation in the Design Process
ERIC Educational Resources Information Center
Babapour, Maral; Rehammar, Bjorn; Rahe, Ulrike
2012-01-01
This paper presents two studies in which an empirical approach was taken to understand and explain form generation and decisions taken in the design process. In particular, the activities addressing aesthetic aspects when exteriorising form ideas in the design process have been the focus of the present study. Diary methods were the starting point…
Treatment of Early-Onset Schizophrenia Spectrum Disorders (TEOSS): Rationale, Design, and Methods
ERIC Educational Resources Information Center
McClellan, Jon; Sikich, Linmarie; Findling, Robert L.; Frazier, Jean A.; Vitiello, Benedetto; Hlastala, Stefanie A.; Williams, Emily; Ambler, Denisse; Hunt-Harrison, Tyehimba; Maloney, Ann E.; Ritz, Louise; Anderson, Robert; Hamer, Robert M.; Lieberman, Jeffrey A.
2007-01-01
Objective: The Treatment of Early Onset Schizophrenia Spectrum Disorders Study is a publicly funded clinical trial designed to compare the therapeutic benefits, safety, and tolerability of risperidone, olanzapine, and molindone in youths with early-onset schizophrenia spectrum disorders. The rationale, design, and methods of the Treatment of Early…
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-01
... Hot Block Dilute Acid and Hydrogen Peroxide Filter Extraction'' In this method, total suspended... and nitric acid and two aliquots of hydrogen peroxide, for a total of two and a half hours extraction... Coupled Plasma Mass Spectrometry (ICP-MS) with Hot Block Dilute Acid and Hydrogen Peroxide...
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
Progress in the direct-inverse wing design method in curvilinear coordinates has been made. This includes the remedying of a spanwise oscillation problem and the assessment of grid skewness, viscous interaction, and the initial airfoil section on the final design. It was found that, in response to the spanwise oscillation problem, designing at every other spanwise station produced the best results for the cases presented; that a smoothly varying grid is especially needed for accurate design at the wing tip; that the boundary layer displacement thicknesses must be included in a successful wing design; that the design of high and medium aspect ratio wings is possible with this code; and that the final airfoil section designed is fairly independent of the initial section.
Compact lens design for LED chip array using supporting surface method
NASA Astrophysics Data System (ADS)
Zhang, Xiaohui; Chen, Chen
2015-10-01
Because a single LED has low luminous flux, LED chip arrays play an important role in achieving high luminous flux in many fields of application, such as automotive lighting, street lighting, sensing, and imaging. However, an LED chip array is an extended source rather than the point source of a conventional single LED, so lens design must be reconsidered to accommodate this difference. In recent years, with the development of illumination optics, several excellent optical design methods for extended sources have been proposed and improved. When a point-source design method is applied to an LED chip array requiring high flux and high uniformity, the resulting lens is so large that the advantage of the small LED chip is lost. The supporting surface method is effective and commonly used; however, it does not converge when solving the refractor problem for a point source in the near field. Based on the properties of the Cartesian oval, a modified method is proposed, and its convergence is verified by Monte Carlo ray tracing. The number of Cartesian ovals and the size of the lens can be kept firmly under control during the design, whereas generally the ratio between the sizes of the lens and the chip is greater than 5. Based on the modified supporting surface method, a compact lens design method for extended light sources is constructed. An LED illumination lens is designed with this method and fabricated, and simulation results show that it achieves uniform illuminance on the target surface.
Supercritical blade design on stream surfaces of revolution with an inverse method
NASA Technical Reports Server (NTRS)
Schmidt, E.; Grein, H.-D.
1991-01-01
A method to solve the inverse problem of supercritical blade-to-blade flow on stream surfaces of revolution with variable radius and variable stream surface thickness in a relative system is described. Some aspects of shockless design and of leading edge resolution in the numerical procedure are depicted. Some supercritical compressor cascades were designed and their complete flow field results were compared with computations of two different analysis methods.
Fast Numerical Methods for the Design of Layered Photonic Structures with Rough Interfaces
NASA Technical Reports Server (NTRS)
Komarevskiy, Nikolay; Braginsky, Leonid; Shklover, Valery; Hafner, Christian; Lawson, John
2011-01-01
Modified boundary conditions (MBC) and a multilayer approach (MA) are proposed as fast and efficient numerical methods for the design of 1D photonic structures with rough interfaces. These methods are applicable to structures composed of materials with an arbitrary permittivity tensor. MBC and MA are numerically validated on different types of interface roughness and permittivities of the constituent materials. The proposed methods can be combined with the 4x4 scattering matrix method as a field solver and an evolutionary strategy as an optimizer. The resulting optimization procedure is fast, accurate, and numerically stable, and can be used to design structures for various applications.
Eckermann, Simon; Karnon, Jon; Willan, Andrew R
2010-01-01
Value of information (VOI) methods have been proposed as a systematic approach to inform optimal research design and prioritization. Four related questions arise that VOI methods could address. (i) Is further research for a health technology assessment (HTA) potentially worthwhile? (ii) Is the cost of a given research design less than its expected value? (iii) What is the optimal research design for an HTA? (iv) How can research funding be best prioritized across alternative HTAs? Following Occam's razor, we consider the usefulness of VOI methods in informing questions 1-4 relative to their simplicity of use. Expected value of perfect information (EVPI) with current information, while simple to calculate, is shown to provide neither a necessary nor a sufficient condition to address question 1, given that what EVPI needs to exceed varies with the cost of research design, which can range from very large down to negligible. Hence, for any given HTA, EVPI does not discriminate, as it can be large while further research is not worthwhile, or small while further research is worthwhile. In contrast, each of questions 1-4 is shown to be fully addressed (necessary and sufficient) where VOI methods are applied to maximize the expected value of sample information (EVSI) minus expected costs across designs. In comparing the complexity of VOI methods in use, applying the central limit theorem (CLT) simplifies analysis to enable easy estimation of EVSI and of the optimal overall research design, and has been shown to outperform bootstrapping, particularly with small samples. Consequently, VOI methods applying the CLT to inform optimal overall research design satisfy Occam's razor in both improving decision making and reducing complexity. Furthermore, they enable consideration of relevant decision contexts, including option value and opportunity cost of delay, time, imperfect implementation and optimal design across jurisdictions. More complex VOI methods such as bootstrapping of the expected value of
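The EVPI concept discussed above can be made concrete with a small Monte Carlo sketch. The two-option decision, the normal distribution, and all numbers below are illustrative assumptions, not values from the study: under current information the decision maker commits to the option that is better on expectation, while perfect information would allow choosing per realization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-option decision: the incremental net benefit (INB) of a
# new technology is uncertain; adopt it only if expected INB is positive.
inb_draws = rng.normal(loc=500.0, scale=2000.0, size=100_000)

# Value under current information: commit to the option that is better on
# expectation (adopt iff mean INB > 0).
value_current = max(np.mean(inb_draws), 0.0)

# Value under perfect information: choose per realization, earning
# max(INB, 0) on average.
value_perfect = np.mean(np.maximum(inb_draws, 0.0))

evpi = value_perfect - value_current
print(f"per-decision EVPI: {evpi:.1f}")
```

EVSI would follow the same pattern, with the inner expectation taken over the posterior distributions implied by a proposed sample design.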
Automated divertor target design by adjoint shape sensitivity analysis and a one-shot method
Dekeyser, W.; Reiter, D.; Baelmans, M.
2014-12-01
As magnetic confinement fusion progresses towards the development of first reactor-scale devices, computational tokamak divertor design is a topic of high priority. Presently, edge plasma codes are used in a forward approach, where magnetic field and divertor geometry are manually adjusted to meet design requirements. Due to the complex edge plasma flows and large number of design variables, this method is computationally very demanding. On the other hand, efficient optimization-based design strategies have been developed in computational aerodynamics and fluid mechanics. Such an optimization approach to divertor target shape design is elaborated in the present paper. A general formulation of the design problems is given, and conditions characterizing the optimal designs are formulated. Using a continuous adjoint framework, design sensitivities can be computed at a cost of only two edge plasma simulations, independent of the number of design variables. Furthermore, by using a one-shot method the entire optimization problem can be solved at an equivalent cost of only a few forward simulations. The methodology is applied to target shape design for uniform power load, in simplified edge plasma geometry.
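The cost property claimed above, design sensitivities at the price of two simulations regardless of the number of design variables, can be sketched on a toy discrete system. The 3x3 matrix, objective, and diagonal parameterization below are illustrative assumptions, not an edge-plasma model: for a state equation A(theta) u = b and objective J = c^T u, one forward solve and one adjoint solve give the whole gradient.

```python
import numpy as np

n = 3
b = np.array([1.0, 2.0, 3.0])
c = np.array([1.0, 0.0, 1.0])
A0 = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]])

def A(theta):
    # Design variables enter through the diagonal: dA/dtheta_i = e_i e_i^T.
    return A0 + np.diag(theta)

theta = np.array([0.2, 0.1, 0.3])
u = np.linalg.solve(A(theta), b)          # forward solve
lam = np.linalg.solve(A(theta).T, c)      # adjoint solve
# dJ/dtheta_i = -lam^T (dA/dtheta_i) u, which here reduces to -lam_i * u_i.
grad = np.array([-lam[i] * u[i] for i in range(n)])

# Finite-difference check on one component.
h = 1e-6
tp = theta.copy(); tp[0] += h
fd = (c @ np.linalg.solve(A(tp), b) - c @ u) / h
print(grad[0], fd)
```

However many design variables the diagonal held, the gradient would still cost exactly the two linear solves, which is the point the abstract makes for edge plasma simulations.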
Csipke, Emese; Papoulias, Constantina; Vitoratou, Silia; Williams, Paul; Rose, Diana; Wykes, Til
2016-01-01
Abstract Background: Psychiatric ward design may make an important contribution to patient outcomes and well-being. However, research is hampered by an inability to assess its effects robustly. This paper reports on a study which deployed innovative methods to capture service user and staff perceptions of ward design. Method: User generated measures of the impact of ward design were developed and tested on four acute adult wards using participatory methodology. Additionally, inpatients took photographs to illustrate their experience of the space in two wards. Data were compared across wards. Results: Satisfactory reliability indices emerged based on both service user and staff responses. Black and minority ethnic (BME) service users and those with a psychosis spectrum diagnosis have more positive views of the ward layout and fixtures. Staff members have more positive views than service users, while priorities of staff and service users differ. Inpatient photographs prioritise hygiene, privacy and control and address symbolic aspects of the ward environment. Conclusions: Participatory and visual methodologies can provide robust tools for an evaluation of the impact of psychiatric ward design on users. PMID:26886239
NASA Technical Reports Server (NTRS)
Martini, W. R.
1980-01-01
Four fully disclosed reference engines and five design methods are discussed. So far, the agreement between theory and experiment is about as good for the simpler calculation methods as it is for the more complicated methods, that is, within 20%. For the simpler methods, a single adjustable constant can be used to reduce the error in predicting power output and efficiency over the entire operating map to less than 10%.
Accuracy of the domain method for the material derivative approach to shape design sensitivities
NASA Technical Reports Server (NTRS)
Yang, R. J.; Botkin, M. E.
1987-01-01
Numerical accuracy for the boundary and domain methods of the material derivative approach to shape design sensitivities is investigated through the use of mesh refinement. The results show that the domain method is generally more accurate than the boundary method, using the finite element technique. It is also shown that the domain method is equivalent, under certain assumptions, to the implicit differentiation approach not only theoretically but also numerically.
An application of performance goal based method for the design and evaluation of structures
Conrads, T.J.
1996-10-15
This paper describes an application of the U.S. Department of Energy's (DOE) performance goal based method for the design and evaluation of structures, systems, and components (SSCs) at Fluor Daniel Hanford, Inc. (FDH). The philosophy on which DOE's method is based has been employed to construct a graded approach to the minimum structural design and evaluation criteria used at the DOE Hanford Site that complies with DOE Order 5480.28, Natural Phenomena Hazards Mitigation. The FDH structural design and evaluation criteria apply to both nuclear and non-nuclear SSCs that are not covered by a reactor safety analysis report.
A method of optimal design of single-sided linear induction motor for transit
Yoon, S.B.; Hur, J.; Hyun, D.S.
1997-09-01
An optimal design method for a single-sided linear induction motor (SLIM) for transit is described. The authors propose a method that determines the overall parameters of a SLIM for transit using only the rated mechanical output. During the optimization, the slot depth is used as the initial value, and the exact slot depth is calculated iteratively from the circuit equation. The optimization problem of the SLIM design is approached by use of sequential quadratic programming (SQP). The influence of the design variables is analyzed with respect to the rated thrust and the rated velocity, respectively.
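As a rough illustration of the SQP approach (not the authors' actual SLIM model), the sketch below minimizes a surrogate cost over two hypothetical design variables subject to a rated-thrust constraint, using the SLSQP implementation in SciPy; the cost function, the thrust model, and the bounds are all invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

def cost(x):
    # Surrogate for material/loss cost over slot depth d and current density J.
    d, J = x
    return d**2 + 0.1 * J**2

def thrust(x):
    # Toy thrust model; the real SLIM thrust comes from the circuit equation.
    d, J = x
    return d * J

res = minimize(cost, x0=[1.0, 5.0], method="SLSQP",
               bounds=[(0.1, 5.0), (1.0, 10.0)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: thrust(x) - 4.0}])
print(res.x, res.fun)
```

The pattern of the real design task is the same: an iteratively updated design vector, bound constraints on geometry, and an inequality constraint enforcing the rated output.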
The 1995 forum on appropriate criteria and methods for seismic design of nuclear piping
Slagis, G.C.
1996-12-01
A record of the 1995 Forum on Appropriate Criteria and Methods for Seismic Design of Nuclear Piping is provided. The focus of the forum was the earthquake experience data base and whether the data base demonstrates that seismic inertia loads will not cause failure in ductile piping systems. This was a follow-up to the 1994 Forum when the use of earthquake experience data, including the recent Northridge earthquake, to justify a design-by-rule method was explored. Two possible topics for the next forum were identified--inspection after an earthquake and design for safe-shutdown earthquake only.
Design method for a distributed Bragg resonator based evanescent field sensor
NASA Astrophysics Data System (ADS)
Bischof, David; Kehl, Florian; Michler, Markus
2016-12-01
This paper presents an analytic design method for a distributed Bragg resonator based evanescent field sensor. Such sensors can, for example, be used to measure changing refractive indices of the cover medium of a waveguide, as well as molecule adsorption at the sensor surface. For given starting conditions, the presented design method allows the analytical calculation of optimized sensor parameters for quantitative simulation and fabrication. The design process is based on the Fabry-Pérot resonator and analytical solutions of coupled mode theory.
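A minimal numerical sketch of the sensing principle: the resonance of a Bragg grating sits near lambda_B = 2 * n_eff * Lambda, so a change in the cover refractive index that shifts the effective index n_eff shifts the resonance. The effective-index and grating-period values below are assumed for illustration, not taken from the paper.

```python
def bragg_wavelength(n_eff, grating_period_nm):
    # First-order Bragg condition: lambda_B = 2 * n_eff * Lambda.
    return 2.0 * n_eff * grating_period_nm

lam0 = bragg_wavelength(1.50, 260.0)   # nm, assumed baseline sensor
lam1 = bragg_wavelength(1.51, 260.0)   # after a small cover-index change
print(lam0, lam1 - lam0)               # resonance and its shift in nm
```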
NASA Technical Reports Server (NTRS)
Ratcliff, Robert R.; Carlson, Leland A.
1989-01-01
Progress in the direct-inverse wing design method in curvilinear coordinates has been made. A spanwise oscillation problem and proposed remedies are discussed. Test cases are presented which reveal the approximate limits on the wing's aspect ratio and leading edge wing sweep angle for a successful design, and which show the significance of spanwise grid skewness, grid refinement, viscous interaction, the initial airfoil section and Mach number-pressure distribution compatibility on the final design. Furthermore, preliminary results are shown which indicate that it is feasible to successfully design a region of the wing which begins aft of the leading edge and terminates prior to the trailing edge.
An analytical sensitivity method for use in integrated aeroservoelastic aircraft design
NASA Technical Reports Server (NTRS)
Gilbert, Michael G.
1989-01-01
Interdisciplinary analysis capabilities have been developed for aeroservoelastic aircraft and large flexible spacecraft, but the requisite integrated design methods are only beginning to be developed. One integrated design method which has received attention is based on hierarchal problem decompositions, optimization, and design sensitivity analyses. This paper highlights a design sensitivity analysis method for Linear Quadratic Gaussian (LQG) optimal control laws, enabling the use of LQG techniques in the hierarchal design methodology. The LQG sensitivity analysis method calculates the change in the optimal control law and resulting controlled system responses due to changes in fixed design integration parameters using analytical sensitivity equations. Numerical results of an LQG design sensitivity analysis for a realistic aeroservoelastic aircraft example are presented. In this example, the sensitivity of the optimal control law and aircraft response for various parameters such as wing bending natural frequency is determined. The sensitivity results computed from the analytical expressions are used to estimate changes in response resulting from changes in the parameters. Comparisons of the estimates with exact calculated responses show they are reasonably accurate for ±15 percent changes in the parameters. Evaluation of the analytical expressions is computationally faster than equivalent finite difference calculations.
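The idea of using sensitivities to extrapolate the optimal control law across ±15 percent parameter changes can be mimicked on a toy regulator. This is a finite-difference sketch on the LQR piece only, not the paper's analytical LQG sensitivity equations, and the second-order oscillator plant is an assumption.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def lqr_gain(wn):
    # Toy mass-spring-damper with natural frequency wn and light damping.
    A = np.array([[0.0, 1.0], [-wn**2, -0.1]])
    B = np.array([[0.0], [1.0]])
    Q, R = np.eye(2), np.array([[1.0]])
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)     # K = R^{-1} B^T P

wn, h = 2.0, 1e-5
# Central finite difference stands in for the analytical sensitivity dK/dwn.
dK_dwn = (lqr_gain(wn + h) - lqr_gain(wn - h)) / (2 * h)

# First-order estimate of the gain after a +15% change in wn,
# compared against the exactly recomputed gain.
K_est = lqr_gain(wn) + 0.15 * wn * dK_dwn
err = np.max(np.abs(K_est - lqr_gain(1.15 * wn)))
print(err)
```

The analytical route in the paper replaces the finite differences with closed-form sensitivity equations, which is both faster and free of step-size error.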
Computational Fluid Dynamics-Based Design Optimization Method for Archimedes Screw Blood Pumps.
Yu, Hai; Janiga, Gábor; Thévenin, Dominique
2016-04-01
An optimization method suitable for improving the performance of Archimedes screw axial rotary blood pumps is described in the present article. In order to achieve a more robust design and to save computational resources, this method combines the advantages of the established pump design theory with modern computer-aided, computational fluid dynamics (CFD)-based design optimization (CFD-O) relying on evolutionary algorithms and computational fluid dynamics. The main purposes of this project are to: (i) integrate pump design theory within the already existing CFD-based optimization; (ii) demonstrate that the resulting procedure is suitable for optimizing an Archimedes screw blood pump in terms of efficiency. Results obtained in this study demonstrate that the developed tool is able to meet both objectives. Finally, the resulting level of hemolysis can be numerically assessed for the optimal design, as hemolysis is an issue of overwhelming importance for blood pumps. PMID:26526039
Application of direct inverse analogy method (DIVA) and viscous design optimization techniques
NASA Technical Reports Server (NTRS)
Greff, E.; Forbrich, D.; Schwarten, H.
1991-01-01
A direct-inverse approach to the transonic design problem was presented in its initial state at the First International Conference on Inverse Design Concepts and Optimization in Engineering Sciences (ICIDES-1). Further applications of the direct inverse analogy (DIVA) method to the design of airfoils and incremental wing improvements, together with experimental verification, are reported. First results of a new viscous design code, also of the residual-correction type, with semi-inverse boundary layer coupling are compared with DIVA; this coupling may enhance the accuracy of trailing-edge design for highly loaded airfoils. Finally, the capabilities of an optimization routine coupled with the two viscous full potential solvers are investigated in comparison to the inverse method.
Double freeform surfaces design for laser beam shaping with Monge-Ampère equation method
NASA Astrophysics Data System (ADS)
Zhang, Yaqin; Wu, Rengmao; Liu, Peng; Zheng, Zhenrong; Li, Haifeng; Liu, Xu
2014-11-01
This paper presents a method for designing double freeform surfaces to simultaneously control the intensity distribution and phase profile of a laser beam. Based on Snell's law, the conservation of energy, and the constraint imposed on the optical path length between the input and output wavefronts, the double-surface design is converted into an elliptic Monge-Ampère (MA) equation with a nonlinear boundary problem. A generalized approach is introduced to find the numerical solution of the design model. Two different layouts of the beam shaping system are introduced, and detailed comparisons are made between them. Design examples are given, and the results indicate that good matching is achieved by the MA method, with an energy efficiency of more than 98%. The MA method proposed in this paper provides a reasonably good means for laser beam shaping.
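The energy-conservation step at the heart of such designs can be illustrated in one dimension. This is a sketch of the ray-mapping idea only, not the paper's Monge-Ampère solver: requiring equal cumulative energies of the input beam and the target defines the mapping from input rays to target positions. The Gaussian input and uniform target are assumptions for the example.

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 2001)
I_in = np.exp(-x**2)                        # input intensity (Gaussian)
E_in = np.cumsum(I_in); E_in /= E_in[-1]    # normalized cumulative energy

y_target = np.linspace(-1.0, 1.0, 2001)     # uniform target extent
E_out = np.linspace(0.0, 1.0, 2001)         # uniform cumulative energy

# Equal-cumulative-energy condition E_in(x) = E_out(y) defines y(x):
# each input ray at x is sent to the target point with the same quantile.
y_of_x = np.interp(E_in, E_out, y_target)
print(y_of_x[0], y_of_x[-1])                # edges map near target edges
```

In two dimensions this quantile-matching condition becomes the Monge-Ampère equation the paper solves, with the surface shapes then recovered from Snell's law and the optical-path-length constraint.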
NASA Technical Reports Server (NTRS)
Stahara, S. S.
1984-01-01
An investigation was carried out to complete the preliminary development of a combined perturbation/optimization procedure and associated computational code for designing optimized blade-to-blade profiles of turbomachinery blades. The overall purpose of the procedures developed is to provide demonstration of a rapid nonlinear perturbation method for minimizing the computational requirements associated with parametric design studies of turbomachinery flows. The method combines the multiple parameter nonlinear perturbation method, successfully developed in previous phases of this study, with the NASA TSONIC blade-to-blade turbomachinery flow solver, and the COPES-CONMIN optimization procedure into a user's code for designing optimized blade-to-blade surface profiles of turbomachinery blades. Results of several design applications and a documented version of the code together with a user's manual are provided.
Two-Step Design Method of Engine Control System Based on Generalized Predictive Control
NASA Astrophysics Data System (ADS)
Hashimoto, Seiji; Okuda, Hiroyuki; Okada, Yasushi; Adachi, Shuichi; Niwa, Shinji; Kajitani, Mitsunobu
Conservation of the environment has become critical to the automotive industry. Recently, requirements for on-board diagnostic and engine control systems have been strictly enforced. In the present paper, in order to meet the requirements for a low-emissions vehicle, a novel construction method for the air-fuel ratio (A/F) control system is proposed. The construction of the system is divided into two steps. The first step is to design the A/F control system for the engine based on an open-loop design. The second step is to design the A/F control system for the catalyst system. The design method is based on generalized predictive control in order to achieve robustness to the open-loop control as well as to model uncertainty. The effectiveness of the proposed A/F control system is verified through experiments using full-scale products.
NASA Astrophysics Data System (ADS)
Tao, Shanshan; Dong, Sheng; Wang, Zhifeng; Jiang, Wensheng
2016-06-01
The maximum entropy distribution, which encompasses various recognized theoretical distributions, is a better curve for estimating the design thickness of sea ice. The method of moments and the empirical curve-fitting method are commonly used parameter estimation methods for the maximum entropy distribution. In this study, we propose the particle swarm optimization method as a new parameter estimation method for the maximum entropy distribution, which has the advantage of avoiding the deviation introduced by the simplifications made in the other methods. We conducted a case study fitting the hindcast thickness of sea ice in the Liaodong Bay of the Bohai Sea using these three parameter estimation methods for the maximum entropy distribution. All methods implemented in this study pass the K-S tests at the 0.05 significance level. In terms of the average sum of squared deviations, the empirical curve-fitting method provides the best fit to the original data, while the method of moments provides the worst. Among the three methods, the particle swarm optimization method predicts the largest sea ice thickness for the same return period. As a result, we recommend using the particle swarm optimization method for the maximum entropy distribution for offshore structures mainly influenced by sea ice in winter, but using the empirical curve-fitting method to reduce cost in the design of temporary and economical buildings.
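The parameter-estimation idea can be sketched with a plain global-best particle swarm minimizing the sum of squared deviations between a model CDF and the empirical CDF. A two-parameter Weibull and synthetic data stand in for the maximum entropy distribution and the hindcast ice thickness data; both substitutions, and the swarm constants, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "thickness" sample and its empirical CDF (Hazen positions).
data = np.sort(rng.weibull(2.0, 200) * 1.5)
ecdf = (np.arange(1, 201) - 0.5) / 200

def sse(params):
    # Objective: squared deviation of the model CDF from the empirical CDF.
    k, lam = params
    if k <= 0 or lam <= 0:
        return np.inf
    model = 1.0 - np.exp(-(data / lam) ** k)
    return float(np.sum((model - ecdf) ** 2))

# Plain global-best PSO over (k, lambda).
n, dim = 30, 2
pos = rng.uniform(0.1, 5.0, (n, dim))
vel = np.zeros((n, dim))
pbest, pbest_f = pos.copy(), np.array([sse(p) for p in pos])
g = pbest[np.argmin(pbest_f)].copy()
for _ in range(200):
    r1, r2 = rng.random((n, dim)), rng.random((n, dim))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
    pos = pos + vel
    f = np.array([sse(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    g = pbest[np.argmin(pbest_f)].copy()
print(g, sse(g))   # best-fit parameters and their fit error
```

For the paper's setting, the Weibull CDF would be replaced by the maximum entropy distribution's CDF, which is the only piece of the procedure that is distribution-specific.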
Walters, W.H.
1982-08-01
This report reviews the more accepted or recommended riprap design methods currently used to design rock riprap protection against soil erosion by flowing water. The basic theories used to develop the various methods are presented. The Riprap Design with Safety Factors Method is identified as the logical choice for uranium mill tailings impoundments. This method is compared to the other methods and its applicability to the protection requirements of tailings impoundments is discussed. Other design problems are identified and investigative studies recommended.
Korakianitis, T. )
1993-04-01
The direct and inverse blade-design iterations for the selection of isolated airfoils and gas turbine blade cascades are enormously reduced if the initial blade shape has performance characteristics near the desirable ones. This paper presents the hierarchical development of three direct blade-design methods of increasing utility for generating two-dimensional blade shapes. The methods can be used to generate inputs to the direct- or inverse-blade-design sequences for subsonic or supersonic airfoils for compressors and turbines, or isolated airfoils. The first method specifies the airfoil shapes with analytical polynomials. It shows that continuous curvature and continuous slope of curvature are necessary conditions to minimize the possibility of flow separation, and to lead to improved blade designs. The second method specifies the airfoil shapes with parametric fourth-order polynomials, which result in continuous-slope-of-curvature airfoils, with smooth Mach number and pressure distributions. This method is time consuming. The third method specifies the airfoil shapes by using a mixture of analytical polynomials and mapping the airfoil surfaces from a desirable curvature distribution. The third method provides blade surfaces with desirable performance in very few direct-design iterations. In all methods the geometry near the leading edge is specified by a thickness distribution added to a construction line, which eliminates the leading edge overspeed and laminar-separation regions. The blade-design methods presented in this paper can be used to improve the aerodynamic and heat transfer performance of turbomachinery cascades, and they can result in high-performance airfoils in very few iterations.
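The curvature conditions cited above are easy to check numerically for a polynomial surface description, using kappa = y'' / (1 + y'^2)^(3/2); the cubic coefficients below are an arbitrary illustration, not an actual blade section.

```python
import numpy as np

# A cubic y(x) is C^2, so its curvature and slope of curvature are continuous.
c = np.array([0.02, -0.1, 0.3, 0.0])       # coefficients, highest power first
x = np.linspace(0.0, 1.0, 501)
y1 = np.polyval(np.polyder(c, 1), x)       # y'
y2 = np.polyval(np.polyder(c, 2), x)       # y''
kappa = y2 / (1.0 + y1**2) ** 1.5          # signed curvature
print(kappa.min(), kappa.max())
```

A spline with only C^1 joints would show jumps in kappa at the knots, exactly the discontinuities the paper's design conditions rule out.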
The Research of Computer Aided Farm Machinery Designing Method Based on Ergonomics
NASA Astrophysics Data System (ADS)
Gao, Xiyin; Li, Xinling; Song, Qiang; Zheng, Ying
With the development of the agricultural economy, the variety of farm machinery products is gradually increasing, and ergonomics issues are becoming more and more prominent. The widespread application of computer-aided machinery design makes farm machinery design intuitive, flexible, and convenient. At present, because existing computer-aided ergonomics software lacks a human body database suited to farm machinery design in China, such designs show deviations in ergonomics analysis. This article proposes using the open database interface of CATIA to establish a human body database aimed at farm machinery design; reading the human body data into the ergonomics module of CATIA can produce a virtual body for practical application, and the human posture analysis and human activity analysis modules can then be used to analyze the ergonomics of farm machinery. In this way, a computer-aided farm machinery design method based on ergonomics can be realized.
Co-design of RAD and ETHICS methodologies: a combination of information system development methods
NASA Astrophysics Data System (ADS)
Nasehi, Arezo; Shahriyari, Salman
2011-12-01
Co-design is a new trend in the social world that tries to capture different ideas in order to use the most appropriate features for a system. In this paper, the co-design of two information system methodologies is considered: rapid application development (RAD) and the effective technical and human implementation of computer-based systems (ETHICS). We examined the characteristics of these methodologies to assess the possibility of co-designing or combining them for developing an information system. To this end, four aspects are analyzed: social versus technical approach, user participation and user involvement, job satisfaction, and overcoming resistance to change. Finally, a case study using a quantitative method is analyzed in order to examine the possibility of co-design based on these factors. The paper concludes that RAD and ETHICS are appropriate for co-design and offers some suggestions for it.
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, Randolph C.; Goldberg, Kenneth Y.; Canny, John; Wallack, Aaron S.
1999-01-01
Methods and apparatus are provided for developing a complete set of all admissible Type I and Type II fixture designs for a workpiece. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs.
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, Randolph C.; Goldberg, Kenneth Y.; Wallack, Aaron S.; Canny, John
1996-01-01
A fixture process and method is provided for developing a complete set of all admissible fixture designs for a workpiece which prevents the workpiece from translating or rotating. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vice is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs.
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, R.C.; Goldberg, K.Y.; Canny, J.; Wallack, A.S.
1999-01-05
Methods and apparatus are provided for developing a complete set of all admissible Type 1 and Type 2 fixture designs for a workpiece. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs. 44 figs.
Processor and method for developing a set of admissible fixture designs for a workpiece
Brost, R.C.; Goldberg, K.Y.; Wallack, A.S.; Canny, J.
1996-08-13
A fixture process and method is provided for developing a complete set of all admissible fixture designs for a workpiece which prevents the workpiece from translating or rotating. The fixture processor generates the set of all admissible designs based on geometric access constraints and expected applied forces on the workpiece. For instance, the fixture processor may generate a set of admissible fixture designs for first, second and third locators placed in an array of holes on a fixture plate and a translating clamp attached to the fixture plate for contacting the workpiece. In another instance, a fixture vise is used in which first, second, third and fourth locators are used and first and second fixture jaws are tightened to secure the workpiece. The fixture process also ranks the set of admissible fixture designs according to a predetermined quality metric so that the optimal fixture design for the desired purpose may be identified from the set of all admissible fixture designs. 27 figs.
A Modified Frequency Estimation Equating Method for the Common-Item Nonequivalent Groups Design
ERIC Educational Resources Information Center
Wang, Tianyou; Brennan, Robert L.
2009-01-01
Frequency estimation, also called poststratification, is an equating method used under the common-item nonequivalent groups design. A modified frequency estimation method is proposed here, based on altering one of the traditional assumptions in frequency estimation in order to correct for equating bias. A simulation study was carried out to…
Are standard wastewater treatment plant design methods suitable for any municipal wastewater?
Insel, G; Güder, B; Güneş, G; Ubay Cokgor, E
2012-01-01
The design and operational parameters of an activated sludge system treating municipal wastewater in Istanbul were analyzed. The design methods of ATV131 and Metcalf & Eddy, together with model simulations, were compared with actual plant operational data. The activated sludge model parameters were determined using 3 months of dynamic data for the biological nutrient removal plant. The ATV131 method yielded sludge production, total oxygen requirement and effluent nitrogen levels closer to those of the real plant after adopting the correct influent chemical oxygen demand (COD) fractionation. Enhanced biological phosphorus removal (EBPR) could not easily be predicted with the ATV131 method due to the low volatile fatty acids (VFA) potential.
Larson, R.B.
1996-12-31
The design and installation of horizontal wells is the primary factor in the efficiency of the remedial actions. Often, inadequacies in the design and installation of remediation systems are not identified until remedial actions have commenced, at which time, required modifications of operational methods can be costly. The parameters required for designing a horizontal well remediation system include spatial variations in contaminant concentrations and lithology, achievable injection and/or extraction rates, area of influence from injection and/or extraction processes, and limitations of installation methods. As with vertical wells, there are several different methods for the installation of horizontal wells. This paper will summarize four installation methods for horizontal wells, including four sites where horizontal wells have been utilized for in-situ groundwater and soil remediation.
Guan, Jianguo; Li, Wei; Wang, Wei; Fu, Zhengyi
2011-09-26
A general boundary mapping method is proposed to enable the designing of various transformation devices with arbitrary shapes by reducing the traditional space-to-space mapping to boundary-to-boundary mapping. The method also makes the designing of complex-shaped transformation devices more feasible and flexible. Using the boundary mapping method, an arbitrarily shaped perfect electric conductor (PEC) reshaping device, which is called a "PEC reshaper," is demonstrated to visually reshape a PEC with an arbitrary shape to another arbitrary one. Unlike the previously reported simple PEC reshaping devices, the arbitrarily shaped PEC reshaper designed here does not need to share a common domain. Moreover, the flexibilities of the boundary mapping method are expected to inspire some novel PEC reshapers with attractive new functionalities.
NASA Technical Reports Server (NTRS)
Mark, W. D.
1982-01-01
A transfer function method for predicting the dynamic responses of gear systems with more than one gear mesh is developed and applied to the NASA Lewis four-square gear fatigue test apparatus. Methods for computing bearing-support force spectra and temporal histories of the total force transmitted by a gear mesh, the force transmitted by a single pair of teeth, and the maximum root stress in a single tooth are developed. Dynamic effects arising from other gear meshes in the system are included. A profile modification design method to minimize the vibration excitation arising from a pair of meshing gears is reviewed and extended. Families of tooth loading functions required for such designs are developed and examined for potential excitation of individual tooth vibrations. The profile modification design method is applied to a pair of test gears.
NASA Astrophysics Data System (ADS)
Ju, Yaping; Zhang, Chuhua
2016-03-01
Blade fouling has proved to be a great threat to compressor performance in the operating stage. Current research on the fouling-induced performance degradation of centrifugal compressors is based mainly on simplified roughness models that do not take into account realistic factors such as the spatial non-uniformity and randomness of the fouling-induced surface roughness. Moreover, little attention has been paid to the robust design optimization of centrifugal compressor impellers with consideration of blade fouling. In this paper, a multi-objective robust design optimization method is developed for centrifugal impellers under surface roughness uncertainties due to blade fouling. A three-dimensional surface roughness map is proposed to describe the non-uniformity and randomness of realistic fouling accumulations on blades. To lower the computational cost of robust design optimization, a support vector regression (SVR) metamodel is combined with the Monte Carlo simulation (MCS) method to conduct the uncertainty analysis of fouled impeller performance. The results show that the critical fouled region associated with impeller performance degradation lies at the leading edge of the blade tip. The SVR metamodel proves to be an efficient and accurate means of detecting impeller performance variations caused by roughness uncertainties. After design optimization, the robust optimal design is found to be more efficient and less sensitive to fouling uncertainties while maintaining good impeller performance in the clean condition. This research proposes a systematic design optimization method for centrifugal compressors with consideration of blade fouling, providing practical guidance for the design of advanced centrifugal compressors.
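The SVR-plus-MCS uncertainty analysis described in this abstract can be sketched in miniature. The snippet below is a minimal stand-in: the four-zone roughness map and the surrogate coefficients are invented for illustration (the paper uses a trained SVR metamodel and CFD data); only the Monte Carlo propagation step mirrors the described procedure.

```python
import random
import statistics

def efficiency_surrogate(roughness):
    """Stand-in for the trained SVR metamodel: maps four local roughness
    values (blade leading edge -> trailing edge, micrometres) to impeller
    efficiency. Coefficients are invented; the leading edge is weighted
    most heavily, echoing the critical fouled region found at the
    blade-tip leading edge."""
    weights = (3.0, 2.0, 1.0, 0.5)
    return 0.90 - 1e-3 * sum(w * r for w, r in zip(weights, roughness))

def monte_carlo_efficiency(n_samples=20000, seed=0):
    """MCS half of the uncertainty analysis: draw spatially non-uniform,
    random fouling maps and propagate each through the surrogate."""
    rng = random.Random(seed)
    etas = []
    for _ in range(n_samples):
        roughness = [abs(rng.gauss(5.0, 2.0)) for _ in range(4)]
        etas.append(efficiency_surrogate(roughness))
    return statistics.mean(etas), statistics.stdev(etas)

mean_eta, std_eta = monte_carlo_efficiency()
```

In the paper the cheap surrogate is exactly what makes the Monte Carlo loop affordable; replacing a CFD run with a fitted metamodel turns thousands of samples from intractable to trivial.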
Why does Japan use the probability method to set design flood?
NASA Astrophysics Data System (ADS)
Nakamura, S.; Oki, T.
2015-12-01
Design flood is the hypothetical flood used to make a flood prevention plan. In Japan, a probability method based on precipitation data is used to define the scale of the design flood: the Tone River, the biggest river in Japan, uses 1 in 200 years, the Shinano River 1 in 150 years, and so on. How to set a reasonable and acceptable design flood in a changing world is an important socio-hydrological issue. The method used to set the design flood varies among countries. The probability method is also used in the Netherlands, but there the base data are water levels or discharge data and the probability is 1 in 1250 years (in the fresh water section). By contrast, the USA and China apply the maximum flood method, which sets the design flood based on the historical or probable maximum flood. These cases raise the questions "why does the method vary among countries?" and "why does Japan use the probability method?" The purpose of this study is to clarify, based on the literature, the historical process by which the probability method was developed in Japan. In the late 19th century, the concept of "discharge" and modern river engineering were imported by Dutch engineers, and modern flood prevention plans were developed in Japan. In these plans, the design floods were set based on the historical maximum method. The historical maximum method was used until World War 2; however, it was changed to the probability method after the war because of its limitations under the specific socio-economic situation: (1) the budget limitation due to the war and the GHQ occupation; (2) historical floods (Makurazaki typhoon in 1945, Kathleen typhoon in 1947, Ione typhoon in 1948, and so on) attacked Japan and broke the records of historical maximum discharge in main rivers, and the flood disasters made the flood prevention projects difficult to complete. Japanese hydrologists then imported hydrological probability statistics from the West to take account of
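The probability method this abstract contrasts with the maximum flood method can be illustrated with a small sketch: fit a Gumbel (EV1) distribution to an annual-maximum series by the method of moments and read off the 1-in-200-year quantile (the Tone River's design scale). The discharge data below are synthetic, and the actual Japanese procedure works from precipitation data with more elaborate statistics; this is only the core quantile idea.

```python
import math
import random

def gumbel_fit(annual_maxima):
    """Method-of-moments fit of a Gumbel (EV1) distribution,
    a common choice for annual-maximum flood series."""
    n = len(annual_maxima)
    mean = sum(annual_maxima) / n
    var = sum((x - mean) ** 2 for x in annual_maxima) / (n - 1)
    beta = math.sqrt(6.0 * var) / math.pi   # scale parameter
    mu = mean - 0.5772 * beta               # location (Euler's constant)
    return mu, beta

def design_flood(mu, beta, return_period):
    """Quantile of the 1-in-T-year event, i.e. solve F(x) = 1 - 1/T
    for the Gumbel CDF F(x) = exp(-exp(-(x - mu)/beta))."""
    p = 1.0 - 1.0 / return_period
    return mu - beta * math.log(-math.log(p))

# synthetic annual-maximum discharges (m^3/s), for illustration only
rng = random.Random(1)
amax = [rng.gauss(5000.0, 1200.0) for _ in range(60)]
mu, beta = gumbel_fit(amax)
q200 = design_flood(mu, beta, 200)   # a 1-in-200-year scale, as for the Tone River
```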
A Proposal for the use of the Consortium Method in the Design-build system
NASA Astrophysics Data System (ADS)
Miyatake, Ichiro; Kudo, Masataka; Kawamata, Hiroyuki; Fueta, Toshiharu
In view of the necessity of implementing public works projects efficiently, the advanced technical skills of private firms are expected to be utilized for the purposes of reducing project costs, improving the performance and functions of construction objects, and shortening work periods. The design-build system is a method of ordering design and construction as a single contract, including the design of structural forms and the main specifications of the construction object. It is a system in which the high techniques of private firms can be utilized as a means to ensure the quality of design and construction, rational design, and the efficiency of the project. The objective of this study is to examine the use of a method of forming a consortium of civil engineering consultants and construction companies, an issue related to the implementation of the design-build method. Furthermore, by studying various forms of consortiums to be introduced in the future, it proposes procedural items required to utilize this method during the bid and after signing a contract, such as the estimate submission from the civil engineering consultants, etc.
Sutradhar, Alok; Park, Jaejong; Carrau, Diana; Nguyen, Tam H; Miller, Michael J; Paulino, Glaucio H
2016-07-01
Large craniofacial defects require efficient bone replacements which should not only provide good aesthetics but also possess stable structural function. The proposed work uses a novel multiresolution topology optimization method to achieve this task. Using a compliance minimization objective, patient-specific bone replacement shapes can be designed for different clinical cases that ensure revival of efficient load transfer mechanisms in the mid-face. In this work, four clinical cases are introduced and their respective patient-specific designs are obtained using the proposed method. The optimized designs are then virtually inserted into the defect to visually inspect the viability of the design. Further, once the design is verified by the reconstructive surgeon, prototypes are fabricated using a 3D printer for validation. The robustness of the designs is mechanically tested by subjecting them to a physiological loading condition which mimics masticatory activity. The full-field strain results from 3D image correlation and the finite element analysis imply that the solution can survive a maximum mastication load of 120 lb. The designs also have the potential to restore the buttress system and provide structural integrity. Using the topology optimization framework in designing bone replacement shapes would give surgeons new alternatives for rather complicated mid-face reconstruction. PMID:26660897
A Comparison of Functional Models for Use in the Function-Failure Design Method
NASA Technical Reports Server (NTRS)
Stock, Michael E.; Stone, Robert B.; Tumer, Irem Y.
2006-01-01
When failure analysis and prevention, guided by historical design knowledge, are coupled with product design at its conception, shorter design cycles are possible. By decreasing the design time of a product in this manner, design costs are reduced and the product will better suit the customer's needs. Prior work indicates that similar failure modes occur in products (or components) with similar functionality. To capitalize on this finding, a knowledge base of historical failure information linked to functionality is assembled for use by designers. One possible use for this knowledge base is within the Elemental Function-Failure Design Method (EFDM). This design methodology and failure analysis tool begins at conceptual design and keeps the designer cognizant of failures that are likely to occur based on the product's functionality. The EFDM offers potential improvement over current failure analysis methods, such as FMEA, FMECA, and Fault Tree Analysis, because it can be implemented hand in hand with other conceptual design steps and carried throughout a product's design cycle. These other failure analysis methods can only truly be effective after a physical design has been completed. The EFDM, however, is only as good as the knowledge base it draws from, and it is therefore of utmost importance to develop a knowledge base suitable for use across a wide spectrum of products. One fundamental question that arises in using the EFDM is: at what level of detail should functional descriptions of components be encoded? This paper explores two approaches to populating a knowledge base with actual failure occurrence information from Bell 206 helicopters. Functional models expressed at various levels of detail are investigated to determine the necessary detail for an applicable knowledge base that can be used by designers in both new designs as well as redesigns. High level and more detailed functional descriptions are derived for each failed component based
Limitations of the method of characteristics when applied to axisymmetric hypersonic nozzle design
NASA Technical Reports Server (NTRS)
Edwards, Anne C.; Perkins, John N.; Benton, James R.
1990-01-01
A design study of axisymmetric hypersonic wind tunnel nozzles was initiated by NASA Langley Research Center with the objective of improving the flow quality of their ground test facilities. Nozzles for Mach 6 air, Mach 13.5 nitrogen, and Mach 17 nitrogen were designed using the Method of Characteristics/Boundary Layer (MOC/BL) approach and were analyzed with a Navier-Stokes solver. Results of the analysis agreed well with design for the Mach 6 case, but revealed oblique shock waves of increasing strength originating from near the inflection point of the Mach 13.5 and Mach 17 nozzles. The findings indicate that the MOC/BL design method has a fundamental limitation that occurs at some Mach number between 6 and 13.5. In order to define the limitation more exactly and attempt to discover the cause, a parametric study of hypersonic ideal air nozzles designed with the current MOC/BL method was conducted. Results of this study indicate that, while stagnation conditions have a moderate effect on the upper limit of the method, the method fails at Mach numbers above 8.0.
Research on design method of the full form ship with minimum thrust deduction factor
NASA Astrophysics Data System (ADS)
Zhang, Bao-ji; Miao, Ai-qin; Zhang, Zhu-xin
2015-04-01
In the preliminary design stage of full form ships, in order to obtain a hull form with low resistance and maximum propulsion efficiency, an optimization design program for a full form ship with minimum thrust deduction factor has been developed, combining potential flow theory and boundary layer theory with optimization techniques. In the optimization process, the Sequential Unconstrained Minimization Technique (SUMT) interior point method of Nonlinear Programming (NLP) is applied with the minimum thrust deduction factor as the objective function. An appropriate displacement is the basic constraint condition, and boundary layer separation is an additional one. The parameters of the hull form modification function are used as design variables. Finally, a numerical optimization example for the after-body lines of a 50,000 DWT product oil tanker is provided, indicating that propulsion efficiency was improved distinctly by this optimal design method.
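The SUMT interior-point idea named in this abstract can be sketched minimally: the constrained problem is replaced by a sequence of unconstrained barrier problems with a shrinking barrier parameter. The one-dimensional objective and constraint below are toy stand-ins for the thrust-deduction objective and displacement constraint, and the crude coordinate search stands in for a proper inner NLP solver.

```python
import math

def sumt_minimize(f, g, x0, r0=1.0, shrink=0.1, outer=6, step=0.01, inner=2000):
    """SUMT interior-point sketch: minimize f(x) subject to g(x) > 0 by
    minimizing the barrier function f(x) - r * log(g(x)) for a
    decreasing sequence of barrier parameters r."""
    x, r = x0, r0
    for _ in range(outer):
        def barrier(y):
            return f(y) - r * math.log(g(y))
        for _ in range(inner):          # crude 1-D coordinate search
            for cand in (x - step, x + step):
                if g(cand) > 0 and barrier(cand) < barrier(x):
                    x = cand
        r *= shrink                     # tighten the barrier
    return x

# toy stand-in: a quadratic "thrust deduction" objective with a single
# "displacement" constraint x > 1; the optimum sits on the boundary
x_opt = sumt_minimize(lambda x: x * x, lambda x: x - 1.0, x0=1.505)
```

The iterates stay strictly inside the feasible region (hence "interior point"), approaching the constraint boundary only as the barrier parameter shrinks.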
Three dimensional finite element methods: Their role in the design of DC accelerator systems
Podaru, Nicolae C.; Gottdang, A.; Mous, D. J. W.
2013-04-19
High Voltage Engineering has designed, built and tested a 2 MV dual irradiation system that will be applied for radiation damage studies and ion beam material modification. The system consists of two independent accelerators which support simultaneous proton and electron irradiation (energy range 100 keV - 2 MeV) of target sizes of up to 300 × 300 mm². Three dimensional finite element methods were used in the design of various parts of the system. The electrostatic solver was used to quantify essential parameters of the solid-state power supply generating the DC high voltage. The magnetostatic solver and ray tracing were used to optimize the electron/ion beam transport. Close agreement between design and measurements of the accelerator characteristics as well as beam performance indicate the usefulness of three dimensional finite element methods during accelerator system design.
A new method for the design optimization of three-phase induction motors
Daidone, A.; Parasiliti, F.; Villani, M.; Lucidi, S.
1998-09-01
The paper deals with the optimization problem of induction motor design. In particular, a new global minimization algorithm is described that tries to take into account all the features of these particular problems. A first numerical comparison between this new algorithm and a method widely used in the design optimization of induction motors has been performed. The obtained results show that the proposed approach is promising.
NASA Astrophysics Data System (ADS)
Yatsuyanagi, Nobuyuki
A comprehensive design method for a LOX/Liquid-Methane (L-CH4) rocket engine combustor with a coaxial injector and the preliminary design of the regenerative cooling combustor with 100-kN thrust in vacuum at a combustion pressure of 3.43 MPa are presented. Reasonable dimensions for the combustor that satisfy the targeted C* efficiency of more than 98% and combustion stability are obtained.
In silico methods to assist drug developers in acetylcholinesterase inhibitor design.
Bermúdez-Lugo, J A; Rosales-Hernández, M C; Deeb, O; Trujillo-Ferrara, J; Correa-Basurto, J
2011-01-01
Alzheimer's disease (AD) is a neurodegenerative disease characterized by a low acetylcholine (ACh) concentration in the hippocampus and cortex. ACh is a neurotransmitter hydrolyzed by acetylcholinesterase (AChE). It is therefore not surprising that AChE inhibitors (AChEIs) have shown better results in the treatment of AD than any other strategy. To counter the effects of AD, many researchers have focused on designing and testing new AChEIs. One of the principal strategies has been the use of computational methods (structural bioinformatics or in silico methods). In this review, we summarize the in silico methods used to enhance the understanding of AChE, particularly at the binding site, and to design new AChEIs. Several computational methods have been used, such as docking approaches, molecular dynamics studies, quantum mechanical studies, electronic properties, hindrance effects, partition coefficients (Log P) and molecular electrostatic potential surfaces, among other physicochemical methods that exhibit quantitative structure-activity relationships.
What can formal methods offer to digital flight control systems design
NASA Technical Reports Server (NTRS)
Good, Donald I.
1990-01-01
Formal methods research is beginning to produce methods which will enable mathematical modeling of the physical behavior of digital hardware and software systems. The development of these methods directly supports the NASA mission of increasing the scope and effectiveness of flight system modeling capabilities. The conventional, continuous mathematics that is used extensively in modeling flight systems is not adequate for accurate modeling of digital systems. Therefore, the current practice of digital flight control system design has not had the benefits of the extensive mathematical modeling which is common in other parts of flight system engineering. Formal methods research shows that by using discrete mathematics, very accurate modeling of digital systems is possible. These discrete modeling methods will bring the traditional benefits of modeling to digital hardware and software design. Sound reasoning about accurate mathematical models of flight control systems can be an important part of reducing the risk of unsafe flight control.
Aerodynamic aircraft design methods and their notable applications: Survey of the activity in Japan
NASA Technical Reports Server (NTRS)
Fujii, Kozo; Takanashi, Susumu
1991-01-01
An overview of aerodynamic aircraft design methods and their recent applications in Japan is presented. A design code which was developed at the National Aerospace Laboratory (NAL) and is now in use is discussed; hence, most of the examples are the result of collaborative work between heavy industry and the National Aerospace Laboratory. A wide variety of applications in the transonic to supersonic flow regimes are presented. Although the design of aircraft elements for external flows is the main focus, some internal flow applications are also presented. Recent applications of the design code, using the Navier-Stokes and Euler equations in the analysis mode, include the design of HOPE (a space vehicle) and Upper Surface Blowing (USB) aircraft configurations.
Performance-based plastic design method for steel concentric braced frames
NASA Astrophysics Data System (ADS)
Banihashemi, M. R.; Mirzagoltabar, A. R.; Tavakoli, H. R.
2015-09-01
This paper presents a performance-based plastic design (PBPD) methodology for the design of steel concentric braced frames. The design base shear is obtained from an energy-work balance equation using a pre-selected target drift and yield mechanism. To achieve the intended yield mechanism and behavior, plastic design is applied to detail the frame members. For validation, three baseline frames (3-, 6- and 9-story) are designed according to the AISC seismic provisions (Seismic Provisions for Structural Steel Buildings, American Institute of Steel Construction, Chicago, 2005). The frames are then redesigned based on the PBPD method and subjected to extensive nonlinear dynamic time-history analyses. The results show that the PBPD frames meet all the intended performance objectives in terms of yield mechanisms and target drifts, whereas the baseline frames show very poor response due to premature brace fractures leading to unacceptably large drifts and instability.
A new method for designing a compliant mechanism based displacement amplifier
NASA Astrophysics Data System (ADS)
Bharanidaran, R.; Aswin Srikanth, Sai
2016-09-01
With the advancement of precision industries, displacement-amplifying devices are essential to produce precise, long-range motion for micro-actuators. A compliant-mechanism-based displacement amplifier (DA) is well suited to attaining high-precision motion. A compliant mechanism utilizes the elastic nature of the material to achieve the required motion. In this research paper, a compliant mechanism design is developed using topology optimization. The output of the topologically optimized design is impossible to fabricate as it is, due to the presence of senseless regions. Hence, this optimized design is considered a primary design of the compliant mechanism, which provides the configuration of the kinematic linkages as well as the geometrical locations of the flexure hinges. Selection of appropriate geometrical parameters of the flexure hinges is another critical task in the design process, and a parameterization technique is used to determine the flexure hinge parameters. The structural performance of the mechanical amplifier is confirmed using the finite element method (FEM).
On the Use of Parametric-CAD Systems and Cartesian Methods for Aerodynamic Design
NASA Technical Reports Server (NTRS)
Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.
2004-01-01
Automated, high-fidelity tools for aerodynamic design face critical issues in attempting to optimize real-life geometry and in permitting radical design changes. Success in these areas promises not only significantly shorter design-cycle times, but also superior and unconventional designs. To address these issues, we investigate the use of a parametric-CAD system in conjunction with an embedded-boundary Cartesian method. Our goal is to combine the modeling capabilities of feature-based CAD with the robustness and flexibility of component-based Cartesian volume-mesh generation for complex geometry problems. We present the development of an automated optimization framework with a focus on the deployment of such a CAD-based design approach in a heterogeneous parallel computing environment.
Efficient design of a truss beam by applying first order optimization method
NASA Astrophysics Data System (ADS)
Fedorik, Filip
2013-10-01
The application of optimization procedures in structural design is a widely discussed problem, driven by the still-increasing demands placed on structures. The use of optimization methods in efficient design has undergone great development, especially in series production, where even small savings can lead to a considerable reduction of total costs. The presented paper deals with the application and analysis of the First Order optimization technique, implemented in the Design Optimization module of the multi-physics FEM program ANSYS, in steel truss-beam design. The design constraints are stated by EN 1993 Eurocode 3: uniform compression forces in compression members and tensile resistance moments in tension members. Furthermore, a minimum frequency of the first natural modal shape of the structure is determined. The aim of the solution is to minimize the weight of the structure by changing the members' cross-section properties.
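A first-order (gradient-based) sizing loop of the kind this abstract applies can be sketched as follows. The member check here is a deliberately simplified stand-in, A >= F/sigma_allow, in place of the full EN 1993 verifications; the forces, material values and learning rate are illustrative assumptions only.

```python
def first_order_sizing(forces, sigma_allow=235e6, rho=7850.0, length=2.0,
                       lr=1e-9, iters=2000):
    """Projected-gradient sketch of a first-order sizing method: minimize
    the truss weight rho * length * sum(A) while keeping each member's
    stress F / A below sigma_allow (a simplified stand-in for the
    EN 1993 member checks). After every gradient step the areas are
    projected back onto the feasible set A >= F / sigma_allow."""
    areas = [1e-3] * len(forces)                       # initial sections, m^2
    for _ in range(iters):
        for i, F in enumerate(forces):
            areas[i] -= lr * rho * length              # weight-gradient step
            areas[i] = max(areas[i], F / sigma_allow)  # stress feasibility
    return areas

# two hypothetical tension members carrying 100 kN and 50 kN
sections = first_order_sizing([100e3, 50e3])
```

Because the weight gradient is constant and the constraint set is a simple box, the projected iterates settle exactly on the active stress bound, which is where a minimum-weight design ends up.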
Differential assessment of designations of wetland status using two delineation methods.
Wu, Meiyin; Kalma, Dennis; Treadwell-Steitz, Carol
2014-07-01
Two different methods are commonly used to delineate and characterize wetlands. The U.S. Army Corps of Engineers (ACOE) delineation method uses field observation of hydrology, soils, and vegetation. The U.S. Fish and Wildlife Service's National Wetland Inventory Program (NWI) relies on remote sensing and photointerpretation. This study compared designations of wetland status at selected study sites using both methods. Twenty wetlands from the Wetland Boundaries Map of the Ausable-Boquet River Basin (created using the revised NWI method) in the Ausable River watershed in Essex and Clinton Counties, NY, were selected for this study. Sampling sites within and beyond the NWI wetland boundaries were selected. During the summers of 2008 and 2009, wetland hydrology, soils, and vegetation were examined for wetland indicators following the methods described in the ACOE delineation manual. The study shows that the two methods agree at 78 % of the sampling sites and disagree at 22 % of the sites. Ninety percent of the sampling locations within the wetland boundaries on the NWI maps were categorized as ACOE wetlands with all three ACOE wetland indicators present. A binary linear logistic regression model analyzed the relationship between the designations of the two methods. The outcome of the model indicates that 83 % of the time, the two wetland designation methods agree. When discrepancies are found, it is the presence or absence of wetland hydrology and vegetation that causes the differences in delineation.
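The agreement figures in this abstract come from comparing two parallel binary designations site by site. A minimal sketch, with invented site data tuned only to reproduce the reported 78 % headline number, is:

```python
def designation_agreement(acoe, nwi):
    """Site-by-site comparison of two binary wetland designations.
    acoe, nwi: parallel lists of booleans (True = designated wetland).
    Returns the percent agreement and a 2x2 contingency table keyed by
    (acoe_designation, nwi_designation)."""
    table = {}
    for a, n in zip(acoe, nwi):
        table[(a, n)] = table.get((a, n), 0) + 1
    agree = table.get((True, True), 0) + table.get((False, False), 0)
    return 100.0 * agree / len(acoe), table

# invented designations for 50 sites, echoing the reported 78 % agreement
acoe = [True] * 50
nwi = [True] * 39 + [False] * 11
pct, table = designation_agreement(acoe, nwi)   # pct == 78.0
```

The off-diagonal cells of the contingency table are exactly the discrepancy sites the study attributes to hydrology and vegetation indicators.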
Predictive Array Design. A method for sampling combinatorial chemistry library space.
Lipkin, M J; Rose, V S; Wood, J
2002-01-01
A method, Predictive Array Design, is presented for sampling combinatorial chemistry space and selecting a subarray for synthesis based on the experimental design method of Latin Squares. The method is appropriate for libraries with three sites of variation. Libraries with four sites of variation can be designed using the Graeco-Latin Square. Simulated annealing is used to optimise the physicochemical property profile of the sub-array. The sub-array can be used to make predictions of the activity of compounds in the all combinations array if we assume each monomer has a relatively constant contribution to activity and that the activity of a compound is composed of the sum of the activities of its constitutive monomers.
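The Latin Square construction at the heart of Predictive Array Design can be sketched directly. Assuming three sites of variation with n monomers each, the rule k = (i + j) mod n yields one standard Latin square; the real method would follow this with simulated annealing over physicochemical property profiles, which is omitted here.

```python
def latin_square_subarray(n):
    """Select n^2 of the n^3 combinations (i, j, k) of a three-site
    library: each (site-1, site-2) monomer pair appears exactly once,
    and the site-3 monomer k = (i + j) mod n forms a Latin square, so
    each k appears exactly once in every row i and every column j."""
    return [(i, j, (i + j) % n) for i in range(n) for j in range(n)]

# an 8 x 8 x 8 virtual library: synthesize 64 compounds instead of 512
subarray = latin_square_subarray(8)
```

Under the additive-monomer assumption stated in the abstract, the activity of any unmade compound (i, j, k) can then be predicted by summing monomer contributions estimated from the sub-array.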
Takeda, T.; Shimazu, Y.; Hibi, K.; Fujimura, K.
2012-07-01
Under an R&D project to improve modeling accuracy for the design of fast breeder reactors, the authors are developing a neutronics calculation method for designing a large commercial-type sodium-cooled fast reactor. The calculation method is established by taking into account the special features of the reactor, such as the use of annular fuel pellets, inner duct tubes in large fuel assemblies, and the large core. Verification, Validation, and Uncertainty Quantification (V&V and UQ) of the calculation method is being performed using measured data from the prototype FBR Monju. The results of this project will be used in the design and analysis of the commercial-type demonstration FBR, known as the Japan Sodium-cooled Fast Reactor (JSFR). (authors)
Prospects of Applying Enhanced Semi-Empirical QM Methods for Virtual Drug Design.
Yilmazer, Nusret Duygu; Korth, Martin
2016-01-01
The last five years have seen a renaissance of semiempirical quantum mechanical (SQM) methods in the field of virtual drug design, largely due to the increased accuracy of so-called enhanced SQM approaches. These methods make use of additional terms for treating dispersion (D) and hydrogen bond (H) interactions with an accuracy comparable to dispersion-corrected density functional theory (DFT-D). DFT-D in turn was shown to provide an accuracy comparable to the most sophisticated QM approaches when it comes to non-covalent intermolecular forces, which usually dominate the protein/ligand interactions that are central to virtual drug design. Enhanced SQM methods thus offer a very promising way to improve upon the current state of the art in the field of virtual drug design. PMID:27183985
Design of Intelligent Hydraulic Excavator Control System Based on PID Method
NASA Astrophysics Data System (ADS)
Zhang, Jun; Jiao, Shengjie; Liao, Xiaoming; Yin, Penglong; Wang, Yulin; Si, Kuimao; Zhang, Yi; Gu, Hairong
Most domestically designed hydraulic excavators adopt the constant-power design method and set 85%~90% of engine power as the hydraulic system absorption power, which causes high energy loss due to power mismatch between the engine and the pump. Since variation in the engine's rotational speed can sense the power shift of the load, it provides a new way to adjust the power matching between engine and pump through engine speed. Based on a negative-flux hydraulic system, an intelligent hydraulic excavator control system was designed using the rotational-speed-sensing method to improve energy efficiency. The control system consisted of an engine control module, a pump power adjustment module, an engine idle module, and a system fault diagnosis module. A special PLC with CAN bus was used to acquire the sensor signals and adjust the pump absorption power according to load variation. Four energy-saving control strategies combined with the constant-power method were employed to improve fuel utilization. Three power modes (H, S, and L) were designed to meet different working statuses; an auto-idle function was employed to save energy through two pressure switches that detect the work status, with 1300 rpm set as the idle speed according to the engine fuel-consumption curve. A transient overload function was designed for short periods of deep digging without extra fuel consumption. An incremental PID method was employed to realize power matching between engine and pump: the variation of rotational speed was taken as the PID algorithm's input, and the current of the proportional valve of the variable-displacement pump was the PID's output. The results indicated that auto idle could decrease fuel consumption by 33.33% compared with working at the maximum speed of H mode, and that the PID control method could make full use of the maximum engine power in each power mode while keeping the engine speed within a stable range. Application of the rotational-speed-sensing method thus provides a reliable way to improve the excavator's energy efficiency.
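An incremental ("velocity-form") PID controller of the kind the abstract describes outputs a change in actuation per sample rather than an absolute value, which suits integrating actuators like a proportional valve. A minimal sketch, with made-up gains; the mapping from speed error to valve current is the abstract's scheme, but the numbers are illustrative:

```python
class IncrementalPID:
    """Velocity-form PID: each step returns the *change* in output,
    here interpreted as the change in proportional-valve current
    driven by the engine-speed deviation (illustrative units)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # error at k-1
        self.e2 = 0.0  # error at k-2

    def step(self, error):
        du = (self.kp * (error - self.e1)
              + self.ki * error
              + self.kd * (error - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, error
        return du

pid = IncrementalPID(kp=1.0, ki=0.5, kd=0.1)
du1 = pid.step(1.0)   # engine speed sags under load -> increase current
du2 = pid.step(1.0)   # sustained error -> integral term keeps pushing
```

Because the controller emits increments, its steady-state output accumulates in the actuator, which is why a constant speed error keeps adjusting the pump displacement until the power balance is restored.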
Use of experimental data in testing methods for design against uncertainty
NASA Astrophysics Data System (ADS)
Rosca, Raluca Ioana
Modern design methods take into consideration the fact that uncertainty is present in everyday life, whether in the form of variable loads (the strongest wind that will affect a building), the material properties of an alloy, or future demand for a product or the cost of labor. Moreover, the Japanese example showed that it may be more cost-effective to design while accounting for uncertainty than to plan to eliminate or greatly reduce it. The dissertation starts by comparing the theoretical bases of two methods for design against uncertainty, namely probability theory and possibility theory. A two-variable design problem is then used to show the differences. It is concluded that for design problems with two or more failure cases of very different magnitude (such as a car stopping for lack of fuel versus engine failure), probability theory divides existing resources more intuitively than possibility theory. The dissertation continues with a description of simple experiments (building towers of dominoes) and then presents a methodology for increasing the amount of information that can be drawn from a given data set. The methodology is demonstrated on the Bidder-Challenger problem, a simulation of a problem faced by a microchip company setting a target speed for its next microchip. The simulations use the domino experimental data. It is demonstrated that important insights into methods of probability- and possibility-based design can be gained from experiments.
Study on the rotor design method for a small propeller-type wind turbine
NASA Astrophysics Data System (ADS)
Nishi, Yasuyuki; Yamashita, Yusuke; Inagaki, Terumi
2016-08-01
Small propeller-type wind turbines operate at low Reynolds numbers, which limits the choice of usable airfoils. Thus, their design method is not sufficiently established, and their performance is often low. The ultimate goal of this research is to establish high-performance design guidelines and design methods for small propeller-type wind turbines. To that end, we designed two rotors: Rotor A, based on the optimum rotor design method from blade element momentum theory, and Rotor B, in which the chord length at the tip is extended and the chord length distribution is linearized. We examined the performance characteristics and flow fields of the two rotors through wind tunnel experiments and numerical analysis. Our results revealed that the maximum-output tip speed ratio of Rotor B shifted lower than that of Rotor A, but the maximum output coefficient increased by approximately 38.7%. Rotors A and B experienced large-scale separation on the hub side, which extended to the mean radius in Rotor A. This difference in separation contributed to the significant decrease in Rotor A's output relative to the design value and the increase in Rotor B's output relative to Rotor A.
Cost-effective methods for designing and operating fiberglass sucker rod strings
Jacobs, G.H.
1986-01-01
This paper describes procedures used by Amoco Production Company in a West Texas district to maximize the life of more than 200 fiberglass rod strings in service at depths between 5000 and 8000 ft. The paper describes rod string design methods, operating practices, and failure analyses for two major manufacturers' rods. Emphasis has been placed on showing procedures used in designing fiberglass rod strings for cost-effective installation and for operating so as to minimize the number of rod string failures and, consequently, rod string operating costs. Actual case histories are used to illustrate the reduction in failure frequency that results from proper rod string design, operating practices, and failure analysis.
Multidisciplinary design of a rocket-based combined cycle SSTO launch vehicle using Taguchi methods
NASA Astrophysics Data System (ADS)
Olds, John R.; Walberg, Gerald D.
1993-02-01
Results are presented from the optimization process of a winged-cone configuration SSTO launch vehicle that employs a rocket-based ejector/ramjet/scramjet/rocket operational mode variable-cycle engine. The Taguchi multidisciplinary parametric-design method was used to evaluate the effects of simultaneously changing a total of eight design variables, rather than changing them one at a time as in conventional tradeoff studies. A combination of design variables was in this way identified which yields very attractive vehicle dry and gross weights.
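The Taguchi approach evaluates all variables simultaneously by running an orthogonal array of design points instead of one-at-a-time sweeps. As a hedged illustration (the study's eight variables would need a larger array than shown here), the classic L8(2^7) array can be built from three base two-level columns plus their parity interactions:

```python
from itertools import combinations, product

def l8_array():
    """Build the L8(2^7) orthogonal array: 8 runs, 7 two-level columns.
    Columns are the three base factors and the parities (XOR) of every
    larger subset of them."""
    subsets = [s for r in (1, 2, 3) for s in combinations(range(3), r)]
    return [[sum(bits[i] for i in s) % 2 for s in subsets]
            for bits in product((0, 1), repeat=3)]

oa = l8_array()
```

Orthogonality means every pair of columns exercises each combination of levels equally often, which is what lets main effects be estimated independently from only 8 of the 2^7 possible runs.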
NASA Technical Reports Server (NTRS)
Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw
1990-01-01
Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.
A Bayesian Chance-Constrained Method for Hydraulic Barrier Design Under Model Structure Uncertainty
NASA Astrophysics Data System (ADS)
Chitsazan, N.; Pham, H. V.; Tsai, F. T. C.
2014-12-01
The groundwater community has widely recognized model structure uncertainty as the major source of model uncertainty in groundwater modeling. Previous studies of aquifer remediation design, however, rarely discuss the impact of model structure uncertainty. This study combines chance-constrained (CC) programming with Bayesian model averaging (BMA) in a BMA-CC framework to assess the effect of model structure uncertainty on remediation design. To investigate this impact, we compare the BMA-CC method with traditional CC programming, which considers only model parameter uncertainty. The BMA-CC method is employed to design a hydraulic barrier to protect public supply wells of the Government St. pump station from saltwater intrusion in the "1,500-foot" sand and the "1,700-foot" sand of the Baton Rouge area, southeastern Louisiana. To address the model structure uncertainty, we develop three conceptual groundwater models based on three different hydrostratigraphic structures. The results show that using traditional CC programming overestimates design reliability. The results also show that at least five additional connector wells are needed to achieve more than a 90% design reliability level. The total amount of injected water from connector wells is higher than the total pumpage of the protected public supply wells. While the injection rate can be reduced by lowering the reliability level, the study finds that the hydraulic barrier design to protect the Government St. pump station is not economically attractive.
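The core of the BMA-CC idea can be sketched as weighting each conceptual model's constraint-satisfaction probability by its posterior model weight before checking the chance constraint. The weights and reliabilities below are made-up numbers for illustration, not the study's results:

```python
def bma_reliability(weights, reliabilities):
    """Model-averaged probability that the design constraint holds
    (e.g. heads keeping saltwater away from the supply wells),
    given posterior model weights and per-model reliabilities."""
    assert abs(sum(weights) - 1.0) < 1e-9, "BMA weights must sum to 1"
    return sum(w * r for w, r in zip(weights, reliabilities))

rels = [0.95, 0.90, 0.60]                      # three conceptual models
best_model_only = bma_reliability([1.0, 0.0, 0.0], rels)
averaged = bma_reliability([0.5, 0.3, 0.2], rels)
```

The toy numbers reproduce the qualitative finding: a single "best" model can appear to satisfy a 90% chance constraint while the model-averaged reliability falls below it, i.e. ignoring structure uncertainty overestimates reliability.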
Sensitivity Analysis of the Thermal Response of 9975 Packaging Using Factorial Design Methods
Gupta, Narendra K.
2005-10-31
A method is presented for using the statistical design-of-experiments (2^k factorial design) technique in the sensitivity analysis of the thermal response (temperature) of the 9975 radioactive material packaging, where multiple thermal properties of the impact-absorbing and fire-insulating material Celotex and certain boundary conditions are subject to uncertainty. The 2^k factorial design method is very efficient in its use of available data and is capable of analyzing the impact of main variables (Factors) and their interactions on the component design. The 9975 design is based on detailed finite element (FE) analyses and extensive proof testing to meet the design requirements given in 10CFR71 [1]. However, the FE analyses use Celotex thermal properties that are based on published data and limited experiments. Celotex is an orthotropic material used in the home-building industry. Its thermal properties are prone to variation due to manufacturing and fabrication processes and to long environmental exposure. This paper evaluates the sensitivity of variations in the thermal conductivity of the Celotex, the convection coefficient at the drum surface, and the drum emissivity (herein called Factors) on the thermal response of the 9975 packaging under Normal Conditions of Transport (NCT). Application of this methodology will ascertain the robustness of the 9975 design and can lead to a more specific and useful understanding of the effects of the various Factors on 9975 performance.
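The contrast arithmetic behind a 2^k sensitivity study is compact: with every factor coded to levels ±1, the effect of any factor or interaction is its contrast divided by 2^(k-1). A generic sketch with a made-up response model, not the 9975 thermal data:

```python
from itertools import combinations, product
from math import prod

def factorial_effects(response):
    """Main and interaction effects from a full 2^k factorial.
    `response` maps each run's coded levels (a tuple of +1/-1)
    to the observed output."""
    k = len(next(iter(response)))
    return {
        s: sum(y * prod(x[i] for i in s) for x, y in response.items())
           / 2 ** (k - 1)
        for r in range(1, k + 1)
        for s in combinations(range(k), r)
    }

# Toy model: output depends on factors 0 and 1 only, additively.
runs = {x: 10 + 3 * x[0] - 2 * x[1] for x in product((-1, 1), repeat=3)}
eff = factorial_effects(runs)
```

An effect here is the change in mean response when a factor moves from its low to its high level, so a factor entering with coefficient c shows an effect of 2c, and absent factors and interactions come out (near) zero.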
Chen, T Scott; Keating, Amy E
2012-01-01
Given the importance of protein–protein interactions for nearly all biological processes, the design of protein affinity reagents for use in research, diagnosis or therapy is an important endeavor. Engineered proteins would ideally have high specificities for their intended targets, but achieving interaction specificity by design can be challenging. There are two major approaches to protein design or redesign. Most commonly, proteins and peptides are engineered using experimental library screening and/or in vitro evolution. An alternative approach involves using protein structure and computational modeling to rationally choose sequences predicted to have desirable properties. Computational design has successfully produced novel proteins with enhanced stability, desired interactions and enzymatic function. Here we review the strengths and limitations of experimental library screening and computational structure-based design, giving examples where these methods have been applied to designing protein interaction specificity. We highlight recent studies that demonstrate strategies for combining computational modeling with library screening. The computational methods provide focused libraries predicted to be enriched in sequences with the properties of interest. Such integrated approaches represent a promising way to increase the efficiency of protein design and to engineer complex functionality such as interaction specificity. PMID:22593041
Neuhauser, Linda; Kreps, Gary L
2014-12-01
Traditional communication theory and research methods provide valuable guidance about designing and evaluating health communication programs. However, efforts to use health communication programs to educate, motivate, and support people to adopt healthy behaviors often fail to meet the desired goals. One reason for this failure is that health promotion issues are complex, changeable, and highly related to the specific needs and contexts of the intended audiences. It is a daunting challenge to effectively influence health behaviors, particularly culturally learned and reinforced behaviors concerning lifestyle factors related to diet, exercise, and substance (such as alcohol and tobacco) use. Too often, program development and evaluation are not adequately linked to provide rapid feedback to health communication program developers so that important revisions can be made to design the most relevant and personally motivating health communication programs for specific audiences. Design science theory and methods commonly used in engineering, computer science, and other fields can address such program and evaluation weaknesses. Design science researchers study human-created programs using tightly connected build-and-evaluate loops in which they use intensive participatory methods to understand problems and develop solutions concurrently and throughout the duration of the program. Such thinking and strategies are especially relevant to address complex health communication issues. In this article, the authors explore the history, scientific foundation, methods, and applications of design science and its potential to enhance health communication programs and their evaluation.
Acoustic Treatment Design Scaling Methods. Volume 1; Overview, Results, and Recommendations
NASA Technical Reports Server (NTRS)
Kraft, R. E.; Yu, J.
1999-01-01
Scale model fan rigs that simulate new generation ultra-high-bypass engines at about 1/5-scale are achieving increased importance as development vehicles for the design of low-noise aircraft engines. Testing at small scale allows the tests to be performed in existing anechoic wind tunnels, which provides an accurate simulation of the important effects of aircraft forward motion on the noise generation. The ability to design, build, and test miniaturized acoustic treatment panels on scale model fan rigs representative of the full-scale engine provides not only cost savings but also an opportunity to optimize the treatment by allowing tests of different designs. The primary objective of this study was to develop methods that will allow scale model fan rigs to be successfully used as acoustic treatment design tools. The study focuses on finding methods to extend the upper limit of the frequency range of impedance prediction models and acoustic impedance measurement methods for subscale treatment liner designs, and on confirming the predictions by correlation with measured data. This phase of the program had as a goal doubling the upper limit of impedance measurement from 6 kHz to 12 kHz. The program utilizes combined analytical and experimental methods to achieve the objectives.
Tracers and Tracer Testing: Design, Implementation, Tracer Selection, and Interpretation Methods
G. Michael Shook; Shannon L.; Allan Wylie
2004-01-01
Conducting a successful tracer test requires adhering to a set of steps. The steps include identifying appropriate and achievable test goals, identifying tracers with the appropriate properties, and implementing the test as designed. When these steps are taken correctly, a host of tracer test analysis methods are available to the practitioner. This report discusses the individual steps required for a successful tracer test and presents methods for analysis. The report is an overview of tracer technology; the Suggested Reading section offers references to the specifics of test design and interpretation.
NASA Astrophysics Data System (ADS)
Chen, Ming-Ji; Pei, Yong-Mao; Fang, Dai-Ning
2010-03-01
The classic anisotropic spherical cloak can be mimicked by many alternating thin layers of isotropic metamaterials [Qiu et al. Phys. Rev. E 79 (2009) 047602]. We propose an improved method of designing permittivity and permeability in each isotropic layer, which eliminates the jumping of the refractive index at the interface. Multilayered spherical cloaks designed by the present method perform much better than those by Qiu et al., especially for forward scattering. It is found that the ratio of layer thickness to the operating wavelength plays an important role in achieving invisibility. The presented cloak should be discretized to at least 40 layers to meet the thickness threshold corresponding to 10% scattering.
Methods for game user research: studying player behavior to enhance game design.
Desurvire, Heather; El-Nasr, Magy Seif
2013-01-01
The emerging field of game user research (GUR) investigates interaction between players and games and the surrounding context of play. Game user researchers have explored methods from, for example, human-computer interaction, psychology, interaction design, media studies, and the social sciences. They've extended and modified these methods for different types of digital games, such as social games, casual games, and serious games. This article describes several current GUR methods. A case study illustrates two specific methods: think-aloud and heuristics. PMID:24808062
Multiscale Design of Advanced Materials based on Hybrid Ab Initio and Quasicontinuum Methods
Luskin, Mitchell
2014-03-12
This project united researchers from mathematics, chemistry, computer science, and engineering for the development of new multiscale methods for the design of materials. Our approach was highly interdisciplinary, but it had two unifying themes: first, we utilized modern mathematical ideas about change-of-scale and state-of-the-art numerical analysis to develop computational methods and codes to solve real multiscale problems of DOE interest; and, second, we took very seriously the need for quantum mechanics-based atomistic forces, and based our methods on fast solvers of chemically accurate methods.
Neural Network and Regression Methods Demonstrated in the Design Optimization of a Subsonic Aircraft
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Lavelle, Thomas M.; Patnaik, Surya
2003-01-01
The neural network and regression methods of NASA Glenn Research Center's COMETBOARDS design optimization testbed were used to generate approximate analysis and design models for a subsonic aircraft operating at Mach 0.85 cruise speed. The analytical model is defined by nine design variables: wing aspect ratio, engine thrust, wing area, sweep angle, chord-thickness ratio, turbine temperature, pressure ratio, bypass ratio, and fan pressure; and eight response parameters: weight, landing velocity, takeoff and landing field lengths, approach thrust, overall efficiency, and compressor pressure and temperature. The variables were adjusted to optimally balance the engines to the airframe. The solution strategy included a sensitivity model and the soft analysis model. Researchers generated the sensitivity model by training the approximators to predict an optimum design. The trained neural network predicted all response variables within 5-percent error; this was reduced to 1 percent by the regression method. The soft analysis model was developed to replace aircraft analysis as the reanalyzer in design optimization. Soft models have been generated for a neural network method, a regression method, and a hybrid method obtained by combining the approximators. The performance of the models is graphed for aircraft weight versus thrust as well as for wing area and turbine temperature. The regression method followed the analytical solution with little error. The neural network exhibited 5-percent maximum error over all parameters. Performance of the hybrid method was intermediate in comparison to the individual approximators. Error in the response variable is smaller than that shown in the figure because of a distortion scale factor. The overall performance of the approximators was considered to be satisfactory because aircraft analysis with NASA Langley Research Center's FLOPS (Flight Optimization System) code is a synthesis of diverse disciplines: weight estimation, aerodynamics, and others.
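The regression-surrogate idea, replacing an expensive analysis code with a cheap fitted model inside the optimizer, can be sketched in a few lines. The quadratic "analysis" function below is a stand-in invented for illustration, not the FLOPS code or COMETBOARDS:

```python
import numpy as np

# Mock "analysis code": aircraft weight as a function of engine thrust.
# Coefficients and units are illustrative, not from the study.
thrust = np.linspace(100.0, 300.0, 25)                # kN, sampled designs
weight = 5.0e4 + 120.0 * thrust + 0.3 * thrust ** 2   # kg, analysis output

# Regression surrogate: fit once, then query cheaply during optimization.
coef = np.polyfit(thrust, weight, deg=2)

query = 180.0                                  # an unsampled design point
pred = np.polyval(coef, query)
actual = 5.0e4 + 120.0 * query + 0.3 * query ** 2
```

Because the surrogate is differentiable and nearly free to evaluate, the optimizer can explore the design space densely and fall back to the real analysis only to verify candidate optima, which is how the abstract's "soft analysis model" replaces the reanalyzer.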
Hybrid method for designing digital FIR filters based on fractional derivative constraints.
Baderia, Kuldeep; Kumar, Anil; Kumar Singh, Girish
2015-09-01
In this manuscript, a hybrid approach based on the Lagrange multiplier method and the cuckoo search (CS) optimization technique is proposed for the design of linear-phase finite impulse response (FIR) filters using fractional derivative constraints. In the proposed method, the FIR filter is designed by optimizing the integral squared error in the passband and stopband relative to the ideal response, such that the fractional derivatives of the designed filter response become zero at a given frequency point. The Lagrange multiplier method is exploited to find the optimized filter coefficients. Optimal values of the fractional derivative constraints for the optimized filter coefficients are determined by minimizing an objective function constructed from the sum of the maximum passband ripple and maximum stopband ripple in the frequency domain using the CS algorithm. Performance of the proposed method is evaluated by passband error (ϕ(p)), stopband error (ϕ(s)), stopband attenuation (A(s)), maximum passband ripple (MPR), maximum stopband ripple (MSR), and CPU time. A comparative study of the performance of particle swarm optimization (PSO) and artificial bee colony (ABC) for designing FIR filters with the proposed method is also made. PMID:26142984
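For orientation, the simplest linear-phase FIR lowpass is the textbook windowed-sinc design; constrained methods like the paper's Lagrange-multiplier/cuckoo-search hybrid then trade off ripple and derivative behavior beyond what this baseline offers. The sketch below is that baseline only, not the paper's method:

```python
import math

def fir_lowpass(num_taps, cutoff):
    """Hamming-windowed-sinc linear-phase FIR lowpass.
    `cutoff` is the normalised frequency in (0, 0.5); the taps are
    scaled for unit DC gain."""
    m = num_taps - 1
    h = []
    for n in range(num_taps):
        x = n - m / 2                      # centre the sinc
        ideal = (2 * cutoff if x == 0
                 else math.sin(2 * math.pi * cutoff * x) / (math.pi * x))
        window = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)
        h.append(ideal * window)
    s = sum(h)
    return [v / s for v in h]              # normalise DC gain to 1

taps = fir_lowpass(31, 0.2)
```

The symmetric impulse response is what guarantees exactly linear phase, the property both the baseline and the paper's optimized designs preserve.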
MoM-based topology optimization method for planar metallic antenna design
NASA Astrophysics Data System (ADS)
Liu, Shutian; Wang, Qi; Gao, Renjing
2016-09-01
The metallic antenna design problem can be treated as a problem to find the optimal distribution of conductive material in a certain domain. Although this problem is well suited for topology optimization method, the volumetric distribution of conductive material based on 3D finite element method (FEM) has been known to cause numerical bottlenecks such as the skin depth issue, meshed "air regions" and other numerical problems. In this paper a topology optimization method based on the method of moments (MoM) for configuration design of planar metallic antenna was proposed. The candidate structure of the planar metallic antenna was approximately considered as a resistance sheet with position-dependent impedance. In this way, the electromagnetic property of the antenna can be analyzed easily by using the MoM to solve the radiation problem of the resistance sheet in a finite domain. The topology of the antenna was depicted with the distribution of the impedance related to the design parameters or relative densities. The conductive material (metal) was assumed to have zero impedance, whereas the non-conductive material was simulated as a material with a finite but large enough impedance. The interpolation function of the impedance between conductive material and non-conductive material was taken as a tangential function. The design of planar metallic antenna was optimized for maximizing the efficiency at the target frequency. The results illustrated the effectiveness of the method.
Energy cost based design optimization method for medium temperature CPC collectors
NASA Astrophysics Data System (ADS)
Horta, Pedro; Osório, Tiago; Collares-Pereira, Manuel
2016-05-01
CPC collectors, approaching the ideal concentration limits established by non-imaging optics, can be designed with acceptance angles that enable fully stationary designs, useful for applications in the low temperature range (T < 100°C). Their use in the medium temperature range (100°C < T < 250°C) typically requires higher concentration factors, in turn requiring seasonal tracking strategies. Considering the CPC design options in terms of effective concentration factor, truncation, concentrator height, mirror perimeter, seasonal tracking, trough spacing, etc., an energy-cost-function-based design optimization method is presented in this article. Accounting for the impact of the design on optical performance (optical efficiency, incidence angle modifier, diffuse acceptance) and thermal performance (dependent on the concentration factor), the optimization function integrates design (e.g. mirror area, frame length, trough spacing/shading), concept (e.g. rotating/stationary components, materials) and operation (e.g. O&M, tilt shifts and tracking strategy) costs into a collector-specific energy cost function, in €/(kWh·m²). The use of such a function yields a location- and operating-temperature-dependent design optimization procedure aiming at the lowest solar energy cost. Illustrating this approach, optimization results are presented for a (tubular) evacuated-absorber CPC design operating in Morocco.
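The overall shape of such a cost function is a levelised specific-cost ratio: lifetime costs over lifetime collected energy per unit area. The simple discounting scheme and all figures below are illustrative assumptions, not the paper's cost model:

```python
def specific_energy_cost(capex, om_per_year, yield_per_year, years, rate):
    """Levelised cost of collector heat, EUR/kWh per m^2 of collector
    (illustrative: capex up front, O&M and yield discounted at `rate`).
    `yield_per_year` is annual collected energy per m^2, in kWh/m^2."""
    disc = [(1.0 + rate) ** -t for t in range(1, years + 1)]
    lifetime_cost = capex + om_per_year * sum(disc)     # EUR per m^2
    lifetime_energy = yield_per_year * sum(disc)        # kWh per m^2
    return lifetime_cost / lifetime_energy

# Illustrative design: 500 EUR/m^2 capex, 10 EUR/m^2 yearly O&M,
# 400 kWh/m^2 annual yield, 20-year life.
undiscounted = specific_energy_cost(500.0, 10.0, 400.0, 20, 0.00)
discounted = specific_energy_cost(500.0, 10.0, 400.0, 20, 0.05)
```

Optimizing a CPC design then means evaluating this ratio for each candidate (concentration factor, truncation, tracking strategy, ...) at the target site and temperature, and keeping the design with the lowest value.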
Casey, S.M.
1980-06-01
The purpose of this document is to provide an overview of the recommended activities and methods to be employed by a team of human factors engineers during the development of a nuclear waste retrieval system. This system, as it is presently conceptualized, is intended to be used for the removal of storage canisters (each canister containing a spent fuel rod assembly) located in an underground salt bed depository. This document, and the others in this series, have been developed for the purpose of implementing human factors engineering principles during the design and construction of the retrieval system facilities and equipment. The methodology presented has been structured around a basic systems development effort involving preliminary development, equipment development, personnel subsystem development, and operational test and evaluation. Within each of these phases, the recommended activities of the human engineering team have been stated, along with descriptions of the human factors engineering design techniques applicable to the specific design issues. Explicit examples of how the techniques might be used in the analysis of human tasks and equipment required in the removal of spent fuel canisters have been provided. Only those techniques having possible relevance to the design of the waste retrieval system have been reviewed. This document is intended to provide the framework for integrating human engineering with the rest of the system development effort. The activities and methodologies reviewed in this document have been discussed in the general order in which they will occur, although the time frame (the total duration of the development program in years and months) in which they should be performed has not been discussed.
The comparison of laser surface designing and pigment printing methods for the product quality
NASA Astrophysics Data System (ADS)
Ozguney, Arif Taner
2007-07-01
Developing new designs using the computer and transferring them to textile surfaces will not only increase and facilitate production in a more practical manner, but also make it possible to create identical designs. This means serial manufacturing of products at standard quality and increasing their added value. Moreover, creating textile designs using the laser will also contribute to the value of the product from the consumer's point of view, because unlike other methods it causes no wearing or deformation of the fabric's texture. In the system that was designed, a laser beam of selected wavelength and intensity was directed onto a selected textile surface, and a computer-controlled laser beam source was used to change the colour substances on the textile surface. Pigment printing is also used for designing in the textile and apparel sector; in this method, designs are transferred to the fabric manually using dyestuff. In this study, the denim fabric used for the surfacing trial was 100% cotton, with a weft count of 20 per centimeter, a warp count of 27 per centimeter, and a fabric weight of 458 g/m². The first step was to prepare 40 denim samples, half of which were designed manually by pigment printing and the other half using the laser beam. Some test applications were then performed. The tensile strength, tensile extension, and some fastness values of the pieces designed with the two methods were compared according to international standards.
HOFMAYER,C.; MILLER,C.; WANG,Y.; COSTELLO,J.
2001-08-12
Revisions to the USNRC Regulatory Guides and Standard Review Plan Sections devoted to earthquake engineering practice are currently in process. The intent is to reflect changes in engineering practice that have evolved in the twenty years that have passed since those criteria were originally published. Additionally, field observations of the effects of the Northridge (1994) and Kobe (1995) earthquakes have inspired some reassessment in the technical community about certain aspects of design practice. In particular, questions have arisen about the effectiveness of basing earthquake-resistant designs on resistance to seismic forces and then evaluating the tolerability of the expected displacements. Therefore, a research effort was undertaken to examine the implications for NRC's seismic practice of the move, in the earthquake engineering community, toward using expected displacement rather than force (or stress) as the basis for assessing design adequacy. The results of the NRC-sponsored research on this subject are reported in this paper. A slow trend toward the utilization of displacement-based methods for design was noted. However, there is a more rapid trend toward the use of displacement-based methods for seismic evaluation of existing facilities. A document known as FEMA 273 has been developed and is being used as the basis for the design of modifications to enhance the seismic capability of existing non-nuclear facilities. The research concluded that displacement-based methods, such as given in FEMA 273, may be useful for seismic margin studies of existing nuclear power stations. They are unlikely to be useful for the basic design of new stations, since nuclear power stations are designed to remain elastic during a seismic event. They could, however, be useful for estimating the margins associated with that design.
Two Reconfigurable Flight-Control Design Methods: Robust Servomechanism and Control Allocation
NASA Technical Reports Server (NTRS)
Burken, John J.; Lu, Ping; Wu, Zheng-Lu; Bahm, Cathy
2001-01-01
Two methods for control system reconfiguration have been investigated. The first method is a robust servomechanism control approach (optimal tracking problem) that is a generalization of the classical proportional-plus-integral control to multiple input-multiple output systems. The second method is a control-allocation approach based on a quadratic programming formulation. A globally convergent fixed-point iteration algorithm has been developed to make onboard implementation of this method feasible. These methods have been applied to reconfigurable entry flight control design for the X-33 vehicle. Examples presented demonstrate simultaneous tracking of angle-of-attack and roll angle commands during failures of the right body-flap actuator. Although simulations demonstrate success of the first method in most cases, the control-allocation method appears to provide uniformly better performance in all cases.
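The control-allocation step described above can be sketched as a small quadratic program solved by a projected fixed-point (gradient) iteration. This is a generic scheme, not the paper's exact algorithm, and the effector matrix, actuator limits, and commanded moments are invented for illustration:

```python
import numpy as np

def allocate(B, d, u_min, u_max, gamma=0.01, iters=2000):
    """Minimize ||B u - d||^2 subject to u_min <= u <= u_max by a projected
    fixed-point iteration (a generic sketch, not the X-33 implementation)."""
    u = np.zeros(B.shape[1])
    for _ in range(iters):
        u = u - gamma * B.T @ (B @ u - d)   # gradient step on the residual
        u = np.clip(u, u_min, u_max)        # project onto actuator limits
    return u

B = np.array([[1.0, 0.5, -0.5],             # hypothetical effector-to-moment map
              [0.0, 1.0,  1.0]])
d = np.array([0.8, 0.4])                    # commanded moments
u = allocate(B, d, u_min=-1.0, u_max=1.0)
print(bool(np.allclose(B @ u, d, atol=1e-6)))  # True: command attainable within limits
```

With attainable commands the residual decays geometrically; when the command is unattainable, the projection leaves the best feasible compromise.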
Use of clinical simulation for assessment in EHR-procurement: design of method.
Jensen, Sanne; Rasmussen, Stine L; Lyng, Karen M
2013-01-01
In Denmark, two large regions cooperate in a public procurement process for acquiring a new eHealth platform to support the daily clinical work of approximately 40,000 users in 14 hospitals. It is essential that the new platform, besides fulfilling comprehensive detailed specifications, supports the daily work practice, which consists of numerous mixed tasks executed by many different clinical actors in various settings. Within health informatics it has proven beneficial to use human factors approaches in the design process to ensure systems that are responsive to the actual field of application. While design methods are widely described, there are very limited descriptions of how to assess and compare different EHR platforms and their support of work processes at the time of procurement. This paper describes the method we have developed to undertake this task. It is discussed how the method differs from, and how it has been adjusted relative to, existing assessment methods. Finally, future considerations are discussed.
NASA Technical Reports Server (NTRS)
Ibrahim, A. H.; Tiwari, S. N.; Smith, R. E.
1997-01-01
Variational methods (VM) of sensitivity analysis are employed to derive the costate (adjoint) equations, the transversality conditions, and the functional sensitivity derivatives. In the derivation of the sensitivity equations, the variational methods use the generalized calculus of variations, in which the variable boundary is considered as the design function. The converged solution of the state equations, together with the converged solution of the costate equations, is integrated along the domain boundary to uniquely determine the functional sensitivity derivatives with respect to the design function. The application of the variational methods to aerodynamic shape optimization problems is demonstrated for internal flow problems in the supersonic Mach number range. The study shows that, while maintaining the accuracy of the functional sensitivity derivatives within a reasonable range for engineering prediction purposes, the variational methods show a substantial gain in computational efficiency, i.e., computer time and memory, when compared with finite-difference sensitivity analysis.
Development of Approximate Methods for the Analysis of Patch Damping Design Concepts
NASA Astrophysics Data System (ADS)
Kung, S.-W.; Singh, R.
1999-02-01
This paper develops three approximate methods for the analysis of patch damping designs. Undamped natural frequencies and modal loss factors are calculated using the Rayleigh energy method and the modal strain energy technique, respectively, without explicitly solving high-order differential equations or complex eigenvalue problems. Approximate Method I is developed for sandwich beams, assuming that the damped mode shapes are given by the Euler beam eigenfunctions. The superposition principle is then used to accommodate any arbitrary mode shape, which may be obtained from modal experiments or the finite element method. In Method II, the formulation is further simplified with the assumption of a very compliant viscoelastic core. Finally, Method III considers a compact patch problem. The modal loss factor is then expressed as a product of terms related to material properties, layer thickness, patch size and patch performance. Approximate Methods II and III are also extended to rectangular plates. The formulations are verified by conducting analogous modal measurements and by comparing predictions with those obtained using the Rayleigh-Ritz method (without making any of the above-mentioned assumptions). Several example cases are presented to demonstrate the validity and utility of the approximate methods for patch damping design concepts.
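The modal strain energy idea used above can be stated in one line: the modal loss factor of the structure is the viscoelastic material's loss factor weighted by the core's share of the modal strain energy. A minimal sketch with assumed energy fractions (not values from the paper):

```python
def modal_loss_factor(eta_core, U_core, U_total):
    """Modal strain energy estimate: the modal loss factor is the core
    material's loss factor weighted by the core's share of strain energy."""
    return eta_core * U_core / U_total

# Assumed values: core loss factor 0.5, core stores 12% of the modal strain energy
eta = modal_loss_factor(eta_core=0.5, U_core=0.12, U_total=1.0)
print(eta)  # 0.06
```

The energy fractions would come from an undamped modal solution (e.g., finite elements), which is what lets the method avoid complex eigenvalue problems.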
NASA Astrophysics Data System (ADS)
Pourtau, J. C.
Analysis methods and tests implemented by Alcatel-Espace to make spacecraft electronic units resistant to electrostatic discharges (ESD) are presented. ESD qualification tests for Telecom-2 and Intelsat-7 equipment are described. A methodology for the analysis of ESD effects on electronic equipment which combines experimental and computer simulations is defined. These methods have led to the definition of design rules that will ensure sufficient immunity of electronic circuits against ESD.
NASA Astrophysics Data System (ADS)
Pingen, Georg
The objective of this work is the development of a formal design approach for fluidic systems, providing conceptually novel design layouts with the provision of only boundary conditions and some basic parameters. The lattice Boltzmann method (LBM) is chosen as a flow model due to its simplicity, inherent use of immersed boundary methods, parallelizability, and general flexibility. Immersed Boundary Methods in the form of a Brinkmann penalization are used to continuously vary the flow from fluid to solid, leading to a material distribution based boundary representation. An analytical adjoint sensitivity analysis is derived for the lattice Boltzmann method, enabling the combination of the lattice Boltzmann method with optimization techniques. This results in the first application of design optimization with the lattice Boltzmann method. In particular, the first LBM topology optimization framework for 2D and 3D problems is developed and validated with numerical design optimization problems for drag and pressure drop minimization. To improve the parallel scalability of the LBM sensitivity analysis and permit the solution of large 2D and 3D problems, iterative solvers are studied and a parallel GMRES Schur Complement method is applied to the solution of the linear adjoint problem in the LBM sensitivity analysis. This leads to improved parallel scalability through reduced memory use and algorithmic speedup. The potential of the developed design approach for fluidic systems is illustrated with the optimization of a 3D dual-objective fixed-geometry valve. The use of a parametric level-set method coupled with the LBM material distribution based topology optimization framework is shown to provide further versatility for design applications. Finally, the use of a penalty formulation of the fluid volume constraint permits the topology optimization of flows at moderate Reynolds numbers for a steady-state pipe bend application. Concluding, this work has led to the development of
Finding fatigue resistant and lightweight designs using the optimization methods CAO and SKO
NASA Astrophysics Data System (ADS)
Mattheck, C.; Walther, Frank; Baumgartner, A.
1992-07-01
Computer Aided Optimization (CAO), a new method of shape optimization based on the computer simulation of biological growth, and Soft Kill Option (SKO), a strategy to find new design solutions with reduced weight, are presented. CAO is used to improve the design of technical components by gaining a homogenized stress distribution on the surface of two- and three-dimensional finite element models. SKO helps to define new topologies starting from a generally oversized 'design area'. With CAO it is also possible to simulate the 'adaptive growth' of biological load carriers, while SKO simulates the mineralization process of bones adapting to their loading. The main ideas of the methods are outlined and several examples of optimizations are shown. If completely new solutions for technical problems are desired, SKO is used first, and the design proposal found is then optimized by CAO in order to achieve a lightweight and fatigue-resistant design. The efficiency of the combination of the two methods as a complete layout procedure is shown.
Computerized method and system for designing an aerodynamic focusing lens stack
Gard, Eric; Riot, Vincent; Coffee, Keith; Woods, Bruce; Tobias, Herbert; Birch, Jim; Weisgraber, Todd
2011-11-22
A computerized method and system for designing an aerodynamic focusing lens stack, using input from a designer related to, for example, the particle size range to be considered, the characteristics of the gas to be flowed through the system, the upstream temperature and pressure at the top of the first focusing lens, the flow rate through the aerodynamic focusing lens stack (equivalent at atmospheric pressure), and a Stokes number range. Based on the design parameters, the method and system determine the total number of focusing lenses and their respective orifice diameters required to focus the particle size range to be considered, by first solving the Stokes formula for the orifice diameter of the first focusing lens, and then using that value to determine, in iterative fashion, intermediate flow values which are themselves used to determine the orifice diameters of each succeeding focusing lens in the stack design, with the results being output to the designer. In addition, the Reynolds numbers associated with each focusing lens, as well as the exit nozzle size, may also be determined to enhance the stack design.
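The first step of such a sizing chain can be sketched by solving a standard Stokes-number relation for the orifice diameter. This is an illustrative simplification: the slip correction is held fixed here, whereas the patented method iterates with pressure-dependent values down the stack, and all input numbers below are assumed:

```python
import math

def orifice_diameter(dp, rho_p, Q, mu, St_target, Cc=1.0):
    """Solve the Stokes formula St = rho_p*dp^2*Cc*U / (9*mu*d), with orifice
    velocity U = 4*Q/(pi*d^2), for the orifice diameter d.
    Illustrative only: Cc is treated as constant."""
    return (4 * rho_p * dp**2 * Cc * Q / (9 * math.pi * mu * St_target)) ** (1 / 3)

# Assumed case: 1 um unit-density particle, 0.1 L/min flow, air viscosity
d = orifice_diameter(dp=1e-6, rho_p=1000.0, Q=0.1e-3 / 60, mu=1.8e-5, St_target=1.0)
print(f"{d * 1e3:.2f} mm")  # prints "0.24 mm"
```

Each succeeding lens would repeat this with the downstream pressure and flow conditions, which is the iteration the abstract describes.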
ERIC Educational Resources Information Center
Lu, Hui-Ping; Chen, Jun-Hong; Lee, Chang-Franw
2016-01-01
Inspiration is the primary element of good design. Designers, however, also risk not being able to find inspiration. Novice designers commonly find themselves to be depressed during the conceptual design phase when they fail to find inspiration and the information to be creative. Accordingly, under the graphic design parameter, we have developed…
Debrus, Benjamin; Guillarme, Davy; Rudaz, Serge
2013-10-01
A complete strategy dedicated to quality-by-design (QbD) compliant method development, using design of experiments (DOE), multiple linear regression response modelling and Monte Carlo simulations for error propagation, was evaluated for liquid chromatography (LC). The proposed approach includes four main steps: (i) the initial screening of column chemistry, mobile phase pH and organic modifier; (ii) the selectivity optimization through changes in gradient time and mobile phase temperature; (iii) the adaptation of column geometry to reach sufficient resolution; and (iv) the robust resolution optimization and identification of the method design space. This procedure was employed to obtain a complex chromatographic separation of 15 widely prescribed basic antipsychotic drugs. To fully automate and expedite the QbD method development procedure, short columns packed with sub-2 μm particles were employed, together with a UHPLC system equipped with column- and solvent-selection valves. Through this example, the possibilities of the proposed QbD method development workflow were demonstrated and the different steps of the automated strategy were critically discussed. A baseline separation of the mixture of antipsychotic drugs was achieved with an analysis time of less than 15 min, and the robustness of the method was demonstrated simultaneously with the method development phase.
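The DOE / regression / Monte Carlo chain can be sketched in a few lines. All factor levels and responses below are invented, and the 0.02 standard error assumed on the fitted coefficients is likewise an assumption, not a value from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented full-factorial DOE: gradient time (min), mobile phase temperature (C)
tg = np.array([10.0, 10.0, 20.0, 20.0, 15.0])
T = np.array([25.0, 40.0, 25.0, 40.0, 32.5])
Rs = np.array([1.2, 1.5, 1.9, 2.4, 1.8])        # invented resolution responses

# Multiple linear regression: Rs ~ b0 + b1*tg + b2*T
X = np.column_stack([np.ones_like(tg), tg, T])
beta, *_ = np.linalg.lstsq(X, Rs, rcond=None)

# Monte Carlo error propagation at a candidate operating point (tg=18, T=35),
# assuming an (invented) 0.02 standard error on each fitted coefficient
x0 = np.array([1.0, 18.0, 35.0])
draws = x0 @ (beta[:, None] + 0.02 * rng.standard_normal((3, 5000)))
nominal = float(x0 @ beta)
prob_ok = float(np.mean(draws > 1.5))           # P(Rs exceeds a 1.5 target)
print(round(nominal, 2), prob_ok > 0.7)
```

Mapping the probability of meeting the resolution target over the whole factor grid is what delineates the design space in step (iv).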
New design method based on sagittal flat-field equipment of Offner type imaging spectrometer
NASA Astrophysics Data System (ADS)
Ji, Yiqun; Xue, Rudong; Xu, Li; Shi, Rongbao; He, Hucheng; Shen, Weimin
2011-11-01
Based on wave aberration theory, a new optical design method for the plane-symmetric Offner type imaging spectrometer is presented. The variation of astigmatism with the diffraction angle of the grating and the meridional and sagittal focusing characteristics are studied. The determination of the initial configurations and the optimal design methods for two improved types of Offner imaging spectrometer are discussed in detail. A design example with a numerical aperture larger than 0.2 and an entrance slit of 30 mm is given. Its spectral resolution is better than 2 nm and its MTF is above 0.7@20 lp/mm. The smile and keystone are less than 3% and 0.2% of the pixel, respectively.
Nie, Yunfeng; Mohedano, Rubén; Benítez, Pablo; Chaves, Julio; Miñano, Juan C; Thienpont, Hugo; Duerr, Fabian
2016-05-10
In this work, we present a multifield direct design method for ultrashort throw ratio projection optics. The multifield design method allows us to directly calculate two freeform mirror profiles, which are fitted by odd polynomials and imported into an optical design program as an excellent starting point. Afterward, these two mirrors are represented by XY polynomial freeform surfaces for further optimization. The final configuration consists of an off-the-shelf projection lens and two XY polynomial freeform mirrors to greatly shorten the regular projection distance from 2 m to 48 cm for a 78.3 inch diagonal screen. The values of the modulation transfer function for the optimized freeform mirror system are improved to over 0.6 at 0.5 lp/mm, in comparison with its rotationally symmetric counterpart's 0.43, and the final distortion is less than 1.5%, showing a very good and well-tailored imaging performance over the entire field of view.
Elevated Temperature Primary Load Design Method Using Pseudo Elastic-Perfectly Plastic Model
Carter, Peter; Sham, Sam; Jetter, Robert I
2012-01-01
A new primary load design method for elevated temperature service has been developed. Codification of the procedure in an ASME Boiler and Pressure Vessel Code, Section III Code Case is being pursued. The proposed primary load design method is intended to provide the same margins on creep rupture, yielding and creep deformation for a component or structure that are implicit in the allowable stress data. It provides a methodology that does not require stress classification and is also applicable to a full range of temperature above and below the creep regime. Use of elastic-perfectly plastic analysis based on allowable stress with corrections for constraint, steady state stress and creep ductility is described. This approach is intended to ensure that traditional primary stresses are the basis for design, taking into account ductility limits to stress re-distribution and multiaxial rupture criteria.
A row-by-row off-design performance calculation method for turbines
NASA Technical Reports Server (NTRS)
Schobeiri, T.; Abouelkheir, M.
1991-01-01
The turbine component of a gas turbine engine is frequently subjected to extreme operation conditions associated with significant changes in mass flow, turbine inlet temperature, pressure and rotational speed. These off-design operation conditions significantly affect the flow deflection within the turbine stage, which consists of individual stator and rotor rows. As a result, the stage parameters representing the velocity diagram will change and affect the efficiency and performance of the stage and, thus, the turbine. A row-by-row calculation method is presented for predicting the performance behavior of turbines under extreme off-design conditions. The method is applied to a multistage turbine for which the off-design performance is calculated and compared with the measurement.
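In its simplest quasi-one-dimensional form, one rotor row of such a row-by-row calculation reduces to Euler's turbine equation applied to the velocity triangle. The angles and speeds below are illustrative, not from the paper:

```python
import math

def stage_work(U, c1, alpha1, beta2):
    """Specific work of one rotor row from Euler's turbine equation, using a
    simple axial-stage velocity triangle: c1 and alpha1 are the stator exit
    speed and swirl angle, beta2 the rotor exit relative flow angle; the axial
    velocity is assumed constant through the row. Illustrative numbers only."""
    cu1 = c1 * math.sin(math.radians(alpha1))   # inlet swirl (absolute)
    cx = c1 * math.cos(math.radians(alpha1))    # axial component
    wu2 = cx * math.tan(math.radians(beta2))    # exit swirl (relative)
    cu2 = wu2 - U                               # exit swirl (absolute, counter-rotating)
    return U * (cu1 + cu2)                      # specific work, J/kg

w = stage_work(U=300.0, c1=500.0, alpha1=70.0, beta2=60.0)
print(round(w / 1000.0, 1), "kJ/kg")  # prints "139.8 kJ/kg"
```

Off-design operation changes c1, alpha1 and U, so repeating this evaluation row by row is what lets the stage work and efficiency track the changing velocity diagram.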
A Software Engineering Method for the Design of Mixed Reality Systems
NASA Astrophysics Data System (ADS)
Dupuy-Chessa, S.; Godet-Bar, G.; Pérez-Medina, J.-L.; Rieu, D.; Juras, D.
The domain of mixed reality systems is currently making decisive advances on a daily basis. However, the knowledge and know-how of HCI scientists and interaction engineers, used in the design of such systems, are not well understood. This chapter addresses this issue by proposing a software engineering method that couples a process for designing mixed reality interaction with a process for developing the functional core. Our development method features a Y-shaped development cycle that separates the description of functional requirements and their analysis from the study of technical requirements of the application. These sub-processes produce Business Objects and Interactional Objects, which are connected to produce a complete mixed reality system. The whole process is presented via a case study, with a particular emphasis on the design of the interactive solution.
Detecting Low Incidents Effects: The Value of Mixed Methods Research Designs in Low-N Studies
ERIC Educational Resources Information Center
Newman, Isadore; Ridenour, Carolyn S.; Newman, Carole; Smith, Shannon; Brown, Russell C.
2013-01-01
Many important educational situations such as traumatic brain injury among preschoolers, school gun violence, preadolescent eating disorders, and adolescent suicide happen relatively infrequently. In this article, the authors explain why mixed methods research designs offer more meaningful empirical results than do qualitative or quantitative…
Storyboarding: A Method for Bootstrapping the Design of Computer-Based Educational Tasks
ERIC Educational Resources Information Center
Jones, Ian
2008-01-01
There has been a recent call for the use of more systematic thought experiments when investigating learning. This paper presents a storyboarding method for capturing and sharing initial ideas and their evolution in the design of a mathematics learning task. The storyboards produced can be considered as "virtual data" created by thought experiments…
ERIC Educational Resources Information Center
Grammatikopoulos, Vasilis; Zachopoulou, Evridiki; Tsangaridou, Niki; Liukkonen, Jarmo; Pickup, Ian
2008-01-01
The body of research relating to assessment in education suggests that professional developers and seminar administrators have generally paid little attention to evaluation procedures. Scholars have also been critical of evaluations which use a single data source and have favoured the use of a multiple method design to generate a complete picture…
Scaffolded Instruction Improves Student Understanding of the Scientific Method & Experimental Design
ERIC Educational Resources Information Center
D'Costa, Allison R.; Schlueter, Mark A.
2013-01-01
Implementation of a guided-inquiry lab in introductory biology classes, along with scaffolded instruction, improved students' understanding of the scientific method, their ability to design an experiment, and their identification of experimental variables. Pre- and postassessments from experimental versus control sections over three semesters…
Taguchi statistical design and analysis of cleaning methods for spacecraft materials
NASA Technical Reports Server (NTRS)
Lin, Y.; Chung, S.; Kazarians, G. A.; Blosiu, J. O.; Beaudet, R. A.; Quigley, M. S.; Kern, R. G.
2003-01-01
In this study, we extensively tested various cleaning protocols. The variant parameters included the type and concentration of solvent, the type of wipe, pretreatment conditions, and various rinsing systems. The Taguchi statistical method was used to design and evaluate the various cleaning conditions on ten common spacecraft materials.
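The Taguchi workflow of orthogonal array, signal-to-noise ratio, and per-level averages can be sketched as follows, using an invented L4 array and invented cleanliness scores (a real ten-material, multi-factor study would use a larger array):

```python
import math

# L4(2^3) orthogonal array: three two-level factors covered in four runs
L4 = [(1, 1, 1), (1, 2, 2), (2, 1, 2), (2, 2, 1)]

# Invented cleanliness scores (two replicates per run; larger is better)
results = {0: [82, 85], 1: [88, 90], 2: [75, 78], 3: [93, 91]}

def sn_larger_better(ys):
    """Taguchi signal-to-noise ratio, larger-the-better form (dB)."""
    return -10 * math.log10(sum(1 / y**2 for y in ys) / len(ys))

sn = [sn_larger_better(results[i]) for i in range(4)]

# Average S/N per level of factor A (column 0) to estimate its effect
levelA = {lv: sum(sn[i] for i in range(4) if L4[i][0] == lv) / 2 for lv in (1, 2)}
print(round(levelA[2] - levelA[1], 2), "dB")  # factor A effect on S/N
```

Ranking these per-factor effects is how the Taguchi analysis identifies which cleaning parameters dominate without testing every combination.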
ERIC Educational Resources Information Center
Lee, Jang Ho
2012-01-01
Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…
ERIC Educational Resources Information Center
Williamson, Ben
2015-01-01
Policy innovation labs are emerging knowledge actors and technical experts in the governing of education. The article offers a historical and conceptual account of the organisational form of the policy innovation lab. Policy innovation labs are characterised by specific methods and techniques of design, data science, and digitisation in public…
A Controlled Evaluation of a High School Biomedical Pipeline Program: Design and Methods
ERIC Educational Resources Information Center
Winkleby, Marilyn A.; Ned, Judith; Ahn, David; Koehler, Alana; Fagliano, Kathleen; Crump, Casey
2014-01-01
Given limited funding for school-based science education, non-school-based programs have been developed at colleges and universities to increase the number of students entering science- and health-related careers and address critical workforce needs. However, few evaluations of such programs have been conducted. We report the design and methods of…
Free-form surface design method for a collimator TIR lens.
Tsai, Chung-Yu
2016-04-01
A free-form (FF) surface design method is proposed for a general axial-symmetrical collimator system consisting of a light source and a total internal reflection lens with two coupled FF boundary surfaces. The profiles of the boundary surfaces are designed using a FF surface construction method such that each incident ray is directed (refracted and reflected) in such a way as to form a specified image pattern on the target plane. The light ray paths within the system are analyzed using an exact analytical model and a skew-ray tracing approach. In addition, the validity of the proposed FF design method is demonstrated by means of ZEMAX simulations. It is shown that the illumination distribution formed on the target plane is in good agreement with that specified by the user. The proposed surface construction method is mathematically straightforward and easily implemented in computer code. As such, it provides a useful tool for the design and analysis of general axial-symmetrical optical systems.
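The ray direction at each boundary surface in such a skew-ray trace rests on the vector form of Snell's law. A minimal sketch, where the lens index 1.49 is an assumed PMMA-like value rather than a figure from the paper:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law: d is the incident unit vector, n the unit
    surface normal (pointing against d), n1/n2 the refractive indices.
    Returns the refracted unit vector, or None on total internal reflection."""
    r = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = r**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None                       # total internal reflection
    return r * d + (r * cos_i - np.sqrt(1.0 - sin2_t)) * n

# 30 degrees incidence from inside an assumed PMMA-like lens (n ~ 1.49) to air
d = np.array([np.sin(np.radians(30.0)), -np.cos(np.radians(30.0)), 0.0])
t = refract(d, np.array([0.0, 1.0, 0.0]), 1.49, 1.0)
print(round(float(np.degrees(np.arcsin(t[0]))), 1))  # refraction angle: 48.2
```

The total-internal-reflection branch (returning None here) is exactly the regime the TIR lens exploits on its reflective surface.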
Application of a Novel Collaboration Engineering Method for Learning Design: A Case Study
ERIC Educational Resources Information Center
Cheng, Xusen; Li, Yuanyuan; Sun, Jianshan; Huang, Jianqing
2016-01-01
Collaborative case studies and computer-supported collaborative learning (CSCL) play an important role in the modern education environment. A number of researchers have given significant attention to learning design in order to improve the satisfaction of collaborative learning. Although collaboration engineering (CE) is a mature method widely…
Applying Item Response Theory Methods to Design a Learning Progression-Based Science Assessment
ERIC Educational Resources Information Center
Chen, Jing
2012-01-01
Learning progressions are used to describe how students' understanding of a topic progresses over time and to classify the progress of students into steps or levels. This study applies Item Response Theory (IRT) based methods to investigate how to design learning progression-based science assessments. The research questions of this study are: (1)…
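IRT-based assessment design of this kind typically models each item with a logistic response curve. A minimal two-parameter-logistic sketch with invented item parameters and ability value (the study's actual models and data are not reproduced here):

```python
import math

def p_correct(theta, a, b):
    """Two-parameter logistic (2PL) IRT model: probability that a student of
    ability theta answers an item of discrimination a, difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Invented items anchored at successive learning-progression levels
items = [(1.2, -1.0), (1.0, 0.0), (1.5, 1.0)]   # (a, b) per level
theta = 0.4                                      # invented student ability
probs = [round(p_correct(theta, a, b), 2) for a, b in items]
print(probs)  # success probability drops as the item's level rises
```

Ordering item difficulties along the hypothesized levels, as above, is what lets an estimated theta be mapped back to a learning-progression step.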
On the SCTC-OCTC Method for the Analysis and Design of Circuits
ERIC Educational Resources Information Center
Salvatori, S.; Conte, G.
2009-01-01
This paper discusses guidelines that emphasize the relevance of short-circuit- and open-circuit-time constant (SCTC and OCTC, respectively) methods in the analysis and design of electronic amplifiers. It is demonstrated that it is only necessary to grasp a few concepts in order to understand that the two short- and open-circuit cases fall into a…
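The SCTC and OCTC estimates themselves are one-line sums over time constants. A sketch with assumed time constants for a hypothetical two-pole amplifier (not a circuit from the paper):

```python
import math

def f_high(open_circuit_taus):
    """OCTC estimate of the upper -3 dB frequency: f_H ~ 1/(2*pi*sum(tau))."""
    return 1.0 / (2 * math.pi * sum(open_circuit_taus))

def f_low(short_circuit_taus):
    """SCTC estimate of the lower -3 dB frequency: f_L ~ sum(1/tau)/(2*pi)."""
    return sum(1.0 / t for t in short_circuit_taus) / (2 * math.pi)

# Hypothetical amplifier: two high-frequency poles, two coupling capacitors
print(round(f_high([2e-9, 0.5e-9]) / 1e6, 1), "MHz")  # upper cutoff estimate
print(round(f_low([10e-3, 2e-3]), 1), "Hz")           # lower cutoff estimate
```

Each tau is found by inspecting one capacitor at a time with the others open- or short-circuited, which is why the method avoids solving the full transfer function.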
A Method for User Centering Systematic Product Development Aimed at Industrial Design Students
ERIC Educational Resources Information Center
Coelho, Denis A.
2010-01-01
Instead of limiting the introduction and stimulus for new concept creation to lists of specifications, industrial design students seem to prefer to be encouraged by ideas in context. A new method that specifically tackles human activity to foster the creation of user centered concepts of new products was developed and is presented in this article.…
A bibliography on formal methods for system specification, design and validation
NASA Technical Reports Server (NTRS)
Meyer, J. F.; Furchtgott, D. G.; Movaghar, A.
1982-01-01
Literature on the specification, design, verification, testing, and evaluation of avionics systems was surveyed, providing 655 citations. Journal papers, conference papers, and technical reports are included. Manual and computer-based methods were employed. Keywords used in the online search are listed.
ERIC Educational Resources Information Center
He, Yong
2013-01-01
Common test items play an important role in equating multiple test forms under the common-item nonequivalent groups design. Inconsistent item parameter estimates among common items can lead to large bias in equated scores for IRT true score equating. Current methods extensively focus on detection and elimination of outlying common items, which…
ERIC Educational Resources Information Center
McIver, Derrick; Fitzsimmons, Stacey; Flanagan, David
2016-01-01
Decisions about instructional methods are becoming more complex, with options ranging from problem sets to experiential service-learning projects. However, instructors not trained in instructional design may make these important decisions based on convenience, comfort, or trends. Instead, this article draws on the knowledge management literature…
Two-dimensional cylindrical thermal cloak designed by implicit transformation method
NASA Astrophysics Data System (ADS)
Yuan, Xuebo; Lin, Guochang; Wang, Youshan
2016-07-01
As a new type of heat-management technology, thermal metamaterials have attracted increasing attention recently, and the thermal cloak is a typical case. The thermal conductivity of a thermal cloak designed by the coordinate transformation method is usually characterized by inhomogeneity, anisotropy and local singularity. The explicit transformation method, which is commonly used to design thermal cloaks with the coordinate transformation known in advance, has insufficient flexibility, making it hard to proactively reduce the difficulty of device fabrication. In this work, we designed the thermal conductivity of a two-dimensional (2D) cylindrical thermal cloak using the implicit transformation method, without knowledge of the coordinate transformation in advance. With two classes of generation functions taken into consideration, this study adopted full-wave simulations to analyze the cloaking performance of the designed thermal cloaks. Material distributions and simulation results showed that the implicit transformation method has high flexibility. The form of the coordinate transformation influences not only the homogeneity and anisotropy but also the thermal cloaking performance directly. An improved layered structure for the 2D cylindrical thermal cloak was put forward based on the generation function g(r) = r^15, which reduces the number of constituent materials while guaranteeing good cloaking performance. This work provides beneficial guidance for reducing the fabrication difficulty of thermal cloaks.
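For contrast with the implicit approach, the familiar explicit linear transform gives closed-form cloak conductivities. A sketch of that textbook reference case in normalized units (this is the standard 2D result, not the paper's implicit design):

```python
def cloak_conductivity(rp, R1, k0=1.0):
    """Radial/tangential conductivity of the textbook 2D cylindrical cloak
    from the explicit linear map r' = R1 + r*(R2 - R1)/R2; in 2D the outer
    radius R2 drops out of these components. Valid for R1 < rp <= R2."""
    kr = k0 * (rp - R1) / rp          # radial component
    kt = k0 * rp / (rp - R1)          # tangential component
    return kr, kt

R1 = 1.0
for rp in (1.1, 1.5, 2.0):
    kr, kt = cloak_conductivity(rp, R1)
    print(round(kr, 3), round(kt, 3))  # anisotropy diverges at the inner wall
```

The singular, strongly anisotropic profile near r' = R1 visible in this output is exactly the fabrication difficulty the implicit method and the layered structure aim to mitigate.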
The Sequential Order of Procedural Instructions: Some Formal Methods for Designers of Flow Charts.
ERIC Educational Resources Information Center
Jansen, Carel J. M.; Steehouder, Michael F.
1996-01-01
States that document designers presenting procedural instructions can choose several formats: prose, table, logical tree, or flow chart. Suggests that instructions should allow users to reach the outcome without losing time. Discusses two formal methods that help determine which order is most efficient--that based on the selection principle, or…
A Mixed-Methods Social Networks Study Design for Research on Transnational Families
ERIC Educational Resources Information Center
Bernardi, Laura
2011-01-01
This paper advocates the adoption of a mixed-methods research design to describe and analyze ego-centered social networks in transnational family research. Drawing on the experience of the "Social Networks Influences on Family Formation" project (2004-2005; see Bernardi, Keim, & von der Lippe, 2007a, 2007b), I show how the combined use of network…
Methods to actively modify the dynamic response of cm-scale FWMAV designs
NASA Astrophysics Data System (ADS)
Peters, H. J.; Goosen, J. F. L.; van Keulen, F.
2016-05-01
Lightweight vibrating structures (such as flapping wing micro air vehicle (FWMAV) designs) often require some form of control. To achieve controllability, local structural property changes (e.g., damping and stiffness changes) might be induced in an active manner. The stroke-averaged lift force production of a FWMAV wing can be modified by changing the structural properties of that wing at carefully selected places (e.g., changing the properties of the elastic hinge at the wing root, as studied in this work). To actively change the structural properties, we investigate three different methods, based on: (1) piezoelectric polymers, (2) electrorheological fluids, and (3) electrostatic softening. This work aims to provide simple yet insightful ways to determine the potential of these methods without focusing on precise modeling. Analytical models of FWMAV wing designs that include control approaches based on these three methods are used to calculate the achievable lift force modifications after activating the methods. The lift force production resulting from a wing flapping motion is determined using a quasi-steady aerodynamic model. Both piezoelectric polymers and electrostatic softening are found to be promising for changing the structural properties and, hence, the lift force production of FWMAV wings. For the control of lightweight FWMAV designs, numerical simulations reveal a promising roll maneuverability due to the induced lift force difference between a pair of opposite wings. Although applied to a specific FWMAV design, this work is relevant for the control of small, lightweight, possibly compliant, vibrating structures in general.
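For a harmonic stroke, the quasi-steady stroke-averaged lift underlying such comparisons reduces to averaging the squared wing speed over a period. A crude sketch with invented wing parameters, not the paper's aerodynamic model:

```python
def mean_lift(rho, CL, area, U0):
    """Stroke-averaged quasi-steady lift for a harmonic flapping speed
    U(t) = U0*cos(2*pi*f*t): since <U^2> = U0^2/2, the average of
    L = 0.5*rho*CL*A*U^2 is 0.25*rho*CL*A*U0^2. Crude sketch; the invented
    parameters below are not from the paper."""
    return 0.25 * rho * CL * area * U0**2

L = mean_lift(rho=1.225, CL=1.8, area=6e-4, U0=5.0)
print(round(L * 1000, 2), "mN")  # per-wing stroke-averaged lift: 8.27 mN
```

A hinge-stiffness change alters the effective CL and wing speed, and the resulting lift difference between opposite wings is what produces the roll moment discussed above.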
Designing an experiment to measure cellular interaction forces
NASA Astrophysics Data System (ADS)
McAlinden, Niall; Glass, David G.; Millington, Owain R.; Wright, Amanda J.
2013-09-01
Optical trapping is a powerful tool in Life Science research and is becoming commonplace in many microscopy laboratories and facilities. The force applied by the laser beam on the trapped object can be accurately determined, allowing any external forces acting on the trapped object to be deduced. We aim to design a series of experiments that use an optical trap to measure and quantify the interaction force between immune cells. In order to cause minimum perturbation to the sample, we plan to directly trap T cells and remove the need to introduce exogenous beads into the sample. This poses a series of challenges and raises questions that need to be answered in order to design a set of effective end-point experiments. A typical cell is large compared to the beads normally trapped, and highly non-uniform - can we reliably trap such objects and prevent them from rolling and re-orientating? In this paper we show how a spatial light modulator can produce a triple-spot trap, as opposed to a single-spot trap, giving complete control over the object's orientation and preventing it from rolling due, for example, to Brownian motion. To use an optical trap as a force transducer to measure an external force, you must first have a reliably calibrated system. The optical trapping force is typically measured either by applying the theory of equipartition to the observed Brownian motion of the trapped object, or by using an escape force method, e.g. the viscous drag force method. In this paper we examine the relationship between force and displacement, as well as measuring the maximum displacement from the equilibrium position before an object falls out of the trap, hence determining the conditions under which the different calibration methods should be applied.
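The equipartition calibration the abstract refers to is standard: in a harmonic trap, ½k⟨x²⟩ = ½k_BT, so the trap stiffness follows directly from the variance of the tracked position. A minimal sketch (the function name and defaults are illustrative, not from the paper):

```python
import statistics

K_B = 1.380649e-23  # Boltzmann constant, J/K

def trap_stiffness(positions_m, temperature_k=295.0):
    """Equipartition estimate of optical-trap stiffness (N/m).

    In a harmonic trap, 0.5*k*<x^2> = 0.5*k_B*T, so
    k = k_B*T / var(x), where x is the displacement (in metres)
    of the trapped object from its mean position.
    """
    var = statistics.pvariance(positions_m)  # <x^2> about the mean
    return K_B * temperature_k / var
```

Note that this method assumes a harmonic potential and a well-resolved position signal; for large, non-uniform objects such as cells, where the trap may be anharmonic, the escape-force (viscous drag) method discussed in the paper is the usual cross-check.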
Aliabadi, Amir A.; Rogak, Steven N.; Bartlett, Karen H.; Green, Sheldon I.
2011-01-01
Health care facility ventilation design greatly affects disease transmission by aerosols. The desire to control infection in hospitals and at the same time to reduce their carbon footprint motivates the use of unconventional solutions for building design and associated control measures. This paper considers indoor sources and types of infectious aerosols, and pathogen viability and infectivity behaviors in response to environmental conditions. Aerosol dispersion, heat and mass transfer, deposition in the respiratory tract, and infection mechanisms are discussed, with an emphasis on experimental and modeling approaches. Key building design parameters are described that include types of ventilation systems (mixing, displacement, natural and hybrid), air exchange rate, temperature and relative humidity, air flow distribution structure, occupancy, engineered disinfection of air (filtration and UV radiation), and architectural programming (source and activity management) for health care facilities. The paper describes major findings and suggests future research needs in methods for ventilation design of health care facilities to prevent airborne infection risk. PMID:22162813
A Simple and Reliable Method of Design for Standalone Photovoltaic Systems
NASA Astrophysics Data System (ADS)
Srinivasarao, Mantri; Sudha, K. Rama; Bhanu, C. V. K.
2016-08-01
Standalone photovoltaic (SAPV) systems are seen as a promising way of electrifying areas of the developing world that lack power grid infrastructure. Proliferation of these systems requires a design procedure that is simple, reliable and exhibits good performance over the system's lifetime. The proposed methodology uses simple empirical formulae and easily available parameters to design SAPV systems, that is, array size with energy storage. After arriving at the different array sizes (areas), performance curves are obtained for optimal design of the SAPV system with a high degree of reliability in terms of autonomy at a specified value of loss of load probability (LOLP). Based on the array to load ratio (ALR) and levelized energy cost (LEC) through life cycle cost (LCC) analysis, it is shown that the proposed methodology gives better performance, requires simple data and is more reliable when compared with a conventional design using monthly average daily load and insolation.
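The kind of simple empirical sizing the abstract describes can be sketched as a first-pass energy balance. This is a generic rule-of-thumb calculation, not the authors' specific formulae; the default efficiency, derating, autonomy and depth-of-discharge values are assumptions for illustration:

```python
def size_sapv(daily_load_kwh, insolation_kwh_m2_day, module_eff=0.15,
              system_derate=0.8, autonomy_days=3.0, dod=0.7,
              battery_volt=48.0):
    """First-pass SAPV sizing from rule-of-thumb energy balance.

    Array area: daily load divided by (peak-sun-hour insolation x
    module efficiency x system derating for wiring/soiling losses).
    Battery capacity (Ah): load energy for the desired days of
    autonomy, limited by the allowable depth of discharge.
    """
    area_m2 = daily_load_kwh / (insolation_kwh_m2_day
                                * module_eff * system_derate)
    battery_ah = (daily_load_kwh * 1000.0 * autonomy_days
                  / (dod * battery_volt))
    return area_m2, battery_ah
```

A design study of the kind described would then sweep the array area (equivalently the ALR) and evaluate LOLP and levelized cost for each candidate, rather than stopping at this single deterministic point.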
Full potential methods for analysis/design of complex aerospace configurations
NASA Technical Reports Server (NTRS)
Shankar, Vijaya; Szema, Kuo-Yen; Bonner, Ellwood
1986-01-01
The steady form of the full potential equation, in conservative form, is employed to analyze and design a wide variety of complex aerodynamic shapes. The nonlinear method is based on the theory of characteristic signal propagation coupled with novel flux biasing concepts and body-fitted mapping procedures. The resulting codes are vectorized for the CRAY XMP and the VPS-32 supercomputers. Use of the full potential nonlinear theory is demonstrated for a single-point supersonic wing design and a multipoint design for transonic maneuver/supersonic cruise/maneuver conditions. Achievement of high aerodynamic efficiency through numerical design is verified by wind tunnel tests. Other studies reported include analyses of a canard/wing/nacelle fighter geometry.
Improvements in surface singularity analysis and design methods. [applicable to airfoils
NASA Technical Reports Server (NTRS)
Bristow, D. R.
1979-01-01
The coupling of the combined source vortex distribution of Green's potential flow function with contemporary numerical techniques is shown to provide accurate, efficient, and stable solutions to subsonic inviscid analysis and design problems for multi-element airfoils. The analysis problem is solved by direct calculation of the surface singularity distribution required to satisfy the flow tangency boundary condition. The design or inverse problem is solved by an iteration process. In this process, the geometry and the associated pressure distribution are iterated until the pressure distribution most nearly corresponding to the prescribed design distribution is obtained. Typically, five iteration cycles are required for convergence. A description of the analysis and design method is presented, along with supporting examples.
Design and Optimization Method of a Two-Disk Rotor System
NASA Astrophysics Data System (ADS)
Huang, Jingjing; Zheng, Longxi; Mei, Qing
2016-04-01
An integrated analytical method based on the multidisciplinary optimization software Isight and the general finite element software ANSYS was proposed in this paper. Firstly, a two-disk rotor system was established, and the modes, harmonic response and transient response under acceleration were analyzed with ANSYS, yielding the dynamic characteristics of the two-disk rotor system. On this basis, the two-disk rotor model was integrated into the multidisciplinary design optimization software Isight. According to the design of experiments (DOE) and the dynamic characteristics, the optimization variables, optimization objectives and constraints were confirmed. After that, the multi-objective design optimization of the transient process was carried out with three different global optimization algorithms: Evolutionary Optimization Algorithm, Multi-Island Genetic Algorithm and Pointer Automatic Optimizer. The optimum position of the two-disk rotor system was obtained under the specified constraints, and the accuracy and number of evaluations of the different optimization algorithms were compared. The optimization results indicated that the rotor vibration reached its minimum value and that the design efficiency and quality were improved by multidisciplinary design optimization while meeting the design requirements, providing a reference for improving the design efficiency and reliability of aero-engine rotors.
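The modal analysis step in such a workflow can be illustrated with a drastically simplified stand-in: a lumped two-degree-of-freedom model of two disks on a flexible shaft, whose undamped natural frequencies follow from det(K - ω²M) = 0. This is a sketch of the underlying physics only, not the authors' ANSYS model; the spring idealisation and all symbols are assumptions:

```python
import math

def two_disk_natural_freqs(m1, m2, k1, k2, k3):
    """Undamped natural frequencies (rad/s) of a lumped two-disk rotor.

    Disk masses m1, m2 on a shaft idealised as three lateral springs,
    k1 -- m1 -- k2 -- m2 -- k3, with both shaft ends pinned.
    Solves det(K - w^2 M) = 0 for K = [[k1+k2, -k2], [-k2, k2+k3]]
    and M = diag(m1, m2), i.e. a quadratic in w^2.
    """
    a = m1 * m2
    b = -(m1 * (k2 + k3) + m2 * (k1 + k2))
    c = (k1 + k2) * (k2 + k3) - k2 ** 2
    disc = math.sqrt(b * b - 4.0 * a * c)
    w1_sq = (-b - disc) / (2.0 * a)  # lower (in-phase) mode
    w2_sq = (-b + disc) / (2.0 * a)  # higher (out-of-phase) mode
    return math.sqrt(w1_sq), math.sqrt(w2_sq)
```

Moving a disk along the shaft changes the effective spring constants and hence shifts these frequencies, which is exactly the lever the optimization exploits when it searches for disk positions that keep critical speeds away from the running-speed range.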