Variable aperture-based ptychographical iterative engine method.
Sun, Aihui; Kong, Yan; Meng, Xin; He, Xiaoliang; Du, Ruijun; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-02-01
A variable aperture-based ptychographical iterative engine (vaPIE) is demonstrated both numerically and experimentally to reconstruct the sample phase and amplitude rapidly. By adjusting the size of a tiny aperture under the illumination of a parallel light beam to change the illumination on the sample step by step and recording the corresponding diffraction patterns sequentially, both the sample phase and amplitude can be faithfully reconstructed with a modified ptychographical iterative engine (PIE) algorithm. Since far fewer diffraction patterns are required than in common PIE, and the shape, size, and position of the aperture need not be known exactly, the proposed vaPIE method remarkably reduces the data acquisition time and makes PIE less dependent on the mechanical accuracy of the translation stage; the technique can therefore potentially be applied in a wide range of scientific research.
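For readers unfamiliar with the PIE family, the sketch below shows the generic ePIE-style update loop that variants such as vaPIE build on; the array names, step sizes, and scan geometry are illustrative assumptions rather than the authors' code, and vaPIE replaces the lateral scan assumed here with step-by-step changes of the aperture size.

```python
import numpy as np

def epie(diff_amps, positions, probe, obj, n_iter=50, alpha=1.0, beta=1.0):
    """Generic ePIE-style loop (illustrative sketch).
    diff_amps: measured diffraction amplitudes (sqrt of intensities),
    one 2D array per scan position, same shape as probe.
    positions: (row, col) top-left corner of the probe on obj."""
    py, px = probe.shape
    for _ in range(n_iter):
        for (y, x), amp in zip(positions, diff_amps):
            view = obj[y:y + py, x:x + px].copy()   # illuminated object patch
            exit_wave = probe * view                # exit wave at the sample
            F = np.fft.fft2(exit_wave)
            F = amp * np.exp(1j * np.angle(F))      # enforce the measured modulus
            diff = np.fft.ifft2(F) - exit_wave
            # ePIE updates: each factor is corrected with the conjugate of the other
            obj[y:y + py, x:x + px] = view + alpha * np.conj(probe) * diff / np.abs(probe).max() ** 2
            probe = probe + beta * np.conj(view) * diff / np.abs(view).max() ** 2
    return obj, probe
```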
Wang, Hai-Yan; Liu, Cheng; Veetil, Suhas P; Pan, Xing-Chen; Zhu, Jian-Qiang
2014-01-27
Wavefront control is a significant parameter in inertial confinement fusion (ICF). The complex transmittance of the large optical elements often used in ICF is obtained by computing the phase difference between the illuminating and transmitted fields using the Ptychographical Iterative Engine (PIE). This can accurately and effectively measure the transmittance of large optical elements with irregular surface profiles, which are otherwise not measurable using common interferometric techniques due to the lack of a standard reference plate. Experiments are carried out with a Continuous Phase Plate (CPP) to illustrate the feasibility of this method.
Sub-aperture switching based ptychographic iterative engine (sasPIE) method for quantitative imaging
NASA Astrophysics Data System (ADS)
Sun, Aihui; Kong, Yan; Jiang, Zhilong; Yu, Wei; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-03-01
Though the ptychographic iterative engine (PIE) has been widely adopted for quantitative micro-imaging with various illuminations such as visible light, X-rays, and electron beams, mechanical inaccuracy in the raster scanning of the sample relative to the illumination always degrades the reconstruction quality seriously and keeps the achieved resolution much lower than that determined by the numerical aperture of the optical system. To overcome this disadvantage, the sub-aperture switching based PIE method is proposed: the mechanical scanning of common PIE is replaced by sub-aperture switching, and the reconstruction error related to positioning inaccuracy is completely avoided. The proposed technique remarkably improves the reconstruction quality, reduces the complexity of the experimental setup, and fundamentally accelerates both the data acquisition and the reconstruction.
NASA Astrophysics Data System (ADS)
Sun, Aihui; Tian, Xiaolin; Kong, Yan; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng
2018-01-01
As a lensfree imaging technique, the ptychographic iterative engine (PIE) method can provide quantitative sample amplitude and phase distributions free of aberration. However, it requires field-of-view (FoV) scanning that often relies on mechanical translation, which not only slows down the measurement but also introduces mechanical errors that decrease both the resolution and the accuracy of the retrieved information. In order to achieve highly accurate quantitative imaging at high speed, a digital micromirror device (DMD) is adopted in PIE for large-FoV scanning controlled by on/off coding of the DMD mirrors. Measurements were implemented on biological samples as well as a USAF resolution target, demonstrating high-resolution quantitative imaging with the proposed system. Considering its fast and accurate imaging capability, it is believed that the DMD-based PIE technique provides a potential solution for medical observation and measurement.
Hruszkewycz, Stephan O; Holt, Martin V; Tripathi, Ash; Maser, Jörg; Fuoss, Paul H
2011-06-15
We present the framework for convergent beam Bragg ptychography, and, using simulations, we demonstrate that nanocrystals can be ptychographically reconstructed from highly convergent x-ray Bragg diffraction. The ptychographic iterative engine is extended to three dimensions and shown to successfully reconstruct a simulated nanocrystal using overlapping raster scans with a defocused curved beam, the diameter of which matches the crystal size. This object reconstruction strategy can serve as the basis for coherent diffraction imaging experiments at coherent scanning nanoprobe x-ray sources.
Optical ptychographic microscopy for quantitative anisotropic phase imaging
NASA Astrophysics Data System (ADS)
Anthony, N.; Cadenazzi, G.; Nugent, K. A.; Abbey, B.
2016-12-01
Ptychography has recently been adapted for the recovery of the complete Jones matrix of an anisotropic specimen, using a vectorial form of the Ptychographic Iterative Engine (vPIE) for a set of linearly polarized probes. Here we show that this method can be applied to the recovery of the in-plane components of the elastic strain tensor in a diametrically compressed disc. The advantages and disadvantages of vPIE for the recovery of strain information from `real-world' samples are discussed, as well as the potential for this approach to be applied to the characterization of the mechanical properties of optically transparent materials.
Ultra-high speed digital micro-mirror device based ptychographic iterative engine method
Sun, Aihui; He, Xiaoliang; Kong, Yan; Cui, Haoyang; Song, Xiaojun; Xue, Liang; Wang, Shouyu; Liu, Cheng
2017-01-01
To reduce the long data acquisition time of the common mechanical-scanning-based Ptychographic Iterative Engine (PIE) technique, a digital micro-mirror device (DMD) is used to form a fast scanning illumination on the sample. Since the transverse mechanical scanning of common PIE is replaced by the on/off switching of the micro-mirrors, the data acquisition time can be reduced from more than 15 minutes to less than 20 seconds for recording 12 × 10 diffraction patterns covering the same field of 147.08 mm2. Furthermore, since the precision of a DMD fabricated with optical lithography is always better than 10 nm (versus about 1 μm for a mechanical translation stage), the time-consuming position-error-correction procedure is not required in the iterative reconstruction. These two improvements fundamentally speed up both the data acquisition and the reconstruction procedures of PIE and relax its requirements on the stability of the imaging system, thereby remarkably improving its applicability in practice. It is demonstrated experimentally with both a USAF resolution target and a biological sample that a spatial resolution of 5.52 μm and a field of view of 147.08 mm2 can be reached with the DMD-based PIE method. In short, by using the DMD to replace the translation stage, we can effectively overcome the main shortcomings of common PIE related to mechanical scanning, while keeping its advantages of both high resolution and a large field of view.
System calibration method for Fourier ptychographic microscopy.
Pan, An; Zhang, Yan; Zhao, Tianyu; Wang, Zhaojun; Dan, Dan; Lei, Ming; Yao, Baoli
2017-09-01
Fourier ptychographic microscopy (FPM) is a recently proposed computational imaging technique with both high resolution and a wide field of view. In current FPM imaging platforms, systematic error sources include aberrations, light-emitting diode (LED) intensity fluctuation, parameter imperfections, and noise, all of which may severely corrupt the reconstruction results with similar artifacts, making it difficult to identify the dominant error source from the degraded reconstructions without prior knowledge. In addition, the systematic error is generally a mixture of various error sources in real situations, which cannot be separated due to their mutual restriction and conversion. To this end, we report a system calibration procedure, termed SC-FPM, that calibrates the mixed systematic errors simultaneously from an overall perspective. It is based on the simulated annealing algorithm, an LED intensity correction method, a nonlinear regression process, and an adaptive step-size strategy, and involves the evaluation of an error metric at each iteration step, followed by re-estimation of accurate parameters. The performance achieved in both simulations and experiments demonstrates that the proposed method outperforms other state-of-the-art algorithms. The reported system calibration scheme improves the robustness of FPM, relaxes the experimental conditions, and does not require any prior knowledge, which makes FPM more practical.
Translation position determination in ptychographic coherent diffraction imaging.
Zhang, Fucai; Peterson, Isaac; Vila-Comamala, Joan; Diaz, Ana; Berenguer, Felisa; Bean, Richard; Chen, Bo; Menzel, Andreas; Robinson, Ian K; Rodenburg, John M
2013-06-03
Accurate knowledge of the translation positions is essential in ptychography to achieve good image quality and diffraction-limited resolution. We propose a method to retrieve and correct position errors during the image reconstruction iterations. Sub-pixel position accuracy after refinement is shown to be achievable within several tens of iterations. Simulation and experimental results for both optical and X-ray wavelengths are given. The method both improves the quality of the retrieved object image and relaxes the position accuracy requirement when acquiring the diffraction patterns.
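The core idea — scoring candidate positions by the mismatch between modelled and measured diffraction amplitudes — can be sketched as a one-pixel grid search; the published refinement runs inside the reconstruction iterations and differs in detail, and all names below are illustrative.

```python
import numpy as np

def refine_position(obj, probe, amp, pos, search=1):
    """Grid-search refinement of one scan position (sketch).
    amp is the measured diffraction amplitude for this position."""
    py, px = probe.shape
    best, best_err = pos, np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = pos[0] + dy, pos[1] + dx
            model = np.abs(np.fft.fft2(probe * obj[y:y + py, x:x + px]))
            err = np.sum((model - amp) ** 2)   # amplitude mismatch score
            if err < best_err:
                best, best_err = (y, x), err
    return best
```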
NASA Astrophysics Data System (ADS)
Sun, Jiasong; Zhang, Yuzhen; Chen, Qian; Zuo, Chao
2017-02-01
Fourier ptychographic microscopy (FPM) is a newly developed super-resolution technique that employs angularly varying illumination and a phase retrieval algorithm to surpass the diffraction limit of a low numerical aperture (NA) objective lens. In current FPM imaging platforms, accurate knowledge of the LED matrix position is critical to achieving good recovery quality. Furthermore, given the wide field of view (FOV) in FPM, different regions of the FOV have different sensitivity to LED positional misalignment. In this work, we introduce an iterative method to correct position errors based on the simulated annealing (SA) algorithm. To improve the efficiency of this correction process, a large number of iterations for several images with low illumination NAs are first carried out to estimate the initial values of a global positional misalignment model through nonlinear regression. Simulation and experimental results are presented to evaluate the performance of the proposed method, and it is demonstrated that this method can both improve the quality of the recovered object image and relax the requirement on LED position accuracy when aligning FPM imaging platforms.
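The following generic simulated-annealing loop illustrates the kind of global parameter search described; the cost function (which would wrap an FPM reconstruction error metric over misalignment parameters such as shift and rotation) and all step parameters are placeholder assumptions.

```python
import numpy as np

def anneal(cost, x0, scale, T0=1.0, cooling=0.95, n_iter=200, seed=0):
    """Generic simulated annealing over a parameter vector (sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    fx, T = cost(x), T0
    for _ in range(n_iter):
        cand = x + scale * T * rng.standard_normal(x.shape)   # perturb
        fc = cost(cand)
        # accept improvements always, worse moves with Boltzmann probability
        if fc < fx or rng.random() < np.exp((fx - fc) / max(T, 1e-12)):
            x, fx = cand, fc
        T *= cooling                                          # cool down
    return x, fx
```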
Parallel ptychographic reconstruction
Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; ...
2014-12-19
Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It can image extended objects at a resolution limited by the scattering strength of the object and the detector geometry, rather than by an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.
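Merging sub-reconstructions has to cope with the fact that ptychography recovers each tile only up to a constant global phase; the sketch below shows one simple way to estimate and remove that offset on the overlap before averaging (the paper's merging techniques are more elaborate, and the names here are assumptions).

```python
import numpy as np

def merge_tile(canvas, tile, sl):
    """Blend a complex sub-reconstruction `tile` into `canvas[sl]` (sketch)."""
    ref = canvas[sl]
    # global phase offset that best rotates the tile onto the reference
    phi = np.angle(np.vdot(tile, ref))
    canvas[sl] = 0.5 * (ref + tile * np.exp(1j * phi))
    return canvas
```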
A phase space model of Fourier ptychographic microscopy
Horstmeyer, Roarke; Yang, Changhuei
2014-01-01
A new computational imaging technique, termed Fourier ptychographic microscopy (FPM), uses a sequence of low-resolution images captured under varied illumination to iteratively converge upon a high-resolution complex sample estimate. Here, we propose a mathematical model of FPM that explicitly connects its operation to conventional ptychography, a common procedure applied to electron and X-ray diffractive imaging. Our mathematical framework demonstrates that under ideal illumination conditions, conventional ptychography and FPM both produce datasets that are mathematically linked by a linear transformation. We hope this finding encourages the future cross-pollination of ideas between two otherwise unconnected experimental imaging procedures. In addition, the coherence state of the illumination source used by each imaging platform is critical to successful operation, yet currently not well understood. We apply our mathematical framework to demonstrate that partial coherence uniquely alters both conventional ptychography’s and FPM’s captured data, but up to a certain threshold can still lead to accurate resolution-enhanced imaging through appropriate computational post-processing. We verify this theoretical finding through simulation and experiment.
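The duality underlying that linear link can be stated in generic notation (the symbols below are ours, not necessarily the paper's): conventional ptychography translates a localized probe P in real space by offsets R_j and records Fourier-domain intensities, while FPM effectively translates the pupil A across the object spectrum by the illumination wavevectors k_j and records real-space intensities.

```latex
I_j^{\mathrm{ptycho}}(\mathbf{k}) = \left|\mathcal{F}\!\left[P(\mathbf{r}-\mathbf{R}_j)\,O(\mathbf{r})\right]\right|^{2},
\qquad
I_j^{\mathrm{FPM}}(\mathbf{r}) = \left|\mathcal{F}^{-1}\!\left[A(\mathbf{k}-\mathbf{k}_j)\,\hat{O}(\mathbf{k})\right]\right|^{2}
```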
Microscopy illumination engineering using a low-cost liquid crystal display.
Guo, Kaikai; Bian, Zichao; Dong, Siyuan; Nanda, Pariksheet; Wang, Ying Min; Zheng, Guoan
2015-02-01
Illumination engineering is critical for obtaining high-resolution, high-quality images in microscope settings. In a typical microscope, the condenser lens provides sample illumination that is uniform and free from glare. The associated condenser diaphragm can be manually adjusted to obtain the optimal illumination numerical aperture. In this paper, we report a programmable condenser lens for active illumination control. In our prototype setup, we used a $15 liquid crystal display as a transparent spatial light modulator and placed it at the back focal plane of the condenser lens. By setting different binary patterns on the display, we can actively control the illumination and the spatial coherence of the microscope platform. We demonstrated the use of such a simple scheme for multimodal imaging, including bright-field microscopy, dark-field microscopy, phase-contrast microscopy, polarization microscopy, 3D tomographic imaging, and super-resolution Fourier ptychographic imaging. The reported illumination engineering scheme is cost-effective and compatible with most existing platforms. It enables a turnkey solution with high flexibility for researchers in various communities. From the engineering point of view, the reported illumination scheme may also provide new insights for the development of multimodal microscopy and Fourier ptychographic imaging.
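As an illustration of such binary patterns, the sketch below builds a bright-field disc and a dark-field annulus for a display placed at the condenser back focal plane; the radii (as fractions of the pupil radius) and the pattern resolution are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def condenser_mask(n=256, r_in=0.0, r_out=0.4):
    """Binary pupil-plane pattern: disc if r_in == 0, annulus otherwise."""
    y, x = np.mgrid[-1:1:1j * n, -1:1:1j * n]   # normalized pupil coordinates
    rho = np.hypot(x, y)
    return ((rho >= r_in) & (rho <= r_out)).astype(np.uint8)

bright_field = condenser_mask(r_in=0.0, r_out=0.4)   # open disc
dark_field = condenser_mask(r_in=0.5, r_out=0.9)     # annulus outside the NA
```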
An introduction to the theory of ptychographic phase retrieval methods
NASA Astrophysics Data System (ADS)
Konijnenberg, Sander
2017-12-01
An overview of several ptychographic phase retrieval methods and the theory behind them is presented. By looking into the theory behind more basic single-intensity pattern phase retrieval methods, a theoretical framework is provided for analyzing ptychographic algorithms. Extensions of ptychographic algorithms that deal with issues such as partial coherence, thick samples, or uncertainties of the probe or probe positions are also discussed. This introduction is intended for scientists and students without prior experience in the field of phase retrieval or ptychography to quickly get introduced to the theory, so that they can put the more specialized literature in context more easily.
Denoised Wigner distribution deconvolution via low-rank matrix completion
Lee, Justin; Barbastathis, George
2016-08-23
Wigner distribution deconvolution (WDD) is a decades-old method for recovering phase from intensity measurements. Although the technique offers an elegant linear solution to the quadratic phase retrieval problem, it has seen limited adoption due to its high computational/memory requirements and the fact that the technique often exhibits high noise sensitivity. Here, we propose a method for noise suppression in WDD via low-rank noisy matrix completion. Our technique exploits the redundancy of an object’s phase space to denoise its WDD reconstruction. We show in model calculations that our technique outperforms other WDD algorithms as well as modern iterative methods for phase retrieval such as ptychography. Our results suggest that a class of phase retrieval techniques relying on regularized direct inversion of ptychographic datasets (instead of iterative reconstruction techniques) can provide accurate quantitative phase information in the presence of high levels of noise.
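The workhorse of many low-rank matrix completion solvers is singular-value shrinkage; the minimal sketch below alternates data consistency with soft-thresholding of the singular values (a standard pattern, not the authors' exact algorithm; the threshold and iteration count are placeholders).

```python
import numpy as np

def svt_complete(M, mask, tau=1.0, n_iter=100):
    """Complete/denoise M (observed where mask is True) via iterated SVD shrinkage."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        X[mask] = M[mask]                        # re-impose observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold singular values
    return X
```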
Data compression strategies for ptychographic diffraction imaging
NASA Astrophysics Data System (ADS)
Loetgering, Lars; Rose, Max; Treffer, David; Vartanyants, Ivan A.; Rosenhahn, Axel; Wilhein, Thomas
2017-12-01
Ptychography is a computational imaging method for solving inverse scattering problems. To date, the high amount of redundancy present in ptychographic data sets requires computer memory that is orders of magnitude larger than the retrieved information. Here, we propose and compare data compression strategies that significantly reduce the amount of data required for wavefield inversion. Information metrics are used to measure the amount of data redundancy present in ptychographic data. Experimental results demonstrate the technique to be memory efficient and stable in the presence of systematic errors such as partial coherence and noise.
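One information metric of the kind referred to is the Shannon entropy of a diffraction pattern's intensity histogram; the sketch below computes it in bits per pixel, with an arbitrary binning choice as an assumption.

```python
import numpy as np

def shannon_entropy(pattern, n_bins=256):
    """Entropy (bits/pixel) of a diffraction pattern's intensity histogram."""
    hist, _ = np.histogram(pattern, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]                        # drop empty bins before taking logs
    return float(-np.sum(p * np.log2(p)))
```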
Procedures for cryogenic X-ray ptychographic imaging of biological samples
Yusuf, M.; Zhang, F.; Chen, B.; ...
2017-01-12
Biological sample-preparation procedures have been developed for imaging human chromosomes under cryogenic conditions. A new experimental setup, developed for imaging frozen samples using beamline I13 at Diamond Light Source, is described. This paper describes the equipment and experimental procedures as well as the authors' first ptychographic reconstructions using X-rays.
Fourier ptychographic microscopy at telecommunication wavelengths using a femtosecond laser
NASA Astrophysics Data System (ADS)
Ahmed, Ishtiaque; Alotaibi, Maged; Skinner-Ramos, Sueli; Dominguez, Daniel; Bernussi, Ayrton A.; de Peralta, Luis Grave
2017-12-01
We report the implementation of the Fourier Ptychographic Microscopy (FPM) technique, a phase retrieval technique, at telecommunication wavelengths using a low-coherence ultrafast pulsed laser source. High-quality, nearly speckle-free images were obtained with the proposed approach. We demonstrate that FPM can also be used to image periodic features through a silicon wafer.
Continuous scanning mode for ptychography
Clark, Jesse N.; Huang, Xiaojing; Harder, Ross J.; ...
2014-10-15
We outline how ptychographic imaging can be performed without the need for discrete scan positions. Through an idealized experiment, we demonstrate how a discrete-position scan regime can be replaced with a continuously scanned one with suitable modification of the reconstruction scheme based on coherent modes. The impact of this is that acquisition times can be significantly reduced, aiding ptychographic imaging with X-rays, electrons, or visible light.
Quantitative ptychographic reconstruction by applying a probe constraint
NASA Astrophysics Data System (ADS)
Reinhardt, J.; Schroer, C. G.
2018-04-01
The coherent scanning technique X-ray ptychography has become a routine tool for high-resolution imaging and nanoanalysis in various fields of research such as chemistry, biology, and materials science. Often the ptychographic reconstruction results are analysed in order to yield absolute quantitative values for the object transmission and the illuminating probe function. In this work, we address a common ambiguity encountered in scaling the object transmission and probe intensity via the application of an additional constraint in the reconstruction algorithm. A ptychographic measurement of a model sample containing nanoparticles is used as a test data set against which to benchmark the reconstruction results depending on the type of constraint used. Achieving quantitative absolute values for the reconstructed object transmission is essential for advanced investigation of samples that change over time, e.g., during in-situ experiments, and in general when different data sets are compared.
Compression and information recovery in ptychography
NASA Astrophysics Data System (ADS)
Loetgering, L.; Treffer, D.; Wilhein, T.
2018-04-01
Ptychographic coherent diffraction imaging (PCDI) is a scanning microscopy modality that allows for simultaneous recovery of object and illumination information. This ability renders PCDI a suitable technique for x-ray lensless imaging and optics characterization. Its potential for information recovery typically relies on large amounts of data redundancy. However, the field of view in ptychography is practically limited by the memory and the computational facilities available. We describe techniques that achieve robust ptychographic information recovery at high compression rates. The techniques are compared and tested with experimental data.
Bian, Liheng; Suo, Jinli; Chung, Jaebum; Ou, Xiaoze; Yang, Changhuei; Chen, Feng; Dai, Qionghai
2016-06-10
Fourier ptychographic microscopy (FPM) is a novel computational coherent imaging technique for high space-bandwidth product imaging. Mathematically, Fourier ptychographic (FP) reconstruction can be implemented as a phase retrieval optimization process, in which we only obtain low-resolution intensity images corresponding to the sub-bands of the sample's high-resolution (HR) spatial spectrum, and aim to retrieve the complex HR spectrum. In real setups, the measurements always suffer from various degradations such as Gaussian noise, Poisson noise, speckle noise, and pupil location error, which can largely degrade the reconstruction. To efficiently address these degradations, we propose a novel FP reconstruction method under a gradient descent optimization framework in this paper. The technique utilizes a Poisson maximum likelihood for better signal modeling, and a truncated Wirtinger gradient for effective error removal. Results on both simulated data and real data captured using our laser-illuminated FPM setup show that the proposed method outperforms other state-of-the-art algorithms. We have also released our source code for non-commercial use.
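A minimal amplitude-flow step with gradient truncation, in the spirit of truncated Wirtinger gradient methods, is sketched below for a generic linear measurement model; the paper's Poisson maximum-likelihood objective and its FPM-specific forward model differ in detail, and every name and threshold here is an illustrative assumption.

```python
import numpy as np

def truncated_wirtinger_step(z, A, y, step=0.1, trunc=3.0):
    """One gradient step for min sum_i (|a_i^H z|^2 - y_i)^2, dropping outliers.
    A: (m, n) measurement matrix; y: (m,) measured intensities."""
    Az = A @ z
    resid = np.abs(Az) ** 2 - y
    keep = np.abs(resid) <= trunc * np.mean(np.abs(resid))  # truncate gross errors
    grad = A[keep].conj().T @ (resid[keep] * Az[keep]) / len(y)
    return z - step * grad
```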
Zhu, Xiaohui; Hitchcock, Adam P; Bazylinski, Dennis A; Denes, Peter; Joseph, John; Lins, Ulysses; Marchesini, Stefano; Shiu, Hung-Wei; Tyliszczak, Tolek; Shapiro, David A
2016-12-20
Characterizing the chemistry and magnetism of magnetotactic bacteria (MTB) is an important aspect of understanding the biomineralization mechanism and function of the chains of magnetosomes (Fe3O4 nanoparticles) found in such species. Images and X-ray absorption spectra (XAS) of magnetosomes extracted from, and magnetosomes in, whole Magnetovibrio blakemorei strain MV-1 cells have been recorded using soft X-ray ptychography at the Fe 2p edge. A spatial resolution of 7 nm is demonstrated. Precursor-like and immature magnetosome phases in a whole MV-1 cell were visualized, and their Fe 2p spectra were measured. Based on these results, a model for the pathway of magnetosome biomineralization for MV-1 is proposed. Fe 2p X-ray magnetic circular dichroism (XMCD) spectra have been derived from ptychography image sequences recorded using left and right circular polarization. The shape of the XAS and XMCD signals in the ptychographic absorption spectra of both sample types is identical to the shape and signals measured with a conventional bright-field scanning transmission X-ray microscope. A weaker and inverted XMCD signal was observed in the ptychographic phase spectra of the extracted magnetosomes. The XMCD ptychographic phase spectrum of the intracellular magnetosomes differed from the ptychographic phase spectrum of the extracted magnetosomes. These results demonstrate that spectro-ptychography offers a superior means of characterizing the chemical and magnetic properties of MTB at the individual magnetosome level.
NASA Astrophysics Data System (ADS)
Pfeiffer, Franz
2018-01-01
X-ray ptychographic microscopy combines the advantages of raster scanning X-ray microscopy with the more recently developed techniques of coherent diffraction imaging. It is limited neither by the fabrication challenges associated with X-ray optics nor by the requirements of isolated specimen preparation, and offers in principle wavelength-limited resolution, as well as a stable solution to the phase problem. In this Review, we discuss the basic principles of X-ray ptychography and summarize the main milestones in the evolution of X-ray ptychographic microscopy and tomography over the past ten years, since its first demonstration with X-rays. We also highlight the potential for applications in the life and materials sciences, and discuss the latest advanced concepts and probable future developments.
Ptychographic imaging with partially coherent plasma EUV sources
NASA Astrophysics Data System (ADS)
Bußmann, Jan; Odstrčil, Michal; Teramoto, Yusuke; Juschkin, Larissa
2017-12-01
We report on high-resolution lens-less imaging experiments based on the ptychographic scanning coherent diffractive imaging (CDI) method, employing compact plasma sources developed for extreme ultraviolet (EUV) lithography applications. Two kinds of discharge sources were used in our experiments: a hollow-cathode-triggered pinch plasma source operated with oxygen and, for the first time, a laser-assisted discharge EUV source with a liquid tin target. Ptychographic reconstructions of different samples were achieved by applying constraint relaxation to the algorithm. Our ptychography algorithms can handle low spatial coherence and broadband illumination as well as compensate for the residual background due to plasma radiation in the visible spectral range. Image resolution down to 100 nm is demonstrated even for sparse objects, and it is presently limited by the sample structure contrast and the available coherent photon flux. We could extract material properties by reconstructing the complex exit-wave field, gaining additional information compared to electron microscopy or CDI with longer-wavelength high harmonic laser sources. Our results show that compact plasma-based EUV light sources of only partial spatial and temporal coherence can be effectively used for lens-less imaging applications. The reported methods may be applied in combination with reflectometry and scatterometry for high-resolution EUV metrology.
Effects of illumination on image reconstruction via Fourier ptychography
NASA Astrophysics Data System (ADS)
Cao, Xinrui; Sinzinger, Stefan
2017-12-01
The Fourier ptychographic microscopy (FPM) technique provides high-resolution images by combining a traditional imaging system, e.g. a microscope or a 4f imaging system, with a multiplexing illumination system, e.g. an LED array, and numerical image processing for enhanced image reconstruction. In order to numerically combine images that are captured under varying illumination angles, an iterative phase-retrieval algorithm is often applied. However, in practice, the performance of the FPM algorithm degrades due to imperfections of the optical system, image noise caused by the camera, etc. To eliminate the influence of the aberrations of the imaging system, an embedded pupil function recovery (EPRY)-FPM algorithm has been proposed [Opt. Express 22, 4960-4972 (2014)]. In this paper, we study how the performance of the FPM and EPRY-FPM algorithms is affected by imperfections of the illumination system, using both numerical simulations and experiments. The investigated imperfections include varying and non-uniform intensities, and wavefront aberrations. Our study shows that aberrations of the illumination system significantly affect the performance of both the FPM and EPRY-FPM algorithms; hence, in practice, the illumination system has a significant influence on the resulting image quality.
X-ray ptychographic and fluorescence microscopy of frozen-hydrated cells using continuous scanning
Deng, Junjing; Vine, David J.; Chen, Si; ...
2017-03-27
X-ray microscopy can be used to image whole, unsectioned cells in their native hydrated state. It complements the higher resolution of electron microscopy for submicrometer-thick specimens, and the molecule-specific imaging capabilities of fluorescence light microscopy. We describe here the first use of fast, continuous x-ray scanning of frozen-hydrated cells for simultaneous sub-20 nm resolution ptychographic transmission imaging with high contrast, and sub-100 nm resolution deconvolved x-ray fluorescence imaging of diffusible and bound ions at native concentrations, without the need to add specific labels. By working with cells that have been rapidly frozen without the use of chemical fixatives, and imaging them under cryogenic conditions, we are able to obtain images with well preserved structural and chemical composition, and sufficient stability against radiation damage to allow for multiple images to be obtained with no observable change.
Every factor helps: Rapid Ptychographic Reconstruction
NASA Astrophysics Data System (ADS)
Nashed, Youssef
2015-03-01
Recent advances in microscopy, specifically higher spatial resolution and data acquisition rates, require faster and more robust phase retrieval reconstruction methods. Ptychography is a phase retrieval technique for reconstructing the complex transmission function of a specimen from a sequence of diffraction patterns in visible light, X-ray, and electron microscopes. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes. Waiting to postprocess datasets offline results in missed opportunities. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs). A final specimen reconstruction is then achieved by different techniques to merge sub-dataset results into a single complex phase and amplitude image. Results are shown on a simulated specimen and real datasets from X-ray experiments conducted at a synchrotron light source.
Soft X-ray spectromicroscopy using ptychography with randomly phased illumination
NASA Astrophysics Data System (ADS)
Maiden, A. M.; Morrison, G. R.; Kaulich, B.; Gianoncelli, A.; Rodenburg, J. M.
2013-04-01
Ptychography is a form of scanning diffractive imaging that can successfully retrieve the modulus and phase of both the sample transmission function and the illuminating probe. An experimental difficulty commonly encountered in diffractive imaging is the large dynamic range of the diffraction data. Here we report a novel ptychographic experiment using a randomly phased X-ray probe to considerably reduce the dynamic range of the recorded diffraction patterns. Images can be reconstructed reliably and robustly from this setup, even when scatter from the specimen is weak. A series of ptychographic reconstructions at X-ray energies around the L absorption edge of iron demonstrates the advantages of this method for soft X-ray spectromicroscopy, which can readily provide chemical sensitivity without the need for optical refocusing. In particular, the phase signal is in perfect registration with the modulus signal and provides complementary information that can be more sensitive to changes in the local chemical environment.
Iteration in Early-Elementary Engineering Design
NASA Astrophysics Data System (ADS)
McFarland Kendall, Amber Leigh
K-12 standards and curricula are beginning to include engineering design as a key practice within Science Technology Engineering and Mathematics (STEM) education. However, there is little research on how the youngest students engage in engineering design within the elementary classroom. This dissertation focuses on iteration as an essential aspect of engineering design, and because research at the college and professional level suggests iteration improves the designer's understanding of problems and the quality of design solutions. My research presents qualitative case studies of students in kindergarten and third-grade as they engage in classroom engineering design challenges which integrate with traditional curricula standards in mathematics, science, and literature. I discuss my results through the lens of activity theory, emphasizing practices, goals, and mediating resources. Through three chapters, I provide insight into how early-elementary students iterate upon their designs by characterizing the ways in which lesson design impacts testing and revision, by analyzing the plan-driven and experimentation-driven approaches that student groups use when solving engineering design challenges, and by investigating how students attend to constraints within the challenge. I connect these findings to teacher practices and curriculum design in order to suggest methods of promoting iteration within open-ended, classroom-based engineering design challenges. This dissertation contributes to the field of engineering education by providing evidence of productive engineering practices in young students and support for the value of engineering design challenges in developing students' participation and agency in these practices.
The Effect of Iteration on the Design Performance of Primary School Children
ERIC Educational Resources Information Center
Looijenga, Annemarie; Klapwijk, Remke; de Vries, Marc J.
2015-01-01
Iteration during the design process is an essential element. Engineers optimize their design by iteration. Research on iteration in Primary Design Education is however scarce; possibly teachers believe they do not have enough time for iteration in daily classroom practices. Spontaneous playing behavior of children indicates that iteration fits in…
Simultaneous cryo X-ray ptychographic and fluorescence microscopy of green algae
Deng, Junjing; Vine, David J.; Chen, Si; ...
2015-02-24
Trace metals play important roles in normal and in disease-causing biological functions. X-ray fluorescence microscopy reveals trace elements with no dependence on binding affinities (unlike with visible light fluorophores) and with improved sensitivity relative to electron probes. However, X-ray fluorescence is not very sensitive for showing the light elements that comprise the majority of cellular material. Here we show that X-ray ptychography can be combined with fluorescence to image both cellular structure and trace element distribution in frozen-hydrated cells at cryogenic temperatures, with high structural and chemical fidelity. Ptychographic reconstruction algorithms deliver phase and absorption contrast images at a resolution beyond that of the illuminating lens or beam size. Using 5.2-keV X-rays, we have obtained sub-30-nm resolution structural images and ~90-nm-resolution fluorescence images of several elements in frozen-hydrated green algae. This combined approach offers a way to study the role of trace elements in their structural context.
High energy near- and far-field ptychographic tomography at the ESRF
NASA Astrophysics Data System (ADS)
da Silva, Julio C.; Haubrich, Jan; Requena, Guillermo; Hubert, Maxime; Pacureanu, Alexandra; Bloch, Leonid; Yang, Yang; Cloetens, Peter
2017-09-01
In high-resolution tomography, one needs highly resolved projections in order to reconstruct a high-quality 3D map of a sample. X-ray ptychography is a robust technique that can provide such high-resolution 2D projections by taking advantage of coherent X-rays. The technique has long been used in the far-field regime, but it can now also be implemented in the near-field regime. In both regimes, the technique enables not only high-resolution imaging but also high sensitivity to the electron density of the sample. The combination with tomography makes 3D imaging possible via ptychographic X-ray computed tomography (PXCT), which can provide a 3D map of the complex-valued refractive index of the sample. The extension of PXCT to X-ray energies above 15 keV is challenging, but it allows the imaging of objects that are opaque at lower energies. We present here the implementation and development of high-energy near- and far-field PXCT at the ESRF.
Ptychographic X-ray nanotomography quantifies mineral distributions in human dentine
NASA Astrophysics Data System (ADS)
Zanette, I.; Enders, B.; Dierolf, M.; Thibault, P.; Gradl, R.; Diaz, A.; Guizar-Sicairos, M.; Menzel, A.; Pfeiffer, F.; Zaslansky, P.
2015-03-01
Bones are bio-composites with biologically tunable mechanical properties, where a polymer matrix of nanofibrillar collagen is reinforced by apatite mineral crystals. Some bones, such as antler, form and change rapidly, while other bone tissues, such as human tooth dentine, develop slowly and maintain constant composition and architecture for entire lifetimes. When studying apatite mineral microarchitecture, mineral distributions or mineralization activity of bone-forming cells, representative samples of tissue are best studied at submicrometre resolution while minimizing sample-preparation damage. Here, we demonstrate the power of ptychographic X-ray tomography to map variations in the mineral content distribution in three dimensions and at the nanometre scale. Using this non-destructive method, we observe nanostructures surrounding hollow tracts that exist in human dentine forming dentinal tubules. We reveal unprecedented quantitative details of the ultrastructure clearly revealing the spatially varying mineralization density. Such information is essential for understanding a variety of natural and therapeutic effects for example in bone tissue healing and ageing.
NASA Astrophysics Data System (ADS)
Ciani, A.; Guizar-Sicairos, M.; Diaz, A.; Holler, M.; Pallu, S.; Achiou, Z.; Jennane, R.; Toumi, H.; Lespessailles, E.; Kewish, C. M.
2016-01-01
A newly developed data processing method able to characterize the osteocytes lacuno-canalicular network (LCN) is presented. Osteocytes are the most abundant cells in the bone, living in spaces called lacunae embedded inside the bone matrix and connected to each other with an extensive network of canals that allows for the exchange of nutrients and for mechanotransduction functions. The geometrical three-dimensional (3D) architecture is increasingly thought to be related to the macroscopic strength or failure of the bone and it is becoming the focus for investigating widely spread diseases such as osteoporosis. To obtain 3D LCN images non-destructively has been out of reach until recently, since tens-of-nanometers scale resolution is required. Ptychographic tomography was validated for bone imaging in [1], showing clearly the LCN. The method presented here was applied to 3D ptychographic tomographic images in order to extract morphological and geometrical parameters of the lacuno-canalicular structures.
Liu, Haigang; Xu, Zijian; Zhang, Xiangzhi; Wu, Yanqing; Guo, Zhi; Tai, Renzhong
2013-04-10
In coherent diffractive imaging (CDI) experiments, a beamstop (BS) is commonly used to extend the exposure time of the charge-coupled device (CCD) detector and obtain high-angle diffraction signals. However, the negative effect of a large BS is also evident, causing low-frequency signals to be missed and making CDI reconstruction unstable or causing it to fail. We performed a systematic simulation investigation of the effects of BSs on the quality of reconstructed images from both plane-wave and ptychographic CDI (PCDI). For the same imaging quality, we found that ptychography can tolerate BSs that are at least 20 times larger than those for plane-wave CDI. For PCDI, a larger overlap ratio and a smaller illumination spot can significantly increase the robustness of the imaging to the negative influence of BSs. Our results provide guidelines for the usage of BSs in CDI, especially in PCDI experiments, which can help to further improve the spatial resolution of PCDI.
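A common way to accommodate a beamstop in iterative reconstruction is to apply the modulus constraint only where the detector data are valid and let the model values "float" behind the stop; a minimal sketch with hypothetical names:

```python
import numpy as np

def fourier_update_with_beamstop(exit_wave, amp, valid):
    """Modulus constraint that skips beamstop-blocked pixels (sketch).
    amp: measured amplitudes; valid: boolean mask, False behind the stop."""
    F = np.fft.fft2(exit_wave)
    F = np.where(valid, amp * np.exp(1j * np.angle(F)), F)  # keep model where blocked
    return np.fft.ifft2(F)
```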
Digital micromirror device-based laser-illumination Fourier ptychographic microscopy.
Kuang, Cuifang; Ma, Ye; Zhou, Renjie; Lee, Justin; Barbastathis, George; Dasari, Ramachandra R; Yaqoob, Zahid; So, Peter T C
2015-10-19
We report a novel approach to Fourier ptychographic microscopy (FPM) that uses a digital micromirror device (DMD) and a coherent laser source (532 nm) to generate spatially modulated sample illumination. Previously demonstrated FPM systems are all based on partially coherent illumination, which offers limited throughput due to insufficient brightness. Our FPM employs a high-power coherent laser source to enable shot-noise-limited high-speed imaging. For the first time, a digital micromirror device (DMD), imaged onto the back focal plane of the illumination objective, is used to generate a spatially modulated sample illumination field for ptychography. By coding the on/off states of the micromirrors, the illumination plane-wave angle can be varied at speeds of more than 4 kHz. A set of intensity images, resulting from different oblique illuminations, is used to numerically reconstruct one high-resolution image without obvious laser speckle. Experiments were conducted using a USAF resolution target and a fiber sample, demonstrating the high-resolution imaging capability of our system. We envision that our approach, if combined with a coded-aperture compressive-sensing algorithm, will further improve the imaging speed of DMD-based FPM systems.
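Because the DMD is imaged onto the back focal plane, each illumination angle corresponds to switching on a block of mirrors at a particular offset; the sketch below uses a paraxial x = f·sin(theta) mapping, and the focal length, mirror pitch, panel shape, and block size are all illustrative assumptions rather than the authors' calibration.

```python
import numpy as np

def dmd_pattern(theta_x, theta_y, f_mm=200.0, pitch_um=10.8,
                shape=(1080, 1920), block=20):
    """Binary DMD pattern producing a plane wave at (theta_x, theta_y) rad (sketch)."""
    cy, cx = shape[0] // 2, shape[1] // 2
    # back-focal-plane offset in mirror units: x = f*sin(theta) / pitch
    ix = cx + int(round(f_mm * 1e3 * np.sin(theta_x) / pitch_um))
    iy = cy + int(round(f_mm * 1e3 * np.sin(theta_y) / pitch_um))
    pattern = np.zeros(shape, dtype=np.uint8)
    pattern[iy - block // 2: iy + block // 2,
            ix - block // 2: ix + block // 2] = 1   # mirrors switched 'on'
    return pattern
```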
NASA Astrophysics Data System (ADS)
Porter, Christina L.; Tanksalvala, Michael; Gerrity, Michael; Miley, Galen P.; Esashi, Yuka; Horiguchi, Naoto; Zhang, Xiaoshi; Bevis, Charles S.; Karl, Robert; Johnsen, Peter; Adams, Daniel E.; Kapteyn, Henry C.; Murnane, Margaret M.
2018-03-01
With increasingly 3D devices becoming the norm, there is a growing need in the semiconductor industry and in materials science for high spatial resolution, non-destructive metrology techniques capable of determining depth-dependent composition information on devices. We present a solution to this problem using ptychographic coherent diffractive imaging (CDI) implemented using a commercially available, tabletop 13 nm source. We present the design, simulations, and preliminary results from our new complex EUV imaging reflectometer, which uses coherent 13 nm light produced by tabletop high harmonic generation. This tool is capable of determining spatially resolved composition vs. depth profiles for samples by recording ptychographic images at multiple incidence angles. By harnessing phase measurements, we can locally and nondestructively determine quantities such as device and thin film layer thicknesses, surface roughness, interface quality, and dopant concentration profiles. Using this advanced imaging reflectometer, we can quantitatively characterize materials-science-relevant and industry-relevant nanostructures for a wide variety of applications, spanning from defect and overlay metrology to the development and optimization of nano-enhanced thermoelectric or spintronic devices.
Direct coupling of tomography and ptychography
Gürsoy, Doğa
2017-08-09
We present a generalization of the ptychographic phase problem for recovering refractive properties of a three-dimensional object in a tomography setting. Our approach, which ignores the lateral overlapping probe requirements in existing ptychography algorithms, can enable the reconstruction of objects using highly flexible acquisition patterns and pave the way for sparse and rapid data collection with lower radiation exposure.
NASA Astrophysics Data System (ADS)
Shimomura, Y.; Aymar, R.; Chuyanov, V. A.; Huguet, M.; Matsumoto, H.; Mizoguchi, T.; Murakami, Y.; Polevoi, A. R.; Shimada, M.; ITER Joint Central Team; ITER Home Teams
2001-03-01
ITER is planned to be the first fusion experimental reactor in the world operating for research in physics and engineering. The first ten years of operation will be devoted primarily to physics issues at low neutron fluence and the following ten years of operation to engineering testing at higher fluence. ITER can accommodate various plasma configurations and plasma operation modes, such as inductive high Q modes, long pulse hybrid modes and non-inductive steady state modes, with large ranges of plasma current, density, beta and fusion power, and with various heating and current drive methods. This flexibility will provide an advantage for coping with uncertainties in the physics database, in studying burning plasmas, in introducing advanced features and in optimizing the plasma performance for the different programme objectives. Remote sites will be able to participate in the ITER experiment. This concept will provide an advantage not only in operating ITER for 24 hours a day but also in involving the worldwide fusion community and in promoting scientific competition among the ITER Parties.
Helping System Engineers Bridge the Peaks
NASA Technical Reports Server (NTRS)
Rungta, Neha; Tkachuk, Oksana; Person, Suzette; Biatek, Jason; Whalen, Michael W.; Castle, Joseph; Gundy-Burlet, Karen
2014-01-01
In our experience at NASA, system engineers generally follow the Twin Peaks approach when developing safety-critical systems. However, iterations between the peaks require considerable manual, and in some cases duplicate, effort. A significant part of the manual effort stems from the fact that requirements are written in English natural language rather than a formal notation. In this work, we propose an approach that enables system engineers to leverage formal requirements and automated test generation to streamline iterations, effectively "bridging the peaks". The key to the approach is a formal language notation that a) system engineers are comfortable with, b) is supported by a family of automated V&V tools, and c) is semantically rich enough to describe the requirements of interest. We believe the combination of formalizing requirements and providing tool support to automate the iterations will lead to a more efficient Twin Peaks implementation at NASA.
Iterative procedures for space shuttle main engine performance models
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1989-01-01
Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). The computational efficiency and reliability of these procedures are examined. A modified trust-region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off-nominal engine performance predictions. A heuristic form of Broyden's Rank One method is also tested, and favorable results based on this algorithm are presented.
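For readers unfamiliar with the safeguarded iterations referred to above, here is a minimal sketch of a multivariate Newton-Raphson solver with a crude trust-region step clip. The residual, Jacobian, and numbers are toy placeholders, not TIP internals; a genuine trust-region method would also adapt the radius from the observed residual reduction.

```python
import numpy as np

def newton_trust(residual, jacobian, x0, radius=0.5, tol=1e-10, max_iter=50):
    """Multivariate Newton-Raphson with the step length clipped to a trust region.

    residual : R^n -> R^n balance errors (mass/energy/pressure, here a placeholder)
    jacobian : R^n -> R^(n x n) derivative of the residual
    radius   : maximum Euclidean step length per iteration
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        step = np.linalg.solve(jacobian(x), -r)  # full Newton step
        n = np.linalg.norm(step)
        if n > radius:                           # clip overly aggressive steps
            step *= radius / n
        x = x + step
    return x

# toy balance: find (p, T) with p*T = 2 and p = T, i.e. p = T = sqrt(2)
f = lambda x: np.array([x[0] * x[1] - 2.0, x[0] - x[1]])
J = lambda x: np.array([[x[1], x[0]], [1.0, -1.0]])
print(newton_trust(f, J, [3.0, 0.5]))  # ~[1.41421356, 1.41421356]
```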
Pollution Reduction Technology Program for Small Jet Aircraft Engines, Phase 2
NASA Technical Reports Server (NTRS)
Bruce, T. W.; Davis, F. G.; Kuhn, T. E.; Mongia, H. C.
1978-01-01
A series of iterative combustor pressure rig tests were conducted on two combustor concepts applied to the AiResearch TFE731-2 turbofan engine combustion system for the purpose of optimizing combustor performance and operating characteristics consistent with low emissions. The two concepts were an axial air-assisted airblast fuel injection configuration with variable-geometry air swirlers and a staged premix/prevaporization configuration. The iterative rig testing and modification sequence on both concepts was intended to provide operational compatibility with the engine and to determine one concept for further evaluation in a TFE731-2 engine.
Tripathi, Ashish; McNulty, Ian; Shpyrko, Oleg G
2014-01-27
Ptychographic coherent x-ray diffractive imaging is a form of scanning microscopy that does not require optics to image a sample. A series of scanned coherent diffraction patterns recorded from multiple overlapping illuminated regions on the sample is inverted numerically to retrieve its image. The technique recovers the phase information, which is lost in detecting the diffraction patterns, by using experimentally known constraints, in this case the measured diffraction intensities and the assumed scan positions on the sample. The spatial resolution of the recovered image is limited by the angular extent over which the diffraction patterns are recorded and by how well these constraints are known. Here, we explore how reconstruction quality degrades with uncertainties in the scan positions. We show experimentally that large errors in the assumed scan positions on the sample can be numerically determined and corrected using conjugate gradient descent methods. We also explore in simulations the limits, set by the signal-to-noise ratio of the diffraction patterns and the amount of overlap between adjacent scan positions, on just how large these errors can be and still be rendered tractable by this method.
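The position correction can be pictured with a much simplified stand-in for the conjugate-gradient scheme used in the paper: a brute-force search over integer-pixel offsets that keeps whichever assumed position best explains the measured diffraction amplitude. All names, the misfit form, and the assumption that positions stay inside the object array are illustrative.

```python
import numpy as np

def refine_position(obj, probe, measured_amp, pos, search=2):
    """Refine one scan position by brute-force neighborhood search (hedged sketch).

    obj          : current complex object estimate
    probe        : complex illumination function
    measured_amp : sqrt of the recorded diffraction pattern at this position
    pos          : assumed (row, col) corner of the illuminated region, in pixels
    """
    h, w = probe.shape

    def misfit(r, c):
        view = obj[r:r + h, c:c + w]                 # assumes r, c stay in bounds
        model_amp = np.abs(np.fft.fft2(view * probe))
        return np.sum((model_amp - measured_amp) ** 2)

    r0, c0 = pos
    candidates = [(r0 + dr, c0 + dc)
                  for dr in range(-search, search + 1)
                  for dc in range(-search, search + 1)]
    return min(candidates, key=lambda rc: misfit(*rc))  # best-fitting position
```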
Yang, Hao; MacLaren, Ian; Jones, Lewys; ...
2017-04-01
Recent developments in fast pixelated detector technology have allowed a two-dimensional diffraction pattern to be recorded at every probe position of a two-dimensional raster scan in a scanning transmission electron microscope (STEM), forming an information-rich four-dimensional (4D) dataset. Electron ptychography has been shown to enable efficient coherent phase imaging of weakly scattering objects from a 4D dataset recorded using a focused electron probe, which is optimised for simultaneous incoherent Z-contrast imaging and spectroscopy in STEM. Thus coherent phase-contrast and incoherent Z-contrast imaging modes can be efficiently combined to provide good sensitivity to both light and heavy elements at atomic resolution. Here, we explore the application of electron ptychography for atomic-resolution imaging of strongly scattering crystalline specimens, and present experiments on imaging crystalline specimens, including samples containing defects, under dynamical channelling conditions using an aberration-corrected microscope. A ptychographic reconstruction method called Wigner distribution deconvolution (WDD) was implemented. Our experimental and simulation results suggest that ptychography provides a readily interpretable phase image and great sensitivity for imaging light elements at atomic resolution in relatively thin crystalline materials.
Installation and Testing of ITER Integrated Modeling and Analysis Suite (IMAS) on DIII-D
NASA Astrophysics Data System (ADS)
Lao, L.; Kostuk, M.; Meneghini, O.; Smith, S.; Staebler, G.; Kalling, R.; Pinches, S.
2017-10-01
A critical objective of the ITER Integrated Modeling Program is the development of IMAS to support ITER plasma operation and research activities. An IMAS framework has been established based on the earlier work carried out within the EU. It consists of a physics data model and a workflow engine. The data model is capable of representing both simulation and experimental data and is applicable to ITER and other devices. IMAS has been successfully installed on a local DIII-D server using a flexible installer capable of managing the core data access tools (Access Layer and Data Dictionary) and optionally the Kepler workflow engine and coupling tools. A general adaptor for OMFIT (a workflow engine) is being built for adaptation of any analysis code to IMAS using a new IMAS universal access layer (UAL) interface developed from an existing OMFIT EU Integrated Tokamak Modeling UAL. Ongoing work includes development of a general adaptor for EFIT and TGLF based on this new UAL that can be readily extended for other physics codes within OMFIT. Work supported by US DOE under DE-FC02-04ER54698.
Evolutionary engineering for industrial microbiology.
Vanee, Niti; Fisher, Adam B; Fong, Stephen S
2012-01-01
Superficially, evolutionary engineering is a paradoxical field that balances competing interests. In natural settings, evolution iteratively selects and enriches subpopulations that are best adapted to a particular ecological niche using random processes such as genetic mutation. In engineering, by contrast, the desired approach utilizes rational prospective design to address targeted problems. When the details of evolutionary and engineering processes are considered, more commonality can be found. Engineering relies on detailed knowledge of the problem parameters and design properties in order to predict design outcomes that would be an optimized solution. When detailed knowledge of a system is lacking, engineers often employ algorithmic search strategies to identify empirical solutions. Evolution epitomizes this iterative optimization by continuously diversifying design options from a parental design, and then selecting the progeny designs that represent satisfactory solutions. In this chapter, the technique of applying the natural principles of evolution to engineer microbes for industrial applications is discussed to highlight the challenges and principles of evolutionary engineering.
ITER EDA Newsletter. Volume 3, no. 2
NASA Astrophysics Data System (ADS)
1994-02-01
This issue of the ITER EDA (Engineering Design Activities) Newsletter contains reports on the Fifth ITER Council Meeting held in Garching, Germany, January 27-28, 1994, a visit (January 28, 1994) of an international group of Harvard Fellows to the San Diego Joint Work Site, the Inauguration Ceremony of the EC-hosted ITER joint work site in Garching (January 28, 1994), on an ITER Technical Meeting on Assembly and Maintenance held in Garching, Germany, January 19-26, 1994, and a report on a Technical Committee Meeting on radiation effects on in-vessel components held in Garching, Germany, November 15-19, 1993, as well as an ITER Status Report.
NASA Astrophysics Data System (ADS)
Macrander, Albert; Wojcik, Michael; Maser, Jörg; Bouet, Nathalie; Conley, Raymond
2017-09-01
Ptychography was used to determine the focus of a Multilayer-Laue-Lens (MLL) at beamline 1-BM at the Advanced Photon Source (APS). The MLL had a record aperture of 102 microns with 15170 layers. The measurements were made at 12 keV. The focal length was 9.6 mm, and the outermost zone was 4 nm thick. MLLs with ever larger apertures are under continuous development, since ever longer focal lengths, ever larger working distances, and ever increased flux in the focus are desired. A focus size of 25 nm was determined by ptychographic phase retrieval from a gold grating sample with 1 micron lines and spaces, scanned over a 3.0 micron horizontal distance. The MLL was set to focus in the horizontal plane of the bending magnet beamline. A CCD with 13.0 micron pixel size positioned 1.13 m downstream of the sample was used to collect the transmitted intensity distribution. The beam incident on the MLL covered the whole 102 micron aperture in the horizontal focusing direction and 20 microns in the vertical direction. 160 iterations of the difference map algorithm were sufficient to obtain a reconstructed image of the sample. The present work highlights the utility of a bending magnet source at the APS for performing coherence-based experiments. Use of ptychography at 1-BM on MLL optics opens the way to study diffraction-limited imaging of other hard x-ray optics.
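The reconstruction above used the difference-map algorithm. As a more compact illustration of ptychographic phase retrieval in general, the following is a minimal PIE-style object update at one scan position (not the difference map itself); the normalization and names are illustrative.

```python
import numpy as np

def pie_update(obj, probe, measured_amp, pos, alpha=1.0):
    """One PIE-style object update at a single scan position (hedged sketch).

    measured_amp : sqrt of one recorded diffraction pattern
    pos          : (row, col) corner of the illuminated region, in pixels
    """
    r, c = pos
    h, w = probe.shape
    view = obj[r:r + h, c:c + w]
    psi = view * probe                               # exit wave estimate
    Psi = np.fft.fft2(psi)
    Psi = measured_amp * np.exp(1j * np.angle(Psi))  # enforce measured modulus
    psi_new = np.fft.ifft2(Psi)
    # feed the corrected exit wave back into the object, weighted by the probe
    obj[r:r + h, c:c + w] = view + alpha * np.conj(probe) * (psi_new - psi) \
                                   / (np.abs(probe).max() ** 2)
    return obj
```

Cycling this update over many overlapping positions is what lets the modulus data and the overlap constraint jointly pin down the lost phase.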
A superlinear interior points algorithm for engineering design optimization
NASA Technical Reports Server (NTRS)
Herskovits, J.; Asquier, J.
1990-01-01
We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization, inasmuch as a feasible design is obtained at each iteration. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.
OMNY PIN—A versatile sample holder for tomographic measurements at room and cryogenic temperatures
NASA Astrophysics Data System (ADS)
Holler, M.; Raabe, J.; Wepf, R.; Shahmoradian, S. H.; Diaz, A.; Sarafimov, B.; Lachat, T.; Walther, H.; Vitins, M.
2017-11-01
Nowadays ptychographic tomography in the hard x-ray regime, i.e., at energies above about 2 keV, is a well-established measurement technique. At the Paul Scherrer Institut, currently two instruments are available: one is measuring at room temperature and atmospheric pressure, and the other, the so-called OMNY (tOMography Nano crYo) instrument, is operating at ultra-high vacuum and offering cryogenic sample temperatures down to 10 K. In this manuscript, we present the sample mounts that were developed for these instruments. Aside from excellent mechanical stability and thermal conductivity, they also offer highly reproducible mounting. Various types were developed for different kinds of samples and are presented in detail, including examples of how specimens can be mounted on these holders. We also show the first hard x-ray ptychographic tomography measurements of high-pressure frozen biological samples, in the present case Chlamydomonas cells, the related sample pins and preparation steps. For completeness, we present accessories such as transportation containers for both room temperature and cryogenic samples and a gripper mechanism for automatic sample changing. The sample mounts are not limited to x-ray tomography or hard x-ray energies, and we believe that they can be very useful for other instrumentation projects.
Williams, Anthony; Chung, Jaebum; Yang, Changhuei; Cote, Richard J
2017-01-01
Examining the hematogenous compartment for evidence of metastasis has increased significantly within the oncology research community in recent years, due to the development of technologies aimed at the enrichment of circulating tumor cells (CTCs), the subpopulation of primary tumor cells that gain access to the circulatory system and are responsible for colonization at distant sites. In contrast to other technologies, filtration-based CTC enrichment, which exploits differences in size between larger tumor cells and surrounding smaller, non-tumor blood cells, has the potential to improve CTC characterization through isolation of tumor cell populations with greater molecular heterogeneity. However, microscopic analysis of uneven filtration surfaces containing CTCs is laborious, time-consuming, and inconsistent, preventing widespread use of filtration-based enrichment technologies. Here, integrated with a microfiltration-based CTC and rare cell enrichment device we have previously described, we present a protocol for Fourier Ptychographic Microscopy (FPM), a method that, unlike many automated imaging platforms, produces high-speed, high-resolution images that can be digitally refocused, allowing users to observe objects of interest present on multiple focal planes within the same image frame. The development of a cost-effective and high-throughput CTC analysis system for filtration-based enrichment technologies could have profound clinical implications for improved CTC detection and analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wilke, R. N., E-mail: rwilke@gwdg.de; Wallentin, J.; Osterhoff, M.
The Large Area Medipix-Based Detector Array (Lambda) has been used in a ptychographic imaging experiment on solar-cell nanowires. By using a semi-transparent central stop, the high flux density provided by nano-focusing Kirkpatrick–Baez mirrors can be fully exploited for high-resolution phase reconstructions. Suitable detection systems that are capable of recording high photon count rates with single-photon detection are instrumental for coherent X-ray imaging. The new single-photon-counting pixel detector 'Lambda' has been tested in a ptychographic imaging experiment on solar-cell nanowires using Kirkpatrick–Baez-focused 13.8 keV X-rays. Taking advantage of the high count rate of the Lambda and dynamic range expansion by the semi-transparent central stop, a high-dynamic-range diffraction signal covering more than seven orders of magnitude has been recorded, which corresponds to a photon flux density of about 10^5 photons nm^-2 s^-1 or a flux of ~10^10 photons s^-1 on the sample. By comparison with data taken without the semi-transparent central stop, an increase in resolution by a factor of 3-4 is determined: from about 125 nm to about 38 nm for the nanowire and from about 83 nm to about 21 nm for the illuminating wavefield.
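The dynamic-range expansion works because the attenuated central counts can be rescaled back to true intensities and merged with the unattenuated outer signal. A minimal sketch, assuming a known, uniform stop transmission and a boolean mask of the stop region (both illustrative):

```python
import numpy as np

def merge_hdr(direct, stopped, stop_mask, transmission):
    """Merge two diffraction exposures into one high-dynamic-range pattern.

    direct       : exposure without the central stop (center may saturate)
    stopped      : exposure with the semi-transparent stop over the center
    stop_mask    : boolean array, True where the stop covers the detector
    transmission : intensity transmission of the stop (e.g. 1e-4, illustrative)
    """
    hdr = direct.astype(float).copy()
    # inside the stop, rescale the attenuated counts back to true intensity
    hdr[stop_mask] = stopped[stop_mask] / transmission
    return hdr
```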
FPscope: a field-portable high-resolution microscope using a cellphone lens.
Dong, Siyuan; Guo, Kaikai; Nanda, Pariksheet; Shiradkar, Radhika; Zheng, Guoan
2014-10-01
The large consumer market has made cellphone lens modules available at low cost and in high quality. In a conventional cellphone camera, the lens module is used to demagnify the scene onto the image plane of the camera, where the image sensor is located. In this work, we report a 3D-printed high-resolution Fourier ptychographic microscope, termed FPscope, which uses a cellphone lens in a reverse manner. In our platform, we replace the image sensor with sample specimens, and use the cellphone lens to project the magnified image to the detector. To supersede the diffraction limit of the lens module, we use an LED array to illuminate the sample from different incident angles and synthesize the acquired images using the Fourier ptychographic algorithm. As a demonstration, we use the reported platform to acquire high-resolution images of a resolution target and biological specimens, with a maximum synthetic numerical aperture (NA) of 0.5. We also show that the depth-of-focus of the reported platform is about 0.1 mm, orders of magnitude longer than that of a conventional microscope objective with a similar NA. The reported platform may enable healthcare access in low-resource settings. It can also be used to demonstrate the concept of computational optics for educational purposes.
IDC Re-Engineering Phase 2 Iteration E2 Use Case Realizations Version 1.2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hamlet, Benjamin R.; Harris, James M.; Burns, John F.
2016-12-01
This document contains 4 use case realizations generated from the model contained in Rational Software Architect. These use case realizations are the current versions of the realizations originally delivered in Elaboration Iteration 2.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhai, Y.; Loesser, G.; Smith, M.
ITER diagnostic first walls (DFWs) and diagnostic shield modules (DSMs) inside the port plugs (PPs) are designed to protect diagnostic instruments and components from a harsh plasma environment and to provide structural support while allowing diagnostic access to the plasma. The design of the DFWs and DSMs is driven by 1) plasma radiation and nuclear heating during normal operation and 2) electromagnetic loads during plasma events and the associated component structural responses. A multi-physics engineering analysis protocol for the design has been established at Princeton Plasma Physics Laboratory, and it was used for the design of the ITER DFWs and DSMs. The analyses were performed to address challenging design issues based on the resultant stresses and deflections of the DFW-DSM-PP assembly for the main load cases. The ITER Structural Design Criteria for In-Vessel Components (SDC-IC) required for design by analysis and three major issues driving the mechanical design of the ITER DFWs are discussed. General guidelines for the DSM design have been established as a result of design parametric studies.
2009-09-01
SAS: Statistical Analysis Software; SE: Systems Engineering; SEP: Systems Engineering Process; SHP: Shaft Horsepower; SIGINT: Signals Intelligence ... management occurs (OSD 2002). The Systems Engineering Process (SEP), displayed in Figure 2, is a comprehensive, iterative and recursive problem...
Overview of International Thermonuclear Experimental Reactor (ITER) engineering design activities*
NASA Astrophysics Data System (ADS)
Shimomura, Y.
1994-05-01
The International Thermonuclear Experimental Reactor (ITER) [International Thermonuclear Experimental Reactor (ITER) (International Atomic Energy Agency, Vienna, 1988), ITER Documentation Series, No. 1] project is a multiphased project, presently proceeding under the auspices of the International Atomic Energy Agency according to the terms of a four-party agreement among the European Atomic Energy Community (EC), the Government of Japan (JA), the Government of the Russian Federation (RF), and the Government of the United States (US), ``the Parties.'' The ITER project is based on the tokamak, a Russian invention that has since been brought to a high level of development in all major fusion programs in the world. The objective of ITER is to demonstrate the scientific and technological feasibility of fusion energy for peaceful purposes. The ITER design is being developed by the Joint Central Team, with support from the Parties' four Home Teams. An overview of the ITER design activities is presented.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boggs, Paul T.; Althsuler, Alan; Larzelere, Alex R.
2005-08-01
The Design-through-Analysis Realization Team (DART) is chartered with reducing the time Sandia analysts require to complete the engineering analysis process. The DART system analysis team studied the engineering analysis processes employed by analysts in Centers 9100 and 8700 at Sandia to identify opportunities for reducing overall design-through-analysis process time. The team created and implemented a rigorous analysis methodology based on a generic process flow model parameterized by information obtained from analysts. They also collected data from analysis department managers to quantify the problem type and complexity distribution throughout Sandia's analyst community. They then used this information to develop a community model, which enables a simple characterization of processes that span the analyst community. The results indicate that equal opportunity for reducing analysis process time is available both by reducing the ''once-through'' time required to complete a process step and by reducing the probability of backward iteration. In addition, reducing the rework fraction (i.e., improving the engineering efficiency of subsequent iterations) offers approximately 40% to 80% of the benefit of reducing the ''once-through'' time or iteration probability, depending upon the process step being considered. Further, the results indicate that geometry manipulation and meshing is the largest portion of an analyst's effort, especially for structural problems, and offers significant opportunity for overall time reduction. Iteration loops initiated late in the process are more costly than others because they increase ''inner loop'' iterations. Identifying and correcting problems as early as possible in the process offers significant opportunity for time savings.
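Those trade-offs can be made concrete with a toy memoryless model: a step with once-through time t triggers, with probability p, rework passes that each cost a fraction r of t, so the expected total is t(1 + r*p/(1-p)). A hedged sketch with invented numbers:

```python
def expected_process_time(t_once, p_iter, rework_frac):
    """Toy model: one full pass plus a geometric number of rework passes."""
    return t_once * (1.0 + rework_frac * p_iter / (1.0 - p_iter))

# starting from t = 10 days, p = 0.4, r = 0.8 (all invented):
print(expected_process_time(10, 0.4, 0.8))  # 15.33 days
print(expected_process_time(10, 0.4, 0.4))  # 12.67 days: halved rework fraction
print(expected_process_time(10, 0.2, 0.8))  # 12.00 days: halved iteration odds
```

Under these invented numbers, halving the rework fraction recovers a comparable share of the benefit of halving the iteration probability, consistent with the 40% to 80% range quoted in the abstract.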
CRISPR/Cas9-coupled recombineering for metabolic engineering of Corynebacterium glutamicum.
Cho, Jae Sung; Choi, Kyeong Rok; Prabowo, Cindy Pricilia Surya; Shin, Jae Ho; Yang, Dongsoo; Jang, Jaedong; Lee, Sang Yup
2017-07-01
Genome engineering of Corynebacterium glutamicum, an important industrial microorganism for amino acid production, currently relies on random mutagenesis and inefficient double-crossover events. Here we report a rapid genome engineering strategy to scarlessly knock out one or more genes in C. glutamicum in a sequential and iterative manner. Recombinase RecT is used to incorporate synthetic single-stranded oligodeoxyribonucleotides into the genome and CRISPR/Cas9 to counter-select negative mutants. We completed the system by engineering the respective plasmids harboring CRISPR/Cas9 and RecT for efficient curing, such that multiple genes can be targeted iteratively and the final strains are free of plasmids. To demonstrate the system, seven different mutants were constructed within two weeks to study the combinatorial deletion effects of three different genes on the production of γ-aminobutyric acid, an industrially relevant chemical of much interest. This genome engineering strategy will expedite metabolic engineering of C. glutamicum. Copyright © 2017 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
Engineering aspects of design and integration of ECE diagnostic in ITER
Udintsev, V. S.; Taylor, G.; Pandya, H. K.B.; ...
2015-03-12
The ITER ECE diagnostic [1] needs not only to meet measurement requirements, but also to withstand various loads, such as electromagnetic, mechanical, neutronic and thermal loads, and to be protected from stray ECH radiation at 170 GHz and from other millimeter-wave emission, such as Collective Thomson scattering, which is planned to operate at 60 GHz. The same or similar loads will be applied to other millimetre-wave diagnostics [2], located both in-vessel and in port plugs. These loads must be taken into account throughout the design phases of the ECE and other microwave diagnostics to ensure their structural integrity and maintainability. The integration of microwave diagnostics with other ITER systems is another challenging activity which is currently ongoing through port integration and in-vessel integration work. Port integration has to address the maintenance and safety aspects of diagnostics, too. Engineering solutions which are being developed to support and to operate the ITER ECE diagnostic, whilst complying with safety and maintenance requirements, are discussed in this paper.
Correlative cellular ptychography with functionalized nanoparticles at the Fe L-edge
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gallagher-Jones, Marcus; Dias, Carlos Sato Baraldi; Pryor, Alan
Precise localization of nanoparticles within a cell is crucial to the understanding of cell-particle interactions and has broad applications in nanomedicine. Here we report a proof-of-principle experiment for imaging individual functionalized nanoparticles within a mammalian cell by correlative microscopy. Using a chemically-fixed HeLa cell labeled with fluorescent core-shell nanoparticles as a model system, we implemented a graphene-oxide layer as a substrate to significantly reduce background scattering. We identified cellular features of interest by fluorescence microscopy, followed by scanning transmission X-ray tomography to localize the particles in 3D, and ptychographic coherent diffractive imaging of the fine features in the region at high resolution. By tuning the X-ray energy to the Fe L-edge, we demonstrated sensitive detection of nanoparticles composed of a 22 nm magnetic Fe3O4 core encased by a 25-nm-thick fluorescent silica (SiO2) shell. These fluorescent core-shell nanoparticles act as landmarks and offer clarity in a cellular context. Our correlative microscopy results confirmed a subset of particles to be fully internalized, and high-contrast ptychographic images showed two oxidation states of individual nanoparticles with a resolution of ~16.5 nm. The ability to precisely localize individual fluorescent nanoparticles within mammalian cells will expand our understanding of the structure/function relationships for functionalized nanoparticles.
Giewekemeyer, Klaus; Philipp, Hugh T.; Wilke, Robin N.; Aquila, Andrew; Osterhoff, Markus; Tate, Mark W.; Shanks, Katherine S.; Zozulya, Alexey V.; Salditt, Tim; Gruner, Sol M.; Mancuso, Adrian P.
2014-01-01
Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single-photon detection, detecting fluxes exceeding 1 × 10^8 8-keV photons pixel^-1 s^-1, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10^10 photons µm^-2 s^-1 within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with very modest attenuation, while 'still' images of the empty-beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and the CDI methodology for reconstruction of non-sensitive detector regions, partially also extending the active detector area, are described. PMID:25178008
Giewekemeyer, Klaus; Philipp, Hugh T; Wilke, Robin N; Aquila, Andrew; Osterhoff, Markus; Tate, Mark W; Shanks, Katherine S; Zozulya, Alexey V; Salditt, Tim; Gruner, Sol M; Mancuso, Adrian P
2014-09-01
Coherent (X-ray) diffractive imaging (CDI) is an increasingly popular form of X-ray microscopy, mainly due to its potential to produce high-resolution images and the lack of an objective lens between the sample and its corresponding imaging detector. One challenge, however, is that very high dynamic range diffraction data must be collected to produce both quantitative and high-resolution images. In this work, hard X-ray ptychographic coherent diffractive imaging has been performed at the P10 beamline of the PETRA III synchrotron to demonstrate the potential of a very wide dynamic range imaging X-ray detector (the Mixed-Mode Pixel Array Detector, or MM-PAD). The detector is capable of single-photon detection, detecting fluxes exceeding 1 × 10^8 8-keV photons pixel^-1 s^-1, and framing at 1 kHz. A ptychographic reconstruction was performed using a peak focal intensity on the order of 1 × 10^10 photons µm^-2 s^-1 within an area of approximately 325 nm × 603 nm. This was done without need of a beam stop and with very modest attenuation, while 'still' images of the empty-beam far-field intensity were recorded without any attenuation. The treatment of the detector frames and the CDI methodology for reconstruction of non-sensitive detector regions, partially also extending the active detector area, are described.
Correlative cellular ptychography with functionalized nanoparticles at the Fe L-edge
Gallagher-Jones, Marcus; Dias, Carlos Sato Baraldi; Pryor, Alan; ...
2017-07-06
Precise localization of nanoparticles within a cell is crucial to the understanding of cell-particle interactions and has broad applications in nanomedicine. Here we report a proof-of-principle experiment for imaging individual functionalized nanoparticles within a mammalian cell by correlative microscopy. Using a chemically-fixed HeLa cell labeled with fluorescent core-shell nanoparticles as a model system, we implemented a graphene-oxide layer as a substrate to significantly reduce background scattering. We identified cellular features of interest by fluorescence microscopy, followed by scanning transmission X-ray tomography to localize the particles in 3D, and ptychographic coherent diffractive imaging of the fine features in the region at high resolution. By tuning the X-ray energy to the Fe L-edge, we demonstrated sensitive detection of nanoparticles composed of a 22 nm magnetic Fe3O4 core encased by a 25-nm-thick fluorescent silica (SiO2) shell. These fluorescent core-shell nanoparticles act as landmarks and offer clarity in a cellular context. Our correlative microscopy results confirmed a subset of particles to be fully internalized, and high-contrast ptychographic images showed two oxidation states of individual nanoparticles with a resolution of ~16.5 nm. The ability to precisely localize individual fluorescent nanoparticles within mammalian cells will expand our understanding of the structure/function relationships for functionalized nanoparticles.
Methane Dual Expander Aerospike Nozzle Rocket Engine
2012-03-22
include O/F ratio, thrust, and engine geometry. After thousands of iterations over the design space, the selected MDEAN engine concept has 349 s of...
Advances and challenges in cryo ptychography at the Advanced Photon Source.
Deng, J; Vine, D J; Chen, S; Nashed, Y S G; Jin, Q; Peterka, T; Vogt, S; Jacobsen, C
Ptychography has emerged as a nondestructive tool to quantitatively study extended samples at a high spatial resolution. In this manuscript, we report on recent developments from our team. We have combined cryo ptychography and fluorescence microscopy to provide simultaneous views of ultrastructure and elemental composition, we have developed multi-GPU parallel computation to speed up ptychographic reconstructions, and we have implemented fly-scan ptychography to allow for faster data acquisition. We conclude with a discussion of future challenges in high-resolution 3D ptychography.
Strategies for high-throughput focused-beam ptychography
Jacobsen, Chris; Deng, Junjing; Nashed, Youssef
2017-08-08
X-ray ptychography is being utilized for a wide range of imaging experiments with a resolution beyond the limit of the X-ray optics used. Introducing a parameter for the ptychographic resolution gain G_p (the ratio of the beam size to the achieved pixel size in the reconstructed image), strategies for data sampling and for increasing imaging throughput when the specimen is at the focus of an X-ray beam are considered. As a result, the tradeoffs between large and small illumination spots are examined.
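The gain parameter itself is plain arithmetic; a quick illustration with invented numbers:

```python
beam_size_nm = 50.0                  # illustrative focused-beam diameter
pixel_size_nm = 5.0                  # illustrative reconstructed pixel size
G_p = beam_size_nm / pixel_size_nm   # ptychographic resolution gain
print(G_p)  # 10.0: the reconstruction resolves 10x finer than the probe size
```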
Phase retrieval from local measurements in two dimensions
NASA Astrophysics Data System (ADS)
Iwen, Mark; Preskitt, Brian; Saab, Rayan; Viswanathan, Aditya
2017-08-01
The phase retrieval problem has appeared in a multitude of applications for decades. While ad hoc solutions have existed since the early 1970s, recent developments have provided algorithms that offer promising theoretical guarantees under increasingly realistic assumptions. Motivated by ptychographic imaging, we generalize a recent result on phase retrieval of a one-dimensional objective vector x ∈ ℂ^d to recover a two-dimensional sample Q ∈ ℂ^{d×d} from phaseless measurements, using a tensor product formulation to extend the previous work.
Strategies for high-throughput focused-beam ptychography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jacobsen, Chris; Deng, Junjing; Nashed, Youssef
X-ray ptychography is being utilized for a wide range of imaging experiments with a resolution beyond the limit of the X-ray optics used. Introducing a parameter for the ptychographic resolution gain G_p (the ratio of the beam size to the achieved pixel size in the reconstructed image), strategies for data sampling and for increasing imaging throughput when the specimen is at the focus of an X-ray beam are considered. As a result, the tradeoffs between large and small illumination spots are examined.
ERIC Educational Resources Information Center
Bottoms, SueAnn I.; Ciechanowski, Kathryn M.; Hartman, Brian
2015-01-01
Iterative cycles of enactment embedded in culturally and linguistically diverse contexts provide rich opportunities for preservice teachers (PSTs) to enact core practices of science. This study is situated in the larger Families Involved in Sociocultural Teaching and Science, Technology, Engineering and Mathematics (FIESTAS) project, which weaves…
Breadboard RL10-2B low-thrust operating mode (second iteration) test report
NASA Technical Reports Server (NTRS)
Kanic, Paul G.; Kaldor, Raymond B.; Watkins, Pia M.
1988-01-01
Cryogenic rocket engines requiring a cooling process to thermally condition the engine to operating temperature can be made more efficient if the cooling propellants can be burned. Tank head idle and pumped idle modes can be used to burn propellants employed for cooling, thereby providing useful thrust. Such idle modes require the use of a heat exchanger to vaporize oxygen prior to injection into the combustion chamber. During December 1988, Pratt and Whitney conducted a series of engine hot firings demonstrating the operation of two new, previously untested oxidizer heat exchanger designs. The program was a second iteration of previous low-thrust testing conducted in 1984, during which a first-generation heat exchanger design was used. Although operation was demonstrated at tank head idle and pumped idle, the engine experienced instability when propellants could not be supplied to the heat exchanger at design conditions.
Airbreathing engine selection criteria for SSTO propulsion system
NASA Astrophysics Data System (ADS)
Ohkami, Yoshiaki; Maita, Masataka
1995-02-01
This paper presents airbreathing engine selection criteria to be applied to the propulsion system of a Single Stage To Orbit (SSTO) vehicle. To establish the criteria, a relation among three major parameters, i.e., delta-V capability, weight penalty, and effective specific impulse of the engine subsystem, is derived and compared against the corresponding parameters of the LH2/LOX rocket engine. The effective specific impulse is a function of the engine I(sub sp) and the vehicle thrust-to-drag ratio, which is approximated by a function of the vehicle velocity. The weight penalty includes the engine dry weight and the cooling subsystem weight. The delta-V capability is defined by the velocity region from the minimum operating velocity up to the maximum velocity. The vehicle feasibility is investigated in terms of the structural and propellant weights, which requires an iteration process adjusting the system parameters. The system parameters are computed by iteration based on the Newton-Raphson method. It is concluded that performance in the higher velocity region is extremely important, so that airbreathing engines are required to operate beyond the velocity equivalent to the rocket engine exhaust velocity (approximately 4500 m/s).
Preliminary consideration of CFETR ITER-like case diagnostic system.
Li, G S; Yang, Y; Wang, Y M; Ming, T F; Han, X; Liu, S C; Wang, E H; Liu, Y K; Yang, W J; Li, G Q; Hu, Q S; Gao, X
2016-11-01
Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case are presented. Based on the ITER diagnostic system, three versions of increasing complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Versions B and C are both mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.
Preliminary consideration of CFETR ITER-like case diagnostic system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, G. S.; Liu, Y. K.; Gao, X.
2016-11-15
Chinese Fusion Engineering Test Reactor (CFETR) is a new superconducting tokamak device being designed in China, which aims at bridging the gap between ITER and DEMO, where DEMO is a tokamak demonstration fusion reactor. Two diagnostic cases, an ITER-like case and a towards-DEMO case, have been considered for the CFETR early and later operating phases, respectively. In this paper, some preliminary considerations of the ITER-like case are presented. Based on the ITER diagnostic system, three versions of increasing complexity and coverage of the ITER-like case diagnostic system have been developed, with different goals and functions. Version A aims only at machine protection and basic control. Versions B and C are both mainly for machine protection and basic and advanced control, but version C has an increased level of redundancy necessary for improved measurement capability. The performance of these versions and the needed R&D work are outlined.
Experimentation in software engineering
NASA Technical Reports Server (NTRS)
Basili, V. R.; Selby, R. W.; Hutchens, D. H.
1986-01-01
Experimentation in software engineering supports the advancement of the field through an iterative learning process. In this paper, a framework for analyzing most of the experimental work performed in software engineering over the past several years is presented. A variety of experiments in the framework is described and their contribution to the software engineering discipline is discussed. Some useful recommendations for the application of the experimental process in software engineering are included.
Optimization applications in aircraft engine design and test
NASA Technical Reports Server (NTRS)
Pratt, T. K.
1984-01-01
Starting with the NASA-sponsored STAEBL program, optimization methods based primarily upon the versatile program COPES/CONMIN were introduced over the past few years to a broad spectrum of engineering problems in structural optimization, engine design, engine test, and more recently, manufacturing processes. By automating design and testing processes, many repetitive and costly trade-off studies have been replaced by optimization procedures. Rather than taking engineers and designers out of the loop, optimization has, in fact, put them more in control by providing sophisticated search techniques. The ultimate decision whether to accept or reject an optimal feasible design still rests with the analyst. Feedback obtained from this decision process has been invaluable, since it can be incorporated into the optimization procedure to make it more intelligent. On several occasions, optimization procedures have produced novel designs, such as the nonsymmetric placement of rotor case stiffener rings, not anticipated by engineering designers. In another case, a particularly difficult resonance constraint could not be satisfied using hand iterations for a compressor blade; when the STAEBL program was applied to the problem, a feasible solution was obtained in just two iterations.
ERIC Educational Resources Information Center
McClain, Arianna D.; Hekler, Eric B.; Gardner, Christopher D.
2013-01-01
Background: Previous research from the fields of computer science and engineering highlight the importance of an iterative design process (IDP) to create more creative and effective solutions. Objective: This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall--based…
Wide-field Fourier ptychographic microscopy using laser illumination source
Chung, Jaebum; Lu, Hangwen; Ou, Xiaoze; Zhou, Haojiang; Yang, Changhuei
2016-01-01
Fourier ptychographic (FP) microscopy is a coherent imaging method that can synthesize an image with a higher bandwidth using multiple low-bandwidth images captured at different spatial frequency regions. The method's demand for multiple images drives the need for a brighter illumination scheme and a high-frame-rate camera for faster acquisition. We report the use of a guided laser beam as an illumination source for an FP microscope. It uses a mirror array and a 2-dimensional scanning Galvo mirror system to provide a sample with plane-wave illuminations at diverse incidence angles. The use of a laser introduces speckle into the image capturing process due to reflections between glass surfaces in the system, which appears as slowly varying background fluctuations in the final reconstructed image. We are able to mitigate these artifacts by including a phase image obtained by differential phase contrast (DPC) deconvolution in the FP algorithm. We use a 1-Watt laser configured to provide a collimated beam with 150 mW of power and a beam diameter of 1 cm, allowing a total capturing time of 0.96 seconds for the 96 raw FPM input images in our system, with the camera sensor's frame rate being the bottleneck for speed. We demonstrate a factor of 4 resolution improvement using a 0.1 NA objective lens over the full camera field-of-view of 2.7 mm by 1.5 mm. PMID:27896016
The Iterative Design Process in Research and Development: A Work Experience Paper
NASA Technical Reports Server (NTRS)
Sullivan, George F. III
2013-01-01
The iterative design process is one of many strategies used in new product development. Top-down development strategies, like waterfall development, place a heavy emphasis on planning and simulation. The iterative process, on the other hand, is better suited to the management of small to medium scale projects. Over the past four months, I have worked with engineers at Johnson Space Center on a multitude of electronics projects. By describing the work I have done these last few months, analyzing the factors that have driven design decisions, and examining the testing and verification process, I will demonstrate that iterative design is the obvious choice for research and development projects.
LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Victor W. Wong; Tian Tian; Grant Smedley
2003-08-28
This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston/ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and emissions. A detailed set of piston/ring dynamic and friction models has been developed and applied that illustrates the fundamental relationships between design parameters and friction losses. Various low-friction strategies and concepts have been explored, and engine experiments will validate these concepts. An iterative process of experimentation, simulation and analysis will be followed with the goal of demonstrating a complete optimized low-friction engine system. As planned, MIT has developed guidelines for an initial set of low-friction piston-ring-pack designs. Current recommendations focus on subtle top-piston-ring and oil-control-ring characteristics. A full-scale Waukesha F18 engine has been installed at Colorado State University, and testing of the baseline configuration is in progress. Components for the first design iteration are being procured. Subsequent work includes examining the friction and engine performance data and extending the analyses to other areas to evaluate opportunities for further friction improvement and the impact on oil consumption/emissions and wear, toward demonstrating an optimized reduced-friction engine system.
NIH-IEEE 2015 Strategic Conference on Healthcare Innovations and Point-of-Care Technologies for Prec
NIH and the Institute for Electrical and Electronics Engineering, Engineering in Medicine and Biology Society (IEEE/EMBS) hosted the third iteration of the Healthcare Innovations and Point-of-Care Technologies Conference last week.
Heinsch, Stephen C.; Das, Siba R.; Smanski, Michael J.
2018-01-01
Increasing the final titer of a multi-gene metabolic pathway can be viewed as a multivariate optimization problem. While numerous multivariate optimization algorithms exist, few are specifically designed to accommodate the constraints posed by genetic engineering workflows. We present a strategy for optimizing expression levels across an arbitrary number of genes that requires few design-build-test iterations. We compare the performance of several optimization algorithms on a series of simulated expression landscapes. We show that optimal experimental design parameters depend on the degree of landscape ruggedness. This work provides a theoretical framework for designing and executing numerical optimization on multi-gene systems. PMID:29535690
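One way to picture a design-build-test loop that needs few iterations is discrete coordinate ascent over expression levels on a simulated landscape, as sketched below. The landscape, the level grid, and the round structure are all invented for illustration; this is not the authors' algorithm.

```python
import numpy as np

def titer(levels):
    """Synthetic, mildly rugged titer landscape for a 3-gene pathway (invented)."""
    x = np.asarray(levels, dtype=float)
    opt = np.array([3.0, 1.0, 4.0])
    return np.exp(-np.sum((x - opt) ** 2) / 4.0) + 0.05 * np.sin(5 * x).sum()

def coordinate_dbt(levels, choices=range(6), rounds=3):
    """Each round = one design-build-test iteration: vary one gene, keep the best."""
    levels = list(levels)
    for _ in range(rounds):
        for gene in range(len(levels)):
            scores = {v: titer(levels[:gene] + [v] + levels[gene + 1:])
                      for v in choices}
            levels[gene] = max(scores, key=scores.get)
    return levels, titer(levels)

print(coordinate_dbt([0, 0, 0]))  # converges near the (3, 1, 4) optimum
```

On rugged landscapes such a greedy scheme can stall at local optima, which is why the comparison of optimization algorithms across landscape ruggedness in the abstract matters.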
NASA Astrophysics Data System (ADS)
Cao, Huijun; Cao, Yong; Chu, Yuchuan; He, Xiaoming; Lin, Tao
2018-06-01
Surface evolution is an unavoidable issue in engineering plasma applications. In this article, an iterative method for modeling plasma-surface interactions with a moving interface is proposed and validated. In this method, the plasma dynamics is simulated by an immersed finite element particle-in-cell (IFE-PIC) method, and the surface evolution is modeled by the Huygens wavelet method, which is coupled with the iteration of the IFE-PIC method. Numerical experiments, including prototypical engineering applications such as the erosion of a Hall thruster channel wall, are presented to demonstrate the features of this Huygens IFE-PIC method for simulating dynamic plasma-surface interactions.
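The structure of that coupling can be shown with a deliberately tiny 1D toy, in which a stand-in "plasma solve" assigns more flux to protruding wall points and the Huygens step reduces to recession along the normal. Everything here is invented for illustration; it is not the IFE-PIC or Huygens implementation.

```python
import numpy as np

def toy_flux(height):
    """Stand-in for a PIC result: protruding wall points collect more flux."""
    return 1.0 + 0.5 * (height - height.mean())

def coupled_plasma_surface(height, yield_per_flux=0.01, n_outer=50):
    """Skeleton of the iteration: plasma step, then surface advance, repeated."""
    for _ in range(n_outer):
        flux = toy_flux(height)                  # 1) plasma solve on current wall
        height = height - yield_per_flux * flux  # 2) erode along the normal
    return height

print(coupled_plasma_surface(np.array([0.0, 0.1, 0.0, -0.1, 0.0])))
```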
Software Estimates Costs of Testing Rocket Engines
NASA Technical Reports Server (NTRS)
Smith, C. L.
2003-01-01
Simulation-Based Cost Model (SiCM), a discrete event simulation developed in Extend, simulates pertinent aspects of the testing of rocket propulsion test articles for the purpose of estimating the costs of such testing during time intervals specified by its users. A user enters input data for control of simulations; information on the nature of, and activity in, a given testing project; and information on resources. Simulation objects are created on the basis of this input. Costs of the engineering-design, construction, and testing phases of a given project are estimated from the numbers and labor rates of engineers and technicians employed in each phase; the duration of each phase; the costs of materials used in each phase; and, for the testing phase, the rate of maintenance of the testing facility. The three main outputs of SiCM are (1) a curve, updated at each iteration of the simulation, that shows overall expenditures vs. time during the interval specified by the user; (2) a histogram of the total costs from all iterations of the simulation; and (3) a table displaying means and variances of cumulative costs for each phase from all iterations. Other outputs include spending curves for each phase.
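The cost roll-up such a model performs can be sketched as a small Monte Carlo over uncertain phase durations; the staffing, rates, and distributions below are invented for illustration and bear no relation to SiCM's actual inputs.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_test_cost(n_runs=10000):
    """Toy phase-cost roll-up: labor x duration + materials, per project phase."""
    phases = {                       # (staff, $/hr, mean weeks, materials $)
        "design":       (4, 120.0,  8.0,  20e3),
        "construction": (6,  90.0, 12.0, 150e3),
        "testing":      (5, 110.0,  6.0,  60e3),
    }
    totals = np.zeros(n_runs)
    for staff, rate, weeks, materials in phases.values():
        dur = rng.normal(weeks, 0.15 * weeks, n_runs).clip(min=0)  # uncertain
        totals += staff * rate * 40.0 * dur + materials            # 40 hr weeks
    return totals

costs = simulate_test_cost()
print(f"mean ${costs.mean():,.0f}, std ${costs.std():,.0f}")
```

Histogramming the `costs` array reproduces the kind of total-cost distribution and per-phase mean/variance table the abstract describes as SiCM's outputs.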
ITER Cryoplant Infrastructures
NASA Astrophysics Data System (ADS)
Fauve, E.; Monneret, E.; Voigt, T.; Vincent, G.; Forgeas, A.; Simon, M.
2017-02-01
The ITER Tokamak requires an average of 75 kW of refrigeration power at 4.5 K and 600 kW of refrigeration power at 80 K to maintain the nominal operating condition of the ITER thermal shields, superconducting magnets and cryopumps. This is produced by the ITER Cryoplant, a complex cluster of refrigeration systems including in particular three identical Liquid Helium Plants and two identical Liquid Nitrogen Plants. Beyond the equipment directly part of the Cryoplant, colossal infrastructures are required. These infrastructures account for a large part of the Cryoplant's layout, budget and engineering effort. It is the ITER Organization's responsibility to ensure that all infrastructures are adequately sized and designed to interface with the Cryoplant. This proceeding presents the overall architecture of the Cryoplant and provides orders of magnitude for the Cryoplant building and utilities: electricity, cooling water, heating, ventilation and air conditioning (HVAC).
From Intent to Action: An Iterative Engineering Process
ERIC Educational Resources Information Center
Mouton, Patrice; Rodet, Jacques; Vacaresse, Sylvain
2015-01-01
Quite by chance, and over the course of a few haphazard meetings, a Master's degree in "E-learning Design" gradually developed in a Faculty of Economics. Its original and evolving design was the result of an iterative process carried out, not by a single Instructional Designer (ID), but by a full ID team. Over the last 10 years it has…
NASA Astrophysics Data System (ADS)
Akiba, Masato; Matsui, Hideki; Takatsu, Hideyuki; Konishi, Satoshi
Technical issues regarding the fusion power plant that need to be developed in the period of ITER construction and operation, both with ITER and with other facilities that complement ITER, are described in this section. Three major fields are considered to be important in fusion technology. Section 4.1 summarizes the blanket study and the ITER Test Blanket Module (TBM) development, which focuses its effort on the first-generation power blanket to be installed in DEMO. ITER will be equipped with 6 TBMs, which are developed under each party's fusion program. In Japan, the solid breeder using water as a coolant is the primary candidate, and the He-cooled pebble bed is the alternative. Other liquid options such as LiPb, Li or molten salt are developed under other parties' initiatives. The Test Blanket Working Group (TBWG) is coordinating these efforts. Japanese universities are investigating advanced concepts and fundamental crosscutting technologies. Section 4.2 introduces material development and, in particular, the international irradiation facility IFMIF. Reduced-activation ferritic/martensitic steels are identified as promising candidates for the structural material of the first-generation fusion blanket, while vanadium alloys and SiC/SiC composites are pursued as advanced options. The IFMIF is currently planning the next phase of joint activity, EVEDA (Engineering Validation and Engineering Design Activity), which encompasses construction. Material studies, together with the ITER TBM, will provide essential technical information for the development of the fusion power plant. Other technical issues to be addressed regarding the first-generation fusion power plant are summarized in Section 4.3. Development of components for ITER has made remarkable progress on major essential technologies also necessary for future fusion plants; however, many still need further improvement toward the power plant. Such areas include the divertor, plasma heating/current drive, magnets, tritium, and remote handling. There remain many other technical issues for the power plant which require integrated efforts.
Design Features of the Neutral Particle Diagnostic System for the ITER Tokamak
NASA Astrophysics Data System (ADS)
Petrov, S. Ya.; Afanasyev, V. I.; Melnik, A. D.; Mironov, M. I.; Navolotsky, A. S.; Nesenevich, V. G.; Petrov, M. P.; Chernyshev, F. V.; Kedrov, I. V.; Kuzmin, E. G.; Lyublin, B. V.; Kozlovski, S. S.; Mokeev, A. N.
2017-12-01
The control of the deuterium-tritium (DT) fuel isotopic ratio has to ensure the best performance of the ITER thermonuclear fusion reactor. The diagnostic system described in this paper allows the measurement of this ratio by analyzing the hydrogen isotope fluxes (performing neutral particle analysis (NPA)). The development and supply of the NPA diagnostics for ITER were delegated to the Russian Federation. The diagnostics is being developed at the Ioffe Institute. The system consists of two analyzers, viz., LENPA (Low Energy Neutral Particle Analyzer) with a 10-200 keV energy range and HENPA (High Energy Neutral Particle Analyzer) with a 0.1-4.0 MeV energy range. Simultaneous operation of both analyzers in different energy ranges enables researchers to measure the DT fuel ratio both in the central burning plasma (thermonuclear burn zone) and at the edge. When developing the diagnostic complex, it was necessary to account for the impact of several factors: high levels of neutron and gamma radiation, the direct vacuum connection to the ITER vessel, implying high tritium containment, strict requirements on the reliability of all units and mechanisms, and the limited space available for accommodation of the diagnostic hardware at the ITER tokamak. The paper describes the design of the diagnostic complex and the engineering solutions that make it possible to conduct measurements under tokamak reactor conditions. The proposed engineering solutions provide a common vacuum channel, safe with respect to thermal and mechanical loads, for hydrogen isotope atoms to pass to the analyzers; ensure efficient shielding of the analyzers from the ITER stray magnetic field (up to 1 kG); provide remote control of the NPA diagnostic complex, in particular connection/disconnection of the NPA vacuum beamline from the ITER vessel; meet the ITER radiation safety requirements; and ensure measurements of the fuel isotopic ratio under high levels of neutron and gamma radiation.
ERIC Educational Resources Information Center
Estévez-Ayres, Iria; Alario-Hoyos, Carlos; Pérez-Sanagustín, Mar; Pardo, Abelardo; Crespo-García, Raquel M.; Leony, Derick; Parada G., Hugo A.; Delgado-Kloos, Carlos
2015-01-01
In the last decade, engineering education has evolved in many ways to meet societal demands. Universities offer more flexible curricula and put a lot of effort into the acquisition of professional engineering skills by the students. In many universities, the courses in the first years of different engineering degrees share program and objectives,…
NASA Astrophysics Data System (ADS)
Suman, Rakesh; O'Toole, Peter
2014-03-01
Here we report a novel label-free, high-contrast and quantitative method for imaging live cells. The technique reconstructs an image from overlapping diffraction patterns using a ptychographical algorithm. The algorithm utilises both amplitude and phase data from the sample to report on quantitative changes related to the refractive index (RI) and thickness of the specimen. We report the ability of this technique to generate high-contrast images, to visualise neurite elongation in neuronal cells, and to provide a measure of cell proliferation.
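The abstract does not spell out the reconstruction step. For orientation, a minimal sketch of a single ePIE-style object update at one scan position might look as follows; all array names and the step size alpha are illustrative, not the authors' code:

import numpy as np

def epie_object_update(obj, probe, diff_amp, pos, alpha=1.0):
    # One ePIE-style object update at a single scan position.
    # diff_amp: measured diffraction amplitude (sqrt of intensity);
    # pos: top-left corner of the probe window in object pixels.
    py, px = pos
    sy, sx = probe.shape
    view = obj[py:py + sy, px:px + sx]      # region hit by the probe

    exit_wave = view * probe                # exit wave at this position
    F = np.fft.fft2(exit_wave)              # far-field (Fraunhofer) model

    # Enforce the measured modulus, keep the computed phase.
    F_new = diff_amp * np.exp(1j * np.angle(F))
    exit_new = np.fft.ifft2(F_new)

    # Standard ePIE object-update formula.
    view += alpha * np.conj(probe) * (exit_new - exit_wave) \
            / (np.abs(probe).max() ** 2)
    return obj

Replacing the modulus of the modelled diffraction pattern with the measurement while keeping the phase is the constraint that drives all PIE-type algorithms; iterating this update over overlapping positions recovers both amplitude and phase.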
Establishing Physical and Engineering Science Base to Bridge from ITER to Demo
NASA Astrophysics Data System (ADS)
Peng, Y.-K. Martin; Abdou, M.; Gates, D.; Hegna, C.; Hill, D.; Najmabadi, F.; Navratil, G.; Parker, R.
2007-11-01
A Nuclear Component Testing (NCT) Discussion Group emerged recently to clarify how ``a lowered-risk, reduced-cost approach can provide a progressive fusion environment beyond the ITER level to explore, discover, and help establish the remaining, critically needed physical and engineering sciences knowledge base for Demo.'' The group, assuming success of ITER and other contemporary projects, identified critical ``gap-filling'' investigations: plasma startup, tritium self-sufficiency, plasma facing surface performance and maintainability, first wall/blanket/divertor materials defect control and lifetime management, and remote handling. Only standard or spherical tokamak plasma conditions below the advanced regime are assumed to lower the anticipated physics risk to continuous operation (~2 weeks). Modular designs and remote handling capabilities are included to mitigate the risk of component failure and ease replacement. Aspect ratio should be varied to lower the cost, accounting for the contending physics risks and the near-term R&D. Cost and time-effective staging from H-H, D-D, to D-T will also be considered. *Work supported by USDOE.
IDC Re-Engineering Phase 2 Glossary Version 1.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Young, Christopher J.; Harris, James M.
2017-01-01
This document contains the glossary of terms used for the IDC Re-Engineering Phase 2 project. This version was created for Iteration E3. The IDC applies automatic processing methods in order to produce, archive, and distribute standard IDC products on behalf of all States Parties.
Optimization of sampling pattern and the design of Fourier ptychographic illuminator.
Guo, Kaikai; Dong, Siyuan; Nanda, Pariksheet; Zheng, Guoan
2015-03-09
Fourier ptychography (FP) is a recently developed imaging approach that facilitates high-resolution imaging beyond the cutoff frequency of the employed optics. In the original FP approach, a periodic LED array is used for sample illumination, and therefore the scanning pattern is a uniform grid in Fourier space. Such a uniform sampling scheme leads to three major problems for FP, namely: 1) it requires a large number of raw images, 2) it introduces raster grid artifacts in the reconstruction process, and 3) it requires a high-dynamic-range detector. Here, we investigate scanning sequences and sampling patterns to optimize the FP approach. For most biological samples, signal energy is concentrated in the low-frequency region, and as such, we can perform non-uniform Fourier sampling in FP by considering the signal structure. In contrast, conventional ptychography performs uniform sampling over the entire real space. To implement the non-uniform Fourier sampling scheme in FP, we have designed and built an illuminator using LEDs mounted on a 3D-printed plastic case. The advantages of this illuminator are threefold: 1) it reduces the number of image acquisitions by at least 50% (68 raw images versus 137 in the original FP setup), 2) it departs from the translational symmetry of sampling to solve the raster grid artifact problem, and 3) it reduces the dynamic range of the captured images 6-fold. The results reported in this paper significantly shorten acquisition time and improve the quality of FP reconstructions. They may provide new insights for developing Fourier ptychographic imaging platforms and find important applications in digital pathology.
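As a rough illustration of non-uniform Fourier sampling of the kind described, the following sketch generates a ring-based LED pattern that samples the low-frequency region densely and the periphery sparsely. All counts and exponents are made-up parameters, not the authors' design:

import numpy as np

def nonuniform_led_pattern(n_rings=5, growth=1.6, k_max=1.0, density_exp=2.0):
    # Ring radii grow super-linearly, so sampling is densest near k = 0,
    # where most biological samples concentrate their signal energy.
    kx, ky = [], []
    for i in range(n_rings):
        r = k_max * ((i + 1) / n_rings) ** density_exp
        n_leds = max(1, int(round(4 * growth ** i)))  # far fewer than a full grid
        for j in range(n_leds):
            theta = 2 * np.pi * j / n_leds
            kx.append(r * np.cos(theta))
            ky.append(r * np.sin(theta))
    return np.array(kx), np.array(ky)

kx, ky = nonuniform_led_pattern()
print(kx.size, "illumination angles")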
Teaching Engineering Design Through Paper Rockets
ERIC Educational Resources Information Center
Welling, Jonathan; Wright, Geoffrey A.
2018-01-01
The paper rocket activity described in this article effectively teaches the engineering design process (EDP) by engaging students in a problem-based learning activity that encourages iterative design. For example, the first rockets the students build typically only fly between 30 and 100 feet. As students test and evaluate their rocket designs,…
Systems Engineering of Electric and Hybrid Vehicles
NASA Technical Reports Server (NTRS)
Kurtz, D. W.; Levin, R. R.
1986-01-01
Technical paper notes systems engineering principles applied to development of electric and hybrid vehicles such that system performance requirements support overall program goal of reduced petroleum consumption. Paper discusses iterative design approach dictated by systems analyses. In addition to obvious performance parameters of range, acceleration rate, and energy consumption, systems engineering also considers such major factors as cost, safety, reliability, comfort, necessary supporting infrastructure, and availability of materials.
Low-Cost, Net-Shape Ceramic Radial Turbine Program
1985-05-01
This is the final technical report of the Low-Cost, Net-Shape Ceramic Radial Turbine program. Program management and materials characterization were conducted at Garrett Turbine Engine Company (GTEC), Phoenix, Arizona; test bar and rotor processing iterations supported automotive gas turbine engine rotor development efforts at ACC.
Engineering Design of ITER Prototype Fast Plant System Controller
NASA Astrophysics Data System (ADS)
Goncalves, B.; Sousa, J.; Carvalho, B.; Rodrigues, A. P.; Correia, M.; Batista, A.; Vega, J.; Ruiz, M.; Lopez, J. M.; Rojo, R. Castro; Wallander, A.; Utzel, N.; Neto, A.; Alves, D.; Valcarcel, D.
2011-08-01
The ITER control, data access and communication (CODAC) design team identified the need for two types of plant systems. A slow control plant system is based on industrial automation technology with maximum sampling rates below 100 Hz, and a fast control plant system is based on embedded technology with higher sampling rates and more stringent real-time requirements than those of slow controllers. The latter is applicable to diagnostics and plant systems in closed control loops whose cycle times are below 1 ms. Fast controllers will be dedicated industrial controllers with the ability to supervise other fast and/or slow controllers, interface to actuators and sensors and, if necessary, high-performance networks. Two prototypes of a fast plant system controller specialized for data acquisition and constrained by ITER technological choices are being built using two different form factors. This prototyping activity contributes to the Plant Control Design Handbook effort of standardization, specifically regarding fast controller characteristics. With a general-purpose fast controller design in view, diagnostic use cases with specific requirements were analyzed and are presented along with the interfaces to CODAC and sensors. The requirements and constraints that real-time plasma control imposes on the design were also taken into consideration. Functional specifications and a technology-neutral architecture, together with their implications for the engineering design, were considered. The detailed engineering design compliant with ITER standards was performed and will be discussed in detail. Emphasis will be given to the integration of the controller in the standard CODAC environment. Requirements for the EPICS IOC providing the interface to the outside world, the prototype decisions on form factor, real-time operating system, and high-performance networks will also be discussed, as well as the requirements for data streaming to CODAC for visualization and archiving.
Active spectroscopic measurements using the ITER diagnostic system.
Thomas, D M; Counsell, G; Johnson, D; Vasu, P; Zvonkov, A
2010-10-01
Active (beam-based) spectroscopic measurements are intended to provide a number of crucial parameters for the ITER device being built in Cadarache, France. These measurements include the determination of impurity ion temperatures, absolute densities, and velocity profiles, as well as the determination of the plasma current density profile. Because ITER will be the first experiment to study long-timescale (∼1 h) fusion burn plasmas, of particular interest is the ability to study the profile of the thermalized helium ash resulting from the slowing down and confinement of the fusion alphas. These measurements will utilize both the 1 MeV heating neutral beams and a dedicated 100 keV hydrogen diagnostic neutral beam. A number of separate instruments are being designed and built by several of the ITER partners to meet the different spectroscopic measurement needs and to provide the maximum physics information. In this paper, we describe the planned measurements and the intended diagnostic ensemble, and we discuss specific physics and engineering challenges for these measurements in ITER.
Chen, Tinggui; Xiao, Renbin
2014-01-01
Due to fierce market competition, how to improve product quality and reduce development cost determines the core competitiveness of enterprises. However, design iteration generally increases product cost and delays development time, so how to identify and model couplings among tasks in product design and development has become an important issue for enterprises to settle. In this paper, the shortcomings of the WTM model are discussed, and a tearing approach as well as an inner-iteration method is used to complement the classic WTM model. In addition, the artificial bee colony (ABC) algorithm is introduced to find the optimal decoupling schemes. Firstly, the tearing approach and the inner-iteration method are analyzed for solving coupled sets. Secondly, a hybrid iteration model combining these two technologies is set up. Thirdly, a high-performance swarm intelligence algorithm, artificial bee colony, is adopted to realize problem-solving. Finally, an engineering design of a chemical processing system is given in order to verify its reasonability and effectiveness.
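For readers unfamiliar with the optimizer, a minimal artificial bee colony (ABC) loop on a toy continuous objective is sketched below. It illustrates the employed/onlooker/scout structure only and is not the paper's decoupling-specific formulation; all parameters are illustrative:

import numpy as np
rng = np.random.default_rng(0)

def abc_minimize(f, dim, bounds, n_food=20, limit=10, iters=200):
    # Employed/onlooker mutations (merged here for brevity) plus a scout
    # phase that abandons food sources exhausted after `limit` trials.
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_food, dim))
    fX = np.array([f(x) for x in X])
    trials = np.zeros(n_food, dtype=int)
    for _ in range(iters):
        for i in range(n_food):
            k = rng.integers(n_food)          # random partner source
            j = rng.integers(dim)             # random coordinate
            cand = X[i].copy()
            cand[j] += rng.uniform(-1, 1) * (X[i, j] - X[k, j])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fX[i]:
                X[i], fX[i], trials[i] = cand, fc, 0   # greedy selection
            else:
                trials[i] += 1
        for i in np.where(trials > limit)[0]: # scouts explore anew
            X[i] = rng.uniform(lo, hi, dim)
            fX[i] = f(X[i])
            trials[i] = 0
    best = int(np.argmin(fX))
    return X[best], fX[best]

x_best, f_best = abc_minimize(lambda v: float(np.sum(v ** 2)), dim=4, bounds=(-5, 5))
print(x_best, f_best)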
Physics and Engineering Design of the ITER Electron Cyclotron Emission Diagnostic
NASA Astrophysics Data System (ADS)
Rowan, W. L.; Austin, M. E.; Houshmandyar, S.; Phillips, P. E.; Beno, J. H.; Ouroua, A.; Weeks, D. A.; Hubbard, A. E.; Stillerman, J. A.; Feder, R. E.; Khodak, A.; Taylor, G.; Pandya, H. K.; Danani, S.; Kumar, R.
2015-11-01
Electron temperature (Te) measurements and consequent electron thermal transport inferences will be critical to the non-active phases of ITER operation and will take on added importance during the alpha heating phase. Here, we describe our design for the diagnostic that will measure spatial and temporal profiles of Te using electron cyclotron emission (ECE). Other measurement capabilities include high-frequency instabilities (e.g. ELMs, NTMs, and TAEs). Since results from TFTR and JET suggest that Thomson scattering and ECE differ at high Te due to driven non-Maxwellian distributions, non-thermal features of the ITER electron distribution must be documented. The ITER environment presents other challenges including space limitations, vacuum requirements, and very high neutron fluence. Plasma control in ITER will require real-time Te. The diagnostic design that evolved from these sometimes-conflicting needs and requirements will be described component by component, with special emphasis on the integration to form a single effective diagnostic system. Supported by PPPL/US-DA via subcontract S013464-C to UT Austin.
Vincent, Julian F V
2003-01-01
Biomimetics is seen as a path from biology to engineering. The only path from engineering to biology in current use is the application of engineering concepts and models to biological systems. However, there is another pathway: the verification of biological mechanisms by manufacture, leading to an iterative process between biology and engineering in which the new understanding that the engineering implementation of a biological system can bring is fed back into biology, allowing a more complete and certain understanding and the possibility of further revelations for application in engineering. This is a pathway as yet unformalized, and one that offers the possibility that engineers can also be scientists. PMID:14561351
NASA Technical Reports Server (NTRS)
Zinnecker, Alicia M.; Chapman, Jeffryes W.; Lavelle, Thomas M.; Litt, Jonathan S.
2014-01-01
The Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS) is a tool that has been developed to allow a user to build custom models of systems governed by thermodynamic principles using a template to model each basic process. Validation of this tool in an engine model application was performed through reconstruction of the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) (v2) using the building blocks from the T-MATS (v1) library. In order to match the two engine models, it was necessary to address differences in several assumptions made in the two modeling approaches. After these modifications were made, validation of the engine model continued by integrating both a steady-state and dynamic iterative solver with the engine plant and comparing results from steady-state and transient simulation of the T-MATS and C-MAPSS models. The results show that the T-MATS engine model was accurate within 3% of the C-MAPSS model, with inaccuracy attributed to the increased dimension of the iterative solver solution space required by the engine model constructed using the T-MATS library. This demonstrates that, given an understanding of the modeling assumptions made in T-MATS and a baseline model, the T-MATS tool provides a viable option for constructing a computational model of a twin-spool turbofan engine that may be used in simulation studies.
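The iterative solver mentioned here is the piece that balances an engine model's conservation equations at each operating point. A generic Newton driver of that kind is sketched below under the assumption of as many residuals as unknowns; it is our illustration of the technique, not T-MATS code:

import numpy as np

def newton_solve(residual, x0, tol=1e-8, max_iter=50, eps=1e-6):
    # Drives a vector of residuals (e.g. flow/work balances between
    # engine components) to zero; assumes len(residual(x)) == len(x).
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))
        for j in range(x.size):               # finite-difference Jacobian
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        x = x - np.linalg.solve(J, r)
    return x

# Toy two-unknown "component matching" problem.
sol = newton_solve(lambda x: np.array([x[0] ** 2 + x[1] - 3.0,
                                       x[0] - x[1] ** 2 + 1.0]), x0=[1.0, 1.0])
print(sol)

The remark about solution-space dimension in the abstract is visible even in this sketch: every extra unknown adds a Jacobian column, so larger solver spaces cost more per iteration and can admit slightly different converged states.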
2003-06-01
Data access (1980s), e.g. "What were unit sales in New England last March?", was served by relational databases (RDBMS) and the Structured Query Language (SQL); reporting macros were written in Visual Basic for Applications (VBA). [Figure 20, "Iteration two class diagram": Tech OASIS export script, import filter, data-processing method, MS Excel, and VBA macro components linked by contains, sends-data-to, and executes relationships.]
1990-02-01
Work began on developing a high-quality rendering algorithm based on the radiosity method (Tobias B. Orloff). The algorithm is similar to previous progressive radiosity algorithms except for the following improvements: 1. At each iteration, vertex radiosities are computed using a modified scan-line approach, thus eliminating the quadratic cost associated with a ray-tracing computation of vertex radiosities. 2. At each iteration the scene is...
Iterative algorithms for large sparse linear systems on parallel computers
NASA Technical Reports Server (NTRS)
Adams, L. M.
1982-01-01
Algorithms are developed for assembling in parallel the sparse system of linear equations that results from finite difference or finite element discretizations of elliptic partial differential equations, such as those that arise in structural engineering. Parallel linear stationary iterative algorithms and parallel preconditioned conjugate gradient algorithms are developed for solving these systems. In addition, a model for comparing parallel algorithms on array architectures is developed, and results of this model for the algorithms are given.
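As a concrete instance of the stationary-iterative class discussed, here is a Jacobi solver on a sparse matrix. Every component update depends only on the previous iterate, which is what makes the method attractive on array/parallel architectures; the sketch below is serial NumPy/SciPy with an illustrative test matrix:

import numpy as np
from scipy.sparse import diags

def jacobi(A, b, tol=1e-8, max_iter=500):
    # x_{m+1} = D^{-1} (b - R x_m); every row update uses only the
    # previous iterate, so all rows could be updated concurrently.
    D = A.diagonal()
    R = A - diags(D)                  # off-diagonal part
    x = np.zeros(A.shape[0])
    for _ in range(max_iter):
        x_new = (b - R @ x) / D
        if np.linalg.norm(x_new - x) <= tol * max(1.0, np.linalg.norm(x)):
            return x_new
        x = x_new
    return x

# Toy diagonally dominant system (1D stiffness-like matrix).
n = 1000
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
x = jacobi(A, np.ones(n))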
Beyond ITER: neutral beams for a demonstration fusion reactor (DEMO) (invited).
McAdams, R
2014-02-01
In the development of magnetically confined fusion as an economically sustainable power source, the International Thermonuclear Experimental Reactor (ITER) is currently under construction. Beyond ITER is the demonstration fusion reactor (DEMO) programme, in which the physics and engineering aspects of a future fusion power plant will be demonstrated. DEMO will produce net electrical power. The DEMO programme will be outlined and the role of neutral beams for heating and current drive will be described. In particular, the importance of the efficiency of neutral beam systems, in terms of injected neutral beam power compared to wall-plug power, will be discussed. Options for improving this efficiency, including advanced neutralisers and energy recovery, are discussed.
High-speed Fourier ptychographic microscopy based on programmable annular illuminations.
Sun, Jiasong; Zuo, Chao; Zhang, Jialin; Fan, Yao; Chen, Qian
2018-05-16
High-throughput quantitative phase imaging (QPI) is essential to cellular phenotype characterization as it allows high-content cell analysis and avoids adverse effects of staining reagents on cellular viability and cell signaling. Among different approaches, Fourier ptychographic microscopy (FPM) is probably the most promising technique to realize high-throughput QPI by synthesizing a wide-field, high-resolution complex image from multiple angle-variably illuminated, low-resolution images. However, the large dataset requirement in conventional FPM significantly limits its imaging speed, resulting in low temporal throughput. Moreover, the underlying theoretical mechanism as well as the optimum illumination scheme for high-accuracy phase imaging in FPM remains unclear. Herein, we report a high-speed FPM technique based on programmable annular illuminations (AIFPM). The optical-transfer-function (OTF) analysis of FPM reveals that the low-frequency phase information can only be correctly recovered if the LEDs are precisely located at the edge of the objective numerical aperture (NA) in frequency space. By using only 4 low-resolution images corresponding to 4 tilted illuminations matching a 10×, 0.4 NA objective, we present high-speed imaging results of in vitro HeLa cell mitosis and apoptosis at a frame rate of 25 Hz with a full-pitch resolution of 655 nm at a wavelength of 525 nm (effective NA = 0.8) across a wide field-of-view (FOV) of 1.77 mm², corresponding to a space-bandwidth-time product of 411 megapixels per second. Our work reveals an important capability of FPM towards high-speed high-throughput imaging of in vitro live cells, achieving video-rate QPI performance across a wide range of scales, both spatial and temporal.
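As a quick consistency check (our arithmetic, using only numbers stated in the abstract), the quoted full-pitch resolution follows from the synthetic-aperture relation of FPM:

\[
\mathrm{NA}_{\mathrm{syn}} = \mathrm{NA}_{\mathrm{obj}} + \mathrm{NA}_{\mathrm{ill}} = 0.4 + 0.4 = 0.8,
\qquad
\delta = \frac{\lambda}{\mathrm{NA}_{\mathrm{syn}}} = \frac{525\,\mathrm{nm}}{0.8} \approx 656\,\mathrm{nm},
\]

in agreement with the reported 655 nm full-pitch resolution at the effective NA of 0.8.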
Multidisciplinary systems optimization by linear decomposition
NASA Technical Reports Server (NTRS)
Sobieski, J.
1984-01-01
In a typical design process major decisions are made sequentially. An illustrated example is given for an aircraft design in which the aerodynamic shape is usually decided first, then the airframe is sized for strength and so forth. An analogous sequence could be laid out for any other major industrial product, for instance, a ship. The loops in the discipline boxes symbolize iterative design improvements carried out within the confines of a single engineering discipline, or subsystem. The loops spanning several boxes depict multidisciplinary design improvement iterations. Omitted for graphical simplicity is parallelism of the disciplinary subtasks. The parallelism is important in order to develop a broad workfront necessary to shorten the design time. If all the intradisciplinary and interdisciplinary iterations were carried out to convergence, the process could yield a numerically optimal design. However, it usually stops short of that because of time and money limitations. This is especially true for the interdisciplinary iterations.
LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Victor Wong; Tian Tian; Luke Moughon
2005-09-30
This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston and piston ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and wear. An iterative process of simulation, experimentation and analysis is being followed towards achieving the goal of demonstrating a complete optimized low-friction engine system. To date, a detailed set of piston and piston-ring dynamic and friction models has been developed and applied that illustrates the fundamental relationships between design parameters and friction losses. Low-friction ring designs have already been recommended in a previous phase, with full-scale engine validation partially completed. Current accomplishments include the addition of several additional power cylinder design areas to the overall system analysis. These include analyses of lubricant and cylinder surface finish and a parametric study of piston design. The Waukesha engine was found to be already well optimized in the areas of lubricant, surface skewness and honing cross-hatch angle, where friction reductions of 12% for lubricant and 5% for surface characteristics are projected. For the piston, a friction reduction of up to 50% may be possible by controlling waviness alone, while additional friction reductions are expected when other parameters are optimized. A total power cylinder friction reduction of 30-50% is expected, translating to an engine efficiency increase of two percentage points from its current baseline towards the goal of 50% efficiency. Key elements of the continuing work include further analysis and optimization of the engine piston design, in-engine testing of recommended lubricant and surface designs, design iteration and optimization of previously recommended technologies, and full-engine testing of a complete, optimized, low-friction power cylinder system.
ITER ECE Diagnostic: Design Progress of IN-DA and the diagnostic role for Physics
NASA Astrophysics Data System (ADS)
Pandya, H. K. B.; Kumar, Ravinder; Danani, S.; Shrishail, P.; Thomas, Sajal; Kumar, Vinay; Taylor, G.; Khodak, A.; Rowan, W. L.; Houshmandyar, S.; Udintsev, V. S.; Casal, N.; Walsh, M. J.
2017-04-01
The ECE diagnostic system in ITER will be used for measuring the electron temperature profile evolution, electron temperature fluctuations, the runaway electron spectrum, and the radiated power in the electron cyclotron frequency range (70-1000 GHz). These measurements will be used for advanced real-time plasma control (e.g. steering the electron cyclotron heating beams) and physics studies. The scope of the Indian Domestic Agency (IN-DA) is to design and develop the polarizer splitter units; the broadband (70 to 1000 GHz) transmission lines; a high-temperature calibration source in the Diagnostics Hall; two Michelson interferometers (70 to 1000 GHz); and a 122-230 GHz radiometer. The remainder of the ITER ECE diagnostic system is the responsibility of the US Domestic Agency and the ITER Organization (IO). The design needs to conform to the ITER Organization's strict requirements for reliability, availability, maintainability and inspectability. Progress in the design and development of various subsystems and components, considering various engineering challenges and solutions, will be discussed in this paper. This paper will also highlight how various ECE measurements can enhance understanding of plasma physics in ITER.
A Holistic Approach to Systems Development
NASA Technical Reports Server (NTRS)
Wong, Douglas T.
2008-01-01
Introduces a Holistic and Iterative Design Process. Continuous process but can be loosely divided into four stages. More effort spent early on in the design. Human-centered and Multidisciplinary. Emphasis on Life-Cycle Cost. Extensive use of modeling, simulation, mockups, human subjects, and proven technologies. Human-centered design doesn't mean the human factors discipline is the most important. Disciplines that should be involved in the design: subsystem vendors, configuration management, operations research, manufacturing engineering, simulation/modeling, cost engineering, hardware engineering, software engineering, test and evaluation, human factors, electromagnetic compatibility, integrated logistics support, reliability/maintainability/availability, safety engineering, test equipment, training systems, design-to-cost, life cycle cost, application engineering, etc.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Jan; Ferrada, Juan J; Curd, Warren
During inductive plasma operation of ITER, fusion power will reach 500 MW with an energy multiplication factor of 10. The heat will be transferred by the Tokamak Cooling Water System (TCWS) to the environment using the secondary cooling system. Plasma operations are inherently safe even under the most severe postulated accident condition: a large in-vessel break that results in a loss-of-coolant accident. A functioning cooling water system is not required to ensure safe shutdown. Even though ITER is inherently safe, TCWS equipment (e.g., heat exchangers, piping, pressurizers) are classified as safety important components. This is because the water is predicted to contain low levels of radionuclides (e.g., activated corrosion products, tritium) with activity levels high enough to require the design of components to be in accordance with French regulations for nuclear pressure equipment, i.e., the French Order dated 12 December 2005 (ESPN). ESPN has extended the practical application of the methodology established by the Pressure Equipment Directive (97/23/EC) to nuclear pressure equipment, under French Decree 99-1046 dated 13 December 1999, and Order dated 21 December 1999 (ESP). ASME codes and supplementary analyses (e.g., Failure Modes and Effects Analysis) will be used to demonstrate that the TCWS equipment meets these essential safety requirements. TCWS is being designed to provide not only cooling, with a capacity of approximately 1 GW energy removal, but also elevated temperature baking of first-wall/blanket, vacuum vessel, and divertor. Additional TCWS functions include chemical control of water, draining and drying for maintenance, and facilitation of leak detection/localization. The TCWS interfaces with the majority of ITER systems, including the secondary cooling system. U.S. ITER is responsible for design, engineering, and procurement of the TCWS with industry support from an Engineering Services Organization (ESO) (AREVA Federal Services, with support from Northrop Grumman, and OneCIS). ITER International Organization (ITER-IO) is responsible for design oversight and equipment installation in Cadarache, France. TCWS equipment will be fabricated using ASME design codes with quality assurance and oversight by an Agreed Notified Body (approved by the French regulator) that will ensure regulatory compliance. This paper describes the TCWS design and how U.S. ITER and fabricators will use ASME codes to comply with EU Directives and French Orders and Decrees.
Least Squares Computations in Science and Engineering
1994-02-01
iterative least squares deblurring procedure. Because of the ill-posed characteristics of the deconvolution problem, in the presence of noise, direct...optimization methods. Generally, the problems are accompanied by constraints, such as bound constraints, and the observations are corrupted by noise. The...engineering. This effort has involved interaction with researchers in closed-loop active noise (vibration) control at Phillips Air Force Laboratory
Label-free imaging to study phenotypic behavioural traits of cells in complex co-cultures
NASA Astrophysics Data System (ADS)
Suman, Rakesh; Smith, Gabrielle; Hazel, Kathryn E. A.; Kasprowicz, Richard; Coles, Mark; O'Toole, Peter; Chawla, Sangeeta
2016-02-01
Time-lapse imaging is a fundamental tool for studying cellular behaviours; however, studies of primary cells in complex co-culture environments often require fluorescent labelling and significant light exposure that can perturb their natural function over time. Here, we describe ptychographic phase imaging that permits prolonged label-free time-lapse imaging of microglia in the presence of neurons and astrocytes, which better resembles in vivo microenvironments. We demonstrate the use of ptychography as an assay to study the phenotypic behaviour of microglial cells in primary neuronal co-cultures through the addition of cyclosporine A, a potent immune-modulator.
Chen, S.; Deng, J.; Nashed, Y. S. G.; ...
2016-07-25
Bionanoprobe (BNP), a hard x-ray fluorescence sample-scanning nanoprobe at the Advanced Photon Source of Argonne National Laboratory, has been used to quantitatively study elemental distributions in biological cells with sub-100 nm spatial resolution and high sensitivity. Cryogenic conditions enable biological samples to be studied in their frozen-hydrated state with both ultrastructure and elemental distributions more faithfully preserved compared to conventional chemical fixation or dehydration methods. Furthermore, radiation damage is reduced in two ways: the diffusion rate of free radicals is decreased at low temperatures; and the sample is embedded in vitrified ice, which reduces mass loss.
High-resolution extraction of particle size via Fourier Ptychography
NASA Astrophysics Data System (ADS)
Li, Shengfu; Zhao, Yu; Chen, Guanghua; Luo, Zhenxiong; Ye, Yan
2017-11-01
This paper proposes a method which can extract particle size information with a resolution beyond λ/NA. This is achieved by applying Fourier ptychographic (FP) ideas to the present problem. In a typical FP imaging platform, a 2D LED array is used as the light source for angle-varied illumination, and a series of low-resolution images is taken by a full sequential scan of the LED array. Here, we demonstrate that the particle size information can be extracted by turning on each single LED on a circle. The simulated results show that the proposed method can reduce the total number of images without loss of reliability in the results.
Arc detection for the ICRF system on ITER
NASA Astrophysics Data System (ADS)
D'Inca, R.
2011-12-01
The ICRF system for ITER is designed to respect the high-voltage breakdown limits. However, arcs can still statistically happen and must be quickly detected and suppressed by shutting the RF power down. For the conception of a reliable and efficient detector, analysis of the mechanism of arcs is necessary to find their unique signature. Numerous systems have been conceived to address the issue of arc detection: VSWR-based detectors, RF noise detectors, sound detectors, optical detectors, and S-matrix-based detectors. Until now, none of them has succeeded in demonstrating fulfillment of all requirements, and the studies for ITER now follow three directions: improvement of the existing concepts to fix their flaws, development of new theoretically fully compliant detectors (like the GUIDAR), and combination of several detectors to benefit from the advantages of each of them. Together with the physical and engineering challenges, the development of an arc detection system for ITER raises methodological concerns about extrapolating results from basic experiments and present machines to the ITER-scale ICRF system and about conducting a relevant risk analysis.
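For reference, the classic VSWR-based detector mentioned above reduces to thresholding the standing-wave ratio computed from directional-coupler signals. A minimal sketch, with an illustrative threshold and variable names of our choosing:

import numpy as np

def vswr(v_forward, v_reflected):
    # |Gamma| from directional-coupler amplitudes; clipped below 1 so the
    # ratio stays finite even for a total reflection.
    gamma = np.clip(np.abs(v_reflected) / np.abs(v_forward), 0.0, 0.999999)
    return (1.0 + gamma) / (1.0 - gamma)

def arc_trip(v_forward, v_reflected, vswr_limit=1.5):
    # An arc collapses the line impedance, reflected power jumps, and the
    # RF source is shut down when the VSWR exceeds a preset limit.
    return vswr(v_forward, v_reflected) > vswr_limit

print(arc_trip(1.0, 0.05), arc_trip(1.0, 0.5))  # healthy line vs. likely arc

The known weakness of this scheme, implied by the abstract, is that low-power arcs may not perturb the VSWR enough to trip, which is why noise, optical, sound, S-matrix and GUIDAR-type detectors are being pursued in combination.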
NASA Technical Reports Server (NTRS)
Winget, J. M.; Hughes, T. J. R.
1985-01-01
The particular problems investigated in the present study arise from nonlinear transient heat conduction. One of the two types of nonlinearities considered is related to material temperature dependence, which is frequently needed to accurately model behavior over the range of temperatures of engineering interest. The second nonlinearity is introduced by radiation boundary conditions. The finite element equations arising from nonlinear transient heat conduction problems are formulated. The finite element matrix equations are temporally discretized, and a nonlinear iterative solution algorithm is proposed. Algorithms for solving the linear problem are discussed, taking into account the form of the matrix equations, Gaussian elimination, cost, and iterative techniques. Attention is also given to approximate factorization, implementational aspects, and numerical results.
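The structure described (temporal discretization plus a nonlinear iteration at each step) can be illustrated on the simplest radiating system: a lumped body cooling by T^4 radiation, stepped with backward Euler and a Newton iteration per step. All parameters are illustrative; the paper applies the same structure to full finite element equations:

import numpy as np

# m*c*dT/dt = -sigma*eps_r*area*(T^4 - T_env^4), backward Euler in time,
# Newton iteration for the nonlinear equation at each new time level.
sigma, eps_r, area, m, c, T_env = 5.67e-8, 0.8, 0.01, 0.1, 500.0, 300.0

def step(T_old, dt, tol=1e-10, max_newton=20):
    T = T_old
    for _ in range(max_newton):
        resid = m * c * (T - T_old) / dt \
                + sigma * eps_r * area * (T ** 4 - T_env ** 4)
        jac = m * c / dt + 4.0 * sigma * eps_r * area * T ** 3  # scalar Jacobian
        dT = -resid / jac
        T += dT
        if abs(dT) < tol:
            break
    return T

T, dt = 1000.0, 1.0
for _ in range(600):
    T = step(T, dt)
print(f"temperature after 600 s: {T:.1f} K")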
The ITER project construction status
NASA Astrophysics Data System (ADS)
Motojima, O.
2015-10-01
The pace of the ITER project in St Paul-lez-Durance, France is accelerating rapidly into its peak construction phase. With the completion of the B2 slab in August 2014, which will support about 400 000 metric tons of the tokamak complex structures and components, the construction is advancing on a daily basis. Magnet, vacuum vessel, cryostat, thermal shield, first wall and divertor structures are under construction or in the prototype phase in the ITER member states of China, Europe, India, Japan, Korea, Russia, and the United States. Each of these member states has its own domestic agency (DA) to manage its procurement of components for ITER. Plant systems engineering is being transformed to fully integrate the tokamak and its auxiliary systems in preparation for the assembly and operations phase. CODAC, diagnostics, and the three main heating and current drive systems are also progressing, including the construction of the neutral beam test facility building in Padua, Italy. The conceptual design of the Chinese test blanket module system for ITER has been completed, and those of the EU are well under way. Significant progress has been made addressing several outstanding physics issues, including disruption load characterization, prediction, avoidance, and mitigation; first wall and divertor shaping; edge pedestal and SOL plasma stability; fuelling and plasma behaviour during confinement transients; and W impurity transport. Further development of the ITER Research Plan has included a definition of the required plant configuration for first plasma and subsequent phases of ITER operation, as well as the major plasma commissioning activities and the needs of the R&D program accompanying ITER construction by the ITER parties.
Genome scale engineering techniques for metabolic engineering.
Liu, Rongming; Bassalo, Marcelo C; Zeitoun, Ramsey I; Gill, Ryan T
2015-11-01
Metabolic engineering has expanded from a focus on designs requiring a small number of genetic modifications to increasingly complex designs driven by advances in genome-scale engineering technologies. Metabolic engineering has been generally defined by the use of iterative cycles of rational genome modifications, strain analysis and characterization, and a synthesis step that fuels additional hypothesis generation. This cycle mirrors the Design-Build-Test-Learn cycle followed throughout various engineering fields that has recently become a defining aspect of synthetic biology. This review will attempt to summarize recent genome-scale design, build, test, and learn technologies and relate their use to a range of metabolic engineering applications. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
Experiences with a generator tool for building clinical application modules.
Kuhn, K A; Lenz, R; Elstner, T; Siegele, H; Moll, R
2003-01-01
To elaborate the main system characteristics and relevant deployment experiences for the health information system (HIS) Orbis/OpenMed, which is in widespread use in Germany, Austria, and Switzerland. In a deployment phase of 3 years in a 1,200-bed university hospital, where the system underwent significant improvements, the system's functionality and its software design have been analyzed in detail. We focus on an integrated CASE tool for generating embedded clinical applications and for incremental system evolution. We present a participatory and iterative software engineering process developed for efficient utilization of such a tool. The system's functionality is comparable to that of other commercial products; its components are embedded in a vendor-specific application framework, and standard interfaces are used for connecting subsystems. The integrated generator tool is a remarkable feature; it became a key factor of our project. Tool-generated applications are workflow-enabled and embedded into the overall database schema. Rapid prototyping and iterative refinement are supported, so application modules can be adapted to the users' work practice. We consider tools supporting an iterative and participatory software engineering process highly relevant for health information system architects. The potential of a system to continuously evolve and to be effectively adapted to changing needs may be more important than sophisticated but hard-coded HIS functionality. More work will focus on HIS software design and on software engineering. Methods and tools are needed for quick and robust adaptation of systems to health care processes and changing requirements.
A transatlantic perspective on 20 emerging issues in biological engineering.
Wintle, Bonnie C; Boehm, Christian R; Rhodes, Catherine; Molloy, Jennifer C; Millett, Piers; Adam, Laura; Breitling, Rainer; Carlson, Rob; Casagrande, Rocco; Dando, Malcolm; Doubleday, Robert; Drexler, Eric; Edwards, Brett; Ellis, Tom; Evans, Nicholas G; Hammond, Richard; Haseloff, Jim; Kahl, Linda; Kuiken, Todd; Lichman, Benjamin R; Matthewman, Colette A; Napier, Johnathan A; ÓhÉigeartaigh, Seán S; Patron, Nicola J; Perello, Edward; Shapira, Philip; Tait, Joyce; Takano, Eriko; Sutherland, William J
2017-11-14
Advances in biological engineering are likely to have substantial impacts on global society. To explore these potential impacts we ran a horizon scanning exercise to capture a range of perspectives on the opportunities and risks presented by biological engineering. We first identified 70 potential issues, and then used an iterative process to prioritise 20 issues that we considered to be emerging, to have potential global impact, and to be relatively unknown outside the field of biological engineering. The issues identified may be of interest to researchers, businesses and policy makers in sectors such as health, energy, agriculture and the environment.
NASA Astrophysics Data System (ADS)
Bottoms, SueAnn I.; Ciechanowski, Kathryn M.; Hartman, Brian
2015-12-01
Iterative cycles of enactment embedded in culturally and linguistically diverse contexts provide rich opportunities for preservice teachers (PSTs) to enact core practices of science. This study is situated in the larger Families Involved in Sociocultural Teaching and Science, Technology, Engineering and Mathematics (FIESTAS) project, which weaves together cycles of enactment, core practices in science education, and culturally relevant pedagogies. The theoretical foundation draws upon situated learning theory and communities of practice. Using video analysis by PSTs and course artifacts, the authors studied how the iterative process of these cycles guided PSTs' development as teachers of elementary science. Findings demonstrate how PSTs were drawing on resources to inform practice, purposefully noticing their practice, renegotiating their roles in teaching, and reconsidering "professional blindness" through cultural practice.
Reverse engineering of integrated circuits
Chisholm, Gregory H.; Eckmann, Steven T.; Lain, Christopher M.; Veroff, Robert L.
2003-01-01
Software and a method therein to analyze circuits. The software comprises several tools, each of which perform particular functions in the Reverse Engineering process. The analyst, through a standard interface, directs each tool to the portion of the task to which it is most well suited, rendering previously intractable problems solvable. The tools are generally used iteratively to produce a successively more abstract picture of a circuit, about which incomplete a priori knowledge exists.
Computer-Aided Design Of Turbine Blades And Vanes
NASA Technical Reports Server (NTRS)
Hsu, Wayne Q.
1988-01-01
Quasi-three-dimensional method for determining aerothermodynamic configuration of turbine uses computer-interactive analysis and design and computer-interactive graphics. Design procedure executed rapidly so designer easily repeats it to arrive at best performance, size, structural integrity, and engine life. Sequence of events in aerothermodynamic analysis and design starts with engine-balance equations and ends with boundary-layer analysis and viscous-flow calculations. Analysis-and-design procedure interactive and iterative throughout.
Engineering the on-axis intensity of Bessel beam by a feedback tuning loop
NASA Astrophysics Data System (ADS)
Li, Runze; Yu, Xianghua; Yang, Yanlong; Peng, Tong; Yao, Baoli; Zhang, Chunmin; Ye, Tong
2018-02-01
The Bessel beam belongs to a typical class of non-diffractive optical fields that are characterized by their invariant focal profiles along the propagation direction. However, ideal Bessel beams only rigorously exist in theory; Bessel beams generated in the lab are quasi-Bessel beams with finite focal extensions and varying intensity profiles along the propagation axis. The ability to engineer the on-axis intensity profile to a desired shape is essential for many applications. Here we demonstrate an iterative optimization-based approach to engineering the on-axis intensity of Bessel beams; a genetic algorithm is used to implement this approach. Starting with a traditional axicon phase mask, the design process feeds the computed on-axis beam profile into a feedback tuning loop of an iterative optimization process, which searches for an optimal radial phase distribution that can generate a generalized Bessel beam with the desired on-axis intensity profile. The experimental implementation involves a fine-tuning process that adjusts the originally targeted profile so that the optimization process can optimize the phase mask to yield an improved on-axis profile. Our proposed method has been demonstrated by engineering several zeroth-order Bessel beams with customized on-axis profiles. High accuracy and high energy throughput merit its use in many applications.
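A minimal sketch of the feedback-tuning idea follows: the on-axis intensity of a radial phase mask is computed from the Fresnel diffraction integral, and a simple evolutionary loop (standing in for the genetic algorithm; mutation only, no crossover) tunes polynomial phase coefficients toward a flat on-axis target. All geometry and optimizer parameters are illustrative, not the authors' values:

import numpy as np
rng = np.random.default_rng(1)

# On-axis field of a radially symmetric phase mask phi(r) under plane-wave
# illumination: E(z) ~ (k/z) * integral exp(i phi) exp(i k r^2 / 2z) r dr.
lam = 633e-9
k = 2.0 * np.pi / lam
R = 2e-3                                   # mask radius
r = np.linspace(1e-6, R, 800)
dr = r[1] - r[0]
z = np.linspace(0.05, 0.35, 60)            # on-axis range of interest
quad = np.exp(1j * k * np.outer(1.0 / (2.0 * z), r ** 2)) * r  # phi-independent

def on_axis_intensity(phi):
    E = (k / z) * (np.exp(1j * phi)[None, :] * quad).sum(axis=1) * dr
    return np.abs(E) ** 2

target = np.ones_like(z)                   # goal: flat on-axis profile

def fitness(c):
    phi = np.polynomial.polynomial.polyval(r / R, c)   # radial phase
    I = on_axis_intensity(phi)
    return -np.mean((I / I.max() - target) ** 2)

pop = rng.normal(0.0, 200.0, (20, 5))      # random phase-coefficient vectors
for _ in range(40):                        # the feedback tuning loop
    scores = np.array([fitness(c) for c in pop])
    elite = pop[np.argsort(scores)[-6:]]   # keep the best masks
    children = elite[rng.integers(6, size=14)] + rng.normal(0.0, 10.0, (14, 5))
    pop = np.vstack([elite, children])

best = pop[np.argmax([fitness(c) for c in pop])]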
Multislice imaging of integrated circuits by precession X-ray ptychography.
Shimomura, Kei; Hirose, Makoto; Takahashi, Yukio
2018-01-01
A method for nondestructively visualizing multisection nanostructures of integrated circuits by X-ray ptychography with a multislice approach is proposed. In this study, tilt-series ptychographic diffraction data sets of a two-layered circuit with a ∼1.4 µm gap at nine incident angles are collected in a wide Q range and then artifact-reduced phase images of each layer are successfully reconstructed at ∼10 nm resolution. The present method has great potential for the three-dimensional observation of flat specimens with thickness on the order of 100 µm, such as three-dimensional stacked integrated circuits based on through-silicon vias, without laborious sample preparation.
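The multislice forward model underlying this approach is compact: multiply the wave by each thin layer's complex transmission, then propagate across the gap with the angular-spectrum propagator. A sketch under illustrative sampling assumptions:

import numpy as np

def angular_spectrum(u, dz, lam, dx):
    # Free-space propagation of field u over distance dz.
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    kz = 2.0 * np.pi * np.sqrt(np.maximum(0.0, 1.0 / lam ** 2 - FX ** 2 - FY ** 2))
    return np.fft.ifft2(np.fft.fft2(u) * np.exp(1j * kz * dz))

def multislice_exit_wave(probe, slices, dz, lam, dx):
    # Multiply by each thin layer's complex transmission, then propagate
    # across the inter-layer gap; for the circuit above, `slices` would
    # hold two transmission functions separated by the ~1.4 um gap.
    u = probe
    for i, t in enumerate(slices):
        u = u * t
        if i < len(slices) - 1:
            u = angular_spectrum(u, dz, lam, dx)
    return u

In the reconstruction, this forward model is inverted per slice, which is what separates the two circuit layers that a single-projection (thin-object) ptychographic model would merge.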
High-energy cryo x-ray nano-imaging at the ID16A beamline of ESRF
NASA Astrophysics Data System (ADS)
da Silva, Julio C.; Pacureanu, Alexandra; Yang, Yang; Fus, Florin; Hubert, Maxime; Bloch, Leonid; Salome, Murielle; Bohic, Sylvain; Cloetens, Peter
2017-09-01
The ID16A beamline at ESRF offers unique capabilities for X-ray nano-imaging and currently produces the world's brightest high-energy diffraction-limited nanofocus. The nanoprobe was designed for quantitative characterization of the morphology and elemental composition of specimens at both room and cryogenic temperatures. Billions of photons per second can be delivered in a diffraction-limited focal spot size down to 13 nm. Coherent X-ray imaging techniques, such as magnified holographic tomography and ptychographic tomography, are implemented, as well as X-ray fluorescence nanoscopy. We show the latest developments in coherent and spectroscopic X-ray nano-imaging implemented at the ID16A beamline.
X-ray EM simulation tool for ptychography dataset construction
NASA Astrophysics Data System (ADS)
Stoevelaar, L. Pjotr; Gerini, Giampiero
2018-03-01
In this paper, we present an electromagnetic full-wave modeling framework as a supporting EM tool that provides data sets for X-ray ptychographic imaging. Modeling the entire scattering problem with Finite Element Method (FEM) tools is, in fact, a prohibitive task because of the large area illuminated by the beam (due to the poor focusing power at these wavelengths) and the very small features to be imaged. To overcome this problem, the spectrum of the illumination beam is decomposed into a discrete set of plane waves. This allows the electromagnetic modeling volume to be reduced to the volume enclosing the area to be imaged. The total scattered field is reconstructed by superimposing the solutions for each plane-wave illumination.
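The decomposition step can be sketched as follows: FFT the sampled beam, rank the plane-wave components by power, and keep the set carrying a chosen energy fraction; each retained component would then drive a small per-plane-wave FEM solve whose scattered fields are superposed. This is our illustration of the stated idea, not the authors' tool:

import numpy as np

def plane_wave_decomposition(beam, dx, keep=0.999):
    # Angular spectrum of a sampled illumination beam: each FFT bin is a
    # plane wave with transverse wavevector (kx, ky). The strongest
    # components carrying fraction `keep` of the energy are returned.
    n = beam.shape[0]
    spec = np.fft.fftshift(np.fft.fft2(beam)) / n ** 2
    f = np.fft.fftshift(np.fft.fftfreq(n, d=dx))
    KX, KY = np.meshgrid(2.0 * np.pi * f, 2.0 * np.pi * f)
    amp, kx, ky = spec.ravel(), KX.ravel(), KY.ravel()
    order = np.argsort(np.abs(amp))[::-1]
    power = np.cumsum(np.abs(amp[order]) ** 2)
    m = int(np.searchsorted(power, keep * power[-1])) + 1
    sel = order[:m]
    return amp[sel], kx[sel], ky[sel]

# Example: a Gaussian beam sampled on a 256x256 grid with 1 um pixels.
x = np.linspace(-128e-6, 127e-6, 256)
X, Y = np.meshgrid(x, x)
beam = np.exp(-(X ** 2 + Y ** 2) / (50e-6) ** 2)
amps, kxs, kys = plane_wave_decomposition(beam, dx=1e-6)
print(amps.size, "plane waves retained")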
Simulation and Spacecraft Design: Engineering Mars Landings.
Conway, Erik M
2015-10-01
A key issue in the history of technology that has received little attention is the use of simulation in engineering design. This article explores the use of both mechanical and numerical simulation in the design of the Mars atmospheric entry phases of the Viking and Mars Pathfinder missions to argue that engineers used both kinds of simulation to develop knowledge of their designs' likely behavior in the poorly known environment of Mars. Each kind of simulation could be used as a warrant of the other's fidelity, in an iterative process of knowledge construction.
Millstone: software for multiplex microbial genome analysis and engineering.
Goodman, Daniel B; Kuznetsov, Gleb; Lajoie, Marc J; Ahern, Brian W; Napolitano, Michael G; Chen, Kevin Y; Chen, Changping; Church, George M
2017-05-25
Inexpensive DNA sequencing and advances in genome editing have made computational analysis a major rate-limiting step in adaptive laboratory evolution and microbial genome engineering. We describe Millstone, a web-based platform that automates genotype comparison and visualization for projects with up to hundreds of genomic samples. To enable iterative genome engineering, Millstone allows users to design oligonucleotide libraries and create successive versions of reference genomes. Millstone is open source and easily deployable to a cloud platform, local cluster, or desktop, making it a scalable solution for any lab.
NASA Astrophysics Data System (ADS)
Hudson, S. R.; Monticello, D. A.; Reiman, A. H.; Strickler, D. J.; Hirshman, S. P.; Ku, L.-P.; Lazarus, E.; Brooks, A.; Zarnstorff, M. C.; Boozer, A. H.; Fu, G.-Y.; Neilson, G. H.
2003-10-01
For the (non-axisymmetric) stellarator class of plasma confinement devices to be feasible candidates for fusion power stations it is essential that, to a good approximation, the magnetic field lines lie on nested flux surfaces; however, the inherent lack of a continuous symmetry implies that magnetic islands responsible for breaking the smooth topology of the flux surfaces are guaranteed to exist. Thus, the suppression of magnetic islands is a critical issue for stellarator design, particularly for small aspect ratio devices. Pfirsch-Schlüter currents, diamagnetic currents and resonant coil fields contribute to the formation of magnetic islands, and the challenge is to design the plasma and coils such that these effects cancel. Magnetic islands in free-boundary high-pressure full-current stellarator magnetohydrodynamic equilibria are suppressed using a procedure based on the Princeton Iterative Equilibrium Solver (Reiman and Greenside 1986 Comput. Phys. Commun. 43 157) which iterates the equilibrium equations to obtain the plasma equilibrium. At each iteration, changes to a Fourier representation of the coil geometry are made to cancel resonant fields produced by the plasma. The changes are constrained to preserve certain measures of engineering acceptability and to preserve the stability of ideal kink modes. As the iterations continue, the coil geometry and the plasma simultaneously converge to an equilibrium in which the island content is negligible, the plasma is stable to ideal kink modes, and the coils satisfy engineering constraints. The method is applied to a candidate plasma and coil design for the National Compact Stellarator eXperiment (Reiman et al 2001 Phys. Plasmas 8 2083).
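The coupled plasma/coil iteration can be caricatured with a linear toy model in which coil Fourier coefficients map to resonant field components that the update cancels in the least-squares sense. Everything below is a stand-in for the real PIES-based workflow (the matrix, offset, and dimensions are invented for illustration):

import numpy as np
rng = np.random.default_rng(2)

# Toy model: coil coefficients c produce resonant fields b(c) = M c + d,
# with the offset d standing in for the plasma's Pfirsch-Schlueter and
# diamagnetic contributions. A real implementation wraps the PIES
# equilibrium solver and a constrained coil-design code instead.
M = rng.normal(size=(3, 8))                # 3 resonances, 8 coil coefficients
d = np.array([1.0, -0.5, 0.2])

def resonant_fields(c):
    return M @ c + d

c = np.zeros(8)
for it in range(20):
    b = resonant_fields(c)
    if np.linalg.norm(b) < 1e-12:
        break
    c -= np.linalg.pinv(M) @ b             # cancel the resonant components
print("iterations:", it, " residual:", np.linalg.norm(resonant_fields(c)))

In the real procedure the map from coils to resonant fields is nonlinear and re-evaluated by the equilibrium solve at every pass, and the coil update is additionally constrained by engineering limits and kink stability, which is why the converged design satisfies all three goals simultaneously.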
Redesign and Rehost of the BIG STICK Nuclear Wargame Simulation
1988-12-01
described by Pressman [16]. The 4GT software development approach consists of four iterative phases: the requirements gathering phase, the design strategy...2. BIG STICK Instructions and Planning Guidance. Air Command and Staff College, Air University, Maxwell AFB AL, 1987. Unpublished Manual. 3. Barry W...Software Engineering Notes, 7:29-32, April 1982. 17. Roger S. Pressman. Software Engineering: A Practitioner's Approach. McGraw-Hill Book
Volumetric imaging of fast biological dynamics in deep tissue via wavefront engineering
NASA Astrophysics Data System (ADS)
Kong, Lingjie; Tang, Jianyong; Cui, Meng
2016-03-01
To reveal fast biological dynamics in deep tissue, we combine two wavefront engineering methods that were developed in our laboratory, namely optical phase-locked ultrasound lens (OPLUL) based volumetric imaging and the iterative multiphoton adaptive compensation technique (IMPACT). OPLUL is used to generate an oscillating defocusing wavefront for fast axial scanning, and IMPACT is used to compensate the wavefront distortions for deep tissue imaging. We show promising applications of this approach in neuroscience and immunology.
CORSICA modelling of ITER hybrid operation scenarios
NASA Astrophysics Data System (ADS)
Kim, S. H.; Bulmer, R. H.; Campbell, D. J.; Casper, T. A.; LoDestro, L. L.; Meyer, W. H.; Pearlstein, L. D.; Snipes, J. A.
2016-12-01
The hybrid operating mode observed in several tokamaks is characterized by further enhancement over the high plasma confinement (H-mode) associated with reduced magnetohydrodynamic (MHD) instabilities linked to a stationary flat safety factor (q) profile in the core region. The proposed ITER hybrid operation is currently aiming at operating for a long burn duration (>1000 s) with a moderate fusion power multiplication factor, Q, of at least 5. This paper presents candidate ITER hybrid operation scenarios developed using a free-boundary transport modelling code, CORSICA, taking all relevant physics and engineering constraints into account. The ITER hybrid operation scenarios have been developed by tailoring the 15 MA baseline ITER inductive H-mode scenario. Accessible operation conditions for ITER hybrid operation and achievable range of plasma parameters have been investigated considering uncertainties on the plasma confinement and transport. ITER operation capability for avoiding the poloidal field coil current, field and force limits has been examined by applying different current ramp rates, flat-top plasma currents and densities, and pre-magnetization of the poloidal field coils. Various combinations of heating and current drive (H&CD) schemes have been applied to study several physics issues, such as the plasma current density profile tailoring, enhancement of the plasma energy confinement and fusion power generation. A parameterized edge pedestal model based on EPED1 added to the CORSICA code has been applied to hybrid operation scenarios. Finally, fully self-consistent free-boundary transport simulations have been performed to provide information on the poloidal field coil voltage demands and to study the controllability with the ITER controllers. Extended from Proc. 24th Int. Conf. on Fusion Energy (San Diego, 2012) IT/P1-13.
New Parallel Algorithms for Structural Analysis and Design of Aerospace Structures
NASA Technical Reports Server (NTRS)
Nguyen, Duc T.
1998-01-01
Subspace and Lanczos iterations have been developed, well documented, and widely accepted as efficient methods for obtaining the p lowest eigenpair solutions of large-scale, practical engineering problems. The focus of this paper is to incorporate recent developments in vectorized sparse technologies in conjunction with Subspace and Lanczos iterative algorithms for computational enhancements. Numerical performance, in terms of accuracy and efficiency of the proposed sparse strategies for the Subspace and Lanczos algorithms, is demonstrated by solving for the lowest frequencies and mode shapes of structural problems on the IBM-R6000/590 and SunSparc 20 workstations.
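The class of computation this abstract describes (extracting the p lowest eigenpairs of a large sparse generalized eigenproblem with a Lanczos-type solver) is easy to prototype with modern sparse libraries. Below is a minimal Python sketch using SciPy's Lanczos-based eigsh; the tridiagonal stiffness matrix and identity mass matrix are hypothetical stand-ins, and none of the paper's vectorized sparse strategies are reproduced.

    # Sketch: the p lowest eigenpairs of K*x = lambda*M*x via Lanczos iteration
    # (the class of computation the abstract describes, not the paper's code).
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import eigsh

    n, p = 2000, 6                              # hypothetical model size / modes
    main = 2.0 * np.ones(n)
    off = -1.0 * np.ones(n - 1)
    K = sp.diags([off, main, off], [-1, 0, 1], format="csc")  # stiffness-like
    M = sp.identity(n, format="csc")                          # lumped mass

    # Shift-invert about sigma=0 targets the smallest eigenvalues efficiently.
    vals, vecs = eigsh(K, k=p, M=M, sigma=0.0, which="LM")
    print(np.sqrt(vals) / (2 * np.pi))   # natural frequencies if K, M are structural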
Physics and engineering design of the accelerator and electron dump for SPIDER
NASA Astrophysics Data System (ADS)
Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Marconato, N.; Marcuzzi, D.; Pilan, N.; Serianni, G.; Sonato, P.; Veltri, P.; Zaccaria, P.
2011-06-01
The ITER Neutral Beam Test Facility (PRIMA) is planned to be built at Consorzio RFX (Padova, Italy). PRIMA includes two experimental devices: a full size ion source with low voltage extraction called SPIDER and a full size neutral beam injector at full beam power called MITICA. SPIDER is the first experimental device to be built and operated, aiming at testing the extraction of a negative ion beam (made of H- and in a later stage D- ions) from an ITER size ion source. The main requirements of this experiment are a H-/D- extracted current density larger than 355/285 A m-2, an energy of 100 keV and a pulse duration of up to 3600 s. Several analytical and numerical codes have been used for the design optimization process, some of which are commercial codes, while some others were developed ad hoc. The codes are used to simulate the electrical fields (SLACCAD, BYPO, OPERA), the magnetic fields (OPERA, ANSYS, COMSOL, PERMAG), the beam aiming (OPERA, IRES), the pressure inside the accelerator (CONDUCT, STRIP), the stripping reactions and transmitted/dumped power (EAMCC), the operating temperature, stress and deformations (ALIGN, ANSYS) and the heat loads on the electron dump (ED) (EDAC, BACKSCAT). An integrated approach, taking into consideration at the same time physics and engineering aspects, has been adopted all along the design process. Particular care has been taken in investigating the many interactions between physics and engineering aspects of the experiment. According to the 'robust design' philosophy, a comprehensive set of sensitivity analyses was performed, in order to investigate the influence of the design choices on the most relevant operating parameters. The design of the SPIDER accelerator, here described, has been developed in order to satisfy with reasonable margin all the requirements given by ITER, from the physics and engineering points of view. In particular, a new approach to the compensation of unwanted beam deflections inside the accelerator and a new concept for the ED have been introduced.
Design for disassembly and sustainability assessment to support aircraft end-of-life treatment
NASA Astrophysics Data System (ADS)
Savaria, Christian
Gas turbine engine design is a multidisciplinary and iterative process. Many design iterations are necessary to address the challenges among the disciplines. In the creation of a new engine architecture, the design time is crucial in capturing new business opportunities. At the detail design phase, it has proven very difficult to correct an unsatisfactory design. To overcome this difficulty, the concept of Multi-Disciplinary Optimization (MDO) at the preliminary design phase (Preliminary MDO or PMDO) is used, allowing more freedom to perform changes in the design. PMDO also reduces the design time at the preliminary design phase. The concept of PMDO was used to create parametric models and new correlations for high pressure gas turbine housing and shroud segments towards a new design process. First, dedicated parametric models were created because of their reusability and versatility. Their ease of use compared to non-parameterized models allows more design iterations and thus reduces set-up and design time. Second, geometry correlations were created to minimize the number of parameters used in turbine housing and shroud segment design. Since the turbine housing and the shroud segment geometries are required in tip clearance analyses, care was taken not to oversimplify the parametric formulation. In addition, a user interface was developed to interact with the parametric models and improve the design time. Third, the cooling flow predictions require many engine parameters (i.e. geometric and performance parameters and air properties) and a reference shroud segment. A second correlation study was conducted to minimize the number of engine parameters required in the cooling flow predictions and to facilitate the selection of a reference shroud segment. Finally, the parametric models, the geometry correlations, and the user interface resulted in a time saving of 50% and an increase in accuracy of 56% in the new design system compared to the existing design system. Also, regarding the cooling flow correlations, the number of engine parameters was reduced by a factor of 6 to create a simplified prediction model and hence a faster shroud segment selection process.
OVERVIEW OF NEUTRON MEASUREMENTS IN JET FUSION DEVICE.
Batistoni, P; Villari, R; Obryk, B; Packer, L W; Stamatelatos, I E; Popovichev, S; Colangeli, A; Colling, B; Fonnesu, N; Loreti, S; Klix, A; Klosowski, M; Malik, K; Naish, J; Pillon, M; Vasilopoulou, T; De Felice, P; Pimpinella, M; Quintieri, L
2017-10-05
The design and operation of the ITER experimental fusion reactor require the development of neutron measurement techniques and numerical tools to derive the fusion power and the radiation field in the device and in the surrounding areas. Nuclear analyses provide essential input to the conceptual design, optimisation, engineering and safety case in ITER and power plant studies. The required radiation transport calculations are extremely challenging because of the large physical extent of the reactor plant, the complexity of the geometry, and the combination of deep penetration and streaming paths. This article reports the experimental activities which are carried out at JET to validate the neutronics measurement methods and numerical tools used in ITER and power plant design. A new deuterium-tritium campaign is proposed in 2019 at JET: the unique 14 MeV neutron yields produced will be exploited as much as possible to validate measurement techniques, codes, procedures and data currently used in ITER design, thus reducing the related uncertainties and the associated risks in the machine operation. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Technical Reports Server (NTRS)
Barnes, Bruce W.; Sessions, Alaric M.; Beyon, Jeffrey; Petway, Larry B.
2014-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components that had excess performance margins with smaller components custom-designed for the power system. The existing power system was analyzed to rank components in terms of inefficiency, power dissipation, footprint and mass. Design considerations and priorities are compared along with the results of each design iteration. Overall power system improvements are summarized for design implementations.
Corwin, Lisa A; Runyon, Christopher R; Ghanem, Eman; Sandy, Moriah; Clark, Greg; Palmer, Gregory C; Reichler, Stuart; Rodenbusch, Stacia E; Dolan, Erin L
2018-06-01
Course-based undergraduate research experiences (CUREs) provide a promising avenue to attract a larger and more diverse group of students into research careers. CUREs are thought to be distinctive in offering students opportunities to make discoveries, collaborate, engage in iterative work, and develop a sense of ownership of their lab course work. Yet how these elements affect students' intentions to pursue research-related careers remains unexplored. To address this knowledge gap, we collected data on three design features thought to be distinctive of CUREs (discovery, iteration, collaboration) and on students' levels of ownership and career intentions from ∼800 undergraduates who had completed CURE or inquiry courses, including courses from the Freshman Research Initiative (FRI), which has a demonstrated positive effect on student retention in college and in science, technology, engineering, and mathematics. We used structural equation modeling to test relationships among the design features and student ownership and career intentions. We found that discovery, iteration, and collaboration had small but significant effects on students' intentions; these effects were fully mediated by student ownership. Students in FRI courses reported significantly higher levels of discovery, iteration, and ownership than students in other CUREs. FRI research courses alone had a significant effect on students' career intentions.
Engineering and manufacturing of ITER first mirror mock-ups.
Joanny, M; Travère, J M; Salasca, S; Corre, Y; Marot, L; Thellier, C; Gallay, G; Cammarata, C; Passier, B; Fermé, J J
2010-10-01
Most of the ITER optical diagnostics aiming at viewing and monitoring plasma facing components will use in-vessel metallic mirrors. These mirrors will be exposed to a severe plasma environment, which imposes an important tradeoff on their design and manufacturing. As a consequence, investigations are carried out on diagnostic mirrors toward the development of optimal and reliable solutions. The goals are to assess the manufacturing feasibility of the mirror coatings, evaluate the manufacturing capability and associated performances for mirror cooling and polishing, and finally determine the costs and delivery time of the first prototypes with a diameter of 200 and 500 mm. Three kinds of ITER candidate mock-ups are being designed and manufactured: rhodium films on stainless steel substrate, molybdenum on TZM substrate, and silver films on stainless steel substrate. The status of the project is presented in this paper.
Phase modulation due to crystal diffraction by ptychographic imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Civita, M.; Diaz, A.; Bean, R. J.
Solving the phase problem in x-ray crystallography has occupied a considerable scientific effort in the 20th century and led to great advances in structural science. Here we use x-ray ptychography to demonstrate an interference method which measures the phase of the beam transmitted through a crystal, relative to the incoming beam, when diffraction takes place. The observed phase change of the direct beam through a small gold crystal is found to agree with both a quasikinematical model and full dynamical theories of diffraction. Our discovery of a diffraction contrast mechanism will enhance the interpretation of data obtained from crystalline samples using the ptychography method, which provides some of the most accurate x-ray phase-contrast images.
Phase modulation due to crystal diffraction by ptychographic imaging
Civita, M.; Diaz, A.; Bean, R. J.; ...
2018-03-06
Solving the phase problem in x-ray crystallography has occupied a considerable scientific effort in the 20th century and led to great advances in structural science. Here we use x-ray ptychography to demonstrate an interference method which measures the phase of the beam transmitted through a crystal, relative to the incoming beam, when diffraction takes place. The observed phase change of the direct beam through a small gold crystal is found to agree with both a quasikinematical model and full dynamical theories of diffraction. Our discovery of a diffraction contrast mechanism will enhance the interpretation of data obtained from crystalline samples using the ptychography method, which provides some of the most accurate x-ray phase-contrast images.
Phase modulation due to crystal diffraction by ptychographic imaging
NASA Astrophysics Data System (ADS)
Civita, M.; Diaz, A.; Bean, R. J.; Shabalin, A. G.; Gorobtsov, O. Yu.; Vartanyants, I. A.; Robinson, I. K.
2018-03-01
Solving the phase problem in x-ray crystallography has occupied a considerable scientific effort in the 20th century and led to great advances in structural science. Here we use x-ray ptychography to demonstrate an interference method which measures the phase of the beam transmitted through a crystal, relative to the incoming beam, when diffraction takes place. The observed phase change of the direct beam through a small gold crystal is found to agree with both a quasikinematical model and full dynamical theories of diffraction. Our discovery of a diffraction contrast mechanism will enhance the interpretation of data obtained from crystalline samples using the ptychography method, which provides some of the most accurate x-ray phase-contrast images.
Usability engineering: domain analysis activities for augmented-reality systems
NASA Astrophysics Data System (ADS)
Gabbard, Joseph; Swan, J. E., II; Hix, Deborah; Lanzagorta, Marco O.; Livingston, Mark; Brown, Dennis B.; Julier, Simon J.
2002-05-01
This paper discusses our usability engineering process for the Battlefield Augmented Reality System (BARS). Usability engineering is a structured, iterative, stepwise development process. Like the related disciplines of software and systems engineering, usability engineering is a combination of management principles and techniques, formal and semi-formal evaluation techniques, and computerized tools. BARS is an outdoor augmented reality system that displays heads-up battlefield intelligence information to a dismounted warrior. The paper discusses our general usability engineering process. We originally developed the process in the context of virtual reality applications, but in this work we are adapting the procedures to an augmented reality system. The focus of this paper is our work on domain analysis, the first activity of the usability engineering process. We describe our plans for and our progress to date on our domain analysis for BARS. We give results in terms of a specific urban battlefield use case we have designed.
Deductive Evaluation: Formal Code Analysis With Low User Burden
NASA Technical Reports Server (NTRS)
Di Vito, Ben L.
2016-01-01
We describe a framework for symbolically evaluating iterative C code using a deductive approach that automatically discovers and proves program properties. Although verification is not performed, the method can infer detailed program behavior. Software engineering work flows could be enhanced by this type of analysis. Floyd-Hoare verification principles are applied to synthesize loop invariants, using a library of iteration-specific deductive knowledge. When needed, theorem proving is interleaved with evaluation and performed on the fly. Evaluation results take the form of inferred expressions and type constraints for values of program variables. An implementation using PVS (Prototype Verification System) is presented along with results for sample C functions.
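As a concrete, deliberately tiny illustration of the Floyd-Hoare style of reasoning this abstract describes, the sketch below shows the kind of loop invariant such an analysis would synthesize for a running-sum loop. It is written in Python with the invariant checked dynamically at runtime; the paper's PVS-based deductive machinery, which proves the invariant rather than testing it, is not reproduced here.

    # Illustration only: the kind of loop invariant a Floyd-Hoare analysis would
    # synthesize for a running-sum loop, checked dynamically rather than proved.
    def array_sum(xs):
        total, i = 0, 0
        while i < len(xs):
            # Invariant: total == sum(xs[:i]) and 0 <= i <= len(xs)
            assert total == sum(xs[:i]) and 0 <= i <= len(xs)
            total += xs[i]
            i += 1
        # The invariant at exit (i == len(xs)) yields the postcondition:
        assert total == sum(xs)
        return total

    print(array_sum([3, 1, 4, 1, 5]))           # -> 14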
Automated multiplex genome-scale engineering in yeast
Si, Tong; Chao, Ran; Min, Yuhao; Wu, Yuying; Ren, Wen; Zhao, Huimin
2017-01-01
Genome-scale engineering is indispensable in understanding and engineering microorganisms, but the current tools are mainly limited to bacterial systems. Here we report an automated platform for multiplex genome-scale engineering in Saccharomyces cerevisiae, an important eukaryotic model and widely used microbial cell factory. Standardized genetic parts encoding overexpression and knockdown mutations of >90% yeast genes are created in a single step from a full-length cDNA library. With the aid of CRISPR-Cas, these genetic parts are iteratively integrated into the repetitive genomic sequences in a modular manner using robotic automation. This system allows functional mapping and multiplex optimization on a genome scale for diverse phenotypes including cellulase expression, isobutanol production, glycerol utilization and acetic acid tolerance, and may greatly accelerate future genome-scale engineering endeavours in yeast. PMID:28469255
Energy and technology review: Engineering modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cabayan, H.S.; Goudreau, G.L.; Ziolkowski, R.W.
1986-10-01
This report presents information concerning: Modeling Canonical Problems in Electromagnetic Coupling Through Apertures; Finite-Element Codes for Computing Electrostatic Fields; Finite-Element Modeling of Electromagnetic Phenomena; Modeling Microwave-Pulse Compression in a Resonant Cavity; Lagrangian Finite-Element Analysis of Penetration Mechanics; Crashworthiness Engineering; Computer Modeling of Metal-Forming Processes; Thermal-Mechanical Modeling of Tungsten Arc Welding; Modeling Air Breakdown Induced by Electromagnetic Fields; Iterative Techniques for Solving Boltzmann's Equations for p-Type Semiconductors; Semiconductor Modeling; and Improved Numerical-Solution Techniques in Large-Scale Stress Analysis.
Nonlinear mechanical behavior of thermoplastic matrix materials for advanced composites
NASA Technical Reports Server (NTRS)
Arenz, R. J.; Landel, R. F.
1989-01-01
Two recent theories of nonlinear mechanical response are quantitatively compared and related to experimental data. Computer techniques are formulated to handle the numerical integration and iterative procedures needed to solve the associated sets of coupled nonlinear differential equations. Problems encountered during these formulations are discussed and some open questions described. Bearing these cautions in mind, the consequences that changing the parameters appearing in the formulations have on the resulting engineering properties are discussed. Hence, engineering approaches to the analysis of thermoplastic matrix materials can be suggested.
2017-03-01
opportunities emerged. It will be essential to capture and share lessons learned as the two organizations plan and implement selected EWN projects...their top five or six opportunities and subsequently selected the two highest priorities. Each of the three breakout groups then worked together to...will ensure agency buy-in, establish local reference sites, and promote EWN principles. Site selection will include an iterative process that factors
Performance Limiting Flow Processes in High-State Loading High-Mach Number Compressors
2008-03-13
the Doctoral Thesis Committee of the doctoral student. 3.0 Technical Background: A strong incentive exists to reduce airfoil count in aircraft engines ...Advanced Turbine Engine). A basic constraint on blade reduction is seen from the Euler turbine equation, which shows that, although a design can be carried...on the vane to rotor blade ratio of 8:11). Within the MSU Turbo code, specifying a small number of time steps requires more iteration at each time step
Scientific and technical challenges on the road towards fusion electricity
NASA Astrophysics Data System (ADS)
Donné, A. J. H.; Federici, G.; Litaudon, X.; McDonald, D. C.
2017-10-01
The goal of the European Fusion Roadmap is to deliver fusion electricity to the grid early in the second half of this century. It breaks the quest for fusion energy into eight missions, and for each of them it describes a research and development programme to address all the open technical gaps in physics and technology and estimates the required resources. It points out the need to intensify industrial involvement and to seek all opportunities for collaboration outside Europe. The roadmap covers three periods: the short term, which runs parallel to the European Research Framework Programme Horizon 2020, the medium term and the long term. ITER is the key facility of the roadmap as it is expected to achieve most of the important milestones on the path to fusion power. Thus, the vast majority of present resources are dedicated to ITER and its accompanying experiments. The medium term is focussed on taking ITER into operation and bringing it to full power, as well as on preparing the construction of a demonstration power plant DEMO, which will for the first time demonstrate fusion electricity to the grid around the middle of this century. Building and operating DEMO is the subject of the last roadmap phase: the long term. Clearly, the Fusion Roadmap is tightly connected to the ITER schedule. Three key milestones are the first operation of ITER, the start of the DT operation in ITER and reaching the full performance at which the thermal fusion power is 10 times the power put into the plasma. The Engineering Design Activity of DEMO needs to start a few years after the first ITER plasma, while the start of the construction phase will be a few years after ITER reaches full performance. In this way ITER can give viable input to the design and development of DEMO. Because the neutron fluence in DEMO will be much higher than in ITER, it is important to develop and validate materials that can handle these very high neutron loads. For the testing of the materials, a dedicated 14 MeV neutron source is needed. This DEMO Oriented Neutron Source (DONES) is therefore an important facility to support the fusion roadmap.
Should Cost: A Strategy for Managing Military Systems Money
2016-03-01
telecom. Of the numerous SCIs available, the chief financial officer for the portfolio chose the QF-16 Engine Management. The program... telecoms and network discussions of what has been done and what will come down as future system and policy changes. Iterative results included
GLobal Integrated Design Environment
NASA Technical Reports Server (NTRS)
Kunkel, Matthew; McGuire, Melissa; Smith, David A.; Gefert, Leon P.
2011-01-01
The GLobal Integrated Design Environment (GLIDE) is a collaborative engineering application built to resolve the design session issues of real-time passing of data between multiple discipline experts in a collaborative environment. Utilizing Web protocols and multiple programming languages, GLIDE allows engineers to use the applications to which they are accustomed (in this case, Excel) to send and receive datasets via the Internet to a database-driven Web server. Traditionally, a collaborative design session consists of one or more engineers representing each discipline meeting together in a single location. The discipline leads exchange parameters and iterate through their respective processes to converge on an acceptable dataset. In cases in which the engineers are unable to meet, their parameters are passed via e-mail, telephone, facsimile, or even postal mail. This slow process of data exchange could stretch a design session to weeks or even months. While the iterative process remains in place, software can now exchange parameters securely and efficiently, while at the same time allowing for much more information about a design session to be made available. GLIDE is written in a combination of several programming languages, including REALbasic, PHP, and Microsoft Visual Basic. GLIDE client installers are available to download for both Microsoft Windows and Macintosh systems. The GLIDE client software is compatible with Microsoft Excel 2000 or later on Windows systems, and with Microsoft Excel X or later on Macintosh systems. GLIDE follows the Client-Server paradigm, transferring encrypted and compressed data via standard Web protocols. Currently, the engineers use Excel as a front end to the GLIDE Client, as many of their custom tools run in Excel.
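To make the client-server exchange concrete, here is a rough Python sketch of a GLIDE-style round trip: one discipline pushes a compressed parameter set to a database-backed web server and pulls back the latest values for the next iteration. The URL, routes, and payload fields are invented for illustration; the abstract does not document GLIDE's actual wire protocol.

    # Hypothetical GLIDE-style round trip; URL, routes and fields are invented.
    import json, zlib
    import requests

    SERVER = "https://glide.example.org/api/session/42"   # placeholder endpoint

    params = {"thrust_kN": 312.0, "mass_kg": 5190.0, "iteration": 7}
    blob = zlib.compress(json.dumps(params).encode())

    # Push this discipline's outputs; HTTPS supplies the transport encryption.
    requests.post(SERVER + "/push", data=blob,
                  headers={"Content-Encoding": "deflate"}, timeout=10)

    # Pull the other disciplines' latest parameters for the next iteration.
    resp = requests.get(SERVER + "/pull", timeout=10)
    latest = json.loads(zlib.decompress(resp.content))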
Reflexive Principlism as an Effective Approach for Developing Ethical Reasoning in Engineering.
Beever, Jonathan; Brightman, Andrew O
2016-02-01
An important goal of teaching ethics to engineering students is to enhance their ability to make well-reasoned ethical decisions in their engineering practice: a goal in line with the stated ethical codes of professional engineering organizations. While engineering educators have explored a wide range of methodologies for teaching ethics, a satisfying model for developing ethical reasoning skills has not been adopted broadly. In this paper we argue that a principlist-based approach to ethical reasoning is uniquely suited to engineering ethics education. Reflexive Principlism is an approach to ethical decision-making that focuses on internalizing a reflective and iterative process of specification, balancing, and justification of four core ethical principles in the context of specific cases. In engineering, that approach provides structure to ethical reasoning while allowing the flexibility for adaptation to varying contexts through specification. Reflexive Principlism integrates well with the prevalent and familiar methodologies of reasoning within the engineering disciplines as well as with the goals of engineering ethics education.
Understanding and manipulating plant lipid composition: Metabolic engineering leads the way
Napier, Johnathan A; Haslam, Richard P; Beaudoin, Frederic; Cahoon, Edgar B
2014-01-01
The manipulation of plant seed oil composition so as to deliver enhanced fatty acid compositions suitable for feed or fuel has long been a goal of metabolic engineers. Recent advances in our understanding of the flux of acyl-changes through different key metabolic pools such as phosphatidylcholine and diacylglycerol have allowed for more targeted interventions. When combined in iterative fashion with further lipidomic analyses, significant breakthroughs in our capacity to generate plants with novel oils have been achieved. Collectively these studies, working at the interface between metabolic engineering and synthetic biology, demonstrate the positive fundamental and applied outcomes derived from such research. PMID:24809765
ERIC Educational Resources Information Center
Baiduc, Rachael R.; Linsenmeier, Robert A.; Ruggeri, Nancy
2016-01-01
Today's science, technology, engineering, and mathematics (STEM) graduate students and postdoctoral fellows are tomorrow's new faculty members; but these junior academicians often receive limited pedagogical training. We describe four iterations of an entry-level program with a low time commitment, Mentored Discussions of Teaching (MDT). The…
ERIC Educational Resources Information Center
Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew
2015-01-01
Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…
FENDL: International reference nuclear data library for fusion applications
NASA Astrophysics Data System (ADS)
Pashchenko, A. B.; Wienke, H.; Ganesan, S.
1996-10-01
The IAEA Nuclear Data Section, in co-operation with several national nuclear data centres and research groups, has created the first version of an internationally available Fusion Evaluated Nuclear Data Library (FENDL-1). The FENDL library has been selected to serve as a comprehensive source of processed and tested nuclear data tailored to the requirements of the engineering design activity (EDA) of the ITER project and other fusion-related development projects. The present version of FENDL consists of the following sublibraries covering the necessary nuclear input for all physics and engineering aspects of the material development, design, operation and safety of the ITER project in its current EDA phase: FENDL/A-1.1: neutron activation cross-sections, selected from different available sources, for 636 nuclides; FENDL/D-1.0: nuclear decay data for 2900 nuclides in ENDF-6 format; FENDL/DS-1.0: neutron activation data for dosimetry by foil activation; FENDL/C-1.0: data for the fusion reactions D(d,n), D(d,p), T(d,n), T(t,2n), He-3(d,p) extracted from ENDF/B-6 and processed; FENDL/E-1.0: data for coupled neutron-photon transport calculations, including a data library for neutron interaction and photon production for 63 elements or isotopes, selected from ENDF/B-6, JENDL-3, or BROND-2, and a photon-atom interaction data library for 34 elements. The benchmark validation of FENDL-1 as required by the customer, i.e. the ITER team, is considered to be a task of high priority in the coming months. The well tested and validated nuclear data libraries in processed form of FENDL-2 are expected to be ready by mid 1996 for use by the ITER team in the final phase of the ITER EDA after extensive benchmarking and integral validation studies in the 1995-1996 period. The FENDL data files can be electronically transferred to users from the IAEA Nuclear Data Section online system through the Internet. A grand total of 54 (sub)directories with 845 files with a total size of about 2 million blocks or about 1 Gigabyte (1 block = 512 bytes) of numerical data is currently available online.
Design, Manufacture, and Experimental Serviceability Validation of ITER Blanket Components
NASA Astrophysics Data System (ADS)
Leshukov, A. Yu.; Strebkov, Yu. S.; Sviridenko, M. N.; Safronov, V. M.; Putrik, A. B.
2017-12-01
In 2014, the Russian Federation and the ITER International Organization signed two Procurement Arrangements (PAs) for ITER blanket components: 1.6.P1ARF.01 "Blanket First Wall" of February 14, 2014, and 1.6.P3.RF.01 "Blanket Module Connections" of December 19, 2014. The first PA stipulates development, manufacture, testing, and delivery to the ITER site of 179 Enhanced Heat Flux (EHF) First Wall (FW) Panels intended to withstand heat fluxes from the plasma of up to 4.7 MW/m2. Two Russian institutions, NIIEFA (Efremov Institute) and NIKIET, are responsible for the implementation of this PA. NIIEFA manufactures plasma-facing components (PFCs) of the EHF FW panels and performs the final assembly and testing of the panels, and NIKIET manufactures FW beam structures, load-bearing structures of PFCs, and all elements of the panel attachment system. As for the second PA, NIKIET is the sole official supplier of flexible blanket supports, electrical insulation key pads (EIKPs), and blanket module/vacuum vessel electrical connectors. Joint activities of NIKIET and NIIEFA for implementing PA 1.6.P1ARF.01 are briefly described, and information on implementation of PA 1.6.P3.RF.01 is given. Results of the engineering design and research efforts in the scope of the above PAs in 2015-2016 are reported, and results of developing the technology for manufacturing ITER blanket components are presented.
Ensemble Kalman Filter versus Ensemble Smoother for Data Assimilation in Groundwater Modeling
NASA Astrophysics Data System (ADS)
Li, L.; Cao, Z.; Zhou, H.
2017-12-01
Groundwater modeling calls for an effective and robust integration method to fill the gap between the model and data. The Ensemble Kalman Filter (EnKF), a real-time data assimilation method, has been increasingly applied in multiple disciplines such as petroleum engineering and hydrogeology. In this approach, the groundwater models are sequentially updated using measured data such as hydraulic head and concentration data. As an alternative to the EnKF, the Ensemble Smoother (ES) was proposed, which updates the models using all the data together and therefore has a much lower computational cost. To further improve the performance of the ES, an iterative ES was proposed, in which the models are updated repeatedly by assimilating all measurements together. In this work, we compare the performance of the EnKF, the ES and the iterative ES using a synthetic example in groundwater modeling. The hydraulic head data modeled on the basis of the reference conductivity field are utilized to inversely estimate conductivities at un-sampled locations. Results are evaluated in terms of the characterization of conductivity and groundwater flow and solute transport predictions. It is concluded that: (1) the iterative ES achieves results comparable to the EnKF at a lower computational cost; (2) the iterative ES performs better than the standard ES because of its repeated updating. These findings suggest that the iterative ES deserves much more attention for data assimilation in groundwater modeling.
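For readers unfamiliar with the update at the heart of this comparison, the following Python sketch implements a single EnKF analysis step in its stochastic (perturbed-observations) form, assuming a linear observation operator. The state size, ensemble size, and observation setup are hypothetical; real groundwater applications would add localization and a proper forward model.

    # Minimal EnKF analysis step (perturbed-observations form) with a linear
    # observation operator H; X holds one state vector per ensemble column.
    import numpy as np

    def enkf_update(X, H, d, R, rng):
        Ne = X.shape[1]
        A = X - X.mean(axis=1, keepdims=True)          # state anomalies
        HX = H @ X
        HA = HX - HX.mean(axis=1, keepdims=True)       # observed anomalies
        Cxh = A @ HA.T / (Ne - 1)                      # cross-covariance
        Chh = HA @ HA.T / (Ne - 1)                     # obs-space covariance
        K = Cxh @ np.linalg.inv(Chh + R)               # Kalman gain
        # Perturb observations so the analysis spread stays consistent.
        D = d[:, None] + rng.multivariate_normal(np.zeros(d.size), R, Ne).T
        return X + K @ (D - HX)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(10, 50))          # hypothetical 10 states, 50 members
    H = np.zeros((2, 10)); H[0, 0] = H[1, 5] = 1.0
    d = np.array([0.3, -0.1]); R = 0.01 * np.eye(2)
    Xa = enkf_update(X, H, d, R, rng)      # head data would update conductivities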
Engineering design skills coverage in K-12 engineering program curriculum materials in the USA
NASA Astrophysics Data System (ADS)
Chabalengula, Vivien M.; Mumba, Frackson
2017-11-01
The current K-12 Science Education framework and Next Generation Science Standards (NGSS) in the United States emphasise the integration of engineering design in science instruction to promote scientific literacy and engineering design skills among students. As such, many engineering education programmes have developed curriculum materials that are being used in K-12 settings. However, little is known about the nature and extent to which engineering design skills outlined in NGSS are addressed in these K-12 engineering education programme curriculum materials. We analysed nine K-12 engineering education programmes for the nature and extent of engineering design skills coverage. Results show that developing possible solutions and actual designing of prototypes were the most highly covered engineering design skills; specification of clear goals, criteria, and constraints received medium coverage; defining and identifying an engineering problem, optimising the design solution, demonstrating how a prototype works, and making iterations to improve designs received low coverage. These trends were similar across grade levels and across discipline-specific curriculum materials. These results have implications for engineering design-integrated science teaching and learning in K-12 settings.
Group iterative methods for the solution of two-dimensional time-fractional diffusion equation
NASA Astrophysics Data System (ADS)
Balasim, Alla Tareq; Ali, Norhashidah Hj. Mohd.
2016-06-01
A variety of problems in science and engineering may be described by fractional partial differential equations (FPDE) in relation to space and/or time fractional derivatives. The difference between time fractional diffusion equations and standard diffusion equations lies primarily in the time derivative. Over the last few years, iterative schemes derived from the rotated finite difference approximation have been proven to work well in solving standard diffusion equations. However, their application to the time fractional diffusion counterpart is still yet to be investigated. In this paper, we present a preliminary study on the formulation and analysis of new explicit group iterative methods for solving a two-dimensional time fractional diffusion equation. These methods were derived from the standard and rotated Crank-Nicolson difference approximation formulas. Several numerical experiments were conducted to show the efficiency of the developed schemes in terms of CPU time and iteration number. At the request of all authors of the paper an updated version of this article was published on 7 July 2016. The original version supplied to AIP Publishing contained an error in Table 1 and References 15 and 16 were incomplete. These errors have been corrected in the updated and republished article.
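To indicate the kind of time-stepping involved, the sketch below applies the standard L1 discretization of the Caputo time-fractional derivative to a 1D diffusion equation with an implicit, backward-Euler-type spatial solve. It is a simpler relative of the paper's 2D Crank-Nicolson-based group schemes, with made-up grid parameters, and is included only to show how the fractional memory term enters each step.

    # L1 discretization of the Caputo derivative for 1D time-fractional
    # diffusion (backward-Euler-type spatial solve); parameters are made up.
    import numpy as np
    from math import gamma

    alpha, D = 0.7, 1.0
    nx, nt, L, T = 64, 100, 1.0, 0.5
    h, tau = L / nx, T / nt
    x = np.linspace(0.0, L, nx + 1)
    hist = [np.sin(np.pi * x)]                 # initial condition, zero BCs

    jj = np.arange(nt)
    b = (jj + 1) ** (1 - alpha) - jj ** (1 - alpha)   # L1 weights, b[0] = 1
    c = tau ** (-alpha) / gamma(2 - alpha)

    m = nx - 1                                 # interior nodes
    A = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
         + np.diag(np.ones(m - 1), -1)) / h**2
    lhs = c * np.eye(m) - D * A                # (c*b0*I - D*A)

    for n in range(nt):
        # Memory term: sum_{j=1..n} b_j (u^{n+1-j} - u^{n-j}) on the interior
        mem = np.zeros(m)
        for j in range(1, n + 1):
            mem += b[j] * (hist[n + 1 - j][1:-1] - hist[n - j][1:-1])
        u_new = np.zeros(nx + 1)
        u_new[1:-1] = np.linalg.solve(lhs, c * (hist[n][1:-1] - mem))
        hist.append(u_new)
    print(hist[-1].max())                      # peak amplitude after time T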
Armour Materials for the ITER Plasma Facing Components
NASA Astrophysics Data System (ADS)
Barabash, V.; Federici, G.; Matera, R.; Raffray, A. R.; ITER Home Teams,
The selection of the armour materials for the Plasma Facing Components (PFCs) of the International Thermonuclear Experimental Reactor (ITER) is a trade-off between multiple requirements derived from the unique features of a burning fusion plasma environment. The factors that affect the selection come primarily from the requirements of plasma performance (e.g., minimise impurity contamination in the confined plasma), engineering integrity, component lifetime (e.g., withstand thermal stresses, acceptable erosion, etc.) and safety (minimise tritium and radioactive dust inventories). The current selection in ITER is to use beryllium on the first-wall, upper baffle and on the port limiter surfaces, carbon fibre composites near the strike points of the divertor vertical target and tungsten elsewhere in the divertor and lower baffle modules. This paper provides the background for this selection vis-à-vis the operating parameters expected during normal and off-normal conditions. The reasons for the selection of the specific grades of armour materials are also described. The effects of the neutron irradiation on the properties of Be, W and carbon fibre composites at the expected ITER conditions are briefly reviewed. Critical issues are discussed together with the necessary future R&D.
Development and Application of an Integrated Approach toward NASA Airspace Systems Research
NASA Technical Reports Server (NTRS)
Barhydt, Richard; Fong, Robert K.; Abramson, Paul D.; Koenke, Ed
2008-01-01
The National Aeronautics and Space Administration's (NASA) Airspace Systems Program is contributing air traffic management research in support of the 2025 Next Generation Air Transportation System (NextGen). Contributions support research and development needs provided by the interagency Joint Planning and Development Office (JPDO). These needs generally call for integrated technical solutions that improve system-level performance and work effectively across multiple domains and planning time horizons. In response, the Airspace Systems Program is pursuing an integrated research approach and has adapted systems engineering best practices for application in a research environment. Systems engineering methods aim to enable researchers to methodically compare different technical approaches, consider system-level performance, and develop compatible solutions. Systems engineering activities are performed iteratively as the research matures. Products of this approach include a demand and needs analysis, system-level descriptions focusing on NASA research contributions, system assessment and design studies, and common system-level metrics, scenarios, and assumptions. Results from the first systems engineering iteration include a preliminary demand and needs analysis; a functional modeling tool; and initial system-level metrics, scenario characteristics, and assumptions. Demand and needs analysis results suggest that several advanced concepts can mitigate demand/capacity imbalances for NextGen, but fall short of enabling three-times current-day capacity at the nation's busiest airports and airspace. Current activities are focusing on standardizing metrics, scenarios, and assumptions, conducting system-level performance assessments of integrated research solutions, and exploring key system design interfaces.
U.S. Seismic Design Maps Web Application
NASA Astrophysics Data System (ADS)
Martinez, E.; Fee, J.
2015-12-01
The application computes earthquake ground motion design parameters compatible with the International Building Code and other seismic design provisions. It is the primary method for design engineers to obtain ground motion parameters for multiple building codes across the country. When designing new buildings and other structures, engineers around the country use the application. Users specify the design code of interest, location, and other parameters to obtain necessary ground motion information consisting of a high-level executive summary as well as detailed information including maps, data, and graphs. Results are formatted such that they can be directly included in a final engineering report. In addition to single-site analysis, the application supports a batch mode for simultaneous consideration of multiple locations. Finally, an application programming interface (API) is available which allows other application developers to integrate this application's results into larger applications for additional processing. Development on the application has proceeded in an iterative manner working with engineers through email, meetings, and workshops. Each iteration provided new features, improved performance, and usability enhancements. This development approach positioned the application to be integral to the structural design process and is now used to produce over 1800 reports daily. Recent efforts have enhanced the application to be a data-driven, mobile-first, responsive web application. Development is ongoing, and source code has recently been published into the open-source community on GitHub. Open-sourcing the code facilitates improved incorporation of user feedback to add new features ensuring the application's continued success.
Concise Review: Organ Engineering: Design, Technology, and Integration.
Kaushik, Gaurav; Leijten, Jeroen; Khademhosseini, Ali
2017-01-01
Engineering complex tissues and whole organs has the potential to dramatically impact translational medicine in several avenues. Organ engineering is a discipline that integrates biological knowledge of embryological development, anatomy, physiology, and cellular interactions with enabling technologies including biocompatible biomaterials and biofabrication platforms such as three-dimensional bioprinting. When engineering complex tissues and organs, core design principles must be taken into account, such as the structure-function relationship, biochemical signaling, mechanics, gradients, and spatial constraints. Technological advances in biomaterials, biofabrication, and biomedical imaging allow for in vitro control of these factors to recreate in vivo phenomena. Finally, organ engineering emerges as an integration of biological design and technical rigor. An overall workflow for organ engineering and guiding technology to advance biology as well as a perspective on necessary future iterations in the field is discussed. Stem Cells 2017;35:51-60. © 2016 AlphaMed Press.
NASA Astrophysics Data System (ADS)
Schroer, Christian G.; Seyrich, Martin; Kahnt, Maik; Botta, Stephan; Döhrmann, Ralph; Falkenberg, Gerald; Garrevoet, Jan; Lyubomirskiy, Mikhail; Scholz, Maria; Schropp, Andreas; Wittwer, Felix
2017-09-01
In recent years, ptychography has revolutionized x-ray microscopy in that it is able to overcome the diffraction limit of x-ray optics, pushing the spatial resolution limit down to a few nanometers. However, due to the weak interaction of x rays with matter, the detection of small features inside a sample requires a high coherent fluence on the sample, a high degree of mechanical stability, and a low background signal from the x-ray microscope. The x-ray scanning microscope PtyNAMi at PETRA III is designed for high-spatial-resolution 3D imaging with high sensitivity. The design concept is presented with a special focus on real-time metrology of the sample position during tomographic scanning microscopy.
Reducing Design Cycle Time and Cost Through Process Resequencing
NASA Technical Reports Server (NTRS)
Rogers, James L.
2004-01-01
In today's competitive environment, companies are under enormous pressure to reduce the time and cost of their design cycle. One method for reducing both time and cost is to develop an understanding of the flow of the design processes and the effects of the iterative subcycles that are found in complex design projects. Once these aspects are understood, the design manager can make decisions that take advantage of decomposition, concurrent engineering, and parallel processing techniques to reduce the total time and the total cost of the design cycle. One software tool that can aid in this decision-making process is the Design Manager's Aid for Intelligent Decomposition (DeMAID). The DeMAID software minimizes the feedback couplings that create iterative subcycles, groups processes into iterative subcycles, and decomposes the subcycles into a hierarchical structure. The real benefits of producing the best design in the least time and at a minimum cost are obtained from sequencing the processes in the subcycles.
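The core idea here (resequencing processes so that fewer dependencies point backwards and create iterative subcycles) can be illustrated with a toy design structure matrix. The Python sketch below counts feedback couplings and applies a simple greedy reordering; DeMAID's actual knowledge-based algorithms are considerably richer, and the 4-process coupling matrix is invented.

    # Toy design-structure-matrix resequencing: reduce feedback couplings
    # (dependencies on tasks scheduled later). The coupling matrix is invented.
    import numpy as np

    def feedback_count(dep, order):
        pos = {t: k for k, t in enumerate(order)}
        n = len(dep)
        # dep[i][j] == 1 means process i needs the output of process j
        return sum(1 for i in range(n) for j in range(n)
                   if dep[i][j] and pos[j] > pos[i])

    def resequence(dep):
        remaining, order = set(range(len(dep))), []
        while remaining:
            # Greedy: schedule next the task with fewest unmet dependencies.
            t = min(remaining, key=lambda i: sum(dep[i][j] for j in remaining))
            order.append(t)
            remaining.remove(t)
        return order

    dep = np.array([[0, 1, 0, 1],
                    [0, 0, 0, 0],
                    [1, 1, 0, 0],
                    [0, 1, 0, 0]])
    print(feedback_count(dep, [0, 1, 2, 3]))   # feedbacks as given: 2
    order = resequence(dep)
    print(order, feedback_count(dep, order))   # e.g. [1, 3, 0, 2] with 0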
NASA Technical Reports Server (NTRS)
Nakazawa, Shohei
1991-01-01
Formulations and algorithms implemented in the MHOST finite element program are discussed. The code uses a novel concept of the mixed iterative solution technique for efficient 3-D computations of turbine engine hot section components. The general framework of the variational formulation and solution algorithms, derived from the mixed three-field Hu-Washizu principle, is discussed. This formulation enables the use of nodal interpolation for coordinates, displacements, strains, and stresses. The algorithmic description of the mixed iterative method includes variations for quasi-static, transient dynamic and buckling analyses. The global-local analysis procedure referred to as subelement refinement is developed in the framework of the mixed iterative solution and is presented in detail. The numerically integrated isoparametric elements implemented in this framework are discussed. Methods to filter certain parts of the strain field and to project element-discontinuous quantities to the nodes are developed for a family of linear elements. Integration algorithms are described for the linear and nonlinear equations included in the MHOST program.
Shteynberg, David; Mendoza, Luis; Hoopmann, Michael R.; Sun, Zhi; Schmidt, Frank; Deutsch, Eric W.; Moritz, Robert L.
2016-01-01
Most shotgun proteomics data analysis workflows are based on the assumption that each fragment ion spectrum is explained by a single species of peptide ion isolated by the mass spectrometer; however, in reality mass spectrometers often isolate more than one peptide ion within the window of isolation that contributes to additional peptide fragment peaks in many spectra. We present a new tool called reSpect, implemented in the Trans-Proteomic Pipeline (TPP), that enables an iterative workflow whereby fragment ion peaks explained by a peptide ion identified in one round of sequence searching or spectral library search are attenuated based on the confidence of the identification, and then the altered spectrum is subjected to further rounds of searching. The reSpect tool is not implemented as a search engine, but rather as a post search engine processing step where only fragment ion intensities are altered. This enables the application of any search engine combination in the following iterations. Thus, reSpect is compatible with all other protein sequence database search engines as well as peptide spectral library search engines that are supported by the TPP. We show that while some datasets are highly amenable to chimeric spectrum identification and lead to additional peptide identification boosts of over 30% with as many as four different peptide ions identified per spectrum, datasets with narrow precursor ion selection only benefit from such processing at the level of a few percent. We demonstrate a technique that facilitates the determination of the degree to which a dataset would benefit from chimeric spectrum analysis. The reSpect tool is free and open source, provided within the TPP and available at the TPP website. PMID:26419769
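The attenuation step that reSpect performs can be pictured with a short sketch: peaks explained by the fragments of an identified peptide are scaled down in proportion to the identification confidence, and the residual spectrum goes to the next search round. The Python below is an illustration of that idea only, with invented m/z values and tolerance, not the TPP implementation.

    # Illustration of reSpect's idea: attenuate fragment peaks explained by an
    # identified peptide before the next search round. Values are invented.
    import numpy as np

    def attenuate(mz, intensity, explained_mz, prob, tol=0.02):
        # Scale peaks within `tol` of an explained fragment by (1 - prob), so
        # confident identifications remove more of the matched signal.
        out = intensity.copy()
        for fm in explained_mz:
            out[np.abs(mz - fm) <= tol] *= (1.0 - prob)
        return out

    mz = np.array([175.12, 262.14, 305.16, 390.20, 517.28])
    inten = np.array([1200.0, 800.0, 950.0, 400.0, 300.0])
    matched = [175.12, 390.20]                  # fragments matched in round one
    residual = attenuate(mz, inten, matched, prob=0.99)
    print(residual)    # matched peaks nearly zeroed; the rest go to round two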
Shteynberg, David; Mendoza, Luis; Hoopmann, Michael R; Sun, Zhi; Schmidt, Frank; Deutsch, Eric W; Moritz, Robert L
2015-11-01
Most shotgun proteomics data analysis workflows are based on the assumption that each fragment ion spectrum is explained by a single species of peptide ion isolated by the mass spectrometer; however, in reality mass spectrometers often isolate more than one peptide ion within the window of isolation that contribute to additional peptide fragment peaks in many spectra. We present a new tool called reSpect, implemented in the Trans-Proteomic Pipeline (TPP), which enables an iterative workflow whereby fragment ion peaks explained by a peptide ion identified in one round of sequence searching or spectral library search are attenuated based on the confidence of the identification, and then the altered spectrum is subjected to further rounds of searching. The reSpect tool is not implemented as a search engine, but rather as a post-search engine processing step where only fragment ion intensities are altered. This enables the application of any search engine combination in the iterations that follow. Thus, reSpect is compatible with all other protein sequence database search engines as well as peptide spectral library search engines that are supported by the TPP. We show that while some datasets are highly amenable to chimeric spectrum identification and lead to additional peptide identification boosts of over 30% with as many as four different peptide ions identified per spectrum, datasets with narrow precursor ion selection only benefit from such processing at the level of a few percent. We demonstrate a technique that facilitates the determination of the degree to which a dataset would benefit from chimeric spectrum analysis. The reSpect tool is free and open source, provided within the TPP and available at the TPP website.
NASA Astrophysics Data System (ADS)
Shteynberg, David; Mendoza, Luis; Hoopmann, Michael R.; Sun, Zhi; Schmidt, Frank; Deutsch, Eric W.; Moritz, Robert L.
2015-11-01
Most shotgun proteomics data analysis workflows are based on the assumption that each fragment ion spectrum is explained by a single species of peptide ion isolated by the mass spectrometer; however, in reality mass spectrometers often isolate more than one peptide ion within the window of isolation that contribute to additional peptide fragment peaks in many spectra. We present a new tool called reSpect, implemented in the Trans-Proteomic Pipeline (TPP), which enables an iterative workflow whereby fragment ion peaks explained by a peptide ion identified in one round of sequence searching or spectral library search are attenuated based on the confidence of the identification, and then the altered spectrum is subjected to further rounds of searching. The reSpect tool is not implemented as a search engine, but rather as a post-search engine processing step where only fragment ion intensities are altered. This enables the application of any search engine combination in the iterations that follow. Thus, reSpect is compatible with all other protein sequence database search engines as well as peptide spectral library search engines that are supported by the TPP. We show that while some datasets are highly amenable to chimeric spectrum identification and lead to additional peptide identification boosts of over 30% with as many as four different peptide ions identified per spectrum, datasets with narrow precursor ion selection only benefit from such processing at the level of a few percent. We demonstrate a technique that facilitates the determination of the degree to which a dataset would benefit from chimeric spectrum analysis. The reSpect tool is free and open source, provided within the TPP and available at the TPP website.
Inverse Problems, Control and Modeling in the Presence of Uncertainty
2007-10-30
using a Kelvin model, CRSC-TR07-08, March, 2007; IEEE Transactions on Biomedical Engineering, submitted. [P18] K. Ito, Q. Huynh and J. Toivanen, A fast...Science and Engineering, Springer (2006), 595-602. [P19] K. Ito and J. Toivanen, A fast iterative solver for scattering by elastic objects in layered...and N.G. Medhin, "A stick-slip/Rouse hybrid model", CRSC-TR05-28, August, 2005. [P23] H.T. Banks, A. F. Karr, H. K. Nguyen, and J. R. Samuels, Jr
3D Printing: Exploring Capabilities
ERIC Educational Resources Information Center
Samuels, Kyle; Flowers, Jim
2015-01-01
As 3D printers become more affordable, schools are using them in increasing numbers. They fit well with the emphasis on product design in technology and engineering education, allowing students to create high-fidelity physical models to see and test different iterations in their product designs. They may also help students to "think in three…
Learning Biological Networks via Bootstrapping with Optimized GO-based Gene Similarity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taylor, Ronald C.; Sanfilippo, Antonio P.; McDermott, Jason E.
2010-08-02
Microarray gene expression data provide a unique information resource for learning biological networks using "reverse engineering" methods. However, there are a variety of cases in which we know which genes are involved in a given pathology of interest, but we do not have enough experimental evidence to support the use of fully-supervised/reverse-engineering learning methods. In this paper, we explore a novel semi-supervised approach in which biological networks are learned from a reference list of genes and a partial set of links for these genes extracted automatically from PubMed abstracts, using a knowledge-driven bootstrapping algorithm. We show how new relevant links across genes can be iteratively derived using a gene similarity measure based on the Gene Ontology that is optimized on the input network at each iteration. We describe an application of this approach to the TGFB pathway as a case study and show how the ensuing results prove the feasibility of the approach as an alternate or complementary technique to fully supervised methods.
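A toy version of the bootstrapping loop reads as follows: starting from a few seed links, gene pairs whose similarity clears a threshold re-estimated from the current network are added at each iteration. In the sketch below the GO-based semantic similarity is replaced by a random symmetric matrix purely for illustration.

    # Toy bootstrapping loop: add gene pairs whose similarity clears a
    # threshold re-estimated from the current network. Similarities are random
    # stand-ins for GO-based semantic similarity.
    import numpy as np

    rng = np.random.default_rng(2)
    n = 6
    sim = rng.random((n, n))
    sim = (sim + sim.T) / 2.0                   # symmetric similarity matrix
    links = {(0, 1), (2, 3)}                    # seed links from text mining

    for _ in range(3):                          # bootstrap iterations
        # "Optimize" the threshold on the input network: mean link similarity.
        thr = np.mean([sim[i, j] for i, j in links])
        new = {(i, j) for i in range(n) for j in range(i + 1, n)
               if sim[i, j] >= thr and (i, j) not in links}
        if not new:
            break
        links |= new
    print(sorted(links))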
Sánchez, Benjamín J; Pérez-Correa, José R; Agosin, Eduardo
2014-09-01
Dynamic flux balance analysis (dFBA) has been widely employed in metabolic engineering to predict the effect of genetic modifications and environmental conditions on the cell's metabolism during dynamic cultures. However, the importance of the model parameters used in these methodologies has not been properly addressed. Here, we present a novel and simple procedure to identify dFBA parameters that are relevant for model calibration. The procedure uses metaheuristic optimization and pre/post-regression diagnostics, iteratively fixing the model parameters that do not have a significant role. We evaluated this protocol in a Saccharomyces cerevisiae dFBA framework calibrated for aerobic fed-batch and anaerobic batch cultivations. The model structures achieved have only significant, sensitive and uncorrelated parameters and are able to calibrate different experimental data. We show that consumption, suboptimal growth and production rates are more useful for calibrating dynamic S. cerevisiae metabolic models than Boolean gene expression rules, biomass requirements and ATP maintenance. Copyright © 2014 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
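The iterative fix-and-recalibrate procedure can be sketched as follows; the sensitivity diagnostic and the fit step are simplified placeholders for the metaheuristic optimization and pre/post-regression diagnostics the abstract describes, and all names are illustrative.

```python
# Sketch: after each fit, parameters whose (assumed) sensitivity is below a
# threshold are fixed at nominal values and the remaining set is re-estimated.
def reduce_parameters(params, sensitivity, fit, tol=0.05, max_rounds=10):
    free = dict(params)
    for _ in range(max_rounds):
        fitted = fit(free)                     # user-supplied calibration step
        drop = [k for k in free if abs(sensitivity(k, fitted)) < tol]
        if not drop:
            break
        for k in drop:
            free.pop(k)                        # fix insignificant parameter
    return fitted, free

fit = lambda p: p                              # trivial stand-in calibration
sens = lambda k, f: {"vmax": 0.8, "km": 0.01}.get(k, 0.0)
print(reduce_parameters({"vmax": 1.0, "km": 0.5}, sens, fit))
```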
Van Daele, Timothy; Gernaey, Krist V; Ringborg, Rolf H; Börner, Tim; Heintz, Søren; Van Hauwermeiren, Daan; Grey, Carl; Krühne, Ulrich; Adlercreutz, Patrick; Nopens, Ingmar
2017-09-01
The aim of model calibration is to estimate unique parameter values from available experimental data, here applied to a biocatalytic process. The traditional approach of first gathering data and then performing a model calibration is inefficient, since the information gathered during experimentation is not actively used to optimize the experimental design. By applying an iterative robust model-based optimal experimental design, the limited amount of data collected is used to design additional informative experiments. The algorithm is used here to calibrate the initial reaction rate of an ω-transaminase-catalyzed reaction more accurately. The parameter confidence region estimated from the Fisher Information Matrix is compared with the likelihood confidence region, which is more accurate but also computationally more expensive. As a result, an important deviation between both approaches is found, confirming that linearization methods should be applied with care for nonlinear models. © 2017 American Institute of Chemical Engineers Biotechnol. Prog., 33:1278-1293, 2017. © 2017 American Institute of Chemical Engineers.
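A compact sketch of a Fisher-Information-Matrix-based confidence interval, the linearized quantity the abstract compares against likelihood regions, is given below; the rate model, noise level, and the 1.96 factor (approximate 95% normal quantile) are illustrative assumptions.

```python
# FIM-based half-widths via finite-difference sensitivities; a linearization,
# which is exactly why it can deviate from likelihood regions for nonlinear models.
import numpy as np

def fim_confidence(model, theta, t, sigma, eps=1e-6):
    theta = np.asarray(theta, float)
    S = np.empty((len(t), len(theta)))
    for j in range(len(theta)):
        d = np.zeros_like(theta); d[j] = eps
        S[:, j] = (model(theta + d, t) - model(theta - d, t)) / (2 * eps)
    fim = S.T @ S / sigma**2
    return 1.96 * np.sqrt(np.diag(np.linalg.inv(fim)))   # approx 95% half-widths

model = lambda th, t: th[0] * t / (th[1] + t)            # Michaelis-Menten-like rate
t = np.linspace(0.1, 5, 20)
print(fim_confidence(model, [2.0, 0.8], t, sigma=0.05))
```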
Deployment of e-health services - a business model engineering strategy.
Kijl, Björn; Nieuwenhuis, Lambert J M; Huis in 't Veld, Rianne M H A; Hermens, Hermie J; Vollenbroek-Hutten, Miriam M R
2010-01-01
We designed a business model for deploying a myofeedback-based teletreatment service. An iterative and combined qualitative and quantitative action design approach was used for developing the business model and the related value network. Insights from surveys, desk research, expert interviews, workshops and quantitative modelling were combined to produce the first business model and then to refine it in three design cycles. The business model engineering strategy provided important insights which led to an improved, more viable and feasible business model and related value network design. Based on this experience, we conclude that the process of early stage business model engineering reduces risk and produces substantial savings in costs and resources related to service deployment.
Geo-Engineering through Internet Informatics (GEMINI)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watney, W. Lynn; Doveton, John H.; Victorine, John R.
GEMINI will resolve reservoir parameters that control well performance; characterize subtle reservoir properties important in understanding and modeling hydrocarbon pore volume and fluid flow; expedite recognition of bypassed, subtle, and complex oil and gas reservoirs at regional and local scales; differentiate commingled reservoirs; build integrated geologic and engineering models based on real-time, iterative solutions to evaluate reservoir management options for improved recovery; provide practical tools to assist the geoscientist, engineer, and petroleum operator in making their tasks more efficient and effective; enable evaluations to be made at different scales, ranging from individual well, through lease and field, to play and region (scalable information infrastructure); and provide training and technology transfer to evaluate capabilities of the client.
Nozzle Numerical Analysis Of The Scimitar Engine
NASA Astrophysics Data System (ADS)
Battista, F.; Marini, M.; Cutrone, L.
2011-05-01
This work describes part of the activities on the LAPCAT-II A2 vehicle: starting from the available conceptual vehicle design and the related pre-cooled turbo-ramjet engine called SCIMITAR, the assumptions made for the performance figures of different components during the iteration process within LAPCAT-I are assessed in more detail. This paper presents a numerical analysis aimed at optimizing the nozzle contour of the LAPCAT A2 SCIMITAR engine designed by Reaction Engines Ltd. (REL) (see Figure 1). In particular, the nozzle shape optimization process is presented for cruise conditions. All computations have been carried out with the CIRA C3NS code under non-equilibrium conditions. The effect of considering detailed or reduced chemical kinetic schemes has been analyzed, with a particular focus on the production of pollutants. An analysis of engine performance parameters, such as thrust and combustion efficiency, has also been carried out.
Understanding Biological Regulation Through Synthetic Biology.
Bashor, Caleb J; Collins, James J
2018-05-20
Engineering synthetic gene regulatory circuits proceeds through iterative cycles of design, building, and testing. Initial circuit designs must rely on often-incomplete models of regulation established by fields of reductive inquiry-biochemistry and molecular and systems biology. As differences in designed and experimentally observed circuit behavior are inevitably encountered, investigated, and resolved, each turn of the engineering cycle can force a resynthesis in understanding of natural network function. Here, we outline research that uses the process of gene circuit engineering to advance biological discovery. Synthetic gene circuit engineering research has not only refined our understanding of cellular regulation but furnished biologists with a toolkit that can be directed at natural systems to exact precision manipulation of network structure. As we discuss, using circuit engineering to predictively reorganize, rewire, and reconstruct cellular regulation serves as the ultimate means of testing and understanding how cellular phenotype emerges from systems-level network function.
Tritium proof-of-principle pellet injector: Phase 2
NASA Astrophysics Data System (ADS)
Fisher, P. W.; Gouge, M. J.
1995-03-01
As part of the International Thermonuclear Experimental Reactor (ITER) plasma fueling development program, Oak Ridge National Laboratory (ORNL) has fabricated a pellet injection system to test the mechanical and thermal properties of extruded tritium. This repeating, single-stage, pneumatic injector, called the Tritium Proof-of-Principle Phase 2 (TPOP-2) Pellet Injector, has a piston-driven mechanical extruder and is designed to extrude hydrogenic pellets sized for the ITER device. The TPOP-2 program has the following development goals: evaluate the feasibility of extruding tritium and DT mixtures for use in future pellet injection systems; determine the mechanical and thermal properties of tritium and DT extrusions; integrate, test, and evaluate the extruder in a repeating, single-stage light gas gun sized for the ITER application (pellet diameter approximately 7-8 mm); evaluate options for recycling propellant and extruder exhaust gas; and evaluate the operability and reliability of ITER-prototypical fueling systems in an environment of significant tritium inventory requiring secondary and room containment systems. In initial tests with deuterium feed at ORNL, up to thirteen pellets have been extruded at rates up to 1 Hz and accelerated to speeds of order 1.0-1.1 km/s using hydrogen propellant gas at a supply pressure of 65 bar. The pellets are typically 7.4 mm in diameter and up to 11 mm in length and are the largest cryogenic pellets produced by the fusion program to date. These pellets represent about an 11% density perturbation to ITER. Hydrogenic pellets will be used in ITER to sustain the fusion power in the plasma core and may be crucial in reducing first-wall tritium inventories by a process called isotopic fueling, in which tritium-rich pellets fuel the burning plasma core and deuterium gas fuels the edge.
A Phenomenographic Investigation of the Ways Engineering Students Experience Innovation
NASA Astrophysics Data System (ADS)
Fila, Nicholas David
Innovation has become an important phenomenon in engineering and engineering education. By developing novel, feasible, viable, and valued solutions to complex technical and human problems, engineers support the economic competitiveness of organizations, make a difference in the lives of users and other stakeholders, drive societal and scientific progress, and obtain key personal benefits. Innovation is also a complex phenomenon. It occurs across a variety of contexts and domains, encompasses numerous phases and activities, and requires unique competency profiles. Despite this complexity, many studies in engineering education focus on specific aspects (e.g., engineering students' abilities to generate original concepts during idea generation), and we still know little about the variety of ways engineering students approach and understand innovation. This study addresses that gap by asking: 1. What are the qualitatively different ways engineering students experience innovation during their engineering projects? 2. What are the structural relationships between the ways engineering students experience innovation? This study utilized phenomenography, a qualitative research method, to explore the above research questions. Thirty-three engineering students were recruited to ensure thorough coverage along four factors suggested by the literature to support differences related to innovation: engineering project experience, academic major, year in school, and gender. Each participant completed a 1-2 hour, semi-structured interview that focused on experiences with and conceptions of innovation. Whole transcripts were analyzed using an eight-stage, iterative, and comparative approach meant to identify a limited number of categories of description (composite ways of experiencing innovation comprised of the experiences of several participants), and the structural relationships between these categories. Phenomenographic analysis revealed eight categories of description that were structured in a semi-hierarchical, two-dimensional outcome space. The first four categories demonstrated a progression toward greater comprehensiveness in both process and focus dimensions. In the process dimension, subsequent categories added increasingly preliminary innovation phases: idea realization, idea generation, problem scoping, and problem finding. In the focus dimension, subsequent categories added key areas engineers considered during innovation: technical, human, and enterprise. The final four categories each incorporated all previous process phases and focus areas, but prioritized different focus areas in sophisticated ways and acknowledged a macro-iterative cycle, i.e., an understanding of how the processes within a single innovation project built upon and contributed to past and future innovation projects. These results demonstrate important differences between engineering students and suggest how they may come to experience innovation in increasingly comprehensive ways. A framework based on the results can be used by educators and researchers to support more robust educational offerings and nuanced research designs that reflect these differences.
Evaluation of coupling approaches for thermomechanical simulations
Novascone, S. R.; Spencer, B. W.; Hales, J. D.; ...
2015-08-10
Many problems of interest, particularly in the nuclear engineering field, involve coupling between the thermal and mechanical response of an engineered system. The strength of the two-way feedback between the thermal and mechanical solution fields can vary significantly depending on the problem. Contact problems exhibit a particularly high degree of two-way feedback between those fields. This paper describes and demonstrates the application of a flexible simulation environment that permits the solution of coupled physics problems using either a tightly coupled or a loosely coupled approach. In the tight coupling approach, Newton iterations include the coupling effects between all physics, while in the loosely coupled approach the individual physics models are solved independently and fixed-point iterations are performed until the coupled system is converged. These approaches are applied to simple demonstration problems and to realistic nuclear engineering applications. The demonstration problems consist of single- and multi-domain thermomechanics with and without thermal and mechanical contact. Simulations of a reactor pressure vessel under pressurized thermal shock conditions and a simulation of light water reactor fuel are also presented. Problems that include thermal and mechanical contact, such as the contact between the fuel and cladding in the fuel simulation, exhibit much stronger two-way feedback between the thermal and mechanical solutions and, as a result, are better solved using a tight coupling strategy.
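The loosely coupled (fixed-point) strategy can be illustrated with a scalar toy problem; the two "solvers" below stand in for full thermal and mechanical PDE solves and are assumptions for illustration, not taken from the paper.

```python
# Toy fixed-point (loose) coupling between a thermal and a mechanical solve.
# Tight coupling would instead solve both fields in one monolithic Newton step.
def loose_coupling(T0=300.0, u0=0.0, tol=1e-10, max_iter=100):
    T, u = T0, u0
    for k in range(max_iter):
        T_new = 300.0 + 50.0 * u          # "thermal solve" given displacement
        u_new = 1e-3 * (T_new - 300.0)    # "mechanical solve" given temperature
        if abs(T_new - T) < tol and abs(u_new - u) < tol:
            return T_new, u_new, k + 1
        T, u = T_new, u_new
    raise RuntimeError("fixed-point iteration did not converge")

print(loose_coupling())   # converges quickly here because the feedback is weak
```

With strong two-way feedback the contraction factor of this fixed-point map approaches one and the iteration stalls, which is the regime where the paper finds tight coupling preferable.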
Computational methods of robust controller design for aerodynamic flutter suppression
NASA Technical Reports Server (NTRS)
Anderson, L. R.
1981-01-01
The development of Riccati iteration, a tool for the design and analysis of linear control systems, is examined. First, Riccati iteration is applied to the problem of pole placement and order reduction in two-time-scale control systems. Order reduction, yielding a good approximation to the original system, is demonstrated using a 16th-order linear model of a turbofan engine. Next, a numerical method for solving the Riccati equation is presented and demonstrated for a set of eighth-order random examples. A literature review of robust controller design methods follows, which includes a number of methods for reducing the trajectory and performance-index sensitivity in linear regulators. Lastly, robust controller design for large parameter variations is discussed.
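One standard form of Riccati iteration is the Kleinman/Newton scheme, in which each step solves a Lyapunov equation for the current stabilizing gain; the sketch below assumes SciPy, a stabilizing initial gain, and toy matrices, and is illustrative rather than the report's specific method.

```python
# Kleinman-style Riccati iteration: solve a Lyapunov equation per step,
# then update the feedback gain; converges to the LQR solution.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

def riccati_iteration(A, B, Q, R, K0, iters=20):
    K = K0
    for _ in range(iters):
        Ak = A - B @ K
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))  # Ak'P + P Ak = -(Q+K'RK)
        K = np.linalg.solve(R, B.T @ P)
    return P, K

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # stable, so K0 = 0 is stabilizing
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])
P, K = riccati_iteration(A, B, Q, R, K0=np.zeros((1, 2)))
print(K)
```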
ERIC Educational Resources Information Center
Hamilton, Eric; Lesh, Richard; Lester, Frank; Brilleslyper, Michael
2008-01-01
This article introduces Model-Eliciting Activities (MEAs) as a form of case study team problem-solving. MEA design focuses on eliciting from students conceptual models that they iteratively revise in problem-solving. Though developed by mathematics education researchers to study the evolution of mathematical problem-solving expertise in middle…
An application generator for rapid prototyping of Ada real-time control software
NASA Technical Reports Server (NTRS)
Johnson, Jim; Biglari, Haik; Lehman, Larry
1990-01-01
The need to increase engineering productivity and decrease software life cycle costs in real-time system development establishes a motivation for a method of rapid prototyping. The design by iterative rapid prototyping technique is described. A tool which facilitates such a design methodology for the generation of embedded control software is described.
A generalized computer code for developing dynamic gas turbine engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.
1984-01-01
This paper describes DIGTEM (digital turbofan engine model), a computer program that simulates two-spool, two-stream (turbofan) engines. DIGTEM was developed to support the development of a real-time, multiprocessor-based engine simulator being designed at the Lewis Research Center. The turbofan engine model in DIGTEM contains steady-state performance maps for all the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. DIGTEM features an implicit integration scheme for integrating stiff systems and trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients are generated by defining the engine inputs as functions of time in a user-written subroutine (TMRSP). Closed-loop controls can also be simulated. DIGTEM is generalized in the aerothermodynamic treatment of components. This feature, along with DIGTEM's trimming at a design point, makes it a very useful tool for developing a model of a specific turbofan engine.
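The trimming idea, choosing correction coefficients so the dynamic (balance) equations are zero at the design point and then holding them fixed off-design, can be sketched with toy residuals; the update rule and residual functions below are illustrative assumptions, not DIGTEM's actual equations.

```python
# Sketch of design-point trimming: find coefficients c so residual(x_design, c) ~ 0
# with a simple damped fixed-point update; residuals are toy balance equations.
def trim_corrections(residual, design_state, n_coeff, lr=0.5, iters=200):
    c = [1.0] * n_coeff
    for _ in range(iters):
        r = residual(design_state, c)
        c = [ci - lr * ri for ci, ri in zip(c, r)]
    return c

# Toy balances: e.g. spool power mismatch and duct momentum mismatch at design.
res = lambda x, c: [c[0] * x[0] - 1.0, c[1] * x[1] - 2.0]
print(trim_corrections(res, design_state=[1.1, 0.9], n_coeff=2))
```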
DMD-based quantitative phase microscopy and optical diffraction tomography
NASA Astrophysics Data System (ADS)
Zhou, Renjie
2018-02-01
Digital micromirror devices (DMDs), which offer high speed and a high degree of freedom in steering light illumination, have been increasingly applied to optical microscopy systems in recent years. Lately, we introduced DMDs into digital holography to enable new imaging modalities and break existing imaging limitations. In this paper, we first present our progress in using DMDs to demonstrate laser-illumination Fourier ptychographic microscopy (FPM) with shot-noise-limited detection. After that, we present a novel common-path quantitative phase microscopy (QPM) system based on a DMD. Building on those early developments, a DMD-based high-speed optical diffraction tomography (ODT) system has recently been demonstrated, and its results will also be presented. This ODT system achieves video-rate 3D refractive-index imaging, which can potentially enable observation of high-speed 3D structural changes in samples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Junjing; Vine, David J.; Chen, Si
X-ray microscopy can be used to image whole, unsectioned cells in their native hydrated state. It complements the higher resolution of electron microscopy for submicrometer-thick specimens, and the molecule-specific imaging capabilities of fluorescence light microscopy. We describe here the first use of fast, continuous x-ray scanning of frozen hydrated cells for simultaneous sub-20 nm resolution ptychographic transmission imaging with high contrast, and sub-100 nm resolution deconvolved x-ray fluorescence imaging of diffusible and bound ions at native concentrations, without the need to add specific labels. Here, by working with cells that have been rapidly frozen without the use of chemical fixatives, and imaging them under cryogenic conditions, we are able to obtain images with well-preserved structural and chemical composition, and sufficient stability against radiation damage to allow multiple images to be obtained with no observable change.
Imaging photonic crystals using hemispherical digital condensers and phase-recovery techniques.
Alotaibi, Maged; Skinner-Ramos, Sueli; Farooq, Hira; Alharbi, Nouf; Alghasham, Hawra; de Peralta, Luis Grave
2018-05-10
We describe experiments where Fourier ptychographic microscopy (FPM) and dual-space microscopy (DSM) are implemented for imaging photonic crystals using a hemispherical digital condenser (HDC). Phase-recovery imaging simulations show that both techniques should be able to image photonic crystals with a period below the Rayleigh resolution limit. However, after processing the experimental images using both phase-recovery algorithms, we found that DSM can, but FPM cannot, image periodic structures with a period below the diffraction limit. We studied the origin of this apparent contradiction between simulations and experiments, and we concluded that the occurrence of unwanted reflections in the HDC is the source of the apparent failure of FPM. We thereafter solved the problem of reflections by using a single-directional illumination source and showed that FPM can image photonic crystals with a period below the Rayleigh resolution limit.
Low-dose, high-resolution and high-efficiency ptychography at STXM beamline of SSRF
NASA Astrophysics Data System (ADS)
Xu, Zijian; Wang, Chunpeng; Liu, Haigang; Tao, Xulei; Tai, Renzhong
2017-06-01
Ptychography is a diffraction-based X-ray microscopy method that can image extended samples quantitatively while removing the resolution limit imposed by image-forming optical elements. As a natural extension of the scanning transmission X-ray microscopy (STXM) imaging method, we developed a soft X-ray ptychographic coherent diffraction imaging (PCDI) method at the STXM endstation of the BL08U beamline of the Shanghai Synchrotron Radiation Facility. Compared to traditional STXM imaging, the new PCDI method delivers significantly lower dose, higher resolution and higher efficiency on our platform. In the demonstration experiments shown here, a spatial resolution of sub-10 nm was obtained for a gold nanowire sample, much better than the 30 nm resolution limit of the STXM method, while the radiation dose is only 1/12 of that of STXM.
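For concreteness, a single-position PIE-style object update (in the widely used ePIE form) is sketched below; a full PCDI reconstruction also scans positions and updates the probe, and the synthetic data here are only for illustration.

```python
# Minimal single-position ePIE-style object update: enforce the measured
# Fourier modulus, then update the object with a probe-weighted step.
import numpy as np

def pie_update(obj, probe, measured_amp, alpha=1.0):
    psi = obj * probe                                  # exit wave at this position
    Psi = np.fft.fft2(psi)
    Psi = measured_amp * np.exp(1j * np.angle(Psi))    # replace modulus, keep phase
    psi_new = np.fft.ifft2(Psi)
    step = alpha * np.conj(probe) / (np.abs(probe) ** 2).max()
    return obj + step * (psi_new - psi)                # ePIE object update

rng = np.random.default_rng(0)
obj = np.exp(1j * rng.uniform(0, 0.5, (32, 32)))
probe = np.ones((32, 32), complex)
measured = np.abs(np.fft.fft2(obj * probe))
print(np.abs(pie_update(obj, probe, measured) - obj).max())  # ~0: data consistent
```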
Cell-free synthetic biology for in vitro prototype engineering.
Moore, Simon J; MacDonald, James T; Freemont, Paul S
2017-06-15
Cell-free transcription-translation is an expanding field in synthetic biology as a rapid prototyping platform for blueprinting the design of synthetic biological devices. Exemplar efforts include translation of prototype designs into medical test kits for on-site identification of viruses (Zika and Ebola), while gene circuit cascades can be tested, debugged and re-designed within rapid turnover times. Coupled with mathematical modelling, this discipline lends itself towards the precision engineering of new synthetic life. The next stages of cell-free look set to unlock new microbial hosts that remain slow to engineer and unsuited to rapid iterative design cycles. It is hoped that the development of such systems will provide new tools to aid the transition from cell-free prototype designs to functioning synthetic genetic circuits and engineered natural product pathways in living cells. © 2017 The Author(s).
Cell-free synthetic biology for in vitro prototype engineering
Moore, Simon J.; MacDonald, James T.
2017-01-01
Cell-free transcription–translation is an expanding field in synthetic biology as a rapid prototyping platform for blueprinting the design of synthetic biological devices. Exemplar efforts include translation of prototype designs into medical test kits for on-site identification of viruses (Zika and Ebola), while gene circuit cascades can be tested, debugged and re-designed within rapid turnover times. Coupled with mathematical modelling, this discipline lends itself towards the precision engineering of new synthetic life. The next stages of cell-free look set to unlock new microbial hosts that remain slow to engineer and unsuited to rapid iterative design cycles. It is hoped that the development of such systems will provide new tools to aid the transition from cell-free prototype designs to functioning synthetic genetic circuits and engineered natural product pathways in living cells. PMID:28620040
Kalman Filter for Calibrating a Telescope Focal Plane
NASA Technical Reports Server (NTRS)
Kang, Bryan; Bayard, David
2006-01-01
The instrument-pointing frame (IPF) Kalman filter, and an algorithm that implements this filter, have been devised for calibrating the focal plane of a telescope. As used here, calibration signifies, more specifically, a combination of measurements and calculations directed toward ensuring accuracy in aiming the telescope and determining the locations of objects imaged in various arrays of photodetectors in instruments located on the focal plane. The IPF Kalman filter was originally intended for application to a spaceborne infrared astronomical telescope, but can also be applied to other spaceborne and ground-based telescopes. In the traditional approach to calibration of a telescope, (1) one team of experts concentrates on estimating parameters (e.g., pointing alignments and gyroscope drifts) that are classified as being of primarily an engineering nature, (2) another team of experts concentrates on estimating calibration parameters (e.g., plate scales and optical distortions) that are classified as being primarily of a scientific nature, and (3) the two teams repeatedly exchange data in an iterative process in which each team refines its estimates with the help of the data provided by the other team. This iterative process is inefficient and uneconomical because it is time-consuming and entails the maintenance of two survey teams and the development of computer programs specific to the requirements of each team. Moreover, theoretical analysis reveals that the engineering/science iterative approach is not optimal in that it does not yield the best estimates of focal-plane parameters and, depending on the application, may not even enable convergence toward a set of estimates.
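The joint-estimation idea, carrying both engineering and science parameters in a single filter state rather than iterating between two teams, can be sketched with one linear Kalman measurement update; the state ordering, sensitivity matrix, and noise values below are toy assumptions, not the IPF filter itself.

```python
# One linear Kalman update over a stacked [engineering, science] parameter vector.
import numpy as np

def kalman_update(x, P, H, z, R):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

x = np.zeros(2)              # [pointing bias (engineering), plate-scale error (science)]
P = np.eye(2) * 1e-2
H = np.array([[1.0, 0.3]])   # one star centroid senses both parameters at once
x, P = kalman_update(x, P, H, z=np.array([0.004]), R=np.array([[1e-6]]))
print(x)
```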
Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization
NASA Technical Reports Server (NTRS)
Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)
2002-01-01
In this work, we have focused on fast bound methods for large-scale simulation with application to engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to, and trustworthy within, the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation, iterative solution techniques, and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these briefly below, with a pointer to an Appendix which describes, in some detail, the current "state of the art."
NASA Astrophysics Data System (ADS)
Agostinetti, P.; Antoni, V.; Cavenago, M.; Chitarin, G.; Pilan, N.; Marcuzzi, D.; Serianni, G.; Veltri, P.
2011-09-01
Consorzio RFX in Padova is currently using a comprehensive set of numerical and analytical codes for the physics and engineering design of the SPIDER (Source for Production of Ion of Deuterium Extracted from RF plasma) and MITICA (Megavolt ITER Injector Concept Advancement) experiments, planned to be built at Consorzio RFX. This paper presents a set of studies on different possible geometries for the MITICA accelerator, with the objective of comparing different design concepts and choosing the most suitable one (or ones) to be further developed and possibly adopted in the experiment. Different design solutions are discussed and compared, taking into account their advantages and drawbacks from both the physics and engineering points of view.
NASA Technical Reports Server (NTRS)
Vonderesch, A. H.
1972-01-01
A second iteration of the program baseline configuration and cost for the solid propellant rocket engines used with the space shuttle booster system is presented. The purpose of the study was to ensure that total program costs were complete and to review areas where costs might be overly conservative and could be reduced. Labor and material were analyzed in more depth, more definition was prepared to separate recurring from nonrecurring costs, and the operations portions of the engine and stage were separated into more identifiable activities.
Application of IPAD to missile design
NASA Technical Reports Server (NTRS)
Santa, J. E.; Whiting, T. R.
1974-01-01
The application of an integrated program for aerospace-vehicle design (IPAD) to the design of a tactical missile is examined. The feasibility of modifying a proposed IPAD system for aircraft design work for use in missile design is evaluated. The tasks, cost, and schedule for the modification are presented. The basic engineering design process is described, explaining how missile design is achieved through iteration of six logical problem solving functions throughout the system studies, preliminary design, and detailed design phases of a new product. Existing computer codes used in various engineering disciplines are evaluated for their applicability to IPAD in missile design.
Integration of rocket turbine design and analysis through computer graphics
NASA Technical Reports Server (NTRS)
Hsu, Wayne; Boynton, Jim
1988-01-01
An interactive approach with engineering computer graphics is used to integrate the design and analysis processes of a rocket engine turbine into a progressive and iterative design procedure. The processes are interconnected through pre- and postprocessors. Graphics are used for blade profile generation, blade stacking, finite element generation, and presentation of analysis results in color. Steps of the design process discussed include pitch-line design, axisymmetric hub-to-tip meridional design, and quasi-three-dimensional analysis. The viscous two- and three-dimensional analysis codes are executed after acceptable designs are achieved and estimates of initial losses are confirmed.
Defining Gas Turbine Engine Performance Requirements for the Large Civil TiltRotor (LCTR2)
NASA Technical Reports Server (NTRS)
Snyder, Christopher A.
2013-01-01
Defining specific engine requirements is a critical part of identifying technologies and operational models for potential future rotary wing vehicles. NASA's Fundamental Aeronautics Program, Subsonic Rotary Wing Project has identified the Large Civil TiltRotor (LCTR) as the configuration to best meet technology goals. This notional vehicle concept has evolved with more clearly defined mission and operational requirements to the LCTR-iteration 2 (LCTR2). This paper reports on efforts to further review and refine the LCTR2 analyses to ascertain specific engine requirements and propulsion sizing criteria. The baseline mission and other design or operational requirements are reviewed. Analysis tools are described to help understand their interactions and underlying assumptions. Various design and operational conditions are presented and explained for their contribution to defining operational and engine requirements. These identified engine requirements are discussed to suggest which are most critical to the engine sizing and operation. The most-critical engine requirements are compared to in-house NASA engine simulations to try to ascertain which operational requirements define engine requirements versus points within the available engine operational capability. Finally, results are summarized with suggestions for future efforts to improve analysis capabilities, and better define and refine mission and operational requirements.
Integrating a Genetic Algorithm Into a Knowledge-Based System for Ordering Complex Design Processes
NASA Technical Reports Server (NTRS)
Rogers, James L.; McCulley, Collin M.; Bloebaum, Christina L.
1996-01-01
The design cycle associated with large engineering systems requires an initial decomposition of the complex system into design processes which are coupled through the transference of output data. Some of these design processes may be grouped into iterative subcycles. In analyzing or optimizing such a coupled system, it is essential to be able to determine the best ordering of the processes within these subcycles to reduce design cycle time and cost. Many decomposition approaches assume the capability is available to determine what design processes and couplings exist and what order of execution will be imposed during the design cycle. Unfortunately, this is often a complex problem and beyond the capabilities of a human design manager. A new feature, a genetic algorithm, has been added to DeMAID (Design Manager's Aid for Intelligent Decomposition) to allow the design manager to rapidly examine many different combinations of ordering processes in an iterative subcycle and to optimize the ordering based on cost, time, and iteration requirements. Two sample test cases are presented to show the effects of optimizing the ordering with a genetic algorithm.
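A toy genetic-algorithm search over process orderings gives the flavor of the DeMAID addition; the feedback-counting objective, the swap mutation, and the coupling data below are much-simplified assumptions, not the tool's actual cost/time/iteration model.

```python
# GA over permutations of design processes, minimizing the number of feedback
# couplings (data that must flow backward in the ordering).
import random

def feedbacks(order, couples):      # couples: set of (producer, consumer) pairs
    pos = {p: i for i, p in enumerate(order)}
    return sum(1 for a, b in couples if pos[a] > pos[b])

def ga_order(processes, couples, pop=30, gens=60, seed=1):
    random.seed(seed)
    popn = [random.sample(processes, len(processes)) for _ in range(pop)]
    for _ in range(gens):
        popn.sort(key=lambda o: feedbacks(o, couples))
        keep = popn[: pop // 2]                       # elitist selection
        children = []
        for parent in keep:
            child = parent[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]   # swap mutation
            children.append(child)
        popn = keep + children
    return min(popn, key=lambda o: feedbacks(o, couples))

couples = {("A", "B"), ("B", "C"), ("C", "D"), ("D", "B")}
print(ga_order(["D", "C", "B", "A"], couples))   # one feedback is unavoidable (a cycle)
```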
A Burning Plasma Experiment: the role of international collaboration
NASA Astrophysics Data System (ADS)
Prager, Stewart
2003-04-01
The world effort to develop fusion energy is at the threshold of a new stage in its research: the investigation of burning plasmas. A burning plasma is self-heated. The 100 million degree temperature of the plasma is maintained by the heat generated by the fusion reactions themselves, as occurs in burning stars. The fusion-generated alpha particles produce new physical phenomena that are strongly coupled together as a nonlinear complex system, posing a major plasma physics challenge. Two attractive options are being considered by the US fusion community as burning plasma facilities: the international ITER experiment and the US-based FIRE experiment. ITER (the International Thermonuclear Experimental Reactor) is a large, power-plant scale facility. It was conceived and designed by a partnership of the European Union, Japan, the Soviet Union, and the United States. At the completion of the first engineering design in 1998, the US discontinued its participation. FIRE (the Fusion Ignition Research Experiment) is a smaller, domestic facility that is at an advanced pre-conceptual design stage. Each facility has different scientific, programmatic and political implications. Selecting the optimal path for burning plasma science is itself a challenge. Recently, the Fusion Energy Sciences Advisory Committee recommended a dual path strategy in which the US seek to rejoin ITER, but be prepared to move forward with FIRE if the ITER negotiations do not reach fruition by July, 2004. Either the ITER or FIRE experiment would reveal the behavior of burning plasmas, generate large amounts of fusion power, and be a huge step in establishing the potential of fusion energy to contribute to the world's energy security.
Design advances of the Core Plasma Thomson Scattering diagnostic for ITER
NASA Astrophysics Data System (ADS)
Scannell, R.; Maslov, M.; Naylor, G.; O'Gorman, T.; Kempenaars, M.; Carr, M.; Bilkova, P.; Bohm, P.; Giudicotti, L.; Pasqualotto, R.; Bassan, M.; Vayakis, G.; Walsh, M.; Huxford, R.
2017-11-01
The Core Plasma Thomson Scattering (CPTS) diagnostic on ITER performs measurements of the electron temperature and density profiles which are critical to the understanding of the ITER plasma. The diagnostic must satisfy the ITER project requirements, which translate to requirements on performance as well as reliability, safety and engineering. The implications are particularly challenging for beam dump lifetime, the need for continuous active alignment of the diagnostic during operation, allowable neutron flux in the interspace and the protection of the first mirror from plasma deposition. The CPTS design has been evolving over a number of years. One recent improvement is that the collection optics have been modified to include freeform surfaces. These freeform surfaces introduce extra complexity to the manufacturing but provide greater flexibility in the design. The greater flexibility introduced allows for example to lower neutron throughput or use fewer surfaces while improving optical performance. Performance assessment has shown that scattering from a 1064 nm laser will be sufficient to meet the measurement requirements, at least for the system at the start of operations. Optical transmission at λ < 600 nm is expected to degrade over the ITER lifetime due to fibre darkening and deposition on the first mirror. For this reason, it is proposed that the diagnostic should additionally include measurements of TS 'depolarised light' and a 1319 nm laser system. These additional techniques have different spectral and polarisation dependencies compared to scattering from a 1064 nm laser and hence provide greater robustness into the inferred measurements of Te and ne in the core.
Model-Based Systems Engineering in Concurrent Engineering Centers
NASA Technical Reports Server (NTRS)
Iwata, Curtis; Infeld, Samantha; Bracken, Jennifer Medlin; McGuire, Melissa; McQuirk, Christina; Kisdi, Aron; Murphy, Jonathan; Cole, Bjorn; Zarifian, Pezhman
2015-01-01
Concurrent Engineering Centers (CECs) are specialized facilities with a goal of generating and maturing engineering designs by enabling rapid design iterations. This is accomplished by co-locating a team of experts (either physically or virtually) in a room with a focused design goal and a limited timeline of a week or less. The systems engineer uses a model of the system to capture the relevant interfaces and manage the overall architecture. A single model that integrates other design information and modeling allows the entire team to visualize the concurrent activity and identify conflicts more efficiently, potentially resulting in a systems model that will continue to be used throughout the project lifecycle. Performing systems engineering using such a system model is the definition of model-based systems engineering (MBSE); therefore, CECs evolving their approach to incorporate advances in MBSE are more successful in reducing time and cost needed to meet study goals. This paper surveys space mission CECs that are in the middle of this evolution, and the authors share their experiences in order to promote discussion within the community.
Model-Based Systems Engineering in Concurrent Engineering Centers
NASA Technical Reports Server (NTRS)
Iwata, Curtis; Infeld, Samantha; Bracken, Jennifer Medlin; McGuire, Melissa; McQuirk, Christina; Kisdi, Aron; Murphy, Jonathan; Cole, Bjorn; Zarifian, Pezhman
2015-01-01
Concurrent Engineering Centers (CECs) are specialized facilities with a goal of generating and maturing engineering designs by enabling rapid design iterations. This is accomplished by co-locating a team of experts (either physically or virtually) in a room with a narrow design goal and a limited timeline of a week or less. The systems engineer uses a model of the system to capture the relevant interfaces and manage the overall architecture. A single model that integrates other design information and modeling allows the entire team to visualize the concurrent activity and identify conflicts more efficiently, potentially resulting in a systems model that will continue to be used throughout the project lifecycle. Performing systems engineering using such a system model is the definition of model-based systems engineering (MBSE); therefore, CECs evolving their approach to incorporate advances in MBSE are more successful in reducing time and cost needed to meet study goals. This paper surveys space mission CECs that are in the middle of this evolution, and the authors share their experiences in order to promote discussion within the community.
ERIC Educational Resources Information Center
Ahrens, Fred; Mistry, Rajendra
2005-01-01
In product engineering there often arise design analysis problems for which a commercial software package is either unavailable or cost prohibitive. Further, these calculations often require successive iterations that can be time intensive when performed by hand, thus development of a software application is indicated. This case relates to the…
Spaceport Command and Control System Support Software Development
NASA Technical Reports Server (NTRS)
Brunotte, Leonard
2016-01-01
The Spaceport Command and Control System (SCCS) is a project developed and used by NASA at Kennedy Space Center to control and monitor the Space Launch System (SLS) at the time of its launch. One integral subteam under SCCS is the one assigned to the development of a data set building application to be used both on the launch pad and in the Launch Control Center (LCC) at the time of launch. This web application was developed in Ruby on Rails, a web framework using the Ruby object-oriented programming language, by a team of approximately 15 employees. Because this application is such a large undertaking with many facets and iterations, there were a few areas in which work could be more easily organized and expedited. As an intern working with this team, I was charged with writing web applications that fulfilled this need, creating a virtual, highly customizable whiteboard that allows engineers to keep track of build iterations and their status. Additionally, I developed a knowledge capture web application wherein any engineer or contractor within SCCS could ask a question, answer an existing question, or leave a comment on any question or answer, similar to Stack Overflow.
Optimal Area Profiles for Ideal Single Nozzle Air-Breathing Pulse Detonation Engines
NASA Technical Reports Server (NTRS)
Paxson, Daniel E.
2003-01-01
The effects of cross-sectional area variation on idealized Pulse Detonation Engine performance are examined numerically. A quasi-one-dimensional, reacting, numerical code is used as the kernel of an algorithm that iteratively determines the correct sequencing of inlet air, inlet fuel, detonation initiation, and cycle time to achieve a limit cycle with specified fuel fraction and volumetric purge fraction. The algorithm is exercised on a tube with a cross-sectional area profile containing two degrees of freedom: the overall exit-to-inlet area ratio, and the distance along the tube at which the continuous transition from inlet to exit area begins. These two parameters are varied over three flight conditions (defined by inlet total temperature, inlet total pressure and ambient static pressure) and the performance is compared to a straight tube. It is shown that, compared to straight tubes, increases of 20 to 35 percent in specific impulse and specific thrust are obtained with tubes of relatively modest area change. The iterative algorithm is described, and its limitations are noted and discussed. Optimized results are presented showing performance measurements, wave diagrams, and area profiles. Suggestions for future investigation are also discussed.
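The outer iteration, adjusting cycle timing until successive cycles repeat, can be sketched with a scalar surrogate for the quasi-one-dimensional kernel; the update gain and the toy cycle map are assumptions for illustration only.

```python
# Iterate cycle time until the residual state repeats cycle to cycle (limit cycle).
def find_limit_cycle(cycle, t0=1.0, tol=1e-8, max_iter=200):
    t, state = t0, 0.0
    for _ in range(max_iter):
        new_state = cycle(state, t)
        if abs(new_state - state) < tol:
            return t, new_state            # repeatable: a limit cycle
        t += 0.1 * (new_state - state)     # nudge cycle time toward repeatability
        state = new_state
    raise RuntimeError("no limit cycle found")

cycle = lambda s, t: 0.5 * s + 0.2 * t     # toy: residual state after one cycle
print(find_limit_cycle(cycle))
```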
Planning as an Iterative Process
NASA Technical Reports Server (NTRS)
Smith, David E.
2012-01-01
Activity planning for missions such as the Mars Exploration Rover mission presents many technical challenges, including oversubscription, consideration of time, concurrency, resources, preferences, and uncertainty. These challenges have all been addressed by the research community to varying degrees, but significant technical hurdles still remain. In addition, the integration of these capabilities into a single planning engine remains largely unaddressed. However, I argue that there is a deeper set of issues that needs to be considered, namely the integration of planning into an iterative process that begins before the goals, objectives, and preferences are fully defined. This introduces a number of technical challenges for planning, including the ability to more naturally specify and utilize constraints on the planning process, the ability to generate multiple qualitatively different plans, and the ability to provide deep explanation of plans.
Towards an Automated Classification of Transient Events in Synoptic Sky Surveys
NASA Technical Reports Server (NTRS)
Djorgovski, S. G.; Donalek, C.; Mahabal, A. A.; Moghaddam, B.; Turmon, M.; Graham, M. J.; Drake, A. J.; Sharma, N.; Chen, Y.
2011-01-01
We describe the development of a system for automated, iterative, real-time classification of transient events discovered in synoptic sky surveys. The system under development incorporates a number of Machine Learning techniques, mostly using Bayesian approaches, due to the sparse nature, heterogeneity, and variable incompleteness of the available data. The classifications are improved iteratively as new measurements are obtained. One novel feature is the development of an automated follow-up recommendation engine that suggests the measurements that would be most advantageous in terms of resolving classification ambiguities and/or characterizing the astrophysically most interesting objects, given a set of available follow-up assets and their cost functions. This illustrates the symbiotic relationship of astronomy and applied computer science through the emerging discipline of AstroInformatics.
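A toy version of Bayesian updating plus follow-up recommendation is sketched below; the class names, likelihoods, and the simplification of scoring each follow-up by the entropy of a single assumed outcome (ignoring outcome uncertainty and cost functions) are all illustrative assumptions.

```python
# Iterative Bayesian classification with a crude follow-up recommender that
# prefers the measurement yielding the lowest-entropy (most decisive) posterior.
import math

def update(prior, likelihood):                 # likelihood: class -> P(obs | class)
    post = {c: prior[c] * likelihood[c] for c in prior}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

def entropy(p):
    return -sum(v * math.log(v) for v in p.values() if v > 0)

prior = {"SN": 0.4, "CV": 0.35, "AGN": 0.25}   # hypothetical transient classes
followups = {                                  # measurement -> per-class likelihood
    "color": {"SN": 0.7, "CV": 0.2, "AGN": 0.1},
    "radio": {"SN": 0.3, "CV": 0.3, "AGN": 0.4},
}
best = min(followups, key=lambda m: entropy(update(prior, followups[m])))
print(best, update(prior, followups[best]))
```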
NASA Technical Reports Server (NTRS)
Chien, Steve; Knight, Russell; Stechert, Andre; Sherwood, Rob; Rabideau, Gregg
1998-01-01
An autonomous spacecraft must balance long-term and short-term considerations. It must perform purposeful activities that ensure long-term science and engineering goals are achieved and ensure that it maintains positive resource margins. This requires planning in advance to avoid a series of shortsighted decisions that can lead to failure. However, it must also respond in a timely fashion to a somewhat dynamic and unpredictable environment. Thus, spacecraft plans must often be modified due to fortuitous events such as early completion of observations and setbacks such as failure to acquire a guidestar for a science observation. This paper describes the use of iterative repair to support continuous modification and updating of a current working plan in light of a changing operating context.
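The iterative-repair loop can be reduced to a minimal sketch: detect a conflict in the current working plan, apply a local repair move, and repeat; the resource-capacity conflict and the load-shedding repair move below are illustrative stand-ins for the planner's actual conflict types and repair heuristics.

```python
# Iterative repair on a working plan: fix conflicts locally instead of replanning.
def iterative_repair(plan, capacity, max_passes=50):
    """plan: list of (activity, power_draw); keep total draw within capacity."""
    for _ in range(max_passes):
        total = sum(p for _, p in plan)
        if total <= capacity:
            return plan                                # conflict-free working plan
        plan = sorted(plan, key=lambda a: a[1])[:-1]   # repair: shed the biggest load
    raise RuntimeError("could not repair plan")

plan = [("imaging", 40), ("downlink", 60), ("heater", 30)]
print(iterative_repair(plan, capacity=100))
```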
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zaupa, M., E-mail: matteo.zaupa@igi.cnr.it; Consorzio RFX, Corso Stati Uniti 4, Padova 35127; Sartori, E.
Megavolt ITER Injector Concept Advancement (MITICA) is the full-scale prototype of the heating and current drive neutral beam injectors for ITER, to be built at Consorzio RFX (Padova). The engineering design of its components is challenging: the total heat loads they will be subjected to (expected between 2 and 19 MW), the high heat fluxes (up to 20 MW/m²), and beam pulse durations up to 1 h set demanding requirements for reliable active cooling circuits. In support of the design, the thermo-hydraulic behavior of each cooling circuit under steady-state conditions has been investigated using one-dimensional models. The final results, obtained considering a number of optimizations of the cooling circuits, show that all the requirements in terms of flow rate, temperature, and pressure drop are properly fulfilled.
NASA Technical Reports Server (NTRS)
Henke, Luke
2010-01-01
The ICARE method is a flexible, widely applicable method for systems engineers to solve problems and resolve issues in a complete and comprehensive manner. The method can be tailored by diverse users for direct application to their function (e.g., system integrators, design engineers, technical discipline leads, analysts, etc.). The clever acronym, ICARE, instills an attitude of accountability, safety, technical rigor and engagement in problem resolution: Identify, Communicate, Assess, Report, Execute (ICARE). This method was developed through observation of the approach of Space Shuttle Propulsion Systems Engineering and Integration (PSE&I) office personnel, in an attempt to succinctly describe the actions of an effective systems engineer. It also evolved from an effort to create a broadly defined checklist with which a PSE&I worker could perform their responsibilities in an iterative and recursive manner. The National Aeronautics and Space Administration (NASA) Systems Engineering Handbook states that engineering of NASA systems requires a systematic and disciplined set of processes that are applied recursively and iteratively for the design, development, operation, maintenance, and closeout of systems throughout the life cycle of the programs and projects. ICARE is a method that can be applied within the boundaries and requirements of NASA's systems engineering set of processes to provide an elevated sense of duty and responsibility for crew and vehicle safety. The importance of a disciplined set of processes and a safety-conscious mindset increases with the complexity of the system. Moreover, the larger the system and the larger the workforce, the more important it is to encourage use of the ICARE method as widely as possible. According to the NASA Systems Engineering Handbook, elements of a system can include people, hardware, software, facilities, policies and documents: all things required to produce system-level results, qualities, properties, characteristics, functions, behavior and performance. The ICARE method can be used to improve all elements of a system and, consequently, the system-level functional, physical and operational performance. Even though ICARE was specifically designed for a systems engineer, any person whose job is to examine another person, product, or process can use the ICARE method to improve effectiveness, implementation, usefulness, value, capability, efficiency, integration, design, and/or marketability. This paper provides the details of the ICARE method, emphasizing the method's application to systems engineering. In addition, a sample of other, non-systems-engineering applications is briefly discussed to demonstrate how ICARE can be tailored to a variety of diverse jobs (from project management to parenting).
Assessing students' performance in software requirements engineering education using scoring rubrics
NASA Astrophysics Data System (ADS)
Mkpojiogu, Emmanuel O. C.; Hussain, Azham
2017-10-01
The study investigates how helpful the use of scoring rubrics is in the performance assessment of software requirements engineering students, and whether their use can lead to improvement in students' development of software requirements artifacts and models. Scoring rubrics were used by two instructors to assess the cognitive performance of a student in the design and development of software requirements artifacts. The results indicate that scoring rubrics are very helpful for objectively assessing the performance of software requirements or software engineering students. Furthermore, the results revealed that scoring rubrics can also give a clear direction of achievement across repeated or iterative assessments, showing whether a student is improving. In a nutshell, their use leads to the performance improvement of students. The results provide insights for further investigation and will be beneficial to researchers, requirements engineers, system designers, developers and project managers.
Usability engineering for augmented reality: employing user-based studies to inform design.
Gabbard, Joseph L; Swan, J Edward
2008-01-01
A major challenge, and thus opportunity, in the field of human-computer interaction and specifically usability engineering is designing effective user interfaces for emerging technologies that have no established design guidelines or interaction metaphors or introduce completely new ways for users to perceive and interact with technology and the world around them. Clearly, augmented reality is one such emerging technology. We propose a usability engineering approach that employs user-based studies to inform design, by iteratively inserting a series of user-based studies into a traditional usability engineering lifecycle to better inform initial user interface designs. We present an exemplar user-based study conducted to gain insight into how users perceive text in outdoor augmented reality settings and to derive implications for design in outdoor augmented reality. We also describe lessons learned from our experiences conducting user-based studies as part of the design process.
Principles of Biomimetic Vascular Network Design Applied to a Tissue-Engineered Liver Scaffold
Hoganson, David M.; Pryor, Howard I.; Spool, Ira D.; Burns, Owen H.; Gilmore, J. Randall
2010-01-01
Branched vascular networks are a central component of scaffold architecture for solid organ tissue engineering. In this work, seven biomimetic principles were established as the major guiding technical design considerations of a branched vascular network for a tissue-engineered scaffold. These biomimetic design principles were applied to a branched radial architecture to develop a liver-specific vascular network. Iterative design changes and computational fluid dynamic analysis were used to optimize the network before mold manufacturing. The vascular network mold was created using a new mold technique that achieves a 1:1 aspect ratio for all channels. In vitro blood flow testing confirmed the physiologic hemodynamics of the network as predicted by computational fluid dynamic analysis. These results indicate that this biomimetic liver vascular network design will provide a foundation for developing complex vascular networks for solid organ tissue engineering that achieve physiologic blood flow. PMID:20001254
Principles of biomimetic vascular network design applied to a tissue-engineered liver scaffold.
Hoganson, David M; Pryor, Howard I; Spool, Ira D; Burns, Owen H; Gilmore, J Randall; Vacanti, Joseph P
2010-05-01
Branched vascular networks are a central component of scaffold architecture for solid organ tissue engineering. In this work, seven biomimetic principles were established as the major guiding technical design considerations of a branched vascular network for a tissue-engineered scaffold. These biomimetic design principles were applied to a branched radial architecture to develop a liver-specific vascular network. Iterative design changes and computational fluid dynamic analysis were used to optimize the network before mold manufacturing. The vascular network mold was created using a new mold technique that achieves a 1:1 aspect ratio for all channels. In vitro blood flow testing confirmed the physiologic hemodynamics of the network as predicted by computational fluid dynamic analysis. These results indicate that this biomimetic liver vascular network design will provide a foundation for developing complex vascular networks for solid organ tissue engineering that achieve physiologic blood flow.
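As one example of the kind of rule used when sizing branched vascular channels, the sketch below applies Murray's law (the cube of the parent radius equals the sum of the cubes of the child radii); the paper's seven principles are not enumerated in the abstract, so this rule is illustrative and not necessarily among them.

```python
# Murray's law sizing for symmetric bifurcations: r_parent^3 = n * r_child^3.
def murray_child_radius(r_parent, n_children):
    """Radius of each of n equal daughter channels under Murray's law."""
    return r_parent * (1.0 / n_children) ** (1.0 / 3.0)

r = 1.5  # parent channel radius, mm (illustrative value)
for gen in range(4):
    print(f"generation {gen}: r = {r:.3f} mm")
    r = murray_child_radius(r, 2)   # bifurcate at each generation
```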
Denby, Charles M; Li, Rachel A; Vu, Van T; Costello, Zak; Lin, Weiyin; Chan, Leanne Jade G; Williams, Joseph; Donaldson, Bryan; Bamforth, Charles W; Petzold, Christopher J; Scheller, Henrik V; Martin, Hector Garcia; Keasling, Jay D
2018-03-20
Flowers of the hop plant provide both bitterness and "hoppy" flavor to beer. Hops are, however, both a water and energy intensive crop and vary considerably in essential oil content, making it challenging to achieve a consistent hoppy taste in beer. Here, we report that brewer's yeast can be engineered to biosynthesize aromatic monoterpene molecules that impart hoppy flavor to beer by incorporating recombinant DNA derived from yeast, mint, and basil. Whereas metabolic engineering of biosynthetic pathways is commonly enlisted to maximize product titers, tuning expression of pathway enzymes to affect target production levels of multiple commercially important metabolites without major collateral metabolic changes represents a unique challenge. By applying state-of-the-art engineering techniques and a framework to guide iterative improvement, strains are generated with target performance characteristics. Beers produced using these strains are perceived as hoppier than traditionally hopped beers by a sensory panel in a double-blind tasting.
Status of Europe's contribution to the ITER EC system
NASA Astrophysics Data System (ADS)
Albajar, F.; Aiello, G.; Alberti, S.; Arnold, F.; Avramidis, K.; Bader, M.; Batista, R.; Bertizzolo, R.; Bonicelli, T.; Braunmueller, F.; Brescan, C.; Bruschi, A.; von Burg, B.; Camino, K.; Carannante, G.; Casarin, V.; Castillo, A.; Cauvard, F.; Cavalieri, C.; Cavinato, M.; Chavan, R.; Chelis, J.; Cismondi, F.; Combescure, D.; Darbos, C.; Farina, D.; Fasel, D.; Figini, L.; Gagliardi, M.; Gandini, F.; Gantenbein, G.; Gassmann, T.; Gessner, R.; Goodman, T. P.; Gracia, V.; Grossetti, G.; Heemskerk, C.; Henderson, M.; Hermann, V.; Hogge, J. P.; Illy, S.; Ioannidis, Z.; Jelonnek, J.; Jin, J.; Kasparek, W.; Koning, J.; Krause, A. S.; Landis, J. D.; Latsas, G.; Li, F.; Mazzocchi, F.; Meier, A.; Moro, A.; Nousiainen, R.; Purohit, D.; Nowak, S.; Omori, T.; van Oosterhout, J.; Pacheco, J.; Pagonakis, I.; Platania, P.; Poli, E.; Preis, A. K.; Ronden, D.; Rozier, Y.; Rzesnicki, T.; Saibene, G.; Sanchez, F.; Sartori, F.; Sauter, O.; Scherer, T.; Schlatter, C.; Schreck, S.; Serikov, A.; Siravo, U.; Sozzi, C.; Spaeh, P.; Spichiger, A.; Strauss, D.; Takahashi, K.; Thumm, M.; Tigelis, I.; Vaccaro, A.; Vomvoridis, J.; Tran, M. Q.; Weinhorst, B.
2015-03-01
The electron cyclotron (EC) system of ITER for the initial configuration is designed to provide 20 MW of RF power into the plasma for 3600 s with a duty cycle of up to 25%, for heating and (co- and counter-) non-inductive current drive, and is also used to control MHD plasma instabilities. The EC system is being procured by five domestic agencies plus the ITER Organization (IO). F4E has the largest fraction of the EC procurements, which includes 8 high voltage power supplies (HVPS), 6 gyrotrons, the ex-vessel waveguides (including isolation valves and diamond windows) for all launchers, 4 upper launchers, and the main control system. F4E is working with IO to improve the overall design of the EC system by integrating consolidated technological advances, simplifying the interfaces, and performing global engineering analysis and assessments of EC heating and current drive physics and technology capabilities. Examples are the optimization of the HVPS and gyrotron requirements and performance relative to power modulation for MHD control, common qualification programs for diamond window procurements, assessment of the EC grounding system, and the optimization of the launcher steering angles for improved EC access. Here we provide an update on the status of Europe's contribution to the ITER EC system, and a summary of the global activities underway by F4E in collaboration with IO for the optimization of the subsystems.
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets.
Bicer, Tekin; Gürsoy, Doğa; Andrade, Vincent De; Kettimuthu, Rajkumar; Scullin, William; Carlo, Francesco De; Foster, Ian T
2017-01-01
Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One of the widely used imaging techniques that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments result in rapid data generation, the analysis and reconstruction of the collected data may require hours or even days of computation time with a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis. We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementation of iterative tomographic reconstruction algorithms for parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both (thread-level) shared memory and (process-level) distributed memory parallelization. Trace utilizes a special data structure called replicated reconstruction object to maximize application performance. We also present the optimizations that we apply to the replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source. Our experimental evaluations show that our optimizations and parallelization techniques can provide 158× speedup using 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (with 4501 × 1 × 22,400 dimensions) from 12.5 h to <5 min per iteration. The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data using many compute nodes and minimize reconstruction times.
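The replicated-reconstruction-object idea generalizes: each worker accumulates its share of the projections into a private copy of the reconstruction array, and the copies are reduced afterwards, so no fine-grained locking is needed. A minimal sketch of that pattern in plain Python/NumPy follows (the grid sizes and the update rule are stand-ins, not the Trace code):

```python
import numpy as np
from multiprocessing import Pool

N = 256       # reconstruction grid is N x N (stand-in size)
N_PROJ = 64   # number of projections (stand-in data)

def partial_backprojection(proj_ids):
    """Each worker owns a private replica of the reconstruction
    object and accumulates its share of projections into it."""
    rng = np.random.default_rng(int(proj_ids[0]))
    replica = np.zeros((N, N))
    for _ in proj_ids:
        replica += rng.random((N, N)) / N_PROJ  # stand-in update
    return replica

if __name__ == "__main__":
    chunks = np.array_split(np.arange(N_PROJ), 4)
    with Pool(4) as pool:
        replicas = pool.map(partial_backprojection, chunks)
    reconstruction = np.sum(replicas, axis=0)   # reduction step
    print(reconstruction.shape)
```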
Unsteady Probabilistic Analysis of a Gas Turbine System
NASA Technical Reports Server (NTRS)
Brown, Marilyn
2003-01-01
In this work, we have considered an annular cascade configuration subjected to unsteady inflow conditions. The unsteady response calculation has been implemented into the time marching CFD code, MSUTURBO. The computed steady state results for the pressure distribution demonstrated good agreement with experimental data. We have computed results for the amplitudes of the unsteady pressure over the blade surfaces. With the increase in gas turbine engine structural complexity and performance over the past 50 years, structural engineers have created an array of safety nets to ensure against component failures in turbine engines. In order to reduce what is now considered to be excessive conservatism and yet maintain the same adequate margins of safety, there is a pressing need to explore methods of incorporating probabilistic design procedures into engine development. Probabilistic methods combine and prioritize the statistical distributions of each design variable, generate an interactive distribution and offer the designer a quantified relationship between robustness, endurance and performance. The designer can therefore iterate between weight reduction, life increase, engine size reduction, speed increase etc.
Optimized tuner selection for engine performance estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L. (Inventor); Garg, Sanjay (Inventor)
2013-01-01
A methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. Theoretical Kalman filter estimation error bias and variance values are derived at steady-state operating conditions, and the tuner selection routine is applied to minimize these values. The new methodology yields an improvement in on-line engine performance estimation accuracy.
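The underdetermined selection problem can be illustrated with a toy model: with more unknown parameters than sensors, only a subset (or linear combination) of parameters can serve as tuners, and candidates are scored by the estimation error they leave across all parameters of interest. A sketch under those assumptions, where an exhaustive search over subsets stands in for the paper's iterative search routine and the sensitivity matrix is a random stand-in:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
p, m = 6, 3                      # 6 unknown parameters, 3 sensors
H = rng.standard_normal((m, p))  # stand-in sensitivity matrix

def mse_for_subset(subset, trials=500, noise=0.05):
    """Estimate only the chosen tuners (others held at zero) and
    score the mean-squared error over all p parameters."""
    Hs = H[:, subset]
    err = 0.0
    for _ in range(trials):
        x = rng.standard_normal(p)
        y = H @ x + noise * rng.standard_normal(m)
        x_hat = np.zeros(p)
        x_hat[list(subset)] = np.linalg.lstsq(Hs, y, rcond=None)[0]
        err += np.sum((x - x_hat) ** 2)
    return err / trials

best = min(combinations(range(p), m), key=mse_for_subset)
print("selected tuner subset:", best)
```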
NASA Astrophysics Data System (ADS)
Tutschku, Kurt; Nakao, Akihiro
This paper introduces a methodology for engineering best-effort P2P algorithms into dependable P2P-based network control mechanisms. The proposed method is built upon an iterative approach consisting of improving the original P2P algorithm with appropriate mechanisms and of thorough performance assessment with respect to dependability measures. The potential of the methodology is outlined by the example of timely routing control for vertical handover in B3G wireless networks. In detail, the well-known Pastry and CAN algorithms are enhanced to include locality. By showing how to combine algorithmic enhancements with performance indicators, this case study paves the way for future engineering of dependable network control mechanisms through P2P algorithms.
NASA Astrophysics Data System (ADS)
Shan, Ming; Carter, Ellison; Baumgartner, Jill; Deng, Mengsi; Clark, Sierra; Schauer, James J.; Ezzati, Majid; Li, Jiarong; Fu, Yu; Yang, Xudong
2017-09-01
Unclean combustion of solid fuel for cooking and other household energy needs leads to severe household air pollution and adverse health impacts in adults and children. Replacing traditional solid fuel stoves with high efficiency, low-polluting semi-gasifier stoves can potentially contribute to addressing this global problem. The success of semi-gasifier cookstove implementation initiatives depends not only on the technical performance and safety of the stove, but also the compatibility of the stove design with local cooking practices, the needs and preferences of stove users, and community economic structures. Many past stove design initiatives have failed to address one or more of these dimensions during the design process, resulting in failure of stoves to achieve long-term, exclusive use and market penetration. This study presents a user-centered, iterative engineering design approach to developing a semi-gasifier biomass cookstove for rural Chinese homes. Our approach places equal emphasis on stove performance and meeting the preferences of individuals most likely to adopt the clean stove technology. Five stove prototypes were iteratively developed following energy market and policy evaluation, laboratory and field evaluations of stove performance and user experience, and direct interactions with stove users. The most current stove prototype achieved high performance in the field on thermal efficiency (ISO Tier 3) and pollutant emissions (ISO Tier 4), and was received favorably by rural households in the Sichuan province of Southwest China. Among household cooks receiving the final prototype of the intervention stove, 88% reported lighting and using it at least once. At five months post-intervention, the semi-gasifier stoves were used at least once on an average of 68% [95% CI: 43, 93] of days. Our proposed design strategy can be applied to other stove development initiatives in China and other countries.
Three-dimensional localization of nanoscale battery reactions using soft X-ray tomography.
Yu, Young-Sang; Farmand, Maryam; Kim, Chunjoong; Liu, Yijin; Grey, Clare P; Strobridge, Fiona C; Tyliszczak, Tolek; Celestre, Rich; Denes, Peter; Joseph, John; Krishnan, Harinarayan; Maia, Filipe R N C; Kilcoyne, A L David; Marchesini, Stefano; Leite, Talita Perciano Costa; Warwick, Tony; Padmore, Howard; Cabana, Jordi; Shapiro, David A
2018-03-02
Battery function is determined by the efficiency and reversibility of the electrochemical phase transformations at solid electrodes. The microscopic tools available to study the chemical states of matter with the required spatial resolution and chemical specificity are intrinsically limited when studying complex architectures by their reliance on two-dimensional projections of thick material. Here, we report the development of soft X-ray ptychographic tomography, which resolves chemical states in three dimensions at 11 nm spatial resolution. We study an ensemble of nano-plates of lithium iron phosphate extracted from a battery electrode at 50% state of charge. Using a set of nanoscale tomograms, we quantify the electrochemical state and resolve phase boundaries throughout the volume of individual nanoparticles. These observations reveal multiple reaction points, intra-particle heterogeneity, and size effects that highlight the importance of multi-dimensional analytical tools in providing novel insight to the design of the next generation of high-performance devices.
Corrigendum to "Stability analysis of a deterministic dose calculation for MRI-guided radiotherapy".
Zelyak, Oleksandr; Fallone, B Gino; St-Aubin, Joel
2018-03-12
Modern effort in radiotherapy to address the challenges of tumor localization and motion has led to the development of MRI guided radiotherapy technologies. Accurate dose calculations must properly account for the effects of the MRI magnetic fields. Previous work has investigated the accuracy of a deterministic linear Boltzmann transport equation (LBTE) solver that includes magnetic field, but not the stability of the iterative solution method. In this work, we perform a stability analysis of this deterministic algorithm including an investigation of the convergence rate dependencies on the magnetic field, material density, energy, and anisotropy expansion. The iterative convergence rate of the continuous and discretized LBTE including magnetic fields is determined by analyzing the spectral radius using Fourier analysis for the stationary source iteration (SI) scheme. The spectral radius is calculated when the magnetic field is included (1) as a part of the iteration source, and (2) inside the streaming-collision operator. The non-stationary Krylov subspace solver GMRES is also investigated as a potential method to accelerate the iterative convergence, and an angular parallel computing methodology is investigated as a method to enhance the efficiency of the calculation. SI is found to be unstable when the magnetic field is part of the iteration source, but unconditionally stable when the magnetic field is included in the streaming-collision operator. The discretized LBTE with magnetic fields using a space-angle upwind stabilized discontinuous finite element method (DFEM) was also found to be unconditionally stable, but the spectral radius rapidly reaches unity for very low density media and increasing magnetic field strengths indicating arbitrarily slow convergence rates. However, GMRES is shown to significantly accelerate the DFEM convergence rate showing only a weak dependence on the magnetic field. In addition, the use of an angular parallel computing strategy is shown to potentially increase the efficiency of the dose calculation. © 2018 Institute of Physics and Engineering in Medicine.
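The stability criterion at work here is the classical one: a stationary scheme x_{k+1} = M x_k + c converges exactly when the spectral radius ρ(M) < 1, with the convergence rate set by how close ρ(M) is to 1, while Krylov solvers such as GMRES are typically far less sensitive to the splitting. A toy illustration with generic matrices (not the LBTE operators):

```python
import numpy as np
from scipy.sparse.linalg import gmres

rng = np.random.default_rng(1)
n = 200
A = np.eye(n) + 0.02 * rng.standard_normal((n, n))
b = rng.standard_normal(n)

M = np.eye(n) - A                    # splitting: x <- M x + b
rho = max(abs(np.linalg.eigvals(M)))
print(f"spectral radius of iteration matrix: {rho:.3f}")

x = np.zeros(n)
for _ in range(200):                 # stationary (source-iteration-like)
    x = M @ x + b
print("stationary residual:", np.linalg.norm(A @ x - b))

x_g, info = gmres(A, b)              # Krylov solver on the same system
print("GMRES residual:", np.linalg.norm(A @ x_g - b))
```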
NASA Astrophysics Data System (ADS)
Kim, S. H.; Casper, T. A.; Snipes, J. A.
2018-05-01
ITER will demonstrate the feasibility of burning plasma operation by operating DT plasmas in the ELMy H-mode regime with a high fusion power gain of Q ~ 10. The 15 MA ITER baseline operation scenario has been studied using CORSICA, focusing on the entry to burn, flat-top burning plasma operation, and exit from burn. Burning plasma operation for about 400 s of the current flat-top was achieved in H-mode within the various engineering constraints imposed by the poloidal field coil and power supply systems. The target fusion gain (Q ~ 10) was achievable in the 15 MA ITER baseline operation with a moderate amount of total auxiliary heating power (~50 MW). It has been observed that the tungsten (W) concentration needs to be maintained at a low level (n_W/n_e up to the order of 1.0 × 10^-5) to avoid radiative collapse and uncontrolled early termination of the discharge. The dynamic evolution of the density can modify the H-mode access unless the applied auxiliary heating power is significantly higher than the H-mode threshold power. Several qualitative sensitivity studies have been performed to provide guidance for further optimizing the plasma operation and performance. Increasing the density profile peaking factor was quite effective in increasing the alpha particle self-heating power and the fusion power multiplication factor. Varying the combination of auxiliary heating power has shown that the fusion power multiplication factor can be reduced along with the increase in the total auxiliary heating power. As the 15 MA ITER baseline operation scenario requires the full capacity of the coil and power supply systems, the operation window for H-mode access and shape modification was narrow. The updated ITER baseline operation scenarios developed in this work will become a basis for further optimization studies, along with improvements in the understanding of burning plasma physics.
Sensitivity based coupling strengths in complex engineering systems
NASA Technical Reports Server (NTRS)
Bloebaum, C. L.; Sobieszczanski-Sobieski, J.
1993-01-01
The iterative design scheme necessary for complex engineering systems is generally time consuming and difficult to implement. Although a decomposition approach results in a more tractable problem, the inherent couplings make establishing the interdependencies of the various subsystems difficult. Another difficulty lies in identifying the most efficient order of execution for the subsystem analyses. The paper describes an approach for determining the dependencies that could be suspended during the system analysis with minimal accuracy losses, thereby reducing the system complexity. A new multidisciplinary testbed is presented, involving the interaction of structures, aerodynamics, and performance disciplines. Results are presented to demonstrate the effectiveness of the system reduction scheme.
Performance evaluation of OpenFOAM on many-core architectures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brzobohatý, Tomáš; Říha, Lubomír; Karásek, Tomáš, E-mail: tomas.karasek@vsb.cz
In this article, the application of Open Source Field Operation and Manipulation (OpenFOAM) C++ libraries to solving engineering problems on many-core architectures is presented. The objective is to present the scalability of OpenFOAM on parallel platforms when solving real engineering problems in fluid dynamics. Scalability tests of OpenFOAM are performed using various hardware and different implementations of the standard PCG and PBiCG Krylov iterative methods. Speed-ups of various implementations of the linear solvers using GPU and MIC accelerators are presented, along with numerical experiments of 3D lid-driven cavity flow for several cases with various numbers of cells.
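For reference, the PCG solver named above has a compact serial form; a minimal Jacobi-preconditioned conjugate gradient sketch (illustrative only, not OpenFOAM's implementation):

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients with a Jacobi (diagonal)
    preconditioner, for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    Minv = 1.0 / np.diag(A)          # Jacobi preconditioner
    z = Minv * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            return x, k
        z = Minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

n = 100   # SPD tridiagonal test matrix
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x, iters = pcg(A, np.ones(n))
print("converged in", iters, "iterations")
```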
A study of the optimization method used in the NAVY/NASA gas turbine engine computer code
NASA Technical Reports Server (NTRS)
Horsewood, J. L.; Pines, S.
1977-01-01
Sources of numerical noise affecting the convergence properties of the Powell's Principal Axis Method of Optimization in the NAVY/NASA gas turbine engine computer code were investigated. The principal noise source discovered resulted from loose input tolerances used in terminating iterations performed in subroutine CALCFX to satisfy specified control functions. A minor source of noise was found to be introduced by an insufficient number of digits in stored coefficients used by subroutine THERM in polynomial expressions of thermodynamic properties. Tabular results of several computer runs are presented to show the effects on program performance of selective corrective actions taken to reduce noise.
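The noise mechanism is easy to reproduce: when the objective embeds an inner iteration stopped at a loose tolerance, the returned value is only piecewise smooth, and Powell's line searches stall on the plateaus. A toy demonstration with generic functions (unrelated to the engine code):

```python
import numpy as np
from scipy.optimize import minimize

def inner_solve(a, tol, max_iter=100):
    """Newton iteration for sqrt(a), stopped at tolerance `tol`.
    A loose tolerance makes the result a noisy function of `a`."""
    y = max(a, 1.0)
    for _ in range(max_iter):
        if abs(y * y - a) <= tol:
            break
        y = 0.5 * (y + a / y)
    return y

def objective(x, tol):
    return (inner_solve(x[0] ** 2 + 2.0, tol) - 2.0) ** 2

for tol in (1e-2, 1e-12):   # loose vs tight inner tolerance
    res = minimize(objective, x0=[3.0], args=(tol,), method="Powell")
    print(f"tol={tol:g}: x*={res.x[0]:.6f}  f*={res.fun:.3e}")
```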
ERIC Educational Resources Information Center
Saavedra Montes, A. J.; Botero Castro, H. A.; Hernandez Riveros, J. A.
2010-01-01
Many laboratory courses have become iterative processes in which students only seek to meet the requirements and pass the course. Some students believe these courses are boring and do not give them training as engineers. To provide a solution to the poor motivation of students in laboratories with few resources, this work proposes the method…
NASA Astrophysics Data System (ADS)
Iotti, Robert
2015-04-01
ITER is an international experimental facility being built by seven Parties to demonstrate the long term potential of fusion energy. The ITER Joint Implementation Agreement (JIA) defines the structure and governance model of such cooperation. There are a number of necessary conditions for such international projects to be successful: a complete design, strong systems engineering working with an agreed set of requirements, an experienced organization with systems and plans in place to manage the project, a cost estimate backed by industry, and someone in charge. Unfortunately for ITER many of these conditions were not present. The paper discusses the priorities in the JIA which led to setting up the project with a Central Integrating Organization (IO) in Cadarache, France as the ITER HQ, and seven Domestic Agencies (DAs) located in the countries of the Parties, responsible for delivering 90%+ of the project hardware as Contributions-in-Kind and also financial contributions to the IO, as "Contributions-in-Cash." Theoretically the Director General (DG) is responsible for everything. In practice the DG does not have the power to control the work of the DAs, and there is not an effective management structure enabling the IO and the DAs to arbitrate disputes, so the project is not really managed, but is a loose collaboration of competing interests. Any DA can effectively block a decision reached by the DG. Inefficiencies in completing design while setting up a competent organization from scratch contributed to the delays and cost increases during the initial few years. So did the fact that the original estimate was not developed from industry input. Unforeseen inflation and market demand on certain commodities/materials further exacerbated the cost increases. Since then, improvements are debatable. Does this mean that the governance model of ITER is a wrong model for international scientific cooperation? I do not believe so. Had the necessary conditions for success been present at the beginning, ITER would be in far better shape. As is, it can provide good lessons to avoid the same problems in the future. The ITER Council is now applying those lessons. A very experienced new Director General has just been appointed. He has instituted a number of drastic changes, but still within the governance of the JIA. Will these changes be effective? Only time will tell, but I am optimistic.
NASA Astrophysics Data System (ADS)
Melentjev, Vladimir S.; Gvozdev, Alexander S.
2018-01-01
Improving the reliability of modern turbine engines is a pressing task, achieved by preventing vibration damage to the operating blades. The department of structure and design of aircraft engines has accumulated extensive experimental data on protecting gas turbine engine blades from vibration. In this paper we propose a method for calculating the characteristics of wire rope dampers in the root attachment of a gas turbine engine blade. The method is based on the finite element method and transient analysis. Contact interaction (Lagrange-Euler method) between the compressor blade and the rotor disc has been taken into account, and the contribution of contact interaction between parts to the damping of the system was measured. The proposed method provides a convenient way to iteratively select the required parameters of the wire rope elastic-damping element, which is able to provide the necessary vibration protection for the gas turbine engine blade.
Definition of optical systems payloads
NASA Technical Reports Server (NTRS)
Downey, J. A., III
1981-01-01
The various phases in the formulation of a major NASA project include the inception of the project, planning of the concept, and the project definition. A baseline configuration is established during the planning stage, which serves as a basis for engineering trade studies. Basic technological problems should be recognized early, and a technological verification plan prepared before development of a project begins. A progressive series of iterations is required during the definition phase, illustrating the complex interdependence of existing subsystems. A systems error budget should be established to assess the overall systems performance, identify key performance drivers, and guide performance trades and iterations around these drivers, thus decreasing final systems requirements. Unnecessary interfaces should be avoided, and reasonable design and cost margins maintained. Certain aspects of the definition of the Advanced X-ray Astrophysics Facility are used as an example.
NASA's Platform for Cross-Disciplinary Microchannel Research
NASA Technical Reports Server (NTRS)
Son, Sang Young; Spearing, Scott; Allen, Jeffrey; Monaco, Lisa A.
2003-01-01
A team from the Structural Biology group located at the NASA Marshall Space Flight Center in Huntsville, Alabama is developing a platform suitable for cross-disciplinary microchannel research. The original objective of this engineering development effort was to deliver a multi-user flight-certified facility for iterative investigations of protein crystal growth; that is, Iterative Biological Crystallization (IBC). However, the unique capabilities of this facility are not limited to the low-gravity structural biology research community. Microchannel-based research in a number of other areas may be greatly accelerated through use of this facility. In particular, the potential for gas-liquid flow investigations and cellular biological research utilizing the exceptional pressure control and simplified coupling to macroscale diagnostics inherent in the IBC facility will be discussed. In conclusion, the opportunities for research-specific modifications to the microchannel configuration, control, and diagnostics will be discussed.
Hardware architecture design of image restoration based on time-frequency domain computation
NASA Astrophysics Data System (ADS)
Wen, Bo; Zhang, Jing; Jiao, Zipeng
2013-10-01
Image restoration algorithms based on time-frequency domain computation (TFDC) are highly mature and widely applied in engineering. To enable high-speed implementation of these algorithms, a TFDC hardware architecture is proposed. First, the main module is designed by analyzing the processing and numerical calculations the algorithms have in common. Then, to improve commonality, an iteration control module is planned for iterative algorithms. In addition, to reduce the computational cost and memory requirements, optimizations are suggested for the time-consuming modules, which include the two-dimensional FFT/IFFT and the complex-number calculations. Finally, the TFDC hardware architecture is adopted for the hardware design of a real-time image restoration system. The results show that the TFDC hardware architecture and its optimizations can be applied to image restoration algorithms based on TFDC, with good algorithm commonality, hardware realizability, and high efficiency.
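A representative restoration loop built from exactly the modules named above (two-dimensional FFT/IFFT plus complex arithmetic) is a Landweber-type iteration; a NumPy sketch assuming a known, circular blur kernel:

```python
import numpy as np

def landweber_deconvolve(blurred, kernel_fft, n_iter=50, step=1.0):
    """Iterative restoration X <- X + step * conj(H) * (Y - H X),
    with all convolutions done as products in the frequency domain."""
    Y = np.fft.fft2(blurred)
    X = Y.copy()                      # start from the blurred image
    for _ in range(n_iter):
        residual = Y - kernel_fft * X
        X = X + step * np.conj(kernel_fft) * residual
    return np.real(np.fft.ifft2(X))

# stand-in data: random "image" blurred by a small averaging kernel
rng = np.random.default_rng(2)
img = rng.random((64, 64))
h = np.zeros((64, 64))
h[:3, :3] = 1.0 / 9.0
H = np.fft.fft2(h)
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(img)))
restored = landweber_deconvolve(blurred, H)
print("relative error:", np.linalg.norm(restored - img) / np.linalg.norm(img))
```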
A Fully Non-Metallic Gas Turbine Engine Enabled by Additive Manufacturing
NASA Technical Reports Server (NTRS)
Grady, Joseph E.
2015-01-01
The Non-Metallic Gas Turbine Engine project, funded by the NASA Aeronautics Research Institute, represents the first comprehensive evaluation of emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. This will be achieved by assessing the feasibility of using additive manufacturing technologies to fabricate polymer matrix composite (PMC) and ceramic matrix composite (CMC) turbine engine components. The benefits include: 50% weight reduction compared to metallic parts, reduced manufacturing costs, reduced part count, and rapid design iterations. Two high-payoff metallic components have been identified for replacement with PMCs and will be fabricated using fused deposition modeling (FDM) with high temperature polymer filaments. The CMC effort uses a binder jet process to fabricate silicon carbide test coupons and demonstration articles. Microstructural analysis and mechanical testing will be conducted on the PMC and CMC materials. System studies will assess the benefits of a fully nonmetallic gas turbine engine in terms of fuel burn, emissions, reduction of part count, and cost. The research project includes a multidisciplinary, multiorganization NASA-industry team that includes experts in ceramic materials and CMCs, polymers and PMCs, structural engineering, additive manufacturing, engine design and analysis, and system analysis.
A Framework for Automating Cost Estimates in Assembly Processes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calton, T.L.; Peters, R.R.
1998-12-09
When a product concept emerges, the manufacturing engineer is asked to sketch out a production strategy and estimate its cost. The engineer is given an initial product design, along with a schedule of expected production volumes. The engineer then determines the best approach to manufacturing the product, comparing a variety of alternative production strategies. The engineer must consider capital cost, operating cost, lead-time, and other issues in an attempt to maximize profits. After making these basic choices and sketching the design of overall production, the engineer produces estimates of the required capital, operating costs, and production capacity. This process may iterate as the product design is refined in order to improve its performance or manufacturability. The focus of this paper is on the development of computer tools to aid manufacturing engineers in their decision-making processes. This computer software tool provides a framework in which accurate cost estimates can be seamlessly derived from design requirements at the start of any engineering project. The result is faster cycle times through first-pass success, and lower life cycle cost due to requirements-driven design and accurate cost estimates derived early in the process.
Assessment and selection of materials for ITER in-vessel components
NASA Astrophysics Data System (ADS)
Kalinin, G.; Barabash, V.; Cardella, A.; Dietz, J.; Ioki, K.; Matera, R.; Santoro, R. T.; Tivey, R.; ITER Home Teams
2000-12-01
During the international thermonuclear experimental reactor (ITER) engineering design activities (EDA), significant progress has been made in the selection of materials for the in-vessel components of the reactor. This progress is a result of the worldwide collaboration of material scientists and industries, which focused their efforts on the optimisation of material and component manufacturing and on the investigation of the most critical material properties. Austenitic stainless steels 316L(N)-IG and 316L, nickel-based alloys Inconel 718 and Inconel 625, Ti-6Al-4V alloy, and two copper alloys, CuCrZr-IG and CuAl25-IG, have been proposed as reference structural materials, and ferritic steel 430 and austenitic steel 304B7 with the addition of boron have been selected for some specific parts of the ITER in-vessel components. Beryllium, tungsten and carbon fibre composites are considered as plasma facing armour materials. The database on the properties of all these materials is critically assessed and briefly reviewed in this paper, together with the justification of the material selection (e.g., the effect of neutron irradiation on the mechanical properties of materials, the effect of the manufacturing cycle, etc.).
Scalable domain decomposition solvers for stochastic PDEs in high performance computing
Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...
2017-09-21
Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix-vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.
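The serial skeleton of a domain decomposition solver can be shown on the simplest possible case, alternating Schwarz for the 1-D Poisson problem with two overlapping subdomains (a teaching sketch, not the authors' stochastic solver):

```python
import numpy as np

# 1-D Poisson: -u'' = 1 on (0,1), u(0) = u(1) = 0, solved by
# alternating Schwarz iteration on two overlapping subdomains.
n = 101
h = 1.0 / (n - 1)
xg = np.linspace(0.0, 1.0, n)
u = np.zeros(n)
f = np.ones(n)
mid, ov = n // 2, 10              # interface index and overlap width

def subdomain_solve(fs, left, right):
    """Direct solve of -u'' = f on one subdomain, Dirichlet ends."""
    m = len(fs)
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    rhs = fs.copy()
    rhs[0] += left / h**2
    rhs[-1] += right / h**2
    return np.linalg.solve(A, rhs)

for _ in range(30):               # Schwarz sweeps, exchanging overlap data
    u[1:mid + ov] = subdomain_solve(f[1:mid + ov], 0.0, u[mid + ov])
    u[mid - ov + 1:-1] = subdomain_solve(f[mid - ov + 1:-1], u[mid - ov], 0.0)

print("max error vs exact x(1-x)/2:", np.max(np.abs(u - 0.5 * xg * (1 - xg))))
```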
A finite element solver for 3-D compressible viscous flows
NASA Technical Reports Server (NTRS)
Reddy, K. C.; Reddy, J. N.; Nayani, S.
1990-01-01
Computation of the flow field inside a space shuttle main engine (SSME) requires the application of state-of-the-art computational fluid dynamic (CFD) technology. Several computer codes are under development to solve 3-D flow through the hot gas manifold. Some algorithms were designed to solve the unsteady compressible Navier-Stokes equations, either by implicit or explicit factorization methods, using several hundred or thousands of time steps to reach a steady state solution. A new iterative algorithm is being developed for the solution of the implicit finite element equations without assembling global matrices. It is an efficient iteration scheme based on a modified nonlinear Gauss-Seidel iteration with symmetric sweeps. The algorithm is analyzed for a model equation and is shown to be unconditionally stable. Results from a series of test problems are presented. The finite element code was tested for Couette flow, which is flow under a pressure gradient between two parallel plates in relative motion. Another problem that was solved is viscous laminar flow over a flat plate. The general 3-D finite element code was used to compute the flow in an axisymmetric turnaround duct at low Mach numbers.
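A linear analogue of the "Gauss-Seidel iteration with symmetric sweeps" is easy to write down: each iteration sweeps the unknowns forward and then backward. A generic sketch (not the finite element code itself):

```python
import numpy as np

def symmetric_gauss_seidel(A, b, sweeps=100, tol=1e-10):
    """Gauss-Seidel with symmetric sweeps: one forward pass over
    the unknowns followed by one backward pass per iteration."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        for i in range(n):                      # forward sweep
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
        for i in range(n - 1, -1, -1):          # backward sweep
            s = A[i] @ x - A[i, i] * x[i]
            x[i] = (b[i] - s) / A[i, i]
        if np.linalg.norm(A @ x - b) < tol:
            break
    return x

n = 50   # diagonally dominant test system
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
x = symmetric_gauss_seidel(A, np.ones(n))
print("residual:", np.linalg.norm(A @ x - np.ones(n)))
```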
Overview of the preliminary design of the ITER plasma control system
NASA Astrophysics Data System (ADS)
Snipes, J. A.; Albanese, R.; Ambrosino, G.; Ambrosino, R.; Amoskov, V.; Blanken, T. C.; Bremond, S.; Cinque, M.; de Tommasi, G.; de Vries, P. C.; Eidietis, N.; Felici, F.; Felton, R.; Ferron, J.; Formisano, A.; Gribov, Y.; Hosokawa, M.; Hyatt, A.; Humphreys, D.; Jackson, G.; Kavin, A.; Khayrutdinov, R.; Kim, D.; Kim, S. H.; Konovalov, S.; Lamzin, E.; Lehnen, M.; Lukash, V.; Lomas, P.; Mattei, M.; Mineev, A.; Moreau, P.; Neu, G.; Nouailletas, R.; Pautasso, G.; Pironti, A.; Rapson, C.; Raupp, G.; Ravensbergen, T.; Rimini, F.; Schneider, M.; Travere, J.-M.; Treutterer, W.; Villone, F.; Walker, M.; Welander, A.; Winter, A.; Zabeo, L.
2017-12-01
An overview of the preliminary design of the ITER plasma control system (PCS) is described here, which focusses on the needs for 1st plasma and early plasma operation in hydrogen/helium (H/He) up to a plasma current of 15 MA with moderate auxiliary heating power in low confinement mode (L-mode). Candidate control schemes for basic magnetic control, including divertor operation and kinetic control of the electron density with gas puffing and pellet injection, were developed. Commissioning of the auxiliary heating systems is included as well as support functions for stray field topology and real-time plasma boundary reconstruction. Initial exception handling schemes for faults of essential plant systems and for disruption protection were developed. The PCS architecture was also developed to be capable of handling basic control for early commissioning and the advanced control functions that will be needed for future high performance operation. A plasma control simulator is also being developed to test and validate control schemes. To handle the complexity of the ITER PCS, a systems engineering approach has been adopted with the development of a plasma control database to keep track of all control requirements.
The Design and Development of a Web-Interface for the Software Engineering Automation System
2001-09-01
Software prototyping is an iterative software development methodology utilized to improve the analysis and design of real-time systems. Developing an entire system only to find that it does not meet the customer's needs is a tremendous waste of time. This work describes the design and development of a web interface that makes the Software Engineering Automation System available as an application on the Internet.
Combining Architecture-Centric Engineering with the Team Software Process
2010-12-01
Colleagues from Quarksoft and CIMAT have recently reported on their experiences in "Introducing Software Architecture Development Methods into a TSP…". Each TSP cycle starts from business and technical goals, proceeds through estimates, plans, process, and commitment to work products, and closes with a postmortem capturing lessons, new goals, new requirements, new risks, etc. The architecture is used to mitigate the risks uncovered by the ATAM; at the end of the iteration, version 1.0 of the architecture is available.
VINE: A Variational Inference-Based Bayesian Neural Network Engine
2018-01-01
Networks are trained using the same dataset and hyperparameter settings as discussed. The underlying operations (multiplication, addition, subtraction) can be implemented using nested loops in which the iterations of a loop are independent of each other. This introduces an opportunity for optimization, where a loop may be unrolled fully or partially to increase parallelism at the cost of additional hardware resources.
Hierarchical Engine for Large-scale Infrastructure Co-Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-04-24
HELICS is designed to support very-large-scale (100,000+ federates) co-simulations with off-the-shelf power-system, communication, market, and end-use tools. Other key features include cross-platform operating system support, the integration of both event-driven (e.g., packetized communication) and time-series (e.g., power flow) simulations, and the ability to co-iterate among federates to ensure physical model convergence at each time step.
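Co-iteration means re-exchanging interface values among federates within a single time step until the coupled variables stop changing. A schematic two-federate loop (hypothetical stand-in models, not the HELICS API):

```python
def federate_a(voltage):
    """Stand-in end-use federate: current drawn at a bus."""
    return 10.0 / max(voltage, 0.1)

def federate_b(current):
    """Stand-in network federate: bus voltage under that load."""
    return 1.0 - 0.02 * current

def co_iterate(tol=1e-9, max_rounds=100):
    """Within one time step, repeat the value exchange until the
    coupled interface variables converge."""
    v = 1.0
    for _ in range(max_rounds):
        i = federate_a(v)
        v_new = federate_b(i)
        if abs(v_new - v) < tol:
            return v_new, i
        v = v_new
    raise RuntimeError("co-iteration did not converge in this step")

print(co_iterate())
```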
Towards Single-Step Biofabrication of Organs on a Chip via 3D Printing.
Knowlton, Stephanie; Yenilmez, Bekir; Tasoglu, Savas
2016-09-01
Organ-on-a-chip engineering employs microfabrication of living tissues within microscale fluid channels to create constructs that closely mimic human organs. With the advent of 3D printing, we predict that single-step fabrication of these devices will enable rapid design and cost-effective iterations in the development stage, facilitating rapid innovation in this field. Copyright © 2016 Elsevier Ltd. All rights reserved.
1994-06-01
Algorithms for large, irreducibly coupled systems iteratively solve concurrent problems within different subspaces of a Hilbert space, or within different... effective on problems amenable to SIMD solution. Together with researchers at AT&T Bell Labs (Boris Lubachevsky, Albert Greenberg) we have developed... reasonable measurement. In the study of different speedups, various causes of superlinear speedup are also presented.
2010-03-01
service consumers, and infrastructure. Techniques from any iterative and incremental software development methodology followed by the organization... Service-Oriented Architecture Environment (CMU/SEI-2008-TN-008). Software Engineering Institute, Carnegie Mellon University, 2008. http://www.sei.cmu.edu... "Integrating Legacy Software into a Service Oriented Architecture." Proceedings of the 10th European Conference on Software Maintenance (CSMR 2006). Bari.
NASA Technical Reports Server (NTRS)
Mendenhall, M. R.; Goodwin, F. K.; Spangler, S. B.
1976-01-01
A vortex lattice lifting-surface method is used to model the wing and multiple flaps. Each lifting surface may be of arbitrary planform having camber and twist, and the multiple-slotted trailing-edge flap system may consist of up to ten flaps with different spans and deflection angles. The engine wakes model consists of a series of closely spaced vortex rings with circular or elliptic cross sections. The rings are normal to a wake centerline which is free to move vertically and laterally to accommodate the local flow field beneath the wing and flaps. The two potential flow models are used in an iterative fashion to calculate the wing-flap loading distribution including the influence of the wakes from up to two turbofan engines on the semispan. The method is limited to the condition where the flow and geometry of the configuration are symmetric about the vertical plane containing the wing root chord. The calculation procedure starts with arbitrarily positioned wake centerlines and the iterative calculation continues until the total configuration loading converges within a prescribed tolerance. Program results include total configuration forces and moments, individual lifting-surface load distributions including pressure distributions, individual flap hinge moments, and flow field calculations at arbitrary field points.
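The outer loop described above, alternating the two potential-flow models until the total loading converges within tolerance, has this generic shape (schematic stand-in functions, not the program itself):

```python
def relax_wake_centerlines(loading):
    """Stand-in: wake centerlines displaced by the local flow field."""
    return [0.9 * loading, 0.8 * loading]

def vortex_lattice_loading(centerlines):
    """Stand-in: lifting-surface solve for the total loading."""
    return 1.0 + 0.1 * sum(centerlines)

loading, tol = 1.0, 1e-10
for it in range(100):
    centerlines = relax_wake_centerlines(loading)
    new_loading = vortex_lattice_loading(centerlines)
    if abs(new_loading - loading) < tol:   # converged within tolerance
        break
    loading = new_loading
print(f"converged after {it} iterations, loading = {loading:.6f}")
```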
NASA Astrophysics Data System (ADS)
Yue, Haosong; Chen, Weihai; Wu, Xingming; Wang, Jianhua
2016-03-01
Three-dimensional (3-D) simultaneous localization and mapping (SLAM) is a crucial technique for intelligent robots to navigate autonomously and execute complex tasks. It can also be applied to shape measurement, reverse engineering, and many other scientific or engineering fields. A widespread SLAM algorithm, named KinectFusion, performs well in environments with complex shapes. However, it cannot handle translation uncertainties well in highly structured scenes. This paper improves the KinectFusion algorithm and makes it competent in both structured and unstructured environments. 3-D line features are first extracted according to both color and depth data captured by Kinect sensor. Then the lines in the current data frame are matched with the lines extracted from the entire constructed world model. Finally, we fuse the distance errors of these line-pairs into the standard KinectFusion framework and estimate sensor poses using an iterative closest point-based algorithm. Comparative experiments with the KinectFusion algorithm and one state-of-the-art method in a corridor scene have been done. The experimental results demonstrate that after our improvement, the KinectFusion algorithm can also be applied to structured environments and has higher accuracy. Experiments on two open access datasets further validated our improvements.
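At the heart of any ICP-based pose estimator is a closed-form rigid alignment of matched features; the paper adds line-pair distance errors to this machinery. A minimal SVD (Kabsch) step for plain point correspondences, sketched under that simplification:

```python
import numpy as np

def best_rigid_transform(P, Q):
    """Least-squares R, t mapping points P onto correspondences Q
    (Kabsch/SVD closed form -- one ICP alignment step)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # no reflections
    R = Vt.T @ D @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(3)
P = rng.random((100, 3))
R_true = np.linalg.qr(rng.standard_normal((3, 3)))[0]
if np.linalg.det(R_true) < 0:
    R_true[:, 0] *= -1.0              # force a proper rotation
t_true = np.array([0.1, -0.2, 0.3])
Q = P @ R_true.T + t_true

R, t = best_rigid_transform(P, Q)
print("rotation error:", np.linalg.norm(R - R_true))
print("translation error:", np.linalg.norm(t - t_true))
```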
Re-typograph phase I: a proof-of-concept for typeface parameter extraction from historical documents
NASA Astrophysics Data System (ADS)
Lamiroy, Bart; Bouville, Thomas; Blégean, Julien; Cao, Hongliu; Ghamizi, Salah; Houpin, Romain; Lloyd, Matthias
2015-01-01
This paper reports on the first phase of an attempt to create a full retro-engineering pipeline that aims to construct a complete set of coherent typographic parameters defining the typefaces used in a printed homogeneous text. It should be stressed that this process cannot reasonably be expected to be fully automatic and that it is designed to include human interaction. Although font design is governed by a set of quite robust and formal geometric rulesets, it still heavily relies on subjective human interpretation. Furthermore, different parameters applied to the generic rulesets may actually result in quite similar and visually difficult to distinguish typefaces, making the retro-engineering an inverse problem that is ill conditioned once shape distortions (related to the printing and/or scanning process) come into play. This work is the first phase of a long iterative process, in which we will progressively study and assess the techniques from the state of the art that are most suited to our problem and investigate new directions when they prove inadequate. As a first step, this is more of a feasibility proof-of-concept that will allow us to clearly pinpoint the items that will require more in-depth research over the next iterations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farina, D.; Figini, L.; Henderson, M.
2014-06-15
The design of the ITER Electron Cyclotron Heating and Current Drive (EC H and CD) system has evolved in the last years both in goals and functionalities by considering an expanded range of applications. A large effort has been devoted to a better integration of the equatorial and the upper launchers, both from the point of view of the performance and of the design impact on the engineering constraints. However, from the analysis of the ECCD performance in two reference H-mode scenarios at burn (the inductive H-mode and the advanced non-inductive scenario), it was clear that the EC power deposition was not optimal for steady-state applications in the plasma region around mid radius. An optimization study of the equatorial launcher is presented here aiming at removing this limitation of the EC system capabilities. Changing the steering of the equatorial launcher from toroidal to poloidal ensures EC power deposition out to the normalized toroidal radius ρ ≈ 0.6, and nearly doubles the EC driven current around mid radius, without significant performance degradation in the core plasma region. In addition to the improved performance, the proposed design change is able to relax some engineering design constraints on both launchers.
SCOUSE: Semi-automated multi-COmponent Universal Spectral-line fitting Engine
NASA Astrophysics Data System (ADS)
Henshaw, J. D.; Longmore, S. N.; Kruijssen, J. M. D.; Davies, B.; Bally, J.; Barnes, A.; Battersby, C.; Burton, M.; Cunningham, M. R.; Dale, J. E.; Ginsburg, A.; Immer, K.; Jones, P. A.; Kendrew, S.; Mills, E. A. C.; Molinari, S.; Moore, T. J. T.; Ott, J.; Pillai, T.; Rathborne, J.; Schilke, P.; Schmiedeke, A.; Testi, L.; Walker, D.; Walsh, A.; Zhang, Q.
2016-01-01
The Semi-automated multi-COmponent Universal Spectral-line fitting Engine (SCOUSE) is a spectral line fitting algorithm that fits Gaussian profiles to spectral line emission. It identifies the spatial area over which to fit the data and generates a grid of spectral averaging areas (SAAs). The spatially averaged spectra are fitted according to user-provided tolerance levels, and the best fit is selected using the Akaike Information Criterion, which weights the chi-squared value of a best-fitting solution according to the number of free parameters. A more detailed inspection of the spectra can be performed to improve the fit through an iterative process, after which SCOUSE integrates the new solutions into the solution file.
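With Gaussian errors the selection rule reduces to AIC = χ² + 2k up to a constant, where k is the number of free parameters, and the candidate fit with the lowest AIC wins. A sketch choosing between one and two Gaussian components (illustrative, not the SCOUSE code):

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss_sum(x, *p):                 # sum of (amp, mean, sigma) triples
    y = np.zeros_like(x)
    for a, mu, sig in zip(p[0::3], p[1::3], p[2::3]):
        y = y + a * np.exp(-0.5 * ((x - mu) / sig) ** 2)
    return y

rng = np.random.default_rng(4)
x = np.linspace(-10.0, 10.0, 200)
sigma_n = 0.05                        # known noise level
y = gauss_sum(x, 1.0, -1.5, 1.0, 0.6, 2.5, 0.8)
y = y + sigma_n * rng.standard_normal(x.size)

for p0 in ([1, 0, 1], [1, -2, 1, 0.5, 2, 1]):   # 1 vs 2 components
    popt, _ = curve_fit(gauss_sum, x, y, p0=p0)
    chisq = np.sum(((y - gauss_sum(x, *popt)) / sigma_n) ** 2)
    aic = chisq + 2 * len(popt)       # AIC up to an additive constant
    print(f"{len(p0) // 3} component(s): AIC = {aic:.1f}")
```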
Matsuoka, Yukiko; Ghosh, Samik; Kitano, Hiroaki
2009-01-01
The discovery by design paradigm driving research in synthetic biology entails the engineering of de novo biological constructs with well-characterized input–output behaviours and interfaces. The construction of biological circuits requires iterative phases of design, simulation and assembly, leading to the fabrication of a biological device. In order to represent engineered models in a consistent visual format and further simulate them in silico, standardization of representation and model formalism is imperative. In this article, we review different efforts for standardization, particularly standards for graphical visualization and simulation/annotation schemata adopted in systems biology. We identify the importance of integrating the different standardization efforts and provide insights into potential avenues for developing a common framework for model visualization, simulation and sharing across various tools. We envision that such a synergistic approach would lead to the development of global, standardized schemata in biology, empowering deeper understanding of molecular mechanisms as well as engineering of novel biological systems.
User engineering: A new look at system engineering
NASA Technical Reports Server (NTRS)
Mclaughlin, Larry L.
1987-01-01
User Engineering is a new System Engineering perspective responsible for defining and maintaining the user view of the system. Its elements are a process to guide the project and customer, a multidisciplinary team including hard and soft sciences, rapid prototyping tools to build user interfaces quickly and modify them frequently at low cost, and a prototyping center for involving users and designers in an iterative way. The main consideration is reducing the risk that the end user will not or cannot effectively use the system. The process begins with user analysis to produce cognitive and work style models, and task analysis to produce user work functions and scenarios. These become major drivers of the human computer interface design which is presented and reviewed as an interactive prototype by users. Feedback is rapid and productive, and user effectiveness can be measured and observed before the system is built and fielded. Requirements are derived via the prototype and baselined early to serve as an input to the architecture and software design.
Digital computer program for generating dynamic turbofan engine models (DIGTEM)
NASA Technical Reports Server (NTRS)
Daniele, C. J.; Krosel, S. M.; Szuch, J. R.; Westerkamp, E. J.
1983-01-01
This report describes DIGTEM, a digital computer program that simulates two-spool, two-stream turbofan engines. The turbofan engine model in DIGTEM contains steady-state performance maps for all of the components and has control volumes where continuity and energy balances are maintained. Rotor dynamics and duct momentum dynamics are also included. Altogether there are 16 state variables and state equations. DIGTEM features a backward-difference integration scheme for integrating stiff systems. It trims the model equations to match a prescribed design point by calculating correction coefficients that balance out the dynamic equations. It uses the same coefficients at off-design points and iterates to a balanced engine condition. Transients can also be run; they are generated by defining controls as a function of time (open-loop control) in a user-written subroutine (TMRSP). DIGTEM has run on the IBM 370/3033 computer using implicit integration with time steps ranging from 1.0 msec to 1.0 sec. DIGTEM is generalized in the aerothermodynamic treatment of components.
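Backward-difference (implicit Euler) integration is what lets a stiff model take steps far beyond the explicit stability limit; a one-state sketch with a Newton solve per step (a generic stiff ODE, not the engine equations):

```python
import numpy as np

def f(t, x):
    """Stiff test dynamics: fast decay toward a slowly moving target."""
    return -1000.0 * (x - np.cos(t))

def backward_euler(x0, t0, t1, dt):
    """Implicit step: solve x_new = x + dt * f(t_new, x_new)
    by Newton iteration at every time step."""
    t, x = t0, x0
    while t < t1:
        t_new, x_new = t + dt, x
        for _ in range(20):                 # Newton on g(x_new) = 0
            g = x_new - x - dt * f(t_new, x_new)
            dg = 1.0 + dt * 1000.0          # g' = 1 - dt * df/dx
            step = g / dg
            x_new -= step
            if abs(step) < 1e-12:
                break
        t, x = t_new, x_new
    return x

# stable even with dt far above the explicit limit (~2/1000 s)
print(backward_euler(x0=0.0, t0=0.0, t1=1.0, dt=0.05))
```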
The MEOW lunar project for education and science based on concurrent engineering approach
NASA Astrophysics Data System (ADS)
Roibás-Millán, E.; Sorribes-Palmer, F.; Chimeno-Manguán, M.
2018-07-01
The use of concurrent engineering in the design of space missions makes it possible to take into account, in an interrelated methodology, the high level of coupling and iteration among mission subsystems in the preliminary conceptual phase. This work presents the result of applying concurrent engineering over a short time span to design the main elements of the preliminary design for a lunar exploration mission, developed within the ESA Academy Concurrent Engineering Challenge 2017. During this program, students of the Master in Space Systems at the Technical University of Madrid designed a low cost satellite to find water at the Moon's south pole in prospect of a future human lunar base. The resulting mission, the Moon Explorer And Observer of Water/Ice (MEOW), comprises a 262 kg spacecraft to be launched into a Geostationary Transfer Orbit as a secondary payload in the 2023/2025 time frame. A three-month Weak Stability Boundary transfer via the Sun-Earth L1 Lagrange point allows for high launch timeframe flexibility. The different aspects of the mission (orbit analysis, spacecraft design and payload) and the possibilities of concurrent engineering are described.
Comparing Freshman and doctoral engineering students in design: mapping with a descriptive framework
NASA Astrophysics Data System (ADS)
Carmona Marques, P.
2017-11-01
This paper reports the results of a study of engineering students' approaches to an open-ended design problem. To carry this out, sketches and interviews were collected from 9 freshman (first year) and 10 doctoral engineering students as they designed solutions for orange squeezers. The sketches and interviews were analysed and mapped with a descriptive 'ideation framework' (IF) of the design process to document and compare their design creativity (Carmona Marques, P., A. Silva, E. Henriques, and C. Magee. 2014. "A Descriptive Framework of the Design Process from a Dual Cognitive Engineering Perspective." International Journal of Design Creativity and Innovation 2 (3): 142-164). The results show that the designers worked in a manner largely consistent with the IF for generalisation and specialisation loops. Also, doctoral students produced more alternative solutions during the ideation process and, compared to the freshmen, used the generalisation loop of the IF to work at higher levels of abstraction. The iterative nature of design is highlighted by this study - a potential contribution to decreasing the gap between both groups in engineering education.
Eigenproblem solution by a combined Sturm sequence and inverse iteration technique.
NASA Technical Reports Server (NTRS)
Gupta, K. K.
1973-01-01
Description of an efficient and numerically stable algorithm, along with a complete listing of the associated computer program, developed for the accurate computation of specified roots and associated vectors of the eigenvalue problem Aq = lambda Bq with band symmetric A and B, B being also positive-definite. The desired roots are first isolated by the Sturm sequence procedure; then a special variant of the inverse iteration technique is applied for the individual determination of each root along with its vector. The algorithm fully exploits the banded form of the relevant matrices, and the associated program, written in FORTRAN V for the JPL UNIVAC 1108 computer, proves to be significantly more economical than similar existing procedures. The program may be conveniently utilized for the efficient solution of practical engineering problems involving free vibration and buckling analysis of structures. Results of such analyses are presented for representative structures.
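Once a root has been isolated, inverse iteration converges rapidly because the shifted operator strongly amplifies the nearby eigenvector. A minimal dense-matrix sketch of the technique for Aq = lambda Bq follows; the banded-storage economies of the original program are omitted, and all names are illustrative.

    import numpy as np
    from scipy.linalg import solve

    def inverse_iteration(A, B, shift, tol=1e-10, max_iter=100):
        # Refine the eigenpair of A q = lambda B q nearest `shift`.
        q = np.random.default_rng(0).standard_normal(A.shape[0])
        lam = shift
        for _ in range(max_iter):
            z = solve(A - shift * B, B @ q)        # one inverse-iteration step
            q = z / np.linalg.norm(z)
            lam_new = (q @ (A @ q)) / (q @ (B @ q))       # Rayleigh quotient
            if abs(lam_new - lam) < tol * max(1.0, abs(lam_new)):
                break
            lam = lam_new
        return lam_new, q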
Huang, Zhihao; Zhao, Junfei; Wang, Zimu; Meng, Fanying; Ding, Kunshan; Pan, Xiangqiang; Zhou, Nianchen; Li, Xiaopeng; Zhang, Zhengbiao; Zhu, Xiulin
2017-10-23
Orthogonal maleimide and thiol deprotections were combined with thiol-maleimide coupling to synthesize discrete oligomers/macromolecules on a gram scale with molecular weights up to 27.4 kDa (128mer, 7.9 g) using an iterative exponential growth strategy with a degree of polymerization (DP) of 2^n - 1. Using the same chemistry, a "readable" sequence-defined oligomer and a discrete cyclic topology were also created. Furthermore, uniform dendrons were fabricated using sequential growth (DP = 2^n - 1) or double exponential dendrimer growth approaches (DP = 2^(2^n) - 1) with significantly accelerated growth rates. A versatile, efficient, and metal-free method for construction of discrete oligomers with tailored structures and a high growth rate would greatly facilitate research into the structure-property relationships of sophisticated polymeric materials. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
High density operation for reactor-relevant power exhaust
NASA Astrophysics Data System (ADS)
Wischmeier, M.; ASDEX Upgrade Team; Jet Efda Contributors
2015-08-01
With increasing size of a tokamak device and the associated gain in fusion power, an increasing power flux density towards the divertor needs to be handled. A solution for handling this power flux is crucial for safe and economical operation. Using purely geometric arguments in an ITER-like divertor, this power flux can be reduced by approximately a factor of 100. Based on a conservative extrapolation of current technology for an integrated engineering approach to removing the power deposited on plasma-facing components, a further reduction of the power flux density by up to a factor of 50 via volumetric processes in the plasma is required. Our current ability to interpret existing power exhaust scenarios using numerical transport codes is analyzed, and an operational scenario is presented as a potential solution for ITER-like divertors under high-density, highly radiating, reactor-relevant conditions. Alternative concepts for risk mitigation as well as strategies for moving forward are outlined.
Linking the Long Tail of Data: A Bottoms-up Approach to Connecting Scientific Research
NASA Astrophysics Data System (ADS)
Jacob, B.; Arctur, D. K.
2016-12-01
Highly curated ontologies are often developed for big scientific data, but the long tail of research data rarely receives the same treatment. The learning curve for Semantic Web technology is steep, and the value of linking each long-tail data set to known taxonomies and ontologies in isolation rarely justifies the level of effort required to bring a Knowledge Engineer into the project. We present a bottom-up approach that mechanically produces a Linked Data model of datasets, inferring the shape and structure of the data from the original format, and adding derived variables and semantic linkages via iterative, interactive refinements of that model. In this way, the vast corpus of small but rich scientific data becomes part of the greater linked web of knowledge, and the connectivity of that data can be iteratively improved over time.
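As a concrete illustration of mechanically lifting a record into Linked Data, the sketch below uses the rdflib library to turn one tabular row into RDF triples, inferring a crude datatype "shape" from the Python types. The example.org namespace, the Observation class, and the field names are hypothetical; this is not the authors' tooling.

    from rdflib import Graph, Literal, Namespace, URIRef
    from rdflib.namespace import RDF, XSD

    EX = Namespace("http://example.org/longtail/")  # hypothetical namespace

    def record_to_triples(g, dataset_id, row_id, row):
        # Mechanically lift one record into RDF, inferring datatypes.
        subj = URIRef(EX[f"{dataset_id}/row/{row_id}"])
        g.add((subj, RDF.type, EX.Observation))
        for field, value in row.items():
            dtype = XSD.double if isinstance(value, float) else XSD.string
            g.add((subj, EX[field], Literal(value, datatype=dtype)))

    g = Graph()
    record_to_triples(g, "stream_gauge_42", 1, {"stage_m": 1.32, "site": "WX-7"})
    print(g.serialize(format="turtle"))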
Weinmann, Andreas; Storath, Martin
2015-01-01
Signals with discontinuities appear in many problems in the applied sciences, ranging from mechanics and electrical engineering to biology and medicine. The concrete data acquired are typically discrete, indirect and noisy measurements of some quantities describing the signal under consideration. The task is to restore the signal and, in particular, the discontinuities. In this respect, classical methods perform rather poorly, whereas non-convex non-smooth variational methods seem to be the correct choice. Examples are methods based on Mumford–Shah and piecewise constant Mumford–Shah functionals and discretized versions which are known as Blake–Zisserman and Potts functionals. Owing to their non-convexity, minimization of such functionals is challenging. In this paper, we propose a new iterative minimization strategy for Blake–Zisserman as well as Potts functionals and a related jump-sparsity problem dealing with indirect, noisy measurements. We provide a convergence analysis and underpin our findings with numerical experiments. PMID:27547074
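For the direct-measurement special case, the 1D Potts problem (a penalty gamma per jump plus the squared data misfit) admits an exact solution by a classical O(n^2) dynamic program, a useful baseline for the harder indirect setting the paper treats. A minimal sketch with illustrative names:

    import numpy as np

    def potts_1d(y, gamma):
        # Exact minimizer of gamma * (#jumps) + sum_i (x_i - y_i)^2
        # over piecewise constant signals x, via dynamic programming.
        y = np.asarray(y, dtype=float)
        n = len(y)
        s = np.concatenate(([0.0], np.cumsum(y)))        # prefix sums
        s2 = np.concatenate(([0.0], np.cumsum(y ** 2)))

        def seg_cost(l, r):  # squared error of the best constant on y[l:r]
            return s2[r] - s2[l] - (s[r] - s[l]) ** 2 / (r - l)

        best = np.full(n + 1, np.inf)
        best[0] = -gamma             # first segment pays no jump penalty
        last = np.zeros(n + 1, dtype=int)
        for r in range(1, n + 1):
            for l in range(r):
                c = best[l] + gamma + seg_cost(l, r)
                if c < best[r]:
                    best[r], last[r] = c, l
        x, r = np.empty(n), n        # backtrack and fill segment means
        while r > 0:
            l = last[r]
            x[l:r] = (s[r] - s[l]) / (r - l)
            r = l
        return x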
Engineering design in the primary school: applying stem concepts to build an optical instrument
NASA Astrophysics Data System (ADS)
King, Donna; English, Lyn D.
2016-12-01
Internationally there is a need for research that focuses on STEM (Science, Technology, Engineering and Mathematics) education to equip students with the skills needed for a rapidly changing future. One way to do this is through designing engineering activities that reflect real-world problems and contextualise students' learning of STEM concepts. As such, this study examined the learning that occurred when fifth-grade students completed an optical engineering activity using an iterative engineering design model. Through a qualitative methodology using a case study design, we analysed multiple data sources including students' design sketches from eight focus groups. Three key findings emerged: first, the collaborative process of the first design sketch enabled students to apply core STEM concepts to model construction; second, during the construction stage students used experimentation for the positioning of lenses, mirrors and tubes resulting in a simpler 'working' model; and third, the redesign process enabled students to apply structural changes to their design. The engineering design model was useful for structuring stages of design, construction and redesign; however, we suggest a more flexible approach for advanced applications of STEM concepts in the future.
2011-02-25
fast method of predicting the number of iterations needed for converged results. A new hybrid technique is proposed to predict the convergence history ... interchanging between the modes, whereas a smaller veering (or crossing) region shows fast mode switching. Then, the nonlinear vibration response of the ... problems of interest involve dynamic (fast) crack propagation, then the nodes selected by the proposed approach at some time instant might not
Computer-Aided Engineering of Semiconductor Integrated Circuits
1979-07-01
equation using a five point finite difference approximation. Section 4.3.6 describes the numerical techniques and iterative algorithms which are used ... neighbor points. This is generally referred to as a five point finite difference scheme on a rectangular grid, as described below. The finite difference ... problems in steady state have been analyzed by the finite difference method [4.16], [4.17] or finite element method [4.18], [4.19] as reported last
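For orientation, the five-point scheme replaces the Laplacian at each interior grid point with a combination of the point and its four neighbors, and the resulting algebraic system is solved iteratively. The sketch below applies successive over-relaxation (SOR) to a Poisson-type equation with zero Dirichlet boundaries; it is a generic textbook illustration of the stencil and iteration named in the snippet, not the report's code.

    import numpy as np

    def solve_poisson_sor(f, h, omega=1.8, tol=1e-8, max_sweeps=10000):
        # Solve laplacian(u) = f on a rectangular grid, u = 0 on the boundary.
        u = np.zeros_like(f, dtype=float)
        for _ in range(max_sweeps):
            delta = 0.0
            for i in range(1, f.shape[0] - 1):
                for j in range(1, f.shape[1] - 1):
                    # Five-point stencil: neighbor average minus source term.
                    gs = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                 u[i, j - 1] + u[i, j + 1] - h * h * f[i, j])
                    new = u[i, j] + omega * (gs - u[i, j])  # over-relaxation
                    delta = max(delta, abs(new - u[i, j]))
                    u[i, j] = new
            if delta < tol:
                break
        return u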
Engineering Design Theory: Applying the Success of the Modern World to Campaign Creation
2009-05-21
and school of thought) to the simple methods of design.6 This progression is analogous to Peter Senge's levels of learning disciplines.7 Senge ... iterative learning and adaptive action that develops and employs critical and creative thinking, enabling leaders to apply the necessary logic to ... overcome mental rigidity and develop group insight, the Army must learn to utilize group learning and thinking, through a fluid and creative open process
A hybrid multigroup neutron-pattern model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pogosbekyan, L.R.; Lysov, D.A.
In this paper, we use a general approach to construct a multigroup hybrid model for the neutron pattern. The equations are given together with a reasonably economical and simple iterative method of solving them. The algorithm can be used to calculate the pattern and its functionals, as well as to correct the constants from experimental data and to adapt the constant support used by engineering programs by reference to precision codes.
The Domain-Specific Software Architecture Program
1992-06-01
Kang, K.C.; Cohen, S.G.; Hess, J.A.; Novak, W.E.; Peterson, A.S. Feature-Oriented Domain Analysis (FODA) Feasibility Study. (CMU/SEI-90-TR-21, ADA235785) ... perspective of a controls engineer solving a problem using an iterative process of simulation and analysis. The CMU/SEI-92-SR-9 ... for schedulability analysis and Markov processes for the determination of reliability. Software architectures are derived from these formal models. ORA
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.; Mistree, Farrokh; Schrage, Daniel P.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. Georgia Tech has proposed the development of an Integrated Design Engineering Simulator that will merge Integrated Product and Process Development with interdisciplinary analysis techniques and state-of-the-art computational technologies. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. The current status of development is given and future directions are outlined.
Knowledge-based assistance in costing the space station DMS
NASA Technical Reports Server (NTRS)
Henson, Troy; Rone, Kyle
1988-01-01
The Software Cost Engineering (SCE) methodology developed over the last two decades at IBM Systems Integration Division (SID) in Houston is utilized to cost the NASA Space Station Data Management System (DMS). An ongoing project to capture this methodology, which is built on a foundation of experiences and lessons learned, has resulted in the development of an internal-use-only, PC-based prototype that integrates algorithmic tools with knowledge-based decision support assistants. This prototype Software Cost Engineering Automation Tool (SCEAT) is being employed to assist in the DMS costing exercises. At the same time, DMS costing serves as a forcing function and provides a platform for the continuing, iterative development, calibration, and validation and verification of SCEAT. The data that forms the cost engineering database is derived from more than 15 years of development of NASA Space Shuttle software, ranging from low criticality, low complexity support tools to highly complex and highly critical onboard software.
NASA Technical Reports Server (NTRS)
Crasner, Aaron I.; Scola, Salvatore; Beyon, Jeffrey Y.; Petway, Larry B.
2014-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via Interdisciplinary Design Concept (IDEC) at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through iterative and interactive collaborative design processes. A preliminary design iteration was able to reduce the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components with excessive performance with smaller components custom-designed for the power system. Thermal modeling software was used to run steady state thermal analyses, which were used to both validate the designs and recommend further changes. Analyses were run on each redesign, as well as the original system. Thermal Desktop was used to run trade studies to account for uncertainty and assumptions about fan performance and boundary conditions. The studies suggested that, even if the assumptions were significantly wrong, the redesigned systems would remain within operating temperature limits.
NASA Technical Reports Server (NTRS)
Blumenthal, Brennan T.; Elmiligui, Alaa; Geiselhart, Karl A.; Campbell, Richard L.; Maughmer, Mark D.; Schmitz, Sven
2016-01-01
The present paper examines potential propulsive and aerodynamic benefits of integrating a Boundary-Layer Ingestion (BLI) propulsion system into a typical commercial aircraft using the Common Research Model (CRM) geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment is used to generate engine conditions for CFD analysis. Improvements to the BLI geometry are made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model is presented. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown, and improvements between subsequent BLI designs are presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes, along with full results for BLI power savings. Results indicate a 14.4% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to further increase the benefit from Boundary-Layer Ingestion, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.
NASA Technical Reports Server (NTRS)
Blumenthal, Brennan
2016-01-01
This thesis will examine potential propulsive and aerodynamic benefits of integrating a boundary-layer ingestion (BLI) propulsion system with a typical commercial aircraft using the Common Research Model geometry and the NASA Tetrahedral Unstructured Software System (TetrUSS). The Numerical Propulsion System Simulation (NPSS) environment will be used to generate engine conditions for CFD analysis. Improvements to the BLI geometry will be made using the Constrained Direct Iterative Surface Curvature (CDISC) design method. Previous studies have shown reductions of up to 25% in terms of propulsive power required for cruise for other axisymmetric geometries using the BLI concept. An analysis of engine power requirements, drag, and lift coefficients using the baseline and BLI geometries coupled with the NPSS model is shown. Potential benefits of the BLI system relating to cruise propulsive power are quantified using a power balance method, and a comparison to the baseline case is made. Iterations of the BLI geometric design are shown, and improvements between subsequent BLI designs are presented. Simulations are conducted for a cruise flight condition of Mach 0.85 at an altitude of 38,500 feet and an angle of attack of 2 deg for all geometries. A comparison between available wind tunnel data, previous computational results, and the original CRM model is presented for model verification purposes, along with full results for BLI power savings. Results indicate a 14.3% reduction in engine power requirements at cruise for the BLI configuration over the baseline geometry. Minor shaping of the aft portion of the fuselage using CDISC has been shown to further increase the benefit from boundary-layer ingestion, resulting in a 15.6% reduction in power requirements for cruise as well as a drag reduction of eighteen counts over the baseline geometry.
NASA Astrophysics Data System (ADS)
Elkurdi, Yousef; Fernández, David; Souleimanov, Evgueni; Giannacopoulos, Dennis; Gross, Warren J.
2008-04-01
The Finite Element Method (FEM) is a computationally intensive scientific and engineering analysis tool that has diverse applications ranging from structural engineering to electromagnetic simulation. The trends in floating-point performance are moving in favor of Field-Programmable Gate Arrays (FPGAs); hence, interest has grown in the scientific community in exploiting this technology. We present an architecture and implementation of an FPGA-based sparse matrix-vector multiplier (SMVM) for use in the iterative solution of large, sparse systems of equations arising from FEM applications. FEM matrices display specific sparsity patterns that can be exploited to improve the efficiency of hardware designs. Our architecture exploits FEM matrix sparsity structure to achieve a balance between performance and hardware resource requirements by relying on external SDRAM for data storage while utilizing the FPGA's computational resources in a stream-through systolic approach. The architecture is based on a pipelined linear array of processing elements (PEs) coupled with a hardware-oriented matrix striping algorithm and a partitioning scheme which enables it to process arbitrarily large matrices without changing the number of PEs in the architecture. Therefore, this architecture is limited only by the amount of external RAM available to the FPGA. The implemented SMVM-pipeline prototype contains 8 PEs and is clocked at 110 MHz, obtaining a peak performance of 1.76 GFLOPS. For the 8 GB/s of memory bandwidth typical of recent FPGA systems, this architecture can achieve 1.5 GFLOPS sustained performance. Using multiple instances of the pipeline, linear scaling of the peak and sustained performance can be achieved. Our stream-through architecture provides the added advantage of enabling an iterative implementation of the SMVM computation required by iterative solution techniques such as the conjugate gradient method, avoiding initialization time due to data loading and setup inside the FPGA internal memory.
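The streamed kernel is an ordinary compressed-sparse-row (CSR) matrix-vector product, and the surrounding iterative solver is typically conjugate gradient. The software sketch below shows both for reference; it describes the computation the pipeline implements, not the hardware design, and all names are illustrative.

    import numpy as np

    def csr_matvec(values, col_idx, row_ptr, x):
        # y = A x with A in CSR storage: the kernel the PEs stream through.
        y = np.zeros(len(row_ptr) - 1)
        for i in range(len(y)):
            for k in range(row_ptr[i], row_ptr[i + 1]):
                y[i] += values[k] * x[col_idx[k]]
        return y

    def conjugate_gradient(matvec, b, tol=1e-8, max_iter=1000):
        # Iterative solve of A x = b for symmetric positive definite A.
        x = np.zeros(len(b))
        r = b - matvec(x)
        p = r.copy()
        rs = r @ r
        for _ in range(max_iter):
            Ap = matvec(p)
            alpha = rs / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            rs_new = r @ r
            if np.sqrt(rs_new) < tol:
                break
            p = r + (rs_new / rs) * p
            rs = rs_new
        return x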
New design of cable-in-conduit conductor for application in future fusion reactors
NASA Astrophysics Data System (ADS)
Qin, Jinggang; Wu, Yu; Li, Jiangang; Liu, Fang; Dai, Chao; Shi, Yi; Liu, Huajun; Mao, Zhehua; Nijhuis, Arend; Zhou, Chao; Yagotintsev, Konstantin A.; Lubkemann, Ruben; Anvar, V. A.; Devred, Arnaud
2017-11-01
The China Fusion Engineering Test Reactor (CFETR) is a new tokamak device whose magnet system includes toroidal field, central solenoid (CS) and poloidal field coils. The main goal is to build a fusion engineering tokamak reactor with about 1 GW of fusion power and tritium self-sufficiency achieved by the blanket. In order to reach this high performance, the magnetic field target is 15 T. However, the huge electromagnetic load caused by the high field and current poses a threat of conductor degradation under cycling. The conductor with a short-twist-pitch (STP) design has large stiffness, which enables a significant performance improvement with respect to load and thermal cycling. But the STP design has a remarkable disadvantage: it can easily cause severe strand indentation during cabling. The indentation can reduce the strand performance, especially under high load cycling. In order to overcome this disadvantage, a new design is proposed. Its main characteristic is an updated layout in the triplet. The triplet is made of two Nb3Sn strands and one soft copper strand. The two Nb3Sn strands are cabled first with a large twist pitch; the copper strand is then wound around the two superconducting strands (CWS) with a shorter twist pitch. The layout and twist pitches of the following cable stages are similar to the ITER CS conductor with the STP design. One short conductor sample of similar scale to the ITER CS was manufactured and tested with the Twente Cable Press to investigate the mechanical properties and AC loss, with internal inspection by destructive examination. The results are compared to the STP conductor (ITER CS and CFETR CSMC) tests. They show that the new conductor design has similar stiffness, but much lower strand indentation, than the STP design. The new design shows potential for application in future fusion reactors.
NASA Astrophysics Data System (ADS)
Ozbasaran, Hakan
Trusses have an important place amongst engineering structures due to many advantages such as high structural efficiency, fast assembly and easy maintenance. Iterative truss design procedures, such as size, shape and topology optimization with stochastic methods, require analysis of a large number of candidate structural systems and usually lead the engineer to establish a link between the development platform and external structural analysis software. As the number of structural analyses increases, this (potentially slow-response) link may climb to the top of the list of performance issues. This paper introduces software for static, global member buckling and frequency analysis of 2D and 3D trusses that allows Mathematica users to overcome this problem.
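For context, the core of any such truss analysis engine is the direct stiffness method: assemble element stiffness contributions, apply the supports, and solve for the free displacements. The sketch below is a generic 2D illustration of that analysis, not the paper's Mathematica implementation; the names and the uniform E and A are illustrative.

    import numpy as np

    def truss_displacements(nodes, elements, E, A, loads, fixed_dofs):
        # nodes: list of (x, y); elements: list of node-index pairs (i, j);
        # loads: global force vector (numpy array, length 2 * len(nodes));
        # fixed_dofs: indices of constrained degrees of freedom.
        ndof = 2 * len(nodes)
        K = np.zeros((ndof, ndof))
        for (i, j) in elements:
            xi, yi = nodes[i]
            xj, yj = nodes[j]
            L = np.hypot(xj - xi, yj - yi)
            c, s = (xj - xi) / L, (yj - yi) / L
            v = np.array([-c, -s, c, s])
            dofs = [2 * i, 2 * i + 1, 2 * j, 2 * j + 1]
            K[np.ix_(dofs, dofs)] += (E * A / L) * np.outer(v, v)
        free = [d for d in range(ndof) if d not in set(fixed_dofs)]
        u = np.zeros(ndof)
        u[free] = np.linalg.solve(K[np.ix_(free, free)], loads[free])
        return u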
NASA Technical Reports Server (NTRS)
Garrett, J. L.; Syed, S. A.
1992-01-01
CFD analyses of the Space Transportation Main Engine film/dump cooled subscale nozzle are presented, with an emphasis on the timely impact of CFD in the design of the subscale nozzle secondary coolant system. Calculations were performed with the Generalized Aerodynamic Simulation Program (GASP), using a Baldwin-Lomax turbulence model and finite-rate hydrogen-oxygen chemistry. Design iterations for both the secondary coolant cavity passage and the secondary coolant lip are presented. In addition, validation of the GASP chemistry and turbulence models by comparison with data and other CFD codes is presented for a hypersonic laminar separation corner, a backward-facing step, and a 2D scramjet nozzle with hydrogen-oxygen kinetics.
NASA Technical Reports Server (NTRS)
Ortega, J. M.
1985-01-01
Synopses are given for NASA-supported work in computer science at the University of Virginia. Some areas of research include: error seeding as a testing method; knowledge representation for engineering design; analysis of faults in a multi-version software experiment; implementation of a parallel programming environment; two computer graphics systems for visualization of pressure distribution and convective density particles; task decomposition for multiple robot arms; vectorized incomplete conjugate gradient; and iterative methods for solving linear equations on the Flex/32.
A Resonant Synchronous Vibration Based Approach for Rotor Imbalance Detection
NASA Technical Reports Server (NTRS)
Luo, Huangeng; Rodriquez, Hector; Hallman, Darren; Lewicki, David G.
2006-01-01
This paper presents a methodology for detecting rotor imbalances, such as mass imbalance and crack-induced imbalance, using shaft synchronous vibrations. An iterative scheme is developed to identify parameters from measured synchronous vibration data. A detection system is integrated using state-of-the-art commercial analysis equipment. A laboratory rotor test rig is used to verify the system integration and validate the algorithm. A real engine test has been carried out and the results are reported.
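A basic building block of such an identification scheme is extracting the shaft-synchronous (1/rev) amplitude and phase from a measured vibration signal, which can be posed as a small least-squares fit. The sketch below is a generic illustration of that step with illustrative names; it is not the paper's full iterative parameter identification.

    import numpy as np

    def synchronous_component(t, v, f_shaft):
        # Fit v(t) ~ a cos(wt) + b sin(wt) + c at the shaft frequency and
        # return the 1/rev amplitude and phase (v = A cos(wt + phi) + c).
        w = 2.0 * np.pi * f_shaft
        M = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
        (a, b, c), *_ = np.linalg.lstsq(M, v, rcond=None)
        return np.hypot(a, b), np.arctan2(-b, a)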
Repeatable Reverse Engineering with the Platform for Architecture-Neutral Dynamic Analysis
2015-09-18
record and replay functionality: on a live execution, the amount of compute resources needed to identify and halt on every memory access and inspect ... and iteratively, running a replay of the previously gathered recording over and over to construct a deeper understanding of the important aspects of ... system events happen during the replay. A second analysis pass over the replay might focus in on the activity of a particular program or a portion of the
A new model for graduate education and innovation in medical technology.
Yazdi, Youseph; Acharya, Soumyadipta
2013-09-01
We describe a new model of graduate education in bioengineering innovation and design: a year-long Master's degree program that educates engineers in the process of healthcare technology innovation for both advanced and low-resource global markets. Students are trained in an iterative "Spiral Innovation" approach that ensures early, staged, and repeated examination of all key elements of a successful medical device. This includes clinical-immersion-based problem identification and assessment (at Johns Hopkins Medicine and abroad), team-based concept and business model development, and project planning based on iterative technical and business plan de-risking. The experiential, project-based learning process is closely supported by several core courses in business, design, and engineering. Students in the program work on two team-based projects, one focused on addressing healthcare needs in advanced markets and a second focused on low-resource settings. The program recently completed its fourth year of existence and has graduated 61 students, who have continued on to industry or startups (one half), additional graduate education or medical school (one third), or our own Global Health Innovation Fellowships. Over the 4 years, the program has sponsored 10 global health teams and 14 domestic/advanced-market medtech teams, and launched 5 startups, of which 4 are still active. Projects have attracted over US$2.5M in follow-on awards and grants, which are supporting the continued development of over a dozen projects.
Neutronics Comparison Analysis of the Water Cooled Ceramics Breeding Blanket for CFETR
NASA Astrophysics Data System (ADS)
Li, Jia; Zhang, Xiaokang; Gao, Fangfang; Pu, Yong
2016-02-01
China Fusion Engineering Test Reactor (CFETR) is an ITER-like fusion engineering test reactor that is intended to fill the scientific and technical gaps between ITER and DEMO. One of the main missions of CFETR is to achieve a tritium breeding ratio of no less than 1.2 to ensure tritium self-sufficiency. A concept design for a water cooled ceramics breeding blanket (WCCB) is presented based on a scheme with the breeder and the multiplier located in separate panels for CFETR. Based on this concept, a one-dimensional (1D) radial build of the breeding blanket was first designed, and then several three-dimensional models were developed with various neutron source definitions and breeding blanket module arrangements based on the 1D radial build. A set of nuclear analyses has been carried out to compare the differences in neutronics characteristics given by the different calculation models, addressing neutron wall loading (NWL), tritium breeding ratio (TBR), fast neutron flux on the inboard side, and nuclear heating deposition on the main in-vessel components. The impact of the modeling differences on the nuclear performance has been analyzed and summarized for the WCCB concept design. Supported by the National Special Project for Magnetic Confined Nuclear Fusion Energy (Nos. 2013GB108004, 2014GB122000, and 2014GB119000), and the National Natural Science Foundation of China (No. 11175207)
Developing stochastic model of thrust and flight dynamics for small UAVs
NASA Astrophysics Data System (ADS)
Tjhai, Chandra
This thesis presents a stochastic thrust model and aerodynamic model for small propeller-driven UAVs whose power plant is a small electric motor. First, a model is developed that relates the thrust generated by a small propeller driven by an electric motor to throttle setting and commanded engine RPM. A perturbation of this model is then used to relate the uncertainty in the commanded throttle and engine RPM to the error in the predicted thrust. Such a stochastic model is indispensable in the design of state estimation and control systems for UAVs, where the performance requirements of the systems are specified in stochastic terms. It is shown that thrust prediction models for small UAVs are not simple, explicit functions relating throttle input and RPM command to the thrust generated. Rather, they are nonlinear, iterative procedures which depend on a geometric description of the propeller and a mathematical model of the motor. A detailed derivation of the iterative procedure is presented, and the impact of errors arising from inaccurate propeller and motor descriptions is discussed. Validation results from a series of wind tunnel tests are presented. The results show favorable statistical agreement between the thrust uncertainty predicted by the model and the errors measured in the wind tunnel. The uncertainty model of aircraft aerodynamic coefficients, developed based on wind tunnel experiments, is discussed at the end of the thesis.
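The iterative character of such a thrust model can be seen in even a crude version: the steady operating point is where motor shaft torque balances propeller aerodynamic torque, and this balance must be found numerically before thrust can be evaluated. The sketch below iterates on that balance using a simple DC-motor model and fixed propeller coefficients; the constants CT and CQ and the motor model are illustrative assumptions, not the thesis's calibrated, geometry-based procedure.

    import numpy as np

    def steady_rpm_and_thrust(V_batt, throttle, Kv, R, i0, rho, D, CT, CQ,
                              n=100.0, tol=1e-9, max_iter=200):
        # Find rev/s where motor torque equals propeller torque, then thrust.
        Ke = 60.0 / (2.0 * np.pi * Kv)   # back-EMF constant from Kv [rpm/V]
        Kt = Ke                          # ideal DC motor in SI units
        V = throttle * V_batt

        def torque_imbalance(n):
            i = (V - Ke * 2.0 * np.pi * n) / R        # motor current
            return Kt * (i - i0) - CQ * rho * n**2 * D**5

        for _ in range(max_iter):
            dn = 1e-3 * max(n, 1.0)                   # numerical derivative
            f0, f1 = torque_imbalance(n), torque_imbalance(n + dn)
            step = f0 / ((f0 - f1) / dn)              # Newton-style update
            n = max(n + step, 1.0)
            if abs(step) < tol * n:
                break
        return 60.0 * n, CT * rho * n**2 * D**4       # RPM, thrust [N]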
Three-Dimensional Structure Analysis and Percolation Properties of a Barrier Marine Coating
Chen, Bo; Guizar-Sicairos, Manuel; Xiong, Gang; Shemilt, Laura; Diaz, Ana; Nutter, John; Burdet, Nicolas; Huo, Suguo; Mancuso, Joel; Monteith, Alexander; Vergeer, Frank; Burgess, Andrew; Robinson, Ian
2013-01-01
Artificially structured coatings are widely employed to minimize materials deterioration and corrosion, the annual direct cost of which is over 3% of the gross domestic product (GDP) for industrial countries. Manufacturing higher performance anticorrosive coatings is one of the most efficient approaches to reduce this loss. However, the three-dimensional (3D) structure of coatings, which determines their performance, has not been investigated in detail. Here we present a quantitative nano-scale analysis of the 3D spatial structure of an anticorrosive aluminium epoxy barrier marine coating obtained by serial block-face scanning electron microscopy (SBFSEM) and ptychographic X-ray computed tomography (PXCT). We then use finite element simulations to demonstrate how percolation through this actual 3D structure impedes ion diffusion in the composite materials. We found that the aluminium flakes align within 15° of the coating surface in the material, causing the perpendicular diffusion resistance of the coating to be substantially higher than that of the pure epoxy. PMID:23378910
Three-dimensional localization of nanoscale battery reactions using soft X-ray tomography
Yu, Young-Sang; Farmand, Maryam; Kim, Chunjoong; ...
2018-03-02
Battery function is determined by the efficiency and reversibility of the electrochemical phase transformations at solid electrodes. The microscopic tools available to study the chemical states of matter with the required spatial resolution and chemical specificity are intrinsically limited, when studying complex architectures, by their reliance on two-dimensional projections of thick material. Here we report the development of soft X-ray ptychographic tomography, which resolves chemical states in three dimensions at 11 nm spatial resolution. We study an ensemble of nano-plates of lithium iron phosphate extracted from a battery electrode at 50% state of charge. Using a set of nanoscale tomograms, we quantify the electrochemical state and resolve phase boundaries throughout the volume of individual nanoparticles. These observations reveal multiple reaction points, intra-particle heterogeneity, and size effects that highlight the importance of multi-dimensional analytical tools in providing novel insight to the design of the next generation of high-performance devices.
Optical Ptychographic Microscope for Quantitative Bio-Mechanical Imaging
NASA Astrophysics Data System (ADS)
Anthony, Nicholas; Cadenazzi, Guido; Nugent, Keith; Abbey, Brian
The role that mechanical forces play in biological processes such as cell movement and death is becoming of significant interest as we further develop our understanding of the inner workings of cells. The most common method used to obtain stress information is photoelasticity, which maps a sample's birefringence, or its direction-dependent refractive indices, using polarized light. However, this method provides only qualitative data; quantitative data are required for stress information to be useful. Ptychography is a method for quantitatively determining the phase of a sample's complex transmission function. The technique relies upon the collection of multiple overlapping coherent diffraction patterns from laterally displaced points on the sample. The overlap of measurement points provides complementary information that significantly aids in the reconstruction of the complex wavefield exiting the sample and allows for quantitative imaging of weakly interacting specimens. Here we describe recent advances at La Trobe University Melbourne on achieving quantitative birefringence mapping using polarized light ptychography, with applications in cell mechanics. Australian Synchrotron, ARC Centre of Excellence for Advanced Molecular Imaging.
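At the heart of a ptychographic reconstruction is an update that enforces the measured diffraction modulus at each scan position while refining the object and probe estimates. The sketch below follows the widely used ePIE update of Maiden and Rodenburg (2009); it is a textbook illustration of the reconstruction step the abstract alludes to, not the authors' code, and it omits the polarization handling needed for birefringence mapping.

    import numpy as np

    def epie_update(obj, probe, diff_amp, pos, alpha=1.0, beta=1.0):
        # One ePIE update at a single scan position. `diff_amp` is the
        # measured far-field amplitude (sqrt of the recorded intensity).
        y, x = pos
        py, px = probe.shape
        view = obj[y:y + py, x:x + px].copy()
        exit_wave = view * probe
        psi = np.fft.fft2(exit_wave)
        psi = diff_amp * np.exp(1j * np.angle(psi))    # modulus constraint
        diff = np.fft.ifft2(psi) - exit_wave
        obj[y:y + py, x:x + px] += (alpha * np.conj(probe) /
                                    np.abs(probe).max() ** 2) * diff
        probe += (beta * np.conj(view) / np.abs(view).max() ** 2) * diff
        return obj, probe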
NASA Astrophysics Data System (ADS)
De Angelis, Salvatore; Jørgensen, Peter Stanley; Tsai, Esther Hsiao Rho; Holler, Mirko; Kreka, Kosova; Bowen, Jacob R.
2018-04-01
Nickel coarsening is considered a significant cause of solid oxide cell (SOC) performance degradation. Therefore, understanding the morphological changes in the nickel-yttria stabilized zirconia (Ni-YSZ) fuel electrode is crucial for the widespread usage of SOC technology. This paper reports a study of the initial 3D microstructure evolution of a SOC analyzed in the pristine state and after 3 and 8 h of annealing at 850 °C in dry hydrogen. The analysis of the evolution of the same location of the electrode shows a substantial change of the nickel and pore network during the first 3 h of treatment, while only negligible changes are observed after 8 h. The nickel coarsening results in loss of connectivity in the nickel network, reduced nickel specific surface area and decreased total triple phase boundary density. For the conditions of this experiment, nickel coarsening is shown to be predominantly curvature driven, and changes in the electrode microstructure parameters are discussed in terms of local microstructural evolution.
Three-dimensional localization of nanoscale battery reactions using soft X-ray tomography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Young-Sang; Farmand, Maryam; Kim, Chunjoong
Battery function is determined by the efficiency and reversibility of the electrochemical phase transformations at solid electrodes. The microscopic tools available to study the chemical states of matter with the required spatial resolution and chemical specificity are intrinsically limited, when studying complex architectures, by their reliance on two-dimensional projections of thick material. Here we report the development of soft X-ray ptychographic tomography, which resolves chemical states in three dimensions at 11 nm spatial resolution. We study an ensemble of nano-plates of lithium iron phosphate extracted from a battery electrode at 50% state of charge. Using a set of nanoscale tomograms, we quantify the electrochemical state and resolve phase boundaries throughout the volume of individual nanoparticles. These observations reveal multiple reaction points, intra-particle heterogeneity, and size effects that highlight the importance of multi-dimensional analytical tools in providing novel insight to the design of the next generation of high-performance devices.
Brown, H G; Shibata, N; Sasaki, H; Petersen, T C; Paganin, D M; Morgan, M J; Findlay, S D
2017-11-01
Electric field mapping using segmented detectors in the scanning transmission electron microscope has recently been achieved at the nanometre scale. However, converting these results to quantitative field measurements involves assumptions whose validity is unclear for thick specimens. We consider three approaches to quantitative reconstruction of the projected electric potential using segmented detectors: a segmented detector approximation to differential phase contrast and two variants on ptychographical reconstruction. Limitations to these approaches are also studied, particularly errors arising from detector segment size, inelastic scattering, and non-periodic boundary conditions. A simple calibration experiment is described which corrects the differential phase contrast reconstruction to give reliable quantitative results despite the finite detector segment size and the effects of plasmon scattering in thick specimens. A plasmon scattering correction to the segmented detector ptychography approaches is also given. Avoiding the imposition of periodic boundary conditions on the reconstructed projected electric potential leads to more realistic reconstructions. Copyright © 2017 Elsevier B.V. All rights reserved.
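A shared step in these quantitative approaches is recovering the projected potential from its measured transverse gradients (the segment-difference signals). Assuming periodic boundaries, which is precisely the assumption the paper cautions about, the integration can be done in Fourier space. A minimal sketch with illustrative names; gx and gy are the gradients along axis 1 (x) and axis 0 (y).

    import numpy as np

    def integrate_gradient_fourier(gx, gy):
        # Least-squares Fourier integration of a 2D gradient field.
        ny, nx = gx.shape
        kx = 2.0 * np.pi * np.fft.fftfreq(nx)
        ky = 2.0 * np.pi * np.fft.fftfreq(ny)
        KX, KY = np.meshgrid(kx, ky)
        denom = KX ** 2 + KY ** 2
        denom[0, 0] = 1.0           # avoid 0/0 at DC; the mean is lost anyway
        G = (-1j * KX * np.fft.fft2(gx) - 1j * KY * np.fft.fft2(gy)) / denom
        G[0, 0] = 0.0
        return np.real(np.fft.ifft2(G))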
The Applied Mathematics for Power Systems (AMPS)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chertkov, Michael
2012-07-24
Increased deployment of new technologies, e.g., renewable generation and electric vehicles, is rapidly transforming electrical power networks by crossing previously distinct spatiotemporal scales and invalidating many traditional approaches for designing, analyzing, and operating power grids. This trend is expected to accelerate over the coming years, bringing the disruptive challenge of complexity, but also opportunities to deliver unprecedented efficiency and reliability. Our Applied Mathematics for Power Systems (AMPS) Center will discover, enable, and solve emerging mathematics challenges arising in power systems and, more generally, in complex engineered networks. We will develop foundational applied mathematics resulting in rigorous algorithms and simulation toolboxes for modern and future engineered networks. The AMPS Center deconstruction/reconstruction approach 'deconstructs' complex networks into sub-problems within non-separable spatiotemporal scales, a missing step in 20th century modeling of engineered networks. These sub-problems are addressed within the appropriate AMPS foundational pillar - complex systems, control theory, and optimization theory - and merged or 'reconstructed' at their boundaries into more general mathematical descriptions of complex engineered networks where important new questions are formulated and attacked. These two steps, iterated multiple times, will bridge the growing chasm between the legacy power grid and its future as a complex engineered network.
Quiet Clean Short-haul Experimental Engine (QCSEE) composite fan frame design report
NASA Technical Reports Server (NTRS)
Mitchell, S. C.
1978-01-01
An advanced composite frame which is flight-weight and which integrates the functions of several structures was developed for the over-the-wing (OTW) engine and for the under-the-wing (UTW) engine. The composite material system selected as the basic material for the frame is Type AS graphite fiber in a Hercules 3501 epoxy resin matrix. The frame was analyzed using a finite element digital computer program. This program was used in an iterative fashion to arrive at practical thicknesses and ply orientations to achieve a final design that met all strength and stiffness requirements for critical conditions. Using this information, the detail design of each of the individual parts of the frame was completed and released. On the basis of these designs, the required tooling was designed to fabricate the various component parts of the frame. To verify the structural integrity of the critical joint areas, a full-scale test was conducted on the frame before engine testing. The testing of the frame established critical spring constants and subjected the frame to three critical load cases. The successful static load test was followed by 153 and 58 hours of successful running on the UTW and OTW engines, respectively.
Low heat transfer oxidizer heat exchanger design and analysis
NASA Technical Reports Server (NTRS)
Kanic, P. G.; Kmiec, T. D.; Peckham, R. J.
1987-01-01
The RL10-IIB engine, a derivative of the RL10, is capable of multi-mode thrust operation. This engine operates at two low thrust levels: tank head idle (THI), which is approximately 1 to 2 percent of full thrust, and pumped idle (PI), which is 10 percent of full thrust. Operation at THI provides vehicle propellant settling thrust and efficient engine thermal conditioning; PI operation provides vehicle tank pre-pressurization and maneuver thrust for low-g deployment. Stable combustion of the RL10-IIB engine at THI and PI thrust levels can be accomplished by providing gaseous oxygen at the propellant injector. Using gaseous hydrogen from the thrust chamber jacket as an energy source, a heat exchanger can be used to vaporize liquid oxygen without creating flow instability. This report summarizes the design and analysis of a United Aircraft Products (UAP) low-rate heat transfer heat exchanger concept for the RL10-IIB rocket engine. The design represents a second iteration of the RL10-IIB heat exchanger investigation program. The design and analysis of the first heat exchanger effort are presented in more detail in NASA CR-174857. Testing of the previous design is detailed in NASA CR-179487.
Using Long-Short-Term-Memory Recurrent Neural Networks to Predict Aviation Engine Vibrations
NASA Astrophysics Data System (ADS)
ElSaid, AbdElRahman Ahmed
This thesis examines building viable Recurrent Neural Networks (RNNs) using Long Short Term Memory (LSTM) neurons to predict aircraft engine vibrations. The different networks are trained on a large database of flight data records obtained from an airline, containing flights that suffered from excessive vibration. RNNs can provide a more generalizable and robust method for prediction over analytical calculations of engine vibration, as analytical calculations must be solved iteratively based on specific empirical engine parameters, and this database contains multiple types of engines. Further, LSTM RNNs provide a "memory" of the contribution of previous time series data which can further improve predictions of future vibration values. LSTM RNNs were used over traditional RNNs, as the latter suffer from vanishing/exploding gradients when trained with back propagation. The study managed to predict vibration values 1, 5, 10, and 20 seconds in the future, with 2.84%, 3.3%, 5.51%, and 10.19% mean absolute error, respectively. These neural networks provide a promising means for the future development of warning systems, so that suitable actions can be taken before the occurrence of excess vibration to avoid unfavorable situations during flight.
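A minimal version of such a network is easy to state: a window of past flight-data parameters feeds an LSTM layer whose output regresses the vibration value at the chosen horizon. The Keras sketch below is a generic illustration of this architecture class trained on mean absolute error, not the thesis's exact network; the shapes and hyperparameters are illustrative.

    import tensorflow as tf

    def build_vibration_model(window, n_features):
        # Predict one future vibration value from `window` past time steps.
        model = tf.keras.Sequential([
            tf.keras.Input(shape=(window, n_features)),
            tf.keras.layers.LSTM(32),
            tf.keras.layers.Dense(1),
        ])
        model.compile(optimizer="adam", loss="mae")   # MAE, as reported
        return model

    # Usage sketch: X has shape (samples, window, n_features); y holds the
    # vibration value at the chosen horizon (1, 5, 10, or 20 s ahead).
    # model = build_vibration_model(window=30, n_features=8)
    # model.fit(X, y, epochs=10, validation_split=0.2)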
Optimal Tuner Selection for Kalman Filter-Based Aircraft Engine Performance Estimation
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Garg, Sanjay
2010-01-01
A linear point design methodology for minimizing the error in on-line Kalman filter-based aircraft engine performance estimation applications is presented. This technique specifically addresses the underdetermined estimation problem, where there are more unknown parameters than available sensor measurements. A systematic approach is applied to produce a model tuning parameter vector of appropriate dimension to enable estimation by a Kalman filter, while minimizing the estimation error in the parameters of interest. Tuning parameter selection is performed using a multi-variable iterative search routine which seeks to minimize the theoretical mean-squared estimation error. This paper derives theoretical Kalman filter estimation error bias and variance values at steady-state operating conditions, and presents the tuner selection routine applied to minimize these values. Results from the application of the technique to an aircraft engine simulation are presented and compared to the conventional approach of tuner selection. Experimental simulation results are found to be in agreement with theoretical predictions. The new methodology is shown to yield a significant improvement in on-line engine performance estimation accuracy.
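Evaluating a candidate tuner requires the steady-state Kalman filter error covariance, which is available in closed form from the discrete algebraic Riccati equation. The sketch below computes that building block for a generic linear model; the engine model matrices and the paper's tuner parameterization are not shown, and all names are illustrative.

    import numpy as np
    from scipy.linalg import solve_discrete_are

    def steady_state_kalman(A, C, Q, R):
        # Steady-state a priori covariance P and Kalman gain K for
        # x[k+1] = A x[k] + w (cov Q), y[k] = C x[k] + v (cov R).
        P = solve_discrete_are(A.T, C.T, Q, R)
        K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
        return K, P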
Directed combinatorial mutagenesis of Escherichia coli for complex phenotype engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Rongming; Liang, Liya; Garst, Andrew D.
Strain engineering for industrial production requires a targeted improvement of multiple complex traits, which range from pathway flux to tolerance to mixed sugar utilization. Here, we report the use of an iterative CRISPR EnAbled Trackable genome Engineering (iCREATE) method to engineer rapid glucose and xylose co-consumption and tolerance to hydrolysate inhibitors in E. coli. Deep mutagenesis libraries were rationally designed, constructed, and screened to target ~40,000 mutations across 30 genes. These libraries included global and high-level regulators that regulate global gene expression, transcription factors that play important roles in genome-level transcription, and enzymes that function in the sugar transport system, NAD(P)H metabolism, and the aldehyde reduction system. Specific mutants that conferred increased growth in mixed sugars and hydrolysate tolerance conditions were isolated, confirmed, and evaluated for changes in genome-wide expression levels. As a result, we tested the strain with positive combinatorial mutations for 3-hydroxypropionic acid (3HP) production under high furfural and high acetate hydrolysate fermentation, which demonstrated a 7- and 8-fold increase in 3HP productivity relative to the parent strain, respectively.
Directed combinatorial mutagenesis of Escherichia coli for complex phenotype engineering
Liu, Rongming; Liang, Liya; Garst, Andrew D.; ...
2018-03-29
Strain engineering for industrial production requires a targeted improvement of multiple complex traits, which range from pathway flux to tolerance to mixed sugar utilization. Here, we report the use of an iterative CRISPR EnAbled Trackable genome Engineering (iCREATE) method to engineer rapid glucose and xylose co-consumption and tolerance to hydrolysate inhibitors in E. coli. Deep mutagenesis libraries were rationally designed, constructed, and screened to target ~40,000 mutations across 30 genes. These libraries included global and high-level regulators that regulate global gene expression, transcription factors that play important roles in genome-level transcription, and enzymes that function in the sugar transport system, NAD(P)H metabolism, and the aldehyde reduction system. Specific mutants that conferred increased growth in mixed sugars and hydrolysate tolerance conditions were isolated, confirmed, and evaluated for changes in genome-wide expression levels. As a result, we tested the strain with positive combinatorial mutations for 3-hydroxypropionic acid (3HP) production under high furfural and high acetate hydrolysate fermentation, which demonstrated a 7- and 8-fold increase in 3HP productivity relative to the parent strain, respectively.
Sharing Research Models: Using Software Engineering Practices for Facilitation
Bryant, Stephanie P.; Solano, Eric; Cantor, Susanna; Cooley, Philip C.; Wagener, Diane K.
2011-01-01
Increasingly, researchers are turning to computational models to understand the interplay of important variables on systems' behaviors. Although researchers may develop models that meet the needs of their investigation, application limitations—such as nonintuitive user interface features and data input specifications—may limit the sharing of these tools with other research groups. By removing these barriers, other research groups that perform related work can leverage these work products to expedite their own investigations. The use of software engineering practices can enable managed application production and shared research artifacts among multiple research groups by promoting consistent models, reducing redundant effort, encouraging rigorous peer review, and facilitating research collaborations that are supported by a common toolset. This report discusses three established software engineering practices—the iterative software development process, object-oriented methodology, and Unified Modeling Language—and the applicability of these practices to computational model development. Our efforts to modify the MIDAS TranStat application to make it more user-friendly are presented as an example of how computational models that are based on research and developed using software engineering practices can benefit a broader audience of researchers. PMID:21687780
Enhanced optical alignment of a digital micro mirror device through Bayesian adaptive exploration
NASA Astrophysics Data System (ADS)
Wynne, Kevin B.; Knuth, Kevin H.; Petruccelli, Jonathan
2017-12-01
As the use of Digital Micro Mirror Devices (DMDs) becomes more prevalent in optics research, the ability to precisely locate the Fourier "footprint" of an image beam at the Fourier plane becomes a pressing need. In this approach, Bayesian adaptive exploration techniques were employed to characterize the size and position of the beam on a DMD located at the Fourier plane. It couples a Bayesian inference engine with an inquiry engine to implement the search. The inquiry engine explores the DMD by engaging mirrors and recording light intensity values, basing its choices on the maximization of the expected information gain. Using the data collected from this exploration, the Bayesian inference engine updates the posterior probability describing the beam's characteristics. The process is iterated until the beam is located to within the desired precision. This methodology not only locates the center and radius of the beam with remarkable precision but also accomplishes the task in far less time than a brute-force search. The employed approach has applications to system alignment for both Fourier processing and coded aperture design.
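Expected information gain can be estimated by Monte Carlo: draw beam parameters from the current posterior, simulate the intensity a candidate mirror would record, and average the Kullback-Leibler divergence between the resulting posterior and the prior. The sketch below shows this selection step over a discrete parameter grid; predict_intensity is an assumed forward model, and everything here is a schematic of Bayesian adaptive exploration rather than the authors' implementation.

    import numpy as np

    def choose_next_mirror(candidates, grid, posterior, predict_intensity,
                           noise_sd, n_mc=64, seed=1):
        # Pick the mirror with the largest expected information gain.
        rng = np.random.default_rng(seed)
        best_m, best_gain = None, -np.inf
        for m in candidates:
            mu = np.array([predict_intensity(t, m) for t in grid])
            gain = 0.0
            for _ in range(n_mc):
                theta = rng.choice(len(grid), p=posterior)
                y = mu[theta] + rng.normal(0.0, noise_sd)   # simulated datum
                like = np.exp(-0.5 * ((y - mu) / noise_sd) ** 2)
                post = posterior * like
                post /= post.sum()
                nz = post > 0
                gain += np.sum(post[nz] * np.log(post[nz] / posterior[nz]))
            if gain / n_mc > best_gain:
                best_m, best_gain = m, gain / n_mc
        return best_m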
Performance of a Laser Ignited Multicylinder Lean Burn Natural Gas Engine
Almansour, Bader; Vasu, Subith; Gupta, Sreenath B.; ...
2017-06-06
Market demands for lower fueling costs and higher specific powers in stationary natural gas engines have engine designs trending towards higher in-cylinder pressures and leaner combustion operation. However, ignition remains the main limiting factor in achieving further performance improvements in these engines. Addressing this concern, while incorporating various recent advances in optics and laser technologies, laser igniters were designed and developed through numerous iterations. Final designs incorporated water-cooled, passively Q-switched, Nd:YAG micro-lasers that were optimized for stable operation under harsh engine conditions. Subsequently, the micro-lasers were installed in the individual cylinders of a lean-burn, 350 kW, inline 6-cylinder, open-chamber, spark-ignited engine, and tests were conducted. To the best of our knowledge, this is the world's first demonstration of a laser-ignited multi-cylinder natural gas engine. The engine was operated at high-load (298 kW) and rated speed (1800 rpm) conditions. Ignition timing sweeps and excess-air ratio (λ) sweeps were performed while keeping the NOx emissions below the USEPA regulated value (BSNOx < 1.34 g/kW-hr) and while maintaining ignition stability at industry-acceptable values (COV_IMEP < 5%). Through such engine tests, the relative merits of (i) the standard electrical ignition system and (ii) the laser ignition system were determined. Finally, a rigorous combustion data analysis was performed and the main reasons leading to improved performance in the case of laser ignition were identified.
Engineering central metabolism - a grand challenge for plant biologists.
Sweetlove, Lee J; Nielsen, Jens; Fernie, Alisdair R
2017-05-01
The goal of increasing crop productivity and nutrient-use efficiency is being addressed by a number of ambitious research projects seeking to re-engineer photosynthetic biochemistry. Many of these projects will require the engineering of substantial changes in fluxes of central metabolism. However, as has been amply demonstrated in simpler systems such as microbes, central metabolism is extremely difficult to rationally engineer. This is because of multiple layers of regulation that operate to maintain metabolic steady state and because of the highly connected nature of central metabolism. In this review we discuss new approaches for metabolic engineering that have the potential to address these problems and dramatically improve the success with which we can rationally engineer central metabolism in plants. In particular, we advocate the adoption of an iterative 'design-build-test-learn' cycle using fast-to-transform model plants as test beds. This approach can be realised by coupling new molecular tools to incorporate multiple transgenes in nuclear and plastid genomes with computational modelling to design the engineering strategy and to understand the metabolic phenotype of the engineered organism. We also envisage that mutagenesis could be used to fine-tune the balance between the endogenous metabolic network and the introduced enzymes. Finally, we emphasise the importance of considering the plant as a whole system and not isolated organs: the greatest increase in crop productivity will be achieved if both source and sink metabolism are engineered. © 2016 The Authors The Plant Journal © 2016 John Wiley & Sons Ltd.
Performance of a Laser Ignited Multicylinder Lean Burn Natural Gas Engine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Almansour, Bader; Vasu, Subith; Gupta, Sreenath B.
Market demands for lower fueling costs and higher specific powers in stationary natural gas engines have engine designs trending towards higher in-cylinder pressures and leaner combustion operation. However, ignition remains the main limiting factor in achieving further performance improvements in these engines. Addressing this concern, while incorporating various recent advances in optics and laser technologies, laser igniters were designed and developed through numerous iterations. Final designs incorporated water-cooled, passively Q-switched, Nd:YAG micro-lasers that were optimized for stable operation under harsh engine conditions. Subsequently, the micro-lasers were installed in the individual cylinders of a lean-burn, 350 kW, inline 6-cylinder, open-chamber, spark-ignited engine, and tests were conducted. To the best of our knowledge, this is the world's first demonstration of a laser-ignited multi-cylinder natural gas engine. The engine was operated at high-load (298 kW) and rated speed (1800 rpm) conditions. Ignition timing sweeps and excess-air ratio (λ) sweeps were performed while keeping the NOx emissions below the USEPA regulated value (BSNOx < 1.34 g/kW-hr) and while maintaining ignition stability at industry-acceptable values (COV_IMEP < 5%). Through such engine tests, the relative merits of (i) the standard electrical ignition system and (ii) the laser ignition system were determined. Finally, a rigorous combustion data analysis was performed and the main reasons leading to improved performance in the case of laser ignition were identified.
Status of DEMO-FNS development
NASA Astrophysics Data System (ADS)
Kuteev, B. V.; Shpanskiy, Yu. S.; DEMO-FNS Team
2017-07-01
A fusion-fission hybrid facility based on the superconducting tokamak DEMO-FNS is being developed in Russia for integrated commissioning of steady-state and nuclear fusion technologies at power levels up to 40 MW for fusion and 400 MW for fission reactions. The project status corresponds to the transition from a conceptual design to an engineering one. This facility is considered in the Russian Federation to be the main source of technological and nuclear science information to complement the ITER research results in the fields of burning plasma physics and control.
Scale-Up: Improving Large Enrollment Physics Courses
NASA Astrophysics Data System (ADS)
Beichner, Robert
1999-11-01
The Student-Centered Activities for Large Enrollment University Physics (SCALE-UP) project is working to establish a learning environment that will promote increased conceptual understanding, improved problem-solving performance, and greater student satisfaction, while still maintaining class sizes of approximately 100. We are also addressing the new ABET engineering accreditation requirements for inquiry-based learning along with communication and team-oriented skills development. Results of studies of our latest classroom design, plans for future classroom space, and the current iteration of instructional materials will be discussed.
NASA Astrophysics Data System (ADS)
Wu, M. Q.; Pan, C. K.; Chan, V. S.; Li, G. Q.; Garofalo, A. M.; Jian, X.; Liu, L.; Ren, Q. L.; Chen, J. L.; Gao, X.; Gong, X. Z.; Ding, S. Y.; Qian, J. P.; Cfetr Physics Team
2018-04-01
Time-dependent integrated modeling of DIII-D ITER-like and high-bootstrap-current plasma ramp-up discharges has been performed with the equilibrium code EFIT and the transport codes TGYRO and ONETWO. Electron and ion temperature profiles are simulated by TGYRO with the TGLF (SAT0 or VX model) turbulent and NEO neoclassical transport models. The VX model is a new empirical extension of the TGLF turbulent model [Jian et al., Nucl. Fusion 58, 016011 (2018)], which captures the physics of multi-scale interaction between low-k and high-k turbulence from nonlinear gyro-kinetic simulation. This model is demonstrated to accurately model low-Ip discharges from the EAST tokamak. The time evolution of the plasma current density profile is simulated by ONETWO with the experimental current ramp-up rate. The general trend of the predicted evolution of the current density profile is consistent with that obtained from the equilibrium reconstruction with motional Stark effect constraints. The predicted evolutions of βN, li, and βP also agree well with the experiments. For the ITER-like cases, the electron and ion temperature profiles predicted using TGLF_SAT0 agree closely with the experimentally measured profiles, and are demonstrably better than those of other proposed transport models. For the high-bootstrap-current case, the electron and ion temperature profiles are better predicted by the VX model. It is found that the SAT0 model works well at high Ip (>0.76 MA), while the VX model covers a wider range of plasma current (Ip > 0.6 MA). The results reported in this paper suggest that the developed integrated modeling could be a candidate for ITER and CFETR ramp-up engineering design modeling.
Numerical simulation and comparison of nonlinear self-focusing based on iteration and ray tracing
NASA Astrophysics Data System (ADS)
Li, Xiaotong; Chen, Hao; Wang, Weiwei; Ruan, Wangchao; Zhang, Luwei; Cen, Zhaofeng
2017-05-01
Self-focusing is observed in nonlinear materials owing to laser-matter interaction as the beam propagates. Numerical simulation strategies such as the beam propagation method (BPM), based on the nonlinear Schrödinger equation, and ray tracing, based on Fermat's principle, have been applied to simulate the self-focusing process. In this paper we present an iterative nonlinear ray tracing method in which the nonlinear material is cut into many thin slices, as in the existing approaches, but instead of the paraxial approximation and split-step Fourier transform, a large number of sampled real rays are traced step by step through the system, with the refractive index and laser intensity updated by iteration. In this process a smoothing treatment is employed to generate a laser density distribution at each slice to decrease the error caused by under-sampling. The characteristic of this method is that the nonlinear refractive indices of the points on the current slice are calculated by iteration, so as to resolve the unknown material parameters arising from the causal relationship between laser intensity and nonlinear refractive index. Compared with the beam propagation method, this algorithm is more suitable for engineering applications owing to its lower time complexity, and it can numerically simulate the self-focusing process in systems containing both linear and nonlinear optical media. If the sampled rays are traced with their complex amplitudes and optical paths or phases, it becomes possible to simulate the superposition of different beams. At the end of the paper, the advantages and disadvantages of this algorithm are discussed.
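A minimal Python sketch of the slice scheme described above; the beam parameters, the (deliberately exaggerated) Kerr coefficient, and the smoothing kernel are our illustrative assumptions, not values from the paper. Rays are binned into a smoothed intensity profile, the intensity-dependent index n = n0 + n2·I is updated iteratively within each slice, and rays are then deflected by the index gradient:

```python
import numpy as np

n0, n2, P = 1.5, 1e-9, 1e6   # linear index, exaggerated Kerr coefficient, beam power
nrays, nbins, dz = 20000, 80, 0.01       # rays, transverse bins, slice thickness (cm)
rng = np.random.default_rng(0)
r = rng.normal(0.0, 0.05, nrays)         # Gaussian beam, transverse positions (cm)
u = np.zeros(nrays)                      # ray slopes dr/dz
edges = np.linspace(-0.2, 0.2, nbins + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
for _ in range(200):                     # march slice by slice
    n = np.full(nbins, n0)
    for _ in range(20):                  # in-slice index/intensity iteration
        counts, _ = np.histogram(r, edges)
        I = P * counts / (nrays * np.diff(edges))        # ray density -> intensity
        I = np.convolve(I, np.ones(5) / 5, mode="same")  # smoothing vs under-sampling
        n_new = n0 + n2 * I
        if np.max(np.abs(n_new - n)) < 1e-12:            # converged index profile
            break
        n = n_new
    # note: in this toy the in-slice intensity does not feed back on the rays
    # within the slice, so the loop converges almost immediately; the full
    # method re-traces the rays through the slice at each pass
    dndr = np.gradient(n, centers)
    idx = np.clip(np.searchsorted(edges, r) - 1, 0, nbins - 1)
    u += dz * dndr[idx] / n[idx]         # gradient-index ray bending
    r += dz * u
print("rms beam radius after 2 cm:", r.std())
```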
Data Integration Tool: Permafrost Data Debugging
NASA Astrophysics Data System (ADS)
Wilcox, H.; Schaefer, K. M.; Jafarov, E. E.; Pulsifer, P. L.; Strawhacker, C.; Yarmey, L.; Basak, R.
2017-12-01
We developed a Data Integration Tool (DIT) to significantly reduce the manual processing time needed to translate inconsistent, scattered historical permafrost data into files ready to ingest directly into the Global Terrestrial Network-Permafrost (GTN-P). The United States National Science Foundation funded this project through the National Snow and Ice Data Center (NSIDC) with the GTN-P to improve permafrost data access and discovery. We leverage these data to support science research and policy decisions. DIT is a workflow manager that divides data preparation and analysis into a series of steps, or operations, called widgets (https://github.com/PermaData/DIT). Each widget performs a specific operation, such as read, multiply by a constant, sort, plot, or write data. DIT allows the user to select and order the widgets as desired to meet their specific needs, incrementally interact with and evolve the widget workflows, and save those workflows for reproducibility. Drawing on visual programming ideas from the art and design domain, debugging and iterative design principles from software engineering, and the scientific data processing and analysis power of Fortran and Python, DIT was written for interactive, iterative data manipulation, quality control, processing, and analysis of inconsistent data in an easily installable application. DIT was used to completely translate one dataset (133 sites) that was successfully added to GTN-P, to nearly translate three datasets (270 sites), and is scheduled to translate 10 more datasets (~1000 sites) from the legacy inactive-site data holdings of the Frozen Ground Data Center (FGDC). Iterative development has provided the permafrost and wider scientific community with an extendable tool designed specifically for the iterative process of translating unruly data.
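The widget idea reduces to a list of small, composable operations applied in a user-chosen order. The sketch below is a hypothetical pipeline in Python; the widget names, columns, and file name are ours for illustration, not DIT's actual API (the real widget set is at https://github.com/PermaData/DIT):

```python
import pandas as pd

# Hypothetical raw permafrost record: depths in cm, temperatures in deg C.
raw = pd.DataFrame({"depth_cm": [250, 50, 150], "temp_C": [-6.1, -8.3, -7.2]})

def scale(df, col, factor, new_col):          # widget: multiply by a constant
    df = df.copy(); df[new_col] = df[col] * factor
    return df

def sort_by(df, col):                         # widget: sort
    return df.sort_values(col).reset_index(drop=True)

def write_csv(df, path):                      # widget: write
    df.to_csv(path, index=False)
    return df

# A saved, reproducible workflow: an ordered list of (widget, arguments).
workflow = [(scale, ("depth_cm", 0.01, "depth_m")),   # cm -> m for GTN-P ingest
            (sort_by, ("depth_m",)),
            (write_csv, ("gtnp_ready.csv",))]

df = raw
for widget, args in workflow:
    df = widget(df, *args)
print(df)
```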
Aerodynamic Optimization of Rocket Control Surface Geometry Using Cartesian Methods and CAD Geometry
NASA Technical Reports Server (NTRS)
Nelson, Andrea; Aftosmis, Michael J.; Nemec, Marian; Pulliam, Thomas H.
2004-01-01
Aerodynamic design is an iterative process involving geometry manipulation and complex computational analysis subject to physical constraints and aerodynamic objectives. A design cycle consists of first establishing the performance of a baseline design, which is usually created with low-fidelity engineering tools, and then progressively optimizing the design to maximize its performance. Optimization techniques have evolved from relying exclusively on designer intuition and insight, in traditional trial-and-error methods, to sophisticated local and global search methods. Recent attempts at automating the search through a large design space with formal optimization methods include both database-driven and direct-evaluation schemes. Databases are being used in conjunction with surrogate and neural network models as a basis on which to run optimization algorithms. Optimization algorithms are also being driven by the direct evaluation of objectives and constraints using high-fidelity simulations. Surrogate methods use data points obtained from simulations, and possibly gradients evaluated at the data points, to create mathematical approximations of a database. Neural network models work in a similar fashion, using a number of high-fidelity database calculations as training iterations to create a database model. Optimal designs are obtained by coupling an optimization algorithm to the database model. Evaluation of the current best design then either yields a new local optimum or increases the fidelity of the approximation model for the next iteration. Surrogate methods have also been developed that iterate on the selection of data points to decrease the uncertainty of the approximation model prior to searching for an optimal design. The database approximation models for each of these cases, however, become computationally expensive as dimensionality increases. Thus the method of using optimization algorithms to search a database model becomes problematic as the number of design variables grows.
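The database-driven surrogate loop sketched in this abstract can be condensed to a few lines. The Python below uses a 1-D polynomial surrogate and a made-up stand-in for the expensive simulation; real use couples a higher-dimensional model to CFD:

```python
import numpy as np

def expensive(x):                # stand-in for a high-fidelity simulation
    return np.sin(3.0 * x) + 0.5 * x**2

X = np.array([-2.0, 0.0, 2.0])   # initial design database
Y = expensive(X)
for _ in range(10):
    deg = min(len(X) - 1, 4)
    c = np.polyfit(X, Y, deg)                     # surrogate fit to the database
    grid = np.linspace(-2.5, 2.5, 501)
    x_new = grid[np.argmin(np.polyval(c, grid))]  # optimize the cheap surrogate
    X = np.append(X, x_new)                       # evaluate truth at its optimum,
    Y = np.append(Y, expensive(x_new))            # refining the model next pass
print("best design:", X[np.argmin(Y)], "objective:", Y.min())
```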
In Praise of Numerical Computation
NASA Astrophysics Data System (ADS)
Yap, Chee K.
Theoretical Computer Science has developed an almost exclusively discrete/algebraic persona. We have effectively shut ourselves off from half of the world of computing: a host of problems in Computational Science & Engineering (CS&E) are defined on the continuum, and, for them, the discrete viewpoint is inadequate. The computational techniques in such problems are well known in numerical analysis and applied mathematics, but are rarely discussed in theoretical algorithms: iteration, subdivision, and approximation. Through various case studies, I will indicate how our discrete/algebraic view of computing has many shortcomings in CS&E. We want to embrace the continuous/analytic view, but in a new synthesis with the discrete/algebraic view. I will suggest a pathway, by way of an exact numerical model of computation, that allows us to incorporate iteration and approximation into our algorithm design. Some recent results give a peek into what this view of algorithmic development might look like, and its distinctive form suggests the name “numerical computational geometry” for such activities.
NASA Astrophysics Data System (ADS)
Lasche, George; Coldwell, Robert; Metzger, Robert
2017-09-01
A new application (known as "VRF", or "Visual RobFit") for the analysis of high-resolution gamma-ray spectra has been developed using non-linear fitting techniques to fit full-spectrum nuclide shapes. In contrast to conventional methods based on the results of an initial peak search, the VRF analysis method forms, at each of many automated iterations, a spectrum-wide shape for each nuclide and, also at each iteration, adjusts the activities of each nuclide as well as user-enabled parameters of energy calibration, attenuation by up to three intervening or self-absorbing materials, peak width as a function of energy, full-energy peak efficiency, and coincidence summing, until no better fit to the data can be obtained. This approach, which employs a new and significantly advanced underlying fitting engine especially adapted to nuclear spectra, allows identification of minor peaks masked by larger, overlapping peaks that would otherwise not be possible. The application and method are briefly described and two examples are presented.
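At its core, full-spectrum fitting expresses the measured spectrum as a non-negative combination of per-nuclide template shapes. The Python sketch below shows only that activity-adjustment step, with toy Gaussian templates; VRF additionally iterates the calibration, attenuation, peak-width, efficiency, and summing parameters, which are all omitted here:

```python
import numpy as np
from scipy.optimize import nnls

E = np.arange(0.0, 2000.0, 1.0)                # energy bins (keV)

def template(lines, width=1.5):
    """Toy spectrum-wide nuclide shape: Gaussian full-energy peaks."""
    s = np.zeros_like(E)
    for e0, br in lines:
        s += br * np.exp(-0.5 * ((E - e0) / width) ** 2)
    return s

# Template matrix: one column per candidate nuclide (textbook line energies).
A = np.column_stack([
    template([(661.7, 0.85)]),                 # Cs-137
    template([(1173.2, 1.0), (1332.5, 1.0)]),  # Co-60
])
truth = A @ np.array([3.0, 1.5])               # "true" activities
measured = np.random.default_rng(0).poisson(truth * 50) / 50  # counting noise

activities, residual = nnls(A, measured)       # non-negative least-squares fit
print("fitted activities:", activities)
```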
NASA Astrophysics Data System (ADS)
Ogawa, Yuichi
2016-05-01
A new strategic energy plan decided by the Japanese Cabinet in 2014 strongly supports the steady promotion of nuclear fusion development activities, including the ITER project and the Broader Approach activities, from a long-term viewpoint. The Atomic Energy Commission (AEC) of Japan formulated the Third Phase Basic Program to promote an experimental fusion reactor project. In 2005 the AEC reviewed this program and discussed selection and concentration among the many fusion reactor development projects. In addition to the promotion of the ITER project, advanced tokamak research with JT-60SA, helical plasma experiments with LHD, the FIREX project in laser fusion research, and fusion engineering with IFMIF were highly prioritized. Although the basic concepts of tokamak, helical, and laser fusion research are quite different, they share many common features, such as plasma physics in 3-D magnetic geometry and high-power heat loads on plasma-facing components. A synergetic scenario for fusion reactor development among the various plasma confinement concepts would therefore be important.
Constitutive law for thermally-activated plasticity of recrystallized tungsten
NASA Astrophysics Data System (ADS)
Zinovev, Aleksandr; Terentyev, Dmitry; Dubinko, Andrii; Delannay, Laurent
2017-12-01
A physically based constitutive law relevant for the ITER-specification tungsten grade in the as-recrystallized state is proposed. The material exhibits stages III and IV of plastic deformation, in which the hardening rate does not drop to zero as the applied stress increases. Unlike the classical Kocks-Mecking model, valid at stage III, the strain hardening decreases asymptotically, resembling a hyperbolic function. The material parameters are fitted to tensile test data by requiring that the strain and stress at the onset of diffuse necking (uniform elongation and ultimate tensile strength, respectively) as well as the yield stress be reproduced. The model is then validated in the temperature range 300-600 °C with finite element analysis of the tensile tests, which confirms that the experimental engineering curves are reproduced up to the onset of diffuse necking, beyond which the development of ductile damage accelerates material failure. This temperature range represents the low-temperature application window for tungsten as a divertor material in the fusion reactor ITER.
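For orientation, the contrast drawn in this abstract can be written compactly. The first expression is the classical Kocks-Mecking stage-III law; the second is one possible hyperbolic decay of the kind described, with the specific parameterization being our illustrative assumption rather than the paper's fitted law:

```latex
\theta \equiv \frac{d\sigma}{d\varepsilon}
  = \theta_0\!\left(1-\frac{\sigma}{\sigma_s}\right)
  \quad \text{(Kocks-Mecking, stage III: hardening vanishes at } \sigma_s\text{)}

\theta = \frac{\theta_0}{1+(\sigma-\sigma_y)/\sigma_r}
  \quad \text{(hyperbolic stage-III/IV decay; } \sigma_r \text{ a fitted scale)}
```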
NASA Astrophysics Data System (ADS)
Kassa, Semu Mitiku; Tsegay, Teklay Hailay
2017-08-01
Tri-level optimization problems are optimization problems with three nested hierarchical structures, where in most cases conflicting objectives are set at each level of the hierarchy. Such problems are common in management, engineering design, and decision-making situations in general, and are known to be strongly NP-hard. Existing solution methods lack universality in solving these types of problems. In this paper, we investigate a tri-level programming problem with quadratic fractional objective functions at each of the three levels. A solution algorithm is proposed by applying a fuzzy goal programming approach and by reformulating the fractional constraints into equivalent non-fractional non-linear constraints. Based on the transformed formulation, an iterative procedure is developed that can yield a satisfactory solution to the tri-level problem. Numerical results on various illustrative examples demonstrate that the proposed algorithm is promising and can also be used to solve larger as well as n-level problems of similar structure.
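The reformulation step that removes the fractions is standard: provided the denominator is strictly positive over the feasible region, a fractional inequality can be cleared into an equivalent non-fractional one.

```latex
\frac{p(x)}{q(x)} \le \theta
\quad\Longleftrightarrow\quad
p(x) - \theta\, q(x) \le 0,
\qquad q(x) > 0 \ \text{on the feasible region}
```

The same clearing applies to the fractional objectives once the fuzzy goal programming step introduces aspiration levels for each of them.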
McClain, Arianna D; Hekler, Eric B; Gardner, Christopher D
2013-01-01
Previous research from the fields of computer science and engineering highlights the importance of an iterative design process (IDP) for creating more creative and effective solutions. This study describes IDP as a new method for developing health behavior interventions and evaluates the effectiveness of a dining hall-based intervention, developed using IDP, on college students' eating behavior and values. Participants were 458 students (52.6% female; age = 19.6 ± 1.5 years [M ± SD]). The intervention was developed via a parallel IDP process. A cluster-randomized controlled study compared differences in eating behavior among students in four university dining halls (two intervention, two control). The final intervention was a multicomponent, point-of-selection marketing campaign. Students in the intervention dining halls consumed significantly less junk food and high-fat meat and increased their perceived importance of eating a healthful diet relative to the control group. IDP may be valuable for the development of behavior change interventions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feder, Russell; Youssef, Mahamoud; Klabacha, Jonathan
US ITER is one of seven partner domestic agencies (DAs) contributing components to the ITER project. Four diagnostic port plug packages (two equatorial ports and two upper ports) will be engineered and fabricated by Princeton Plasma Physics Laboratory (PPPL). Diagnostic port plugs, as illustrated in Fig. 1, are large, primarily stainless steel structures that serve several roles on ITER. The port plugs are the primary vacuum seal and tritium confinement barriers for the vessel. They also house several plasma diagnostic systems and other machine service equipment. Finally, each port plug must shield high-energy neutrons and gamma photons from escaping and creating radiological problems in maintenance areas behind the port plugs. The optimization of the balance between adequate shielding and the need for high-performance, high-throughput diagnostic systems is the focus of this paper. Neutronics calculations are also needed for assessing nuclear heating and nuclear damage in the port plug and diagnostic components. Attila, the commercially available discrete-ordinates software package, is used for all diagnostic port plug neutronics analysis studies at PPPL.
Iterative Mechanism of Macrodiolide Formation in the Anticancer Compound Conglobatin.
Zhou, Yongjun; Murphy, Annabel C; Samborskyy, Markiyan; Prediger, Patricia; Dias, Luiz Carlos; Leadlay, Peter F
2015-06-18
Conglobatin is an unusual C2-symmetrical macrodiolide from the bacterium Streptomyces conglobatus with promising antitumor activity. Insights into the genes and enzymes that govern both the assembly-line production of the conglobatin polyketide and its dimerization are essential to allow rational alterations to be made to the conglobatin structure. We have used a rapid, direct in vitro cloning method to obtain the entire cluster on a 41-kbp fragment encoding a modular polyketide synthase assembly line. The cloned cluster directs conglobatin biosynthesis in a heterologous host strain. Using a model substrate to mimic the conglobatin monomer, we also show that the conglobatin cyclase/thioesterase acts iteratively, ligating two monomers head-to-tail, then re-binding the dimer product and cyclizing it. Incubation of two different monomers with the cyclase produces hybrid dimers and trimers, providing the first evidence that conglobatin analogs may in future become accessible through engineering of the polyketide synthase. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
Small-Scale Smart Grid Construction and Analysis
NASA Astrophysics Data System (ADS)
Surface, Nicholas James
The smart grid (SG) is a commonly used catch-phrase in the energy industry, yet there is no universally accepted definition. Its objectives and most useful concepts have been investigated extensively in economic, environmental, and engineering research by applying statistical knowledge and established theories to develop simulations without constructing physical models. In this study, a small-scale smart grid (SSSG) is constructed to physically represent these ideas so they can be evaluated. Construction results show data acquisition to be three times more expensive than the grid itself, mainly because 70% of data acquisition costs could not be downsized to small scale. Experimentation on the fully assembled grid exposes the limitations of low-cost modified-sine-wave power, significant enough to recommend investment in pure-sine-wave hardware for future SSSG iterations. Findings can be projected to a full-size SG at a ratio of 1:10, based on the appliance representing the average US household's peak daily load. However, this exposes disproportionalities in the SSSG compared with previous SG investigations, and changes are recommended for future iterations to remedy this issue. Also discussed are other ideas investigated in the literature and their suitability for SSSG incorporation. It is highly recommended to develop a user-friendly bidirectional charger to more accurately represent vehicle-to-grid (V2G) infrastructure. Smart homes, BEV swap stations, and pumped hydroelectric storage can also be researched in future iterations of the SSSG.
NASA Technical Reports Server (NTRS)
Boyer, Charles M.; Jackson, Trevor P.; Beyon, Jeffrey Y.; Petway, Larry B.
2013-01-01
Optimized designs of the Navigation Doppler Lidar (NDL) instrument for Autonomous Landing Hazard Avoidance Technology (ALHAT) were accomplished via the Interdisciplinary Design Concept (IDEC) approach at NASA Langley Research Center during the summer of 2013. Three branches in the Engineering Directorate and three students were involved in this joint task through the NASA Langley Aerospace Research Summer Scholars (LARSS) Program. The Laser Remote Sensing Branch (LRSB), Mechanical Systems Branch (MSB), and Structural and Thermal Systems Branch (STSB) were engaged to achieve optimal designs through an iterative, interactive, and collaborative design process. A preliminary design iteration reduced the power consumption, mass, and footprint by removing redundant components and replacing inefficient components with more efficient ones. A second design iteration reduced volume and mass by replacing bulky components having excessive performance margins with smaller components custom-designed for the power system. Collaboration on mechanical placement reduced potential electromagnetic interference (EMI). Through application of the newly selected electrical components and thermal analysis data, a total redesign of the electronics chassis was accomplished. An innovative forced-convection tunnel heat sink was employed to meet and exceed project requirements for cooling, mass reduction, and volume reduction. Functionality was a key concern for making efficient use of airflow, and accessibility was also imperative to allow for servicing of chassis internals. The collaborative process provided accelerated design maturation with substantiated function.
Fatty acid biosynthesis revisited: Structure elucidation and metabolic engineering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beld, Joris; Lee, D. John; Burkart, Michael D.
2014-10-20
Fatty acids are primary metabolites synthesized by complex, elegant, and essential biosynthetic machinery. Fatty acid synthases resemble an iterative assembly line, with an acyl carrier protein conveying the growing fatty acid to the necessary enzymatic domains for modification. Each catalytic domain is a unique enzyme spanning a wide range of folds and structures. Although they harbor the same enzymatic activities, two different types of fatty acid synthase architectures are observed in nature. In recent years, strained petroleum supplies have driven interest in engineering organisms to produce either more fatty acids or specific high-value products. Such efforts require a fundamental understanding of the enzymatic activities and regulation of fatty acid synthases. Despite more than one hundred years of research, we continue to learn new lessons about fatty acid synthases' many intricate structural and regulatory elements. Finally, in this review, we summarize each enzymatic domain and discuss efforts to engineer fatty acid synthases, providing some clues to important challenges and opportunities in the field.
Viscous Aerodynamic Shape Optimization with Installed Propulsion Effects
NASA Technical Reports Server (NTRS)
Heath, Christopher M.; Seidel, Jonathan A.; Rallabhandi, Sriram K.
2017-01-01
Aerodynamic shape optimization is demonstrated to tailor the under-track pressure signature of a conceptual low-boom supersonic aircraft. Primarily, the optimization reduces nearfield pressure waveforms induced by propulsion integration effects. For computational efficiency, gradient-based optimization is used, coupled to the discrete adjoint formulation of the Reynolds-averaged Navier-Stokes equations. The engine outer nacelle, nozzle, and vertical tail fairing are axisymmetrically parameterized, while the horizontal tail is shaped using a wing-based parameterization. Overall, 48 design variables are coupled to the geometry and used to deform the outer mold line. During the design process, an inequality drag constraint is enforced to avoid major compromise in aerodynamic performance. Linear elastic mesh morphing is used to deform volume grids between design iterations. The optimization is performed at Mach 1.6 cruise, assuming standard-day conditions at an altitude of 51,707 ft. To reduce uncertainty, a coupled thermodynamic engine cycle model is employed that captures installed inlet performance effects on engine operation.
RTE: A computer code for Rocket Thermal Evaluation
NASA Technical Reports Server (NTRS)
Naraghi, Mohammad H. N.
1995-01-01
The numerical model for a rocket thermal analysis code (RTE) is discussed. RTE is a comprehensive code for the thermal analysis of regeneratively cooled rocket engines. The input to the code consists of the composition of the fuel/oxidant mixture and flow rates, chamber pressure, coolant temperature and pressure, dimensions of the engine, materials, and the number of nodes in different parts of the engine. The code allows for temperature variation in the axial, radial, and circumferential directions. Using an iterative scheme, it provides the nodal temperature distribution, rates of heat transfer, and hot-gas and coolant thermal and transport properties. The fuel/oxidant mixture ratio can be varied along the thrust chamber. This feature allows the user to incorporate a non-equilibrium model or an energy release model for the hot-gas side. The user has the option of bypassing the hot-gas-side calculations and directly inputting the gas-side fluxes. This feature is used to link RTE to a boundary layer module for the hot-gas-side heat flux calculations.
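A minimal Python sketch of the kind of iterative nodal energy balance such a code performs: Gauss-Seidel sweeps over a small 2-D grid of wall nodes with convective boundaries on the hot-gas and coolant sides. The geometry, conductances, and film coefficients below are illustrative assumptions, not RTE inputs:

```python
import numpy as np

nz, nr = 20, 5                         # axial x radial wall nodes
T = np.full((nz, nr), 500.0)           # initial temperature guess (K)
T_gas, T_cool = 3200.0, 120.0          # hot-gas and coolant temperatures (K)
h_gas, h_cool, k_dx = 8.0, 20.0, 50.0  # relative film/conduction conductances

for sweep in range(500):               # Gauss-Seidel iteration to steady state
    for i in range(nz):
        for j in range(nr):
            c, s = 0.0, 0.0
            for ii, jj in ((i-1, j), (i+1, j), (i, j-1), (i, j+1)):
                if 0 <= ii < nz and 0 <= jj < nr:
                    c += k_dx; s += k_dx * T[ii, jj]  # conduction to neighbors
            if j == 0:                                 # hot-gas-side convection
                c += h_gas; s += h_gas * T_gas
            if j == nr - 1:                            # coolant-side convection
                c += h_cool; s += h_cool * T_cool
            T[i, j] = s / c                            # nodal energy balance
print("wall temperature range:", T.min(), "-", T.max(), "K")
```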
Failure is an option: Reactions to failure in elementary engineering design projects
NASA Astrophysics Data System (ADS)
Johnson, Matthew M.
Recent reform documents in science education have called for teachers to use epistemic practices of science and engineering researchers to teach disciplinary content (NRC, 2007; NRC, 2012; NGSS Lead States, 2013). Although this creates challenges for classroom teachers unfamiliar with engineering, it has created a need for high quality research about how students and teachers engage in engineering activities to improve curriculum development and teaching pedagogy. While framers of the Next Generation Science Standards (NRC, 2012; NGSS Lead States 2013) focused on the similarities of the practices of science researchers and engineering designers, some have proposed that engineering has a unique set of epistemic practices, including improving from failure (Cunningham & Carlsen, 2014; Cunningham & Kelly, in review). While no one will deny failures occur in science, failure in engineering is thought of in fundamentally different ways. In the study presented here, video data from eight classes of elementary students engaged in one of two civil engineering units were analyzed using methods borrowed from psychology, anthropology, and sociolinguistics to investigate: 1) the nature of failure in elementary engineering design; 2) the ways in which teachers react to failure; and 3) how the collective actions of students and teachers support or constrain improvement in engineering design. I propose new ways of considering the types and causes of failure, and note three teacher reactions to failure: the manager, the cheerleader, and the strategic partner. Because the goal of iteration in engineering is improvement, I also studied improvement. Students only systematically improve when they have the opportunity, productive strategies, and fair comparisons between prototypes. I then investigate the use of student engineering journals to assess learning from the process of improvement after failure. After discussion, I consider implications from this work as well as future research to advance our understanding in this area.
NASA Astrophysics Data System (ADS)
Pradhan, S. K.; Bhuyan, P.; Kaithwas, C.; Mandal, Sumantra
2018-05-01
Strain-annealing-based thermo-mechanical processing has been performed to promote grain boundary engineering (GBE) in an extra-low carbon type austenitic stainless steel without altering the grain size and residual strain, in order to evaluate its sole influence on intergranular corrosion. Single-step processing comprising low pre-strain (~5 and 10 pct) followed by annealing at 1273 K for 1 hour has resulted in a large fraction of Σ3^n boundaries and significant disruption of random high-angle grain boundary (RHAGB) connectivity. This is due to the occurrence of prolific multiple twinning in these specimens, as confirmed by their large twin-related domain and twin-related grain size ratio. Among the iterative processing schedules, the one comprising two cycles of 10 and 5 pct deformation followed by annealing at 1173 K for 1 hour has yielded the optimum GBE microstructure, with grain size and residual strain akin to the as-received condition. Specimens subjected to a higher number of iterations failed to realize GBE microstructures owing to partial recrystallization. Owing to the optimum grain boundary character distribution, the GBE specimen exhibited remarkable resistance against sensitization and intergranular corrosion compared to the as-received condition. Furthermore, the lower depth of percolation in the GBE specimen is due to the significant disruption of RHAGB connectivity, as confirmed by its large twin-related domain and lower fractal dimension.
Tomographic image reconstruction using the cell broadband engine (CBE) general purpose hardware
NASA Astrophysics Data System (ADS)
Knaup, Michael; Steckmann, Sven; Bockenbach, Olivier; Kachelrieß, Marc
2007-02-01
Tomographic image reconstruction, such as the reconstruction of CT projection values, tomosynthesis data, or PET or SPECT events, is computationally very demanding. In filtered backprojection as well as in iterative reconstruction schemes, the most time-consuming steps are forward- and backprojection, which are often limited by memory bandwidth. Recently, a novel general-purpose architecture optimized for distributed computing became available: the Cell Broadband Engine (CBE). Its eight synergistic processing elements (SPEs) currently allow for a theoretical performance of 192 GFlops (3 GHz, 8 units, 4 floats per vector, 2 instructions, multiply and add, per clock). To maximize image reconstruction speed we modified our parallel-beam and perspective backprojection algorithms, which are highly optimized for standard PCs, and optimized the code for the CBE processor. In addition, we implemented an optimized perspective forwardprojection on the CBE, which allows us to perform statistical image reconstructions such as the ordered subset convex (OSC) algorithm. Performance was measured using simulated data with 512 projections per rotation and 512^2 detector elements. The data were backprojected into an image of 512^3 voxels using our PC-based approaches and the new CBE-based algorithms. Both the PC and the CBE timings were scaled to a 3 GHz clock frequency. On the CBE, we obtain total reconstruction times of 4.04 s for the parallel backprojection, 13.6 s for the perspective backprojection, and 192 s for a complete OSC reconstruction, consisting of one initial Feldkamp reconstruction followed by 4 OSC iterations.
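The memory-bandwidth-bound kernel being ported in this work is the backprojection loop itself. A minimal unfiltered parallel-beam version in Python/NumPy, at toy sizes (the paper's cases use 512 projections and 512^2 detector elements):

```python
import numpy as np

def backproject(sino, angles, N):
    """Unfiltered parallel-beam backprojection onto an N x N slice."""
    img = np.zeros((N, N))
    c = (N - 1) / 2.0
    y, x = np.mgrid[0:N, 0:N] - c
    for a, proj in zip(angles, sino):
        t = x * np.cos(a) + y * np.sin(a) + c   # detector coordinate per pixel
        i0 = np.clip(t.astype(int), 0, N - 2)
        w = t - i0
        # this gather with linear interpolation is the memory-bound step
        img += (1 - w) * proj[i0] + w * proj[i0 + 1]
    return img

angles = np.linspace(0.0, np.pi, 64, endpoint=False)
sino = np.ones((64, 128))                       # toy sinogram
print(backproject(sino, angles, 128).shape)
```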
AAL service development loom--from the idea to a marketable business model.
Kriegel, Johannes; Auinger, Klemens
2015-01-01
The Ambient Assisted Living (AAL) market is still at an early stage of development. Previous approaches to comprehensive AAL services have mostly been supply-side driven and focused on hardware and software. Usually this type of AAL solution does not lead to sustainable success on the market. Research and development increasingly focus on demand and customer requirements in addition to the social and legal framework. The question is: how can a systematic performance measurement strategy along a service development process support the market-ready design of a concrete business model for an AAL service? Within the EU-funded research project DALIA (Assistant for Daily Life Activities at Home), an iterative service development process uses an adapted Osterwalder business model canvas. The application of a performance measurement index (PMI) to support the process has been developed and tested. The result is an iterative service development model using a supporting PMI. The PMI framework was developed throughout the engineering of a virtual assistant (AVATAR) as a modular interface to connect informal carers with necessary and useful services. Future research should ensure that the PMI enables meaningful transparency regarding targeting (e.g., innovative AAL services), design (e.g., functional hybrid AAL services), and implementation (e.g., marketable AAL support services). To this end, further testing in practice is required. The aim must be to develop a weighted PMI in the context of further research, supporting both the service engineering and the subsequent service management process.
Low order climate models as a tool for cross-disciplinary collaboration
NASA Astrophysics Data System (ADS)
Newton, R.; Pfirman, S. L.; Tremblay, B.; Schlosser, P.
2014-12-01
Human impacts on climate are pervasive and significant, and future states cannot be projected without taking human influence into account. We recently helped convene a meeting of climatologists, policy analysts, lawyers, and social scientists to discuss the dramatic loss of Arctic summer sea ice. A dialogue emerged around the distinct time scales in the integrated human/natural climate system. Climate scientists tended to discuss engineering solutions as though they could be implemented immediately, whereas social scientists estimated lags of two or more decades for societal shifts, and similar lags were cited for deployment by the engineers. Social scientists tended to project new climate states virtually overnight, while climatologists described time scales of decades to centuries for the system to respond to changes in forcing. For the conversation to develop, the group had to come to grips with an increasingly complex set of transient-effect time scales and lags between decisions, changes in forcing, and system outputs. We use several low-order dynamical system models to explore mismatched timescales, ranges of lags, and uncertainty in cost estimates of climate outcomes, focusing on Arctic-specific issues. In addition to lessons regarding what is and is not feasible from a policy and engineering perspective, these models provide a useful tool for concretizing cross-disciplinary thinking. They are fast and easy to iterate through a large region of the problem space, while including surprising complexity in their evolution. Thus they are appropriate for investigating the implications of policy in an efficient, but not unrealistic, physical setting. (Earth System Models, by contrast, can be too resource- and time-intensive for iteratively testing "what if" scenarios in cross-disciplinary collaborations.) Our runs indicate, for example, that the combined social, engineering, and climate-physics lags make it extremely unlikely that an ice-free summer ecology in the Arctic can be avoided. Further, if prospective remediation strategies are successful, a return to perennial ice conditions between one and two centuries from now is entirely likely, with interesting and large impacts on Northern economies.
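A low-order model of the kind described can be a handful of lines, which is exactly why it suits iterative "what if" exploration across disciplines. The sketch below is our toy example, not the authors' model: one ice-volume state relaxing toward a forcing-dependent equilibrium, with an assumed 25-year decision-to-deployment lag before emissions cuts take effect:

```python
import numpy as np

dt, T_end = 0.1, 300.0          # years
t = np.arange(0.0, T_end, dt)
lag = 25.0                      # assumed decision-to-deployment lag (yr)
tau_ice = 40.0                  # assumed ice response time scale (yr)

# Anthropogenic forcing: constant until the lag elapses, then ramped down.
F = np.where(t < lag, 1.0, np.maximum(0.0, 1.0 - 0.02 * (t - lag)))

V = np.empty_like(t)            # normalized summer ice volume
V[0] = 1.0
for i in range(len(t) - 1):
    V_eq = 1.0 - F[i]           # toy equilibrium ice for the given forcing
    V[i + 1] = V[i] + dt * (V_eq - V[i]) / tau_ice

i_min = np.argmin(V)
i_rec = np.argmax((t > t[i_min]) & (V > 0.9))   # first return above 90% volume
print("ice minimum at year", t[i_min], "; recovery to 90% at year", t[i_rec])
```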
Coupled-Flow Simulation of HP-LP Turbines Has Resulted in Significant Fuel Savings
NASA Technical Reports Server (NTRS)
Veres, Joseph P.
2001-01-01
Our objective was to create a high-fidelity Navier-Stokes computer simulation of the flow through the turbines of a modern high-bypass-ratio turbofan engine. The simulation would have to capture the aerodynamic interactions between closely coupled high- and low-pressure turbines. A computer simulation of the flow in the GE90 turbofan engine's high-pressure (HP) and low-pressure (LP) turbines was created at GE Aircraft Engines under contract with the NASA Glenn Research Center. The three-dimensional steady-state computer simulation was performed using Glenn's average-passage approach, named APNASA. The areas upstream and downstream of each blade row mutually interact with each other during engine operation. The embedded blade row operating conditions are modeled, since the average-passage equations in APNASA actively include the effects of the adjacent blade rows. The turbine airfoils, platforms, and casing are actively cooled by compressor bleed air. Hot gas leaks around the tips of rotors through labyrinth seals. The flow exiting the high-work HP turbine is partially transonic and, therefore, has a strong shock system in the transition region. The simulation was done using 121 processors of a Silicon Graphics Origin 2000 (NAS O2K) cluster at the NASA Ames Research Center, with a parallel efficiency of 87 percent, in 15 hr. The typical average-passage analysis mesh size per blade row was 280 by 45 by 55, or approximately 700,000 grid points. The total number of blade rows was 18 for the combined HP and LP turbine system, including the struts in the transition duct and the exit guide vane, containing 12.6 million grid points. Design cycle turnaround time requirements ran typically from 24 to 48 hr of wall clock time. The number of iterations for convergence was 10,000 at 8.03×10^-5 s/iteration/grid point (NAS O2K). Parallel processing by up to 40 processors is required to meet the design cycle time constraints. This is the first-ever flow simulation of an HP and LP turbine. In addition, it includes the struts in the transition duct and exit guide vanes.
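The quoted figures are roughly self-consistent, as a quick back-of-envelope check shows. The assumption that the per-grid-point iteration cost is an aggregate serial cost to be divided across processors (at the stated 87 percent efficiency) is ours:

```python
# All figures below are quoted in the abstract; only the accounting is ours.
grid_points = 18 * 280 * 45 * 55      # 18 blade rows x ~700,000 points = ~12.5M
iterations = 10_000
cost = 8.03e-5                        # s per iteration per grid point (NAS O2K)
procs, efficiency = 121, 0.87

serial_hours = grid_points * iterations * cost / 3600.0
wall_hours = serial_hours / (procs * efficiency)
print(f"serial: {serial_hours:,.0f} h; wall clock on 121 CPUs: {wall_hours:.0f} h")
# ~26 h, consistent with the 24-48 h design-cycle turnaround quoted above.
```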
Cell-free metabolic engineering: Biomanufacturing beyond the cell
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dudley, QM; Karim, AS; Jewett, MC
2014-10-15
Industrial biotechnology and microbial metabolic engineering are poised to help meet the growing demand for sustainable, low-cost commodity chemicals and natural products, yet the fraction of biochemicals amenable to commercial production remains limited. Common problems afflicting the current state of the art include low volumetric productivities, build-up of toxic intermediates or products, and byproduct losses via competing pathways. To overcome these limitations, cell-free metabolic engineering (CFME) is expanding the scope of the traditional bioengineering model by using in vitro ensembles of catalytic proteins, prepared from purified enzymes or crude lysates of cells, for the production of target products. In recent years, the unprecedented level of control and freedom of design, relative to in vivo systems, has inspired the development of engineering foundations for cell-free systems. These efforts have led to activation of long enzymatic pathways (>8 enzymes), near-theoretical conversion yields, productivities greater than 100 mg L⁻¹ h⁻¹, reaction scales of >100 L, and new directions in protein purification, spatial organization, and enzyme stability. In the coming years, CFME will offer exciting opportunities to: (i) debug and optimize biosynthetic pathways; (ii) carry out design-build-test iterations without re-engineering organisms; and (iii) perform molecular transformations when bioconversion yields, productivities, or cellular toxicity limit commercial feasibility. PMID:25319678
NASA Technical Reports Server (NTRS)
Sellers, J. F.; Daniele, C. J.
1975-01-01
The DYNGEN, a digital computer program for analyzing the steady state and transient performance of turbojet and turbofan engines, is described. The DYNGEN is based on earlier computer codes (SMOTE, GENENG, and GENENG 2) which are capable of calculating the steady state performance of turbojet and turbofan engines at design and off-design operating conditions. The DYNGEN has the combined capabilities of GENENG and GENENG 2 for calculating steady state performance; to these the further capability for calculating transient performance was added. The DYNGEN can be used to analyze one- and two-spool turbojet engines or two- and three-spool turbofan engines without modification to the basic program. A modified Euler method is used by DYNGEN to solve the differential equations which model the dynamics of the engine. This new method frees the programmer from having to minimize the number of equations which require iterative solution. As a result, some of the approximations normally used in transient engine simulations can be eliminated. This tends to produce better agreement when answers are compared with those from purely steady state simulations. The modified Euler method also permits the user to specify large time steps (about 0.10 sec) to be used in the solution of the differential equations. This saves computer execution time when long transients are run. Examples of the use of the program are included, and program results are compared with those from an existing hybrid-computer simulation of a two-spool turbofan.
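A minimal sketch of a "modified Euler" (predictor-corrector) step of the kind DYNGEN employs, applied to a toy one-spool speed equation; the torque balance and coefficients are illustrative assumptions, not DYNGEN's engine model:

```python
# Toy rotor dynamics: dN/dt = net torque imbalance (inertia folded into the
# coefficients). The averaged predictor-corrector update tolerates the large
# time steps (~0.1 s) mentioned in the abstract better than plain Euler.
def dN_dt(N, fuel):
    return 50.0 * fuel - 0.004 * N       # hypothetical torque balance

dt, N, fuel = 0.10, 8000.0, 1.2          # large step; step change in fuel flow
for _ in range(20000):
    k1 = dN_dt(N, fuel)                  # explicit Euler predictor
    k2 = dN_dt(N + dt * k1, fuel)        # corrector slope at predicted state
    N += 0.5 * dt * (k1 + k2)            # modified Euler (averaged) update
print("settled spool speed:", round(N))  # approaches 50*1.2/0.004 = 15000
```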
Laser-Etched Designs for Molding Hydrogel-Based Engineered Tissues
Munarin, Fabiola; Kaiser, Nicholas J.; Kim, Tae Yun; Choi, Bum-Rak
2017-01-01
Rapid prototyping and fabrication of elastomeric molds for sterile culture of engineered tissues allow for the development of tissue geometries that can be tailored to different in vitro applications and customized as implantable scaffolds for regenerative medicine. Commercially available molds offer minimal capabilities for adaptation to unique conditions or applications versus those for which they are specifically designed. Here we describe a replica molding method for the design and fabrication of poly(dimethylsiloxane) (PDMS) molds from laser-etched acrylic negative masters with ∼0.2 mm resolution. Examples of the variety of mold shapes, sizes, and patterns obtained from laser-etched designs are provided. We use the patterned PDMS molds for producing and culturing engineered cardiac tissues with cardiomyocytes derived from human-induced pluripotent stem cells. We demonstrate that tight control over tissue morphology and anisotropy results in modulation of cell alignment and tissue-level conduction properties, including the appearance and elimination of reentrant arrhythmias, or circular electrical activation patterns. Techniques for handling engineered cardiac tissues during implantation in vivo in a rat model of myocardial infarction have been developed and are presented herein to facilitate development and adoption of surgical techniques for use with hydrogel-based engineered tissues. In summary, the method presented herein for engineered tissue mold generation is straightforward and low cost, enabling rapid design iteration and adaptation to a variety of applications in tissue engineering. Furthermore, the burden of equipment and expertise is low, allowing the technique to be accessible to all. PMID:28457187
Aerospace engineering design by systematic decomposition and multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.
1984-01-01
A method for the systematic analysis and optimization of large engineering systems, by decomposition of a large task into a set of smaller subtasks that are solved concurrently, is described. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by an analysis of its sensitivity to the inputs received from other subtasks, to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.
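The alternation described here can be sketched with two scalar subtasks coupled through one system-level variable: each subtask optimizes its local variable for a fixed coupling value, and the upper level updates the coupling using finite-difference sensitivities of the subtask optima. The objective functions below are hypothetical stand-ins:

```python
from scipy.optimize import minimize_scalar

def subtask1(z):   # e.g., a structures subtask, optimal for given coupling z
    return minimize_scalar(lambda x1: (x1 - z)**2 + 0.1 * x1**2).fun

def subtask2(z):   # e.g., an aerodynamics subtask
    return minimize_scalar(lambda x2: (x2 + z - 3.0)**2 + 0.2 * x2**2).fun

def system_objective(z):        # assembled bottom-up from subtask optima
    return subtask1(z) + subtask2(z)

z, h, step = 0.0, 1e-4, 0.2
for _ in range(100):            # top-level iteration on the coupling variable
    grad = (system_objective(z + h) - system_objective(z - h)) / (2 * h)
    z -= step * grad            # sensitivity-guided update, then re-optimize
print("converged coupling z =", round(z, 4),
      "; system objective =", round(system_objective(z), 4))
```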
Developing sustainable software solutions for bioinformatics by the “Butterfly” paradigm
Ahmed, Zeeshan; Zeeshan, Saman; Dandekar, Thomas
2014-01-01
Software design and sustainable software engineering are essential for the long-term development of bioinformatics software. Typical challenges in an academic environment are short-term contracts, island solutions, pragmatic approaches, and loose documentation. Upcoming new challenges are big data, complex data sets, software compatibility, and rapid changes in data representation. Our approach to coping with these challenges consists of iterative, intertwined cycles of development (the “Butterfly” paradigm) for key steps in scientific software engineering. User feedback is valued, as is software planning in a sustainable and interoperable way. Tool usage should be easy and intuitive. A middleware supports a user-friendly Graphical User Interface (GUI) as well as independent database/tool development. We validated this approach in our own software development and compared the different design paradigms across various software solutions. PMID:25383181
NASA Astrophysics Data System (ADS)
Castanier, Eric; Paterne, Loic; Louis, Céline
2017-09-01
In nuclear engineering, one has to manage both time and precision. In shielding design especially, greater accuracy and efficiency are needed to reduce cost (shielding thickness optimization), which calls for 3D codes. In this paper, we examine whether the CADIS method can easily be applied to the shielding design of small pipes that pass through large concrete walls. We assess the impact of weight windows (WW) generated by the 3D deterministic code ATTILA versus WW generated directly by MCNP (an iterative, manual process). The comparison is based on the quality of convergence (estimated relative error (σ), variance of the variance (VOV), and figure of merit (FOM)), on time (computing plus modelling), and on the effort required of the engineer.
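The three convergence diagnostics named here are standard Monte Carlo tally statistics: the relative error R of the mean, the variance of the variance (a fourth-moment check on R itself), and the figure of merit FOM = 1/(R²T). A Python sketch with synthetic tally scores:

```python
import numpy as np

def diagnostics(scores, minutes):
    """R, VOV, and FOM for a set of per-history tally scores."""
    n = scores.size
    mean = scores.mean()
    R = scores.std(ddof=1) / (mean * np.sqrt(n))     # relative error of the mean
    d = scores - mean
    vov = (d**4).sum() / (d**2).sum()**2 - 1.0 / n   # variance of the variance
    fom = 1.0 / (R**2 * minutes)                     # FOM = 1/(R^2 * T)
    return R, vov, fom

scores = np.random.default_rng(1).exponential(1.0, 100_000)  # synthetic tallies
R, vov, fom = diagnostics(scores, minutes=12.0)
print(f"R = {R:.4f}, VOV = {vov:.2e}, FOM = {fom:.1f}")
```

A better weight-window scheme raises the FOM: it either shrinks R for the same run time or reaches the same R sooner.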
Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology
NASA Astrophysics Data System (ADS)
Goodwin, Bruce
2015-03-01
This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examine their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track through the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the iterative engineering design and prototype cycle, thereby dramatically reducing the cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.
Plume flowfield analysis of the shuttle primary Reaction Control System (RCS) rocket engine
NASA Technical Reports Server (NTRS)
Hueser, J. E.; Brock, F. J.
1990-01-01
A solution was generated for the physical properties of the Shuttle RCS 4000 N (900 lb) rocket engine exhaust plume flowfield. The modeled exhaust gas consists of the five most abundant molecular species: H2, N2, H2O, CO, and CO2. The solution is for a bare RCS engine firing into a vacuum; the only additional hardware surface in the flowfield is a cylinder (the engine mount) which coincides with the nozzle lip outer corner at X = 0, extends to the flowfield outer boundary at X = -137 m, and is coaxial with the negative symmetry axis. Continuum gas dynamic methods and the Direct Simulation Monte Carlo (DSMC) method were combined in an iterative procedure to produce a self-consistent solution. Continuum methods were used in the RCS nozzle and in the plume as far as the P = 0.03 breakdown contour; the DSMC method was used downstream of this continuum-flow boundary. The DSMC flowfield extends beyond 100 m from the nozzle exit, so the solution includes the farfield flow properties; substantial information is also developed on lip-flow dynamics, and results are therefore presented for the flow properties in the vicinity of the nozzle lip.
Vanegas, Katherina García; Lehka, Beata Joanna; Mortensen, Uffe Hasbro
2017-02-08
The yeast Saccharomyces cerevisiae is increasingly used as a cell factory. However, cell factory construction time is a major obstacle to using yeast for bio-production, so tools to speed up cell factory construction are desirable. In this study, we have developed a new Cas9/dCas9-based system, SWITCH, which allows Saccharomyces cerevisiae strains to iteratively alternate between a genetic engineering state and a pathway control state. Since Cas9-induced recombination events are crucial for SWITCH efficiency, we first developed a technique, TAPE, which we have successfully used to address protospacer efficiency. As proof of concept of the use of SWITCH in cell factory construction, we exploited the genetic engineering state of a SWITCH strain to insert the five genes necessary for naringenin production. Next, the naringenin cell factory was switched to the pathway control state, where production was optimized by downregulating an essential gene, TSC13, hence reducing formation of a byproduct. We have thus successfully integrated two CRISPR tools, one for genetic engineering and one for pathway control, into one system and used it for cell factory construction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Jian-Jun; Wang, Yi; Ihlefeld, Jon F.; ...
2016-04-06
Effective thermal conductivity as a function of domain structure is studied by solving the heat conduction equation using a spectral iterative perturbation algorithm in materials with an inhomogeneous thermal conductivity distribution. Using this proposed algorithm, the experimentally measured effective thermal conductivities of domain-engineered {001}p-BiFeO3 thin films are quantitatively reproduced. In conjunction with two other test examples, the algorithm is shown to be an efficient tool for interpreting the relationship between effective thermal conductivity and micro-/domain structures. By combining this algorithm with the phase-field model of ferroelectric thin films, the effective thermal conductivity of PbZr1-xTixO3 films under different composition, thickness, strain, and working conditions is predicted. It is shown that the chemical composition, misfit strain, film thickness, film orientation, and a Piezoresponse Force Microscopy tip can be used to engineer the domain structures and tune the effective thermal conductivity. Furthermore, we expect these findings to stimulate future theoretical, experimental, and engineering efforts to develop devices based on the tunable effective thermal conductivity in ferroelectric nanostructures.
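The flavor of a spectral iterative perturbation solve is easiest to see in one dimension: split k(x) into its mean and a fluctuation, invert the mean-conductivity operator in Fourier space, and iterate the fluctuation term to convergence. The sketch below is our 1-D analogue for a layered medium, where the converged effective conductivity should approach the harmonic mean; it is not the authors' 3-D implementation:

```python
import numpy as np

N, L, G = 256, 1.0, 1.0                  # grid, domain length, imposed mean gradient
x = np.arange(N) * L / N
k = np.where(x < 0.5, 1.0, 10.0)         # layered two-phase conductivity
kbar, dk = k.mean(), k - k.mean()        # mean + fluctuation split

tau_p = np.zeros(N)                      # gradient of the periodic T fluctuation
for _ in range(300):                     # fixed-point spectral iteration
    fhat = np.fft.fft(dk * (G + tau_p))
    fhat[0] = 0.0                        # enforce a zero-mean fluctuation gradient
    tau_p = np.real(np.fft.ifft(-fhat / kbar))

k_eff = np.mean(k * (G + tau_p)) / G     # mean flux / mean gradient
print("k_eff =", k_eff, "; harmonic mean =", 1.0 / np.mean(1.0 / k))
```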
Engineering of synthetic, stress-responsive yeast promoters
Rajkumar, Arun S.; Liu, Guodong; Bergenholm, David; Arsovska, Dushica; Kristensen, Mette; Nielsen, Jens; Jensen, Michael K.; Keasling, Jay D.
2016-01-01
Advances in synthetic biology and our understanding of the rules of promoter architecture have led to the development of diverse synthetic constitutive and inducible promoters in eukaryotes and prokaryotes. However, the design of promoters inducible by specific endogenous or environmental conditions is still rarely undertaken. In this study, we engineered and characterized a set of strong, synthetic promoters for budding yeast Saccharomyces cerevisiae that are inducible under acidic conditions (pH ≤ 3). Using available expression and transcription factor binding data, literature on transcriptional regulation, and known rules of promoter architecture we improved the low-pH performance of the YGP1 promoter by modifying transcription factor binding sites in its upstream activation sequence. The engineering strategy outlined for the YGP1 promoter was subsequently applied to create a response to low pH in the unrelated CCW14 promoter. We applied our best promoter variants to low-pH fermentations, enabling ten-fold increased production of lactic acid compared to titres obtained with the commonly used, native TEF1 promoter. Our findings outline and validate a general strategy to iteratively design and engineer synthetic yeast promoters inducible to environmental conditions or stresses of interest. PMID:27325743
Taking Lessons Learned from a Proxy Application to a Full Application for SNAP and PARTISN
Womeldorff, Geoffrey Alan; Payne, Joshua Estes; Bergen, Benjamin Karl
2017-06-09
SNAP is a proxy application which simulates the computational motion of a neutral particle transport code, PARTISN. In this work, we have adapted parts of SNAP separately; we have re-implemented the iterative shell of SNAP in the task-model runtime Legion, showing an improvement to the original schedule, and we have created multiple Kokkos implementations of the computational kernel of SNAP, displaying performance similar to the native Fortran. We then translate our Kokkos experiments in SNAP to PARTISN, necessitating engineering development, regression testing, and further thought.
Air pollution control system research: An iterative approach to developing affordable systems
NASA Technical Reports Server (NTRS)
Watt, Lewis C.; Cannon, Fred S.; Heinsohn, Robert J.; Spaeder, Timothy A.
1995-01-01
This paper describes a Strategic Environmental Research and Development Program (SERDP) funded project led jointly by the Marine Corps Multi-Commodity Maintenance Centers, and the Air and Energy Engineering Research Laboratory (AEERL) of the USEPA. The research focuses on paint booth exhaust minimization using recirculation, and on volatile organic compound (VOC) oxidation by the modules of a hybrid air pollution control system. The research team is applying bench, pilot and full scale systems to accomplish the goals of reduced cost and improved effectiveness of air treatment systems for paint booth exhaust.
Targeted exploration and analysis of large cross-platform human transcriptomic compendia
Zhu, Qian; Wong, Aaron K; Krishnan, Arjun; Aure, Miriam R; Tadych, Alicja; Zhang, Ran; Corney, David C; Greene, Casey S; Bongo, Lars A; Kristensen, Vessela N; Charikar, Moses; Li, Kai; Troyanskaya, Olga G.
2016-01-01
We present SEEK (http://seek.princeton.edu), a query-based search engine across very large transcriptomic data collections, including thousands of human data sets from almost 50 microarray and next-generation sequencing platforms. SEEK uses a novel query-level cross-validation-based algorithm to automatically prioritize data sets relevant to the query and a robust search approach to identify query-coregulated genes, pathways, and processes. SEEK provides cross-platform handling, multi-gene query search, iterative metadata-based search refinement, and extensive visualization-based analysis options. PMID:25581801
Perl Modules for Constructing Iterators
NASA Technical Reports Server (NTRS)
Tilmes, Curt
2009-01-01
The Iterator Perl Module provides a general-purpose framework for constructing iterator objects within Perl, and a standard API for interacting with those objects. Iterators are an object-oriented design pattern in which a description of a series of values is used in a constructor; subsequent queries can request values in that series. These Perl modules build on the standard Iterator framework and provide iterators for some other types of values. Iterator::DateTime constructs iterators from DateTime objects or Date::Parse descriptions and iCal/RFC 2445 style recurrence descriptions. It supports a variety of input parameters, including a start to the sequence, an end to the sequence, an iCal/RFC 2445 recurrence describing the frequency of the values in the series, and a format description that can refine the presentation manner of the DateTime. Iterator::String constructs iterators from string representations. This module is useful in contexts where the API consists of supplying a string and getting back an iterator where the specific iteration desired is opaque to the caller. It is of particular value to the Iterator::Hash module, which provides nested iterations. Iterator::Hash constructs iterators from Perl hashes that can include multiple iterators. The constructed iterators will return all the permutations of the iterations of the hash by nested iteration of embedded iterators. A hash simply includes a set of keys mapped to values; it is a very common data structure used throughout Perl programming. The Iterator::Hash module allows a hash to include strings defining iterators (parsed and dispatched with Iterator::String) that are used to construct an overall series of hash values.
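A Python analogue of the nested hash-iteration pattern described above (illustrative only; the Perl modules expose a different, object-oriented API):

    import itertools

    def expand(spec):
        # Analogue of Iterator::Hash: scalar values pass through unchanged,
        # while list values are expanded by nested iteration over all
        # combinations, yielding one plain dict per permutation.
        keys = list(spec)
        pools = [v if isinstance(v, (list, tuple)) else [v] for v in spec.values()]
        for combo in itertools.product(*pools):
            yield dict(zip(keys, combo))

    # Two embedded "iterators" give 2 x 3 = 6 hash values.
    for d in expand({"host": ["a", "b"], "port": [1, 2, 3], "user": "ops"}):
        print(d)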
A color-corrected strategy for information multiplexed Fourier ptychographic imaging
NASA Astrophysics Data System (ADS)
Wang, Mingqun; Zhang, Yuzhen; Chen, Qian; Sun, Jiasong; Fan, Yao; Zuo, Chao
2017-12-01
Fourier ptychography (FP) is a novel computational imaging technique that provides both a wide field of view (FoV) and high-resolution (HR) imaging capacity for biomedical imaging. Combined with information multiplexing technology, wavelength-multiplexed (or color-multiplexed) FP imaging can be implemented by lighting up R/G/B LED units simultaneously. Furthermore, an HR image can be recovered at each wavelength from the multiplexed dataset. This enhances the efficiency of data acquisition. However, since the same dataset of intensity measurements is used to recover the HR image at each wavelength, the mean value in each channel would converge to the same value. In this paper, a color correction strategy embedded in the multiplexing FP scheme is demonstrated, termed color-corrected wavelength multiplexed Fourier ptychography (CWMFP). Three images captured by turning on an LED array in R/G/B are required as a priori knowledge to improve the accuracy of reconstruction in the recovery process. Using the reported technique, the redundancy requirement of information multiplexed FP is reduced. Moreover, the accuracy of reconstruction in each channel is improved with correct color reproduction of the specimen.
Big Data Analytics for Scanning Transmission Electron Microscopy Ptychography
NASA Astrophysics Data System (ADS)
Jesse, S.; Chi, M.; Belianinov, A.; Beekman, C.; Kalinin, S. V.; Borisevich, A. Y.; Lupini, A. R.
2016-05-01
Electron microscopy is undergoing a transition; from the model of producing only a few micrographs, through the current state where many images and spectra can be digitally recorded, to a new mode where very large volumes of data (movies, ptychographic and multi-dimensional series) can be rapidly obtained. Here, we discuss the application of so-called “big-data” methods to high dimensional microscopy data, using unsupervised multivariate statistical techniques, in order to explore salient image features in a specific example of BiFeO3 domains. Remarkably, k-means clustering reveals domain differentiation despite the fact that the algorithm is purely statistical in nature and does not require any prior information regarding the material, any coexisting phases, or any differentiating structures. While this is a somewhat trivial case, this example signifies the extraction of useful physical and structural information without any prior bias regarding the sample or the instrumental modality. Further interpretation of these types of results may still require human intervention. However, the open nature of this algorithm and its wide availability, enable broad collaborations and exploratory work necessary to enable efficient data analysis in electron microscopy.
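A minimal sketch of this kind of unsupervised analysis (assuming scikit-learn, with synthetic descriptors standing in for a real ptychographic series):

    import numpy as np
    from sklearn.cluster import KMeans

    # Hypothetical stand-in: a 32 x 32 grid of probe positions, each with a
    # 16-component local diffraction descriptor, flattened for clustering.
    rng = np.random.default_rng(0)
    nx = ny = 32
    data = rng.normal(size=(nx * ny, 16))
    data[: nx * ny // 2, :4] += 2.0          # two synthetic "domains"

    # Purely statistical clustering: no prior information about the material.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
    domain_map = labels.reshape(nx, ny)      # spatial map of cluster labels
    print(np.bincount(labels))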
Coded aperture ptychography: uniqueness and reconstruction
NASA Astrophysics Data System (ADS)
Chen, Pengwen; Fannjiang, Albert
2018-02-01
Uniqueness of solution is proved for any ptychographic scheme with a random mask under a minimum overlap condition and local geometric convergence analysis is given for the alternating projection (AP) and Douglas-Rachford (DR) algorithms. DR is shown to possess a unique fixed point in the object domain and for AP a simple criterion for distinguishing the true solution among possibly many fixed points is given. A minimalist scheme, where the adjacent masks overlap 50% of the area and each pixel of the object is illuminated by exactly four illuminations, is conveniently parametrized by the number q of shifted masks in each direction. The lower bound 1 - C/q^2 is proved for the geometric convergence rate of the minimalist scheme, predicting a poor performance with large q which is confirmed by numerical experiments. The twin-image ambiguity is shown to arise for certain Fresnel masks and degrade the performance of reconstruction. Extensive numerical experiments are performed to explore the general features of a well-performing mask, the optimal value of q and the robustness with respect to measurement noise.
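In the notation usual for such analyses, the two fixed-point iterations have the schematic form (a sketch; the paper's precise operator definitions may differ):

\[
x_{k+1} = P_{O}\,P_{F}\,x_k \quad \text{(AP)}, \qquad
y_{k+1} = y_k + P_{O}\left(2P_{F} - I\right)y_k - P_{F}\,y_k \quad \text{(DR)},
\]

where \(P_F\) projects onto the fields whose Fourier magnitudes match the measured diffraction patterns, \(P_O\) projects onto the fields consistent with the overlapping mask constraints, and, for the minimalist scheme, the geometric convergence rate is bounded below by \(1 - C/q^{2}\).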
The Complex Dynamics of Student Engagement in Novel Engineering Design Activities
NASA Astrophysics Data System (ADS)
McCormick, Mary
In engineering design, making sense of "messy," design situations is at the heart of the discipline (Schon, 1983); engineers in practice bring structure to design situations by organizing, negotiating, and coordinating multiple aspects (Bucciarelli, 1994; Stevens, Johri, & O'Connor, 2014). In classroom settings, however, students are more often given well-defined, content-focused engineering tasks (Jonassen, 2014). These tasks are based on the assumption that elementary students are unable to grapple with the complexity or open-endedness of engineering design (Crismond & Adams, 2012). The data I present in this dissertation suggest the opposite. I show that students are not only able to make sense of, or frame (Goffman, 1974), complex design situations, but that their framings dynamically involve their nascent abilities for engineering design. The context of this work is Novel Engineering, a larger research project that explores using children's literature as an access point for engineering design. Novel Engineering activities are inherently messy: there are characters with needs, settings with implicit constraints, and rich design situations. In a series of three studies, I show how students' framings of Novel Engineering design activities involve their reasoning and acting as beginning engineers. In the first study, I show two students whose caring for the story characters contributes to their stability in framing the task: they identify the needs of their fictional clients and iteratively design a solution to meet their clients' needs. In the second, I show how students' shifting and negotiating framings influence their engineering assumptions and evaluation criteria. In the third, I show how students' coordinating framings involve navigating a design process to meet clients' needs, classroom expectations, and technical requirements. Collectively, these studies contribute to literature by documenting students' productive beginnings in engineering design. The implications span research and practice, specifically targeting how we attend to and support students as they engage in engineering design.
Using the Tritium Plasma Experiment to evaluate ITER PFC safety
NASA Astrophysics Data System (ADS)
Longhurst, Glen R.; Anderl, Robert A.; Bartlit, John R.; Causey, Rion A.; Haines, John R.
The Tritium Plasma Experiment was assembled at Sandia National Laboratories, Livermore to investigate interactions between dense plasmas at low energies and plasma-facing component materials. This apparatus has the unique capability of replicating plasma conditions in a tokamak divertor with particle flux densities of 2 × 10^19 ions/(cm^2·s) and a plasma temperature of about 15 eV using a plasma that includes tritium. With the closure of the Tritium Research Laboratory at Livermore, the experiment was moved to the Tritium Systems Test Assembly facility at Los Alamos National Laboratory. An experimental program has been initiated there using the Tritium Plasma Experiment to examine safety issues related to tritium in plasma-facing components, particularly the ITER divertor. Those issues include tritium retention and release characteristics, tritium permeation rates and transient times to coolant streams, surface modification and erosion by the plasma, the effects of thermal loads and cycling, and particulate production. A considerable lack of data exists in these areas for many of the materials, especially beryllium, being considered for use in ITER. Not only will basic material behavior with respect to safety issues in the divertor environment be examined, but innovative techniques for optimizing performance with respect to tritium safety by material modification and process control will be investigated. Supplementary experiments will be carried out at the Idaho National Engineering Laboratory and Sandia National Laboratory to expand and clarify results obtained on the Tritium Plasma Experiment.
Zhao, Qilin; Chen, Li; Shao, Guojian
2014-01-01
The axial compressive strength of unidirectional FRP made by pultrusion is generally much lower than its axial tensile strength. This fact diminishes the advantages of FRP as a main load-bearing member in engineering structures. A theoretical iterative calculation approach was suggested to predict the ultimate axial compressive stress of the combined structure and analyze the influences of geometrical parameters on the ultimate axial compressive stress of the combined structure. In this paper, the experimental and theoretical research on the CFRP sheet confined GFRP short pole was extended to the CFRP sheet confined GFRP short pipe, namely, a hollow section pole. Experiment shows that the bearing capacity of the GFRP short pipe can also be increased markedly by confinement with CFRP sheet. The theoretical iterative calculation approach in the previous paper is amended to predict the ultimate axial compressive stress of the CFRP sheet confined GFRP short pipe, and its results agree with the experiment. Lastly, the influences of geometrical parameters on the new combined structure are analyzed. PMID:24672288
Preliminary safety analysis of the Baita Bihor radioactive waste repository, Romania
DOE Office of Scientific and Technical Information (OSTI.GOV)
Little, Richard; Bond, Alex; Watson, Sarah
2007-07-01
A project funded under the European Commission's Phare Programme 2002 has undertaken an in-depth analysis of the operational and post-closure safety of the Baita Bihor repository. The repository has accepted low- and some intermediate-level radioactive waste from industry, medical establishments and research activities since 1985, and the current estimate is that disposals might continue for around another 20 to 35 years. The analysis of the operational and post-closure safety of the Baita Bihor repository was carried out in two iterations, with the second iteration resulting in reduced uncertainties, largely as a result of taking into account new information on the hydrology and hydrogeology of the area, collected as part of the project. Impacts were evaluated for the maximum potential inventory that might be available for disposal at Baita Bihor for a number of operational and post-closure scenarios and associated conceptual models. The results showed that calculated impacts were below the relevant regulatory criteria. In light of the assessment, a number of recommendations relating to repository operation, optimisation of repository engineering and waste disposals, and environmental monitoring were made. (authors)
Mishra, Nigam M; Stolarzewicz, Izabela; Cannaerts, David; Schuermans, Joris; Lavigne, Rob; Looz, Yannick; Landuyt, Bart; Schoofs, Liliane; Schols, Dominique; Paeshuyse, Jan; Hickenbotham, Peter; Clokie, Martha; Luyten, Walter; Van der Eycken, Erik V; Briers, Yves
2018-01-01
Vancomycin is a glycopeptide antibiotic that inhibits transpeptidation during cell wall synthesis by binding to the D-Ala-D-Ala termini of lipid II. It has long been used as a last-resort antibiotic. However, since the emergence of the first vancomycin-resistant enterococci in 1987, vancomycin resistance has become widespread, especially in hospitals. We have synthesized and evaluated 110 vancomycin analogs modified at the C-terminal carboxyl group of the heptapeptide moiety with R2NHR1NH2 substituents. Through iterative optimizations of the substituents, we identified vancomycin analogs that fully restore (or even exceed) the original inhibitory activity against vancomycin-resistant enterococci (VRE), vancomycin-intermediate (VISA) and vancomycin-resistant Staphylococcus aureus (VRSA) strains. The best analogs have improved growth inhibitory activity and in vitro therapeutic indices against a broad set of VRE and methicillin-resistant S. aureus (MRSA) isolates. They also exceed the activity of vancomycin against Clostridium difficile ribotypes. Vanc-39 and Vanc-42 have a low probability of provoking antibiotic resistance, and overcome different vancomycin resistance mechanisms (VanA, VanB, and VanC1).
Performance evaluation approach for the supercritical helium cold circulators of ITER
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaghela, H.; Sarkar, B.; Bhattacharya, R.
2014-01-29
The ITER project design foresees Supercritical Helium (SHe) forced flow cooling for the main cryogenic components, namely, the superconducting (SC) magnets and cryopumps (CP). Therefore, cold circulators have been selected to provide the required SHe mass flow rate to cope with specific operating conditions and technical requirements. Considering the availability impacts of such machines, it has been decided to perform evaluation tests of the cold circulators at operating conditions prior to the series production in order to minimize the project technical risks. A proposal has been conceptualized, evaluated and simulated to perform representative tests of the full scale SHe cold circulators. The objectives of the performance tests include the validation of normal operating condition, transient and off-design operating modes as well as the efficiency measurement. A suitable process and instrumentation diagram of the test valve box (TVB) has been developed to implement the tests at the required thermodynamic conditions. The conceptual engineering design of the TVB has been developed along with the required thermal analysis for the normal operating conditions to support the performance evaluation of the SHe cold circulator.
Summer Proceedings 2016: The Center for Computing Research at Sandia National Laboratories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carleton, James Brian; Parks, Michael L.
Solving sparse linear systems from the discretization of elliptic partial differential equations (PDEs) is an important building block in many engineering applications. Sparse direct solvers can solve general linear systems, but are usually slower and use much more memory than effective iterative solvers. To overcome these two disadvantages, a hierarchical solver (LoRaSp) based on H2-matrices was introduced in [22]. Here, we have developed a parallel version of the algorithm in LoRaSp to solve large sparse matrices on distributed memory machines. On a single processor, the factorization time of our parallel solver scales almost linearly with the problem size for three-dimensional problems, as opposed to the quadratic scalability of many existing sparse direct solvers. Moreover, our solver leads to almost constant numbers of iterations, when used as a preconditioner for Poisson problems. On more than one processor, our algorithm has significant speedups compared to sequential runs. With this parallel algorithm, we are able to solve large problems much faster than many existing packages as demonstrated by the numerical experiments.
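As a generic illustration of the preconditioning role described above (assuming SciPy; a simple Jacobi diagonal scaling stands in for the hierarchical LoRaSp preconditioner):

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import LinearOperator, cg

    # 2-D Poisson matrix from the standard 5-point stencil (Kronecker sum).
    n = 64
    T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
    A = sp.kronsum(T, T).tocsr()
    b = np.ones(A.shape[0])

    # Stand-in preconditioner: Jacobi (diagonal) scaling; a hierarchical
    # solver would supply a far stronger approximate inverse here.
    d = A.diagonal()
    M = LinearOperator(A.shape, matvec=lambda v: v / d)

    count = {"iters": 0}
    def cb(xk):
        count["iters"] += 1

    x, info = cg(A, b, M=M, callback=cb)
    print("info:", info, "iterations:", count["iters"])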
Recent progress of the JT-60SA project
NASA Astrophysics Data System (ADS)
Shirai, H.; Barabaschi, P.; Kamada, Y.; the JT-60SA Team
2017-10-01
The JT-60SA project has been implemented for the purpose of an early realization of fusion energy. With a powerful and versatile NBI and ECRF system, a flexible plasma-shaping capability, and various kinds of in-vessel coils to suppress MHD instabilities, JT-60SA plays an essential role in addressing the key physics and engineering issues of ITER and DEMO. It aims to achieve the long sustainment of high integrated performance plasmas under the high βN condition required in DEMO. The fabrication and installation of components and systems of JT-60SA procured by the EU and Japan are steadily progressing. The installation of toroidal field (TF) coils around the vacuum vessel started in December 2016. The commissioning of the cryogenic system and power supply system has been implemented in the Naka site, and JT-60SA will start operation in 2019. The JT-60SA research plan covers a wide area of issues in ITER and DEMO relevant operation regimes, and has been regularly updated on the basis of intensive discussion among European and Japanese researchers.
Fully Automated Detection of Cloud and Aerosol Layers in the CALIPSO Lidar Measurements
NASA Technical Reports Server (NTRS)
Vaughan, Mark A.; Powell, Kathleen A.; Kuehn, Ralph E.; Young, Stuart A.; Winker, David M.; Hostetler, Chris A.; Hunt, William H.; Liu, Zhaoyan; McGill, Matthew J.; Getzewich, Brian J.
2009-01-01
Accurate knowledge of the vertical and horizontal extent of clouds and aerosols in the earth's atmosphere is critical in assessing the planet's radiation budget and for advancing human understanding of climate change issues. To retrieve this fundamental information from the elastic backscatter lidar data acquired during the Cloud-Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) mission, a selective, iterated boundary location (SIBYL) algorithm has been developed and deployed. SIBYL accomplishes its goals by integrating an adaptive context-sensitive profile scanner into an iterated multiresolution spatial averaging scheme. This paper provides an in-depth overview of the architecture and performance of the SIBYL algorithm. It begins with a brief review of the theory of target detection in noise-contaminated signals, and an enumeration of the practical constraints levied on the retrieval scheme by the design of the lidar hardware, the geometry of a space-based remote sensing platform, and the spatial variability of the measurement targets. Detailed descriptions are then provided for both the adaptive threshold algorithm used to detect features of interest within individual lidar profiles and the fully automated multiresolution averaging engine within which this profile scanner functions. The resulting fusion of profile scanner and averaging engine is specifically designed to optimize the trade-offs between the widely varying signal-to-noise ratio of the measurements and the disparate spatial resolutions of the detection targets. Throughout the paper, specific algorithm performance details are illustrated using examples drawn from the existing CALIPSO dataset. Overall performance is established by comparisons to existing layer height distributions obtained by other airborne and space-based lidars.
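The interplay of profile scanning and multiresolution averaging can be caricatured in a few lines of Python (purely illustrative; SIBYL's thresholds are adaptive and context-sensitive rather than the fixed k-sigma rule used here):

    import numpy as np

    def detect(profile, noise_sigma, k=4.0, max_halvings=4):
        # Flag samples above a k-sigma threshold; when nothing clears it,
        # average adjacent pairs (halving resolution, reducing the noise
        # by sqrt(2) per level) and rescan the coarser profile.
        x = np.asarray(profile, dtype=float)
        for level in range(max_halvings + 1):
            hits = np.flatnonzero(x > k * noise_sigma / np.sqrt(2.0 ** level))
            if hits.size:
                return level, hits          # resolution level and feature bins
            m = x.size // 2
            x = 0.5 * (x[: 2 * m : 2] + x[1 : 2 * m : 2])
        return None, np.array([], dtype=int)

    rng = np.random.default_rng(1)
    profile = rng.normal(0.0, 1.0, 1024)
    profile[400:464] += 1.0                  # weak, spatially extended feature
    print(detect(profile, noise_sigma=1.0))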
NASA Technical Reports Server (NTRS)
Johnson, James E.; Conley, Cassie; Siegel, Bette
2015-01-01
As systems, technologies, and plans for the human exploration of Mars and other destinations beyond low Earth orbit begin to coalesce, it is imperative that frequent and early consideration is given to how planetary protection practices and policy will be upheld. While the development of formal planetary protection requirements for future human space systems and operations may still be a few years from fruition, guidance to appropriately influence mission and system design will be needed soon to avoid costly design and operational changes. The path to constructing such requirements is a journey that espouses key systems engineering practices of understanding shared goals, objectives and concerns, identifying key stakeholders, and iterating a draft requirement set to gain community consensus. This paper traces through each of these practices, beginning with a literature review of nearly three decades of publications addressing planetary protection concerns with respect to human exploration. Key goals, objectives and concerns, particularly with respect to notional requirements, required studies and research, and technology development needs have been compiled and categorized to provide a current 'state of knowledge'. This information, combined with the identification of key stakeholders in upholding planetary protection concerns for human missions, has yielded a draft requirement set that might feed future iteration among space system designers, exploration scientists, and the mission operations community. Combining the information collected with a proposed forward path will hopefully yield a mutually agreeable set of timely, verifiable, and practical requirements for human space exploration that will uphold international commitment to planetary protection.
Simultaneous non-contiguous deletions using large synthetic DNA and site-specific recombinases
Krishnakumar, Radha; Grose, Carissa; Haft, Daniel H.; Zaveri, Jayshree; Alperovich, Nina; Gibson, Daniel G.; Merryman, Chuck; Glass, John I.
2014-01-01
Toward achieving rapid and large scale genome modification directly in a target organism, we have developed a new genome engineering strategy that uses a combination of bioinformatics aided design, large synthetic DNA and site-specific recombinases. Using Cre recombinase we swapped a target 126-kb segment of the Escherichia coli genome with a 72-kb synthetic DNA cassette, thereby effectively eliminating over 54 kb of genomic DNA from three non-contiguous regions in a single recombination event. We observed complete replacement of the native sequence with the modified synthetic sequence through the action of the Cre recombinase and no competition from homologous recombination. Because of the versatility and high-efficiency of the Cre-lox system, this method can be used in any organism where this system is functional as well as adapted to use with other highly precise genome engineering systems. Compared to present-day iterative approaches in genome engineering, we anticipate this method will greatly speed up the creation of reduced, modularized and optimized genomes through the integration of deletion analyses data, transcriptomics, synthetic biology and site-specific recombination. PMID:24914053
Optimized bio-inspired stiffening design for an engine nacelle.
Lazo, Neil; Vodenitcharova, Tania; Hoffman, Mark
2015-11-04
Structural efficiency is a common engineering goal in which an ideal solution provides a structure with optimized performance at minimized weight, with consideration of material mechanical properties, structural geometry, and manufacturability. This study aims to address this goal in developing high-performance, lightweight, stiff mechanical components by creating an optimized design from a biologically inspired template. The approach is implemented in the optimization of rib stiffeners along an aircraft engine nacelle. The helical and angled arrangements of cellulose fibres in plants were chosen as the bio-inspired template. Optimization of total displacement and weight was carried out using a genetic algorithm (GA) coupled with finite element analysis. Iterations showed a gradual convergence in normalized fitness. Displacement was given higher emphasis in the optimization; thus, the GA tended towards individual designs with weights near the mass constraint. Dominant features of the resulting designs were helical ribs with rectangular cross-sections having a large height-to-width ratio. The displacement reduction was 73% compared with an unreinforced nacelle, and is attributed to the geometric features and layout of the stiffeners, while mass is maintained within the constraint.
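A toy version of the coupled GA loop, with a stand-in fitness in place of the finite element analysis (all weights and gene meanings here are hypothetical):

    import numpy as np

    def ga(fitness, n_genes, pop_size=40, generations=60, p_mut=0.1, seed=0):
        # Tiny real-coded GA: tournament selection, blend crossover, mutation.
        rng = np.random.default_rng(seed)
        pop = rng.random((pop_size, n_genes))
        for _ in range(generations):
            fit = np.array([fitness(ind) for ind in pop])
            # Binary tournament selection.
            i, j = rng.integers(0, pop_size, (2, pop_size))
            parents = pop[np.where(fit[i] > fit[j], i, j)]
            # Blend crossover between consecutive parents.
            a = rng.random((pop_size, n_genes))
            pop = a * parents + (1.0 - a) * np.roll(parents, 1, axis=0)
            # Uniform mutation.
            mask = rng.random(pop.shape) < p_mut
            pop[mask] = rng.random(int(mask.sum()))
        fit = np.array([fitness(ind) for ind in pop])
        return pop[fit.argmax()], fit.max()

    # Hypothetical objective: reward stiffness (low displacement), penalize
    # mass above a constraint; in the paper the displacement comes from FEA.
    def fitness(genes):
        displacement = 1.0 / (1.0 + 5.0 * genes[:3].sum())
        mass = genes.mean()
        return -displacement - 10.0 * max(0.0, mass - 0.7)

    best, best_fit = ga(fitness, n_genes=6)
    print(best.round(2), round(best_fit, 4))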
NASA Technical Reports Server (NTRS)
Watson, Willie R.; Nark, Douglas M.; Nguyen, Duc T.; Tungkahotara, Siroj
2006-01-01
A finite element solution to the convected Helmholtz equation in a nonuniform flow is used to model the noise field within 3-D acoustically treated aero-engine nacelles. Options to select linear or cubic Hermite polynomial basis functions and isoparametric elements are included. However, the key feature of the method is a domain decomposition procedure that is based upon the inter-mixing of an iterative and a direct solve strategy for solving the discrete finite element equations. This procedure is optimized to take full advantage of sparsity and exploit the increased memory and parallel processing capability of modern computer architectures. Example computations are presented for the Langley Flow Impedance Test facility and a rectangular mapping of a full scale, generic aero-engine nacelle. The accuracy and parallel performance of this new solver are tested on both model problems using a supercomputer that contains hundreds of central processing units. Results show that the method gives extremely accurate attenuation predictions, achieves super-linear speedup over hundreds of CPUs, and solves upward of 25 million complex equations in a quarter of an hour.
Aerospace engineering design by systematic decomposition and multilevel optimization
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, J.; Giles, G. L.; Barthelemy, J.-F. M.
1984-01-01
This paper describes a method for systematic analysis and optimization of large engineering systems, e.g., aircraft, by decomposition of a large task into a set of smaller, self-contained subtasks that can be solved concurrently. The subtasks may be arranged in many hierarchical levels with the assembled system at the top level. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by analysis of its sensitivity to the inputs received from other subtasks to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization. It is pointed out that the method is intended to be compatible with the typical engineering organization and the modern technology of distributed computing.
NASA Astrophysics Data System (ADS)
Darbos, C.; Henderson, M.; Albajar, F.; Bigelow, T.; Bomcelli, T.; Chavan, R.; Denisov, G.; Farina, D.; Gandini, F.; Heidinger, R.; Goodman, T.; Hogge, J. P.; Kajiwara, K.; Kasugai, A.; Kern, S.; Kobayashi, N.; Oda, Y.; Ramponi, G.; Rao, S. L.; Rasmussen, D.; Rzesnicki, T.; Saibene, G.; Sakamoto, K.; Sauter, O.; Scherer, T.; Strauss, D.; Takahashi, K.; Zohm, H.
2009-11-01
A 26 MW Electron Cyclotron Heating and Current Drive (EC H&CD) system is to be installed for ITER. The main objectives are to provide start-up assist, central H&CD, and control of MHD activity. These are achieved by a combination of two types of launchers, one located in an equatorial port and the second type in four upper ports. The physics applications are partitioned between the two launchers, based on the deposition location and driven current profiles. The equatorial launcher (EL) will access from the plasma axis to mid radius with a relatively broad profile useful for central heating and current drive applications, while the upper launchers (ULs) will access roughly the outer half of the plasma radius with a very narrow peaked profile for the control of the Neoclassical Tearing Modes (NTM) and sawtooth oscillations. The EC power can be switched between launchers on a time scale as needed by the immediate physics requirements. A revision of all injection angles of all launchers is under consideration for increased EC physics capabilities while relaxing the engineering constraints of both the EL and ULs. A series of design reviews are being planned with the five parties (EU, IN, JA, RF, US) procuring the EC system, the EC community and the ITER Organization (IO). The review meetings qualify the design and provide an environment for enhancing performance while reducing costs, simplifying interfaces, and anticipating technology upgrades and commercial availability. In parallel, the test programs for critical components are being supported by IO and performed by the Domestic Agencies (DAs) to minimize risks. The wide participation of the DAs provides a broad representation from the EC community, with the aim of collecting all expertise in guiding the EC system optimization. Still, a strong relationship between IO and the DAs is essential for optimizing the design of the EC system and for the installation and commissioning of all ex-vessel components when several teams from several DAs will be involved together in the tests on the ITER site.
NASA Astrophysics Data System (ADS)
1993-08-01
The Committee's evaluation of vanadium alloys as a structural material for fusion reactors was constrained by limited data and time. The design of the International Thermonuclear Experimental Reactor is still in the concept stage, so meaningful design requirements were not available. The data on the effect of environment and irradiation on vanadium alloys were sparse, and interpolations of these data were made to select the V-5Cr-5Ti alloy. With an aggressive, fully funded program it is possible to qualify a vanadium alloy as the principal structural material for the ITER blanket in the available 5 to 8-year window. However, the data base for V-5Cr-5Ti is limited and will require an extensive development and test program. Because of the chemical reactivity of vanadium, the alloy will be less tolerant of system failures, accidents, and off-normal events than most other candidate blanket structural materials and will require more careful handling during fabrication of hardware. Because of the cost of the material, more stringent requirements on processes, and minimal historical working experience, it will cost an order of magnitude more to qualify a vanadium alloy for ITER blanket structures than other candidate materials. The use of vanadium is difficult and uncertain; therefore, other options should be explored more thoroughly before a final selection of vanadium is confirmed. The Committee views the risk as being too high to rely solely on vanadium alloys. In viewing the state and nature of the design of the ITER blanket as presented to the Committee, it is obvious that there is a need to move toward integrating fabrication, welding, and materials engineers into the ITER design team. If the vanadium alloy option is to be pursued, a large program needs to be started immediately. The commitment of funding and other resources needs to be firm and consistent with a realistic program plan.
Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations
NASA Astrophysics Data System (ADS)
Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.
2018-03-01
Problems arising from the work of engineers, economists, modellers, industry, computing, and scientists are mostly nonlinear equations in nature. Numerical solution of such systems is widely applied in those areas of mathematics. Over the years, there has been significant theoretical study to develop methods for solving such systems; despite these efforts, the methods developed have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ Rn, a derivative-free method via the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. The output is based on number of iterations and CPU time; different initial starting points were used to solve 40 benchmark test problems. With the aid of the squared-norm merit function and a derivative-free line-search technique, the approach yields a method for solving symmetric systems of nonlinear equations that is capable of significantly reducing the CPU time and the number of iterations, as compared to its counterparts. A comparison between the proposed method and the classical DFP update was made; the proposed method is the top performer and outperformed the existing method in almost all cases. In terms of number of iterations, out of the 40 problems solved, the proposed method solved 38 successfully (95%) while classical DFP solved 2 problems (5%). In terms of CPU time, the proposed method solved 29 out of the 40 problems given (72.5%) successfully, whereas classical DFP solved 11 (27.5%). The method is valid in terms of derivation, reliable in terms of number of iterations and accurate in terms of CPU time; thus, it is suitable and achieves the objective.
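A minimal sketch of the iteration described above (the paper's actual θ_k update and line-search rule may differ; here θ is fixed and a simple backtracking on the squared-norm merit function is used):

    import numpy as np

    def df_solve(F, x0, theta=1.0, tol=1e-8, max_iter=500):
        # Derivative-free iteration: with the inverse Hessian approximated
        # by theta*I, the search direction is d = -theta*F(x); backtracking
        # on the merit function f(x) = ||F(x)||^2 chooses the step length.
        x = np.asarray(x0, dtype=float)
        for k in range(max_iter):
            Fx = F(x)
            merit = Fx @ Fx
            if np.sqrt(merit) < tol:
                return x, k
            d = -theta * Fx
            alpha = 1.0
            while alpha > 1e-12:
                Fn = F(x + alpha * d)
                if Fn @ Fn < merit:
                    break
                alpha *= 0.5
            x = x + alpha * d
        return x, max_iter

    # Example: a small symmetric nonlinear system F(x) = 0.
    F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] + x[1]**2 - 1.0])
    sol, iters = df_solve(F, [0.5, 0.5])
    print(sol, iters)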
Strohmaier, Markus; Walk, Simon; Pöschko, Jan; Lamprecht, Daniel; Tudorache, Tania; Nyulas, Csongor; Musen, Mark A; Noy, Natalya F
2013-05-01
Traditionally, evaluation methods in the field of semantic technologies have focused on the end result of ontology engineering efforts, mainly, on evaluating ontologies and their corresponding qualities and characteristics. This focus has led to the development of a whole arsenal of ontology-evaluation techniques that investigate the quality of ontologies as a product. In this paper, we aim to shed light on the process of ontology engineering construction by introducing and applying a set of measures to analyze hidden social dynamics. We argue that especially for ontologies which are constructed collaboratively, understanding the social processes that have led to its construction is critical not only in understanding but consequently also in evaluating the ontology. With the work presented in this paper, we aim to expose the texture of collaborative ontology engineering processes that is otherwise left invisible. Using historical change-log data, we unveil qualitative differences and commonalities between different collaborative ontology engineering projects. Explaining and understanding these differences will help us to better comprehend the role and importance of social factors in collaborative ontology engineering projects. We hope that our analysis will spur a new line of evaluation techniques that view ontologies not as the static result of deliberations among domain experts, but as a dynamic, collaborative and iterative process that needs to be understood, evaluated and managed in itself. We believe that advances in this direction would help our community to expand the existing arsenal of ontology evaluation techniques towards more holistic approaches.
Crater Morphology of Engineered and Natural Impactors into Planetary Ice
NASA Astrophysics Data System (ADS)
Danner, M.; Winglee, R.; Koch, J.
2017-12-01
Crater morphology of engineered impactors, such as those proposed for the Europa Kinetic Ice Penetrator (EKIP) mission, varies drastically from that of natural impactors (i.e., asteroids and meteoroids). Previous work on natural impact craters in ice has been conducted with the intent to bound the thickness of Europa's ice crust; this work focuses on the depth, size, and compressional effects caused by various impactor designs, and the possible effects on the Europan surface. The present work details results from nine projectiles that were dropped on the Taku Glacier, AK, from an altitude of 775 meters above the surface: three rocks to simulate natural impactors, and six iterations of engineered steel and aluminum penetrator projectiles. Density measurements were taken at various locations within the craters, as well as through a cross section of the crater. Due to altitude restrictions, projectiles remained below terminal velocity. The natural/rock impact craters displayed typical cratering characteristics such as shallow, half-meter-scale depth and orthogonal compressional forcing. The engineered projectiles produced impact craters with depths averaging two meters, with crater widths matching the impactor diameters. Compressional waves from the engineered impactors propagated downwards, parallel to the direction of impact. Engineered impactors create significantly less lateral fracturing than natural impactors. Due to the EKIP landing mechanism, sampling of pristine ice closer to the lander is possible than previously thought with classical impact theory. Future work is planned to penetrate older, multiyear ice with higher velocity impacts.
NASA Technical Reports Server (NTRS)
Zimmerman, W. F.; Matijevic, J. R.
1987-01-01
Novel system engineering techniques have been developed and applied to establishing structured design and performance objectives for the Telerobotics Testbed that reduce technical risk while still allowing the testbed to demonstrate an advancement in state-of-the-art robotic technologies. To establish the appropriate tradeoff structure and balance of technology performance against technical risk, an analytical data base was developed which drew on: (1) automation/robot-technology availability projections, (2) typical or potential application mission task sets, (3) performance simulations, (4) project schedule constraints, and (5) project funding constraints. Design tradeoffs and configuration/performance iterations were conducted by comparing feasible technology/task set configurations against schedule/budget constraints as well as original program target technology objectives. The final system configuration, task set, and technology set reflected a balanced advancement in state-of-the-art robotic technologies, while meeting programmatic objectives and schedule/cost constraints.
Excel spreadsheet in teaching numerical methods
NASA Astrophysics Data System (ADS)
Djamila, Harimi
2017-09-01
One of the important objectives in teaching numerical methods to undergraduate students is to bring about comprehension of numerical method algorithms. Although manual calculation is important in understanding the procedure, it is time consuming and prone to error. This is specifically the case when considering the iteration procedure used in many numerical methods. Currently, many commercial programs are useful in teaching numerical methods, such as Matlab, Maple, and Mathematica. These are usually not user-friendly for the uninitiated. An Excel spreadsheet offers an initial level of programming, which can be used either on or off campus. The students will not be distracted by writing code. It must be emphasized that general commercial software still needs to be introduced later for more elaborate problems. This article reports on a strategy for teaching numerical methods in undergraduate engineering programs. It is directed to students, lecturers and researchers in the engineering field.
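The row-per-iteration layout such a spreadsheet teaches can be mimicked in a few lines (a generic Newton's method illustration, not tied to any particular course exercise):

    import math

    # Newton iteration for f(x) = cos(x) - x, laid out like spreadsheet rows:
    # one row per iteration, with columns for x_k, f(x_k) and the step size.
    f = lambda x: math.cos(x) - x
    df = lambda x: -math.sin(x) - 1.0

    x = 1.0
    print(f"{'k':>2} {'x_k':>12} {'f(x_k)':>12} {'step':>12}")
    for k in range(8):
        fx = f(x)
        x_new = x - fx / df(x)
        print(f"{k:>2} {x:>12.8f} {fx:>12.2e} {abs(x_new - x):>12.2e}")
        x = x_new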
Object-oriented technologies in a multi-mission data system
NASA Technical Reports Server (NTRS)
Murphy, Susan C.; Miller, Kevin J.; Louie, John J.
1993-01-01
The Operations Engineering Laboratory (OEL) at JPL is developing new technologies that can provide more efficient and productive ways of doing business in flight operations. Over the past three years, we have worked closely with the Multi-Mission Control Team to develop automation tools, providing technology transfer into operations and resulting in substantial cost savings and error reduction. The OEL development philosophy is characterized by object-oriented design, extensive reusability of code, and an iterative development model with active participation of the end users. Through our work, the benefits of object-oriented design became apparent for use in mission control data systems. Object-oriented technologies and how they can be used in a mission control center to improve efficiency and productivity are explained. The current research and development efforts in the JPL Operations Engineering Laboratory are also discussed to architect and prototype a new paradigm for mission control operations based on object-oriented concepts.
Design consideration for a nuclear electric propulsion system
NASA Technical Reports Server (NTRS)
Phillips, W. M.; Pawlik, E. V.
1978-01-01
A study is currently underway to design a nuclear electric propulsion vehicle capable of performing detailed exploration of the outer planets. Primary emphasis is on the power subsystem. Secondary emphasis includes integration into a spacecraft, and integration with the thrust subsystem and science package or payload. The results of several design iterations indicate that an all-heat-pipe system offers greater reliability, elimination of many technology development areas, and a specific weight of under 20 kg/kWe at the 400 kWe power level. The system is compatible with a single Shuttle launch and provides greater safety than could be obtained with designs using pumped liquid metal cooling. Two configurations, one with the reactor and power conversion forward on the spacecraft with the ion engines aft, and the other with reactor, power conversion and ion engines aft, were selected as dual baseline designs based on minimum weight, minimum required technology development, and maximum growth potential and flexibility.
Human Engineering of Space Vehicle Displays and Controls
NASA Technical Reports Server (NTRS)
Whitmore, Mihriban; Holden, Kritina L.; Boyer, Jennifer; Stephens, John-Paul; Ezer, Neta; Sandor, Aniko
2010-01-01
Proper attention to the integration of the human needs in the vehicle displays and controls design process creates a safe and productive environment for crew. Although this integration is critical for all phases of flight, for crew interfaces that are used during dynamic phases (e.g., ascent and entry), the integration is particularly important because of demanding environmental conditions. This panel addresses the process of how human engineering involvement ensures that human-system integration occurs early in the design and development process and continues throughout the lifecycle of a vehicle. This process includes the development of requirements and quantitative metrics to measure design success, research on fundamental design questions, human-in-the-loop evaluations, and iterative design. Processes and results from research on displays and controls; the creation and validation of usability, workload, and consistency metrics; and the design and evaluation of crew interfaces for NASA's Crew Exploration Vehicle are used as case studies.
CANISTER HANDLING FACILITY DESCRIPTION DOCUMENT
DOE Office of Scientific and Technical Information (OSTI.GOV)
J.F. Beesley
The purpose of this facility description document (FDD) is to establish requirements and associated bases that drive the design of the Canister Handling Facility (CHF), which will allow the design effort to proceed to license application. This FDD will be revised at strategic points as the design matures. This FDD identifies the requirements and describes the facility design, as it currently exists, with emphasis on attributes of the design provided to meet the requirements. This FDD is an engineering tool for design control; accordingly, the primary audience and users are design engineers. This FDD is part of an iterative design process. It leads the design process with regard to the flowdown of upper tier requirements onto the facility. Knowledge of these requirements is essential in performing the design process. The FDD follows the design with regard to the description of the facility. The description provided in this FDD reflects the current results of the design process.
Multi-Mounted X-Ray Computed Tomography.
Fu, Jian; Liu, Zhenzhong; Wang, Jingzheng
2016-01-01
Most existing X-ray computed tomography (CT) techniques work in single-mounted mode and need to scan the inspected objects one by one. This is time-consuming and not acceptable for inspection on a large scale. In this paper, we report a multi-mounted CT method and its first engineering implementation. It consists of a multi-mounted scanning geometry and the corresponding algebraic iterative reconstruction algorithm. This approach permits CT rotation scanning of multiple objects simultaneously without an increase in penetration thickness or signal crosstalk. Compared with conventional single-mounted methods, it has the potential to improve imaging efficiency and suppress artifacts from beam hardening and scatter. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a developed multi-mounted X-ray CT prototype system. We believe that this technique is of particular interest for pushing the engineering applications of X-ray CT.
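In its simplest form, the algebraic iterative reconstruction referred to above is a Kaczmarz-type row-action scheme (a generic sketch; the paper's algorithm additionally handles the multi-mounted scanning geometry):

    import numpy as np

    def art(A, b, n_sweeps=20, relax=1.0):
        # Kaczmarz/ART: cycle through the rays, projecting the image estimate
        # onto each measurement hyperplane a_i . x = b_i in turn.
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(n_sweeps):
            for i in range(A.shape[0]):
                if row_norms[i] == 0.0:
                    continue
                r = b[i] - A[i] @ x
                x += relax * r / row_norms[i] * A[i]
        return x

    # Toy system standing in for a (rays x pixels) projection matrix.
    rng = np.random.default_rng(0)
    A = rng.random((40, 16))
    x_true = rng.random(16)
    x_rec = art(A, A @ x_true, n_sweeps=200)
    print(np.max(np.abs(x_rec - x_true)))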
A Systems Engineering Approach to Architecture Development
NASA Technical Reports Server (NTRS)
Di Pietro, David A.
2014-01-01
Architecture development is conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this presentation characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise level architecture and illustrates how results of lower tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.
A Systems Engineering Approach to Architecture Development
NASA Technical Reports Server (NTRS)
Di Pietro, David A.
2015-01-01
Architecture development is often conducted prior to system concept design when there is a need to determine the best-value mix of systems that works collectively in specific scenarios and time frames to accomplish a set of mission area objectives. While multiple architecture frameworks exist, they often require use of unique taxonomies and data structures. In contrast, this paper characterizes architecture development using terminology widely understood within the systems engineering community. Using a notional civil space architecture example, it employs a multi-tier framework to describe the enterprise level architecture and illustrates how results of lower tier, mission area architectures integrate into the enterprise architecture. It also presents practices for conducting effective mission area architecture studies, including establishing the trade space, developing functions and metrics, evaluating the ability of potential design solutions to meet the required functions, and expediting study execution through the use of iterative design cycles.
Stryjewska, Agnieszka; Kiepura, Katarzyna; Librowski, Tadeusz; Lochyński, Stanisław
2013-01-01
Industrial biotechnology has been defined as the use and application of biotechnology for the sustainable processing and production of chemicals, materials and fuels. It makes use of biocatalysts such as microbial communities, whole-cell microorganisms or purified enzymes. These processes are described in this review. Drug design is an iterative process which begins when a chemist identifies a compound that displays an interesting biological profile and ends when both the activity profile and the chemical synthesis of the new chemical entity are optimized. Traditional approaches to drug discovery rely on a stepwise synthesis and screening program for large numbers of compounds to optimize activity profiles. Over the past ten to twenty years, scientists have used computer models of new chemical entities to help define activity profiles, geometries and reactivities. This article introduces inter alia the concepts of molecular modelling and contains references for further reading.
Inverse Theory for Petroleum Reservoir Characterization and History Matching
NASA Astrophysics Data System (ADS)
Oliver, Dean S.; Reynolds, Albert C.; Liu, Ning
This book is a guide to the use of inverse theory for estimation and conditional simulation of flow and transport parameters in porous media. It describes the theory and practice of estimating properties of underground petroleum reservoirs from measurements of flow in wells, and it explains how to characterize the uncertainty in such estimates. Early chapters present the reader with the necessary background in inverse theory, probability and spatial statistics. The book demonstrates how to calculate sensitivity coefficients and the linearized relationship between models and production data. It also shows how to develop iterative methods for generating estimates and conditional realizations. The text is written for researchers and graduates in petroleum engineering and groundwater hydrology and can be used as a textbook for advanced courses on inverse theory in petroleum engineering. It includes many worked examples to demonstrate the methodologies and a selection of exercises.
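The flavor of the iterative estimation machinery the book describes can be illustrated with a minimal regularized Gauss-Newton sketch that conditions reservoir parameters m to production data d. This is a generic illustration, not the book's algorithm; the forward model, its Jacobian, and the covariance matrices are hypothetical inputs supplied by the caller.

```python
import numpy as np

def gauss_newton(m0, d_obs, forward, jacobian, C_d_inv, C_m_inv, m_prior, n_iter=10):
    """Regularized Gauss-Newton iteration for history matching.

    Minimizes J(m) = 0.5*(g(m)-d)^T C_d^-1 (g(m)-d)
                   + 0.5*(m-m_prior)^T C_m^-1 (m-m_prior).
    """
    m = np.asarray(m0, dtype=float).copy()
    for _ in range(n_iter):
        r = forward(m) - d_obs            # data residual
        G = jacobian(m)                   # sensitivity coefficients dg/dm
        H = G.T @ C_d_inv @ G + C_m_inv   # Gauss-Newton Hessian approximation
        g = G.T @ C_d_inv @ r + C_m_inv @ (m - m_prior)
        m -= np.linalg.solve(H, g)        # update step
    return m
```

Conditional realizations follow the same pattern with suitably perturbed data and prior (as in randomized maximum likelihood).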
Reducing neural network training time with parallel processing
NASA Technical Reports Server (NTRS)
Rogers, James L., Jr.; Lamarsh, William J., II
1995-01-01
Obtaining optimal solutions for engineering design problems is often expensive because the process typically requires numerous iterations involving analysis and optimization programs. Previous research has shown that a near optimum solution can be obtained in less time by simulating a slow, expensive analysis with a fast, inexpensive neural network. A new approach has been developed to further reduce this time. This approach decomposes a large neural network into many smaller neural networks that can be trained in parallel. Guidelines are developed to avoid some of the pitfalls when training smaller neural networks in parallel. These guidelines allow the engineer: to determine the number of nodes on the hidden layer of the smaller neural networks; to choose the initial training weights; and to select a network configuration that will capture the interactions among the smaller neural networks. This paper presents results describing how these guidelines are developed.
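A minimal sketch of the decomposition idea, assuming one small subnetwork per output quantity and using scikit-learn's MLPRegressor as a stand-in; the paper's specific guidelines (hidden-node counts, initial weights, interaction-capturing configuration) are not reproduced here.

```python
from concurrent.futures import ProcessPoolExecutor
from sklearn.neural_network import MLPRegressor

def train_subnet(args):
    """Train one small network approximating a single analysis output."""
    X, y, n_hidden = args
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000)
    return net.fit(X, y)

def train_decomposed(X, Y, n_hidden=10):
    """Decompose a multi-output mapping into per-output subnetworks
    and train them in parallel processes."""
    jobs = [(X, Y[:, j], n_hidden) for j in range(Y.shape[1])]
    with ProcessPoolExecutor() as pool:
        return list(pool.map(train_subnet, jobs))
```

On Windows, the call to train_decomposed must sit under an `if __name__ == "__main__":` guard because worker processes are spawned rather than forked.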
Fast Formal Analysis of Requirements via "Topoi Diagrams"
NASA Technical Reports Server (NTRS)
Menzies, Tim; Powell, John; Houle, Michael E.; Kelly, John C. (Technical Monitor)
2001-01-01
Early testing of requirements can decrease the cost of removing errors in software projects. However, unless done carefully, that testing process can significantly add to the cost of requirements analysis. We show here that requirements expressed as topoi diagrams can be built and tested cheaply using our SP2 algorithm: the formal temporal properties of a large class of topoi can be proven very quickly, in time nearly linear in the number of nodes and edges in the diagram. There are two limitations to our approach. Firstly, topoi diagrams cannot express certain complex concepts such as iteration and sub-routine calls. Hence, our approach is more useful for requirements engineering than for traditional model checking domains. Secondly, our approach is better for exploring the temporal occurrence of properties than the temporal ordering of properties. Within these restrictions, we can express a useful range of concepts currently seen in requirements engineering, and a wide range of interesting temporal properties.
NASA Astrophysics Data System (ADS)
Gilchrist, Pamela O.; Carpenter, Eric D.; Gray-Battle, Asia
2014-07-01
A hybrid teacher professional development and student science, technology, mathematics, and engineering pipeline enrichment program was operated by the reporting research group for the past 3 years. Overall, the program has reached 69 students from 13 counties in North Carolina and 57 teachers from 30 counties spread over a total of five states. Quantitative analysis of oral presentations given by participants at a program event is provided. Scores from multiple raters were averaged and used as a criterion in several regression analyses. Overall it was revealed that student grade point averages, most advanced science course taken, extra quality points earned in their most advanced science course taken, and posttest scores on a pilot research design survey were significant predictors of student oral presentation scores. Rationale for findings, opportunities for future research, and implications for the iterative development of the program are discussed.
Evaluation of ITER MSE Viewing Optics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Allen, S; Lerner, S; Morris, K
2007-03-26
The Motional Stark Effect (MSE) diagnostic on ITER determines the local plasma current density by measuring the polarization angle of light resulting from the interaction of a high energy neutral heating beam and the tokamak plasma. This light signal has to be transmitted from the edge and core of the plasma to a polarization analyzer located in the port plug. The optical system should either preserve the polarization information, or it should be possible to reliably calibrate any changes induced by the optics. This LLNL Work for Others project for the US ITER Project Office (USIPO) is focused on the design of the viewing optics for both the edge and core MSE systems. Several design constraints were considered, including: image quality, lack of polarization aberrations, ease of construction and cost of mirrors, neutron shielding, and geometric layout in the equatorial port plugs. The edge MSE optics are located in ITER equatorial port 3 and view Heating Beam 5, and the core system is located in equatorial port 1 viewing heating beam 4. The current work is an extension of previous preliminary design work completed by the ITER central team (ITER resources were not available to complete a detailed optimization of this system, and the MSE was then assigned to the US). The optimization of the optical systems at this level was done with the ZEMAX optical ray tracing code. The final LLNL designs decreased the "blur" in the optical system by nearly an order of magnitude, and the polarization blur was reduced by a factor of 3. The mirror sizes were reduced with an estimated cost savings of a factor of 3. The throughput of the system was greater than or equal to that of the previous ITER design. It was found that optical ray tracing was necessary to accurately measure the throughput. Metal mirrors, while they can introduce polarization aberrations, were used close to the plasma because of the anticipated high heat, particle, and neutron loads. These mirrors formed an intermediate image that was then relayed out of the port plug with more ideal (dielectric) mirrors. Engineering models of the optics, port plug, and neutral beam geometry were also created, using the CATIA ITER models. Two video conference calls with the USIPO provided valuable design guidelines, such as the minimum distance of the first optic from the plasma. A second focus of the project was the calibration of the system. Several different techniques are proposed, both before and during plasma operation. Fixed and rotatable polarizers would be used to characterize the system in the no-plasma case. Obtaining the full modulation spectrum from the polarization analyzer allows measurement of polarization effects and also MHD plasma phenomena. Light from neutral beam interaction with deuterium gas (no plasma) has been found useful to determine the wavelength of each spatial channel. The status of the optical design for the edge (upper) and core (lower) systems is summarized in the full report. Several issues should be addressed by a follow-on study, including whether the optical labyrinth has sufficient neutron shielding and a detailed polarization characterization of actual mirrors.
NASA Astrophysics Data System (ADS)
Gardner, Elizabeth Claire
It is important that students understand not only how their local watershed functions, but also how it is being impacted by impervious surfaces. Additionally, students need experience exploring the scientific and engineering practices that are necessary for a strong STEM background. With this knowledge students can be empowered to tackle this real and local problem using engineering design, a powerful practice gaining momentum and clarity through its prominence in the recent Framework for K-12 Science Education. Twenty classes of suburban sixth-graders participated in a new five-week Watershed Engineering Design Unit taught by their regular science teachers. Students engaged in scientific inquiry to learn about the structure, function, and health of their local watersheds, focusing on the effects of impervious surfaces. In small groups, students used the engineering design process to propose solutions to lessen the impact of runoff from their school campuses. The goal of this evaluation was to determine the effectiveness of the curriculum in terms of student gains in understanding of (1) watershed function, (2) the impact of impervious surfaces, and (3) the engineering design process. To determine the impact of this curriculum on their learning, students took multiple-choice pre- and post-assessments made up of items covering the three categories above. This data was analyzed for statistical significance using a lower-tailed paired sample t-test. All three objectives showed statistically significant learning gains and the results were used to recommend improvements to the curriculum and the assessment instrument for future iterations.
Flutter optimization in fighter aircraft design
NASA Technical Reports Server (NTRS)
Triplett, W. E.
1984-01-01
The efficient design of aircraft structure involves a series of compromises among various engineering disciplines. These compromises are necessary to ensure the best overall design. To effectively reconcile the various technical constraints requires a number of design iterations, with the accompanying long elapsed time. Automated procedures can reduce the elapsed time, improve productivity, and hold the promise of optimum designs which may be missed by batch processing. Several examples are given of optimization applications including aeroelastic constraints, with particular attention to the success or failure of each example and the lessons learned. The final two applications were made recently.
Development Testing and Subsequent Failure Investigation of a Spring Strut Mechanism
NASA Technical Reports Server (NTRS)
Dervan, Jared; Robertson, Brandon; Staab, Lucas; Culberson, Michael; Pellicciotti, Joseph
2014-01-01
The NASA Engineering and Safety Center (NESC) and Lockheed Martin (LM) performed random vibration testing on a single spring strut development unit to assess its ability to withstand qualification level random vibration environments. Failure of the strut while exposed to random vibration resulted in a follow-on failure investigation, design changes, and additional development tests. This paper focuses on the results of the failure investigations referenced in detail in the NESC final report [1] including identified lessons learned to aid in future design iterations of the spring strut and to help other mechanism developers avoid similar pitfalls.
NASA Technical Reports Server (NTRS)
Chapman, Jeffryes W.; Lavelle, Thomas M.; May, Ryan D.; Litt, Jonathan S.; Guo, Ten-Huei (OA)
2014-01-01
A simulation toolbox has been developed for the creation of both steady-state and dynamic thermodynamic software models. This presentation describes the Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS), which combines generic thermodynamic and controls modeling libraries with a numerical iterative solver to create a framework for the development of thermodynamic system simulations, such as gas turbine engines. The objective of this presentation is to present an overview of T-MATS, the theory used in the creation of the module sets, and a possible propulsion simulation architecture.
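T-MATS itself is a MATLAB/Simulink toolbox, so the fragment below is only a hedged Python illustration of the kind of numerical iterative solver it couples to component models: a Newton iteration with a finite-difference Jacobian that drives a vector of balance residuals (e.g., shaft power balance, mass-flow continuity) to zero.

```python
import numpy as np

def newton_solve(residuals, x0, tol=1e-8, max_iter=50, eps=1e-6):
    """Newton iteration on a vector of thermodynamic balance residuals."""
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(max_iter):
        r = residuals(x)
        if np.linalg.norm(r) < tol:
            return x
        J = np.empty((r.size, x.size))        # finite-difference Jacobian
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (residuals(xp) - r) / eps
        x -= np.linalg.solve(J, r)            # Newton update
    raise RuntimeError("balance solver did not converge")
```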
Advanced control concepts [for shuttle ascent vehicles]
NASA Technical Reports Server (NTRS)
Sharp, J. B.; Coppey, J. M.
1973-01-01
The problems of excess control devices and insufficient trim control capability on shuttle ascent vehicles were investigated. The trim problem is solved at all time points of interest using Lagrangian multipliers and a Simplex based iterative algorithm developed as a result of the study. This algorithm has the capability to solve any bounded linear problem with physically realizable constraints, and to minimize any piecewise differentiable cost function. Both solution methods also automatically distribute the command torques to the control devices. It is shown that trim requirements are unrealizable if only the orbiter engines and the aerodynamic surfaces are used.
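A small worked example of the bounded linear trim problem, using scipy's linprog as a modern stand-in for the paper's Simplex-based algorithm; the effectiveness matrix, commanded torques, and actuator limits below are invented for illustration, and the piecewise-linear |u| cost is handled with the standard split u = u+ - u-.

```python
import numpy as np
from scipy.optimize import linprog

B = np.array([[1.0, 0.5, -0.3],     # torque produced per unit deflection
              [0.0, 1.0,  0.8]])
tau_cmd = np.array([0.4, -0.2])     # commanded trim torques
lim = np.array([1.0, 0.5, 0.7])     # symmetric deflection limits

# Minimize sum |u| subject to B u = tau_cmd, via u = up - un with up, un >= 0.
c = np.ones(6)
A_eq = np.hstack([B, -B])
bounds = [(0.0, l) for l in np.concatenate([lim, lim])]
res = linprog(c, A_eq=A_eq, b_eq=tau_cmd, bounds=bounds)
u = res.x[:3] - res.x[3:]
print(u, B @ u)                     # deflections and achieved torques
```

If the equality constraints cannot be met within the bounds, linprog reports infeasibility, the analogue of the unrealizable trim case noted in the abstract.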
Natural Vibration Analysis of Clamped Rectangular Orthotropic Plates
NASA Astrophysics Data System (ADS)
Dalaei, M.; Kerr, A. D.
The natural vibrations of clamped rectangular orthotropic plates are analyzed using the extended Kantorovich method. The developed iterative scheme converges very rapidly to the final result. The obtained natural frequencies are evaluated for a square plate made of Kevlar 49 Epoxy and the obtained results are compared with those published by Kanazawa and Kawai, and by Leissa. The agreement was found to be very close. As there are no exact analytical solutions for clamped rectangular plates, the generated closed form expression for the natural modes, and the corresponding natural frequencies, are very suitable for use in engineering analyses.
Shaikh, Tanvir R; Gao, Haixiao; Baxter, William T; Asturias, Francisco J; Boisset, Nicolas; Leith, Ardean; Frank, Joachim
2009-01-01
This protocol describes the reconstruction of biological molecules from the electron micrographs of single particles. Computation here is performed using the image-processing software SPIDER and can be managed using a graphical user interface, termed the SPIDER Reconstruction Engine. Two approaches are described to obtain an initial reconstruction: random-conical tilt and common lines. Once an existing model is available, reference-based alignment can be used, a procedure that can be iterated. Also described is supervised classification, a method to look for homogeneous subsets when multiple known conformations of the molecule may coexist. PMID:19180078
Sizing of complex structure by the integration of several different optimal design algorithms
NASA Technical Reports Server (NTRS)
Sobieszczanski, J.
1974-01-01
Practical design of large-scale structures can be accomplished with the aid of the digital computer by bringing together in one computer program algorithms of nonlinear mathematical programming and optimality criteria with weight-strength and other so-called engineering methods. Applications of this approach to aviation structures are discussed with a detailed description of how the total problem of structural sizing can be broken down into subproblems for best utilization of each algorithm and for efficient organization of the program into iterative loops. Typical results are examined for a number of examples.
Wei, Qinglai; Liu, Derong; Lin, Qiao
In this paper, a novel local value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon optimal control problems for discrete-time nonlinear systems. The focuses of this paper are to study admissibility properties and the termination criteria of discrete-time local value iteration ADP algorithms. In the discrete-time local value iteration ADP algorithm, the iterative value functions and the iterative control laws are both updated in a given subset of the state space in each iteration, instead of the whole state space. For the first time, admissibility properties of iterative control laws are analyzed for the local value iteration ADP algorithm. New termination criteria are established, which terminate the iterative local ADP algorithm with an admissible approximate optimal control law. Finally, simulation results are given to illustrate the performance of the developed algorithm.
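A tabular sketch of the "local" idea for a finite gridded state space: the value function and greedy control law are updated only on a chosen subset S in each sweep. The discount factor, shapes, and fixed sweep count are illustrative; the paper's admissibility analysis and termination criteria are not reproduced here.

```python
import numpy as np

def local_value_iteration(P, U, S, gamma=0.95, sweeps=100):
    """P: list of (n x n) transition matrices, one per control;
    U: (n x m) stage-cost array; S: indices of the local state subset."""
    n, m = P[0].shape[0], len(P)
    V = np.zeros(n)
    for _ in range(sweeps):
        Q = np.stack([U[:, u] + gamma * (P[u] @ V) for u in range(m)], axis=1)
        V[S] = Q[S].min(axis=1)          # update values only on the subset S
    return V, Q.argmin(axis=1)           # value function and greedy control law
```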
An evolutionary metabolic engineering approach for enhancing lipogenesis in Yarrowia lipolytica.
Liu, Leqian; Pan, Anny; Spofford, Caitlin; Zhou, Nijia; Alper, Hal S
2015-05-01
Lipogenic organisms provide an ideal platform for biodiesel and oleochemical production. Through our previous rational metabolic engineering efforts, lipogenesis titers in Yarrowia lipolytica were significantly enhanced. However, the resulting strain still suffered from decreased biomass generation rates. Here, we employ a rapid evolutionary metabolic engineering approach linked with a floating cell enrichment process to improve lipogenesis rates, titers, and yields. Through this iterative process, we were able to ultimately improve yields from our prior strain by 55% to achieve production titers of 39.1 g/L with upwards of 76% of the theoretical maximum yield of conversion. Isolated cells were saturated with up to 87% lipid content. An average specific productivity of 0.56 g/L/h was achieved with a maximum instantaneous specific productivity of 0.89 g/L/h during the lipid production phase in fermentation. Genomic sequencing of the evolved strains revealed a link between the improved phenotype and a decrease/loss-of-function mutation of succinate semialdehyde dehydrogenase, uga2, suggesting the importance of gamma-aminobutyric acid assimilation in lipogenesis. This linkage was validated through gene deletion experiments. This work presents an improved host strain that can serve as a platform for efficient oleochemical production. Copyright © 2015 International Metabolic Engineering Society. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Holota, P.; Nesvadba, O.
2016-12-01
The mathematical apparatus currently applied for geopotential determination is undoubtedly quite developed. This concerns numerical methods as well as methods based on classical analysis, and both classical and weak solution concepts. Nevertheless, the nature of the real surface of the Earth has its specific features and is still rather complex. The aim of this paper is to consider these limits and to seek a balance between the performance of an apparatus developed for a surface of the Earth smoothed (or simplified) up to a certain degree and an iteration procedure used to bridge the difference between the real and smoothed topography. The approach is applied to the solution of the linear gravimetric boundary value problem in geopotential determination. As in other branches of engineering and mathematical physics, a transformation of coordinates is used that offers a trade-off between the complexity of the boundary and the complexity of the coefficients of the partial differential equation governing the solution. As examples, the use of modified spherical and also modified ellipsoidal coordinates for the transformation of the solution domain is discussed. The complexity of the boundary is then reflected in the structure of Laplace's operator, and this effect is taken into account by means of successive approximations. The structure of the respective iteration steps is derived and analyzed. On the level of individual iteration steps, attention is paid to the representation of the solution in terms of function bases or in terms of Green's functions. The convergence of the procedure and the efficiency of its use for geopotential determination are discussed.
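Schematically (a hedged reading, not the paper's exact operator splitting), the transformed Laplacian can be split into a part associated with the smoothed geometry plus a correction, and the correction moved to the right-hand side of each iteration step:

```latex
\Delta = \Delta_0 + \delta L, \qquad
\Delta_0\, u_{k+1} = f - \delta L\, u_k, \quad k = 0, 1, 2, \dots
```

so every iterate solves a boundary value problem on the simplified domain, and the geometric complexity enters only through the correction term applied to the previous iterate.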
Myria: Scalable Analytics as a Service
NASA Astrophysics Data System (ADS)
Howe, B.; Halperin, D.; Whitaker, A.
2014-12-01
At the UW eScience Institute, we're working to empower non-experts, especially in the sciences, to write and use data-parallel algorithms. To this end, we are building Myria, a web-based platform for scalable analytics and data-parallel programming. Myria's internal model of computation is the relational algebra extended with iteration, such that every program is inherently data-parallel, just as every query in a database is inherently data-parallel. But unlike databases, iteration is a first class concept, allowing us to express machine learning tasks, graph traversal tasks, and more. Programs can be expressed in a number of languages and can be executed on a number of execution environments, but we emphasize a particular language called MyriaL that supports both imperative and declarative styles and a particular execution engine called MyriaX that uses an in-memory column-oriented representation and asynchronous iteration. We deliver Myria over the web as a service, providing an editor, performance analysis tools, and catalog browsing features in a single environment. We find that this web-based "delivery vector" is critical in reaching non-experts: they are insulated from the irrelevant technical effort associated with installation, configuration, and resource management. The MyriaX backend, one of several execution runtimes we support, is a main-memory, column-oriented, RDBMS-on-the-worker system that supports cyclic data flows as a first-class citizen and has been shown to outperform competitive systems on 100-machine cluster sizes. I will describe the Myria system, give a demo, and present some new results in large-scale oceanographic microbiology.
LOW-ENGINE-FRICTION TECHNOLOGY FOR ADVANCED NATURAL-GAS RECIPROCATING ENGINES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Victor W. Wong; Tian Tian; Grant Smedley
2004-09-30
This program aims at improving the efficiency of advanced natural-gas reciprocating engines (ANGRE) by reducing piston/ring assembly friction without major adverse effects on engine performance, such as increased oil consumption and emissions. An iterative process of simulation, experimentation, and analysis is being followed towards achieving the goal of demonstrating a complete optimized low-friction engine system. To date, a detailed set of piston/ring dynamic and friction models have been developed and applied that illustrated the fundamental relationships between design parameters and friction losses. Various low-friction strategies and ring-design concepts have been explored, and engine experiments have been done on a full-scale Waukesha VGF F18 in-line 6 cylinder power generation engine rated at 370 kW at 1800 rpm. Current accomplishments include designing and testing ring-packs using a subtle top-compression-ring profile (skewed barrel design), lowering the tension of the oil-control ring, and employing a negative twist to the scraper ring to control oil consumption. Initial test data indicate that piston ring-pack friction was reduced by 35% by lowering the oil-control ring tension alone, which corresponds to a 1.5% improvement in fuel efficiency. Although small in magnitude, this improvement represents a first step towards anticipated aggregate improvements from other strategies. Other ring-pack design strategies to lower friction have been identified, including a reduced axial distance between the top two rings and a tilted top-ring groove. Some of these configurations have been tested and some await further evaluation. Colorado State University performed the tests and Waukesha Engine Dresser, Inc. provided technical support. Key elements of the continuing work include optimizing the engine piston design, application of surface and material developments in conjunction with improved lubricant properties, system modeling and analysis, and continued technology demonstration in an actual full-sized reciprocating natural-gas engine.
Conversion of low BMEP 4-cylinder to high BMEP 2-cylinder large bore natural gas engine
NASA Astrophysics Data System (ADS)
Ladd, John
There are more than 6,000 integral compressor engines in use on US natural gas pipelines, operating 24 hours a day, 365 days a year. Many of these engines have operated continuously for more than 50 years, with little to no modification. Due to recent emission regulations at the local, state, and federal levels, much of this aging infrastructure requires retrofit technology to remain within compliance. The Engines and Energy Conversion Laboratory was founded to test these retrofit technologies on its large bore engine testbed (LBET). The LBET is a low brake mean effective pressure (BMEP) Cooper Bessemer GMVTF-4. Newer GMV models, constructed in the 1980s, utilize turbocharging to increase the output power, achieving BMEPs nearly double that of the LBET. To expand the lab's testing capability and to reduce the LBET's running cost, material testing, in-depth modeling, and on-engine testing were completed to evaluate the feasibility of uprating the LBET to a high BMEP two cylinder engine. Because of the LBET's age, the crankcase material properties were not known. Material samples were removed from the engine to conduct an in-depth material analysis. It was found that the crankcase was cast from a specific grade of gray iron, class 25 Meehanite. A complete three dimensional model of the LBET's crankcase and power cylinders was created. Using historical engine data, the force inputs were created for a finite element analysis model of the LBET to determine the regions of high stress. The areas of high stress were instrumented with strain gauges to iterate and validate the model's findings. Several test cases were run at the high and intermediate BMEP engine conditions. The model found that at the high BMEP conditions the LBET would operate at the fatigue limit of the class 25 Meehanite, with no factor of safety, but the intermediate cases were deemed acceptable.
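For reference, brake mean effective pressure normalizes brake power by displacement and firing frequency; for a four-stroke engine (two crankshaft revolutions per cycle),

```latex
\mathrm{BMEP} = \frac{P_b \, n_r}{V_d \, N}, \qquad n_r = 2,
```

where P_b is brake power, V_d total displacement, and N crankshaft speed in revolutions per second. This is the standard definition, not a relation specific to the GMV analysis above.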
An Object Model for a Rocket Engine Numerical Simulator
NASA Technical Reports Server (NTRS)
Mitra, D.; Bhalla, P. N.; Pratap, V.; Reddy, P.
1998-01-01
Rocket Engine Numerical Simulator (RENS) is a package of software which numerically simulates the behavior of a rocket engine. Different parameters of the components of an engine are the input to these programs. Depending on these given parameters, the programs output the behaviors of those components. These behavioral values are then used to guide the design of, or to diagnose, a model of a rocket engine "built" by a composition of these programs simulating different components of the engine system. In order to use this software package effectively, one needs a flexible model of a rocket engine into which the programs simulating different components can be plugged as a modular representation. Our project is to develop an object based model of such an engine system. We are following an iterative and incremental approach in developing the model, as is the standard practice in the area of object oriented design and analysis of software. This process involves three stages: object modeling to represent the components and sub-components of a rocket engine, dynamic modeling to capture the temporal and behavioral aspects of the system, and functional modeling to represent the transformational aspects. This article reports on the first phase of our activity under a grant (RENS) from the NASA Lewis Research Center. We have utilized Rumbaugh's object modeling technique and UML for this purpose. The classes of a rocket engine propulsion system are developed and some of them are presented in this report. The next step, developing a dynamic model for RENS, is also touched upon here. In this paper we will also discuss the advantages of using object-based modeling for developing this type of integrated simulator over other tools like an expert system shell or a procedural language, e.g., FORTRAN, which have been tried for this purpose in the past.
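A hedged sketch of the kind of object model the article describes: component classes whose simulations are composed into an engine system. All class and method names here are illustrative placeholders, not the RENS design.

```python
from abc import ABC, abstractmethod

class Component(ABC):
    """Base class for engine components (pumps, turbines, nozzles, ...)."""

    def __init__(self, name, **parameters):
        self.name = name
        self.parameters = parameters

    @abstractmethod
    def simulate(self, state: dict) -> dict:
        """Map upstream flow conditions to downstream conditions."""

class Pump(Component):
    def simulate(self, state):
        # toy behavior: raise pressure by a fixed design parameter
        rise = self.parameters["pressure_rise"]
        return {**state, "pressure": state["pressure"] + rise}

class Engine:
    """An engine 'built' by composing component simulations in flow order."""

    def __init__(self, components):
        self.components = components

    def simulate(self, inlet_state):
        state = inlet_state
        for component in self.components:
            state = component.simulate(state)
        return state
```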
NASA Astrophysics Data System (ADS)
Chegwidden, O.; Nijssen, B.; Pytlak, E.
2017-12-01
Any model simulation has errors, including errors in meteorological data, process understanding, model structure, and model parameters. These errors may express themselves as bias, timing lags, and differences in sensitivity between the model and the physical world. The evaluation and handling of these errors can greatly affect the legitimacy, validity and usefulness of the resulting scientific product. In this presentation we will discuss a case study of handling and communicating model errors during the development of a hydrologic climate change dataset for the Pacific Northwestern United States. The dataset was the result of a four-year collaboration between the University of Washington, Oregon State University, the Bonneville Power Administration, the United States Army Corps of Engineers and the Bureau of Reclamation. Along the way, the partnership facilitated the discovery of multiple systematic errors in the streamflow dataset. Through an iterative review process, some of those errors could be resolved. For the errors that remained, honest communication of the shortcomings promoted the dataset's legitimacy. Thoroughly explaining errors also improved ways in which the dataset would be used in follow-on impact studies. Finally, we will discuss the development of the "streamflow bias-correction" step often applied to climate change datasets that will be used in impact modeling contexts. We will describe the development of a series of bias-correction techniques through close collaboration among universities and stakeholders. Through that process, both universities and stakeholders learned about the others' expectations and workflows. This mutual learning process allowed for the development of methods that accommodated the stakeholders' specific engineering requirements. The iterative revision process also produced a functional and actionable dataset while preserving its scientific merit. We will describe how encountering earlier techniques' pitfalls allowed us to develop improved methods for scientists and practitioners alike.
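The presentation does not name a specific technique, but empirical quantile mapping is a common streamflow bias correction; a minimal sketch, assuming simulated and observed flow arrays for a historical overlap period:

```python
import numpy as np

def quantile_map(sim_hist, obs_hist, sim_future):
    """Empirical quantile mapping: replace each simulated flow with the
    observed flow at the same quantile of the historical distribution."""
    q = np.linspace(0.0, 1.0, 101)
    sim_q = np.quantile(sim_hist, q)          # simulated historical quantiles
    obs_q = np.quantile(obs_hist, q)          # observed historical quantiles
    ranks = np.interp(sim_future, sim_q, q)   # quantile of each future flow
    return np.interp(ranks, q, obs_q)         # mapped, bias-corrected flow
```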
Designing magnetic systems for reliability
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitzenroeder, P.J.
1991-01-01
Designing magnetic systems is an iterative process in which the requirements are set, a design is developed, materials and manufacturing processes are defined, interrelationships with the various elements of the system are established, engineering analyses are performed, and fault modes and effects are studied. Reliability requires that all elements of the design process, from the seemingly most straightforward, such as utilities connection design and implementation, to the most sophisticated, such as advanced finite element analyses, receive a balanced and appropriate level of attention. D.B. Montgomery's study of magnet failures has shown that magnet failures tend not to occur in the most intensively engineered areas, but are associated with insulation, leads, and unanticipated conditions. TFTR, JET, JT-60, and PBX are all major tokamaks which have suffered loss of reliability due to water leaks. Similarly, the majority of causes of loss of magnet reliability at PPPL have not been in the sophisticated areas of the design but are due to difficulties associated with coolant connections, bus connections, and external structural connections. Looking towards the future, the major next devices such as BPX and ITER are more costly and complex than any of their predecessors and are pressing the bounds of operating levels, materials, and fabrication. Emphasis on reliability is a must as the fusion program enters a phase with fewer, but very costly, devices and the goal of reaching a reactor prototype stage in the next two or three decades. This paper reviews some of the magnet reliability issues which PPPL has faced over the years, the lessons learned from them, and magnet design and fabrication practices which have been found to contribute to magnet reliability.
Integrated tokamak modeling: when physics informs engineering and research planning
NASA Astrophysics Data System (ADS)
Poli, Francesca
2017-10-01
Simulations that integrate virtually all the relevant engineering and physics aspects of a real tokamak experiment are a powerful tool for experimental interpretation, model validation, and planning for both present and future devices. This tutorial will guide the reader through the building blocks of an "integrated" tokamak simulation, such as magnetic flux diffusion; thermal, momentum, and particle transport; external heating and current drive sources; and wall particle sources and sinks. Emphasis is given to the connection and interplay between external actuators and plasma response, and between the slow time scales of current diffusion and the fast time scales of transport, and to how reduced and high-fidelity models can contribute to simulating a whole device. To illustrate the potential and limitations of integrated tokamak modeling for discharge prediction, a helium plasma scenario for the ITER pre-nuclear phase is taken as an example. This scenario presents challenges because it requires core-edge integration and advanced models for the interaction between waves and fast ions, which are subject to a limited experimental database for validation and guidance. Starting from a scenario obtained by re-scaling parameters from the demonstration inductive "ITER baseline", it is shown how self-consistent simulations that encompass both core and edge plasma regions, as well as high-fidelity heating and current drive source models, are needed to set constraints on the density, magnetic field, and heating scheme. This tutorial aims at demonstrating how integrated modeling, when used with an adequate level of criticism, can not only support the design of operational scenarios, but also help to assess the limitations and gaps in the available models, thus indicating where improved modeling tools are required and how present experiments can help their validation and inform research planning. Work supported by DOE under DE-AC02-09CH11466.
Human Factors Interface with Systems Engineering for NASA Human Spaceflights
NASA Technical Reports Server (NTRS)
Wong, Douglas T.
2009-01-01
This paper summarizes the past and present successes of the Habitability and Human Factors Branch (HHFB) at NASA Johnson Space Center's Space Life Sciences Directorate (SLSD) in including the Human-As-A-System (HAAS) model in many NASA programs, and what steps are to be taken to integrate the Human-Centered Design Philosophy (HCDP) into NASA's Systems Engineering (SE) process. The HAAS model stresses that systems are ultimately designed for humans; humans should therefore be considered a system within the systems. The model therefore places strong emphasis on human factors engineering. Since 1987, the HHFB has engaged with many major NASA programs with much success. The HHFB helped create NASA Standard 3000 (a human factors engineering practice guide) and the Human Systems Integration Requirements document. These efforts resulted in the HAAS model being included in many NASA programs. As an example, the HAAS model has been successfully introduced into the programmatic and systems engineering structures of the International Space Station Program (ISSP). Success in the ISSP led other NASA programs to recognize the importance of the HAAS concept. Also due to this success, the HHFB helped update NASA's Systems Engineering Handbook in December 2007 to include HAAS as a recommended practice. Nonetheless, the HAAS model has yet to become an integral part of the NASA SE process. Besides continuing to integrate HAAS into current and future NASA programs, the HHFB will investigate incorporating the Human-Centered Design Philosophy (HCDP) into the NASA SE Handbook. The HCDP goes further than the HAAS model by emphasizing a holistic and iterative human-centered systems design concept.
Temperature Dependent Modal Test/Analysis Correlation of X-34 Fastrac Composite Rocket Nozzle
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Brunty, Joseph A. (Technical Monitor)
2001-01-01
A unique high temperature modal test and model correlation/update program has been performed on the composite nozzle of the FASTRAC engine for the NASA X-34 Reusable Launch Vehicle. The program was required to provide an accurate high temperature model of the nozzle for incorporation into the engine system structural dynamics model for loads calculation; this model is significantly different from the ambient case because of the large decrease in composite stiffness properties with heating. The high-temperature modal test was performed during a hot-fire test of the nozzle. Previously, a series of high fidelity modal tests and finite element model correlation of the nozzle in a free-free configuration had been performed. This model was then attached to a modal-test verified model of the engine hot-fire test stand, and the ambient system mode shapes were identified. A reduced set of accelerometers was then attached to the nozzle, the engine was fired for full duration, and the frequency peaks corresponding to the ambient nozzle modes were individually isolated and tracked as they decreased during the test. To update the finite-element model of the nozzle to these frequency curves, the percentage differences of the anisotropic composite moduli due to temperature variation from ambient, which had been used in the initial modeling and which were obtained by small sample coupon testing, were multiplied by an iteratively determined constant factor. These new properties were used to create high-temperature nozzle models corresponding to 10 second engine operation increments and tied into the engine system model for loads determination.
Why and how Mastering an Incremental and Iterative Software Development Process
NASA Astrophysics Data System (ADS)
Dubuc, François; Guichoux, Bernard; Cormery, Patrick; Mescam, Jean Christophe
2004-06-01
One of the key issues regularly mentioned in the current software crisis of the space domain is related to the software development process that must be performed while the system definition is not yet frozen. This is especially true for complex systems like launchers or space vehicles. Several more or less mature solutions are under study by EADS SPACE Transportation and are going to be presented in this paper. The basic principle is to develop the software through an iterative and incremental process instead of the classical waterfall approach, with the following advantages:
- It permits systematic management and incorporation of requirements changes over the development cycle with a minimal cost. As far as possible, the most dimensioning requirements are analyzed and developed in priority, for validating the architecture concept very early without the details.
- A software prototype is very quickly available. It improves the communication between system and software teams, as it enables checking very early and efficiently the common understanding of the system requirements.
- It allows the software team to complete a whole development cycle very early, and thus to become quickly familiar with the software development environment (methodology, technology, tools...). This is particularly important when the team is new, or when the environment has changed since the previous development. In any case, it improves the learning curve of the software team considerably.
These advantages seem very attractive, but mastering an iterative development process efficiently is not so easy and induces a number of difficulties, such as:
- How to freeze one configuration of the system definition as a development baseline, while most of the system requirements are completely and naturally unstable?
- How to distinguish stable/unstable and dimensioning/standard requirements?
- How to plan the development of each increment?
- How to link classical waterfall development milestones with an iterative approach: when should the classical reviews be performed: Software Specification Review? Preliminary Design Review? Critical Design Review? Code Review? Etc.
Several solutions envisaged or already deployed by EADS SPACE Transportation will be presented, both from a methodological and technological point of view:
- How the MELANIE EADS ST internal methodology improves the concurrent engineering activities between GNC, software and simulation teams in a very iterative and reactive way.
- How the CMM approach can help by better formalizing Requirements Management and Planning processes.
- How Automatic Code Generation with "certified" tools (SCADE) can still dramatically shorten the development cycle.
The presentation will conclude with an evaluation of the cost and planning reduction based on a pilot application, comparing figures on two similar projects: one with the classical waterfall process, the other with an iterative and incremental approach.
Design Study of Propulsion and Drive Systems for the Large Civil TiltRotor (LCTR2) Rotorcraft
NASA Technical Reports Server (NTRS)
Robuck, Mark; Wilkerson, Joseph; Zhang, Yiyi; Snyder, Christopher A.; Vonderwell, Daniel
2013-01-01
Boeing, Rolls Royce, and NASA have worked together to complete a parametric sizing study for NASA's Large Civil Tilt Rotor (LCTR2) concept, 2nd iteration. Vehicle gross weight and fuel usage were evaluated as propulsion and drive system characteristics were varied to maximize the benefit of reduced rotor tip speed during cruise conditions. The study examined different combinations of engine and gearbox variability to achieve rotor cruise tip speed reductions down to 54% of the hover tip speed. Previous NASA studies identified that a 54% rotor speed reduction in cruise minimizes vehicle gross weight and fuel burn. The LCTR2 was the study baseline for initial sizing. This study included rotor tip speed ratios (cruise to hover) of 100%, 77% and 54% at different combinations of engine RPM and gearbox speed reductions, which were analyzed to achieve the lightest overall vehicle gross weight (GW) at the chosen rotor tip speed ratio. Different engine and gearbox technology levels are applied, ranging from commercial off-the-shelf (COTS) engines and gearbox technology to entry-in-service (EIS) dates of 2025 and 2035, to assess the benefits of advanced technology on vehicle gross weight and fuel burn. Interim results were previously reported [1]. This technical paper extends that work and summarizes the final study results, including additional engine and drive system study accomplishments. New vehicle sizing data is presented for engine performance at a single operating speed with a multispeed drive system. Modeling details for LCTR2 vehicle sizing and the subject engine and drive sub-systems are presented as well. This study was conducted in support of NASA's Fundamental Aeronautics Program, Subsonic Rotary Wing Project.
NASA Astrophysics Data System (ADS)
Ghanbari, M.; Najafi, G.; Ghobadian, B.; Mamat, R.; Noor, M. M.; Moosavian, A.
2015-12-01
This paper studies the use of an adaptive neuro-fuzzy inference system (ANFIS) to predict the performance parameters and exhaust emissions of a diesel engine operating on nano-diesel blended fuels. In order to predict the engine parameters, the whole experimental data set was randomly divided into training and testing data. For ANFIS modelling, the Gaussian curve membership function (gaussmf) and 200 training epochs (iterations) were found to be the optimum choices for the training process. The results demonstrate that ANFIS is capable of predicting the diesel engine performance and emissions. In the experimental step, carbon nanotubes (CNT) (40, 80 and 120 ppm) and nano silver particles (40, 80 and 120 ppm) with nanostructure were prepared and added as additives to the diesel fuel. A six-cylinder, four-stroke diesel engine was fuelled with these new blended fuels and operated at different engine speeds. Experimental test results indicated that adding nano particles to diesel fuel increased diesel engine power and torque output. For nano-diesel it was found that the brake specific fuel consumption (bsfc) decreased compared to the neat diesel fuel. The results proved that with an increase of nano particle concentrations (from 40 ppm to 120 ppm) in diesel fuel, CO2 emission increased. CO emission with diesel fuel containing nano-particles was significantly lower compared to pure diesel fuel. UHC emission decreased with silver nano-diesel blended fuel, while it increased with fuels that contain CNT nano particles. The trend of NOx emission was the inverse of the UHC emission: with nano particles added to the blended fuels, NOx increased compared to the neat diesel fuel. The tests revealed that silver and CNT nano particles can be used as additives in diesel fuel to improve combustion of the fuel and reduce the exhaust emissions significantly.
Value Iteration Adaptive Dynamic Programming for Optimal Control of Discrete-Time Nonlinear Systems.
Wei, Qinglai; Liu, Derong; Lin, Hanquan
2016-03-01
In this paper, a value iteration adaptive dynamic programming (ADP) algorithm is developed to solve infinite horizon undiscounted optimal control problems for discrete-time nonlinear systems. The present value iteration ADP algorithm permits an arbitrary positive semi-definite function to initialize the algorithm. A novel convergence analysis is developed to guarantee that the iterative value function converges to the optimal performance index function. Initialized by different initial functions, it is proven that the iterative value function will be monotonically nonincreasing, monotonically nondecreasing, or nonmonotonic and will converge to the optimum. In this paper, for the first time, the admissibility properties of the iterative control laws are developed for value iteration algorithms. It is emphasized that new termination criteria are established to guarantee the effectiveness of the iterative control laws. Neural networks are used to approximate the iterative value function and compute the iterative control law, respectively, for facilitating the implementation of the iterative ADP algorithm. Finally, two simulation examples are given to illustrate the performance of the present method.
SSME Condition Monitoring Using Neural Networks and Plume Spectral Signatures
NASA Technical Reports Server (NTRS)
Hopkins, Randall; Benzing, Daniel
1996-01-01
For a variety of reasons, condition monitoring of the Space Shuttle Main Engine (SSME) has become an important concern for both ground tests and in-flight operation. The complexities of the SSME suggest that active, real-time condition monitoring should be performed to avoid large-scale or catastrophic failure of the engine. In 1986, the SSME became the subject of a plume emission spectroscopy project at NASA's Marshall Space Flight Center (MSFC). Since then, plume emission spectroscopy has recorded many nominal tests and the qualitative spectral features of the SSME plume are now well established. Significant discoveries made with both wide-band and narrow-band plume emission spectroscopy systems led MSFC to develop the Optical Plume Anomaly Detection (OPAD) system. The OPAD system is designed to provide condition monitoring of the SSME during ground-level testing. The operational health of the engine is assessed through the acquisition of spectrally resolved plume emissions and the subsequent identification of abnormal emission levels in the plume indicative of engine erosion or component failure. Eventually, OPAD, or a derivative of the technology, could find its way onto an actual space vehicle and provide in-flight engine condition monitoring. This technology step, however, will require miniaturized hardware capable of processing plume spectral data in real time. An objective of OPAD condition monitoring is to determine how much of an element is present in the SSME plume. The basic premise is that by knowing the element and its concentration, this information can be related back to the health of components within the engine. For example, an abnormal amount of silver in the plume might signify increased wear or deterioration of a particular bearing in the engine. Once an anomaly is identified, the engine could be shut down before catastrophic failure occurs. Currently, element concentrations in the plume are determined iteratively with the help of a non-linear computer code called SPECTRA, developed at the USAF Arnold Engineering Development Center. Ostensibly, the code produces intensity versus wavelength plots (i.e., spectra) when inputs such as element concentrations, reaction temperature, and reaction pressure are provided. However, in order to provide a higher-level analysis, element concentration is not specified explicitly as an input. Instead, two quantum variables, number density and broadening parameter, are used. Past experience with OPAD data analysis has revealed that the region of primary interest in any SSME plume spectrum lies in the wavelength band of 3300 Å to 4330 Å. Experience has also revealed that some elements, such as iron, cobalt and nickel, cause multiple peaks over the chosen wavelength range, whereas other elements (magnesium, for example) have a few, relatively isolated peaks. Iteration with SPECTRA as a part of OPAD data analysis is an incredibly labor intensive task and not one to be performed by hand. What is really needed is the "inverse" of the computer code, but the mathematical model for the inverse mapping is tenuous at best. However, building generalized models based upon known input/output mappings while ignoring details of the governing physical model is possible using neural networks. Thus the objective of the research project described herein was to quickly and accurately predict combustion temperature and element concentrations (i.e., number density and broadening parameter) from a given spectrum using a neural network.
In other words, a neural network had to be developed that would provide a generalized "inverse" of the computer code SPECTRA.
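A minimal sketch of such a generalized inverse, assuming a training set of (spectrum, plume-parameter) pairs generated by running the forward code; scikit-learn's MLPRegressor stands in for the project's network, and the shapes and random arrays below are placeholders, not OPAD data.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

n_samples, n_wavelengths, n_params = 2000, 512, 5
X = np.random.rand(n_samples, n_wavelengths)   # synthetic spectra (placeholder)
y = np.random.rand(n_samples, n_params)        # temperature, number densities,
                                               # broadening parameters (placeholder)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=500),
)
model.fit(X, y)                                # learn spectrum -> parameters
params = model.predict(X[:1])                  # "inverse" of the forward code
```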
Combining CRISPR and CRISPRi Systems for Metabolic Engineering of E. coli and 1,4-BDO Biosynthesis.
Wu, Meng-Ying; Sung, Li-Yu; Li, Hung; Huang, Chun-Hung; Hu, Yu-Chen
2017-12-15
Biosynthesis of 1,4-butanediol (1,4-BDO) in E. coli requires an artificial pathway that involves six genes and time-consuming, iterative genome engineering. CRISPR is an effective gene editing tool, while CRISPR interference (CRISPRi) is repurposed for programmable gene suppression. This study aimed to combine both CRISPR and CRISPRi for metabolic engineering of E. coli and 1,4-BDO production. We first exploited CRISPR to perform point mutation of gltA, replacement of native lpdA with heterologous lpdA, knockout of sad and knock-in of two large (6.0 and 6.3 kb in length) gene cassettes encoding the six genes (cat1, sucD, 4hbd, cat2, bld, bdh) in the 1,4-BDO biosynthesis pathway. The successive E. coli engineering enabled production of 1,4-BDO to a titer of 0.9 g/L in 48 h. By combining the CRISPRi system to simultaneously suppress, by >85%, competing genes that divert flux from the 1,4-BDO biosynthesis pathway (gabD, ybgC and tesB), we further enhanced the 1,4-BDO titer by 100% to 1.8 g/L while reducing the titers of the byproducts gamma-butyrolactone and succinate by 55% and 83%, respectively. These data demonstrate the potential of combining CRISPR and CRISPRi for genome engineering and metabolic flux regulation in microorganisms such as E. coli and for the production of chemicals (e.g., 1,4-BDO).
Invited Article: Mask-modulated lensless imaging with multi-angle illuminations
NASA Astrophysics Data System (ADS)
Zhang, Zibang; Zhou, You; Jiang, Shaowei; Guo, Kaikai; Hoshino, Kazunori; Zhong, Jingang; Suo, Jinli; Dai, Qionghai; Zheng, Guoan
2018-06-01
The use of multiple diverse measurements can make lensless phase retrieval more robust. Conventional diversity functions include aperture diversity, wavelength diversity, translational diversity, and defocus diversity. Here we discuss a lensless imaging scheme that employs multiple spherical-wave illuminations from a light-emitting diode array as diversity functions. In this scheme, we place a binary mask between the sample and the detector for imposing support constraints for the phase retrieval process. This support constraint enforces the light field to be zero at certain locations and is similar to the aperture constraint in Fourier ptychographic microscopy. We use a self-calibration algorithm to correct the misalignment of the binary mask. The efficacy of the proposed scheme is first demonstrated by simulations where we evaluate the reconstruction quality using mean square error and structural similarity index. The scheme is then experimentally tested by recovering images of a resolution target and biological samples. The proposed scheme may provide new insights for developing compact and large field-of-view lensless imaging platforms. The use of the binary mask can also be combined with other diversity functions for better constraining the phase retrieval solution space. We provide the open-source implementation code for the broad research community.
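A minimal sketch of how the binary-mask support constraint can enter one update of an iterative phase retrieval loop; a plain far-field FFT propagator stands in for the paper's spherical-wave LED illumination model, and the mask-misalignment self-calibration is omitted.

```python
import numpy as np

def apply_support(field, mask):
    """Enforce the mask support: the field must vanish where the mask is opaque."""
    return field * mask

def phase_retrieval_update(field_est, measured_amplitude, mask):
    # propagate to the detector (FFT as a far-field stand-in)
    F = np.fft.fft2(apply_support(field_est, mask))
    # replace the modeled amplitude with the measurement, keep the phase
    F = measured_amplitude * np.exp(1j * np.angle(F))
    # propagate back and re-impose the mask support constraint
    return apply_support(np.fft.ifft2(F), mask)
```

Cycling this update over the multiple LED illuminations constrains the solution space in the same spirit as the aperture constraint in Fourier ptychographic microscopy.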
Continuous motion scan ptychography: Characterization for increased speed in coherent x-ray imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deng, Junjing; Nashed, Youssef S. G.; Chen, Si
Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object’s complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous “fly-scan” mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.
Monitoring dynamic electrochemical processes with in situ ptychography
NASA Astrophysics Data System (ADS)
Kourousias, George; Bozzini, Benedetto; Jones, Michael W. M.; Van Riessen, Grant A.; Dal Zilio, Simone; Billè, Fulvio; Kiskinova, Maya; Gianoncelli, Alessandra
2018-03-01
The present work reports novel soft X-ray Fresnel CDI ptychography results, demonstrating the potential of this method for dynamic in situ studies. Specifically, in situ ptychography experiments explored the electrochemical fabrication of Co-doped Mn-oxide/polypyrrole nanocomposites for sustainable and cost-effective fuel-cell air-electrodes. Oxygen-reduction catalysts based on Mn-oxides exhibit relatively high activity, but poor durability; doping with Co has been shown to improve both reduction rate and stability. In this study, we examine the chemical state distribution of the catalytically crucial Co dopant to elucidate details of the Co dopant incorporation into the Mn/polymer matrix. The measurements were performed using a custom-made three-electrode thin-layer microcell, developed at the TwinMic beamline of Elettra Synchrotron, during a series of experiments that were continued at the SXRI beamline of the Australian Synchrotron. Our time-resolved ptychography-based investigation was carried out in situ after two representative growth steps, controlled by electrochemical bias. In addition to the observation of morphological changes, we retrieved the spectroscopic information provided by multiple ptychographic energy scans across the Co L3-edge, shedding light on the doping mechanism and demonstrating a general approach for the molecular-level investigation of complex multimaterial electrodeposition processes.
Continuous motion scan ptychography: characterization for increased speed in coherent x-ray imaging.
Deng, Junjing; Nashed, Youssef S G; Chen, Si; Phillips, Nicholas W; Peterka, Tom; Ross, Rob; Vogt, Stefan; Jacobsen, Chris; Vine, David J
2015-03-09
Ptychography is a coherent diffraction imaging (CDI) method for extended objects in which diffraction patterns are acquired sequentially from overlapping coherent illumination spots. The object's complex transmission function can be reconstructed from those diffraction patterns at a spatial resolution limited only by the scattering strength of the object and the detector geometry. Most experiments to date have positioned the illumination spots on the sample using a move-settle-measure sequence in which the move and settle steps can take longer to complete than the measure step. We describe here the use of a continuous "fly-scan" mode for ptychographic data collection in which the sample is moved continuously, so that the experiment resembles one of integrating the diffraction patterns from multiple probe positions. This allows one to use multiple probe mode reconstruction methods to obtain an image of the object and also of the illumination function. We show in simulations, and in x-ray imaging experiments, some of the characteristics of fly-scan ptychography, including a factor of 25 reduction in the data acquisition time. This approach will become increasingly important as brighter x-ray sources are developed, such as diffraction limited storage rings.
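The fly-scan measurement model is simple to state: each recorded pattern is approximately an incoherent sum of diffraction intensities over the probe positions swept during one exposure, which is why mutually incoherent probe-mode reconstructions recover it well. Below is a minimal NumPy sketch of that forward model only; the names, the toy object, and the integer-pixel probe shifts are assumptions for illustration, not the experimental code.

    import numpy as np

    def flyscan_pattern(obj, probe, start, step, n_sub):
        # Incoherent sum of far-field intensities over the sub-positions
        # swept during one continuous-motion exposure.
        intensity = np.zeros(obj.shape)
        for k in range(n_sub):
            r = start + k * step                     # probe centre during the exposure
            shifted = np.roll(np.roll(probe, r[0], axis=0), r[1], axis=1)
            exit_wave = shifted * obj
            intensity += np.abs(np.fft.fft2(exit_wave))**2
        return intensity / n_sub

    obj = np.exp(1j * 0.5 * np.random.rand(64, 64))  # toy phase object
    probe = np.zeros((64, 64)); probe[28:36, 28:36] = 1.0
    I = flyscan_pattern(obj, probe, np.array([0, 0]), np.array([0, 1]), n_sub=5)
    # Reconstruction then treats the blur as several mutually incoherent probe modes.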
Big Data Analytics for Scanning Transmission Electron Microscopy Ptychography
Jesse, S.; Chi, M.; Belianinov, A.; Beekman, C.; Kalinin, S. V.; Borisevich, A. Y.; Lupini, A. R.
2016-01-01
Electron microscopy is undergoing a transition; from the model of producing only a few micrographs, through the current state where many images and spectra can be digitally recorded, to a new mode where very large volumes of data (movies, ptychographic and multi-dimensional series) can be rapidly obtained. Here, we discuss the application of so-called “big-data” methods to high dimensional microscopy data, using unsupervised multivariate statistical techniques, in order to explore salient image features in a specific example of BiFeO3 domains. Remarkably, k-means clustering reveals domain differentiation despite the fact that the algorithm is purely statistical in nature and does not require any prior information regarding the material, any coexisting phases, or any differentiating structures. While this is a somewhat trivial case, this example signifies the extraction of useful physical and structural information without any prior bias regarding the sample or the instrumental modality. Further interpretation of these types of results may still require human intervention. However, the open nature of this algorithm and its wide availability, enable broad collaborations and exploratory work necessary to enable efficient data analysis in electron microscopy. PMID:27211523
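Because the clustering is purely statistical, the analysis reduces to flattening each scan position's diffraction pattern into a feature vector and letting k-means group the positions. A hedged sketch with synthetic stand-in data follows; the array shapes and the fake "domain" contrast are assumptions, not the BiFeO3 dataset.

    import numpy as np
    from sklearn.cluster import KMeans

    # Toy stand-in for a 4D STEM/ptychography dataset:
    # a (ny, nx) scan grid with a small diffraction pattern at each position.
    ny, nx, d = 32, 32, 8
    rng = np.random.default_rng(0)
    data = rng.poisson(5.0, size=(ny, nx, d, d)).astype(float)
    data[:, nx // 2:] += 2.0            # fake "domain" contrast in half the scan

    X = data.reshape(ny * nx, d * d)    # one feature vector per scan position
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
    domain_map = labels.reshape(ny, nx) # spatial map of statistically distinct domains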
NASA Astrophysics Data System (ADS)
Ming, A. B.; Qin, Z. Y.; Zhang, W.; Chu, F. L.
2013-12-01
Bearing failure is one of the most common causes of machine breakdowns and accidents. Therefore, the fault diagnosis of rolling element bearings is of great significance to the safe and efficient operation of machines, owing to its fault indication and accident prevention capability in engineering applications. Based on the orthogonal projection theory, a novel method is proposed in this paper to extract the fault characteristic frequency for the incipient fault diagnosis of rolling element bearings. With the capability of exposing the oscillation frequency of the signal energy, the proposed method is a generalized form of the squared envelope analysis and is named spectral auto-correlation analysis (SACA). Meanwhile, SACA is also a simplified form of cyclostationary analysis and can be carried out iteratively in applications. Simulations and experiments are used to evaluate the efficiency of the proposed method. Comparing the results of SACA, the traditional envelope analysis, and the squared envelope analysis, the result of SACA is found to be more legible owing to the more prominent harmonic amplitudes of the fault characteristic frequency, and SACA with proper iteration further enhances the fault features.
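Since SACA generalizes the squared envelope analysis, its base case is easy to sketch: square the magnitude of the analytic signal and read the fault characteristic frequency from the spectrum of that energy signal. The Python sketch below shows the squared-envelope special case on a synthetic bearing signal; the signal parameters are invented, and the iterative SACA refinement is only indicated in the final comment.

    import numpy as np
    from scipy.signal import hilbert

    def squared_envelope_spectrum(x, fs):
        # Squared envelope via the analytic signal; its spectrum exposes the
        # oscillation frequency of the signal energy (the fault frequency).
        env2 = np.abs(hilbert(x))**2
        spec = np.abs(np.fft.rfft(env2 - env2.mean()))
        freqs = np.fft.rfftfreq(len(x), d=1/fs)
        return freqs, spec

    # Toy bearing signal: 3 kHz resonance amplitude-modulated at a 97 Hz fault rate.
    fs = 20000
    t = np.arange(0, 1.0, 1/fs)
    x = (1 + np.cos(2*np.pi*97*t)) * np.sin(2*np.pi*3000*t) + 0.5*np.random.randn(t.size)
    freqs, spec = squared_envelope_spectrum(x, fs)
    # Iterating the same operation on env2, as SACA proposes, can further sharpen
    # the harmonics of the fault characteristic frequency.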
Utility of coupling nonlinear optimization methods with numerical modeling software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Murphy, M.J.
1996-08-05
Results of using GLO (Global Local Optimizer), a general purpose nonlinear optimization software package for investigating multi-parameter problems in science and engineering, are discussed. The package consists of the modular optimization control system (GLO), a graphical user interface (GLO-GUI), a pre-processor (GLO-PUT), a post-processor (GLO-GET), and nonlinear optimization software modules, GLOBAL & LOCAL. GLO is designed for controlling, and easy coupling to, any scientific software application. GLO runs the optimization module and the scientific software application in an iterative loop. At each iteration, the optimization module defines new values for the set of parameters being optimized. GLO-PUT inserts the new parameter values into the input file of the scientific application. GLO runs the application with the new parameter values. GLO-GET determines the value of the objective function by extracting the results of the analysis and comparing them to the desired result. GLO continues to run the scientific application over and over until it finds the "best" set of parameters by minimizing (or maximizing) the objective function. An example problem showing the optimization of a material model (Taylor cylinder impact test) is presented.
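The control flow described here is a generic optimize-simulate loop, which is easy to mock up. In the sketch below the external scientific application is replaced by a stand-in Python function; in GLO proper, the GLO-PUT step rewrites the application's input file and GLO-GET parses its output at every iteration. The toy material-model calibration and all names are illustrative assumptions, not GLO itself.

    import numpy as np
    from scipy.optimize import minimize

    def run_simulation(params):
        # Stand-in for the scientific application; in GLO this would be an
        # external code driven through its input file (GLO-PUT).
        yield_strength, hardening = params
        return yield_strength * np.tanh(hardening * np.linspace(0, 1, 50))

    measured = 0.8 * np.tanh(3.0 * np.linspace(0, 1, 50))  # "experimental" target

    def objective(params):
        # GLO-GET step: compare the simulation output with the desired result.
        return np.sum((run_simulation(params) - measured)**2)

    best = minimize(objective, x0=[1.0, 1.0], method='Nelder-Mead')
    # best.x -> calibrated material-model parameters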
GoldenBraid: An Iterative Cloning System for Standardized Assembly of Reusable Genetic Modules
Sarrion-Perdigones, Alejandro; Falconi, Erica Elvira; Zandalinas, Sara I.; Juárez, Paloma; Fernández-del-Carmen, Asun; Granell, Antonio; Orzaez, Diego
2011-01-01
Synthetic Biology requires efficient and versatile DNA assembly systems to facilitate the building of new genetic modules/pathways from basic DNA parts in a standardized way. Here we present GoldenBraid (GB), a standardized assembly system based on type IIS restriction enzymes that allows the indefinite growth of reusable gene modules made of standardized DNA pieces. The GB system consists of a set of four destination plasmids (pDGBs) designed to incorporate multipartite assemblies made of standard DNA parts and to combine them binarily to build increasingly complex multigene constructs. The relative position of type IIS restriction sites inside pDGB vectors introduces a double loop (“braid”) topology in the cloning strategy that allows the indefinite growth of composite parts through the succession of iterative assembling steps, while the overall simplicity of the system is maintained. We propose the use of GoldenBraid as an assembly standard for Plant Synthetic Biology. For this purpose we have GB-adapted a set of binary plasmids for A. tumefaciens-mediated plant transformation. Fast GB-engineering of several multigene T-DNAs, including two alternative modules made of five reusable devices each, and comprising a total of 19 basic parts are also described. PMID:21750718
Design Issues of the Pre-Compression Rings of ITER
NASA Astrophysics Data System (ADS)
Knaster, J.; Baker, W.; Bettinali, L.; Jong, C.; Mallick, K.; Nardi, C.; Rajainmaki, H.; Rossi, P.; Semeraro, L.
2010-04-01
The pre-compression system is the keystone of ITER. A centripetal force of ~30 MN will be applied at cryogenic conditions on top and bottom of each TF coil. It will prevent the 'breathing effect' caused by the bursting forces occurring during plasma operation that would affect the machine design life of 30000 cycles. Different alternatives have been studied throughout the years. There are two major design requirements limiting the engineering possibilities: 1) the limited available space and 2) the need to hamper eddy currents flowing in the structures. Six unidirectionally wound glass-fibre composite rings (~5 m diameter and ~300 mm cross section) are the final design choice. The rings will withstand the maximum hoop stresses <500 MPa at room temperature conditions. Although retightening or replacing the pre-compression rings in case of malfunctioning is possible, they have to sustain the load during the entire 20 years of machine operation. The present paper summarizes the pre-compression ring R&D carried out during several years. In particular, we will address the composite choice and mechanical characterization, assessment of creep or stress relaxation phenomena, sub-sized rings testing and the optimal ring fabrication processes that have led to the present final design.
Bi-Level Integrated System Synthesis (BLISS)
NASA Technical Reports Server (NTRS)
Sobieszczanski-Sobieski, Jaroslaw; Agte, Jeremy S.; Sandusky, Robert R., Jr.
1998-01-01
BLISS is a method for optimization of engineering systems by decomposition. It separates the system level optimization, having a relatively small number of design variables, from the potentially numerous subsystem optimizations that may each have a large number of local design variables. The subsystem optimizations are autonomous and may be conducted concurrently. Subsystem and system optimizations alternate, linked by sensitivity data, producing a design improvement in each iteration. Starting from a best guess initial design, the method improves that design in iterative cycles, each cycle comprised of two steps. In step one, the system level variables are frozen and the improvement is achieved by separate, concurrent, and autonomous optimizations in the local variable subdomains. In step two, further improvement is sought in the space of the system level variables. Optimum sensitivity data link the second step to the first. The method prototype was implemented using MATLAB and iSIGHT programming software and tested on a simplified, conceptual level supersonic business jet design, and a detailed design of an electronic device. Satisfactory convergence and favorable agreement with the benchmark results were observed. Modularity of the method is intended to fit the human organization and map well on the computing technology of concurrent processing.
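The two-step cycle can be illustrated with a toy two-subsystem problem: freeze the system variable, let each subsystem optimize its local variable autonomously, then improve in the system-variable space. The sketch below uses direct re-optimization for step two, whereas BLISS proper links the steps through optimum sensitivity derivatives; the functions are invented for illustration.

    import numpy as np
    from scipy.optimize import minimize

    # Toy problem: system variable z is shared by two subsystems, each of
    # which owns one local variable (x1 or x2).
    def sub1(x1, z): return (x1 - z)**2 + x1**2
    def sub2(x2, z): return (x2 + z)**2 + 0.5 * x2**2

    z = 1.0                                    # best-guess system-level start
    for cycle in range(10):
        # Step 1: z frozen; separate, autonomous subsystem optimizations
        # (these could run concurrently).
        x1 = minimize(lambda x: sub1(x[0], z), [0.0]).x[0]
        x2 = minimize(lambda x: sub2(x[0], z), [0.0]).x[0]
        # Step 2: seek further improvement in the system-variable space.
        z = minimize(lambda v: sub1(x1, v[0]) + sub2(x2, v[0]), [z]).x[0]
    # Alternating the two steps drives (x1, x2, z) toward the coupled optimum.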
ITER in-vessel system design and performance
NASA Astrophysics Data System (ADS)
Parker, R. R.
2000-03-01
The article reviews the design and performance of the in-vessel components of ITER as developed for the Engineering Design Activities (EDA) Final Design Report. The double walled vacuum vessel is the first confinement boundary and is designed to maintain its integrity under all normal and off-normal conditions, e.g. the most intense vertical displacement events (VDEs) and seismic events. The shielding blanket consists of modules connected to a toroidal backplate by flexible connectors which allow differential displacements due to temperature non-uniformities. Breeding blanket modules replace the shield modules for the Enhanced Performance Phase. The divertor concept is based on a cassette structure which is convenient for remote installation and removal. High heat flux (HHF) components are mechanically attached and can be removed and replaced in the hot cell. Operation of the divertor is based on achieving partially detached plasma conditions along and near the separatrix. Nominal heat loads of 5-10 MW/m2 are expected on the target. These are accommodated by HHF technology developed during the EDA. Disruptions and VDEs can lead to melting of the first wall armour but no damage to the underlying structure. Stresses in the main structural components remain within allowable ranges for all postulated disruption and seismic events.
Cuevas, Erik; Díaz, Margarita
2015-01-01
In this paper, a new method for robustly estimating multiple view relations from point correspondences is presented. The approach combines the popular random sample consensus (RANSAC) algorithm and the evolutionary method harmony search (HS). With this combination, the proposed method adopts a different sampling strategy than RANSAC to generate putative solutions. Under the new mechanism, at each iteration, new candidate solutions are built taking into account the quality of the models generated by previous candidate solutions, rather than purely at random as is the case in RANSAC. The rules for the generation of candidate solutions (samples) are motivated by the improvisation process that occurs when a musician searches for a better state of harmony. As a result, the proposed approach can substantially reduce the number of iterations while still preserving the robust capabilities of RANSAC. The method is generic and its use is illustrated by the estimation of homographies, considering synthetic and real images. Additionally, in order to demonstrate the performance of the proposed approach within a real engineering application, it is employed to solve the problem of position estimation in a humanoid robot. Experimental results validate the efficiency of the proposed method in terms of accuracy, speed, and robustness.
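For contrast with the proposed HS-guided sampling, a plain RANSAC baseline takes only a few lines; the paper's modification replaces the uniform random draw with candidates improvised by harmony search, so each new sample exploits the quality of earlier models. The sketch below fits a 2D line rather than a homography (only the model-fitting step changes), and all names are illustrative.

    import numpy as np

    def ransac_line(points, n_iter=500, tol=0.05, rng=np.random.default_rng(0)):
        # Plain RANSAC: repeat {draw a minimal sample, fit, count inliers}.
        best_model, best_inliers = None, 0
        for _ in range(n_iter):
            i, j = rng.choice(len(points), size=2, replace=False)  # uniform draw
            p, q = points[i], points[j]
            d = q - p
            n = np.array([-d[1], d[0]]) / (np.linalg.norm(d) + 1e-12)  # unit normal
            dist = np.abs((points - p) @ n)        # point-to-line distances
            inliers = np.sum(dist < tol)
            if inliers > best_inliers:
                best_model, best_inliers = (p, n), inliers
        return best_model, best_inliers

    t = np.linspace(0, 1, 80)
    pts = np.vstack([np.column_stack([t, 2 * t]),       # inlier line y = 2x
                     np.random.rand(20, 2) * 3.0])      # gross outliers
    model, n_in = ransac_line(pts)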
NASA Technical Reports Server (NTRS)
Kreider, Kevin L.; Baumeister, Kenneth J.
1996-01-01
An explicit finite difference real time iteration scheme is developed to study harmonic sound propagation in aircraft engine nacelles. To reduce storage requirements for future large 3D problems, the time dependent potential form of the acoustic wave equation is used. To ensure that the finite difference scheme is both explicit and stable for a harmonic monochromatic sound field, a parabolic (in time) approximation is introduced to reduce the order of the governing equation. The analysis begins with a harmonic sound source radiating into a quiescent duct. This fully explicit iteration method then calculates stepwise in time to obtain the 'steady state' harmonic solutions of the acoustic field. For stability, application of conventional impedance boundary conditions requires coupling to explicit hyperbolic difference equations at the boundary. The introduction of the time parameter eliminates the large matrix storage requirements normally associated with frequency domain solutions, and time marching attains the steady state quickly enough to make the method favorable when compared to frequency domain methods. For validation, this transient-frequency domain method is applied to sound propagation in a 2D hard wall duct with plug flow.
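The essence of the approach is to drive the field with a harmonic source and march an explicit scheme in time until only the periodic steady state remains, instead of assembling and solving a large frequency-domain system. Below is a 1D toy analogue, not the nacelle code: the parabolic-in-time approximation and the impedance-wall coupling are omitted, and the outlet condition is deliberately crude.

    import numpy as np

    # Explicit time marching of u_tt = c^2 u_xx in a 1D duct driven
    # harmonically at x = 0, run until the field becomes periodic.
    c, L, nx = 340.0, 1.0, 201
    dx = L / (nx - 1)
    dt = 0.9 * dx / c                        # CFL-stable explicit step
    f = 1000.0                               # source frequency, Hz
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    for n in range(20000):
        t = n * dt
        u_next = np.empty_like(u)
        u_next[1:-1] = (2*u[1:-1] - u_prev[1:-1]
                        + (c*dt/dx)**2 * (u[2:] - 2*u[1:-1] + u[:-2]))
        u_next[0] = np.sin(2*np.pi*f*t)      # harmonic source
        u_next[-1] = u[-2]                   # crude nonreflecting outlet
        u_prev, u = u, u_next
    # After transients leave the domain, u oscillates at f: the 'steady state'
    # harmonic field, obtained without storing any large system matrix.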
High-Level Performance Modeling of SAR Systems
NASA Technical Reports Server (NTRS)
Chen, Curtis
2006-01-01
SAUSAGE (Still Another Utility for SAR Analysis that's General and Extensible) is a computer program for modeling the performance of synthetic-aperture radar (SAR) or interferometric synthetic-aperture radar (InSAR or IFSAR) systems. The user is assumed to be familiar with the basic principles of SAR imaging and interferometry. Given design parameters (e.g., altitude, power, and bandwidth) that characterize a radar system, the software predicts various performance metrics (e.g., signal-to-noise ratio and resolution). SAUSAGE is intended to be a general software tool for quick, high-level evaluation of radar designs; it is not meant to capture all the subtleties, nuances, and particulars of specific systems. SAUSAGE was written to facilitate the exploration of engineering tradeoffs within the multidimensional space of design parameters. Typically, this space is examined through an iterative process of adjusting the values of the design parameters and examining the effects of the adjustments on the overall performance of the system at each iteration. The software is designed to be modular and extensible to enable consideration of a variety of operating modes and antenna beam patterns, including, for example, strip-map and spotlight SAR acquisitions, polarimetry, burst modes, and squinted geometries.
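The style of computation such a tool performs can be indicated with textbook SAR relations: slant-range resolution c/(2B), strip-map azimuth resolution L/2, and a radar-equation signal-to-noise ratio. These are standard formulas, not SAUSAGE's internals, and the parameter names are illustrative.

    import numpy as np

    c = 3e8  # speed of light, m/s

    def range_resolution(bandwidth_hz):
        return c / (2.0 * bandwidth_hz)        # slant-range resolution, m

    def azimuth_resolution(antenna_length_m):
        return antenna_length_m / 2.0          # strip-map azimuth limit, m

    def snr_db(p_avg, gain, wavelength, sigma0, r, losses, noise_power):
        # Schematic radar-equation form:
        # SNR ~ P G^2 lambda^2 sigma / ((4 pi)^3 R^4 L N)
        snr = (p_avg * gain**2 * wavelength**2 * sigma0 /
               ((4*np.pi)**3 * r**4 * losses * noise_power))
        return 10 * np.log10(snr)

    # Sweeping such functions over a grid of design parameters is the kind of
    # iterative tradeoff exploration the abstract describes.
    print(range_resolution(80e6), azimuth_resolution(10.0))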
2015-01-01
Iterative, nonreducing polyketide synthases (NR-PKSs) are multidomain enzymes responsible for the construction of the core architecture of aromatic polyketide natural products in fungi. Engineering these enzymes for the production of non-native metabolites has been a long-standing goal. We conducted a systematic survey of in vitro “domain swapped” NR-PKSs using an enzyme deconstruction approach. The NR-PKSs were dissected into mono- to multidomain fragments and recombined as noncognate pairs in vitro, reconstituting enzymatic activity. The enzymes used in this study produce aromatic polyketides that are representative of the four main chemical features set by the individual NR-PKS: starter unit selection, chain-length control, cyclization register control, and product release mechanism. We found that boundary conditions limit successful chemistry, which are dependent on a set of underlying enzymatic mechanisms. Crucial for successful redirection of catalysis, the rate of productive chemistry must outpace the rate of spontaneous derailment and thioesterase-mediated editing. Additionally, all of the domains in a noncognate system must interact efficiently if chemical redirection is to proceed. These observations refine and further substantiate current understanding of the mechanisms governing NR-PKS catalysis. PMID:24815013
NASA Technical Reports Server (NTRS)
Barnett, Alan R.; Widrick, Timothy W.; Ludwiczak, Damian R.
1995-01-01
Solving for the displacements of free-free coupled systems acted upon by static loads is commonly performed throughout the aerospace industry. Many times, these problems are solved using static analysis with inertia relief. This solution technique allows for a free-free static analysis by balancing the applied loads with inertia loads generated by the applied loads. For some engineering applications, the displacements of the free-free coupled system induce additional static loads. Hence, the applied loads are equal to the original loads plus displacement-dependent loads. Solving for the final displacements of such systems is commonly performed using iterative solution techniques. Unfortunately, these techniques can be time-consuming and labor-intensive. Since the coupled system equations for free-free systems with displacement-dependent loads can be written in closed-form, it is advantageous to solve for the displacements in this manner. Implementing closed-form equations in static analysis with inertia relief is analogous to implementing transfer functions in dynamic analysis. Using a MSC/NASTRAN DMAP Alter, displacement-dependent loads have been included in static analysis with inertia relief. Such an Alter has been used successfully to solve efficiently a common aerospace problem typically solved using an iterative technique.
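The closed-form idea is easy to demonstrate on a small stiffness system: if the extra loads are linear in the displacements, G u, then the iterative scheme u <- K^-1 (f + G u) converges to the direct solution of (K - G) u = f. The toy NumPy check below illustrates only that equivalence; the inertia-relief bookkeeping and the actual MSC/NASTRAN DMAP Alter are beyond this sketch, and the matrices are invented.

    import numpy as np

    rng = np.random.default_rng(1)
    K = np.diag([4.0, 5.0, 6.0])          # stiffness
    G = 0.3 * rng.random((3, 3))          # displacement-dependent load coefficients
    f = np.array([1.0, 2.0, 3.0])         # original applied loads

    u_closed = np.linalg.solve(K - G, f)  # closed-form solution

    u = np.zeros(3)                       # the usual iterative technique
    for _ in range(200):
        u = np.linalg.solve(K, f + G @ u)
    assert np.allclose(u, u_closed, atol=1e-8)   # both approaches agree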
Statistical Engineering in Air Traffic Management Research
NASA Technical Reports Server (NTRS)
Wilson, Sara R.
2015-01-01
NASA is working to develop an integrated set of advanced technologies to enable efficient arrival operations in high-density terminal airspace for the Next Generation Air Transportation System. This integrated arrival solution is being validated and verified in laboratories and transitioned to a field prototype for an operational demonstration at a major U.S. airport. Within NASA, this is a collaborative effort between Ames and Langley Research Centers involving a multi-year iterative experimentation process. Designing and analyzing a series of sequential batch computer simulations and human-in-the-loop experiments across multiple facilities and simulation environments involves a number of statistical challenges. Experiments conducted in separate laboratories typically have different limitations and constraints, and can take different approaches with respect to the fundamental principles of statistical design of experiments. This often makes it difficult to compare results from multiple experiments and incorporate findings into the next experiment in the series. A statistical engineering approach is being employed within this project to support risk-informed decision making and maximize the knowledge gained within the available resources. This presentation describes a statistical engineering case study from NASA, highlights statistical challenges, and discusses areas where existing statistical methodology is adapted and extended.
Using Approximations to Accelerate Engineering Design Optimization
NASA Technical Reports Server (NTRS)
Torczon, Virginia; Trosset, Michael W.
1998-01-01
Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
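A toy version of the idea: fit a cheap algebraic surrogate to a handful of true evaluations, then pick the next sample by a merit function that trades off a low surrogate value against distance from existing data, so each iteration improves both the candidate solution and the approximation. All functions and the weighting below are invented for illustration, not the paper's merit functions.

    import numpy as np

    f = lambda x: np.sin(3 * x) + 0.5 * x**2           # "expensive" objective
    X = np.array([-2.0, 0.0, 2.0])
    Y = f(X)                                           # initial true evaluations

    for _ in range(10):
        coeffs = np.polyfit(X, Y, 2)                   # algebraic surrogate
        grid = np.linspace(-2.5, 2.5, 501)
        surrogate = np.polyval(coeffs, grid)
        # Distance to the nearest sampled point: a proxy for surrogate error.
        distance = np.min(np.abs(grid[:, None] - X[None, :]), axis=1)
        merit = surrogate - 1.0 * distance             # reward exploration too
        x_new = grid[np.argmin(merit)]
        X = np.append(X, x_new)
        Y = np.append(Y, f(x_new))                     # one true evaluation per iteration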
Collaborative damage mapping for emergency response: the role of Cognitive Systems Engineering
NASA Astrophysics Data System (ADS)
Kerle, N.; Hoffman, R. R.
2013-01-01
Remote sensing is increasingly used to assess disaster damage, traditionally by professional image analysts. A recent alternative is crowdsourcing by volunteers experienced in remote sensing, using internet-based mapping portals. We identify a range of problems in current approaches, including how volunteers can best be instructed for the task, ensuring that instructions are accurately understood and translate into valid results, or how the mapping scheme must be adapted for different map user needs. The volunteers, the mapping organizers, and the map users all perform complex cognitive tasks, yet little is known about the actual information needs of the users. We also identify problematic assumptions about the capabilities of the volunteers, principally related to the ability to perform the mapping, and to understand mapping instructions unambiguously. We propose that any robust scheme for collaborative damage mapping must rely on Cognitive Systems Engineering and its principal method, Cognitive Task Analysis (CTA), to understand the information and decision requirements of the map and image users, and how the volunteers can be optimally instructed and their mapping contributions merged into suitable map products. We recommend an iterative approach involving map users, remote sensing specialists, cognitive systems engineers and instructional designers, as well as experimental psychologists.
A Fully Non-metallic Gas Turbine Engine Enabled by Additive Manufacturing
NASA Technical Reports Server (NTRS)
Grady, Joseph E.
2014-01-01
The Non-Metallic Gas Turbine Engine project, funded by NASA Aeronautics Research Institute (NARI), represents the first comprehensive evaluation of emerging materials and manufacturing technologies that will enable fully nonmetallic gas turbine engines. This will be achieved by assessing the feasibility of using additive manufacturing technologies for fabricating polymer matrix composite (PMC) and ceramic matrix composite (CMC) gas turbine engine components. The benefits of the proposed effort include: 50% weight reduction compared to metallic parts, reduced manufacturing costs due to less machining and no tooling requirements, reduced part count due to net shape single component fabrication, and rapid design change and production iterations. Two high payoff metallic components have been identified for replacement with PMCs and will be fabricated using fused deposition modeling (FDM) with high temperature capable polymer filaments. The first component is an acoustic panel treatment with a honeycomb structure with an integrated back sheet and perforated front sheet. The second component is a compressor inlet guide vane. The CMC effort, which is starting at a lower technology readiness level, will use a binder jet process to fabricate silicon carbide test coupons and demonstration articles. The polymer and ceramic additive manufacturing efforts will advance from monolithic materials toward silicon carbide and carbon fiber reinforced composites for improved properties. Microstructural analysis and mechanical testing will be conducted on the PMC and CMC materials. System studies will assess the benefits of a fully nonmetallic gas turbine engine in terms of fuel burn, emissions, reduction of part count, and cost. The proposed effort will be focused on a small 7000 lbf gas turbine engine. However, the concepts are equally applicable to large gas turbine engines. The proposed effort includes a multidisciplinary, multiorganization NASA-industry team that includes experts in ceramic materials and CMCs, polymers and PMCs, structural engineering, additive manufacturing, engine design and analysis, and system analysis.
Final Report on ITER Task Agreement 81-08
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard L. Moore
As part of an ITER Implementing Task Agreement (ITA) between the ITER US Participant Team (PT) and the ITER International Team (IT), the INL Fusion Safety Program was tasked to provide the ITER IT with upgrades to the fusion version of the MELCOR 1.8.5 code, including a beryllium dust oxidation model. The purpose of this model is to allow the ITER IT to investigate hydrogen production from beryllium dust layers on hot surfaces inside the ITER vacuum vessel (VV) during in-vessel loss-of-cooling accidents (LOCAs). Also included in the ITER ITA was a task to construct a RELAP5/ATHENA model of the ITER divertor cooling loop to model the draining of the loop during a large ex-vessel pipe break followed by an in-vessel divertor break, and to compare the results to a similar MELCOR model developed by the ITER IT. This report, which is the final report for this agreement, documents the completion of the work scope under this ITER TA, designated as TA 81-08.
ITER Construction—Plant System Integration
NASA Astrophysics Data System (ADS)
Tada, E.; Matsuda, S.
2009-02-01
This brief paper introduces how ITER will be built through international collaboration. The ITER Organization plays a central role in constructing ITER and leading it into operation. Since most of the ITER components are to be provided in-kind by the member countries, integrated project management must be scoped in advance of the real work. This includes design, procurement, system assembly, testing, licensing and commissioning of ITER.
Application of the optimal homotopy asymptotic method to nonlinear Bingham fluid dampers
NASA Astrophysics Data System (ADS)
Marinca, Vasile; Ene, Remus-Daniel; Bereteu, Liviu
2017-10-01
Dynamic response time is an important feature for determining the performance of magnetorheological (MR) dampers in practical civil engineering applications. The objective of this paper is to show how to use the Optimal Homotopy Asymptotic Method (OHAM) to give approximate analytical solutions of the nonlinear differential equation of a modified Bingham model with non-viscous exponential damping. Our procedure does not depend upon small parameters and provides us with a convenient way to optimally control the convergence of the approximate solutions. OHAM is very efficient in practice for ensuring very rapid convergence of the solution after only one iteration and with a small number of steps.
Flux-vector splitting algorithm for chain-rule conservation-law form
NASA Technical Reports Server (NTRS)
Shih, T. I.-P.; Nguyen, H. L.; Willis, E. A.; Steinthorsson, E.; Li, Z.
1991-01-01
A flux-vector splitting algorithm with Newton-Raphson iteration was developed for the 'full compressible' Navier-Stokes equations cast in chain-rule conservation-law form. The algorithm is intended for problems with deforming spatial domains and for problems whose governing equations cannot be cast in strong conservation-law form. The usefulness of the algorithm for such problems was demonstrated by applying it to analyze the unsteady, two- and three-dimensional flows inside one combustion chamber of a Wankel engine under nonfiring conditions. Solutions were obtained to examine the algorithm in terms of conservation error, robustness, and ability to handle complex flows on time-dependent grid systems.
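The splitting itself is most transparent in a scalar toy: write f(u) = a u as f+ + f- with non-negative and non-positive wave speeds and difference each part upwind. The sketch below shows only that mechanism; the paper applies it, with Newton-Raphson iteration, to the full compressible Navier-Stokes equations in chain-rule conservation-law form on deforming grids.

    import numpy as np

    a, nx = 1.0, 200
    dx = 1.0 / nx
    dt = 0.4 * dx / abs(a)                       # CFL-stable explicit step
    x = np.linspace(0, 1, nx, endpoint=False)
    u = np.exp(-200 * (x - 0.3)**2)              # initial profile, periodic domain

    fp = lambda u: 0.5 * (a + abs(a)) * u        # right-running flux part
    fm = lambda u: 0.5 * (a - abs(a)) * u        # left-running flux part
    for _ in range(200):
        u = u - dt / dx * ((fp(u) - fp(np.roll(u, 1)))       # backward diff on f+
                           + (fm(np.roll(u, -1)) - fm(u)))   # forward diff on f-
    # Each flux component is differenced in its own upwind direction, which is
    # what makes flux-vector splitting robust for wave-dominated flows.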
AMPHION: Specification-based programming for scientific subroutine libraries
NASA Technical Reports Server (NTRS)
Lowry, Michael; Philpot, Andrew; Pressburger, Thomas; Underwood, Ian; Waldinger, Richard; Stickel, Mark
1994-01-01
AMPHION is a knowledge-based software engineering (KBSE) system that guides a user in developing a diagram representing a formal problem specification. It then automatically implements a solution to this specification as a program consisting of calls to subroutines from a library. The diagram provides an intuitive domain oriented notation for creating a specification that also facilitates reuse and modification. AMPHION's architecture is domain independent. AMPHION is specialized to an application domain by developing a declarative domain theory. Creating a domain theory is an iterative process that currently requires the joint expertise of domain experts and experts in automated formal methods for software development.
Advanced Gas Turbine (AGT) powertrain system
NASA Technical Reports Server (NTRS)
Helms, H. E.; Kaufeld, J.; Kordes, R.
1981-01-01
A 74.5 kW (100 hp) advanced automotive gas turbine engine is described. A design iteration to improve the weight and production cost associated with the original concept is discussed. Major rig tests included 15 hours of compressor testing to 80% design speed and the results are presented. Approximately 150 hours of cold flow testing showed duct loss to be less than the design goal. Combustor test results are presented for initial checkout tests. Turbine design and rig fabrication is discussed. From a materials study of six methods to fabricate rotors, two have been selected for further effort. A discussion of all six methods is given.
The use of Lanczos's method to solve the large generalized symmetric definite eigenvalue problem
NASA Technical Reports Server (NTRS)
Jones, Mark T.; Patrick, Merrell L.
1989-01-01
The generalized eigenvalue problem, Kx = λMx, is of significant practical importance, especially in structural engineering where it arises as the vibration and buckling problem. A new algorithm, LANZ, based on Lanczos's method is developed. LANZ uses a technique called dynamic shifting to improve the efficiency and reliability of the Lanczos algorithm. A new algorithm for solving the tridiagonal matrices that arise when using Lanczos's method is described. A modification of Parlett and Scott's selective orthogonalization algorithm is proposed. Results from an implementation of LANZ on a Convex C-220 show it to be superior to a subspace iteration code.
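The class of computation is illustrated below with SciPy's Lanczos-based sparse eigensolver in shift-invert mode on a vibration-style Kx = λMx problem; LANZ's dynamic shifting and modified selective orthogonalization are its own refinements of this machinery, not what SciPy implements, and the matrices here are a generic toy.

    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import eigsh

    n = 1000
    # Tridiagonal "stiffness" K and diagonal "mass" M, both positive definite.
    K = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
    M = diags([2.0], [0], shape=(n, n), format='csc')

    # Lanczos with shift-invert about sigma = 0 pulls out the lowest modes,
    # the ones of interest in vibration and buckling analysis.
    vals, vecs = eigsh(K, k=5, M=M, sigma=0.0, which='LM')
    print(vals)   # five smallest generalized eigenvalues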
Dual genetic selection of synthetic riboswitches in Escherichia coli.
Nomura, Yoko; Yokobayashi, Yohei
2014-01-01
This chapter describes a genetic selection strategy to engineer synthetic riboswitches that can chemically regulate gene expression in Escherichia coli. Riboswitch libraries are constructed by randomizing the nucleotides that potentially comprise an expression platform and fused to the hybrid selection/screening marker tetA-gfpuv. Iterative ON and OFF selections are performed under appropriate conditions that favor the survival or the growth of the cells harboring the desired riboswitches. After the selection, rapid screening of individual riboswitch clones is performed by measuring GFPuv fluorescence without subcloning. This optimized dual genetic selection strategy can be used to rapidly develop synthetic riboswitches without detailed computational design or structural knowledge.
Preliminary Design of a Helium-Cooled Ceramic Breeder Blanket for CFETR Based on the BIT Concept
NASA Astrophysics Data System (ADS)
Ma, Xuebin; Liu, Songlin; Li, Jia; Pu, Yong; Chen, Xiangcun
2014-04-01
CFETR is the “ITER-like” China fusion engineering test reactor. The design of the breeding blanket is one of the key issues in achieving the required tritium breeding ratio for the self-sufficiency of tritium as a fuel. As one option, a BIT (breeder inside tube) type helium cooled ceramic breeder blanket (HCCB) was designed. This paper presents the design of the BIT-HCCB blanket configuration inside the reactor and its structure, along with neutronics, thermo-hydraulics and thermal stress analyses. These preliminary performance analyses indicate that the design satisfies the requirements and the material allowable limits.
NASA Technical Reports Server (NTRS)
Lee, Katharine K.; Davis, Thomas J.
1995-01-01
Historically, the development of advanced automation for air traffic control in the United States has excluded the input of the air traffic controller until the end of the development process. In contrast, the development of the Final Approach Spacing Tool (FAST), for the terminal area controller, has incorporated the end-user in early, iterative testing. This paper describes a cooperative effort between the controller and the developer to create a tool which incorporates the complexity of the air traffic controller's job. This approach to software development has enhanced the usability of FAST and has helped smooth the introduction of FAST into the operational environment.
Lange, Bernd Markus; Rios-Estepa, Rigoberto
2014-01-01
The integration of mathematical modeling with analytical experimentation in an iterative fashion is a powerful approach to advance our understanding of the architecture and regulation of metabolic networks. Ultimately, such knowledge is highly valuable to support efforts aimed at modulating flux through target pathways by molecular breeding and/or metabolic engineering. In this article we describe a kinetic mathematical model of peppermint essential oil biosynthesis, a pathway that has been studied extensively for more than two decades. Modeling assumptions and approximations are described in detail. We provide step-by-step instructions on how to run simulations of dynamic changes in pathway metabolites concentrations.
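The modeling style, kinetic rate laws integrated as ODEs, can be indicated with a two-step toy pathway under Michaelis-Menten kinetics; the species, constants, and structure below are invented for illustration and are not the actual peppermint essential oil model.

    import numpy as np
    from scipy.integrate import solve_ivp

    # Toy pathway S -> I -> P with two enzymatic (Michaelis-Menten) steps.
    Vmax1, Km1 = 1.0, 0.5
    Vmax2, Km2 = 0.6, 0.3

    def rhs(t, y):
        s, i, p = y
        v1 = Vmax1 * s / (Km1 + s)     # rate of the first enzymatic step
        v2 = Vmax2 * i / (Km2 + i)     # rate of the second enzymatic step
        return [-v1, v1 - v2, v2]

    sol = solve_ivp(rhs, (0.0, 50.0), [1.0, 0.0, 0.0], dense_output=True)
    # sol.y traces the dynamic metabolite concentrations; comparing such
    # simulations with measurements, then refining the constants, is the
    # iterative model-experiment cycle the abstract describes.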
Social and Personal Factors in Semantic Infusion Projects
NASA Astrophysics Data System (ADS)
West, P.; Fox, P. A.; McGuinness, D. L.
2009-12-01
As part of our semantic data framework activities across multiple, diverse disciplines we required the involvement of domain scientists, computer scientists, software engineers, data managers, and often, social scientists. This involvement from a cross-section of disciplines turns out to be a social exercise as much as it is a technical and methodical activity. Each member of the team is used to different modes of working, expectations, vocabularies, levels of participation, and incentive and reward systems. We will examine how both roles and personal responsibilities play out in the development of semantic infusion projects, and how an iterative development cycle can contribute to the successful completion of such a project.
Research at ITER towards DEMO: Specific reactor diagnostic studies to be carried out on ITER
NASA Astrophysics Data System (ADS)
Krasilnikov, A. V.; Kaschuck, Y. A.; Vershkov, V. A.; Petrov, A. A.; Petrov, V. G.; Tugarinov, S. N.
2014-08-01
In ITER, diagnostics will operate in the very harsh radiation environment of a fusion reactor. Extensive technology studies are being carried out during development of the ITER diagnostics and of the procedures for their calibration and remote handling. The results of these studies, and the practical application of the developed diagnostics on ITER, will provide direct input to DEMO diagnostic development. The list of DEMO measurement requirements and diagnostics will be determined during ITER experiments on the basis of ITER plasma physics results and the success of particular diagnostic applications in reactor-like ITER plasmas. The majority of ITER diagnostics have already passed the conceptual design phase and represent the state of the art in fusion plasma diagnostic development. A number of DEMO-relevant results of ITER diagnostic studies, such as the design and prototype manufacture of neutron and γ-ray diagnostics, neutral particle analyzers, optical spectroscopy (including first-mirror protection and cleaning techniques), reflectometry, refractometry, and tritium retention measurements, are discussed.
Naidu, Sailen G; Kriegshauser, J Scott; Paden, Robert G; He, Miao; Wu, Qing; Hara, Amy K
2014-12-01
An ultra-low-dose radiation protocol reconstructed with model-based iterative reconstruction was compared with our standard-dose protocol. This prospective study evaluated 20 men undergoing surveillance-enhanced computed tomography after endovascular aneurysm repair. All patients underwent standard-dose and ultra-low-dose venous phase imaging; images were compared after reconstruction with filtered back projection, adaptive statistical iterative reconstruction, and model-based iterative reconstruction. Objective measures of aortic contrast attenuation and image noise were averaged. Images were subjectively assessed (1 = worst, 5 = best) for diagnostic confidence, image noise, and vessel sharpness. Aneurysm sac diameter and endoleak detection were compared. Quantitative image noise was 26% less with ultra-low-dose model-based iterative reconstruction than with standard-dose adaptive statistical iterative reconstruction and 58% less than with ultra-low-dose adaptive statistical iterative reconstruction. Average subjective noise scores were not different between ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction (3.8 vs. 4.0, P = .25). Subjective scores for diagnostic confidence were better with standard-dose adaptive statistical iterative reconstruction than with ultra-low-dose model-based iterative reconstruction (4.4 vs. 4.0, P = .002). Vessel sharpness was decreased with ultra-low-dose model-based iterative reconstruction compared with standard-dose adaptive statistical iterative reconstruction (3.3 vs. 4.1, P < .0001). Ultra-low-dose model-based iterative reconstruction and standard-dose adaptive statistical iterative reconstruction aneurysm sac diameters were not significantly different (4.9 vs. 4.9 cm); concordance for the presence of endoleak was 100% (P < .001). Compared with a standard-dose technique, an ultra-low-dose model-based iterative reconstruction protocol provides comparable image quality and diagnostic assessment at a 73% lower radiation dose.
Zhang, Kewei; Bhuiya, Mohammad-Wadud; Pazo, Jorge Rencoret; Miao, Yuchen; Kim, Hoon; Ralph, John; Liu, Chang-Jun
2012-01-01
Although the practice of protein engineering is industrially fruitful in creating biocatalysts and therapeutic proteins, applications of analogous techniques in the field of plant metabolic engineering are still in their infancy. Lignins are aromatic natural polymers derived from the oxidative polymerization of primarily three different hydroxycinnamyl alcohols, the monolignols. Polymerization of lignin starts with the oxidation of monolignols, followed by endwise cross-coupling of (radicals of) a monolignol and the growing oligomer/polymer. The para-hydroxyl of each monolignol is crucial for radical generation and subsequent coupling. Here, we describe the structure-function analysis and catalytic improvement of an artificial monolignol 4-O-methyltransferase created by iterative saturation mutagenesis and its use in modulating lignin and phenylpropanoid biosynthesis. We show that expressing the created enzyme in planta, thus etherifying the para-hydroxyls of lignin monomeric precursors, denies the derived monolignols any participation in the subsequent coupling process, substantially reducing lignification and, ultimately, lignin content. Concomitantly, the transgenic plants accumulated de novo synthesized 4-O-methylated soluble phenolics and wall-bound esters. The lower lignin levels of transgenic plants resulted in higher saccharification yields. Our study, through a structure-based protein engineering approach, offers a novel strategy for modulating phenylpropanoid/lignin biosynthesis to improve cell wall digestibility and diversify the repertoires of biologically active compounds. PMID:22851762
Novel imaging analysis system to measure the spatial dimension of engineered tissue construct.
Choi, Kyoung-Hwan; Yoo, Byung-Su; Park, So Ra; Choi, Byung Hyune; Min, Byoung-Hyun
2010-02-01
The measurement of the spatial dimensions of tissue-engineered constructs is very important for their clinical applications. In this study, a novel method to measure the volume of tissue-engineered constructs was developed using iterative mathematical computations. The method measures and analyzes three-dimensional (3D) parameters of a construct to estimate its actual volume using a sequence of software-based mathematical algorithms. The mathematical algorithm is composed of two stages: the shape extraction and the determination of volume. The shape extraction utilized 3D images of a construct: length, width, and thickness, captured by a high-quality camera with a charge-coupled device (CCD). The surface of the 3D images was then divided into fine sections. The area of each section was measured and combined to obtain the total surface area. The 3D volume of the target construct was then mathematically obtained using its total surface area and thickness. The accuracy of the measurement method was verified by comparing the results with those obtained from the hydrostatic weighing method (Korea Research Institute of Standards and Science [KRISS], Korea). The mean difference in volume between the two methods was 0.0313 +/- 0.0003% (n = 5, P = 0.523), with no statistically significant difference. In conclusion, our image-based spatial measurement system is a reliable and easy method to obtain an accurate 3D volume of a tissue-engineered construct.
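The section-wise integration stage can be indicated with a toy height field: divide the imaged surface into fine sections, compute each section's contribution, and sum. The synthetic dome and pixel pitch below are assumptions for illustration; the actual system first derives the 3D shape from CCD images.

    import numpy as np

    nx = ny = 200
    dx = dy = 0.05                                  # assumed mm per pixel
    xs = np.linspace(-1, 1, nx)
    ys = np.linspace(-1, 1, ny)
    X, Y = np.meshgrid(xs, ys)
    # Dome-shaped construct: local thickness over the image plane.
    thickness = np.clip(1.0 - X**2 - Y**2, 0.0, None)

    section_volumes = thickness * dx * dy           # each fine section's volume
    volume = section_volumes.sum()                  # total construct volume
    # Refining the grid and re-summing is the iterative computation in spirit:
    # the estimate converges as the sections become finer.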
NASA Technical Reports Server (NTRS)
Lee, Taesik; Jeziorek, Peter
2004-01-01
Large complex projects cost large sums of money throughout their life cycle for a variety of reasons and causes. For such large programs, the credible estimation of the project cost, a quick assessment of the cost of making changes, and the management of the project budget with effective cost reduction determine the viability of the project. Cost engineering that deals with these issues requires a rigorous method and systematic processes. This paper introduces a logical framework to achieve effective cost engineering. The framework is built upon the Axiomatic Design process. The structure in the Axiomatic Design process provides a good foundation to closely tie engineering design and cost information together. The cost framework presented in this paper is a systematic link between the functional domain (FRs), physical domain (DPs), cost domain (CUs), and a task/process-based model. The FR-DP map relates a system's functional requirements to design solutions across all levels and branches of the decomposition hierarchy. DPs are mapped into CUs, which provides a means to estimate the cost of design solutions - DPs - from the cost of the physical entities in the system - CUs. The task/process model describes the iterative process of developing each of the CUs, and is used to estimate the cost of CUs. By linking the four domains, this framework provides superior traceability from requirements to cost information.
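The traceability chain lends itself to a simple data-structure sketch: map FRs to DPs, DPs to the CUs that realize them, and roll cost back up so each requirement carries a cost. All entries below are invented placeholders, not the paper's example.

    # Functional requirements -> design parameters -> cost units (toy data).
    fr_to_dp = {"FR1 hold pressure": "DP1 casing",
                "FR2 seal shaft":    "DP2 seal"}
    dp_to_cus = {"DP1 casing": ["CU casting", "CU machining"],
                 "DP2 seal":   ["CU seal ring"]}
    cu_cost = {"CU casting": 1200.0, "CU machining": 800.0, "CU seal ring": 150.0}

    def cost_of_fr(fr):
        # Roll cost up from the physical cost units to the requirement they serve.
        dp = fr_to_dp[fr]
        return sum(cu_cost[cu] for cu in dp_to_cus[dp])

    total = sum(cost_of_fr(fr) for fr in fr_to_dp)
    # Because the chain is explicit, the cost of changing any FR or DP can be
    # traced immediately to the affected cost units.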
Programmable polyproteams built using twin peptide superglues
Veggiani, Gianluca; Nakamura, Tomohiko; Brenner, Michael D.; Yan, Jun; Robinson, Carol V.; Howarth, Mark
2016-01-01
Programmed connection of amino acids or nucleotides into chains introduced a revolution in control of biological function. Reacting proteins together is more complex because of the number of reactive groups and delicate stability. Here we achieved sequence-programmed irreversible connection of protein units, forming polyprotein teams by sequential amidation and transamidation. SpyTag peptide is engineered to spontaneously form an isopeptide bond with SpyCatcher protein. By engineering the adhesin RrgA from Streptococcus pneumoniae, we developed the peptide SnoopTag, which formed a spontaneous isopeptide bond to its protein partner SnoopCatcher with >99% yield and no cross-reaction to SpyTag/SpyCatcher. Solid-phase attachment followed by sequential SpyTag or SnoopTag reaction between building-blocks enabled iterative extension. Linear, branched, and combinatorial polyproteins were synthesized, identifying optimal combinations of ligands against death receptors and growth factor receptors for cancer cell death signal activation. This simple and modular route to programmable “polyproteams” should enable exploration of a new area of biological space. PMID:26787909
Wang, Hailong; Sun, Yuqiu; Su, Qinghua; Xia, Xuewen
2018-01-01
The backtracking search optimization algorithm (BSA) is a population-based evolutionary algorithm for numerical optimization problems. BSA has a powerful global exploration capacity while its local exploitation capability is relatively poor. This affects the convergence speed of the algorithm. In this paper, we propose a modified BSA inspired by simulated annealing (BSAISA) to overcome the deficiency of BSA. In the BSAISA, the amplitude control factor (F) is modified based on the Metropolis criterion in simulated annealing. The redesigned F could be adaptively decreased as the number of iterations increases and it does not introduce extra parameters. A self-adaptive ε-constrained method is used to handle the strict constraints. We compared the performance of the proposed BSAISA with BSA and other well-known algorithms when solving thirteen constrained benchmarks and five engineering design problems. The simulation results demonstrated that BSAISA is more effective than BSA and more competitive with other well-known algorithms in terms of convergence speed. PMID:29666635
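A minimal BSA-style loop makes the modification concrete: the historical population supplies the search direction, and the amplitude factor F shrinks as iterations accumulate. The annealing-style schedule for F below is an assumed stand-in for the paper's Metropolis-based formula, the constraint handling is omitted, and all parameters are illustrative.

    import numpy as np

    def sphere(x):
        return np.sum(x**2, axis=1)        # toy unconstrained benchmark

    rng = np.random.default_rng(0)
    n_pop, dim, n_iter = 30, 5, 200
    pop = rng.uniform(-5, 5, (n_pop, dim))
    old = pop.copy()                        # historical population
    fit = sphere(pop)
    for it in range(1, n_iter + 1):
        if rng.random() < rng.random():     # BSA's memory update rule
            old = pop.copy()
        old = old[rng.permutation(n_pop)]
        F = 3.0 * np.exp(-4.0 * it / n_iter)     # decreasing amplitude (assumed form)
        trial = pop + F * (old - pop)            # mutation toward historical experience
        mask = rng.random((n_pop, dim)) < 0.5    # simplified crossover map
        trial = np.where(mask, trial, pop)
        trial = np.clip(trial, -5, 5)
        t_fit = sphere(trial)
        better = t_fit < fit                     # greedy selection
        pop[better], fit[better] = trial[better], t_fit[better]
    best = pop[np.argmin(fit)]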
Rocketdyne LOX bearing tester program
NASA Technical Reports Server (NTRS)
Keba, J. E.; Beatty, R. F.
1988-01-01
The cause, or causes, of the Space Shuttle Main Engine bearing ball wear were unknown; however, several mechanisms were suspected. Two testers were designed and built for operation in liquid oxygen to empirically gain insight into the problems and iterate solutions in a timely and cost-efficient manner independent of engine testing. Schedules and test plans were developed that defined a test matrix consisting of parametric variations of loading, cooling or vapor margin, cage lubrication, material, and geometry studies. Initial test results indicated that the low-pressure pump thrust bearing surface distress is a function of high axial load. Initial high-pressure turbopump bearing tests reproduced the wear phenomenon observed in the turbopump and identified an inadequate vapor margin problem and a coolant flowrate sensitivity issue. These tests provided calibration data for analytical model predictions, giving high confidence in the positive impact of future turbopump design modifications for flight. Various modifications will be evaluated in these testers, since similar turbopump conditions can be produced and the benefit of each modification will be quantified in measured wear life comparisons.
ACTIVIS: Visual Exploration of Industry-Scale Deep Neural Network Models.
Kahng, Minsuk; Andrews, Pierre Y; Kalro, Aditya; Polo Chau, Duen Horng
2017-08-30
While deep learning models have achieved state-of-the-art accuracies for many prediction tasks, understanding these models remains a challenge. Despite the recent interest in developing visual tools to help users interpret deep learning models, the complexity and wide variety of models deployed in industry, and the large-scale datasets they use, pose unique design challenges that are inadequately addressed by existing work. Through participatory design sessions with over 15 researchers and engineers at Facebook, we have developed, deployed, and iteratively improved ACTIVIS, an interactive visualization system for interpreting large-scale deep learning models and results. By tightly integrating multiple coordinated views, such as a computation graph overview of the model architecture and a neuron activation view for pattern discovery and comparison, users can explore complex deep neural network models at both the instance and subset level. ACTIVIS has been deployed on Facebook's machine learning platform. We present case studies with Facebook researchers and engineers, and usage scenarios of how ACTIVIS may work with different models.
Dankel, Dorothy J; Roland, Kenneth L; Fisher, Michael; Brenneman, Karen; Delgado, Ana; Santander, Javier; Baek, Chang-Ho; Clark-Curtiss, Josephine; Strand, Roger; Curtiss, Roy
2014-01-01
Researchers have iterated that the future of synthetic biology and biotechnology lies in novel consumer applications of crossing biology with engineering. However, if the new biology's future is to be sustainable, early and serious efforts must be made towards social sustainability. Therefore, the crux of new applications of synthetic biology and biotechnology is public understanding and acceptance. The RASVaccine is a novel recombinant design not found in nature that re-engineers a common bacterium (Salmonella) to produce a strong immune response in humans. Synthesis of the RASVaccine has the potential to improve public health as an inexpensive, non-injectable product. But how can scientists move forward to create a dialogue that builds a 'common sense' of this new technology and promotes social sustainability? This paper delves into public issues raised around these novel technologies and uses the RASVaccine as an example of meeting the public with a common sense of its possibilities and limitations.
Programmable polyproteams built using twin peptide superglues.
Veggiani, Gianluca; Nakamura, Tomohiko; Brenner, Michael D; Gayet, Raphaël V; Yan, Jun; Robinson, Carol V; Howarth, Mark
2016-02-02
Programmed connection of amino acids or nucleotides into chains introduced a revolution in control of biological function. Reacting proteins together is more complex because of the number of reactive groups and delicate stability. Here we achieved sequence-programmed irreversible connection of protein units, forming polyprotein teams by sequential amidation and transamidation. SpyTag peptide is engineered to spontaneously form an isopeptide bond with SpyCatcher protein. By engineering the adhesin RrgA from Streptococcus pneumoniae, we developed the peptide SnoopTag, which formed a spontaneous isopeptide bond to its protein partner SnoopCatcher with >99% yield and no cross-reaction to SpyTag/SpyCatcher. Solid-phase attachment followed by sequential SpyTag or SnoopTag reaction between building-blocks enabled iterative extension. Linear, branched, and combinatorial polyproteins were synthesized, identifying optimal combinations of ligands against death receptors and growth factor receptors for cancer cell death signal activation. This simple and modular route to programmable "polyproteams" should enable exploration of a new area of biological space.
Use of agents to implement an integrated computing environment
NASA Technical Reports Server (NTRS)
Hale, Mark A.; Craig, James I.
1995-01-01
Integrated Product and Process Development (IPPD) embodies the simultaneous application of both system and quality engineering methods throughout an iterative design process. The use of IPPD results in the time-conscious, cost-saving development of engineering systems. To implement IPPD, a Decision-Based Design perspective is encapsulated in an approach that focuses on the role of the human designer in product development. The approach has two parts and is outlined in this paper. First, an architecture, called DREAMS, is being developed that facilitates design from a decision-based perspective. Second, a supporting computing infrastructure, called IMAGE, is being designed. Agents are used to implement the overall infrastructure on the computer. Successful agent utilization requires that they be made of three components: the resource, the model, and the wrap. Current work is focused on the development of generalized agent schemes and associated demonstration projects. When in place, the technology-independent computing infrastructure will aid the designer in systematically generating knowledge used to facilitate decision-making.
Latimer, Luke N; Dueber, John E
2017-06-01
A common challenge in metabolic engineering is rapidly identifying rate-controlling enzymes in heterologous pathways for subsequent production improvement. We demonstrate a workflow to address this challenge and apply it to improving xylose utilization in Saccharomyces cerevisiae. For eight reactions required for conversion of xylose to ethanol, we screened enzymes for functional expression in S. cerevisiae, followed by a combinatorial expression analysis to achieve pathway flux balancing and identification of limiting enzymatic activities. In the next round of strain engineering, we increased the copy number of these limiting enzymes and again tested the eight-enzyme combinatorial expression library in this new background. This workflow yielded a strain that has a ∼70% increase in biomass yield and ∼240% increase in xylose utilization. Finally, we chromosomally integrated the expression library. This library enriched for strains with multiple integrations of the pathway, which likely were the result of tandem integrations mediated by promoter homology. Biotechnol. Bioeng. 2017;114: 1301-1309. © 2017 Wiley Periodicals, Inc.
Editing plants for virus resistance using CRISPR-Cas.
Green, J C; Hu, J S
This minireview summarizes recent advancements using the clustered regularly interspaced short palindromic repeats-associated nuclease systems (CRISPR-Cas) derived from prokaryotes to breed plants resistant to DNA and RNA viruses. The CRISPR-Cas system represents a powerful tool able to edit and insert novel traits into plants precisely at chosen loci, offering enormous advantages over classical breeding. Approaches to engineering plant virus resistance in both transgenic and non-transgenic plants are discussed. Iterations of the CRISPR-Cas system, FnCas9 and C2c2, capable of editing RNA in eukaryotic cells, offer a particular advantage for providing resistance to RNA viruses, which represent the great majority of known plant viruses. Scientists have obtained conflicting results using gene silencing technology to produce transgenic plants resistant to geminiviruses. CRISPR-Cas systems engineered in plants to target geminiviruses have consistently reduced virus accumulation, providing increased resistance to virus infection. CRISPR-Cas may provide novel and reliable approaches to control geminiviruses and other ssDNA viruses such as Banana bunchy top virus (BBTV).
Multi-Mounted X-Ray Computed Tomography
Fu, Jian; Liu, Zhenzhong; Wang, Jingzheng
2016-01-01
Most existing X-ray computed tomography (CT) techniques work in single-mounted mode and need to scan the inspected objects one by one. This is time-consuming and unacceptable for large-scale inspection. In this paper, we report a multi-mounted CT method and its first engineering implementation. It consists of a multi-mounted scanning geometry and the corresponding algebraic iterative reconstruction algorithm. This approach permits the CT rotation scanning of multiple objects simultaneously without an increase in penetration thickness or signal crosstalk. Compared with conventional single-mounted methods, it has the potential to improve imaging efficiency and suppress artifacts from beam hardening and scatter. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a developed multi-mounted X-ray CT prototype system. We believe that this technique is of particular interest for pushing the engineering applications of X-ray CT. PMID:27073911
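The method pairs its scanning geometry with an algebraic iterative reconstruction. The basic building block of that algorithm family is the ART (Kaczmarz) row-action update; the sketch below shows generic ART on a 2x2-pixel phantom with a hypothetical four-ray geometry, not the authors' exact algorithm.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=200, relax=0.5):
    """Generic ART (Kaczmarz) solver for the ray equations A @ x = b."""
    x = np.zeros(A.shape[1])
    row_norms = (A * A).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):          # one row-action update per ray
            x += relax * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

# 2x2-pixel phantom probed by four axis-aligned rays (hypothetical geometry).
A = np.array([[1., 1., 0., 0.],   # row sums
              [0., 0., 1., 1.],
              [1., 0., 1., 0.],   # column sums
              [0., 1., 0., 1.]])
x_true = np.array([1., 2., 3., 4.])
print(art_reconstruct(A, A @ x_true).round(3))
# Recovers x_true here because it happens to be the minimum-norm solution,
# which Kaczmarz reaches when started from zero on a consistent system.
```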
Advanced Stirling Duplex Materials Assessment for Potential Venus Mission Heater Head Application
NASA Technical Reports Server (NTRS)
Ritzert, Frank; Nathal, Michael V.; Salem, Jonathan; Jacobson, Nathan; Nesbitt, James
2011-01-01
This report will address materials selection for components in a proposed Venus lander system. The lander would use active refrigeration to allow Space Science instrumentation to survive the extreme environment that exists on the surface of Venus. The refrigeration system would be powered by a Stirling engine-based system and is termed the Advanced Stirling Duplex (ASD) concept. Stirling engine power conversion in its simplest definition converts heat from radioactive decay into electricity. Detailed design decisions will require iterations between component geometries, materials selection, system output, and tolerable risk. This study reviews potential component requirements against known materials performance. A lower risk, evolutionary advance in heater head materials could be offered by nickel-base superalloy single crystals, with expected capability of approximately 1100C. However, the high temperature requirements of the Venus mission may force the selection of ceramics or refractory metals, which are more developmental in nature and may not have a well-developed database or a mature supporting technology base such as fabrication and joining methods.
Programming self-organizing multicellular structures with synthetic cell-cell signaling.
Toda, Satoshi; Blauch, Lucas R; Tang, Sindy K Y; Morsut, Leonardo; Lim, Wendell A
2018-05-31
A common theme in the self-organization of multicellular tissues is the use of cell-cell signaling networks to induce morphological changes. We used the modular synNotch juxtacrine signaling platform to engineer artificial genetic programs in which specific cell-cell contacts induced changes in cadherin cell adhesion. Despite their simplicity, these minimal intercellular programs were sufficient to yield assemblies with hallmarks of natural developmental systems: robust self-organization into multi-domain structures, well-choreographed sequential assembly, cell type divergence, symmetry breaking, and the capacity for regeneration upon injury. The ability of these networks to drive complex structure formation illustrates the power of interlinking cell signaling with cell sorting: signal-induced spatial reorganization alters the local signals received by each cell, resulting in iterative cycles of cell fate branching. These results provide insights into the evolution of multi-cellularity and demonstrate the potential to engineer customized self-organizing tissues or materials. Copyright © 2018, American Association for the Advancement of Science.
NASA Astrophysics Data System (ADS)
Wang, Tonghe; Zhu, Lei
2016-09-01
Conventional dual-energy CT (DECT) reconstruction requires two full-size projection datasets acquired with two different energy spectra. In this study, we propose an iterative algorithm to enable a new data acquisition scheme, which requires one full scan and a second sparse-view scan, for potential reduction in imaging dose and engineering cost of DECT. A bilateral filter is calculated as a similarity matrix from the first full-scan CT image to quantify the similarity between any two pixels, which is assumed unchanged on a second CT image since the DECT scans are performed on the same object. The second CT image from reduced projections is reconstructed by an iterative algorithm which updates the image by minimizing the total variation of the difference between the image and its filtered image by the similarity matrix, under a data fidelity constraint. As the redundant structural information of the two CT images is contained in the similarity matrix used for CT reconstruction, we refer to the algorithm as structure-preserving iterative reconstruction (SPIR). The proposed method is evaluated on both digital and physical phantoms, and is compared with the filtered-backprojection (FBP) method, the conventional total-variation-regularization-based algorithm (TVR), and prior-image-constrained compressed sensing (PICCS). SPIR with a second 10-view scan reduces the image noise standard deviation by one order of magnitude while maintaining the same spatial resolution as the full-view FBP image. SPIR substantially improves the reconstruction accuracy of a 10-view scan over TVR, decreasing the reconstruction error from 6.18% to 1.33%, and outperforms TVR on spatial resolution at 50- and 20-view scans, with the spatial frequency at a modulation transfer function value of 10% higher by an average factor of 4. Compared with the 20-view PICCS result, the SPIR image has a noise standard deviation 7 times lower at similar spatial resolution. The electron density map obtained from the SPIR-based DECT images with a second 10-view scan has an average error of less than 1%.
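A heavily simplified sketch of the SPIR idea follows, under assumptions of our own: a toy pixel-subsampling mask stands in for the sparse-view projection operator, a plain symmetric smoothing stands in for the bilateral similarity matrix built from the first scan, and a smoothed-TV gradient descent stands in for whatever solver the paper uses. Every name and parameter below is hypothetical.

```python
import numpy as np

def W(img):
    """Stand-in similarity filter: cheap symmetric smoothing. The paper
    instead builds a bilateral filter from the first full-scan image."""
    return 0.25 * (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1))

def tv_grad(z, eps=1e-6):
    """Gradient of a smoothed isotropic total-variation term."""
    dx = np.roll(z, -1, 1) - z
    dy = np.roll(z, -1, 0) - z
    mag = np.sqrt(dx * dx + dy * dy + eps)
    px, py = dx / mag, dy / mag
    return -(px - np.roll(px, 1, 1)) - (py - np.roll(py, 1, 0))

def spir_like(b, mask, prior, lam=0.1, step=0.25, n_iter=200):
    """Minimize ||mask*x - b||^2 + lam * TV(x - W(x)) by gradient descent."""
    x = prior.copy()                      # warm start from the first scan
    for _ in range(n_iter):
        grad_fid = mask * (mask * x - b)  # data fidelity on measured pixels
        g = tv_grad(x - W(x))             # TV gradient of the difference image
        grad_reg = g - W(g)               # chain rule through (I - W); W symmetric
        x -= step * (grad_fid + lam * grad_reg)
    return x

rng = np.random.default_rng(1)
truth = np.zeros((64, 64)); truth[20:44, 20:44] = 1.0
prior = truth + 0.02 * rng.normal(size=truth.shape)   # "first scan" image
mask = (rng.random(truth.shape) < 0.1).astype(float)  # toy sparse sampling
recon = spir_like(mask * truth, mask, prior)
```

The key design choice mirrored here is that the regularizer penalizes structure in the *difference* between the image and its similarity-filtered version, so edges shared with the first scan are not smoothed away.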
Terascale Optimal PDE Simulations (TOPS) Center
DOE Office of Scientific and Technical Information (OSTI.GOV)
Professor Olof B. Widlund
2007-07-09
Our work has focused on the development and analysis of domain decomposition algorithms for a variety of problems arising in continuum mechanics modeling. In particular, we have extended and analyzed FETI-DP and BDDC algorithms; these iterative solvers were first introduced and studied by Charbel Farhat and his collaborators, see [11, 45, 12], and by Clark Dohrmann of SANDIA, Albuquerque, see [43, 2, 1], respectively. These two closely related families of methods are of particular interest since they are used more extensively than other iterative substructuring methods to solve very large and difficult problems. Thus, the FETI algorithms are part of the SALINAS system developed by the SANDIA National Laboratories for very large scale computations, and as already noted, BDDC was first developed by a SANDIA scientist, Dr. Clark Dohrmann. The FETI algorithms are also making inroads in commercial engineering software systems. We also note that the analysis of these algorithms poses very real mathematical challenges. The success in developing this theory has, in several instances, led to significant improvements in the performance of these algorithms. A very desirable feature of these iterative substructuring and other domain decomposition algorithms is that they respect the memory hierarchy of modern parallel and distributed computing systems, which is essential for approaching peak floating point performance. The development of improved methods, together with more powerful computer systems, is making it possible to carry out simulations in three dimensions, with quite high resolution, relatively easily. This work is supported by high quality software systems, such as Argonne's PETSc library, which facilitates code development as well as access to a variety of parallel and distributed computer systems. The success in finding scalable and robust domain decomposition algorithms for very large numbers of processors and very large finite element problems is, e.g., illustrated in [24, 25, 26]. This work is based on [29, 31]. Our work over these five and a half years has, in our opinion, helped advance the knowledge of domain decomposition methods significantly. We see these methods as providing valuable alternatives to other iterative methods, in particular, those based on multi-grid. In our opinion, our accomplishments also match the goals of the TOPS project quite closely.
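FETI-DP and BDDC themselves are too involved for a few lines, but the core domain-decomposition idea they share — solve local subdomain problems and iterate on interface data — can be shown with a classical alternating Schwarz method on a 1D Poisson problem. This is a toy illustration, not the report's algorithms.

```python
import numpy as np

# Toy alternating Schwarz iteration for -u'' = 1 on (0,1), u(0) = u(1) = 0.
# FETI-DP and BDDC are far more sophisticated, but share the core idea:
# repeatedly solve local subdomain problems and exchange interface data.
n = 101
h = 1.0 / (n - 1)
f = np.ones(n)
u = np.zeros(n)

def local_solve(rhs, left, right):
    """Direct solve of -u'' = rhs on a subdomain with Dirichlet ends."""
    m = len(rhs)
    A = (2.0 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
    b = rhs.copy()
    b[0] += left / h**2
    b[-1] += right / h**2
    return np.linalg.solve(A, b)

# Two overlapping subdomains: interior points 1..60 and 40..99.
for _ in range(20):
    u[1:61] = local_solve(f[1:61], u[0], u[61])
    u[40:100] = local_solve(f[40:100], u[39], u[100])

x = np.linspace(0.0, 1.0, n)
print("max error vs exact:", np.abs(u - 0.5 * x * (1 - x)).max())
```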
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ferrada, Juan J; Reiersen, Wayne T
U.S.-ITER is responsible for the design, engineering, and procurement of the Tokamak Cooling Water System (TCWS). TCWS is designed to provide cooling and baking for client systems that include the first wall/blanket, vacuum vessel, divertor, and neutral beam injector. Additional operations that support these primary functions include chemical control of water provided to client systems, draining and drying for maintenance, and leak detection/localization. TCWS interfaces with 27 systems including the secondary cooling system, which rejects this heat to the environment. TCWS transfers heat generated in the Tokamak during nominal pulsed operation - 850 MW at up to 150 C and 4.2 MPa water pressure. Impurities are diffused from in-vessel components and the vacuum vessel by water baking at 200-240 C at up to 4.4 MPa. TCWS is complex because it serves vital functions for four primary clients whose performance is critical to ITER's success and interfaces with more than 20 additional ITER systems. Conceptual design of this one-of-a-kind cooling system has been completed; however, several issues remain that must be resolved before moving to the next stage of the design process. The 2004 baseline design indicated cooling loops that have no fault tolerance for component failures. During plasma operation, each cooling loop relies on a single pump, a single pressurizer, and one heat exchanger. Consequently, failure of any of these would render TCWS inoperable, resulting in plasma shutdown. The application of reliability, availability, maintainability, and inspectability (RAMI) tools during the different stages of TCWS design is crucial for optimization purposes and for maintaining compliance with project requirements. RAMI analysis will indicate appropriate equipment redundancy that provides graceful degradation in the event of an equipment failure. This analysis helps demonstrate that using proven, commercially available equipment is better than using custom-designed equipment with no field experience and lowers specific costs while providing higher reliability. This paper presents a brief description of the TCWS conceptual design and the application of RAMI tools to optimize the design at different stages during the project.
Optimised Iteration in Coupled Monte Carlo - Thermal-Hydraulics Calculations
NASA Astrophysics Data System (ADS)
Hoogenboom, J. Eduard; Dufek, Jan
2014-06-01
This paper describes an optimised iteration scheme for the number of neutron histories and the relaxation factor in successive iterations of coupled Monte Carlo and thermal-hydraulic reactor calculations based on the stochastic iteration method. The scheme results in an increasing number of neutron histories for the Monte Carlo calculation in successive iteration steps and a decreasing relaxation factor for the spatial power distribution to be used as input to the thermal-hydraulics calculation. The theoretical basis is discussed in detail and practical consequences of the scheme are shown, among them a nearly linear increase per iteration in the number of cycles of the Monte Carlo calculation. The scheme is demonstrated for a full PWR-type fuel assembly. Results are shown for the axial power distribution during several iteration steps. A few alternative iteration methods are also tested, and it is concluded that the presented iteration method is near optimal.
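A minimal sketch of such a coupling loop follows, under assumptions of our own: stub Monte Carlo and thermal-hydraulics solvers, linearly growing history counts, and a relaxation factor set to the current batch's share of all histories so far, in the spirit of stochastic approximation. The paper's exact prescription may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def monte_carlo_power(temps, n_histories):
    """Stub MC solver: a noisy axial power shape with weak temperature
    feedback; the statistical noise shrinks like 1/sqrt(n_histories)."""
    shape = np.cos(np.linspace(-1.2, 1.2, temps.size))
    shape = shape * (1.0 - 1e-4 * (temps - 600.0))
    tally = shape + rng.normal(0.0, 1.0 / np.sqrt(n_histories), temps.size)
    return tally / tally.sum()

def thermal_hydraulics(power):
    """Stub TH solver: coolant heats up along the channel with local power."""
    return 560.0 + 400.0 * np.cumsum(power)

n0 = 1_000                        # histories in the first iteration step
power = np.full(50, 1.0 / 50)     # flat initial power guess
total_histories = 0
for step in range(1, 21):
    n_hist = n0 * step            # history count grows ~linearly per step
    total_histories += n_hist
    alpha = n_hist / total_histories          # relaxation factor decays
    tally = monte_carlo_power(thermal_hydraulics(power), n_hist)
    power += alpha * (tally - power)          # relaxed power update
print("final relaxation factor:", round(alpha, 3))
```

With this choice the relaxed power is an average of all tallies weighted by their history counts, so early noisy batches are progressively down-weighted as the batches grow.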