Energy Efficient Cryogenics on Earth and in Space
NASA Technical Reports Server (NTRS)
Fesmire, James E.
2012-01-01
The Cryogenics Test Laboratory, NASA Kennedy Space Center, works to provide practical solutions to low-temperature problems while focusing on long-term technology targets for energy-efficient cryogenics on Earth and in space.
Space-planning and structural solutions of low-rise buildings: Optimal selection methods
NASA Astrophysics Data System (ADS)
Gusakova, Natalya; Minaev, Nikolay; Filushina, Kristina; Dobrynina, Olga; Gusakov, Alexander
2017-11-01
The present study elaborates a methodology for appropriately selecting space-planning and structural solutions for low-rise buildings. The objective is to work out a system of criteria influencing the selection of the space-planning and structural solutions most suitable for low-rise buildings and structures. Applying these criteria in practice aims to enhance the efficiency of capital investments, save energy and resources, and create comfortable conditions for the population, taking into account the climatic zoning of the construction site. The project's developments can be applied when implementing investment-construction projects of low-rise housing in different kinds of territories based on local building materials. A system of criteria influencing the optimal selection of space-planning and structural solutions for low-rise buildings has been developed, along with a methodological basis for assessing that selection against the requirements of energy efficiency, comfort, safety, and economic efficiency. The elaborated methodology makes it possible to intensify low-rise construction development for different types of territories, taking the climatic zoning of the construction site into account. Stimulation of low-rise construction should be based on a system of scientifically justified approaches, which allows enhancing the energy efficiency, comfort, safety, and economic effectiveness of low-rise buildings.
NASA Technical Reports Server (NTRS)
Meneghelli, Barry J.; Notardonato, William; Fesmire, James E.
2016-01-01
The Cryogenics Test Laboratory, NASA Kennedy Space Center, works to provide practical solutions to low-temperature problems while focusing on long-term technology targets for the energy-efficient use of cryogenics on Earth and in space.
Efficient Jacobian inversion for the control of simple robot manipulators
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1988-01-01
Symbolic inversion of the Jacobian matrix for spherical-wrist arms is investigated. It is shown that, by taking advantage of the simple geometry of these arms, the closed-form solution of the system Q = J⁻¹X, representing a transformation from task space to joint space, can be obtained very efficiently. The solutions for the PUMA, Stanford, and a six-revolute-joint coplanar arm, along with all singular points, are presented. The solution for each joint variable is found as an explicit function of the singular points, which provides better insight into the effect of different singular points on the motion and force exertion of each individual joint. For the above arms, the computational cost of the solution is on the same order as that of the forward kinematic solution, and it is significantly reduced if the forward kinematic solution has already been obtained. A comparison with previous methods shows that this method is the most efficient to date.
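The closed-form idea can be illustrated on a 2-link planar arm (not one of the spherical-wrist arms analyzed above): the 2×2 Jacobian inverts symbolically, with det J = l1·l2·sin(q2) exposing the singularity at q2 = 0 or π. A minimal Python sketch, with all link lengths and rates invented for illustration:

```python
import math

def jacobian(l1, l2, q1, q2):
    """Jacobian of the 2-link planar arm x = l1*cos(q1) + l2*cos(q1+q2),
    y = l1*sin(q1) + l2*sin(q1+q2)."""
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    return [[-l1 * s1 - l2 * s12, -l2 * s12],
            [ l1 * c1 + l2 * c12,  l2 * c12]]

def joint_rates(l1, l2, q1, q2, xdot, ydot):
    """Closed-form qdot = J^(-1) * (xdot, ydot); singular when sin(q2) = 0."""
    det = l1 * l2 * math.sin(q2)          # det J, vanishes at the arm singularity
    if abs(det) < 1e-12:
        raise ValueError("arm is at a singular configuration")
    s1, c1 = math.sin(q1), math.cos(q1)
    s12, c12 = math.sin(q1 + q2), math.cos(q1 + q2)
    qd1 = ( l2 * c12 * xdot + l2 * s12 * ydot) / det
    qd2 = (-(l1 * c1 + l2 * c12) * xdot - (l1 * s1 + l2 * s12) * ydot) / det
    return qd1, qd2

if __name__ == "__main__":
    # Round-trip check: push joint rates through J, then recover them closed-form.
    l1, l2, q1, q2 = 1.0, 0.8, 0.3, 0.7
    qd = (0.2, -0.1)
    J = jacobian(l1, l2, q1, q2)
    xdot = J[0][0] * qd[0] + J[0][1] * qd[1]
    ydot = J[1][0] * qd[0] + J[1][1] * qd[1]
    print(joint_rates(l1, l2, q1, q2, xdot, ydot))
```

The explicit appearance of sin(q2) in the denominator is what the abstract means by the solution being an explicit function of the singular points.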
NASA Astrophysics Data System (ADS)
Nardello, Marco; Centro, Sandro
2017-09-01
TwinFocus® is a CPV solution that adopts quasi-parabolic, off-axis mirrors to obtain a concentration of 760× on 3J solar cells (Azur Space technology) with 44% efficiency. The adoption of this optical solution allows for a cheap, lightweight, and space-efficient system. In particular, the addition of secondary optics to the mirror grants an efficient use of space, with very low thickness and a compact modular design. Materials are recyclable and allow weight to be reduced to a minimum. The product is realized through the cooperation of leading-edge industries active in automotive lighting and plastic-materials molding. The produced prototypes provide up to 27.6% efficiency according to field tests performed under non-optimal spectral conditions.
Inversion Of Jacobian Matrix For Robot Manipulators
NASA Technical Reports Server (NTRS)
Fijany, Amir; Bejczy, Antal K.
1989-01-01
Report discusses inversion of the Jacobian matrix for a class of six-degree-of-freedom arms with spherical wrists, i.e., with the last three joint axes intersecting. Shows that, by taking advantage of the simple geometry of such arms, the closed-form solution of Q = J⁻¹X, which represents the linear transformation from task space to joint space, is obtained efficiently. Presents solutions for the PUMA arm, the JPL/Stanford arm, and a six-revolute-joint coplanar arm, along with all singular points. Main contribution of the paper: the simple geometry of this type of arm is exploited to perform the inverse transformation without any need to compute the Jacobian or its inverse explicitly. Owing to this computational efficiency, advanced task-space control schemes for spherical-wrist arms can be implemented more efficiently.
An Efficient Algorithm for Partitioning and Authenticating Problem-Solutions of eLearning Contents
ERIC Educational Resources Information Center
Dewan, Jahangir; Chowdhury, Morshed; Batten, Lynn
2013-01-01
Content authenticity and correctness is one of the important challenges in eLearning, as there can be many solutions to one specific problem in cyberspace. Therefore, the authors feel it is necessary to map problems to solutions using graph partition and weighted bipartite matching. This article proposes an efficient algorithm to partition…
Tracking fronts in solutions of the shallow-water equations
NASA Astrophysics Data System (ADS)
Bennett, Andrew F.; Cummins, Patrick F.
1988-02-01
A front-tracking algorithm of Chern et al. (1986) is tested on the shallow-water equations, using the Parrett and Cullen (1984) and Williams and Hori (1970) initial state, consisting of smooth finite amplitude waves depending on one space dimension alone. At high resolution the solution is almost indistinguishable from that obtained with the Glimm algorithm. The latter is known to converge to the true frontal solution, but is 20 times less efficient at the same resolution. The solutions obtained using the front-tracking algorithm at 8 times coarser resolution are quite acceptable, indicating a very substantial gain in efficiency, which encourages application in realistic ocean models possessing two or three space dimensions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bernal, Andrés; Patiny, Luc; Castillo, Andrés M.
2015-02-21
Nuclear magnetic resonance (NMR) assignment of small molecules is presented as a typical example of a combinatorial optimization problem in chemical physics. Three strategies that help improve the efficiency of the branch-and-bound solution search are presented: (1) reduction of the size of the solution space by resorting to a condensed structure formula, wherein symmetric nuclei are grouped together; (2) partitioning of the solution space based on symmetry, which becomes the basis for an efficient branching procedure; and (3) a criterion for selecting input restrictions that leads to increased gaps between branches and thus faster pruning of non-viable solutions. Although the examples chosen to illustrate this work focus on small-molecule NMR assignment, the results are generic and might help solve other combinatorial optimization problems.
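The pruning idea behind branch and bound can be sketched generically (this is not the authors' symmetry-based algorithm, and the shift/peak values are invented): match each predicted chemical shift to a distinct observed peak while abandoning any partial assignment whose accumulated mismatch already meets the best complete assignment found so far.

```python
import itertools

def assign(predicted, observed):
    """Branch-and-bound assignment: map each predicted shift to a distinct
    observed peak, minimizing total |predicted - observed|.  Partial branches
    whose cost already reaches the incumbent's cost are pruned."""
    n = len(predicted)
    best = {"cost": float("inf"), "perm": None}

    def branch(i, used, cost, perm):
        if cost >= best["cost"]:          # bound: prune non-viable branch
            return
        if i == n:                        # complete assignment: new incumbent
            best["cost"], best["perm"] = cost, perm[:]
            return
        for j in range(n):
            if j not in used:
                perm.append(j)
                branch(i + 1, used | {j},
                       cost + abs(predicted[i] - observed[j]), perm)
                perm.pop()

    branch(0, frozenset(), 0.0, [])
    return best["perm"], best["cost"]

if __name__ == "__main__":
    predicted = [7.2, 3.4, 1.1, 4.0]      # hypothetical predicted shifts (ppm)
    observed = [1.0, 7.3, 3.9, 3.5]       # hypothetical observed peaks (ppm)
    perm, cost = assign(predicted, observed)
    # Cross-check against exhaustive enumeration of all permutations.
    brute = min(itertools.permutations(range(4)),
                key=lambda p: sum(abs(a - observed[j])
                                  for a, j in zip(predicted, p)))
    print(perm, list(brute), abs(cost - 0.4) < 1e-9)
```

The larger the gap between the incumbent cost and a branch's lower bound, the earlier the pruning triggers, which is exactly why the paper's input-restriction criterion (3) speeds up the search.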
Tensor-product preconditioners for higher-order space-time discontinuous Galerkin methods
NASA Astrophysics Data System (ADS)
Diosady, Laslo T.; Murman, Scott M.
2017-02-01
A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high-order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.
Tensor-Product Preconditioners for Higher-Order Space-Time Discontinuous Galerkin Methods
NASA Technical Reports Server (NTRS)
Diosady, Laslo T.; Murman, Scott M.
2016-01-01
A space-time discontinuous-Galerkin spectral-element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is developed in order to overcome the stiffness associated with high solution order. The use of tensor-product basis functions is key to maintaining efficiency at high order. Efficient preconditioning methods are presented which can take advantage of the tensor-product formulation. A diagonalized Alternating-Direction-Implicit (ADI) scheme is extended to the space-time discontinuous Galerkin discretization. A new preconditioner for the compressible Euler/Navier-Stokes equations based on the fast-diagonalization method is also presented. Numerical results demonstrate the effectiveness of these preconditioners for the direct numerical simulation of subsonic turbulent flows.
Green's function methods in heavy ion shielding
NASA Technical Reports Server (NTRS)
Wilson, John W.; Costen, Robert C.; Shinn, Judy L.; Badavi, Francis F.
1993-01-01
An analytic solution to the heavy ion transport in terms of Green's function is used to generate a highly efficient computer code for space applications. The efficiency of the computer code is accomplished by a nonperturbative technique extending Green's function over the solution domain. The computer code can also be applied to accelerator boundary conditions to allow code validation in laboratory experiments.
Final Scientific Report - Wireless and Sensing Solutions Advancing Industrial Efficiency
DOE Office of Scientific and Technical Information (OSTI.GOV)
Budampati, Rama; McBrady, Adam; Nusseibeh, Fouad
2009-09-28
The project team's goal for the Wireless and Sensing Solution Advancing Industrial Efficiency award (DE-FC36-04GO14002) was to develop, demonstrate, and test a number of leading edge technologies that could enable the emergence of wireless sensor and sampling systems for the industrial market space. This effort combined initiatives in advanced sensor development, configurable sampling and deployment platforms, and robust wireless communications to address critical obstacles in enabling enhanced industrial efficiency.
An efficient solution procedure for the thermoelastic analysis of truss space structures
NASA Technical Reports Server (NTRS)
Givoli, D.; Rand, O.
1992-01-01
A solution procedure is proposed for the thermal and thermoelastic analysis of truss space structures in periodic motion. In this method, the spatial domain is first discretized using a consistent finite element formulation. The resulting semi-discrete equations in time are then solved analytically using Fourier decomposition. Full advantage is taken of geometrical symmetry. An algorithm is presented for the calculation of the heat flux distribution. The method is demonstrated via a numerical example of a cylindrically shaped space structure.
NASA Technical Reports Server (NTRS)
Kurtz, L. A.; Smith, R. E.; Parks, C. L.; Boney, L. R.
1978-01-01
Steady-state solutions to two time-dependent partial differential systems have been obtained by the Method of Lines (MOL) and compared to those obtained by efficient standard finite difference methods: (1) Burgers' equation over a finite space domain by a forward-time central-space explicit method, and (2) the stream function-vorticity form of viscous incompressible fluid flow in a square cavity by an alternating direction implicit (ADI) method. The standard techniques were far more computationally efficient when applicable. In the second example, converged solutions at very high Reynolds numbers were obtained by MOL, whereas solution by ADI was either unattainable or impractical. With regard to set-up time, solution by MOL is an attractive alternative to techniques with complicated algorithms, as much of the programming difficulty is eliminated.
A first-order k-space model for elastic wave propagation in heterogeneous media.
Firouzi, K; Cox, B T; Treeby, B E; Saffari, N
2012-09-01
A pseudospectral model of linear elastic wave propagation is described based on the first-order stress-velocity equations of elastodynamics. k-space adjustments to the spectral gradient calculations are derived from the dyadic Green's function solution to the second-order elastic wave equation and used to (a) ensure the solution is exact for homogeneous wave propagation for time steps of arbitrarily large size, and (b) allow larger time steps without loss of accuracy in heterogeneous media. The formulation in k-space allows the wavefield to be split easily into compressional and shear parts. A perfectly matched layer (PML) absorbing boundary condition was developed to effectively impose a radiation condition on the wavefield. The staggered grid, which is essential for accurate simulations, is described, along with other practical details of the implementation. The model is verified through comparison with exact solutions for canonical examples, and further examples are given to show the efficiency of the method for practical problems. The efficiency of the model is by virtue of the reduced points-per-wavelength requirement, the use of the fast Fourier transform (FFT) to calculate the gradients in k-space, and the larger time steps made possible by the k-space adjustments.
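The core spectral-gradient step can be sketched in plain Python (a naive O(N²) DFT stands in for the FFT the paper relies on, and the sinc-shaped k-space correction factor is shown for an assumed homogeneous sound speed `c` and time step `dt`; with `dt = 0` it reduces to an ordinary spectral derivative):

```python
import cmath
import math

def dft(x):
    """Naive discrete Fourier transform (stand-in for an FFT)."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * math.pi * k * m / n) for m in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * m / n) for k in range(n)) / n
            for m in range(n)]

def spectral_derivative(u, length, c=1.0, dt=0.0):
    """d/dx on a periodic grid via the Fourier transform.  kappa = sinc(c*k*dt/2)
    is the k-space adjustment applied to each wavenumber component."""
    n = len(u)
    U = dft(u)
    out = [0j] * n
    for j in range(n):
        if j == n // 2:                   # drop the Nyquist mode (odd derivative)
            continue
        k = (j if j < n // 2 else j - n) * 2.0 * math.pi / length  # signed wavenumber
        if dt == 0.0 or k == 0.0:
            kappa = 1.0
        else:
            kappa = math.sin(c * k * dt / 2) / (c * k * dt / 2)
        out[j] = 1j * k * kappa * U[j]
    return [v.real for v in idft(out)]

if __name__ == "__main__":
    n, length = 32, 2.0 * math.pi
    xs = [length * m / n for m in range(n)]
    # Spectral derivative of sin(x) should match cos(x) to machine precision.
    du = spectral_derivative([math.sin(x) for x in xs], length)
    print(max(abs(d - math.cos(x)) for d, x in zip(du, xs)))
```

For band-limited data the derivative is exact to roundoff, which is the "reduced points-per-wavelength" advantage the abstract refers to.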
NASA Astrophysics Data System (ADS)
Sahraei, S.; Asadzadeh, M.
2017-12-01
Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as the hypervolume contribution and crowding distance, calculated in the objective space, guide the search toward solutions that are well-distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, beside the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space, and the ones in less crowded clusters are given a greater chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a sparser set of high-quality solutions increases, and therefore the analyst receives a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
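The selection idea — favor parents from sparsely populated regions of the decision space — can be sketched with a simple grid-based clustering (a stand-in for whatever clustering the algorithm actually uses; the archive points below are invented):

```python
from collections import defaultdict

def select_from_sparse_cluster(archive, cell=1.0):
    """Bucket archived decision vectors into grid cells (a crude clustering)
    and return a member of the least crowded cell, so that new solutions are
    generated from sparse regions of the decision space."""
    clusters = defaultdict(list)
    for x in archive:
        key = tuple(int(v // cell) for v in x)   # grid cell acts as a cluster label
        clusters[key].append(x)
    sparsest = min(clusters.values(), key=len)   # least crowded cluster
    return sparsest[0]

if __name__ == "__main__":
    # Nine decision vectors crowded near the origin, one isolated near (5, 5):
    archive = [(0.1 * i, 0.05 * i) for i in range(9)] + [(5.2, 5.3)]
    print(select_from_sparse_cluster(archive))   # the isolated point wins
```

In a real archive-based algorithm this selection would sit beside the dominance check, as the abstract describes; here only the crowding step is shown.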
Environment and the Space Program
ERIC Educational Resources Information Center
Schirra, Walter W., Jr.
1969-01-01
Data collected at projected space station will contribute to solution of environmental problems on earth and will enable more efficient use of earth's natural resources. Adapted from commencement address delivered at Newark College of Engineering, June 5, 1969. (WM)
Way Beyond Widgets: Delivering Integrated Lighting Design in Actionable Solutions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Myer, Michael; Richman, Eric E.; Jones, Carol C.
2008-08-17
Previously, energy-efficiency strategies for commercial spaces have focused on using efficient equipment without providing specific detailed instructions. Designs by experts in their fields are an energy-efficiency product in their own right. A new national program has developed interactive application-specific lighting designs for widespread use in four major commercial sectors. This paper describes the technical basis for the solutions, the energy-efficiency and cost-savings methodology, and installations and measurement/verification to date. Lighting designs have been developed for five types of retail stores (big box, small box, grocery, specialty market, and pharmacy) and are planned for the office, healthcare, and education sectors as well. Nationally known sustainable lighting designers developed the designs using high-performance commercially available products, daylighting, and lighting controls. Input and peer review were received from stakeholders, including manufacturers, architects, utilities, energy-efficiency program sponsors (EEPS), and end-users (i.e., retailers). An interactive web tool delivers the lighting solutions and analyzes anticipated energy savings using project-specific inputs. The lighting solutions were analyzed against a reference building using the space-by-space method as allowed in the Energy Standard for Buildings Except Low-Rise Residential Buildings (ASHRAE 2004), co-sponsored by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and the Illuminating Engineering Society of North America (IESNA). The results showed that the design vignettes ranged from a 9% to 28% reduction in the allowed lighting power density. Detailed control strategies are offered to further reduce the actual kilowatt-hour power consumption. When used together, the lighting design vignettes and control strategies show a modeled decrease in energy consumption (kWh) of 33% to 50% below the baseline design.
Computing Interactions Of Free-Space Radiation With Matter
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Cucinotta, F. A.; Shinn, J. L.; Townsend, L. W.; Badavi, F. F.; Tripathi, R. K.; Silberberg, R.; Tsao, C. H.; Badwar, G. D.
1995-01-01
High Charge and Energy Transport (HZETRN) computer program is a computationally efficient, user-friendly package of software addressing the problem of transport of, and shielding against, radiation in free space. Designed as a "black box" for design engineers not concerned with the physics of the underlying atomic and nuclear radiation processes in the free-space environment, but rather primarily interested in obtaining fast and accurate dosimetric information for the design and construction of modules and devices for use in free space. Computational efficiency achieved by a unique algorithm based on a deterministic approach to solution of the Boltzmann equation rather than the computationally intensive statistical Monte Carlo method. Written in FORTRAN.
Approximate solution of space and time fractional higher order phase field equation
NASA Astrophysics Data System (ADS)
Shamseldeen, S.
2018-03-01
This paper is concerned with a class of space and time fractional partial differential equation (STFDE) with Riesz derivative in space and Caputo in time. The proposed STFDE is considered as a generalization of a sixth-order partial phase field equation. We describe the application of the optimal homotopy analysis method (OHAM) to obtain an approximate solution for the suggested fractional initial value problem. An averaged-squared residual error function is defined and used to determine the optimal convergence control parameter. Two numerical examples are studied, considering periodic and non-periodic initial conditions, to justify the efficiency and the accuracy of the adopted iterative approach. The dependence of the solution on the order of the fractional derivative in space and time and model parameters is investigated.
Ringe, Stefan; Oberhofer, Harald; Hille, Christoph; Matera, Sebastian; Reuter, Karsten
2016-08-09
The size-modified Poisson-Boltzmann (MPB) equation is an efficient implicit solvation model which also captures electrolytic solvent effects. It combines an account of the dielectric solvent response with a mean-field description of solvated finite-sized ions. We present a general solution scheme for the MPB equation based on a fast function-space-oriented Newton method and a Green's function preconditioned iterative linear solver. In contrast to popular multigrid solvers, this approach allows us to fully exploit specialized integration grids and optimized integration schemes. We describe a corresponding numerically efficient implementation for the full-potential density-functional theory (DFT) code FHI-aims. We show that together with an additional Stern layer correction the DFT+MPB approach can describe the mean activity coefficient of a KCl aqueous solution over a wide range of concentrations. The high sensitivity of the calculated activity coefficient to the employed ionic parameters thereby suggests using extensively tabulated experimental activity coefficients of salt solutions for a systematic parametrization protocol.
NASA Technical Reports Server (NTRS)
Simon, Matthew A.; Toups, Larry
2014-01-01
Increased public awareness of carbon footprints, crowding in urban areas, and rising housing costs have spawned a 'small house movement' in the housing industry. Members of this movement desire small, yet highly functional residences which are both affordable and sensitive to consumer comfort standards. In order to create comfortable, minimum-volume interiors, recent advances have been made in furniture design and approaches to interior layout that both improve space utilization and encourage multi-functional design for small homes, apartments, and naval and recreational vehicles. Design efforts in this evolving niche of terrestrial architecture can provide useful insights leading to innovation and efficiency in the design of space habitats for future human space exploration missions. This paper highlights many of the cross-cutting architectural solutions used in small-space design which are applicable to the spacecraft interior design problem. Specific solutions discussed include reconfigurable, multi-purpose spaces; collapsible or transformable furniture; multi-purpose accommodations; efficient, space-saving appliances; stowable and mobile workstations; and the miniaturization of electronics and computing hardware. For each of these design features, descriptions of how they save interior volume or mitigate other small-space issues such as confinement stress or crowding are discussed. Finally, recommendations are given to guide future designs and to identify potential collaborations with the small-spaces design community.
Perazzolo, S; Lewis, R M; Sengers, B G
2017-12-01
A healthy pregnancy depends on placental transfer from mother to fetus. Placental transfer takes place at the micro scale across the placental villi. Solutes from the maternal blood are taken up by placental villi and enter the fetal capillaries. This study investigated the effect of maternal blood flow on solute uptake at the micro scale. A 3D image-based modelling approach of the placental microstructures was undertaken. Solute transport in the intervillous space was modelled explicitly, and solute uptake with respect to different maternal blood flow rates was estimated. Fetal capillary flow was not modelled and was treated as a perfect sink. For a freely diffusing small solute, the flow of maternal blood through the intervillous space was found to limit the transfer. Ignoring the effects of maternal flow resulted in a 2.4 ± 0.4-fold over-prediction of transfer by simple diffusion, in the absence of binding. Villous morphology affected the efficiency of solute transfer due to concentration-depleted zones. Interestingly, less dense microvilli had lower surface area available for uptake, which was compensated by increased flow due to their higher permeability. At super-physiological pressures, maternal flow was not limiting; however, the efficiency of uptake decreased. This study suggests that the interplay between maternal flow and villous structure affects the efficiency of placental transfer, but predicts that flow rate will be the major determinant of transfer. Copyright © 2017 Elsevier Ltd. All rights reserved.
An efficient and practical approach to obtain a better optimum solution for structural optimization
NASA Astrophysics Data System (ADS)
Chen, Ting-Yu; Huang, Jyun-Hao
2013-08-01
For many structural optimization problems, it is hard or even impossible to find the global optimum solution owing to unaffordable computational cost. An alternative and practical way of thinking is thus proposed in this research to obtain an optimum design which may not be global but is better than most local optimum solutions that can be found by gradient-based search methods. The way to reach this goal is to find a smaller search space for gradient-based search methods. It is found in this research that data mining can accomplish this goal easily. The activities of classification, association and clustering in data mining are employed to reduce the original design space. For unconstrained optimization problems, the data mining activities are used to find a smaller search region which contains the global or better local solutions. For constrained optimization problems, it is used to find the feasible region or the feasible region with better objective values. Numerical examples show that the optimum solutions found in the reduced design space by sequential quadratic programming (SQP) are indeed much better than those found by SQP in the original design space. The optimum solutions found in a reduced space by SQP sometimes are even better than the solution found using a hybrid global search method with approximate structural analyses.
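A 1-D toy version of this two-stage idea can be sketched as follows. A crude "mining" step (keeping the best decile of coarse samples, standing in for the paper's classification/clustering activities) shrinks the interval to the global basin, and a gradient-free ternary search (standing in for SQP, so the sketch stays self-contained) refines within it; the multimodal objective is invented:

```python
import math

def f(x):
    """Multimodal toy objective: several local minima, one global basin."""
    return math.sin(3.0 * x) + 0.1 * x * x

def reduced_interval(lo, hi, n=201, keep=8):
    """'Data mining' stand-in: sample coarsely, keep the best few designs, and
    return their bounding interval as the reduced search space."""
    xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    best = sorted(xs, key=f)[:keep]
    return min(best), max(best)

def ternary_search(lo, hi, tol=1e-7):
    """Local refinement inside the reduced interval (assumed unimodal there)."""
    while hi - lo > tol:
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

if __name__ == "__main__":
    lo, hi = reduced_interval(-5.0, 5.0)     # shrinks [-5, 5] to the global basin
    x_opt = ternary_search(lo, hi)
    print(lo, hi, x_opt, f(x_opt))
```

Run naively on the full interval, a single local search could stall in any of the basins; the reduction step makes the cheap local refinement land in the global one, which mirrors the paper's argument for mining before SQP.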
2017-10-26
NASA is working with the Robert Wood Johnson Foundation (RWJF) to sponsor the Earth and Space Air Prize competition for a solution that could improve air quality and health in space and on Earth. This project is a technology innovation challenge to promote the development of robust, durable, inexpensive, efficient, lightweight, and easy-to-use aerosol sensors for space and Earth environments.
A finite state projection algorithm for the stationary solution of the chemical master equation.
Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa
2017-10-21
The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)], to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10⁶ states can be efficiently solved.
A finite state projection algorithm for the stationary solution of the chemical master equation
NASA Astrophysics Data System (ADS)
Gupta, Ankit; Mikelson, Jan; Khammash, Mustafa
2017-10-01
The chemical master equation (CME) is frequently used in systems biology to quantify the effects of stochastic fluctuations that arise due to biomolecular species with low copy numbers. The CME is a system of ordinary differential equations that describes the evolution of probability density for each population vector in the state-space of the stochastic reaction dynamics. For many examples of interest, this state-space is infinite, making it difficult to obtain exact solutions of the CME. To deal with this problem, the Finite State Projection (FSP) algorithm was developed by Munsky and Khammash [J. Chem. Phys. 124(4), 044104 (2006)], to provide approximate solutions to the CME by truncating the state-space. The FSP works well for finite time-periods but it cannot be used for estimating the stationary solutions of CMEs, which are often of interest in systems biology. The aim of this paper is to develop a version of FSP which we refer to as the stationary FSP (sFSP) that allows one to obtain accurate approximations of the stationary solutions of a CME by solving a finite linear-algebraic system that yields the stationary distribution of a continuous-time Markov chain over the truncated state-space. We derive bounds for the approximation error incurred by sFSP and we establish that under certain stability conditions, these errors can be made arbitrarily small by appropriately expanding the truncated state-space. We provide several examples to illustrate our sFSP method and demonstrate its efficiency in estimating the stationary distributions. In particular, we show that using a quantized tensor-train implementation of our sFSP method, problems admitting more than 100 × 10⁶ states can be efficiently solved.
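A minimal illustration of solving a truncated CME for its stationary distribution: an immigration-death process (birth rate λ, death rate μn), where simply dropping births out of the top state keeps the truncated generator a proper Markov chain. This boundary treatment is a cruder redirection than the paper's sFSP, which reroutes outflow to a designated state; the rates are invented. The stationary distribution of this process is Poisson(λ/μ), so a generous truncation should reproduce it:

```python
import math

def stationary_birth_death(birth, death, n_max):
    """Build the truncated generator of the chain with birth rate `birth` and
    death rate `death * n`, then solve pi' = 0 with sum(pi) = 1 by Gaussian
    elimination.  Births out of the top state are dropped so the truncated
    matrix remains a generator."""
    n = n_max + 1
    A = [[0.0] * n for _ in range(n)]
    for s in range(n):
        if s + 1 < n:
            A[s][s] -= birth            # outflow s -> s+1
            A[s + 1][s] += birth        # inflow into s+1 from s
        if s > 0:
            A[s][s] -= death * s        # outflow s -> s-1
            A[s - 1][s] += death * s    # inflow into s-1 from s
    A[n - 1] = [1.0] * n                # replace one (redundant) balance
    b = [0.0] * (n - 1) + [1.0]        # equation by the normalization sum = 1
    # Gaussian elimination with partial pivoting on the tiny dense system.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            m = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= m * A[col][c]
            b[r] -= m * b[col]
    pi = [0.0] * n
    for r in range(n - 1, -1, -1):
        pi[r] = (b[r] - sum(A[r][c] * pi[c] for c in range(r + 1, n))) / A[r][r]
    return pi

if __name__ == "__main__":
    pi = stationary_birth_death(2.0, 1.0, 25)
    exact = [math.exp(-2.0) * 2.0 ** k / math.factorial(k) for k in range(26)]
    print(max(abs(p - e) for p, e in zip(pi, exact)))
```

Replacing one balance equation with the normalization constraint is the standard trick for pinning down the stationary vector, since the balance equations alone are rank-deficient by one.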
Effect of ionic strength and presence of serum on lipoplexes structure monitorized by FRET
Madeira, Catarina; Loura, Luís MS; Prieto, Manuel; Fedorov, Aleksander; Aires-Barros, M Raquel
2008-01-01
Background: Serum and high ionic strength solutions constitute important barriers to cationic lipid-mediated intravenous gene transfer. Preparation or incubation of lipoplexes in these media results in alteration of their biophysical properties, generally leading to a decrease in transfection efficiency. Accurate quantification of these changes is of paramount importance for the success of lipoplex-mediated gene transfer in vivo.
Results: In this work, a novel time-resolved fluorescence resonance energy transfer (FRET) methodology was used to monitor lipoplex structural changes in the presence of phosphate-buffered saline solution (PBS) and fetal bovine serum. 1,2-dioleoyl-3-trimethylammonium-propane (DOTAP)/pDNA lipoplexes, prepared in high and low ionic strength solutions, are compared in terms of complexation efficiency. Lipoplexes prepared in PBS show lower complexation efficiencies than lipoplexes prepared in low ionic strength buffer followed by addition of PBS. Moreover, when serum is added to the referred formulation, no significant effect on the complexation efficiency is observed. In physiological saline solutions and serum, a multilamellar arrangement of the lipoplexes is maintained, with reduced spacing distances between the FRET probes relative to those in low ionic strength medium.
Conclusion: The time-resolved FRET methodology described in this work allowed us to monitor stability and characterize quantitatively the structural changes (variations in interchromophore spacing distances and complexation efficiencies) undergone by DOTAP/DNA complexes in high ionic strength solutions and in the presence of serum, as well as to determine the minimum amount of potentially cytotoxic cationic lipid necessary for complete coverage of DNA. This constitutes essential information for the thoughtful design of future in vivo applications. PMID:18302788
Solution of steady and unsteady transonic-vortex flows using Euler and full-potential equations
NASA Technical Reports Server (NTRS)
Kandil, Osama A.; Chuang, Andrew H.; Hu, Hong
1989-01-01
Two methods are presented for inviscid transonic flows: the unsteady Euler equations in a rotating frame of reference for transonic vortex flows, and an integral solution of the full-potential equation, with and without embedded Euler domains, for transonic airfoil flows. The computational results cover steady and unsteady conical vortex flows, 3-D steady transonic vortex flow, and transonic airfoil flows. The results are in good agreement with other computational results and experimental data. The rotating frame of reference solution is potentially efficient compared with the space-fixed reference formulation with dynamic gridding. The integral equation solution with embedded Euler domains is computationally efficient and as accurate as the Euler equations.
ERIC Educational Resources Information Center
Hourihan, Peter; Berry, Millard, III
2006-01-01
When well-designed and integrated into a campus living or learning space, an atrium can function as the heart and spirit of a building, connecting interior rooms and public spaces with the outside environment. However, schools and universities should seek technological and HVAC solutions that maximize energy efficiency. This article discusses how…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir
An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and a Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time step size. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of least perimeter periodic space partitioning problem. •Developing a penalization strategy to avoid trivial solutions. •Presentation of MATLAB implementation of the introduced algorithm.
Improved dynamic analysis method using load-dependent Ritz vectors
NASA Technical Reports Server (NTRS)
Escobedo-Torres, J.; Ricles, J. M.
1993-01-01
The dynamic analysis of large space structures is important in order to predict their behavior under operating conditions. Computer models of large space structures are characterized by having a large number of degrees of freedom, and the computational effort required to carry out the analysis is very large. Conventional methods of solution utilize a subset of the eigenvectors of the system, but for systems with many degrees of freedom, the solution of the eigenproblem is in many cases the most costly phase of the analysis. For this reason, alternate solution methods need to be considered. It is important that the method chosen for the analysis be efficient and that accurate results be obtainable. The load-dependent Ritz vector method is presented as an alternative to the classical normal mode methods for obtaining dynamic responses of large space structures. A simplified model of a space station is used to compare results. Results show that the load-dependent Ritz vector method predicts the dynamic response better than the classical normal mode method. Even though this alternate method is very promising, further studies are necessary to fully understand its attributes and limitations.
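The load-dependent Ritz recurrence (one static solve against the spatial load, then repeated solves against inertia loads, with mass-orthonormalization at each step) can be sketched as follows; this is a generic illustration of the idea, not the authors' implementation:

```python
import numpy as np

def load_dependent_ritz(K, M, f, n_vec):
    """Generate n_vec load-dependent Ritz vectors for K x'' ... sketch.

    K: stiffness matrix (SPD), M: mass matrix (SPD), f: spatial load
    vector. Returned vectors are M-orthonormal.
    """
    n = K.shape[0]
    V = np.zeros((n, n_vec))
    # First vector: static response to the applied load, M-normalized.
    x = np.linalg.solve(K, f)
    V[:, 0] = x / np.sqrt(x @ M @ x)
    for i in range(1, n_vec):
        # The inertia load of the previous vector drives the next solve.
        x = np.linalg.solve(K, M @ V[:, i - 1])
        # M-orthogonalize against all previous vectors (Gram-Schmidt).
        for j in range(i):
            x -= (V[:, j] @ M @ x) * V[:, j]
        V[:, i] = x / np.sqrt(x @ M @ x)
    return V
```

Because the basis is generated from the load itself, a handful of vectors often captures the forced response that would otherwise require many exact eigenvectors.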
WeightLifter: Visual Weight Space Exploration for Multi-Criteria Decision Making.
Pajer, Stephan; Streit, Marc; Torsney-Weir, Thomas; Spechtenhauser, Florian; Muller, Torsten; Piringer, Harald
2017-01-01
A common strategy in Multi-Criteria Decision Making (MCDM) is to rank alternative solutions by weighted summary scores. Weights, however, are often abstract to the decision maker and can only be set by vague intuition. While previous work supports a point-wise exploration of weight spaces, we argue that MCDM can benefit from a regional and global visual analysis of weight spaces. Our main contribution is WeightLifter, a novel interactive visualization technique for weight-based MCDM that facilitates the exploration of weight spaces with up to ten criteria. Our technique enables users to better understand the sensitivity of a decision to changes of weights, to efficiently localize weight regions where a given solution ranks high, and to filter out solutions which do not rank high enough for any plausible combination of weights. We provide a comprehensive requirement analysis for weight-based MCDM and describe an interactive workflow that meets these requirements. For evaluation, we describe a usage scenario of WeightLifter in automotive engineering and report qualitative feedback from users of a deployed version as well as preliminary feedback from decision makers in multiple domains. This feedback confirms that WeightLifter increases both the efficiency of weight-based MCDM and the awareness of uncertainty in the ultimate decisions.
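The weighted-sum ranking at the core of weight-based MCDM, and WeightLifter's regional question ("over what share of the weight space does a given alternative rank first?"), can be approximated with a small Monte Carlo sketch (illustrative code, not the WeightLifter implementation):

```python
import random

def weighted_rank(scores, weights):
    """Rank alternatives by weighted summary score (higher = better).
    scores[i][j] is the j-th criterion value of alternative i."""
    totals = [sum(w * s for w, s in zip(weights, row)) for row in scores]
    return sorted(range(len(scores)), key=lambda i: -totals[i])

def top_rank_fraction(scores, alt, n_samples=2000, seed=0):
    """Estimate the fraction of the weight simplex on which alternative
    `alt` ranks first: a crude stand-in for WeightLifter's regional
    weight-space analysis (random normalized weights, illustrative)."""
    rng = random.Random(seed)
    k = len(scores[0])
    hits = 0
    for _ in range(n_samples):
        raw = [rng.random() for _ in range(k)]
        total = sum(raw)
        weights = [x / total for x in raw]  # normalize onto the simplex
        if weighted_rank(scores, weights)[0] == alt:
            hits += 1
    return hits / n_samples
```

A large estimated fraction indicates a decision that is insensitive to the exact weights, which is precisely the kind of regional insight point-wise weight exploration misses.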
NASA Astrophysics Data System (ADS)
Doha, E.; Bhrawy, A.
2006-06-01
It is well known that spectral methods (tau, Galerkin, collocation) have a condition number of O(N^4), where N is the number of retained modes of the polynomial approximation. This paper presents some efficient spectral algorithms, with greatly reduced condition numbers, based on the Jacobi-Galerkin methods for second-order elliptic equations in one and two space variables. The key to the efficiency of these algorithms is to construct appropriate base functions, which lead to systems with specially structured matrices that can be efficiently inverted. The complexities of the algorithms are a small multiple of N^(d+1) operations for a d-dimensional domain with (N-1)^d unknowns, while the convergence rates of the algorithms are exponential for problems with smooth solutions.
NASA Technical Reports Server (NTRS)
Englander, Jacob; Englander, Arnold
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by Englander) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness, where efficiency is finding better solutions in less time, and robustness is efficiency that is undiminished (a) by the boundary conditions and internal constraints of the optimization problem being solved, and (b) by variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
Global Search Capabilities of Indirect Methods for Impulsive Transfers
NASA Astrophysics Data System (ADS)
Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong
2015-09-01
An optimization method which combines an indirect method with a homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time, are obtained. The homotopic approach is relatively straightforward to implement and, unlike previous adjoint-estimation methods, does not require an initial guess of the adjoints. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach guarantees obtaining multiple local solutions without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.
Logistics: An integral part of cost efficient space operations
NASA Technical Reports Server (NTRS)
Montgomery, Ann D.
1996-01-01
The logistics of space programs and its history within NASA are discussed, with emphasis on manned space flight and the Space Shuttle program. The lessons learned and the experience gained during these programs are reported on. Key elements of logistics are highlighted, and the problems and issues that can be expected to arise in relation to the support of long-term space operations and future space programs, are discussed. Such missions include the International Space Station program and the reusable launch vehicle. Possible solutions to the problems identified are outlined.
Shielding from space radiations
NASA Technical Reports Server (NTRS)
Chang, C. Ken; Badavi, Forooz F.; Tripathi, Ram K.
1993-01-01
This Progress Report, covering the period of December 1, 1992 to June 1, 1993, presents the development of an analytical solution to the heavy ion transport equation in terms of a Green's function formalism. The results of the mathematical development are recast into a highly efficient computer code for space applications. The efficiency of this algorithm is accomplished by a nonperturbative technique of extending the Green's function over the solution domain. The code may also be applied to accelerator boundary conditions to allow code validation in laboratory experiments. Results from the isotopic version of the code, with 59 isotopes present, for a single-layer target material in the case of an iron beam projectile at 600 MeV/nucleon in water are presented. A listing of the single-layer isotopic version of the code is included.
NASA Astrophysics Data System (ADS)
Zheng, Chang-Jun; Chen, Hai-Bo; Chen, Lei-Lei
2013-04-01
This paper presents a novel wideband fast multipole boundary element approach to 3D half-space/plane-symmetric acoustic wave problems. The half-space fundamental solution is employed in the boundary integral equations so that the tree structure required in the fast multipole algorithm is constructed for the boundary elements in the real domain only. Moreover, a set of symmetric relations between the multipole expansion coefficients of the real and image domains are derived, and the half-space fundamental solution is modified for the purpose of applying such relations to avoid calculating, translating and saving the multipole/local expansion coefficients of the image domain. The wideband adaptive multilevel fast multipole algorithm associated with the iterative solver GMRES is employed so that the present method is accurate and efficient for both low- and high-frequency acoustic wave problems. As for exterior acoustic problems, the Burton-Miller method is adopted to tackle the fictitious eigenfrequency problem involved in the conventional boundary integral equation method. Details on the implementation of the present method are described, and numerical examples are given to demonstrate its accuracy and efficiency.
Parametric State Space Structuring
NASA Technical Reports Server (NTRS)
Ciardo, Gianfranco; Tilgner, Marco
1997-01-01
Structured approaches based on Kronecker operators for the description and solution of the infinitesimal generator of continuous-time Markov chains are receiving increasing interest. However, their main advantage, a substantial reduction in the memory requirements during the numerical solution, comes at a price. Methods based on the "potential state space" allocate a probability vector that might be much larger than actually needed. Methods based on the "actual state space", instead, have an additional logarithmic overhead. We present an approach that realizes the advantages of both methods with none of their disadvantages, by partitioning the local state spaces of each submodel. We apply our results to a model of software rendezvous, and show how they reduce memory requirements while, at the same time, improving the efficiency of the computation.
Kwon, Jeong; Kim, Sung June; Park, Jong Hyoek
2015-06-28
We fabricated a perovskite solar cell with enhanced device efficiency based on the tailored inner space of the TiO2 electrode by utilizing a very short chemical etching process. It was found that the mesoporous TiO2 photoanode treated with a HF solution exhibited remarkably enhanced power conversion efficiencies under simulated AM 1.5G one sun illumination. The controlled inner space and morphology of the etched TiO2 electrode provide an optimized space for perovskite sensitizers and infiltration of a hole transport layer without sacrificing its original electron transport ability, which resulted in higher JSC, FF and VOC values. This simple platform provides new opportunities for tailoring the microstructure of the TiO2 electrode and has great potential in various optoelectronic devices utilizing metal oxide nanostructures.
Tensor-product preconditioners for a space-time discontinuous Galerkin method
NASA Astrophysics Data System (ADS)
Diosady, Laslo T.; Murman, Scott M.
2014-10-01
A space-time discontinuous Galerkin spectral element discretization is presented for direct numerical simulation of the compressible Navier-Stokes equations. An efficient solution technique based on a matrix-free Newton-Krylov method is presented. A diagonalized alternating direction implicit preconditioner is extended to a space-time formulation using entropy variables. The effectiveness of this technique is demonstrated for the direct numerical simulation of turbulent flow in a channel.
A Numerical Scheme for the Solution of the Space Charge Problem on a Multiply Connected Region
NASA Astrophysics Data System (ADS)
Budd, C. J.; Wheeler, A. A.
1991-11-01
In this paper we extend the work of Budd and Wheeler (Proc. R. Soc. London A, 417, 389, 1988), who described a new numerical scheme for the solution of the space charge equation on a simply connected domain, to multiply connected regions. The space charge equation, ∇·(Δφ̄ ∇φ̄) = 0, is a third-order nonlinear partial differential equation for the electric potential φ̄ which models the electric field in the vicinity of a coronating conductor. Budd and Wheeler described a new way of analysing this equation by constructing an orthogonal coordinate system (φ̄, ψ̄) and recasting the equation in terms of x, y, and ∇φ̄ as functions of (φ̄, ψ̄). This transformation is singular on multiply connected regions, and in this paper we show how this may be overcome to provide an efficient numerical scheme for the solution of the space charge equation on such regions. This scheme also provides a new method for the solution of Laplace's equation and the calculation of orthogonal meshes on multiply connected regions.
Ecological Safety of the Internal Space of the Cattle-Breeding Facility (Cowshed)
NASA Astrophysics Data System (ADS)
Potseluev, A. A.; Nazarov, I. V.; Tolstoukhova, T. N.; Kostenko, M. V.
2018-01-01
The article emphasizes the importance of maintaining the ecological safety of the internal airspace. The factors affecting the state of the air in the internal space of the cattle-breeding facility (cowshed) are revealed. Technical and technological solutions providing for a reduction in the airspace contamination of the livestock facility are proposed. The results of investigations of a technological operation for treating the skin integuments of cows with activated water are disclosed, as well as the constructive solution of a heat and power unit that ensures a change in the hydrogen index (pH) of the treated water. A justification of the efficiency of the proposed technical and technological solutions is given.
NASA Astrophysics Data System (ADS)
Vdovin, V. F.; Grachev, V. G.; Dryagin, S. Yu.; Eliseev, A. I.; Kamaletdinov, R. K.; Korotaev, D. V.; Lesnov, I. V.; Mansfeld, M. A.; Pevzner, E. L.; Perminov, V. G.; Pilipenko, A. M.; Sapozhnikov, B. D.; Saurin, V. P.
2016-01-01
We report a design solution for a highly reliable, low-noise and extremely efficient cryogenically cooled transmit/receive unit for a large antenna system meant for radio-astronomical observations and deep-space communications in the X band. We describe our design solution and the results of a series of laboratory and antenna tests carried out in order to investigate the properties of the cryogenically cooled low-noise amplifier developed. The transmit/receive unit designed for deep-space communications (Mars missions, radio observatories located at Lagrangian point L2, etc.) was used in practice for communication with live satellites including "Radioastron" observatory, which moves in a highly elliptical orbit.
Leap-dynamics: efficient sampling of conformational space of proteins and peptides in solution.
Kleinjung, J; Bayley, P; Fraternali, F
2000-03-31
A molecular simulation scheme, called Leap-dynamics, that provides efficient sampling of protein conformational space in solution is presented. The scheme is a combined approach using a fast sampling method, imposing conformational 'leaps' to force the system over energy barriers, and molecular dynamics (MD) for refinement. The presence of solvent is approximated by a potential of mean force depending on the solvent-accessible surface area. The method has been successfully applied to N-acetyl-L-alanine-N-methylamide (alanine dipeptide), sampling experimentally observed conformations inaccessible to MD alone under the chosen conditions. The method correctly predicts the increased partial flexibility of the mutant Y35G compared to native bovine pancreatic trypsin inhibitor. In particular, the improvement over MD consists of the detection of conformational flexibility that corresponds closely to slow motions identified by nuclear magnetic resonance techniques.
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Englander, Arnold C.
2014-01-01
Trajectory optimization methods using monotonic basin hopping (MBH) have become well developed during the past decade [1, 2, 3, 4, 5, 6]. An essential component of MBH is a controlled random search through the multi-dimensional space of possible solutions. Historically, the randomness has been generated by drawing random variables (RVs) from a uniform probability distribution. Here, we investigate generating the randomness by drawing the RVs from Cauchy and Pareto distributions, chosen because of their characteristic long tails. We demonstrate that using Cauchy distributions (as first suggested by J. Englander [3, 6]) significantly improves MBH performance, and that Pareto distributions provide even greater improvements. Improved performance is defined in terms of efficiency and robustness. Efficiency is finding better solutions in less time. Robustness is efficiency that is undiminished (a) by the boundary conditions and internal constraints of the optimization problem being solved, and (b) by variations in the parameters of the probability distribution. Robustness is important for achieving performance improvements that are not problem specific. In this work we show that the performance improvements are the result of how these long-tailed distributions enable MBH to search the solution space faster and more thoroughly. In developing this explanation, we use the concepts of sub-diffusive, normally-diffusive, and super-diffusive random walks (RWs) originally developed in the field of statistical physics.
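A minimal sketch of the MBH loop with Cauchy-distributed hops is given below; the coordinate-descent "local polish", benchmark function, and parameter values are illustrative stand-ins for the trajectory-optimization machinery, not the authors' code:

```python
import math, random

def local_search(f, x, step=0.25, iters=60):
    """Crude coordinate-descent polish standing in for a real local solver."""
    fx = f(x)
    for _ in range(iters):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5  # shrink the step when stuck, pattern-search style
    return x, fx

def mbh(f, x0, n_iter=400, scale=0.5, seed=1):
    """Monotonic basin hopping with Cauchy-distributed hops.

    The heavy tails let the search occasionally jump far between basins,
    the behaviour the abstract credits for faster, more thorough search."""
    rng = random.Random(seed)
    best_x, best_f = local_search(f, list(x0))
    for _ in range(n_iter):
        # Cauchy draw via inverse CDF: tan(pi*(u - 1/2)) has heavy tails.
        cand = [x + scale * math.tan(math.pi * (rng.random() - 0.5))
                for x in best_x]
        cand, fc = local_search(f, cand)
        if fc < best_f:  # monotonic: keep only strict improvements
            best_x, best_f = cand, fc
    return best_x, best_f

def rastrigin(x):
    """A standard multimodal benchmark with many regularly spaced minima."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)
```

Swapping the `tan`-based draw for a uniform or Pareto draw is a one-line change, which is what makes the distribution comparison in the abstract easy to run.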
An efficient genetic algorithm for maximum coverage deployment in wireless sensor networks.
Yoon, Yourim; Kim, Yong-Hyuk
2013-10-01
Sensor networks have many applications, such as battlefield surveillance, environmental monitoring, and industrial diagnostics. Coverage is one of the most important performance metrics for sensor networks since it reflects how well a sensor field is monitored. In this paper, we introduce the maximum coverage deployment problem in wireless sensor networks and analyze the properties of the problem and its solution space. Random deployment is the simplest way to deploy sensor nodes but may cause unbalanced deployment; therefore, we need a more intelligent way of deploying sensors. We found that the phenotype space of the problem is, in a mathematical view, a quotient space of the genotype space. Based on this property, we propose an efficient genetic algorithm using a novel normalization method. A Monte Carlo method is adopted to design an efficient evaluation function, and its computation time is decreased without loss of solution quality using a method that starts from a small number of random samples and gradually increases the number for subsequent generations. The proposed genetic algorithm could be further improved by combining it with a well-designed local search. The performance of the proposed genetic algorithm is shown by a comparative experimental study. When compared with random deployment and existing methods, our genetic algorithm was not only about twice as fast, but also showed a significant improvement in solution quality.
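The Monte Carlo evaluation function described above (estimate the covered fraction of the field by sampling random points against the sensors' disk-shaped ranges) might look like the following sketch; field size, radius, and sample count are illustrative:

```python
import random

def coverage_fraction(sensors, radius, field=1.0, n_samples=5000, seed=0):
    """Monte Carlo estimate of the fraction of a square field (side
    `field`) covered by disk-shaped sensing ranges. Mirrors the
    abstract's sampling-based evaluation function; all parameter
    values here are illustrative."""
    rng = random.Random(seed)
    r2 = radius * radius
    hits = 0
    for _ in range(n_samples):
        px, py = rng.random() * field, rng.random() * field
        # A sample point is covered if any sensor disk contains it.
        if any((px - sx) ** 2 + (py - sy) ** 2 <= r2 for sx, sy in sensors):
            hits += 1
    return hits / n_samples
```

The progressive-sampling trick in the abstract corresponds to calling this with a small `n_samples` in early GA generations and a larger one for later generations.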
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malcolm Pitts; Jie Qi; Dan Wilson
2005-04-01
Gelation technologies have been developed to provide more efficient vertical sweep efficiencies for flooding naturally fractured oil reservoirs or more efficient areal sweep efficiency for those with high-permeability-contrast ''thief zones''. The field-proven alkaline-surfactant-polymer technology economically recovers 15% to 25% OOIP more oil than waterflooding from the swept pore space of an oil reservoir. However, alkaline-surfactant-polymer technology is not amenable to naturally fractured reservoirs or those with thief zones because much of the injected solution bypasses the target pore space containing oil. This work investigates whether combining these two technologies could broaden the applicability of alkaline-surfactant-polymer flooding into these reservoirs. A prior fluid-fluid report discussed the interaction of different gel chemical compositions and alkaline-surfactant-polymer solutions. Gel solutions under the dynamic conditions of linear corefloods showed similar stability to alkaline-surfactant-polymer solutions as in the fluid-fluid analyses. Aluminum-polyacrylamide flowing gels are not stable to alkaline-surfactant-polymer solutions of either pH 10.5 or 12.9. Chromium acetate-polyacrylamide flowing and rigid flowing gels are stable to subsequent alkaline-surfactant-polymer solution injection. Rigid flowing chromium acetate-polyacrylamide gels maintained permeability reduction better than flowing chromium acetate-polyacrylamide gels. Silicate-polyacrylamide gels are not stable to subsequent injection of either a pH 10.5 or a pH 12.9 alkaline-surfactant-polymer solution. Chromium acetate-xanthan gum rigid gels are not stable to subsequent alkaline-surfactant-polymer solution injection. Resorcinol-formaldehyde gels were stable to subsequent alkaline-surfactant-polymer solution injection. When evaluated in a dual-core configuration, injected fluid flows into the core with the greatest effective permeability to the injected fluid.
The same gel stability trends with respect to subsequently injected alkaline-surfactant-polymer solution were observed. The aluminum citrate-polyacrylamide, resorcinol-formaldehyde, and silicate-polyacrylamide gel systems did not produce significant incremental oil in linear corefloods. Both flowing and rigid flowing chromium acetate-polyacrylamide gels and the xanthan gum-chromium acetate gel system produced incremental oil, with the rigid flowing gel producing the greatest amount. Higher oil recovery could have been due to higher differential pressures across the cores. None of the gels tested appeared to alter alkaline-surfactant-polymer solution oil recovery. Total oil recoveries from the waterflood plus chemical flood sequence were all similar.
Solving Upwind-Biased Discretizations. 2; Multigrid Solver Using Semicoarsening
NASA Technical Reports Server (NTRS)
Diskin, Boris
1999-01-01
This paper studies a novel multigrid approach to the solution of a second-order upwind-biased discretization of the convection equation in two dimensions. This approach is based on semi-coarsening and well-balanced explicit correction terms added to coarse-grid operators to maintain on the coarse grids the same cross-characteristic interaction as on the target (fine) grid. Colored relaxation schemes are used on all the levels, allowing a very efficient parallel implementation. The results of the numerical tests can be summarized as follows: 1) The residual asymptotic convergence rate of the proposed V(0, 2) multigrid cycle is about 3 per cycle. This convergence rate far surpasses the theoretical limit (4/3) predicted for standard multigrid algorithms using full coarsening. The reported efficiency does not deteriorate with increasing cycle depth (number of levels) and/or refining the target-grid mesh spacing. 2) The full multigrid algorithm (FMG) with two V(0, 2) cycles on the target grid and just one V(0, 2) cycle on all the coarse grids always provides an approximate solution with an algebraic error less than the discretization error. Estimates of the total work in the FMG algorithm range between 18 and 30 minimal work units (depending on the target discretization). Thus, the overall efficiency of the FMG solver closely approaches (if it does not achieve) the goal of textbook multigrid efficiency. 3) A novel approach to deriving a discrete solution approximating the true continuous solution with a relative accuracy given in advance is developed. An adaptive multigrid algorithm (AMA), using comparison of the solutions on two successive target grids to estimate the accuracy of the current target-grid solution, is defined. A desired relative accuracy is accepted as an input parameter. The final target grid on which this accuracy can be achieved is chosen automatically in the solution process. The actual relative accuracy of the discrete solution approximation obtained by AMA is always better than the required accuracy, and the computational complexity of the AMA algorithm is (nearly) optimal (comparable with the complexity of the FMG algorithm applied to solve the problem on the optimally spaced target grid).
The IASI cold box subsystem (CBS) a passive cryocooler for cryogenic detectors and optics
NASA Astrophysics Data System (ADS)
Bailly, B.; Courteau, P.; Maciaszek, T.
2017-11-01
In space, cooling infrared detectors and optics down to cryogenic temperatures always raises the same issue: what is the best way to simultaneously manage thermal cooling, stability, mechanical decoupling, and accurate location of the focal plane components in a lightweight and compact solution? The passive cryocooler developed by Alcatel Space Industries under CNES contract in the frame of the IASI instrument (Infrared Atmospheric Sounding Interferometer) offers an efficient solution for 90 K to 100 K temperature levels. We present the architecture and the performance validation plan of the CBS.
NASA Technical Reports Server (NTRS)
Thacker, B. H.; Mcclung, R. C.; Millwater, H. R.
1990-01-01
An eigenvalue analysis of a typical space propulsion system turbopump blade is presented using an approximate probabilistic analysis methodology. The methodology was developed originally to investigate the feasibility of computing probabilistic structural response using closed-form approximate models. This paper extends the methodology to structures for which simple closed-form solutions do not exist. The finite element method will be used for this demonstration, but the concepts apply to any numerical method. The results agree with detailed analysis results and indicate the usefulness of using a probabilistic approximate analysis in determining efficient solution strategies.
Genetic Algorithm Optimizes Q-LAW Control Parameters
NASA Technical Reports Server (NTRS)
Lee, Seungwon; von Allmen, Paul; Petropoulos, Anastassios; Terrile, Richard
2008-01-01
A document discusses a multi-objective genetic algorithm designed to optimize Lyapunov feedback control law (Q-law) parameters in order to efficiently find Pareto-optimal solutions for low-thrust trajectories for electric propulsion systems. These would be propellant-optimal solutions for a given flight time, or flight-time-optimal solutions for a given propellant requirement. The approximate solutions are used as good initial solutions for high-fidelity optimization tools. When the good initial solutions are used, the high-fidelity optimization tools quickly converge to a locally optimal solution near the initial solution. Q-law control parameters are represented as real-valued genes in the genetic algorithm. The performances of the Q-law control parameters are evaluated in the multi-objective space (flight time vs. propellant mass) and sorted by the non-dominated sorting method, which assigns a better fitness value to solutions that are dominated by a smaller number of other solutions. With the ranking result, the genetic algorithm encourages the solutions with higher fitness values to participate in the reproduction process, improving the solutions in the evolution process. The population of solutions converges to the Pareto front that is permitted within the Q-law control parameter space.
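The non-dominated sorting idea (solutions dominated by fewer others receive better fitness, and the undominated survivors form the Pareto front in the flight-time vs. propellant-mass plane) can be sketched generically, here in minimization form rather than as the Q-law tool itself:

```python
def dominates(a, b):
    """True if solution a dominates b: no worse on every objective and
    strictly better on at least one (minimization on all objectives)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset, e.g. (flight time, propellant mass) pairs."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

def domination_counts(points):
    """Number of dominators per point; fewer dominators means better
    fitness, as in the non-dominated sorting described above."""
    return [sum(dominates(q, p) for q in points) for p in points]
```

Selection pressure then favors low domination counts, so the population drifts toward the Pareto front over generations.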
NASA Astrophysics Data System (ADS)
Zabihi, F.; Saffarian, M.
2016-07-01
The aim of this article is to obtain the numerical solution of the two-dimensional KdV-Burgers equation. We construct the solution using a different approach, based on collocation points. The solution uses the thin plate spline radial basis function, which builds an approximated solution by discretizing time and space into small steps. We use a predictor-corrector scheme to avoid solving the nonlinear system. The results of numerical experiments are compared with analytical solutions to confirm the accuracy and efficiency of the presented scheme.
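The thin-plate-spline RBF building block (collocation on scattered points, augmented with a linear polynomial term for solvability) can be sketched as follows; this illustrates plain interpolation only, not the paper's predictor-corrector KdV-Burgers scheme:

```python
import numpy as np

def tps_kernel(r):
    """Thin-plate-spline kernel phi(r) = r^2 log r, with phi(0) = 0."""
    out = np.zeros_like(r)
    nz = r > 0
    out[nz] = r[nz] ** 2 * np.log(r[nz])
    return out

def tps_fit(centers, values):
    """Fit s(x) = sum_i c_i phi(|x - x_i|) + a0 + a1 x + a2 y on 2D
    scattered points. Generic TPS collocation sketch; the linear
    polynomial block P makes the saddle-point system nonsingular."""
    n = len(centers)
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    A = tps_kernel(d)
    P = np.hstack([np.ones((n, 1)), centers])
    K = np.block([[A, P], [P.T, np.zeros((3, 3))]])
    rhs = np.concatenate([values, np.zeros(3)])
    return np.linalg.solve(K, rhs)

def tps_eval(coef, centers, x):
    """Evaluate the fitted interpolant at a single point x."""
    n = len(centers)
    d = np.linalg.norm(x[None, :] - centers, axis=1)
    return (tps_kernel(d) @ coef[:n]
            + coef[n] + coef[n + 1] * x[0] + coef[n + 2] * x[1])
```

In a time-stepping PDE scheme, the same interpolation matrix is reused each step, with the right-hand side updated from the previous time level.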
3D Reconfigurable MPSoC for Unmanned Spacecraft Navigation
NASA Astrophysics Data System (ADS)
Dekoulis, George
2016-07-01
This paper describes the design of a new lightweight spacecraft navigation system for unmanned space missions. The system addresses the demands for more efficient autonomous navigation in the near-Earth environment or deep space. The proposed instrumentation is directly suitable for unmanned systems operation and testing of new airborne prototypes for remote sensing applications. The system features a new sensor technology and significant improvements over existing solutions. Fluxgate type sensors have been traditionally used in unmanned defense systems such as target drones, guided missiles, rockets and satellites, however, the guidance sensors' configurations exhibit lower specifications than the presented solution. The current implementation is based on a recently developed material in a reengineered optimum sensor configuration for unprecedented low-power consumption. The new sensor's performance characteristics qualify it for spacecraft navigation applications. A major advantage of the system is the efficiency in redundancy reduction achieved in terms of both hardware and software requirements.
NASA Astrophysics Data System (ADS)
Shiangjen, Kanokwatt; Chaijaruwanich, Jeerayut; Srisujjalertwaja, Wijak; Unachak, Prakarn; Somhom, Samerkae
2018-02-01
This article presents an efficient heuristic placement algorithm, namely, a bidirectional heuristic placement, for solving the two-dimensional rectangular knapsack packing problem. The heuristic maximizes space utilization by fitting the most appropriate rectangle from both sides of the wall of the current residual space, layer by layer. An iterative local search along with a shift strategy is developed and applied to the heuristic to balance exploitation and exploration in the solution space without tuning any parameters. Experimental results on packing problems of many scales show that this approach can produce high-quality solutions for most of the benchmark datasets, especially for large-scale problems, within reasonable computational time.
NASA Astrophysics Data System (ADS)
Vrugt, Jasper A.; Beven, Keith J.
2018-04-01
This essay illustrates some recent developments to the DiffeRential Evolution Adaptive Metropolis (DREAM) MATLAB toolbox of Vrugt (2016) to delineate and sample the behavioural solution space of set-theoretic likelihood functions used within the GLUE (Limits of Acceptability) framework (Beven and Binley, 1992, 2014; Beven and Freer, 2001; Beven, 2006). This work builds on the DREAM(ABC) algorithm of Sadegh and Vrugt (2014) and significantly enhances the accuracy and CPU efficiency of Bayesian inference with GLUE. In particular, it is shown how a lack of adequate sampling in the model space can lead to unjustified model rejection.
Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.; Pulliam, Thomas H.
2003-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto-optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
Biomimetics on seed dispersal: survey and insights for space exploration.
Pandolfi, Camilla; Izzo, Dario
2013-06-01
Seeds provide the vital genetic link and dispersal agent between successive generations of plants. Without seed dispersal as a means of reproduction, many plants would quickly die out. Because plants lack any sort of mobility and remain in the same spot for their entire lives, they rely on seed dispersal to transport their offspring throughout the environment. This can be accomplished either collectively or individually; in any case, as seeds ultimately relinquish control of their movement, they are at the mercy of environmental factors. Thus, seed dispersal strategies are characterized by robustness, adaptability, intelligence (both behavioral and morphological), and mass and energy efficiency (including the ability to utilize environmental sources of energy): all qualities that advanced engineering systems aim at in general, and in particular those that need to enable complex endeavors such as space exploration. Plants evolved and adapted their strategies according to their environment, and taken together, they embody many desirable characteristics that a space mission needs to have. Understanding in detail how plants control the development of seeds, fabricate structural components for their dispersal, build molecular machineries to keep seeds dormant up to the right moment, and monitor the environment to release them at the right time could provide several solutions impacting current space mission design practices. It can lead to miniaturization, higher integration and packing efficiency, energy efficiency, and higher autonomy and robustness. Consequently, there would appear to be good reasons for considering biomimetic solutions from the plant kingdom when designing space missions, especially to other celestial bodies, whose solid and liquid surfaces, atmospheres, and so on constitute an obvious parallel with the terrestrial environment in which plants evolved. In this paper, we review the current state of biomimetics on seed dispersal to improve space mission design.
Megchelenbrink, Wout; Huynen, Martijn; Marchiori, Elena
2014-01-01
Constraint-based models of metabolic networks are typically underdetermined, because they contain more reactions than metabolites. Therefore, the solutions to this system do not consist of unique flux rates for each reaction, but rather a space of possible flux rates. By uniformly sampling this space, an estimated probability distribution for each reaction's flux in the network can be obtained. However, sampling a high-dimensional network is time-consuming. Furthermore, the constraints imposed on the network give rise to an irregularly shaped solution space. Therefore, more tailored, efficient sampling methods are needed. We propose an efficient sampling algorithm (called optGpSampler), which implements the Artificial Centering Hit-and-Run algorithm in a different manner than the sampling algorithm implemented in the COBRA Toolbox for metabolic network analysis, here called gpSampler. Results of extensive experiments on different genome-scale metabolic networks show that optGpSampler is up to 40 times faster than gpSampler. Application of existing convergence diagnostics on small network reconstructions indicates that optGpSampler converges roughly ten times faster than gpSampler towards similar sampling distributions. For networks of higher dimension (i.e., containing more than 500 reactions), we observed significantly better convergence of optGpSampler and a large deviation between the samples generated by the two algorithms. optGpSampler for Matlab and Python is available for non-commercial use at: http://cs.ru.nl/~wmegchel/optGpSampler/.
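The hit-and-run family of samplers underlying both tools works by repeatedly picking a random direction and jumping to a uniform point on the chord that direction cuts through the polytope {x : Ax <= b}. A bare-bones sketch (without the artificial centering refinement, and on a toy box rather than a metabolic flux space):

```python
import numpy as np

def hit_and_run(A, b, x0, n_samples, seed=0):
    # uniform sampling of the polytope {x : A x <= b} by hit-and-run,
    # starting from a strictly interior point x0
    rng = np.random.default_rng(seed)
    x, samples = x0.astype(float).copy(), []
    for _ in range(n_samples):
        d = rng.normal(size=x.size)
        d /= np.linalg.norm(d)                  # random direction on the unit sphere
        Ad, slack = A @ d, b - A @ x
        with np.errstate(divide="ignore", invalid="ignore"):
            t = slack / Ad                      # chord parameter at each facet
        tmax = np.min(t[Ad > 1e-12], initial=np.inf)
        tmin = np.max(t[Ad < -1e-12], initial=-np.inf)
        x = x + rng.uniform(tmin, tmax) * d     # uniform point on the feasible chord
        samples.append(x.copy())
    return np.array(samples)
```

The artificial centering variant additionally biases the direction choice using the running mean of past samples, which speeds mixing in elongated solution spaces.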
ERIC Educational Resources Information Center
JAMRICH, JOHN X.
A solution to problems of growing college enrollments is to increase the efficiency of use of existing space to make room for more students, rather than to restrict enrollments or to create more space. Planning of college facilities must include analysis of the present plant, the instructional program, the student body, and the financial…
Technology for Manufacturing Efficiency
NASA Technical Reports Server (NTRS)
1995-01-01
The Ground Processing Scheduling System (GPSS) was developed by Ames Research Center, Kennedy Space Center and divisions of the Lockheed Company to maintain the scheduling for preparing a Space Shuttle Orbiter for a mission. Red Pepper Software Company, now part of PeopleSoft, Inc., commercialized the software as their ResponseAgent product line. The software enables users to monitor manufacturing variables, report issues and develop solutions to existing problems.
Parallel CE/SE Computations via Domain Decomposition
NASA Technical Reports Server (NTRS)
Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung
2000-01-01
This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.
Compositional Solution Space Quantification for Probabilistic Software Analysis
NASA Technical Reports Server (NTRS)
Borges, Mateus; Pasareanu, Corina S.; Filieri, Antonio; d'Amorim, Marcelo; Visser, Willem
2014-01-01
Probabilistic software analysis aims at quantifying how likely a target event is to occur during program execution. Current approaches rely on symbolic execution to identify the conditions to reach the target event and try to quantify the fraction of the input domain satisfying these conditions. Precise quantification is usually limited to linear constraints, while only approximate solutions can be provided in general through statistical approaches. However, statistical approaches may fail to converge to an acceptable accuracy within a reasonable time. We present a compositional statistical approach for the efficient quantification of solution spaces for arbitrarily complex constraints over bounded floating-point domains. The approach leverages interval constraint propagation to improve the accuracy of the estimation by focusing the sampling on the regions of the input domain containing the sought solutions. Preliminary experiments show significant improvement on previous approaches both in results accuracy and analysis time.
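The core idea, concentrating samples on a region known (e.g. via interval constraint propagation) to contain all solutions and then rescaling by its volume, can be sketched as follows; the disc-shaped target event and the box bounds are invented for illustration:

```python
import math
import random

def mc_fraction(constraint, box, n, rng):
    # Monte Carlo estimate of the fraction of `box` satisfying `constraint`
    hits = sum(constraint([rng.uniform(lo, hi) for lo, hi in box]) for _ in range(n))
    return hits / n

def volume(box):
    v = 1.0
    for lo, hi in box:
        v *= hi - lo
    return v

# hypothetical target event: x^2 + y^2 <= 1 inside the input domain [-2, 2]^2
inside = lambda p: p[0] ** 2 + p[1] ** 2 <= 1.0
domain = [(-2.0, 2.0), (-2.0, 2.0)]
focus = [(-1.0, 1.0), (-1.0, 1.0)]   # a box that provably contains all solutions

rng = random.Random(0)
direct = mc_fraction(inside, domain, 20000, rng)
# focused estimate: sample only the enclosing box, then rescale by its volume
focused = mc_fraction(inside, focus, 20000, rng) * volume(focus) / volume(domain)
```

Both estimators target the same probability (pi/16 here), but the focused one wastes no samples on provably infeasible regions, which is the source of the accuracy gain the abstract describes.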
Klaseboer, Evert; Sepehrirahnama, Shahrokh; Chan, Derek Y C
2017-08-01
The general space-time evolution of the scattering of an incident acoustic plane wave pulse by an arbitrary configuration of targets is treated by employing a recently developed non-singular boundary integral method to solve the Helmholtz equation in the frequency domain from which the space-time solution of the wave equation is obtained using the fast Fourier transform. The non-singular boundary integral solution can enforce the radiation boundary condition at infinity exactly and can account for multiple scattering effects at all spacings between scatterers without adverse effects on the numerical precision. More generally, the absence of singular kernels in the non-singular integral equation confers high numerical stability and precision for smaller numbers of degrees of freedom. The use of fast Fourier transform to obtain the time dependence is not constrained to discrete time steps and is particularly efficient for studying the response to different incident pulses by the same configuration of scatterers. The precision that can be attained using a smaller number of Fourier components is also quantified.
New Whole-House Solutions Case Study: Pulte Homes, Las Vegas, Nevada
DOE Office of Scientific and Technical Information (OSTI.GOV)
none,
2013-09-01
The builder teamed with Building Science Corporation to design HERS-54 homes with high-efficiency HVAC with ducts in conditioned space, jump ducts, and a fresh air intake; advanced framed walls; low-e windows; and PV roof tiles.
Sparse Image Reconstruction on the Sphere: Analysis and Synthesis.
Wallis, Christopher G R; Wiaux, Yves; McEwen, Jason D
2017-11-01
We develop techniques to solve ill-posed inverse problems on the sphere by sparse regularization, exploiting sparsity in both axisymmetric and directional scale-discretized wavelet space. Denoising, inpainting, and deconvolution problems, and combinations thereof, are considered as examples. Inverse problems are solved in both the analysis and synthesis settings, with a number of different sampling schemes. The most effective approach is that with the most restricted solution space, which depends on the interplay between the adopted sampling scheme, the selection of the analysis/synthesis problem, and any weighting of the ℓ1 norm appearing in the regularization problem. More efficient sampling schemes on the sphere improve reconstruction fidelity by restricting the solution space and also by improving sparsity in wavelet space. We apply the technique to denoise Planck 353-GHz observations, improving the ability to extract the structure of Galactic dust emission, which is important for studying Galactic magnetism.
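Sparse regularization of this kind typically minimizes a least-squares data fit plus an ℓ1 penalty. A generic iterative soft-thresholding (ISTA) sketch, with the spherical wavelet operators replaced by an arbitrary matrix A as an illustrative stand-in for the authors' setup:

```python
import numpy as np

def ista(A, y, lam, steps, step):
    # iterative soft-thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = x - step * (A.T @ (A @ x - y))                        # gradient step on the data fit
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)  # shrink toward sparsity
    return x
```

With A equal to the identity and unit step size, the fixed point is simply the elementwise soft-threshold of the data, which makes the shrinkage behaviour easy to inspect.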
Intelligent Space Tube Optimization for speeding ground water remedial design.
Kalwij, Ineke M; Peralta, Richard C
2008-01-01
An innovative Intelligent Space Tube Optimization (ISTO) two-stage approach facilitates solving complex nonlinear flow and contaminant transport management problems. It reduces computational effort of designing optimal ground water remediation systems and strategies for an assumed set of wells. ISTO's stage 1 defines an adaptive mobile space tube that lengthens toward the optimal solution. The space tube has overlapping multidimensional subspaces. Stage 1 generates several strategies within the space tube, trains neural surrogate simulators (NSS) using the limited space tube data, and optimizes using an advanced genetic algorithm (AGA) with NSS. Stage 1 speeds evaluating assumed well locations and combinations. For a large complex plume of solvents and explosives, ISTO stage 1 reaches within 10% of the optimal solution 25% faster than an efficient AGA coupled with comprehensive tabu search (AGCT) does by itself. ISTO input parameters include space tube radius and number of strategies used to train NSS per cycle. Larger radii can speed convergence to optimality for optimizations that achieve it but might increase the number of optimizations reaching it. ISTO stage 2 automatically refines the NSS-AGA stage 1 optimal strategy using heuristic optimization (we used AGCT), without using NSS surrogates. Stage 2 explores the entire solution space. ISTO is applicable for many heuristic optimization settings in which the numerical simulator is computationally intensive, and one would like to reduce that burden.
Long-term Calibration Considerations during Subcutaneous Microdialysis Sampling in Mobile Rats
Mou, Xiaodun; Lennartz, Michelle; Loegering, Daniel J.; Stenken, Julie A.
2010-01-01
The level at which implanted sensors and sampling devices maintain their calibration is an important research area. In this work, microdialysis probes with identical geometry and different membranes, polycarbonate/polyether (PC) or polyethersulfone (PES), were used with internal standards (vitamin B12 (MW 1355), antipyrine (MW 188) and 2-deoxyglucose (2-DG, MW 164)) and endogenous glucose to investigate changes in their long-term calibration after implantation into the subcutaneous space of Sprague-Dawley rats. Histological analysis confirmed an inflammatory response to the microdialysis probes and the presence of a collagen capsule. The membrane extraction efficiency (percentage delivered to the tissue space) for antipyrine and 2-DG was not altered throughout the implant lifetime for either PC- or PES-membranes. Yet, Vitamin B12 extraction efficiency and collected glucose concentrations decreased during the implant lifetime. Antipyrine was administered i.v. and its concentrations obtained in both PC-and PES-membrane probes were significantly reduced between the implant day and seven (PC) or 10 (PES) days post implantation suggesting that solute supply is critical for in vivo extraction efficiency. For the low molecular weight solutes such as antipyrine and glucose, localized delivery is not affected by the foreign body reaction, but recovery is significantly reduced. For Vitamin B12, a larger solute, the fibrotic capsule formed around the probe significantly restricts diffusion from the implanted microdialysis probes. PMID:20223515
A numerical study of adaptive space and time discretisations for Gross–Pitaevskii equations
Thalhammer, Mechthild; Abhau, Jochen
2012-01-01
As a basic principle, benefits of adaptive discretisations are an improved balance between required accuracy and efficiency as well as an enhancement of the reliability of numerical computations. In this work, the capacity of locally adaptive space and time discretisations for the numerical solution of low-dimensional nonlinear Schrödinger equations is investigated. The considered model equation is related to the time-dependent Gross–Pitaevskii equation arising in the description of Bose–Einstein condensates in dilute gases. The performance of the Fourier pseudo-spectral method constrained to uniform meshes versus the locally adaptive finite element method, and of higher-order exponential operator splitting methods with variable time stepsizes, is studied. Numerical experiments confirm that a local time stepsize control based on a posteriori local error estimators or embedded splitting pairs, respectively, is effective in different situations, with an enhancement in either efficiency or reliability. As expected, adaptive time-splitting schemes combined with fast Fourier transform techniques are favourable regarding accuracy and efficiency when applied to Gross–Pitaevskii equations with a defocusing nonlinearity and a mildly varying regular solution. However, the numerical solution of nonlinear Schrödinger equations in the semi-classical regime becomes a demanding task. Due to the highly oscillatory and nonlinear nature of the problem, the spatial mesh size and the time increments need to be of the size of the decisive parameter 0<ε≪1, especially when it is desired to capture correctly the quantitative behaviour of the wave function itself. The required high resolution in space constricts the feasibility of numerical computations for both the Fourier pseudo-spectral and the finite element method.
Nevertheless, for smaller parameter values, locally adaptive time discretisations make it possible to choose time stepsizes small enough that the numerical approximation correctly captures the behaviour of the analytical solution. Further illustrations for Gross–Pitaevskii equations with a focusing nonlinearity or a sharp Gaussian as initial condition, respectively, complement the numerical study. PMID:25550676
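The stepsize control described above can be illustrated generically: estimate the local error of a step, accept the step if the estimate is below tolerance, and rescale the stepsize either way. The sketch below uses step doubling on a classical RK4 integrator for a scalar ODE as a stand-in for the paper's embedded splitting pairs:

```python
import math

def rk4(f, t, y, h):
    # one classical fourth-order Runge-Kutta step
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h * k1 / 2)
    k3 = f(t + h / 2, y + h * k2 / 2)
    k4 = f(t + h, y + h * k3)
    return y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def integrate(f, t, y, t_end, tol, h=0.1):
    # adaptive integration: compare one step of size h with two steps of h/2
    while t_end - t > 1e-12:
        h = min(h, t_end - t)
        coarse = rk4(f, t, y, h)
        fine = rk4(f, t + h / 2, rk4(f, t, y, h / 2), h / 2)
        err = abs(fine - coarse) / 15            # a posteriori local error estimate
        if err <= tol:                           # accept the step
            t, y = t + h, fine
        # grow or shrink the stepsize either way (with safety factor 0.9)
        h *= max(0.2, min(5.0, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return y
```

The adaptive splitting schemes of the paper follow the same accept/reject logic, with the error estimate supplied by an embedded pair of splitting methods instead of step doubling.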
Efficient electromagnetic source imaging with adaptive standardized LORETA/FOCUSS.
Schimpf, Paul H; Liu, Hesheng; Ramon, Ceon; Haueisen, Jens
2005-05-01
Functional brain imaging and source localization based on the scalp's potential field require a solution to an ill-posed inverse problem with many solutions. This makes it necessary to incorporate a priori knowledge in order to select a particular solution. A computational challenge for some subject-specific head models is that many inverse algorithms require a comprehensive sampling of the candidate source space at the desired resolution. In this study, we present an algorithm that can accurately reconstruct details of localized source activity from a sparse sampling of the candidate source space. Forward computations are minimized through an adaptive procedure that increases source resolution as the spatial extent is reduced. With this algorithm, we were able to compute inverses using only 6% to 11% of the full resolution lead-field, with a localization accuracy that was not significantly different than an exhaustive search through a fully-sampled source space. The technique is, therefore, applicable for use with anatomically-realistic, subject-specific forward models for applications with spatially concentrated source activity.
NASA Astrophysics Data System (ADS)
Balusu, K.; Huang, H.
2017-04-01
A combined dislocation fan-finite element (DF-FE) method is presented for efficient and accurate simulation of dislocation nodal forces in 3D elastically anisotropic crystals with dislocations intersecting the free surfaces. The finite domain problem is decomposed into half-spaces with singular traction stresses, an infinite domain, and a finite domain with non-singular traction stresses. As such, the singular and non-singular parts of the traction stresses are addressed separately; the dislocation fan (DF) method is introduced to balance the singular traction stresses in the half-spaces while the finite element method (FEM) is employed to enforce the non-singular boundary conditions. The accuracy and efficiency of the DF method is demonstrated using a simple isotropic test case, by comparing it with the analytical solution as well as the FEM solution. The DF-FE method is subsequently used for calculating the dislocation nodal forces in a finite elastically anisotropic crystal, which produces dislocation nodal forces that converge rapidly with increasing mesh resolutions. In comparison, the FEM solution fails to converge, especially for nodes closer to the surfaces.
NASA Astrophysics Data System (ADS)
Mohebbi, Akbar
2018-02-01
In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space-fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For the homogeneous FGLE, we propose a method with fourth-order accuracy in time and spectral accuracy in space, and for the nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach with second-order accuracy in time. Because the Fourier spectral method is used for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order, and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with analytical solutions. The results show that the present methods are accurate, require low CPU time, and agree well with the theoretical results.
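The split-step idea can be sketched on a simplified real Ginzburg-Landau-type model, u_t = -(-Δ)^(α/2)u - |u|²u, chosen here for illustration (the paper's FGLE and fourth-order scheme are more elaborate). The fractional Laplacian becomes the diagonal multiplier |k|^α in Fourier space, and the cubic decay has a closed-form pointwise solution, so Strang splitting needs no nonlinear solves:

```python
import numpy as np

def split_step_fgl(u0, alpha, dt, steps, nonlinear=True):
    # Strang splitting for u_t = -(-Laplacian)^(alpha/2) u - |u|^2 u on [0, 2*pi):
    # the linear part is diagonal in Fourier space; the cubic part is solved
    # exactly pointwise via a' = -a^3  =>  a(dt) = a0 / sqrt(1 + 2*dt*a0^2)
    n = u0.size
    k = np.fft.fftfreq(n, d=1.0 / n)               # integer wavenumbers
    lin_half = np.exp(-np.abs(k) ** alpha * dt / 2.0)
    u = u0.astype(complex)
    for _ in range(steps):
        u = np.fft.ifft(lin_half * np.fft.fft(u))  # half linear step
        if nonlinear:
            u = u / np.sqrt(1.0 + 2.0 * dt * np.abs(u) ** 2)  # full nonlinear step
        u = np.fft.ifft(lin_half * np.fft.fft(u))  # half linear step
    return u
```

Because the linear propagator is applied exactly in Fourier space, disabling the nonlinear step reproduces the analytical decay of each Fourier mode to machine precision, a convenient correctness check.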
Ion sieving in graphene oxide membranes via cationic control of interlayer spacing
NASA Astrophysics Data System (ADS)
Chen, Liang; Shi, Guosheng; Shen, Jie; Peng, Bingquan; Zhang, Bowu; Wang, Yuzhu; Bian, Fenggang; Wang, Jiajun; Li, Deyuan; Qian, Zhe; Xu, Gang; Liu, Gongping; Zeng, Jianrong; Zhang, Lijuan; Yang, Yizhou; Zhou, Guoquan; Wu, Minghong; Jin, Wanqin; Li, Jingye; Fang, Haiping
2017-10-01
Graphene oxide membranes—partially oxidized, stacked sheets of graphene—can provide ultrathin, high-flux and energy-efficient membranes for precise ionic and molecular sieving in aqueous solution. These materials have shown potential in a variety of applications, including water desalination and purification, gas and ion separation, biosensors, proton conductors, lithium-based batteries and super-capacitors. Unlike the pores of carbon nanotube membranes, which have fixed sizes, the pores of graphene oxide membranes—that is, the interlayer spacing between graphene oxide sheets (a sheet is a single flake inside the membrane)—are of variable size. Furthermore, it is difficult to reduce the interlayer spacing sufficiently to exclude small ions and to maintain this spacing against the tendency of graphene oxide membranes to swell when immersed in aqueous solution. These challenges hinder the potential ion filtration applications of graphene oxide membranes. Here we demonstrate cationic control of the interlayer spacing of graphene oxide membranes with ångström precision using K+, Na+, Ca2+, Li+ or Mg2+ ions. Moreover, membrane spacings controlled by one type of cation can efficiently and selectively exclude other cations that have larger hydrated volumes. First-principles calculations and ultraviolet absorption spectroscopy reveal that the location of the most stable cation adsorption is where oxide groups and aromatic rings coexist. Previous density functional theory computations show that other cations (Fe2+, Co2+, Cu2+, Cd2+, Cr2+ and Pb2+) should have a much stronger cation-π interaction with the graphene sheet than Na+ has, suggesting that other ions could be used to produce a wider range of interlayer spacings.
NASA Astrophysics Data System (ADS)
Simoni, L.; Secchi, S.; Schrefler, B. A.
2008-12-01
This paper analyses the numerical difficulties commonly encountered in solving fully coupled numerical models and proposes a numerical strategy to overcome them. The proposed procedure is based on space refinement and time adaptivity. The latter, which is the main focus here, is based on the use of a finite element approach in the space domain and a Discontinuous Galerkin approximation within each time span. Error measures are defined for the jump of the solution at each time station; these constitute the parameters allowing for the time adaptivity. Some care is, however, needed for a useful definition of the jump measures. Numerical tests are presented, firstly to demonstrate the advantages and shortcomings of the method over the more traditional use of finite differences in time, then to assess the efficiency of the proposed procedure for adapting the time step. The proposed method reveals its efficiency and simplicity in adapting the time step in the solution of coupled field problems.
Scheduling with genetic algorithms
NASA Technical Reports Server (NTRS)
Fennel, Theron R.; Underbrink, A. J., Jr.; Williams, George P. W., Jr.
1994-01-01
In many domains, scheduling a sequence of jobs is an important function contributing to the overall efficiency of the operation. At Boeing, we develop schedules for many different domains, including assembly of military and commercial aircraft, weapons systems, and space vehicles. Boeing is under contract to develop scheduling systems for the Space Station Payload Planning System (PPS) and Payload Operations and Integration Center (POIC). These applications require that we respect certain sequencing restrictions among the jobs to be scheduled while at the same time assigning resources to the jobs. We call this general problem scheduling and resource allocation. Genetic algorithms (GAs) offer a search method that uses a population of solutions and benefits from intrinsic parallelism to search the problem space rapidly, producing near-optimal solutions. Good intermediate solutions are probabilistically recombined to produce better offspring (based upon some application-specific measure of solution fitness, e.g., minimum flowtime or schedule completeness). Also, at any point in the search, any intermediate solution can be accepted as a final solution; allowing the search to proceed longer usually produces a better solution, while terminating the search at virtually any time may yield an acceptable solution. Many processes are constrained by restrictions of sequence among the individual jobs: for a specific job, other jobs must be completed beforehand. While there are obviously many other constraints on processes, it is these on which we focused for this research: how to allocate crews to jobs while satisfying job precedence requirements as well as personnel, tooling, and fixture (or, more generally, resource) requirements.
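Decoding a GA chromosome into a precedence-feasible schedule is commonly done with a serial scheduling pass: among jobs whose predecessors have all finished, pick the one the chromosome ranks highest. A minimal single-resource sketch, with job data invented for illustration (not Boeing's system):

```python
def serial_schedule(priority, prec, dur):
    # priority: chromosome value per job (higher = schedule sooner)
    # prec[j]: set of jobs that must finish before job j starts (single resource)
    n, scheduled, seq, makespan = len(dur), set(), [], 0.0
    while len(scheduled) < n:
        eligible = [j for j in range(n) if j not in scheduled and prec[j] <= scheduled]
        j = max(eligible, key=lambda k: priority[k])   # best eligible job
        makespan += dur[j]                             # one resource: jobs run back to back
        scheduled.add(j)
        seq.append(j)
    return seq, makespan

# three jobs: jobs 1 and 2 both require job 0 to finish first
order, span = serial_schedule([0.1, 0.9, 0.5],
                              {0: set(), 1: {0}, 2: {0}},
                              [2.0, 1.0, 3.0])
```

A GA would evolve the priority vector and score each chromosome by the makespan (or flowtime) of its decoded schedule; every chromosome decodes to a feasible schedule, so no repair step is needed.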
Teng, Pengpeng; Han, Xiaopeng; Li, Jiawei; Xu, Ya; Kang, Lei; Wang, Yangrunqian; Yang, Ying; Yu, Tao
2018-03-21
It is a great challenge to obtain uniform films of bromide-rich perovskites such as CsPbBr3 in the two-step sequential solution process (two-step method), mainly because the precursor films decompose in solution. Herein, we demonstrate a novel and elegant face-down liquid-space-restricted deposition to inhibit the decomposition and fabricate high-quality CsPbBr3 perovskite films. This method is highly reproducible, and the surface of the films was smooth and uniform with an average grain size of 860 nm. As a consequence, planar perovskite solar cells (PSCs) without a hole-transport layer, based on CsPbBr3 and carbon electrodes, exhibit enhanced power conversion efficiency (PCE) along with high open-circuit voltage (VOC). The champion device achieved a PCE of 5.86% with a VOC of 1.34 V, which to our knowledge is the highest-performing planar CsPbBr3 PSC. Our results suggest an efficient and low-cost route to fabricate high-quality planar all-inorganic PSCs.
Interactive orbital proximity operations planning system instruction and training guide
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Ellis, Stephen R.
1994-01-01
This guide instructs users in the operation of a Proximity Operations Planning System. This system uses an interactive graphical method for planning fuel-efficient rendezvous trajectories in the multi-spacecraft environment of the space station and allows the operator to compose a multi-burn transfer trajectory between initial chaser and target orbits. The available task time (window) of the mission is predetermined, and the maneuver is subject to various operational constraints, such as departure, arrival, spatial, plume impingement, and en route passage constraints. The maneuvers are described in terms of the relative motion experienced in a space-station-centered coordinate system. Both in-orbital-plane and out-of-orbital-plane maneuvering are considered. A number of visual optimization aids, based on Primer Vector theory, assist the operator in reaching fuel-efficient solutions. The visual feedback of trajectory shapes, operational constraints, and optimization functions, provided by user-transparent and continuously active background computations, allows the operator to make fast, iterative design changes that rapidly converge to fuel-efficient solutions. The planning tool is an example of operator-assisted optimization of nonlinear cost functions.
Building Operations Efficiencies into NASA's Ares I Crew Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Dumbacher, Daniel
2006-01-01
The U.S. Vision for Space Exploration guides the National Aeronautics and Space Administration's (NASA's) challenging missions that expand humanity's boundaries and open new routes to the space frontier. With the Agency's commitment to complete the International Space Station (ISS) and to retire the venerable Space Shuttle by 2010, the NASA Administrator commissioned the Exploration Systems Architecture Study (ESAS) in mid-2005 to analyze options for safe, simple, cost-efficient launch solutions that could deliver human-rated space transportation capabilities in a timely manner within fixed budget guidelines. The Exploration Launch Projects Office, chartered in October 2005, has been conducting systems engineering studies and business planning over the past few months to successively refine the design configurations and better align vehicle concepts with customer and stakeholder requirements, such as significantly reduced life-cycle costs. As the Agency begins the process of replacing the Shuttle with a new generation of spacecraft destined for missions beyond low-Earth orbit to the Moon and Mars, NASA is designing the follow-on crew and cargo launch systems for maximum operational efficiency. To sustain the long-term exploration of space, it is imperative to reduce the $4.5 billion NASA typically spends on space transportation each year. This paper gives top-level information about how the follow-on Ares I Crew Launch Vehicle (CLV) is being designed for improved safety and reliability, coupled with reduced operations costs.
NASA Astrophysics Data System (ADS)
Camporeale, E.; Delzanno, G. L.; Bergen, B. K.; Moulton, J. D.
2016-01-01
We describe a spectral method for the numerical solution of the Vlasov-Poisson system where the velocity space is decomposed by means of an Hermite basis, and the configuration space is discretized via a Fourier decomposition. The novelty of our approach is an implicit time discretization that allows exact conservation of charge, momentum and energy. The computational efficiency and the cost-effectiveness of this method are compared to the fully-implicit PIC method recently introduced by Markidis and Lapenta (2011) and Chen et al. (2011). The following examples are discussed: Langmuir wave, Landau damping, ion-acoustic wave, two-stream instability. The Fourier-Hermite spectral method can achieve solutions that are several orders of magnitude more accurate at a fraction of the cost with respect to PIC.
Nonequilibrium gas absorption in rotating permeable media
NASA Astrophysics Data System (ADS)
Baev, V. K.; Bazhaikin, A. N.
2016-08-01
The absorption of ammonia, sulfur dioxide, and carbon dioxide by water and aqueous solutions in rotating permeable media (a cellular porous disk and a set of spaced-apart thin disks) has been considered. The efficiency of cleaning air to remove these impurities is determined, and their anomalously high solubility (higher than the equilibrium value) has been discovered. The results demonstrate the feasibility of designing cheap, efficient rotor-type absorbers to clean gases of harmful impurities.
Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data
NASA Astrophysics Data System (ADS)
Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar
2017-04-01
A new technique for shaping microfluidic flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and intuition building, all of which are time- and resource-intensive. Deep learning has emerged as an efficient function approximation technique for high-dimensional spaces and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions.
Space shuttle main engine numerical modeling code modifications and analysis
NASA Technical Reports Server (NTRS)
Ziebarth, John P.
1988-01-01
The user of computational fluid dynamics (CFD) codes must be concerned with the accuracy and efficiency of the codes if they are to be used for timely design and analysis of complicated three-dimensional fluid flow configurations. A brief discussion of how accuracy and efficiency affect the CFD solution process is given. A more detailed discussion of how efficiency can be enhanced by using a few Cray Research Inc. utilities to address vectorization is presented, and these utilities are applied to a three-dimensional Navier-Stokes CFD code (INS3D).
Multiple Objective Evolution Strategies (MOES): A User’s Guide to Running the Software
2014-11-01
L2-norm distance is computed in parameter space between each pair of solutions in the elite population and tested against the tolerance Dclone, which...the most efficient solutions to the test problems in the Input_Files directory. The developers recommend using mu,kappa,lambda. The mu,kappa,lambda...be used as a sanity test for complicated multimodal problems. Whenever the optimum cannot be reached by a local search, the evolutionary results
Insight into the ten-penny problem: guiding search by constraints and maximization.
Öllinger, Michael; Fedor, Anna; Brodt, Svenja; Szathmáry, Eörs
2017-09-01
For a long time, insight problem solving has been understood either as nothing special or as a particular class of problem solving. The first view implies the necessity of finding efficient heuristics that restrict the search space; the second, the necessity of overcoming self-imposed constraints. Recently, promising hybrid cognitive models have attempted to merge both approaches. In this vein, we were interested in the interplay of constraints and heuristic search when problem solvers were asked to solve a difficult multi-step problem, the ten-penny problem. In three experimental groups and one control group (N = 4 × 30) we aimed to reveal what constraints drive difficulty in this problem, and how relaxing constraints and providing an efficient search criterion facilitate the solution. We also investigated how the search behavior of successful problem solvers and non-solvers differs. We found that relaxing constraints was necessary but not sufficient to solve the problem. Without efficient heuristics that facilitate the restriction of the search space and testing of the progress of the problem solving process, the relaxation of constraints was not effective. Relaxing constraints and applying the search criterion are both necessary to effectively increase solution rates. We also found that successful solvers showed promising moves earlier and had a higher maximization and variation rate across solution attempts. We propose that this finding sheds light on how different strategies contribute to solving difficult problems. Finally, we speculate about the implications of our findings for insight problem solving.
Continuing Development for Free-Piston Stirling Space Power Systems
NASA Astrophysics Data System (ADS)
Peterson, Allen A.; Qiu, Songgang; Redinger, Darin L.; Augenblick, John E.; Petersen, Stephen L.
2004-02-01
Long-life radioisotope power generators based on free-piston Stirling engines are an energy-conversion solution for future space applications. The high efficiency of Stirling machines makes them more attractive than the thermoelectric generators currently used in space. Stirling Technology Company (STC) has been developing free-piston Stirling machines for over 30 years, and its family of Stirling generators is ideally suited for reliable, maintenance-free operation. This paper describes recent progress and status of the STC RemoteGen™ 55 W-class Stirling generator (RG-55), presents an overview of recent testing, and discusses how the technology demonstration design has evolved toward space-qualified hardware.
Self-adaptive multi-objective harmony search for optimal design of water distribution networks
NASA Astrophysics Data System (ADS)
Choi, Young Hwan; Lee, Ho Min; Yoo, Do Guen; Kim, Joong Hoon
2017-11-01
In multi-objective optimization computing, it is important to assign suitable parameters to each optimization problem to obtain better solutions. In this study, a self-adaptive multi-objective harmony search (SaMOHS) algorithm is developed to apply the parameter-setting-free technique, which is an example of a self-adaptive methodology. The SaMOHS algorithm removes some of the inconvenience of parameter setting and selects the most adaptive parameters during the iterative solution search process. To verify the proposed algorithm, an optimal least-cost water distribution network design problem is applied to three different target networks. The results are compared with other well-known algorithms such as multi-objective harmony search and the non-dominated sorting genetic algorithm-II. The efficiency of the proposed algorithm is quantified by suitable performance indices. The results indicate that SaMOHS can be efficiently applied to the search for Pareto-optimal solutions in a multi-objective solution space.
Space power technology applied to the energy problem
NASA Technical Reports Server (NTRS)
Miller, J. L.; Morgan, J. R.
1977-01-01
A solution to the energy problem is suggested through the technology of photovoltaic electrolysis of water to generate hydrogen. Efficient solar devices are discussed in relation to available solar energy, and photovoltaic energy cost. It is concluded that photovoltaic electrolytic generation of hydrogen will be economically feasible in 1985.
Solution and reasoning reuse in space planning and scheduling applications
NASA Technical Reports Server (NTRS)
Verfaillie, Gerard; Schiex, Thomas
1994-01-01
In the space domain, as in other domains, CSP (Constraint Satisfaction Problem) techniques are increasingly used to represent and solve planning and scheduling problems. But these techniques have been developed to solve CSPs composed of fixed sets of variables and constraints, whereas many planning and scheduling problems are dynamic. It is therefore important to develop methods that allow a new solution to be found rapidly, as close as possible to the previous one, when some variables or constraints are added or removed. After presenting some existing approaches, this paper proposes a simple and efficient method developed on the basis of the dynamic backtracking algorithm. This method allows the previous solution and reasoning to be reused in the framework of a CSP that is close to the previous one. Some experimental results on general random CSPs and on operation scheduling problems for remote sensing satellites are given.
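The solution-reuse idea described above can be sketched as a value-ordering heuristic in a plain backtracking search. This is a simplified illustration under assumed structure, not the paper's dynamic-backtracking method; all variable and function names are hypothetical.

```python
# Sketch: reuse a previous CSP solution as a value-ordering heuristic when
# the constraint set changes, so the new solution stays close to the old one.
# (Illustrative only; not the dynamic backtracking algorithm of the paper.)

def solve(variables, domains, constraints, previous=None):
    """Backtracking search that tries each variable's previous value first."""
    assignment = {}

    def consistent(var, val):
        # A constraint returns True when it is satisfied (or not yet decidable).
        return all(check({**assignment, var: val}) for check in constraints)

    def backtrack(i):
        if i == len(variables):
            return dict(assignment)
        var = variables[i]
        values = list(domains[var])
        if previous and previous.get(var) in values:
            # Reuse: prefer the value taken in the previous solution.
            values.remove(previous[var])
            values.insert(0, previous[var])
        for val in values:
            if consistent(var, val):
                assignment[var] = val
                result = backtrack(i + 1)
                if result is not None:
                    return result
                del assignment[var]
        return None

    return backtrack(0)

# Toy scheduling example: three tasks on two machines, adjacent tasks must differ.
variables = ["t1", "t2", "t3"]
domains = {v: ["m1", "m2"] for v in variables}
constraints = [
    lambda a: a.get("t1") is None or a.get("t2") is None or a["t1"] != a["t2"],
    lambda a: a.get("t2") is None or a.get("t3") is None or a["t2"] != a["t3"],
]
old = {"t1": "m2", "t2": "m1", "t3": "m2"}
print(solve(variables, domains, constraints, previous=old))
```

With the previous solution supplied, the search reproduces it directly when it is still feasible; without it, the search falls back to plain domain order.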
Recursive Branching Simulated Annealing Algorithm
NASA Technical Reports Server (NTRS)
Bolcar, Matthew; Smith, J. Scott; Aronstein, David
2012-01-01
This innovation is a variation of a simulated-annealing optimization algorithm that uses a recursive-branching structure to parallelize the search of a parameter space for the globally optimal solution to an objective. The algorithm has been demonstrated to be more effective at searching a parameter space than traditional simulated-annealing methods for a particular problem of interest, and it can readily be applied to a wide variety of optimization problems, including those with a parameter space having both discrete-value parameters (combinatorial) and continuous-variable parameters. It can take the place of a conventional simulated-annealing, Monte Carlo, or random-walk algorithm. In a conventional simulated-annealing (SA) algorithm, a starting configuration is randomly selected within the parameter space. The algorithm randomly selects another configuration from the parameter space and evaluates the objective function for that configuration. If the objective function value is better than the previous value, the new configuration is adopted as the new point of interest in the parameter space. If the objective function value is worse than the previous value, the new configuration may be adopted, with a probability determined by a temperature parameter, used in analogy to annealing in metals. As the optimization continues, the region of the parameter space from which new configurations can be selected shrinks, and in conjunction with lowering the annealing temperature (and thus lowering the probability for adopting configurations in parameter space with worse objective functions), the algorithm can converge on the globally optimal configuration.
The Recursive Branching Simulated Annealing (RBSA) algorithm shares some features with the SA algorithm, notably including the basic principles that a starting configuration is randomly selected from within the parameter space, the algorithm tests other configurations with the goal of finding the globally optimal solution, and the region from which new configurations can be selected shrinks as the search continues. The key difference between these algorithms is that in the SA algorithm, a single path, or trajectory, is taken in parameter space, from the starting point to the globally optimal solution, while in the RBSA algorithm, many trajectories are taken; by exploring multiple regions of the parameter space simultaneously, the algorithm has been shown to converge on the globally optimal solution about an order of magnitude faster than when using conventional algorithms. Novel features of the RBSA algorithm include: 1. More efficient searching of the parameter space due to the branching structure, in which multiple random configurations are generated and multiple promising regions of the parameter space are explored; 2. The implementation of a trust region for each parameter in the parameter space, which provides a natural way of enforcing upper- and lower-bound constraints on the parameters; and 3. The optional use of a constrained gradient-search optimization, performed on the continuous variables around each branch's configuration in parameter space to improve search efficiency by allowing for fast fine-tuning of the continuous variables within the trust region at that configuration point.
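The conventional SA loop described above (random start, temperature-controlled acceptance of worse moves, shrinking search region) can be sketched as follows. The objective, cooling schedule, and parameter values are illustrative choices, not taken from the report.

```python
# Minimal sketch of a conventional simulated-annealing loop:
# improvements are always accepted; worse moves are accepted with a
# probability set by a temperature parameter, and the proposal region shrinks.
import math
import random

def simulated_annealing(objective, lower, upper, steps=5000, seed=0):
    rng = random.Random(seed)
    x = rng.uniform(lower, upper)                 # random starting configuration
    f = objective(x)
    best_x, best_f = x, f
    for k in range(steps):
        temp = 1.0 * (0.999 ** k)                 # annealing temperature
        radius = (upper - lower) * (0.999 ** k) / 2  # shrinking search region
        cand = min(upper, max(lower, x + rng.uniform(-radius, radius)))
        fc = objective(cand)
        # Accept improvements always; otherwise accept with prob exp(-dF/T).
        if fc < f or rng.random() < math.exp(-(fc - f) / max(temp, 1e-12)):
            x, f = cand, fc
            if f < best_f:
                best_x, best_f = x, f
    return best_x, best_f

# Example: minimize a multimodal 1-D function on [-10, 10].
xopt, fopt = simulated_annealing(lambda x: x * x + 10 * math.sin(x), -10, 10)
print(xopt, fopt)
```

The RBSA variation would launch several such trajectories from different random starts and recursively branch in the most promising regions; the sketch shows only the single-trajectory core.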
High efficiency solution processed sintered CdTe nanocrystal solar cells: the role of interfaces.
Panthani, Matthew G; Kurley, J Matthew; Crisp, Ryan W; Dietz, Travis C; Ezzyat, Taha; Luther, Joseph M; Talapin, Dmitri V
2014-02-12
Solution processing of photovoltaic semiconducting layers offers the potential for drastic cost reduction through improved materials utilization and high device throughput. One compelling solution-based processing strategy utilizes semiconductor layers produced by sintering nanocrystals into large-grain semiconductors at relatively low temperatures. Using n-ZnO/p-CdTe as a model system, we fabricate sintered CdTe nanocrystal solar cells processed at 350 °C with power conversion efficiencies (PCE) as high as 12.3%. JSC values of over 25 mA cm(-2) are achieved, comparable to or higher than those achieved using traditional, close-space sublimated CdTe. We find that the VOC can be substantially increased by applying forward bias for short periods of time. Capacitance measurements as well as intensity- and temperature-dependent analysis indicate that the increased VOC is likely due to relaxation of an energetic barrier at the ITO/CdTe interface.
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2004-01-01
A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
Genetic Algorithms Applied to Multi-Objective Aerodynamic Shape Optimization
NASA Technical Reports Server (NTRS)
Holst, Terry L.
2005-01-01
A genetic algorithm approach suitable for solving multi-objective problems is described and evaluated using a series of aerodynamic shape optimization problems. Several new features including two variations of a binning selection algorithm and a gene-space transformation procedure are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. A new masking array capability is included allowing any gene or gene subset to be eliminated as decision variables from the design space. This allows determination of the effect of a single gene or gene subset on the Pareto optimal solution. Results indicate that the genetic algorithm optimization approach is flexible in application and reliable. The binning selection algorithms generally provide Pareto front quality enhancements and moderate convergence efficiency improvements for most of the problems solved.
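The masking-array idea, eliminating a gene or gene subset as decision variables by pinning it to a reference value, can be illustrated with a toy genetic algorithm. The GA details here (sorting selection, mean crossover, Gaussian mutation) are simplified stand-ins, not the paper's algorithm.

```python
# Sketch of a masking array in a toy GA: mask[i] == False freezes gene i
# at reference[i], removing it from the decision space so its effect on
# the optimum can be studied. (Illustrative; not the paper's GA.)
import random

def evolve(fitness, n_genes, mask, reference, pop_size=40, gens=60, seed=1):
    rng = random.Random(seed)

    def apply_mask(ind):
        # Frozen genes are overwritten with their reference values.
        return [g if m else r for g, m, r in zip(ind, mask, reference)]

    pop = [apply_mask([rng.uniform(-5, 5) for _ in range(n_genes)])
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)                     # minimize fitness
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
            children.append(apply_mask(child))    # masked genes stay fixed
        pop = parents + children
    return min(pop, key=fitness)

# Example: minimize sum of squares of 3 genes, with gene 1 frozen at 2.0.
best = evolve(lambda ind: sum(g * g for g in ind),
              n_genes=3, mask=[True, False, True], reference=[0.0, 2.0, 0.0])
print(best)
```

The free genes converge toward the unconstrained optimum while the masked gene remains exactly at its reference value, which is what makes the single-gene sensitivity study described above possible.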
An unsteady Euler scheme for the analysis of ducted propellers
NASA Technical Reports Server (NTRS)
Srivastava, R.
1992-01-01
An efficient unsteady solution procedure has been developed for analyzing inviscid unsteady flow past ducted propeller configurations. The scheme is first-order accurate in time and second-order accurate in space. The solution procedure has been applied to a ducted propeller consisting of an 8-bladed SR7 propeller with a duct of NACA 0003 airfoil cross section around it, operating in a steady axisymmetric flowfield. The variation of elemental blade loading with radius compares well with other published numerical results.
False colors removal on the YCrCb color space
NASA Astrophysics Data System (ADS)
Tomaselli, Valeria; Guarnera, Mirko; Messina, Giuseppe
2009-01-01
Post-processing algorithms are usually placed in the pipeline of imaging devices to remove residual color artifacts introduced by the demosaicing step. Although demosaicing solutions aim to eliminate, limit, or correct false colors and other impairments caused by non-ideal sampling, post-processing techniques are usually more powerful in achieving this purpose, mainly because the input of post-processing algorithms is a fully restored RGB color image. Moreover, post-processing can be applied more than once in order to meet quality criteria. In this paper we propose an effective technique for reducing the color artifacts generated by conventional color interpolation algorithms in the YCrCb color space. This solution efficiently removes false colors and can be executed while performing the edge emphasis process.
Constrained orbital intercept-evasion
NASA Astrophysics Data System (ADS)
Zatezalo, Aleksandar; Stipanovic, Dusan M.; Mehra, Raman K.; Pham, Khanh
2014-06-01
An effective characterization of intercept-evasion confrontations in various space environments, and the derivation of corresponding solutions under a variety of real-world constraints, are daunting theoretical and practical challenges. Current and future space-based platforms have to operate as components of satellite formations and/or systems while retaining the capability to evade potential collisions with other maneuver-constrained space objects. In this article, we formulate and numerically approximate solutions of a Low Earth Orbit (LEO) intercept-maneuver problem in terms of game-theoretic capture-evasion guaranteed strategies. The space intercept-evasion approach is based on the Liapunov methodology that has been successfully implemented in a number of air- and ground-based multi-player multi-goal game/control applications. The corresponding numerical algorithms are derived using computationally efficient, orbital-propagator-independent methods previously developed for Space Situational Awareness (SSA). This game-theoretic, yet robust and practical, approach is demonstrated on a realistic LEO scenario using existing Two Line Element (TLE) sets and the Simplified General Perturbation-4 (SGP-4) propagator.
Desired Precision in Multi-Objective Optimization: Epsilon Archiving or Rounding Objectives?
NASA Astrophysics Data System (ADS)
Asadzadeh, M.; Sahraei, S.
2016-12-01
Multi-objective optimization (MO) aids the decision-making process in water resources engineering and design problems. One of the main goals of solving a MO problem is to archive a set of solutions that is well-distributed across a wide range of all the design objectives. Modern MO algorithms use the epsilon-dominance concept to define a mesh with a pre-defined grid-cell size (often called epsilon) in the objective space and archive at most one solution in each grid-cell. Epsilon can be set to the desired precision level of each objective function to make sure that the difference between each pair of archived solutions is meaningful. This epsilon-archiving process is computationally expensive in problems that have quick-to-evaluate objective functions. This research explores the applicability of a similar but computationally more efficient approach that respects the desired precision level of all objectives in the solution archiving process. In this alternative approach, each objective function is rounded to the desired precision level before any new solution is compared to the set of archived solutions, which already have rounded objective function values. This alternative solution archiving approach is compared to the epsilon-archiving approach in terms of efficiency and quality of archived solutions for solving mathematical test problems and hydrologic model calibration problems.
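The rounding-based alternative described above can be sketched in a few lines. This is an assumed implementation for illustration; the dominance test is the standard one for minimization, and the example points and precision are made up.

```python
# Sketch of the rounding-based archiving alternative: round each objective
# to the desired precision, then apply the usual non-domination test and
# drop duplicates. (Illustrative; values and precision are examples.)

def dominates(a, b):
    """Standard Pareto dominance for minimization."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def archive_rounded(solutions, precision=1):
    """Keep non-dominated, non-duplicate solutions after rounding objectives."""
    archive = []
    for s in solutions:
        r = tuple(round(f, precision) for f in s)
        if any(dominates(a, r) or a == r for a in archive):
            continue                              # dominated or duplicate
        archive = [a for a in archive if not dominates(r, a)] + [r]
    return archive

points = [(1.04, 2.01), (1.01, 2.04), (0.52, 3.10), (2.00, 1.00), (1.00, 2.00)]
print(archive_rounded(points))
```

Note that after rounding, solutions closer together than the precision level collapse to the same point and are discarded as duplicates, which is the same effect the epsilon grid achieves with its one-solution-per-cell rule.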
SmartAdP: Visual Analytics of Large-scale Taxi Trajectories for Selecting Billboard Locations.
Liu, Dongyu; Weng, Di; Li, Yuhong; Bao, Jie; Zheng, Yu; Qu, Huamin; Wu, Yingcai
2017-01-01
The problem of formulating solutions immediately and comparing them rapidly for billboard placements has long plagued advertising planners, owing to the lack of efficient tools for the in-depth analyses needed to make informed decisions. In this study, we employ visual analytics that combine state-of-the-art mining and visualization techniques to tackle this problem using large-scale GPS trajectory data. In particular, we present SmartAdP, an interactive visual analytics system that deals with two major challenges: finding good solutions in a huge solution space and comparing those solutions in a visual and intuitive manner. An interactive framework that integrates a novel visualization-driven data mining model enables advertising planners to formulate good candidate solutions effectively and efficiently. In addition, we propose a set of coupled visualizations: a solution view with metaphor-based glyphs to visualize the correlation between different solutions; a location view to display billboard locations in a compact manner; and a ranking view to present multi-typed rankings of the solutions. The system has been demonstrated through case studies with a real-world dataset and domain-expert interviews. Our approach can be adapted to other location-selection problems, such as choosing sites for retail stores or restaurants using trajectory data.
Some problems of the calculation of three-dimensional boundary layer flows on general configurations
NASA Technical Reports Server (NTRS)
Cebeci, T.; Kaups, K.; Mosinskis, G. J.; Rehn, J. A.
1973-01-01
An accurate solution of the three-dimensional boundary layer equations over general configurations, such as those encountered in aircraft and space shuttle design, requires a very efficient, fast, and accurate numerical method with suitable turbulence models for the Reynolds stresses. The efficiency, speed, and accuracy of a three-dimensional numerical method, together with the turbulence models for the Reynolds stresses, are examined. The numerical method is the implicit two-point finite difference approach (Box Method) developed by Keller and applied to the boundary layer equations by Keller and Cebeci. In addition, some of the problems that may arise in the solution of these equations for three-dimensional boundary layer flows over general configurations are studied.
Nonuniform depth grids in parabolic equation solutions.
Sanders, William M; Collins, Michael D
2013-04-01
The parabolic wave equation is solved using a finite-difference solution in depth that involves a nonuniform grid. The depth operator is discretized using Galerkin's method with asymmetric hat functions. Examples are presented to illustrate that this approach can be used to improve efficiency for problems in ocean acoustics and seismo-acoustics. For shallow water problems, accuracy is sensitive to the precise placement of the ocean bottom interface. This issue is often addressed with the inefficient approach of using a fine grid spacing over all depth. Efficiency may be improved by using a relatively coarse grid with nonuniform sampling to precisely position the interface. Efficiency may also be improved by reducing the sampling in the sediment and in an absorbing layer that is used to truncate the computational domain. Nonuniform sampling may also be used to improve the implementation of a single-scattering approximation for sloping fluid-solid interfaces.
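The nonuniform-sampling idea, fine spacing near the ocean-bottom interface and coarse spacing in the sediment and absorbing layer, can be sketched as a simple grid builder. The depths, spacings, and function names below are made-up example values, not taken from the paper.

```python
# Sketch: build a nonuniform depth grid that clusters points near the
# ocean-bottom interface and coarsens elsewhere, so the interface can be
# positioned precisely without a fine grid over all depth. (Illustrative.)
import numpy as np

def nonuniform_depth_grid(z_bottom, z_max, dz_fine, dz_coarse, fine_band):
    """Fine spacing within `fine_band` of the interface; coarse elsewhere."""
    pts = [0.0]
    z = 0.0
    while z < z_max:
        near = abs(z - z_bottom) < fine_band
        z += dz_fine if near else dz_coarse
        pts.append(min(z, z_max))
    grid = np.array(pts)
    # Snap the nearest point exactly onto the interface depth.
    grid[np.argmin(np.abs(grid - z_bottom))] = z_bottom
    return np.unique(grid)

# Example: 400 m water column, interface at 200 m, 0.5 m sampling near it.
grid = nonuniform_depth_grid(z_bottom=200.0, z_max=400.0,
                             dz_fine=0.5, dz_coarse=5.0, fine_band=10.0)
print(grid.size, grid.min(), grid.max())
```

A uniform grid at the fine spacing would need 801 points over the same depth; the nonuniform grid covers it with roughly an eighth of that while still placing a point exactly on the interface.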
Efficient and robust model-to-image alignment using 3D scale-invariant features.
Toews, Matthew; Wells, William M
2013-04-01
This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. Copyright © 2012 Elsevier B.V. All rights reserved.
Efficient and Robust Model-to-Image Alignment using 3D Scale-Invariant Features
Toews, Matthew; Wells, William M.
2013-01-01
This paper presents feature-based alignment (FBA), a general method for efficient and robust model-to-image alignment. Volumetric images, e.g. CT scans of the human body, are modeled probabilistically as a collage of 3D scale-invariant image features within a normalized reference space. Features are incorporated as a latent random variable and marginalized out in computing a maximum a-posteriori alignment solution. The model is learned from features extracted in pre-aligned training images, then fit to features extracted from a new image to identify a globally optimal locally linear alignment solution. Novel techniques are presented for determining local feature orientation and efficiently encoding feature intensity in 3D. Experiments involving difficult magnetic resonance (MR) images of the human brain demonstrate FBA achieves alignment accuracy similar to widely-used registration methods, while requiring a fraction of the memory and computation resources and offering a more robust, globally optimal solution. Experiments on CT human body scans demonstrate FBA as an effective system for automatic human body alignment where other alignment methods break down. PMID:23265799
NASA Technical Reports Server (NTRS)
Chan, William M.
1995-01-01
Algorithm and computer code development was performed for the overset grid approach to solving computational fluid dynamics problems. The techniques developed are applicable to compressible Navier-Stokes flow about general complex configurations. The computer codes developed were tested on different complex configurations, with the Space Shuttle launch vehicle configuration as the primary test bed. General, efficient, and user-friendly codes were produced for grid generation, flow solution, and force and moment computation.
NASA Astrophysics Data System (ADS)
Apreyan, R. A.; Fleck, M.; Atanesyan, A. K.; Sukiasyan, R. P.; Petrosyan, A. M.
2015-12-01
L-Nitroargininium picrate has been obtained from an aqueous solution containing equimolar quantities of L-nitroarginine and picric acid by slow evaporation. A single crystal was grown by the evaporation method, and the crystal structure was determined at room temperature. The salt crystallizes in the monoclinic crystal system (space group P21). Vibrational spectra and thermal properties were studied. The second harmonic generation efficiency, measured by the powder method, is found to be four times higher than that of L-nitroarginine, which in turn is ten times more efficient than KDP (KH2PO4).
3D Space Radiation Transport in a Shielded ICRU Tissue Sphere
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
A computationally efficient 3DHZETRN code capable of simulating High Charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation was recently developed for a simple homogeneous shield object. Monte Carlo benchmarks were used to verify the methodology in slab and spherical geometry, and the 3D corrections were shown to provide significant improvement over the straight-ahead approximation in some cases. In the present report, the new algorithms with well-defined convergence criteria are extended to inhomogeneous media within a shielded tissue slab and a shielded tissue sphere and tested against Monte Carlo simulation to verify the solution methods. The 3D corrections are again found to more accurately describe the neutron and light ion fluence spectra as compared to the straight-ahead approximation. These computationally efficient methods provide a basis for software capable of space shield analysis and optimization.
NASA Technical Reports Server (NTRS)
Hepp, Aloysius F.; Harris, Jerry D.; Raffaelle, Ryne P.; Banger, Kulbinder K.; Smith, Mark A.; Cowen, Jonathan E.
2001-01-01
The key to achieving high-specific-power (watts per kilogram) space photovoltaic arrays is the development of high-efficiency thin-film solar cells fabricated on lightweight, space-qualified substrates such as Kapton (DuPont) or another polymer film. Cell efficiencies of 20 percent at air mass zero (AM0) are required. One of the major obstacles to developing lightweight, flexible, thin-film solar cells is the unavailability of lightweight substrate or superstrate materials that are compatible with current deposition techniques. There are two solutions for working around this problem: (1) develop new substrate or superstrate materials that are compatible with current deposition techniques, or (2) develop new deposition techniques that are compatible with existing materials. The NASA Glenn Research Center has been focusing on the latter approach and has been developing a technique for depositing thin-film absorbers at temperatures below 400 °C.
Technique Developed for Optimizing Traveling-Wave Tubes
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
1999-01-01
A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWTs are critical components in deep-space probes, geosynchronous communication satellites, and high-power radar systems. Power efficiency is of paramount importance for TWTs employed in deep-space probes and communications satellites. Consequently, increasing the power efficiency of TWTs has been the primary goal of the TWT group at the NASA Lewis Research Center over the last 25 years. An in-house effort produced a technique (ref. 1) to design TWTs for optimized power efficiency. This technique is based on simulated annealing, which has an advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 2). A simulated annealing algorithm was created and integrated into the NASA TWT computer model (ref. 3). The new technique almost doubled the computed conversion power efficiency of a TWT, from 7.1 to 13.5 percent (ref. 1).
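The simulated-annealing idea the abstract relies on can be sketched generically: always accept improving design changes, and accept worsening ones with a temperature-controlled probability so the search can escape local optima. The toy `efficiency` landscape and all gains below are illustrative assumptions, not the NASA TWT model:

```python
import math
import random

def simulated_annealing(objective, x0, step=0.1, t0=1.0, cooling=0.995,
                        iters=5000, seed=0):
    """Maximize `objective` over a list of design variables (toy sketch)."""
    rng = random.Random(seed)
    x, fx = list(x0), objective(x0)
    best_x, best_f = list(x), fx
    t = t0
    for _ in range(iters):
        # Propose a random perturbation of one design variable.
        cand = list(x)
        i = rng.randrange(len(cand))
        cand[i] += rng.uniform(-step, step)
        fc = objective(cand)
        # Accept improvements always; accept worse moves with Boltzmann
        # probability exp(df/t), which lets the search escape local optima.
        if fc >= fx or rng.random() < math.exp((fc - fx) / t):
            x, fx = cand, fc
        if fx > best_f:
            best_x, best_f = list(x), fx
        t *= cooling  # geometric cooling schedule
    return best_x, best_f

# Toy "efficiency" landscape with a ripple that creates local maxima.
def efficiency(v):
    return -sum((vi - 1.0) ** 2 for vi in v) + 0.3 * math.cos(8.0 * v[0])

x_opt, f_opt = simulated_annealing(efficiency, [0.0, 0.0])
```

The cooling schedule is the key design choice: a slow schedule approaches the global optimum at the cost of more objective evaluations, which is why the technique suited an expensive TWT simulation run offline.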
Door of Hope or Despair: Students' Perception of Distance Education at University of Ghana
ERIC Educational Resources Information Center
Oteng-Ababio, M.
2011-01-01
Distance Education has globally become one of the important solutions for increasing admission into the universities, decongesting campuses and efficient utilization of time and space. To ensure the sustainability of the programmes' noble objectives calls for periodic re-evaluation of its modus operandi including the assessment of the perception…
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.; Takeshita, Oscar Y.; Cabral, Hermano A.; He, Jiali; White, Gregory S.
1997-01-01
Turbo coding using iterative SOVA decoding and M-ary differentially coherent or non-coherent modulation can provide an effective coding modulation solution: (1) Energy efficient with relatively simple SOVA decoding and small packet lengths, depending on BEP required; (2) Low number of decoding iterations required; and (3) Robustness in fading with channel interleaving.
CSM solutions of rotating blade dynamics using integrating matrices
NASA Technical Reports Server (NTRS)
Lakin, William D.
1992-01-01
The dynamic behavior of flexible rotating beams continues to receive considerable research attention, as it constitutes a fundamental problem in applied mechanics; further, beams comprise parts of many rotating structures of engineering significance. A topic of particular current interest is the development of techniques for obtaining the behavior in both space and time of a rotor acted upon by a simple airload. Most current work on problems of this type uses solution techniques based on normal modes. It is certainly true that normal modes cannot be disregarded, as knowledge of natural blade frequencies is always important. However, the present work considers a computational structural mechanics (CSM) approach to rotor blade dynamics problems in which the physical properties of the rotor blade provide input for a direct numerical solution of the relevant boundary- and initial-value problem. Analysis of the dynamics of a given rotor system may require solution of the governing equations over a long time interval corresponding to many revolutions of the loaded flexible blade. For this reason, most of the common techniques in computational mechanics, which treat the space-time behavior concurrently, cannot be applied to the rotor dynamics problem without a large expenditure of computational resources. By contrast, the integrating matrix technique of computational mechanics can consistently incorporate boundary conditions and 'remove' dependence on a space variable. For problems involving both space and time, this feature of the integrating matrix approach can thus generate a 'splitting' which forms the basis of an efficient CSM method for numerical solution of rotor dynamics problems.
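The integrating-matrix idea — representing integration with respect to the space variable as a matrix operator, so spatial dependence can be "removed" from the governing equations — can be illustrated with a minimal trapezoidal-rule sketch. The grid, quadrature rule, and test function below are illustrative; production rotor-dynamics codes use higher-order quadrature:

```python
import numpy as np

def integrating_matrix(x):
    """Trapezoidal integrating matrix L for a 1-D grid x:
    (L @ f)[i] approximates the integral of f from x[0] to x[i]."""
    n = len(x)
    L = np.zeros((n, n))
    for i in range(1, n):
        h = x[i] - x[i - 1]
        L[i] = L[i - 1]          # accumulate previous panels
        L[i, i - 1] += 0.5 * h   # trapezoid weights for panel [x[i-1], x[i]]
        L[i, i] += 0.5 * h
    return L

# Sanity check: the cumulative integral of cos(x) should reproduce sin(x).
x = np.linspace(0.0, np.pi / 2, 201)
L = integrating_matrix(x)
approx = L @ np.cos(x)
err = float(np.max(np.abs(approx - np.sin(x))))
```

Because `L` acts on grid values of any function, repeated application converts a spatial differential operator into an algebraic one with boundary conditions built into the first row, which is the "splitting" the abstract exploits.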
Rodríguez, Alfonso; Valverde, Juan; Portilla, Jorge; Otero, Andrés; Riesgo, Teresa; de la Torre, Eduardo
2018-06-08
Cyber-Physical Systems are experiencing a paradigm shift in which processing has been relocated to the distributed sensing layer and is no longer performed in a centralized manner. This approach, usually referred to as Edge Computing, demands the use of hardware platforms that are able to manage the steadily increasing requirements in computing performance, while maintaining energy efficiency and the adaptability imposed by the interaction with the physical world. In this context, SRAM-based FPGAs and their inherent run-time reconfigurability, when coupled with smart power management strategies, are a suitable solution. However, they usually fall short in user accessibility and ease of development. In this paper, an integrated framework to develop FPGA-based high-performance embedded systems for Edge Computing in Cyber-Physical Systems is presented. This framework provides a hardware-based processing architecture, an automated toolchain, and a runtime to transparently generate and manage reconfigurable systems from high-level system descriptions without additional user intervention. Moreover, it provides users with support for dynamically adapting the available computing resources to switch the working point of the architecture in a solution space defined by computing performance, energy consumption and fault tolerance. Results show that it is indeed possible to explore this solution space at run time and prove that the proposed framework is a competitive alternative to software-based edge computing platforms, being able to provide not only faster solutions, but also higher energy efficiency for computing-intensive algorithms with significant levels of data-level parallelism.
Measurement of glomerulus diameter and Bowman's space width of renal albino rats.
Kotyk, Taras; Dey, Nilanjan; Ashour, Amira S; Balas-Timar, Dana; Chakraborty, Sayan; Ashour, Ahmed S; Tavares, João Manuel R S
2016-04-01
Glomerulus diameter and Bowman's space width in renal microscopic images indicate various diseases. Therefore, the detection of the renal corpuscle and related objects is a key step in the histopathological evaluation of renal microscopic images. However, the task of automatic glomeruli detection is challenging due to their wide intensity variation, besides the inconsistency in shape and size of the glomeruli in the renal corpuscle. Here, a novel solution is proposed that includes the Particles Analyzer technique based on a median filter for morphological image processing to detect the renal corpuscle objects. Afterwards, the glomerulus diameter and Bowman's space width are measured. The solution was tested on a dataset of 21 rat renal corpuscle images acquired using a light microscope. The experimental results proved that the proposed solution can detect the renal corpuscle and its objects efficiently. Moreover, the proposed solution can handle any input image, assuring its robustness to deformations of the glomeruli, even in cases of glomerular hypertrophy. The results also showed a significant difference between the control and affected (due to an ingested additional daily dose (14.6 mg) of fructose) groups in terms of glomerulus diameter (97.40±19.02 μm and 177.03±54.48 μm, respectively). Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
Stability control of a flexible maneuverable tethered space net robot
NASA Astrophysics Data System (ADS)
Zhang, Fan; Huang, Panfeng
2018-04-01
As a promising solution for active space debris capture and removal, a maneuverable Tethered Space Net Robot (TSNR) is proposed as an improvement on the Tethered Space Net (TSN). In addition to the advantages inherent to the TSN, the TSNR's maneuverability expands its capture potential. However, oscillations caused by the TSNR's flexibility and elasticity place greater demands on the control scheme. Based on the dynamics model, a modified adaptive super-twisting sliding mode control scheme is proposed in this paper for TSNR stability control. The proposed continuous control force can effectively suppress oscillations. Theoretical verification and numerical simulations demonstrate that the desired trajectory can be tracked steadily and efficiently by employing the proposed control scheme.
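The super-twisting law behind such a controller can be sketched on a perturbed double integrator. This is a textbook second-order sliding mode illustration under assumed gains, not the paper's TSNR dynamics model:

```python
import numpy as np

def simulate_super_twisting(k1=4.0, k2=4.0, c=2.0, dt=1e-3, t_end=10.0):
    """Super-twisting control of x'' = u + d(t), with sliding surface
    s = e' + c*e, e = x - x_ref (here x_ref = 0). Gains are illustrative."""
    x, xd, v = 1.0, 0.0, 0.0      # initial error of 1, integral term v
    for k in range(int(t_end / dt)):
        t = k * dt
        s = xd + c * x
        # Super-twisting law: the control itself is continuous (no
        # high-frequency switching in u), which suppresses chattering.
        u = -k1 * np.sqrt(abs(s)) * np.sign(s) + v
        v += -k2 * np.sign(s) * dt          # discontinuity hidden in v'
        d = 0.5 * np.sin(2.0 * t)           # bounded matched disturbance
        xd += (u + d) * dt                  # forward-Euler integration
        x += xd * dt
    return x, xd

x_final, xd_final = simulate_super_twisting()
```

Despite the persistent sinusoidal disturbance, the state is driven to the sliding surface in finite time and then decays to the origin, which is the oscillation-suppression property the abstract emphasizes.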
On conforming mixed finite element methods for incompressible viscous flow problems
NASA Technical Reports Server (NTRS)
Gunzburger, M. D.; Nicolaides, R. A.; Peterson, J. S.
1982-01-01
The application of conforming mixed finite element methods to obtain approximate solutions of linearized Navier-Stokes equations is examined. Attention is given to the convergence rates of various finite element approximations of the pressure and the velocity field. The optimality of the convergence rates is addressed in terms of comparisons of the approximation convergence to a smooth solution in relation to the best approximation available for the finite element space used. Consideration is also devoted to techniques for efficient use of a Gaussian elimination algorithm to obtain a solution to a system of linear algebraic equations derived by finite element discretizations of linear partial differential equations.
NASA Technical Reports Server (NTRS)
Englander, Arnold C.; Englander, Jacob A.
2017-01-01
Interplanetary trajectory optimization problems are highly complex and are characterized by a large number of decision variables, equality and inequality constraints, and many locally optimal solutions. Stochastic global search techniques, coupled with a large-scale NLP solver, have been shown to solve such problems but are insufficiently robust when the problem constraints become very complex. In this work, we present a novel search algorithm that takes advantage of the fact that equality constraints effectively collapse the solution space to lower dimensionality. This new approach walks the "filament" of feasibility to efficiently find the globally optimal solution.
Post-Optimality Analysis In Aerospace Vehicle Design
NASA Technical Reports Server (NTRS)
Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.
1993-01-01
This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable single-stage-to-orbit launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
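The first-order post-optimality idea — the Lagrange multiplier acts as a shadow price that predicts how the optimum shifts under a constraint perturbation, without reoptimizing — can be checked on a toy problem with a known closed-form solution. The quadratic program below is purely illustrative, not the paper's aircraft or launch vehicle model:

```python
# Minimize f(x, y) = x^2 + y^2 subject to x + y = b.
# Exact optimum: x = y = b/2, so f*(b) = b^2 / 2, and the Lagrange
# multiplier equals df*/db = b (the "shadow price" of the constraint).

def f_star(b):
    """Closed-form optimal objective for this toy problem."""
    return b * b / 2.0

b0, db = 2.0, 0.1
lam = b0                                  # multiplier at the solved optimum
predicted = f_star(b0) + lam * db         # first-order post-optimality estimate
exact = f_star(b0 + db)                   # what full reoptimization would give
rel_err = abs(predicted - exact) / exact
```

Here the estimate (2.2) differs from the exact reoptimized value (2.205) by well under one percent, mirroring the few-percent accuracy the abstract reports for much larger design problems.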
NASA Technical Reports Server (NTRS)
Mattick, A. T.; Hertzberg, A.
1984-01-01
A heat rejection system for space is described which uses a recirculating free stream of liquid droplets in place of a solid surface to radiate waste heat. By using sufficiently small droplets (less than about 100 micron diameter) of low vapor pressure liquids the radiating droplet sheet can be made many times lighter than the lightest solid surface radiators (heat pipes). The liquid droplet radiator (LDR) is less vulnerable to damage by micrometeoroids than solid surface radiators, and may be transported into space far more efficiently. Analyses are presented of LDR applications in thermal and photovoltaic energy conversion which indicate that fluid handling components (droplet generator, droplet collector, heat exchanger, and pump) may comprise most of the radiator system mass. Even the unoptimized models employed yield LDR system masses less than heat pipe radiator system masses, and significant improvement is expected using design approaches that incorporate fluid handling components more efficiently. Technical problems (e.g., spacecraft contamination and electrostatic deflection of droplets) unique to this method of heat rejection are discussed and solutions are suggested.
NASA Technical Reports Server (NTRS)
Mattick, A. T.; Hertzberg, A.
1981-01-01
A heat rejection system for space is described which uses a recirculating free stream of liquid droplets in place of a solid surface to radiate waste heat. By using sufficiently small droplets (less than about 100 micron diameter) of low vapor pressure liquids (tin, tin-lead-bismuth eutectics, vacuum oils) the radiating droplet sheet can be made many times lighter than the lightest solid surface radiators (heat pipes). The liquid droplet radiator (LDR) is less vulnerable to damage by micrometeoroids than solid surface radiators, and may be transported into space far more efficiently. Analyses are presented of LDR applications in thermal and photovoltaic energy conversion which indicate that fluid handling components (droplet generator, droplet collector, heat exchanger, and pump) may comprise most of the radiator system mass. Even the unoptimized models employed yield LDR system masses less than heat pipe radiator system masses, and significant improvement is expected using design approaches that incorporate fluid handling components more efficiently. Technical problems (e.g., spacecraft contamination and electrostatic deflection of droplets) unique to this method of heat rejection are discussed and solutions are suggested.
Deep Learning for Flow Sculpting: Insights into Efficient Learning using Scientific Simulation Data
Stoecklein, Daniel; Lore, Kin Gwn; Davies, Michael; Sarkar, Soumik; Ganapathysubramanian, Baskar
2017-01-01
A new technique for shaping microfluidic flow, known as flow sculpting, offers an unprecedented level of passive fluid flow control, with potential breakthrough applications in advancing manufacturing, biology, and chemistry research at the microscale. However, efficiently solving the inverse problem of designing a flow sculpting device for a desired fluid flow shape remains a challenge. Current approaches struggle with the many-to-one design space, requiring substantial user interaction and the necessity of building intuition, all of which are time and resource intensive. Deep learning has emerged as an efficient function-approximation technique for high-dimensional spaces and presents a fast solution to the inverse problem, yet the science of its implementation in similarly defined problems remains largely unexplored. We propose that deep learning methods can completely outpace current approaches for scientific inverse problems while delivering comparable designs. To this end, we show how intelligent sampling of the design space inputs can make deep learning methods more competitive in accuracy, while illustrating their generalization capability to out-of-sample predictions. PMID:28402332
Court, Richard W; Sims, Mark R; Cullen, David C; Sephton, Mark A
2014-09-01
Life-detection instruments on future Mars missions may use surfactant solutions to extract organic matter from samples of martian rocks. The thermal and radiation environments of space and Mars are capable of degrading these solutions, thereby reducing their ability to dissolve organic species. Successful extraction and detection of biosignatures on Mars requires an understanding of how degradation in extraterrestrial environments can affect surfactant performance. We exposed solutions of the surfactants polysorbate 80 (PS80), Zonyl FS-300, and poly[dimethylsiloxane-co-[3-(2-(2-hydroxyethoxy)ethoxy)propyl]methylsiloxane] (PDMSHEPMS) to elevated radiation and heat levels, combined with prolonged storage. Degradation was investigated by measuring changes in pH and electrical conductivity and by using the degraded solutions to extract a suite of organic compounds spiked onto grains of the martian soil simulant JSC Mars-1. Results indicate that the proton fluences expected during a mission to Mars do not cause significant degradation of surfactant compounds. Solutions of PS80 or PDMSHEPMS stored at -20 °C are able to extract the spiked standards with acceptable recovery efficiencies. Extraction efficiencies for spiked standards decrease progressively with increasing temperature, and prolonged storage at 60 °C renders the surfactant solutions ineffective. Neither the presence of ascorbic acid nor the choice of solvent unequivocally alters the efficiency of extraction of the spiked standards. Since degradation of polysorbates has the potential to produce organic compounds that could be mistaken for indigenous martian organic matter, the polysiloxane PDMSHEPMS may be a superior choice of surfactant for the exploration of Mars.
Inversion of geophysical potential field data using the finite element method
NASA Astrophysics Data System (ADS)
Lamichhane, Bishnu P.; Gross, Lutz
2017-12-01
The inversion of geophysical potential field data can be formulated as an optimization problem with a constraint in the form of a partial differential equation (PDE). It is common practice, if possible, to provide an analytical solution for the forward problem and to reduce the problem to a finite-dimensional optimization problem. In an alternative approach, optimization is applied to the continuous problem, and the resulting set of coupled PDEs is subsequently solved using a standard PDE discretization method, such as the finite element method (FEM). In this paper, we show that under very mild conditions on the data misfit functional and the forward problem in three-dimensional space, the continuous optimization problem and its FEM discretization are well-posed, including the existence and uniqueness of the respective solutions. We provide error estimates for the FEM solution. A main result of the paper is that the FEM spaces used for the forward problem and the Lagrange multiplier need to be identical but can be chosen independently from the FEM space used to represent the unknown physical property. We demonstrate the convergence of the solution approximations in a numerical example. The second numerical example, which investigates the selection of FEM spaces, shows that from the perspective of computational efficiency one should use a 2 to 4 times finer mesh for the forward problem than for the physical property.
Searching Fragment Spaces with feature trees.
Lessel, Uta; Wellenzohn, Bernd; Lilienthal, Markus; Claussen, Holger
2009-02-01
Virtual combinatorial chemistry easily produces billions of compounds, for which conventional virtual screening cannot be performed even with the fastest methods available. An efficient solution for such a scenario is the generation of Fragment Spaces, which encode huge numbers of virtual compounds by their fragments/reagents and rules of how to combine them. Similarity-based searches can be performed in such spaces without ever fully enumerating all virtual products. Here we describe the generation of a huge Fragment Space encoding about 5 × 10^11 compounds based on established in-house synthesis protocols for combinatorial libraries; i.e., we encode practically evaluated combinatorial chemistry protocols in a machine-readable form, rendering them accessible to in silico search methods. We show how searches in this Fragment Space can be integrated as a first step in an overall workflow. Such a search reduces the extremely huge number of virtual products by several orders of magnitude, so that the resulting list of molecules becomes manageable for further, more elaborate and time-consuming analysis steps. Results of a case study are presented and discussed, leading to some general conclusions for an efficient expansion of the chemical space to be screened in pharmaceutical companies.
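The defining property of a Fragment Space — the size of the encoded chemistry follows from fragment counts and combination rules, with products enumerated only lazily when needed — can be sketched as follows. The pool names, sizes, and single all-combine rule are invented for illustration; real Fragment Spaces use chemical link types and reaction-derived rules:

```python
from itertools import islice, product

# Hypothetical fragment space: three reagent pools, with one rule stating
# that any triple (acid, amine, cap) forms a valid virtual product.
pools = {
    "acids":  [f"F{i}" for i in range(1000)],
    "amines": [f"G{i}" for i in range(800)],
    "caps":   [f"H{i}" for i in range(600)],
}

# Size of the encoded space, computed WITHOUT enumerating any product.
n_products = 1
for frags in pools.values():
    n_products *= len(frags)        # 1000 * 800 * 600

# Enumeration happens lazily and only for short hit lists, e.g. the first
# few products surviving an upstream similarity filter.
first_five = list(islice(product(*pools.values()), 5))
```

Even this toy space encodes 4.8 × 10^8 products from just 2,400 stored fragments, which is why similarity searches operate on fragments plus rules rather than on enumerated molecules.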
On computations of the integrated space shuttle flowfield using overset grids
NASA Technical Reports Server (NTRS)
Chiu, I-T.; Pletcher, R. H.; Steger, J. L.
1990-01-01
Numerical simulations using the thin-layer Navier-Stokes equations and the chimera (overset) grid approach were carried out for flows around the integrated space shuttle vehicle over a range of Mach numbers. Body-conforming grids were used for all the component grids. Test cases include a three-component overset grid configuration (the external tank (ET), the solid rocket booster (SRB), and the orbiter (ORB)) and a five-component configuration (the ET, SRB, ORB, and the forward and aft attach hardware). The results were compared with wind tunnel and flight data. In addition, a Poisson solution procedure (a special case of the vorticity-velocity formulation) using primitive variables was developed to solve three-dimensional, irrotational, inviscid flows for single as well as overset grids. The solutions were validated by comparisons with other analytical or numerical solutions and/or experimental results for various geometries. The Poisson solution was also used as an initial guess for the thin-layer Navier-Stokes solution procedure to improve the efficiency of the numerical flow simulations. It was found that this approach resulted in roughly a 30 percent CPU time savings compared with solving the thin-layer Navier-Stokes equations from a uniform free-stream flowfield.
Study on workshop layout of a motorcycle company based on systematic layout planning (SLP)
NASA Astrophysics Data System (ADS)
Zhou, Kang-Qu; Zhang, Rui-Juan; Wang, Ying-Dong; Wang, Bing-Jie
2010-08-01
The method of SLP has been applied to a motorcycle company's layout planning. In this layout design, relationship graphics have been used to illuminate the logistics and non-logistics relationships of every workshop, yielding the integrated relationships of the workshops and two preliminary plans. By comparing the two preliminary plans in terms of logistics efficiency, space utilization, ease of management, etc., an improved solution is proposed. With this improved solution, productivity has been increased by 18% and production capacity reaches 1600 engines per day.
An abstract approach to evaporation models in rarefied gas dynamics
NASA Astrophysics Data System (ADS)
Greenberg, W.; van der Mee, C. V. M.
1984-03-01
Strong evaporation models involving 1D stationary problems with linear self-adjoint collision operators and solutions in abstract Hilbert spaces are investigated analytically. An efficient algorithm for locating the transition from existence to nonexistence of solutions is developed and applied to the 1D and 3D BGK model equations and the 3D BGK model in moment form, demonstrating the nonexistence of stationary evaporation states with supersonic drift velocities. Applications to similar models in electron and phonon transport, radiative transfer, and neutron transport are suggested.
Parameter estimation uncertainty: Comparing apples and apples?
NASA Astrophysics Data System (ADS)
Hart, D.; Yoon, H.; McKenna, S. A.
2012-12-01
Given a highly parameterized ground water model in which the conceptual model of the heterogeneity is stochastic, an ensemble of inverse calibrations from multiple starting points (MSP) provides an ensemble of calibrated parameters and follow-on transport predictions. However, the multiple calibrations are computationally expensive. Parameter estimation uncertainty can also be modeled by decomposing the parameterization into a solution space and a null space. From a single calibration (single starting point) a single set of parameters defining the solution space can be extracted. The solution space is held constant while Monte Carlo sampling of the parameter set covering the null space creates an ensemble of the null space parameter set. A recently developed null-space Monte Carlo (NSMC) method combines the calibration solution space parameters with the ensemble of null space parameters, creating sets of calibration-constrained parameters for input to the follow-on transport predictions. Here, we examine the consistency between probabilistic ensembles of parameter estimates and predictions using the MSP calibration and the NSMC approaches. A highly parameterized model of the Culebra dolomite previously developed for the WIPP project in New Mexico is used as the test case. A total of 100 estimated fields are retained from the MSP approach and the ensemble of results defining the model fit to the data, the reproduction of the variogram model and prediction of an advective travel time are compared to the same results obtained using NSMC. We demonstrate that the NSMC fields based on a single calibration model can be significantly constrained by the calibrated solution space and the resulting distribution of advective travel times is biased toward the travel time from the single calibrated field. To overcome this, newly proposed strategies to employ a multiple calibration-constrained NSMC approach (M-NSMC) are evaluated. 
Comparison of the M-NSMC and MSP methods suggests that M-NSMC can provide a computationally efficient and practical solution for predictive uncertainty analysis in highly nonlinear and complex subsurface flow and transport models. This material is based upon work supported as part of the Center for Frontiers of Subsurface Energy Security, an Energy Frontier Research Center funded by the U.S. Department of Energy, Office of Science, Office of Basic Energy Sciences under Award Number DE-SC0001114. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
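The null-space decomposition underlying NSMC can be illustrated on a linear toy model: a singular value decomposition of the sensitivity (Jacobian) matrix splits parameter space into a solution space constrained by the observations and a null space that can be sampled freely without degrading the calibration. Dimensions and values below are illustrative, not the Culebra model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model obs = J @ p: many parameters (8), few observations (3),
# so calibration is ill-posed and a 5-dimensional null space remains.
J = rng.standard_normal((3, 8))
p_cal = rng.standard_normal(8)          # a single calibrated parameter set
obs = J @ p_cal

# SVD splits parameter space: columns of V beyond rank(J) span the null space.
U, s, Vt = np.linalg.svd(J)             # full_matrices=True -> Vt is 8 x 8
V_null = Vt[len(s):].T                  # null-space basis, shape (8, 5)

# NSMC-style ensemble: random null-space perturbations of the single
# calibrated field leave the fit to the observations unchanged.
ensemble = [p_cal + V_null @ rng.standard_normal(V_null.shape[1])
            for _ in range(50)]
misfits = [float(np.linalg.norm(J @ p - obs)) for p in ensemble]
```

Every ensemble member reproduces the data exactly (to machine precision) while differing in the unconstrained directions, which is also why the resulting predictions cluster around the single calibrated field — the bias the M-NSMC strategy above is designed to overcome.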
Study of data entry requirements at Marshall Space Flight Computation Center
NASA Technical Reports Server (NTRS)
Sherman, G. R.
1975-01-01
An economic and systems analysis of a data center was conducted. Current facilities for data storage of documentation are shown to be inadequate and outmoded for efficient data handling. Redesign of documents, condensation of the keypunching operation, upgrading of hardware, and retraining of personnel are the solutions proposed to improve the present data system.
Statistical Inference-Based Cache Management for Mobile Learning
ERIC Educational Resources Information Center
Li, Qing; Zhao, Jianmin; Zhu, Xinzhong
2009-01-01
Supporting efficient data access in the mobile learning environment is becoming a hot research problem in recent years, and the problem becomes tougher when the clients are using light-weight mobile devices such as cell phones whose limited storage space prevents the clients from holding a large cache. A practical solution is to store the cache…
Intelligent Control of Flexible-Joint Robotic Manipulators
NASA Technical Reports Server (NTRS)
Colbaugh, R.; Gallegos, G.
1997-01-01
This paper considers the trajectory tracking problem for uncertain rigid-link, flexible-joint manipulators, and presents a new intelligent controller as a solution to this problem. The proposed control strategy is simple and computationally efficient, requires little information concerning either the manipulator or actuator/transmission models, and ensures uniform boundedness of all signals and arbitrarily accurate task-space trajectory tracking.
Rapid and efficient formation of propagation invariant shaped laser beams.
Chriki, Ronen; Barach, Gilad; Tradosnky, Chene; Smartsev, Slava; Pal, Vishwa; Friesem, Asher A; Davidson, Nir
2018-02-19
A rapid and efficient all-optical method for forming propagation invariant shaped beams by exploiting the optical feedback of a laser cavity is presented. The method is based on the modified degenerate cavity laser (MDCL), which is a highly incoherent cavity laser. The MDCL has a very large number of degrees of freedom (320,000 modes in our system) that can be coupled and controlled, and allows direct access to both the real space and Fourier space of the laser beam. By inserting amplitude masks into the cavity, constraints can be imposed on the laser in order to obtain minimal loss solutions that would optimally lead to a superposition of Bessel-Gauss beams forming a desired shaped beam. The resulting beam maintains its transverse intensity distribution for relatively long propagation distances.
Towards a multilevel cognitive probabilistic representation of space
NASA Astrophysics Data System (ADS)
Tapus, Adriana; Vasudevan, Shrihari; Siegwart, Roland
2005-03-01
This paper addresses the problem of perception and representation of space for a mobile agent. A probabilistic hierarchical framework is suggested as a solution to this problem. The method proposed is a combination of probabilistic belief with "Object Graph Models" (OGM). The world is viewed from a topological optic, in terms of objects and relationships between them. The hierarchical representation that we propose permits an efficient and reliable modeling of the information that the mobile agent would perceive from its environment. The integration of both navigational and interactional capabilities through efficient representation is also addressed. Experiments on a set of images taken from the real world that validate the approach are reported. This framework draws on the general understanding of human cognition and perception and contributes towards the overall efforts to build cognitive robot companions.
NASA Astrophysics Data System (ADS)
Shang, J. S.; Andrienko, D. A.; Huang, P. G.; Surzhikov, S. T.
2014-06-01
An efficient computational capability for nonequilibrium radiation simulation via the ray-tracing technique has been accomplished. The radiative rate equation is iteratively coupled with the aerodynamic conservation laws, including nonequilibrium chemical and chemical-physical kinetic models. The spectral properties along tracing rays are determined by a space-partition algorithm using a nearest-neighbor search, and the numerical accuracy is further enhanced by local resolution refinement using the Gauss-Lobatto polynomial. The interdisciplinary governing equations are solved by an implicit delta formulation through the diminishing-residual approach. The axisymmetric radiating flow fields over the reentry RAM-CII probe have been simulated and verified against flight data and previous solutions by traditional methods. A computational efficiency gain of nearly forty times is realized over existing simulation procedures.
Integrated analysis of large space systems
NASA Technical Reports Server (NTRS)
Young, J. P.
1980-01-01
Based on the belief that actual flight hardware development of large space systems will necessitate a formalized method of integrating the various engineering discipline analyses, an efficient, highly user-oriented software system capable of performing interdisciplinary design analyses with tolerable solution turnaround time is planned. Specific analysis capability goals were set forth, with initial emphasis given to sequential and quasi-static thermal/structural analysis and fully coupled structural/control system analysis. Subsequently, the IAC would be expanded to include fully coupled thermal/structural/control system, electromagnetic radiation, and optical performance analyses.
NASA management of the Space Shuttle Program
NASA Technical Reports Server (NTRS)
Peters, F.
1975-01-01
The management system and management technology described have been developed to meet stringent cost and schedule constraints of the Space Shuttle Program. Management of resources available to this program requires control and motivation of a large number of efficient creative personnel trained in various technical specialties. This must be done while keeping track of numerous parallel, yet interdependent activities involving different functions, organizations, and products all moving together in accordance with intricate plans for budgets, schedules, performance, and interaction. Some techniques developed to identify problems at an early stage and seek immediate solutions are examined.
ACCURATE CHEMICAL MASTER EQUATION SOLUTION USING MULTI-FINITE BUFFERS
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-01-01
The discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multi-scale nature of many networks where reaction rates have large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the Accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multi-finite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes, and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be pre-computed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multi-scale networks, namely, a 6-node toggle switch, 11-node phage-lambda epigenetic circuit, and 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks. PMID:27761104
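The birth-death decomposition that underlies the truncation analysis can be illustrated on the simplest possible case: a single birth-death process with a finite buffer of size N, whose steady state solves Ap = 0 and has a known closed form (a truncated Poisson). The sketch below is not the ACME implementation; the rates and buffer size are illustrative.

```python
import math
import numpy as np

def birth_death_generator(N, k, gamma):
    """Generator matrix A for a birth-death process truncated at N:
    births at constant rate k, deaths at rate gamma*i, so dp/dt = A @ p."""
    A = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        if i < N:
            A[i + 1, i] += k          # birth: state i -> i+1
            A[i, i] -= k
        if i > 0:
            A[i - 1, i] += gamma * i  # death: state i -> i-1
            A[i, i] -= gamma * i
    return A

def steady_state(A):
    """Solve A p = 0 with sum(p) = 1 by replacing one (redundant)
    balance equation with the normalization constraint."""
    n = A.shape[0]
    M = A.copy()
    M[-1, :] = 1.0
    b = np.zeros(n)
    b[-1] = 1.0
    return np.linalg.solve(M, b)

N, k, gamma = 30, 2.0, 1.0
p = steady_state(birth_death_generator(N, k, gamma))

# Detailed balance gives a truncated Poisson(k/gamma) steady state.
lam = k / gamma
i = np.arange(N + 1)
poisson = lam**i / np.array([math.factorial(j) for j in i])
poisson /= poisson.sum()
print(np.abs(p - poisson).max())
```

For a buffer large enough relative to k/gamma, the truncation error is negligible, which is the regime the a priori error bound in the abstract is designed to certify.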
NASA Technical Reports Server (NTRS)
Goodrich, John W.
1995-01-01
Two methods for developing high order single step explicit algorithms on symmetric stencils with data on only one time level are presented. Examples are given for the convection and linearized Euler equations with up to eighth-order accuracy in both space and time in one space dimension, and up to sixth-order in two space dimensions. The method of characteristics is generalized to nondiagonalizable hyperbolic systems by using exact local polynomial solutions of the system, and the resulting exact propagator methods automatically incorporate the correct multidimensional wave propagation dynamics. Multivariate Taylor or Cauchy-Kowaleskaya expansions are also used to develop algorithms. Both of these methods can be applied to obtain algorithms of arbitrarily high order for hyperbolic systems in multiple space dimensions. Cross derivatives are included in the local approximations used to develop the algorithms in this paper in order to obtain high order accuracy, and improved isotropy and stability. Efficiency in meeting global error bounds is an important criterion for evaluating algorithms, and the higher order algorithms are shown to be up to several orders of magnitude more efficient even though they are more complex. Stable high order boundary conditions for the linearized Euler equations are developed in one space dimension, and demonstrated in two space dimensions.
Space debris removal by ground-based lasers: main conclusions of the European project CLEANSPACE.
Esmiller, Bruno; Jacquelard, Christophe; Eckel, Hans-Albert; Wnuk, Edwin
2014-11-01
Studies show that the number of debris objects in low Earth orbit is growing exponentially despite the debris-release mitigation measures under consideration. Specifically, the already existing population of small and medium debris (between 1 cm and several tens of cm) is today a concrete threat to operational satellites. A ground-based laser solution that can remove hazardous debris around selected space assets, at low expense and in a nondestructive way, appears to be a highly promising answer. This solution is studied within the framework of the CLEANSPACE project, part of the FP7 space program. The overall CLEANSPACE objective is: to propose an efficient and affordable global system architecture; to tackle safety regulation aspects, political implications and future collaborations; to develop affordable technological bricks; and to establish a roadmap for the development and future implementation of a fully functional laser protection system. This paper will present the main conclusions of the CLEANSPACE project.
A geometric viewpoint on generalized hydrodynamics
NASA Astrophysics Data System (ADS)
Doyon, Benjamin; Spohn, Herbert; Yoshimura, Takato
2018-01-01
Generalized hydrodynamics (GHD) is a large-scale theory for the dynamics of many-body integrable systems. It consists of an infinite set of conservation laws for quasi-particles traveling with effective ("dressed") velocities that depend on the local state. We show that these equations can be recast into a geometric dynamical problem. They are conservation equations with state-independent quasi-particle velocities, in a space equipped with a family of metrics, parametrized by the quasi-particles' type and speed, that depend on the local state. In the classical hard rod or soliton gas picture, these metrics measure the free length of space as perceived by quasi-particles; in the quantum picture, they weigh space with the density of states available to them. Using this geometric construction, we find a general solution to the initial value problem of GHD, in terms of a set of integral equations where time appears explicitly. These integral equations are solvable by iteration and provide an extremely efficient solution algorithm for GHD.
Networks of Firms and the Ridge in the Production Space
NASA Astrophysics Data System (ADS)
Souma, Wataru
We develop complex networks that represent activities in the economy. The network in this study is constructed from firms and the relationships between firms, i.e., shareholding, interlocking directors, transactions, and joint applications for patents. Thus, the network is regarded as a multigraph, and it is also regarded as a weighted network. By calculating various network indices, we clarify the characteristics of the network. We also consider the dynamics of firms in the production space that are characterized by capital stock, employment, and profit. Each firm moves within this space to maximize its profit by controlling capital stock and employment. We show that the dynamics of rational firms can be described using a ridge equation. We analytically solve this equation by assuming the extensive Cobb-Douglas production function, and thereby obtain a solution. By comparing the distribution of firms and this solution, we find that almost all of the 1,100 firms listed on the first section of the Tokyo stock exchange and belonging to the manufacturing sector are managed efficiently.
An imperialist competitive algorithm for virtual machine placement in cloud computing
NASA Astrophysics Data System (ADS)
Jamali, Shahram; Malektaji, Sepideh; Analoui, Morteza
2017-05-01
Cloud computing, the recently emerged revolution in IT industry, is empowered by virtualisation technology. In this paradigm, the user's applications run over some virtual machines (VMs). The process of selecting proper physical machines to host these virtual machines is called virtual machine placement. It plays an important role on resource utilisation and power efficiency of cloud computing environment. In this paper, we propose an imperialist competitive-based algorithm for the virtual machine placement problem called ICA-VMPLC. The base optimisation algorithm is chosen to be ICA because of its ease in neighbourhood movement, good convergence rate and suitable terminology. The proposed algorithm investigates search space in a unique manner to efficiently obtain optimal placement solution that simultaneously minimises power consumption and total resource wastage. Its final solution performance is compared with several existing methods such as grouping genetic and ant colony-based algorithms as well as bin packing heuristic. The simulation results show that the proposed method is superior to other tested algorithms in terms of power consumption, resource wastage, CPU usage efficiency and memory usage efficiency.
On the next generation of reliability analysis tools
NASA Technical Reports Server (NTRS)
Babcock, Philip S., IV; Leong, Frank; Gai, Eli
1987-01-01
The current generation of reliability analysis tools concentrates on improving the efficiency of the description and solution of the fault-handling processes and providing a solution algorithm for the full system model. The tools have improved user efficiency in these areas to the extent that the problem of constructing the fault-occurrence model is now the major analysis bottleneck. For the next generation of reliability tools, it is proposed that techniques be developed to improve the efficiency of the fault-occurrence model generation and input. Further, the goal is to provide an environment permitting a user to provide a top-down design description of the system from which a Markov reliability model is automatically constructed. The user is thus relieved of the tedious and error-prone process of model construction, efficient exploration of the design space is made possible, and an independent validation of the system's operation is obtained. An additional benefit of automating the model construction process is the opportunity to reduce the specialized knowledge required. Hence, the user need only be an expert in the system he is analyzing; the expertise in reliability analysis techniques is supplied.
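The automatic-construction idea can be sketched in miniature: from nothing but a list of component failure rates, enumerate the up/down states, fill in the Markov generator, and integrate the Kolmogorov equations. This is a toy illustration, not the proposed tool; the two-component parallel system and its rates are chosen only because the answer has a closed form to check against.

```python
import numpy as np

def build_generator(rates):
    """Enumerate up/down states (bitmask: bit i set = component i failed)
    and fill the Markov generator from the component failure rates."""
    n = len(rates)
    S = 1 << n
    Q = np.zeros((S, S))
    for s in range(S):
        for i, lam in enumerate(rates):
            if not s & (1 << i):           # component i still up
                Q[s, s | (1 << i)] += lam  # it can fail
                Q[s, s] -= lam
    return Q

def reliability(rates, t, failed_state, steps=1000):
    """P(system not yet in the all-failed state) via RK4 on p' = p Q,
    starting from the all-up state."""
    Q = build_generator(rates)
    p = np.zeros(Q.shape[0]); p[0] = 1.0
    h = t / steps
    for _ in range(steps):
        k1 = p @ Q
        k2 = (p + h/2*k1) @ Q
        k3 = (p + h/2*k2) @ Q
        k4 = (p + h*k3) @ Q
        p = p + h/6*(k1 + 2*k2 + 2*k3 + k4)
    return 1.0 - p[failed_state]

# Two independent components in parallel: system fails only when both fail.
lam1, lam2, t = 0.5, 1.0, 1.0
R = reliability([lam1, lam2], t, failed_state=0b11)
R_exact = 1.0 - (1 - np.exp(-lam1*t)) * (1 - np.exp(-lam2*t))
print(R, R_exact)
```

A real tool would of course construct the state space from a structured design description and handle repair, coverage, and fault-handling transitions; the point here is only that the generator is mechanically derivable from the component list.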
Building Operations Efficiencies into NASA's Ares I Crew Launch Vehicle Design
NASA Technical Reports Server (NTRS)
Dumbacher, Daniel L.; Davis, Stephan R.
2007-01-01
The U.S. Vision for Space Exploration guides the National Aeronautics and Space Administration's (NASA's) challenging missions that expand humanity's boundaries and open new routes to the space frontier. With the Agency's commitment to complete the International Space Station (ISS) and to retire the venerable Space Shuttle by 2010, the NASA Administrator commissioned the Exploration Systems Architecture Study (ESAS) in 2005 to analyze options for safe, simple, cost-efficient launch solutions that could deliver human-rated space transportation capabilities in a timely manner within fixed budget guidelines. The Exploration Launch Projects (ELP) Office, chartered by the Constellation Program in October 2005, has been conducting systems engineering studies and business planning to successively refine the design configurations and better align vehicle concepts with customer and stakeholder requirements, such as significantly reduced life-cycle costs. As the Agency begins the process of replacing the Shuttle with a new generation of spacecraft destined for missions beyond low-Earth orbit to the Moon and Mars, NASA is designing the follow-on crew and cargo launch systems for maximum operational efficiencies. To sustain the long-term exploration of space, it is imperative to reduce the $4 billion NASA typically spends on space transportation each year. This paper gives top-level information about how the follow-on Ares I Crew Launch Vehicle (CLV) is being designed for improved safety and reliability, coupled with reduced operations costs. These methods include carefully developing operational requirements; conducting operability design and analysis; using the latest information technology tools to design and simulate the vehicle; and developing a learning culture across the workforce to ensure a smooth transition between Space Shuttle operations and Ares vehicle development.
Dynamic Programming for Structured Continuous Markov Decision Problems
NASA Technical Reports Server (NTRS)
Dearden, Richard; Meuleau, Nicholas; Washington, Richard; Feng, Zhengzhu
2004-01-01
We describe an approach for exploiting structure in Markov Decision Processes with continuous state variables. At each step of the dynamic programming, the state space is dynamically partitioned into regions where the value function is the same throughout the region. We first describe the algorithm for piecewise constant representations. We then extend it to piecewise linear representations, using techniques from POMDPs to represent and reason about linear surfaces efficiently. We show that for complex, structured problems, our approach exploits the natural structure so that optimal solutions can be computed efficiently.
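The piecewise-constant case can be sketched in one continuous dimension. Assuming (purely for illustration, not from the paper) a deterministic "stay or move right" action set and a reward of 1 on a goal region, the value function is kept as a list of intervals, each Bellman backup introduces only the breakpoints forced by the actions and the reward, and adjacent intervals with equal value are merged afterwards.

```python
import bisect

# Value function over [0, 1] as breakpoints b[0] = 0.0 < b[1] < ... with
# value v[i] on [b[i], b[i+1]) (last piece extends to 1.0 inclusive).

def evaluate(b, v, s):
    return v[min(bisect.bisect_right(b, s) - 1, len(v) - 1)]

def backup(b, v, actions, reward_b, reward_v, gamma):
    """One Bellman backup V'(s) = R(s) + gamma * max_a V(clip(s + a)).
    Candidate breakpoints: the reward breakpoints plus the current value
    breakpoints shifted back by each action."""
    pts = set(reward_b) | set(b)
    for a in actions:
        pts |= {max(0.0, x - a) for x in b}
    pts = sorted(p for p in pts if 0.0 <= p < 1.0)
    new_v = []
    for i, p in enumerate(pts):
        mid = (p + (pts[i + 1] if i + 1 < len(pts) else 1.0)) / 2
        best = max(evaluate(b, v, min(1.0, mid + a)) for a in actions)
        new_v.append(evaluate(reward_b, reward_v, mid) + gamma * best)
    # merge adjacent pieces with equal value
    mb, mv = [pts[0]], [new_v[0]]
    for p, val in zip(pts[1:], new_v[1:]):
        if val != mv[-1]:
            mb.append(p); mv.append(val)
    return mb, mv

# Reward 1 on the goal region [0.8, 1]; actions: stay, or move +0.2.
reward_b, reward_v = [0.0, 0.8], [0.0, 1.0]
b, v = [0.0], [0.0]
for _ in range(2):
    b, v = backup(b, v, actions=[0.0, 0.2], reward_b=reward_b,
                  reward_v=reward_v, gamma=0.9)
print(b, v)
```

After two backups the partition has only three pieces (value 0, then 0.9 one step from the goal, then 1.9 inside it): the representation grows with the structure of the value function, not with any fixed discretization, which is the point of the approach.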
Limitations to the study of man in the United States space program
NASA Technical Reports Server (NTRS)
Bishop, Phillip A.; Greenisen, Mike
1992-01-01
Research on humans conducted during space flight is fraught both with great opportunities and great obstacles. The purpose of this paper is to review some of the limitations to United States research in space in the hope that an informed scientific community may lead to more rapid and efficient solution of these problems. Limitations arise because opportunities to study the same astronauts in well-controlled situations on repeated space flights are practically non-existent. Human research opportunities are further limited by the necessity of avoiding simultaneous mutually interfering experiments. Environmental factors, including diet and other physiological perturbations concomitant with space flight, also complicate research design and interpretation. Technical limitations to research methods and opportunities further restrict the development of the knowledge base. Finally, earth analogues of space travel all suffer from inadequacies. Though all of these obstacles will eventually be overcome, creativity, diligence, and persistence are required to further our knowledge of humans in space.
Space use optimisation and sustainability-environmental assessment of space use concepts.
van den Dobbelsteen, Andy; de Wilde, Sebastiaan
2004-11-01
In this paper, as part of a diptych, we discuss the factor space as a means of improving the environmental performance of building projects. There are indicators for space use efficiency and several more or less broadly supported methods for assessment of environmental issues such as ecological quality, use of building materials and energy consumption. These are discussed in this paper. Assessment methods coupling space use to environmental indicators had not been available until now. Beforehand, plans with different spatial properties could therefore not be environmentally compared. We present a method for the implementation of space use in assessments concerning sustainability. This method was applied to the urban case study presented in our second paper in this journal. In this paper, we also present solutions for improved environmental performance through intensive and multiple use of space in the second, third and fourth dimension.
Mixed Integer Programming and Heuristic Scheduling for Space Communication
NASA Technical Reports Server (NTRS)
Lee, Charles H.; Cheung, Kar-Ming
2013-01-01
An optimal planning and scheduling approach was created for a communication network in which the nodes communicate at the highest possible rates while meeting the mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, which was then solved using heuristic optimization. The communication network consists of space and ground assets with the link dynamics between any two assets varying with respect to time, distance, and telecom configurations. One asset could be communicating with another at very high data rates at one time, and at other times, communication is impossible, as the asset could be inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the dimension of the traditional method over the proposed formulation is approximately an order N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing a special penalty function, the MIP problem with non-differentiable cost function and nonlinear constraints can be converted into a continuous variable problem, whose solution is possible.
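The penalty idea can be illustrated on a toy link-selection problem (not the paper's formulation; the rates, weights, and step size below are invented for illustration). Binary variables x_i in {0, 1} choosing one of several link configurations are relaxed to [0, 1], and a term mu * sum x_i(1 - x_i), which vanishes exactly at binary points, plus a quadratic penalty on the "pick one" constraint turn the MIP into a smooth problem that plain projected gradient descent can handle.

```python
import numpy as np

def solve_penalized(rates, mu=2.0, lam=10.0, step=0.01, iters=5000):
    """Pick the single configuration maximizing data rate by minimizing
    J(x) = -rates.x + mu*sum(x*(1-x)) + lam*(sum(x)-1)^2
    over the box [0,1]^n with projected gradient descent."""
    x = np.full(len(rates), 1.0 / len(rates))
    for _ in range(iters):
        grad = -rates + mu * (1 - 2 * x) + 2 * lam * (x.sum() - 1)
        x = np.clip(x - step * grad, 0.0, 1.0)
    return x

rates = np.array([1.0, 3.0, 2.0])   # achievable rate per configuration
x = solve_penalized(rates)
print(x.round(3))                   # one-hot on the best configuration
```

The x(1-x) term is concave, so the relaxed problem is nonconvex; the interior stationary point is a saddle, and the descent iterates slide off it toward a binary corner, here the configuration with the highest rate. Real schedulers would pair this with the heuristic search the abstract mentions to escape poor local minima.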
Fuel Injector Design Optimization for an Annular Scramjet Geometry
NASA Technical Reports Server (NTRS)
Steffen, Christopher J., Jr.
2003-01-01
A four-parameter, three-level, central composite experiment design has been used to optimize the configuration of an annular scramjet injector geometry using computational fluid dynamics. The computational fluid dynamic solutions played the role of computer experiments, and response surface methodology was used to capture the simulation results for mixing efficiency and total pressure recovery within the scramjet flowpath. An optimization procedure, based upon the response surface results of mixing efficiency, was used to compare the optimal design configuration against the target efficiency value of 92.5%. The results of three different optimization procedures are presented and all point to the need to look outside the current design space for different injector geometries that can meet or exceed the stated mixing efficiency target.
Trajectory design strategies that incorporate invariant manifolds and swingby
NASA Technical Reports Server (NTRS)
Guzman, J. J.; Cooley, D. S.; Howell, K. C.; Folta, D. C.
1998-01-01
Libration point orbits serve as excellent platforms for scientific investigations involving the Sun as well as planetary environments. Trajectory design in support of such missions is increasingly challenging as more complex missions are envisioned in the next few decades. Software tools for trajectory design in this regime must be further developed to incorporate better understanding of the solution space and, thus, improve the efficiency and expand the capabilities of current approaches. Only recently applied to trajectory design, dynamical systems theory now offers new insights into the natural dynamics associated with the multi-body problem. The goal of this effort is the blending of analysis from dynamical systems theory with the well established NASA Goddard software program SWINGBY to enhance and expand the capabilities for mission design. Basic knowledge concerning the solution space is improved as well.
HyPlane for Space Tourism and Business Transportation
NASA Astrophysics Data System (ADS)
Savino, R.
In the present work a preliminary study on a small hypersonic airplane for a long duration space tourism mission is presented. It is also consistent with a point-to-point medium range (5000-6000 km) hypersonic trip, in the frame of the "urgent business travel" market segment. The main idea is to transfer technological solutions developed for aeronautical and space atmospheric re-entry systems to the design of such a hypersonic airplane. A winged vehicle characterized by high aerodynamic efficiency and able to manoeuvre along the flight path, in all aerodynamic regimes encountered, is taken into consideration. Rocket-Based Combined Cycle and Turbine-Based Combined Cycle engines are investigated to ensure higher performances in terms of flight duration and range. Different flight-paths are also considered, including sub-orbital parabolic trajectories and steady state hypersonic cruise. The former, in particular, takes advantage of the high aerodynamic efficiency during the unpowered phase, in combination with a periodic engine actuation, to guarantee a long duration oscillating flight path. These trajectories offer Space tourists the opportunity of extended missions, characterized by repeated periods of low-gravity at altitudes high enough to ensure a wide view of the Earth from Space.
NASA Astrophysics Data System (ADS)
Lv, Chao; Zheng, Lianqing; Yang, Wei
2012-01-01
Molecular dynamics sampling can be enhanced via the promoting of potential energy fluctuations, for instance, based on a Hamiltonian modified with the addition of a potential-energy-dependent biasing term. To overcome the diffusion sampling issue, which reveals the fact that enlargement of event-irrelevant energy fluctuations may abolish sampling efficiency, the essential energy space random walk (EESRW) approach was proposed earlier. To more effectively accelerate the sampling of solute conformations in aqueous environment, in the current work, we generalized the EESRW method to a two-dimension-EESRW (2D-EESRW) strategy. Specifically, the essential internal energy component of a focused region and the essential interaction energy component between the focused region and the environmental region are employed to define the two-dimensional essential energy space. This proposal is motivated by the general observation that in different conformational events, the two essential energy components have distinctive interplays. Model studies on the alanine dipeptide and the aspartate-arginine peptide demonstrate sampling improvement over the original one-dimension-EESRW strategy; with the same biasing level, the present generalization allows more effective acceleration of the sampling of conformational transitions in aqueous solution. The 2D-EESRW generalization is readily extended to higher dimension schemes and employed in more advanced enhanced-sampling schemes, such as the recent orthogonal space random walk method.
Contaminants in ventilated filling boxes
NASA Astrophysics Data System (ADS)
Bolster, D. T.; Linden, P. F.
While energy efficiency is important, the adoption of energy-efficient ventilation systems still requires the provision of acceptable indoor air quality. Many low-energy systems, such as displacement or natural ventilation, rely on temperature stratification within the interior environment, always extracting the warmest air from the top of the room. Understanding buoyancy-driven convection in a confined ventilated space is key to understanding the flow that develops with many of these modern low-energy ventilation schemes. In this work we study the transport of an initially uniformly distributed passive contaminant in a displacement-ventilated space. Representing a heat source as an ideal source of buoyancy, analytical and numerical models are developed that allow us to compare the average efficiency of contaminant removal between traditional mixing and modern low-energy systems. A set of small-scale analogue laboratory experiments was also conducted to further validate our analytical and numerical solutions. We find that on average traditional and low-energy ventilation methods are similar with regard to pollutant flushing efficiency. This is because the concentration being extracted from the system at any given time is approximately the same for both systems. However, very different vertical concentration gradients exist. For the low-energy system, a peak in contaminant concentration occurs at the temperature interface that is established within the space. This interface is typically designed to sit at some intermediate height in the space. Since this peak does not coincide with the extraction point, displacement ventilation does not offer the same benefits for pollutant flushing as it does for buoyancy removal.
On computing the global time-optimal motions of robotic manipulators in the presence of obstacles
NASA Technical Reports Server (NTRS)
Shiller, Zvi; Dubowsky, Steven
1991-01-01
A method for computing the time-optimal motions of robotic manipulators is presented that considers the nonlinear manipulator dynamics, actuator constraints, joint limits, and obstacles. The optimization problem is reduced to a search for the time-optimal path in the n-dimensional position space. A small set of near-optimal paths is first efficiently selected from a grid, using a branch and bound search and a series of lower bound estimates on the traveling time along a given path. These paths are further optimized with a local path optimization to yield the global optimal solution. Obstacles are considered by eliminating the collision points from the tessellated space and by adding a penalty function to the motion time in the local optimization. The computational efficiency of the method stems from the reduced dimensionality of the searched space and from combining the grid search with a local optimization. The method is demonstrated in several examples for two- and six-degree-of-freedom manipulators with obstacles.
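The grid search with lower-bound pruning can be sketched as a best-first branch and bound on a tessellated position space. Partial paths are expanded in order of (elapsed time + lower bound), where the bound here is the Euclidean distance to the goal divided by a maximum speed, and any branch whose bound cannot beat the incumbent is pruned. The 2-D grid, unit traversal times, and obstacle wall are illustrative stand-ins for the manipulator's position space and dynamics-based time bounds.

```python
import heapq
import math

def time_optimal_path(n, obstacles, start, goal, vmax=1.0):
    """Best-first branch and bound on an n x n grid with unit-time moves.
    Obstacle cells are removed from the tessellation; the lower bound on
    remaining travel time is Euclidean distance / vmax (admissible)."""
    def bound(c):
        return math.hypot(goal[0] - c[0], goal[1] - c[1]) / vmax
    best = math.inf
    heap = [(bound(start), 0.0, start)]
    seen = {}
    while heap:
        est, t, c = heapq.heappop(heap)
        if est >= best:              # bound cannot beat incumbent: prune all
            break
        if c == goal:
            best = t
            continue
        if seen.get(c, math.inf) <= t:
            continue
        seen[c] = t
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (c[0] + dx, c[1] + dy)
            if 0 <= nxt[0] < n and 0 <= nxt[1] < n and nxt not in obstacles:
                nt = t + 1.0         # unit traversal time per cell
                if nt + bound(nxt) < best:
                    heapq.heappush(heap, (nt + bound(nxt), nt, nxt))
    return best

# A wall with a gap forces the search around it.
obstacles = {(1, 1), (1, 2), (1, 3)}
print(time_optimal_path(4, obstacles, start=(0, 0), goal=(3, 3)))
```

In the paper's setting the bound comes from manipulator dynamics rather than geometry, and the surviving near-optimal grid paths are then handed to the local path optimizer; the pruning mechanics are the same.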
Deconvolution of mixing time series on a graph
Blocker, Alexander W.; Airoldi, Edoardo M.
2013-01-01
In many applications we are interested in making inference on latent time series from indirect measurements, which are often low-dimensional projections resulting from mixing or aggregation. Positron emission tomography, super-resolution, and network traffic monitoring are some examples. Inference in such settings requires solving a sequence of ill-posed inverse problems, yt = Axt, where the projection mechanism provides information on A. We consider problems in which A specifies mixing on a graph of time series that are bursty and sparse. We develop a multilevel state-space model for mixing time series and an efficient approach to inference. A simple model is used to calibrate regularization parameters that lead to efficient inference in the multilevel state-space model. We apply this method to the problem of estimating point-to-point traffic flows on a network from aggregate measurements. Our solution outperforms existing methods for this problem, and our two-stage approach suggests an efficient inference strategy for multilevel models of multivariate time series. PMID:25309135
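The core difficulty, solving the ill-posed y_t = A x_t at each time step, can be sketched with a ridge-penalized least squares on a toy routing matrix (the network, flows, and penalty weight below are illustrative; the paper's actual approach uses a calibrated multilevel state-space model rather than a plain ridge penalty).

```python
import numpy as np

def ridge_deconvolve(A, y, lam=1e-6):
    """Regularized solution of the ill-posed y = A x:
    x_hat = argmin ||y - A x||^2 + lam ||x||^2, via the normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

# Toy network tomography: 3 link measurements aggregate 4 origin-destination
# flows (A is the routing matrix; underdetermined: 3 equations, 4 unknowns).
A = np.array([[1., 1., 0., 0.],   # link 1 carries flows 1 and 2
              [0., 1., 1., 0.],   # link 2 carries flows 2 and 3
              [0., 0., 1., 1.]])  # link 3 carries flows 3 and 4
x_true = np.array([5., 0., 3., 1.])   # bursty, sparse flows
y = A @ x_true
x_hat = ridge_deconvolve(A, y)
print(np.round(A @ x_hat - y, 6))     # the aggregates are reproduced
```

Because the system is underdetermined, the ridge solution reproduces the aggregate measurements but need not equal the true flows; that ambiguity is exactly what the temporal model and sparsity structure in the paper are brought in to resolve.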
Coupled Neutron Transport for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.
2009-01-01
Exposure estimates inside space vehicles, surface habitats, and high-altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS, FLUKA, and MCNPX, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light particle transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
Real-space observation of unbalanced charge distribution inside a perovskite-sensitized solar cell.
Bergmann, Victor W; Weber, Stefan A L; Javier Ramos, F; Nazeeruddin, Mohammad Khaja; Grätzel, Michael; Li, Dan; Domanski, Anna L; Lieberwirth, Ingo; Ahmad, Shahzada; Berger, Rüdiger
2014-09-22
Perovskite-sensitized solar cells have reached power conversion efficiencies comparable to commercially available solar cells used for example in solar farms. In contrast to silicon solar cells, perovskite-sensitized solar cells can be made by solution processes from inexpensive materials. The power conversion efficiency of these cells depends substantially on the charge transfer at interfaces. Here we use Kelvin probe force microscopy to study the real-space cross-sectional distribution of the internal potential within high efficiency mesoscopic methylammonium lead tri-iodide solar cells. We show that the electric field is homogeneous through these devices, similar to that of a p-i-n type junction. On illumination under short-circuit conditions, holes accumulate in front of the hole-transport layer as a consequence of unbalanced charge transport in the device. After light illumination, we find that trapped charges remain inside the active device layers. Removing these traps and the unbalanced charge injection could enable further improvements in performance of perovskite-sensitized solar cells.
Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics
NASA Astrophysics Data System (ADS)
Kordy, Michal Adam
The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed for low-frequency. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix for low frequencies makes the numerical solution unstable in the air. We show that the proper solution may be obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using discrete Helmholtz decomposition. We parallelize the forward code on a multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method by considering a number of synthetic inversions and we apply it to real data collected in the Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Pade approximation through model order reduction and rational Krylov subspace. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared.
We prove a theorem of almost-always lucky failure for the case of a right-hand side that depends analytically on frequency. The operator's null space is treated by decomposing the solution into its component in the null space and the component orthogonal to it.
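The null-space correction described above, which removes the gradient (curl-free) component of the field by solving a Poisson equation, can be illustrated with a discrete Helmholtz projection. The sketch below is a generic spectral version on a periodic 2D grid, not the dissertation's finite-element implementation; the function name and grid setup are illustrative.

```python
import numpy as np

def helmholtz_project(Ex, Ey, h=1.0):
    """Remove the gradient (null-space-of-curl) part of a periodic 2D
    vector field by solving a Poisson equation in Fourier space."""
    ny, nx = Ex.shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=h)   # spectral d/dx operator
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=h)   # spectral d/dy operator
    KX, KY = np.meshgrid(kx, ky)
    Ex_h, Ey_h = np.fft.fft2(Ex), np.fft.fft2(Ey)
    div_h = KX * Ex_h + KY * Ey_h               # spectral divergence of E
    k2 = KX**2 + KY**2
    k2[0, 0] = 1.0                              # avoid divide-by-zero at the DC mode
    phi_h = div_h / k2                          # Poisson solve: laplace(phi) = div E
    phi_h[0, 0] = 0.0
    Ex_h -= KX * phi_h                          # subtract grad(phi)
    Ey_h -= KY * phi_h
    return np.fft.ifft2(Ex_h).real, np.fft.ifft2(Ey_h).real
```

For a band-limited field the projection is exact: a pure curl (divergence-free) component passes through unchanged, while a pure gradient component is removed entirely.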
A Distributed Wireless Camera System for the Management of Parking Spaces.
Vítek, Stanislav; Melničuk, Petr
2017-12-28
The importance of detecting parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine occupancy of a parking space based on information from multiple cameras. The proposed system uses small camera modules based on the Raspberry Pi Zero and a computationally efficient occupancy-detection algorithm based on the histogram of oriented gradients (HOG) feature descriptor and a support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at a rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. Reliability of the implemented algorithm is evaluated on three different test sets which altogether contain over 700,000 samples of parking spaces.
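As a sketch of the feature pipeline (not the paper's implementation, which pairs a full HOG descriptor with an SVM classifier), a minimal orientation-histogram descriptor can be written in a few lines of NumPy; the cell size, bin count, and the omission of block normalization are simplifications:

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Minimal HOG-style descriptor: per-cell histograms of gradient
    orientation, weighted by gradient magnitude (no block normalization)."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0      # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist)
    v = np.concatenate(feats)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

A linear SVM would then be trained on such descriptors computed from labeled occupied/empty parking-space patches.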
NASA Astrophysics Data System (ADS)
Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.
2017-12-01
We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton shooting method in that integration of the state transition matrix (36 additional differential equations) is not required; instead, it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary-value problems with the method of particular solutions; however, we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster than the classical shooting method with a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique, and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm-start our perturbed algorithm.
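The path-approximation idea — updating the whole trajectory at Chebyshev nodes in each Picard sweep rather than stepping through time — can be sketched for a scalar ODE as follows. This is a toy version under stated assumptions (scalar state, fixed iteration count, NumPy's Chebyshev utilities); the flight code handles vector states, perturbed dynamics, and a variable-fidelity force model:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def picard_chebyshev(f, x0, t0, t1, deg=32, iters=40):
    """Picard iteration with a Chebyshev path approximation (MCPI sketch):
    the whole trajectory is updated at once at the Chebyshev nodes."""
    tau = np.cos(np.pi * np.arange(deg + 1) / deg)   # CGL nodes on [-1, 1]
    t = 0.5 * (t1 - t0) * (tau + 1.0) + t0           # mapped to [t0, t1]
    x = np.full_like(t, x0, dtype=float)             # initial guess: constant path
    for _ in range(iters):
        coef = C.chebfit(tau, f(t, x), deg)          # fit f along the current path
        icoef = C.chebint(coef) * 0.5 * (t1 - t0)    # antiderivative, dt/dtau scaling
        x = x0 + C.chebval(tau, icoef) - C.chebval(-1.0, icoef)
    return t, x
```

Each sweep replaces the path with x0 plus the integral of the dynamics along the previous path, so for a contractive interval the nodes converge to fixed points, mirroring the behavior the abstract describes.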
Analysis of the coupling efficiency of a tapered space receiver with a calculus mathematical model
NASA Astrophysics Data System (ADS)
Hu, Qinggui; Mu, Yining
2018-03-01
We establish a calculus-based mathematical model to study the coupling characteristics of tapered optical fibers in a space communications system and obtain the coupling efficiency equation, which we then solve numerically using MATLAB. A sample was then fabricated by the mature flame-brush technique, the experiment was performed, and the results were in accordance with the theoretical analysis. This confirms the theoretical analysis and indicates that a tapered structure can improve tolerance to misalignment. Project supported by the National Natural Science Foundation of China (grant no. 61275080); the 2017 Jilin Province Science and Technology Development Plan, Science and Technology Innovation Fund for Small and Medium Enterprises (20170308029HJ); and the 'Thirteenth Five-Year' science and technology research project of the Department of Education of Jilin Province, 2016 (16JK009).
Development of a Rotating Rake Array for Boundary-Layer-Ingesting Fan-Stage Measurements
NASA Technical Reports Server (NTRS)
Wolter, John D.; Arend, David J.; Hirt, Stefanie M.; Gazzaniga, John A.
2017-01-01
The recent Boundary-Layer-Ingesting Inlet/Distortion Tolerant Fan wind tunnel experiment at NASA Glenn Research Center's 8- by 6-foot Supersonic Wind Tunnel (SWT) examined the performance of a novel inlet and fan stage that was designed to ingest the vehicle boundary layer in order to take advantage of a predicted overall propulsive efficiency benefit. A key piece of the experiment's instrumentation was a pair of rotating rake arrays located upstream and downstream of the fan stage. This paper examines the development of these rake arrays. Pre-test numerical solutions were sampled to determine placement and spacing for rake pressure and temperature probes. The effects of probe spacing and survey density on the repeatability of survey measurements were examined. These data were then used to estimate measurement uncertainty for the adiabatic efficiency.
Saha, S. K.; Dutta, R.; Choudhury, R.; Kar, R.; Mandal, D.; Ghoshal, S. P.
2013-01-01
In this paper, opposition-based harmony search has been applied to the optimal design of linear phase FIR filters. A real-coded genetic algorithm (RGA), particle swarm optimization (PSO), and differential evolution (DE) have also been adopted for the sake of comparison. The original harmony search algorithm is chosen as the parent one, and the opposition-based approach is applied: during initialization, a randomly generated population of solutions is chosen, the corresponding opposite solutions are also considered, and the fitter of each pair is selected as an a priori guess. In harmony memory, each such solution passes through the memory consideration rule, the pitch adjustment rule, and then opposition-based reinitialization generation jumping, which gives the optimum result corresponding to the least error fitness in the multidimensional search space of FIR filter design. Incorporation of different control parameters in the basic HS algorithm balances exploration and exploitation of the search space. Low-pass, high-pass, band-pass, and band-stop FIR filters are designed with the proposed OHS and the other aforementioned algorithms individually for comparison of optimization performance. A comparison of simulation results reveals the optimization efficacy of the OHS over the other optimization techniques for the solution of the multimodal, nondifferentiable, nonlinear, and constrained FIR filter design problems. PMID:23844390
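The opposition-based initialization step can be sketched as follows. This is a generic minimization version under illustrative assumptions (box bounds, NumPy vectorization), not the exact OHS code:

```python
import numpy as np

def opposition_init(fitness, lb, ub, pop_size, rng=None):
    """Opposition-based initialization (the OHS a-priori-guess step):
    keep the fitter of each random solution and its opposite."""
    rng = np.random.default_rng(rng)
    pop = rng.uniform(lb, ub, size=(pop_size, len(lb)))
    opp = lb + ub - pop                          # opposite point in the box
    f_pop = np.apply_along_axis(fitness, 1, pop)
    f_opp = np.apply_along_axis(fitness, 1, opp)
    keep = f_pop <= f_opp                        # minimization: lower is fitter
    return np.where(keep[:, None], pop, opp)
```

The same opposite-point construction reappears later in the generation-jumping step, applied to the current harmony memory rather than to a fresh random population.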
Shah, A A; Xing, W W; Triantafyllidis, V
2017-04-01
In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.
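The POD step itself reduces to a singular value decomposition of the snapshot matrix; a minimal sketch is below (the paper's actual contribution — emulating the bases and nonlinearities for new parameter values via Bayesian regression and manifold learning — is not shown, and the energy threshold is an illustrative truncation rule):

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Proper orthogonal decomposition of a snapshot matrix (columns are
    solution snapshots): return the dominant left singular vectors."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    cum = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(cum, energy)) + 1   # smallest rank capturing `energy`
    return U[:, :r], s[:r]
```

A reduced-order model then projects the governing equations onto the span of the returned basis, so the online solve runs in r dimensions instead of the full state dimension.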
NASA Astrophysics Data System (ADS)
Rolla, L. Barrera; Rice, H. J.
2006-09-01
In this paper a "forward-advancing" field discretization method suitable for solving the Helmholtz equation in large-scale problems is proposed. The forward wave expansion method (FWEM) is derived from a highly efficient discretization procedure based on interpolation of wave functions known as the wave expansion method (WEM). The FWEM computes the propagated sound field by means of an exclusively forward-advancing solution, neglecting the backscattered field. It is thus analogous to methods such as the (one-way) parabolic equation method (PEM), usually discretized using standard finite difference or finite element methods. These techniques do not require the inversion of large system matrices and thus enable the solution of large-scale acoustic problems where backscatter is not of interest. Calculations using the FWEM are presented for two propagation problems; comparisons with analytical and theoretical solutions show this forward approximation to be highly accurate. Examples of sound propagation over a screen in upwind and downwind refracting atmospheric conditions at low nodal spacings (0.2 per wavelength in the propagation direction) are also included to demonstrate the flexibility and efficiency of the method.
Advanced multiple access concepts in mobile satellite systems
NASA Technical Reports Server (NTRS)
Ananasso, Fulvio
1990-01-01
Some multiple access strategies for Mobile Satellite Systems (MSS) are discussed. These strategies were investigated in the context of three separate studies conducted for the International Maritime Satellite Organization (INMARSAT) and the European Space Agency (ESA). Satellite-Switched Frequency Division Multiple Access (SS-FDMA), Code Division Multiple Access (CDMA), and Frequency-Addressable Beam architectures are addressed, discussing both system and technology aspects and outlining the advantages and drawbacks of each solution together with the relevant hardware issues. An attempt is made to compare the considered options from the standpoint of user terminal/space segment complexity, synchronization requirements, spectral efficiency, and interference rejection.
Ground based experiments on the growth and characterization of L-Arginine Phosphate (LAP) crystals
NASA Technical Reports Server (NTRS)
Rao, S. M.; Cao, C.; Batra, A. K.; Lal, R. B.; Mookherji, T. K.
1991-01-01
L-Arginine Phosphate (LAP) is a new nonlinear optical material with higher efficiency for harmonic generation compared to KDP. Crystals of LAP were grown in the laboratory from supersaturated solutions by the temperature-lowering technique. Investigations revealed the presence of large dislocation densities inside the crystals, which are observed to produce refractive index changes causing damage at high laser powers. This is a result of convection during crystal growth from supersaturated solutions. It is proposed to grow these crystals under diffusion-controlled growth conditions in a microgravity environment and compare the crystals grown in space with those grown on the ground. Physical properties of the solutions needed for modelling of crystal growth are also presented.
NASA Astrophysics Data System (ADS)
Berselli, Luigi C.; Spirito, Stefano
2018-06-01
Obtaining reliable numerical simulations of turbulent fluids is a challenging problem in computational fluid mechanics. The large eddy simulation (LES) models are efficient tools to approximate turbulent fluids, and an important step in the validation of these models is the ability to reproduce relevant properties of the flow. In this paper, we consider a fully discrete approximation of the Navier-Stokes-Voigt model by an implicit Euler algorithm (with respect to the time variable) and a Fourier-Galerkin method (in the space variables). We prove the convergence to weak solutions of the incompressible Navier-Stokes equations satisfying the natural local entropy condition, hence selecting the so-called physically relevant solutions.
Efficient Characterization of Parametric Uncertainty of Complex (Bio)chemical Networks.
Schillings, Claudia; Sunnåker, Mikael; Stelling, Jörg; Schwab, Christoph
2015-08-01
Parametric uncertainty is a particularly challenging and relevant aspect of systems analysis in domains such as systems biology where, both for inference and for assessing prediction uncertainties, it is essential to characterize the system behavior globally in the parameter space. However, current methods based on local approximations or on Monte-Carlo sampling cope only insufficiently with high-dimensional parameter spaces associated with complex network models. Here, we propose an alternative deterministic methodology that relies on sparse polynomial approximations: a computational interpolation scheme which identifies the most significant expansion coefficients adaptively. We present its performance on kinetic model equations from computational systems biology with several hundred parameters and state variables, leading to numerical approximations of the parametric solution on the entire parameter space. The scheme is based on adaptive Smolyak interpolation of the parametric solution at judiciously and adaptively chosen points in parameter space. Like Monte-Carlo sampling, it is "non-intrusive" and well-suited for massively parallel implementation, but it affords higher convergence rates. This opens up new avenues for large-scale dynamic network analysis by enabling scaling for many applications, including parameter estimation, uncertainty quantification, and systems design.
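A drastically simplified, two-parameter stand-in for the idea — approximate the parametric dependence by a polynomial expansion and keep only the significant coefficients — is sketched below. The actual method uses adaptive Smolyak interpolation in hundreds of dimensions; here a dense tensor Chebyshev fit is thresholded after the fact, and all names and tolerances are illustrative:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def sparse_surrogate(f, deg=6, tol=1e-8, n=12):
    """Tensor Chebyshev fit of f on [-1,1]^2, thresholded to a sparse set
    of significant coefficients (toy stand-in for adaptive sparse schemes)."""
    pts = np.cos(np.pi * (np.arange(n) + 0.5) / n)    # Chebyshev points
    X, Y = np.meshgrid(pts, pts, indexing="ij")
    F = f(X, Y)                                       # samples of the parametric map
    V = C.chebvander(pts, deg)                        # (n, deg+1) design matrix
    A = np.linalg.lstsq(V, F, rcond=None)[0]          # fit along the first parameter
    coef = np.linalg.lstsq(V, A.T, rcond=None)[0].T   # fit along the second
    coef[np.abs(coef) < tol] = 0.0                    # keep significant terms only
    return coef                                       # evaluate via C.chebval2d
```

The payoff of sparsity is that the surrogate can then be evaluated anywhere in parameter space at the cost of the few retained terms, which is what makes global analyses (estimation, uncertainty quantification) tractable.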
Classical space-times from the S-matrix
NASA Astrophysics Data System (ADS)
Neill, Duff; Rothstein, Ira Z.
2013-12-01
We show that classical space-times can be derived directly from the S-matrix for a theory of massive particles coupled to a massless spin-two particle. As an explicit example we derive the Schwarzschild space-time as a series in GN. At no point of the derivation is any use made of the Einstein-Hilbert action or the Einstein equations. The intermediate steps involve only on-shell S-matrix elements which are generated via BCFW recursion relations and unitarity sewing techniques. The notion of a space-time metric is only introduced at the end of the calculation, where it is extracted by matching the potential determined by the S-matrix to the geodesic motion of a test particle. Other static space-times such as Kerr follow in a similar manner. Furthermore, given that the procedure is action independent and depends only upon the choice of the representation of the little group, solutions to Yang-Mills (YM) theory can be generated in the same fashion. Moreover, the squaring relation between the YM and gravity three-point functions shows that the seeds that generate solutions in the two theories are algebraically related. From a technical standpoint, our methodology can also be utilized to calculate quantities relevant to the binary inspiral problem more efficiently than the more traditional Feynman diagram approach.
Electrokinetic remediation prefield test methods
NASA Technical Reports Server (NTRS)
Hodko, Dalibor (Inventor)
2000-01-01
Methods are disclosed for determining the parameters critical in designing an electrokinetic soil remediation process, including electrode well spacing, operating current/voltage, electroosmotic flow rate, electrode well wall design, and the amount of buffering or neutralizing solution needed in the electrode wells at operating conditions. These methods are preferably performed prior to initiating a full-scale electrokinetic remediation process in order to obtain efficient remediation of the contaminants.
Approaches and possible improvements in the area of multibody dynamics modeling
NASA Technical Reports Server (NTRS)
Lips, K. W.; Singh, R.
1987-01-01
A wide-ranging look is taken at issues involved in the dynamic modeling of complex, multibodied orbiting space systems. Capabilities and limitations of two major codes (DISCOS, TREETOPS) are assessed and possible extensions to the CONTOPS software are outlined. In addition, recommendations are made concerning the direction future development should take in order to achieve higher-fidelity, more computationally efficient multibody software solutions.
Beaser, Eric; Schwartz, Jennifer K; Bell, Caleb B; Solomon, Edward I
2011-09-26
A Genetic Algorithm (GA) is a stochastic optimization technique based on the mechanisms of biological evolution. These algorithms have been successfully applied in many fields to solve a variety of complex nonlinear problems. While they have been used with some success in chemical problems such as fitting spectroscopic and kinetic data, many have avoided their use due to the unconstrained nature of the fitting process. In engineering, this problem is now being addressed through incorporation of adaptive penalty functions, but their transfer to other fields has been slow. This study updates the Nanakorrn Adaptive Penalty function theory, expanding its validity beyond maximization problems to minimization as well. The expanded theory, using a hybrid genetic algorithm with an adaptive penalty function, was applied to analyze variable temperature variable field magnetic circular dichroism (VTVH MCD) spectroscopic data collected on exchange coupled Fe(II)Fe(II) enzyme active sites. The data obtained are described by a complex nonlinear multimodal solution space with at least 6 to 13 interdependent variables and are costly to search efficiently. The use of the hybrid GA is shown to improve the probability of detecting the global optimum. It also provides large gains in computational and user efficiency. This method allows a full search of a multimodal solution space, greatly improving the quality and confidence in the final solution obtained, and can be applied to other complex systems such as fitting of other spectroscopic or kinetics data.
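The adaptive-penalty idea — scaling the constraint penalty by the population's current fitness statistics so the constraint pressure adapts as the search progresses — can be sketched for minimization. This is an illustrative form, not the exact Nanakorrn penalty function used in the study:

```python
import numpy as np

def adaptive_penalty_fitness(pop, objective, violation):
    """Adaptive penalty for a GA (minimization sketch): the penalty weight
    tracks the average objective of the currently feasible solutions, so
    pressure against infeasibility adapts as the population improves."""
    f = np.array([objective(x) for x in pop])
    v = np.array([violation(x) for x in pop])    # 0 when feasible, >0 otherwise
    feasible = v <= 0.0
    f_avg = f[feasible].mean() if feasible.any() else f.mean()
    return np.where(feasible, f, f + abs(f_avg) * v)
```

Because the weight is derived from the population itself, no hand-tuned penalty constant is needed, which is the property that makes such GAs usable as unattended fitting engines for spectroscopic data.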
Building Technologies Program. 1994 annual report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Selkowitz, S.E.
1995-04-01
The objective of the Building Technologies program is to assist the U.S. building industry in achieving substantial reductions in building-sector energy use and associated greenhouse gas emissions while improving comfort, amenity, health, and productivity in the building sector. We have focused our past efforts on two major building systems, windows and lighting, and on the simulation tools needed by researchers and designers to integrate the full range of energy efficiency solutions into achievable, cost-effective designs for new and existing buildings. In addition, we are now taking more of an integrated systems and life-cycle perspective to create cost-effective solutions for more energy-efficient, comfortable, and productive work and living environments. More than 30% of all energy use in buildings is attributable to two sources, windows and lighting, which together account for annual consumer energy expenditures of more than $50 billion. Each affects not only energy use by other major building systems, but also comfort and productivity, factors that influence building economics far more than does direct energy consumption alone. Windows play a unique role in the building envelope, physically separating the conditioned space from the world outside without sacrificing vital visual contact. Throughout every space in a building, lighting systems facilitate a variety of tasks associated with a wide range of visual requirements while defining the luminous qualities of the indoor environment. Window and lighting systems are thus essential components of any comprehensive building science program.
Analytic solution of magnetic induction distribution of ideal hollow spherical field sources
NASA Astrophysics Data System (ADS)
Xu, Xiaonong; Lu, Dingwei; Xu, Xibin; Yu, Yang; Gu, Min
2017-12-01
The Halbach-type hollow spherical permanent magnet arrays (HSPMA) are volume-compacted, energy-efficient field sources capable of producing a multi-Tesla field in the cavity of the array, which has attracted intense interest in many practical applications. Here, we present analytical solutions for the magnetic induction of the ideal HSPMA in the entire space: outside the array, within its cavity, and in the interior of the magnet. We obtain the solutions using the concept of magnetic charge to solve Poisson's and Laplace's equations for the HSPMA. Using these analytical field expressions inside the material, a scalar demagnetization function is defined to approximately indicate the regions of magnetization reversal, partial demagnetization, and inverse magnetic saturation. The analytical field solution provides deeper insight into the nature of the HSPMA and offers guidance in designing an optimized one.
Using Grid Cells for Navigation
Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil
2015-01-01
Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this "vector navigation" relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search and biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. PMID:26247860
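The core computational problem — recovering a displacement from grid-cell phases across modules of different spatial scales — can be sketched in 1D. The candidate-scoring decoder below is a generic stand-in (positions as phases, a brute-force likelihood over candidate displacements), not one of the paper's specific neural-network implementations:

```python
import numpy as np

def grid_phases(x, scales):
    """Encode a 1D position as grid-cell phases, one per grid module."""
    return (2 * np.pi * x / scales) % (2 * np.pi)

def decode_displacement(ph_start, ph_goal, scales, d_max, n=10000):
    """Recover the start-to-goal vector from phase differences across
    modules, in the spirit of the Fourier-shift-theorem solution: a shift
    of a periodic pattern appears as a phase offset in each module."""
    dphi = ph_goal - ph_start
    d = np.linspace(-d_max, d_max, n)               # candidate displacements
    score = np.sum(np.cos(dphi[:, None] - 2 * np.pi * d[None, :] / scales[:, None]),
                   axis=0)                          # agreement across modules
    return d[np.argmax(score)]
```

Because the module scales are incommensurate, the combined code disambiguates displacements far longer than the largest single grid scale, which is the combinatorial-capacity point the abstract makes.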
NASA Astrophysics Data System (ADS)
Izquierdo, Joaquín; Montalvo, Idel; Campbell, Enrique; Pérez-García, Rafael
2016-08-01
Selecting the most appropriate heuristic for solving a specific problem is not easy, for many reasons. This article focuses on one of these reasons: traditionally, the solution search process has operated in a given manner regardless of the specific problem being solved, and the process has been the same regardless of the size, complexity and domain of the problem. To cope with this situation, search processes should mould the search into areas of the search space that are meaningful for the problem. This article builds on previous work in the development of a multi-agent paradigm using techniques derived from knowledge discovery (data-mining techniques) on databases of so-far visited solutions. The aim is to improve the search mechanisms, increase computational efficiency and use rules to enrich the formulation of optimization problems, while reducing the search space and catering to realistic problems.
FormTracer. A mathematica tracing package using FORM
NASA Astrophysics Data System (ADS)
Cyrol, Anton K.; Mitter, Mario; Strodthoff, Nils
2017-10-01
We present FormTracer, a high-performance, general-purpose, easy-to-use Mathematica tracing package which uses FORM. It supports arbitrary space and spinor dimensions as well as an arbitrary number of simple compact Lie groups. While keeping the usability of the Mathematica interface, it relies on the efficiency of FORM. An additional performance gain is achieved by a decomposition algorithm that avoids redundant traces in the product tensor spaces. FormTracer supports a wide range of syntaxes, which endows it with high flexibility. Mathematica notebooks that automatically install the package and guide the user through performing standard traces in space-time, spinor and gauge-group spaces are provided.
Program files doi: http://dx.doi.org/10.17632/7rd29h4p3m.1
Licensing provisions: GPLv3
Programming language: Mathematica and FORM
Nature of problem: Efficiently compute traces of large expressions.
Solution method: The expression to be traced is decomposed into its subspaces by a recursive Mathematica expansion algorithm. The result is subsequently translated to a FORM script that takes the traces. After FORM is executed, the final result is either imported into Mathematica or exported as optimized C/C++/Fortran code.
Unusual features: The outstanding features of FormTracer are the simple interface, the capability to efficiently handle an arbitrary number of Lie groups in addition to Dirac and Lorentz tensors, and a customizable input syntax.
Compatible-strain mixed finite element methods for incompressible nonlinear elasticity
NASA Astrophysics Data System (ADS)
Faghih Shojaei, Mostafa; Yavari, Arash
2018-05-01
We introduce a new family of mixed finite elements for incompressible nonlinear elasticity - compatible-strain mixed finite element methods (CSFEMs). Based on a Hu-Washizu-type functional, we write a four-field mixed formulation with the displacement, the displacement gradient, the first Piola-Kirchhoff stress, and a pressure-like field as the four independent unknowns. Using the Hilbert complexes of nonlinear elasticity, which describe the kinematics and the kinetics of motion, we identify the solution spaces of the independent unknown fields. In particular, we define the displacement in H1, the displacement gradient in H (curl), the stress in H (div), and the pressure field in L2. The test spaces of the mixed formulations are chosen to be the same as the corresponding solution spaces. Next, in a conforming setting, we approximate the solution and the test spaces with some piecewise polynomial subspaces of them. Among these approximation spaces are the tensorial analogues of the Nédélec and Raviart-Thomas finite element spaces of vector fields. This approach results in compatible-strain mixed finite element methods that satisfy both the Hadamard compatibility condition and the continuity of traction at the discrete level independently of the refinement level of the mesh. By considering several numerical examples, we demonstrate that CSFEMs have a good performance for bending problems and for bodies with complex geometries. CSFEMs are capable of capturing very large strains and accurately approximating stress and pressure fields. Using CSFEMs, we do not observe any numerical artifacts, e.g., checkerboarding of pressure, hourglass instability, or locking in our numerical examples. Moreover, CSFEMs provide an efficient framework for modeling heterogeneous solids.
Efficient 3D movement-based kernel density estimator and application to wildlife ecology
Tracey-PR, Jeff; Sheppard, James K.; Lockwood, Glenn K.; Chourasia, Amit; Tatineni, Mahidhar; Fisher, Robert N.; Sinkovits, Robert S.
2014-01-01
We describe an efficient implementation of a 3D movement-based kernel density estimator for determining animal space use from discrete GPS measurements. This new method provides more accurate results, particularly for species that make large excursions in the vertical dimension. The downside of this approach is that it is much more computationally expensive than simpler, lower-dimensional models. Through a combination of code restructuring, parallelization and performance optimization, we were able to reduce the time to solution by up to a factor of 1,000, thereby greatly improving the applicability of the method.
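A fixed-bandwidth 3D Gaussian KDE — the computational kernel that the movement-based estimator generalizes — fits in a few lines of NumPy. This is a simplified stand-in, not the paper's estimator, and its O(grid points × GPS fixes) pairwise cost is exactly the kind of expense that motivates the parallelization described above:

```python
import numpy as np

def kde3d(points, grid, bandwidth=1.0):
    """Gaussian kernel density estimate in 3D, evaluated at `grid` points.
    points: (n, 3) GPS fixes; grid: (m, 3) evaluation locations."""
    d2 = np.sum((grid[:, None, :] - points[None, :, :]) ** 2, axis=-1)  # (m, n)
    norm = (2.0 * np.pi * bandwidth ** 2) ** 1.5 * len(points)
    return np.exp(-0.5 * d2 / bandwidth ** 2).sum(axis=1) / norm
```

A movement-based variant would additionally condition each kernel on the adjacent fixes along the animal's trajectory rather than treating fixes as independent samples.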
Wavelet and adaptive methods for time dependent problems and applications in aerosol dynamics
NASA Astrophysics Data System (ADS)
Guo, Qiang
Time-dependent partial differential equations (PDEs) are widely used as mathematical models of environmental problems. Aerosols are now clearly identified as an important factor in many environmental aspects of climate and radiative forcing processes, as well as in the health effects of air quality. The mathematical models for aerosol dynamics with respect to size distribution are nonlinear partial differential and integral equations, which describe processes of condensation, coagulation, and deposition. Simulating the general aerosol dynamic equations in time, particle size, and space presents serious difficulties because the size dimension ranges from a few nanometers to several micrometers, while the spatial dimension is usually measured in kilometers. It is therefore an important and challenging task to develop efficient techniques for solving time-dependent dynamic equations. In this thesis, we develop and analyze efficient wavelet and adaptive methods for the time-dependent dynamic equations on particle size and further apply them to spatial aerosol dynamic systems. A wavelet Galerkin method is proposed to solve the aerosol dynamic equations in time and particle size, since the aerosol distribution changes strongly along the size direction and wavelet techniques can resolve such behavior very efficiently. Daubechies wavelets are used because they possess useful properties such as orthogonality, compact support, and exact representation of polynomials up to a certain degree. Another problem encountered in the solution of the aerosol dynamic equations results from the hyperbolic form of the condensation growth term. We propose a new characteristic-based fully adaptive multiresolution numerical scheme for solving the aerosol dynamic equation, which combines the attractive advantages of the adaptive multiresolution technique and the method of characteristics.
On the theoretical side, the global existence and uniqueness of solutions of continuous-time wavelet numerical methods for the nonlinear aerosol dynamics are proved using Schauder's fixed point theorem and a variational technique. Optimal error estimates are derived for both continuous- and discrete-time wavelet Galerkin schemes. We further derive a reliable and efficient a posteriori error estimate based on stable multiresolution wavelet bases, together with an adaptive space-time algorithm for the efficient solution of linear parabolic differential equations. The adaptive space refinement strategies, based on the locality of the corresponding multiresolution processes, are proved to converge. Finally, we develop efficient numerical methods by combining the wavelet methods proposed in the previous parts with a splitting technique to solve the spatial aerosol dynamic equations: wavelet methods along the particle size direction and the upstream finite difference method along the spatial direction are applied alternately in each time interval. Numerical experiments demonstrate the effectiveness of the developed methods.
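The orthogonality property of Daubechies wavelets cited above can be illustrated with a minimal one-level discrete wavelet transform. This sketch hardcodes the four-tap Daubechies filter (two vanishing moments) and periodic boundary handling; it is not the thesis's Galerkin solver, only a demonstration that the transform preserves the signal's energy exactly:

```python
import numpy as np

# Four-tap Daubechies orthonormal filter pair (two vanishing moments).
s3 = np.sqrt(3.0)
h = np.array([1 + s3, 3 + s3, 3 - s3, 1 - s3]) / (4.0 * np.sqrt(2.0))
g = np.array([h[3], -h[2], h[1], -h[0]])  # quadrature mirror filter

def dwt_periodic(x, lo, hi):
    """One level of an orthonormal DWT with periodic boundary handling."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    approx = np.zeros(n // 2)
    detail = np.zeros(n // 2)
    for k in range(n // 2):
        for m in range(4):
            idx = (2 * k + m) % n  # wrap around the end of the signal
            approx[k] += lo[m] * x[idx]
            detail[k] += hi[m] * x[idx]
    return approx, detail

x = np.sin(np.linspace(0.0, 2.0 * np.pi, 16, endpoint=False))
a, d = dwt_periodic(x, h, g)
```

Because the filter bank is orthonormal, the sum of squares of the approximation and detail coefficients equals that of the original signal, which is what makes wavelet bases attractive for stable Galerkin discretizations.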
A privacy-preserving solution for compressed storage and selective retrieval of genomic data.
Huang, Zhicong; Ayday, Erman; Lin, Huang; Aiyar, Raeka S; Molyneaux, Adam; Xu, Zhenyu; Fellay, Jacques; Steinmetz, Lars M; Hubaux, Jean-Pierre
2016-12-01
In clinical genomics, the continuous evolution of bioinformatic algorithms and sequencing platforms makes it beneficial to store patients' complete aligned genomic data in addition to variant calls relative to a reference sequence. Due to the large size of human genome sequence data files (varying from 30 GB to 200 GB depending on coverage), two major challenges facing genomics laboratories are the costs of storage and the efficiency of the initial data processing. In addition, privacy of genomic data is becoming an increasingly serious concern, yet no standard data storage solutions exist that enable compression, encryption, and selective retrieval. Here we present a privacy-preserving solution named SECRAM (Selective retrieval on Encrypted and Compressed Reference-oriented Alignment Map) for the secure storage of compressed aligned genomic data. Our solution enables selective retrieval of encrypted data and improves the efficiency of downstream analysis (e.g., variant calling). Compared with BAM, the de facto standard for storing aligned genomic data, SECRAM uses 18% less storage. Compared with CRAM, one of the most compressed nonencrypted formats (using 34% less storage than BAM), SECRAM maintains efficient compression and downstream data processing, while allowing for unprecedented levels of security in genomic data storage. Compared with previous work, the distinguishing features of SECRAM are that (1) it is position-based instead of read-based, and (2) it allows random querying of a subregion from a BAM-like file in an encrypted form. Our method thus offers a space-saving, privacy-preserving, and effective solution for the storage of clinical genomic data. © 2016 Huang et al.; Published by Cold Spring Harbor Laboratory Press.
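The position-based, block-wise design that enables selective retrieval can be sketched as a toy: compress fixed-size positional blocks independently, encrypt each block separately, and decrypt only the blocks overlapping a queried region. This is an illustrative sketch only; the XOR keystream below is a stand-in, not SECRAM's actual cryptographic scheme, and the block size and key are hypothetical:

```python
import hashlib
import zlib

BLOCK = 64  # genome positions per storage block (illustrative)

def _keystream(key: bytes, block_id: int, n: int) -> bytes:
    """Toy per-block keystream derived by hashing (key, block, counter)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + block_id.to_bytes(8, "big")
                              + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def store(sequence: str, key: bytes):
    """Compress, then encrypt, each fixed-size positional block independently."""
    blocks = []
    for i in range(0, len(sequence), BLOCK):
        plain = zlib.compress(sequence[i:i + BLOCK].encode())
        ks = _keystream(key, i // BLOCK, len(plain))
        blocks.append(bytes(a ^ b for a, b in zip(plain, ks)))
    return blocks

def query(blocks, key: bytes, start: int, end: int) -> str:
    """Selective retrieval: decrypt only blocks overlapping [start, end)."""
    out = []
    for bid in range(start // BLOCK, (end - 1) // BLOCK + 1):
        ks = _keystream(key, bid, len(blocks[bid]))
        plain = zlib.decompress(bytes(a ^ b for a, b in zip(blocks[bid], ks)))
        out.append(plain.decode())
    region = "".join(out)
    offset = start - (start // BLOCK) * BLOCK
    return region[offset:offset + (end - start)]

genome = "ACGT" * 100
key = b"secret-key"
enc = store(genome, key)
```

The essential point mirrored here is that a random subregion query touches only the blocks it overlaps, so the rest of the file never has to be decrypted or decompressed.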
Design of a Recommendation System for Adding Support in the Treatment of Chronic Patients.
Torkar, Simon; Benedik, Peter; Rajkovič, Uroš; Šušteršič, Olga; Rajkovič, Vladislav
2016-01-01
Rapid growth of chronic disease cases around the world is adding pressure on healthcare providers to ensure structured patient follow-up during the chronic disease management process. In response to the increasing demand for better chronic disease management and improved health care efficiency, nursing roles have been specialized or enhanced in the primary health care setting, and nurses have become key players in the chronic disease management process. This study describes a system to help nurses manage the care process of patients with chronic diseases. It supports focusing the nurse's attention on those resources and solutions that are likely to be most relevant to a particular situation or problem in the nursing domain. The system is based on a multi-relational property graph, a flexible modeling construct. The graph models a nursing ontology and the indices that partition the domain into an efficient, searchable space, where the solution to a problem is an abstractly defined traversal through its vertices and edges.
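A multi-relational property graph with label-filtered traversal, as described above, can be sketched in plain Python. The vertex names and edge labels below are hypothetical examples invented for illustration, not terms from the actual nursing ontology:

```python
from collections import defaultdict, deque

class PropertyGraph:
    """Minimal multi-relational property graph (illustrative sketch)."""

    def __init__(self):
        self.props = {}               # vertex -> property dict
        self.adj = defaultdict(list)  # vertex -> [(edge_label, vertex)]

    def add_vertex(self, v, **props):
        self.props[v] = props

    def add_edge(self, u, label, v):
        self.adj[u].append((label, v))

    def traverse(self, start, labels):
        """Breadth-first traversal following only edges whose label is allowed."""
        seen, order, q = {start}, [], deque([start])
        while q:
            u = q.popleft()
            order.append(u)
            for label, v in self.adj[u]:
                if label in labels and v not in seen:
                    seen.add(v)
                    q.append(v)
        return order

g = PropertyGraph()
g.add_vertex("pressure_ulcer", kind="problem")
g.add_vertex("repositioning", kind="intervention")
g.add_vertex("support_surface", kind="resource")
g.add_edge("pressure_ulcer", "treated_by", "repositioning")
g.add_edge("repositioning", "requires", "support_surface")
found = g.traverse("pressure_ulcer", {"treated_by", "requires"})
```

Restricting the traversal to a set of edge labels is what turns the full graph into the "efficient, searchable space" the abstract refers to: the query visits only vertices reachable through relevant relations.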
Numerical integration and optimization of motions for multibody dynamic systems
NASA Astrophysics Data System (ADS)
Aguilar Mayans, Joan
This thesis considers the optimization and simulation of motions involving rigid body systems. It does so in three distinct parts, with the following topics: optimization and analysis of human high-diving motions, efficient numerical integration of rigid body dynamics with contacts, and motion optimization of a two-link robot arm using Finite-Time Lyapunov Analysis. The first part introduces the concept of eigenpostures, which we use to simulate and analyze human high-diving motions. Eigenpostures are used in two different ways: first, to reduce the complexity of the optimal control problem that we solve to obtain such motions, and second, to generate an eigenposture space to which we map existing real world motions to better analyze them. The benefits of using eigenpostures are showcased through different examples. The second part reviews an extensive list of integration algorithms used for the integration of rigid body dynamics. We analyze the accuracy and stability of the different integrators in the three-dimensional space and the rotation space SO(3). Integrators with an accuracy higher than first order perform more efficiently than integrators with first order accuracy, even in the presence of contacts. The third part uses Finite-time Lyapunov Analysis to optimize motions for a two-link robot arm. Finite-Time Lyapunov Analysis diagnoses the presence of time-scale separation in the dynamics of the optimized motion and provides the information and methodology for obtaining an accurate approximation to the optimal solution, avoiding the complications that timescale separation causes for alternative solution methods.
The Problem of Size in Robust Design
NASA Technical Reports Server (NTRS)
Koch, Patrick N.; Allen, Janet K.; Mistree, Farrokh; Mavris, Dimitri
1997-01-01
To facilitate the effective solution of multidisciplinary, multiobjective complex design problems, a departure from the traditional parametric design analysis and single-objective optimization approaches is necessary in the preliminary stages of design. A necessary tradeoff becomes one of efficiency vs. accuracy as approximate models are sought to allow fast analysis and effective exploration of a preliminary design space. In this paper we apply a general robust design approach for efficient and comprehensive preliminary design to a large complex system: a high speed civil transport (HSCT) aircraft. Specifically, we investigate the HSCT wing configuration design, incorporating life cycle economic uncertainties to identify economically robust solutions. The approach is built on the foundation of statistical experimentation and modeling techniques and robust design principles, and is specialized through incorporation of the compromise Decision Support Problem for multiobjective design. For large problems, however, as in the HSCT example, this robust design approach breaks down with the problem of size: combinatorial explosion in experimentation and model building with the number of variables, sacrificing both efficiency and accuracy. Our focus in this paper is on identifying and discussing the implications and open issues associated with the problem of size for the preliminary design of large complex systems.
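The "problem of size" is easy to quantify: a full factorial experiment grows exponentially in the number of design variables, whereas a space-filling design such as a Latin hypercube keeps the run count fixed regardless of dimension. The sketch below illustrates this contrast; the choice of 26 variables and 20 runs is hypothetical, not taken from the paper:

```python
import numpy as np

def full_factorial_runs(n_vars, n_levels=3):
    """Run count for a full factorial design: levels ** variables."""
    return n_levels ** n_vars

def latin_hypercube(n_samples, n_vars, rng):
    """Space-filling design whose size is independent of dimension."""
    cols = []
    for _ in range(n_vars):
        # One stratified point per interval [k/n, (k+1)/n), then shuffled.
        strata = (np.arange(n_samples) + rng.random(n_samples)) / n_samples
        rng.shuffle(strata)
        cols.append(strata)
    return np.column_stack(cols)

rng = np.random.default_rng(0)
design = latin_hypercube(20, 26, rng)  # 20 runs regardless of dimension
runs_ff = full_factorial_runs(26)      # 3**26: over 2.5 trillion runs
```

Designs like this are one standard response to combinatorial explosion, though, as the abstract notes, the accuracy of the resulting approximate models then becomes the limiting concern.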
The finite state projection algorithm for the solution of the chemical master equation.
Munsky, Brian; Khammash, Mustafa
2006-01-28
This article introduces the finite state projection (FSP) method for use in the stochastic analysis of chemically reacting systems. One can describe the chemical populations of such systems with probability density vectors that evolve according to a set of linear ordinary differential equations known as the chemical master equation (CME). Unlike Monte Carlo methods such as the stochastic simulation algorithm (SSA) or tau leaping, the FSP directly solves or approximates the solution of the CME. If the CME describes a system that has a finite number of distinct population vectors, the FSP method provides an exact analytical solution. When an infinite or extremely large number of population variations is possible, the state space can be truncated, and the FSP method provides a certificate of accuracy for how closely the truncated-space approximation matches the true solution. The proposed FSP algorithm systematically increases the projection space in order to meet a prespecified tolerance on the total probability density error. For any system in which a sufficiently accurate FSP exists, the FSP algorithm is shown to converge in a finite number of steps. The FSP is utilized to solve two examples taken from the field of systems biology, and comparisons are made between the FSP, the SSA, and tau leaping algorithms. In both examples, the FSP outperforms the SSA in terms of accuracy as well as computational efficiency. Furthermore, due to very small molecular counts in these particular examples, the FSP also performs far more effectively than tau leaping methods.
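A minimal FSP can be demonstrated on a birth-death process (constant production, linear degradation), the simplest CME. The sketch below truncates the state space, exponentiates the substochastic generator via uniformization, and reports the FSP error certificate 1 - sum(p); the rate values are hypothetical, and a fixed-length series replaces the paper's adaptive expansion of the projection:

```python
import numpy as np

def fsp_birth_death(k, gamma, t, n_max):
    """FSP solution of a birth-death master equation on states 0..n_max.

    Returns the truncated probability vector p(t) and the FSP certificate:
    the truncation error is bounded by 1 - sum(p).
    """
    n = n_max + 1
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] -= k                  # birth leaves state i (leaks at i = n_max)
        if i + 1 < n:
            A[i + 1, i] += k          # birth i -> i+1
        if i > 0:
            A[i - 1, i] += gamma * i  # death i -> i-1
            A[i, i] -= gamma * i
    # Uniformization: p(t) = sum_j e^{-Lt} (Lt)^j / j! * P^j p0
    L = max(-A[i, i] for i in range(n)) + 1e-12
    P = np.eye(n) + A / L             # substochastic: column sums <= 1
    p = np.zeros(n)
    p[0] = 1.0
    out = np.zeros(n)
    term = np.exp(-L * t)
    for j in range(200):
        out += term * p
        p = P @ p
        term *= L * t / (j + 1)
    return out, 1.0 - out.sum()

p, err = fsp_birth_death(k=10.0, gamma=1.0, t=2.0, n_max=40)
mean = float(np.arange(41) @ p)       # exact transient mean is (k/g)(1 - e^{-gt})
```

Because probability that leaks past the truncation boundary is simply dropped, the deficit 1 - sum(p) directly bounds the error, which is the core idea behind the FSP certificate of accuracy.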
On the Five-Moment Hamburger Maximum Entropy Reconstruction
NASA Astrophysics Data System (ADS)
Summy, D. P.; Pullin, D. I.
2018-05-01
We consider the Maximum Entropy Reconstruction (MER) as a solution to the five-moment truncated Hamburger moment problem in one dimension. In the case of five monomial moment constraints, the probability density function (PDF) of the MER takes the form of the exponential of a quartic polynomial. This implies a possible bimodal structure in regions of moment space. An analytical model is developed for the MER PDF applicable near a known singular line in a centered, two-component, third- and fourth-order moment (μ_3, μ_4) space, consistent with the general problem of five moments. The model consists of the superposition of a perturbed, centered Gaussian PDF and a small-amplitude packet of PDF-density, called the outlying moment packet (OMP), sitting far from the mean. Asymptotic solutions are obtained which predict the shape of the perturbed Gaussian and both the amplitude and position on the real line of the OMP. The asymptotic solutions show that the presence of the OMP gives rise to an MER solution that is singular along a line in (μ_3, μ_4) space emanating from, but not including, the point representing a standard normal distribution, or thermodynamic equilibrium. We use this analysis of the OMP to develop a numerical regularization of the MER, creating a procedure we call the Hybrid MER (HMER). Compared with the MER, the HMER is a significant improvement in terms of robustness and efficiency while preserving accuracy in its prediction of other important distribution features, such as higher order moments.
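The exp-quartic form of the five-moment MER density, and the bimodality it permits, can be explored numerically. The sketch below evaluates p(x) ∝ exp(-poly4(x)) on a grid and computes its central moments; the coefficient values are hypothetical, and this forward evaluation is not the inverse moment-matching problem the paper solves:

```python
import numpy as np

def mer_pdf_moments(c, x_min=-6.0, x_max=6.0, n=20001):
    """Moments of a five-moment MER-form density p(x) ∝ exp(-poly4(x)).

    c = (c1, c2, c3, c4) are polynomial coefficients; the quartic
    coefficient c4 must be positive for normalizability.
    """
    x = np.linspace(x_min, x_max, n)
    w = np.exp(-(c[0] * x + c[1] * x**2 + c[2] * x**3 + c[3] * x**4))
    dx = x[1] - x[0]
    p = w / (w.sum() * dx)                 # normalize to unit mass
    mean = (x * p).sum() * dx
    mu = [(((x - mean) ** k) * p).sum() * dx for k in (2, 3, 4)]
    return mean, mu[0], mu[1], mu[2]       # mean, variance, mu3, mu4

# A negative quadratic term stabilized by the quartic gives a symmetric,
# bimodal density: mu3 vanishes and mu4/mu2^2 drops below the Gaussian value 3.
mean, var, mu3, mu4 = mer_pdf_moments((0.0, -2.0, 0.0, 1.0))
```

Densities of this double-well type occupy the bimodal region of (μ_3, μ_4) space that the abstract's analytical model addresses.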
Limitations to the study of man in space in the U.S. space program
NASA Technical Reports Server (NTRS)
Bishop, Phillip A.; Greenisen, Mike
1993-01-01
Research on humans conducted during spaceflight is fraught both with great opportunities and great obstacles. The purpose of this paper is to review some of the limitations to research in space in the United States with hope that an informed scientific community may lead to more rapid and efficient solution of these problems. Limitations arise because opportunities to study the same astronauts in well-controlled situations on repeated spaceflights are practically non-existent. Human research opportunities are further limited by the necessity of avoiding simultaneous mutually-interfering experiments. Environmental factors, including diet and other physiological perturbations concomitant with spaceflight, also complicate research design and interpretation. Technical limitations to research methods and opportunities further restrict the development of the knowledge base. Finally, Earth analogues of space travel all suffer from inadequacies. Though all of these obstacles will eventually be overcome, creativity, diligence, and persistence are required to further our knowledge of humans in space.
Boyd, O.S.
2006-01-01
We have created a second-order finite-difference solution to the anisotropic elastic wave equation in three dimensions and implemented the solution as an efficient Matlab script. This program allows the user to generate synthetic seismograms for three-dimensional anisotropic earth structure. The code was written for teleseismic wave propagation in the 1-0.1 Hz frequency range but is of general utility and can be used at all scales of space and time. This program was created to help distinguish among various types of lithospheric structure given the uneven distribution of sources and receivers commonly utilized in passive source seismology. Several successful implementations have resulted in a better appreciation for subduction zone structure, the fate of a transform fault with depth, lithospheric delamination, and the effects of wavefield focusing and defocusing on attenuation. Companion scripts are provided which help the user prepare input to the finite-difference solution. Boundary conditions including specification of the initial wavefield, absorption and two types of reflection are available. © 2005 Elsevier Ltd. All rights reserved.
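The second-order centered scheme at the heart of such codes is easiest to see in the 1D scalar analogue of the 3D anisotropic problem. The sketch below (a hypothetical simplification, not the published Matlab code) advances u_tt = c² u_xx with a leapfrog update, stable under the CFL condition c·dt/dx ≤ 1:

```python
import numpy as np

def wave_1d(c, dx, dt, n_steps, u0):
    """Second-order centered finite differences for u_tt = c^2 u_xx.

    Fixed (zero) boundaries; starts from rest (zero initial velocity).
    """
    r2 = (c * dt / dx) ** 2
    u_prev = u0.copy()   # u at t = -dt equals u at t = 0 (start from rest)
    u = u0.copy()
    for _ in range(n_steps):
        u_next = np.zeros_like(u)
        u_next[1:-1] = (2.0 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2.0 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return u

x = np.linspace(0.0, 1.0, 201)
u0 = np.exp(-((x - 0.5) / 0.05) ** 2)   # Gaussian pulse at the midpoint
u = wave_1d(c=1.0, dx=x[1] - x[0], dt=0.004, n_steps=50, u0=u0)
```

Starting from rest, the pulse splits into two half-amplitude pulses traveling in opposite directions; the 3D anisotropic case replaces the scalar Laplacian with the full elastic stiffness tensor but keeps the same second-order time stepping.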
Network Management and FDIR for SpaceWire Networks (N-MaSS)
NASA Astrophysics Data System (ADS)
Montano, Giuseppe; Jameux, David; Cook, Barry; Peel, Rodger; McCormick, Ecaterina; Walker, Paul; Kollias, Vangelis; Pogkas, Nikos
2014-08-01
The SpaceWire network management layer, which manages network topology and routing, is not yet standardised. This paper presents the European Space Agency (ESA) N-MaSS study, which focuses on implementation and standardisation of Fault Detection, Isolation and Recovery (FDIR) functions within the SpaceWire network management layer. N-MaSS provides an autonomous FDIR solution. It is defined at the SpaceWire network layer in order to achieve efficient re-use for heterogeneous missions, allowing for the incorporation of legacy equipment. The N-MaSS FDIR functions identify SpaceWire link and node failures and provide recovery using redundant nodes. This paper provides an overview of the overall N-MaSS study. In particular, the following topics are discussed: (a) how user requirements have been captured from the industry, SpaceWire Working Group and ESA; (b) how the N-MaSS architecture was organically shaped on the basis of the requirements captured; (c) how the N-MaSS concept is currently being implemented in a demonstrator and verified.
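The recovery half of FDIR, failing over from a broken link to a redundant route, can be sketched generically as a graph search that excludes failed links. This is an illustrative abstraction, not the SpaceWire protocol or the N-MaSS implementation; the node names are hypothetical:

```python
from collections import deque

def route(links, src, dst, failed=frozenset()):
    """Breadth-first route search that avoids links marked as failed."""
    adj = {}
    for a, b in links:
        if (a, b) in failed or (b, a) in failed:
            continue  # isolation: failed links are excluded from the graph
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None  # no route survives: isolation without recovery

links = {("A", "R1"), ("R1", "B"), ("A", "R2"), ("R2", "B")}  # redundant routers
nominal = route(links, "A", "B")
recovered = route(links, "A", "B", failed={("A", "R1")})
```

Detection (noticing that link A-R1 has failed) and isolation (removing it from the routing tables) are the harder parts in practice; once they are done, recovery reduces to a route recomputation like the one above.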
Azolla as a component of the space diet during habitation on Mars
NASA Astrophysics Data System (ADS)
Katayama, Naomi; Yamashita, Masamichi; Kishida, Yoshiro; Liu, Chung-Chu; Watanabe, Iwao; Wada, Hidenori; Space Agriculture Task Force
We evaluate a candidate diet and specify its space agricultural requirements for habitation on Mars. Rice, soybean, sweet potato and a green-yellow vegetable have been selected as the basic vegetarian menu. The addition of silkworm pupa, loach, and Azolla to that basic menu was found to meet human nutritional requirements. Co-culture of rice, Azolla, and loach is proposed for developing bio-regenerative life support capability with high efficiency of the usage of habitation and agriculture area. Agriculture designed under the severe constraints of limited materials resources in space would make a positive contribution toward solving the food shortages and environmental problems facing humans on Earth, and may provide an effective sustainable solution for our civilization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jimenez, Bienvenido; Novo, Vicente
We provide second-order necessary and sufficient conditions for a point to be an efficient element of a set with respect to a cone in a normed space, so that there is only a small gap between necessary and sufficient conditions. To this aim, we use the common second-order tangent set and the asymptotic second-order cone utilized by Penot. As an application we establish second-order necessary conditions for a point to be a solution of a vector optimization problem with an arbitrary feasible set and a twice Fréchet differentiable objective function between two normed spaces. We also establish second-order sufficient conditions when the initial space is finite-dimensional, so that there is no gap with the necessary conditions. Lagrange multiplier rules are also given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu
2017-04-01
In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in the cross section, initial data, or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro-macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
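The gPC-SG mechanics can be shown on a much simpler model problem than radiative heat transfer: the random decay equation du/dt = -(b0 + b1·ξ)u with ξ standard normal. Expanding u in probabilists' Hermite polynomials and using ξ·He_j = He_{j+1} + j·He_{j-1}, Galerkin projection yields a coupled deterministic ODE system for the coefficients. This toy problem and its parameter values are hypothetical, not from the paper:

```python
import numpy as np

def gpc_galerkin_decay(b0, b1, t, n_modes=12, n_steps=2000):
    """Stochastic Galerkin solution of du/dt = -(b0 + b1*xi)*u, u(0) = 1,
    xi ~ N(0,1), in probabilists' Hermite polynomials He_k.

    Galerkin system: du_k/dt = -b0*u_k - b1*(u_{k-1} + (k+1)*u_{k+1}).
    Returns the coefficient vector; u[0] is the mean of u(t).
    """
    u = np.zeros(n_modes)
    u[0] = 1.0  # deterministic initial condition
    dt = t / n_steps

    def rhs(u):
        du = -b0 * u
        du[1:] -= b1 * u[:-1]                              # u_{k-1} coupling
        du[:-1] -= b1 * np.arange(1, n_modes) * u[1:]      # (k+1)*u_{k+1}
        return du

    for _ in range(n_steps):  # classical RK4 in time
        k1 = rhs(u)
        k2 = rhs(u + 0.5 * dt * k1)
        k3 = rhs(u + 0.5 * dt * k2)
        k4 = rhs(u + dt * k3)
        u = u + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return u

mean_gpc = gpc_galerkin_decay(1.0, 0.1, t=1.0)[0]
mean_exact = np.exp(-1.0 + 0.5 * 0.1 ** 2)  # E[e^{-(b0+b1*xi)t}] at t = 1
```

The spectral accuracy the abstract proves shows up even here: with a mild random perturbation, a dozen Hermite modes reproduce the exact mean essentially to solver precision.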
The space station integrated refuse management system
NASA Technical Reports Server (NTRS)
Anderson, Loren A.
1988-01-01
The design and development of an Integrated Refuse Management System for the proposed International Space Station was performed. The primary goal was to make use of any existing potential energy or material properties that refuse may possess. The secondary goal was based on the complete removal or disposal of those products that could not, in any way, benefit astronauts' needs aboard the Space Station. The design of a continuous living and experimental habitat in space has spawned the need for a highly efficient and effective refuse management system capable of managing nearly forty-thousand pounds of refuse annually. To satisfy this need, the following four integrable systems were researched and developed: collection and transfer; recycle and reuse; advance disposal; and propulsion assist in disposal. The design of a Space Station subsystem capable of collecting and transporting refuse from its generation site to its disposal and/or recycling site was accomplished. Several methods of recycling or reusing refuse in the space environment were researched. The optimal solution was determined to be the method of pyrolysis. The objective of removing refuse from the Space Station environment, subsequent to recycling, was fulfilled with the design of a jettison vehicle. A number of jettison vehicle launch scenarios were analyzed. Selection of a proper disposal site and the development of a system to propel the vehicle to that site were completed. Reentry into the earth atmosphere for the purpose of refuse incineration was determined to be the most attractive solution.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Malcolm Pitts; Jie Qi; Dan Wilson
2005-10-01
Gelation technologies have been developed to provide more efficient vertical sweep efficiencies for flooding naturally fractured oil reservoirs, or more efficient areal sweep efficiency for those with high permeability contrast "thief zones". The field-proven alkaline-surfactant-polymer technology economically recovers 15% to 25% OOIP more oil than waterflooding from the swept pore space of an oil reservoir. However, alkaline-surfactant-polymer technology is not amenable to naturally fractured reservoirs or those with thief zones because much of the injected solution bypasses the target pore space containing oil. This work investigates whether combining these two technologies could broaden the applicability of alkaline-surfactant-polymer flooding into these reservoirs. A prior fluid-fluid report discussed the interaction of different gel chemical compositions and alkaline-surfactant-polymer solutions. Gel solutions under the dynamic conditions of linear corefloods showed similar stability to alkaline-surfactant-polymer solutions as in the fluid-fluid analyses. Aluminum-polyacrylamide flowing gels are not stable to alkaline-surfactant-polymer solutions of either pH 10.5 or 12.9. Chromium acetate-polyacrylamide flowing and rigid flowing gels are stable to subsequent alkaline-surfactant-polymer solution injection. Rigid flowing chromium acetate-polyacrylamide gels maintained permeability reduction better than flowing chromium acetate-polyacrylamide gels. Silicate-polyacrylamide gels are not stable with subsequent injection of either a pH 10.5 or a pH 12.9 alkaline-surfactant-polymer solution. Chromium acetate-xanthan gum rigid gels are not stable to subsequent alkaline-surfactant-polymer solution injection. Resorcinol-formaldehyde gels were stable to subsequent alkaline-surfactant-polymer solution injection. When evaluated in a dual-core configuration, injected fluid flows into the core with the greatest effective permeability to the injected fluid.
The same gel stability trends with respect to subsequently injected alkaline-surfactant-polymer solution were observed. Aluminum citrate-polyacrylamide, resorcinol-formaldehyde, and silicate-polyacrylamide gel systems did not produce significant incremental oil in linear corefloods. Both flowing and rigid flowing chromium acetate-polyacrylamide gels and the xanthan gum-chromium acetate gel system produced incremental oil, with the rigid flowing gel producing the greatest amount. The higher oil recovery could have been due to higher differential pressures across the cores. None of the gels tested appeared to alter alkaline-surfactant-polymer solution oil recovery; total waterflood plus chemical flood oil recovery sequences were all similar. Chromium acetate-polyacrylamide gel used to seal a fractured core maintains fracture closure if followed by an alkaline-surfactant-polymer solution. Chromium acetate gels that were stable to injection of alkaline-surfactant-polymer solutions at 72 F were also stable to injection of alkaline-surfactant-polymer solutions at 125 F and 175 F in linear corefloods. Chromium acetate-polyacrylamide gels maintained diversion capability after injection of an alkaline-surfactant-polymer solution in a stacked, radial coreflood with a common wellbore. Xanthan gum-chromium acetate gels maintained gel integrity in linear corefloods after injection of an alkaline-surfactant-polymer solution at 125 F. At 175 F, xanthan gum-chromium acetate gels were not stable, either with or without subsequent alkaline-surfactant-polymer solution injection. Numerical simulation demonstrated that reducing the permeability of a high-permeability zone of a reservoir with gel improved both waterflood and alkaline-surfactant-polymer flood oil recovery. A Minnelusa reservoir with both A and B sand production was simulated, with the A and B sands separated by a shale layer. A sand and B sand waterflood oil recovery was improved by 196,000 bbls when a gel was placed in the B sand.
A sand and B sand alkaline-surfactant-polymer flood oil recovery was improved by 596,000 bbls when a gel was placed in the B sand. The alkaline-surfactant-polymer flood oil recovery improvement over a waterflood was 392,000 bbls. Placing a gel into the B sand prior to an alkaline-surfactant-polymer flood resulted in 989,000 bbls more oil than water injection alone.
NASA Technical Reports Server (NTRS)
Deavours, Daniel D.; Qureshi, M. Akber; Sanders, William H.
1997-01-01
Modeling tools and technologies are important for aerospace development. At the University of Illinois, we have worked on advancing the state of the art in modeling with Markov reward models in two important areas: reducing the memory necessary to numerically solve systems represented as stochastic activity networks and other stochastic Petri net extensions while still obtaining solutions in a reasonable amount of time, and finding numerically stable and memory-efficient methods to solve for the reward accumulated during a finite mission time. A long-standing problem when modeling with high-level formalisms such as stochastic activity networks is the so-called state space explosion, where the number of states increases exponentially with the size of the high-level model. Thus, the corresponding Markov model becomes prohibitively large, and solution is constrained by the size of primary memory. To reduce the memory necessary to numerically solve complex systems, we propose new methods that tolerate such large state spaces and do not require any special structure in the model (as many other techniques do). First, we develop methods that generate rows and columns of the state transition-rate matrix on the fly, eliminating the need to explicitly store the matrix at all. Next, we introduce a new iterative solution method, called modified adaptive Gauss-Seidel, that exhibits locality in its use of data from the state transition-rate matrix, permitting us to cache portions of the matrix and hence reduce the solution time. Finally, we develop a new memory- and computationally efficient technique for Gauss-Seidel based solvers that avoids the need for generating rows of A in order to solve Ax = b. This is a significant performance improvement for on-the-fly methods as well as other recent solution techniques based on Kronecker operators. Taken together, these new results show that one can solve very large models without any special structure.
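The on-the-fly idea, never storing the transition-rate matrix, only generating its nonzeros from the high-level model as each state is visited, can be sketched with a Gauss-Seidel sweep for the steady state of a small birth-death chain (an M/M/1/K queue). This is a toy illustration of the storage-free principle, not the modified adaptive Gauss-Seidel algorithm of the paper; the rates are hypothetical:

```python
def rates_into(i, lam, mu, n_states):
    """Yield the nonzero transition rates into state i, generated on the
    fly from the model description (nothing is stored globally)."""
    if i > 0:
        yield i - 1, lam           # arrival from state i-1
    if i < n_states - 1:
        yield i + 1, mu            # service completion from state i+1

def exit_rate(i, lam, mu, n_states):
    return (lam if i < n_states - 1 else 0.0) + (mu if i > 0 else 0.0)

def steady_state(lam, mu, n_states, sweeps=500):
    """Gauss-Seidel sweeps on the balance equations pi*Q = 0, with matrix
    entries regenerated from the model at every access."""
    pi = [1.0 / n_states] * n_states
    for _ in range(sweeps):
        for i in range(n_states):
            inflow = sum(pi[j] * r for j, r in rates_into(i, lam, mu, n_states))
            pi[i] = inflow / exit_rate(i, lam, mu, n_states)
        s = sum(pi)
        pi = [p / s for p in pi]   # renormalize after each sweep
    return pi

pi = steady_state(lam=1.0, mu=2.0, n_states=10)
```

For this queue the stationary distribution is geometric with ratio lam/mu, so each probability should be half the previous one; the memory cost is one vector of length n_states, exactly the point of on-the-fly generation.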
Efficient solution of the simplified PN equations
Hamilton, Steven P.; Evans, Thomas M.
2014-12-23
We present new solver strategies for the multigroup SPN equations for nuclear reactor analysis. By forming the complete matrix over space, moments, and energy, a robust set of solution strategies may be applied. Power iteration, shifted power iteration, Rayleigh quotient iteration, Arnoldi's method, and a generalized Davidson method, each using algebraic and physics-based multigrid preconditioners, have been compared on the C5G7 MOX benchmark problem as well as an operational PWR model. These results show that the most efficient approach is the generalized Davidson method, which is 30-40 times faster than traditional power iteration and 6-10 times faster than Arnoldi's method.
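The gap between plain power iteration and shift-and-invert methods that the abstract quantifies can be reproduced on a small symmetric test matrix. The sketch below compares power iteration with Rayleigh quotient iteration (a relative of the shifted methods discussed); it is a generic eigensolver demonstration on a made-up matrix, not the reactor code or the Davidson method itself:

```python
import numpy as np

def power_iteration(A, tol=1e-10, max_it=10000):
    """Dominant eigenpair by plain power iteration; converges linearly
    at a rate set by the eigenvalue ratio lambda2/lambda1."""
    x = np.ones(A.shape[0]) / np.sqrt(A.shape[0])
    lam = 0.0
    for it in range(1, max_it + 1):
        y = A @ x
        lam_new = x @ y                  # Rayleigh quotient estimate
        x = y / np.linalg.norm(y)
        if abs(lam_new - lam) < tol:
            return lam_new, x, it
        lam = lam_new
    return lam, x, max_it

def rayleigh_quotient_iteration(A, tol=1e-10, max_it=50):
    """Shift-and-invert with the current Rayleigh quotient as the shift;
    converges cubically for symmetric A (to a nearby eigenvalue)."""
    n = A.shape[0]
    x = np.ones(n) / np.sqrt(n)
    lam = x @ A @ x
    for it in range(1, max_it + 1):
        y = np.linalg.solve(A - lam * np.eye(n), x)
        x = y / np.linalg.norm(y)
        lam_new = x @ A @ x
        if abs(lam_new - lam) < tol:
            return lam_new, x, it
        lam = lam_new
    return lam, x, max_it

# Symmetric test matrix with a known, closely spaced spectrum in [1, 2].
rng = np.random.default_rng(1)
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
A = Q @ np.diag(np.linspace(1.0, 2.0, 50)) @ Q.T
lam_pi, _, it_pi = power_iteration(A)
lam_rqi, _, it_rqi = rayleigh_quotient_iteration(A)
```

With a dominance ratio near 0.99, power iteration needs on the order of a thousand matrix products, while the shifted iteration converges in a handful, the same qualitative behavior that motivates shifted and Davidson-type methods for reactor eigenvalue problems.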
Numerical Simulation of the Flow over a Segment-Conical Body on the Basis of Reynolds Equations
NASA Astrophysics Data System (ADS)
Egorov, I. V.; Novikov, A. V.; Palchekovskaya, N. V.
2018-01-01
Numerical simulation was used to study the 3D supersonic flow over a segment-conical body similar in shape to the ExoMars space vehicle. The nonmonotone behavior of the normal force acting on the body placed in a supersonic gas flow was analyzed depending on the angle of attack. The simulation was based on the numerical solution of the unsteady Reynolds-averaged Navier-Stokes equations with a two-parameter differential turbulence model. The solution of the problem was obtained using the in-house solver HSFlow with an efficient parallel algorithm intended for multiprocessor supercomputers.
An efficient algorithm for orbital evolution of space debris
NASA Astrophysics Data System (ADS)
Abdel-Aziz, Y.; Abd El-Salam, F.
More than four decades of space exploration have led to the accumulation of significant quantities of debris around the Earth. These objects range in size from tiny pieces of junk to large inoperable satellites; although small, many have high area-to-mass ratios, and consequently their orbits are strongly influenced by solar radiation pressure and atmospheric drag. The growing population of space debris in LEO, MEO, and GEO therefore presents an increasingly serious hazard to the survival of operating spacecraft, particularly satellites and astronomical observatories, since the average collision velocity between a spacecraft and a debris object is about 10 km/s in LEO and about 3 km/s in GEO. Space debris may significantly disturb satellite operations or cause catastrophic damage to the spacecraft itself. By applying shielding techniques, spacecraft may be protected against impacts of space debris with diameters smaller than 1 cm. For larger debris objects, the only effective way to avoid the catastrophic consequences of a collision is a manoeuvre that changes the spacecraft orbit. The necessary condition in this case is to evaluate and predict future positions of the spacecraft and space debris with sufficient accuracy. Numerical integration of the equations of motion has been used until now; existing analytical methods can solve this problem only with low accuracy. Difficulties are caused mainly by the lack of a satisfactory analytical solution of the resonance problem for geosynchronous orbit, as well as by the lack of an efficient analytical theory combining luni-solar perturbations and solar radiation pressure with the geopotential attraction. Numerical integration is time consuming in some cases, and for qualitative analysis of the satellite's and debris's motion it is then necessary to apply an analytical solution.
This is the reason for searching for an accurate model to evaluate the orbital positions of operating satellites and space debris. The present paper develops a second-order theory of perturbations (in the sense of the Hori-Lie perturbation method) that includes the geopotential effect, luni-solar perturbations, solar radiation pressure, and atmospheric drag. Resonance and very-long-period perturbations are modeled with the use of semi-secular terms for short-time-span predictions. We present a comparison of our analytical solution with numerical integration of the equations of motion for chosen artificial satellites (LEO, MEO, GEO) and for different space debris objects with different area-to-mass ratios, showing good accuracy of the theory.
NASA Astrophysics Data System (ADS)
Yang, L. M.; Shu, C.; Yang, W. M.; Wu, J.
2018-04-01
High consumption of memory and computational effort is the major barrier preventing the widespread use of the discrete velocity method (DVM) in the simulation of flows in all flow regimes. To overcome this drawback, an implicit DVM with a memory reduction technique for solving a steady discrete velocity Boltzmann equation (DVBE) is presented in this work. In the method, the distribution functions in the whole discrete velocity space do not need to be stored; they are calculated from the macroscopic flow variables. As a result, its memory requirement is of the same order as that of a conventional Euler/Navier-Stokes solver. At the same time, it is more efficient than the explicit DVM for the simulation of various flows. To make the method efficient for solving flow problems in all flow regimes, a prediction step is introduced to estimate the local equilibrium state of the DVBE. In the prediction step, the distribution function at the cell interface is calculated by the local solution of the DVBE. When the cell size is less than the mean free path, the prediction step has almost no effect on the solution. However, when the cell size is much larger than the mean free path, the prediction step dominates the solution so as to provide reasonable results in such a flow regime. In addition, to further improve the computational efficiency of the developed scheme in the continuum flow regime, the implicit technique is also introduced into the prediction step. Numerical results show that the proposed implicit scheme provides reasonable results in all flow regimes and significantly increases the computational efficiency in the continuum flow regime compared with existing DVM solvers.
Marginal Space Deep Learning: Efficient Architecture for Volumetric Image Parsing.
Ghesu, Florin C; Krubasik, Edward; Georgescu, Bogdan; Singh, Vivek; Yefeng Zheng; Hornegger, Joachim; Comaniciu, Dorin
2016-05-01
Robust and fast solutions for anatomical object detection and segmentation support the entire clinical workflow from diagnosis, patient stratification, therapy planning, and intervention to follow-up. Current state-of-the-art techniques for parsing volumetric medical image data are typically based on machine learning methods that exploit large annotated image databases. Two main challenges need to be addressed: the efficiency of scanning high-dimensional parametric spaces and the need for representative image features, which currently require significant manual engineering effort. We propose a pipeline for object detection and segmentation in the context of volumetric image parsing, solving a two-step learning problem: anatomical pose estimation and boundary delineation. For this task we introduce Marginal Space Deep Learning (MSDL), a novel framework exploiting both the strengths of efficient object parametrization in hierarchical marginal spaces and the automated feature design of Deep Learning (DL) network architectures. In the 3D context, the application of deep learning systems is limited by the very high complexity of the parametrization. More specifically, nine parameters are necessary to describe a restricted affine transformation in 3D, resulting in a prohibitive number of scanning hypotheses, on the order of billions. The mechanism of marginal space learning provides excellent run-time performance by learning classifiers in clustered, high-probability regions in spaces of gradually increasing dimensionality. To further increase computational efficiency and robustness, our system learns sparse adaptive data sampling patterns that automatically capture the structure of the input. Given the object localization, we propose a DL-based active shape model to estimate the non-rigid object boundary. 
Experimental results are presented on the aortic valve in ultrasound using an extensive dataset of 2891 volumes from 869 patients, showing significant improvements of up to 45.2% over the state of the art. To our knowledge, this is the first successful demonstration of the potential of DL for detection and segmentation in full 3D data with parametrized representations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2014-09-01
This case study describes the University of Minnesota’s Cloquet Residential Research Facility (CRRF) in northern Minnesota, which features more than 2,500 ft² of below-grade space for building systems and foundation hygrothermal research. Here, the NorthernSTAR Building America Partnership team researches ways to improve the energy efficiency of the building envelope, including wall assemblies, basements, roofs, insulation, and air leakage.
NASA Technical Reports Server (NTRS)
Badger, Julia M.; Claunch, Charles; Mathis, Frank
2017-01-01
The Modular Autonomous Systems Technology (MAST) framework is a tool for building distributed, hierarchical autonomous systems. Originally intended for the autonomous monitoring and control of spacecraft, this framework concept provides support for variable autonomy, assume-guarantee contracts, and efficient communication between subsystems and a centralized systems manager. MAST was developed at NASA's Johnson Space Center (JSC) and has been applied to an integrated spacecraft example scenario.
NASA Astrophysics Data System (ADS)
Jonsson, Thorsteinn H.; Manolescu, Andrei; Goan, Hsi-Sheng; Abdullah, Nzar Rauf; Sitek, Anna; Tang, Chi-Shung; Gudmundsson, Vidar
2017-11-01
Master equations are commonly used to describe the time evolution of open systems. We introduce a general, computationally efficient method for calculating a Markovian solution of the Nakajima-Zwanzig generalized master equation. We do so for the time-dependent transport of interacting electrons through a complex nanoscale system in a photon cavity. The central system, described by 120 many-body states in a Fock space, is weakly coupled to the external leads. The efficiency of the approach allows us to place the bias window defined by the external leads high into the many-body spectrum of the cavity photon-dressed states of the central system, revealing a cascade of intermediate transitions as the system relaxes to a steady state. The very diverse relaxation times present in the open system, reflecting radiative or non-radiative transitions, require information about the time evolution through many orders of magnitude. In our approach, the generalized master equation is mapped from a many-body Fock space of states to a Liouville space of transitions. We show that this results in a linear equation which is solved exactly through an eigenvalue analysis, which supplies information on the steady state and the time evolution of the system.
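The mapping from a space of states to a Liouville space of transitions, followed by an exact eigenvalue solution, can be illustrated on a much smaller model. The sketch below vectorizes a Lindblad-form master equation for a two-level emitter with spontaneous decay; this toy generator merely stands in for the Nakajima-Zwanzig kernel and the 120-state system of the paper.

```python
import numpy as np

def liouvillian(H, L, gamma):
    """Build the Liouville-space generator Lop so that, with row-major
    vectorization, vec(drho/dt) = Lop @ vec(rho)."""
    n = H.shape[0]
    I = np.eye(n)
    LdL = L.conj().T @ L
    Lop = -1j * (np.kron(H, I) - np.kron(I, H.T))       # -i[H, rho]
    Lop = Lop + gamma * (np.kron(L, L.conj())           # L rho L^dag
                         - 0.5 * (np.kron(LdL, I) + np.kron(I, LdL.T)))
    return Lop

def evolve(Lop, rho0, t):
    """Exact time evolution at any t from one eigenvalue analysis of Lop."""
    w, V = np.linalg.eig(Lop)
    c = np.linalg.solve(V, rho0.reshape(-1))
    return (V @ (np.exp(w * t) * c)).reshape(rho0.shape)
```

A single diagonalization then gives the state at any time, which is what makes relaxation spanning many orders of magnitude in time tractable.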
NASA Technical Reports Server (NTRS)
Johnson, Les; Fabisinski, Leo; Justice, Stefanie
2014-01-01
Affordable and convenient access to electrical power is critical to consumers, spacecraft, military and other applications alike. In the aerospace industry, an increased emphasis on small satellite flights and a move toward CubeSat and NanoSat technologies, the need for systems that could package into a small stowage volume while still being able to power robust space missions has become more critical. As a result, the Marshall Space Flight Center's Advanced Concepts Office identified a need for more efficient, affordable, and smaller space power systems to trade in performing design and feasibility studies. The Lightweight Inflatable Solar Array (LISA), a concept designed, prototyped, and tested at the NASA Marshall Space Flight Center (MSFC) in Huntsville, Alabama provides an affordable, lightweight, scalable, and easily manufactured approach for power generation in space or on Earth. This flexible technology has many wide-ranging applications from serving small satellites to soldiers in the field. By using very thin, ultraflexible solar arrays adhered to an inflatable structure, a large area (and thus large amount of power) can be folded and packaged into a relatively small volume (shown in artist rendering in Figure 1 below). The proposed presentation will provide an overview of the progress to date on the LISA project as well as a look at its potential, with continued development, to revolutionize small spacecraft and portable terrestrial power systems.
Selection of active spaces for multiconfigurational wavefunctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Keller, Sebastian; Boguslawski, Katharina; Reiher, Markus, E-mail: markus.reiher@phys.chem.ethz.ch
2015-06-28
The efficient and accurate description of the electronic structure of strongly correlated systems is still a largely unsolved problem. The usual procedures start with a multiconfigurational (usually a Complete Active Space, CAS) wavefunction which accounts for static correlation and add dynamical correlation by perturbation theory, configuration interaction, or coupled cluster expansion. This procedure requires the correct selection of the active space. Intuitive methods are unreliable for complex systems. The inexpensive black-box unrestricted natural orbital (UNO) criterion postulates that the Unrestricted Hartree-Fock (UHF) charge natural orbitals with fractional occupancy (e.g., between 0.02 and 1.98) constitute the active space. UNOs generally approximate the CAS orbitals so well that the orbital optimization in CAS Self-Consistent Field (CASSCF) may be omitted, resulting in the inexpensive UNO-CAS method. A rigorous testing of the UNO criterion requires comparison with approximate full configuration interaction wavefunctions. This became feasible with the advent of Density Matrix Renormalization Group (DMRG) methods which can approximate highly correlated wavefunctions at affordable cost. We have compared active orbital occupancies in UNO-CAS and CASSCF calculations with DMRG in a number of strongly correlated molecules: compounds of electronegative atoms (F2, ozone, and NO2), polyenes, aromatic molecules (naphthalene, azulene, anthracene, and nitrobenzene), radicals (phenoxy and benzyl), diradicals (o-, m-, and p-benzyne), and transition metal compounds (nickel-acetylene and Cr2). The UNO criterion works well in these cases. Other symmetry breaking solutions, with the possible exception of spatial symmetry, do not appear to be essential to generate the correct active space. In the case of multiple UHF solutions, the natural orbitals of the average UHF density should be used. 
The problems of the UNO criterion and their potential solutions are discussed: finding the UHF solutions, discontinuities on potential energy surfaces, and inclusion of dynamical electron correlation and generalization to excited states.
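The UNO selection rule itself reduces to a one-line filter once the UHF natural-orbital occupation numbers are in hand. A minimal sketch, with the occupancy thresholds from the abstract and a hypothetical occupation vector:

```python
import numpy as np

def uno_active_space(occ, lo=0.02, hi=1.98):
    """UNO criterion: natural orbitals with fractional occupancy in (lo, hi)
    form the active space; returns their indices and the electron count."""
    occ = np.asarray(occ)
    idx = np.where((occ > lo) & (occ < hi))[0]
    return idx, occ[idx].sum()
```

Orbitals essentially doubly occupied (near 2) or empty (near 0) are excluded, which is what makes the criterion black-box.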
Selection of active spaces for multiconfigurational wavefunctions
NASA Astrophysics Data System (ADS)
Keller, Sebastian; Boguslawski, Katharina; Janowski, Tomasz; Reiher, Markus; Pulay, Peter
2015-06-01
The efficient and accurate description of the electronic structure of strongly correlated systems is still a largely unsolved problem. The usual procedures start with a multiconfigurational (usually a Complete Active Space, CAS) wavefunction which accounts for static correlation and add dynamical correlation by perturbation theory, configuration interaction, or coupled cluster expansion. This procedure requires the correct selection of the active space. Intuitive methods are unreliable for complex systems. The inexpensive black-box unrestricted natural orbital (UNO) criterion postulates that the Unrestricted Hartree-Fock (UHF) charge natural orbitals with fractional occupancy (e.g., between 0.02 and 1.98) constitute the active space. UNOs generally approximate the CAS orbitals so well that the orbital optimization in CAS Self-Consistent Field (CASSCF) may be omitted, resulting in the inexpensive UNO-CAS method. A rigorous testing of the UNO criterion requires comparison with approximate full configuration interaction wavefunctions. This became feasible with the advent of Density Matrix Renormalization Group (DMRG) methods which can approximate highly correlated wavefunctions at affordable cost. We have compared active orbital occupancies in UNO-CAS and CASSCF calculations with DMRG in a number of strongly correlated molecules: compounds of electronegative atoms (F2, ozone, and NO2), polyenes, aromatic molecules (naphthalene, azulene, anthracene, and nitrobenzene), radicals (phenoxy and benzyl), diradicals (o-, m-, and p-benzyne), and transition metal compounds (nickel-acetylene and Cr2). The UNO criterion works well in these cases. Other symmetry breaking solutions, with the possible exception of spatial symmetry, do not appear to be essential to generate the correct active space. In the case of multiple UHF solutions, the natural orbitals of the average UHF density should be used. 
The problems of the UNO criterion and their potential solutions are discussed: finding the UHF solutions, discontinuities on potential energy surfaces, and inclusion of dynamical electron correlation and generalization to excited states.
High-Efficiency Artificial Photosynthesis Using a Novel Alkaline Membrane Cell
NASA Technical Reports Server (NTRS)
Narayan, Sri; Haines, Brennan; Blosiu, Julian; Marzwell, Neville
2009-01-01
A new cell, designed to mimic the photosynthetic processes of plants by converting carbon dioxide into carbonaceous products and oxygen at high efficiency, has an improved configuration using a polymer membrane electrolyte and an alkaline medium. This increases the efficiency of the artificial photosynthetic process, achieves high conversion rates, permits the use of inexpensive catalysts, and widens the range of products generated by this type of process. The alkaline membrane electrolyte allows for the continuous generation of sodium formate without the need for any additional separation system. The electrolyte type, pH, electrocatalyst type, and cell voltage were found to have a strong effect on the efficiency of conversion of carbon dioxide to formate. Indium electrodes were found to have higher conversion efficiency than lead. Bicarbonate electrolyte offers higher conversion efficiency and higher rates than water solutions saturated with carbon dioxide. pH values between 8 and 9 lead to the maximum values of efficiency. The operating cell voltage of 2.5 V, or higher, ensures conversion of the carbon dioxide to formate, although the hydrogen evolution reaction begins to compete strongly with the formate production reaction at higher cell voltages. Formate is produced at indium and lead electrodes at a conversion efficiency of 48 mg of CO2 per kilojoule of energy input. This efficiency is about eight times that of natural photosynthesis in green plants. The electrochemical method of artificial photosynthesis is a promising approach for the conversion, separation, and sequestration of carbon dioxide for confined environments, as in space habitats, and also for carbon dioxide management in the terrestrial context. The heart of the reactor is a membrane cell fabricated from an alkaline polymer electrolyte membrane and catalyst-coated electrodes. This cell is assembled and held in compression in gold-plated hardware. 
The cathode side of the cell is supplied with carbon dioxide-saturated water or bicarbonate solution. The anode side of the cell is supplied with sodium hydroxide solution. The solutions are circulated past the electrodes in the electrochemical cell using pumps. A regulated power supply provides the electrical energy required for the reactions. Photovoltaic cells can be used to better mimic the photosynthetic reaction. The current flowing through the electrochemical cell, and the cell voltage, are monitored during experimentation. The products of the electrochemical reduction of carbon dioxide are allowed to accumulate in the cathode reservoir. Samples of the cathode solution are withdrawn for product analysis. Oxygen is generated on the anode side and is allowed to vent out of the reservoir.
Visualization of multi-INT fusion data using Java Viewer (JVIEW)
NASA Astrophysics Data System (ADS)
Blasch, Erik; Aved, Alex; Nagy, James; Scott, Stephen
2014-05-01
Visualization is important for multi-intelligence fusion and we demonstrate issues for presenting physics-derived (i.e., hard) and human-derived (i.e., soft) fusion results. Physics-derived solutions (e.g., imagery) typically involve sensor measurements that are objective, while human-derived (e.g., text) typically involve language processing. Both results can be geographically displayed for user-machine fusion. Attributes of an effective and efficient display are not well understood, so we demonstrate issues and results for filtering, correlation, and association of data for users - be they operators or analysts. Operators require near-real time solutions while analysts have the opportunities of non-real time solutions for forensic analysis. In a use case, we demonstrate examples using the JVIEW concept that has been applied to piloting, space situation awareness, and cyber analysis. Using the open-source JVIEW software, we showcase a big data solution for multi-intelligence fusion application for context-enhanced information fusion.
Space Situational Awareness using Market Based Agents
NASA Astrophysics Data System (ADS)
Sullivan, C.; Pier, E.; Gregory, S.; Bush, M.
2012-09-01
Space surveillance for the DoD is not limited to the Space Surveillance Network (SSN). Other DoD-owned assets have some existing capabilities for tasking but have no systematic way to work collaboratively with the SSN. These are run by diverse organizations including the Services, other defense and intelligence agencies, and national laboratories. Beyond these organizations, academic and commercial entities have systems that possess SSA capability. Almost all of these assets have some level of connectivity, security, and potential autonomy. Exploiting them in a mutually beneficial structure could provide a more comprehensive, efficient, and cost-effective solution for SSA. The collection of all potential assets, providers, and consumers of SSA data comprises a market which is functionally illiquid. The development of a dynamic marketplace for SSA data could give would-be providers the opportunity to sell data to SSA consumers for monetary or incentive-based compensation. A well-conceived market architecture could drive down SSA data costs through increased supply and improve efficiency through increased competition. Oceanit will investigate market and market agent architectures, protocols, standards, and incentives toward producing high-volume, low-cost SSA.
Libration Orbit Mission Design: Applications of Numerical & Dynamical Methods
NASA Technical Reports Server (NTRS)
Bauer, Frank (Technical Monitor); Folta, David; Beckman, Mark
2002-01-01
Sun-Earth libration point orbits serve as excellent locations for scientific investigations. These orbits are often selected to minimize environmental disturbances and maximize observing efficiency. Trajectory design in support of libration orbits is ever more challenging as more complex missions are envisioned in the next decade. Trajectory design software must be further enabled to incorporate better understanding of the libration orbit solution space and thus improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple libration missions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes algorithm and software development. The recently launched Microwave Anisotropy Probe (MAP) and upcoming James Webb Space Telescope (JWST) and Constellation-X missions are examples of the use of improved numerical methods for attaining constrained orbital parameters and controlling their dynamical evolution at the collinear libration points. This paper presents a history of libration point missions, a brief description of the numerical and dynamical design techniques including software used, and a sample of future GSFC mission designs.
Performance and reliability enhancement of linear coolers
NASA Astrophysics Data System (ADS)
Mai, M.; Rühlich, I.; Schreiter, A.; Zehner, S.
2010-04-01
Highest efficiency is a crucial requirement for modern tactical IR cryocooling systems. To enhance overall efficiency, AIM cryocooler designs were reassessed considering all relevant loss mechanisms and associated components. The investigation was based on state-of-the-art simulation software featuring magnet circuitry analysis as well as computational fluid dynamics (CFD) to realistically replicate thermodynamic interactions. As a result, an improved design for AIM linear coolers could be derived. This paper gives an overview of performance enhancement activities and major results. An additional key requirement for cryocoolers is reliability. Recently, AIM has introduced linear coolers with full flexure-bearing suspension on both ends of the driving mechanism, incorporating a moving-magnet piston drive. In conjunction with a pulse-tube coldfinger, these coolers are capable of meeting MTTFs (Mean Time To Failure) in excess of 50,000 hours, offering superior reliability for space applications. Ongoing development also focuses on reliability enhancement, carrying space technology into tactical solutions that combine excellent specific performance with space-like reliability. This paper will also summarize the progress of this reliability program and give further prospects.
Neutron Transport Models and Methods for HZETRN and Coupling to Low Energy Light Ion Transport
NASA Technical Reports Server (NTRS)
Blattnig, S.R.; Slaba, T.C.; Heinbockel, J.H.
2008-01-01
Exposure estimates inside space vehicles, surface habitats, and high altitude aircraft exposed to space radiation are highly influenced by secondary neutron production. The deterministic transport code HZETRN has been identified as a reliable and efficient tool for such studies, but improvements to the underlying transport models and numerical methods are still necessary. In this paper, the forward-backward (FB) and directionally coupled forward-backward (DC) neutron transport models are derived, numerical methods for the FB model are reviewed, and a computationally efficient numerical solution is presented for the DC model. Both models are compared to the Monte Carlo codes HETC-HEDS and FLUKA, and the DC model is shown to agree closely with the Monte Carlo results. Finally, it is found in the development of either model that the decoupling of low energy neutrons from the light ion (A<4) transport procedure adversely affects low energy light ion fluence spectra and exposure quantities. A first order correction is presented to resolve the problem, and it is shown to be both accurate and efficient.
Diffeomorphic demons: efficient non-parametric image registration.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2009-03-01
We propose an efficient non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. In the first part of this paper, we show that Thirion's demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. We provide strong theoretical roots to the different variants of Thirion's demons algorithm. This analysis predicts a theoretical advantage for the symmetric forces variant of the demons algorithm. We show on controlled experiments that this advantage is confirmed in practice and yields a faster convergence. In the second part of this paper, we adapt the optimization procedure underlying the demons algorithm to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of displacement fields by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the gold standard, available in controlled experiments, in terms of Jacobians.
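The key change described above — replacing the additive demons update with a composition of displacement fields, where the exponential is computed by a few compositions (scaling and squaring) — can be sketched in one dimension. Linear interpolation on a 1D grid stands in here for the dense-field warping of a real registration package; this is an illustration of the update rule, not the authors' implementation.

```python
import numpy as np

def compose(s, u, x):
    """Displacement of the composed transform T_s o T_u on grid x:
    x -> x + u(x) + s(x + u(x))."""
    return u + np.interp(x + u, x, s)

def exponentiate(u, x, n=6):
    """Scaling and squaring: exp(u) via n self-compositions of u / 2**n.
    This is the 'few compositions' that replace a single addition."""
    v = u / 2.0 ** n
    for _ in range(n):
        v = compose(v, v, x)
    return v
```

A diffeomorphic demons step would then update s to compose(s, exponentiate(u, x), x) instead of s + u.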
Berkeley lab checkpoint/restart (BLCR) for Linux clusters
Hargrove, Paul H.; Duell, Jason C.
2006-09-01
This article describes the motivation, design and implementation of Berkeley Lab Checkpoint/Restart (BLCR), a system-level checkpoint/restart implementation for Linux clusters that targets the space of typical High Performance Computing applications, including MPI. Application-level solutions, including both checkpointing and fault-tolerant algorithms, are recognized as more time- and space-efficient than system-level checkpoints, which cannot make use of any application-specific knowledge. However, system-level checkpointing allows for preemption, making it suitable for responding to fault precursors (for instance, elevated error rates from ECC memory or network CRCs, or elevated temperature from sensors). Preemption can also increase the efficiency of batch scheduling; for instance, reducing idle cycles (by allowing for shutdown without any queue draining period or reallocation of resources to eliminate idle nodes when better fitting jobs are queued), and reducing the average queued time (by limiting large jobs to running during off-peak hours, without the need to limit the length of such jobs). Each of these potential uses makes BLCR a valuable tool for efficient resource management in Linux clusters. © 2006 IOP Publishing Ltd.
CORRELATED AND ZONAL ERRORS OF GLOBAL ASTROMETRIC MISSIONS: A SPHERICAL HARMONIC SOLUTION
DOE Office of Scientific and Technical Information (OSTI.GOV)
Makarov, V. V.; Dorland, B. N.; Gaume, R. A.
We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
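The core move — solving the simulated observational equations for harmonic coefficients rather than for individual catalog objects — can be sketched with a least-squares fit of low-degree real spherical harmonics. This toy uses only degree 0 and 1 terms and a synthetic zonal error; a mission-level solution would use much higher degrees plus vector harmonics for proper motions.

```python
import numpy as np

def design_matrix(lon, lat):
    """Degree <= 1 real spherical harmonics (up to normalization)
    evaluated at catalog positions given in radians."""
    x = np.cos(lat) * np.cos(lon)
    y = np.cos(lat) * np.sin(lon)
    z = np.sin(lat)
    return np.column_stack([np.ones_like(z), x, y, z])

def fit_harmonics(lon, lat, err):
    """Solve the observational equations for the harmonic coefficients
    of the error field instead of star-by-star unknowns."""
    A = design_matrix(lon, lat)
    coef, *_ = np.linalg.lstsq(A, err, rcond=None)
    return coef
```

The handful of recovered coefficients then characterizes correlated and zonal errors over the whole sky at a fraction of the cost of a per-object analysis.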
[Development of domain specific search engines].
Takai, T; Tokunaga, M; Maeda, K; Kaminuma, T
2000-01-01
As cyber space explodes at a pace that nobody ever imagined, it becomes very important to search it efficiently and effectively. One solution to this problem is search engines, and many commercial search engines have already been put on the market. However, these engines return results so cumbersome that domain-specific experts cannot tolerate them. Using dedicated hardware and a commercial software package called OpenText, we have developed several domain-specific search engines. These engines cover our institute's Web contents, drugs, chemical safety, endocrine disruptors, and emergency response to chemical hazards. They have been available on our Web site for testing.
Correlated and Zonal Errors of Global Astrometric Missions: A Spherical Harmonic Solution
NASA Astrophysics Data System (ADS)
Makarov, V. V.; Dorland, B. N.; Gaume, R. A.; Hennessy, G. S.; Berghea, C. T.; Dudik, R. P.; Schmitt, H. R.
2012-07-01
We propose a computer-efficient and accurate method of estimating spatially correlated errors in astrometric positions, parallaxes, and proper motions obtained by space- and ground-based astrometry missions. In our method, the simulated observational equations are set up and solved for the coefficients of scalar and vector spherical harmonics representing the output errors rather than for individual objects in the output catalog. Both accidental and systematic correlated errors of astrometric parameters can be accurately estimated. The method is demonstrated on the example of the JMAPS mission, but can be used for other projects in space astrometry, such as SIM or JASMINE.
Genetic algorithms applied to the scheduling of the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Sponsler, Jeffrey L.
1989-01-01
A prototype system employing a genetic algorithm (GA) has been developed to support the scheduling of the Hubble Space Telescope. A non-standard knowledge structure is used and appropriate genetic operators have been created. Several different crossover styles (random point selection, evolving points, and smart point selection) are tested and the best GA is compared with a neural network (NN) based optimizer. The smart crossover operator produces the best results and the GA system is able to evolve complete schedules using it. The GA is not as time-efficient as the NN system and the NN solutions tend to be better.
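The GA machinery described above can be illustrated with a deliberately tiny stand-in: bitstring "schedules" scored by a toy fitness, tournament selection, the random-point crossover variant, bit-flip mutation, and elitism. The HST system uses a non-standard knowledge structure and custom operators, so everything below is illustrative only.

```python
import random

def ga_schedule(length=16, pop_size=30, generations=120, seed=0):
    """Toy GA: OneMax fitness stands in for schedule quality; the champion
    survives each generation unchanged (elitism), so best fitness is
    monotone non-decreasing."""
    rng = random.Random(seed)
    fitness = lambda g: sum(g)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        nxt = [best[:]]                                  # elitism
        while len(nxt) < pop_size:
            p1 = max(rng.sample(pop, 3), key=fitness)    # tournament selection
            p2 = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, length)               # random crossover point
            child = p1[:cut] + p2[cut:]
            for i in range(length):                      # bit-flip mutation
                if rng.random() < 1.0 / length:
                    child[i] ^= 1
            nxt.append(child)
        pop = nxt
        best = max(pop, key=fitness)
    return best, fitness(best)
```

The "smart" crossover of the paper would replace the random `cut` choice with a problem-aware selection of the crossover point.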
Iterative procedures for space shuttle main engine performance models
NASA Technical Reports Server (NTRS)
Santi, L. Michael
1989-01-01
Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). The computational efficiency and reliability of these procedures are examined. A modified trust-region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off-nominal engine performance predictions. A heuristic form of Broyden's Rank One method is also tested, and favorable results based on this algorithm are presented.
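A trust-region modification of Newton-Raphson limits how far each iterate may move, which helps when the starting guess is far from the balance point. The following is a hedged sketch of the idea only: the step-length cap is a crude stand-in for a full trust-region update, and the two-equation test system is hypothetical rather than an SSME flow balance.

```python
import numpy as np

def F(v):
    # Hypothetical nonlinear system standing in for engine balances.
    x, y = v
    return np.array([x**2 + y**2 - 4.0, x - y])

def J(v):
    # Analytic Jacobian of F.
    x, y = v
    return np.array([[2.0 * x, 2.0 * y],
                     [1.0, -1.0]])

def newton_capped(v0, radius=0.5, tol=1e-10, max_iter=50):
    """Multivariate Newton-Raphson with the step clipped to a trust
    radius -- a simplified stand-in for the trust-region form."""
    v = np.asarray(v0, dtype=float)
    for _ in range(max_iter):
        step = np.linalg.solve(J(v), -F(v))
        n = np.linalg.norm(step)
        if n > radius:
            step *= radius / n          # stay inside the trust region
        v = v + step
        if np.linalg.norm(F(v)) < tol:
            break
    return v

root = newton_capped([3.0, 1.0])
```

From the (deliberately poor) starting point, the capped steps walk toward the root and full quadratic convergence takes over near it.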
Exploration Space Suit Architecture and Destination Environmental-Based Technology Development
NASA Technical Reports Server (NTRS)
Hill, Terry R.; Korona, F. Adam; McFarland, Shane
2012-01-01
This paper continues forward where EVA Space Suit Architecture: Low Earth Orbit Vs. Moon Vs. Mars [1] left off in the development of a space suit architecture that is modular in design and could be reconfigured prior to launch or during any given mission depending on the tasks or destination. This paper will address the space suit system architecture and technologies required based upon human exploration extravehicular activity (EVA) destinations, and describe how they should evolve to meet the future exploration EVA needs of the US human space flight program [1, 2, 3]. In looking forward to future US space exploration and toward a space suit architecture with maximum reuse of technology and functionality across a range of mission profiles and destinations, a series of exercises and analyses have provided a strong indication that the Constellation Program (CxP) space suit architecture is postured to provide a viable solution for future exploration missions [4]. The destination environmental analysis presented in this paper demonstrates that the modular architecture approach could provide the lowest mass and mission cost for the protection of the crew given any human mission outside of low-Earth orbit (LEO). Additionally, some of the high-level trades presented here provide a review of the environmental and non-environmental design drivers that will become increasingly important the farther away from Earth humans venture. This paper demonstrates a logical clustering of destination design environments that allows a focused approach to technology prioritization, development, and design that will maximize the return on investment, independent of any particular program, and provide architecture and design solutions for space suit systems in time or ahead of need dates for any particular crewed flight program in the future.
The approach to space suit design and interface definition discussion will show how the architecture is very adaptable to programmatic and funding changes with minimal redesign effort such that the modular architecture can be quickly and efficiently honed into a specific mission point solution if required. Additionally, the modular system will allow for specific technology incorporation and upgrade as required with minimal redesign of the system.
Exploration Space Suit Architecture: Destination Environmental-Based Technology Development
NASA Technical Reports Server (NTRS)
Hill, Terry R.
2010-01-01
This paper picks up where EVA Space Suit Architecture: Low Earth Orbit Vs. Moon Vs. Mars (Hill, Johnson, IEEEAC paper #1209) left off in the development of a space suit architecture that is modular in design and interfaces and could be reconfigured before or during any given mission depending on the tasks or destination. This paper will walk through the continued development of a space suit system architecture and how it should evolve to meet the future exploration EVA needs of the United States space program. In looking forward to future US space exploration, and in determining how the work performed to date in the CxP would map to a future space suit architecture with maximum re-use of technology and functionality, a series of thought exercises and analyses have provided a strong indication that the CxP space suit architecture is well postured to provide a viable solution for future exploration missions. Through the destination environmental analysis that is presented in this paper, the modular architecture approach provides the lowest mass and lowest mission cost for the protection of the crew for any human mission outside of low Earth orbit. Some of the studies presented here provide a review and validation of the non-environmental design drivers that will become ever more important the farther away from Earth humans venture and the longer they are away. Additionally, the analysis demonstrates a logical clustering of design environments that allows a very focused approach to technology prioritization, development, and design that will maximize the return on investment independent of any particular program and provide architecture and design solutions for space suit systems in time or ahead of being required for any particular manned flight program in the future.
The discussion of the new approach to space suit design and interface definition will show how the architecture is very adaptable to programmatic and funding changes with minimal redesign effort, such that the modular architecture can be quickly and efficiently honed into a specific mission point solution if required.
Rapid space trajectory generation using a Fourier series shape-based approach
NASA Astrophysics Data System (ADS)
Taheri, Ehsan
With the insatiable curiosity of human beings to explore the universe and our solar system, it is essential to benefit from larger propulsion capabilities to execute efficient transfers and carry more scientific equipment. In the field of space trajectory optimization, fundamental advances in using low-thrust propulsion and exploiting multi-body dynamics have played a pivotal role in designing efficient space mission trajectories. The former provides larger cumulative momentum change in comparison with conventional chemical propulsion, whereas the latter results in almost ballistic trajectories requiring a negligible amount of propellant. However, the problem of space trajectory design translates into an optimal control problem which is, in general, time-consuming and very difficult to solve. Therefore, the goal of the thesis is to address this problem by developing a methodology that simplifies and facilitates the process of finding initial low-thrust trajectories in both two-body and multi-body environments. This initial solution not only provides mission designers with a better understanding of the problem and solution but also serves as a good initial guess for high-fidelity optimal control solvers and increases their convergence rate. Almost all high-fidelity solvers benefit from an initial guess that already satisfies the equations of motion and some of the most important constraints. Despite the nonlinear nature of the problem, we seek a robust technique for a wide range of typical low-thrust transfers with reduced computational intensity. Another important aspect of the developed methodology is the representation of low-thrust trajectories by Fourier series, with which the number of design variables is reduced significantly. Emphasis is placed on simplifying the equations of motion to the extent possible and avoiding approximation of the controls. These facts contribute to speeding up the solution-finding procedure.
Several example applications of two- and three-dimensional two-body low-thrust transfers are considered. In addition, in multi-body dynamics, and in particular the restricted three-body problem, several Earth-to-Moon low-thrust transfers are investigated.
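The shape-based idea can be sketched in miniature: represent the transfer radius as a truncated Fourier series in the polar angle and solve two of the coefficients so the shape meets the boundary radii, leaving the remaining coefficients as the (few) design variables. This is a hedged toy version under assumed boundary values, not the thesis's full formulation, which also recovers the thrust from the dynamics.

```python
import numpy as np

def fourier_radius(theta, a0, a1, b):
    """Truncated series r(theta) = a0 + a1*cos(theta) + sum_n b_n*sin(n*theta)."""
    theta = np.atleast_1d(np.asarray(theta, dtype=float))
    n = np.arange(1, len(b) + 1)
    return a0 + a1 * np.cos(theta) + np.sin(np.outer(theta, n)) @ b

def solve_boundary(r0, rf, theta_f, b):
    """Pick a0, a1 so the shape meets r(0)=r0 and r(theta_f)=rf;
    the sine coefficients b remain free shape/design variables."""
    s = np.sin(np.arange(1, len(b) + 1) * theta_f) @ b
    A = np.array([[1.0, 1.0],
                  [1.0, np.cos(theta_f)]])
    a0, a1 = np.linalg.solve(A, np.array([r0, rf - s]))
    return a0, a1

# Hypothetical transfer: unit initial radius, 1.5 final, 3/4 revolution.
b = np.array([0.05, -0.02])            # free shape coefficients (assumed)
r0, rf, theta_f = 1.0, 1.5, 3.0 * np.pi / 2.0
a0, a1 = solve_boundary(r0, rf, theta_f, b)
```

Because only the handful of free coefficients is optimized, the search space is tiny compared with discretizing the whole control history.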
A Distributed Wireless Camera System for the Management of Parking Spaces
Melničuk, Petr
2017-01-01
The importance of detection of parking space availability is still growing, particularly in major cities. This paper deals with the design of a distributed wireless camera system for the management of parking spaces, which can determine occupancy of the parking space based on the information from multiple cameras. The proposed system uses small camera modules based on Raspberry Pi Zero and computationally efficient algorithm for the occupancy detection based on the histogram of oriented gradients (HOG) feature descriptor and support vector machine (SVM) classifier. We have included information about the orientation of the vehicle as a supporting feature, which has enabled us to achieve better accuracy. The described solution can deliver occupancy information at the rate of 10 parking spaces per second with more than 90% accuracy in a wide range of conditions. Reliability of the implemented algorithm is evaluated with three different test sets which altogether contain over 700,000 samples of parking spaces. PMID:29283371
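The occupancy classifier above rests on a gradient-orientation descriptor. As a simplified, hedged sketch of that ingredient (a single orientation histogram in plain numpy, without the cell/block normalization of full HOG and without the SVM stage or camera pipeline), one patch descriptor might be computed as:

```python
import numpy as np

def hog_like_descriptor(img, bins=9):
    """Gradient-orientation histogram for one image patch: a simplified
    stand-in for the full HOG descriptor used in the published system."""
    gy, gx = np.gradient(img.astype(float))        # image gradients
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation
    hist = np.zeros(bins)
    idx = np.minimum((ang / np.pi * bins).astype(int), bins - 1)
    np.add.at(hist, idx.ravel(), mag.ravel())      # magnitude-weighted vote
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist

# Synthetic patch: a horizontal intensity ramp, so all gradient energy
# falls in the 0-radian orientation bin.
patch = np.tile(np.arange(16.0), (16, 1))
desc = hog_like_descriptor(patch)
```

In the real system such descriptors (plus the vehicle-orientation feature) would be fed to an SVM classifier trained on labeled parking-space samples.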
NASA Technical Reports Server (NTRS)
Dumbacher, Daniel L.; Singer, Christopher E.; Onken, Jay F.
2008-01-01
The United States (U.S.) plans to return to the Moon by 2020, with the development of a new human-rated space transportation system to replace the Space Shuttle, which is due for retirement in 2010 after it completes its missions of building the International Space Station and servicing the Hubble Space Telescope. Powering the future of space-based scientific exploration will be the Ares I Crew Launch Vehicle, which will transport the Orion Crew Exploration Vehicle to orbit, where it will rendezvous with the Lunar Lander, which will be delivered by the Ares V Cargo Launch Vehicle. This new transportation infrastructure, developed by the National Aeronautics and Space Administration (NASA), will allow astronauts to leave low-Earth orbit for extended lunar exploration and preparation for the first footprint on Mars. All space-based operations begin and are controlled from Earth. NASA's philosophy is to deliver safe, reliable, and cost-effective solutions to sustain a multi-billion-dollar program across several decades. Leveraging 50 years of lessons learned, NASA is partnering with private industry, while building on proven hardware experience. This paper will discuss how the Engineering Directorate at NASA's Marshall Space Flight Center is working with the Ares Projects Office to streamline ground operations concepts and reduce costs. Currently, NASA's budget is around $17 billion, which is less than 1 percent of the U.S. Federal budget. Of this amount, NASA invests approximately $4.5 billion each year in Space Shuttle operations, regardless of whether the spacecraft is flying or not. The affordability requirement is for the Ares I to reduce this expense by 50 percent, in order to allow NASA to invest more in space-based scientific operations.
Focusing on this metric, the Engineering Directorate provides several solutions-oriented approaches, including Lean/Six Sigma practices and streamlined hardware testing and integration, such as assembling major hardware elements before shipping to the Kennedy Space Center for launch operations. This paper provides top-level details for several cost-saving initiatives, including both process and product improvements that will result in space transportation systems that are designed with operations efficiencies in mind. The Engineering Directorate provides both the intellectual capital embodied in an experienced workforce and unique facilities in which to validate the information technology tools that allow a nationwide team to collaboratively connect across the miles that separate them and the engineering disciplines that integrate various piece parts into a whole system. As NASA transforms ground-based operations, it also is transitioning its workforce from an era of intense hands-on labor to a new one of mechanized conveniences and robust hardware with simpler interfaces. Ensuring that space exploration is on sound footing requires that operations efficiencies be designed into the transportation system and implemented in the development stage. Applying experience gained through decades of ground and space operations, while using value-added processes and modern business and engineering tools, is the philosophy upon which a new era of exploration will be built to solve some of the most pressing exploration challenges today -- namely, safety, reliability, and affordability.
NASA Astrophysics Data System (ADS)
Wang, B.; Gan, Z. H.
2013-08-01
The importance of liquid helium temperature cooling technology in the aerospace field is discussed, and the results indicate that improving the efficiency of liquid helium cooling technologies, especially liquid helium high frequency pulse tube cryocoolers, is the principal difficulty to be solved. The state of the art and recent developments of liquid helium high frequency pulse tube cryocoolers are summarized. The main scientific challenges for high frequency pulse tube cryocoolers to efficiently reach liquid helium temperatures are outlined, and the research progress addressing those challenges is reviewed. Additionally, some possible solutions to the challenges are pointed out and discussed.
Computational complexity of ecological and evolutionary spatial dynamics
Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.
2015-01-01
There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569
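The paper's basic question — the probability that a new mutant takes over a resident population — has a classical closed form in the well-mixed (non-spatial) Moran process, which serves as the baseline the spatial complexity results depart from. A small sketch of that baseline formula (the spatial structures studied in the paper change these values, which is the point of the complexity analysis):

```python
def fixation_probability(r, N):
    """Fixation probability of a single mutant with relative fitness r
    in a well-mixed resident population of size N (Moran process):
        rho = (1 - 1/r) / (1 - 1/r**N),  with rho = 1/N when r = 1.
    """
    if r == 1.0:
        return 1.0 / N          # neutral mutant: fixation by drift alone
    return (1.0 - 1.0 / r) / (1.0 - 1.0 / r ** N)
```

For example, a neutral mutant in a population of 100 fixes with probability 1/100, while a mutant twice as fit fixes with probability close to 1/2 regardless of population size.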
An effective system to produce smoke solutions from dried plant tissue for seed germination studies
Coons, Janice; Coutant, Nancy; Lawrence, Barbara; Finn, Daniel; Finn, Stephanie
2014-01-01
• Premise of the study: An efficient and inexpensive system was developed to produce smoke solutions from plant material to research the influence of water-soluble compounds from smoke on seed germination. • Methods and Results: Smoke solutions (300 mL per batch) were produced by burning small quantities (100–200 g) of dried plant material from a range of species in a bee smoker attached by a heater hose to a side-arm flask. The flask was attached to a vacuum water aspirator, to pull the smoke through the water. The entire apparatus was operated in a laboratory fume hood. • Conclusions: Compared with other smoke solution preparation systems, the system described is easy to assemble and operate, inexpensive to build, and effective at producing smoke solutions from desired species in a small indoor space. Quantitative measurements can be made when using this system, allowing for replication of the process. PMID:25202613
Chemical Continuous Time Random Walks
NASA Astrophysics Data System (ADS)
Aquino, T.; Dentz, M.
2017-12-01
Traditional methods for modeling solute transport through heterogeneous media employ Eulerian schemes to solve for solute concentration. More recently, Lagrangian methods have removed the need for spatial discretization through the use of Monte Carlo implementations of Langevin equations for solute particle motions. While there have been recent advances in modeling chemically reactive transport with recourse to Lagrangian methods, these remain less developed than their Eulerian counterparts, and many open problems such as efficient convergence and reconstruction of the concentration field remain. We explore a different avenue and consider the question: In heterogeneous chemically reactive systems, is it possible to describe the evolution of macroscopic reactant concentrations without explicitly resolving the spatial transport? Traditional Kinetic Monte Carlo methods, such as the Gillespie algorithm, model chemical reactions as random walks in particle number space, without the introduction of spatial coordinates. The inter-reaction times are exponentially distributed under the assumption that the system is well mixed. In real systems, transport limitations lead to incomplete mixing and decreased reaction efficiency. We introduce an arbitrary inter-reaction time distribution, which may account for the impact of incomplete mixing. This process defines an inhomogeneous continuous time random walk in particle number space, from which we derive a generalized chemical Master equation and formulate a generalized Gillespie algorithm. We then determine the modified chemical rate laws for different inter-reaction time distributions. We trace Michaelis-Menten-type kinetics back to finite-mean delay times, and predict time-nonlocal macroscopic reaction kinetics as a consequence of broadly distributed delays. Non-Markovian kinetics exhibit weak ergodicity breaking and show key features of reactions under local non-equilibrium.
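The generalization described above — replacing the exponential inter-reaction times of the classical Gillespie algorithm with an arbitrary waiting-time distribution — can be sketched for a single first-order reaction. This is a hedged toy version: the decay reaction, rate, and gamma delay distribution are illustrative assumptions, not the paper's model system.

```python
import random

random.seed(42)

def generalized_gillespie(n0, rate, waiting_time, t_max):
    """Simulate first-order decay A -> B, drawing inter-reaction times
    from an arbitrary distribution instead of the exponential used by
    the classical Gillespie algorithm."""
    t, n = 0.0, n0
    history = [(0.0, n0)]
    while n > 0:
        dt = waiting_time(rate * n)     # delay depends on total propensity
        t += dt
        if t > t_max:
            break
        n -= 1                          # one reaction fires
        history.append((t, n))
    return history

# Classical (Markovian) choice, shown for comparison:
exp_delay = lambda a: random.expovariate(a)
# Non-Markovian choice with the same mean 1/a, as a stand-in for
# incomplete-mixing delays:
gamma_delay = lambda a: random.gammavariate(2.0, 1.0 / (2.0 * a))

hist = generalized_gillespie(50, 0.5, gamma_delay, t_max=1e6)
```

Swapping `gamma_delay` for a broadly (e.g., power-law) distributed delay is what produces the time-nonlocal macroscopic kinetics discussed above.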
Applying extrusive orthodontic force without compromising the obturated canal space.
Keinan, David; Szwec, Jerard; Matas, Avital; Moshonov, Joshua; Yitschaky, Oded
2013-08-01
Complicated tooth fractures can be the unfortunate result of orofacial trauma and can offer a therapeutic challenge for the dentist. A conservative solution for gaining supragingival sound tooth structure often includes orthodontic forced eruption. Usually, this procedure is carried out by applying extrusive force after placing a provisional acrylic Richmond crown on the tooth. However, this long-lasting dental treatment may jeopardize the coronal seal of the root canal space, leading to microleakage and endodontic failure. Orthodontic forced eruption demands application of force to an attachment connected to the remaining short clinical crown. In this article, the authors describe a case in which they used a new technique for orthodontic forced eruption of a traumatized tooth, using an extracanal attachment to apply extrusion force, and discuss its possible advantages and limitations. An extracanal attachment approach for orthodontic forced eruption without compromising the obturated canal space can be a solution for posttraumatic crown fracture. Practical Implications. The described procedure for forced eruption by using an extracanal pin attachment is efficient and convenient and does not require the clinician to apply force directly to the provisional crown. Therefore, during the application of force, there is less risk of loosening the provisional crown, and the canal space is kept intact with either the final restoration or dressing material.
Phase partitioning in space and on earth
NASA Technical Reports Server (NTRS)
Van Alstine, James M.; Karr, Laurel J.; Snyder, Robert S.; Matsos, Helen C.; Curreri, Peter A.; Harris, J. Milton; Bamberger, Stephan B.; Boyce, John; Brooks, Donald E.
1987-01-01
The influence of gravity on the efficiency and quality of the impressive separations achievable by bioparticle partitioning is investigated by demixing polymer phase systems in microgravity. The study involves the neutral polymers dextran and polyethylene glycol, which form a two-phase system in aqueous solution at low concentrations. It is found that demixing in low-gravity occurs primarily by coalescence, whereas on earth the demixing occurs because of density differences between the phases.
1994-05-25
… small highly efficient power systems to provide electricity for space applications. These converters are solar heated for near-Earth-orbit applications … processing in NASA's Wake Shield Facility. AMPS plans to complete product development in each of these specific technology areas utilizing SBIR … Corrosion: Crevice corrosion is a form of localized corrosion that occurs within crevices or shielded surfaces where stagnant solution is present
Generalization of mixed multiscale finite element methods with applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, C S
Many science and engineering problems exhibit scale disparity and high contrast. The small-scale features cannot be omitted in the physical models because they can affect the macroscopic behavior of the problems. However, resolving all the scales in these problems can be prohibitively expensive. As a consequence, some types of model reduction techniques are required to design efficient solution algorithms. For practical purposes, we are interested in mixed finite element problems as they produce solutions with certain conservative properties. Existing multiscale methods for such problems include the mixed multiscale finite element methods. We show that for complicated problems, the mixed multiscale finite element methods may not be able to produce reliable approximations. This motivates the need for enrichment of coarse spaces. Two enrichment approaches are proposed: one is based on generalized multiscale finite element methods (GMsFEM), while the other is based on spectral element-based algebraic multigrid (rAMGe). The former, which is called mixed GMsFEM, is developed for both Darcy's flow and linear elasticity. Application of the algorithm in two-phase flow simulations is demonstrated. For linear elasticity, the algorithm is subtly modified due to the symmetry requirement of the stress tensor. The latter enrichment approach is based on rAMGe. The algorithm differs from GMsFEM in that both the velocity and pressure spaces are coarsened. Due to the multigrid nature of the algorithm, recursive application is available, which results in an efficient multilevel construction of the coarse spaces. Stability and convergence analyses and exhaustive numerical experiments are carried out to validate the proposed enrichment approaches.
Some solutions of the general three body problem in form space
NASA Astrophysics Data System (ADS)
Titov, Vladimir
2018-05-01
Some solutions of the three-body problem with equal masses are first considered in form space. The solutions in the usual Euclidean space may be restored from these form space solutions. If the constant energy h < 0, the trajectories are located inside Hill's surface. Without loss of generality, due to scale symmetry, we can set h = -1. Such a surface has a simple form in form space. Solutions of the isosceles and rectilinear three-body problems lie within Hill's curve; periodic solutions of the free-fall three-body problem start at one point of this curve and finish at another. The solutions are illustrated by a number of figures.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dehghani, M.H.; Department of Physics, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, N2L 3G1; Perimeter Institute for Theoretical Physics, 35 Caroline Street North, Waterloo, Ontario
We investigate the existence of Taub-NUT (Newman-Unti-Tamburino) and Taub-bolt solutions in Gauss-Bonnet gravity and obtain the general form of these solutions in d dimensions. We find that for all nonextremal NUT solutions of Einstein gravity having no curvature singularity at r=N, there exist NUT solutions in Gauss-Bonnet gravity that contain these solutions in the limit that the Gauss-Bonnet parameter α goes to zero. Furthermore, there are no NUT solutions in Gauss-Bonnet gravity that yield nonextremal NUT solutions to Einstein gravity having a curvature singularity at r=N in the limit α→0. Indeed, we have nonextreme NUT solutions in 2+2k dimensions with nontrivial fibration only when the 2k-dimensional base space is chosen to be CP^{2k}. We also find that Gauss-Bonnet gravity has extremal NUT solutions whenever the base space is a product of 2-torii with at most a two-dimensional factor space of positive curvature. Indeed, when the base space has at most one positively curved two-dimensional space as one of its factor spaces, Gauss-Bonnet gravity admits extreme NUT solutions, even though a curvature singularity exists at r=N. We also find that one can have bolt solutions in Gauss-Bonnet gravity with any base space with factor spaces of zero or positive constant curvature. The only case for which one does not have bolt solutions is in the absence of a cosmological term with a zero curvature base space.
Human motion planning based on recursive dynamics and optimal control techniques
NASA Technical Reports Server (NTRS)
Lo, Janzen; Huang, Gang; Metaxas, Dimitris
2002-01-01
This paper presents an efficient optimal control and recursive dynamics-based computer animation system for simulating and controlling the motion of articulated figures. A quasi-Newton nonlinear programming technique (super-linear convergence) is implemented to solve minimum torque-based human motion-planning problems. The explicit analytical gradients needed in the dynamics are derived using a matrix exponential formulation and Lie algebra. Cubic spline functions are used to make the search space for an optimal solution finite. Based on our formulations, our method is well conditioned and robust, in addition to being computationally efficient. To better illustrate the efficiency of our method, we present results of natural looking and physically correct human motions for a variety of human motion tasks involving open and closed loop kinematic chains.
Considerations of persistence and security in CHOICES, an object-oriented operating system
NASA Technical Reports Server (NTRS)
Campbell, Roy H.; Madany, Peter W.
1990-01-01
The current design of the CHOICES persistent object implementation is summarized, and research in progress is outlined. CHOICES is implemented as an object-oriented system, and persistent objects appear to simplify and unify many functions of the system. It is demonstrated that persistent data can be accessed through an object-oriented file system model as efficiently as by an existing optimized commercial file system. The object-oriented file system can be specialized to provide an object store for persistent objects. The problems that arise in building an efficient persistent object scheme in a 32-bit virtual address space that only uses paging are described. Despite its limitations, the solution presented allows quite large numbers of objects to be active simultaneously, and permits sharing and efficient method calls.
An Implicit Characteristic Based Method for Electromagnetics
NASA Technical Reports Server (NTRS)
Beggs, John H.; Briley, W. Roger
2001-01-01
An implicit characteristic-based approach for numerical solution of Maxwell's time-dependent curl equations in flux conservative form is introduced. This method combines a characteristic based finite difference spatial approximation with an implicit lower-upper approximate factorization (LU/AF) time integration scheme. This approach is advantageous for three-dimensional applications because the characteristic differencing enables a two-factor approximate factorization that retains its unconditional stability in three space dimensions, and it does not require solution of tridiagonal systems. Results are given both for a Fourier analysis of stability, damping and dispersion properties, and for one-dimensional model problems involving propagation and scattering for free space and dielectric materials using both uniform and nonuniform grids. The explicit Finite Difference Time Domain Method (FDTD) algorithm is used as a convenient reference algorithm for comparison. The one-dimensional results indicate that for low frequency problems on a highly resolved uniform or nonuniform grid, this LU/AF algorithm can produce accurate solutions at Courant numbers significantly greater than one, with a corresponding improvement in efficiency for simulating a given period of time. This approach appears promising for development of dispersion optimized LU/AF schemes for three dimensional applications.
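The explicit FDTD scheme used above as the reference algorithm is compact enough to sketch in one dimension. The following is a hedged illustration of a standard 1D Yee leapfrog update in normalized free-space units with a soft Gaussian source (grid size, Courant number, and source parameters are assumptions for the demo), not the paper's implicit LU/AF method:

```python
import numpy as np

def fdtd_1d(nx=200, steps=300, courant=0.5):
    """Leapfrog Yee update for Maxwell's curl equations in 1D free
    space (normalized units, PEC ends); the explicit reference scheme
    against which the implicit LU/AF method is compared."""
    ez = np.zeros(nx)           # E-field on integer grid points
    hy = np.zeros(nx - 1)       # H-field on half-offset points
    for n in range(steps):
        hy += courant * np.diff(ez)           # H half-step update
        ez[1:-1] += courant * np.diff(hy)     # E update, PEC boundaries
        # Soft Gaussian source injected at the grid center.
        ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)
    return ez, hy

ez, hy = fdtd_1d()
```

Because the Courant number here is below 1, the explicit scheme is stable; the point of the LU/AF approach is to remain stable and accurate at Courant numbers significantly greater than 1.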
Verification of continuum drift kinetic equation solvers in NIMROD
DOE Office of Scientific and Technical Information (OSTI.GOV)
Held, E. D.; Ji, J.-Y.; Kruger, S. E.
Verification of continuum solutions to the electron and ion drift kinetic equations (DKEs) in NIMROD [C. R. Sovinec et al., J. Comp. Phys. 195, 355 (2004)] is demonstrated through comparison with several neoclassical transport codes, most notably NEO [E. A. Belli and J. Candy, Plasma Phys. Controlled Fusion 54, 015015 (2012)]. The DKE solutions use NIMROD's spatial representation: 2D finite elements in the poloidal plane and a 1D Fourier expansion in toroidal angle. For 2D velocity space, a novel 1D expansion in finite elements is applied for the pitch-angle dependence and a collocation grid is used for the normalized speed coordinate. The full, linearized Coulomb collision operator is kept and shown to be important for obtaining quantitative results. Bootstrap currents, parallel ion flows, and radial particle and heat fluxes show quantitative agreement between NIMROD and NEO for a variety of tokamak equilibria. In addition, velocity space distribution function contours for ions and electrons show nearly identical detailed structure and agree quantitatively. A Θ-centered, implicit time discretization and a block-preconditioned, iterative linear algebra solver provide efficient electron and ion DKE solutions that ultimately will be used to obtain closures for NIMROD's evolving fluid model.
Monte Carlo simulations for the space radiation superconducting shield project (SR2S).
Vuolo, M; Giraudo, M; Musenich, R; Calvelli, V; Ambroglini, F; Burger, W J; Battiston, R
2016-02-01
Astronauts on deep-space long-duration missions will be exposed for a long time to galactic cosmic rays (GCR) and solar particle events (SPE). The exposure to space radiation could lead to both acute and late effects in the crew members, and well-defined countermeasures do not yet exist. The simplest solution, given by optimized passive shielding, is not able to reduce the dose deposited by GCRs below the current dose limits; therefore other solutions, such as active shielding employing superconducting magnetic fields, are under study. In the framework of the EU FP7 SR2S Project (Space Radiation Superconducting Shield), a toroidal magnetic system based on MgB2 superconductors has been analyzed through detailed Monte Carlo simulations using the Geant4 interface GRAS. Spacecraft and magnets were modeled together with a simplified mechanical structure supporting the coils. Radiation transport through magnetic fields and materials was simulated for a deep-space mission scenario, considering for the first time the effect of secondary particles produced in the passage of space radiation through the active shielding and spacecraft structures. When the structures supporting the active shielding systems and the habitat are modeled, the radiation protection efficiency of the magnetic field decreases severely compared to that reported in previous studies, in which only the magnetic field was modeled around the crew. This is due to the large production of secondary radiation taking place in the material surrounding the habitat.
Xia, J.; Xu, Y.; Miller, R.D.; Chen, C.
2006-01-01
A Gibson half-space model (a non-layered Earth model) has the shear modulus varying linearly with depth in an inhomogeneous elastic half-space. In a half-space of sedimentary granular soil under a geostatic state of initial stress, the density and the Poisson's ratio do not vary considerably with depth. In such an Earth body, the dynamic shear modulus is the parameter that mainly affects the dispersion of propagating waves. We have estimated shear-wave velocities in the compressible Gibson half-space by inverting Rayleigh-wave phase velocities. An analytical dispersion law of Rayleigh-type waves in a compressible Gibson half-space is given in an algebraic form, which makes our inversion process extremely simple and fast. The convergence of the weighted damping solution is guaranteed through selection of the damping factor using the Levenberg-Marquardt method. Calculation efficiency is achieved by reconstructing a weighted damping solution using singular value decomposition techniques. The main advantage of this algorithm is that only three parameters define the compressible Gibson half-space model. Theoretically, to determine the model by the inversion, only three Rayleigh-wave phase velocities at different frequencies are required. This is useful in practice where Rayleigh-wave energy is developed only in a limited frequency range or at certain frequencies, as with data acquired at man-made structures such as dams and levees. Two real examples are presented and verified by borehole S-wave velocity measurements. The results of these real examples are also compared with the results of the layered-Earth model. © Springer 2006.
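The damped least-squares machinery described above (Levenberg-Marquardt damping with the solution reconstructed through the SVD) can be sketched in a few lines. The Jacobian and damping factor below are synthetic stand-ins, not the paper's dispersion-law derivatives.

```python
import numpy as np

def damped_lsq_step(J, r, mu):
    """One Levenberg-Marquardt update dm solving (J^T J + mu*I) dm = -J^T r,
    reconstructed from the SVD of the Jacobian J."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    # Damped filter factors s_i / (s_i^2 + mu) replace the plain inverse 1/s_i,
    # suppressing poorly constrained singular directions.
    f = s / (s ** 2 + mu)
    return -Vt.T @ (f * (U.T @ r))

# Tiny check against the normal-equation solution (synthetic data)
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 3))
r = rng.standard_normal(6)
mu = 0.1
dm = damped_lsq_step(J, r, mu)
dm_ref = np.linalg.solve(J.T @ J + mu * np.eye(3), -J.T @ r)
```

The SVD form makes the role of the damping factor explicit: as mu grows, small singular values contribute less to the update, which is what guarantees the convergence of the weighted damping solution.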
Essentially nonoscillatory postprocessing filtering methods
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1992-01-01
High order accurate centered flux approximations used in the computation of numerical solutions to nonlinear partial differential equations produce large oscillations in regions of sharp transitions. Here, we present a new class of filtering methods denoted by Essentially Nonoscillatory Least Squares (ENOLS), which constructs an upgraded filtered solution that is close to the physically correct weak solution of the original evolution equation. Our method relies on the evaluation of a least squares polynomial approximation to oscillatory data using a set of points which is determined via the ENO network. Numerical results are given in one and two space dimensions for both scalar and systems of hyperbolic conservation laws. Computational running time, efficiency, and robustness of the method are illustrated in various examples such as Riemann initial data for both Burgers' and Euler's equations of gas dynamics. In all standard cases, the filtered solution appears to converge numerically to the correct solution of the original problem. Some interesting results are also obtained using our filters with nonstandard central difference schemes, which exactly preserve entropy but have recently been shown, in general, not to be weakly convergent to a solution of the conservation law.
Tamosiunaite, Minija; Asfour, Tamim; Wörgötter, Florentin
2009-03-01
Reinforcement learning methods can be used in robotics applications, especially for specific target-oriented problems such as the reward-based recalibration of goal-directed actions. Even for such problems, relatively large and continuous state-action spaces must be handled efficiently. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward strategies for solving such problems. For the testing of our method, we use a four degree-of-freedom reaching problem in 3D space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields), and the state-action space contains about 10,000 of these. Different types of reward structures are compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of a rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult.
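A minimal sketch of function approximation with overlapping Gaussian receptive fields, as described above. The kernel count, width, and learning rule (a normalised-LMS step toward a reward-derived target) are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

# Illustrative: a value over a continuous 4-D state-action space is a
# weighted sum of overlapping Gaussian receptive fields; the weights are
# updated toward a reward-derived target at a visited point.
rng = np.random.default_rng(1)
centres = rng.uniform(0.0, 1.0, size=(50, 4))   # 50 kernel centres (assumed)
sigma = 0.2                                     # kernel width (assumed)

def features(x):
    """Activations of all receptive fields at point x."""
    d2 = ((centres - x) ** 2).sum(axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

w = np.zeros(50)
x = np.full(4, 0.5)          # a visited state-action point
target = 1.0                 # e.g. reward-on-touching
for _ in range(50):
    phi = features(x)
    # normalised-LMS step: halve the prediction error each iteration
    w = w + 0.5 * (target - w @ phi) * phi / (phi @ phi)

prediction = w @ features(x)  # approaches the target after training
```

Because the kernels overlap, each update also adjusts the value of nearby state-action points, which is what makes learning feasible with only ~10,000 fields over a continuous space.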
Bratsas, Charalampos; Koutkias, Vassilis; Kaimakamis, Evangelos; Bamidis, Panagiotis; Maglaveras, Nicos
2007-01-01
Medical Computational Problem (MCP) solving is related to medical problems and their computerized algorithmic solutions. In this paper, an extension of an ontology-based model to fuzzy logic is presented, as a means to enhance the information retrieval (IR) procedure in semantic management of MCPs. We present herein the methodology followed for the fuzzy expansion of the ontology model, the fuzzy query expansion procedure, as well as an appropriate ontology-based Vector Space Model (VSM) that was constructed for efficient mapping of user-defined MCP search criteria and MCP acquired knowledge. The relevant fuzzy thesaurus is constructed by calculating the simultaneous occurrences of terms and the term-to-term similarities derived from the ontology that utilizes UMLS (Unified Medical Language System) concepts by using Concept Unique Identifiers (CUI), synonyms, semantic types, and broader-narrower relationships for fuzzy query expansion. The current approach constitutes a sophisticated advance for effective, semantics-based MCP-related IR.
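The fuzzy query-expansion idea can be illustrated with a toy vector space model. The terms, similarity values, and expansion rule below are invented for illustration; the actual system derives term-to-term similarities from UMLS concepts.

```python
import numpy as np

# Toy vocabulary and a fuzzy thesaurus of term-to-term similarities
# (symmetric, 1.0 on the diagonal). All values are made up.
terms = ["fever", "pyrexia", "cough"]
sim = np.array([[1.0, 0.9, 0.1],
                [0.9, 1.0, 0.1],
                [0.1, 0.1, 1.0]])

def expand(query_vec):
    """Fuzzy expansion: each vocabulary term receives the largest
    similarity-scaled weight contributed by any query term."""
    return np.max(sim * query_vec[:, None], axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

query = np.array([1.0, 0.0, 0.0])    # user asks for "fever" only
doc = np.array([0.0, 1.0, 0.0])      # document indexed under "pyrexia"
plain = cosine(query, doc)            # misses the synonym entirely
expanded = cosine(expand(query), doc) # matches via the fuzzy thesaurus
```

The expanded query retrieves the synonym-indexed document that the plain vector-space match misses, which is the effect the ontology-based expansion aims for.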
Using Grid Cells for Navigation.
Bush, Daniel; Barry, Caswell; Manson, Daniel; Burgess, Neil
2015-08-05
Mammals are able to navigate to hidden goal locations by direct routes that may traverse previously unvisited terrain. Empirical evidence suggests that this "vector navigation" relies on an internal representation of space provided by the hippocampal formation. The periodic spatial firing patterns of grid cells in the hippocampal formation offer a compact combinatorial code for location within large-scale space. Here, we consider the computational problem of how to determine the vector between start and goal locations encoded by the firing of grid cells when this vector may be much longer than the largest grid scale. First, we present an algorithmic solution to the problem, inspired by the Fourier shift theorem. Second, we describe several potential neural network implementations of this solution that combine efficiency of search with biological plausibility. Finally, we discuss the empirical predictions of these implementations and their relationship to the anatomy and electrophysiology of the hippocampal formation. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
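The Fourier shift theorem that inspires the paper's algorithmic solution can be demonstrated directly: a spatial displacement appears as a per-frequency phase shift, so the displacement can be read off from phase differences, loosely analogous to comparing the phases of periodic grid codes at start and goal. The signal below is a toy stand-in, not a grid-cell model.

```python
import numpy as np

# Fourier shift theorem demo: shifting a signal multiplies each frequency
# component by a linear phase, so the relative displacement between two
# representations is recoverable from their cross-power spectrum.
n = 64
x = np.zeros(n)
x[10] = 1.0               # "start" representation
shift = 7
y = np.roll(x, shift)     # "goal" representation: same pattern, displaced

# inverse FFT of the cross-power spectrum is the circular cross-correlation,
# which peaks at the relative displacement
cps = np.fft.fft(y) * np.conj(np.fft.fft(x))
recovered = int(np.argmax(np.fft.ifft(cps).real))
```

Because the recovery works per frequency, a displacement much longer than any single period can still be decoded when several scales (grid modules) are combined, which is the crux of the vector-navigation problem above.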
Fast and Accurate Circuit Design Automation through Hierarchical Model Switching.
Huynh, Linh; Tagkopoulos, Ilias
2015-08-21
In computer-aided biological design, the trifecta of characterized part libraries, accurate models and optimal design parameters is crucial for producing reliable designs. As the number of parts and model complexity increase, however, it becomes exponentially more difficult for any optimization method to search the solution space, hence creating a trade-off that hampers efficient design. To address this issue, we present a hierarchical computer-aided design architecture that uses a two-step approach for biological design. First, a simple model of low computational complexity is used to predict circuit behavior and assess candidate circuit branches through branch-and-bound methods. Then, a complex, nonlinear circuit model is used for a fine-grained search of the reduced solution space, thus achieving more accurate results. Evaluation with a benchmark of 11 circuits and a library of 102 experimental designs with known characterization parameters demonstrates a speed-up of 3 orders of magnitude when compared to other design methods that provide optimality guarantees.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clark, M. A.; Strelchenko, Alexei; Vaquero, Alejandro
Lattice quantum chromodynamics simulations in nuclear physics have benefited from a tremendous number of algorithmic advances such as multigrid and eigenvector deflation. These improve the time to solution but do not alleviate the intrinsic memory-bandwidth constraints of the matrix-vector operation dominating iterative solvers. Batching this operation for multiple vectors and exploiting cache and register blocking can yield a super-linear speed up. Block-Krylov solvers can naturally take advantage of such batched matrix-vector operations, further reducing the iterations to solution by sharing the Krylov space between solves. However, practical implementations typically suffer from the quadratic scaling in the number of vector-vector operations. Using the QUDA library, we present an implementation of a block-CG solver on NVIDIA GPUs which reduces the memory-bandwidth complexity of vector-vector operations from quadratic to linear. We present results for the HISQ discretization, showing a 5x speedup compared to highly-optimized independent Krylov solves on NVIDIA's SaturnV cluster.
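A block-CG iteration of the kind described can be sketched in NumPy, following the classic O'Leary formulation. This toy ignores the GPU batching and the linear-complexity vector-vector reductions that are the paper's actual contribution; the SPD test matrix is synthetic.

```python
import numpy as np

def block_cg(A, B, tol=1e-10, maxiter=200):
    """Block conjugate gradients: solve A X = B for all columns of B at
    once, sharing one Krylov space between the right-hand sides."""
    X = np.zeros_like(B)
    R = B - A @ X
    P = R.copy()
    for _ in range(maxiter):
        AP = A @ P                       # one batched mat-vec for all columns
        # block step lengths: small s-by-s solves replace scalar alpha/beta
        alpha = np.linalg.solve(P.T @ AP, R.T @ R)
        X = X + P @ alpha
        Rnew = R - AP @ alpha
        if np.linalg.norm(Rnew) < tol:
            return X
        beta = np.linalg.solve(R.T @ R, Rnew.T @ Rnew)
        P = Rnew + P @ beta
        R = Rnew
    return X

rng = np.random.default_rng(2)
M = rng.standard_normal((30, 30))
A = M @ M.T + 30 * np.eye(30)            # well-conditioned SPD test matrix
B = rng.standard_normal((30, 4))         # four right-hand sides
X = block_cg(A, B)
residual = np.linalg.norm(B - A @ X)
```

The single `A @ P` per iteration is where the batching pays off on bandwidth-bound hardware: one sweep through the matrix serves every right-hand side.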
NASA Astrophysics Data System (ADS)
Zhang, A.; Guo, Z.; Xiong, S.-M.
2018-05-01
The influence of natural convection on lamellar eutectic growth was determined by a comprehensive phase-field lattice-Boltzmann study for Al-Cu and CBr4-C2Cl6 eutectic alloys. The mass differences resulting from concentration differences drove the fluid flow, and a robust parallel adaptive-mesh-refinement algorithm was employed to improve computational efficiency. By means of carefully designed "numerical experiments", the eutectic growth under natural convection was explored and a simple analytical model was proposed to predict the adjustment of the lamellar spacing. Furthermore, by varying the solute expansion coefficient, initial lamellar spacing, and undercooling, the microstructure evolution was presented and compared with the classical eutectic growth theory. Results showed that both the interfacial solute distribution and the average curvature were affected by the natural convection, the effect of which could be further quantified by adding a constant into the growth rule proposed by Jackson and Hunt [Jackson and Hunt, Trans. Metall. Soc. AIME 236, 1129 (1966)].
Design and Implementation of a Novel Portable 360° Stereo Camera System with Low-Cost Action Cameras
NASA Astrophysics Data System (ADS)
Holdener, D.; Nebiker, S.; Blaser, S.
2017-11-01
The demand for capturing indoor spaces is rising with the digitalization trend in the construction industry. An efficient solution for measuring challenging indoor environments is mobile mapping. Image-based systems with 360° panoramic coverage allow rapid data acquisition and can be processed into georeferenced 3D images hosted in cloud-based 3D geoinformation services. For the multiview stereo camera system presented in this paper, 360° coverage is achieved with a layout consisting of five horizontal stereo image pairs in a circular arrangement. The design is implemented as a low-cost solution based on a 3D-printed camera rig and action cameras with fisheye lenses. The fisheye stereo system is successfully calibrated with accuracies sufficient for the applied measurement task. A comparison of 3D distances with reference data shows maximum deviations of 3 cm over typical indoor distances of 2-8 m. The automatic computation of coloured point clouds from the stereo pairs is also demonstrated.
Global Optimization of Low-Thrust Interplanetary Trajectories Subject to Operational Constraints
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Vavrina, Matthew A.; Hinckley, David
2016-01-01
Low-thrust interplanetary space missions are highly complex and there can be many locally optimal solutions. While several techniques exist to search for globally optimal solutions to low-thrust trajectory design problems, they are typically limited to unconstrained trajectories. The operational design community in turn has largely avoided using such techniques and has primarily focused on accurate constrained local optimization combined with grid searches and intuitive design processes at the expense of efficient exploration of the global design space. This work is an attempt to bridge the gap between the global optimization and operational design communities by presenting a mathematical framework for global optimization of low-thrust trajectories subject to complex constraints including the targeting of planetary landing sites, a solar range constraint to simplify the thermal design of the spacecraft, and a real-world multi-thruster electric propulsion system that must switch thrusters on and off as available power changes over the course of a mission.
A numerical solution method for acoustic radiation from axisymmetric bodies
NASA Technical Reports Server (NTRS)
Caruthers, John E.; Raviprakash, G. K.
1995-01-01
A new and very efficient numerical method for solving equations of the Helmholtz type is specialized for problems having axisymmetric geometry. It is then demonstrated by application to the classical problem of acoustic radiation from a vibrating piston set in a stationary infinite plane. The method utilizes 'Green's Function Discretization' to obtain an accurate resolution of the waves using only 2-3 points per wave. Locally valid free-space Green's functions, used in the discretization step, are obtained by quadrature. Results are computed for a range of grid spacing/piston radius ratios at a frequency parameter ωR/c₀ of 2π. In this case, the minimum required grid resolution appears to be fixed by the need to resolve a step boundary condition at the piston edge rather than by the length scale imposed by the wavelength of the acoustic radiation. It is also demonstrated that a local near-field radiation boundary procedure allows the domain to be truncated very near the radiating source with little effect on the solution.
Dal Palù, Alessandro; Dovier, Agostino; Pontelli, Enrico
2010-01-01
Crystal lattices are discrete models of three-dimensional space that have been effectively employed to facilitate the task of determining proteins' natural conformation. This paper investigates alternative global constraints that can be introduced in a constraint solver over discrete crystal lattices. The objective is to enhance the efficiency of lattice solvers in dealing with the construction of approximate solutions of the protein structure determination problem. Some of them (e.g., self-avoiding walk) have already been used, explicitly or implicitly, in previous approaches, while others (e.g., the density constraint) are new. The intrinsic complexities of all of them are studied and preliminary experimental results are discussed.
Activity of Cu-activated carbon fiber catalyst in wet oxidation of ammonia solution.
Hung, Chang-Mao
2009-07-30
Aqueous solutions of 200-1000 mg/L of ammonia were oxidized in a trickle-bed reactor using Cu-activated carbon fiber (ACF) catalysts, which were prepared by incipient wet impregnation with aqueous solutions of copper nitrate deposited on ACF substrates. The results reveal that the conversion of ammonia by wet oxidation in the presence of Cu-ACF catalysts was a function of the metal loading weight ratio of the catalyst. The total conversion efficiency of ammonia was 95% during wet oxidation over the catalyst at 463 K at an oxygen partial pressure of 3.0 MPa. The effect of the initial concentration of ammonia and the reaction temperature on the removal of ammonia from the effluent streams was also studied at a liquid space velocity of less than 3.0 h⁻¹.
NASA Astrophysics Data System (ADS)
Mahalakshmi; Murugesan, R.
2018-04-01
This paper addresses the minimization of the total cost of greenhouse gas (GHG) efficiency in an Automated Storage and Retrieval System (AS/RS). A mathematical model is constructed based on the tax cost, penalty cost, and discount cost of GHG emission of the AS/RS. A two-stage algorithm, the positive-selection-based clonal selection principle (PSBCSP), is used to find the optimal solution of the constructed model. In the first stage, the positive selection principle reduces the search space of the optimal solution by fixing a threshold value. In the second stage, the clonal selection principle generates the best solutions. The obtained results are compared with other existing algorithms in the literature, showing that the proposed algorithm yields better results than the others.
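The two-stage idea, a threshold-based filtering stage followed by a clone-and-mutate improvement stage, can be sketched on a stand-in objective. The cost function, population sizes, and mutation scale below are illustrative assumptions, not the paper's GHG cost model.

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in objective: minimise distance to a known optimum in [0, 1]^5
cost = lambda x: np.sum((x - 0.3) ** 2, axis=-1)

pop = rng.uniform(0, 1, size=(200, 5))

# Stage 1 (positive selection): discard candidates above a cost threshold,
# shrinking the search space before the expensive stage runs.
pop = pop[cost(pop) < np.median(cost(pop))]

# Stage 2 (clonal selection): clone each survivor, mutate the clone, and
# keep whichever of the pair is better.
for _ in range(100):
    clones = np.clip(pop + rng.normal(0, 0.05, size=pop.shape), 0, 1)
    better = cost(clones) < cost(pop)
    pop[better] = clones[better]

best = cost(pop).min()   # near-zero for this toy objective
```

The threshold stage is what keeps the clonal stage cheap: only promising candidates are cloned and evaluated repeatedly.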
Algebraic geometry and Bethe ansatz. Part I. The quotient ring for BAE
NASA Astrophysics Data System (ADS)
Jiang, Yunfeng; Zhang, Yang
2018-03-01
In this paper and upcoming ones, we initiate a systematic study of Bethe ansatz equations for integrable models by modern computational algebraic geometry. We show that algebraic geometry provides a natural mathematical language and powerful tools for understanding the structure of the solution space of Bethe ansatz equations. In particular, we find novel efficient methods to count the number of solutions of Bethe ansatz equations based on the Gröbner basis and the quotient ring. We also develop an analytical approach based on the companion matrix to perform the sum of on-shell quantities over all physical solutions without solving the Bethe ansatz equations explicitly. To demonstrate the power of our method, we revisit the completeness problem of the Bethe ansatz of the Heisenberg spin chain, and calculate the sum rules of OPE coefficients in planar N=4 super-Yang-Mills theory.
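The companion-matrix idea, summing an on-shell quantity over all solutions as a matrix trace without ever solving the equations, can be shown in the simplest univariate case. The polynomial below is a toy example, not a Bethe ansatz equation.

```python
import numpy as np

# For p(x) = x^2 - 3x + 2 (roots 1 and 2), the sum of q(x) over the roots
# equals tr(q(M)) where M is the companion matrix of p: no root-finding.
p = np.array([1.0, -3.0, 2.0])        # monic coefficients [1, c1, c0]
M = np.array([[0.0, -p[2]],
              [1.0, -p[1]]])          # companion matrix of p

sum_x  = np.trace(M)                  # sum of roots:    1 + 2 = 3
sum_x2 = np.trace(M @ M)              # sum of squares:  1 + 4 = 5
```

In the multivariate setting the same trick applies with the multiplication (companion) matrices acting on the quotient ring, which is how the sum rules over all physical Bethe solutions are evaluated.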
Collaborative learning in networks.
Mason, Winter; Watts, Duncan J
2012-01-17
Complex problems in science, business, and engineering typically require some tradeoff between exploitation of known solutions and exploration for novel ones, where, in many cases, information about known solutions can also disseminate among individual problem solvers through formal or informal networks. Prior research on complex problem solving by collectives has found the counterintuitive result that inefficient networks, meaning networks that disseminate information relatively slowly, can perform better than efficient networks for problems that require extended exploration. In this paper, we report on a series of 256 Web-based experiments in which groups of 16 individuals collectively solved a complex problem and shared information through different communication networks. As expected, we found that collective exploration improved average success over independent exploration because good solutions could diffuse through the network. In contrast to prior work, however, we found that efficient networks outperformed inefficient networks, even in a problem space with qualitative properties thought to favor inefficient networks. We explain this result in terms of individual-level explore-exploit decisions, which we find were influenced by the network structure as well as by strategic considerations and the relative payoff between maxima. We conclude by discussing implications for real-world problem solving and possible extensions.
General form of a cooperative gradual maximal covering location problem
NASA Astrophysics Data System (ADS)
Bagherinejad, Jafar; Bashiri, Mahdi; Nikzad, Hamideh
2018-07-01
Cooperative and gradual covering are two new methods for developing covering location models. In this paper, a cooperative maximal covering location-allocation model is developed (CMCLAP). In addition, both cooperative and gradual covering concepts are applied to the maximal covering location simultaneously (CGMCLP). Then, we develop an integrated form of a cooperative gradual maximal covering location problem, which is called a general CGMCLP. By setting the model parameters, the proposed general model can easily be transformed into other existing models, facilitating general comparisons. The proposed models are developed without allocation for physical signals and with allocation for non-physical signals in discrete location space. Comparison of the previously introduced gradual maximal covering location problem (GMCLP) and cooperative maximal covering location problem (CMCLP) models with our proposed CGMCLP model in similar data sets shows that the proposed model can cover more demands and acts more efficiently. Sensitivity analyses are performed to show the effect of related parameters and the model's validity. Simulated annealing (SA) and a tabu search (TS) are proposed as solution algorithms for the developed models for large-sized instances. The results show that the proposed algorithms are efficient solution approaches, considering solution quality and running time.
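A generic simulated-annealing skeleton of the kind proposed for large instances might look as follows. The coverage matrix, neighbourhood move (swap one facility site), and cooling schedule are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 20, 3                              # candidate sites, facilities to open
covers = rng.random((n, 50)) < 0.15       # covers[i, j]: site i covers demand j

def covered(sites):
    """Number of demand points covered by at least one open site."""
    return int(covers[sites].any(axis=0).sum())

sites = rng.choice(n, size=p, replace=False)
best, best_val = sites.copy(), covered(sites)
T = 5.0
for _ in range(2000):
    cand = sites.copy()
    cand[rng.integers(p)] = rng.integers(n)   # neighbourhood move: swap a site
    if len(set(cand)) == p:                   # keep sites distinct
        delta = covered(cand) - covered(sites)
        # accept improvements always, worsenings with temperature-dependent odds
        if delta >= 0 or rng.random() < np.exp(delta / T):
            sites = cand
            if covered(sites) > best_val:
                best, best_val = sites.copy(), covered(sites)
    T *= 0.995                                # geometric cooling
```

A tabu search variant would replace the acceptance rule with a short-term memory of recently swapped sites; both share this move/evaluate loop.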
A filtering approach to edge preserving MAP estimation of images.
Humphrey, David; Taubman, David
2011-05-01
The authors present a computationally efficient technique for maximum a posteriori (MAP) estimation of images in the presence of both blur and noise. The image is divided into statistically independent regions. Each region is modelled with a WSS Gaussian prior. Classical Wiener filter theory is used to generate a set of convex sets in the solution space, with the solution to the MAP estimation problem lying at the intersection of these sets. The proposed algorithm uses an underlying segmentation of the image; a means of determining and refining this segmentation is described. The algorithm is suitable for a range of image restoration problems, as it provides a computationally efficient means to deal with the shortcomings of Wiener filtering without sacrificing the computational simplicity of the filtering approach. The algorithm is also of interest from a theoretical viewpoint as it provides a continuum of solutions between Wiener filtering and inverse filtering depending upon the segmentation used. We do not attempt to show here that the proposed method is the best general approach to the image reconstruction problem. However, related work referenced herein shows excellent performance in the specific problem of demosaicing.
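The classical Wiener filter underlying the per-region MAP estimate can be sketched as frequency-domain deconvolution with a noise-to-signal regulariser. The blur kernel, noise level, and NSR constant below are assumed for illustration, on a 1D "image row" rather than a full image.

```python
import numpy as np

rng = np.random.default_rng(5)

# Piecewise-constant 1D signal standing in for one image row
x = np.repeat(rng.random(16), 8)                # length 128
h = np.ones(9) / 9.0                            # box blur kernel (assumed)
H = np.fft.fft(h, n=x.size)

# Blurred, noisy observation (circular convolution + additive noise)
y = np.real(np.fft.ifft(np.fft.fft(x) * H))
y += 0.01 * rng.standard_normal(x.size)

# Wiener deconvolution: conj(H) / (|H|^2 + NSR); the NSR term tames
# frequencies where the blur response is near zero.
nsr = 1e-3                                      # noise-to-signal ratio (assumed)
G = np.conj(H) / (np.abs(H) ** 2 + nsr)
x_hat = np.real(np.fft.ifft(np.fft.fft(y) * G))

err_blurred = np.linalg.norm(y - x)
err_wiener = np.linalg.norm(x_hat - x)          # substantially smaller
```

Setting nsr to 0 recovers inverse filtering (noise blows up at the blur's spectral zeros); the paper's segmentation effectively chooses the regulariser per region, giving the stated continuum between the two.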
Glocker, Ben; Paragios, Nikos; Komodakis, Nikos; Tziritas, Georgios; Navab, Nassir
2007-01-01
In this paper we propose a novel non-rigid volume registration based on discrete labeling and linear programming. The proposed framework reformulates registration as a minimal path extraction in a weighted graph. The space of solutions is represented using a set of labels which are assigned to predefined displacements. The graph topology corresponds to a regular grid superimposed onto the volume. Links between neighborhood control points introduce smoothness, while links between the graph nodes and the labels (end-nodes) measure the cost induced in the objective function through the selection of a particular deformation for a given control point once projected to the entire volume domain. Higher-order polynomials are used to express the volume deformation from the ones of the control points. Efficient linear programming that can guarantee the optimal solution up to a (user-defined) bound is considered to recover the optimal registration parameters. Therefore, the method is gradient-free, can encode various similarity metrics (through simple changes in the graph construction), can guarantee a globally sub-optimal solution, and is computationally tractable. Experimental validation using simulated data with known deformation, as well as manually segmented data, demonstrates the great potential of our approach.
NASA Astrophysics Data System (ADS)
Luu, Thomas; Brooks, Eugene D.; Szőke, Abraham
2010-03-01
In the difference formulation for the transport of thermally emitted photons, the photon intensity is defined relative to a reference field, the black body at the local material temperature. This choice of reference field combines the separate emission and absorption terms that nearly cancel, thereby removing the dominant cause of noise in the Monte Carlo solution of thick systems, but introduces time- and space-derivative source terms that cannot be determined until the end of the time step. The space-derivative source term can also lead to noise-induced crashes under certain conditions where the real physical photon intensity differs strongly from a black body at the local material temperature. In this paper, we consider a difference formulation relative to the material temperature at the beginning of the time step or, in cases where an alternative temperature better describes the radiation field, relative to that temperature. The result is a method where iterative solution of the material energy equation is efficient and noise-induced crashes are avoided. We couple our generalized reference field scheme with an ad hoc interpolation of the space-derivative source, resulting in an algorithm that produces the correct flux between zones as the physical system approaches the thick limit.
NASA Astrophysics Data System (ADS)
Udell, C.; Selker, J. S.
2017-12-01
The increasing availability and functionality of open-source software and hardware, along with 3D printing, low-cost electronics, and the proliferation of open-access resources for learning rapid prototyping, are contributing to fundamental transformations and new technologies in environmental sensing. These tools invite reevaluation of time-tested methodologies and devices toward more efficient, reusable, and inexpensive alternatives. Building upon open-source design facilitates community engagement and invites a Do-It-Together (DIT) collaborative framework for research where solutions to complex problems may be crowd-sourced. However, barriers persist that prevent researchers from taking advantage of the capabilities afforded by open-source software, hardware, and rapid prototyping. Some of these include: requisite technical skillsets, knowledge of equipment capabilities, identifying inexpensive sources for materials, money, space, and time. A university MAKER space staffed by engineering students to assist researchers is one proposed solution to overcome many of these obstacles. This presentation investigates the unique capabilities the USDA-funded Openly Published Environmental Sensing (OPEnS) Lab affords researchers at Oregon State and internationally, and the unique functions these types of initiatives support at the intersection of MAKER spaces, open-source academic research, and open-access dissemination.
Pure quasi-P-wave calculation in transversely isotropic media using a hybrid method
NASA Astrophysics Data System (ADS)
Wu, Zedong; Liu, Hongwei; Alkhalifah, Tariq
2018-07-01
The acoustic approximation for anisotropic media is widely used in current industry imaging and inversion algorithms, mainly because P-waves constitute the majority of the energy recorded in seismic exploration. The resulting acoustic formulae tend to be simpler, leading to more efficient implementations, and depend on fewer medium parameters. However, conventional solutions of the acoustic wave equation with higher-order derivatives suffer from shear wave artefacts. Thus, we derive a new acoustic wave equation for wave propagation in transversely isotropic (TI) media, which is based on a partially separable approximation of the dispersion relation for TI media and is free of shear wave artefacts. Even though our resulting equation is not a partial differential equation, it is still a linear equation. Thus, we propose to implement this equation efficiently by combining the finite difference approximation with spectral evaluation of the space-independent parts. The resulting algorithm provides solutions without the constraint ɛ ≥ δ. Numerical tests demonstrate the effectiveness of the approach.
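The hybrid strategy, finite differences combined with spectral evaluation of the space-independent parts, rests on the fact that FFT-based derivatives are exact for band-limited fields. A minimal 1D comparison against a second-order stencil (a toy example, not the TI wave equation) illustrates why the spectral piece is attractive.

```python
import numpy as np

# Spectral vs finite-difference second derivative of u = sin(3x) on a
# periodic grid; the true answer is -9 sin(3x).
n = 128
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(3 * x)
u_xx_true = -9.0 * np.sin(3 * x)

# Spectral evaluation: multiply by -k^2 in Fourier space (exact for
# band-limited u, machine precision in practice)
k = 2 * np.pi * np.fft.fftfreq(n, d=x[1] - x[0])
u_xx_spec = np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))
spec_err = np.abs(u_xx_spec - u_xx_true).max()

# Second-order centred stencil for comparison (O(h^2) error)
h = x[1] - x[0]
u_xx_fd = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / h ** 2
fd_err = np.abs(u_xx_fd - u_xx_true).max()
```

In the hybrid scheme, the spatially varying (medium-dependent) parts stay on the finite-difference grid while constant-coefficient operators like this one are applied spectrally, keeping the overall cost of the non-differential dispersion relation manageable.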
Approach to an Affordable and Productive Space Transportation System
NASA Technical Reports Server (NTRS)
McCleskey, Carey M.; Rhodes, Russel E.; Lepsch, Roger A.; Henderson, Edward M.; Robinson, John W.
2012-01-01
This paper describes an approach for creating space transportation architectures that are affordable, productive, and sustainable. The architectural scope includes both flight and ground system elements, and focuses on their compatibility to achieve a technical solution that is operationally productive, and also affordable throughout its life cycle. Previous papers by the authors and other members of the Space Propulsion Synergy Team (SPST) focused on space flight system engineering methods, along with operationally efficient propulsion system concepts and technologies. This paper follows up previous work by using a structured process to derive examples of conceptual architectures that integrate a number of advanced concepts and technologies. The examples are not intended to provide a near-term alternative architecture to displace current near-term design and development activity. Rather, the examples demonstrate an approach that promotes early investments in advanced system concept studies and trades (flight and ground), as well as in advanced technologies with the goal of enabling highly affordable, productive flight and ground space transportation systems.
Large zeolites - Why and how to grow in space
NASA Technical Reports Server (NTRS)
Sacco, Albert, Jr.
1991-01-01
The growth of zeolite crystals, considered among the most valuable catalytic and adsorbent materials in the chemical processing industry, is discussed. It is proposed to use triethanolamine as a nucleation control agent to control the time release of Al in a zeolite A solution and to increase the average and maximum crystal size by 25-50 times. Large zeolites could be utilized to make membranes for reactors/separators, which would substantially increase their efficiency.
NASA Astrophysics Data System (ADS)
Andersen, G.; Dearborn, M.; Hcharg, G.
2010-09-01
We are investigating new technologies for creating ultra-large apertures (>20 m) for space-based imagery. Our approach has been to create diffractive primaries in flat membranes deployed from compact payloads. These structures are attractive in that they are much simpler to fabricate, launch, and deploy compared to conventional three-dimensional optics. In this case the flat focusing element is a photon sieve, which consists of a large number of holes in an otherwise opaque substrate, located according to an underlying Fresnel Zone Plate (FZP) geometry. The advantages over the FZP are that there are no support struts, which lead to diffraction spikes in the far field, and no non-uniform tension, which can cause wrinkling of the substrate. Furthermore, with modifications in hole size and distribution we can achieve improved resolution and contrast over conventional optics. The trade-offs in using diffractive optics are the large amounts of dispersion and decreased efficiency. We present both theoretical and experimental results from small-scale prototypes. Several key solutions to issues of limited bandwidth and efficiency have been addressed. Along with these, we have studied the materials aspects in order to optimize performance and achieve a scalable solution to an on-orbit demonstrator. Our current efforts are being directed towards an on-orbit 1 m solar observatory demonstration deployed from a CubeSat bus.
Poulain, Christophe A.; Finlayson, Bruce A.; Bassingthwaighte, James B.
2010-01-01
The analysis of experimental data obtained by the multiple-indicator method requires complex mathematical models for which capillary blood-tissue exchange (BTEX) units are the building blocks. This study presents a new, nonlinear, two-region, axially distributed, single capillary, BTEX model. A facilitated transporter model is used to describe mass transfer between plasma and intracellular spaces. To provide fast and accurate solutions, numerical techniques suited to nonlinear convection-dominated problems are implemented. These techniques are the random choice method, an explicit Euler-Lagrange scheme, and the MacCormack method with and without flux correction. The accuracy of the numerical techniques is demonstrated, and their efficiencies are compared. The random choice, Euler-Lagrange, and plain MacCormack methods are the best numerical techniques for BTEX modeling. However, the random choice and Euler-Lagrange methods are preferred over the MacCormack method because they allow for the derivation of a heuristic criterion that makes the numerical methods stable without degrading their efficiency. Numerical solutions are also used to illustrate some nonlinear behaviors of the model and to show how the new BTEX model can be used to estimate parameters from experimental data. PMID:9146808
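As an illustration of one of the schemes compared above, here is a minimal MacCormack predictor-corrector step for the linear advection equation u_t + a·u_x = 0 with periodic boundaries; this toy problem and its parameters are assumptions for demonstration, not the BTEX model itself.

```python
import numpy as np

def maccormack_advect(u, c):
    """One MacCormack step for u_t + a*u_x = 0 on a periodic grid,
    with Courant number c = a*dt/dx."""
    up = u - c * (np.roll(u, -1) - u)                   # predictor: forward difference
    return 0.5 * (u + up - c * (up - np.roll(up, 1)))   # corrector: backward difference

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)   # Gaussian pulse, assumed initial condition
u0_mass = u.sum()
for _ in range(100):                   # advect by 0.25 units at c = 0.5
    u = maccormack_advect(u, 0.5)
```

With periodic boundaries the scheme conserves the discrete integral of u exactly, and the pulse arrives near x = 0.55 with only mild second-order dispersion.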
NASA Technical Reports Server (NTRS)
Bierman, G. J.
1975-01-01
Square root information estimation, starting from its beginnings in least-squares parameter estimation, is considered. Special attention is devoted to discussions of sensitivity and perturbation matrices, computed solutions and their formal statistics, consider-parameters and consider-covariances, and the effects of a priori statistics. The constant-parameter model is extended to include time-varying parameters and process noise, and the error analysis capabilities are generalized. Efficient and elegant smoothing results are obtained as easy consequences of the filter formulation. The value of the techniques is demonstrated by the navigation results that were obtained for the Mariner Venus-Mercury (Mariner 10) multiple-planetary space probe and for the Viking Mars space mission.
Non-parametric diffeomorphic image registration with the demons algorithm.
Vercauteren, Tom; Pennec, Xavier; Perchant, Aymeric; Ayache, Nicholas
2007-01-01
We propose a non-parametric diffeomorphic image registration algorithm based on Thirion's demons algorithm. The demons algorithm can be seen as an optimization procedure on the entire space of displacement fields. The main idea of our algorithm is to adapt this procedure to a space of diffeomorphic transformations. In contrast to many diffeomorphic registration algorithms, our solution is computationally efficient since in practice it only replaces an addition of free form deformations by a few compositions. Our experiments show that in addition to being diffeomorphic, our algorithm provides results that are similar to the ones from the demons algorithm but with transformations that are much smoother and closer to the true ones in terms of Jacobians.
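The key change the abstract describes, replacing the additive update of displacement fields with composition, can be sketched in one dimension (a hypothetical toy, not the authors' implementation): for displacement fields s and u, (s ∘ u)(x) = u(x) + s(x + u(x)), evaluated by interpolation.

```python
import numpy as np

def compose(s, u, x):
    """Composition of 1-D displacement fields: (s o u)(x) = u(x) + s(x + u(x)).
    s and u are sampled on the grid x; linear interpolation, clamped at the ends."""
    return u + np.interp(x + u, x, s)

x = np.linspace(0.0, 1.0, 101)
s = 0.05 * np.sin(2 * np.pi * x)   # current transformation (a displacement field)
zero = np.zeros_like(x)            # identity transformation
```

Composing with the identity returns the original field, which is the sanity check that distinguishes composition from plain addition of deformations.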
NASA Astrophysics Data System (ADS)
Khachaturov, R. V.
2016-09-01
It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project of the firm's activities. The solution of a particular problem of this type is presented.
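For comparison with the equivalence-set approach, the classical Pareto optimal (non-dominated) set mentioned above can be extracted by brute force; the two objectives and sample points below are invented for illustration.

```python
import numpy as np

def pareto_front(points):
    """Indices of non-dominated points (maximization in every objective).
    A point is dominated if another point is >= in all objectives and > in one."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        dominated = any(
            np.all(q >= p) and np.any(q > p)
            for j, q in enumerate(pts) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# hypothetical (profit, market share) pairs for four candidate projects
projects = [(3.0, 1.0), (2.0, 2.0), (1.0, 3.0), (1.0, 1.0)]
front = pareto_front(projects)   # the last project is dominated by the second
```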
Stirling Convertor Performance Mapping Test Results for Future Radioisotope Power Systems
NASA Astrophysics Data System (ADS)
Qiu, Songgang; Peterson, Allen A.; Faultersack, Franklyn D.; Redinger, Darin L.; Augenblick, John E.
2004-02-01
Long-life radioisotope-fueled generators based on free-piston Stirling convertors are an energy-conversion solution for future space applications. The high efficiency of Stirling machines makes them more attractive than the thermoelectric generators currently used in space. Stirling Technology Company (STC) has been performance-testing its Stirling generators to provide data for potential system integration contractors. This paper describes the most recent test results from the STC RemoteGen™ 55 W-class Stirling generators (RG-55). Comparisons are made between the new data and previous Stirling thermodynamic simulation models. Performance-mapping tests are presented including variations in: internal charge pressure, cold end temperature, hot end temperature, alternator temperature, input power, and variation of control voltage.
NASA Astrophysics Data System (ADS)
Käser, Martin; Dumbser, Michael; de la Puente, Josep; Igel, Heiner
2007-01-01
We present a new numerical method to solve the heterogeneous anelastic seismic wave equations with arbitrarily high order accuracy in space and time on 3-D unstructured tetrahedral meshes. Using the velocity-stress formulation provides a linear hyperbolic system of equations with source terms that is completed by additional equations for the anelastic functions including the strain history of the material. These additional equations result from the rheological model of the generalized Maxwell body and permit the incorporation of realistic attenuation properties of viscoelastic material accounting for the behaviour of elastic solids and viscous fluids. The proposed method combines the Discontinuous Galerkin (DG) finite element (FE) method with the ADER approach using Arbitrary high order DERivatives for flux calculations. The DG approach, in contrast to classical FE methods, uses a piecewise polynomial approximation of the numerical solution which allows for discontinuities at element interfaces. Therefore, the well-established theory of numerical fluxes across element interfaces obtained by the solution of Riemann problems can be applied as in the finite volume framework. The main idea of the ADER time integration approach is a Taylor expansion in time in which all time derivatives are replaced by space derivatives using the so-called Cauchy-Kovalewski procedure which makes extensive use of the governing PDE. Due to the ADER time integration technique the same approximation order in space and time is achieved automatically and the method is a one-step scheme advancing the solution for one time step without intermediate stages. To this end, we introduce a new unrolled recursive algorithm for efficiently computing the Cauchy-Kovalewski procedure by making use of the sparsity of the system matrices.
The numerical convergence analysis demonstrates that the new schemes provide very high order accuracy even on unstructured tetrahedral meshes while computational cost and storage space for a desired accuracy can be reduced when applying higher degree approximation polynomials. In addition, we investigate the increase in computing time, when the number of relaxation mechanisms due to the generalized Maxwell body are increased. An application to a well-acknowledged test case and comparisons with analytic and reference solutions, obtained by different well-established numerical methods, confirm the performance of the proposed method. Therefore, the development of the highly accurate ADER-DG approach for tetrahedral meshes including viscoelastic material provides a novel, flexible and efficient numerical technique to approach 3-D wave propagation problems including realistic attenuation and complex geometry.
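The Cauchy-Kovalewski idea is easiest to see for the scalar advection equation u_t + a·u_x = 0, where ∂ₜᵏu = (−a)ᵏ∂ₓᵏu, so the Taylor expansion in time needs only spatial derivatives. The sketch below applies this on a periodic grid with spectral derivatives for convenience; the actual ADER-DG method uses local polynomial bases on tetrahedra, and the grid, wave speed and time step here are assumed values.

```python
import numpy as np

def ader_step(u, a, dt, order):
    """One ADER time step for u_t + a*u_x = 0 on a periodic grid of length 2*pi,
    via the Cauchy-Kovalewski substitution d^k u/dt^k = (-a)^k d^k u/dx^k.
    Spatial derivatives are taken spectrally for simplicity."""
    n = u.size
    k = np.fft.fftfreq(n, d=1.0 / n) * 1j   # i * integer wavenumbers
    uh = np.fft.fft(u)
    new = uh.copy()
    term = uh.copy()
    for m in range(1, order + 1):
        term = term * k * (-a) * dt / m     # next Taylor term: (-a*dt*ik)^m / m!
        new += term
    return np.fft.ifft(new).real

x = np.linspace(0.0, 2 * np.pi, 64, endpoint=False)
u = np.sin(x)
u_new = ader_step(u, 1.0, 0.1, order=6)    # single one-step update, no stages
```

Because the Taylor series in dt approximates the exact phase shift, a sixth-order step reproduces sin(x − a·dt) to high accuracy in one stage-free update.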
NASA Astrophysics Data System (ADS)
Králik, Juraj
2017-07-01
The paper presents a probabilistic and sensitivity analysis of the efficiency of damping devices protecting the cover of a nuclear power plant under the impact of a dropped TK C30 nuclear fuel container. A three-dimensional finite element idealization of the nuclear power plant structure is used. A steel pipe damper system is proposed for dissipation of the kinetic energy of the container's free fall. Experimental results on the behavior of the shock damper's basic element under impact loads are presented. The Newmark integration method is used for the solution of the dynamic equations. The sensitivity and probabilistic analysis of the damping devices was realized in the AntHILL and ANSYS software.
Thermal and Structural Analysis of Micro-Fabricated Involute Regenerators
NASA Astrophysics Data System (ADS)
Qiu, Songgang; Augenblick, Jack E.
2005-02-01
Long-life, high-efficiency power generators based on free-piston Stirling engines are an energy conversion solution for future space power generation and commercial applications. As part of the efforts to further improve Stirling engine efficiency and reliability, a micro-fabricated, involute regenerator structure is proposed by a Cleveland State University-led regenerator research team. This paper reports on thermal and structural analyses of the involute regenerator to demonstrate the feasibility of the proposed regenerator. The results indicate that the involute regenerator has extremely high axial stiffness to sustain reasonable axial compression forces with negligible lateral deformation. The relatively low radial stiffness may impose some challenges to the appropriate installation of the involute regenerators.
Spectral methods in time for a class of parabolic partial differential equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ierley, G.; Spencer, B.; Worthing, R.
1992-09-01
In this paper, we introduce a fully spectral solution for the partial differential equation u_t + uu_x + νu_xx + μu_xxx + λu_xxxx = 0. For periodic boundary conditions in space, the use of a Fourier expansion in x admits of a particularly efficient algorithm with respect to expansion of the time dependence in a Chebyshev series. Boundary conditions other than periodic may still be treated with reasonable, though lesser, efficiency. For all cases, very high accuracy is attainable at moderate computational cost relative to the expense of variable-order finite difference methods in time.
Formation Flying Design and Applications in Weak Stability Boundary Regions
NASA Technical Reports Server (NTRS)
Folta, David
2003-01-01
Weak stability regions serve as superior locations for interferometric scientific investigations. These regions are often selected to minimize environmental disturbances and maximize observing efficiency. Designs of formations in these regions are becoming ever more challenging as more complex missions are envisioned. Algorithms for formation design must be further developed to incorporate a better understanding of the WSB solution space. This development will improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple formation missions in WSB regions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes both algorithm and software development. The Constellation-X, Maxim, and Stellar Imager missions are examples of the use of improved numerical methods for attaining constrained formation geometries and controlling their dynamical evolution. This paper presents a survey of formation missions in the WSB regions and a brief description of the formation design using numerical and dynamical techniques.
Formation flying design and applications in weak stability boundary regions.
Folta, David
2004-05-01
Weak stability regions serve as superior locations for interferometric scientific investigations. These regions are often selected to minimize environmental disturbances and maximize observation efficiency. Designs of formations in these regions are becoming ever more challenging as more complex missions are envisioned. Algorithms for formation design must be further developed to incorporate a better understanding of the weak stability boundary solution space. This development will improve the efficiency and expand the capabilities of current approaches. The Goddard Space Flight Center (GSFC) is currently supporting multiple formation missions in weak stability boundary regions. This end-to-end support consists of mission operations, trajectory design, and control. It also includes both algorithm and software development. The Constellation-X, Maxim, and Stellar Imager missions are examples of the use of improved numeric methods to attain constrained formation geometries and control their dynamical evolution. This paper presents a survey of formation missions in the weak stability boundary regions and a brief description of formation design using numerical and dynamical techniques.
Bruno, Oscar P.; Turc, Catalin; Venakides, Stephanos
2016-01-01
This work, part I in a two-part series, presents: (i) a simple and highly efficient algorithm for evaluation of quasi-periodic Green functions, as well as (ii) an associated boundary-integral equation method for the numerical solution of problems of scattering of waves by doubly periodic arrays of scatterers in three-dimensional space. Except for certain ‘Wood frequencies’ at which the quasi-periodic Green function ceases to exist, the proposed approach, which is based on smooth windowing functions, gives rise to tapered lattice sums which converge superalgebraically fast to the Green function—that is, faster than any power of the number of terms used. This is in sharp contrast to the extremely slow convergence exhibited by the lattice sums in the absence of smooth windowing. (The Wood-frequency problem is treated in part II.) This paper establishes rigorously the superalgebraic convergence of the windowed lattice sums. A variety of numerical results demonstrate the practical efficiency of the proposed approach. PMID:27493573
Space Radiation Transport Methods Development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2002-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard Finite Element Method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 milliseconds and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of reconfigurable computing and could be utilized in the final design as verification of the deterministic method optimized design.
FAST SIMULATION OF SOLID TUMORS THERMAL ABLATION TREATMENTS WITH A 3D REACTION DIFFUSION MODEL
BERTACCINI, DANIELE; CALVETTI, DANIELA
2007-01-01
An efficient computational method for near real-time simulation of thermal ablation of tumors via radio frequencies is proposed. Model simulations of the temperature field in a 3D portion of tissue containing the tumoral mass for different patterns of source heating can be used to design the ablation procedure. The availability of a very efficient computational scheme makes it possible to update the predicted outcome of the procedure in real time. In the algorithms proposed here a discretization in space of the governing equations is followed by an adaptive time integration based on implicit multistep formulas. A modification of the ode15s MATLAB function which uses Krylov space iterative methods for the solution of the linear systems arising at each integration step makes it possible to perform the simulations on a standard desktop for much finer grids than using the built-in ode15s. The proposed algorithm can be applied to a wide class of nonlinear parabolic differential equations. PMID:17173888
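The core idea, an implicit time integrator whose inner linear systems are solved with a matrix-free Krylov method, can be sketched on a 1D stand-in problem (implicit Euler plus conjugate gradients here, not the modified ode15s of the paper; the grid, coefficients, and boundary conditions are assumptions).

```python
import numpy as np

def cg(apply_A, b, tol=1e-10, maxiter=200):
    """Conjugate gradients for an SPD operator given only a matvec (a Krylov method)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# implicit Euler for T_t = D*T_xx - r*T on (0, 1) with T = 0 at both boundaries:
# (I - dt*A) T_new = T_old, solved matrix-free with CG at each step
n, D, r, dt = 50, 1.0, 0.5, 1e-3
dx = 1.0 / (n + 1)
x = np.linspace(dx, 1.0 - dx, n)

def apply_system(T):
    lap = -2.0 * T
    lap[1:] += T[:-1]
    lap[:-1] += T[1:]
    return T - dt * (D * lap / dx**2 - r * T)

T = np.sin(np.pi * x)          # initial temperature profile (assumed)
for _ in range(100):           # integrate to t = 0.1
    T = cg(apply_system, T)
```

Since only a matvec is needed, the stiffness matrix is never formed, which is what allows much finer grids than a direct factorization at each step.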
A 3DHZETRN Code in a Spherical Uniform Sphere with Monte Carlo Verification
NASA Technical Reports Server (NTRS)
Wilson, John W.; Slaba, Tony C.; Badavi, Francis F.; Reddell, Brandon D.; Bahadori, Amir A.
2014-01-01
The computationally efficient HZETRN code has been used in recent trade studies for lunar and Martian exploration and is currently being used in the engineering development of the next generation of space vehicles, habitats, and extra vehicular activity equipment. A new version (3DHZETRN) capable of transporting High charge (Z) and Energy (HZE) and light ions (including neutrons) under space-like boundary conditions with enhanced neutron and light ion propagation is under development. In the present report, new algorithms for light ion and neutron propagation with well-defined convergence criteria in 3D objects are developed and tested against Monte Carlo simulations to verify the solution methodology. The code will be available through the software system, OLTARIS, for shield design and validation and provides a basis for personal computer software capable of space shield analysis and optimization.
An Optimized Trajectory Planning for Welding Robot
NASA Astrophysics Data System (ADS)
Chen, Zhilong; Wang, Jun; Li, Shuting; Ren, Jun; Wang, Quan; Cheng, Qunchao; Li, Wentao
2018-03-01
In order to improve welding efficiency and quality, this paper studies the combined planning of welding parameters and spatial trajectory for a welding robot and proposes a trajectory planning method with high real-time performance, strong controllability and small welding error. By adding a virtual joint at the end-effector, an appropriate virtual joint model is established and the welding process parameters are represented by the virtual joint variables. The trajectory planning is carried out in the robot joint space, which makes the control of the welding process parameters more intuitive and convenient. By using the virtual joint model combined with the affine invariance of B-spline curves, the welding process parameters are controlled indirectly by controlling the motion curves of the real joints. With minimum time as the optimization goal, the welding process parameters and the joint-space trajectory are planned and optimized jointly.
Brownian dynamics simulations on a hypersphere in 4-space
NASA Astrophysics Data System (ADS)
Nissfolk, Jarl; Ekholm, Tobias; Elvingson, Christer
2003-10-01
We describe an algorithm for performing Brownian dynamics simulations of particles diffusing on S3, a hypersphere in four dimensions. The system is chosen due to recent interest in doing computer simulations in a closed space where periodic boundary conditions can be avoided. We specifically address the question of how to generate a random walk on the 3-sphere, starting from the solution of the corresponding diffusion equation, and we also discuss an efficient implementation based on controlled approximations. Since S3 is a closed manifold (space), the average square displacement during a random walk is no longer proportional to the elapsed time, as in R3. Instead, its time rate of change is continuously decreasing, and approaches zero as time becomes large. We show, however, that the effective diffusion coefficient can still be obtained from the time dependence of the square displacement.
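A fixed-step-length version of such a random walk on S3 can be sketched as follows: draw a Gaussian vector in R4, project it onto the tangent space at the current point, and move along the geodesic it defines. This is a simplification of the controlled approximation in the paper, which samples step lengths from the diffusion-equation solution; the step size and seed below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def brownian_step_s3(p, step):
    """One step of a random walk on the unit 3-sphere in R^4: draw a Gaussian
    tangent vector at p, then move a distance `step` along its geodesic."""
    v = rng.normal(size=4)
    v -= (v @ p) * p                   # project onto the tangent space at p
    t = step * v / np.linalg.norm(v)   # tangent direction scaled to step length
    theta = np.linalg.norm(t)
    return np.cos(theta) * p + np.sin(theta) * (t / theta)

p = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    p = brownian_step_s3(p, 0.01)
```

Because each update is an exact geodesic move, the walker stays on the manifold to machine precision, with no renormalization drift.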
High Efficiency Space Power Systems Project Advanced Space-Rated Batteries
NASA Technical Reports Server (NTRS)
Reid, Concha M.
2011-01-01
Case Western Reserve University (CWRU) has an agreement with China National Offshore Oil Corporation New Energy Investment Company, Ltd. (CNOOC), under the United States-China EcoPartnerships Framework, to create a bi-national entity seeking to develop technically feasible and economically viable solutions to energy and environmental issues. Advanced batteries have been identified as one of the initial areas targeted for collaborations. CWRU invited NASA Glenn Research Center (GRC) personnel from the Electrochemistry Branch to CWRU to discuss various aspects of advanced battery development as they might apply to this partnership. Topics discussed included: the process for the selection of a battery chemistry; the establishment of an integrated development program; project management/technical interactions; new technology developments; and synergies between batteries for automotive and space operations. Additional collaborations between CWRU and NASA GRC's Electrochemistry Branch were also discussed.
Upwind schemes and bifurcating solutions in real gas computations
NASA Technical Reports Server (NTRS)
Suresh, Ambady; Liou, Meng-Sing
1992-01-01
The area of high speed flow is seeing a renewed interest due to advanced propulsion concepts such as the National Aerospace Plane (NASP), Space Shuttle, and future civil transport concepts. Upwind schemes to solve such flows have become increasingly popular in the last decade due to their excellent shock capturing properties. In the first part of this paper the authors present the extension of the Osher scheme to equilibrium and non-equilibrium gases. For simplicity, the source terms are treated explicitly. Computations based on the above scheme are presented to demonstrate the feasibility, accuracy and efficiency of the proposed scheme. One of the test problems is a Chapman-Jouguet detonation problem for which numerical solutions have been known to bifurcate into spurious weak detonation solutions on coarse grids. Results indicate that the numerical solution obtained depends both on the upwinding scheme used and the limiter employed to obtain second order accuracy. For example, the Osher scheme gives the correct CJ solution when the superbee limiter is used, but gives the spurious solution when the van Leer limiter is used. With the Roe scheme the spurious solution is obtained for all limiters.
Thin-film Organic-based Solar Cells for Space Power
NASA Technical Reports Server (NTRS)
Bailey, Sheila G.; Harris, Jerry D.; Hepp, Aloysius F.; Anglin, Emily J.; Raffaelle, Ryne P.; Clark, Harry R., Jr.; Gardner, Susan T. P.; Sun, Sam S.
2002-01-01
Recent advances in dye-sensitized and organic polymer solar cells have led NASA to investigate the potential of these devices for space power generation. Dye-sensitized solar cells were exposed to simulated low-earth orbit conditions and their performance evaluated. All cells were characterized under simulated air mass zero (AM0) illumination. Complete cells were exposed to pressures less than 1 x 10(exp -7) torr for over a month, with no sign of sealant failure or electrolyte leakage. Cells from Solaronix SA were rapid thermal cycled under simulated low-earth orbit conditions. The cells were cycled 100 times from -80 C to 80 C, which is equivalent to 6 days in orbit. The best cell had a 4.6 percent loss in efficiency as a result of the thermal cycling. In a separate project, novel -Bridge-Donor-Bridge-Acceptor- (-BDBA-) type conjugated block copolymer systems have been synthesized and characterized by photoluminescence (PL). In comparison to pristine donor or acceptor, the PL emissions of final -B-D-B-A- block copolymer films were quenched over 99 percent. Effective and efficient photo-induced electron transfer and charge separation occurs due to the interfaces of micro-phase-separated donor and acceptor blocks. The system is very promising for a variety of high-efficiency light-harvesting applications. Under an SBIR contract, fullerene-doped polymer-based photovoltaic devices were fabricated and characterized. The best devices showed overall power efficiencies of approx. 0.14 percent under white light. Devices fabricated from 2 percent solids content solutions in chlorobenzene gave the best results. Presently, device lifetimes are too short to be practical for space applications.
Thin-Film Organic-Based Solar Cells for Space Power
NASA Technical Reports Server (NTRS)
Bailey, Sheila G.; Harris, Jerry D.; Hepp, Aloysius F.; Anglin, Emily J.; Raffaelle, Ryne P.; Clark, Harry R., Jr.; Gardner, Susan T. P.; Sun, Sam S.
2001-01-01
Recent advances in dye-sensitized and organic polymer solar cells have led NASA to investigate the potential of these devices for space power generation. Dye-sensitized solar cells were exposed to simulated low-earth orbit conditions and their performance evaluated. All cells were characterized under simulated air mass zero (AM0) illumination. Complete cells were exposed to pressures less than 1 x 10(exp -7) torr for over a month, with no sign of sealant failure or electrolyte leakage. Cells from Solaronix SA were rapid thermal cycled under simulated low-earth orbit conditions. The cells were cycled 100 times from -80 C to 80 C, which is equivalent to 6 days in orbit. The best cell had a 4.6% loss in efficiency as a result of the thermal cycling. In a separate project, novel -Bridge-Donor-Bridge-Acceptor- (-BDBA-) type conjugated block copolymer systems have been synthesized and characterized by photoluminescence (PL). In comparison to pristine donor or acceptor, the PL emissions of final -B-D-B-A- block copolymer films were quenched over 99%. Effective and efficient photo-induced electron transfer and charge separation occurs due to the interfaces of micro-phase-separated donor and acceptor blocks. The system is very promising for a variety of high-efficiency light-harvesting applications. Under an SBIR contract, fullerene-doped polymer-based photovoltaic devices were fabricated and characterized. The best devices showed overall power efficiencies of approximately 0.14% under white light. Devices fabricated from 2% solids content solutions in chlorobenzene gave the best results. Presently, device lifetimes are too short to be practical for space applications.
Air-to-Water Heat Pumps With Radiant Delivery in Low-Load Homes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Backman, C.; German, A.; Dakin, B.
2013-12-01
Space conditioning represents nearly 50% of average residential household energy consumption, highlighting the need to identify alternative cost-effective, energy-efficient cooling and heating strategies. As homes are better built, there is an increasing need for strategies that are particularly well suited for high performance, low load homes. ARBI researchers worked with two test homes in hot-dry climates to evaluate the in-situ performance of air-to-water heat pump (AWHP) systems, an energy efficient space conditioning solution designed to cost-effectively provide comfort in homes with efficient, safe, and durable operation. Two monitoring projects of test houses in hot-dry climates were initiated in 2010 to test this system. Both systems were fully instrumented and have been monitored over one year to capture complete performance data over the cooling and heating seasons. Results are used to quantify energy savings, cost-effectiveness, and system performance using different operating modes and strategies. A calibrated TRNSYS model was developed and used to evaluate performance in various climate regions. This strategy is most effective in tight, insulated homes with high levels of thermal mass (i.e. exposed slab floors).
A non-local computational boundary condition for duct acoustics
NASA Technical Reports Server (NTRS)
Zorumski, William E.; Watson, Willie R.; Hodge, Steve L.
1994-01-01
A non-local boundary condition is formulated for acoustic waves in ducts without flow. The ducts are two dimensional with constant area, but with variable impedance wall lining. Extension of the formulation to three dimensional and variable area ducts is straightforward in principle, but requires significantly more computation. The boundary condition simulates a nonreflecting wave field in an infinite duct. It is implemented by a constant matrix operator which is applied at the boundary of the computational domain. An efficient computational solution scheme is developed which allows calculations for high frequencies and long duct lengths. This computational solution utilizes the boundary condition to limit the computational space while preserving the radiation boundary condition. The boundary condition is tested for several sources. It is demonstrated that the boundary condition can be applied close to the sound sources, rendering the computational domain small. Computational solutions with the new non-local boundary condition are shown to be consistent with the known solutions for nonreflecting wavefields in an infinite uniform duct.
The Market as an Institution for Zoning the Ocean
NASA Astrophysics Data System (ADS)
Clinton, J. E.; Hoagland, P.
2008-12-01
In recent years, spatial conflicts among ocean users have increased significantly, particularly in the coastal ocean. Ocean zoning has been proposed as a promising solution to these conflicts. Strikingly, most ocean zoning proponents focus on a centralized approach, involving government oversight, planning, and spatial allocations. We hypothesize that a market may be more efficient for allocating ocean space, because it tends to put ocean space in the hands of the highest valued uses, and it does not require public decision-makers to compile and analyze large amounts of information. Importantly, where external costs arise, a market in ocean space may need government oversight or regulation. We develop four case studies demonstrating that private allocations of ocean space are taking place already. This evidence suggests that a regulated market in ocean space may perform well as an allocative institution. We find that the proper functioning of a market in ocean space depends positively upon the strength of legal property rights and supportive public policies and negatively upon the number of users and the size of transaction costs.
Influence of vibration on the coupling efficiency in spatial receiver and its compensation method
NASA Astrophysics Data System (ADS)
Hu, Qinggui; Mu, Yining
2018-04-01
In order to analyze the loss of the free-space optical receiver caused by vibration, we set up coordinate systems on the receiving lens surface and the receiving fiber surface. Then, with Gaussian optics theory, the coupling efficiency equation is obtained, and the solution is calculated with MATLAB® software. To lower the impact of the vibration, a directional tapered communication fiber receiver is proposed. A sample was then produced and two experiments were performed. The first experiment shows that the coupling efficiency of the new receiver is higher than that of the traditional one. The second experiment shows that the bit error rate of the new receiver is lower. Both experiments show that the new receiver could improve the receiving system's tolerance to vibration.
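For the idealized case of two fundamental Gaussian modes with a purely lateral offset d (no tilt or defocus), the standard mode-overlap integral gives the coupling efficiency in closed form; this is a textbook simplification of the full vibration analysis, and the mode radii below are arbitrary assumptions used to illustrate the vibration-induced loss.

```python
import numpy as np

def coupling_efficiency(w1, w2, d):
    """Overlap of two fundamental Gaussian modes with waist radii w1, w2 and a
    lateral offset d (no tilt or defocus):
    eta = (2*w1*w2 / (w1^2 + w2^2))^2 * exp(-2*d^2 / (w1^2 + w2^2))."""
    s = w1**2 + w2**2
    return (2.0 * w1 * w2 / s) ** 2 * np.exp(-2.0 * d**2 / s)

w = 5e-6  # assumed matched 5-um mode radii
eta_centered = coupling_efficiency(w, w, 0.0)   # unity for matched, centered modes
eta_shifted = coupling_efficiency(w, w, 2e-6)   # loss from a 2-um vibration offset
```

Even a sub-wavelength-scale lateral displacement of the fiber therefore produces a measurable coupling loss, which is why vibration compensation matters at the receiver.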
NASA Technical Reports Server (NTRS)
Dinetta, L. C.; Hannon, M. H.; Mcneely, J. B.; Barnett, A. M.
1991-01-01
The AstroPower self-supporting, transparent AlGaAs top solar cell can be stacked upon any well-developed bottom solar cell for improved system performance. This is an approach to improve the performance and scale of space photovoltaic power systems. Mechanically stacked tandem solar cell concentrator systems based on the AlGaAs top concentrator solar cell can provide near term efficiencies of 36 percent (AMO, 100x). Possible tandem stack efficiencies greater than 38 percent (100x, AMO) are feasible with a careful selection of materials. In a three solar cell stack, system efficiencies exceed 41 percent (100x, AMO). These device results demonstrate a practical solution for a state-of-the-art top solar cell for attachment to an existing, well-developed solar cell.
NASA Astrophysics Data System (ADS)
Yuan, Zonghao; Cao, Zhigang; Boström, Anders; Cai, Yuanqiang
2018-04-01
A computationally efficient semi-analytical solution for ground-borne vibrations from underground railways is proposed and used to investigate the influence of hydraulic boundary conditions at the scattering surfaces and the moving ground water table on ground vibrations. The arrangement of a dry soil layer with varying thickness resting on a saturated poroelastic half-space, which includes a circular tunnel subject to a harmonic load at the tunnel invert, creates the scenario of a moving water table for research purposes in this paper. The tunnel is modelled as a hollow cylinder, which is made of viscoelastic material and buried in the half-space below the ground water table. The wave field in the dry soil layer consists of up-going and down-going waves while the wave field in the tunnel wall consists of outgoing and regular cylindrical waves. The complete solution for the saturated half-space with a cylindrical hole is composed of down-going plane waves and outgoing cylindrical waves. By adopting traction-free boundary conditions on the ground surface and continuity conditions at the interfaces of the two soil layers and of the tunnel and the surrounding soil, a set of algebraic equations can be obtained and solved in the transformed domain. Numerical results show that the moving ground water table can cause an uncertainty of up to 20 dB for surface vibrations.
Kennedy Space Center Five Year Sustainability Plan
NASA Technical Reports Server (NTRS)
Williams, Ann T.
2016-01-01
The Federal Government is committed to following sustainable principles. At its heart, sustainability integrates environmental, societal and economic solutions for present needs without compromising the ability of future generations to meet their needs. Building upon its pledge towards environmental stewardship, the Administration generated a vision of sustainability spanning ten goals mandated within Executive Order (EO) 13693, Planning for Federal Sustainability in the Next Decade. In November 2015, the National Aeronautics and Space Administration (NASA) responded to this EO by incorporating it into a new release of the NASA Strategic Sustainability Performance Plan (SSPP). The SSPP recognizes the importance of aligning environmental practices in a manner that preserves, enhances and strengthens NASA's ability to perform its mission indefinitely. The Kennedy Space Center (KSC) is following suit with KSC's Sustainability Plan (SP) by promoting, maintaining and pioneering green practices in all aspects of our mission. KSC's SP recognizes that the best sustainable solutions use an interdisciplinary, collaborative approach spanning civil servant and contractor personnel from across the Center. This approach relies on the participation of all employees to develop and implement sustainability endeavors connected with the following ten goals: Reduce greenhouse gas (GHG) emissions. Design, build and maintain sustainable buildings, facilities and infrastructure. Leverage clean and renewable energy. Increase water conservation. Improve fleet and vehicle efficiency and management. Purchase sustainable products and services. Minimize waste and prevent pollution. Implement performance contracts for Federal buildings. Manage electronic equipment and data centers responsibly. Pursue climate change resilience. The KSC SP details the strategies and actions that address the following objectives: Reduce Center costs. Increase energy and water efficiencies.
Promote smart buying practices. Increase reuse and recycling while decreasing waste. Benefit the community. Meet or exceed the EO and NASA SSPP sustainability goals.
Back-illuminated large area frame transfer CCDs for space-based hyper-spectral imaging applications
NASA Astrophysics Data System (ADS)
Philbrick, Robert H.; Gilmore, Angelo S.; Schrein, Ronald J.
2016-07-01
Standard offerings of large area, back-illuminated full frame CCD sensors are available from multiple suppliers, and they continue to be commonly deployed in ground- and space-based applications. By comparison, the availability of large area frame transfer CCDs is sparse, with the accompanying 2x increase in die area no doubt being a contributing factor. Modern back-illuminated CCDs yield very high quantum efficiency in the 290 to 400 nm band, a wavelength region of great interest for space-based instruments studying atmospheric phenomena. In fast-framing (e.g., 10-20 Hz) space-based applications such as hyper-spectral imaging, the use of a mechanical shutter to block incident photons during readout can prove costly and lower instrument reliability. Large area, all-digital visible CMOS sensors with integrate-while-read functionality are an emerging alternative to CCDs; but even after factoring in the reduced complexity and cost of support electronics, the present cost to implement such novel sensors is prohibitive for cost-constrained missions. Hence, there continues to be a niche set of applications where large area, back-illuminated frame transfer CCDs with high UV quantum efficiency, high frame rate, high full well, and low noise provide an advantageous solution. To address this need, a family of large area frame transfer CCDs has been developed that includes 2048 (columns) x 256 (rows) (FT4), 2048 x 512 (FT5), and 2048 x 1024 (FT6) full frame transfer CCDs, and a 2048 x 1024 (FT7) split-frame transfer CCD. Each wafer contains 4 FT4, 2 FT5, 2 FT6, and 2 FT7 die. The designs have undergone radiation and accelerated life qualification, and the electro-optical performance of these CCDs over the wavelength range of 290 to 900 nm is discussed.
Di Girolamo, Paolo; Behrendt, Andreas; Wulfmeyer, Volker
2018-04-02
The performance of a space-borne water vapour and temperature lidar exploiting the vibrational and pure rotational Raman techniques in the ultraviolet is simulated. This paper discusses simulations under a variety of environmental and climate scenarios. Simulations demonstrate the capability of Raman lidars deployed on-board low-Earth-orbit satellites to provide global-scale water vapour mixing ratio and temperature measurements in the lower to middle troposphere, with accuracies exceeding most observational requirements for numerical weather prediction (NWP) and climate research applications. These performances are especially attractive for measurements in the low troposphere in order to close the most critical gaps in the current earth observation system. In all climate zones, considering vertical and horizontal resolutions of 200 m and 50 km, respectively, mean water vapour mixing ratio profiling precision from the surface up to an altitude of 4 km is simulated to be 10%, while temperature profiling precision is simulated to be 0.40-0.75 K in the altitude interval up to 15 km. Performances in the presence of clouds are also simulated. Measurements are found to be possible above and below cirrus clouds with an optical thickness of 0.3. This combination of accuracy and vertical resolution cannot be achieved with any other space borne remote sensing technique and will provide a breakthrough in our knowledge of global and regional water and energy cycles, as well as in the quality of short- to medium-range weather forecasts. Besides providing a comprehensive set of simulations, this paper also provides an insight into specific possible technological solutions that are proposed for the implementation of a space-borne Raman lidar system. 
These solutions refer to technological breakthroughs gained during the last decade in the design and development of specific lidar devices and sub-systems, primarily in high-power, high-efficiency solid-state laser sources, low-weight large aperture telescopes, and high-gain, high-quantum efficiency detectors.
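The precision figures quoted above ultimately follow from Poisson photon-counting statistics; a minimal sketch of a background-subtracted measurement (the signal and background counts are illustrative assumptions, not the authors' instrument model):

```python
import math

def relative_precision(signal_counts, background_counts):
    """1-sigma relative precision of a background-subtracted photon
    count: the noise on (S + B) - B is sqrt(S + 2B) for Poisson counts."""
    noise = math.sqrt(signal_counts + 2.0 * background_counts)
    return noise / signal_counts

# Illustrative: 200 signal photons and 20 background photons per bin,
# after averaging over a 200 m x 50 km resolution cell.
err = relative_precision(200.0, 20.0)   # about 0.077 (7.7 %)
```

Averaging over larger resolution cells raises the counts and improves the precision as the inverse square root, which is the trade-off between resolution and accuracy explored in such performance simulations.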
Key Gaps for Enabling Plant Growth in Future Missions
NASA Technical Reports Server (NTRS)
Anderson, Molly; Motil, Brian; Barta, Dan; Fritsche, Ralph; Massa, Gioia; Quincy, Charlie; Romeyn, Matthew; Wheeler, Ray; Hanford, Anthony
2017-01-01
Growing plants to provide food or psychological benefits to crewmembers is a common vision for the future of human spaceflight, often represented in media and in serious concept studies. The complexity of controlled environment agriculture and plant growth in microgravity have been, and continue to be, the subject of dedicated scientific research. However, actually implementing these systems in a way that will be cost effective, efficient, and sustainable for future space missions is a complex, multi-disciplinary problem. Key questions exist in many areas: human medical research in nutrition and psychology, horticulture, plant physiology and microbiology, multi-phase microgravity fluid physics, hardware design and technology development, and system design, operations and mission planning. This paper describes key knowledge gaps identified by a multi-disciplinary working group within the National Aeronautics and Space Administration (NASA). It also begins to identify solutions to the simpler questions identified by the group based on work initiated in 2017.
NASA Astrophysics Data System (ADS)
Schuster, Thomas; Hofmann, Bernd; Kaltenbacher, Barbara
2012-10-01
Inverse problems can usually be modelled as operator equations in infinite-dimensional spaces with a forward operator acting between Hilbert or Banach spaces—a formulation which quite often also serves as the basis for defining and analyzing solution methods. The additional amount of structure and geometric interpretability provided by the concept of an inner product has rendered these methods amenable to a convergence analysis, a fact which has led to a rigorous and comprehensive study of regularization methods in Hilbert spaces over the last three decades. However, for numerous problems such as x-ray diffractometry, certain inverse scattering problems and a number of parameter identification problems in PDEs, the reasons for using a Hilbert space setting seem to be based on conventions rather than an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, non-Hilbertian regularization and data fidelity terms incorporating a priori information on solution and noise, such as general Lp-norms, TV-type norms, or the Kullback-Leibler divergence, have recently become very popular. These facts have motivated intensive investigations on regularization methods in Banach spaces, a topic which has emerged as a highly active research field within the area of inverse problems. Meanwhile some of the most well-known regularization approaches, such as Tikhonov-type methods requiring the solution of extremal problems, and iterative ones like the Landweber method, the Gauss-Newton method, as well as the approximate inverse method, have been investigated for linear and nonlinear operator equations in Banach spaces. Convergence with rates has been proven and conditions on the solution smoothness and on the structure of nonlinearity have been formulated. 
Still, beyond the existing results a large number of challenging open questions have arisen, due to the more involved handling of general Banach spaces and the larger variety of concrete instances with special properties. The aim of this special section is to provide a forum for highly topical ongoing work in the area of regularization in Banach spaces, its numerics and its applications. Indeed, we have been lucky enough to obtain a number of excellent papers both from colleagues who have previously been contributing to this topic and from researchers entering the field due to its relevance in practical inverse problems. We would like to thank all contributors for enabling us to present a high quality collection of papers on topics ranging from various aspects of regularization via efficient numerical solution to applications in PDE models. We give a brief overview of the contributions included in this issue (here ordered alphabetically by first author). In their paper, Iterative regularization with general penalty term—theory and application to L1 and TV regularization, Radu Bot and Torsten Hein provide an extension of the Landweber iteration for linear operator equations in Banach space to general operators in place of the inverse duality mapping, which corresponds to the use of general regularization functionals in variational regularization. The L∞ topology in data space corresponds to the frequently occurring situation of uniformly distributed data noise. A numerically efficient solution of the resulting Tikhonov regularization problem via a Moreau-Yosida approximation and a semismooth Newton method, along with a δ-free regularization parameter choice rule, is the topic of the paper L∞ fitting for inverse problems with uniform noise by Christian Clason. 
Extension of convergence rates results from classical source conditions to their generalization via variational inequalities with a priori and a posteriori stopping rules is the main contribution of the paper Regularization of linear ill-posed problems by the augmented Lagrangian method and variational inequalities by Klaus Frick and Markus Grasmair, again in the context of some iterative method. A powerful tool for proving convergence rates of Tikhonov type but also other regularization methods in Banach spaces are assumptions of the type of variational inequalities that combine conditions on solution smoothness (i.e., source conditions in the Hilbert space case) and nonlinearity of the forward operator. In Parameter choice in Banach space regularization under variational inequalities, Bernd Hofmann and Peter Mathé provide results with general error measures and especially study the question of regularization parameter choice. Daijun Jiang, Hui Feng, and Jun Zou consider an application of Banach space ideas in the context of an application problem in their paper Convergence rates of Tikhonov regularizations for parameter identification in a parabolic-elliptic system, namely the identification of a distributed diffusion coefficient in a coupled elliptic-parabolic system. In particular, they show convergence rates of Lp-H1 (variational) regularization for the application under consideration via the use and verification of certain source and nonlinearity conditions. In computational practice, the Lp norm with p close to one is often used as a substitute for the actually sparsity promoting L1 norm. In Norm sensitivity of sparsity regularization with respect to p, Kamil S Kazimierski, Peter Maass and Robin Strehlow consider the question of how sensitive the Tikhonov regularized solution is with respect to p. They do so by computing the derivative via the implicit function theorem, particularly at the crucial value, p=1. 
Another iterative regularization method in Banach space is considered by Qinian Jin and Linda Stals in Nonstationary iterated Tikhonov regularization for ill-posed problems in Banach spaces. Using a variational formulation and under some smoothness and convexity assumption on the preimage space, they extend the convergence analysis of the well-known iterative Tikhonov method for linear problems in Hilbert space to a more general Banach space framework. Systems of linear or nonlinear operators can be efficiently treated by cyclic iterations, thus several variants of gradient and Newton-type Kaczmarz methods have already been studied in the Hilbert space setting. Antonio Leitão and M Marques Alves in their paper On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces carry out an extension to Banach spaces for the fundamental Landweber version. The impact of perturbations in the evaluation of the forward operator and its derivative on the convergence behaviour of regularization methods is a practically and highly relevant issue. It is treated in the paper Convergence rates analysis of Tikhonov regularization for nonlinear ill-posed problems with noisy operators by Shuai Lu and Jens Flemming for variational regularization of nonlinear problems in Banach spaces. In The approximate inverse in action: IV. Semi-discrete equations in a Banach space setting, Thomas Schuster, Andreas Rieder and Frank Schöpfer extend the concept of approximate inverse to the practically and highly relevant situation of finitely many measurements and a general smooth and convex Banach space as preimage space. They devise two approaches for computing the reconstruction kernels required in the method and provide convergence and regularization results. 
Frank Werner and Thorsten Hohage in Convergence rates in expectation for Tikhonov-type regularization of inverse problems with Poisson data prove convergence rates results for variational regularization with general convex regularization term and the Kullback-Leibler distance as data fidelity term by combining a new result on Poisson distributed data with a deterministic rates analysis. Finally, we would like to thank the Inverse Problems team, especially Joanna Evangelides and Chris Wileman, for their extraordinarily smooth and productive cooperation, as well as Alfred K Louis for his kind support of our initiative.
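For orientation, the classical Hilbert-space Landweber iteration that several of these contributions generalize to Banach spaces is x_{k+1} = x_k + ω A*(y − A x_k); a minimal numerical sketch (the matrix and data are illustrative):

```python
import numpy as np

def landweber(A, y, omega, n_iter):
    """Classical Landweber iteration for the linear problem A x = y.
    Converges for 0 < omega < 2 / ||A||^2 (largest singular value squared)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + omega * A.T @ (y - A @ x)
    return x

# Illustrative well-posed example; for ill-posed problems the iteration
# index itself acts as the regularization parameter (early stopping).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
y = np.array([2.0, 3.0])
x = landweber(A, y, omega=0.4, n_iter=200)   # approaches [1, 3]
```

In the Banach-space extensions discussed above, the adjoint step is replaced by duality mappings, which is what makes the analysis substantially more involved.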
Anomalous Protein-Protein Interactions in Multivalent Salt Solution.
Pasquier, Coralie; Vazdar, Mario; Forsman, Jan; Jungwirth, Pavel; Lund, Mikael
2017-04-13
The stability of aqueous protein solutions is strongly affected by multivalent ions, which induce ion-ion correlations beyond the scope of classical mean-field theory. Using all-atom molecular dynamics (MD) and coarse-grained Monte Carlo (MC) simulations, we investigate the interaction between a pair of protein molecules in 3:1 electrolyte solution. In agreement with available experimental findings of "reentrant protein condensation", we observe an anomalous trend in the protein-protein potential of mean force with increasing electrolyte concentration in the order: (i) double-layer repulsion, (ii) ion-ion correlation attraction, (iii) overcharge repulsion, and in excess of 1:1 salt, (iv) non-Coulombic attraction. To efficiently sample configurational space we explore hybrid continuum solvent models, applicable to many-protein systems, where weakly coupled ions are treated implicitly, while strongly coupled ones are treated explicitly. Good agreement is found with the primitive model of electrolytes, as well as with atomic models of protein and solvent.
NASA Astrophysics Data System (ADS)
Penkov, V. B.; Levina, L. V.; Novikova, O. S.; Shulmin, A. S.
2018-03-01
Herein we propose a methodology for structuring a full parametric analytical solution to problems featuring elastostatic media based on state-of-the-art computing facilities that support computerized algebra. The methodology includes: direct and reverse application of P-Theorem; methods of accounting for physical properties of media; accounting for variable geometrical parameters of bodies, parameters of boundary states, independent parameters of volume forces, and remote stress factors. An efficient tool to address the task is the sustainable method of boundary states originally designed for the purposes of computerized algebra and based on the isomorphism of Hilbertian spaces of internal states and boundary states of bodies. We performed full parametric solutions of basic problems featuring a ball with a nonconcentric spherical cavity, a ball with a near-surface flaw, and an unlimited medium with two spherical cavities.
Comparative genomics meets topology: a novel view on genome median and halving problems.
Alexeev, Nikita; Avdeyev, Pavel; Alekseyev, Max A
2016-11-11
Genome median and genome halving are combinatorial optimization problems that aim at reconstruction of ancestral genomes by minimizing the number of evolutionary events between them and genomes of the extant species. While these problems have been widely studied in past decades, their solutions are often either not efficient or not biologically adequate. These shortcomings have been recently addressed by restricting the problems solution space. We show that the restricted variants of genome median and halving problems are, in fact, closely related. We demonstrate that these problems have a neat topological interpretation in terms of embedded graphs and polygon gluings. We illustrate how such interpretation can lead to solutions to these problems in particular cases. This study provides an unexpected link between comparative genomics and topology, and demonstrates advantages of solving genome median and halving problems within the topological framework.
NASA Astrophysics Data System (ADS)
Alahbakhshi, Masoud; Fallahi, Afsoon; Mohajerani, Ezeddin; Fathollahi, Mohammad-Reza; Taromi, Faramarz Afshar; Shahinpoor, Mohsen
2017-02-01
A novel approach to the reduction of graphene oxide (GO) solution for the fabrication of a highly transparent conductive electrode (TCE) is presented. Motivated by the outstanding mechanical and electronic properties of graphene, which offer practical applications in synthesizing composites as well as in fabricating various optoelectronic devices, conductive reduced graphene oxide (r-GO) thin films were prepared through a sequential chemical and thermal reduction process of homogeneously dispersed GO solutions. The conductivity and transparency of the r-GO thin films are regulated using hydroiodic acid (HI) as the reducing agent, followed by vacuum thermal annealing. The prepared r-GO is characterized by XRD, AFM, UV-vis and Raman spectroscopy. The AFM topographic images reveal a surface roughness of ∼11 nm, which decreases to less than 2 nm for the 4 mg/mL solution. Moreover, XRD analysis and Raman spectra substantiate that the interlayer spacing between r-GO layers is reduced dramatically and that the electronic conjugation is ameliorated after sequential treatment with the HI chemical agent and 700 °C thermal annealing. The resulting r-GO transparent electrode, having satisfactory transparency, acceptable conductivity and a suitable work function, has been exploited as the anode in an organic light-emitting diode (OLED). The maximum luminance efficiency and maximum power efficiency reached 4.2 cd/A and 0.83 lm/W, respectively. We believe that by optimizing the hole density, sheet resistance, transparency and surface morphology of the r-GO anodes, the device efficiencies can be increased remarkably further.
Freeing Water from Viruses and Bacteria
NASA Technical Reports Server (NTRS)
2004-01-01
Four years ago, Argonide Corporation, a company focused on the research, production, and marketing of specialty nano materials, was seeking to develop applications for its NanoCeram[R] fibers. Only 2 nanometers in diameter, these nano aluminum oxide fibers possessed unusual bio-adhesive properties. When formulated into a filter material, the electropositive fibers attracted and retained electro-negative particles such as bacteria and viruses in water-based solutions. This technology caught the interest of NASA as a possible solution for improved water filtration in space cabins. NASA's Johnson Space Center awarded Sanford, Florida-based Argonide a Phase I Small Business Innovation Research (SBIR) contract to determine the feasibility of using the company's filter for purifying recycled space cabin water. Since viruses and bacteria can be carried aboard space cabins by space crews, the ability to detect and remove these harmful substances is a concern for NASA. The Space Agency also desired an improved filter to polish the effluent from condensed and waste water, producing potable drinking water. During its Phase I partnership with NASA, Argonide developed a laboratory-size filter capable of removing greater than 99.9999 percent of bacteria and viruses from water at flow rates more than 200 times faster than virus-rated membranes that remove particles by sieving. Since the new filter's pore size is rather large compared to other membranes, it is also less susceptible to clogging by small particles. In September 2002, Argonide began a Phase II SBIR project with Johnson to develop a full-size cartridge capable of serving a full space crew. This effort, which is still ongoing, enabled the company to demonstrate that its filter media is an efficient absorbent for DNA and RNA.
System dynamics and simulation of LSS
NASA Technical Reports Server (NTRS)
Ryan, R. F.
1978-01-01
Large Space Structures have many unique problems arising from mission objectives and the resulting configuration. Inherent in these configurations is a strong coupling among several of the designing disciplines. In particular, the coupling between structural dynamics and control is a key design consideration. The solution to these interactive problems requires efficient and accurate analysis, simulation and test techniques, and properly planned and conducted design trade studies. The discussion presented deals with these subjects and concludes with a brief look at some NASA capabilities which can support these technology studies.
Recent update of the RPLUS2D/3D codes
NASA Technical Reports Server (NTRS)
Tsai, Y.-L. Peter
1991-01-01
The development of the RPLUS2D/3D codes is summarized. These codes utilize LU algorithms to solve chemical non-equilibrium flows in a body-fitted coordinate system. The motivation behind the development of these codes is the need to numerically predict chemical non-equilibrium flows for the National AeroSpace Plane Program. Recent improvements include vectorization method, blocking algorithms for geometric flexibility, out-of-core storage for large-size problems, and an LU-SW/UP combination for CPU-time efficiency and solution quality.
Dynamical stability of slip-stacking particles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Eldred, Jeffrey; Zwaska, Robert
2014-09-01
We study the stability of particles in slip-stacking configuration, used to nearly double proton beam intensity at Fermilab. We introduce universal area factors to calculate the available phase space area for any set of beam parameters without individual simulation. We find perturbative solutions for stable particle trajectories. We establish Booster beam quality requirements to achieve 97% slip-stacking efficiency. We show that slip-stacking dynamics directly correspond to the driven pendulum and to the system of two standing-wave traps moving with respect to each other.
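The driven-pendulum correspondence noted above can be illustrated with a toy single-particle model: a pendulum perturbed by a second, frequency-shifted wave (all parameters are illustrative, not the Fermilab machine settings):

```python
import math

def track(phi, p, eps, omega_d, dt, n_steps):
    """Leapfrog (kick-drift-kick) integration of a driven pendulum
    phi'' = -sin(phi) + eps * sin(omega_d * t), a toy stand-in for
    single-particle longitudinal dynamics during slip-stacking."""
    t = 0.0
    for _ in range(n_steps):
        p += 0.5 * dt * (-math.sin(phi) + eps * math.sin(omega_d * t))
        phi += dt * p
        t += dt
        p += 0.5 * dt * (-math.sin(phi) + eps * math.sin(omega_d * t))
    return phi, p

# A particle started deep inside the undriven bucket stays bounded
# for weak, off-resonance driving (illustrative parameters).
phi_f, p_f = track(phi=0.1, p=0.0, eps=0.05, omega_d=3.0, dt=0.01, n_steps=20000)
```

Scanning the driving strength and frequency in such a toy model gives the qualitative picture behind the abstract: near the separatrix, trajectories destabilize first, which is what shrinks the usable phase space area.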
Damping seals for turbomachinery
NASA Technical Reports Server (NTRS)
Vonpragenau, G. L.
1985-01-01
Rotor whirl stabilization of high performance turbomachinery which operates at supercritical speed is discussed. Basic whirl driving forces are reviewed. Stabilization and criteria are discussed. Damping seals are offered as a solution to whirl and high vibration problems. Concept, advantages, retrofitting, and limits of damping seals are explained. Dynamic and leakage properties are shown to require a rough stator surface for stability and efficiency. Typical seal characteristics are given for the case of the high pressure oxidizer turbopump of the Space Shuttle. Ways of implementation and bearing load effects are discussed.
NASA Technical Reports Server (NTRS)
Yarrow, Maurice; Vastano, John A.; Lomax, Harvard
1992-01-01
Generic shapes are subjected to pulsed plane waves of arbitrary shape. The resulting scattered electromagnetic fields are determined analytically. These fields are then computed efficiently at field locations for which numerically determined EM fields are required. Of particular interest are the pulsed waveform shapes typically utilized by radar systems. The results can be used to validate the accuracy of finite difference time domain Maxwell's equations solvers. A two-dimensional solver which is second- and fourth-order accurate in space and fourth-order accurate in time is examined. Dielectric media properties are modeled by a ramping technique which simplifies the associated gridding of body shapes. The attributes of the ramping technique are evaluated by comparison with the analytic solutions.
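For context, a one-dimensional Yee-scheme update of the kind on which such finite difference time domain solvers are built can be sketched as follows (free space, normalized units, and the "magic" Courant number of one; an illustrative sketch only, not the second-/fourth-order 2D solver examined in the paper):

```python
import math

def fdtd_1d(n_cells, n_steps, src_pos):
    """1D FDTD (Yee) update for Ez/Hy in free space, normalized so the
    Courant number is 1 (dt = dx/c). A soft Gaussian source drives Ez."""
    ez = [0.0] * n_cells
    hy = [0.0] * n_cells
    for t in range(n_steps):
        for k in range(n_cells - 1):          # update H from the curl of E
            hy[k] += ez[k + 1] - ez[k]
        for k in range(1, n_cells):           # update E from the curl of H
            ez[k] += hy[k] - hy[k - 1]
        ez[src_pos] += math.exp(-0.5 * ((t - 30) / 8.0) ** 2)
    return ez

# The Gaussian pulse splits and travels one cell per step in each direction.
ez = fdtd_1d(n_cells=200, n_steps=120, src_pos=100)
```

Validation against analytic solutions, as the paper describes, amounts to comparing such numerically propagated fields with the exact scattered field at chosen observation points.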
ANNIT - An Efficient Inversion Algorithm based on Prediction Principles
NASA Astrophysics Data System (ADS)
Růžek, B.; Kolář, P.
2009-04-01
The solution of inverse problems is an important task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computing facilities are making great technical progress. The development of new and efficient algorithms and computer codes for both forward and inverse modeling therefore remains topical. ANNIT contributes to this stream: it is a tool for the efficient solution of a set of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p, and the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G does exist; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made by using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archived models are re-used in a suitable way, and thus the number of forward evaluations is minimized. ANNIT is now implemented in both MATLAB and SCILAB. 
Numerical tests show good performance of the algorithm. Both versions and the documentation are available on the Internet and can be downloaded freely. The goal of this presentation is to offer the algorithm and computer codes to anybody interested in the solution of inverse problems.
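Steps (i)-(iii) above can be sketched for the linear-regression variant: evaluate the forward mapping on a population of models, fit an approximate inverse mapping d → p by least squares, and predict a candidate solution (the toy forward model below is an assumption for illustration, not part of ANNIT):

```python
import numpy as np

def forward(p):
    """Toy forward problem F(p) = d (mildly nonlinear)."""
    return np.array([p[0] + 0.1 * p[1]**2, p[1] + 0.1 * p[0]**2])

rng = np.random.default_rng(0)

# (i) population of models covering the model space
P = rng.uniform(-1.0, 1.0, size=(200, 2))
D = np.array([forward(p) for p in P])

# (ii) linear-regression approximation of the inverse mapping d -> p
# (least squares with an intercept column)
X = np.hstack([D, np.ones((len(D), 1))])
coef, *_ = np.linalg.lstsq(X, P, rcond=None)

# (iii) predict a candidate model for observed data d_obs
d_obs = forward(np.array([0.3, -0.2]))
p_candidate = np.append(d_obs, 1.0) @ coef
```

In an iterative scheme of the kind described, the candidate would be evaluated with the forward mapping, added to the archive, and the population refined around it in the next cycle.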
A FAST ITERATIVE METHOD FOR SOLVING THE EIKONAL EQUATION ON TRIANGULATED SURFACES*
Fu, Zhisong; Jeong, Won-Ki; Pan, Yongsheng; Kirby, Robert M.; Whitaker, Ross T.
2012-01-01
This paper presents an efficient, fine-grained parallel algorithm for solving the Eikonal equation on triangular meshes. The Eikonal equation, and the broader class of Hamilton–Jacobi equations to which it belongs, have a wide range of applications from geometric optics and seismology to biological modeling and analysis of geometry and images. The ability to solve such equations accurately and efficiently provides new capabilities for exploring and visualizing parameter spaces and for solving inverse problems that rely on such equations in the forward model. Efficient solvers on state-of-the-art, parallel architectures require new algorithms that are not, in many cases, optimal, but are better suited to synchronous updates of the solution. In previous work [W. K. Jeong and R. T. Whitaker, SIAM J. Sci. Comput., 30 (2008), pp. 2512–2534], the authors proposed the fast iterative method (FIM) to efficiently solve the Eikonal equation on regular grids. In this paper we extend the fast iterative method to solve Eikonal equations efficiently on triangulated domains on the CPU and on parallel architectures, including graphics processors. We propose a new local update scheme that provides solutions of first-order accuracy for both architectures. We also propose a novel triangle-based update scheme and its corresponding data structure for efficient irregular data mapping to parallel single-instruction multiple-data (SIMD) processors. We provide detailed descriptions of the implementations on a single CPU, a multicore CPU with shared memory, and SIMD architectures with comparative results against state-of-the-art Eikonal solvers. PMID:22641200
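On a regular grid (the setting of the authors' earlier work cited above; the triangulated extension is more involved), the fast iterative method can be sketched as: keep an active set of nodes, update each with the upwind Godunov scheme, and propagate activity to neighbours whose values can still improve. A minimal sketch, assuming unit speed:

```python
import math

def fim_eikonal(nx, ny, h, sources):
    """Fast-iterative-method-style solver for |grad u| = 1 on an
    nx-by-ny grid with spacing h. 'sources' is a list of (i, j) seeds."""
    INF = float("inf")
    u = [[INF] * ny for _ in range(nx)]
    for (i, j) in sources:
        u[i][j] = 0.0

    def solve(i, j):
        # Godunov upwind update from the smallest x- and y-neighbours.
        a = min(u[i - 1][j] if i > 0 else INF, u[i + 1][j] if i < nx - 1 else INF)
        b = min(u[i][j - 1] if j > 0 else INF, u[i][j + 1] if j < ny - 1 else INF)
        if abs(a - b) >= h:
            return min(a, b) + h
        return 0.5 * (a + b + math.sqrt(2 * h * h - (a - b) ** 2))

    def neighbours(i, j):
        if i > 0: yield (i - 1, j)
        if i < nx - 1: yield (i + 1, j)
        if j > 0: yield (i, j - 1)
        if j < ny - 1: yield (i, j + 1)

    active = {n for s in sources for n in neighbours(*s)}
    while active:
        next_active = set()
        for (i, j) in active:
            new = solve(i, j)
            if new < u[i][j] - 1e-12:      # value improved: keep propagating
                u[i][j] = new
                next_active.update(neighbours(i, j))
        active = next_active
    return u

u = fim_eikonal(21, 21, 1.0, [(0, 0)])   # distance map from one corner
```

All nodes in the active set can be updated independently within a sweep, which is what makes the method amenable to the SIMD and GPU implementations the paper targets.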
NASA Astrophysics Data System (ADS)
Laumond, Jean-Paul
2016-07-01
Grasping an object is a matter of first moving a prehensile organ to some position in the world, and then managing the contact relationship between the prehensile organ and the object. Once the contact relationship has been established and made stable, the object is part of the body and can move in the world. Like any action, the action of grasping is ontologically anchored in physical space, while the correlative movement originates in the space of the body. Robots, like any living system, access physical space only indirectly, through sensors and motors. Sensors and motors constitute the space of the body, where homeostasis takes place. Physical space, sensor space, and motor space constitute a triangulation, which is the locus of action embodiment, i.e. the locus of operations allowing the fundamental inversion between world-centered and body-centered frames. With reference to these three fundamental spaces, geometry appears as the best abstraction to capture the nature of action-driven movements. Indeed, a particular geometry is captured by a particular group of transformations of the points of a space, such that every point or direction in the space can be transformed by an element of the group into every other point or direction. Quoting the mathematician Poincaré, the issue is not to find the truest geometry but the most practical one to account for the complexity of the world [1]. Geometry is then the language fostering the dialogue between neurophysiology and engineering about natural and artificial movement science and technology. Evolution has found amazing solutions that allow organisms to rapidly and efficiently manage the relationship between their body and the world [2]. It is therefore natural for roboticists to consider taking inspiration from these natural solutions, while contributing to a better understanding of their origin.
NASA Astrophysics Data System (ADS)
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
NASA Astrophysics Data System (ADS)
Guner, Ozkan; Korkmaz, Alper; Bekir, Ahmet
2017-02-01
Dark soliton solutions for the space-time fractional Sharma-Tasso-Olver and space-time fractional potential Kadomtsev-Petviashvili equations are determined by using the properties of the modified Riemann-Liouville derivative and the fractional complex transform. After reducing both equations to nonlinear ODEs with constant coefficients, the tanh ansatz is substituted into the resultant nonlinear ODEs. The coefficients of the solutions in the ansatz are calculated by computer algebra. Two different solutions are obtained for the Sharma-Tasso-Olver equation, while only one solution is obtained for the potential Kadomtsev-Petviashvili equation. The solution profiles are demonstrated in 3D plots over finite domains of time and space.
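The tanh-ansatz workflow followed above is easy to illustrate with a computer algebra system on a simpler model equation. The sketch below substitutes u = A tanh(B ξ) into the reduced ODE u'' + 2u − 2u³ = 0 (a standard dark-soliton model, not the fractional STO or potential KP reductions themselves), collects powers of tanh, and solves the resulting algebraic system for the ansatz coefficients; the equation choice and symbol names are illustrative.

```python
import sympy as sp

xi, t = sp.symbols('xi t')
A, B = sp.symbols('A B', nonzero=True)

u = A * sp.tanh(B * xi)                      # tanh ansatz

# Reduced model ODE u'' + 2u - 2u^3 = 0; SymPy expresses d/dxi tanh(B*xi)
# via tanh itself, so the residual is a polynomial in tanh(B*xi).
residual = sp.expand(sp.diff(u, xi, 2) + 2 * u - 2 * u ** 3)

# Collect coefficients of each power of tanh and solve them to zero.
poly = sp.Poly(residual.subs(sp.tanh(B * xi), t), t)
solutions = sp.solve(poly.coeffs(), [A, B], dict=True)

# The admissible coefficients satisfy A^2 = B^2 = 1, e.g. u = tanh(xi):
check = sp.simplify(residual.subs({A: 1, B: 1}))
```

The same mechanics (reduce to an ODE, substitute the ansatz, zero out coefficients of each tanh power) carry over to the fractional equations of the paper, with heavier algebra.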
Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng
2016-01-01
To address the problem of searching the protein conformational space in ab-initio protein structure prediction, a novel method using abstract convex underestimation (ACUE), built on the framework of an evolutionary algorithm, was proposed. Computing such conformations, essential to associate structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. Consequently, the dimension of the protein conformational space should be reduced to a proper level. In this paper, the original high-dimensional conformational space was converted into a feature space of considerably reduced dimension by a feature extraction technique, and the underestimate space was constructed according to abstract convex theory. The entropy effect caused by searching in the high-dimensional conformational space can thus be avoided through this conversion. A tight lower-bound estimate was obtained to guide the search direction, and invalid search regions in which the global optimal solution is not located can be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is used to judge whether a conformation is worth exploring, reducing evaluation time and thereby making the computational cost lower and the search process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling in the conformational space. The proposed method provides a novel technique for the search problem of protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with ItFix, HEA, Rosetta and the developed method LEDE without underestimate information.
Test results show that the ACUE method can more rapidly and more efficiently obtain the near-native protein structure.
Smart LED lighting for major reductions in power and energy use for plant lighting in space
NASA Astrophysics Data System (ADS)
Poulet, Lucie
Launching or resupplying food, oxygen, and water into space for long-duration, crewed missions to distant destinations, such as Mars, is currently impossible. Bioregenerative life-support systems under development worldwide involving photoautotrophic organisms offer a solution to the food dilemma. However, using traditional Earth-based lighting methods, growth of food crops consumes copious energy, and since sunlight will not always be available at different space destinations, efficient electric lighting solutions are badly needed to reduce the Equivalent System Mass (ESM) of life-support infrastructure to be launched and transported to future space destinations with sustainable human habitats. The scope of the present study was to demonstrate that using LEDs coupled to plant detection, and optimizing spectral and irradiance parameters of LED light, the model crop lettuce (
Integrability and Linear Stability of Nonlinear Waves
NASA Astrophysics Data System (ADS)
Degasperis, Antonio; Lombardo, Sara; Sommacal, Matteo
2018-03-01
It is well known that the linear stability of solutions of integrable 1+1-dimensional partial differential equations can be very efficiently investigated by means of spectral methods. We present here a direct construction of the eigenmodes of the linearized equation which makes use only of the associated Lax pair, with no reference to spectral data and boundary conditions. This local construction is given in the general N×N matrix scheme so as to be applicable to a large class of integrable equations, including the multicomponent nonlinear Schrödinger system and the multiwave resonant interaction system. The analytical and numerical computations involved in this general approach are detailed as an example for N=3 for the particular system of two coupled nonlinear Schrödinger equations in the defocusing, focusing and mixed regimes. The instabilities of the continuous wave solutions are fully discussed in the entire parameter space of their amplitudes and wave numbers. By defining and computing the spectrum in the complex plane of the spectral variable, the eigenfrequencies are explicitly expressed. According to their topological properties, the complete classification of these spectra in the parameter space is presented and graphically displayed. The continuous wave solutions are found to be linearly unstable for a generic choice of the coupling constants.
Accurate chemical master equation solution using multi-finite buffers
Cao, Youfang; Terebus, Anna; Liang, Jie
2016-06-29
Here, the discrete chemical master equation (dCME) provides a fundamental framework for studying stochasticity in mesoscopic networks. Because of the multiscale nature of many networks where reaction rates have a large disparity, directly solving dCMEs is intractable due to the exploding size of the state space. It is important to truncate the state space effectively with quantified errors, so accurate solutions can be computed. It is also important to know if all major probabilistic peaks have been computed. Here we introduce the accurate CME (ACME) algorithm for obtaining direct solutions to dCMEs. With multifinite buffers for reducing the state space by O(n!), exact steady-state and time-evolving network probability landscapes can be computed. We further describe a theoretical framework of aggregating microstates into a smaller number of macrostates by decomposing a network into independent aggregated birth and death processes and give an a priori method for rapidly determining steady-state truncation errors. The maximal sizes of the finite buffers for a given error tolerance can also be precomputed without costly trial solutions of dCMEs. We show exactly computed probability landscapes of three multiscale networks, namely, a 6-node toggle switch, an 11-node phage-lambda epigenetic circuit, and a 16-node MAPK cascade network, the latter two with no known solutions. We also show how probabilities of rare events can be computed from first-passage times, another class of unsolved problems challenging for simulation-based techniques due to large separations in time scales. Overall, the ACME method enables accurate and efficient solutions of the dCME for a large class of networks.
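On a miniature scale, the truncate-and-solve idea can be demonstrated with a single birth-death network: keep a finite buffer of copy numbers, assemble the generator, and solve for the exact steady state directly. The sketch below (rates and buffer size are illustrative, and the multi-buffer machinery of ACME is not reproduced) recovers the known Poisson steady state of a birth-death process.

```python
import numpy as np

def steady_state_birth_death(k, g, nmax):
    """Exact steady state of a truncated birth-death dCME.

    States n = 0..nmax with birth n -> n+1 at rate k and death
    n -> n-1 at rate g*n; the generator A is assembled on the finite
    buffer and the left null vector p (p @ A = 0, sum p = 1) is solved
    directly, in miniature analogy to the state-space truncation above.
    """
    m = nmax + 1
    A = np.zeros((m, m))
    for i in range(m):
        if i < nmax:
            A[i, i + 1] = k          # birth transition
        if i > 0:
            A[i, i - 1] = g * i      # death transition
        A[i, i] = -A[i].sum()        # rows sum to zero (probability conservation)
    # Solve p @ A = 0 with the normalisation sum(p) = 1 appended as an extra row.
    M = np.vstack([A.T, np.ones(m)])
    b = np.zeros(m + 1)
    b[-1] = 1.0
    p, *_ = np.linalg.lstsq(M, b, rcond=None)
    return p

p = steady_state_birth_death(k=2.0, g=1.0, nmax=30)   # ~ Poisson(2) on the buffer
```

The exact steady state of this model is Poisson with mean k/g, so the truncation error at nmax = 30 is negligible; choosing buffer sizes to guarantee such error bounds a priori is precisely what the ACME framework formalises.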
DOE Office of Scientific and Technical Information (OSTI.GOV)
FDTD simulation of EM wave propagation in 3-D media
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, T.; Tripp, A.C.
1996-01-01
A finite-difference, time-domain solution to Maxwell's equations has been developed for simulating electromagnetic wave propagation in 3-D media. The algorithm allows arbitrary electrical conductivity and permittivity variations within a model. The staggered grid technique of Yee is used to sample the fields. A new optimized second-order difference scheme is designed to approximate the spatial derivatives. Like the conventional fourth-order difference scheme, the optimized second-order scheme needs four discrete values to calculate a single derivative. However, the optimized scheme is accurate over a wider wavenumber range. Compared to the fourth-order scheme, the optimized scheme imposes stricter limitations on the time step sizes but allows coarser grids. The net effect is that the optimized scheme is more efficient in terms of computation time and memory requirement than the fourth-order scheme. The temporal derivatives are approximated by second-order central differences throughout. The Liao transmitting boundary conditions are used to truncate an open problem. A reflection coefficient analysis shows that this transmitting boundary condition works very well. However, it is subject to instability. A method that can be easily implemented is proposed to stabilize the boundary condition. The finite-difference solution is compared to closed-form solutions for conducting and nonconducting whole spaces and to an integral-equation solution for a 3-D body in a homogeneous half-space. In all cases, the finite-difference solutions are in good agreement with the other solutions. Finally, the use of the algorithm is demonstrated with a 3-D model. Numerical results show that both the magnetic field response and electric field response can be useful for shallow-depth and small-scale investigations.
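The staggered-grid (Yee) update at the heart of any FDTD scheme is compact enough to sketch in one dimension. The toy below uses normalised units with the "magic" time step c·dt = dz and the textbook second-order spatial difference, not the paper's optimized operator or Liao boundaries (hard conducting edges stand in for them); a soft Gaussian source launches two counter-propagating pulses.

```python
import numpy as np

def fdtd_1d(nz=401, nt=150, src=200):
    """1-D vacuum FDTD on a staggered (Yee) grid at Courant number 1.

    Ex lives on integer nodes, Hy on half nodes between them; each loop
    iteration is one leapfrog step. The grid edges act as perfectly
    conducting walls (Ex pinned at 0).
    """
    Ex = np.zeros(nz)
    Hy = np.zeros(nz - 1)
    for n in range(nt):
        Hy += np.diff(Ex)                              # update H from curl E
        Ex[1:-1] += np.diff(Hy)                        # update E from curl H
        Ex[src] += np.exp(-((n - 30.0) / 10.0) ** 2)   # soft Gaussian source
    return Ex

Ex = fdtd_1d()
```

With the source at the grid centre the field stays mirror-symmetric, and at Courant number 1 the pulse fronts advance exactly one cell per step, which makes the scheme easy to sanity-check before adding materials or absorbing boundaries.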
Secomb, Timothy W.
2016-01-01
A novel theoretical method is presented for simulating the spatially resolved convective and diffusive transport of reacting solutes between microvascular networks and the surrounding tissues. The method allows for efficient computational solution of problems involving convection and non-linear binding of solutes in blood flowing through microvascular networks with realistic 3D geometries, coupled with transvascular exchange and diffusion and reaction in the surrounding tissue space. The method is based on a Green's function approach, in which the solute concentration distribution in the tissue is expressed as a sum of fields generated by time-varying distributions of discrete sources and sinks. As an example of the application of the method, the washout of an inert diffusible tracer substance from a tissue region perfused by a network of microvessels is simulated, showing its dependence on the solute's transvascular permeability and tissue diffusivity. Exponential decay of the washout concentration is predicted, with rate constants that are about 10–30% lower than the rate constants for a tissue cylinder model with the same vessel length, vessel surface area and blood flow rate per tissue volume. PMID:26443811
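The superposition step at the core of the Green's function approach is simple to sketch for the steady free-space case: the concentration field is a weighted sum of point-source kernels 1/(4πDr). The snippet below shows that step only, with illustrative geometry, strengths, and diffusivity; the paper's time-varying sources, vessel discretisation, transvascular exchange, and reaction terms are beyond this sketch.

```python
import numpy as np

def greens_field(points, sources, strengths, D):
    """Steady diffusion field from discrete point sources via Green's functions.

    The concentration at each evaluation point is the superposition of
    free-space kernels G(r) = 1 / (4*pi*D*r) weighted by source strengths.
    points: (m, 3) evaluation locations; sources: (s, 3) source locations.
    """
    c = np.zeros(len(points))
    for xs, q in zip(sources, strengths):
        r = np.linalg.norm(points - xs, axis=1)
        c += q / (4 * np.pi * D * r)
    return c
```

Because the field is linear in the source strengths, fitting those strengths to boundary conditions on the vessel walls reduces to a linear system, which is what makes the approach efficient for realistic network geometries.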
Advanced air distribution: improving health and comfort while reducing energy use.
Melikov, A K
2016-02-01
The indoor environment affects the health, comfort, and performance of building occupants. The energy used for heating, cooling, ventilating, and air conditioning of buildings is substantial. Ventilation based on total-volume air distribution in spaces is not always an efficient way to provide a high-quality indoor environment together with low energy consumption. Advanced air distribution, designed to supply clean air where, when, and in the amount needed, makes it possible to efficiently achieve thermal comfort, control exposure to contaminants, provide high-quality air for breathing, and minimize the risk of airborne cross-infection, while reducing energy use. This study justifies the need to improve present air distribution design in occupied spaces and, in general, the need for a paradigm shift from the design of collective environments to the design of individually controlled environments. The focus is on advanced air distribution in spaces, its guiding principles, and its advantages and disadvantages. Examples of advanced air distribution solutions in spaces for different uses, such as offices, hospital rooms, and vehicle compartments, are presented. The potential of advanced air distribution, and of the individually controlled macro-environment in general, for achieving shared values, that is, improved health, comfort, and performance, energy savings, reduction of healthcare costs, and improved well-being, is demonstrated. Performance criteria are defined and further research in the field is outlined. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
24-26 GHz radio-over-fiber and free-space optics for fifth-generation systems.
Bohata, Jan; Komanec, Matěj; Spáčil, Jan; Ghassemlooy, Zabih; Zvánovec, Stanislav; Slavík, Radan
2018-03-01
This Letter outlines radio-over-fiber combined with radio-over-free-space optics (RoFSO) and radio-frequency free-space transmission, which is of particular relevance for fifth-generation networks. Here, the 24-26 GHz frequency band is adopted to demonstrate a low-cost, compact, and energy-efficient solution based on a direct intensity modulation and direct detection scheme. For our proof-of-concept demonstration, we use 64-level quadrature amplitude modulation (64-QAM) with a 100 MHz bandwidth. We assess the link performance by exposing the RoFSO section to atmospheric turbulence conditions. Further, we show that the measured minimum error vector magnitude (EVM) is 4.7% and also verify that the proposed system with a free-space-optics link span of 100 m under strong turbulence can deliver an acceptable EVM of <9%, with signal-to-noise ratio levels of 22 dB and 10 dB with and without turbulence, respectively.
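The EVM figure of merit quoted above has a simple Monte-Carlo illustration: for a pure additive-white-Gaussian-noise channel, the RMS error vector magnitude normalised to average symbol power is 10^(−SNR/20), so an SNR around 22 dB corresponds to roughly 8% EVM, in the same range as the <9% threshold mentioned. The sketch below (ideal 64-QAM, AWGN only, no fibre or turbulence impairments) checks this numerically.

```python
import numpy as np

rng = np.random.default_rng(1)

def evm_64qam(snr_db, n=200_000):
    """RMS error vector magnitude of ideal 64-QAM in AWGN at a given SNR.

    Symbols are drawn from the square 64-QAM grid (levels -7..7 per axis,
    average power 42); complex Gaussian noise is scaled to the requested
    SNR, and EVM is the RMS error normalised to average symbol power.
    """
    levels = np.arange(-7, 8, 2)                     # 8 PAM levels per axis
    sym = rng.choice(levels, n) + 1j * rng.choice(levels, n)
    p_sig = np.mean(np.abs(sym) ** 2)                # average symbol power
    p_noise = p_sig / 10 ** (snr_db / 10)
    noise = np.sqrt(p_noise / 2) * (rng.standard_normal(n)
                                    + 1j * rng.standard_normal(n))
    rx = sym + noise
    return np.sqrt(np.mean(np.abs(rx - sym) ** 2) / p_sig)
```

Note that EVM conventions differ (normalisation to peak rather than average power is also common), so measured figures are only comparable when the convention is stated.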
Development of a gravity-independent wastewater bioprocessor for advanced life support in space
NASA Technical Reports Server (NTRS)
Nashashibi-Rabah, Majda; Christodoulatos, Christos; Korfiatis, George P.; Janes, H. W. (Principal Investigator)
2005-01-01
Operation of aerobic biological reactors in space is controlled by a number of challenging constraints, mainly stemming from mass transfer limitations and phase separation. Immobilized-cell packed-bed bioreactors, specially designed to function in the absence of gravity, offer a viable solution for the treatment of gray water generated in space stations and spacecrafts. A novel gravity-independent wastewater biological processor, capable of carbon oxidation and nitrification of high-strength aqueous waste streams, is presented. The system, consisting of a fully saturated pressurized packed bed and a membrane oxygenation module attached to an external recirculation loop, operated continuously for over one year. The system attained high carbon oxidation efficiencies often exceeding 90% and ammonia oxidation reaching approximately 60%. The oxygen supply module relies on hydrophobic, nonporous, oxygen selective membranes, in a shell and tube configuration, for transferring oxygen to the packed bed, while keeping the gaseous and liquid phases separated. This reactor configuration and operating mode render the system gravity-independent and suitable for space applications.
NASA Astrophysics Data System (ADS)
Markov, Detelin
2012-11-01
This paper presents an easy-to-understand procedure for predicting the time variation of indoor air composition in airtight occupied spaces during night periods. The mathematical model is based on the assumptions of homogeneity and perfect mixing of the indoor air, the ideal gas model for non-reacting gas mixtures, mass conservation equations for the entire system and for each species, a model for predicting the basal metabolic rate of humans, and a model for predicting the O2 consumption rate and the CO2 and H2O generation rates of breathing. The time variation of indoor air composition is predicted at constant indoor air temperature for three scenarios, based on the analytical solution of the mathematical model. The results reveal both the most probable scenario for indoor air composition variation in airtight occupied spaces and the cause of morning tiredness after sleeping in a modern energy-efficient space.
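The kind of single-zone mass balance underlying such a model admits a closed-form solution. The sketch below integrates V dC/dt = G + Q(C_out − C) analytically for CO2 in a nearly sealed bedroom; all numbers (room volume, per-person CO2 generation, leakage air-change rate) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def co2_night(t_h, V=30.0, n_people=1, G_per=0.015, ach=0.1,
              c_out=400.0, c0=400.0):
    """CO2 concentration (ppm) in a leaky sealed bedroom after t_h hours.

    Well-mixed single-zone balance V dC/dt = G + Q*(C_out - C) with
    constant source G (n_people * G_per m^3/h of CO2, converted to ppm)
    and leakage airflow Q = ach * V; solved analytically as an
    exponential approach to the steady state C_inf = C_out + G/Q.
    """
    G = n_people * G_per * 1e6       # CO2 source, ppm * m^3 / h
    Q = ach * V                      # leakage airflow, m^3 / h
    c_inf = c_out + G / Q            # steady-state concentration, ppm
    return c_inf + (c0 - c_inf) * np.exp(-Q * t_h / V)
```

With these assumed numbers the concentration climbs past 3000 ppm over an 8-hour night, the qualitative outcome the paper associates with morning tiredness in airtight energy-efficient rooms.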
Discontinuity minimization for omnidirectional video projections
NASA Astrophysics Data System (ADS)
Alshina, Elena; Zakharchenko, Vladyslav
2017-09-01
Advances in display technologies, both for head-mounted devices and television panels, demand resolution increases beyond 4K for the source signal in virtual reality video streaming applications. This poses a problem of content delivery through bandwidth-limited distribution networks. Considering the fact that the source signal covers the entire surrounding space, our investigation revealed that compression efficiency may fluctuate by 40% on average depending on the origin selected at the stage of conversion from 3D space to a 2D projection. Based on this knowledge, an origin selection algorithm for video compression applications has been proposed. Using a discontinuity entropy minimization function, the projection origin rotation may be chosen to provide optimal compression results. The outcome of this research may be applied across various video compression solutions for omnidirectional content.
NASA Astrophysics Data System (ADS)
Dolgov, S. V.; Smirnov, A. P.; Tyrtyshnikov, E. E.
2014-04-01
We consider numerical modeling of the Farley-Buneman instability in the Earth's ionospheric plasma. The ion behavior is governed by the kinetic Vlasov equation with the BGK collisional term in the four-dimensional phase space, and since a finite difference discretization on a tensor product grid is used, this equation becomes the most computationally challenging part of the scheme. To relax the complexity and memory consumption, an adaptive model reduction using the low-rank separation of variables, namely the Tensor Train format, is employed. The approach was verified via a prototype MATLAB implementation. Numerical experiments demonstrate the possibility of efficient separation of space and velocity variables, reducing the solution storage by a factor of the order of tens.
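The compression mechanism, the Tensor Train representation, can be sketched with the standard TT-SVD construction: sweep over the dimensions, reshaping and truncating an SVD at each step, so a d-way array becomes d small three-way cores and storage drops from prod(n_k) to sum of r_{k-1}·n_k·r_k. The toy below builds a 4-way array of TT-rank 2 and recovers it exactly; grid sizes are illustrative and no Vlasov physics is involved.

```python
import numpy as np

def tt_svd(a, eps=1e-10):
    """Decompose a d-way array into tensor-train cores via sequential SVDs.

    At step k the current unfolding is SVD-truncated at relative
    threshold eps; u becomes core k of shape (r_{k-1}, n_k, r_k) and
    s @ vt is carried forward. The last remainder is the final core.
    """
    shape = a.shape
    d = len(shape)
    cores, r = [], 1
    m = a.reshape(r * shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(m, full_matrices=False)
        rank = max(1, int(np.sum(s > eps * s[0])))
        cores.append(u[:, :rank].reshape(r, shape[k], rank))
        m = (s[:rank, None] * vt[:rank]).reshape(rank * shape[k + 1], -1)
        r = rank
    cores.append(m.reshape(r, shape[-1], 1))
    return cores

def tt_full(cores):
    """Contract TT cores back to the full array (for verification only)."""
    out = cores[0]
    for c in cores[1:]:
        out = np.tensordot(out, c, axes=([-1], [0]))
    return out[0, ..., 0]
```

For a 4-D Vlasov solution on an n^4 grid, TT-ranks of a few tens turn n^4 storage into O(4·n·r²), which is the "factor of the order of tens" reduction reported above.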
Electrostatically driven fog collection using space charge injection
Damak, Maher; Varanasi, Kripa K.
2018-01-01
Fog collection can be a sustainable solution to water scarcity in many regions around the world. Most proposed collectors are meshes that rely on inertial collision for droplet capture and are inherently limited by aerodynamics. We propose a new approach in which we introduce electrical forces that can overcome aerodynamic drag forces. Using an ion emitter, we introduce a space charge into the fog to impart a net charge to the incoming fog droplets and direct them toward a collector using an imposed electric field. We experimentally measure the collection efficiency on single wires, two-wire systems, and meshes and propose a physical model to quantify it. We identify the regimes of optimal collection and provide insights into designing effective fog harvesting systems. PMID:29888324
Phase-space finite elements in a least-squares solution of the transport equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drumm, C.; Fan, W.; Pautz, S.
2013-07-01
The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied to both neutral particle transport and also to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite elements approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-elements machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite elements capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradients algorithm. The phase-space finite elements capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field.
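The payoff of choosing continuous elements, a symmetric positive definite system, is that a preconditioned conjugate gradient solver applies directly. A minimal Jacobi-preconditioned CG is sketched below on a 1-D Laplacian standing in for the least-squares system matrix; the parallel, Sceptre-integrated solver of the paper is of course far beyond this sketch.

```python
import numpy as np

def pcg(A, b, m_inv_diag, tol=1e-10, maxit=500):
    """Jacobi-preconditioned conjugate gradients for an SPD system A x = b.

    m_inv_diag holds the inverse of diag(A); the loop is the textbook
    PCG recurrence, stopping when the residual norm falls below
    tol * ||b||.
    """
    x = np.zeros_like(b)
    r = b - A @ x
    z = m_inv_diag * r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        z = m_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD 1-D Laplacian
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
```

Symmetry of A is essential here; a discontinuous-element discretisation that broke it would force a fall-back to GMRES-type solvers with higher memory cost.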
Controlled ecological life-support system - Use of plants for human life-support in space
NASA Technical Reports Server (NTRS)
Chamberland, D.; Knott, W. M.; Sager, J. C.; Wheeler, R.
1992-01-01
Scientists and engineers within NASA are conducting research which will lead to development of advanced life-support systems that utilize higher plants in a unique approach to solving long-term life-support problems in space. This biological solution to life-support, Controlled Ecological Life-Support System (CELSS), is a complex, extensively controlled, bioengineered system that relies on plants to provide the principal elements from gas exchange and food production to potable water reclamation. Research at John F. Kennedy Space Center (KSC) is proceeding with a comprehensive investigation of the individual parts of the CELSS system at a one-person scale in an approach called the Breadboard Project. Concurrently a relatively new NASA sponsored research effort is investigating plant growth and metabolism in microgravity, innovative hydroponic nutrient delivery systems, and use of highly efficient light emitting diodes for artificial plant illumination.
SPIRiT: Iterative Self-consistent Parallel Imaging Reconstruction from Arbitrary k-Space
Lustig, Michael; Pauly, John M.
2010-01-01
A new approach to autocalibrating, coil-by-coil parallel imaging reconstruction is presented. It is a generalized reconstruction framework based on self-consistency. The reconstruction problem is formulated as an optimization that yields the solution most consistent with the calibration and acquisition data. The approach is general and can accurately reconstruct images from arbitrary k-space sampling patterns. The formulation can flexibly incorporate additional image priors such as off-resonance correction and regularization terms of the kind that appear in compressed sensing. Several iterative strategies to solve the posed reconstruction problem in both the image and k-space domains are presented. These are based on projection onto convex sets (POCS) and conjugate gradient (CG) algorithms. Phantom and in-vivo studies demonstrate efficient reconstructions from undersampled Cartesian and spiral trajectories. Reconstructions that include off-resonance correction and nonlinear ℓ1-wavelet regularization are also demonstrated. PMID:20665790
Application of Gauss's law space-charge limited emission model in iterative particle tracking method
NASA Astrophysics Data System (ADS)
Altsybeyev, V. V.; Ponomarev, V. A.
2016-11-01
The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We propose applying an emission model based on Gauss's law for the calculation of the space-charge-limited current density distribution within this method. Based on the presented emission model, we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and inexpensive numerical simulations of different vacuum sources with curved emitting surfaces, also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
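The standard analytical benchmark for such space-charge-limited emission models is the Child-Langmuir law for a planar vacuum diode, J = (4ε0/9)·sqrt(2e/m)·V^{3/2}/d². The snippet below evaluates it for an illustrative 10 kV, 1 cm electron gap; comparisons against closed forms of this kind are what validate Gauss's-law emission algorithms in simple geometries.

```python
import math

# Physical constants (SI)
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m
E_CHG = 1.602176634e-19      # elementary charge, C
M_E = 9.1093837015e-31       # electron mass, kg

def child_langmuir(v, d):
    """Space-charge-limited current density (A/m^2) of a planar vacuum diode.

    Child-Langmuir law: J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2,
    for gap voltage v (V) and electrode spacing d (m), electron emission.
    """
    return (4 * EPS0 / 9) * math.sqrt(2 * E_CHG / M_E) * v ** 1.5 / d ** 2

# Illustrative case: a 10 kV gap of 1 cm gives ~2.3e4 A/m^2
j = child_langmuir(10e3, 0.01)
```

The characteristic V^{3/2} scaling is also a convenient regression test when exercising an emission model across operating voltages.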
Inflated speedups in parallel simulations via malloc()
NASA Technical Reports Server (NTRS)
Nicol, David M.
1990-01-01
Discrete-event simulation programs make heavy use of dynamic memory allocation in order to support simulation's very dynamic space requirements. When programming in C one is likely to use the malloc() routine. However, a parallel simulation which uses the standard Unix System V malloc() implementation may achieve an overly optimistic speedup, possibly superlinear. An alternate implementation provided on some (but not all) systems can avoid the speedup anomaly, but at the price of significantly reduced available free space. This is especially severe on most parallel architectures, which tend not to support virtual memory. It is shown how a simply implemented user-constructed interface to malloc() can both avoid artificially inflated speedups and make efficient use of the dynamic memory space. The interface simply caches blocks on the basis of their size. The problem is demonstrated empirically, and the effectiveness of the solution is shown both empirically and analytically.
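The interface described, caching freed blocks by size so the underlying allocator is only touched on cold requests, can be sketched in a few lines. Python stands in for C here purely to show the caching policy; class and counter names are illustrative, not from the paper.

```python
class SizeCachedPool:
    """Size-bucketed cache of buffers in front of the real allocator.

    Freed blocks are kept on per-size free lists and handed back on the
    next request of the same size, so the underlying allocator (and any
    locking or sbrk traffic behind it) is only hit on cold requests.
    """

    def __init__(self):
        self.free_lists = {}          # size -> stack of cached buffers
        self.fresh = 0                # allocations that went to the system
        self.reused = 0               # allocations served from the cache

    def alloc(self, size):
        bucket = self.free_lists.get(size)
        if bucket:                    # warm path: reuse a cached block
            self.reused += 1
            return bucket.pop()
        self.fresh += 1               # cold path: real allocation
        return bytearray(size)

    def free(self, buf):
        self.free_lists.setdefault(len(buf), []).append(buf)
```

Because event records in a simulation come in a handful of fixed sizes, the warm path dominates after start-up, which is why such a thin interface both stabilises timing (avoiding the inflated-speedup artifact) and keeps fragmentation low.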
Estimating the size of the solution space of metabolic networks
Braunstein, Alfredo; Mulet, Roberto; Pagnani, Andrea
2008-01-01
Background Cellular metabolism is one of the most investigated systems of biological interactions. While the topological nature of individual reactions and pathways in the network is quite well understood, there is still a lack of comprehension regarding the global functional behavior of the system. In the last few years flux-balance analysis (FBA) has been the most successful and widely used technique for studying metabolism at the system level. This method strongly relies on the hypothesis that the organism maximizes an objective function. However, only under very specific biological conditions (e.g. maximization of biomass for E. coli in a rich nutrient medium) does the cell seem to obey such an optimization law. A more refined analysis not assuming extremization remains an elusive task for large metabolic systems due to algorithmic limitations. Results In this work we propose a novel algorithmic strategy that provides an efficient characterization of the whole set of stable fluxes compatible with the metabolic constraints. Using a technique derived from the fields of statistical physics and information theory, we designed a message-passing algorithm to estimate the size of the affine space containing all possible steady-state flux distributions of metabolic networks. The algorithm, based on the well-known Bethe approximation, can be used to approximately compute the volume of a non-full-dimensional convex polytope in high dimensions. We first compare the accuracy of the predictions with an exact algorithm on small random metabolic networks. We also verify that the predictions of the algorithm closely match those of Monte Carlo based methods in the case of the red blood cell metabolic network. Then we test the effect of gene knock-outs on the size of the solution space in the case of E. coli central metabolism. Finally, we analyze the statistical properties of the average fluxes of the reactions in the E. coli metabolic network.
Conclusion We propose a novel, efficient, distributed algorithmic strategy to estimate the size and shape of the affine space of a non-full-dimensional convex polytope in high dimensions. The method is shown to obtain results quantitatively and qualitatively compatible with those of standard algorithms (where this comparison is possible) while remaining efficient in the analysis of large biological systems, where exact deterministic methods experience an explosion in algorithmic time. The algorithm we propose can be considered as an alternative to Monte Carlo sampling methods. PMID:18489757
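The solution space in question is the flux polytope {v : S·v = 0, l ≤ v ≤ u} defined by the stoichiometric matrix S and flux bounds. As a minimal sketch of the Monte Carlo sampling alternative mentioned in the conclusion — with an invented toy stoichiometric matrix, not a real network — a hit-and-run sampler might look like:

```python
import numpy as np

def hit_and_run(S, lo, hi, v0, n_samples, rng):
    """Sample flux vectors v with S @ v = 0 and lo <= v <= hi."""
    # Orthonormal basis of the null space of S: directions keeping S @ v = 0.
    _, _, Vt = np.linalg.svd(S)
    null = Vt[np.linalg.matrix_rank(S):].T
    v = v0.copy()
    samples = []
    for _ in range(n_samples):
        d = null @ rng.standard_normal(null.shape[1])  # random feasible direction
        d /= np.linalg.norm(d)
        # Chord limits t_lo <= t <= t_hi from the box lo <= v + t*d <= hi.
        with np.errstate(divide="ignore", invalid="ignore"):
            t1, t2 = (lo - v) / d, (hi - v) / d
        mask = d != 0
        t_lo = np.max(np.minimum(t1, t2)[mask])
        t_hi = np.min(np.maximum(t1, t2)[mask])
        v = v + rng.uniform(t_lo, t_hi) * d
        samples.append(v)
    return np.array(samples)

# Toy "network": 2 metabolites, 4 reactions (purely illustrative).
S = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, -1.0]])
lo, hi = np.zeros(4), np.ones(4)
v_start = np.array([0.2, 0.2, 0.1, 0.1])  # a feasible interior point
rng = np.random.default_rng(0)
samples = hit_and_run(S, lo, hi, v_start, 2000, rng)
```

Size and shape estimates of the polytope then follow from sample statistics; the message-passing approach of the paper avoids this sampling entirely.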
Direct numerical simulation of particulate flows with an overset grid method
NASA Astrophysics Data System (ADS)
Koblitz, A. R.; Lovett, S.; Nikiforakis, N.; Henshaw, W. D.
2017-08-01
We evaluate an efficient overset grid method for two-dimensional and three-dimensional particulate flows with small numbers of particles at finite Reynolds number. The rigid particles are discretised using moving overset grids overlaid on a Cartesian background grid. This allows for strongly enforced boundary conditions and local grid refinement at particle surfaces, thereby accurately capturing the viscous boundary layer at modest computational cost. The incompressible Navier-Stokes equations are solved with a fractional-step scheme that is second-order accurate in space and time, while the fluid-solid coupling is achieved with a partitioned approach including multiple sub-iterations to increase stability for light, rigid bodies. Through a series of benchmark studies we demonstrate the accuracy and efficiency of this approach compared to other boundary-conformal and static grid methods in the literature. In particular, we find that fully resolving boundary layers at particle surfaces is crucial to obtaining accurate solutions to many common test cases. With our approach we are able to compute accurate solutions using as few as one third of the grid points required by uniform grid computations in the literature. A detailed convergence study shows a 13-fold decrease in CPU time over a uniform grid test case whilst maintaining comparable solution accuracy.
Solving the flexible job shop problem by hybrid metaheuristics-based multiagent model
NASA Astrophysics Data System (ADS)
Nouri, Houssem Eddine; Belkahla Driss, Olfa; Ghédira, Khaled
2018-03-01
The flexible job shop scheduling problem (FJSP) is a generalization of the classical job shop scheduling problem in which each operation can be processed on one machine out of a set of alternative machines. The FJSP is an NP-hard problem consisting of two sub-problems: the assignment problem and the scheduling problem. In this paper, we propose to solve the FJSP with a hybrid metaheuristics-based clustered holonic multiagent model. First, a neighborhood-based genetic algorithm (NGA) is applied by a scheduler agent for a global exploration of the search space. Second, a local search technique is used by a set of cluster agents to guide the search into promising regions of the search space and to improve the quality of the final NGA population. The efficiency of our approach stems from the flexible selection of the promising parts of the search space by the clustering operator after the genetic algorithm process, and from applying the intensification technique of tabu search, which restarts the search from a set of elite solutions to attain new dominant scheduling solutions. Computational results are presented using four sets of well-known benchmark instances from the literature. New upper bounds are found, showing the effectiveness of the presented approach.
Initial value problem of space dynamics in universal Stumpff anomaly
NASA Astrophysics Data System (ADS)
Sharaf, M. A.; Dwidar, H. R.
2018-05-01
In this paper, the initial value problem of space dynamics in the universal Stumpff anomaly ψ is set up and developed in an analytical and computational approach. For the analytical expansions, the linear independence of the functions U_{j}(ψ;σ), {j=0,1,2,3} is proved. The differential and recurrence equations they satisfy and their relations with the elementary functions are given. The universal Kepler equation and its validations for different conic orbits are established together with the Lagrangian coefficients. Efficient representations of these functions are developed in terms of continued fractions. For the computational developments we consider the following items: 1.
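In many universal-variable formulations the U_j are built from the Stumpff functions, U_j(ψ;σ) = ψ^j c_j(σψ²); a hedged sketch of the standard series definition (not necessarily the paper's continued-fraction representation):

```python
from math import factorial, cos, sin, sqrt

def stumpff(k, z, terms=25):
    """Stumpff function c_k(z) = sum_{i>=0} (-z)^i / (2i + k)!."""
    return sum((-z) ** i / factorial(2 * i + k) for i in range(terms))

# For z > 0 the low-order Stumpff functions reduce to circular functions,
# e.g. c0(z) = cos(sqrt(z)) and c1(z) = sin(sqrt(z))/sqrt(z); the recurrence
# c_k(z) = 1/k! - z*c_{k+2}(z) then gives c2 and c3 in closed form.
z = 1.7
c = [stumpff(k, z) for k in range(4)]
```

The recurrence relation is also the practical route to the higher-order functions, since the raw series loses accuracy for large |z|.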
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shimojo, Fuyuki; Hattori, Shinnosuke; Department of Physics, Kumamoto University, Kumamoto 860-8555
We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, an LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on linear-response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states.
A series of techniques are employed for efficiently calculating the long-range exact exchange correction and excited-state forces. The NAQMD trajectories are analyzed to extract the rates of various excitonic processes, which are then used in KMC simulations to study the dynamics of the global exciton flow network. This has allowed the study of large-scale photoexcitation dynamics in a 6400-atom amorphous molecular solid, reaching experimental time scales.
Development of Adaptive Model Refinement (AMoR) for Multiphysics and Multifidelity Problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Turinsky, Paul
This project investigated the development and utilization of Adaptive Model Refinement (AMoR) for nuclear systems simulation applications. AMoR refers to the utilization of several models of physical phenomena which differ in prediction fidelity. If the highest fidelity model is judged to always provide or exceed the desired fidelity, then, if one can determine the difference in a Quantity of Interest (QoI) between the highest fidelity model and lower fidelity models, one can utilize the lowest fidelity model that still provides the desired accuracy in the QoI. Assuming lower fidelity models require less computational resources, computational efficiency can be realized in this manner, provided the QoI value can be accurately and efficiently evaluated. This work utilized Generalized Perturbation Theory (GPT) to evaluate the QoI, by convoluting the GPT solution with the residual of the highest fidelity model determined using the solution from lower fidelity models. Specifically, a reactor core neutronics problem and a thermal-hydraulics problem were studied to develop and utilize AMoR. The highest fidelity neutronics model was based upon the 3D space-time, two-group, nodal diffusion equations as solved in the NESTLE computer code. Added to the NESTLE code was the ability to determine the time-dependent GPT neutron flux. The lower fidelity neutronics model was based upon the point kinetics equations along with a prolongation operator to determine the 3D space-time, two-group flux. The highest fidelity thermal-hydraulics model was based upon the space-time equations governing fluid flow in a closed channel around a heat-generating fuel rod. The Homogeneous Equilibrium Mixture (HEM) model was used for the fluid, and the Finite Difference Method was applied to both the coolant and fuel pin energy conservation equations.
The lower fidelity thermal-hydraulic model was based upon the same equations as used for the highest fidelity model but with coarse spatial meshing, corrected somewhat by employing effective fuel heat conduction values. The effectiveness of switching between the highest fidelity model and the lower fidelity model as a function of time was assessed using the neutronics problem. Based upon work completed to date, one concludes that the time switching is effective in annealing out differences between the highest and lower fidelity solutions. The effectiveness of using a lower fidelity GPT solution, along with a prolongation operator, to estimate the QoI was also assessed. The utilization of a lower fidelity GPT solution was done in an attempt to avoid the high computational burden associated with solving for the highest fidelity GPT solution. Based upon work completed to date, one concludes that the lower fidelity adjoint solution is not sufficiently accurate with regard to estimating the QoI; however, a formulation has been revealed that may provide a path for addressing this shortcoming.
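The GPT-based estimate described above — convoluting an adjoint solution with the residual of the highest fidelity model evaluated at the lower fidelity solution — can be sketched for a generic linear model (the matrices below are invented stand-ins, not NESTLE's operators):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
A = np.eye(n) * 4 + rng.standard_normal((n, n)) * 0.3   # "high fidelity" model A u = b
b = rng.standard_normal(n)
c = rng.standard_normal(n)                               # QoI functional q = c . u

u_high = np.linalg.solve(A, b)                   # expensive reference solution
u_low = np.linalg.solve(np.diag(np.diag(A)), b)  # crude low-fidelity surrogate

lam = np.linalg.solve(A.T, c)                    # adjoint (GPT-like) solution

# QoI correction = adjoint convoluted with the high-fidelity residual at u_low.
q_corrected = c @ u_low + lam @ (b - A @ u_low)
```

For a linear model and a linear QoI the adjoint-weighted residual correction is exact; the difficulty in the nuclear application, as the abstract notes, lies in obtaining a sufficiently accurate adjoint solution cheaply.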
Liu, Xin; Chen, Zhao-Qiong; Han, Bin; Su, Chun-Li; Han, Qin; Chen, Wei-Zhong
2018-04-15
In this paper, the adsorption behavior of Cu(II) from aqueous solution using rape straw powders was studied. The effects of initial Cu(II) concentration, pH range and adsorbent dosage on the adsorption efficiency of Cu(II) by rape straw powder were investigated by Box-Behnken Design based on response surface methodology. The correlation coefficients of the nonlinear models were 0.9997, 0.9984 and 0.9944 for the removal of Cu(II) from aqueous solution using rape straw shell, seed pods and straw pith core, respectively, which could navigate the design space for the effects of the various factors on the biosorption of Cu(II) from aqueous solution. pH and biosorbent dosage were the key factors affecting the removal efficiency of Cu(II) from aqueous solution. The biosorption equilibrium data indicated favorable monolayer adsorption of Cu(II) onto shell, seed pods and straw pith core, respectively. The pseudo-second-order kinetic model was the proper approach to describe the adsorption kinetics. Ion exchange during the biosorption of Cu(II) onto the surfaces of rape straw powders was confirmed by energy dispersive spectrometry. The critical groups -OH, -CH, -NH₃⁺, -CH₃, -NH and -C-O, exhibited by the infrared spectra, changed in ways suggesting that these groups played critical roles in the adsorption of copper ions onto rape straw powders, especially -CH₃. The study provided evidence that rape straw powders can be used for removing Cu(II) from aqueous solutions.
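The pseudo-second-order model referenced above is commonly written dq/dt = k(q_e − q)², whose linearized form t/q_t = 1/(k·q_e²) + t/q_e is fit as a straight line in t; a sketch on synthetic data (the rate constant and capacity are illustrative, not the paper's values):

```python
import numpy as np

k_true, qe_true = 0.05, 8.0  # illustrative rate constant and capacity
t = np.array([5.0, 10, 20, 40, 60, 90, 120])
# Integrated pseudo-second-order model: q(t) = k*qe^2*t / (1 + k*qe*t).
q = k_true * qe_true**2 * t / (1 + k_true * qe_true * t)

# Linearized form t/q = 1/(k*qe^2) + t/qe: slope gives qe, intercept gives k.
slope, intercept = np.polyfit(t, t / q, 1)
qe_fit = 1 / slope
k_fit = 1 / (intercept * qe_fit**2)
```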
Machine learning action parameters in lattice quantum chromodynamics
NASA Astrophysics Data System (ADS)
Shanahan, Phiala E.; Trewartha, Daniel; Detmold, William
2018-05-01
Numerical lattice quantum chromodynamics studies of the strong interaction are important in many aspects of particle and nuclear physics. Such studies require significant computing resources. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. The high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
Towards developing robust algorithms for solving partial differential equations on MIMD machines
NASA Technical Reports Server (NTRS)
Saltz, Joel H.; Naik, Vijay K.
1988-01-01
Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on system performance is examined by implementing the algorithm on a simulated multiprocessor system.
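Each implicit timestep of the model problem requires solving a diagonally dominant tridiagonal system, for which Jacobi sweeps are the building block; a serial sketch of one backward-Euler step (without the windowing and communication overlap that are the paper's contribution):

```python
import numpy as np

def implicit_step_jacobi(u_old, r, sweeps=300):
    """One backward-Euler step of u_t = u_xx with fixed end values, solved by
    Jacobi sweeps on (1+2r)*u_i - r*(u_{i-1} + u_{i+1}) = u_old_i, r = dt/dx^2."""
    u = u_old.copy()
    for _ in range(sweeps):
        u_new = u.copy()
        u_new[1:-1] = (u_old[1:-1] + r * (u[:-2] + u[2:])) / (1 + 2 * r)
        u = u_new
    return u

nx, r = 41, 2.0
x = np.linspace(0.0, 1.0, nx)
u = np.sin(np.pi * x)          # initial profile, ~0 at both ends
u1 = implicit_step_jacobi(u, r)
```

Because each sweep touches only nearest neighbours, a distributed version needs just one boundary exchange per sweep — which is exactly the communication the windowed variants overlap with computation.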
Towards developing robust algorithms for solving partial differential equations on MIMD machines
NASA Technical Reports Server (NTRS)
Saltz, J. H.; Naik, V. K.
1985-01-01
Methods for efficient computation of numerical algorithms on a wide variety of MIMD machines are proposed. These techniques reorganize the data dependency patterns to improve processor utilization. The model problem finds the time-accurate solution to a parabolic partial differential equation discretized in space and implicitly marched forward in time. The algorithms are extensions of Jacobi and SOR. The extensions consist of iterating over a window of several timesteps, allowing efficient overlap of computation with communication. The methods increase the degree to which work can be performed while data are communicated between processors. The effect of the window size and of domain partitioning on system performance is examined by implementing the algorithm on a simulated multiprocessor system.
An Onsager Singularity Theorem for Turbulent Solutions of Compressible Euler Equations
NASA Astrophysics Data System (ADS)
Drivas, Theodore D.; Eyink, Gregory L.
2017-12-01
We prove that bounded weak solutions of the compressible Euler equations will conserve thermodynamic entropy unless the solution fields have sufficiently low space-time Besov regularity. A quantity measuring kinetic energy cascade will also vanish for such Euler solutions, unless the same singularity conditions are satisfied. It is shown furthermore that strong limits of solutions of compressible Navier-Stokes equations that are bounded and exhibit anomalous dissipation are weak Euler solutions. These inviscid limit solutions have non-negative anomalous entropy production and kinetic energy dissipation, with both vanishing when solutions are above the critical degree of Besov regularity. Stationary, planar shocks in Euclidean space with an ideal-gas equation of state provide simple examples that satisfy the conditions of our theorems and which demonstrate sharpness of our L³-based conditions. These conditions involve space-time Besov regularity, but we show that they are satisfied by Euler solutions that possess similar space regularity uniformly in time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
García-Sánchez, Tania; Gómez-Lázaro, Emilio; Muljadi, E.
An alternative approach to characterise real voltage dips is proposed and evaluated in this study. The proposed methodology is based on voltage-space vector solutions, identifying parameters of ellipse trajectories by using the least-squares algorithm applied on a sliding window along the disturbance. The most likely patterns are then estimated through a clustering process based on the k-means algorithm. The objective is to offer an efficient and easily implemented alternative for characterising faults and visualising the most likely instantaneous phase-voltage evolution during events through their corresponding voltage-space vector trajectories. This novel solution minimises the data to be stored while maintaining extensive information about the dips, including starting and ending transients. The proposed methodology has been applied satisfactorily to real voltage dips obtained from intensive field-measurement campaigns carried out in a Spanish wind power plant over a period of several years. A comparison to traditional minimum root-mean-square-voltage and time-duration classifications is also included in this study.
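The per-window least-squares identification of an ellipse trajectory can be sketched as a linear conic fit (synthetic samples; the sliding window and k-means stages are omitted):

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1."""
    M = np.column_stack([x**2, x * y, y**2, x, y])
    coeffs, *_ = np.linalg.lstsq(M, np.ones_like(x), rcond=None)
    return coeffs

# Synthetic voltage-space-vector samples tracing an elliptical dip trajectory.
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)
x = 1.0 * np.cos(t)
y = 0.6 * np.sin(t + 0.4)
a, b, c, d, e = fit_conic(x, y)
residual = np.max(np.abs(a * x**2 + b * x * y + c * y**2 + d * x + e * y - 1))
```

The five conic coefficients per window are what gets stored and later clustered, which is how the method compresses the raw waveform data.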
Extending the coverage of the internet of things with low-cost nanosatellite networks
NASA Astrophysics Data System (ADS)
Almonacid, Vicente; Franck, Laurent
2017-09-01
Recent technology advances have made CubeSats not only an affordable means of access to space, but also promising platforms for developing a new variety of space applications. In this paper, we explore the idea of using nanosatellites as access points to provide extended coverage for the Internet of Things (IoT) and Machine-to-Machine (M2M) communications. This study is mainly motivated by two facts. On the one hand, it is already obvious that the number of machine-type devices deployed globally will experience exponential growth over the forthcoming years. This trend is pushed by the available terrestrial cellular infrastructure, which allows support for M2M connectivity to be added at marginal cost. On the other hand, the same growth is not observed in remote areas that must rely on space-based connectivity. In such environments, the demand for M2M communications is potentially large, yet it is challenged by the lack of cost-effective service providers. The traffic characteristics of typical M2M applications translate into the requirement for an extremely low cost per transmitted message. Under these strong economic constraints, we expect that nanosatellites in low Earth orbit will play a fundamental role in overcoming what we may call the IoT digital divide. The objective of this paper is therefore to provide a general analysis of a nanosatellite-based, global IoT/M2M network. We put emphasis on the engineering challenges faced in designing the Earth-to-space communication link, where the adoption of an efficient multiple-access scheme is paramount for ensuring connectivity to a large number of terminal nodes. In particular, the trade-offs between energy efficiency and access delay and between energy efficiency and throughput are discussed, and a novel access approach suitable for delay-tolerant applications is proposed. Thus, keeping a system-level standpoint, we identify key issues and discuss perspectives towards energy-efficient and cost-effective solutions.
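As a hedged illustration of why the multiple-access scheme dominates the energy-delay-throughput trade-off, classical slotted ALOHA (not the paper's novel delay-tolerant scheme) has throughput S = G·e^(−G) at normalized offered load G:

```python
from math import exp

def slotted_aloha_throughput(G):
    """Expected successful transmissions per slot at normalized offered load G."""
    return G * exp(-G)

# Throughput peaks at G = 1 with S = 1/e ~ 0.368; pushing more traffic past
# that point only increases collisions, wasting terminal energy on retries.
loads = [0.2, 0.5, 1.0, 2.0, 4.0]
table = {G: round(slotted_aloha_throughput(G), 3) for G in loads}
```

Past G = 1 additional transmission attempts mostly collide — the core tension between energy efficiency and throughput that the access design must resolve.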
NASA Astrophysics Data System (ADS)
Kacem, S.; Eichwald, O.; Ducasse, O.; Renon, N.; Yousfi, M.; Charrada, K.
2012-01-01
Streamer dynamics is characterized by the fast propagation of ionized shock waves at the nanosecond scale under very sharp space-charge variations. Modelling streamer dynamics requires solving the charged-particle transport equations coupled to the elliptic Poisson's equation. The latter has to be solved at each time step of the streamer's evolution in order to follow the propagation of the resulting space-charge electric field. In the present paper, full multigrid (FMG) and multigrid (MG) methods have been adapted to solve Poisson's equation for streamer discharge simulations between asymmetric electrodes. The validity of the FMG method for the computation of the potential field is first shown by performing direct comparisons with the analytic solution of the Laplacian potential in a point-to-plane geometry. The efficiency of the method is also compared with the classical successive over-relaxation (SOR) method and the MUltifrontal Massively Parallel Solver (MUMPS). The MG method is then applied to the simulation of positive streamer propagation, and its efficiency is evaluated through comparisons with the SOR and MUMPS methods in the chosen point-to-plane configuration. Very good agreement is obtained between the three methods for all electro-hydrodynamic characteristics of the streamer during its propagation in the inter-electrode gap. In the case of the MG method, however, the time to solve Poisson's equation is at least 2 times shorter under our simulation conditions.
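The multigrid idea — smooth on the fine grid, restrict the residual, correct from a coarser grid, recurse — can be sketched for a 1D Poisson problem (a minimal V-cycle with weighted-Jacobi smoothing, not the 2D asymmetric-electrode solver of the paper):

```python
import numpy as np

def apply_A(u, h):
    """Matrix-free action of the 1D Poisson operator -u'' (zero Dirichlet BCs)."""
    up = np.concatenate(([0.0], u, [0.0]))
    return (2 * up[1:-1] - up[:-2] - up[2:]) / h**2

def v_cycle(u, f, h, nu=3):
    """One multigrid V-cycle for -u'' = f; u holds interior nodes only."""
    n = u.size + 1                       # number of grid intervals (power of 2)
    if n == 2:
        return f * h**2 / 2              # single unknown: direct solve
    for _ in range(nu):                  # pre-smoothing: weighted Jacobi
        u = u + (2 / 3) * (h**2 / 2) * (f - apply_A(u, h))
    r = f - apply_A(u, h)
    rc = (r[:-2:2] + 2 * r[1:-1:2] + r[2::2]) / 4        # full-weighting restriction
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h, nu)       # coarse-grid correction
    e = np.zeros_like(u)                                 # linear interpolation
    e[1:-1:2] = ec
    e[0::2] = (np.concatenate(([0.0], ec)) + np.concatenate((ec, [0.0]))) / 2
    u = u + e
    for _ in range(nu):                  # post-smoothing
        u = u + (2 / 3) * (h**2 / 2) * (f - apply_A(u, h))
    return u

n = 128
h = 1.0 / n
x = np.linspace(h, 1 - h, n - 1)
f = np.pi**2 * np.sin(np.pi * x)         # exact solution u = sin(pi*x)
u = np.zeros(n - 1)
for _ in range(10):                      # a few V-cycles reach truncation error
    u = v_cycle(u, f, h)
```

The payoff is that the convergence factor per cycle is bounded independently of the mesh size, which is why MG outpaces SOR as grids are refined.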
NASA Astrophysics Data System (ADS)
Owolabi, Kolade M.
2018-03-01
In this work, we are concerned with the solution of non-integer space-fractional reaction-diffusion equations with the Riemann-Liouville space-fractional derivative in high dimensions. We approximate the Riemann-Liouville derivative with the Fourier transform method and advance the resulting system in time with any time-stepping solver. In the numerical experiments, we expect the travelling wave to arise from the given initial condition on the computational domain (-∞, ∞), which we truncate at a large but finite value L. It is necessary to choose L large enough to give the waves sufficient space to propagate. Experimental results in high dimensions on space-fractional reaction-diffusion models with applications to biological models (the Fisher and Allen-Cahn equations) are considered. Simulation results reveal that fractional reaction-diffusion equations can give rise to a range of physical phenomena not seen in their integer-order counterparts. As a result, many meaningful and practical situations are found to be better modelled with the concepts of fractional calculus.
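The Fourier-transform treatment of the space-fractional operator can be sketched in 1D; here the Riesz-type symbol |k|^α stands in for the Riemann-Liouville derivative, and a semi-implicit Euler step stands in for "any time-stepping solver" (all parameters are illustrative):

```python
import numpy as np

# Space-fractional Fisher equation u_t = -D*(-Laplacian)^(alpha/2) u + r*u*(1-u),
# with the fractional operator applied spectrally via the symbol |k|^alpha.
L, N = 40.0, 256                      # truncated domain [-L/2, L/2)
alpha, D, r, dt = 1.5, 1.0, 1.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
symbol = np.abs(k) ** alpha

u0 = np.exp(-x**2)                    # localized initial bump in [0, 1]
u = u0.copy()
for _ in range(500):                  # semi-implicit Euler: stiff term implicit
    nl = np.fft.fft(r * u * (1 - u))
    u = np.real(np.fft.ifft((np.fft.fft(u) + dt * nl) / (1 + dt * D * symbol)))
```

Treating the fractional diffusion implicitly through its diagonal Fourier symbol removes the stiffness that would otherwise force very small explicit time steps.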
A High Performance COTS Based Computer Architecture
NASA Astrophysics Data System (ADS)
Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland
2014-08-01
Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so important that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defence and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; we then briefly describe existing fault-mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.
NASA Technical Reports Server (NTRS)
Rash, James L.
2010-01-01
NASA's space data-communications infrastructure, the Space Network and the Ground Network, provide scheduled (as well as some limited types of unscheduled) data-communications services to user spacecraft via orbiting relay satellites and ground stations. An implementation of the methods and algorithms disclosed herein will be a system that produces globally optimized schedules with not only optimized service delivery by the space data-communications infrastructure but also optimized satisfaction of all user requirements and prescribed constraints, including radio frequency interference (RFI) constraints. Evolutionary search, a class of probabilistic strategies for searching large solution spaces, constitutes the essential technology in this disclosure. Also disclosed are methods and algorithms for optimizing the execution efficiency of the schedule-generation algorithm itself. The scheduling methods and algorithms as presented are adaptable to accommodate the complexity of scheduling the civilian and/or military data-communications infrastructure. Finally, the problem itself, and the methods and algorithms, are generalized and specified formally, with applicability to a very broad class of combinatorial optimization problems.
Srinivasa, Narayan; Zhang, Deying; Grigorian, Beayna
2014-03-01
This paper describes a novel architecture for enabling robust and efficient neuromorphic communication. The architecture combines two concepts: 1) synaptic time multiplexing (STM) that trades space for speed of processing to create an intragroup communication approach that is firing rate independent and offers more flexibility in connectivity than cross-bar architectures and 2) a wired multiple input multiple output (MIMO) communication with orthogonal frequency division multiplexing (OFDM) techniques to enable a robust and efficient intergroup communication for neuromorphic systems. The MIMO-OFDM concept for the proposed architecture was analyzed by simulating large-scale spiking neural network architecture. Analysis shows that the neuromorphic system with MIMO-OFDM exhibits robust and efficient communication while operating in real time with a high bit rate. Through combining STM with MIMO-OFDM techniques, the resulting system offers a flexible and scalable connectivity as well as a power and area efficient solution for the implementation of very large-scale spiking neural architectures in hardware.
Active Solution Space and Search on Job-shop Scheduling Problem
NASA Astrophysics Data System (ADS)
Watanabe, Masato; Ida, Kenichi; Gen, Mitsuo
In this paper we propose a new search method based on a Genetic Algorithm for the Job-shop scheduling problem (JSP). With the coding method that represents job numbers to decide the priority for placing jobs on the Gantt chart (called the ordinal representation with a priority), an active schedule is created by using left shifts. We first define an active solution: a solution from which an active schedule can be created without using left shifts. The set of such solutions defines the active solution space. Next, we propose an algorithm named Genetic Algorithm with active solution space search (GA-asol), which can create active solutions while solutions are being evaluated, in order to search the active solution space effectively. We applied it to some benchmark problems and compared it with other methods. The experimental results show good performance.
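Active schedules — the target of the search above — are classically generated by the Giffler-Thompson procedure, which, among the operations competing on the critical machine, schedules one chosen by priority; a hedged sketch on an invented two-job instance (the textbook generator, not GA-asol itself):

```python
def giffler_thompson(jobs, priority):
    """Build an active schedule via the Giffler-Thompson procedure.
    jobs[j] is an ordered list of (machine, duration) operations."""
    n = len(jobs)
    next_op = [0] * n                      # index of each job's next operation
    job_ready = [0] * n                    # time each job becomes available
    mach_ready = {}                        # time each machine becomes free
    schedule = []
    while any(next_op[j] < len(jobs[j]) for j in range(n)):
        cands = []
        for j in range(n):
            if next_op[j] < len(jobs[j]):
                m, d = jobs[j][next_op[j]]
                s = max(job_ready[j], mach_ready.get(m, 0))
                cands.append((s + d, s, m, j))
        tmin, _, mstar, _ = min(cands)     # earliest possible completion time
        # Conflict set: candidate operations on m* that would overlap it;
        # pick one by priority (this choice is what a GA can encode).
        _, start, j = min((priority(jj), ss, jj) for (cc, ss, mm, jj) in cands
                          if mm == mstar and ss < tmin)
        m, d = jobs[j][next_op[j]]
        schedule.append((j, next_op[j], m, start, start + d))
        job_ready[j] = mach_ready[m] = start + d
        next_op[j] += 1
    return schedule

# Invented 2-job, 2-machine instance.
jobs = [[(0, 3), (1, 2)], [(0, 2), (1, 4)]]
sched = giffler_thompson(jobs, priority=lambda j: j)   # prefer lower job id
makespan = max(end for *_, end in sched)
```

Every schedule this procedure emits is active by construction, which is why restricting a GA's search to such solutions shrinks the space without discarding any optimum.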
Algal cell disruption using microbubbles to localize ultrasonic energy
Krehbiel, Joel D.; Schideman, Lance C.; King, Daniel A.; Freund, Jonathan B.
2015-01-01
Microbubbles were added to an algal solution with the goal of improving cell-disruption efficiency and the net energy balance for algal biofuel production. Experimental results showed that disruption increases with increasing peak rarefaction ultrasound pressure over the range studied: 1.90 to 3.07 MPa. Additionally, ultrasound cell disruption increased by up to 58% when adding microbubbles, with peak disruption occurring in the range of 10⁸ microbubbles/ml. The localization of energy in space and time provided by the bubbles improves efficiency: energy requirements for such a process were estimated to be one-fourth of the available heat of combustion of algal biomass and one-fifth of those of currently used cell-disruption methods. This increase in energy efficiency could make microbubble-enhanced ultrasound viable for bioenergy applications, and it is expected to integrate well with current cell-harvesting methods based upon dissolved air flotation. PMID:25311188
NASA Technical Reports Server (NTRS)
Holladay, Jon; Day, Greg; Roberts, Barry; Leahy, Frank
2003-01-01
The efficiency of reusable aerospace systems requires a focus on the total operations process rather than just orbital performance. For the Multi-Purpose Logistics Module (MPLM) this activity included special attention to terrestrial conditions, both pre-launch and post-landing, and how they interrelate with the mission profile. Several of the efficiencies implemented for MPLM Mission Engineering were NASA firsts, and all served to improve the overall operations activities. This paper provides an explanation of how various issues were addressed and the resulting solutions. Topics range from statistical analysis of over 30 years of atmospheric data at the launch and landing sites to a new approach for operations with the Shuttle Carrier Aircraft. In each situation the goal was to "tune" the thermal management of the overall flight system, minimizing requirement risk while optimizing power and energy performance.
NASA Astrophysics Data System (ADS)
Chandra, Rishabh
Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDEs) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm exists which guarantees a global minimum for PDECCO problems. A new mapping has been developed that transforms PDECCO problems which have only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a globally optimal solution for the original PDECCO problem.
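The QUBO target form is min over x ∈ {0,1}^n of xᵀQx; as a toy illustration of the encoding step (an invented three-control selection problem, with exhaustive search standing in for the AQO):

```python
import itertools
import numpy as np

# Toy QUBO: minimize x^T Q x over x in {0,1}^n. Here we encode "pick exactly
# one of three controls, at minimum cost" via the penalty P*(sum(x) - 1)^2,
# which (using x_i^2 = x_i for binary x) contributes P*(J - 2I) to Q plus a constant.
c = np.array([3.0, 1.0, 2.0])         # illustrative control costs
P = 10.0                              # penalty weight larger than the cost spread
n = c.size
Q = np.diag(c) + P * (np.ones((n, n)) - 2 * np.eye(n))

# Exhaustive search stands in for the adiabatic quantum optimizer.
best = min(itertools.product([0, 1], repeat=n),
           key=lambda x: np.asarray(x) @ Q @ np.asarray(x))
```

The minimizer selects the cheapest control while the penalty enforces the selection constraint, which is the same device the PDECCO mapping uses to fold constraints into the Q matrix.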
NASA Astrophysics Data System (ADS)
Ameen, M. Yoosuf; Shamjid, P.; Abhijith, T.; Reddy, V. S.
2018-01-01
Polymer solar cells were fabricated with solution-processed transition metal oxides, MoO3 and V2O5, as anode buffer layers (ABLs). The optimized device with a V2O5 ABL exhibited considerably higher power conversion efficiency (PCE) than the devices based on MoO3 and poly(3,4-ethylenedioxythiophene):poly(styrene sulfonate) (PEDOT:PSS) ABLs. Space-charge-limited current measurements and impedance spectroscopy results for hole-only devices revealed that V2O5 provided a very low charge-transfer resistance and high hole mobility, facilitating efficient hole transfer from the active layer to the ITO anode. More importantly, incorporation of V2O5 as the ABL resulted in a substantial improvement in device stability compared to MoO3- and PEDOT:PSS-based devices. Unencapsulated PEDOT:PSS-based devices stored at a relative humidity of 45% showed complete failure within 96 h, whereas MoO3- and V2O5-based devices stored under similar conditions retained 22% and 80% of their initial PCEs after 96 h. The significantly higher stability of the V2O5-based device is ascribed to reduced degradation of the anode/active-layer interface, as evident from the electrical measurements.
Analysis of Phase-Type Stochastic Petri Nets With Discrete and Continuous Timing
NASA Technical Reports Server (NTRS)
Jones, Robert L.; Goode, Plesent W. (Technical Monitor)
2000-01-01
The Petri net formalism is useful in studying many discrete-state, discrete-event systems exhibiting concurrency, synchronization, and other complex behavior. As a bipartite graph, the net can conveniently capture salient aspects of the system. As a mathematical tool, the net can specify an analyzable state space. Indeed, one can reason about certain qualitative properties (from state occupancies) and how they arise (the sequence of events leading there). By introducing deterministic or random delays, the model is forced to sojourn in states some amount of time, giving rise to an underlying stochastic process, one that can be specified in a compact way and capable of providing quantitative, probabilistic measures. We formalize a new non-Markovian extension to the Petri net that captures both discrete and continuous timing in the same model. The approach affords efficient, stationary analysis in most cases and efficient transient analysis under certain restrictions. Moreover, this new formalism has the added benefit in modeling fidelity stemming from the simultaneous capture of discrete- and continuous-time events (as opposed to capturing only one and approximating the other). We show how the underlying stochastic process, which is non-Markovian, can be resolved into simpler Markovian problems that enjoy efficient solutions. Solution algorithms are provided that can be easily programmed.
NASA Astrophysics Data System (ADS)
Chen, Zuojing; Polizzi, Eric
2010-11-01
Effective modeling and numerical spectral-based propagation schemes are proposed for addressing the challenges in time-dependent quantum simulations of systems ranging from atoms, molecules, and nanostructures to emerging nanoelectronic devices. While time-dependent Hamiltonian problems can be formally solved by propagating the solutions along tiny simulation time steps, a direct numerical treatment is often considered too computationally demanding. In this paper, however, we propose to go beyond these limitations by introducing high-performance numerical propagation schemes to compute the solution of the time-ordered evolution operator. In addition to the direct Hamiltonian diagonalizations that can be efficiently performed using the new eigenvalue solver FEAST, we have designed a Gaussian propagation scheme and a basis-transformed propagation scheme (BTPS) which considerably reduce the simulation time needed per time interval. It is shown that BTPS offers the best computational efficiency, opening new perspectives in time-dependent simulations. Finally, these numerical schemes are applied to study the ac response of a (5,5) carbon nanotube within a three-dimensional real-space mesh framework.
Biswas, Subhodip; Kundu, Souvik; Das, Swagatam
2014-10-01
In real life, we often need to find multiple optimal solutions of an optimization problem. Evolutionary multimodal optimization algorithms can be very helpful in such cases. They detect and maintain multiple optimal solutions during the run by incorporating specialized niching operations into their framework. Differential evolution (DE) is a powerful evolutionary algorithm (EA) well known for its ability and efficiency as a single-peak global optimizer for continuous spaces. This article suggests a niching scheme integrated with DE for achieving stable and efficient niching behavior by combining a newly proposed parent-centric mutation operator with a synchronous crowding replacement rule. The proposed approach is designed with the difficulties of problem-dependent niching parameters (like the niche radius) in mind and does not make use of such control parameters. The mutation operator helps to maintain population diversity at an optimum level by using well-defined local neighborhoods. Based on a comparative study involving 13 well-known state-of-the-art niching EAs tested on an extensive collection of benchmarks, we observe a consistent statistical superiority of the proposed niching algorithm.
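The crowding replacement idea at the heart of such niching schemes can be sketched compactly. The following is a minimal 1-D illustration of DE with crowding replacement only (it uses plain DE/rand/1 mutation, not the paper's parent-centric operator, and all parameter values are illustrative): each trial vector competes against its nearest population member rather than its parent, which lets separate niches coexist without a niche-radius parameter.

```python
import random

def de_crowding(f, bounds, pop_size=30, gens=100, F=0.5, CR=0.9, seed=0):
    """Minimize f over a 1-D interval with DE plus crowding replacement.
    Sketch only: DE/rand/1 mutation, binomial-style crossover collapsed
    to a single coin flip (the problem is one-dimensional), and a trial
    replaces the *nearest* individual if it has better fitness."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            trial = pop[a] + F * (pop[b] - pop[c])  # DE/rand/1 mutation
            if rng.random() > CR:                   # crossover coin flip
                trial = pop[i]
            trial = min(max(trial, lo), hi)         # clip to bounds
            # crowding: compete with the nearest individual, not pop[i]
            near = min(range(pop_size), key=lambda j: abs(pop[j] - trial))
            ft = f(trial)
            if ft < fit[near]:
                pop[near], fit[near] = trial, ft
    return sorted(pop)
```

On a bimodal function such as f(x) = (x² - 1)², the final population typically retains members near both optima at x = -1 and x = +1, which a parent-replacement DE would usually collapse to one.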
Development Of The Prototype Space Non-Foam Membrane Bioreactor
NASA Astrophysics Data System (ADS)
Guo, S.; Xi, W.; Liu, X.
The essential method of making a Controlled Ecological Life Support System (CELSS) operate and regenerate efficiently is to transform and utilize the recyclable materials in the system rapidly. It is currently recognized that the fundamental way of achieving this goal is to utilize micro-biotechnology. Based on this thinking, a Ground-based Prototype of a Space Waste-treating-microbially Facility (GPSWF) was developed in our laboratory, with the purpose of transforming biodegradable waste, including inedible plant biomass, into plant nutrient solution for future regenerative utilization of materials in the space environment. The facility includes automatic measurement and control systems for temperature, pH and dissolved oxygen (DO) in the treated solution, together with systems for non-foam membrane oxygen supply and post-treatment liquid collection. The experimental results showed that the facility could maintain a stable operating state; the pH and DO in the liquid were controlled automatically and precisely; oxygen was supplied to the liquid without foaming by membrane technology; the inedible plant biomass could be completely degraded by the three selected microbial species; the reduction rates of total organic carbon (TOC) and chemical oxygen demand (COD) reached 92.1% and 95.5%, respectively; the post-treatment liquid could be automatically drained and collected; and the plants grew almost normally when the post-treatment liquid was used as the nutrient solution. It can therefore be concluded that the facility possesses a reasonably designed structure, and its working principle is largely compatible with the microgravity environment of space. After further improvement, it is a promising candidate for in-space biological degradation of materials.
Efficient modeling of photonic crystals with local Hermite polynomials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Boucher, C. R.; Li, Zehao; Albrecht, J. D.
2014-04-21
Developing compact algorithms for accurate electrodynamic calculations with minimal computational cost is an active area of research given the increasing complexity in the design of electromagnetic composite structures such as photonic crystals, metamaterials, optical interconnects, and on-chip routing. We show that electric and magnetic (EM) fields can be calculated using scalar Hermite interpolation polynomials as the numerical basis functions without having to invoke edge-based vector finite elements to suppress spurious solutions or to satisfy boundary conditions. This approach offers several fundamental advantages as evidenced through band structure solutions for periodic systems and through waveguide analysis. Compared with reciprocal space (plane-wave expansion) methods for periodic systems, advantages are shown in computational costs, the ability to capture spatial complexity in the dielectric distributions, the demonstration of numerical convergence with scaling, and variational eigenfunctions free of numerical artifacts that arise from mixed-order real space basis sets or the inherent aberrations from transforming reciprocal space solutions of finite expansions. The photonic band structure of a simple crystal is used as a benchmark comparison and the ability to capture the effects of spatially complex dielectric distributions is treated using a complex pattern with highly irregular features that would stress spatial transform limits. This general method is applicable to a broad class of physical systems, e.g., to semiconducting lasers which require simultaneous modeling of transitions in quantum wells or dots together with EM cavity calculations, to modeling plasmonic structures in the presence of EM field emissions, and to on-chip propagation within monolithic integrated circuits.
Real time PI-backstepping induction machine drive with efficiency optimization.
Farhani, Fethi; Ben Regaya, Chiheb; Zaafouri, Abderrahmen; Chaari, Abdelkader
2017-09-01
This paper describes a robust and efficient speed control of a three-phase induction machine (IM) subjected to load disturbances. First, a Multiple-Input Multiple-Output (MIMO) PI-Backstepping controller is proposed for robust and highly accurate tracking of the mechanical speed and rotor flux. Asymptotic stability of the control scheme is proven by Lyapunov Stability Theory. Second, an active online optimization algorithm is used to optimize the efficiency of the drive system. The efficiency improvement approach consists of adjusting the rotor flux with respect to the load torque in order to minimize total losses in the IM. A dSPACE DS1104 R&D board is used to implement the proposed solution. Experimental results on a 3 kW squirrel-cage IM show that the reference speed and the rotor flux are achieved rapidly, with a fast transient response and without overshoot. Load disturbances and IM parameter variations are handled well. The improvement in drive-system efficiency reaches up to 180% at light load. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
Gagnon, Jessica K.; Law, Sean M.; Brooks, Charles L.
2016-01-01
Protein-ligand docking is a commonly used method for lead identification and refinement. While traditional structure-based docking methods represent the receptor as a rigid body, recent developments have been moving toward the inclusion of protein flexibility. Proteins exist in an inter-converting ensemble of conformational states, but effectively and efficiently searching the conformational space available to both the receptor and ligand remains a well-appreciated computational challenge. To this end, we have developed the Flexible CDOCKER method as an extension of the family of complete docking solutions available within CHARMM. This method integrates atomically detailed side chain flexibility with grid-based docking methods, maintaining efficiency while allowing the protein and ligand configurations to explore their conformational space simultaneously. This is in contrast to existing approaches that use induced-fit like sampling, such as Glide or Autodock, where the protein or the ligand space is sampled independently in an iterative fashion. Presented here are developments to the CHARMM docking methodology to incorporate receptor flexibility and improvements to the sampling protocol as demonstrated with re-docking trials on a subset of the CCDC/Astex set. These developments within CDOCKER achieve docking accuracy competitive with or exceeding the performance of other widely utilized docking programs. PMID:26691274
Gagnon, Jessica K; Law, Sean M; Brooks, Charles L
2016-03-30
Protein-ligand docking is a commonly used method for lead identification and refinement. While traditional structure-based docking methods represent the receptor as a rigid body, recent developments have been moving toward the inclusion of protein flexibility. Proteins exist in an interconverting ensemble of conformational states, but effectively and efficiently searching the conformational space available to both the receptor and ligand remains a well-appreciated computational challenge. To this end, we have developed the Flexible CDOCKER method as an extension of the family of complete docking solutions available within CHARMM. This method integrates atomically detailed side chain flexibility with grid-based docking methods, maintaining efficiency while allowing the protein and ligand configurations to explore their conformational space simultaneously. This is in contrast to existing approaches that use induced-fit like sampling, such as Glide or Autodock, where the protein or the ligand space is sampled independently in an iterative fashion. Presented here are developments to the CHARMM docking methodology to incorporate receptor flexibility and improvements to the sampling protocol as demonstrated with re-docking trials on a subset of the CCDC/Astex set. These developments within CDOCKER achieve docking accuracy competitive with or exceeding the performance of other widely utilized docking programs. © 2015 Wiley Periodicals, Inc.
Space station advanced automation
NASA Technical Reports Server (NTRS)
Woods, Donald
1990-01-01
In the development of a safe, productive and maintainable space station, Automation and Robotics (A and R) has been identified as an enabling technology which will allow efficient operation at a reasonable cost. The Space Station Freedom's (SSF) systems are very complex and interdependent. The use of Advanced Automation (AA) will help restructure and integrate system status so that station and ground personnel can operate more efficiently. Using AA technology for the augmentation of system management functions requires a development model consisting of well-defined phases: evaluation, development, integration, and maintenance. The evaluation phase will consider system management functions against traditional solutions, implementation techniques and requirements; the end result of this phase should be a well-developed concept along with a feasibility analysis. In the development phase the AA system will be developed in accordance with a traditional Life Cycle Model (LCM) modified for Knowledge Based System (KBS) applications. A way by which both knowledge bases and reasoning techniques can be reused to control costs is explained. During the integration phase the KBS software must be integrated with conventional software, and verified and validated. The Verification and Validation (V and V) techniques applicable to these KBS are based on the ideas of consistency, minimal competency, and graph theory. The maintenance phase will be aided by well-designed and documented KBS software.
Comparative study of beam losses and heat loads reduction methods in MITICA beam source
NASA Astrophysics Data System (ADS)
Sartori, E.; Agostinetti, P.; Dal Bello, S.; Marcuzzi, D.; Serianni, G.; Sonato, P.; Veltri, P.
2014-02-01
In negative ion electrostatic accelerators a considerable fraction of extracted ions is lost by collision processes causing efficiency loss and heat deposition over the components. Stripping is proportional to the local density of gas, which is steadily injected in the plasma source; its pumping from the extraction and acceleration stages is a key functionality for the prototype of the ITER Neutral Beam Injector, and it can be simulated with the 3D code AVOCADO. Different geometric solutions were tested aiming at the reduction of the gas density. The parameter space considered is limited by constraints given by optics, aiming, voltage holding, beam uniformity, and mechanical feasibility. The guidelines of the optimization process are presented together with the proposed solutions and the results of numerical simulations.
A multiobjective hybrid genetic algorithm for the capacitated multipoint network design problem.
Lo, C C; Chang, W H
2000-01-01
The capacitated multipoint network design problem (CMNDP) is NP-complete. In this paper, a hybrid genetic algorithm for CMNDP is proposed. The multiobjective hybrid genetic algorithm (MOHGA) differs from other genetic algorithms (GAs) mainly in its selection procedure. The concept of subpopulation is used in MOHGA. Four subpopulations are generated according to the elitism reservation strategy, the shifting Prüfer vector, the stochastic universal sampling, and the complete random method, respectively. Mixing these four subpopulations produces the next generation population. The MOHGA can effectively search the feasible solution space due to population diversity. The MOHGA has been applied to CMNDP. By examining computational and analytical results, we find that the MOHGA can identify most nondominated solutions and is much more effective and efficient than other multiobjective GAs.
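Of the four subpopulation generators named above, stochastic universal sampling (SUS) admits a compact sketch. The helper below is a generic textbook SUS, not the authors' code: n equally spaced pointers are placed over the cumulative fitness wheel, giving low-variance fitness-proportional selection with a single random draw.

```python
import random

def stochastic_universal_sampling(fitness, n, seed=0):
    """Return the indices of n individuals selected by SUS.
    All fitness values are assumed positive (maximization)."""
    rng = random.Random(seed)
    total = sum(fitness)
    step = total / n                      # distance between pointers
    start = rng.uniform(0, step)          # single random offset
    pointers = [start + i * step for i in range(n)]
    picks, cum, idx = [], 0.0, 0
    for p in pointers:                    # pointers are increasing,
        while cum + fitness[idx] < p:     # so one sweep suffices
            cum += fitness[idx]
            idx += 1
        picks.append(idx)
    return picks
```

Because the pointers are evenly spaced, an individual holding a fraction f of the total fitness receives between floor(f·n) and ceil(f·n) copies, unlike independent roulette-wheel spins.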
The reduced basis method for the electric field integral equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fares, M., E-mail: fares@cerfacs.f; Hesthaven, J.S., E-mail: Jan_Hesthaven@Brown.ed; Maday, Y., E-mail: maday@ann.jussieu.f
We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
Accelerating molecular property calculations with nonorthonormal Krylov space methods
Furche, Filipp; Krull, Brandon T.; Nguyen, Brian D.; ...
2016-05-03
Here, we formulate Krylov space methods for large eigenvalue problems and linear equation systems that take advantage of decreasing residual norms to reduce the cost of matrix-vector multiplication. The residuals are used as subspace basis without prior orthonormalization, which leads to generalized eigenvalue problems or linear equation systems on the Krylov space. These nonorthonormal Krylov space (nKs) algorithms are favorable for large matrices with irregular sparsity patterns whose elements are computed on the fly, because fewer operations are necessary as the residual norm decreases as compared to the conventional method, while errors in the desired eigenpairs and solution vectors remain small. We consider real symmetric and symplectic eigenvalue problems as well as linear equation systems and Sylvester equations as they appear in configuration interaction and response theory. The nKs method can be implemented in existing electronic structure codes with minor modifications and yields speed-ups of 1.2-1.8 in typical time-dependent Hartree-Fock and density functional applications without accuracy loss. The algorithm can compute entire linear subspaces simultaneously which benefits electronic spectra and force constant calculations requiring many eigenpairs or solution vectors. The nKs approach is related to difference density methods in electronic ground state calculations, and particularly efficient for integral direct computations of exchange-type contractions. By combination with resolution-of-the-identity methods for Coulomb contractions, three- to fivefold speed-ups of hybrid time-dependent density functional excited state and response calculations are achieved.
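The core idea of using unorthonormalized residuals as the subspace basis can be illustrated for a small linear system. The sketch below is a generic Galerkin projection onto a residual-spanned Krylov space, assuming a symmetric positive-definite matrix; it omits the conditioning control, eigenproblem variants, and on-the-fly matrix elements that the actual nKs algorithms address.

```python
import numpy as np

def nks_solve(A, b, tol=1e-10, max_iter=50):
    """Solve A x = b by projecting onto a Krylov space whose basis is the
    sequence of raw residuals (no Gram-Schmidt). Each step solves the
    small projected system S^T A S y = S^T b; lstsq guards against the
    near-linear-dependence that a nonorthonormal basis can develop."""
    x = np.zeros(len(b))
    basis = []
    for _ in range(max_iter):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        basis.append(r)
        S = np.column_stack(basis)            # nonorthonormal basis
        y, *_ = np.linalg.lstsq(S.T @ A @ S, S.T @ b, rcond=None)
        x = S @ y
    return x
```

For an n-by-n system the residuals span the full space after at most n steps, so the projected solve then reproduces the exact solution; the savings in the paper come from stopping far earlier and from cheaper matrix-vector products as residual norms shrink.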
Hesford, Andrew J; Tillett, Jason C; Astheimer, Jeffrey P; Waag, Robert C
2014-08-01
Accurate and efficient modeling of ultrasound propagation through realistic tissue models is important to many aspects of clinical ultrasound imaging. Simplified problems with known solutions are often used to study and validate numerical methods. Greater confidence in a time-domain k-space method and a frequency-domain fast multipole method is established in this paper by analyzing results for realistic models of the human breast. Models of breast tissue were produced by segmenting magnetic resonance images of ex vivo specimens into seven distinct tissue types. After confirming with histologic analysis by pathologists that the model structures mimicked in vivo breast, the tissue types were mapped to variations in sound speed and acoustic absorption. Calculations of acoustic scattering by the resulting model were performed on massively parallel supercomputer clusters using parallel implementations of the k-space method and the fast multipole method. The efficient use of these resources was confirmed by parallel efficiency and scalability studies using large-scale, realistic tissue models. Comparisons between the temporal and spectral results were performed in representative planes by Fourier transforming the temporal results. An RMS field error less than 3% throughout the model volume confirms the accuracy of the methods for modeling ultrasound propagation through human breast.
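The quoted RMS field error can be expressed as a relative root-mean-square difference between two sampled fields (e.g. the spectral result and the Fourier-transformed temporal result). A minimal sketch, with the normalization choice (reference RMS amplitude) as an assumption:

```python
import numpy as np

def relative_rms_error(field, reference):
    """Relative RMS difference between two sampled (possibly complex)
    fields, expressed as a fraction of the reference RMS amplitude."""
    num = np.mean(np.abs(field - reference) ** 2)
    den = np.mean(np.abs(reference) ** 2)
    return np.sqrt(num / den)
```

An "RMS field error less than 3%" then corresponds to this ratio staying below 0.03 over the model volume.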
Nonnegative least-squares image deblurring: improved gradient projection approaches
NASA Astrophysics Data System (ADS)
Benvenuto, F.; Zanella, R.; Zanni, L.; Bertero, M.
2010-02-01
The least-squares approach to image deblurring leads to an ill-posed problem. The addition of the nonnegativity constraint, when appropriate, does not provide regularization, even if, as far as we know, a thorough investigation of the ill-posedness of the resulting constrained least-squares problem has yet to be carried out. Iterative methods converging to nonnegative least-squares solutions have been proposed. Some of them have the 'semi-convergence' property, i.e. early stopping of the iteration provides 'regularized' solutions. In this paper we consider two of these methods: the projected Landweber (PL) method and the iterative image space reconstruction algorithm (ISRA). Even if they work well in many instances, they are not frequently used in practice because, in general, they require a large number of iterations before providing a sensible solution. Therefore, the main purpose of this paper is to refresh these methods by increasing their efficiency. Starting from the remark that PL and ISRA require only the computation of the gradient of the functional, we propose applying to these algorithms special acceleration techniques that have recently been developed in the area of gradient methods. In particular, we propose the application of efficient step-length selection rules and line-search strategies. Moreover, remarking that ISRA is a scaled gradient algorithm, we evaluate its behaviour in comparison with a recent scaled gradient projection (SGP) method for image deblurring. Numerical experiments demonstrate that the accelerated methods still exhibit the semi-convergence property, with a considerable gain both in the number of iterations and in the computational time; in particular, SGP clearly emerges as the most efficient one.
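The projected Landweber method discussed above is simple enough to sketch directly: a fixed-step gradient descent on ||Ax - b||² followed by projection onto the nonnegative orthant, with early stopping playing the role of regularization. This is the baseline iteration, not the accelerated step-length/line-search variants the paper develops; the default step size is one standard convergent choice.

```python
import numpy as np

def projected_landweber(A, b, tau=None, n_iter=100):
    """Projected Landweber iteration for min ||A x - b||^2 s.t. x >= 0.
    Semi-convergence means n_iter itself acts as the regularization
    parameter for noisy b."""
    if tau is None:
        # convergence requires 0 < tau < 2 / ||A||_2^2
        tau = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x - tau * (A.T @ (A @ x - b))   # gradient step
        x = np.maximum(x, 0.0)              # projection onto x >= 0
    return x
```

The accelerated schemes in the paper keep these two operations but replace the constant tau with adaptive step-length rules (e.g. Barzilai-Borwein-type selections) plus a line search.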
Modeling Natural Space Ionizing Radiation Effects on External Materials
NASA Technical Reports Server (NTRS)
Alstatt, Richard L.; Edwards, David L.; Parker, Nelson C. (Technical Monitor)
2000-01-01
Predicting the effective life of materials for space applications has become increasingly critical with the drive to reduce mission cost. Programs have considered many solutions to reduce launch costs including novel, low mass materials and thin thermal blankets to reduce spacecraft mass. Determining the long-term survivability of these materials before launch is critical for mission success. This presentation will describe an analysis performed on the outer layer of the passive thermal control blanket of the Hubble Space Telescope. This layer had degraded for unknown reasons during the mission; however, ionizing radiation (IR) induced embrittlement was suspected. A methodology was developed which allowed direct comparison between the energy deposition of the natural environment and that of the laboratory generated environment. Commercial codes were used to predict the natural space IR environment, model energy deposition in the material from both natural and laboratory IR sources, and design the most efficient test. Results were optimized for total and local energy deposition with an iterative spreadsheet. This method has been used successfully for several laboratory tests at the Marshall Space Flight Center. The study showed that the natural space IR environment, by itself, did not cause the premature degradation observed in the thermal blanket.
Modeling natural space ionizing radiation effects on external materials
NASA Astrophysics Data System (ADS)
Altstatt, Richard L.; Edwards, David L.
2000-10-01
Predicting the effective life of materials for space applications has become increasingly critical with the drive to reduce mission cost. Programs have considered many solutions to reduce launch costs including novel, low mass materials and thin thermal blankets to reduce spacecraft mass. Determining the long-term survivability of these materials before launch is critical for mission success. This presentation will describe an analysis performed on the outer layer of the passive thermal control blanket of the Hubble Space Telescope. This layer had degraded for unknown reasons during the mission; however, ionizing radiation (IR) induced embrittlement was suspected. A methodology was developed which allowed direct comparison between the energy deposition of the natural environment and that of the laboratory generated environment. Commercial codes were used to predict the natural space IR environment, model energy deposition in the material from both natural and laboratory IR sources, and design the most efficient test. Results were optimized for total and local energy deposition with an iterative spreadsheet. This method has been used successfully for several laboratory tests at the Marshall Space Flight Center. The study showed that the natural space IR environment, by itself, did not cause the premature degradation observed in the thermal blanket.
Graf, Peter A.; Billups, Stephen
2017-07-24
Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.
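The geometric bookkeeping behind a triangle-based search can be sketched in a few lines: a ternary composition (a, b, c) with a + b + c = 1 embeds into a 2-D equilateral triangle, and a triangle can be refined into congruent children. Note this sketch uses a simple 4-way midpoint subdivision for illustration; it is not the paper's DIRECT division rule, and the function names are hypothetical.

```python
import math

def ternary_to_xy(a, b, c):
    """Map a ternary composition (a + b + c = 1) to 2-D coordinates in
    the unit equilateral triangle (standard barycentric embedding)."""
    assert abs(a + b + c - 1.0) < 1e-9
    return (b + 0.5 * c, (math.sqrt(3) / 2) * c)

def subdivide(tri):
    """Split a triangle (three (x, y) vertices) into four congruent
    children by joining edge midpoints -- one refinement step a
    triangle-based partitioning search could use."""
    p0, p1, p2 = tri
    m01 = tuple((u + v) / 2 for u, v in zip(p0, p1))
    m12 = tuple((u + v) / 2 for u, v in zip(p1, p2))
    m02 = tuple((u + v) / 2 for u, v in zip(p0, p2))
    return [(p0, m01, m02), (m01, p1, m12), (m02, m12, p2), (m01, m12, m02)]
```

A DIRECT-style search would additionally score each triangle by its center value and size, and refine only the "potentially optimal" ones on the size/value Pareto front.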
A space radiation transport method development
NASA Technical Reports Server (NTRS)
Wilson, J. W.; Tripathi, R. K.; Qualls, G. D.; Cucinotta, F. A.; Prael, R. E.; Norbury, J. W.; Heinbockel, J. H.; Tweed, J.
2004-01-01
Improved spacecraft shield design requires early entry of radiation constraints into the design process to maximize performance and minimize costs. As a result, we have been investigating high-speed computational procedures to allow shield analysis from the preliminary design concepts to the final design. In particular, we will discuss the progress towards a full three-dimensional and computationally efficient deterministic code for which the current HZETRN evaluates the lowest-order asymptotic term. HZETRN is the first deterministic solution to the Boltzmann equation allowing field mapping within the International Space Station (ISS) in tens of minutes using standard finite element method (FEM) geometry common to engineering design practice enabling development of integrated multidisciplinary design optimization methods. A single ray trace in ISS FEM geometry requires 14 ms and severely limits application of Monte Carlo methods to such engineering models. A potential means of improving the Monte Carlo efficiency in coupling to spacecraft geometry is given in terms of re-configurable computing and could be utilized in the final design as verification of the deterministic method optimized design. Published by Elsevier Ltd on behalf of COSPAR.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Graf, Peter A.; Billups, Stephen
Computational materials design has suffered from a lack of algorithms formulated in terms of experimentally accessible variables. Here we formulate the problem of (ternary) alloy optimization at the level of choice of atoms and their composition that is normal for synthesists. Mathematically, this is a mixed integer problem where a candidate solution consists of a choice of three elements, and how much of each of them to use. This space has the natural structure of a set of equilateral triangles. We solve this problem by introducing a novel version of the DIRECT algorithm that (1) operates on equilateral triangles instead of rectangles and (2) works across multiple triangles. We demonstrate on a test case that the algorithm is both robust and efficient. Lastly, we offer an explanation of the efficacy of DIRECT -- specifically, its balance of global and local search -- by showing that 'potentially optimal rectangles' of the original algorithm are akin to the Pareto front of the 'multi-component optimization' of global and local search.
Exploring Lovelock theory moduli space for Schrödinger solutions
NASA Astrophysics Data System (ADS)
Jatkar, Dileep P.; Kundu, Nilay
2016-09-01
We look for Schrödinger solutions in Lovelock gravity in D > 4. We span the entire parameter space and determine parametric relations under which the Schrödinger solution exists. We find that in arbitrary dimensions pure Lovelock theories have Schrödinger solutions of arbitrary radius, on a co-dimension one locus in the Lovelock parameter space. This co-dimension one locus contains the subspace over which the Lovelock gravity can be written in the Chern-Simons form. Schrödinger solutions do not exist outside this locus and on this locus they exist for arbitrary dynamical exponent z. This freedom in z is due to the degeneracy in the configuration space. We show that this degeneracy survives certain deformation away from the Lovelock moduli space.
An analysis for high Reynolds number inviscid/viscid interactions in cascades
NASA Technical Reports Server (NTRS)
Barnett, Mark; Verdon, Joseph M.; Ayer, Timothy C.
1993-01-01
An efficient steady analysis for predicting strong inviscid/viscid interaction phenomena such as viscous-layer separation, shock/boundary-layer interaction, and trailing-edge/near-wake interaction in turbomachinery blade passages is needed as part of a comprehensive analytical blade design prediction system. Such an analysis is described. It uses an inviscid/viscid interaction approach, in which the flow in the outer inviscid region is assumed to be potential, and that in the inner or viscous-layer region is governed by Prandtl's equations. The inviscid solution is determined using an implicit, least-squares, finite-difference approximation, the viscous-layer solution using an inverse, finite-difference, space-marching method which is applied along the blade surfaces and wake streamlines. The inviscid and viscid solutions are coupled using a semi-inverse global iteration procedure, which permits the prediction of boundary-layer separation and other strong-interaction phenomena. Results are presented for three cascades, with a range of inlet flow conditions considered for one of them, including conditions leading to large-scale flow separations. Comparisons with Navier-Stokes solutions and experimental data are also given.
NASA Astrophysics Data System (ADS)
Trinkle, Dallas R.
2017-10-01
A general solution for vacancy-mediated diffusion in the dilute-vacancy/dilute-solute limit for arbitrary crystal structures is derived from the master equation. A general numerical approach to the vacancy lattice Green function reduces to the sum of a few analytic functions and numerical integration of a smooth function over the Brillouin zone for arbitrary crystals. The Dyson equation solves for the Green function in the presence of a solute with arbitrary but finite interaction range to compute the transport coefficients accurately, efficiently and automatically, including cases with very large differences in solute-vacancy exchange rates. The methodology takes advantage of the space group symmetry of a crystal to reduce the complexity of the matrix inversion in the Dyson equation. An open-source implementation of the algorithm is available, and numerical results are presented for the convergence of the integration error of the bare vacancy Green function, and tracer correlation factors for a variety of crystals including wurtzite (hexagonal diamond) and garnet.
An integrated solution for remote data access
NASA Astrophysics Data System (ADS)
Sapunenko, Vladimir; D'Urso, Domenico; dell'Agnello, Luca; Vagnoni, Vincenzo; Duranti, Matteo
2015-12-01
Data management constitutes one of the major challenges that a geographically distributed e-Infrastructure has to face, especially when remote data access is involved. We discuss an integrated solution which enables transparent and efficient access to on-line and near-line data through high-latency networks. The solution is based on the joint use of the General Parallel File System (GPFS) and the Tivoli Storage Manager (TSM). Both products, developed by IBM, are well known and extensively used in the HEP computing community. Owing to a new feature introduced in GPFS 3.5, the so-called Active File Management (AFM), the definition of a single, geographically-distributed namespace, characterised by automated data flow management between different locations, becomes possible. As a practical example, we present the implementation of AFM-based remote data access between two data centres located in Bologna and Rome, demonstrating the validity of the solution for the use case of the AMS experiment, an astro-particle experiment supported by the INFN CNAF data centre with large disk-space requirements (more than 1.5 PB).
Transient Thermoelectric Solution Employing Green's Functions
NASA Technical Reports Server (NTRS)
Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred
2014-01-01
The study formulates convenient solutions to the problem of a thermoelectric couple operating under time-varying conditions. Transient operation of thermoelectrics will become increasingly common as the technology finds use in a growing number of applications. A number of terrestrial applications, in contrast to steady-state space applications, can subject devices to time-varying conditions. For instance, thermoelectrics in the automotive industry can be exposed to transient conditions depending on engine system dynamics along with factors such as driving style. In an effort to generalize the thermoelectric solution, a Green's function method is used, so that arbitrary time-varying boundary and initial conditions may be applied to the system without reformulation. The solution demonstrates that in transient thermoelectric applications additional factors must be taken into account and optimized. For instance, the material's specific heat and density become critical parameters, in addition to the thermal mass of the heat sink and the details of the thermal profile, such as the oscillating frequency. The calculations can yield the optimum operating conditions to maximize power output and/or efficiency for a given type of device.
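The Green's-function idea above can be sketched in one dimension. The following is a minimal heat-conduction example, assuming a rod with zero-temperature ends and unit properties; it is not the paper's thermoelectric couple model, and all names and constants are illustrative.

```python
import numpy as np

def green_series_temperature(x, t, f, L=1.0, alpha=1.0, n_terms=100):
    """Temperature u(x, t) in a rod with zero-temperature ends and initial
    profile f(x), via the eigenfunction-series Green's function
    G(x, xi, t) = (2/L) sum_n sin(n*pi*x/L) sin(n*pi*xi/L) exp(-alpha*(n*pi/L)^2 t).
    Convolving G with f collapses to a sine series with quadrature coefficients,
    so new initial conditions need no reformulation, only new coefficients."""
    xi = np.linspace(0.0, L, 2001)
    dxi = xi[1] - xi[0]
    xarr = np.atleast_1d(np.asarray(x, dtype=float))
    u = np.zeros_like(xarr)
    for n in range(1, n_terms + 1):
        k = n * np.pi / L
        # rectangle rule equals the trapezoid rule here: sin(k*0) = sin(k*L) = 0
        coeff = (2.0 / L) * np.sum(f(xi) * np.sin(k * xi)) * dxi
        u += coeff * np.sin(k * xarr) * np.exp(-alpha * k**2 * t)
    return u
```

For the single-mode initial profile f(x) = sin(pi x) the series reduces to one term, so the result can be checked against the exact decay factor exp(-pi^2 t).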
Optimal placement of tuning masses on truss structures by genetic algorithms
NASA Technical Reports Server (NTRS)
Ponslet, Eric; Haftka, Raphael T.; Cudney, Harley H.
1993-01-01
Optimal placement of tuning masses, actuators and other peripherals on large space structures is a combinatorial optimization problem. This paper surveys several techniques for solving this problem. The genetic algorithm approach to the solution of the placement problem is described in detail. An example of minimizing the difference between the two lowest frequencies of a laboratory truss by adding tuning masses is used for demonstrating some of the advantages of genetic algorithms. The relative efficiencies of different codings are compared using the results of a large number of optimization runs.
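A toy version of the genetic-algorithm placement can be sketched as follows. The spring-mass chain, the 0.5 tuning-mass value, and all GA parameters below are assumptions for illustration; this is not the laboratory truss or the codings compared in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_NODES, N_MASSES, POP, GENS = 10, 3, 20, 30   # all toy-scale assumptions

def freq_gap(nodes):
    """Gap between the two lowest natural frequencies of a fixed-free
    spring-mass chain (unit springs/masses) with a 0.5 tuning mass added
    at each selected node -- a stand-in for the laboratory truss."""
    m = np.ones(N_NODES)
    m[list(nodes)] += 0.5
    K = 2.0*np.eye(N_NODES) - np.eye(N_NODES, k=1) - np.eye(N_NODES, k=-1)
    K[-1, -1] = 1.0                                  # free end
    s = 1.0 / np.sqrt(m)
    w2 = np.linalg.eigvalsh(K * np.outer(s, s))      # symmetrized eigenproblem
    return float(np.sqrt(w2[1]) - np.sqrt(w2[0]))

def crossover(a, b):
    pool = sorted(set(a) | set(b))                   # draw child nodes from both parents
    return tuple(sorted(rng.choice(pool, N_MASSES, replace=False)))

def mutate(nodes):
    keep = set(nodes)
    keep.discard(int(rng.choice(list(keep))))        # drop one selected node
    free = [i for i in range(N_NODES) if i not in keep]
    keep.add(int(rng.choice(free)))                  # add a random unselected one
    return tuple(sorted(keep))

pop = [tuple(sorted(rng.choice(N_NODES, N_MASSES, replace=False))) for _ in range(POP)]
first_best = min(freq_gap(p) for p in pop)
for _ in range(GENS):
    pop.sort(key=freq_gap)
    parents = pop[:POP // 2]                         # truncation selection
    children = [mutate(crossover(parents[rng.integers(POP // 2)],
                                 parents[rng.integers(POP // 2)]))
                for _ in range(POP - POP // 2)]
    pop = parents + children                         # elitism: parents survive
best = min(pop, key=freq_gap)
```

Because the parents survive each generation, the best frequency gap can only improve (or stay equal) relative to the initial population.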
Platform Design for Fleet-Level Efficiency: Application for Air Mobility Command (AMC)
2013-04-01
and networking that has been the hallmark of previous symposia. By purposely limiting attendance to 350 people, we encourage just that. This forum... [garbled equation fragments omitted: constraint (7) on capacity, constraint (8) on aircraft take-off distance, and constraint (9) on the pallet limit] ...the solution space. Heuristic algorithms such as Simulated Annealing (SA), Genetic Algorithms (GA), and so forth, may be needed to solve the small
An interacting boundary layer model for cascades
NASA Technical Reports Server (NTRS)
Davis, R. T.; Rothmayer, A. P.
1983-01-01
A laminar, incompressible interacting boundary layer model is developed for two-dimensional cascades. In the limit of large cascade spacing these equations reduce to the interacting boundary layer equations for a single body immersed in an infinite stream. A fully implicit numerical method is used to solve the governing equations, and is found to be at least as efficient as the same technique applied to the single body problem. Solutions are then presented for a cascade of finite flat plates and a cascade of finite sine-waves, with cusped leading and trailing edges.
Corrosion consequences of microfouling in water reclamation systems
NASA Technical Reports Server (NTRS)
Ford, Tim; Mitchell, Ralph
1991-01-01
This paper examines the potential fouling and corrosion problems associated with microbial film formation throughout the water reclamation system (WRS) designed for the Space Station Freedom. It is shown that the use of advanced metal sputtering techniques combined with image analysis and FTIR spectroscopy will present realistic solutions for investigating the formation and function of biofilm on different alloys, the subsequent corrosion, and the efficiency of different treatments. These techniques, used in combination with electrochemical measurements of corrosion, will provide a powerful approach to examinations of materials considered for use in the WRS.
NASA Astrophysics Data System (ADS)
Chandran, Senthilkumar; Paulraj, Rajesh; Ramasamy, P.
2017-05-01
Semi-organic lithium hydrogen oxalate monohydrate non-linear optical single crystals have been grown by the slow evaporation solution growth technique at 35 °C. A single-crystal X-ray diffraction study showed that the grown crystal belongs to the triclinic system with space group P1. The mechanical strength decreases with increasing load. The piezoelectric coefficient is found to be 1.41 pC/N. The nonlinear optical property was measured using the Kurtz-Perry powder technique, and the SHG efficiency was almost equal to that of KDP.
Projection methods for line radiative transfer in spherical media.
NASA Astrophysics Data System (ADS)
Anusha, L. S.; Nagendra, K. N.
An efficient numerical method called the Preconditioned Bi-Conjugate Gradient (Pre-BiCG) method is presented for the solution of the radiative transfer equation in spherical geometry. A variant of this method, the Stabilized Preconditioned Bi-Conjugate Gradient (Pre-BiCG-STAB) method, is also presented. These methods are based on projections onto subspaces of the n-dimensional Euclidean space R^n called Krylov subspaces. The methods are shown to be faster in terms of convergence rate than contemporary iterative methods such as Jacobi, Gauss-Seidel and Successive Over-Relaxation (SOR).
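As a rough sketch of preconditioned Krylov iteration of this kind, the snippet below uses SciPy's BiCGSTAB with a simple Jacobi (diagonal) preconditioner; the tridiagonal stand-in matrix is an assumption and is not the paper's radiative transfer operator or preconditioner.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, bicgstab

n = 200
# Stand-in system: a nonsymmetric, diagonally dominant matrix of
# advection-diffusion type (illustrative only).
A = diags([-1.2, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioning: the simplest example of the "Pre-" step.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = bicgstab(A, b, M=M, maxiter=500)   # info == 0 signals convergence
```

Krylov methods like this one only require matrix-vector products, which is what makes them attractive for large discretized transfer operators.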
A Novel Particle Swarm Optimization Approach for Grid Job Scheduling
NASA Astrophysics Data System (ADS)
Izakian, Hesam; Tork Ladani, Behrouz; Zamanifar, Kamran; Abraham, Ajith
This paper presents a Particle Swarm Optimization (PSO) algorithm for grid job scheduling. PSO is a population-based search algorithm inspired by the social behavior of bird flocking and fish schooling. Particles fly through the problem search space to find optimal or near-optimal solutions. The scheduler aims at minimizing makespan and flowtime simultaneously. Experimental studies show that the proposed approach is more efficient than the PSO approach reported in the literature.
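A generic PSO loop of the kind described can be sketched as follows. The continuous sphere objective stands in for the makespan/flowtime objective (the paper maps particle positions to discrete schedules), and all parameter values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def sphere(x):                    # stand-in objective; the paper minimizes makespan/flowtime
    return float(np.sum(x**2))

def pso(f, dim=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))   # particle positions
    v = np.zeros_like(x)                         # particle velocities
    pbest = x.copy()                             # personal bests
    pbest_val = np.array([f(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()         # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive pull (pbest) + social pull (gbest)
        v = w*v + c1*r1*(pbest - x) + c2*r2*(g - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        improved = vals < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, float(pbest_val.min())

g_best, best_val = pso(sphere)
```

The global best is monotone by construction: each particle's personal best only ever improves, so the swarm's record never regresses.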
NASA Astrophysics Data System (ADS)
Kanerva, M.; Koerselman, J. R.; Revitzer, H.; Johansson, L.-S.; Sarlin, E.; Rautiainen, A.; Brander, T.; Saarela, O.
2014-06-01
Spacecraft include sensitive electronics that must be protected against radiation from the space environment. Hybrid laminates consisting of tungsten layers and carbon-fibre-reinforced epoxy composite are a potential solution for a lightweight, efficient, protective enclosure material. Here, we analysed six different surface treatments for tungsten foils in terms of the resulting surface tension components, composition, and bonding strength with epoxy. A hydrofluoric-nitric-sulfuric-acid method and a diamond-like carbon-based DIARC® coating were found to be the most promising surface treatments for tungsten foils in this study.
Faster Heavy Ion Transport for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.
2013-01-01
The deterministic particle transport code HZETRN was developed to enable fast and accurate space radiation transport through materials. As more complex transport solutions are implemented for neutrons, light ions (Z ≤ 2), mesons, and leptons, it is important to maintain overall computational efficiency. In this work, the heavy ion (Z > 2) transport algorithm in HZETRN is reviewed, and a simple modification is shown to provide an approximately 5x decrease in execution time for galactic cosmic ray transport. Convergence tests and other comparisons are carried out to verify that numerical accuracy is maintained in the new algorithm.
Inverse kinematics of a dual linear actuator pitch/roll heliostat
NASA Astrophysics Data System (ADS)
Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh
2017-06-01
This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
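The aiming half of a heliostat problem can be sketched with a standard bisector construction. This is a generic illustration with an assumed pitch/roll axis convention, not the paper's actuator kinematics or linkage geometry.

```python
import numpy as np

def heliostat_normal(sun_dir, target_dir):
    """Mirror normal that reflects the incoming sun ray toward the target:
    the unit bisector of the (unit) mirror-to-sun and mirror-to-target
    directions, by the law of reflection."""
    s = np.asarray(sun_dir, dtype=float);   s /= np.linalg.norm(s)
    t = np.asarray(target_dir, dtype=float); t /= np.linalg.norm(t)
    n = s + t
    return n / np.linalg.norm(n)

def pitch_roll(n):
    """Recover pitch (about x) and roll (about y) from the normal under the
    assumed convention n = Ry(roll) @ Rx(pitch) @ [0, 0, 1] -- one of several
    possible axis conventions; the paper's geometry is not reproduced."""
    x, y, z = n
    pitch = -np.arcsin(np.clip(y, -1.0, 1.0))
    roll = np.arctan2(x, z)
    return pitch, roll

# Sun overhead, target on the horizon along +x:
normal = heliostat_normal([0, 0, 1], [1, 0, 0])   # ≈ [0.7071, 0.0, 0.7071]
```

From these angles, a specific design would then map pitch/roll to the two linear-actuator lengths, which is the part solved in the paper.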
Some exact solutions for maximally symmetric topological defects in Anti de Sitter space
NASA Astrophysics Data System (ADS)
Alvarez, Orlando; Haddad, Matthew
2018-03-01
We obtain exact analytical solutions for a class of SO(l) Higgs field theories in a non-dynamic background n-dimensional anti de Sitter space. These finite transverse energy solutions are maximally symmetric p-dimensional topological defects where n = (p + 1) + l. The radius of curvature of anti de Sitter space provides an extra length scale that allows us to study the equations of motion in a limit where the masses of the Higgs field and the massive vector bosons are both vanishing. We call this the double BPS limit. In anti de Sitter space, the equations of motion depend on both p and l. The exact analytical solutions are expressed in terms of standard special functions. The known exact analytical solutions are for kink-like defects (p = 0, 1, 2, ...; l = 1), vortex-like defects (p = 1, 2, 3; l = 2), and the 't Hooft-Polyakov monopole (p = 0; l = 3). A bonus is that the double BPS limit automatically gives a maximally symmetric classical glueball-type solution. In certain cases where we did not find an analytic solution, we present numerical solutions to the equations of motion. The asymptotically exponentially increasing volume with distance of anti de Sitter space imposes different constraints than those found in the study of defects in Minkowski space.
A roadmap towards advanced space weather science to protect society's technological infrastructure
NASA Astrophysics Data System (ADS)
Schrijver, Carolus
As mankind's technological capabilities grow, society develops a rapidly deepening insight into the workings of the universe at large, guided by the exploration of space near our home. But at the same time, our societal dependence on technology increases, and with that comes a growing appreciation of the challenges presented by the phenomena that occur in the space around our home planet: Magnetic explosions on the Sun and their counterparts in the geomagnetic field can in extreme cases endanger our all-pervasive electrical infrastructure. Powerful space storms occasionally lower the reliability of the globe-spanning satellite navigation systems and interrupt radio communications. Energetic particle storms lead to malfunctions and even failures in satellites that are critical to the flow of information in the globally connected economies. These and other Sun-driven effects on Earth's environment, collectively known as space weather, resemble some other natural hazards in the sense that they pose a risk for the safe and efficient functioning of society that needs to be understood, quantified, and ultimately mitigated. The complexity of the coupled Sun-Earth system, the sparseness by which it can be covered by remote-sensing and in-situ instrumentation, and the costs of the required observational and computational infrastructure warrant a well-planned and well-coordinated approach with cost-efficient solutions. Our team is tasked with the development of a roadmap with the goal of demonstrably improving our observational capabilities, scientific understanding, and the ability to forecast. This paper summarizes the accomplishments of the roadmap team in identifying the highest-priority challenges to achieve these goals.
NASA Astrophysics Data System (ADS)
Verardo, E.; Atteia, O.; Rouvreau, L.
2015-12-01
In-situ bioremediation is a commonly used remediation technology to clean up the subsurface of petroleum-contaminated sites. Forecasting remedial performance (in terms of flux and mass reduction) is a challenge due to uncertainties associated with source properties and with the contribution and efficiency of concentration-reducing mechanisms. In this study, predictive uncertainty analysis of bioremediation system efficiency is carried out with the null-space Monte Carlo (NSMC) method, which combines the calibration solution-space parameters with the ensemble of null-space parameters, creating sets of calibration-constrained parameters for input to follow-on predictions of remedial efficiency. The first step in the NSMC methodology for uncertainty analysis is model calibration. The model was calibrated by matching simulated BTEX concentrations to a total of 48 observations from historical data taken before implementation of treatment. Two different bioremediation designs were then implemented in the calibrated model. The first consists of pumping/injection wells and the second of a permeable barrier coupled with infiltration across slotted piping. The NSMC method was used to calculate 1000 calibration-constrained parameter sets for the two different models. Several variants of the method were implemented to investigate their effect on the efficiency of the NSMC method. The first variant of the NSMC implementation is based on a single calibrated model. In the second variant, models were calibrated from different initial parameter sets, and NSMC calibration-constrained parameter sets were sampled from these different calibrated models. We demonstrate that, in the context of a nonlinear model, the second variant avoids underestimating parameter uncertainty, which may otherwise lead to a poor quantification of predictive uncertainty.
Application of the proposed approach to manage bioremediation of groundwater at a real site shows that it provides effective support for the management of in-situ bioremediation systems. Moreover, this study demonstrates that the NSMC method provides a computationally efficient and practical methodology for applying model predictive uncertainty methods in environmental management.
Dissipative advective accretion disc solutions with variable adiabatic index around black holes
NASA Astrophysics Data System (ADS)
Kumar, Rajiv; Chattopadhyay, Indranil
2014-10-01
We investigated accretion on to black holes in the presence of viscosity and cooling, employing an equation of state with a variable adiabatic index for a multispecies fluid. We obtained the expression of a generalized Bernoulli parameter which is a constant of motion for an accretion flow in the presence of viscosity and cooling. We obtained all possible transonic solutions for a variety of boundary conditions, viscosity parameters and accretion rates. We identified the solutions with their positions in the parameter space of the generalized Bernoulli parameter and the angular momentum on the horizon. We showed that a shocked solution is more luminous than a shock-free one. For particular energies and viscosity parameters, we obtained accretion disc luminosities in the range of 10^-4 to 1.2 times the Eddington luminosity, and the radiative efficiency increased with the mass accretion rate. We found steady-state shock solutions even for high viscosity parameters, high accretion rates and a wide range of flow compositions, from purely electron-proton to lepton-dominated accretion flow. However, similar to earlier studies of inviscid flow, an accretion shock was not obtained for electron-positron pair plasma.
Impact of Various Irrigating Agents on Root Fracture: An in vitro Study.
Tiwari, Sukriti; Nikhade, Pradnya; Chandak, Manoj; Sudarshan, C; Shetty, Priyadarshini; Gupta, Naveen K
2016-08-01
Irrigating solutions are used for cleaning and removing dentinal debris and other remnants from the pulpal space during biomechanical preparation. We therefore evaluated the impact of various irrigating agents on root fracture at a 5-minute exposure time. We sectioned 60 permanent maxillary premolars with fully formed root structures transversely, maintaining a root length of approximately 14 mm. Five study groups were formed using various irrigating agents, including ethylenediaminetetraacetic acid (EDTA), cetrimide, and citric acid. A universal force test machine was used to measure the force required to fracture each root. An analysis of variance (ANOVA) test was used to assess the level of significance. The 10% citric acid solution showed the lowest fracture resistance, whereas the 10% EDTA solution showed the maximum fracture resistance of the root portion. Selection of a suitable EDTA concentration that has minimal adverse effect on the mechanical properties of the tooth is very important for the successful management of tooth fracture. As 10% EDTA provided the highest fracture resistance, its use as an irrigating solution in root canal therapy (RCT) is supported. Further research with larger and different study groups is required to search for more efficient irrigating solutions to improve the outcome of RCT.
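The ANOVA step can be illustrated with SciPy's one-way ANOVA. The fracture-force values below are hypothetical illustrative numbers, not the study's measurements.

```python
from scipy.stats import f_oneway

# Hypothetical fracture forces (N) for three irrigant groups --
# illustrative numbers only, not the study's data.
edta_10 = [612, 598, 640, 605, 621]
citric_10 = [450, 471, 438, 462, 455]
cetrimide = [530, 518, 544, 525, 537]

# One-way ANOVA: does at least one group mean differ significantly?
f_stat, p_value = f_oneway(edta_10, citric_10, cetrimide)
```

A small p-value (conventionally below 0.05) indicates a significant difference among the group means; a post-hoc test would then locate which pairs differ.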
An efficient flexible-order model for 3D nonlinear water waves
NASA Astrophysics Data System (ADS)
Engsig-Karup, A. P.; Bingham, H. B.; Lindberg, O.
2009-04-01
The flexible-order, finite difference based fully nonlinear potential flow model described in [H.B. Bingham, H. Zhang, On the accuracy of finite difference solutions for nonlinear water waves, J. Eng. Math. 58 (2007) 211-228] is extended to three dimensions (3D). In order to obtain an optimal scaling of the solution effort, multigrid is employed to precondition a GMRES iterative solution of the discretized Laplace problem. A robust multigrid method based on Gauss-Seidel smoothing is found to require special treatment of the boundary conditions along solid boundaries, and in particular on the sea bottom. A new discretization scheme using one layer of grid points outside the fluid domain is presented and shown to provide convergent solutions over the full physical and discrete parameter space of interest. Linear analysis of the fundamental properties of the scheme with respect to accuracy, robustness and energy conservation is presented together with demonstrations of grid-independent iteration counts and optimal scaling of the solution effort. Calculations are made for 3D nonlinear wave problems for steep nonlinear waves and a shoaling problem, which show good agreement with experimental measurements and other calculations from the literature.
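The multigrid-preconditioned GMRES idea can be sketched in one dimension with a two-grid V-cycle. This is an assumption-heavy miniature (1D Poisson, Gauss-Seidel smoothing, direct coarse solve), not the paper's 3D flexible-order discretization.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

def poisson1d(n):
    h = 1.0 / (n + 1)
    return diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") / h**2

def gauss_seidel(b, x, h2, sweeps=2):
    # lexicographic Gauss-Seidel for (2*x_i - x_{i-1} - x_{i+1})/h^2 = b_i
    n = len(x)
    for _ in range(sweeps):
        for i in range(n):
            left = x[i - 1] if i > 0 else 0.0
            right = x[i + 1] if i < n - 1 else 0.0
            x[i] = 0.5 * (h2 * b[i] + left + right)
    return x

def two_grid(b, n):
    """One two-grid V-cycle for 1D Poisson: pre-smooth, coarse-grid
    correction, post-smooth. Used below as a GMRES preconditioner."""
    h2 = (1.0 / (n + 1))**2
    x = gauss_seidel(b, np.zeros(n), h2)                       # pre-smoothing
    r = b - (poisson1d(n) @ x)
    rc = 0.25*r[0:-2:2] + 0.5*r[1:-1:2] + 0.25*r[2::2]         # full-weighting restriction
    ec = np.linalg.solve(poisson1d((n - 1)//2).toarray(), rc)  # direct coarse solve
    e = np.zeros(n)                                            # linear prolongation
    e[1::2] = ec
    e[2:-1:2] = 0.5*(ec[:-1] + ec[1:])
    e[0], e[-1] = 0.5*ec[0], 0.5*ec[-1]
    return gauss_seidel(b, x + e, h2)                          # post-smoothing

n = 63                                  # fine grid with n = 2*nc + 1 interior points
A = poisson1d(n)
b = np.ones(n)
M = LinearOperator((n, n), matvec=lambda v: two_grid(v, n))
x, info = gmres(A, b, M=M, maxiter=50)
```

For -u'' = 1 with zero boundary values the exact solution is u = x(1-x)/2, and the second-order stencil reproduces quadratics exactly, so the discrete solution can be checked against it.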
A New Expanded Mixed Element Method for Convection-Dominated Sobolev Equation
Wang, Jinfeng; Li, Hong; Fang, Zhichao
2014-01-01
We propose and analyze a new expanded mixed element method, whose gradient belongs to the simple square-integrable space instead of the classical H(div; Ω) space of Chen's expanded mixed element method. We study the new expanded mixed element method for the convection-dominated Sobolev equation, prove the existence and uniqueness of the finite element solution, and introduce a new expanded mixed projection. We derive the optimal a priori error estimates in the L^2-norm for the scalar unknown u and a priori error estimates in the (L^2)^2-norm for its gradient λ and its flux σ. Moreover, we obtain the optimal a priori error estimates in the H^1-norm for the scalar unknown u. Finally, we present numerical results illustrating the efficiency of the new method. PMID:24701153
Some aspects of algorithm performance and modeling in transient analysis of structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Robinson, J. C.
1981-01-01
The status of an effort to increase the efficiency of calculating transient temperature fields in complex aerospace vehicle structures is described. The advantages and disadvantages of algorithms with variable time steps, known as the GEAR package, are described. Four test problems, used for evaluating and comparing various algorithms, were selected, and finite-element models of the configurations are described. These problems include a space shuttle frame component, an insulated cylinder, a metallic panel for a thermal protection system, and a model of the wing of the space shuttle orbiter. Results generally indicate a preference for implicit over explicit algorithms for the solution of transient structural heat transfer problems when the governing equations are stiff (typical of many practical problems such as insulated metal structures).
Simulation of multipactor on the rectangular grooved dielectric surface
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cai, Libing; Wang, Jianguo, E-mail: wanguiuc@mail.xjtu.edu.cn; Northwest Institute of Nuclear Technology, Xi'an, Shaanxi 710024
2015-11-15
Multipactor discharge on the rectangular grooved dielectric surface is simulated self-consistently by using a two-and-a-half-dimensional (2.5D) electrostatic particle-in-cell (PIC) code. Compared with an electromagnetic PIC code, the electrostatic code gives a much more accurate solution for the space-charge field caused by the multipactor electrons and the deposited surface charge. According to the rectangular groove width and height, the multipactor can be divided into four models; the spatial distributions of the multipactor electrons and the space-charge fields are presented for these models. It is shown that the rectangular groove in the different models has very different suppression effects on the multipactor; effective and efficient suppression of the multipactor can be reached only with a proper groove size.
Efficiency optimization of a fast Poisson solver in beam dynamics simulation
NASA Astrophysics Data System (ADS)
Zheng, Dawei; Pöplau, Gisela; van Rienen, Ursula
2016-01-01
Calculating the solution of Poisson's equation for the space-charge force is still the major time consumer in beam dynamics simulations and calls for further improvement. In this paper, we summarize a classical fast Poisson solver in beam dynamics simulations: the integrated Green's function method. We introduce three optimizations of the classical Poisson solver routine: using the reduced integrated Green's function instead of the integrated Green's function; using the discrete cosine transform instead of the discrete Fourier transform for the Green's function; and using a novel fast convolution routine instead of an explicitly zero-padded convolution. The new Poisson solver routine preserves the advantages of fast computation and high accuracy. This provides a fast routine for high-performance calculation of the space-charge effect in accelerators.
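The zero-padding trick behind fast Green's-function convolution can be sketched in one dimension. This shows only the FFT convolution idea; the integrated Green's function and DCT variants from the paper are not reproduced.

```python
import numpy as np

def fft_linear_convolve(a, b):
    """Linear (zero-padded) convolution of two 1D arrays via the FFT --
    the trick that reduces an O(N^2) Green's-function sum over the mesh
    to O(N log N). Padding to at least len(a)+len(b)-1 points makes the
    circular FFT convolution equal to the linear one."""
    n = len(a) + len(b) - 1
    nfft = 1 << (n - 1).bit_length()     # next power of two for speed
    return np.fft.irfft(np.fft.rfft(a, nfft) * np.fft.rfft(b, nfft), nfft)[:n]

out = fft_linear_convolve(np.arange(4.0), np.array([1.0, 1.0]))
# out ≈ [0., 1., 3., 5., 3.]
```

In a space-charge solver, `a` would be the sampled Green's function and `b` the charge density on the mesh, with the same padding idea applied per dimension.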
One-dimensional transport equation models for sound energy propagation in long spaces: theory.
Jing, Yun; Larsen, Edward W; Xiang, Ning
2010-04-01
In this paper, a three-dimensional transport equation model is developed to describe the sound energy propagation in a long space. Then this model is reduced to a one-dimensional model by approximating the solution using the method of weighted residuals. The one-dimensional transport equation model directly describes the sound energy propagation in the "long" dimension and deals with the sound energy in the "short" dimensions by prescribed functions. Also, the one-dimensional model consists of a coupled set of N transport equations. Only N=1 and N=2 are discussed in this paper. For larger N, although the accuracy could be improved, the calculation time is expected to significantly increase, which diminishes the advantage of the model in terms of its computational efficiency.
NASA Technical Reports Server (NTRS)
Huffaker, R. C.
1982-01-01
The presence of NO2(-) in the external solution increased the overall efficiency of the mixed N sources by cereal leaves. The NH4(+) in the substrate solution decreased the efficiency of NO3(-) reduction, while NO3(-) in the substrate solution increased the efficiency of NH4(+) assimilation.
Method and apparatus for measuring volatile compounds in an aqueous solution
Gilmore, Tyler J [Pasco, WA; Cantrell, Kirk J [West Richland, WA
2002-07-16
The present invention is an improvement to the method and apparatus for measuring volatile compounds in an aqueous solution. The apparatus is a chamber with sides and two ends, where the first end is closed. The chamber contains a solution volume of the aqueous solution and a gas that is trapped within the first end of the chamber above the solution volume. The gas defines a head space within the chamber above the solution volume. The chamber may also be a cup with the second end open and facing down, submerged in the aqueous solution so that the gas defines the head space within the cup above the solution volume. The cup can also be entirely submerged in the aqueous solution. The second end of the chamber may be closed so that the chamber can be used while resting on a flat surface such as a bench. The improvement is a sparger for mixing the gas with the solution volume. The sparger can be a rotating element such as a propeller on a shaft or a cavitating impeller. The sparger can also be a pump and nozzle, where the pump is a liquid pump and the nozzle is a liquid spray nozzle open to the head space for spraying the solution volume into the head space of gas. Alternatively, the pump can be a gas pump and the nozzle a gas nozzle submerged in the solution volume for spraying the head-space gas into the solution volume.
Tetrahedral-Mesh Simulation of Turbulent Flows with the Space-Time Conservative Schemes
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji; Cheng, Gary C.
2015-01-01
Direct numerical simulations of turbulent flows are predominantly carried out using structured, hexahedral meshes despite decades of development in unstructured mesh methods. Tetrahedral meshes offer ease of mesh generation around complex geometries and the potential of an orientation-free grid that would provide un-biased small-scale dissipation and more accurate intermediate-scale solutions. However, due to the lack of consistent multi-dimensional numerical formulations in conventional schemes for triangular and tetrahedral meshes at the cell interfaces, numerical issues exist when flow discontinuities or stagnation regions are present. The space-time conservative conservation element solution element (CESE) method - due to its Riemann-solver-free shock capturing capabilities, non-dissipative baseline schemes, and flux conservation in time as well as space - has the potential to more accurately simulate turbulent flows using unstructured tetrahedral meshes. To pave the way towards accurate simulation of shock/turbulent boundary-layer interaction, a series of wave and shock interaction benchmark problems of increasing complexity are computed in this paper with triangular/tetrahedral meshes. Preliminary computations for the normal shock/turbulence interactions are carried out with a relatively coarse mesh, by direct numerical simulation standards, in order to assess other effects such as boundary conditions and the necessity of a buffer domain. The results indicate that qualitative agreement with previous studies can be obtained for flows where strong shocks co-exist with unsteady waves displaying a broad range of scales, using a relatively compact computational domain and less stringent requirements for grid clustering near the shock. With the space-time conservation properties, stable solutions without any spurious wave reflections can be obtained without a need for buffer domains near the outflow/farfield boundaries.
Computational results for the isotropic turbulent flow decay, at a relatively high turbulent Mach number, show a well-behaved spectral decay rate for medium to high wave numbers. The high-order CESE schemes offer very robust solutions even in the presence of strong shocks or widespread shocklets. The explicit formulation, in conjunction with a theoretical upper Courant number bound close to unity, has the potential to offer an efficient numerical framework for general compressible turbulent flow simulations with unstructured meshes.
NASA Astrophysics Data System (ADS)
Licsandru, Erol-Dan; Schneider, Susanne; Tingry, Sophie; Ellis, Thomas; Moulin, Emilie; Maaloum, Mounir; Lehn, Jean-Marie; Barboiu, Mihail; Giuseppone, Nicolas
2016-03-01
Biocompatible silica-based mesoporous materials, which present high surface areas combined with uniform distribution of nanopores, can be organized in functional nanopatterns for a number of applications. However, silica is by essence an electrically insulating material which precludes applications for electro-chemical devices. The formation of hybrid electroactive silica nanostructures is thus expected to be of great interest for the design of biocompatible conducting materials such as bioelectrodes. Here we show that we can grow supramolecular stacks of triarylamine molecules in the confined space of oriented mesopores of a silica nanolayer covering a gold electrode. This addressable bottom-up construction is triggered from solution simply by light irradiation. The resulting self-assembled nanowires act as highly conducting electronic pathways crossing the silica layer. They allow very efficient charge transfer from the redox species in solution to the gold surface. We demonstrate the potential of these hybrid constitutional materials by implementing them as biocathodes and by measuring laccase activity that reduces dioxygen to produce water. Electronic supplementary information (ESI) available: Synthetic protocols, XPS measurements, contact angle measurements, additional cyclic voltammograms and electrochemical impedance spectroscopy. See DOI: 10.1039/c5nr06977g
NASA Astrophysics Data System (ADS)
Liu, Qiao
2015-06-01
In a recent paper [7], Y. Du and K. Wang (2013) proved that the global-in-time Koch-Tataru type solution (u, d) to the n-dimensional incompressible nematic liquid crystal flow with small initial data (u0, d0) in BMO-1 × BMO has arbitrary space-time derivative estimates in the so-called Koch-Tataru space norms. The purpose of this paper is to show that the Koch-Tataru type solution satisfies decay estimates for any space-time derivative involving some borderline Besov space norms.
Complicated asymptotic behavior of solutions for porous medium equation in unbounded space
NASA Astrophysics Data System (ADS)
Wang, Liangwei; Yin, Jingxue; Zhou, Yong
2018-05-01
In this paper, we find that the unbounded spaces Yσ(RN) (0 < σ < 2/(m-1)) can provide the work spaces where complicated asymptotic behavior appears in the solutions of the Cauchy problem of the porous medium equation. To overcome the difficulties caused by the nonlinearity of the equation and the unbounded solutions, we establish propagation estimates, growth estimates, and weighted L1-L∞ estimates for the solutions.
Machine learning action parameters in lattice quantum chromodynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shanahan, Phiala; Trewartha, Daniel; Detmold, William
Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
Machine learning action parameters in lattice quantum chromodynamics
Shanahan, Phiala; Trewartha, Daniel; Detmold, William
2018-05-16
Numerical lattice quantum chromodynamics studies of the strong interaction underpin theoretical understanding of many aspects of particle and nuclear physics. Such studies require significant computing resources to undertake. A number of proposed methods promise improved efficiency of lattice calculations, and access to regions of parameter space that are currently computationally intractable, via multi-scale action-matching approaches that necessitate parametric regression of generated lattice datasets. The applicability of machine learning to this regression task is investigated, with deep neural networks found to provide an efficient solution even in cases where approaches such as principal component analysis fail. Finally, the high information content and complex symmetries inherent in lattice QCD datasets require custom neural network layers to be introduced and present opportunities for further development.
NASA Astrophysics Data System (ADS)
McLinko, Ryan M.; Sagar, Basant V.
2009-12-01
Space-based solar power (SSP) generation is being touted as a solution to our ever-increasing energy consumption and dependence on fossil fuels. Satellites in Earth's orbit can capture solar energy through photovoltaic cells and transmit that power to ground-based stations. Solar cells in orbit are not hindered by weather, clouds, or night. The energy generated by this process is clean and pollution-free. Although the concept of space-based solar power was initially proposed nearly 40 years ago, the level of technology in photovoltaics, power transmission, materials, and efficient satellite design has finally reached a level of maturity that makes solar power from space a feasible prospect. Furthermore, new strategies in methods for solar energy acquisition and transmission can lead to simplifications in design, reductions in cost, and reduced risk. This paper proposes using a distributed array of small satellites to collect power from the Sun, as compared to the more traditional SSP design that consists of one monolithic satellite. This concept mitigates some of SSP's most troublesome historic constraints, such as the requirement for heavy-lift launch vehicles and the need for significant assembly in space. Instead, a larger number of smaller satellites designed to collect solar energy are launched independently. A high-frequency beam will be used to aggregate collected power into a series of transmission antennas, which beam the energy to Earth's surface at a lower frequency. Due to the smaller power expectations of each satellite and the relatively short distance of travel from low Earth orbit, such satellites can be designed with smaller arrays. The inter-satellite rectenna devices can also be smaller and lighter in weight. Our paper suggests how SSP satellites can be designed small enough to fit within ESPA standards and therefore use rideshare to achieve orbit. Alternatively, larger versions could be launched on Falcon 9s or on Falcon 1s with booster stages.
The only satellites that are constrained to a significant mass are the beam-down satellites, which still require significant transmission arrays to sufficiently focus the beams targeting corresponding ground stations. With robust design and inherent redundancy built in, power generation and transmission will not be interrupted in the event of mishaps like space debris collision. Furthermore, the "plug and play" nature of this system significantly reduces the cost, complexity, and risk of upgrading the system. The distributed nature of smallsat clusters maximizes the use of economies of scale. This approach retains some problems of older designs and introduces additional ones, whose mitigations are explored further. For example, the distributed nature of the system requires very precise coordination between and among satellites and a mature attitude determination and control system. Such a design incorporates multiple beaming stages, which has the potential to reduce overall system efficiency. Although this design eliminates the need for space assembly, it retains the challenge of significant on-orbit deployment of solar and transmission arrays. Space power "beaming" is a three-step process that involves: 1) conversion of dc power generated by solar cells on the satellite into an electromagnetic wave of suitable frequency, 2) transmission of that wave to power stations on the ground, and 3) conversion of the radio waves back into dc power. A great deal of research has been done on the use of microwaves for this purpose. Various factors that affect efficient power generation and transmission will be analyzed in this paper. Based on relevant theory and performance and optimization models, the paper proposes solutions that will help make space-based solar power generation a practical and viable option for addressing the world's growing energy needs.
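The beaming-efficiency concern raised above can be roughed out with the Goubau relation commonly used in power-beaming sizing studies. The aperture areas, frequency, and range below are illustrative assumptions, not figures from the paper.

```python
import math

# First-order microwave power-beaming link estimate (Goubau relation):
#   eta ~ 1 - exp(-tau^2),  tau = sqrt(At * Ar) / (lambda * D)
# where At, Ar are transmit/receive aperture areas and D is the range.
def beam_efficiency(at_m2, ar_m2, wavelength_m, distance_m):
    tau = math.sqrt(at_m2 * ar_m2) / (wavelength_m * distance_m)
    return 1.0 - math.exp(-tau * tau)

# Assumed LEO smallsat leg: 100 m^2 transmit array, 1 km^2 rectenna,
# 5.8 GHz carrier, 500 km slant range.
wavelength = 3.0e8 / 5.8e9
eta = beam_efficiency(100.0, 1.0e6, wavelength, 500e3)
```

Even with a square-kilometer rectenna, the small transmit aperture keeps the coupling efficiency modest here, which is why the mass of the beam-down transmission arrays is hard to reduce.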
Yao, Yanping; Kou, Ziming; Meng, Wenjun; Han, Gang
2014-01-01
Properly evaluating the overall performance of tubular scraper conveyors (TSCs) can increase their overall efficiency and reduce economic investments, but such methods have rarely been studied. This study evaluated the overall performance of TSCs based on the technique for order of preference by similarity to ideal solution (TOPSIS). Three conveyors of the same type produced in the same factory were investigated. Their scraper space, material filling coefficient, and vibration coefficient of the traction components were evaluated. A mathematical model of the multiattribute decision matrix was constructed; a weighted judgment matrix was obtained using the DELPHI method. The linguistic positive-ideal solution (LPIS), the linguistic negative-ideal solution (LNIS), and the distance from each solution to the LPIS and the LNIS, that is, the approximation degrees, were calculated. The optimal solution was determined by ordering the approximation degrees for each solution. The TOPSIS-based results were compared with the measurement results provided by the manufacturer. The ordering result based on the three evaluated parameters was highly consistent with the result provided by the manufacturer. The TOPSIS-based method serves as a suitable evaluation tool for the overall performance of TSCs. It facilitates the optimal deployment of TSCs for industrial purposes. PMID:24991646
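The TOPSIS procedure described above can be sketched in a few lines. The decision matrix, weights, and criterion directions below are hypothetical stand-ins for the conveyors' measured parameters, not data from the study.

```python
import numpy as np

# Hypothetical decision matrix: rows = three conveyors, columns = the three
# evaluated criteria (scraper space, filling coefficient, vibration coefficient).
X = np.array([
    [0.80, 0.62, 0.15],
    [0.75, 0.70, 0.12],
    [0.85, 0.58, 0.20],
])
weights = np.array([0.3, 0.4, 0.3])      # e.g. from a DELPHI expert survey
benefit = np.array([True, True, False])  # vibration is a cost criterion

# 1) Vector-normalize each column, then apply the weights.
V = weights * X / np.linalg.norm(X, axis=0)

# 2) Positive/negative ideal solutions, respecting benefit vs cost criteria.
ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
anti = np.where(benefit, V.min(axis=0), V.max(axis=0))

# 3) Euclidean distances and relative closeness (the approximation degree).
d_pos = np.linalg.norm(V - ideal, axis=1)
d_neg = np.linalg.norm(V - anti, axis=1)
closeness = d_neg / (d_pos + d_neg)

ranking = np.argsort(-closeness)  # best conveyor first
```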
Secomb, Timothy W
2016-12-01
A novel theoretical method is presented for simulating the spatially resolved convective and diffusive transport of reacting solutes between microvascular networks and the surrounding tissues. The method allows for efficient computational solution of problems involving convection and non-linear binding of solutes in blood flowing through microvascular networks with realistic 3D geometries, coupled with transvascular exchange and diffusion and reaction in the surrounding tissue space. The method is based on a Green's function approach, in which the solute concentration distribution in the tissue is expressed as a sum of fields generated by time-varying distributions of discrete sources and sinks. As an example of the application of the method, the washout of an inert diffusible tracer substance from a tissue region perfused by a network of microvessels is simulated, showing its dependence on the solute's transvascular permeability and tissue diffusivity. Exponential decay of the washout concentration is predicted, with rate constants that are about 10-30% lower than the rate constants for a tissue cylinder model with the same vessel length, vessel surface area and blood flow rate per tissue volume. © The authors 2015. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
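The exponential washout prediction can be checked with a minimal log-linear fit. The rate constant and sampling grid below are arbitrary illustrative values, not results from the paper.

```python
import numpy as np

# Generate a decaying tracer concentration and recover the rate constant.
k_true = 0.25            # 1/min, assumed washout rate constant
t = np.linspace(0.0, 20.0, 101)
c = np.exp(-k_true * t)  # normalized washout concentration

# The slope of log-concentration versus time gives -k.
k_fit = -np.polyfit(t, np.log(c), 1)[0]

# Per the abstract, a tissue-cylinder model with matched geometry and flow
# would predict a rate constant roughly 10-30% higher than the network value.
k_cylinder_range = (k_fit / 0.9, k_fit / 0.7)
```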
Excessive Counterion Condensation on Immobilized ssDNA in Solutions of High Ionic Strength
Rant, Ulrich; Arinaga, Kenji; Fujiwara, Tsuyoshi; Fujita, Shozo; Tornow, Marc; Yokoyama, Naoki; Abstreiter, Gerhard
2003-01-01
We present experiments on the bias-induced release of immobilized, single-stranded (ss) 24-mer oligonucleotides from Au-surfaces into electrolyte solutions of varying ionic strength. Desorption is evidenced by fluorescence measurements of dye-labeled ssDNA. Electrostatic interactions between adsorbed ssDNA and the Au-surface are investigated with respect to 1), a variation of the bias potential applied to the Au-electrode; and 2), the screening effect of the electrolyte solution. For the latter, the concentration of monovalent salt in solution is varied from 3 to 1600 mM. We find that the strength of electric interaction is predominantly determined by the effective charge of the ssDNA itself and that the release of DNA mainly occurs before the electrochemical double layer has been established at the electrolyte/Au interface. In agreement with Manning's condensation theory, the measured desorption efficiency (ηrel) stays constant over a wide range of salt concentrations; however, as the Debye length is reduced below a value comparable to the axial charge spacing of the DNA, ηrel decreases substantially. We assign this effect to excessive counterion condensation on the DNA in solutions of high ionic strength. In addition, the relative translational diffusion coefficient of ssDNA in solution is evaluated for different salt concentrations. PMID:14645075
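The Debye-length dependence on salt concentration invoked in this abstract follows a standard closed form. Below is a sketch assuming a 1:1 electrolyte in water at 25 °C; the comparison point of roughly 0.4 nm for the ssDNA axial charge spacing is an assumed typical value, not a number from the paper.

```python
import math

def debye_length_nm(c_molar, temp_k=298.15):
    """Debye screening length in water for a 1:1 electrolyte.

    lambda_D = sqrt(eps0 * eps_r * kB * T / (2 * e^2 * n)),
    with n the number density of each ion species; eps_r = 78.4 for water.
    """
    eps0 = 8.8541878128e-12  # F/m
    eps_r = 78.4
    kB = 1.380649e-23        # J/K
    e = 1.602176634e-19      # C
    NA = 6.02214076e23       # 1/mol
    n = c_molar * 1000.0 * NA  # ions per m^3 for a 1:1 salt
    lam = math.sqrt(eps0 * eps_r * kB * temp_k / (2 * e**2 * n))
    return lam * 1e9

# Sweep the salt range used in the experiments (3 mM to 1.6 M); the Debye
# length drops below ~0.4 nm only toward the high end of this range.
for c in (0.003, 0.05, 0.4, 1.6):
    print(f"{c*1000:6.0f} mM -> lambda_D = {debye_length_nm(c):.2f} nm")
```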
Excessive counterion condensation on immobilized ssDNA in solutions of high ionic strength.
Rant, Ulrich; Arinaga, Kenji; Fujiwara, Tsuyoshi; Fujita, Shozo; Tornow, Marc; Yokoyama, Naoki; Abstreiter, Gerhard
2003-12-01
We present experiments on the bias-induced release of immobilized, single-stranded (ss) 24-mer oligonucleotides from Au-surfaces into electrolyte solutions of varying ionic strength. Desorption is evidenced by fluorescence measurements of dye-labeled ssDNA. Electrostatic interactions between adsorbed ssDNA and the Au-surface are investigated with respect to 1), a variation of the bias potential applied to the Au-electrode; and 2), the screening effect of the electrolyte solution. For the latter, the concentration of monovalent salt in solution is varied from 3 to 1600 mM. We find that the strength of electric interaction is predominantly determined by the effective charge of the ssDNA itself and that the release of DNA mainly occurs before the electrochemical double layer has been established at the electrolyte/Au interface. In agreement with Manning's condensation theory, the measured desorption efficiency (ηrel) stays constant over a wide range of salt concentrations; however, as the Debye length is reduced below a value comparable to the axial charge spacing of the DNA, ηrel decreases substantially. We assign this effect to excessive counterion condensation on the DNA in solutions of high ionic strength. In addition, the relative translational diffusion coefficient of ssDNA in solution is evaluated for different salt concentrations.
An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization
NASA Technical Reports Server (NTRS)
Feng, Dan; Pulliam, Thomas H.
1995-01-01
This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.
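The core SQP mechanism described above, updating all variables simultaneously while allowing infeasible intermediates, can be illustrated on a toy equality-constrained quadratic (not the paper's nozzle design problem), where a single Newton step on the KKT system reaches the constrained optimum.

```python
import numpy as np

# Toy problem: minimize f(x, y) = (x-2)^2 + (y-1)^2 subject to x + y = 1.
# For a quadratic objective with a linear constraint, one Newton step on the
# KKT conditions [H A^T; A 0] [dx; lam] = [-g; -c] lands on the minimizer,
# even from an infeasible starting point.
x = np.array([0.0, 0.0])    # infeasible start (x + y != 1)
H = 2.0 * np.eye(2)         # Hessian of f
A = np.array([[1.0, 1.0]])  # constraint Jacobian

g = 2.0 * (x - np.array([2.0, 1.0]))   # gradient of f at x
c = np.array([x[0] + x[1] - 1.0])      # constraint violation at x

KKT = np.block([[H, A.T], [A, np.zeros((1, 1))]])
step = np.linalg.solve(KKT, -np.concatenate([g, c]))
x_new = x + step[:2]        # the constrained minimizer (1, 0)
```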
NASA Astrophysics Data System (ADS)
Nakayama, Akira; Arai, Gaku; Yamazaki, Shohei; Taketsugu, Tetsuya
2013-12-01
On-the-fly excited-state quantum mechanics/molecular mechanics molecular dynamics (QM/MM-MD) simulations of thymine in aqueous solution are performed to investigate the role of solvent water molecules in the nonradiative deactivation process. The complete active space second-order perturbation theory (CASPT2) method is employed for the thymine molecule as the QM part in order to provide a reliable description of the excited-state potential energies. It is found that, in addition to the previously reported deactivation pathway involving the twisting of the C-C double bond in the pyrimidine ring, another efficient deactivation pathway, which involves out-of-plane displacement of the carbonyl group and leads to conical intersections, is observed in aqueous solution. Decay through this pathway is not observed in the gas phase simulations, and our analysis indicates that hydrogen bonds with solvent water molecules play a key role in stabilizing the potential energies of thymine along this additional decay pathway.
Mechanical energy flow models of rods and beams
NASA Technical Reports Server (NTRS)
Wohlever, J. C.; Bernhard, R. J.
1992-01-01
It has been proposed that the flow of mechanical energy through a structural/acoustic system may be modeled in a manner similar to the flow of thermal energy in a heat conduction problem. If this hypothesis is true, it would result in relatively efficient numerical models of structure-borne energy in large built-up structures: fewer parameters are required to approximate the energy solution than are required to model the characteristic wave behavior of structural vibration using traditional displacement formulations. The energy flow hypothesis is tested in this investigation for both longitudinal vibration in rods and transverse flexural vibration of beams. The rod is shown to behave approximately according to the thermal energy flow analogy. However, the beam solutions behave significantly differently than predicted by the thermal analogy unless locally space-averaged energy and power are considered. Several techniques for coupling dissimilar rods and beams are also discussed. Illustrations of the solution accuracy of the methods are included.
Smith, N L; Coukouma, A; Dubnik, S; Asher, S A
2017-12-06
We fabricate 2D photonic crystals (2DPC) by spreading a dispersion of charged colloidal particles (diameters = 409, 570, and 915 nm) onto the surface of electrolyte solutions using a needle tip flow method. When the interparticle electrostatic interaction potential is large, particles self-assemble into highly ordered hexagonal close packed (hcp) monolayers. Ordered 2DPC efficiently forward diffract monochromatic light to produce a Debye ring on a screen parallel to the 2DPC. The diameter of the Debye ring is inversely proportional to the 2DPC particle spacing, while the Debye ring brightness and thickness depends on the 2DPC ordering. The Debye ring thickness increases as the 2DPC order decreases. The Debye ring ordering measurements of 2DPC attached to glass slides track measurements of the 2D pair correlation function order parameter calculated from SEM micrographs. The Debye ring method was used to investigate the 2DPC particle spacing, and ordering at the air-solution interface of NaCl solutions, and for 2DPC arrays attached to glass slides. Surprisingly, the 2DPC ordering does not monotonically decrease as the salt concentration increases. This is because of chloride ion adsorption onto the anionic particle surfaces. This adsorption increases the particle surface charge and compensates for the decreased Debye length of the electric double layer when the NaCl concentration is below a critical value.
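The inverse relation between Debye-ring diameter and particle spacing reported above follows from the 2D grating equation for hcp rows. The wavelength, screen distance, and the looser 1200 nm comparison spacing below are assumed for illustration, not values from the paper.

```python
import math

# Forward-diffraction Debye ring geometry for an hcp monolayer.
def debye_ring_diameter(d_nm, wavelength_nm=532.0, h_mm=50.0):
    """Ring diameter (mm) on a screen parallel to the 2DPC at distance h.

    First-order diffraction from hcp rows: sin(theta) = 2*lambda/(sqrt(3)*d),
    where d is the nearest-neighbor particle spacing.
    """
    s = 2.0 * wavelength_nm / (math.sqrt(3.0) * d_nm)
    if s >= 1.0:
        raise ValueError("spacing too small: no propagating diffracted order")
    theta = math.asin(s)
    return 2.0 * h_mm * math.tan(theta)

# Larger spacing -> smaller Debye ring (the inverse relation).
d_915 = debye_ring_diameter(915.0)    # close-packed 915 nm particles
d_1200 = debye_ring_diameter(1200.0)  # hypothetical looser, non-close-packed array
```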
Efficient droplet router for digital microfluidic biochip using particle swarm optimizer
NASA Astrophysics Data System (ADS)
Pan, Indrajit; Samanta, Tuhina
2013-01-01
Digital microfluidic biochips have emerged as a revolutionary development in the field of micro-electromechanical research. Complex bioassays and pathological analyses are efficiently performed on these miniaturized chips with negligible amounts of sample specimens. Biochips were initially based on a continuous-fluid-flow mechanism but later evolved toward the more efficient concept of digital fluid flow. These second-generation biochips are capable of serving more complex bioassays. This operational change in biochip technology created a need for high-end computer-aided design tools for physical design automation, and opened new avenues of research in design automation. Droplet routing is one of the major aspects, requiring minimization of both routing completion time and total electrode usage; this task involves optimization of multiple associated parameters. In this paper we propose a particle swarm optimization based approach for droplet routing. The process operates in two phases. We first perform clustering of the state space and classification of nets into designated clusters; this reduces the solution space by defining local suboptimal targets in the interleaved space between the source and the global target of a net. In the next phase we resolve the concurrent routing issues of every suboptimal situation to generate the final routing schedule. The method was applied to standard test benches and hard test sets. Comparative analysis of the experimental results shows good improvement in unit cell usage, routing completion time, and execution time over well-established existing methods.
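A generic particle swarm optimizer of the kind such a router builds on can be sketched as follows. The toy 2-D cost function stands in for a routing objective combining completion time and electrode usage; none of the parameter values are from the paper.

```python
import random

# Minimal particle swarm optimizer: each particle tracks a position, velocity,
# and personal best; the swarm shares a global best that attracts all particles.
def pso(cost, dim=2, n_particles=20, iters=200, lo=-5.0, hi=5.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia and cognitive/social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = cost(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

# Toy quadratic cost with minimum at (1, -2), standing in for a routing cost.
best, val = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2)
```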
A POD reduced order model for resolving angular direction in neutron/photon transport problems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buchan, A.G., E-mail: andrew.buchan@imperial.ac.uk; Calloo, A.A.; Goffin, M.G.
2015-09-01
This article presents the first Reduced Order Model (ROM) that efficiently resolves the angular dimension of the time independent, mono-energetic Boltzmann Transport Equation (BTE). It is based on Proper Orthogonal Decomposition (POD) and uses the method of snapshots to form optimal basis functions for resolving the direction of particle travel in neutron/photon transport problems. A unique element of this work is that the snapshots are formed from the vector of angular coefficients relating to a high resolution expansion of the BTE's angular dimension. In addition, the individual snapshots are not recorded through time, as in standard POD, but instead they are recorded through space. In essence this work swaps the roles of the dimensions space and time in standard POD methods, with angle and space respectively. It is shown here how the POD model can be formed from the POD basis functions in a highly efficient manner. The model is then applied to two radiation problems; one involving the transport of radiation through a shield and the other through an infinite array of pins. Both problems are selected for their complex angular flux solutions in order to provide an appropriate demonstration of the model's capabilities. It is shown that the POD model can resolve these fluxes efficiently and accurately. In comparison to high resolution models this POD model can reduce the size of a problem by up to two orders of magnitude without compromising accuracy. Solving times are also reduced by similar factors.
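The method of snapshots amounts to an SVD of the snapshot matrix. A sketch follows, with synthetic rank-r data standing in for the vectors of angular expansion coefficients recorded through space in the paper.

```python
import numpy as np

# Snapshots are the columns of S; the left singular vectors give the optimal
# (POD) basis, and truncating to r modes yields the reduced model's basis.
rng = np.random.default_rng(0)
n, m, r = 200, 30, 5  # full state size, number of snapshots, retained modes
true_basis = np.linalg.qr(rng.standard_normal((n, r)))[0]
S = true_basis @ rng.standard_normal((r, m))  # snapshots in a rank-r subspace

U, sigma, _ = np.linalg.svd(S, full_matrices=False)
pod_basis = U[:, :r]  # first r left singular vectors

# Fraction of singular-value "energy" captured by the retained modes
# (~100% here, since S has exact rank r by construction).
energy = sigma[:r].sum() / sigma.sum()

# Project a snapshot into the reduced space and reconstruct: r coordinates
# replace the n-dimensional state, which is where the order reduction comes from.
a = pod_basis.T @ S[:, 0]
recon_err = np.linalg.norm(pod_basis @ a - S[:, 0])
```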
NASA Astrophysics Data System (ADS)
Veprik, A.; Zechtzer, S.; Pundak, N.; Kirkconnell, C.; Freeman, J.; Riabzev, S.
2011-06-01
Cryogenic coolers are often used in modern spacecraft in conjunction with sensitive electronics and sensors of military, commercial and scientific instrumentation. The typical space requirements are: power efficiency, low vibration export, proven reliability, ability to survive launch vibration/shock and long-term exposure to space radiation. A long-standing paradigm of exclusively using "space heritage" equipment has become the standard practice for delivering high reliability components. Unfortunately, this conservative "space heritage" practice can result in using outdated, oversized, overweight and overpriced cryogenic coolers and is becoming increasingly unacceptable for space agencies now operating within tough monetary and time constraints. The recent trend in developing mini and micro satellites for relatively inexpensive missions has prompted attempts to adapt leading-edge tactical cryogenic coolers for suitability in the space environment. The primary emphasis has been on reducing cost, weight and size. The authors are disclosing theoretical and practical aspects of a collaborative effort to develop a space qualified cryogenic refrigerator system based on the tactical cooler model Ricor K527 and the Iris Technology radiation hardened Low Cost Cryocooler Electronics (LCCE). The K527/LCCE solution is ideal for applications where cost, size, weight, power consumption, vibration export, reliability and time to spacecraft integration are of concern.
Campaign-level dynamic network modelling for spaceflight logistics for the flexible path concept
NASA Astrophysics Data System (ADS)
Ho, Koki; de Weck, Olivier L.; Hoffman, Jeffrey A.; Shishko, Robert
2016-06-01
This paper develops a network optimization formulation for dynamic campaign-level space mission planning. Although many past space missions have been designed mainly from a mission-level perspective, a campaign-level perspective will be important for future space exploration. In order to find the optimal campaign-level space transportation architecture, a mixed-integer linear programming (MILP) formulation with a generalized multi-commodity flow and a time-expanded network is developed. Particularly, a new heuristics-based method, a partially static time-expanded network, is developed to provide a solution quickly. The developed method is applied to a case study containing human exploration of a near-Earth object (NEO) and Mars, related to the concept of the Flexible Path. The numerical results show that using the specific combinations of propulsion technologies, in-situ resource utilization (ISRU), and other space infrastructure elements can reduce the initial mass in low-Earth orbit (IMLEO) significantly. In addition, the case study results also show that we can achieve large IMLEO reduction by designing NEO and Mars missions together as a campaign compared with designing them separately owing to their common space infrastructure pre-deployment. This research will be an important step toward efficient and flexible campaign-level space mission planning.
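In the single-commodity, uncapacitated special case, a time-expanded logistics network reduces to a shortest-path search over (location, time) nodes, which gives a feel for the formulation. The locations, time steps, and arc costs below are illustrative, not mission data; the paper's full model is an MILP with generalized multi-commodity flows.

```python
import heapq

# Each node is a (location, time) pair; arcs are either waiting (zero cost)
# or a transport leg whose cost stands in for an IMLEO contribution.
arcs = {
    ("LEO", 0): [(("LEO", 1), 0.0), (("NEO", 1), 5.0)],
    ("LEO", 1): [(("NEO", 2), 4.0)],  # a later departure window is cheaper
    ("NEO", 1): [(("NEO", 2), 0.0)],
    ("NEO", 2): [(("Mars", 3), 6.0)],
}

def min_cost(start, goal_loc):
    """Dijkstra over the time-expanded graph; returns the cheapest arrival cost."""
    pq, seen = [(0.0, start)], set()
    while pq:
        cost, node = heapq.heappop(pq)
        if node[0] == goal_loc:
            return cost
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in arcs.get(node, []):
            heapq.heappush(pq, (cost + c, nxt))
    return float("inf")

# Optimal plan waits one period in LEO, then flies LEO -> NEO -> Mars.
cheapest = min_cost(("LEO", 0), "Mars")
```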
Stability of Internal Space in Kaluza-Klein Theory
NASA Astrophysics Data System (ADS)
Maeda, K.; Soda, J.
1998-12-01
We extend a model studied by Li and Gott III to investigate a stability of internal space in Kaluza-Klein theory. Our model is a four-dimensional de-Sitter space plus a n-dimensional compactified internal space. We introduce a solution of the semi-classical Einstein equation which shows us the fact that a n-dimensional compactified internal space can be stable by the Casimir effect. The self-consistency of this solution is checked. One may apply this solution to study the issue of the Black Hole singularity.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Wang, Xiao-Yen; Chow, Chuen-Yen
1998-01-01
A new high resolution and genuinely multidimensional numerical method for solving conservation laws is being developed. It was designed to avoid the limitations of the traditional methods, and was built from ground zero with extensive physics considerations. Nevertheless, its foundation is mathematically simple enough that one can build from it a coherent, robust, efficient and accurate numerical framework. Two basic beliefs that set the new method apart from the established methods are at the core of its development. The first belief is that, in order to capture physics more efficiently and realistically, the modeling focus should be placed on the original integral form of the physical conservation laws, rather than the differential form. The latter form follows from the integral form under the additional assumption that the physical solution is smooth, an assumption that is difficult to realize numerically in a region of rapid change, such as a boundary layer or a shock. The second belief is that, with proper modeling of the integral and differential forms themselves, the resulting numerical solution should automatically be consistent with the properties derived from the integral and differential forms, e.g., the jump conditions across a shock and the properties of characteristics. Therefore a much simpler and more robust method can be developed by not using the above derived properties explicitly.
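The first belief, modeling the integral form directly, is what flux-based updates guarantee: when neighboring cells share interface fluxes, the discrete integral of the solution is conserved to round-off even across a discontinuity. The sketch below is a generic upwind finite-volume scheme for 1-D linear advection, not the authors' method.

```python
# Cell averages of u_t + a u_x = 0 on a periodic grid, updated by upwind
# interface fluxes. Because each flux leaves one cell and enters its neighbor,
# the total integral of u is preserved exactly, even for step (shock-like) data.
a, dx, dt, n = 1.0, 0.02, 0.01, 50  # CFL number = a*dt/dx = 0.5
u = [1.0 if i < n // 2 else 0.0 for i in range(n)]  # step initial data

total0 = sum(u) * dx
for _ in range(40):
    left_flux = [a * u[i - 1] for i in range(n)]  # upwind flux, periodic wrap
    u = [u[i] - dt / dx * (a * u[i] - left_flux[i]) for i in range(n)]
total1 = sum(u) * dx
```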
Response of CR-39 to 0.9-2.5 MeV protons for KOH and NaOH etching solutions
NASA Astrophysics Data System (ADS)
Bahrami, F.; Mianji, F.; Faghihi, R.; Taheri, M.; Ansarinejad, A.
2016-03-01
In some circumstances, passive detection methods are the only, or the preferable, measuring approach. For instance, determining the energy profile of particles inside objects being irradiated with heavy ions, and measuring the fluence of neutrons or heavy particles in space missions, are cases covered by these methods. In this paper the ability of the polyallyl diglycol carbonate (PADC) track detector (commercially known as CR-39) for passive spectrometry of protons is studied. Furthermore, the effect of KOH and NaOH, the commonly used chemical etching solutions, on the response of the detector is investigated. The experiments were carried out with protons in the energy range of 0.94-2.5 MeV generated by a Van de Graaff accelerator. The exposed track dosimeters were then etched in the two aforementioned etchants through a similar procedure with the same normality of 6.25 N and the same temperature of 85 °C. Formation of the tracks was precisely investigated and the track diameters were recorded after every etching step for each solution using a multistage etching process. The results showed that the proposed method can be efficiently used for the spectrometry of protons over a wide dynamic range and with reasonable accuracy. Moreover, NaOH and KOH each outperformed the other over different regions of the proton energy range. The detection efficiency of both etchants was approximately 100%.
Autonomous Assembly of Modular Structures in Space and on Extraterrestrial Locations
NASA Astrophysics Data System (ADS)
Alhorn, Dean C.
2005-02-01
The new U.S. National Vision for Space Exploration requires many new enabling technologies to accomplish the goals of space commercialization and returning humans to the Moon and extraterrestrial environments. Traditionally, flight elements are complete subsystems requiring humans to complete the integration and assembly. These bulky structures also require the use of heavy-lift launch vehicles to send the units to a desired location. This philosophy necessitates a high degree of safety and numerous space walks at significant cost. Future space mission costs must be reduced and safety increased to reasonably achieve exploration goals. One proposed concept is the autonomous assembly of space structures, an affordable, reliable solution to in-space and extraterrestrial assembly. Assembly is performed autonomously when two components join after determining that specifications are correct. Local sensors continue to monitor joint integrity after assembly, which is critical for safety and structural reliability. Achieving this concept requires a change in space structure design philosophy and the development of innovative technologies to perform autonomous assembly. Assembly of large space structures will require significant numbers of integrity sensors; thus simple, low-cost sensors are integral to the success of this concept. This paper addresses these issues and proposes a novel concept for assembling space structures autonomously. Core technologies required to achieve in-space assembly are presented. These core technologies are critical to the goal of utilizing space in a cost-efficient and safe manner. Additionally, these novel technologies can be applied to other systems both on Earth and in extraterrestrial environments.
Autonomous Assembly of Modular Structures in Space and on Extraterrestrial Locations
NASA Technical Reports Server (NTRS)
Alhorn, Dean C.
2005-01-01
The new U.S. National Vision for Space Exploration requires many new enabling technologies to accomplish the goals of space commercialization and returning humans to the Moon and extraterrestrial environments. Traditionally, flight elements are complete subsystems requiring humans to complete the integration and assembly. These bulky structures also require the use of heavy-lift launch vehicles to send the units to a desired location. This philosophy necessitates a high degree of safety and numerous space walks at significant cost. Future space mission costs must be reduced and safety increased to reasonably achieve exploration goals. One proposed concept is the autonomous assembly of space structures, an affordable, reliable solution to in-space and extraterrestrial assembly. Assembly is performed autonomously when two components join after determining that specifications are correct. Local sensors continue to monitor joint integrity after assembly, which is critical for safety and structural reliability. Achieving this concept requires a change in space structure design philosophy and the development of innovative technologies to perform autonomous assembly. Assembly of large space structures will require significant numbers of integrity sensors; thus simple, low-cost sensors are integral to the success of this concept. This paper addresses these issues and proposes a novel concept for assembling space structures autonomously. Core technologies required to achieve in-space assembly are presented. These core technologies are critical to the goal of utilizing space in a cost-efficient and safe manner. Additionally, these novel technologies can be applied to other systems both on Earth and in extraterrestrial environments.
TSOS and TSOS-FK hybrid methods for modelling the propagation of seismic waves
NASA Astrophysics Data System (ADS)
Ma, Jian; Yang, Dinghui; Tong, Ping; Ma, Xiao
2018-05-01
We develop a new time-space optimized symplectic (TSOS) method for numerically solving elastic wave equations in heterogeneous isotropic media. We use the phase-preserving symplectic partitioned Runge-Kutta method to evaluate the time derivatives and optimized explicit finite-difference (FD) schemes to discretize the space derivatives. We introduce the averaged medium scheme into the TSOS method to further increase its capability of dealing with heterogeneous media and match the boundary-modified scheme for implementing free-surface boundary conditions and the auxiliary differential equation complex frequency-shifted perfectly matched layer (ADE CFS-PML) non-reflecting boundaries with the TSOS method. A comparison of the TSOS method with analytical solutions and standard FD schemes indicates that the waveform generated by the TSOS method is more similar to the analytic solution and has a smaller error than other FD methods, which illustrates the efficiency and accuracy of the TSOS method. Subsequently, we focus on the calculation of synthetic seismograms for teleseismic P- or S-waves entering and propagating in the local heterogeneous region of interest. To improve the computational efficiency, we successfully combine the TSOS method with the frequency-wavenumber (FK) method and apply the ADE CFS-PML to absorb the scattered waves caused by the regional heterogeneity. The TSOS-FK hybrid method is benchmarked against semi-analytical solutions provided by the FK method for a 1-D layered model. Several numerical experiments, including a vertical cross-section of the Chinese capital area crustal model, illustrate that the TSOS-FK hybrid method works well for modelling waves propagating in complex heterogeneous media and remains stable for long-time computation. These numerical examples also show that the TSOS-FK method can tackle the converted and scattered waves of the teleseismic plane waves caused by local heterogeneity. 
Thus, the TSOS and TSOS-FK methods proposed in this study present an essential tool for the joint inversion of local, regional, and teleseismic waveform data.
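The abstract's pairing of a symplectic time integrator with an optimized spatial stencil can be illustrated in one dimension. The sketch below is not the paper's actual TSOS scheme; all names and parameters are illustrative. It advances the 1-D wave equation u_tt = c^2 u_xx with a leapfrog-style partitioned (kick-drift-kick) update in time and a fourth-order centered finite-difference stencil in space.

```python
import numpy as np

def fourth_order_laplacian(u, dx):
    """Fourth-order centered FD approximation of u_xx (periodic domain)."""
    return (-np.roll(u, 2) + 16*np.roll(u, 1) - 30*u
            + 16*np.roll(u, -1) - np.roll(u, -2)) / (12*dx**2)

def symplectic_wave_step(u, v, c, dx, dt):
    """One partitioned (kick-drift-kick) step for u_tt = c^2 u_xx."""
    v = v + 0.5*dt * c**2 * fourth_order_laplacian(u, dx)  # half kick
    u = u + dt * v                                         # drift
    v = v + 0.5*dt * c**2 * fourth_order_laplacian(u, dx)  # half kick
    return u, v

# Usage: propagate a Gaussian pulse on a periodic domain.
n, L, c = 400, 1.0, 1.0
dx = L / n
dt = 0.4 * dx / c          # within the leapfrog stability limit for this stencil
x = np.linspace(0, L, n, endpoint=False)
u = np.exp(-200*(x - 0.5)**2)
v = np.zeros(n)
for _ in range(500):
    u, v = symplectic_wave_step(u, v, c, dx, dt)
```

The kick-drift-kick splitting preserves the Hamiltonian structure of the semi-discrete system, which is what keeps long-time energy errors bounded rather than secularly growing.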
An LES-PBE-PDF approach for modeling particle formation in turbulent reacting flows
NASA Astrophysics Data System (ADS)
Sewerin, Fabian; Rigopoulos, Stelios
2017-10-01
Many chemical and environmental processes involve the formation of a polydispersed particulate phase in a turbulent carrier flow. Frequently, the immersed particles are characterized by an intrinsic property such as the particle size, and the distribution of this property across a sample population is taken as an indicator for the quality of the particulate product or its environmental impact. In the present article, we propose a comprehensive model and an efficient numerical solution scheme for predicting the evolution of the property distribution associated with a polydispersed particulate phase forming in a turbulent reacting flow. Here, the particulate phase is described in terms of the particle number density whose evolution in both physical and particle property space is governed by the population balance equation (PBE). Based on the concept of large eddy simulation (LES), we augment the existing LES-transported probability density function (PDF) approach for fluid phase scalars by the particle number density and obtain a modeled evolution equation for the filtered PDF associated with the instantaneous fluid composition and particle property distribution. This LES-PBE-PDF approach allows us to predict the LES-filtered fluid composition and particle property distribution at each spatial location and point in time without any restriction on the chemical or particle formation kinetics. In view of a numerical solution, we apply the method of Eulerian stochastic fields, invoking an explicit adaptive grid technique in order to discretize the stochastic field equation for the number density in particle property space. In this way, sharp moving features of the particle property distribution can be accurately resolved at a significantly reduced computational cost. As a test case, we consider the condensation of an aerosol in a developed turbulent mixing layer. 
Our investigation not only demonstrates the predictive capabilities of the LES-PBE-PDF model but also indicates the computational efficiency of the numerical solution scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain
In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. Therefore, the multiscale expansion of the distribution function allows one to obtain a sparse representation of the data and thus save memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless, it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires a substantial increase in the total number of points of the phase-space grid as the filaments get finer over time. The adaptive method could be more useful in cases where the thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments.
Moreover, the main way to improve the efficiency of the adaptive method is to increase the local character of the numerical scheme in phase space, by considering multiscale reconstructions with more compact support and by replacing the semi-Lagrangian method with a more local (in space) numerical scheme, such as compact finite-difference schemes, the discontinuous Galerkin method, or finite-element residual schemes, which are well suited for parallel domain-decomposition techniques.
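The semi-Lagrangian update at the heart of such Vlasov solvers can be sketched in a minimal 1-D form: each grid value is found by tracing the characteristic back over one time step and interpolating the old solution at the departure point. This toy uses linear interpolation and a constant advection speed; the paper's wavelet machinery is omitted, and all names are illustrative rather than the authors' code.

```python
import numpy as np

def semi_lagrangian_step(f, a, dx, dt):
    """Advect f by constant velocity a on a periodic grid (linear interpolation)."""
    n = f.size
    shift = a * dt / dx                  # departure-point offset in cells
    j = np.arange(n)
    dep = (j - shift) % n                # fractional departure indices
    j0 = np.floor(dep).astype(int)
    w = dep - j0
    return (1 - w) * f[j0 % n] + w * f[(j0 + 1) % n]

# Usage: a square pulse advected on a periodic domain. Note the method is
# unconditionally stable: dt is not limited by a CFL condition.
n = 200
f = np.zeros(n); f[80:120] = 1.0
a, dx, dt = 1.0, 1.0/n, 0.5/n
g = f.copy()
for _ in range(2 * n):                   # total displacement: one full period
    g = semi_lagrangian_step(g, a, dx, dt)
```

Linear interpolation makes each new value a convex combination of old ones, so the scheme conserves the discrete integral of f but introduces numerical diffusion; higher-order reconstructions (splines, or the wavelet bases of the paper) reduce that diffusion.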
Fan, Peilei; Ouyang, Zutao; Basnou, Corina; Pino, Joan; Park, Hogeun; Chen, Jiquan
2017-07-01
Using Barcelona and Shanghai as case studies, we examined nature-based solutions (NBS) in urban settings, specifically within cities experiencing post-industrialization and globalization. Our specific research questions are: (1) What are the spatiotemporal changes in urban built-up land and green space in Barcelona and Shanghai? (2) What are the relationships between economic development, exemplified by post-industrialization, globalization, and urban green space? Urban land use and green space change were evaluated using data derived from a variety of sources, including satellite images, landscape matrix indicators, and a land conversion matrix. The relationships between economic development, globalization, and environmental quality were analyzed through partial least squares structural equation modeling based on secondary statistical data. Both Barcelona and Shanghai have undergone rapid urbanization, with urban expansion in Barcelona beginning in the 1960s-1970s and in Shanghai in the last decade. While Barcelona's urban green space and green space per capita began declining between the 1950s and 1990s, they increased slightly over the past two decades. Shanghai, however, has consistently and significantly improved urban green space and green space per capita over the past six decades, especially since the economic reform in 1978. Economic development has a direct and significant influence on urban green space for both cities, and post-industrialization has served as the main driving force for urban landscape change in Barcelona and Shanghai. Based on secondary statistical and qualitative data from on-site observations and interviews with local experts, we highlighted the role of institutions in NBS planning. Furthermore, aspiration to become a global or globalizing city motivated both cities to use NBS planning as a place-making tool to attract global investment, which is reflected in various governing policies and regulations. 
The cities' effort to achieve a higher status in the global city hierarchy may have contributed to the increase in total green space and urban green per capita. In addition, various institutional shifts, such as land property rights in a market economy vs. a transitional economy, may also have contributed to the differences in efficiency when expanding urban green space in Barcelona and Shanghai. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Cui, Tiangang; Marzouk, Youssef; Willcox, Karen
2016-06-01
Two major bottlenecks to the solution of large-scale Bayesian inverse problems are the scaling of posterior sampling algorithms to high-dimensional parameter spaces and the computational cost of forward model evaluations. Yet incomplete or noisy data, the state variation and parameter dependence of the forward model, and correlations in the prior collectively provide useful structure that can be exploited for dimension reduction in this setting-both in the parameter space of the inverse problem and in the state space of the forward model. To this end, we show how to jointly construct low-dimensional subspaces of the parameter space and the state space in order to accelerate the Bayesian solution of the inverse problem. As a byproduct of state dimension reduction, we also show how to identify low-dimensional subspaces of the data in problems with high-dimensional observations. These subspaces enable approximation of the posterior as a product of two factors: (i) a projection of the posterior onto a low-dimensional parameter subspace, wherein the original likelihood is replaced by an approximation involving a reduced model; and (ii) the marginal prior distribution on the high-dimensional complement of the parameter subspace. We present and compare several strategies for constructing these subspaces using only a limited number of forward and adjoint model simulations. The resulting posterior approximations can rapidly be characterized using standard sampling techniques, e.g., Markov chain Monte Carlo. Two numerical examples demonstrate the accuracy and efficiency of our approach: inversion of an integral equation in atmospheric remote sensing, where the data dimension is very high; and the inference of a heterogeneous transmissivity field in a groundwater system, which involves a partial differential equation forward model with high dimensional state and parameters.
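For a linear-Gaussian toy problem, the parameter-space reduction described above amounts to an eigendecomposition of the prior-preconditioned data-misfit Hessian: directions with eigenvalues well above one are informed by the data, while the complement can be left to the prior. The sketch below is illustrative only (identity prior, random forward operator G, hypothetical dimensions), not the paper's general construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 50, 20                          # data and parameter dimensions (toy values)
# Forward operator with decaying column scales, mimicking a smoothing problem.
G = rng.standard_normal((d, n)) @ np.diag(1.0 / np.arange(1, n + 1)**2)
sigma = 0.1                            # observation noise std; prior covariance = I

# Prior-preconditioned Gauss-Newton Hessian of the negative log-likelihood.
H = G.T @ G / sigma**2
eigvals, eigvecs = np.linalg.eigh(H)
order = np.argsort(eigvals)[::-1]      # sort eigenpairs in descending order
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep directions where the data dominate the prior (eigenvalue > 1).
r = int(np.sum(eigvals > 1.0))
U = eigvecs[:, :r]                     # basis of the low-dimensional informed subspace
```

Sampling then only needs to explore the r-dimensional subspace spanned by U; the remaining n - r directions are drawn directly from the prior, which is the source of the speedup.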
Han, Tae-Hee; Choi, Mi-Ri; Jeon, Chan-Woo; Kim, Yun-Hi; Kwon, Soon-Ki; Lee, Tae-Woo
2016-01-01
Although solution processing of small-molecule organic light-emitting diodes (OLEDs) has been considered as a promising alternative to standard vacuum deposition requiring high material and processing cost, the devices have suffered from low luminous efficiency and difficulty of multilayer solution processing. Therefore, high efficiency should be achieved in simple-structured small-molecule OLEDs fabricated using a solution process. We report very efficient solution-processed simple-structured small-molecule OLEDs that use novel universal electron-transporting host materials based on tetraphenylsilane with pyridine moieties. These materials have wide band gaps, high triplet energy levels, and good solution processabilities; they provide balanced charge transport in a mixed-host emitting layer. Orange-red (~97.5 cd/A, ~35.5% photons per electron), green (~101.5 cd/A, ~29.0% photons per electron), and white (~74.2 cd/A, ~28.5% photons per electron) phosphorescent OLEDs exhibited the highest recorded electroluminescent efficiencies of solution-processed OLEDs reported to date. We also demonstrate a solution-processed flexible solid-state lighting device as a potential application of our devices. PMID:27819053
Liu, Lan; Jiang, Tao
2007-01-01
With the launch of the international HapMap project, the haplotype inference problem has attracted a great deal of attention in the computational biology community recently. In this paper, we study the question of how to efficiently infer haplotypes from genotypes of individuals related by a pedigree without mating loops, assuming that the hereditary process was free of mutations (i.e. the Mendelian law of inheritance) and recombinants. We model the haplotype inference problem as a system of linear equations as in [10] and present an (optimal) linear-time (i.e. O(mn)-time) algorithm to generate a particular solution to the haplotype inference problem (a particular solution of a linear system is an assignment of numerical values to its variables that satisfies the system's equations), where m is the number of loci (or markers) in a genotype and n is the number of individuals in the pedigree. Moreover, the algorithm also provides a general solution in O(mn^2) time, which is optimal because the size of a general solution could be as large as Θ(mn^2). (A general solution of a linear system is the span of a basis of the solution space of its associated homogeneous system, offset from the origin by any particular solution; a general solution for ZRHC is very useful in practice because it allows the end user to efficiently enumerate all solutions for ZRHC and to perform tasks such as random sampling.) The key ingredients of our construction are (i) a fast consistency-checking procedure for the system of linear equations introduced in [10], based on a careful investigation of the relationship between the equations, and (ii) a novel linear-time method for solving linear equations without invoking Gaussian elimination.
Although such a fast method for solving equations is not known for general systems of linear equations, we take advantage of the underlying loop-free pedigree graph and some special properties of the linear equations.
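The linear-system formulation can be made concrete with a toy solver over GF(2), the field in which zero-recombinant haplotype constraints are naturally expressed. The routine below is ordinary Gaussian elimination modulo 2 producing one particular solution; it is deliberately *not* the paper's linear-time method, which avoids elimination by exploiting the loop-free pedigree structure.

```python
import numpy as np

def solve_gf2(A, b):
    """Return one particular solution x of A x = b over GF(2), or None if inconsistent."""
    A = A.copy() % 2
    b = b.copy() % 2
    m, n = A.shape
    pivots, row = [], 0
    for col in range(n):
        # Find a pivot row with a 1 in this column.
        pr = next((r for r in range(row, m) if A[r, col]), None)
        if pr is None:
            continue
        A[[row, pr]] = A[[pr, row]]; b[[row, pr]] = b[[pr, row]]
        # Eliminate this column from every other row (XOR = addition mod 2).
        for r in range(m):
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        pivots.append(col)
        row += 1
    # A zero row with a nonzero right-hand side means the system is inconsistent.
    if any(b[r] and not A[r].any() for r in range(row, m)):
        return None
    x = np.zeros(n, dtype=int)         # free variables default to 0
    for r, col in enumerate(pivots):
        x[col] = b[r]
    return x

# Usage: a tiny system x0 + x1 = 1, x1 + x2 = 0 over GF(2).
A = np.array([[1, 1, 0], [0, 1, 1]], dtype=int)
b = np.array([1, 0], dtype=int)
x = solve_gf2(A, b)
```

A general solution would additionally record a basis of the null space of A, which is exactly the Θ(mn^2)-sized object the abstract refers to.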
The Quiet Eye and Motor Expertise: Explaining the “Efficiency Paradox”
Klostermann, André; Hossner, Ernst-Joachim
2018-01-01
It has been consistently reported that experts show longer quiet eye (QE) durations when compared to near-experts and novices. However, this finding is rather paradoxical as motor expertise is characterized by an economization of motor-control processes rather than by a prolongation in response programming, a suggested explanatory mechanism of the QE phenomenon. Therefore, an inhibition hypothesis was proposed that suggests an inhibition of non-optimal task solutions over movement parametrization, which is particularly necessary in experts due to the great extent and high density of their experienced task-solution space. In the current study, the effect of the task-solution space's extension was tested by comparing the QE-duration gains in groups that trained a far-aiming task with a small number (low-extent) vs. a large number (high-extent) of task variants. After an extensive training period of more than 750 trials, both groups showed superior performance in post-test and retention test when compared to pretest and longer QE durations in post-test when compared to pretest. However, the QE durations dropped to baseline values at retention. Finally, the expected additional gain in QE duration for the high-extent group was not found and thus, the assumption of long QE durations due to an extended task-solution space was not confirmed. The findings were (by tendency) more in line with the density explanation of the inhibition hypothesis. This density argument suits research revealing a high specificity of motor skills in experts thus providing worthwhile options for future research on the paradoxical relation between the QE and motor expertise. PMID:29472882
Crew Transfer Options for Servicing of Geostationary Satellites
NASA Technical Reports Server (NTRS)
Cerro, Jeffrey A.
2012-01-01
In 2011, NASA and DARPA undertook a study to examine capabilities and system architecture options which could be used to provide manned servicing of satellites in Geostationary Earth Orbit (GEO). The study focused on understanding the generic nature of the problem and examining technology requirements, it was not for the purpose of proposing or justifying particular solutions. A portion of this study focused on assessing possible capabilities to efficiently transfer crew between Earth, Low Earth Orbit (LEO), and GEO satellite servicing locations. This report summarizes the crew transfer aspects of manned GEO satellite servicing. Direct placement of crew via capsule vehicles was compared to concepts of operation which divided crew transfer into multiple legs, first between earth and LEO and second between LEO and GEO. In space maneuvering via purely propulsive means was compared to in-space maneuvering which utilized aerobraking maneuvers for return to LEO from GEO. LEO waypoint locations such as equatorial, Kennedy Space Center, and International Space Station inclinations were compared. A discussion of operational concepts is followed by a discussion of appropriate areas for technology development.
NASA Astrophysics Data System (ADS)
Harris, E.
Planning, Implementation and Optimization of Future Space Missions Using an Immersive Visualization Environment (IVE) Machine. E. N. Harris, Lockheed Martin Space Systems, Denver, CO, and George W. Morgenthaler, University of Colorado at Boulder. History: A team of 3-D engineering visualization experts at the Lockheed Martin Space Systems Company has developed innovative virtual prototyping simulation solutions for ground processing and real-time visualization of design and planning of aerospace missions over the past 6 years. At the University of Colorado, a team of 3-D visualization experts is developing the science of 3-D visualization and immersive visualization at the newly founded BP Center for Visualization, which began operations in October 2001. (See IAF/IAA-01-13.2.09, "The Use of 3-D Immersive Visualization Environments (IVEs) to Plan Space Missions," G. A. Dorn and G. W. Morgenthaler.) Progressing from Today's 3-D Engineering Simulations to Tomorrow's 3-D IVE Mission Planning, Simulation and Optimization Techniques: 3-D IVEs and visualization simulation tools can be combined for efficient planning and design engineering of future aerospace exploration and commercial missions. This technology is currently being developed and will be demonstrated by Lockheed Martin in the IVE at the BP Center, using virtual simulation for clearance checks, collision detection, ergonomics and reachability analyses, to develop fabrication and processing flows for spacecraft and launch vehicle ground support operations and to optimize mission architecture and vehicle design subject to realistic constraints. Demonstrations: Immediate aerospace applications to be demonstrated include developing streamlined processing flows for Reusable Space Transportation Systems and Atlas Launch Vehicle operations and Mars Polar Lander visual work instructions. 
Long-range goals include future international human and robotic space exploration missions such as the development of a Mars Reconnaissance Orbiter and Lunar Base construction scenarios. Innovative solutions utilizing Immersive Visualization provide the key to streamlining the mission planning and optimizing engineering design phases of future aerospace missions.
A synergic simulation-optimization approach for analyzing biomolecular dynamics in living organisms.
Sadegh Zadeh, Kouroush
2011-01-01
A synergic simulation-optimization approach was developed and implemented to study protein-substrate dynamics and binding kinetics in living organisms. The forward problem is a system of several coupled nonlinear partial differential equations which, with a given set of kinetics and diffusion parameters, can provide not only the commonly used bleached area-averaged time series in fluorescence microscopy experiments but also more informative full biomolecular/drug space-time series, and can be successfully used to study dynamics of both Dirac and Gaussian fluorescence-labeled biomacromolecules in vivo. The incomplete Cholesky preconditioner was coupled with the finite difference discretization scheme and an adaptive time-stepping strategy to solve the forward problem. The proposed approach was validated with analytical as well as reference solutions and used to simulate dynamics of GFP-tagged glucocorticoid receptor (GFP-GR) in mouse cancer cells during a fluorescence recovery after photobleaching experiment. Model analysis indicates that the commonly practiced bleach spot-averaged time series is not an efficient approach to extract physiological information from fluorescence microscopy protocols. It was recommended that experimental biophysicists should use full space-time series, resulting from experimental protocols, to study dynamics of biomacromolecules and drugs in living organisms. It was also concluded that in parameterization of biological mass transfer processes, setting the norm of the gradient of the penalty function at the solution to zero is not an efficient stopping rule to end the inverse algorithm. Theoreticians should use multi-criteria stopping rules to quantify model parameters by optimization. Copyright © 2010 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most algorithms and computational fluid dynamics codes currently available. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. Such an approach relieves the stringent time step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all the cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
NASA Astrophysics Data System (ADS)
Denicol, Gabriel; Heinz, Ulrich; Martinez, Mauricio; Noronha, Jorge; Strickland, Michael
2014-12-01
We present an exact solution to the Boltzmann equation which describes a system undergoing boost-invariant longitudinal and azimuthally symmetric radial expansion for arbitrary shear viscosity to entropy density ratio. This new solution is constructed by considering the conformal map between Minkowski space and the direct product of three-dimensional de Sitter space with a line. The resulting solution respects SO(3)_q ⊗ SO(1,1) ⊗ Z_2 symmetry. We compare the exact kinetic solution with exact solutions of the corresponding macroscopic equations that were obtained from the kinetic theory in ideal and second-order viscous hydrodynamic approximations. The macroscopic solutions are obtained in de Sitter space and are subject to the same symmetries used to obtain the exact kinetic solution.
Encapsulation Efficiency and Micellar Structure of Solute-Carrying Block Copolymer Nanoparticles
Woodhead, Jeffrey L.; Hall, Carol K.
2011-01-01
We use discontinuous molecular dynamics (DMD) computer simulation to investigate the encapsulation efficiency and micellar structure of solute-carrying block copolymer nanoparticles as a function of packing fraction, polymer volume fraction, solute mole fraction, and the interaction parameters between the hydrophobic head blocks and between the head and the solute. The encapsulation efficiency increases with increasing polymer volume fraction and packing fraction but decreases with increasing head-head interaction strength. The latter is due to an increased tendency for the solute to remain on the micelle surface. We compared two different nanoparticle assembly methods, one in which the solute and copolymer co-associate and the other in which the copolymer micelle is formed before the introduction of solute. The assembly method does not affect the encapsulation efficiency but does affect the solute uptake kinetics. Both head-solute interaction strength and head-head interaction strength affect the density profile of the micelles; increases in the former cause the solute to distribute more evenly throughout the micelle, while increases in the latter cause the solute to concentrate further from the center of the micelle. We explain our results in the context of a model of drug insertion into micelles formulated by Kumar and Prud’homme; as conditions become more conducive to micelle formation, a stronger energy barrier to solute insertion forms which in turn decreases the encapsulation efficiency of the system. PMID:21918582
Mobilizing slit lamp to the field: A new affordable solution
Farooqui, Javed Hussain; Jorgenson, Richard; Gomaa, Ahmed
2015-01-01
We are describing a simple and affordable design to pack and carry the slit lamp to the field. Orbis staff working on the Flying Eye Hospital (FEH) developed this design to facilitate mobilization of the slit lamp to the field during various FEH programs. The solution involves using a big toolbox, a central plywood apparatus, and foam. These supplies were cut to measure and used to support the slit lamp after being fitted snugly in the box. This design allows easy and safe mobilization of the slit lamp to remote places. It was developed with the efficient use of space in mind and it can be easily reproduced in developing countries using the same or similar supplies. Mobilizing the slit lamp will be of great help for staff and institutes doing regular outreach clinical work. PMID:26669342
NASA Astrophysics Data System (ADS)
Weatherford, Charles; Gebremedhin, Daniel
2016-03-01
A new and efficient way of evolving a solution to an ordinary differential equation is presented. A finite element method is used where we expand in a convenient local basis set of functions that enforce both function and first-derivative continuity across the boundaries of each element. We also implement an adaptive step-size choice for each element that is based on a Taylor series expansion. The method is applied to solve for the eigenpairs of the one-dimensional soft-Coulomb potential, and the hard-Coulomb limit is studied. The method is then used to calculate a numerical solution of the Kohn-Sham differential equation within the local density approximation, and is applied to the helium atom. Supported by the National Nuclear Security Agency, the Nuclear Regulatory Commission, and the Defense Threat Reduction Agency.
NASA Technical Reports Server (NTRS)
Harten, A.; Tal-Ezer, H.
1981-01-01
This paper presents a family of two-level five-point implicit schemes for the solution of one-dimensional systems of hyperbolic conservation laws, which generalize the Crank-Nicolson scheme to fourth-order accuracy (4-4) in both time and space. These 4-4 schemes are nondissipative and unconditionally stable. Special attention is given to the system of linear equations associated with these 4-4 implicit schemes. The regularity of this system is analyzed and the efficiency of solution algorithms is examined. A two-datum representation of these 4-4 implicit schemes brings about a compactification of the stencil to three mesh points at each time level. This compact two-datum representation is particularly useful in deriving boundary treatments. Numerical results are presented to illustrate some properties of the proposed scheme.
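As a point of reference for the (4,4) generalization, the classical second-order Crank-Nicolson step it extends can be written in a few lines. The sketch below applies it to the heat equation u_t = u_xx with homogeneous Dirichlet boundaries rather than the paper's hyperbolic systems, and uses a dense solve where a production code would use a banded (tridiagonal) algorithm; it is a hedged illustration, not the paper's scheme.

```python
import numpy as np

def crank_nicolson_step(u, dx, dt):
    """One Crank-Nicolson (trapezoidal) step for u_t = u_xx, Dirichlet BCs."""
    n = u.size
    r = dt / (2 * dx**2)
    # Standard second-difference matrix for u_xx (boundary values held at 0).
    L = (np.diag(-2.0*np.ones(n)) + np.diag(np.ones(n-1), 1)
         + np.diag(np.ones(n-1), -1))
    A = np.eye(n) - r * L                # implicit half of the update
    B = np.eye(n) + r * L                # explicit half of the update
    return np.linalg.solve(A, B @ u)

# Usage: diffuse a spike; the scheme is unconditionally stable, so dt may
# greatly exceed the explicit limit dt <= dx^2 / 2.
n = 50
dx = 1.0 / (n + 1)
dt = 10 * dx**2
u = np.zeros(n); u[n//2] = 1.0
for _ in range(100):
    u = crank_nicolson_step(u, dx, dt)
```

Averaging the spatial operator between the old and new time levels is what makes the scheme second-order in time and nondissipative in the sense of the abstract; the paper's 4-4 schemes raise both the temporal and spatial order while keeping this two-level structure.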
Elementary solutions of coupled model equations in the kinetic theory of gases
NASA Technical Reports Server (NTRS)
Kriese, J. T.; Siewert, C. E.; Chang, T. S.
1974-01-01
The method of elementary solutions is employed to solve two coupled integrodifferential equations sufficient for determining temperature-density effects in a linearized BGK model in the kinetic theory of gases. Full-range completeness and orthogonality theorems are proved for the developed normal modes and the infinite-medium Green's function is constructed as an illustration of the full-range formalism. The appropriate homogeneous matrix Riemann problem is discussed, and half-range completeness and orthogonality theorems are proved for a certain subset of the normal modes. The required existence and uniqueness theorems relevant to the H matrix, basic to the half-range analysis, are proved, and an accurate and efficient computational method is discussed. The half-space temperature-slip problem is solved analytically, and a highly accurate value of the temperature-slip coefficient is reported.
Power and Efficiency Optimized in Traveling-Wave Tubes Over a Broad Frequency Bandwidth
NASA Technical Reports Server (NTRS)
Wilson, Jeffrey D.
2001-01-01
A traveling-wave tube (TWT) is an electron beam device that is used to amplify electromagnetic communication waves at radio and microwave frequencies. TWT's are critical components in deep space probes, communication satellites, and high-power radar systems. Power conversion efficiency is of paramount importance for TWT's employed in deep space probes and communication satellites. A previous effort was very successful in increasing efficiency and power at a single frequency (ref. 1). Such an algorithm is sufficient for narrow bandwidth designs, but for optimal designs in applications that require high radiofrequency power over a wide bandwidth, such as high-density communications or high-resolution radar, the variation of the circuit response with respect to frequency must be considered. This work at the NASA Glenn Research Center is the first to develop techniques for optimizing TWT efficiency and output power over a broad frequency bandwidth (ref. 2). The techniques are based on simulated annealing, which has the advantage over conventional optimization techniques in that it enables the best possible solution to be obtained (ref. 3). Two new broadband simulated annealing algorithms were developed that optimize (1) minimum saturated power efficiency over a frequency bandwidth and (2) simultaneous bandwidth and minimum power efficiency over the frequency band with constant input power. The algorithms were incorporated into the NASA coupled-cavity TWT computer model (ref. 4) and used to design optimal phase velocity tapers using the 59- to 64-GHz Hughes 961HA coupled-cavity TWT as a baseline model. In comparison to the baseline design, the computational results of the first broad-band design algorithm show an improvement of 73.9 percent in minimum saturated efficiency (see the top graph). 
The second broadband design algorithm (see the bottom graph) improves minimum radiofrequency efficiency with constant input power drive by a factor of 2.7 at the high band edge (64 GHz) and increases simultaneous bandwidth by 500 MHz.
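The min-over-band structure of the first algorithm can be sketched generically. The following toy is an invented illustration, not the NASA TWT model: a plain simulated-annealing maximizer is applied to the worst-case value of a smooth objective over three made-up band points, standing in for "minimum saturated efficiency over the band".

```python
import math
import random

def simulated_anneal(objective, initial, neighbor, t0=1.0, cooling=0.95,
                     steps=500, seed=0):
    """Generic simulated-annealing maximizer: worse states are accepted with
    probability exp(delta/T), which lets the search escape local optima."""
    rng = random.Random(seed)
    state = best = initial
    f_state = f_best = objective(initial)
    t = t0
    for _ in range(steps):
        cand = neighbor(state, rng)
        f_cand = objective(cand)
        delta = f_cand - f_state
        if delta >= 0 or rng.random() < math.exp(delta / t):
            state, f_state = cand, f_cand
            if f_state > f_best:
                best, f_best = state, f_state
        t *= cooling
    return best, f_best

# Toy stand-in for "minimum efficiency over the band": maximize the
# worst-case value of an invented objective at three band points.
freqs = [59.0, 61.5, 64.0]
def min_band_eff(x):
    return min(1.0 - (x - 0.01 * f) ** 2 for f in freqs)

best_x, best_val = simulated_anneal(
    min_band_eff, initial=0.0,
    neighbor=lambda x, rng: x + rng.uniform(-0.05, 0.05))
assert best_val > 0.9   # near the min-max optimum between the band edges
```

The min-max objective is what distinguishes the broadband algorithms from the earlier single-frequency optimization: the annealer is rewarded only for lifting the worst point in the band.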
Hayashibe, Mitsuhiro; Shimoda, Shingo
2014-01-01
A human motor system can improve its behavior toward optimal movement. The skeletal system has more degrees of freedom than the task dimensions, which makes the control problem ill-posed, and the multijoint system involves complex interaction torques between the joints. To produce motion that is optimal in terms of energy consumption, cost-function-based optimization has been commonly used in previous works. Even if an optimal motor pattern is employed phenomenologically, however, there is no evidence of a physiological process in the central nervous system that resembles such a mathematical optimization. In this study, we aim to find a more primitive computational mechanism with a modular configuration that realizes adaptability and optimality without prior knowledge of the system dynamics. We propose a novel motor control paradigm based on tacit learning with task-space feedback, in which the accumulation of motor commands during repetitive environmental interactions plays a major role in the learning process. The paradigm is applied to vertical cyclic reaching, which involves complex interaction torques. We evaluated whether it can learn to optimize solutions with a 3-joint planar biomechanical model. The results demonstrate that the proposed method acquires motor synergy and yields energy-efficient solutions under different load conditions. Pure feedback control is strongly affected by the interaction torques; in contrast, with tacit learning the trajectory is corrected over time toward optimal solutions. Energy-efficient solutions were obtained through the emergence of motor synergy. During learning, the contribution of the feedforward controller is augmented while that of the feedback controller is minimized, down to 12% with no load at the hand and 16% with a 0.5 kg load. The proposed paradigm thus provides an optimization process for redundant systems that is both dynamic-model-free and cost-function-free.
PMID:24616695
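The accumulation idea can be illustrated with a deliberately simple toy, not the paper's 3-joint biomechanical model: at each phase of a cycle, the feedback command issued to cover the residual error is folded into a stored per-phase feedforward term, so over repeated cycles the feedback share shrinks. All gains and the torque profile below are invented.

```python
import math

# Invented toy: at each phase k of a cyclic movement, a command must match
# an unknown required torque r[k]. Feedback covers the residual, and tacit
# learning accumulates each feedback command into a feedforward memory u_ff.
N = 50
r = [1.0 + 0.5 * math.sin(2 * math.pi * k / N) for k in range(N)]
u_ff = [0.0] * N

def run_cycle(kp=0.8, eta=1.0):
    """Run one movement cycle; return the total |feedback| issued."""
    fb_total = 0.0
    for k in range(N):
        e = r[k] - u_ff[k]        # residual the feedback must still cover
        u_fb = kp * e
        u_ff[k] += eta * u_fb     # tacit accumulation into feedforward
        fb_total += abs(u_fb)
    return fb_total

first = run_cycle()
last = first
for _ in range(9):
    last = run_cycle()
assert last < 0.2 * first   # feedback contribution fades as feedforward learns
```

Per phase the residual contracts by the factor (1 - eta*kp) each cycle, mirroring the abstract's observation that the feedback contribution is minimized as the feedforward contribution is augmented.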
NASA Astrophysics Data System (ADS)
Furzeland, R. M.; Verwer, J. G.; Zegeling, P. A.
1990-08-01
In recent years, several sophisticated packages based on the method of lines (MOL) have been developed for the automatic numerical integration of time-dependent problems in partial differential equations (PDEs), notably for problems in one space dimension. These packages greatly benefit from the very successful developments of automatic stiff ordinary differential equation solvers. However, from the PDE point of view, they integrate only in a semiautomatic way in the sense that they automatically adjust the time step sizes, but use just a fixed space grid, chosen a priori, for the entire calculation. For solutions possessing sharp spatial transitions that move, e.g., travelling wave fronts or emerging boundary and interior layers, a grid held fixed for the entire calculation is computationally inefficient, since for a good solution this grid often must contain a very large number of nodes. In such cases methods which attempt automatically to adjust the sizes of both the space and the time steps are likely to be more successful in efficiently resolving critical regions of high spatial and temporal activity. Methods and codes that operate this way belong to the realm of adaptive or moving-grid methods. Following the MOL approach, this paper is devoted to an evaluation and comparison, mainly based on extensive numerical tests, of three moving-grid methods for 1D problems, viz., the finite-element method of Miller and co-workers, the method published by Petzold, and a method based on ideas adopted from Dorfi and Drury. Our examination of these three methods is aimed at assessing which is the most suitable from the point of view of retaining the acknowledged features of reliability, robustness, and efficiency of the conventional MOL approach. Therefore, considerable attention is paid to the temporal performance of the methods.
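A minimal sketch of the grid-adaptation principle the compared methods share is equidistribution of a monitor function; the arc-length monitor and the tanh front below are illustrative choices, not taken from any of the cited codes.

```python
import bisect
import math

def equidistribute(x, u):
    """Return a new grid of len(x) nodes such that each cell carries an
    equal share of the arc-length monitor integral along u(x)."""
    n = len(x)
    # cumulative monitor: arc length of the piecewise-linear solution graph
    cum = [0.0]
    for i in range(n - 1):
        cum.append(cum[-1] + math.hypot(x[i + 1] - x[i], u[i + 1] - u[i]))
    total = cum[-1]
    new_x = []
    for j in range(n):
        target = total * j / (n - 1)
        i = min(bisect.bisect_left(cum, target), n - 1)
        if i == 0:
            new_x.append(x[0])
        else:
            frac = (target - cum[i - 1]) / (cum[i] - cum[i - 1])
            new_x.append(x[i - 1] + frac * (x[i] - x[i - 1]))
    return new_x

# Steep travelling-front profile sampled on a uniform grid of 101 points:
xs = [i / 100 for i in range(101)]
us = [math.tanh(50 * (x - 0.5)) for x in xs]
adapted = equidistribute(xs, us)
gaps = [b - a for a, b in zip(adapted, adapted[1:])]
assert min(gaps) < 0.005 < 0.015 < max(gaps)   # nodes cluster at the front
```

In a moving-grid solver this redistribution is coupled to the time integration, so the nodes track the front instead of being fixed a priori.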
Advanced Space Transportation Concepts and Propulsion Technologies for a New Delivery Paradigm
NASA Technical Reports Server (NTRS)
Robinson, John W.; McCleskey, Carey M.; Rhodes, Russel E.; Lepsch, Roger A.; Henderson, Edward M.; Joyner, Claude R., III; Levack, Daniel J. H.
2013-01-01
This paper describes advanced space transportation concepts and propulsion technologies for a new delivery paradigm. It builds on the work of the previous paper, "Approach to an Affordable and Productive Space Transportation System". The scope includes both flight and ground system elements, and focuses on their compatibility and capability to achieve a technical solution that is operationally productive and also affordable. A clear and revolutionary approach was examined, including advanced propulsion systems (an advanced LOX-rich booster engine concept having independent LOX and fuel cooling systems, thrust augmentation with LOX-rich boost and fuel-rich operation at altitude), improved vehicle concepts (autogenous pressurization, a turbo-alternator for electric power during ascent, hot gases to purge the system and keep moisture out), and ground delivery systems. Previous papers by the authors and other members of the Space Propulsion Synergy Team (SPST) focused on space flight system engineering methods, along with operationally efficient propulsion system concepts and technologies. This paper continues the previous work by exploring the propulsion technology aspects in more depth and how they may enable the vehicle designs from the previous paper. Subsequent papers will explore the vehicle design, the ground support system, and the operations aspects of the new delivery paradigm in greater detail.
Sci—Sat AM: Stereo — 02: Implementation of a VMAT class solution for kidney SBRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sonier, M; Lalani, N; Korol, R
An emerging treatment option for inoperable primary renal cell carcinoma and oligometastatic adrenal lesions is stereotactic body radiation therapy (SBRT). At our center, kidney SBRT treatments were originally planned with IMRT. The goal was to plan future patients using VMAT to improve treatment delivery efficiency. The purpose of this work was twofold: 1) to develop a VMAT class solution for the treatment of kidney SBRT; and 2) to assess VMAT plan quality when compared to IMRT plans. Five patients treated with IMRT for kidney SBRT were reviewed and replanned in Pinnacle using a single VMAT arc with a 15° collimator rotation, constrained leaf motion, and 4° gantry spacing. In comparison, IMRT plans utilized 7-9 6 MV beams, with various collimator rotations and up to 2 non-coplanar beams for maximum organ-at-risk (OAR) sparing. Comparisons were made concerning target volume conformity, homogeneity, dose to OARs, treatment time, and monitor units (MUs). There was no difference in MUs; however, VMAT reduced the treatment time from 13.0±2.6 min, for IMRT, to 4.0±0.9 min. The collection of target and OAR constraints and SmartArc parameters produced a class solution that generated VMAT plans with increased target homogeneity and an improved 95% conformity index calculated at < 1.2. In general, the VMAT plans displayed a reduced maximum point dose to nearby OARs with increased intermediate dose to distant OARs. Overall, the introduction of a VMAT class solution for kidney SBRT improves efficiency by reducing treatment planning and delivery time.
Zeolite crystal growth in space - What has been learned
NASA Technical Reports Server (NTRS)
Sacco, A., Jr.; Thompson, R. W.; Dixon, A. G.
1993-01-01
Three zeolite crystal growth experiments developed at WPI have been performed in space in the last twelve months. One experiment, GAS-1, illustrated that to grow large, crystallographically uniform crystals in space, the precursor solutions should be mixed in microgravity. Another experiment evaluated the optimum mixing protocol for solutions that chemically interact ('gel') on contact. These results were utilized in setting the protocol for mixing nineteen zeolite solutions that were then processed and yielded zeolites A, X, and mordenite. All solutions in which the nucleation event was influenced produced larger, more uniform crystals than did identical solutions processed on Earth.
All symmetric space solutions of eleven-dimensional supergravity
NASA Astrophysics Data System (ADS)
Wulff, Linus
2017-06-01
We find all symmetric space solutions of eleven-dimensional supergravity completing an earlier classification by Figueroa-O’Farrill. They come in two types: AdS solutions and pp-wave solutions. We analyze the supersymmetry conditions and show that out of the 99 AdS geometries the only supersymmetric ones are the well known backgrounds arising as near-horizon limits of (intersecting) branes and preserving 32, 16 or 8 supersymmetries. The general form of the superisometry algebra for symmetric space backgrounds is also derived.
Real-time validation of receiver state information in optical space-time block code systems.
Alamia, John; Kurzweg, Timothy
2014-06-15
Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by the close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI in real time through the use of moving averages of the maximum likelihood decoder data, and we demonstrate its capacity to predict the bit error rate (BER) of the system.
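A hypothetical sketch of such a moving-average validator follows; the window size, threshold, and the scalar decoder metric are invented for illustration and are not taken from the Letter.

```python
from collections import deque

class CsiValidator:
    """Track a moving average of a per-symbol decoder metric (e.g., the
    minimum maximum-likelihood decision distance). After calibrating a
    baseline on known-good CSI, a sustained rise in the average flags the
    stored channel state information as stale."""
    def __init__(self, window=64, threshold=2.0):
        self.buf = deque(maxlen=window)
        self.baseline = None
        self.threshold = threshold

    def update(self, metric):
        """Feed one metric sample; return True if the CSI looks stale."""
        self.buf.append(metric)
        avg = sum(self.buf) / len(self.buf)
        if self.baseline is None and len(self.buf) == self.buf.maxlen:
            self.baseline = avg          # calibrate on known-good CSI
        return self.baseline is not None and avg > self.threshold * self.baseline

v = CsiValidator(window=8)
good = [1.0] * 8      # decoder metrics while the CSI matches the channel
drift = [3.5] * 8     # metrics after the optical channel has changed
flags = [v.update(m) for m in good + drift]
assert not any(flags[:8]) and flags[-1]
```

The moving average smooths symbol-to-symbol noise, so the flag trips only on a sustained shift of the kind caused by genuine channel drift.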
Multispectral optical telescope alignment testing for a cryogenic space environment
NASA Astrophysics Data System (ADS)
Newswander, Trent; Hooser, Preston; Champagne, James
2016-09-01
Multispectral space telescopes with visible to long-wave infrared spectral bands present difficult alignment challenges. The visible channels require precise alignment and stability to provide good image quality at short wavelengths. This is most often accomplished by choosing near-zero-thermal-expansion materials: glass or ceramic mirrors metered with a carbon fiber reinforced polymer (CFRP) structure designed to have a matching thermal expansion. The IR channels are less sensitive to alignment, but they often require cryogenic cooling for improved sensitivity through a reduced radiometric background. Finding efficient solutions to the difficult problem of maintaining good visible image quality at cryogenic temperatures has been explored through the building and testing of a telescope simulator: an on-axis set of optics with a ZERODUR® mirror and CFRP metering structure. Testing has been completed to accurately measure telescope optical element alignment and mirror figure changes in a simulated cryogenic space environment. Measured alignment error and mirror figure error test results are reported with a discussion of their impact on system optical performance.
A lifelong learning hyper-heuristic method for bin packing.
Sim, Kevin; Hart, Emma; Paechter, Ben
2015-01-01
We describe a novel hyper-heuristic system that continuously learns over time to solve a combinatorial optimisation problem. The system continuously generates new heuristics and samples problems from its environment; and representative problems and heuristics are incorporated into a self-sustaining network of interacting entities inspired by methods in artificial immune systems. The network is plastic in both its structure and content, leading to the following properties: it exploits existing knowledge captured in the network to rapidly produce solutions; it can adapt to new problems with widely differing characteristics; and it is capable of generalising over the problem space. The system is tested on a large corpus of 3,968 new instances of 1D bin-packing problems as well as on 1,370 existing problems from the literature; it shows excellent performance in terms of the quality of solutions obtained across the datasets and in adapting to dynamically changing sets of problem instances compared to previous approaches. As the network self-adapts to sustain a minimal repertoire of both problems and heuristics that form a representative map of the problem space, the system is further shown to be computationally efficient and therefore scalable.
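For context, a fixed hand-crafted heuristic of the kind such a hyper-heuristic system generates and improves upon is first-fit decreasing, the classical 1D bin-packing baseline (shown here as a sketch; the paper's system evolves its own heuristics rather than using this one).

```python
def first_fit_decreasing(items, capacity):
    """Classical FFD heuristic for 1D bin packing: place each item, largest
    first, into the first open bin with room, opening a new bin if none fits."""
    bins = []
    for item in sorted(items, reverse=True):
        for b in bins:
            if sum(b) + item <= capacity:
                b.append(item)
                break
        else:
            bins.append([item])
    return bins

packed = first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5], 1.0)
assert len(packed) == 4                      # FFD uses 4 bins here
assert all(sum(b) <= 1.0 for b in packed)    # no bin overflows
```

A hyper-heuristic searches the space of rules like this one, selecting or generating whichever rule performs best on the instances currently in its environment.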
Columbus stowage optimization by cast (cargo accommodation support tool)
NASA Astrophysics Data System (ADS)
Fasano, G.; Saia, D.; Piras, A.
2010-08-01
A challenging issue related to International Space Station utilization concerns on-board stowage, which strongly affects habitability, safety, and crew productivity. This holds in particular for the European Columbus laboratory, nowadays also utilized to provide the station with logistic support. Volume exploitation has to be maximized, in compliance with the given accommodation rules. At each upload step, the stowage problem must be solved quickly and efficiently. This leads to the comparison of different scenarios to select the most suitable one. Last-minute upgrades, due to possible re-planning, may moreover arise, requiring the further capability to rapidly readapt the current solution to the updated status. In this context, finding satisfactory solutions is a very demanding job, even for experienced designers. Thales Alenia Space Italia has achieved remarkable expertise in the field of cargo accommodation and stowage. The company has recently developed CAST, a dedicated in-house software tool, to support the cargo accommodation of the European automated transfer vehicle. An ad hoc version, tailored to Columbus stowage, has since been implemented and is now entering service. This paper surveys the on-board stowage issue, pointing out the advantages of the proposed approach.
NASA Technical Reports Server (NTRS)
Rogers, Aaron; Anderson, Kalle; Mracek, Anna; Zenick, Ray
2004-01-01
With the space industry's increasing focus upon multi-spacecraft formation flight missions, the ability to precisely determine system topology and the orientation of member spacecraft relative to both inertial space and each other is becoming a critical design requirement. Topology determination in satellite systems has traditionally made use of GPS or ground uplink position data for low Earth orbits, or, alternatively, inter-satellite ranging between all formation pairs. While these techniques work, they are not ideal for extension to interplanetary missions or to large fleets of decentralized, mixed-function spacecraft. The Vision-Based Attitude and Formation Determination System (VBAFDS) represents a novel solution to both the navigation and topology determination problems with an integrated approach that combines a miniature star tracker with a suite of robust processing algorithms. By combining a single range measurement with vision data to resolve complete system topology, the VBAFDS design represents a simple, resource-efficient solution that is not constrained to certain Earth orbits or formation geometries. In this paper, analysis and design of the VBAFDS integrated guidance, navigation and control (GN&C) technology will be discussed, including hardware requirements, algorithm development, and simulation results in the context of potential mission applications.
Hyper-Parallel Tempering Monte Carlo Method and Its Applications
NASA Astrophysics Data System (ADS)
Yan, Qiliang; de Pablo, Juan
2000-03-01
A new generalized hyper-parallel tempering Monte Carlo molecular simulation method is presented for study of complex fluids. The method is particularly useful for simulation of many-molecule complex systems, where rough energy landscapes and inherently long characteristic relaxation times can pose formidable obstacles to effective sampling of relevant regions of configuration space. The method combines several key elements from expanded ensemble formalisms, parallel-tempering, open ensemble simulations, configurational bias techniques, and histogram reweighting analysis of results. It is found to accelerate significantly the diffusion of a complex system through phase-space. In this presentation, we demonstrate the effectiveness of the new method by implementing it in grand canonical ensembles for a Lennard-Jones fluid, for the restricted primitive model of electrolyte solutions (RPM), and for polymer solutions and blends. Our results indicate that the new algorithm is capable of overcoming the large free energy barriers associated with phase transitions, thereby greatly facilitating the simulation of coexistence properties. It is also shown that the method can be orders of magnitude more efficient than previously available techniques. More importantly, the method is relatively simple and can be incorporated into existing simulation codes with minor efforts.
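The parallel-tempering ingredient of the method can be sketched in isolation. The swap rule below is the standard replica-exchange acceptance criterion; the two-replica numbers are an invented minimal example, not results from the presentation.

```python
import math
import random

def attempt_swap(betas, energies, i, j, rng):
    """Replica-exchange swap: exchange configurations of replicas i and j
    with probability min(1, exp((beta_i - beta_j) * (E_i - E_j))), which
    preserves detailed balance in the extended ensemble."""
    delta = (betas[i] - betas[j]) * (energies[i] - energies[j])
    if delta >= 0 or rng.random() < math.exp(delta):
        energies[i], energies[j] = energies[j], energies[i]
        return True
    return False

rng = random.Random(1)
betas = [1.0, 0.5]        # cold replica, hot replica
energies = [5.0, 2.0]     # the hot replica has found a lower-energy state
accepted = attempt_swap(betas, energies, 0, 1, rng)
assert accepted and energies == [2.0, 5.0]   # low energy passed to cold replica
```

Hot replicas cross free-energy barriers easily; swaps like this hand their discoveries down to the cold replica, which is how the method accelerates diffusion through phase space.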
NASA Astrophysics Data System (ADS)
Shallal, Muhannad A.; Jabbar, Hawraz N.; Ali, Khalid K.
2018-03-01
In this paper, we construct travelling wave solutions for space-time fractional nonlinear partial differential equations by using the modified extended tanh method with the Riccati equation. The method is used to obtain analytic solutions for the space-time fractional Klein-Gordon and coupled conformable space-time fractional Boussinesq equations. The fractional complex transform and the properties of the modified Riemann-Liouville derivative are used to convert these equations into nonlinear ordinary differential equations.
An exact solution of the Currie-Hill equations in 1 + 1 dimensional Minkowski space
NASA Astrophysics Data System (ADS)
Balog, János
2014-11-01
We present an exact two-particle solution of the Currie-Hill equations of Predictive Relativistic Mechanics in 1 + 1 dimensional Minkowski space. The instantaneous accelerations are given in terms of elementary functions depending on the relative particle position and velocities. The general solution of the equations of motion is given and by studying the global phase space of this system it is shown that this is a subspace of the full kinematic phase space.
Nikolaenko, Andrey E; Cass, Michael; Bourcet, Florence; Mohamad, David; Roberts, Matthew
2015-11-25
Efficient intermonomer thermally activated delayed fluorescence is demonstrated for the first time, opening a new route to achieving high-efficiency solution-processable polymer light-emitting device materials. External quantum efficiency (EQE) of up to 10% is achieved in a simple, fully solution-processed device structure, and routes for further EQE improvement are identified.
Fuel Efficient Strategies for Reducing Contrail Formations in United States Air Space
NASA Technical Reports Server (NTRS)
Sridhar, Banavar; Chen, Neil Y.; Ng, Hok K.
2010-01-01
This paper describes a class of strategies for reducing persistent contrail formation in United States airspace. The primary objective is to minimize potential contrail formation regions by altering aircraft cruising altitude in a fuel-efficient way. The results show that contrail formation can be reduced significantly without extra fuel consumption and without adversely affecting congestion in the airspace; it can be reduced further by using extra fuel. For the day tested, the maximal-reduction strategy has a 53% contrail reduction rate. The most fuel-efficient strategy has an 8% reduction rate while burning 2.86% less fuel than the maximal-reduction strategy. Using a cost function that penalizes extra fuel consumed while maximizing the amount of contrail reduction provides a flexible way to trade off between contrail reduction and fuel consumption: it achieves a 35% contrail reduction rate with only 0.23% extra fuel consumption. The proposed fuel-efficient contrail reduction strategy provides a solution for reducing aviation-induced environmental impact on a daily basis.
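The altitude trade-off described above can be sketched as a weighted cost minimization. All numbers below are invented; the actual strategies operate on modeled contrail-formation regions and aircraft fuel-burn models.

```python
def best_altitude(levels, contrail, fuel, alpha):
    """Pick the flight level minimizing contrail + alpha * extra_fuel."""
    return min(levels, key=lambda l: contrail[l] + alpha * fuel[l])

levels = [330, 350, 370]                       # candidate flight levels
contrail = {330: 0.0, 350: 40.0, 370: 90.0}    # persistent-contrail extent
fuel = {330: 600.0, 350: 0.0, 370: 250.0}      # extra fuel burn vs. optimum
assert best_altitude(levels, contrail, fuel, alpha=0.0) == 330    # ignore fuel
assert best_altitude(levels, contrail, fuel, alpha=10.0) == 350   # penalize fuel
```

Sweeping the penalty weight alpha traces out the trade-off curve between maximal contrail reduction and maximal fuel efficiency that the paper exploits.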
Analytical and exact solutions of the spherical and cylindrical diodes of Langmuir-Blodgett law
NASA Astrophysics Data System (ADS)
Torres-Cordoba, Rafael; Martinez-Garcia, Edgar
2017-10-01
This paper discloses the exact solutions of a mathematical model that describes the cylindrical and spherical electron current emissions within the context of a physics approximation method. The solution involves analyzing the 1D nonlinear Poisson equation for the radial component. Although an asymptotic solution has been previously obtained, we present a theoretical solution that satisfies arbitrary boundary conditions. The solution is found in its parametric form (i.e., φ(r) = φ(r(τ))) and is valid when the electric field at the cathode surface is non-zero. Furthermore, the non-stationary spatial solution of the electric potential between the anode and the cathode is also presented. In this work, the particle-beam interface is considered to be at the end of the plasma sheath as described by Sutherland et al. [Phys. Plasmas 12, 033103 (2005)]. Three regimes of space charge effects (no space charge saturation, space charge limited, and space charge saturation) are also considered.
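For reference, the classical space-charge-limited Langmuir-Blodgett forms that such exact solutions generalize are quoted below (standard textbook results; the notation for the tabulated series functions α and β is assumed, not taken from the paper):

```latex
% Space-charge-limited currents between concentric electrodes
% (Langmuir-Blodgett), with cathode radius r_c and electrode radius r:
\begin{align}
  I_{\text{cyl}} &= \frac{8\pi\varepsilon_0 L}{9}\sqrt{\frac{2e}{m}}\,
      \frac{V^{3/2}}{r\,\beta^2(r/r_c)}, &
  I_{\text{sph}} &= \frac{16\pi\varepsilon_0}{9}\sqrt{\frac{2e}{m}}\,
      \frac{V^{3/2}}{\alpha^2(r/r_c)},
\end{align}
% where \beta and \alpha are the tabulated Langmuir-Blodgett series in
% \ln(r/r_c); the planar limit recovers the Child-Langmuir law
% J = \frac{4\varepsilon_0}{9}\sqrt{2e/m}\,\frac{V^{3/2}}{d^2}.
```

These asymptotic forms assume zero electric field at the cathode; the paper's contribution is precisely the case of arbitrary boundary conditions, where that field is non-zero.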
NASA Astrophysics Data System (ADS)
Katayama, Soichiro
We consider the Cauchy problem for systems of nonlinear wave equations with multiple propagation speeds in three space dimensions. Under the null condition for such systems, the global existence of small amplitude solutions is known. In this paper, we will show that the global solution is asymptotically free in the energy sense, by obtaining the asymptotic pointwise behavior of the derivatives of the solution. Nonetheless we can also show that the pointwise behavior of the solution itself may be quite different from that of the free solution. In connection with the above results, a theorem is also developed to characterize asymptotically free solutions for wave equations in arbitrary space dimensions.
Improving Energy Efficiency in CNC Machining
NASA Astrophysics Data System (ADS)
Pavanaskar, Sushrut S.
We present our work on analyzing and improving the energy efficiency of multi-axis CNC milling process. Due to the differences in energy consumption behavior, we treat 3- and 5-axis CNC machines separately in our work. For 3-axis CNC machines, we first propose an energy model that estimates the energy requirement for machining a component on a specified 3-axis CNC milling machine. Our model makes machine-specific predictions of energy requirements while also considering the geometric aspects of the machining toolpath. Our model - and the associated software tool - facilitate direct comparison of various alternative toolpath strategies based on their energy-consumption performance. Further, we identify key factors in toolpath planning that affect energy consumption in CNC machining. We then use this knowledge to propose and demonstrate a novel toolpath planning strategy that may be used to generate new toolpaths that are inherently energy-efficient, inspired by research on digital micrography -- a form of computational art. For 5-axis CNC machines, the process planning problem consists of several sub-problems that researchers have traditionally solved separately to obtain an approximate solution. After illustrating the need to solve all sub-problems simultaneously for a truly optimal solution, we propose a unified formulation based on configuration space theory. We apply our formulation to solve a problem variant that retains key characteristics of the full problem but has lower dimensionality, allowing visualization in 2D. Given the complexity of the full 5-axis toolpath planning problem, our unified formulation represents an important step towards obtaining a truly optimal solution. With this work on the two types of CNC machines, we demonstrate that without changing the current infrastructure or business practices, machine-specific, geometry-based, customized toolpath planning can save energy in CNC machining.
I/O efficient algorithms and applications in geographic information systems
NASA Astrophysics Data System (ADS)
Danner, Andrew
Modern remote sensing methods such as laser altimetry (lidar) and Interferometric Synthetic Aperture Radar (IfSAR) produce georeferenced elevation data at unprecedented rates. Many Geographic Information System (GIS) algorithms designed for terrain modelling applications cannot process these massive data sets. The primary problem is that these data sets are too large to fit in the main internal memory of modern computers and must therefore reside on larger, but considerably slower, disks. In these applications, the transfer of data between disk and main memory, or I/O, becomes the primary bottleneck. Working in a theoretical model that more accurately represents this two-level memory hierarchy, we can develop algorithms that are I/O-efficient and reduce the amount of disk I/O needed to solve a problem. In this thesis we aim to modernize GIS algorithms and develop a number of I/O-efficient algorithms for processing geographic data derived from massive elevation data sets. For each application, we convert a geographic question to an algorithmic question, develop an I/O-efficient algorithm that is theoretically efficient, implement our approach, and verify its performance using real-world data. The applications we consider include constructing a gridded digital elevation model (DEM) from an irregularly spaced point cloud, removing topological noise from a DEM, modeling surface water flow over a terrain, extracting river networks and watershed hierarchies from the terrain, and locating polygons containing query points in a planar subdivision. We initially developed solutions to each of these applications individually. However, we also show how to combine individual solutions to form a scalable geo-processing pipeline that seamlessly solves a sequence of sub-problems with little or no manual intervention. We present experimental results that demonstrate orders of magnitude improvement over previously known algorithms.
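The workhorse primitive behind many I/O-efficient algorithms is multiway external merge sort, which achieves O((N/B) log_{M/B}(N/B)) block transfers for memory size M and block size B. The sketch below is a minimal in-memory simulation with an artificially tiny "memory" of M items, not code from the thesis.

```python
import heapq

def external_sort(data, M=4):
    """Sort `data` the way an external-memory sort would: form memory-sized
    sorted runs (here M items each), then k-way merge all runs in one pass."""
    runs = [sorted(data[i:i + M]) for i in range(0, len(data), M)]
    return list(heapq.merge(*runs))   # heap-based k-way merge of sorted runs

assert external_sort([9, 3, 7, 1, 8, 2, 6, 5, 4]) == [1, 2, 3, 4, 5, 6, 7, 8, 9]
```

On disk, each run would be a file read and written sequentially, so every element moves only O(log_{M/B}(N/B)) times, which is what makes terrain-scale data sets tractable.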
On supersymmetric AdS6 solutions in 10 and 11 dimensions
NASA Astrophysics Data System (ADS)
Gutowski, J.; Papadopoulos, G.
2017-12-01
We prove a non-existence theorem for smooth, supersymmetric, warped AdS6 solutions with connected, compact internal space without boundary in D = 11 and (massive) IIA supergravities. In IIB supergravity we show that if such AdS6 solutions exist, then the NSNS and RR 3-form fluxes must be linearly independent and certain spinor bilinears must be appropriately restricted. Moreover, we demonstrate that the internal space admits an so(3) action which leaves all the fields invariant, and that for smooth solutions the principal orbits must have co-dimension two. We also describe the topology and geometry of internal spaces that admit such an so(3) action and show that there are no solutions for which the internal space has topology F × S^2, where F is an oriented surface.
Perspective: Memcomputing: Leveraging memory and physics to compute efficiently
NASA Astrophysics Data System (ADS)
Di Ventra, Massimiliano; Traversa, Fabio L.
2018-05-01
It is well known that physical phenomena may be of great help in computing some difficult problems efficiently. A typical example is prime factorization that may be solved in polynomial time by exploiting quantum entanglement on a quantum computer. There are, however, other types of (non-quantum) physical properties that one may leverage to compute efficiently a wide range of hard problems. In this perspective, we discuss how to employ one such property, memory (time non-locality), in a novel physics-based approach to computation: Memcomputing. In particular, we focus on digital memcomputing machines (DMMs) that are scalable. DMMs can be realized with non-linear dynamical systems with memory. The latter property allows the realization of a new type of Boolean logic, one that is self-organizing. Self-organizing logic gates are "terminal-agnostic," namely, they do not distinguish between the input and output terminals. When appropriately assembled to represent a given combinatorial/optimization problem, the corresponding self-organizing circuit converges to the equilibrium points that express the solutions of the problem at hand. In doing so, DMMs take advantage of the long-range order that develops during the transient dynamics. This collective dynamical behavior, reminiscent of a phase transition, or even the "edge of chaos," is mediated by families of classical trajectories (instantons) that connect critical points of increasing stability in the system's phase space. The topological character of the solution search renders DMMs robust against noise and structural disorder. Since DMMs are non-quantum systems described by ordinary differential equations, not only can they be built in hardware with the available technology, they can also be simulated efficiently on modern classical computers. 
As an example, we will show the polynomial-time solution of the subset-sum problem for the worst cases, and point to other types of hard problems where simulations of DMMs' equations of motion on classical computers have already demonstrated substantial advantages over traditional approaches. We conclude this article by outlining further directions of study.
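For contrast with the memcomputing approach, the classical dynamic-programming solution of subset-sum is pseudo-polynomial, O(nT) for n numbers and target T; this is the conventional baseline such machines are compared against (the sketch below is the textbook algorithm, not the DMM simulation).

```python
def subset_sum(nums, target):
    """Return True if some subset of nums sums to target, by growing the
    set of reachable subset sums one element at a time (O(n * target))."""
    reachable = {0}
    for x in nums:
        reachable |= {s + x for s in reachable if s + x <= target}
    return target in reachable

assert subset_sum([3, 34, 4, 12, 5, 2], 9)        # 4 + 5 = 9
assert not subset_sum([3, 34, 4, 12, 5, 2], 30)   # no subset reaches 30
```

The DP cost grows with the magnitude of the target, whereas the hard instances discussed above are those where such approaches become impractical.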
Higher-order adaptive finite-element methods for Kohn–Sham density functional theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Motamarri, P.; Nowak, M.R.; Leiter, K.
2013-11-15
We present an efficient computational approach to perform real-space electronic structure calculations using an adaptive higher-order finite-element discretization of Kohn–Sham density-functional theory (DFT). To this end, we develop an a priori mesh-adaption technique to construct a close-to-optimal finite-element discretization of the problem. We further propose an efficient solution strategy for solving the discrete eigenvalue problem by using spectral finite-elements in conjunction with Gauss–Lobatto quadrature, and a Chebyshev acceleration technique for computing the occupied eigenspace. The proposed approach has been observed to provide a staggering 100–200-fold computational advantage over the solution of a generalized eigenvalue problem. Using the proposed solution procedure, we investigate the computational efficiency afforded by higher-order finite-element discretizations of the Kohn–Sham DFT problem. Our studies suggest that staggering computational savings, of the order of 1000-fold relative to linear finite-elements, can be realized for both all-electron and local pseudopotential calculations by using higher-order finite-element discretizations. On all the benchmark systems studied, we observe diminishing returns in computational savings beyond the sixth order for accuracies commensurate with chemical accuracy, suggesting that the hexic spectral-element may be an optimal choice for the finite-element discretization of the Kohn–Sham DFT problem. A comparative study of the computational efficiency of the proposed higher-order finite-element discretizations suggests that the performance of the finite-element basis is competitive with the plane-wave discretization for non-periodic local pseudopotential calculations, and comes within an order of magnitude of the Gaussian basis for all-electron calculations.
Further, we demonstrate the capability of the proposed approach to compute the electronic structure of a metallic system containing 1688 atoms using modest computational resources, and good scalability of the present implementation up to 192 processors.
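The Chebyshev acceleration mentioned above can be sketched as filtered subspace iteration: a degree-m Chebyshev polynomial of the Hamiltonian damps the unwanted (unoccupied) part of the spectrum, so repeated filtering plus re-orthonormalization converges to the occupied eigenspace. A minimal dense-matrix illustration (a toy Hamiltonian, not the paper's finite-element implementation):

```python
import numpy as np

def cheb_filter(H, X, m, a, b):
    """Apply a degree-m Chebyshev filter to the block X, damping the
    spectral interval [a, b] of H and amplifying eigenvalues below a."""
    e, c = (b - a) / 2.0, (b + a) / 2.0
    Y = (H @ X - c * X) / e                      # degree-1 term
    for _ in range(2, m + 1):
        X, Y = Y, 2.0 * (H @ Y - c * Y) / e - X  # three-term recurrence
    return Y

# toy Hamiltonian with 3 'occupied' and 3 'unoccupied' states
H = np.diag([1.0, 2.0, 3.0, 10.0, 11.0, 12.0])
rng = np.random.default_rng(0)
X = np.linalg.qr(rng.standard_normal((6, 3)))[0]
for _ in range(10):
    X = np.linalg.qr(cheb_filter(H, X, m=8, a=5.0, b=12.5))[0]
evals = np.sort(np.linalg.eigvalsh(X.T @ H @ X))  # Rayleigh-Ritz step
```

Because the filter only requires matrix-vector products `H @ X`, it avoids the generalized eigenvalue solve entirely, which is the source of the speedup quoted in the abstract.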
The eigenvalue problem in phase space.
Cohen, Leon
2018-06-30
We formulate the standard quantum mechanical eigenvalue problem in quantum phase space. The equation obtained involves the c-function that corresponds to the quantum operator. We use the Wigner distribution for the phase space function. We argue that the phase space eigenvalue equation obtained has, in addition to the proper solutions, improper solutions: solutions for which no wave function exists that could generate the distribution. We discuss the conditions for ascertaining whether a position-momentum function is a proper phase space distribution. We call these conditions psi-representability conditions, and show that if these conditions are imposed, one extracts the correct phase space eigenfunctions. We also derive the phase space eigenvalue equation for arbitrary phase space distribution functions. © 2017 Wiley Periodicals, Inc.
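For readers unfamiliar with the construction, the Wigner distribution used here is W(x, p) = (1/π) ∫ ψ*(x+y) ψ(x−y) e^{2ipy} dy (with ħ = 1). A direct quadrature for the harmonic-oscillator ground state, a proper and everywhere-positive case with the closed form W(x, p) = e^{−x²−p²}/π, makes a convenient sanity check:

```python
import numpy as np

def wigner(psi, x, p, ylim=6.0, npts=4001):
    """W(x, p) = (1/pi) * integral of psi*(x+y) psi(x-y) exp(2ipy) dy, hbar = 1."""
    y = np.linspace(-ylim, ylim, npts)
    integrand = np.conj(psi(x + y)) * psi(x - y) * np.exp(2j * p * y)
    return (integrand.sum() * (y[1] - y[0])).real / np.pi  # Riemann sum; tails ~0

# harmonic-oscillator ground state: psi(x) = pi^(-1/4) exp(-x^2 / 2)
psi0 = lambda x: np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)
```

An "improper" candidate distribution is precisely one that cannot be written in this ψ-generated form for any wave function.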
LED solution for E14 candle lamp
NASA Astrophysics Data System (ADS)
Li, Yun; Liu, Ye; Boonekamp, Erik P.; Shi, Lei; Mei, Yi; Jiang, Tan; Guo, Qing; Wu, Huarong
2009-08-01
In the short to medium term, energy-efficient retrofit LED products offer an attractive solution for replacing traditional lamps in existing fixtures. To comply with user expectations, LED retrofit lamps should not only have the same mechanical interface (socket and shape), but also produce a light effect similar to that of the lamps they replace. The decorative lighting segment offers the best conditions for meeting these requirements in the short term. In 2008, Philips Lighting Shanghai started the development of an LED candle lamp to replace the 15W candle-shaped (B35 E14) incandescent bulb used in, e.g., chandeliers. In this decorative application the main objective is not to generate as much light as possible; rather, the lamp must have a comparable look and, primarily, the same light effect as the incandescent candle lamp. This effect can be described as sparkling light, and it has to be directed sufficiently downwards (i.e., in the direction of the base of the lamp). These requirements leave very limited room for the optics, electronics, mechanics, and thermal design within the small outline of this lamp. A mains-voltage AC LED concept was chosen to save the space otherwise needed for driver electronics. However, the AC LED is relatively large, which makes the optical design challenging. Several optical solutions to achieve the required light effect, improve the optical efficiency, and simplify the system are discussed. A novel prismatic lens has been developed that is capable of transforming the Lambertian light emission from typical high-power LEDs into a butterfly intensity distribution with the desired sparkling light effect. Thanks to this lens no reflecting chamber is needed, which improves the optical efficiency to 70% while maintaining the compact outline of the original optics.
Together with advanced driver solution and thermal solution, the resulting LED candle lamp operates at 230V, consumes 1.8W, and delivers about 55 lm at 3000K with the requested radiation pattern and sparkle effect. Some field tests were done with positive feedback.
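From the figures quoted, the retrofit's luminous efficacy and its energy saving relative to the 15 W incandescent it replaces work out as follows (assuming, since the lamp is a like-for-like replacement, equal delivered light effect):

```python
flux_lm, led_w, inc_w = 55.0, 1.8, 15.0   # values quoted in the abstract
efficacy = flux_lm / led_w                # luminous efficacy of the LED lamp, lm/W
saving = 1.0 - led_w / inc_w              # fractional power saving vs. 15 W bulb
print(round(efficacy, 1), round(saving * 100))  # 30.6 88
```

That is, roughly 30.6 lm/W and an 88% reduction in power for the same decorative effect.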
Sustainable Skyscrapers: Designing the Net Zero Energy Building of the Future
NASA Astrophysics Data System (ADS)
Kothari, S.; Bartsch, A.
2016-12-01
Cities of the future will need to increase population density in order to keep up with the rising populations in the limited available land area. In order to provide sufficient power as the population grows, cities must become more energy efficient. Fossil fuels and grid energy will continue to become more expensive as nonrenewable resources deplete. The obvious solution to increase population density while decreasing the reliance on fossil fuels is to build taller skyscrapers that are energy neutral, i.e. self-sustaining. However, current skyscrapers are not energy efficient, and therefore cannot provide a sustainable solution to the problem of increasing population density in the face of depleting energy resources. The design of a net zero energy building that includes both residential and commercial space is presented. Alternative energy systems such as wind turbines, photovoltaic cells, and a waste-to-fuel conversion plant have been incorporated into the design of a 50 story skyscraper that is not reliant on fossil fuels and has a payback time of about six years. Although the current building was designed to be located in San Francisco, simple modifications to the design would allow this building to fit the needs of any city around the world.
Scaling Optimization of the SIESTA MHD Code
NASA Astrophysics Data System (ADS)
Seal, Sudip; Hirshman, Steven; Perumalla, Kalyan
2013-10-01
SIESTA is a parallel three-dimensional plasma equilibrium code capable of resolving magnetic islands at high spatial resolutions for toroidal plasmas. Originally designed to exploit small-scale parallelism, SIESTA has now been scaled to execute efficiently over several thousands of processors P. This scaling improvement was accomplished with minimal intrusion to the execution flow of the original version. First, the efficiency of the iterative solutions was improved by integrating the parallel tridiagonal block solver code BCYCLIC. Krylov-space generation in GMRES was then accelerated using a customized parallel matrix-vector multiplication algorithm. Novel parallel Hessian generation algorithms were integrated and memory access latencies were dramatically reduced through loop nest optimizations and data layout rearrangement. These optimizations sped up equilibria calculations by factors of 30-50. It is possible to compute solutions with granularity N/P near unity on extremely fine radial meshes (N > 1024 points). Grid separation in SIESTA, which manifests itself primarily in the resonant components of the pressure far from rational surfaces, is strongly suppressed by finer meshes. Large problem sizes of up to 300 K simultaneous non-linear coupled equations have been solved on the NERSC supercomputers. Work supported by U.S. DOE under Contract DE-AC05-00OR22725 with UT-Battelle, LLC.
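BCYCLIC itself implements a parallel cyclic-reduction scheme for block-tridiagonal systems; as a serial reference point, the same systems are solved by the block Thomas algorithm, sketched below (a didactic version, not the BCYCLIC code):

```python
import numpy as np

def block_thomas(A, B, C, d):
    """Solve a block-tridiagonal system with diagonal blocks B[i],
    sub-diagonal blocks A[i] (A[0] unused) and super-diagonal blocks
    C[i] (C[n-1] unused), by block forward elimination then back-substitution."""
    n = len(B)
    Bp, dp = [B[0].copy()], [d[0].copy()]
    for i in range(1, n):                      # forward elimination
        W = A[i] @ np.linalg.inv(Bp[i - 1])
        Bp.append(B[i] - W @ C[i - 1])
        dp.append(d[i] - W @ dp[i - 1])
    x = [None] * n
    x[-1] = np.linalg.solve(Bp[-1], dp[-1])
    for i in range(n - 2, -1, -1):             # back-substitution
        x[i] = np.linalg.solve(Bp[i], dp[i] - C[i] @ x[i + 1])
    return np.concatenate(x)
```

Cyclic reduction reorganizes exactly this elimination so that independent block operations can proceed concurrently across processors.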
Mathematical analysis and coordinated current allocation control in battery power module systems
NASA Astrophysics Data System (ADS)
Han, Weiji; Zhang, Liang
2017-12-01
As the major energy storage device and power supply source in numerous energy applications, such as solar panels, wind plants, and electric vehicles, battery systems often face the issue of charge imbalance among battery cells/modules, which can accelerate battery degradation, cause more energy loss, and even incur fire hazard. To tackle this issue, various circuit designs have been developed to enable charge equalization among battery cells/modules. Recently, the battery power module (BPM) design has emerged to be one of the promising solutions for its capability of independent control of individual battery cells/modules. In this paper, we propose a new current allocation method based on charging/discharging space (CDS) for performance control in BPM systems. Based on the proposed method, the properties of CDS-based current allocation with constant parameters are analyzed. Then, real-time external total power requirement is taken into account and an algorithm is developed for coordinated system performance control. By choosing appropriate control parameters, the desired system performance can be achieved by coordinating the module charge balance and total power efficiency. Besides, the proposed algorithm has complete analytical solutions, and thus is very computationally efficient. Finally, the efficacy of the proposed algorithm is demonstrated using simulations.
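The flavor of CDS-based allocation can be conveyed with a minimal proportional rule (a hypothetical illustration, not the authors' control law): during discharge, each module's share of the total current is weighted by its remaining discharging space, so fuller modules work harder and the states of charge converge over time.

```python
def allocate_discharge(i_total, soc, capacity_ah):
    """Split a total discharge current among modules in proportion to each
    module's discharging space (stored charge soc_i * capacity_i)."""
    space = [s * c for s, c in zip(soc, capacity_ah)]
    total_space = sum(space)
    return [i_total * w / total_space for w in space]

currents = allocate_discharge(30.0, soc=[0.9, 0.6, 0.3], capacity_ah=[10, 10, 10])
# the fullest module supplies the most current; shares sum to the demand
```

Like the method in the paper, the rule is closed-form (no iterative optimization), which is what makes per-timestep evaluation cheap.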
Homoclinic accretion solutions in the Schwarzschild-anti-de Sitter space-time
NASA Astrophysics Data System (ADS)
Mach, Patryk
2015-04-01
The aim of this paper is to clarify the distinction between homoclinic and standard (global) Bondi-type accretion solutions in the Schwarzschild-anti-de Sitter space-time. The homoclinic solutions have recently been discovered numerically for polytropic equations of state. Here I show that they also exist for certain isothermal (linear) equations of state, and an analytic solution of this type is obtained. It is argued that the existence of such solutions is generic, although for sufficiently relativistic matter models (photon gas, ultrahard equation of state) there exist global solutions that can be continued to infinity, similarly to standard Michel solutions in the Schwarzschild space-time. In contrast, global solutions should not exist for matter models with a nonvanishing rest-mass component, and this is demonstrated for polytropes. For homoclinic isothermal solutions I derive an upper bound on the mass of the black hole for which stationary transonic accretion is allowed.
NASA Astrophysics Data System (ADS)
Prasetyo, I.; Ramadhan, H. S.
2017-07-01
Here we present some solutions with a noncanonical global monopole in a nonlinear sigma model in 4-dimensional spacetime. We discuss some black hole solutions and their horizons. We also obtain some compactification solutions, and list possible compactification channels from 4-space to 2 × 2-spaces of constant curvature.
Finite difference time domain analysis of chirped dielectric gratings
NASA Technical Reports Server (NTRS)
Hochmuth, Diane H.; Johnson, Eric G.
1993-01-01
The finite difference time domain (FDTD) method for solving Maxwell's time-dependent curl equations is accurate, computationally efficient, and straightforward to implement. Since both time and space derivatives are employed, the propagation of an electromagnetic wave can be treated as an initial-value problem. Second-order central-difference approximations are applied to the space and time derivatives of the electric and magnetic fields, providing a discretization of the fields in a volume of space over a period of time. The solution to this system of equations is stepped through time, thus simulating the propagation of the incident wave. If the simulation is continued until a steady state is reached, an appropriate far-field transformation can be applied to the time-domain scattered fields to obtain reflected and transmitted powers. From this information diffraction efficiencies can also be determined. In analyzing the chirped structure, a mesh is applied only to the area immediately around the grating; the size of the mesh is then proportional to the electrical size of the grating. Doing this, however, imposes an artificial boundary around the area of interest. An absorbing boundary condition must be applied along the artificial boundary so that outgoing waves are absorbed as if the boundary were absent. Many such boundary conditions have been developed that give near-perfect absorption. In this analysis, the Mur absorbing boundary conditions are employed. Several grating structures were analyzed using the FDTD method.
An Open-Source Auto-Calibration Routine Supporting the Stormwater Management Model
NASA Astrophysics Data System (ADS)
Tiernan, E. D.; Hodges, B. R.
2017-12-01
The stormwater management model (SWMM) is a clustered model that relies on subcatchment-averaged parameter assignments to correctly capture catchment stormwater runoff behavior. Model calibration is considered a critical step for SWMM performance, an arduous task that most stormwater management designers undertake manually. This research presents an open-source, automated calibration routine that increases the efficiency and accuracy of the model calibration process. The routine uses a preliminary sensitivity analysis to reduce the dimensionality of the parameter space, at which point a multi-objective genetic algorithm (a modified Non-dominated Sorting Genetic Algorithm II) determines the Pareto front for the objective functions within the parameter space. The solutions on this Pareto front represent the optimized parameter value sets for catchment behavior that could not reasonably have been obtained through manual calibration.
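At the core of the NSGA-II step is non-dominated sorting; the first (Pareto) front can be sketched in a few lines (minimization convention; a didactic version, not the routine's implementation):

```python
def dominates(a, b):
    """True if objective vector a is no worse than b in every objective
    and strictly better in at least one (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset: the first front of NSGA-II sorting."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

front = pareto_front([(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)])
# -> [(1, 5), (2, 3), (4, 1)]
```

Each member of the front trades one objective (e.g., peak-flow error) against another (e.g., volume error); no member can be improved in one objective without worsening another.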
An analytical study of physical models with inherited temporal and spatial memory
NASA Astrophysics Data System (ADS)
Jaradat, Imad; Alquran, Marwan; Al-Khaled, Kamel
2018-04-01
Du et al. (Sci. Rep. 3, 3431 (2013)) demonstrated that the fractional derivative order can be physically interpreted as a memory index by fitting test data of memory phenomena. The aim of this work is to study analytically the joint effect of the memory index on the time and space coordinates simultaneously. For this purpose, we introduce a novel bivariate fractional power series expansion accompanied by twofold fractional derivatives of orders α, β ∈ (0, 1]. Further, some convergence criteria concerning our expansion are presented, and an analog of the well-known bivariate Taylor's formula in the sense of mixed fractional derivatives is obtained. Finally, to show the functionality and efficiency of this expansion, we employ the corresponding Taylor's series method to obtain closed-form solutions of various physical models with inherited time and space memory.
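One plausible form of such a bivariate expansion (an illustration consistent with the abstract, not necessarily the authors' exact notation) writes f(t, x) in mixed fractional powers, with coefficients fixed by iterated Caputo derivatives at the origin:

```latex
f(t,x) \;=\; \sum_{j=0}^{\infty}\sum_{i=0}^{j} a_{ij}\, t^{(j-i)\alpha}\, x^{i\beta},
\qquad
a_{ij} \;=\; \frac{\big(D_t^{\alpha}\big)^{j-i}\big(D_x^{\beta}\big)^{i} f(0,0)}
{\Gamma\!\big((j-i)\alpha+1\big)\,\Gamma\!\big(i\beta+1\big)},
\qquad \alpha,\beta\in(0,1].
```

At α = β = 1 the Gamma functions reduce to factorials, Γ(n+1) = n!, and the classical bivariate Taylor series is recovered.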
NASA Astrophysics Data System (ADS)
Strauss, R. Du Toit; Effenberger, Frederic
2017-10-01
In this review, an overview of the recent history of stochastic differential equations (SDEs) in application to particle transport problems in space physics and astrophysics is given. The aim is to present a helpful working guide to the literature and at the same time introduce key principles of the SDE approach via "toy models". Using these examples, we hope to provide an easy way for newcomers to the field to use such methods in their own research. Aspects covered are the solar modulation of cosmic rays, diffusive shock acceleration, galactic cosmic ray propagation and solar energetic particle transport. We believe that the SDE method, due to its simplicity and computational efficiency on modern computer architectures, will be of significant relevance in energetic particle studies in the years to come.
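The "toy model" spirit of the SDE approach can be shown with the simplest transport case: pure spatial diffusion with coefficient κ, dX = √(2κ) dW, integrated by the Euler–Maruyama scheme. The ensemble variance should grow as 2κt (a generic sketch, not one of the review's specific models):

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, dt, nsteps, npaths, rng):
    """Integrate dX = drift(X) dt + diffusion(X) dW for an ensemble of paths."""
    x = np.full(npaths, x0, dtype=float)
    for _ in range(nsteps):
        dw = rng.normal(0.0, np.sqrt(dt), npaths)  # Wiener increments
        x += drift(x) * dt + diffusion(x) * dw
    return x

rng = np.random.default_rng(42)
kappa, t_final = 0.5, 1.0
x = euler_maruyama(0.0, lambda x: 0.0 * x,
                   lambda x: np.sqrt(2.0 * kappa) + 0.0 * x,
                   dt=0.01, nsteps=100, npaths=50000, rng=rng)
# sample variance of x should be close to 2 * kappa * t_final = 1.0
```

Because each pseudo-particle path is independent, the method parallelizes trivially, which is the computational-efficiency point made above.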
National Air Space (NAS) Data Exchange Environment Through 2060
NASA Technical Reports Server (NTRS)
Roy, Aloke
2015-01-01
NASA's NextGen Concepts and Technology Development (CTD) Project focuses on capabilities to improve safety, capacity and efficiency of the National Air Space (NAS). In order to achieve those objectives, NASA sought industry-Government partnerships to research and identify solutions for traffic flow management, dynamic airspace configuration, separation assurance, super density operations, airport surface operations and similar forward-looking air-traffic modernization (ATM) concepts. Data exchanges over NAS being the key enabler for most of these ATM concepts, the Sub-Topic area 3 of the CTD project sought to identify technology candidates that can satisfy air-to-air and air/ground communications needs of the NAS in the year 2060 timeframe. Honeywell, under a two-year contract with NASA, is working on this communications technology research initiative. This report summarizes Honeywell's research conducted during the second year of the study task.
A method for the dynamic and thermal stress analysis of space shuttle surface insulation
NASA Technical Reports Server (NTRS)
Ojalvo, I. U.; Levy, A.; Austin, F.
1975-01-01
The thermal protection system of the space shuttle consists of thousands of separate insulation tiles bonded to the orbiter's surface through a soft strain-isolation layer. The individual tiles are relatively thick and possess nonuniform properties. Therefore, each is idealized by finite-element assemblages containing up to 2500 degrees of freedom. Since the tiles affixed to a given structural panel will, in general, interact with one another, application of the standard direct-stiffness method would require equation systems involving excessive numbers of unknowns. This paper presents a method which overcomes this problem through an efficient iterative procedure which requires treatment of only a single tile at any given time. Results of associated static, dynamic, and thermal stress analyses and sufficient conditions for convergence of the iterative solution method are given.
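The one-tile-at-a-time procedure is, in spirit, a block Gauss–Seidel iteration: solve each tile's own equations with the coupling to neighboring tiles moved to the right-hand side, and sweep until converged (sufficient, for example, when the global matrix is suitably diagonally dominant). A small dense sketch, not the paper's finite-element implementation:

```python
import numpy as np

def block_gauss_seidel(A, b, blocks, maxit=500, tol=1e-12):
    """Solve A x = b one block ('tile') at a time: each sub-solve treats only
    that block's unknowns, with coupling to the other blocks on the RHS."""
    x = np.zeros_like(b)
    all_idx = np.arange(len(b))
    for _ in range(maxit):
        x_prev = x.copy()
        for idx in blocks:
            rest = np.setdiff1d(all_idx, idx)
            rhs = b[idx] - A[np.ix_(idx, rest)] @ x[rest]
            x[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
        if np.linalg.norm(x - x_prev) < tol:
            break
    return x
```

Only one tile's system (here a block solve; in the paper up to 2500 degrees of freedom) is ever held in memory at a time, which is exactly what makes the approach tractable compared with assembling the full direct-stiffness system.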
Traveling Magnetic Field Applications for Materials Processing in Space
NASA Technical Reports Server (NTRS)
Grugel, R. N.; Mazuruk, K.; Curreri, Peter A. (Technical Monitor)
2001-01-01
Including the capability to induce a controlled fluid flow in the melt can significantly enrich research on solidification phenomena in a microgravity environment. The traveling magnetic field (TMF) is a promising technique to achieve this goal and is the aim of our ground-based project. In this presentation we will discuss new theoretical as well as experimental results recently obtained by our group. In particular, we experimentally demonstrated efficient mixing of metal alloys in long tubes subjected to TMF during processing. Application of this technique can provide an elegant solution to ensure melt homogenization prior to solidification in a microgravity environment where natural convection is generally absent. Results of our experimental work of applying the TMF technique to alloy melts will be presented. Possible applications of TMF on board the International Space Station will also be discussed.
HZETRN: A heavy ion/nucleon transport code for space radiations
NASA Technical Reports Server (NTRS)
Wilson, John W.; Chun, Sang Y.; Badavi, Forooz F.; Townsend, Lawrence W.; Lamkin, Stanley L.
1991-01-01
The galactic heavy ion transport code (GCRTRN) and the nucleon transport code (BRYNTRN) are integrated into a code package (HZETRN). The code package is computer efficient and capable of operating in an engineering design environment for manned deep space mission studies. The nuclear data set used by the code is discussed including current limitations. Although the heavy ion nuclear cross sections are assumed constant, the nucleon-nuclear cross sections of BRYNTRN with full energy dependence are used. The relation of the final code to the Boltzmann equation is discussed in the context of simplifying assumptions. Error generation and propagation is discussed, and comparison is made with simplified analytic solutions to test numerical accuracy of the final results. A brief discussion of biological issues and their impact on fundamental developments in shielding technology is given.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Altsybeyev, V.V., E-mail: v.altsybeev@spbu.ru; Ponomarev, V.A.
The particle tracking method with a so-called gun iteration for modeling the space charge is discussed in this paper. We suggest applying an emission model based on Gauss's law for the calculation of the space-charge-limited current density distribution with the considered method. Based on the presented emission model we have developed a numerical algorithm for these calculations. This approach allows us to perform accurate and computationally inexpensive numerical simulations for different vacuum sources with curved emitting surfaces, also in the presence of additional physical effects such as bipolar flows and backscattered electrons. The results of simulations of a cylindrical diode and a diode with an elliptical emitter, using axisymmetric coordinates, are presented. The high efficiency and accuracy of the suggested approach are confirmed by the obtained results and by comparisons with analytical solutions.
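In the planar limit, the space-charge-limited current density such an emission model must reproduce is the Child–Langmuir law, J = (4ε₀/9)·√(2e/mₑ)·V^{3/2}/d². The paper treats curved emitters numerically; the snippet below is only the textbook reference case useful for validation:

```python
import math

EPS0 = 8.8541878128e-12       # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19    # elementary charge, C
M_E = 9.1093837015e-31        # electron mass, kg

def child_langmuir_j(voltage_v, gap_m):
    """Space-charge-limited current density (A/m^2) of an ideal planar
    vacuum diode: J = (4*eps0/9) * sqrt(2e/m_e) * V**1.5 / d**2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
           * voltage_v ** 1.5 / gap_m ** 2

j = child_langmuir_j(1000.0, 0.01)   # 1 kV across a 1 cm gap
# j comes out around 7.4e2 A/m^2 (about 74 mA/cm^2)
```

Agreement with this analytic limit is the kind of comparison the abstract refers to when validating against analytical solutions.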
NASA Astrophysics Data System (ADS)
Shobe, Charles M.; Tucker, Gregory E.; Barnhart, Katherine R.
2017-12-01
Models of landscape evolution by river erosion are often either transport-limited (sediment is always available but may or may not be transportable) or detachment-limited (sediment must be detached from the bed but is then always transportable). While several models incorporate elements of, or transition between, transport-limited and detachment-limited behavior, most require that either sediment or bedrock, but not both, are eroded at any given time. Modeling landscape evolution over large spatial and temporal scales requires a model that can (1) transition freely between transport-limited and detachment-limited behavior, (2) simultaneously treat sediment transport and bedrock erosion, and (3) run in 2-D over large grids and be coupled with other surface process models. We present SPACE (stream power with alluvium conservation and entrainment) 1.0, a new model for simultaneous evolution of an alluvium layer and a bedrock bed based on conservation of sediment mass both on the bed and in the water column. The model treats sediment transport and bedrock erosion simultaneously, embracing the reality that many rivers (even those commonly defined as bedrock rivers) flow over a partially alluviated bed. SPACE improves on previous models of bedrock-alluvial rivers by explicitly calculating sediment erosion and deposition rather than relying on a flux-divergence (Exner) approach. The SPACE model is a component of the Landlab modeling toolkit, a Python-language library used to create models of Earth surface processes. Landlab allows efficient coupling between the SPACE model and components simulating basin hydrology, hillslope evolution, weathering, lithospheric flexure, and other surface processes. Here, we first derive the governing equations of the SPACE model from existing sediment transport and bedrock erosion formulations and explore the behavior of local analytical solutions for sediment flux and alluvium thickness. We derive steady-state analytical solutions for channel slope, alluvium thickness, and sediment flux, and show that SPACE matches predicted behavior in detachment-limited, transport-limited, and mixed conditions. We provide an example of landscape evolution modeling in which SPACE is coupled with hillslope diffusion, and demonstrate that SPACE provides an effective framework for simultaneously modeling 2-D sediment transport and bedrock erosion.
NASA Technical Reports Server (NTRS)
Stone, James R.
1994-01-01
Alkali metal boilers are of interest for application to future space Rankine cycle power conversion systems. Significant progress on such boilers was accomplished in the 1960's and early 1970's, but development was not continued to operational systems since NASA's plans for future space missions were drastically curtailed in the early 1970's. In particular, piloted Mars missions were indefinitely deferred. With the announcement of the Space Exploration Initiative (SEI) in July 1989 by President Bush, interest was rekindled in challenging space missions and, consequently in space nuclear power and propulsion. Nuclear electric propulsion (NEP) and nuclear thermal propulsion (NTP) were proposed for interplanetary space vehicles, particularly for Mars missions. The potassium Rankine power conversion cycle became of interest to provide electric power for NEP vehicles and for 'dual-mode' NTP vehicles, where the same reactor could be used directly for propulsion and (with an additional coolant loop) for power. Although the boiler is not a major contributor to system mass, it is of critical importance because of its interaction with the rest of the power conversion system; it can cause problems for other components such as excess liquid droplets entering the turbine, thereby reducing its life, or more critically, it can drive instabilities-some severe enough to cause system failure. Funding for the SEI and its associated technology program from 1990 to 1993 was not sufficient to support significant new work on Rankine cycle boilers for space applications. In Fiscal Year 1994, funding for these challenging missions and technologies has again been curtailed, and planning for the future is very uncertain. The purpose of this paper is to review the technologies developed in the 1960's and 1970's in the light of the recent SEI applications. In this way, future Rankine cycle boiler programs may be conducted most efficiently. 
This report is aimed at evaluating alkali metal boiler technology for space Rankine cycle systems. Research is summarized on the problems of flow stability, liquid carryover, pressure drop and heat transfer, and on potential solutions developed, primarily those developed by the NASA Lewis Research Center in the 1960's and early 1970's.
NASA Astrophysics Data System (ADS)
Zhang, Yongjing; Chen, Zhe; Yao, Lei; Wang, Xiao; Fu, Ping; Lin, Zhidong
2018-04-01
The interlayer spacing of graphene oxide (GO) is a key property of a GO membrane. To probe the variation of the interlayer spacing of a GO membrane immersed in KCl aqueous solution, electrochemical impedance spectroscopy (EIS), x-ray diffraction (XRD) and computational calculations were utilized in this study. The XRD patterns show that soaking in KCl aqueous solution leads to an increase of the interlayer spacing of the GO membrane, and the EIS results indicate that during the immersion process the charge transfer resistance of the GO membrane decreases first and then increases. Computational calculations confirm that intercalated water molecules can increase the interlayer spacing of the GO membrane, while the permeation of K+ ions would decrease it. All the results are consistent with one another, suggesting that during immersion the interlayer spacing of GO enlarges first and then decreases. EIS is thus a promising online method for examining the interlayer spacing of GO in aqueous solution.
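The interlayer spacing is extracted from the XRD peak position via Bragg's law, d = λ/(2 sin θ). With Cu Kα radiation (λ = 1.5406 Å) and illustrative peak positions (hypothetical angles, not the paper's data), swelling upon immersion shows up directly as a shift of the (001) peak to lower 2θ:

```python
import math

CU_KALPHA_A = 1.5406  # Cu K-alpha wavelength, angstroms

def d_spacing(two_theta_deg, wavelength=CU_KALPHA_A):
    """Bragg's law for first-order diffraction: d = lambda / (2 sin(theta))."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

d_dry = d_spacing(10.0)   # ~8.8 A, typical of a dry GO membrane
d_wet = d_spacing(7.0)    # smaller 2-theta -> larger interlayer spacing
```

The inverse relation between 2θ and d is why a peak shift toward smaller angles signals the intercalation-driven expansion described above.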
Dynamic implicit 3D adaptive mesh refinement for non-equilibrium radiation diffusion
NASA Astrophysics Data System (ADS)
Philip, B.; Wang, Z.; Berrill, M. A.; Birke, M.; Pernice, M.
2014-04-01
The time dependent non-equilibrium radiation diffusion equations are important for solving the transport of energy through radiation in optically thick regimes and find applications in several fields including astrophysics and inertial confinement fusion. The associated initial boundary value problems that are encountered often exhibit a wide range of scales in space and time and are extremely challenging to solve. To efficiently and accurately simulate these systems we describe our research on combining techniques that will also find use more broadly for long term time integration of nonlinear multi-physics systems: implicit time integration for efficient long term time integration of stiff multi-physics systems, local control theory based step size control to minimize the required global number of time steps while controlling accuracy, dynamic 3D adaptive mesh refinement (AMR) to minimize memory and computational costs, Jacobian Free Newton-Krylov methods on AMR grids for efficient nonlinear solution, and optimal multilevel preconditioner components that provide level independent solver convergence.
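The Jacobian-free Newton–Krylov idea mentioned above rests on a single identity: a Krylov method needs only Jacobian-vector products, and J(u)·v can be approximated by a directional finite difference of the nonlinear residual, so the Jacobian is never formed or stored. A minimal sketch of that matrix-free product:

```python
import numpy as np

def jfnk_matvec(residual, u, v, eps=1e-7):
    """Approximate J(u) @ v as (F(u + eps*v) - F(u)) / eps: the matrix-free
    product at the heart of Jacobian-free Newton-Krylov methods."""
    return (residual(u + eps * v) - residual(u)) / eps

# check against the analytic Jacobian of F(u) = (u0^2 + u1, u1^3)
F = lambda u: np.array([u[0] ** 2 + u[1], u[1] ** 3])
u, v = np.array([1.0, 2.0]), np.array([1.0, 1.0])
approx = jfnk_matvec(F, u, v)
exact = np.array([[2.0, 1.0], [0.0, 12.0]]) @ v   # J evaluated at u = (1, 2)
```

On AMR grids this is attractive precisely because the residual evaluation already exists, while assembling the Jacobian across changing mesh levels would be costly.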
Development of an efficient Procedure for Resist Wall Space Experiment
NASA Astrophysics Data System (ADS)
Matsumoto, Shouhei; Kumasaki, Saori; Higuchi, Sayoko; Kirihata, Kuniaki; Inoue, Yasue; Fujie, Miho; Soga, Kouichi; Wakabayashi, Kazuyuki; Hoson, Takayuki
The Resist Wall space experiment aims to examine the role of the cortical microtubule-plasma membrane-cell wall continuum in plant resistance to the gravitational force, thereby clarifying the mechanism of gravity resistance. For this purpose, we will cultivate Arabidopsis mutants defective in the organization of cortical microtubules (tua6) or the synthesis of membrane sterols (hmg1), as well as the wild type, under microgravity and 1 g conditions in the European Modular Cultivation System on the International Space Station up to the reproductive stage, and compare growth and development phenotypes. We will also analyze cell wall properties and gene expression levels using the collected materials. However, the amounts of material collected will be severely limited, so an efficient procedure for this space experiment had to be developed. In the present study, we examined the possibility of analyzing various parameters successively using the identical material. On orbit, plant materials will be fixed with RNAlater solution, kept at 4 °C for several days and then frozen in a freezer at -20 °C. We first examined whether the cell wall extensibility of inflorescence stems can be measured after RNAlater fixation. The gradient of cell wall extensibility along inflorescence stems was detected in RNAlater-fixed materials as in methanol-killed ones. Sufficient amounts of RNA for gene expression analysis were also obtained from the materials after measurement of cell wall extensibility. Furthermore, the levels and composition of cell wall polysaccharides could be measured using the materials after extraction of RNA. These results show that we can analyze the physical and chemical properties of the cell wall as well as gene expression using the identical material obtained in the space experiments.
Kinetic solvers with adaptive mesh in phase space
NASA Astrophysics Data System (ADS)
Arslanbekov, Robert R.; Kolobov, Vladimir I.; Frolova, Anna A.
2013-12-01
An adaptive mesh in phase space (AMPS) methodology has been developed for solving multidimensional kinetic equations by the discrete velocity method. A Cartesian mesh for both configuration (r) and velocity (v) spaces is produced using a "tree of trees" (ToT) data structure. The r mesh is automatically generated around embedded boundaries, and is dynamically adapted to local solution properties. The v mesh is created on-the-fly in each r cell. Mappings between neighboring v-space trees are implemented for the advection operator in r space. We have developed algorithms for solving the full Boltzmann and linear Boltzmann equations with AMPS. Several recent innovations were used to calculate the discrete Boltzmann collision integral with a dynamically adaptive v mesh: the importance sampling, multipoint projection, and variance reduction methods. We have developed an efficient algorithm for calculating the linear Boltzmann collision integral for elastic and inelastic collisions of hot light particles in a Lorentz gas. Our AMPS technique has been demonstrated for simulations of hypersonic rarefied gas flows, ion and electron kinetics in weakly ionized plasma, radiation and light-particle transport through thin films, and electron streaming in semiconductors. We have shown that AMPS allows minimizing the number of cells in phase space to reduce the computational cost and memory usage for solving challenging kinetic problems.
A k-Space Method for Moderately Nonlinear Wave Propagation
Jing, Yun; Wang, Tianren; Clement, Greg T.
2013-01-01
A k-space method for moderately nonlinear wave propagation in absorptive media is presented. The Westervelt equation is first transferred into k-space via Fourier transformation, and is solved by a modified wave-vector time-domain scheme. The present approach is not limited to forward propagation or the parabolic approximation. One- and two-dimensional problems are investigated to verify the method by comparing results to analytic solutions and the finite-difference time-domain (FDTD) method. It is found that to obtain accurate results in homogeneous media, the grid size can be as little as two points per wavelength, and for a moderately nonlinear problem, the Courant–Friedrichs–Lewy number can be as large as 0.4. Through comparisons with the conventional FDTD method, the k-space method for nonlinear wave propagation is shown here to be computationally more efficient and accurate. The k-space method is then employed to study three-dimensional nonlinear wave propagation through the skull, which shows that relatively accurate focusing can be achieved in the brain at a high frequency by sending a low frequency from the transducer. Finally, an implementation of the k-space method using a single graphics processing unit shows that it requires about one-seventh the computation time of a single-core CPU calculation. PMID:22899114
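The two-points-per-wavelength accuracy claimed for the k-space approach comes from evaluating spatial derivatives spectrally: in k-space, differentiation is multiplication by ik, exact for band-limited fields. A one-function illustration of the spatial operator (the full method adds the modified wave-vector time-stepping scheme on top of this):

```python
import numpy as np

def spectral_d2(f, dx):
    """Second spatial derivative via FFT: multiply by -(k^2) in k-space.
    Exact to machine precision for band-limited, periodic f."""
    k = 2.0 * np.pi * np.fft.fftfreq(len(f), dx)
    return np.fft.ifft(-(k ** 2) * np.fft.fft(f)).real

n = 32
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
err = np.max(np.abs(spectral_d2(np.sin(3.0 * x), x[1] - x[0])
                    + 9.0 * np.sin(3.0 * x)))
# err sits at machine-precision level even on this coarse grid
```

A second-order finite-difference stencil on the same grid would incur a few-percent error, which is why FDTD needs far denser sampling than the k-space scheme for comparable accuracy.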