Sample records for optimization problems approx

  1. Signal Analysis Algorithms for Optimized Fitting of Nonresonant Laser Induced Thermal Acoustics Damped Sinusoids

    NASA Technical Reports Server (NTRS)

    Balla, R. Jeffrey; Miller, Corey A.

    2008-01-01

    This study seeks a numerical algorithm which optimizes frequency precision for the damped sinusoids generated by the nonresonant LITA technique. It compares computed frequencies, frequency errors, and fit errors obtained using five primary signal analysis methods. Using variations on different algorithms within each primary method, results from 73 fits are presented. Best results are obtained using an AutoRegressive method. Compared to previous results using Prony's method, single-shot waveform frequencies are reduced by approx. 0.4% and frequency errors are reduced by a factor of approx. 20 at 303 K, to approx. 0.1%. We explore the advantages of high waveform sample rates and the potential for measurements in low-density gases.
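
    As a minimal sketch of the autoregressive idea behind such fits (not the paper's algorithm), the frequency of a single damped sinusoid can be recovered by fitting a second-order linear-prediction model and reading the frequency off its poles; the sample rate, frequency, decay time, and noise level below are invented for illustration.

      import numpy as np

      # A damped sinusoid obeys x[n] = a1*x[n-1] + a2*x[n-2]; fit (a1, a2)
      # by least squares, then extract the frequency from the model's poles.
      fs = 1.0e9                                  # assumed sample rate, Hz
      t = np.arange(512) / fs
      f_true, tau = 25.0e6, 2.0e-6                # assumed frequency, decay
      rng = np.random.default_rng(0)
      x = np.exp(-t / tau) * np.cos(2 * np.pi * f_true * t)
      x = x + 0.01 * rng.standard_normal(x.size)

      A = np.column_stack([x[1:-1], x[:-2]])      # x[n-1] and x[n-2] columns
      a1, a2 = np.linalg.lstsq(A, x[2:], rcond=None)[0]
      poles = np.roots([1.0, -a1, -a2])           # z^2 - a1*z - a2 = 0
      f_est = abs(np.angle(poles[0])) * fs / (2 * np.pi)
      print(f"estimated {f_est / 1e6:.3f} MHz vs true {f_true / 1e6:.3f} MHz")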

  2. A New Algorithm to Optimize Maximal Information Coefficient

    PubMed Central

    Luo, Feng; Yuan, Zheming

    2016-01-01

    The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses a chi-square test to terminate grid optimization and thereby removes the maximal-grid-size restriction of the original ApproxMaxMI algorithm. Computational experiments show that ChiMIC maintains the same MIC values for noiseless functional relationships but gives much smaller MIC values for independent variables. For noisy functional relationships, ChiMIC reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships better, and the statistical power of MIC calculated by ChiMIC is higher than that of ApproxMaxMI. Moreover, the computational cost of ChiMIC is much lower than that of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by the MIC values from ChiMIC. PMID:27333001
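
    A toy version of the chi-square stopping idea (our simplification, not the published ChiMIC procedure): accept a finer grid only if its cell counts differ significantly from what the coarser grid already predicts. The helper refine_justified and its uniform-split null model are illustrative assumptions.

      import numpy as np
      from scipy.stats import chi2

      def refine_justified(counts_coarse, counts_fine, alpha=0.05):
          # Null model (assumed): each coarse count splits evenly over its
          # two subcells, so refinement adds no information.
          expected = np.repeat(counts_coarse / 2.0, 2)
          stat = np.sum((counts_fine - expected) ** 2
                        / np.maximum(expected, 1e-9))
          dof = counts_fine.size - counts_coarse.size
          return stat > chi2.ppf(1.0 - alpha, dof)   # True -> keep refining

      rng = np.random.default_rng(1)
      data = rng.uniform(0.0, 1.0, 200)            # independent, no structure
      coarse, _ = np.histogram(data, bins=4, range=(0, 1))
      fine, _ = np.histogram(data, bins=8, range=(0, 1))
      print(refine_justified(coarse.astype(float), fine.astype(float)))

    For structureless data the test usually returns False, so the grid search stops early instead of growing to the maximal grid size.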

  3. Integration of Reference Frames Using VLBI

    NASA Technical Reports Server (NTRS)

    Ma, Chopo; Smith, David E. (Technical Monitor)

    2001-01-01

    Very Long Baseline Interferometry (VLBI) has the unique potential to integrate the terrestrial and celestial reference frames through simultaneous estimation of positions and velocities of approx. 40 active VLBI stations and a similar number of stations/sites with sufficient historical data, the position and position stability of approx. 150 well-observed extragalactic radio sources and another approx. 500 sources distributed fairly uniformly on the sky, and the time series of the five parameters that specify the relative orientation of the two frames. The full realization of this potential is limited by a number of factors including the temporal and spatial distribution of the stations, uneven distribution of observations over the sources and the sky, variations in source structure, modeling of the solid/fluid Earth and troposphere, logistical restrictions on the daily observing network size, and differing strategies for optimizing analysis for TRF, for CRF and for EOP. The current status of separately optimized and integrated VLBI analysis will be discussed.

  4. Hydroxy propyl cellulose capped silver nanoparticles produced by simple dialysis process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Francis, L.; Balakrishnan, A.; Sanosh, K.P.

    2010-08-15

    Silver (Ag) nanoparticles (approx. 6 nm) were synthesized using a novel dialysis process. Silver nitrate was used as the starting precursor, ethylene glycol as the solvent, and hydroxy propyl cellulose (HPC) was introduced as a capping agent. Different batches of reaction mixtures were prepared with different concentrations of silver nitrate (AgNO3). After reduction and aging, these solutions were subjected to ultraviolet-visible spectroscopy (UVS). The optimized solution, containing 250 mg AgNO3, revealed a strong plasmon resonance peak at approx. 410 nm in the spectrum, indicating a good colloidal state of the Ag nanoparticles in the diluted solution. The optimized solution was subjected to a dialysis process to remove any unreacted solvent. UVS of the optimized solution after dialysis showed the plasmon resonance peak shifting to approx. 440 nm, indicating the reduction of Ag ions into zero-valent Ag. This solution was dried at 80 °C and the resultant HPC-capped Ag (HPC/Ag) nanoparticles were studied using transmission electron microscopy (TEM) for particle size and morphology. Particle size distribution (PSD) analysis of these nanoparticles showed a skewed distribution with particle sizes ranging from 3 to 18 nm. The nanoparticles were characterized for phase composition using X-ray diffractometry (XRD) and Fourier transform infrared spectroscopy (FT-IR).

  5. Spatially Resolving Ocean Color and Sediment Dispersion in River Plumes, Coastal Systems, and Continental Shelf Waters

    NASA Technical Reports Server (NTRS)

    Aurin, Dirk Alexander; Mannino, Antonio; Franz, Bryan

    2013-01-01

    Satellite remote sensing of ocean color in dynamic coastal, inland, and nearshore waters is impeded by high variability in optical constituents, demands specialized atmospheric correction, and is limited by instrument sensitivity. To accurately detect dispersion of bio-optical properties, remote sensors require ample signal-to-noise ratio (SNR) to sense small variations in ocean color without saturating over bright pixels, an atmospheric correction that can accommodate significant water-leaving radiance in the near infrared (NIR), and spatial and temporal resolution that coincides with the scales of variability in the environment. Several current and historic space-borne sensors have met these requirements with success in the open ocean, but are not optimized for highly red-reflective and heterogeneous waters such as those found near river outflows or in the presence of sediment resuspension. Here we apply analytical approaches for determining optimal spatial resolution, dominant spatial scales of variability ("patches"), and the proportion of patch variability that can be resolved from four river plumes around the world between 2008 and 2011. An offshore region in the Sargasso Sea is analyzed for comparison. A method is presented for processing Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua and Terra imagery including cloud detection, stray light masking, faulty detector avoidance, and dynamic aerosol correction using short-wave- and near-infrared wavebands in extremely turbid regions which pose distinct optical and technical challenges. Results show that a pixel size of approx. 520 m or smaller is generally required to resolve spatial heterogeneity in ocean color and total suspended materials in river plumes. Optimal pixel size increases with distance from shore to approx. 630 m in nearshore regions, approx. 750 m on the continental shelf, and approx. 1350 m in the open ocean. Greater than 90% of the optical variability within plume regions is resolvable at 500 m resolution, and small but significant differences were found between peak and nadir river flow periods in terms of optimal resolution and resolvable proportion of variability.
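
    A hedged sketch of the "resolvable proportion of variability" notion (our construction, not the paper's method): block-average a high-resolution field to coarser pixels and measure how much of the original spatial variance survives. The synthetic plume field below is an assumption.

      import numpy as np

      def resolved_fraction(field, block):
          # Fraction of variance retained after block x block averaging.
          ny, nx = (s // block * block for s in field.shape)
          f = field[:ny, :nx].reshape(ny // block, block, nx // block, block)
          return f.mean(axis=(1, 3)).var() / field.var()

      rng = np.random.default_rng(2)
      y, x = np.mgrid[0:512, 0:512]
      # Synthetic "plume": a smooth large-scale feature plus pixel noise.
      field = np.exp(-((x - 150) ** 2 + (y - 250) ** 2) / (2 * 80.0 ** 2))
      field = field + 0.1 * rng.standard_normal(field.shape)

      for block in (1, 2, 4, 8, 16):
          print(block, round(resolved_fraction(field, block), 3))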

  6. Optimization of the Orbiting Wide-Angle Light Collectors (OWL) Mission for Charged-Particle and Neutrino Astronomy

    NASA Technical Reports Server (NTRS)

    Krizmanic, John F.; Mitchell, John W.; Streitmatter, Robert E.

    2013-01-01

    OWL [1] uses the Earth's atmosphere as a vast calorimeter to fully enable the emerging field of charged-particle astronomy with high-statistics measurements of ultra-high-energy cosmic rays (UHECR) and a search for sources of UHE neutrinos and photons. Confirmation of the Greisen-Zatsepin-Kuzmin (GZK) suppression above approx. 4 x 10^19 eV suggests that most UHECR originate in astrophysical objects. Higher-energy particles must come from sources within about 100 Mpc and are deflected by approx. 1 degree by predicted intergalactic/galactic magnetic fields. The Pierre Auger Array, the Telescope Array, and the future JEM-EUSO ISS mission will open charged-particle astronomy, but much greater exposure will be required to fully identify and measure the spectra of individual sources. OWL uses two large telescopes with 3 m optical apertures and 45 degree FOV in near-equatorial orbits. Simulations of a five-year OWL mission indicate approx. 10^6 km^2 sr yr of exposure with full aperture at approx. 6 x 10^19 eV. Observations at different altitudes and spacecraft separations optimize sensitivity to UHECRs and neutrinos. OWL's stereo event reconstruction is nearly independent of track inclination and very tolerant of atmospheric conditions. An optional monocular mode gives increased reliability and can increase the instantaneous aperture. OWL can fully reconstruct horizontal and upward-moving showers and so has high sensitivity to UHE neutrinos. New capabilities in inflatable-structure optics and silicon photomultipliers can greatly increase photon sensitivity, reducing the energy threshold for neutrino detection or increasing the viewed area using a higher orbit. Design trades between the original and optimized OWL missions and the enhanced science capabilities are described.

  7. Optimization of peptide nucleic acid fluorescence in situ hybridization (PNA-FISH) for the detection of bacteria: The effect of pH, dextran sulfate and probe concentration.

    PubMed

    Rocha, Rui; Santos, Rita S; Madureira, Pedro; Almeida, Carina; Azevedo, Nuno F

    2016-05-20

    Fluorescence in situ hybridization (FISH) is a molecular technique widely used for the detection and characterization of microbial populations. FISH is affected by a wide variety of abiotic and biotic variables and by the way they interact with each other. This is reflected in the wide variability of FISH procedures found in the literature. The aim of this work is to systematically study the effects of pH, dextran sulfate, and probe concentration on the FISH protocol, using a general peptide nucleic acid (PNA) probe for the Eubacteria domain. For this, response surface methodology was used to optimize these 3 PNA-FISH parameters for Gram-negative (Escherichia coli and Pseudomonas fluorescens) and Gram-positive species (Listeria innocua, Staphylococcus epidermidis and Bacillus cereus). The results show that a probe concentration higher than 300 nM is favorable for both groups. Interestingly, a clear distinction between the two groups regarding the optimal pH and dextran sulfate concentration was found: a high pH (approx. 10) combined with a lower dextran sulfate concentration (approx. 2% [w/v]) for Gram-negative species, and a near-neutral pH (approx. 8) together with higher dextran sulfate concentrations (approx. 10% [w/v]) for Gram-positive species. This behavior seems to result from an interplay between pH and dextran sulfate and their ability to influence probe concentration and diffusion towards the rRNA target. This study shows that, for an optimum hybridization protocol, dextran sulfate and pH should be adjusted according to the target bacteria.
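
    A minimal sketch of the response-surface step (synthetic data; the toy response and its optimum are assumptions, not the paper's measurements): fit a quadratic surface to fluorescence responses over (pH, dextran sulfate) and solve for its stationary point.

      import numpy as np

      rng = np.random.default_rng(3)
      ph = rng.uniform(7, 11, 30)
      dex = rng.uniform(0, 12, 30)
      # Assumed toy response peaking near pH 10 and 2% dextran sulfate.
      y = -(ph - 10) ** 2 - 0.2 * (dex - 2) ** 2 + 0.3 * rng.standard_normal(30)

      # y ~ b0 + b1*ph + b2*dex + b3*ph^2 + b4*dex^2 + b5*ph*dex
      X = np.column_stack([np.ones_like(ph), ph, dex, ph**2, dex**2, ph * dex])
      b = np.linalg.lstsq(X, y, rcond=None)[0]

      # Stationary point: gradient of the fitted quadratic set to zero.
      H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
      opt = np.linalg.solve(H, -b[1:3])
      print(f"estimated optimum: pH {opt[0]:.2f}, dextran sulfate {opt[1]:.1f}%")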

  8. Comprehensive Benchmark Suite for Simulation of Particle Laden Flows Using the Discrete Element Method with Performance Profiles from the Multiphase Flow with Interface eXchanges (MFiX) Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Peiyuan; Brown, Timothy; Fullmer, William D.

    Five benchmark problems are developed and simulated with the computational fluid dynamics and discrete element model code MFiX. The benchmark problems span dilute and dense regimes, consider statistically homogeneous and inhomogeneous (both clusters and bubbles) particle concentrations, and cover a range of particle and fluid dynamic computational loads. Several variations of the benchmark problems are also discussed to extend the computational phase space to cover granular (particles only), bidisperse, and heat transfer cases. A weak scaling analysis is performed for each benchmark problem and, in most cases, the scalability of the code appears reasonable up to approx. 10^3 cores. Profiling of the benchmark problems indicates that the most substantial computational time is spent on particle-particle force calculations, drag force calculations, and interpolation between discrete particle and continuum fields. Hardware performance analysis was also carried out, showing significant Level 2 cache miss ratios and a rather low degree of vectorization. These results are intended to serve as a baseline for future developments of the code as well as a preliminary indicator of where to best focus performance optimizations.
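
    For reference, weak-scaling efficiency in an analysis like the one above is simply t(1 core) / t(N cores) with the per-core problem size held fixed; the timings below are invented placeholders, not MFiX measurements.

      cores = [1, 8, 64, 512, 1024]
      walltime = [100.0, 104.0, 112.0, 131.0, 160.0]   # hypothetical seconds

      for n, t in zip(cores, walltime):
          print(f"{n:5d} cores: weak-scaling efficiency {walltime[0] / t:.1%}")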

  9. Synthesis and characterization of high-T(sub c) screen-printed Y-Ba-Cu-O films on alumina

    NASA Technical Reports Server (NTRS)

    Bansal, Narottam P.; Simons, Rainee N.; Farrell, D. E.

    1988-01-01

    Thick films of YBa2Cu3O(7-x) have been deposited on highly polished alumina substrates by the screen printing technique. To optimize the post-printing heat treatment, the films were baked at various temperatures for different lengths of time and oxygen-annealed at a lower temperature. The resulting films were characterized by electrical resistivity measurements, x-ray diffraction, and optical and scanning electron microscopy. Properties of the films were found to be highly sensitive to the post-printing thermal treatment. Films baked for 15 min at 1000 C in oxygen were hard, adherent, near single phase, and superconducting with Tc(onset) approx. 96 K, Tc(zero) approx. 66 K, and Delta Tc (10 to 90 percent) approx. 10 K.

  10. Reproducible Preparation of Au/TS-1 with High Reaction Rate for Gas Phase Epoxidation of Propylene

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, W.S.; Stach, E.; Akatay, M.C.

    2012-03-01

    A refined and reliable synthesis procedure for Au/TS-1 (Si/Ti molar ratio approx. 100) with a high reaction rate for the direct gas-phase epoxidation of propylene has been developed by studying the effects of the pH of the gold slurry solution, the mixing time, and the preparation temperature for deposition precipitation (DP) of Au on TS-1 supports. Au/TS-1 catalysts prepared at optimal DP conditions (pH approx. 7.3, mixing for 9.5 h, room temperature) showed an average PO rate of approx. 160 g of PO per hour per kg of catalyst at 200 C and 1 atm. Reproducibility better than ±10% was demonstrated by nine independent samples prepared under the same conditions. These are the highest rates yet reported at 200 C. No visible gold particles were observed by HRTEM analysis in the fresh Au/TS-1 with gold loading up to approx. 0.1 wt%, indicating that the gold species were smaller than 1 nm. Additionally, the rate per gram of Au and the catalyst stability increased as the Au loading decreased, giving a maximum value of 500 g of PO per hour per g of Au, and Si/Ti molar ratios of approx. 100 gave the highest rates.

  11. Imprint of DES superstructures on the cosmic microwave background

    DOE PAGES

    Kovács, A.; Sánchez, C.; García-Bellido, J.; ...

    2016-11-17

    Here, small temperature anisotropies in the Cosmic Microwave Background can be sourced by density perturbations via the late-time integrated Sachs-Wolfe effect. Large voids and superclusters are excellent environments to make a localized measurement of this tiny imprint. In some cases excess signals have been reported. We probed these claims with an independent data set, using the first year data of the Dark Energy Survey in a different footprint, and using a different super-structure finding strategy. We identified 52 large voids and 102 superclusters at redshifts $0.2 < z < 0.65$. We used the Jubilee simulation to a priori evaluate the optimal ISW measurement configuration for our compensated top-hat filtering technique, and then performed a stacking measurement of the CMB temperature field based on the DES data. For optimal configurations, we detected a cumulative cold imprint of voids with $\Delta T_f \approx -5.0 \pm 3.7~\mu K$ and a hot imprint of superclusters $\Delta T_f \approx 5.1 \pm 3.2~\mu K$; this is $\sim 1.2\sigma$ higher than the expected $|\Delta T_f| \approx 0.6~\mu K$ imprint of such super-structures in $\Lambda$CDM. If we instead use an a posteriori selected filter size ($R/R_v = 0.6$), we can find a temperature decrement as large as $\Delta T_f \approx -9.8 \pm 4.7~\mu K$ for voids, which is $\sim 2\sigma$ above $\Lambda$CDM expectations and is comparable to previous measurements made using SDSS super-structure data.
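
    A hedged sketch of the compensated top-hat filter used in such stacking analyses: average the map within radius R and subtract the average over the equal-area ring out to sqrt(2)*R, so uniform backgrounds cancel. The synthetic map and injected cold spot below are assumptions, not DES or CMB data.

      import numpy as np

      def cth_amplitude(tmap, cx, cy, R):
          ny, nx = tmap.shape
          yy, xx = np.mgrid[0:ny, 0:nx]
          r = np.hypot(xx - cx, yy - cy)
          inner = tmap[r < R].mean()
          ring = tmap[(r >= R) & (r < np.sqrt(2) * R)].mean()
          return inner - ring                      # filtered Delta T_f

      rng = np.random.default_rng(4)
      tmap = 20.0 * rng.standard_normal((256, 256))          # ~20 uK noise
      yy, xx = np.mgrid[0:256, 0:256]
      tmap -= 8.0 * np.exp(-((xx - 128) ** 2 + (yy - 128) ** 2)
                           / (2 * 20.0 ** 2))                # injected cold spot
      print(f"Delta T_f = {cth_amplitude(tmap, 128, 128, R=24):.2f} uK")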

  12. THE X-RAY DETECTABILITY OF ELECTRON BEAMS ESCAPING FROM THE SUN

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saint-Hilaire, Pascal; Krucker, Säm; Christe, Steven

    2009-05-01

    We study the detectability and characterization of electron beams as they leave their acceleration site in the low corona toward interplanetary space through their nonthermal X-ray bremsstrahlung emission. We demonstrate that the largest interplanetary electron beams (approx. greater than 10^35 electrons above 10 keV) can be detected in X-rays with current and future instrumentation, such as RHESSI or the X-Ray Telescope (XRT) onboard Hinode. We list optimal observing conditions and beam characteristics. Among other requirements, good imaging (as opposed to mere localization or detection in spatially integrated data) is needed for proper characterization, putting the requirement on the number of escaping electrons (above 10 keV) at approx. greater than 3 x 10^36 for RHESSI, approx. greater than 3 x 10^35 for Hinode/XRT, and approx. greater than 10^33 electrons for the FOXSI sounding rocket scheduled to fly in 2011. Moreover, we have found that simple modeling hints at the possibility that coronal soft X-ray jets could be the result of local heating by propagating electron beams.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, A; Barnard, J J; Cohen, R H

    The Heavy Ion Fusion Science Virtual National Laboratory (a collaboration of LBNL, LLNL, and PPPL) is using intense ion beams to heat thin foils to the 'warm dense matter' regime at approx. less than 1 eV, and is developing capabilities for studying target physics relevant to ion-driven inertial fusion energy. The need for rapid target heating led to the development of plasma-neutralized pulse compression, with current amplification factors exceeding 50 now routine on the Neutralized Drift Compression Experiment (NDCX). Construction of an improved platform, NDCX-II, has begun at LBNL with planned completion in 2012. Using refurbished induction cells from the Advanced Test Accelerator at LLNL, NDCX-II will compress an approx. 500 ns pulse of Li+ ions to approx. 1 ns while accelerating it to 3-4 MeV over approx. 15 m. Strong space charge forces are incorporated into the machine design at a fundamental level. We are using analysis, an interactive 1D PIC code (ASP) with optimizing capabilities and centroid tracking, and multi-dimensional Warp PIC simulations to develop the NDCX-II accelerator. This paper describes the computational models employed and the resulting physics design for the accelerator.

  14. Floating-point geometry: toward guaranteed geometric computations with approximate arithmetics

    NASA Astrophysics Data System (ADS)

    Bajard, Jean-Claude; Langlois, Philippe; Michelucci, Dominique; Morin, Géraldine; Revol, Nathalie

    2008-08-01

    Geometric computations can fail because of inconsistencies due to floating-point inaccuracy. For instance, the computed intersection point between two curves does not lie on the curves: this is unavoidable when the intersection point coordinates are not rational, and thus not representable in floating-point arithmetic. A popular heuristic approach tests equalities and nullities up to a tolerance ɛ. But transitivity of equality is lost: we can have A ≈ B and B ≈ C, but not A ≈ C (where A ≈ B means ||A - B|| < ɛ for two floating-point values A, B). Interval arithmetic is another, self-validated, alternative; the difficulty is to limit the growth of interval widths during computation. Unfortunately, interval arithmetic cannot decide equality or nullity, even in cases where they are decidable by other means. A new approach, developed in this paper, consists in modifying the geometric problems and algorithms to account for the undecidability of the equality test and the unavoidable inaccuracy. In particular, all curves come with a non-zero thickness, so two curves (generically) cut in a region with non-zero area, an inner and outer representation of which is computable. This last approach no longer assumes that an equality or nullity test is available. The question which arises is: which geometric problems can still be solved with this last approach, and which cannot? This paper begins with a description of some cases where every known arithmetic fails in practice. Then, for each arithmetic, some properties of the problems it can solve are given. We end this work by proposing the bases of a new approach which aims to fulfill the requirements of geometric computations.
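
    The loss of transitivity is easy to demonstrate; a minimal example with an assumed tolerance:

      eps = 1e-6
      approx = lambda a, b: abs(a - b) < eps   # "equal up to tolerance"

      A, B, C = 0.0, 0.9e-6, 1.8e-6
      print(approx(A, B), approx(B, C), approx(A, C))   # True True False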

  15. Numerical Nonlinear Robust Control with Applications to Humanoid Robots

    DTIC Science & Technology

    2015-07-01

    automatically. While optimization and optimal control theory have been widely applied in humanoid robot control, it is not without drawbacks. A blind... drawback of Galerkin-based approaches is the need to successively produce discrete forms, which is difficult to implement in practice. Related... universal function approximation ability, these approaches are not without drawbacks. In practice, while a single hidden layer neural network can

  16. Optimal trajectories for an aerospace plane. Part 1: Formulation, results, and analysis

    NASA Technical Reports Server (NTRS)

    Miele, Angelo; Lee, W. Y.; Wu, G. D.

    1990-01-01

    The optimization of the trajectories of an aerospace plane is discussed. This is a hypervelocity vehicle capable of achieving orbital speed, while taking off horizontally. The vehicle is propelled by four types of engines: turbojet engines for flight at subsonic speeds/low supersonic speeds; ramjet engines for flight at moderate supersonic speeds/low hypersonic speeds; scramjet engines for flight at hypersonic speeds; and rocket engines for flight at near-orbital speeds. A single-stage-to-orbit (SSTO) configuration is considered, and the transition from low supersonic speeds to orbital speeds is studied under the following assumptions: the turbojet portion of the trajectory has been completed; the aerospace plane is controlled via the angle of attack and the power setting; the aerodynamic model is the generic hypersonic aerodynamics model example (GHAME). Concerning the engine model, three options are considered: (EM1), a ramjet/scramjet combination in which the scramjet specific impulse tends to a nearly-constant value at large Mach numbers; (EM2), a ramjet/scramjet combination in which the scramjet specific impulse decreases monotonically at large Mach numbers; and (EM3), a ramjet/scramjet/rocket combination in which, owing to stagnation temperature limitations, the scramjet operates only at M approx. less than 15; at higher Mach numbers, the scramjet is shut off and the aerospace plane is driven only by the rocket engines. Under the above assumptions, four optimization problems are solved using the sequential gradient-restoration algorithm for optimal control problems: (P1) minimization of the weight of fuel consumed; (P2) minimization of the peak dynamic pressure; (P3) minimization of the peak heating rate; and (P4) minimization of the peak tangential acceleration.

  17. Reconstructing the Sky Location of Gravitational-Wave Detected Compact Binary Systems: Methodology for Testing and Comparison

    NASA Technical Reports Server (NTRS)

    Sidery, T.; Aylott, B.; Christensen, N.; Farr, B.; Farr, W.; Feroz, F.; Gair, J.; Grover, K.; Graff, P.; Hanna, C.

    2014-01-01

    The problem of reconstructing the sky position of compact binary coalescences detected via gravitational waves is a central one for future observations with the ground-based network of gravitational-wave laser interferometers, such as Advanced LIGO and Advanced Virgo. Different techniques for sky localization have been independently developed. They can be divided into two broad categories: fully coherent Bayesian techniques, which are high latency and aimed at in-depth studies of all the parameters of a source, including sky position, and "triangulation-based" techniques, which exploit the data products from the search stage of the analysis to provide an almost real-time approximation of the posterior probability density function of the sky location of a detection candidate. These techniques have previously been applied to data collected during the last science runs of gravitational-wave detectors operating in the so-called initial configuration. Here, we develop and analyze methods for assessing the self-consistency of parameter estimation methods and for carrying out fair comparisons between different algorithms, addressing issues of efficiency and optimality. These methods are general and can be applied to parameter estimation problems other than sky localization. We apply these methods to two existing sky localization techniques representing the two above-mentioned categories, using a set of simulated inspiral-only signals from compact binary systems with total mass equal to or less than 20 solar masses and nonspinning components. We compare the relative advantages and costs of the two techniques and show that sky location uncertainties are on average a factor of approx. 20 smaller for fully coherent techniques than for the specific variant of the triangulation-based technique used during the last science runs, at the expense of a factor of approx. 1000 longer processing time.
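
    A hedged sketch of the self-consistency check described above (a "P-P plot" test): over many simulated events, the true parameter should land inside the p% credible region in p% of cases. Gaussian toy posteriors stand in for real sky-localization posteriors; all numbers are invented.

      import numpy as np
      from scipy.stats import norm

      rng = np.random.default_rng(5)
      truths = rng.standard_normal(500)
      estimates = truths + rng.standard_normal(500)   # toy posterior centers

      # Credible level at which each truth sits for a N(estimate, 1) posterior;
      # a self-consistent pipeline makes these uniform on [0, 1].
      levels = 2 * np.abs(norm.cdf(truths, loc=estimates, scale=1.0) - 0.5)
      for p in (0.1, 0.5, 0.9):
          print(f"within {p:.0%} credible region: {(levels < p).mean():.3f}")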

  18. Windows on the axion [quantum chromodynamics (QCD)]

    NASA Technical Reports Server (NTRS)

    Turner, Michael S.

    1989-01-01

    Peccei-Quinn symmetry with its attendant axion is a most compelling, and perhaps the most minimal, extension of the standard model, as it provides a very elegant solution to the nagging strong CP problem associated with the theta vacuum structure of QCD. However, particle physics gives little guidance as to the axion mass; a priori, the plausible values span the range 10^-12 eV approx. less than m(a) approx. less than 10^6 eV, some 18 orders of magnitude. Laboratory experiments have excluded masses greater than 10^4 eV, leaving unprobed some 16 orders of magnitude. Axions have a host of interesting astrophysical and cosmological effects, including modifying the evolution of stars of all types (our sun, red giants, white dwarfs, and neutron stars), contributing significantly to the mass density of the Universe today, and producing detectable line radiation through the decays of relic axions. Consideration of these effects has probed 14 orders of magnitude in axion mass and has left open only two windows for further exploration: 10^-6 eV approx. less than m(a) approx. less than 10^-3 eV and 1 eV approx. less than m(a) approx. less than 5 eV (hadronic axions only). Both windows are accessible to experiment, and a variety of very interesting experiments, all of which involve heavenly axions, are being planned or are underway.

  19. Spatiotemporal Variability and Contribution of Different Aerosol Types to the Aerosol Optical Depth over the Eastern Mediterranean

    NASA Technical Reports Server (NTRS)

    Georgoulias, Aristeidis K.; Alexandri, Georgia; Kourtidis, Konstantinos A.; Lelieveld, Jos; Zanis, Prodromos; Poeschl, Ulrich; Levy, Robert; Amiridis, Vassilis; Marinou, Eleni; Tsikerdekis, Athanasios

    2016-01-01

    This study characterizes the spatiotemporal variability and relative contribution of different types of aerosols to the aerosol optical depth (AOD) over the Eastern Mediterranean as derived from the MODIS (Moderate Resolution Imaging Spectroradiometer) Terra (March 2000-December 2012) and Aqua (July 2002-December 2012) satellite instruments. For this purpose, a 0.1 deg × 0.1 deg gridded MODIS dataset was compiled and validated against sun photometric observations from the AErosol RObotic NETwork (AERONET). The high spatial resolution and long temporal coverage of the dataset allow for the determination of local hot spots like megacities, medium-sized cities, industrial zones and power plant complexes, seasonal variabilities, and decadal averages. The average AOD at 550 nm (AOD550) for the entire region is approx. 0.22 ± 0.19, with maximum values in summer and seasonal variabilities that can be attributed to precipitation, photochemical production of secondary organic aerosols, transport of pollution and smoke from biomass burning in central and eastern Europe, and transport of dust from the Sahara and the Middle East. The MODIS data were analyzed together with data from other satellite sensors, reanalysis projects, and a chemistry-aerosol-transport model using an optimized algorithm tailored for the region and capable of estimating the contribution of different aerosol types to the total AOD550. The spatial and temporal variability of anthropogenic, dust, and fine-mode natural aerosols over land and of anthropogenic, dust, and marine aerosols over the sea is examined. The relative contribution of the different aerosol types to the total AOD550 exhibits low/high seasonal variability over land/sea areas, respectively. Overall, anthropogenic aerosols, dust, and fine-mode natural aerosols account for approx. 51, approx. 34, and approx. 15% of the total AOD550 over land, while anthropogenic aerosols, dust, and marine aerosols account for approx. 40, approx. 34, and approx. 26% of the total AOD550 over the sea, based on MODIS Terra and Aqua observations.

  1. Anisotropic spectra of acoustic type turbulence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kuznetsov, E.; Krasnoselskikh, V.

    2008-06-15

    The problem of spectra for acoustic-type turbulence generated by shocks randomly distributed in space is considered. It is shown that for turbulence with weak anisotropy, such spectra have the same dependence in k-space as the Kadomtsev-Petviashvili spectrum: E(k) ~ k^-2. However, the frequency spectrum always falls as ~ omega^-2, independent of the anisotropy. In the strongly anisotropic case the energy distribution over wave vectors becomes anisotropic, forming jet-type spectra in the large-k region.

  2. Development of the Advanced Energetic Pair Telescope (AdEPT) for Medium-Energy Gamma-Ray Astronomy

    NASA Technical Reports Server (NTRS)

    Hunter, Stanley D.; Bloser, Peter F.; Dion, Michael P.; McConnell, Mark L.; deNolfo, Georgia A.; Son, Seunghee; Ryan, James M.; Stecker, Floyd W.

    2011-01-01

    Progress in high-energy gamma-ray science has been dramatic since the launch of INTEGRAL, AGILE and FERMI. These instruments, however, are not optimized for observations in the medium-energy (approx. 0.3 < E(gamma) < approx. 200 MeV) regime, where many astrophysical objects exhibit unique, transitory behavior such as spectral breaks, bursts, and flares. We outline some of the major science goals of a medium-energy mission. These science goals are best achieved with a combination of two telescopes, a Compton telescope and a pair telescope, optimized to provide significant improvements in angular resolution and sensitivity. In this paper we describe the design of the Advanced Energetic Pair Telescope (AdEPT), based on the Three-Dimensional Track Imager (3-DTI) detector. This technology achieves excellent medium-energy sensitivity, angular resolution near the kinematic limit, and gamma-ray polarization sensitivity through high-resolution 3-D electron tracking. We describe the performance of a 30 x 30 x 30 cm^3 prototype of the AdEPT instrument.

  3. Synthesis of perfluoroalkylene dianilines

    NASA Technical Reports Server (NTRS)

    Paciorek, K. L.; Ito, T. I.; Harris, D. H.; Beechan, C. M.; Nakaham, J. H.; Kratzer, R. H.

    1981-01-01

    The objective of this contract was to optimize and scale up the synthesis of 2,2-bis(4-aminophenyl)hexafluoropropane and 1,3-bis(4-aminophenyl)hexafluoropropane, as well as to explore avenues to other perfluoroalkyl-bridged dianilines. Routes other than the Friedel-Crafts reaction leading to 2,2-bis(4-aminophenyl)hexafluoropropane were investigated. The processes utilizing bisphenol-AF were all unsuccessful; reactions aimed at the production of 4-(hexafluoro-2-halo-isopropyl)aniline from the hydroxyl intermediate failed to yield the desired products. Tailoring the conditions of the Friedel-Crafts reaction of 4-(hexafluoro-2-hydroxyisopropyl)aniline, aniline, and aluminum chloride by using hydrochloride salts and selecting optimum reagent ratios, reaction times, and temperature resulted in an approx. 20% yield of pure crystallized 2,2-bis(4-aminophenyl)hexafluoropropane in 0.2 mole reaction batches. Yields up to approx. 40% were realized in small (approx. 0.01 mole) batches. The synthesis of 1,3-bis(4-aminophenyl)hexafluoropropane starting with perfluoroglutarimidine was reinvestigated. The yield of the 4-step reaction sequence giving 1,3-bis(4-acetamidophenyl)hexafluoropropane was raised to 44%. The yield of the subsequent hydrolysis process was improved by a factor of approx. 2. Approaches to prepare other perfluoroalkyl-bridged dianilines were unsuccessful. Reactions reported to proceed readily with trifluoromethyl substituents failed when longer-chain perfluoroalkyl groups were employed.

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Diaz Cruz, J. Lorenzo

    We suggest that dark matter can be identified with a stable composite fermion X0 that arises within holographic AdS/CFT models, where the Higgs boson emerges as a composite pseudo-Goldstone boson. The predicted properties of X0 satisfy the cosmological bounds, with m(X0) ≈ 4πf ≈ O(TeV). Thus, through a deeper understanding of the mechanism of electroweak symmetry breaking, a resolution of the dark matter enigma is found. Furthermore, by proposing a discrete structure for the Higgs vacuum, one can obtain a distinct approach to the cosmological constant problem.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guha, S.

    This report describes a research program intended to expand, enhance, and accelerate knowledge and capabilities for developing high-performance, two-terminal multijunction amorphous silicon (a-Si) alloy cells and modules with low manufacturing cost and high reliability. United Solar uses a spectrum-splitting, triple-junction cell structure. The top cell uses an amorphous silicon alloy of approx. 1.8 eV bandgap to absorb blue photons. The middle cell uses an amorphous silicon-germanium alloy (approx. 20% germanium) of approx. 1.6 eV bandgap to capture green photons. The bottom cell has approx. 40% germanium to reduce the bandgap to approx. 1.4 eV to capture red photons. The cells are deposited on a stainless-steel substrate with a predeposited silver/zinc oxide back reflector to facilitate light trapping. A thin antireflection coating is applied to the top of the cell to reduce reflection loss. The major research activities conducted under this program were: (1) fundamental studies to improve our understanding of materials and devices, including developing and analyzing a-Si alloy and a-SiGe alloy materials prepared near the threshold of the amorphous-to-microcrystalline transition and studying solar cells fabricated using these materials; (2) deposition of small-area cells using a radio-frequency technique to obtain higher deposition rates; (3) deposition of small-area cells using a modified very-high-frequency technique to obtain higher deposition rates; (4) large-area cell research to obtain the highest module efficiency; and (5) optimization of solar cells and modules fabricated using production parameters in a large-area reactor.

  6. Observational Definition of Future AGN Echo-Mapping Experiments

    NASA Technical Reports Server (NTRS)

    Collier, Stefan; Peterson, Bradley M.; Horne, Keith

    2001-01-01

    We describe numerical simulations we have begun in order to determine the observational requirements for future echo-mapping experiments. We focus on two particular problems: (1) determination of the structure and kinematics of the broad-line region through emission-line reverberation mapping, and (2) detection of interband continuum lags that may be used as a probe of the continuum source, presumably a temperature-stratified accretion disk. Our preliminary results suggest the broad-line region can be reverberation-mapped to good precision with spectra of signal-to-noise ratio per pixel S/N approx. 30, time resolution Delta-t approx. 0.1 day, and a duration of about 60 days (a factor of three larger than the longest time scale in the input models); data that meet these requirements do not yet exist. We also find that interband continuum lags of approx. greater than 0.5 days can be detected at approx. greater than 95% confidence with at least daily observations for about 6 weeks, or rather more easily and definitively with shorter programs undertaken with satellite-based observatories. The results of these simulations show that significant steps forward in multiwavelength monitoring will almost certainly require dedicated facilities.
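
    A minimal sketch of interband lag detection by cross-correlation, in the spirit of the continuum-lag simulations above (synthetic daily-sampled light curves; the red-noise driver and 2-day lag are assumptions):

      import numpy as np

      rng = np.random.default_rng(6)
      n, true_lag = 42, 2                       # ~6 weeks daily, 2-day lag
      driver = np.cumsum(rng.standard_normal(n + true_lag))   # red noise
      band1 = driver[true_lag:] + 0.1 * rng.standard_normal(n)
      band2 = driver[:n] + 0.1 * rng.standard_normal(n)       # lags band1

      def ccf(a, b, k):
          # Correlation of a[t] with b[t + k].
          if k >= 0:
              return np.corrcoef(a[:len(a) - k], b[k:])[0, 1]
          return np.corrcoef(a[-k:], b[:k])[0, 1]

      best = max(range(-5, 6), key=lambda k: ccf(band1, band2, k))
      print(f"recovered lag: {best} days (true: {true_lag})")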

  7. The Vetter-Sturtevant Shock Tube Problem in KULL

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ulitsky, M S

    2005-10-06

    The goal of the EZturb mix model in KULL is to predict the turbulent mixing process as it evolves from Rayleigh-Taylor, Richtmyer-Meshkov, or Kelvin-Helmholtz instabilities. In this report we focus on an example of the Richtmyer-Meshkov instability (which occurs when a shock hits an interface between fluids of different densities) with the additional complication of reshock. The experiment by Vetter & Sturtevant (VS) [1], involving a Mach 1.50 incident shock striking an air/SF6 interface, is a good one to model now that we understand how the model performs for the Benjamin shock tube [2] and a prototypical incompressible Rayleigh-Taylor problem [3]. The x-t diagram for the VS shock tube is quite complicated, since the transmitted shock hits the far wall at approx. 2 ms, reshocks the mixing zone slightly after 3 ms (which sets up a release wave that hits the wall at approx. 4 ms), and then the interface is hit by this expansion wave around 5 ms. Needless to say, this problem is much more difficult to model than the Benjamin shock tube.

  8. A dedicated cone-beam CT system for musculoskeletal extremities imaging: Design, optimization, and initial performance characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zbijewski, W.; De Jean, P.; Prakash, P.

    2011-08-15

    Purpose: This paper reports on the design and initial imaging performance of a dedicated cone-beam CT (CBCT) system for musculoskeletal (MSK) extremities. The system complements conventional CT and MR and offers a variety of potential clinical and logistical advantages that are likely to benefit diagnosis, treatment planning, and assessment of therapy response in MSK radiology, orthopaedic surgery, and rheumatology. Methods: The scanner design incorporated a host of clinical requirements (e.g., the ability to scan the weight-bearing knee in a natural stance) and was guided by theoretical and experimental analysis of image quality and dose. Such criteria identified the following basic scanner components and system configuration: a flat-panel detector (FPD, Varian 3030+, 0.194 mm pixels) and a low-power, fixed-anode x-ray source with 0.5 mm focal spot (SourceRay XRS-125-7K-P, 0.875 kW) mounted on a retractable C-arm allowing two scanning orientations with the capability for side entry, viz. a standing configuration for imaging of weight-bearing lower extremities and a sitting configuration for imaging of the tensioned upper extremity and unloaded lower extremity. Theoretical modeling employed cascaded systems analysis of the modulation transfer function (MTF) and detective quantum efficiency (DQE) computed as a function of system geometry, kVp and filtration, dose, source power, etc. Physical experimentation utilized an imaging bench simulating the scanner geometry for verification of theoretical results and investigation of other factors, such as antiscatter grid selection and 3D image quality in phantom and cadaver, including qualitative comparison to conventional CT. Results: Theoretical modeling and benchtop experimentation confirmed the basic suitability of the FPD and x-ray source mentioned above. Clinical requirements combined with analysis of MTF and DQE yielded the following system geometry: an approx. 55 cm source-to-detector distance; 1.3 magnification; a 20 cm diameter bore (20 x 20 x 20 cm^3 field of view); and a total acquisition arc of approx. 240 deg. The system MTF declines to 50% at approx. 1.3 mm^-1 and to 10% at approx. 2.7 mm^-1, consistent with sub-millimeter spatial resolution. Analysis of DQE suggested a nominal technique of 90 kVp (+0.3 mm Cu added filtration) to provide high imaging performance from approx. 500 projections at less than approx. 0.5 kW power, implying approx. 6.4 mGy (0.064 mSv) for low-dose protocols and approx. 15 mGy (0.15 mSv) for high-quality protocols. The experimental studies show improved image uniformity and contrast-to-noise ratio (without increase in dose) through incorporation of a custom 10:1 antiscatter grid. Cadaver images demonstrate exquisite bone detail, visualization of articular morphology, and soft-tissue visibility comparable to diagnostic CT (10-20 HU contrast resolution). Conclusions: The results indicate that the proposed system will deliver volumetric images of the extremities with soft-tissue contrast resolution comparable to diagnostic CT and improved spatial resolution at potentially reduced dose. Cascaded systems analysis provided a useful basis for system design and optimization without costly repeated experimentation. A combined process of design specification, image quality analysis, clinical feedback, and revision yielded a prototype that is now awaiting clinical pilot studies. Potential advantages of the proposed system include reduced space and cost, imaging of load-bearing extremities, and combined volumetric imaging with real-time fluoroscopy and digital radiography.
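
    A quick arithmetic check of the geometry quoted above (illustrative only): with 0.194 mm detector pixels and 1.3 magnification, the pixel footprint at the object and its Nyquist frequency are consistent with the sub-millimeter resolution claim.

      sdd_cm, mag, det_pixel_mm = 55.0, 1.3, 0.194
      sad_cm = sdd_cm / mag                    # source-to-axis distance
      obj_pixel_mm = det_pixel_mm / mag        # pixel footprint at isocenter
      nyquist_per_mm = 1.0 / (2.0 * obj_pixel_mm)
      print(f"SAD approx. {sad_cm:.1f} cm, object pixel {obj_pixel_mm:.3f} mm, "
            f"Nyquist approx. {nyquist_per_mm:.2f} mm^-1")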

  9. Beam transport results on the multi-beam MABE accelerator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, P.D.; Alexander, J.A.; Hasti, D.E.

    1985-10-01

    MABE is a multistage electron beam linear accelerator. The accelerator has been operated in single-beam (60 kA, 7 MeV) and multiple-beam configurations. This paper deals with the multiple-beam configuration, in which typically nine approx. 25 kA injected beams are transported through three accelerating gaps. Experimental results from the machine are discussed, including problems encountered and proposed solutions to those problems.

  10. Late-time cosmological phase transitions

    NASA Technical Reports Server (NTRS)

    Schramm, David N.

    1991-01-01

    It is shown that the potential galaxy formation and large-scale structure problems of objects existing at high redshifts (Z approx. greater than 5), structures existing on scales of 100 Mpc, velocity flows on such scales, and minimal microwave anisotropies (Delta T/T approx. less than 10^-5) can be solved if the seeds needed to generate structure form in a vacuum phase transition after decoupling. It is argued that the basic physics of such a phase transition is no more exotic than that utilized in the more traditional GUT-scale phase transitions, and that, just as in the GUT case, significant random Gaussian fluctuations and/or topological defects can form. Scale lengths of approx. 100 Mpc for large-scale structure as well as approx. 1 Mpc for galaxy formation occur naturally. Possible support for new physics that might be associated with such a late-time transition comes from the preliminary results of the SAGE solar neutrino experiment, implying neutrino flavor mixing with values similar to those required for a late-time transition. It is also noted that a see-saw model for the neutrino masses might imply a tau neutrino mass that is an ideal hot dark matter candidate. However, in general either hot or cold dark matter can be consistent with a late-time transition.

  11. Testable solution of the cosmological constant and coincidence problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Douglas J.; Barrow, John D.

    2011-02-15

    We present a new solution to the cosmological constant (CC) and coincidence problems in which the observed value of the CC, Λ, is linked to other observable properties of the Universe. This is achieved by promoting the CC from a parameter that must be specified to a field that can take many possible values. The observed value of Λ ≈ (9.3 Gyr)^-2 [≈ 10^-120 in Planck units] is determined by a new constraint equation which follows from the application of a causally restricted variation principle. When applied to our visible Universe, the model makes a testable prediction for the dimensionless spatial curvature of Ω_k0 = -0.0056 (ζ_b/0.5), where ζ_b ≈ 1/2 is a QCD parameter. Requiring that a classical history exist, our model determines the probability of observing a given Λ. The observed CC value, which we successfully predict, is typical within our model even before the effects of anthropic selection are included. When anthropic selection effects are accounted for, we find that the observed coincidence between t_Λ = Λ^(-1/2) and the age of the Universe, t_U, is a typical occurrence in our model. In contrast to multiverse explanations of the CC problems, our solution is independent of the choice of a prior weighting of different Λ values and does not rely on anthropic selection effects. Our model includes no unnatural small parameters and does not require the introduction of new dynamical scalar fields or modifications to general relativity, and it can be tested by astronomical observations in the near future.

  12. Topography of the 81P/Wild 2 Nucleus Derived from Stardust Stereoimages

    NASA Technical Reports Server (NTRS)

    Kirk, R. L.; Duxbury, T. C.; Horz, F.; Brownlee, D. E.; Newburn, R. L.; Tsou, P.

    2005-01-01

    On 2 January 2004, the Stardust spacecraft flew by the nucleus of comet 81P/Wild 2 with a closest approach distance of approx. 240 km. During the encounter, the Stardust Optical Navigation Camera (ONC) obtained 72 images of the nucleus with exposure times alternating between 10 ms (near-optimal for most of the nucleus surface) and 100 ms (used for navigation, and revealing additional details in the coma and dark portions of the surface). Phase angles varied from 72 deg to near zero to 103 deg during the encounter, allowing the entire sunlit portion of the surface to be imaged. As many as 20 of the images near closest approach are of sufficiently high resolution to be used in mapping the nucleus surface; of these, two pairs of short-exposure images were used to create the nucleus shape model and the derived products reported here. The best image resolution obtained was approx. 14 m/pixel, resulting in approx. 300 pixels across the nucleus. The Stardust Wild 2 dataset is therefore markedly superior from a stereomapping perspective to the Deep Space 1 MICAS images of comet Borrelly. The key subset of the latter (3 images) covered only about a quarter of the surface, at phase angles of approx. 50-60 deg and less than 50 x 160 pixels across the nucleus, yet it sufficed for groups at the USGS and DLR to produce digital elevation models (DEMs) and study the morphology and photometry of the nucleus in detail.

  13. Security of two quantum cryptography protocols using the same four qubit states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Branciard, Cyril; Gisin, Nicolas

    2005-09-15

    The first quantum cryptography protocol, proposed by Bennett and Brassard in 1984 (BB84), has been widely studied in recent years. This protocol uses four states (more precisely, two complementary bases) for the encoding of the classical bit. Recently, it has been noticed that by using the same four states, but a different encoding of information, one can define a protocol which is more robust in practical implementations, specifically when attenuated laser pulses are used instead of single-photon sources [V. Scarani et al., Phys. Rev. Lett. 92, 057901 (2004), referred to as the SARG04 protocol]. We present a detailed study of SARG04 in two different regimes. In the first part, we consider an implementation with a single-photon source: we derive bounds on the error rate Q for security against all possible attacks by the eavesdropper. The lower and upper bounds obtained for SARG04 (Q approx. less than 10.95% and Q approx. greater than 14.9%, respectively) are close to those obtained for BB84 (Q approx. less than 12.4% and Q approx. greater than 14.6%, respectively). In the second part, we consider a realistic source consisting of an attenuated laser and improve on previous analysis by allowing Alice to optimize the mean number of photons as a function of the distance. The SARG04 protocol is found to perform better than BB84, both in secret-key rate and in maximal achievable distance, for a wide class of Eve's attacks.
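
    A heavily hedged toy of the mean-photon-number optimization mentioned above (our back-of-envelope model, not the paper's security analysis): maximize a crude key-rate proxy, detections minus a multi-photon penalty for photon-number-splitting attacks, as the distance varies.

      import numpy as np

      alpha_db_km = 0.25                        # assumed fiber loss
      for L in (20, 50, 80):                    # km
          t = 10 ** (-alpha_db_km * L / 10)     # channel transmittance
          mu = np.linspace(0.01, 1.0, 500)
          rate = mu * t - 0.5 * mu ** 2         # toy proxy: gain - multiphoton
          print(f"L = {L:3d} km: optimal mu approx. {mu[np.argmax(rate)]:.3f}")

    The toy reproduces the qualitative point only: the optimal mean photon number decreases with distance, which is why Alice should tune it per link.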

  14. Radiation Dose Assessments of Solar Particle Events with Spectral Representation at High Energies for the Improvement of Radiation Protection

    NASA Technical Reports Server (NTRS)

    Kim, Myung-Hee; Atwell, William; Tylka, Allan J.; Dietrich, William F.; Cucinotta, Francis A.

    2010-01-01

    For radiation dose assessments of major solar particle events (SPEs), spectral functional forms of SPEs have been made by fitting available satellite measurements up to approx. 100 MeV. However, very high-energy protons (above 500 MeV) have been observed with neutron monitors (NMs) in ground level enhancements (GLEs), which generally present the most severe radiation hazards to astronauts. Due to technical difficulties in converting NM data into absolutely normalized fluence measurements, those functional forms were made with little or no use of NM data. A new analysis of NM data has found that a double power law in rigidity (the so-called Band function) generally provides a satisfactory representation of the combined satellite and NM data from approx. 10 MeV to approx. 10 GeV in major SPEs (Tylka & Dietrich 2009). We use the Band function fits to re-assess human exposures from large SPEs. Using different spectral representations of large SPEs, variations in exposure levels were compared. The results can be applied to the development of improved approaches to radiation protection for astronauts, as well as to the optimization of mission planning and shielding for future space missions.
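
    For reference, a double power law in rigidity R with a smooth break (the "Band" form mentioned above) is commonly written as below; the symbols J_0, gamma_a, gamma_b, and R_0 follow the usual convention for this functional form and are our notation, not necessarily that of Tylka & Dietrich:

      \frac{dJ}{dR} =
        \begin{cases}
          J_0 \, R^{-\gamma_a} \, e^{-R/R_0},
            & R \le (\gamma_b - \gamma_a) R_0, \\
          J_0 \, R^{-\gamma_b}
            \left[ (\gamma_b - \gamma_a) R_0 \right]^{\gamma_b - \gamma_a}
            e^{-(\gamma_b - \gamma_a)},
            & R > (\gamma_b - \gamma_a) R_0.
        \end{cases}

    The prefactor of the high-rigidity branch is fixed so that the function and its first derivative are continuous at the break.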

  15. RNA-Cleaving DNA Enzymes with Altered Regio- or Enantioselectivity

    NASA Technical Reports Server (NTRS)

    Ordoukhanian, Phillip; Joyce, Gerald F.

    2002-01-01

    In vitro evolution methods were used to obtain DNA enzymes that cleave either a 2',5'-phosphodiester following a ribonucleotide or a 3',5'-phosphodiester following an L-ribonucleotide. Both enzymes can operate in an intermolecular reaction format with multiple turnover. The DNA enzyme that cleaves a 2',5'-phosphodiester exhibits a k(cat) of approx. 0.01/min and a catalytic efficiency, k(cat)/K(m), of approx. 10^5/M min. The enzyme that cleaves following an L-ribonucleotide is about 10-fold slower and has a catalytic efficiency of approx. 4 x 10^5/M min. Both enzymes require a divalent metal cation for their activity and have optimal catalytic rates at pH 7-8 and 35-50 C. In a comparison of each enzyme's activity on its corresponding substrate containing an unnatural ribonucleotide versus a substrate containing a standard ribonucleotide, the 2',5'-phosphodiester-cleaving DNA enzyme exhibited a regioselectivity of 6000-fold, while the L-ribonucleotide-cleaving DNA enzyme exhibited an enantioselectivity of 50-fold. These molecules demonstrate how in vitro evolution can be used to obtain regio- and enantioselective catalysts that exhibit specificity for nonnatural analogues of biological compounds.

  16. Intensive HST, RXTE, and ASCA Monitoring of NGC 3516: Evidence against Thermal Reprocessing

    NASA Technical Reports Server (NTRS)

    Edelson, Rick; Koratkar, Anuradha; Nandra, Kirpal; Goad, Michael; Peterson, Bradley M.; Collier, Stefan; Krolik, Julian; Malkan, Matthew; Maoz, Dan; O'Brien, Paul

    2000-01-01

    During 1998 April 13-16, the bright, strongly variable Seyfert 1 galaxy NGC 3516 was monitored almost continuously with HST for 10.3 hr at ultraviolet wavelengths and 2.8 days at optical wavelengths, and simultaneous RXTE and ASCA monitoring covered the same period. The X-ray fluxes were strongly variable, with the soft (0.5-2 keV) X-rays showing stronger variations (approx. 65% peak to peak) than the hard (2-10 keV) X-rays (approx. 50% peak to peak). The optical continuum showed much smaller but still highly significant variations: a slow approx. 2.5% rise followed by a faster approx. 3.5% decline. The short ultraviolet observation did not show significant variability. The soft and hard X-ray light curves were strongly correlated, with no evidence for a significant interband lag. Likewise, the optical continuum bands (3590 and 5510 A) were also strongly correlated, with no measurable lag, to 3-sigma limits of approx. less than 0.15 day. However, the optical and X-ray light curves showed very different behavior, and no significant correlation or simple relationship could be found. These results appear difficult to reconcile with previous reports of correlations between X-ray and optical variations and of measurable lags within the optical band for some other Seyfert 1 galaxies. These results also present serious problems for "reprocessing" models in which the X-ray source heats a stratified accretion disk, which then reemits in the optical/ultraviolet: the synchronous variations within the optical would suggest that the emitting region is approx. less than 0.3 light-day across, while the lack of correlation between X-ray and optical variations would indicate, in the context of this model, that any reprocessing region must be approx. greater than 1 light-day in size. It may be possible to resolve this conflict by invoking anisotropic emission or special geometry, but the most natural explanation appears to be that the bulk of the optical luminosity is generated by some mechanism other than reprocessing.

  17. Optimization of processing parameters on the controlled growth of ZnO nanorod arrays for the performance improvement of solid-state dye-sensitized solar cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Yi-Mu, E-mail: ymlee@nuu.edu.t; Yang, Hsi-Wen

    2011-03-15

    High-transparency and high quality ZnO nanorod arrays were grown on the ITO substrates by a two-step chemical bath deposition (CBD) method. The effects of processing parameters including reaction temperature (25-95 °C) and solution concentration (0.01-0.1 M) on the crystal growth, alignment, optical and electrical properties were systematically investigated. It has been found that these process parameters are critical for the growth, orientation and aspect ratio of the nanorod arrays, showing different structural and optical properties. Experimental results reveal that the hexagonal ZnO nanorod arrays prepared under a reaction temperature of 95 °C and solution concentration of 0.03 M possess the highest aspect ratio of ~21, and show well-aligned orientation and optimum optical properties. Moreover, the ZnO nanorod array based heterojunction electrodes and the solid-state dye-sensitized solar cells (SS-DSSCs) were fabricated with an improved optoelectrical performance. Graphical abstract: The ZnO nanorod arrays demonstrate good alignment, high aspect ratio (L/D ~ 21) and excellent optical transmittance by low-temperature chemical bath deposition (CBD). Research highlights: > Investigate the processing parameters of CBD on the growth of ZnO nanorod arrays. > Optimization of CBD process parameters: 0.03 M solution concentration and reaction temperature of 95 °C. > The prepared ZnO samples possess good alignment and high aspect ratio (L/D ~ 21). > An n-ZnO/p-NiO heterojunction: great rectifying behavior and low leakage current. > SS-DSSC has J_SC of 0.31 mA/cm^2 and V_OC of 590 mV, and an improved η of 0.059%.

  18. IDENTIFYING IONIZED REGIONS IN NOISY REDSHIFTED 21 cm DATA SETS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malloy, Matthew; Lidz, Adam, E-mail: mattma@sas.upenn.edu

    One of the most promising approaches for studying reionization is to use the redshifted 21 cm line. Early generations of redshifted 21 cm surveys will not, however, have the sensitivity to make detailed maps of the reionization process, and will instead focus on statistical measurements. Here, we show that it may nonetheless be possible to directly identify ionized regions in upcoming data sets by applying suitable filters to the noisy data. The locations of prominent minima in the filtered data correspond well with the positions of ionized regions. In particular, we corrupt semi-numeric simulations of the redshifted 21 cm signal during reionization with thermal noise at the level expected for a 500-antenna-tile version of the Murchison Widefield Array (MWA), and mimic the degrading effects of foreground cleaning. Using a matched filter technique, we find that the MWA should be able to directly identify ionized regions despite the large thermal noise. In a plausible fiducial model in which ~20% of the volume of the universe is neutral at z ~ 7, we find that a 500-tile MWA may directly identify as many as ~150 ionized regions in a 6 MHz portion of its survey volume and roughly determine the size of each of these regions. This may, in turn, allow interesting multi-wavelength follow-up observations, comparing galaxy properties inside and outside of ionized regions. We discuss how the optimal configuration of radio antenna tiles for detecting ionized regions with a matched filter technique differs from the optimal design for measuring power spectra. These considerations have potentially important implications for the design of future redshifted 21 cm surveys.
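
    The matched-filter idea can be illustrated with a toy one-dimensional version: correlate the noisy field with a zero-mean top-hat template of the expected bubble size, and read off prominent minima as candidate ionized regions. This is a minimal sketch, not the authors' pipeline; field values, noise level, and bubble sizes below are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(0)

      # Simulated 1D slice of a 21 cm brightness field (assumed units, mK):
      # ionized bubbles appear as troughs of zero signal in the mean background.
      n = 1024
      x = np.arange(n)
      signal = np.full(n, 25.0)
      for center, radius in [(200, 30), (600, 50)]:
          signal[np.abs(x - center) < radius] = 0.0   # ionized regions

      noisy = signal + rng.normal(0.0, 20.0, n)       # heavy thermal noise

      # Matched filter: a zero-mean top-hat of roughly the expected bubble size.
      # Making it zero-mean also mimics the loss of the mean from foreground cleaning.
      radius = 40
      template = np.ones(2 * radius + 1)
      template -= template.mean()
      filtered = np.convolve(noisy, template, mode="same")

      # Prominent minima in the filtered field mark candidate ionized regions.
      print("deepest minimum near pixel:", np.argmin(filtered))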

  19. Role of cellular FKBP52 protein in intracellular trafficking of recombinant adeno-associated virus 2 vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao Weihong; Wu Jianqing; Zhong Li

    2006-09-30

    We have reported that tyrosine-phosphorylated forms of a cellular protein, FKBP52, inhibit the second-strand DNA synthesis of adeno-associated virus 2 (AAV), leading to inefficient transgene expression from recombinant AAV vectors. To further explore the role of FKBP52 in AAV-mediated transduction, we established murine embryo fibroblast (MEF) cultures from FKBP52 wild-type (WT), heterozygous (HE), and knockout (KO) mice. Conventional AAV vectors failed to transduce WT MEFs efficiently, and the transduction efficiency was not significantly increased in HE or KO MEFs. AAV vectors failed to traffic efficiently to the nucleus in these cells. Treatment with hydroxyurea (HU) increased the transduction efficiency of conventional AAV vectors by ~25-fold in WT MEFs, but only by ~4-fold in KO MEFs. The use of self-complementary AAV (scAAV) vectors, which bypass the requirement of viral second-strand DNA synthesis, revealed that HU treatment increased the transduction efficiency ~23-fold in WT MEFs, but only ~4-fold in KO MEFs, indicating that the lack of HU treatment-mediated increase in KO MEFs was not due to failure of AAV to undergo viral second-strand DNA synthesis. Following HU treatment, ~59% of AAV genomes were present in the nuclear fraction from WT MEFs, but only ~28% in KO MEFs, indicating that the pathway by which HU treatment mediates nuclear transport of AAV was impaired in KO MEFs. When KO MEFs were stably transfected with an FKBP52 expression plasmid, the HU treatment-mediated increase in transduction efficiency was restored in these cells, which correlated directly with improved intracellular trafficking. Intact AAV particles were also shown to interact with FKBP52 as well as with dynein, a known cellular protein involved in AAV trafficking. These studies suggest that FKBP52, being a cellular chaperone protein, facilitates intracellular trafficking of AAV, which has implications for the optimal use of recombinant AAV vectors in human gene therapy.

  20. Operation of MRO's High Resolution Imaging Science Experiment (HiRISE): Maximizing Science Participation

    NASA Technical Reports Server (NTRS)

    Eliason, E.; Hansen, C. J.; McEwen, A.; Delamere, W. A.; Bridges, N.; Grant, J.; Gulich, V.; Herkenhoff, K.; Keszthelyi, L.; Kirk, R.

    2003-01-01

    Science return from the Mars Reconnaissance Orbiter (MRO) High Resolution Imaging Science Experiment (HiRISE) will be optimized by maximizing science participation in the experiment. MRO is expected to arrive at Mars in March 2006, and the primary science phase begins near the end of 2006 after aerobraking (6 months) and a transition phase. The primary science phase lasts for almost 2 Earth years, followed by a 2-year relay phase in which science observations by MRO are expected to continue. We expect to acquire approx. 10,000 images with HiRISE over the course of MRO's two-Earth-year mission. HiRISE can acquire images with a ground sampling dimension of as little as 30 cm (from a typical altitude of 300 km), in up to 3 colors, and many targets will be re-imaged for stereo. With such high spatial resolution, the percent coverage of Mars will be very limited in spite of the relatively high data rate of MRO (approx. 10x greater than MGS or Odyssey). We expect to cover approx. 1% of Mars at approx. 1 m/pixel or better, approx. 0.1% at full resolution, and approx. 0.05% in color or in stereo. Therefore, the placement of each HiRISE image must be carefully considered in order to maximize the scientific return from MRO. We believe that every observation should be the result of a mini research project based on pre-existing datasets. During operations, we will need a large database of carefully researched 'suggested' observations to select from. The HiRISE team is dedicated to involving the broad Mars community in creating this database, to the fullest degree that is both practical and legal. The philosophy of the team and the design of the ground data system are geared to enabling community involvement. A key aspect of this is that image data will be made available to the planetary community for science analysis as quickly as possible to encourage feedback and new ideas for targets.
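
    The quoted coverage fractions are simple area arithmetic. A sketch, assuming a hypothetical nominal image footprint of about 6 km x 24 km (actual HiRISE observation sizes vary with binning and commanded image length):

      # Rough HiRISE coverage arithmetic (footprint is an assumed placeholder).
      MARS_AREA_KM2 = 1.44e8      # total surface area of Mars
      N_IMAGES = 10_000
      FOOTPRINT_KM2 = 6.0 * 24.0  # assumed ~6 km swath x ~24 km along-track

      coverage = N_IMAGES * FOOTPRINT_KM2 / MARS_AREA_KM2
      print(f"fractional coverage: {coverage:.1%}")  # ~1%, the order quoted above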

  1. Hybrids of Solar Sail, Solar Electric, and Solar Thermal Propulsion for Solar-System Exploration

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian H.

    2012-01-01

    Solar sails have long been known to be an attractive method of propulsion in the inner solar system if the areal density of the overall spacecraft (S/C) could be reduced to approx. 10 g/sq m. It has also long been recognized that the figure (precise shape) of useful solar sails needs to be reasonably good, so that the reflected light goes mostly in the desired direction. If one could make large reflective surfaces with reasonable figure at an areal density of approx. 10 g/sq m, then several other attractive options emerge. One is to use such sails as solar concentrators for solar-electric propulsion. Current flight solar arrays have a specific output of approx. 100 W/kg at 1 Astronomical Unit (AU) from the Sun, and near-term advances promise to significantly increase this figure. A S/C with an areal density of 10 g/sq m could accelerate up to 29 km/s per year as a solar sail at 1 AU. Using the same sail as a concentrator at 30 AU, the same spacecraft could have up to approx. 45 W of electric power per kg of total S/C mass available for electric propulsion (EP). With an EP system that is 50% power-efficient, exhausting 10% of the initial S/C mass per year as propellant, the exhaust velocity is approx. 119 km/s and the acceleration is approx. 12 km/s per year. This hybrid thus opens attractive options for missions to the outer solar system, including sample-return missions. If solar-thermal propulsion were perfected, it would offer an attractive intermediate between solar sailing in the inner solar system and solar electric propulsion for the outer solar system. In the example above, neither the solar sail nor the solar electric system has a specific impulse that is near-optimal for the mission. Solar thermal propulsion, with an exhaust velocity of the order of 10 km/s, is better matched to many solar system exploration missions. This paper derives the basic relationships between these three propulsion options and gives examples of missions that might be enabled by such hybrids.
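
    The quoted performance figures follow from two elementary relations: the photon-pressure acceleration of a perfectly reflecting sail, a = 2F/(c*sigma) for solar flux F and areal density sigma, and the power-limited exhaust velocity v_e = sqrt(2*eta*P/mdot). A short numerical check of the abstract's numbers (constants only; no assumptions beyond those stated in the abstract):

      import numpy as np

      YEAR = 3.156e7            # seconds
      C = 2.998e8               # m/s
      FLUX_1AU = 1361.0         # W/m^2, solar constant

      # Solar sail at 1 AU: perfectly reflecting sail, areal density 10 g/m^2.
      sigma = 0.010             # kg/m^2
      accel_sail = 2 * FLUX_1AU / C / sigma
      print(f"sail delta-v per year at 1 AU: {accel_sail * YEAR / 1e3:.0f} km/s")  # ~29

      # Same sail as an EP concentrator at 30 AU: 45 W per kg of S/C mass,
      # 50%-efficient thruster exhausting 10% of S/C mass per year.
      P_per_kg = 45.0
      eta = 0.5
      mdot_per_kg = 0.10 / YEAR                      # kg/s per kg of S/C
      v_e = np.sqrt(2 * eta * P_per_kg / mdot_per_kg)
      accel_ep = mdot_per_kg * v_e
      print(f"exhaust velocity: {v_e / 1e3:.0f} km/s")                    # ~119
      print(f"EP delta-v per year: {accel_ep * YEAR / 1e3:.0f} km/s")     # ~12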

  2. Task-based modeling and optimization of a cone-beam CT scanner for musculoskeletal imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prakash, P.; Zbijewski, W.; Gang, G. J.

    2011-10-15

    Purpose: This work applies a cascaded systems model for cone-beam CT imaging performance to the design and optimization of a system for musculoskeletal extremity imaging. The model provides a quantitative guide to the selection of system geometry, source and detector components, acquisition techniques, and reconstruction parameters. Methods: The model is based on cascaded systems analysis of the 3D noise-power spectrum (NPS) and noise-equivalent quanta (NEQ) combined with factors of system geometry (magnification, focal spot size, and scatter-to-primary ratio) and anatomical background clutter. The model was extended to task-based analysis of detectability index (d') for tasks ranging in contrast and frequency content, and d' was computed as a function of system magnification, detector pixel size, focal spot size, kVp, dose, electronic noise, voxel size, and reconstruction filter to examine trade-offs and optima among such factors in multivariate analysis. The model was tested quantitatively versus the measured NPS and qualitatively in cadaver images as a function of kVp, dose, pixel size, and reconstruction filter under conditions corresponding to the proposed scanner. Results: The analysis quantified trade-offs among factors of spatial resolution, noise, and dose. System magnification (M) was a critical design parameter with strong effect on spatial resolution, dose, and x-ray scatter, and a fairly robust optimum was identified at M ~ 1.3 for the imaging tasks considered. The results suggested kVp selection in the range of ~65-90 kVp, the lower end (65 kVp) maximizing subject contrast and the upper end maximizing NEQ (90 kVp). The analysis quantified fairly intuitive results, e.g., ~0.1-0.2 mm pixel size (and a sharp reconstruction filter) optimal for high-frequency tasks (bone detail) compared to ~0.4 mm pixel size (and a smooth reconstruction filter) for low-frequency (soft-tissue) tasks. This result suggests a specific protocol for 1 x 1 (full-resolution) projection data acquisition followed by full-resolution reconstruction with a sharp filter for high-frequency tasks along with 2 x 2 binning reconstruction with a smooth filter for low-frequency tasks. The analysis guided selection of specific source and detector components implemented on the proposed scanner. The analysis also quantified the potential benefits and points of diminishing return in focal spot size, reduced electronic noise, finer detector pixels, and low-dose limits of detectability. Theoretical results agreed quantitatively with the measured NPS and qualitatively with evaluation of cadaver images by a musculoskeletal radiologist. Conclusions: A fairly comprehensive model for 3D imaging performance in cone-beam CT combines factors of quantum noise, system geometry, anatomical background, and imaging task. The analysis provided a valuable, quantitative guide to design, optimization, and technique selection for a musculoskeletal extremities imaging system under development.
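
    For a prewhitening observer, the task-based detectability index takes the form d'^2 = integral of |W_task(f)|^2 NEQ(f) df. The sketch below evaluates this numerically for radially symmetric tasks; the Gaussian NEQ model and task spectra are illustrative stand-ins, not the paper's fitted cascaded-systems model:

      import numpy as np

      # Prewhitening-observer detectability: d'^2 = int |W_task|^2 NEQ(f) 2*pi*f df
      # for radially symmetric 2D tasks. NEQ below is an assumed toy model.
      f = np.linspace(0.01, 2.0, 500)          # spatial frequency, mm^-1
      neq = 1e5 * np.exp(-(f / 1.2) ** 2)      # assumed NEQ(f), quanta/mm^2

      def dprime(task_radius_mm):
          # Simple task: a Gaussian blob, whose spectrum is also Gaussian.
          w = np.exp(-(np.pi * task_radius_mm * f) ** 2)
          return np.sqrt(np.trapz(w ** 2 * neq * 2 * np.pi * f, f))

      for r in (0.1, 0.5, 2.0):   # high- vs low-frequency tasks
          print(f"task radius {r} mm: d' = {dprime(r):.1f}")

    Sweeping such a calculation over pixel size, dose, or filter choice is what turns the NPS/NEQ cascade into the multivariate optimization described in the abstract.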

  3. Clinical dental application of Er:YAG laser for Class V cavity preparation.

    PubMed

    Matsumoto, K; Nakamura, Y; Mazeki, K; Kimura, Y

    1996-06-01

    Following the development of the ruby laser by Maiman in 1960, the Nd:YAG laser, the CO2 laser, the semiconductor laser, the He-Ne laser, excimer lasers, the argon laser, and finally the Er:YAG laser capable of cutting hard tissue easily were developed and have come to be applied clinically. In the present study, the Er:YAG laser emitting at a wavelength of 2.94 microns developed by Luxar was used for the clinical preparation of Class V cavities. Parameters of 8 Hz and approx. 250 mJ/pulse maximum output were used for irradiation. Sixty teeth of 40 patients were used in this clinical study. The Er:YAG laser used in this study was found to be a system suitable for clinical application. No adverse reaction was observed in any of the cases. Class V cavity preparation was performed without inducing any pain in 48/60 cases (80%). All of the 12 cases that complained of mild or severe intraoperative pain had previously complained of cervical dentin hypersensitivity during the preoperative examination. Cavity preparation was completed with this laser system in 58/60 cases (91.7%). No treatment-related clinical problems were observed during the follow-up period of approx. 30 days after cavity preparation and resin filling. Cavity preparation took between approx. 10 sec and 3 min and was related more or less to cavity size and depth. Overall clinical evaluation showed no safety problem, with a very good rating in 49 cases (81.7%).

  4. Consistency of ARESE II Cloud Absorption Estimates and Sampling Issues

    NASA Technical Reports Server (NTRS)

    Oreopoulos, L.; Marshak, A.; Cahalan, R. F.; Lau, William K. M. (Technical Monitor)

    2002-01-01

    Data from three cloudy days (March 3, 21, 29, 2000) of the ARM Enhanced Shortwave Experiment II (ARESE II) were analyzed. Grand averages of broadband absorptance among three sets of instruments were compared. Fractional solar absorptances were approx. 0.21-0.22, with the exception of March 3, when two sets of instruments gave values smaller by approx. 0.03-0.04. The robustness of these values was investigated by looking into possible sampling problems with the aid of 500 nm spectral fluxes. Grand averages of 500 nm apparent absorptance cover a wide range of values for these three days, namely from a large positive (approx. 0.011) average for March 3, to a small negative (approx. -0.03) for March 21, to near zero (approx. 0.01) for March 29. We present evidence suggesting that a large part of the discrepancies among the three days is due to the different nature of clouds and their non-uniform sampling. Hence, corrections to the grand average broadband absorptance values may be necessary. However, application of the known correction techniques may be precarious due to the sparsity of collocated flux measurements above and below the clouds. Our analysis leads to the conclusion that only March 29 fulfills all requirements for reliable estimates of cloud absorption, that is, the presence of thick, overcast, homogeneous clouds.

  5. Modeling Morphogenesis with Reaction-Diffusion Equations Using Galerkin Spectral Methods

    DTIC Science & Technology

    2002-05-06

    reaction-diffusion equation is a difficult problem in analysis that will not be addressed here. Errors will also arise from numerically approximating solutions to the ODEs. When comparing the approximate solution to actual reaction-diffusion systems found in nature, we must also take into account errors that...
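
    As a concrete illustration of the Galerkin spectral approach named in the title, the sketch below advances a 1D Fisher-KPP reaction-diffusion equation, u_t = D u_xx + r u(1 - u), in a Fourier basis: diffusion is handled implicitly and exactly in spectral space, and the nonlinear reaction term is evaluated pseudo-spectrally. All parameters are illustrative, not taken from the report:

      import numpy as np

      L, N = 50.0, 256
      D, r = 1.0, 1.0
      dt, steps = 0.01, 5000

      x = np.linspace(0.0, L, N, endpoint=False)
      k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # Fourier wavenumbers
      u = np.exp(-((x - L / 2) ** 2))                # initial bump

      for _ in range(steps):
          u_hat = np.fft.fft(u)
          f_hat = np.fft.fft(r * u * (1.0 - u))      # reaction, pseudo-spectral
          # Semi-implicit Euler: implicit diffusion, explicit reaction.
          u_hat = (u_hat + dt * f_hat) / (1.0 + dt * D * k ** 2)
          u = np.real(np.fft.ifft(u_hat))

      # The solution develops traveling fronts; locate a u = 0.5 crossing.
      print("front position ~", x[np.argmin(np.abs(u - 0.5))])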

  6. Selection of Hyperspectral Narrowbands (HNBs) and Composition of Hyperspectral Twoband Vegetation Indices (HVIs) for Biophysical Characterization and Discrimination of Crop Types Using Field Reflectance and Hyperion-EO-1 Data

    NASA Technical Reports Server (NTRS)

    Thenkabail, Prasad S.; Mariotto, Isabella; Gumma, Murali Krishna; Middleton, Elizabeth M.; Landis, David R.; Huemmrich, K. Fred

    2013-01-01

    The overarching goal of this study was to establish optimal hyperspectral vegetation indices (HVIs) and hyperspectral narrowbands (HNBs) that best characterize, classify, model, and map the world's main agricultural crops. The primary objectives were: (1) crop biophysical modeling through HNBs and HVIs, (2) accuracy assessment of crop type discrimination using Wilks' Lambda through a discriminant model, and (3) meta-analysis to select optimal HNBs and HVIs for applications related to agriculture. The study was conducted using two Earth Observing One (EO-1) Hyperion scenes and other surface hyperspectral data for the eight leading worldwide crops (wheat, corn, rice, barley, soybeans, pulses, cotton, and alfalfa) that occupy approx. 70% of all cropland areas globally. This study integrated data collected from multiple study areas in various agroecosystems of Africa, the Middle East, Central Asia, and India. Data were collected for the eight crop types in six distinct growth stages. These included (a) field spectroradiometer measurements (350-2500 nm) sampled at 1 nm discrete bandwidths, and (b) field biophysical variables (e.g., biomass, leaf area index) acquired to correspond with spectroradiometer measurements. The eight crops were described and classified using approx. 20 HNBs. The accuracy of classifying these 8 crops using HNBs was around 95%, which was approx. 25% better than the multi-spectral results possible from Landsat-7's Enhanced Thematic Mapper+ or EO-1's Advanced Land Imager. Further, based on this research and a meta-analysis involving over 100 papers, the study established 33 optimal HNBs and an equal number of specific two-band normalized difference HVIs to best model and study specific biophysical and biochemical quantities of major agricultural crops of the world. Redundant bands identified in this study will help overcome the Hughes Phenomenon (or "the curse of high dimensionality") in hyperspectral data for a particular application (e.g., biophysical characterization of crops). The findings of this study will make a significant contribution to future hyperspectral missions such as NASA's HyspIRI. Index Terms: Hyperion, field reflectance, imaging spectroscopy, HyspIRI, biophysical parameters, hyperspectral vegetation indices, hyperspectral narrowbands, broadbands.
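
    A two-band normalized-difference HVI has the same algebraic form as NDVI, HVI_ij = (rho_i - rho_j)/(rho_i + rho_j), evaluated on narrowband reflectances rho. A minimal sketch; the spectrum and the band pair below are placeholders, not the study's optimal HNBs:

      import numpy as np

      def hvi(reflectance, i, j):
          """Two-band normalized-difference hyperspectral vegetation index:
          HVI_ij = (rho_i - rho_j) / (rho_i + rho_j) for narrowbands i and j."""
          ri, rj = reflectance[..., i], reflectance[..., j]
          return (ri - rj) / (ri + rj)

      # Example: a spectrum sampled at 1 nm from 350-2500 nm (2151 bands).
      rng = np.random.default_rng(1)
      spectrum = rng.uniform(0.05, 0.5, size=2151)
      # e.g., an NIR (860 nm -> index 510) vs. red (650 nm -> index 300) pair
      print("HVI(860, 650 nm) =", hvi(spectrum, 510, 300))

    Scanning all band pairs of such an index against field-measured biophysical variables is how studies of this kind rank candidate HNB combinations.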

  7. Dynamics of Dust Particles Released from Oort Cloud Comets and Their Contribution to Radar Meteors

    NASA Technical Reports Server (NTRS)

    Nesvorny, David; Vokrouhlicky, David; Pokorny, Petr; Janches, Diego

    2012-01-01

    The Oort Cloud Comets (OCCs), exemplified by the Great Comet of 1997 (Hale-Bopp), are occasional visitors from the heatless periphery of the solar system. Previous works hypothesized that a great majority of OCCs must physically disrupt after one or two passages through the inner solar system, where strong thermal gradients can cause phase transitions or volatile pressure buildup. Here we study the fate of small debris particles produced by OCC disruptions to determine whether the imprints of a hypothetical population of OCC meteoroids can be found in the existing meteor radar data. We find that OCC particles with diameters D < or approx. 10 microns are blown out from the solar system by radiation pressure, while those with D > or approx. 1 mm have a very low Earth-impact probability. The intermediate particle sizes, D approx. 100 microns, represent a sweet spot. About 1% of these particles orbitally evolve by Poynting-Robertson drag to reach orbits with semimajor axis a approx. 1 AU. They are expected to produce meteors with radiants near the apex of the Earth's orbital motion. We find that the model distributions of their impact speeds and orbits provide a good match to radar observations of apex meteors, except for the eccentricity distribution, which is more skewed toward e approx. 1 in our model. Finally, we propose an explanation for the long-standing problem in meteor science related to the relative strength of apex and helion/antihelion sources. As we show in detail, the observed trend, with the apex meteors being more prominent in observations of highly sensitive radars, can be related to the orbital dynamics of particles released on long-period orbits.

  8. On the Unusually High Temperature of the Cluster of Galaxies 1E 0657-56

    NASA Technical Reports Server (NTRS)

    Yaqoob, Tahir

    1999-01-01

    A recent X-ray observation of the cluster 1E 0657-56 (z = 0.296) with ASCA implied an unusually high temperature of approx. 17 keV. Such a high temperature would make it the hottest known cluster and severely constrain cosmological models since, in a Universe with critical density (Omega = 1), the probability of observing such a cluster is only approx. 4 x 10^-5. Here we test the robustness of this observational result since it has such important implications. We analysed the data using a variety of different data analysis methods and spectral analysis assumptions and find a temperature of approx. 11-12 keV in all cases, except for one class of spectral fits. These are fits in which the absorbing column density is fixed at the Galactic value. Using simulated data for a 12 keV cluster, we show that a high temperature of approx. 17 keV is artificially obtained if the true spectrum has a stronger low-energy cut-off than that for Galactic absorption only. The apparent extra absorption may be astrophysical in origin (either intrinsic or line-of-sight), or it may be a problem with the low-energy CCD efficiency. Although significantly lower than previous measurements, this temperature of kT approx. 11-12 keV is still relatively high, since only a few clusters have been found to have temperatures higher than 10 keV, and the data therefore still present some difficulty for an Omega = 1 Universe. Our results will also be useful to anyone who wants to estimate the systematic errors involved in different methods of background subtraction of ASCA data for sources with similar signal-to-noise to that of the 1E 0657-56 data reported here.

  9. The DEEP2 Galaxy Redshift Survey: The Voronoi-Delaunay Method Catalog of Galaxy Groups

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerke, Brian F.; /UC, Berkeley; Newman, Jeffrey A.

    2012-02-14

    We use the first 25% of the DEEP2 Galaxy Redshift Survey spectroscopic data to identify groups and clusters of galaxies in redshift space. The data set contains 8370 galaxies with confirmed redshifts in the range 0.7 ≤ z ≤ 1.4, over one square degree on the sky. Groups are identified using an algorithm (the Voronoi-Delaunay Method) that has been shown to accurately reproduce the statistics of groups in simulated DEEP2-like samples. We optimize this algorithm for the DEEP2 survey by applying it to realistic mock galaxy catalogs and assessing the results using a stringent set of criteria for measuring group-finding success, which we develop and describe in detail here. We find in particular that the group-finder can successfully identify ~78% of real groups and that ~79% of the galaxies that are true members of groups can be identified as such. Conversely, we estimate that ~55% of the groups we find can be definitively identified with real groups and that ~46% of the galaxies we place into groups are interloper field galaxies. Most importantly, we find that it is possible to measure the distribution of groups in redshift and velocity dispersion, n(σ, z), to an accuracy limited by cosmic variance, for dispersions greater than 350 km/s. We anticipate that such measurements will allow strong constraints to be placed on the equation of state of the dark energy in the future. Finally, we present the first DEEP2 group catalog, which assigns 32% of the galaxies to 899 distinct groups with two or more members, 153 of which have velocity dispersions above 350 km/s. We provide locations, redshifts, and properties for this high-dispersion subsample. This catalog represents the largest sample to date of spectroscopically detected groups at z ~ 1.

  10. Compact Microwave Mercury Ion Clock for Space Applications

    NASA Technical Reports Server (NTRS)

    Prestage, John D.; Tu, Meirong; Chung, Sang K.; MacNeal, Paul

    2007-01-01

    We review progress in developing a small Hg ion clock for space operation based on a breadboard ion-clock physics package in which Hg ions are shuttled between a quadrupole and a 16-pole rf trap. With this architecture we have demonstrated short-term stability of approx. 1-2x10^-13 at 1 second, averaging to 10^-15 at 1 day. This development shows that H-maser quality stabilities can be produced in a small clock package, comparable in size to the ultra-stable quartz oscillator required for holding 1-2x10^-13 at 1 second. We have completed an ion clock physics package designed to withstand the vibration of launch and are currently building an approx. 1 kg engineering model for test. We also discuss frequency steering software algorithms that simultaneously measure ion signal size and lamp light output, useful for long-term operation, self-optimization of microwave power, and the return of engineering data.
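
    The two quoted stabilities are consistent with white-frequency-noise averaging, for which the Allan deviation scales as sigma_y(tau) = sigma_y(1 s)/sqrt(tau). A one-line check:

      # White-frequency-noise clocks average down as sigma_y(tau) = sigma_y(1 s)/sqrt(tau).
      sigma_1s = 1.5e-13        # mid-range of the quoted 1-2e-13 at 1 second
      tau_day = 86400.0
      print(f"sigma_y(1 day) ~ {sigma_1s / tau_day ** 0.5:.1e}")  # ~5e-16, i.e. the 1e-15 level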

  11. Measurements of Breakdown Field and Forward Current Stability in 3C-SiC P-N Junction Diodes Grown on Step-Free 4H-SiC

    NASA Technical Reports Server (NTRS)

    Neudeck, Philip G.; Spry, David J.; Trunek, Andrew J.

    2005-01-01

    This paper reports on initial fabrication and electrical characterization of 3C-SiC p-n junction diodes grown on step-free 4H-SiC mesas. Diodes with n-blocking-layer doping ranging from approx. 2 x 10^16/cu cm to approx. 5 x 10^17/cu cm were fabricated and tested. No optimization of junction edge termination or ohmic contacts was employed. Room temperature reverse characteristics of the best devices show excellent low-leakage behavior, below previous 3C-SiC devices produced by other growth techniques, until the onset of a sharp breakdown knee. The resulting estimated breakdown field of 3C-SiC is at least twice the breakdown field of silicon, but is only around half the breakdown field of <0001> 4H-SiC for the doping range studied. Initial high current stressing of 3C diodes at 100 A/sq cm for more than 20 hours resulted in less than 50 mV change in the approx. 3 V forward voltage. Keywords: 3C-SiC, pn junction, p+n diode, rectifier, reverse breakdown, breakdown field, heteroepitaxy, epitaxial growth, electroluminescence, mesa, bipolar diode

  12. Micromirror Arrays for Adaptive Optics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carr, E.J.

    The long-range goal of this project is to develop the optical and mechanical design of a micromirror array for adaptive optics that will meet the following criteria: flat mirror surface (λ/20), high fill factor (> 95%), large stroke (5-10 µm), and pixel size ~200 µm. This will be accomplished by optimizing the mirror surface and actuators independently and then combining them using bonding technologies that are currently being developed.

  13. High-power laser diodes at various wavelengths

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Emanuel, M.A.

    High power laser diodes at various wavelengths are described. First, performance and reliability of an optimized large-transverse-mode diode structure at 808 and 941 nm are presented. Next, data are presented on a 9.5 kW peak power array at 900 nm having a narrow emission bandwidth suitable for pumping Yb:S-FAP laser materials. Finally, results on a fiber-coupled laser diode array at ~730 nm are presented.

  14. A new experimental proposal for 235U PFNS to answer a fifty-year-old question

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kornilov, N.; Massey, T.; Grimes, S.

    2011-07-01

    The Prompt Fission Neutron Spectrum (PFNS) from 235U(n,f) is very important for various nuclear applications. It has been investigated in different experiments. In spite of ~50 years of experimental effort, a continuing conflict exists at thermal neutron energy: microscopic experimental PFNS cannot describe macroscopic data. In this report we discuss the current status of this problem and suggest a new experiment, which could possibly resolve it. (authors)

  15. Small-Grid Dithers for the JWST Coronagraphs

    NASA Technical Reports Server (NTRS)

    Lajoie, Charles-Philippe; Soummer, Remi; Pueyo, Laurent; Hines, Dean C.; Nelan, Edmund P.; Perrin, Marshall; Clampin, Mark; Isaacs, John C.

    2016-01-01

    We discuss new results of coronagraphic simulations demonstrating a novel mode for JWST that utilizes sub-pixel dithered reference images, called Small-Grid Dithers, to optimize coronagraphic PSF subtraction. These sub-pixel dithers are executed with the Fine Steering Mirror under fine guidance, are accurate to approx. 2-3 milliarcseconds (1 sigma per axis), and provide ample speckle diversity to reconstruct an optimized synthetic reference PSF using LOCI or KLIP. We also discuss the performance gains of Small-Grid Dithers compared to the standard undithered scenario, and show potential contrast gain factors for the NIRCam and MIRI coronagraphs ranging from 2 to more than 10, respectively.
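
    KLIP builds a Karhunen-Loeve basis from the reference-image stack and subtracts the science frame's projection onto the leading modes. A minimal numpy sketch of that idea (not the flight pipeline; the dither stack here is synthetic and the mode count is arbitrary):

      import numpy as np

      def klip_subtract(science, references, n_modes=5):
          """KLIP-style PSF subtraction sketch.
          science: (npix,) flattened target image.
          references: (nref, npix) stack of dithered reference images."""
          mean_ref = references.mean(axis=0)
          ref = references - mean_ref                  # mean-subtract the library
          # Karhunen-Loeve basis of the reference stack via SVD.
          _, _, vt = np.linalg.svd(ref, full_matrices=False)
          basis = vt[:n_modes]                         # (n_modes, npix)
          sci = science - mean_ref
          model = basis.T @ (basis @ sci)              # projection onto KL modes
          return sci - model                           # residual: companion + noise

      rng = np.random.default_rng(2)
      refs = rng.normal(1.0, 0.1, size=(9, 4096))      # e.g. a 3x3 dither pattern
      target = refs.mean(axis=0) + rng.normal(0, 0.05, 4096)
      print("residual rms:", klip_subtract(target, refs).std())

    The speckle diversity supplied by the sub-pixel dithers is what gives the reference stack enough independent modes for this projection to work well.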

  16. Challenger STS-17 (41-G) post-flight best estimate trajectory products: Development and summary results

    NASA Technical Reports Server (NTRS)

    Kelly, G. M.; Heck, M. L.; Mcconnell, J. G.; Waters, L. A.; Troutman, P. A.; Findlay, J. T.

    1985-01-01

    Results from the STS-17 (41-G) post-flight products are presented. Operational Instrumentation (OI) recorder gaps, coupled with the limited tracking coverage available for this high-inclination entry profile, necessitated selection of an anchor epoch for reconstruction corresponding to an unusually low altitude of h approx. 297 kft. The final inertial trajectory obtained, BT17N26/UN=169750N, is discussed in Section I, i.e., relative to the problems encountered with the OI and ACIP recorded data on this Challenger flight. Atmospheric selection, again in view of the ground track displacement from the remote meteorological sites, constituted a major problem area, as discussed in Section II. The LAIRS file provided by Langley was adopted, with NOAA data utilized over the lowermost approx. 7 kft. As discussed in Section II, the Extended BET, ST17BET/UN=274885C, suggests a limited upper altitude (h approx. 230 kft) for which meaningful flight extraction can be expected. This is further demonstrated, though not considered a limitation, in Section III, wherein summary results from the AEROBET (NJ0333, with NJ0346 as duplicate) are presented. GTFILEs were generated only for the selected IMU (IMU2) and the Rate Gyro Assembly/Accelerometer Assembly data, due to the loss of ACIP data. The attached appendices present inputs for the generation of the post-flight products (Appendix A), final residual plots (Appendix B), a two-second-spaced listing of the relevant parameters from the Extended BET (Appendix C), and an archival section (Appendix D) documenting input (source) and output files and/or physical reels.

  17. Evidence for the multiverse in the standard model and beyond

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Lawrence J.; Nomura, Yasunori

    2008-08-01

    In any theory it is unnatural if the observed values of parameters lie very close to special values that determine the existence of complex structures necessary for observers. A naturalness probability P is introduced to numerically evaluate the degree of unnaturalness. If P is very small in all known theories, corresponding to a high degree of fine-tuning, then there is an observer naturalness problem. In addition to the well-known case of the cosmological constant, we argue that nuclear stability and electroweak symmetry breaking represent significant observer naturalness problems. The naturalness probability associated with nuclear stability depends on the theory of flavor, but for all known theories is conservatively estimated as P_nuc ≲ 10^-3-10^-2, and for simple theories of electroweak symmetry breaking P_EWSB ≲ 10^-2-10^-1. This pattern of unnaturalness in three different arenas, cosmology, nuclear physics, and electroweak symmetry breaking, provides evidence for the multiverse, since each problem may be easily solved by environmental selection. In the nuclear case the problem is largely solved even if the multiverse distribution for the relevant parameters is relatively flat. With somewhat strongly varying distributions, it is possible to understand both the close proximity to neutron stability and the values of m_e and m_d - m_u in terms of the electromagnetic mass difference between the proton and neutron, δ_EM ≈ 1 ± 0.5 MeV. It is reasonable that multiverse distributions are strong functions of Lagrangian parameters, since they depend not only on the landscape of vacua, but also on the population mechanism, "integrating out" other parameters, and on a density of observers factor. In any theory with mass scale M that is the origin of electroweak symmetry breaking, strongly varying multiverse distributions typically lead either to a little hierarchy v/M ≈ 10^-2-10^-1, or to a large hierarchy v <

  18. THE BARYON ACOUSTIC OSCILLATION BROADBAND AND BROAD-BEAM ARRAY: DESIGN OVERVIEW AND SENSITIVITY FORECASTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pober, Jonathan C.; Parsons, Aaron R.; McQuinn, Matthew

    2013-03-15

    This work describes a new instrument optimized for a detection of the neutral hydrogen 21 cm power spectrum between redshifts of 0.5 and 1.5: the Baryon Acoustic Oscillation Broadband and Broad-beam (BAOBAB) array. BAOBAB will build on the efforts of a first generation of 21 cm experiments that are targeting a detection of the signal from the Epoch of Reionization at z ~ 10. At z ~ 1, the emission from neutral hydrogen in self-shielded overdense halos also presents an accessible signal, since the dominant, synchrotron foreground emission is considerably fainter than at redshift 10. The principal science driver for these observations is baryon acoustic oscillations in the matter power spectrum, which have the potential to act as a standard ruler and constrain the nature of dark energy. BAOBAB will fully correlate dual-polarization antenna tiles over the 600-900 MHz band with a frequency resolution of 300 kHz and a system temperature of 50 K. The number of antennas will grow in staged deployments, and reconfigurations of the array will allow for both traditional imaging and high power spectrum sensitivity operations. We present calculations of the power spectrum sensitivity for various array sizes, with a 35-element array measuring the cosmic neutral hydrogen fraction as a function of redshift, and a 132-element system detecting the BAO features in the power spectrum, yielding a 1.8% error on the z ~ 1 distance scale and, in turn, significant improvements to constraints on the dark energy equation of state over an unprecedented range of redshifts from ~0.5 to 1.5.

  19. Catalysts for ultrahigh current density oxygen cathodes for space fuel cell applications

    NASA Technical Reports Server (NTRS)

    Tryk, Donald A.; Yeager, E.

    1992-01-01

    The objective was to identify promising electrocatalyst/support systems for oxygen cathodes capable of operating at ultrahigh current densities in alkaline fuel cells. Such cells will require operation at relatively high temperatures and O2 pressures. A number of materials were prepared, including Pb-Ru and Pb-Ir pyrochlores, RuO2 and Pt-doped RuO2, lithiated NiO, and La-Ni perovskites. Several of these materials were prepared using techniques that had not been previously used to prepare them. Particularly interesting was the use of the alkaline solution technique to prepare Pt-doped and Pb-Ru pyrochlores in high area form. Also interesting was the use of the fusion (melt) method for preparing the Pb-Ru pyrochlore. Several of the materials were also deposited with platinum. Well-crystallized Pb2Ru2O(7-y) was used to fabricate very high performance O2 cathodes with good stability in room temperature KOH. This material was also found to be stable over a useful potential range at approx. 140 C in concentrated KOH. For some of the samples, fabrication of the gas-fed electrodes could not be fully optimized during this project period. Future work may be directed at this problem. Pyrochlores that were not well-crystallized were found to be unstable in alkaline solution. Very good O2 reduction performance and stability were observed with Pb2Ru2O(7-y) in a carbon-based gas-fed electrode with an anion-conducting membrane placed on the electrolyte side of the electrode. The performance came within a factor of about two of that observed without carbon. High area platinum and gold supported on several conductive metal oxide supports were examined. Only small improvements in O2 reduction performance at room temperature were observed for Pb2Ru2O(7-y) as a support, because of the high intrinsic activity of the pyrochlore. In contrast, a large improvement was observed for Li-doped NiO as a support for Pt. Very poor performance was observed for Au deposited on Li-NiO at approx. 150 C. Nearly reversible behavior was observed for the O2/OH(-) couple for Li-doped NiO at approx. 200 C. The temperature dependence of the O2 reduction was examined.

  20. Characterizing the Early Impact Bombardment

    NASA Technical Reports Server (NTRS)

    Bogard, Donald D.

    2005-01-01

    The early bombardment revealed in the larger impact craters and basins on the Moon was a major planetary process that affected all bodies in the inner solar system, including the Earth and Mars. Understanding the nature and timing of this bombardment is a fundamental planetary problem. The surface density of lunar impact craters within a given size range on a given lunar surface is a measure of the age of that surface relative to other lunar surfaces. When crater densities are combined with absolute radiometric ages determined on lunar rocks returned to Earth, the flux of large lunar impactors through time can be estimated. These studies suggest that the flux of impactors producing craters greater than 1 km in diameter has been approximately constant over the past approx. 3 Gyr. However, prior to 3.0-3.5 Gyr ago the impactor flux was much larger and defines an early bombardment period. Unfortunately, no lunar surface feature older than approx. 4 Gyr is accurately dated, and the surface density of craters is saturated in most of the lunar highlands. This means that such data cannot define the impactor flux between lunar formation and approx. 4 Gyr ago.
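
    One widely used calibration tying crater densities to radiometric ages is the Neukum lunar chronology function, N(>=1 km) = 5.44e-14 (e^(6.93 T) - 1) + 8.38e-4 T craters per km^2 for a surface of age T in Gyr: nearly linear over the last ~3 Gyr, then steeply exponential, which is the early bombardment described above. A sketch, assuming that parameterization (not cited in this record):

      import numpy as np

      def neukum_N1(T):
          """Cumulative density of craters >= 1 km (per km^2) on a lunar
          surface of age T Gyr, Neukum chronology function."""
          return 5.44e-14 * (np.exp(6.93 * T) - 1.0) + 8.38e-4 * T

      for T in (1.0, 3.0, 3.5, 3.9, 4.1):
          print(f"T = {T} Gyr: N(>=1 km) = {neukum_N1(T):.2e} per km^2")

      # Invert by bisection: age of a surface with a measured crater density.
      target, lo, hi = 1e-2, 0.0, 4.5
      for _ in range(60):
          mid = 0.5 * (lo + hi)
          lo, hi = (mid, hi) if neukum_N1(mid) < target else (lo, mid)
      print(f"N = 1e-2 /km^2 -> age ~ {lo:.2f} Gyr")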

  1. Quantum algorithms on Walsh transform and Hamming distance for Boolean functions

    NASA Astrophysics Data System (ADS)

    Xie, Zhengwei; Qiu, Daowen; Cai, Guangya

    2018-06-01

    Walsh spectrum or Walsh transform is an alternative description of Boolean functions. In this paper, we explore quantum algorithms to approximate the absolute value of the Walsh transform W_f at a single point z_0 (i.e., |W_f(z_0)|) for n-variable Boolean functions with probability at least 8/π^2 using O(1/(|W_f(z_0)| ε)) queries, promised that the accuracy is ε, while the best known classical algorithm requires O(2^n) queries. The Hamming distance between Boolean functions is used to study the linearity testing and other important problems. We take advantage of the Walsh transform to calculate the Hamming distance between two n-variable Boolean functions f and g using O(1) queries in some cases. Then, we exploit another quantum algorithm which converts computing the Hamming distance between two Boolean functions to quantum amplitude estimation (i.e., approximate counting). If Ham(f,g) = t ≠ 0, we can approximately compute Ham(f,g) with probability at least 2/3 by combining our algorithm and the Approx-Count(f, ε) algorithm using an expected number of Θ(√(N/(⌊εt⌋+1)) + √(t(N-t))/(⌊εt⌋+1)) queries, promised that the accuracy is ε. Moreover, our algorithm is optimal, while the exact query complexity for the above problem is Θ(N) and the query complexity with accuracy ε is O((1/ε^2)·N/(t+1)) in the classical setting, where N = 2^n. Finally, we present three exact quantum query algorithms for two promise problems on Hamming distance using O(1) queries, while any classical deterministic algorithm solving the problem uses Ω(2^n) queries.
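
    For comparison with the quantum query counts above, the full classical Walsh spectrum W_f(z) = sum over x of (-1)^(f(x) XOR z.x) can be computed from a 2^n-entry truth table with the fast Walsh-Hadamard transform in O(N log N) arithmetic operations. A sketch (the example function is a toy choice):

      import numpy as np

      def walsh_spectrum(f_table):
          """Walsh spectrum of an n-variable Boolean function given as a
          length-2^n truth table, via the fast Walsh-Hadamard transform."""
          w = 1 - 2 * np.asarray(f_table, dtype=np.int64)   # (-1)^f(x)
          n = len(w)
          h = 1
          while h < n:
              for i in range(0, n, 2 * h):
                  a = w[i:i + h].copy()
                  b = w[i + h:i + 2 * h].copy()
                  w[i:i + h] = a + b                        # butterfly step
                  w[i + h:i + 2 * h] = a - b
              h *= 2
          return w

      # Example: f(x1, x2, x3) = x1*x2 XOR x3 on 3 variables.
      f = [((x >> 2 & 1) * (x >> 1 & 1)) ^ (x & 1) for x in range(8)]
      print(walsh_spectrum(f))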

  2. Sources of Geomagnetic Activity during Nearly Three Solar Cycles (1972-2000)

    NASA Technical Reports Server (NTRS)

    Richardson, I. G.; Cane, H. V.; Cliver, E. W.; White, Nicholas E. (Technical Monitor)

    2002-01-01

    We examine the contributions of the principal solar wind components (corotating high-speed streams, slow solar wind, and transient structures, i.e., interplanetary coronal mass ejections (CMEs), shocks, and postshock flows) to averages of the aa geomagnetic index and the interplanetary magnetic field (IMF) strength in 1972-2000 during nearly three solar cycles. A prime motivation is to understand the influence of solar cycle variations in solar wind structure on long-term (e.g., approximately annual) averages of these parameters. We show that high-speed streams account for approximately two-thirds of long-term aa averages at solar minimum, while at solar maximum, structures associated with transients make the largest contribution (approx. 50%), though contributions from streams and slow solar wind continue to be present. Similarly, high-speed streams are the principal contributor (approx. 55%) to solar minimum averages of the IMF, while transient-related structures are the leading contributor (approx. 40%) at solar maximum. These differences between solar maximum and minimum reflect the changing structure of the near-ecliptic solar wind during the solar cycle. For minimum periods, the Earth is embedded in high-speed streams approx. 55% of the time versus approx. 35% for slow solar wind and approx. 10% for CME-associated structures, while at solar maximum, typical percentages are as follows: high-speed streams approx. 35%, slow solar wind approx. 30%, and CME-associated approx. 35%. These compositions show little cycle-to-cycle variation, at least for the interval considered in this paper. Despite the change in the occurrences of different types of solar wind over the solar cycle (and less significant changes from cycle to cycle), overall, variations in the averages of the aa index and IMF closely follow those in corotating streams. Considering solar cycle averages, we show that high-speed streams account for approx. 44%, approx. 48%, and approx. 40% of the solar wind composition, aa, and the IMF strength, respectively, with corresponding figures of approx. 22%, approx. 32%, and approx. 25% for CME-related structures, and approx. 33%, approx. 19%, and approx. 33% for slow solar wind.

  3. THE VLA SURVEY OF CHANDRA DEEP FIELD SOUTH. V. EVOLUTION AND LUMINOSITY FUNCTIONS OF SUB-MILLIJANSKY RADIO SOURCES AND THE ISSUE OF RADIO EMISSION IN RADIO-QUIET ACTIVE GALACTIC NUCLEI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Padovani, P.; Mainieri, V.; Rosati, P.

    2011-10-10

    We present the evolutionary properties and luminosity functions of the radio sources belonging to the Chandra Deep Field South Very Large Array survey, which reaches a flux density limit at 1.4 GHz of 43 µJy at the field center and redshift ~5, and which includes the first radio-selected complete sample of radio-quiet active galactic nuclei (AGNs). We use a new, comprehensive classification scheme based on radio, far- and near-IR, optical, and X-ray data to disentangle star-forming galaxies (SFGs) from AGNs and radio-quiet from radio-loud AGNs. We confirm our previous result that SFGs become dominant only below 0.1 mJy. The sub-millijansky radio sky turns out to be a complex mix of SFGs and radio-quiet AGNs evolving at a similar, strong rate; non-evolving low-luminosity radio galaxies; and declining radio-powerful (P ≳ 3 x 10^24 W/Hz) AGNs. Our results suggest that radio emission from radio-quiet AGNs is closely related to star formation. The detection of compact, high-brightness-temperature cores in several nearby radio-quiet AGNs can be explained by the coexistence of two components, one non-evolving and AGN related and one evolving and star formation related. Radio-quiet AGNs are an important class of sub-millijansky sources, accounting for ~30% of the sample and ~60% of all AGNs, and outnumbering radio-loud AGNs at ≲0.1 mJy. This implies that future, large-area sub-millijansky surveys, given the appropriate ancillary multiwavelength data, have the potential of being able to assemble vast samples of radio-quiet AGNs, bypassing the problems of obscuration that plague the optical and soft X-ray bands.

  4. A Study of Contacts and Back-Surface Reflectors for 0.6eV Ga0.32In0.68As/InAs0.32P0.68 Thermophotovoltaic Monolithically Interconnected Modules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, X.; Duda, A.; Carapella, J. J.

    1998-12-23

    Thermophotovoltaic (TPV) systems have recently rekindled a high level of interest for a number of applications. In order to meet the requirement of low-temperature (~1000 °C) TPV systems, 0.6-eV Ga0.32In0.68As/InAs0.32P0.68 TPV monolithically interconnected modules (MIMs) have been developed at the National Renewable Energy Laboratory (NREL) [1]. The successful fabrication of Ga0.32In0.68As/InAs0.32P0.68 MIMs depends on developing and optimizing several key processes. Some results regarding the chemical vapor deposition (CVD)-SiO2 insulating layer, selective chemical etch via sidewall profiles, double-layer antireflection coatings, and metallization via interconnects have previously been given elsewhere [2]. In this paper, we report on the study of contacts and back-surface reflectors. In the first part of this paper, Ti/Pd/Ag and Cr/Pd/Ag contacts to n-InAs0.32P0.68 and p-Ga0.32In0.68As are investigated. The transfer length method (TLM) was used for measuring the specific contact resistance Rc. The dependence of Rc on different doping levels and different pre-treatments of the two semiconductors will be reported. Also, the adhesion and the thermal stability of Ti/Pd/Ag and Cr/Pd/Ag contacts to n-InAs0.32P0.68 and p-Ga0.32In0.68As will be presented. In the second part of this paper, we discuss an optimum back-surface reflector (BSR) that has been developed for 0.6-eV Ga0.32In0.68As/InAs0.32P0.68 TPV MIM devices. The optimum BSR consists of three layers: a ~1300 Å MgF2 (or ~1300 Å CVD SiO2) dielectric layer, a ~25 Å Ti adhesion layer, and a ~1500 Å Au reflection layer. This optimum BSR has high reflectance, good adhesion, and excellent thermal stability.
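
    In the transfer length method cited above, the total resistance between two contact pads a distance d apart on a uniform layer is R(d) = (R_sh/W) d + 2 R_c, so a linear fit over several spacings yields the sheet resistance R_sh (from the slope) and the contact resistance R_c (half the intercept); the specific contact resistance follows from the transfer length L_T = R_c W / R_sh as rho_c = R_sh L_T^2. A sketch with made-up measurement values:

      import numpy as np

      W = 100e-4                                  # pad width, cm
      d = np.array([5, 10, 20, 40, 80]) * 1e-4    # pad spacings, cm
      R = np.array([3.1, 4.2, 6.3, 10.4, 18.6])   # measured resistances, ohm (invented)

      slope, intercept = np.polyfit(d, R, 1)      # linear TLM fit
      R_sh = slope * W                            # sheet resistance, ohm/square
      R_c = intercept / 2.0                       # contact resistance, ohm
      L_T = R_c * W / R_sh                        # transfer length, cm
      rho_c = R_sh * L_T ** 2                     # specific contact resistance
      print(f"R_sh = {R_sh:.1f} ohm/sq, R_c = {R_c:.2f} ohm, "
            f"rho_c = {rho_c:.1e} ohm*cm^2")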

  5. Exploring the Outer Solar System with the ESSENCE Supernova Survey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Becker, A.C.; /Washington U., Seattle, Astron. Dept.; Arraki, K.

    We report the discovery and orbital determination of 14 trans-Neptunian objects (TNOs) from the ESSENCE Supernova Survey difference imaging data set. Two additional objects discovered in a similar search of the SDSS-II Supernova Survey database were recovered in this effort. ESSENCE repeatedly observed fields far from the solar system ecliptic (-21° < β < -5°), reaching limiting magnitudes per observation of I ~ 23.1 and R ~ 23.7. We examine several of the newly detected objects in detail, including 2003 UC414, which orbits entirely between Uranus and Neptune and lies very close to a dynamical region that would make it stable for the lifetime of the solar system. 2003 SS422 and 2007 TA418 have high eccentricities and large perihelia, making them candidate members of an outer class of TNOs. We also report a new member of the 'extended' or 'detached' scattered disk, 2004 VN112, and verify the stability of its orbit using numerical simulations. This object would have been visible to ESSENCE for only ~2% of its orbit, suggesting a vast number of similar objects across the sky. We emphasize that off-ecliptic surveys are optimal for uncovering the diversity of such objects, which in turn will constrain the history of gravitational influences that shaped our early solar system.

  6. Erasing the Variable: Empirical Foreground Discovery for Global 21 cm Spectrum Experiments

    NASA Technical Reports Server (NTRS)

    Switzer, Eric R.; Liu, Adrian

    2014-01-01

    Spectral measurements of the 21 cm monopole background have the promise of revealing the bulk energetic properties and ionization state of our universe from z approx. 6-30. Synchrotron foregrounds are orders of magnitude larger than the cosmological signal, and are the principal challenge faced by these experiments. While synchrotron radiation is thought to be spectrally smooth and described by relatively few degrees of freedom, the instrumental response to bright foregrounds may be much more complex. To deal with such complexities, we develop an approach that discovers contaminated spectral modes using spatial fluctuations of the measured data. This approach exploits the fact that foregrounds vary across the sky while the signal does not. The discovered modes are projected out of each line of sight of a data cube. An angular weighting then optimizes the cosmological signal amplitude estimate by giving preference to lower-noise regions. Using this method, we show that it is essential for the passband to be stable to at least approx. 10^-4. In contrast, the constraints on the spectral smoothness of the absolute calibration are mainly aesthetic if one is able to take advantage of spatial information. To the extent it is understood, controlling polarization-to-intensity leakage at the approx. 10^-2 level will also be essential to rejecting Faraday rotation of the polarized synchrotron emission. Subject headings: dark ages, reionization, first stars - methods: data analysis - methods: statistical
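
    The mode-discovery step can be sketched with an SVD: the dominant modes of the spatial fluctuations are taken as foreground-contaminated and projected out of every line of sight. A toy version (synthetic power-law foregrounds and an assumed Gaussian absorption trough; the signal loss incurred by the projection is ignored here):

      import numpy as np

      def remove_foreground_modes(cube, n_modes=3):
          """cube: (n_los, n_freq), one spectrum per line of sight.
          Foregrounds vary across the sky, the monopole signal does not, so
          the leading spatial-variance modes are treated as contaminated and
          projected out of each line of sight."""
          fluct = cube - cube.mean(axis=0)              # spatial fluctuations only
          _, _, vt = np.linalg.svd(fluct, full_matrices=False)
          bad = vt[:n_modes]                            # contaminated spectral modes
          return cube - (cube @ bad.T) @ bad            # project the modes out

      rng = np.random.default_rng(3)
      nu = np.linspace(50, 100, 200)                    # MHz
      fg = np.outer(rng.lognormal(0, 0.3, 500), (nu / 75.0) ** -2.6) * 1e4
      signal = -0.1 * np.exp(-((nu - 78) ** 2) / 10.0)  # toy absorption trough, K
      cube = fg + signal + rng.normal(0, 0.05, (500, 200))
      cleaned = remove_foreground_modes(cube)
      print("residual monopole rms:", cleaned.mean(axis=0).std())

    An inverse-variance angular weighting of the cleaned lines of sight, as the abstract describes, would then sharpen the monopole amplitude estimate.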

  7. Strain relaxation of thin Si0.6Ge0.4 grown with low-temperature buffers by molecular beam epitaxy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, M.; Hansson, G. V.; Ni, W.-X.

    A double-low-temperature-buffer variable-temperature growth scheme was studied for fabrication of a strain-relaxed thin Si0.6Ge0.4 layer on Si(001) by molecular beam epitaxy (MBE), with particular focus on the influence of the growth temperature of the individual low-temperature-buffer layers on the relaxation process and final structural qualities. The low-temperature buffers consisted of a 40 nm Si layer grown at an optimized temperature of ~400 °C, followed by a 20 nm Si0.6Ge0.4 layer grown at temperatures ranging from 50 to 550 °C. A significant relaxation increase together with a surface roughness decrease, both by a factor of ~2, accompanied by the cross-hatch/cross-hatch-free surface morphology transition, took place for the sample containing a low-temperature Si0.6Ge0.4 layer that was grown at ~200 °C. This dramatic change was explained by association with a certain onset stage of the ordered/disordered growth transition during low-temperature MBE, where the high density of misfit dislocation segments generated near surface cusps largely facilitated the strain relaxation of the top Si0.6Ge0.4 layer.

  8. Oil-Free Rotor Support Technologies for an Optimized Helicopter Propulsion System

    NASA Technical Reports Server (NTRS)

    DellaCorte, Christopher; Bruckner, Robert J.

    2007-01-01

    An optimized rotorcraft propulsion system incorporating a foil air bearing supported Oil-Free engine coupled to a high power density gearbox using high viscosity gear oil is explored. Foil air bearings have adequate load capacity and temperature capability for the high-speed gas generator shaft of a rotorcraft engine. Managing the axial loads of the power turbine shaft (low speed spool) will likely require thrust load support from the gearbox through a suitable coupling or other design. Employing specially formulated, high viscosity gear oil for the transmission can yield significant improvements (approx. 2X) in allowable gear loading. Though a completely new propulsion system design is needed to implement such a system, improved performance is possible.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izquierdo-Vega, Jeannett A.; FES-Cuautitlan. UNAM. Cuautitlan Izcalli, Estado de Mexico; Sanchez-Gutierrez, Manuel

    Fluorosis, caused by drinking water contamination with inorganic fluoride, is a public health problem in many areas around the world. The aim of the study was to evaluate the effect of environmentally relevant doses of fluoride on the in vitro fertilization (IVF) capacity of spermatozoa, and its relationship to the spermatozoa mitochondrial transmembrane potential (Δψ_m). Male Wistar rats were administered 5 mg fluoride/kg body mass/24 h, or deionized water, orally for 8 weeks. We evaluated several spermatozoa parameters in treated and untreated rats: i) standard quality analysis, ii) superoxide dismutase (SOD) activity, iii) the generation of superoxide anion (O2•-), iv) lipid peroxidation concentration, v) ultrastructural analyses of spermatozoa using transmission electron microscopy, vi) Δψ_m, vii) acrosome reaction, and viii) IVF capability. Spermatozoa from fluoride-treated rats exhibited a significant decrease in SOD activity (~33%), accompanied by a significant increase in the generation of O2•- (~40%), a significant decrease in Δψ_m (~33%), and a significant increase in lipid peroxidation concentration (~50%), relative to spermatozoa from the control group. Consistent with this finding, spermatozoa from fluoride-treated rats exhibited altered plasma membranes. In addition, the percentage of fluoride-treated spermatozoa capable of undergoing the acrosome reaction was decreased relative to control spermatozoa (34 vs. 55%), while the percentage of fluoride-treated spermatozoa capable of oocyte fertilization was also significantly lower than in the control group (13 vs. 71%). These observations suggest that subchronic exposure to fluoride causes oxidative stress damage and loss of mitochondrial transmembrane potential, resulting in reduced fertility.

  10. A New Electron Source for Laboratory Simulation of the Space Environment

    NASA Technical Reports Server (NTRS)

    Krause, Linda Habash; Everding, Daniel; Bonner, Mathew; Swan, Brian

    2012-01-01

    We have developed a new collimated electron source called the Photoelectron Beam Generator (PEBG) for laboratory and spaceflight applications. This technology is needed to replace traditional cathodes because of serious fundamental weaknesses with the present state of the art. Filament cathodes suffer from numerous practical problems, even if expertly designed, including the dependence of electron emission on filament temperature, short lifetimes (approx 100 hours), and relatively high power (approx 10s of W). Other types of cathodes have solved some of these problems, but they are plagued with other difficult problems, such as the Spindt cathode's extreme sensitivity to molecular oxygen. None to date have been able to meet the demands of long lifetime, robust packaging, and precision energy and flux control. This new cathode design avoids many common pitfalls of traditional cathodes. Specifically, there are no fragile parts, no sensitivity to oxygen, no intrinsic emission dependence on device temperature, and no vacuum requirements for protecting the source from contamination or damage. Recent advances in high-brightness Light Emitting Diodes (LEDs) have provided the key enabling technology for this new electron source. The PEBG works by using the LEDs to photoeject electrons off a low-work-function target material and subsequently focusing these photoelectrons into a laminar beam using electrostatic lenses.
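
    For orientation, the photoemission step above follows the Einstein photoelectric relation; a worked example with assumed numbers (a near-UV LED at 365 nm and a low-work-function target of approx. 1.5 eV; neither figure is given in the record):

        E_K^{max} = h\nu - \phi \approx \frac{1240\ \mathrm{eV\,nm}}{365\ \mathrm{nm}} - 1.5\ \mathrm{eV} \approx 3.4\ \mathrm{eV} - 1.5\ \mathrm{eV} \approx 1.9\ \mathrm{eV}

    Any LED photon energy exceeding the target work function can eject electrons; the excess appears as the initial photoelectron energy spread.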

  11. Direct Numerical Simulations of High-Speed Turbulent Boundary Layers over Riblets

    NASA Technical Reports Server (NTRS)

    Duan, Lian; Choudhari, Meelan M.

    2014-01-01

    Direct numerical simulations (DNS) of spatially developing turbulent boundary layers over riblets with a broad range of riblet spacings are conducted to investigate the effects of riblets on skin friction at high speeds. Zero-pressure gradient boundary layers under two flow conditions (Mach 2.5 with T(sub w)/T(sub r) = 1 and Mach 7.2 with T(sub w)/T(sub r) = 0.5) are considered. The DNS results show that the drag-reduction curve (delta C(sub f)/C(sub f) vs l(sup +)(sub g)) at both supersonic speeds follows the trend of low-speed data and consists of a `viscous' regime for small riblet size, a `breakdown' regime with optimal drag reduction, and a `drag-increasing' regime for larger riblet sizes. At l(sup +)(sub g) approx. 10 (corresponding to s(sup +) approx. 20 for the current triangular riblets), drag reduction of approximately 7% is achieved at both Mach numbers, and confirms the observations of the few existing experiments under supersonic conditions. The Mach-number dependence of the drag-reduction curve occurs for riblet sizes that are larger than the optimal size, with smaller slopes of delta C(sub f)/C(sub f) for larger freestream Mach numbers. The Reynolds analogy holds, with 2C(sub h)/C(sub f) approximately equal to that of flat plates for both drag-reducing and drag-increasing configurations.

  12. Active Control of Separation From the Flap of a Supercritical Airfoil

    NASA Technical Reports Server (NTRS)

    Melton, La Tunia Pack; Yao, Chung-Sheng; Seifert, Avi

    2003-01-01

    Active flow control in the form of periodic zero-mass-flux excitation was applied at several regions on the leading edge and trailing edge flaps of a simplified high-lift system to delay flow separation. The NASA Energy Efficient Transport (EET) supercritical airfoil was equipped with a 15% chord simply hinged leading edge flap and a 25% chord simply hinged trailing edge flap. Detailed flow features were measured in an attempt to identify optimal actuator placement. The measurements included steady and unsteady model and tunnel wall pressures, wake surveys, arrays of surface hot-films, flow visualization, and particle image velocimetry (PIV). The current paper describes the application of active separation control at several locations on the deflected trailing edge flap. High frequency (F(+) approx.= 10) excitation and low frequency amplitude modulation (F(+)AM approx.= 1) of the high frequency excitation were used for control. Preliminary efforts to combine leading and trailing edge flap excitations are also reported.

  13. Effects of Dilute Acid Pretreatment on Cellulose DP and the Relationship Between DP Reduction and Cellulose Digestibility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, W.; Chen, X.; Tucker, M.

    2012-01-01

    The degree of polymerization (DP) of cellulose is considered to be one of the most important properties affecting the enzymatic hydrolysis of cellulose. Various pure cellulosic and biomass materials have been used in a study of the effect of dilute acid treatment on cellulose DP. A substantial reduction in DP was found for all pure cellulosic materials studied, even at conditions that would be considered relatively mild for pretreatment. The effect of dilute acid pretreatment on cellulose DP in biomass samples was also investigated. Corn stover pretreated with dilute acid under the most optimal conditions contained cellulose with a DPw in the range of 1600{approx}3500, which is much higher than the level-off DP (DPw 150{approx}300) obtained with pure celluloses. The effect of DP reduction on the saccharification of celluloses was also studied. From this study it does not appear that cellulose DP is a main factor affecting cellulose saccharification.

  14. Results from the PALM-3000 High-Order Adaptive Optics System

    NASA Technical Reports Server (NTRS)

    Roberts, Jennifer E.; Dekany, Richard G.; Burruss, Rick S.; Baranec, Christoph; Bouchez, Antonin; Croner, Ernest E.; Guiwits, Stephen R.; Hale, David D. S.; Henning, John R.; Palmer, Dean L.

    2012-01-01

    The first of a new generation of high actuator density AO systems developed for large telescopes, PALM-3000 is optimized for high-contrast exoplanet science but will support operation with natural guide stars as faint as V approx. 18. PALM-3000 began commissioning in June 2011 on the Palomar 200" telescope and has accumulated over 60 nights of observing to date. The AO system consists of two Xinetics deformable mirrors, one with 66 by 66 actuators and another with 21 by 21 actuators, a Shack-Hartmann WFS with four pupil sampling modes (ranging from 64 to 8 samples across the pupil), and a full vector matrix multiply real-time system capable of running at 2 kHz frame rates. We present the details of the completed system and initial results. Operating at 2 kHz with 8.3 cm pupil sampling on-sky, we have achieved a K-band Strehl ratio as high as 84% in approx. 1.0 arcsecond visible seeing.
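
    The real-time operation described above reduces, per frame, to a single reconstructor matrix-vector product mapping wavefront-sensor slopes to deformable-mirror commands. A minimal sketch of that step (the dimensions and reconstructor below are placeholders, not the actual PALM-3000 sizes or calibration):

        import numpy as np

        # Toy AO real-time step: WFS slopes -> DM commands via a precomputed
        # reconstructor matrix R (dimensions illustrative only).
        rng = np.random.default_rng(1)
        n_slopes, n_act = 512, 256
        R = 1e-3 * rng.standard_normal((n_act, n_slopes))  # stand-in reconstructor
        slopes = rng.standard_normal(n_slopes)             # one WFS frame
        commands = R @ slopes                              # full vector-matrix multiply
        # At a 2 kHz frame rate this product must complete well inside 500 microseconds.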

  15. Partial differential equations constrained combinatorial optimization on an adiabatic quantum computer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh

    Partial differential equation-constrained combinatorial optimization (PDECCO) problems are a mixture of continuous and discrete optimization problems. PDECCO problems have discrete controls, but since the partial differential equations (PDE) are continuous, the optimization space is continuous as well. Such problems have several applications, such as gas/water network optimization, traffic optimization, and micro-chip cooling optimization. Currently, no efficient classical algorithm which guarantees a global minimum for PDECCO problems exists. A new mapping has been developed that transforms PDECCO problems which have only linear PDEs as constraints into quadratic unconstrained binary optimization (QUBO) problems that can be solved using an adiabatic quantum optimizer (AQO). The mapping is efficient: it scales polynomially with the size of the PDECCO problem, requires only one PDE solve to form the QUBO problem, and, if the QUBO problem is solved correctly and efficiently on an AQO, guarantees a globally optimal solution for the original PDECCO problem.
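
    To make the QUBO end of the mapping concrete, the sketch below exhaustively minimizes a toy QUBO objective x^T Q x over binary vectors, standing in for the adiabatic quantum optimizer; the matrix Q is illustrative, not one derived from a PDECCO instance:

        import numpy as np
        from itertools import product

        def solve_qubo_bruteforce(Q):
            """Minimize x^T Q x over x in {0,1}^n by enumeration (AQO stand-in)."""
            best_x, best_e = None, np.inf
            for bits in product((0, 1), repeat=Q.shape[0]):
                x = np.array(bits)
                e = float(x @ Q @ x)
                if e < best_e:
                    best_x, best_e = x, e
            return best_x, best_e

        # Toy symmetric QUBO matrix (illustrative values only).
        Q = np.array([[-1.0,  2.0,  2.0],
                      [ 2.0, -1.5,  2.0],
                      [ 2.0,  2.0, -0.5]])
        print(solve_qubo_bruteforce(Q))  # -> (array([0, 1, 0]), -1.5)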

  16. BULGELESS GIANT GALAXIES CHALLENGE OUR PICTURE OF GALAXY FORMATION BY HIERARCHICAL CLUSTERING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kormendy, John; Cornell, Mark E.; Drory, Niv

    2010-11-01

    To better understand the prevalence of bulgeless galaxies in the nearby field, we dissect giant Sc-Scd galaxies with Hubble Space Telescope (HST) photometry and Hobby-Eberly Telescope (HET) spectroscopy. We use the HET High Resolution Spectrograph (resolution R {identical_to} {lambda}/FWHM {approx_equal} 15,000) to measure stellar velocity dispersions in the nuclear star clusters and (pseudo)bulges of the pure-disk galaxies M 33, M 101, NGC 3338, NGC 3810, NGC 6503, and NGC 6946. The dispersions range from 20 {+-} 1 km s{sup -1} in the nucleus of M 33 to 78 {+-} 2 km s{sup -1} in the pseudobulge of NGC 3338. We use HST archive images to measure the brightness profiles of the nuclei and (pseudo)bulges in M 101, NGC 6503, and NGC 6946 and hence to estimate their masses. The results imply small mass-to-light ratios consistent with young stellar populations. These observations lead to two conclusions. (1) Upper limits on the masses of any supermassive black holes are M{sub .} {approx}< (2.6 {+-} 0.5) x 10{sup 6} M{sub sun} in M 101 and M{sub .} {approx}< (2.0 {+-} 0.6) x 10{sup 6} M{sub sun} in NGC 6503. (2) We show that the above galaxies contain only tiny pseudobulges that make up {approx}<3% of the stellar mass. This provides the strongest constraints to date on the lack of classical bulges in the biggest pure-disk galaxies. We inventory the galaxies in a sphere of radius 8 Mpc centered on our Galaxy to see whether giant, pure-disk galaxies are common or rare. We find that at least 11 of 19 galaxies with V{sub circ} > 150 km s{sup -1}, including M 101, NGC 6946, IC 342, and our Galaxy, show no evidence for a classical bulge. Four may contain small classical bulges that contribute 5%-12% of the light of the galaxy. Only four of the 19 giant galaxies are ellipticals or have classical bulges that contribute {approx}1/3 of the galaxy light. We conclude that pure-disk galaxies are far from rare. It is hard to understand how bulgeless galaxies could form as the quiescent tail of a distribution of merger histories. Recognition of pseudobulges makes the biggest problem with cold dark matter galaxy formation more acute: How can hierarchical clustering make so many giant, pure-disk galaxies with no evidence for merger-built bulges? Finally, we emphasize that this problem is a strong function of environment: the Virgo cluster is not a puzzle, because more than 2/3 of its stellar mass is in merger remnants.

  17. CO EMISSION IN OPTICALLY OBSCURED (TYPE-2) QUASARS AT REDSHIFTS z Almost-Equal-To 0.1-0.4

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krips, M.; Neri, R.; Cox, P., E-mail: krips@iram.fr, E-mail: neri@iram.fr, E-mail: cox@iram.fr

    We present a search for CO emission in a sample of 10 type-2 quasar host galaxies with redshifts of z Almost-Equal-To 0.1-0.4. We detect CO(J = 1-0) line emission with {>=}5{sigma} in the velocity integrated intensity maps of five sources. A sixth source shows a tentative detection at the {approx}4.5{sigma} level of its CO(J = 1-0) line emission. The CO emission of all six sources is spatially coincident with the position at optical, infrared, or radio wavelengths. The spectroscopic redshifts derived from the CO(J = 1-0) line are very close to the photometric ones for all five detections except for the tentative detection, for which we find a much larger discrepancy. We derive gas masses of {approx}(2-16) Multiplication-Sign 10{sup 9} M{sub Sun} for the CO emission in the six detected sources, while we constrain the gas masses to upper limits of M{sub gas} {<=} 8 Multiplication-Sign 10{sup 9} M{sub Sun} for the four non-detections. These values are of the order of, or slightly lower than, those derived for type-1 quasars. The line profiles of the CO(J = 1-0) emission are rather narrow ({approx}<300 km s{sup -1}) and single peaked, unveiling no typical signatures of current or recent merger activity, and are comparable to those of type-1 quasars. However, at least one of the observed sources shows tidal-tail-like emission in the optical that is indicative of an ongoing or past merging event. We also address the problem of detecting spurious {approx}5{sigma} emission peaks within the field of view.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, Peter W.; Ismail, Ahmed; Saraswat, Prashant

    We present a simple solution to the little hierarchy problem in the minimal supersymmetric standard model: a vectorlike fourth generation. With O(1) Yukawa couplings for the new quarks, the Higgs mass can naturally be above 114 GeV. Unlike a chiral fourth generation, a vectorlike generation can solve the little hierarchy problem while remaining consistent with precision electroweak and direct production constraints, and maintaining the success of the grand unified framework. The new quarks are predicted to lie between {approx}300-600 GeV and will thus be discovered or ruled out at the LHC. This scenario suggests exploration of several novel collider signatures.

  19. Winds Measured by the Rover Environmental Monitoring Station (REMS) During the Mars Science Laboratory (MSL) Rover's Bagnold Dunes Campaign and Comparison with Numerical Modeling Using MarsWRF

    NASA Technical Reports Server (NTRS)

    Newman, Claire E.; Gomez-Elvira, Javier; Marin, Mercedes; Navarro, Sara; Torres, Josefina; Richardson, Mark I.; Battalio, J. Michael; Guzewich, Scott D.; Sullivan, Robert; de la Torre, Manuel

    2016-01-01

    A high density of REMS wind measurements was collected in three science investigations during MSL's Bagnold Dunes Campaign, which took place over approx. 80 sols around southern winter solstice (Ls approx. 90 deg) and constituted the first in situ analysis of the environmental conditions, morphology, structure, and composition of an active dune field on Mars. The Wind Characterization Investigation was designed to fully characterize the near-surface wind field just outside the dunes and confirmed the primarily upslope/downslope flow expected from theory and modeling of the circulation on the slopes of Aeolis Mons in this season. The basic pattern of winds is 'upslope' (from the northwest, heading up Aeolis Mons) during the daytime (approx. 09:00-17:00 or 18:00) and 'downslope' (from the southeast, heading down Aeolis Mons) at night (approx. 20:00 to some time before 08:00). Between these times the wind rotates largely clockwise, giving generally westerly winds mid-morning and easterly winds in the early evening. The timings of these direction changes are relatively consistent from sol to sol; however, the wind direction and speed at any given time show considerable intersol variability. This pattern and timing are similar to predictions from the MarsWRF numerical model, run at a resolution of approx. 490 m in this region, although the model predicts the upslope winds to have a stronger component from the E than the W, misses a wind speed peak at approx. 09:00, and under-predicts the strength of daytime wind speeds by approx. 2-4 m/s. The Namib Dune Lee Investigation reveals 'blocking' of northerly winds by the dune, leaving primarily a westerly component to the daytime winds, and also shows a broadening of the 1 Hz wind speed distribution likely associated with lee turbulence. The Namib Dune Side Investigation measured primarily daytime winds at the side of the same dune, in support of aeolian change detection experiments designed to put limits on the saltation threshold, and also appears to show the influence of the dune body on the local flow, though less clearly than in the lee. Using a vertical grid with lower resolution near the surface reduces the relative strength of nighttime winds predicted by MarsWRF and produces a peak in wind speed at approx. 09:00, improving the match to the observed diurnal variation of wind speed, albeit with an offset in magnitude. The annual wind field predicted using this grid also provides a far better match to observations of aeolian dune morphology and motion in the Bagnold Dunes. However, the lower overall wind speeds than observed and disagreement with the observed wind direction at approx. 09:00 suggest that the problem has not been solved and that alternative boundary layer mixing schemes should be explored which may result in more mixing of momentum down to the near-surface from higher layers. These results demonstrate a strong need for in situ wind data to constrain the setup and assumptions used in numerical models, so that they may be used with more confidence to predict the circulation at other times and locations on Mars.

  20. DEMONSTRATION OF THE NEXT-GENERATION CAUSTIC-SIDE SOLVENT EXTRACTION SOLVENT WITH 2-CM CENTRIFUGAL CONTACTORS USING TANK 49H WASTE AND WASTE SIMULANT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pierce, R.; Peters, T.; Crowder, M.

    2011-09-27

    Researchers successfully demonstrated the chemistry and process equipment of the Caustic-Side Solvent Extraction (CSSX) flowsheet using MaxCalix for the decontamination of high level waste (HLW). The demonstration was completed using a 12-stage, 2-cm centrifugal contactor apparatus at the Savannah River National Laboratory (SRNL). This represents the first CSSX process demonstration of the MaxCalix solvent system with Savannah River Site (SRS) HLW. Two tests lasting 24 and 27 hours processed non-radioactive simulated Tank 49H waste and actual Tank 49H HLW, respectively. Conclusions from this work include the following. The CSSX process is capable of reducing {sup 137}Cs in high level radioactive waste by a factor of more than 40,000 using five extraction, two scrub, and five strip stages. Tests demonstrated extraction and strip section stage efficiencies of greater than 93% for the Tank 49H waste test and greater than 88% for the simulant waste test. During a test with HLW, researchers processed 39 liters of Tank 49H solution and the waste raffinate had an average decontamination factor (DF) of 6.78E+04, with a maximum of 1.08E+05. A simulant waste solution ({approx}34.5 liters) with an initial Cs concentration of 83.1 mg/L was processed and had an average DF greater than 5.9E+03, with a maximum DF of greater than 6.6E+03. The difference may be attributable to differences in contactor stage efficiencies. Test results showed the solvent can be stripped of cesium and recycled for {approx}25 solvent turnovers without the occurrence of any measurable solvent degradation or negative effects from minor components. Based on the performance of the 12-stage 2-cm apparatus with the Tank 49H HLW, the projected DF for MCU with seven extraction, two scrub, and seven strip stages operating at a nominal efficiency of 90% is {approx}388,000. At 95% stage efficiency, the DF in MCU would be {approx}3.2 million. Carryover of organic solvent in aqueous streams (and aqueous in organic streams) was less than 0.1% when processing Tank 49H HLW. The entrained solvent concentration measured in the decontaminated salt solution (DSS) was as much as {approx}140 mg/L, although that value may be overstated by as much as 50% due to modifier solubility in the DSS. The entrained solvent concentration was measured in the strip effluent (SE) and the results are pending. A steady-state concentration factor (CF) of 15.9 was achieved with Tank 49H HLW. Cesium distribution ratios [D(Cs)] were measured with non-radioactive Tank 49H waste simulant and actual Tank 49H waste. Below is a comparison of D(Cs) values from the ESS and 2-cm tests. Batch Extraction-Scrub-Strip (ESS) tests yielded D(Cs) values for extraction of {approx}81-88 for tests with Tank 49H waste and waste simulant. The results from the 2-cm contactor tests were in agreement, with values of 58-92 for the Tank 49H HLW test and 54-83 for the simulant waste test. These values are consistent with the reference D(Cs) for extraction of {approx}60. In tests with Tank 49H waste and waste simulant, batch ESS tests measured D(Cs) values for the two scrub stages as {approx}3.5-5.0 for the first scrub stage and {approx}1.0-3.0 for the second scrub stage. In the Tank 49H test, the D(Cs) values for the 2-cm test were far from the ESS values. A D(Cs) value of 161 was measured for the first scrub stage and 10.8 for the second scrub stage. The data suggest that the scrub stage is not operating as effectively as intended.
For the simulant test, a D(Cs) value of 1.9 was measured for the first scrub stage; the sample from the second scrub stage was compromised. Measurements of the pH of all stage samples for the Tank 49H test showed that the pH for the extraction and scrub stages was 14 and the pH for the strip stages was {approx}7. It is expected that the pH of the second scrub stage would be {approx}12-13. Batch ESS tests measured D(Cs) values for the strip stages to be {approx}0.002-0.010. A high value in Strip No. 3 of a test with simulant solution has been attributed to issues associated with the limits of detection for the analytical method. In the 2-cm contactor tests, the first four strip stages of the Tank 49H waste test and all five strip stages in the simulant waste test had higher values than the ESS tests. Only the fifth strip stage D(Cs) value of the Tank 49H waste test matched that of the ESS tests. It is speculated that the less-than-optimal performance of the strip section is caused by inefficiencies in the scrub section. Because strip performance is sensitive to pH, the elevated pH value in the second scrub stage may be the cause. In spite of the D(Cs) values obtained in the scrub and strip sections, testing showed that the solvent system is robust. Average DFs for the process far exceeded targets even though the scrub and strip stages did not function optimally. Correction of the issue in the scrub and strip stages is expected to yield even higher waste DFs.
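
    As a quick arithmetic check of the decontamination factors quoted above (DF is the ratio of feed to raffinate Cs concentration), using the simulant-test numbers from this record:

        # DF = C_feed / C_raffinate; the simulant feed was 83.1 mg/L Cs with DF > 5.9E+03
        c_feed, df_avg = 83.1, 5.9e3
        print(c_feed / df_avg)  # implied raffinate Cs concentration, approx 0.014 mg/L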

  1. The Comet Giacobini-Zinner magnetotail: Axial stresses and inferred near-nucleus properties

    NASA Technical Reports Server (NTRS)

    Mccomas, D. J.; Gosling, J. T.; Bame, S. J.; Slavin, J. A.; Smith, E. J.; Steinberg, J. L.

    1986-01-01

    Utilizing the electron and magnetic field data from the ICE tail traversal of comet Giacobini-Zinner along with the MHD equations, a steady state, stress balance model of the cometary magnetotail was developed and used to infer important but unmeasured ion properties within the magnetotail at ICE and upstream at the average point along each streamline where cometary ions are picked up. The derived tailward ion flow speed at ICE is quite constant at approx. -20 to -30 km/sec across the entire tail. The flow velocity, ion temperature, density, and ion source rate upstream from the lobes (current sheet) at the average pick-up locations are approx. -75 km/sec (approx. -12), approx. 4 million K (approx. 100,000), approx. 20 per cu cm (approx. 400), and approx. 15 per cu cm/sec. Gradients in the plasma properties between the two regions are quite strong. Implications of the inferred plasma properties for the near-nucleus region and for cometary magnetotail formation are examined.

  2. Modeling of the hydrogen maser disk in MWC 349

    NASA Astrophysics Data System (ADS)

    Ponomarev, Victor O.; Smith, Howard A.; Strelnitski, Vladimir S.

    1994-04-01

    Maser amplification in a Keplerian circumstellar disk seen edge-on - the idea put forward by Gordon (1992), Martin-Pintado & Serabyn (1992), and Thum, Martin-Pintado, & Bachiller (1992) to explain the millimeter hydrogen recombination lines in MWC 349 - is further justified and developed here. The double-peaked (vs. possible triple-peaked) form of the observed spectra is explained by the reduced emission from the inner portion of the disk, the portion responsible for the central ('zero velocity') component of a triple-peaked spectrum. A radial gradient of electron density and/or free-free absorption within the disk are identified as the probable causes of this central 'hole' in the disk and of its opacity. We calculate a set of synthetic maser spectra radiated by a homogeneous Keplerian ring seen edge-on and compare them to the H30-alpha observations of Thum et al., averaged over about 1000 days. We used a simple graphical procedure to solve an inverse problem and deduced the probable values of some basic disk and maser parameters. We find that the maser is essentially unsaturated, and that the most probable values of the electron temperature, Doppler width of the microturbulence, and electron density, all averaged along the amplification path, are, correspondingly, Te less than or equal to 11,000 K, Vmicro less than or equal to 14 km/s, and ne approx. = (3 +/- 2) x 10(exp 7)/cu cm. The model shows that radiation at every frequency within the spectrum arises in a monochromatic 'hot spot.' The maximum optical depth within the 'hot spot' producing radiation at the spectral peak maximum is tau(sub max) approx. = 6 +/- 1; the effective width of the masing ring is approx. = 0.4-0.7 times its outer diameter; the size of the 'hot spot' responsible for the radiation at the spectral peak frequency is approx. = 0.2-0.3 times the distance between the two 'hot spots' corresponding to the two peaks. An important result of our model is the dynamical mass of the central star, M* approx. = 26 solar masses (D/1.2 kpc), D being the distance to the star. Prospects for improving the model are discussed.

  3. Short Pulse Laser Applications Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Town, R J; Clark, D S; Kemp, A J

    We are applying our recently developed, LDRD-funded computational simulation tool to optimize and develop applications of Fast Ignition (FI) for stockpile stewardship. This report summarizes the work performed during a one-year exploratory research LDRD to develop FI point designs for the National Ignition Facility (NIF). These results were sufficiently encouraging to successfully propose a strategic initiative LDRD to design and perform the definitive FI experiment on the NIF. Ignition experiments on the National Ignition Facility (NIF) will begin in 2010 using the central hot spot (CHS) approach, which relies on the simultaneous compression and ignition of a spherical fuel capsule. Unlike this approach, the fast ignition (FI) method separates fuel compression from the ignition phase. In the compression phase, a laser such as NIF is used to implode a shell either directly, or by x rays generated from the hohlraum wall, to form a compact dense ({approx}300 g/cm{sup 3}) fuel mass with an areal density of {approx}3.0 g/cm{sup 2}. To ignite such a fuel assembly requires depositing {approx}20 kJ into a {approx}35 {micro}m spot delivered in a time short compared to the fuel disassembly time ({approx}20 ps). This energy is delivered during the ignition phase by relativistic electrons generated by the interaction of an ultra-short high-intensity laser. The main advantages of FI over the CHS approach are higher gain, a lower ignition threshold, and a relaxation of the stringent symmetry requirements imposed by the CHS approach. There is worldwide interest in FI and its associated science. Major experimental facilities are being constructed which will enable 'proof of principle' tests of FI in integrated subignition experiments, most notably the OMEGA-EP facility at the University of Rochester's Laboratory for Laser Energetics and the FIREX facility at Osaka University in Japan. Also, scientists in the European Union have recently proposed the construction of a new FI facility, called HiPER, designed to demonstrate FI. Our design work has focused on the NIF, which is the only facility capable of forming a full-scale hydro assembly, and could be adapted for full-scale FI by the conversion of additional beams to short-pulse operation.

  4. RECONSTRUCTING REDSHIFT DISTRIBUTIONS WITH CROSS-CORRELATIONS: TESTS AND AN OPTIMIZED RECIPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matthews, Daniel J.; Newman, Jeffrey A., E-mail: djm70@pitt.edu, E-mail: janewman@pitt.edu

    2010-09-20

    Many of the cosmological tests to be performed by planned dark energy experiments will require extremely well-characterized photometric redshift measurements. Current estimates for cosmic shear are that the true mean redshift of the objects in each photo-z bin must be known to better than 0.002(1 + z), and the width of the bin must be known to {approx}0.003(1 + z) if errors in cosmological measurements are not to be degraded significantly. A conventional approach is to calibrate these photometric redshifts with large sets of spectroscopic redshifts. However, at the depths probed by Stage III surveys (such as DES), let alone Stage IV (LSST, JDEM, and Euclid), existing large redshift samples have all been highly (25%-60%) incomplete, with a strong dependence of success rate on both redshift and galaxy properties. A powerful alternative approach is to exploit the clustering of galaxies to perform photometric redshift calibrations. Measuring the two-point angular cross-correlation between objects in some photometric redshift bin and objects with known spectroscopic redshift, as a function of the spectroscopic z, allows the true redshift distribution of a photometric sample to be reconstructed in detail, even if it includes objects too faint for spectroscopy or if spectroscopic samples are highly incomplete. We test this technique using mock DEEP2 Galaxy Redshift survey light cones constructed from the Millennium Simulation semi-analytic galaxy catalogs. From this realistic test, which incorporates the effects of galaxy bias evolution and cosmic variance, we find that the true redshift distribution of a photometric sample can, in fact, be determined accurately with cross-correlation techniques. We also compare the empirical error in the reconstruction of redshift distributions to previous analytic predictions, finding that additional components must be included in error budgets to match the simulation results. This extra error contribution is small for surveys that sample large areas of sky (>{approx}10{sup 0}-100{sup 0}), but dominant for {approx}1 deg{sup 2} fields. We conclude by presenting a step-by-step, optimized recipe for reconstructing redshift distributions from cross-correlation information using standard correlation measurements.
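
    Schematically, the recipe rests on the standard cross-correlation calibration relation (written here in generic notation, not the paper's): the angular cross-correlation between the photometric sample and a spectroscopic slice at redshift z scales with the photometric sample's true redshift distribution,

        w_{sp}(\theta, z) \propto \phi_p(z)\, b_s(z)\, b_p(z)\, w_{\mathrm{DM}}(\theta, z),

    so measuring w_{sp} in successive spectroscopic slices, and dividing out the bias factors b_s and b_p estimated from the two samples' autocorrelations, recovers \phi_p(z) = dN/dz up to normalization.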

  5. Decay of Reactivity Induced by Simulated Solar Wind Implantation of a Forsteritic Olivine

    NASA Technical Reports Server (NTRS)

    Kuhlman, K.R.; Sridharan, K.; Garrison, D.H.; McKay, D.S.; Taylor, L.A.

    2009-01-01

    In returning humans to the Moon, the Lunar Airborne Dust Toxicity Advisory Group (LADTAG) must address many problems faced by the original Apollo astronauts. Major among these is control of the fine dust (<20 microns) that makes up an approx. 20 wt% portion of the lunar surface. This ubiquitous, clinging, sharp, abrasive, glassy dust caused a plethora of problems with seals, abrasion, and coatings, in addition to possible health problems, including lunar dust hayfever. The lifetime of reactive sites on the surfaces of irradiated lunar dust grains is of interest to those studying human health because of the free radicals and toxic compounds that may be formed and may not passivate quickly when exposed to habitat/spacecraft air.

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaitsgory, Vladimir, E-mail: vladimir.gaitsgory@mq.edu.au; Rossomakhine, Sergey, E-mail: serguei.rossomakhine@flinders.edu.au

    The paper aims at the development of an apparatus for the analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but a possible extension of the results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of an available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.

  7. A terahertz-vibration to terahertz-radiation converter based on gold nanoobjects: a feasibility study.

    PubMed

    Moldosanov, Kamil; Postnikov, Andrei

    2016-01-01

    The need for practical and adaptable terahertz sources is apparent in areas of application such as early cancer diagnostics, nondestructive inspection of pharmaceutical tablets, and visualization of concealed objects. We outline the operation principle and suggest the design of a simple appliance for generating terahertz radiation by a system of nanoobjects - gold nanobars (GNBs) or nanorings (GNRs) - irradiated by microwaves. Our estimates confirm the feasibility of the idea that GNBs and GNRs irradiated by microwaves could become terahertz emitters with photon energies within the full width at half maximum of the longitudinal acoustic phononic DOS of gold (ca. 16-19 meV, i.e., 3.9-4.6 THz). A scheme of the terahertz radiation source is suggested based on a domestic microwave oven irradiating a substrate with multiple deposited GNBs or GNRs. The size of a nanoobject for optimal conversion is estimated to be approx. 3 nm (thickness) by approx. 100 nm (length of the GNB, or along the GNR). This detailed prediction is open to experimental verification. An impact is expected on further studies of the interplay between atomic vibrations and electromagnetic waves in nanoobjects.
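
    The quoted energy-to-frequency conversion follows directly from \nu = E/h; as a check of the numbers in this record:

        \nu = \frac{E}{h},\qquad h \approx 4.136\times10^{-15}\ \mathrm{eV\,s}:\qquad 16\ \mathrm{meV} \rightarrow 3.9\ \mathrm{THz},\qquad 19\ \mathrm{meV} \rightarrow 4.6\ \mathrm{THz}.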

  8. Effects of Tropospheric Spatio-Temporal Correlated Noise on the Analysis of Space Geodetic Data

    NASA Technical Reports Server (NTRS)

    Romero-Wolf, A.; Jacobs, C. S.; Ratcliff, J. T.

    2012-01-01

    The standard VLBI analysis models the distribution of measurement noise as Gaussian. Because the price of recording bits is steadily decreasing, thermal errors will soon no longer dominate. As a result, it is expected that troposphere and instrumentation/clock errors will increasingly become more dominant. Given that both of these errors have correlated spectra, properly modeling the error distributions will become increasingly relevant for optimal analysis. We discuss the advantages of modeling the correlations between tropospheric delays using a Kolmogorov spectrum and the frozen flow assumption pioneered by Treuhaft and Lanyi. We then apply these correlated noise spectra to the weighting of VLBI data analysis for two case studies: X/Ka-band global astrometry and Earth orientation. In both cases we see improved results when the analyses are weighted with correlated noise models vs. the standard uncorrelated models. The X/Ka astrometric scatter improved by approx. 10%, and the systematic Delta-delta vs. delta slope decreased by approx. 50%. The TEMPO Earth orientation results improved by 17% in baseline transverse and 27% in baseline vertical.
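
    The weighting scheme described above amounts to generalized least squares with a dense noise covariance in place of the usual diagonal one. A minimal sketch, with a toy exponential covariance standing in for the Kolmogorov/frozen-flow troposphere model (all data, scales, and the two-parameter fit are invented for illustration):

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50
        t = np.linspace(0.0, 1.0, n)
        A = np.column_stack([np.ones(n), t])                      # fit an offset and a rate
        C = 0.1 * np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2)  # toy correlated noise covariance
        y = A @ np.array([1.0, 2.0]) + rng.multivariate_normal(np.zeros(n), C)

        Cinv = np.linalg.inv(C)
        x_gls = np.linalg.solve(A.T @ Cinv @ A, A.T @ Cinv @ y)
        print(x_gls)  # GLS estimate; keeping only diag(C) recovers the standard uncorrelated weighting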

  9. Development of modular scalable pulsed power systems for high power magnetized plasma experiments

    NASA Astrophysics Data System (ADS)

    Bean, I. A.; Weber, T. E.; Adams, C. S.; Henderson, B. R.; Klim, A. J.

    2017-10-01

    New pulsed power switches and trigger drivers are being developed in order to explore higher energy regimes in the Magnetic Shock Experiment (MSX) at Los Alamos National Laboratory. To achieve the required plasma velocities, high-power (approx. 100 kV, 100s of kA), high charge transfer (approx. 1 C), low-jitter (few ns) gas switches are needed. A study has been conducted on the effects of various electrode geometries and materials, dielectric media, and triggering strategies, resulting in the design of a low-inductance annular field-distortion switch, optimized for use with dry air at 90 psig, and triggered by a low-jitter, rapid rise-time solid-state Linear Transformer Driver. The switch geometry and electrical characteristics are designed to be compatible with Scyllac-style capacitors and are intended to be deployed in modular configurations. The scalable nature of this approach will enable the rapid design and implementation of a wide variety of high-power magnetized plasma experiments. This work is supported by the U.S. Department of Energy, National Nuclear Security Administration. Approved for unlimited release, LA-UR-17-2578.

  10. Integrated Electron-tunneling Refrigerator and TES Bolometer for Millimeter Wave Astronomy

    NASA Technical Reports Server (NTRS)

    Silverberg, R. F.; Benford, D. J.; Chen, T. C.; Chervenak, J.; Finkbeiner, F.; Moseley, S. H.; Duncan, W.; Miller, N.; Schmidt, D.; Ullom, J.

    2005-01-01

    We describe progress in the development of a close-packed array of bolometers intended for use in photometric applications at millimeter wavelengths from ground-based telescopes. Each bolometer in the array uses a proximity-effect Transition Edge Sensor (TES) sensing element, and each will have integrated Normal-Insulator-Superconductor (NIS) refrigerators to cool the bolometer below the ambient bath temperature. The NIS refrigerators and acoustic-phonon-mode-isolated bolometers are fabricated on silicon. The radiation-absorbing element is mechanically suspended by four legs, whose dimensions are used to control and optimize the thermal conductance of the bolometer. Using the technology developed at NIST, we fabricate NIS refrigerators at the base of each of the suspension legs. The NIS refrigerators remove hot electrons by quantum-mechanical tunneling and are expected to cool the biased (approx. 10 pW) bolometers to <170 mK while the bolometers are inside a pumped 3He-cooled cryostat operating at approx. 280 mK. This significantly lower temperature at the bolometer allows the detectors to approach background-limited performance despite the simple cryogenic system.

  11. Performance of Grey Wolf Optimizer on large scale problems

    NASA Astrophysics Data System (ADS)

    Gupta, Shubham; Deep, Kusum

    2017-01-01

    Numerous nature-inspired optimization techniques have been proposed in the literature for solving nonlinear continuous optimization problems, and they can be applied to real-life problems where conventional techniques cannot. The Grey Wolf Optimizer is one such technique, and it has been gaining popularity over the last two years. The objective of this paper is to investigate the performance of the Grey Wolf Optimization Algorithm on large scale optimization problems. The algorithm is implemented on 5 common scalable problems appearing in the literature, namely the Sphere, Rosenbrock, Rastrigin, Ackley, and Griewank functions. The dimensions of these problems are varied from 50 to 1000. The results indicate that the Grey Wolf Optimizer is a powerful nature-inspired optimization algorithm for large scale problems, except on Rosenbrock, which is a unimodal function.
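
    For readers unfamiliar with the algorithm, a compact sketch of the Grey Wolf Optimizer applied to the Sphere function follows; the population size, iteration budget, and bounds are illustrative choices, not the paper's experimental settings:

        import numpy as np

        def sphere(x):
            return float(np.sum(x * x))

        def gwo(f, dim=50, n_wolves=30, iters=500, lb=-100.0, ub=100.0, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lb, ub, (n_wolves, dim))
            for it in range(iters):
                fit = np.array([f(x) for x in X])
                idx = np.argsort(fit)
                # alpha, beta, delta: the three best wolves lead the pack
                alpha, beta, delta = X[idx[0]].copy(), X[idx[1]].copy(), X[idx[2]].copy()
                a = 2.0 * (1.0 - it / iters)  # decreases linearly from 2 to 0
                for i in range(n_wolves):
                    x_new = np.zeros(dim)
                    for leader in (alpha, beta, delta):
                        r1, r2 = rng.random(dim), rng.random(dim)
                        A = 2.0 * a * r1 - a   # exploration/exploitation coefficient
                        C = 2.0 * r2
                        x_new += leader - A * np.abs(C * leader - X[i])
                    X[i] = np.clip(x_new / 3.0, lb, ub)  # average of the three pulls
            fit = np.array([f(x) for x in X])
            return X[np.argmin(fit)], float(fit.min())

        best_x, best_f = gwo(sphere)
        print(best_f)  # should approach 0, the global minimum of the Sphere function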

  12. DEFINING THE 'BLIND SPOT' OF HINODE EIS AND XRT TEMPERATURE MEASUREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winebarger, Amy R.; Cirtain, Jonathan; Mulu-Moore, Fana

    2012-02-20

    Observing high-temperature, low emission measure plasma is key to unlocking the coronal heating problem. With current instrumentation, a combination of EUV spectral data from the Hinode Extreme-ultraviolet Imaging Spectrometer (EIS; sensitive to temperatures up to 4 MK) and broadband filter data from the Hinode X-ray Telescope (XRT; sensitive to higher temperatures) is typically used to diagnose the temperature structure of the observed plasma. In this Letter, we demonstrate that a 'blind spot' exists in temperature-emission measure space for combined Hinode EIS and XRT observations. For a typical active region core with significant emission at 3-4 MK, Hinode EIS and XRT are insensitive to plasma with temperatures greater than {approx}6 MK and emission measures less than {approx}10{sup 27} cm{sup -5}. We then demonstrate that the temperature and emission measure limits of this blind spot depend upon the temperature distribution of the plasma along the line of sight by considering a hypothetical emission measure distribution sharply peaked at 1 MK. For this emission measure distribution, we find that EIS and XRT are insensitive to plasma with emission measures less than {approx}10{sup 26} cm{sup -5}. We suggest that a spatially and spectrally resolved 6-24 Angstrom spectrum would improve the sensitivity to these high-temperature, low emission measure plasmas.

  13. Composition of primary cosmic rays at energies 10(15) to approximately 10(16) eV

    NASA Technical Reports Server (NTRS)

    Amenomori, M.; Konishi, E.; Hotta, N.; Mizutani, K.; Kasahara, K.; Kobayashi, T.; Mikumo, E.; Sato, K.; Yuda, T.; Mito, I.

    1985-01-01

    The sigma E(sub gamma) spectrum in the range 1 approx. 5 x 1000 TeV observed at Mt. Fuji suggests that the flux of primary protons at 10(exp 15) approx. 10(exp 16) eV is lower by a factor of 2 approx. 3 than a simple extrapolation from lower energies; the integral proton spectrum tends to be steeper than E(exp -1.7) around 10(exp 14) eV, and the spectral index becomes approx. 2.0 around 10(exp 15) eV. If the total flux of primary particles has no steepening up to approx. 10(exp 15) eV, then the fraction of primary protons in the total flux should be approx. 20%, in contrast to approx. 45% at lower energies.

  14. Feed Forward Neural Network and Optimal Control Problem with Control and State Constraints

    NASA Astrophysics Data System (ADS)

    Kmet', Tibor; Kmet'ová, Mária

    2009-09-01

    A feed forward neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The paper extends the adaptive critic neural network architecture proposed by [5] to optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.
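
    The transcription step can be illustrated with a toy problem: discretize the control, integrate the dynamics, and hand the resulting nonlinear program to any NLP solver. Here scipy stands in for the paper's adaptive critic network, and the dynamics, cost, and bounds are invented for illustration (state constraints could enter the same way, as penalty terms or inequality constraints):

        import numpy as np
        from scipy.optimize import minimize

        N, dt, x0 = 40, 0.05, 1.0

        def cost(u):
            # integrate x' = -x + u by forward Euler; quadratic state/control cost
            x, J = x0, 0.0
            for uk in u:
                J += dt * (x * x + 0.1 * uk * uk)
                x += dt * (-x + uk)
            return J

        res = minimize(cost, np.zeros(N), bounds=[(-1.0, 1.0)] * N)  # control constraint |u| <= 1
        print(res.fun)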

  15. CLASH: THE ENHANCED LENSING EFFICIENCY OF THE HIGHLY ELONGATED MERGING CLUSTER MACS J0416.1-2403

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitrin, A.; Bartelmann, M.; Carrasco, M.

    2013-01-10

    We perform a strong lensing analysis of the merging galaxy cluster MACS J0416.1-2403 (M0416; z = 0.42) in recent CLASH/HST observations. We identify 70 new multiple images and candidates of 23 background sources in the range 0.7 {approx}< z{sub phot} {approx}< 6.14, including two probable high-redshift dropouts, revealing a highly elongated lens with axis ratio {approx_equal}5:1 and a major axis of {approx}100'' (z{sub s} {approx} 2). Compared to other well-studied clusters, M0416 shows an enhanced lensing efficiency. Although the critical area is not particularly large ({approx_equal} 0.6 {open_square}'; z{sub s} {approx} 2), the number of multiple images, per critical area, is anomalously high. We calculate that the observed elongation boosts the number of multiple images, per critical area, by a factor of {approx}2.5 Multiplication-Sign, due to the increased ratio of the caustic area relative to the critical area. Additionally, we find that the observed separation between the two main mass components enlarges the critical area by a factor of {approx}2. These geometrical effects can account for the high number (density) of multiple images observed. We find in numerical simulations that only {approx}4% of the clusters (with M{sub vir} {>=} 6 Multiplication-Sign 10{sup 14} h {sup -1} M{sub Sun }) exhibit critical curves as elongated as in M0416.

  16. A Luminosity Function of Ly(alpha)-Emitting Galaxies at Z [Approx. Equal to] 4.5

    NASA Technical Reports Server (NTRS)

    Dawson, Steve; Rhoads, James E.; Malhotra, Sangeeta; Stern, Daniel; Wang, JunXian; Dey, Arjun; Spinrad, Hyron; Jannuzi, Buell T.

    2007-01-01

    We present a catalog of 59 z [approx. equal to] 4.5 Ly(alpha)-emitting galaxies spectroscopically confirmed in a campaign of Keck DEIMOS follow-up observations of candidates selected in the Large Area Lyman Alpha (LALA) narrowband imaging survey. We targeted 97 candidates for spectroscopic follow-up; by accounting for the variety of conditions under which we performed spectroscopy, we estimate a selection reliability of approx. 76%. Together with our previous sample of Keck LRIS confirmations, the 59 sources confirmed herein bring the total catalog to 73 spectroscopically confirmed z [approx. equal to] 4.5 Ly(alpha)-emitting galaxies in the [approx. equal to] 0.7 deg(exp 2) covered by the LALA imaging. As with the Keck LRIS sample, we find that a nonnegligible fraction of the confirmed galaxies have Ly(alpha) rest-frame equivalent widths (W(sub lambda)(sup rest)) that exceed the maximum predicted for normal stellar populations: 17%-31% (93% confidence) of the detected galaxies show W(sub lambda)(sup rest) above this maximum, and 12%-27% (90% confidence) show W(sub lambda)(sup rest) > 240 A. We construct a luminosity function of z [approx. equal to] 4.5 Ly(alpha) emission lines for comparison to Ly(alpha) luminosity functions at redshifts out to z [approx. equal to] 6.6. We find no significant evidence for Ly(alpha) luminosity function evolution from z [approx. equal to] 3 to z [approx. equal to] 6. This result supports the conclusion that the intergalactic medium remains largely reionized from the local universe out to z [approx. equal to] 6.5. It is somewhat at odds with the pronounced drop in the cosmic star formation rate density recently measured between z approx. 3 and z approx. 6 in continuum-selected Lyman-break galaxies, and therefore potentially sheds light on the relationship between the two populations.

  17. SUPERNOVA FALLBACK ONTO MAGNETARS AND PROPELLER-POWERED SUPERNOVAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Piro, Anthony L.; Ott, Christian D., E-mail: piro@caltech.edu, E-mail: cott@tapir.caltech.edu

    2011-08-01

    We explore fallback accretion onto newly born magnetars during the supernova of massive stars. Strong magnetic fields ({approx}10{sup 15} G) and short spin periods ({approx}1-10 ms) have an important influence on how the magnetar interacts with the infalling material. At long spin periods, weak magnetic fields, and high accretion rates, sufficient material is accreted to form a black hole, as is commonly found for massive progenitor stars. When B {approx}< 5 x 10{sup 14} G, accretion causes the magnetar to spin sufficiently rapidly to deform triaxially and produces gravitational waves, but only for {approx}50-200 s until it collapses to a black hole. Conversely, at short spin periods, strong magnetic fields, and low accretion rates, the magnetar is in the 'propeller regime' and avoids becoming a black hole by expelling incoming material. This process spins down the magnetar, so that gravitational waves are only expected if the initial protoneutron star is spinning rapidly. Even when the magnetar survives, it accretes at least {approx}0.3 M{sub sun}, so we expect magnetars born within these types of environments to be more massive than the 1.4 M{sub sun} typically associated with neutron stars. The propeller mechanism converts the {approx}10{sup 52} erg of spin energy in the magnetar into the kinetic energy of an outflow, which shock heats the outgoing supernova ejecta during the first {approx}10-30 s. For a small {approx}5 M{sub sun} hydrogen-poor envelope, this energy creates a brighter, faster evolving supernova with high ejecta velocities {approx}(1-3) x 10{sup 4} km s{sup -1} and may appear as a broad-lined Type Ib/c supernova. For a large {approx}> 10 M{sub sun} hydrogen-rich envelope, the result is a bright Type IIP supernova with a plateau luminosity of {approx}> 10{sup 43} erg s{sup -1} lasting for a timescale of {approx}60-80 days.
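
    The {approx}10{sup 52} erg figure is simply the rotational energy of a millisecond magnetar; taking the standard fiducial neutron-star moment of inertia I approx. 10{sup 45} g cm{sup 2} (an assumed value, not quoted in the record):

        E_{\mathrm{rot}} = \frac{1}{2} I \Omega^2 = \frac{2\pi^2 I}{P^2} \approx 2\times10^{52}\ \left(\frac{P}{1\ \mathrm{ms}}\right)^{-2}\ \mathrm{erg}.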

  18. Isoform composition and stoichiometry of the approx. 90-kDa heat shock protein associated with glucocorticoid receptors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mendel, D.B.; Orti, E.

    1988-05-15

    The authors observed that the approx. 90-kDa non-steroid-binding component of nonactivated glucocorticoid receptors purified from WEHI-7 mouse thymoma cells (which has been identified as the approx. 90-kDa heat shock protein) consistently migrates as a doublet during polyacrylamide gel electrophoresis under denaturing and reducing conditions. It has recently been reported that murine Meth A cells contain a tumor-specific transplantation antigen (TSTA) which is related or identical to the approx. 90-kDa heat shock protein. The observation that TSTA and the approx. 90-kDa heat shock protein isolated from these cells exist as two isoforms of similar molecular mass and charge has suggested that the doublet observed is also due to the existence of two isoforms. They have therefore conducted this study to determine whether TSTA and the approx. 90-kDa component of glucocorticoid receptors are indeed related, to establish whether the receptor preferentially binds one isoform of the approx. 90-kDa heat shock protein, and to investigate the stoichiometry of the nonactivated receptor complex. They used the BuGr1 and AC88 monoclonal antibodies to purify, respectively, receptor-associated and free approx. 90-kDa heat shock protein from WEHI-7 cells grown for 48 h with ({sup 35}S)methionine to metabolically label proteins to steady state. The long-term metabolic labeling approach has also enabled them to directly determine that the purified nonactivated glucocorticoid receptor contains a single steroid-binding protein and two approx. 90-kDa non-steroid-binding subunits. The consistency with which an approx. 1:2 stoichiometric ratio of steroid-binding to approx. 90-kDa protein is observed supports the view that the approx. 90-kDa heat shock protein is a true component of nonactivated glucocorticoid-receptor complexes.

  19. SPECTRAL PROPERTIES OF {approx}0.5-6 keV ENERGETIC NEUTRAL ATOMS MEASURED BY THE INTERSTELLAR BOUNDARY EXPLORER (IBEX) ALONG THE LINES OF SIGHT OF VOYAGER

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, M. I.; Allegrini, F. A.; Dayeh, M. A.

    2012-04-20

    Energetic neutral atoms (ENAs) observed by the Interstellar Boundary Explorer (IBEX) provide powerful diagnostics about the origin of the progenitor ion populations and the physical mechanisms responsible for their production. Here we survey the fluxes, energy spectra, and energy dependence of the spectral indices of {approx}0.5-6 keV ENAs measured by IBEX-Hi along the lines of sight of Voyager 1 and 2. We compare the ENA spectra observed at IBEX with predictions of Zank et al., who modeled the microphysics of the heliospheric termination shock to predict the shape and relative contributions of three distinct heliosheath ion populations. We show that (1) the ENA spectral indices exhibit similar energy dependence along the V1 and V2 directions - the spectrum hardens to {gamma} {approx} 1 between {approx}1 and 2 keV and softens to {gamma} {approx} 2 below {approx}1 keV and above {approx}2 keV, (2) the observed ENA fluxes agree to within {approx}50% of the Zank et al. predictions and are unlikely to be produced by core solar wind (SW) ions, and (3) the ENA spectra do not exhibit sharp cutoffs at {approx} twice the SW speed as is typically observed for shell-like pickup ion (PUI) distributions in the heliosphere. We conclude that ENAs at IBEX are generated by at least two types of ion populations whose relative contributions depend on the ENA energy: transmitted PUIs in the {approx}0.5-5 keV energy range and reflected PUIs above {approx}5 keV energy. The {approx}0.5-5 keV PUI distribution is probably a superposition of Maxwellian or kappa distributions and partially filled shell distributions in velocity space.

  20. THE NORTHERN WRAPS OF THE SAGITTARIUS STREAM AS TRACED BY RED CLUMP STARS: DISTANCES, INTRINSIC WIDTHS, AND STELLAR DENSITIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Correnti, M.; Ferraro, F. R.; Bellazzini, M.

    2010-09-20

    We trace the tidal Stream of the Sagittarius dwarf spheroidal galaxy (Sgr dSph) using Red Clump (RC) stars from the catalog of the Sloan Digital Sky Survey-Data Release 6, in the range 150{sup 0} {approx}< R.A. {approx}< 220{sup 0}, corresponding to the range of orbital azimuth 220{sup 0} {approx}< {Lambda} {approx}< 290{sup 0}. Substructures along the line of sight (los) are identified as significant peaks in the differential star count profiles (SCPs) of candidate RC stars. A proper modeling of the SCPs allows us to obtain (1) {<=}10% accurate, purely differential distances with respect to the main body of Sgr, (2) estimates of the FWHM along the los, and (3) estimates of the local density, for each detected substructure. In the range 255{sup 0} {approx}< {Lambda} {approx}< 290{sup 0} we cleanly and continuously trace various coherent structures that can be ascribed to the Stream, in particular: the well-known northern portion of the leading arm, running from d {approx_equal} 43 kpc at {Lambda} {approx_equal} 290{sup 0} to d {approx_equal} 30 kpc at {Lambda} {approx_equal} 255{sup 0}, and a more nearby coherent series of detections lying at a constant distance d {approx_equal} 25 kpc, that can be identified with a wrap of the trailing arm. The latter structure, predicted by several models of the disruption of Sgr dSph, was never traced before; comparison with existing models indicates that the difference in distance between these portions of the leading and trailing arms may provide a powerful tool to discriminate between theoretical models assuming different shapes of the Galactic potential. A further, more distant wrap in the same portion of the sky is detected only along a couple of los. For {Lambda} {approx}< 255{sup 0} the detected structures are more complex and less easily interpreted. We are confident of being able to trace the continuation of the leading arm down to {Lambda} {approx_equal} 220{sup 0} and d {approx_equal} 20 kpc; the trailing arm is seen up to {Lambda} {approx_equal} 240{sup 0} where it is replaced by more distant structures. Possible detections of more nearby wraps and of the Virgo Stellar Stream are also discussed. These measured properties provide a coherent set of observational constraints for the next generation of theoretical models of the disruption of Sgr.

  1. Ringfield lithographic camera

    DOEpatents

    Sweatt, W.C.

    1998-09-08

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D{sub source} {approx_equal} 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.

  2. COPS: Large-scale nonlinearly constrained optimization problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bondarenko, A.S.; Bortz, D.M.; More, J.J.

    2000-02-10

    The authors have started the development of COPS, a collection of large-scale nonlinearly Constrained Optimization Problems. The primary purpose of this collection is to provide difficult test cases for optimization software. Problems in the current version of the collection come from fluid dynamics, population dynamics, optimal design, and optimal control. For each problem they provide a short description of the problem, notes on the formulation of the problem, and results of computational experiments with general optimization solvers. They currently have results for DONLP2, LANCELOT, MINOS, SNOPT, and LOQO.
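
    As a pocket-sized illustration of the kind of nonlinearly constrained test case such a collection gathers (the toy problem below is ours, not one of the COPS problems, and scipy's SLSQP stands in for the solvers benchmarked in the report):

        import numpy as np
        from scipy.optimize import minimize

        # Minimize a quadratic subject to one nonlinear equality and one linear inequality.
        objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
        constraints = [
            {"type": "eq",   "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 4.0},  # on a circle
            {"type": "ineq", "fun": lambda x: x[0] - 2.0 * x[1] + 2.0},      # g(x) >= 0
        ]
        res = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
                       bounds=[(0.0, None), (0.0, None)], constraints=constraints)
        print(res.x, res.fun)  # optimum at approximately (1.2, 1.6)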

  3. Wide Field Imaging of the Hubble Deep Field-South Region III: Catalog

    NASA Technical Reports Server (NTRS)

    Palunas, Povilas; Collins, Nicholas R.; Gardner, Jonathan P.; Hill, Robert S.; Malumuth, Eliot M.; Rhodes, Jason; Teplitz, Harry I.; Woodgate, Bruce E.

    2002-01-01

    We present 1/2 square degree uBVRI imaging around the Hubble Deep Field - South. These data have been used in earlier papers to examine the QSO population and the evolution of the correlation function in the region around the HDF-S. The images were obtained with the Big Throughput Camera at CTIO in September 1998. The images reach 5 sigma limits of u approx. 24.4, B approx. 25.6, V approx. 25.3, R approx. 24.9 and I approx. 23.9. We present a catalog of approx. 22,000 galaxies. We also present number-magnitude counts and a comparison with other observations of the same field. The data presented here are available over the world wide web.

  4. Plasma Thruster Development: Magnetoplasmadynamic Propulsion, Status and Basic Problems.

    DTIC Science & Technology

    1986-02-01

    [List-of-figures and list-of-tables residue from the report front matter: sublimation rates vs. temperature for typical electrode materials; time to reach melting vs. surface heat load (one-dimensional, large-area approximation) for different electrode materials and initial temperatures; models of thruster types.] ... much higher specific impulse values than the minimum must be achieved in order to obtain acceptable efficiencies, e.g., for 30% efficiency with argon ...

  5. Generalized bipartite quantum state discrimination problems with sequential measurements

    NASA Astrophysics Data System (ADS)

    Nakahira, Kenji; Kato, Kentaro; Usuda, Tsuyoshi Sasaki

    2018-02-01

    We investigate an optimization problem of finding quantum sequential measurements, which forms a wide class of state discrimination problems with the restriction that only local operations and one-way classical communication are allowed. Sequential measurements from Alice to Bob on a bipartite system are considered. Using the fact that the optimization problem can be formulated as a problem with only Alice's measurement and is convex programming, we derive its dual problem and necessary and sufficient conditions for an optimal solution. Our results are applicable to various practical optimization criteria, including the Bayes criterion, the Neyman-Pearson criterion, and the minimax criterion. In the setting of the problem of finding an optimal global measurement, its dual problem and necessary and sufficient conditions for an optimal solution have been widely used to obtain analytical and numerical expressions for optimal solutions. Similarly, our results are useful to obtain analytical and numerical expressions for optimal sequential measurements. Examples in which our results can be used to obtain an analytical expression for an optimal sequential measurement are provided.
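
    The sequential-measurement formulation itself is involved, but the flavor of these convex programs can be shown on the simpler global-measurement case: minimum-error (Bayes-criterion) discrimination of two states is a small semidefinite program. The sketch below, using CVXPY, is only that simpler illustration with invented states and priors, not the authors' sequential formulation; the Helstrom bound is printed as a check.

```python
# Hedged sketch: minimum-error (Bayes) discrimination of two qubit states
# as a semidefinite program. This is the simpler *global* measurement
# problem, not the sequential formulation of the paper.
import numpy as np
import cvxpy as cp

# Two pure states with equal priors (illustrative values)
psi0 = np.array([1.0, 0.0])
theta = np.pi / 8
psi1 = np.array([np.cos(theta), np.sin(theta)])
rho = [np.outer(psi0, psi0), np.outer(psi1, psi1)]
p = [0.5, 0.5]

# POVM elements: M0, M1 >= 0 and M0 + M1 = I
M = [cp.Variable((2, 2), hermitian=True) for _ in range(2)]
constraints = [m >> 0 for m in M] + [M[0] + M[1] == np.eye(2)]

success = sum(p[i] * cp.real(cp.trace(M[i] @ rho[i])) for i in range(2))
prob = cp.Problem(cp.Maximize(success), constraints)
prob.solve()
print("optimal success probability:", prob.value)
# Helstrom bound for comparison: (1 + sqrt(1 - |<psi0|psi1>|^2)) / 2
print("Helstrom bound:", 0.5 * (1 + np.sqrt(1 - np.dot(psi0, psi1)**2)))
```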

  6. Spatial and Temporal Variations in Titan's Surface Temperatures from Cassini CIRS Observations

    NASA Technical Reports Server (NTRS)

    Cottini, V.; Nixon, C. A.; Jennings, D. E.; deKok, R.; Teanby, N. A.; Irwin, P. G. J.; Flasar, F. M.

    2012-01-01

    We report a wide-ranging study of Titan's surface temperatures by analysis of the moon's outgoing radiance through a spectral window in the thermal infrared at 19 μm (530 cm^-1) characterized by lower atmospheric opacity. We begin by modeling Cassini Composite Infrared Spectrometer (CIRS) far-infrared spectra collected in the period 2004-2010, using a radiative transfer forward model combined with a non-linear optimal estimation inversion method. At low latitudes, we agree with the HASI near-surface temperature of about 94 K at 10° S (Fulchignoni et al., 2005). We find a systematic decrease from the equator toward the poles, hemispherically asymmetric, of approx. 1 K at 60° S and approx. 3 K at 60° N, in general agreement with a previous analysis of CIRS data and with Voyager results from the previous northern winter. Subdividing the available database, corresponding to about one Titan season, into three consecutive periods, small seasonal changes of up to 2 K at 60° N became noticeable in the results. In addition, clear evidence of diurnal variation of the surface temperatures near the equator is observed for the first time: we find a trend of slowly increasing temperature from the morning to the early afternoon and a faster decrease during the night. The diurnal change is approx. 1.5 K, in agreement with model predictions for a surface with a thermal inertia between 300 and 600 J m^-2 s^-1/2 K^-1. These results provide important constraints on coupled surface-atmosphere models of Titan's meteorology and atmospheric dynamics.

  7. Comparison of three methods for long-term monitoring of boreal lake area using Landsat TM and ETM+ imagery

    USGS Publications Warehouse

    Roach, Jennifer K.; Griffith, Brad; Verbyla, David

    2012-01-01

    Programs to monitor lake area change are becoming increasingly important in high latitude regions, and their development often requires evaluating tradeoffs among different approaches in terms of accuracy of measurement, consistency across multiple users over long time periods, and efficiency. We compared three supervised methods for lake classification from Landsat imagery (density slicing, classification trees, and feature extraction). The accuracy of lake area and number estimates was evaluated relative to high-resolution aerial photography acquired within two days of satellite overpasses. The shortwave infrared band 5 was better at separating surface water from nonwater when used alone than when combined with other spectral bands. The simplest of the three methods, density slicing, performed best overall. The classification tree method resulted in the most omission errors (approx. 2x), feature extraction resulted in the most commission errors (approx. 4x), and density slicing had the least directional bias (approx. half of the lakes with overestimated area and half of the lakes with underestimated area). Feature extraction was the least consistent across training sets (i.e., large standard error among different training sets). Density slicing was the best of the three at classifying small lakes as evidenced by its lower optimal minimum lake size criterion of 5850 m2 compared with the other methods (8550 m2). Contrary to conventional wisdom, the use of additional spectral bands and a more sophisticated method not only required additional processing effort but also had a cost in terms of the accuracy and consistency of lake classifications.
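
    Density slicing amounts to thresholding a single band. Below is a minimal sketch, assuming a hypothetical reflectance array and threshold; in practice the threshold would be fit to training data, as in the paper.

```python
# Minimal sketch of density slicing for water classification: threshold a
# single shortwave-infrared band (Landsat TM band 5). The array and the
# threshold below are hypothetical, for illustration only.
import numpy as np

def classify_water(band5, threshold):
    """Return a boolean mask: True where the pixel is classified as water.

    Water absorbs strongly in the shortwave infrared, so low band-5
    reflectance indicates surface water.
    """
    return band5 < threshold

# Hypothetical 4x4 reflectance tile and threshold
band5 = np.array([[0.02, 0.03, 0.25, 0.30],
                  [0.01, 0.04, 0.28, 0.31],
                  [0.02, 0.02, 0.27, 0.29],
                  [0.03, 0.05, 0.26, 0.33]])
water_mask = classify_water(band5, threshold=0.10)
print(water_mask.sum(), "water pixels")
```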

  8. A Cascade Optimization Strategy for Solution of Difficult Multidisciplinary Design Problems

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Hopkins, Dale A.; Berke, Laszlo

    1996-01-01

    A research project to comparatively evaluate 10 nonlinear optimization algorithms was recently completed. A conclusion was that no single optimizer could successfully solve all 40 problems in the test bed, even though most optimizers successfully solved at least one-third of the problems. We realized that improved search directions and step lengths, available in the 10 optimizers compared, were not likely to alleviate the convergence difficulties. For the solution of those difficult problems we have devised an alternative approach called cascade optimization strategy. The cascade strategy uses several optimizers, one followed by another in a specified sequence, to solve a problem. A pseudorandom scheme perturbs design variables between the optimizers. The cascade strategy has been tested successfully in the design of supersonic and subsonic aircraft configurations and air-breathing engines for high-speed civil transport applications. These problems could not be successfully solved by an individual optimizer. The cascade optimization strategy, however, generated feasible optimum solutions for both aircraft and engine problems. This paper presents the cascade strategy and solutions to a number of these problems.
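
    A minimal sketch of the cascade idea follows, with SciPy's general-purpose optimizers standing in for the engineering optimizers of the study and an invented multimodal test function; each stage starts from the previous stage's solution after a pseudorandom perturbation.

```python
# Hedged sketch of a cascade optimization strategy: run several optimizers
# in sequence, perturbing the design variables pseudorandomly between
# stages. The test function, method list, and perturbation scale are
# illustrative, not those of the paper.
import numpy as np
from scipy.optimize import minimize

def objective(x):
    # Himmelblau's function: multimodal stand-in for a difficult design problem
    return (x[0]**2 + x[1] - 11)**2 + (x[0] + x[1]**2 - 7)**2

def cascade(x0, methods=("Nelder-Mead", "Powell", "BFGS"),
            perturbation=0.05, seed=0):
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for method in methods:
        result = minimize(objective, x, method=method)
        # Pseudorandom perturbation before handing off to the next optimizer
        x = result.x * (1.0 + perturbation * rng.standard_normal(x.size))
    return result

print(cascade([0.0, 0.0]).x)
```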

  9. A composite molecular phylogeny of living lemuroid primates.

    PubMed

    DelPero, Massimiliano; Pozzi, Luca; Masters, Judith C

    2006-01-01

    Lemuroid phylogeny is a source of lively debate among primatologists. Reconstructions based on morphological, physiological, behavioural and molecular data have yielded a diverse array of tree topologies with few nodes in common. In the last decade, molecular phylogenetic studies have grown in popularity, and a wide range of sequences has been brought to bear on the problem, but consensus has remained elusive. We present an analysis based on a composite molecular data set of approx. 6,400 bp assembled from the National Center for Biotechnology Information (NCBI) database, including both mitochondrial and nuclear genes, and diverse analytical methods. Our analysis consolidates some of the nodes that were insecure in previous reconstructions, but is still equivocal on the placement of some taxa. We conducted a similar analysis of a composite data set of approx. 3,600 bp to investigate the controversial relationships within the family Lemuridae. Here our analysis was more successful; only the position of Eulemur coronatus remained uncertain. Copyright 2006 S. Karger AG, Basel.

  10. Large-scale functional models of visual cortex for remote sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brumby, Steven P; Kenyon, Garrett; Rasmussen, Craig E

    Neuroscience has revealed many properties of neurons and of the functional organization of visual cortex that are believed to be essential to human vision, but are missing in standard artificial neural networks. Equally important may be the sheer scale of visual cortex, requiring approx. 1 petaflop of computation. In a year, the retina delivers approx. 1 petapixel to the brain, leading to massively large opportunities for learning at many levels of the cortical system. We describe work at Los Alamos National Laboratory (LANL) to develop large-scale functional models of visual cortex on LANL's Roadrunner petaflop supercomputer. An initial run of a simple region V1 code achieved 1.144 petaflops during trials at the IBM facility in Poughkeepsie, NY (June 2008). Here, we present criteria for assessing when a set of learned local representations is 'complete', along with general criteria for assessing computer vision models based on their projected scaling behavior. Finally, we extend one class of biologically inspired learning models to problems of remote sensing imagery.

  11. Short-lived solar burst spectral component at f approximately 100 GHz

    NASA Technical Reports Server (NTRS)

    Kaufmann, P.; Correia, E.; Costa, J. E. R.; Vaz, A. M. Z.

    1986-01-01

    A new kind of burst emission component was discovered, exhibiting fast and distinct pulses (approx. 60 ms durations), with spectral peak emission at f approx. 100 GHz, and onset time coincident with hard X-rays to within approx. 128 ms. These features pose serious constraints for the interpretation using current models. One suggestion assumes that the f approx. 100 GHz pulse emission is produced by the synchrotron mechanism of electrons accelerated to ultrarelativistic energies. The hard X-rays originate from inverse Compton scattering of the electrons on the synchrotron photons. Several crucial observational tests are needed for the understanding of the phenomenon, requiring high sensitivity and high time resolution (approx. 1 ms) simultaneous with high spatial resolution (0.1 arcsec) at f approx. 110 GHz and hard X-rays.

  12. THE XMM-NEWTON WIDE-FIELD SURVEY IN THE COSMOS FIELD (XMM-COSMOS): DEMOGRAPHY AND MULTIWAVELENGTH PROPERTIES OF OBSCURED AND UNOBSCURED LUMINOUS ACTIVE GALACTIC NUCLEI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brusa, M.; Cappelluti, N.; Merloni, A.

    2010-06-10

    We report the final optical identifications of the medium-depth (approx. 60 ks), contiguous (2 deg²) XMM-Newton survey of the COSMOS field. XMM-Newton has detected approx. 1800 X-ray sources down to limiting fluxes of approx. 5 × 10^-16, approx. 3 × 10^-15, and approx. 7 × 10^-15 erg cm^-2 s^-1 in the 0.5-2 keV, 2-10 keV, and 5-10 keV bands, respectively (approx. 1 × 10^-15, approx. 6 × 10^-15, and approx. 1 × 10^-14 erg cm^-2 s^-1, in the three bands, respectively, over 50% of the area). The work is complemented by an extensive collection of multiwavelength data from 24 μm to UV, available from the COSMOS survey, for each of the X-ray sources, including spectroscopic redshifts for ≳50% of the sample, and high-quality photometric redshifts for the rest. The XMM and multiwavelength flux limits are well matched: 1760 (98%) of the X-ray sources have optical counterparts, 1711 (approx. 95%) have IRAC counterparts, and 1394 (approx. 78%) have MIPS 24 μm detections. Thanks to the redshift completeness (almost 100%) we were able to constrain the high-luminosity tail of the X-ray luminosity function, confirming that the peak of the number density of log L_X > 44.5 active galactic nuclei (AGNs) is at z ≈ 2. Spectroscopically identified obscured and unobscured AGNs, as well as normal and star-forming galaxies, present well-defined optical and infrared properties. We devised a robust method to identify a sample of approx. 150 high-redshift (z > 1), obscured AGN candidates for which optical spectroscopy is not available. We were able to determine that the fraction of the obscured AGN population at the highest (L_X > 10^44 erg s^-1) X-ray luminosity is approx. 15%-30% when selection effects are taken into account, providing an important observational constraint for X-ray background synthesis. We studied in detail the optical spectrum and the overall spectral energy distribution of a prototypical Type 2 QSO, caught in a stage transitioning from being starburst dominated to AGN dominated, which was possible to isolate only thanks to the combination of X-ray and infrared observations.

  13. CLASH: THREE STRONGLY LENSED IMAGES OF A CANDIDATE z Almost-Equal-To 11 GALAXY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coe, Dan; Postman, Marc; Bradley, Larry

    2013-01-01

    We present a candidate for the most distant galaxy known to date with a photometric redshift of z = 10.7 +0.6/-0.4 (95% confidence limits; with z < 9.5 galaxies of known types ruled out at 7.2σ). This J-dropout Lyman break galaxy, named MACS0647-JD, was discovered as part of the Cluster Lensing and Supernova survey with Hubble (CLASH). We observe three magnified images of this galaxy due to strong gravitational lensing by the galaxy cluster MACSJ0647.7+7015 at z = 0.591. The images are magnified by factors of approx. 80, 7, and 2, with the brighter two observed at approx. 26th magnitude AB (approx. 0.15 μJy) in the WFC3/IR F160W filter (approx. 1.4-1.7 μm), where they are detected at ≳12σ. All three images are also confidently detected at ≳6σ in F140W (approx. 1.2-1.6 μm), dropping out of detection from 15 lower wavelength Hubble Space Telescope filters (approx. 0.2-1.4 μm), and lacking bright detections in Spitzer/IRAC 3.6 μm and 4.5 μm imaging (approx. 3.2-5.0 μm). We rule out a broad range of possible lower redshift interlopers, including some previously published as high-redshift candidates. Our high-redshift conclusion is more conservative than if we had neglected a Bayesian photometric redshift prior. Given CLASH observations of 17 high-mass clusters to date, our discoveries of MACS0647-JD at z approx. 10.8 and MACS1149-JD at z approx. 9.6 are consistent with a lensed luminosity function extrapolated from lower redshifts. This would suggest that low-luminosity galaxies could have reionized the universe. However, given the significant uncertainties based on only two galaxies, we cannot yet rule out the sharp drop-off in number counts at z ≳ 10 suggested by field searches.

  14. IC 751: A New Changing Look AGN Discovered by NuSTAR

    NASA Technical Reports Server (NTRS)

    Ricci, C.; Bauer, F. E.; Arevalo, P.; Boggs, S.; Brandt, W. N.; Christensen, F. E.; Craig, W. W.; Gandhi, P.; Hailey, C. J.; Harrison, F. A.; et al.

    2016-01-01

    We present results of five Nuclear Spectroscopic Telescope Array (NuSTAR) observations of the type 2 active galactic nucleus (AGN) in IC 751, three of which were performed simultaneously with XMM-Newton or Swift/X-Ray Telescope. We find that the nuclear X-ray source underwent a clear transition from a Compton-thick (N_H ≈ 2 × 10^24 cm^-2) to a Compton-thin (N_H ≈ 4 × 10^23 cm^-2) state on timescales of ≲3 months, which makes IC 751 the first changing look AGN discovered by NuSTAR. Changes of the line-of-sight column density at the approx. 2σ level are also found on a timescale of approx. 48 hr (ΔN_H approx. 10^23 cm^-2). From the lack of spectral variability on timescales of approx. 100 ks, we infer that the varying absorber is located beyond the emission-weighted average radius of the broad-line region (BLR), and could therefore be related either to the external part of the BLR or a clumpy molecular torus. By adopting a physical torus X-ray spectral model, we are able to disentangle the column density of the non-varying absorber (N_H ≈ 3.8 × 10^23 cm^-2) from that of the varying clouds [N_H ≈ (1-150) × 10^22 cm^-2], and to constrain that of the material responsible for the reprocessed X-ray radiation (N_H ≈ 6 × 10^24 cm^-2). We find evidence of significant intrinsic X-ray variability, with the flux varying by a factor of five on timescales of a few months in the 2-10 and 10-50 keV bands.

  15. ENVIRONMENTAL EFFECTS ON STAR FORMATION ACTIVITY AT z {approx} 0.9 IN THE COSMOS FIELD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kajisawa, M.; Shioya, Y.; Taniguchi, Y.

    2013-05-01

    We investigated the fraction of [O II] emitters in galaxies at z approx. 0.9 as a function of the local galaxy density in the Hubble Space Telescope (HST) COSMOS 2 deg² field. [O II] emitters are selected by the narrowband excess technique with the NB711-band imaging data taken with Suprime-Cam on the Subaru telescope. We carefully selected 614 photo-z-selected galaxies with M_U3500 < -19.31 at z = 0.901-0.920, which includes 195 [O II] emitters, to directly compare the results with our previous study at z approx. 1.2. We found that the fraction is almost constant at 0.3 Mpc^-2 < Σ_10th < 10 Mpc^-2. We also checked the fraction of galaxies with blue rest-frame colors of NUV - R < 2 in our photo-z-selected sample, and found that the fraction of blue galaxies does not significantly depend on the local density. On the other hand, the semi-analytic model of galaxy formation predicted that the fraction of star-forming galaxies at z approx. 0.9 decreases with increasing projected galaxy density even if the effects of the projection and the photo-z error in our analysis were taken into account. The fraction of [O II] emitters decreases from approx. 60% at z approx. 1.2 to approx. 30% at z approx. 0.9 independent of galaxy environment. The decrease of the [O II] emitter fraction could be explained mainly by the rapid decrease of star formation activity in the universe from z approx. 1.2 to z approx. 0.9.

  16. CONSTRAINTS ON THE FAINT END OF THE QUASAR LUMINOSITY FUNCTION AT z {approx} 5 IN THE COSMOS FIELD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikeda, H.; Matsuoka, K.; Kajisawa, M.

    2012-09-10

    We present the result of our low-luminosity quasar survey in the redshift range 4.5 ≲ z ≲ 5.5 in the COSMOS field. Using the COSMOS photometric catalog, we selected 15 quasar candidates with 22 < i' < 24 at z approx. 5 that are approx. 3 mag fainter than the Sloan Digital Sky Survey quasars in the same redshift range. We obtained optical spectra for 14 of the 15 candidates using FOCAS on the Subaru Telescope and did not identify any low-luminosity type-1 quasars at z approx. 5, while a low-luminosity type-2 quasar at z approx. 5.07 was discovered. In order to constrain the faint end of the quasar luminosity function at z approx. 5, we calculated the 1σ confidence upper limits of the space density of type-1 quasars. As a result, the 1σ confidence upper limits on the quasar space density are Φ < 1.33 × 10^-7 Mpc^-3 mag^-1 for -24.52 < M_1450 < -23.52 and Φ < 2.88 × 10^-7 Mpc^-3 mag^-1 for -23.52 < M_1450 < -22.52. The inferred 1σ confidence upper limits of the space density are then used to provide constraints on the faint-end slope and the break absolute magnitude of the quasar luminosity function at z approx. 5. We find that the quasar space density decreases gradually as a function of redshift at low luminosity (M_1450 approx. -23), being similar to the trend found for quasars with high luminosity (M_1450 < -26). This result is consistent with the so-called downsizing evolution of quasars seen at lower redshifts.

  17. Algorithms for bilevel optimization

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia; Dennis, J. E., Jr.

    1994-01-01

    General multilevel nonlinear optimization problems arise in design of complex systems and can be used as a means of regularization for multi-criteria optimization problems. Here, for clarity in displaying our ideas, we restrict ourselves to general bi-level optimization problems, and we present two solution approaches. Both approaches use a trust-region globalization strategy, and they can be easily extended to handle the general multilevel problem. We make no convexity assumptions, but we do assume that the problem has a nondegenerate feasible set. We consider necessary optimality conditions for the bi-level problem formulations and discuss results that can be extended to obtain multilevel optimization formulations with constraints at each level.
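
    For concreteness, the sketch below solves a toy bilevel problem by the straightforward nested approach, in which each upper-level function evaluation solves the lower-level problem to optimality. This illustrates the problem structure only; it is not the trust-region method of the paper, and all problem data are invented.

```python
# Illustrative nested solution of a tiny bilevel problem. Upper level
# chooses x to minimize F(x, y*(x)), where y*(x) solves the lower-level
# problem for fixed x.
import numpy as np
from scipy.optimize import minimize, minimize_scalar

def lower_level(x):
    # y*(x) = argmin_y (y - x)^2 + y^2  (closed form is y = x/2; solved
    # numerically here to mimic the general case)
    return minimize_scalar(lambda y: (y - x)**2 + y**2).x

def upper_objective(x):
    x = float(x[0])
    y = lower_level(x)
    return (x - 3.0)**2 + (y - 1.0)**2

res = minimize(upper_objective, x0=np.array([0.0]), method="Nelder-Mead")
x_opt = res.x[0]
print("x* =", x_opt, "y* =", lower_level(x_opt))
```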

  18. Near-Optimal Guidance Method for Maximizing the Reachable Domain of Gliding Aircraft

    NASA Astrophysics Data System (ADS)

    Tsuchiya, Takeshi

    This paper proposes a guidance method for gliding aircraft by using onboard computers to calculate a near-optimal trajectory in real-time, and thereby expanding the reachable domain. The results are applicable to advanced aircraft and future space transportation systems that require high safety. The calculation load of the optimal control problem that is used to maximize the reachable domain is too large for current computers to calculate in real-time. Thus the optimal control problem is divided into two problems: a gliding distance maximization problem in which the aircraft motion is limited to a vertical plane, and an optimal turning flight problem in a horizontal direction. First, the former problem is solved using a shooting method. It can be solved easily because its scale is smaller than that of the original problem, and because some of the features of the optimal solution are obtained in the first part of this paper. Next, in the latter problem, the optimal bank angle is computed from the solution of the former; this is an analytical computation, rather than an iterative computation. Finally, the reachable domain obtained from the proposed near-optimal guidance method is compared with that obtained from the original optimal control problem.

  19. Direct Method Transcription for a Human-Class Translunar Injection Trajectory Optimization

    NASA Technical Reports Server (NTRS)

    Witzberger, Kevin E.; Zeiler, Tom

    2012-01-01

    This paper presents a new trajectory optimization software package developed in the framework of a low-to-high fidelity 3 degrees-of-freedom (DOF)/6-DOF vehicle simulation program named Mission Analysis Simulation Tool in Fortran (MASTIF) and its application to a translunar trajectory optimization problem. The functionality of the developed optimization package is implemented as a new "mode" in generalized settings to make it applicable for a general trajectory optimization problem. In doing so, a direct optimization method using collocation is employed for solving the problem. Trajectory optimization problems in MASTIF are transcribed to a constrained nonlinear programming (NLP) problem and solved with SNOPT, a commercially available NLP solver. A detailed description of the optimization software developed is provided as well as the transcription specifics for the translunar injection (TLI) problem. The analysis includes a 3-DOF trajectory TLI optimization and a 3-DOF vehicle TLI simulation using closed-loop guidance.
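
    A direct transcription of this kind discretizes the states and controls on a grid and imposes the dynamics as defect constraints, yielding an NLP. The sketch below applies trapezoidal collocation to a minimum-effort double-integrator transfer, with SciPy's SLSQP standing in for SNOPT; the dynamics, grid, and boundary conditions are illustrative and are not the TLI problem of the paper.

```python
# Hedged sketch of direct transcription with trapezoidal collocation:
# steer a double integrator from rest at 0 to rest at 1 in fixed time,
# minimizing control effort.
import numpy as np
from scipy.optimize import minimize

N, T = 20, 1.0                  # grid intervals, final time
h = T / N
nx = 2                          # state: [position, velocity]

def unpack(z):
    x = z[: (N + 1) * nx].reshape(N + 1, nx)
    u = z[(N + 1) * nx:]        # one control value per grid point
    return x, u

def dyn(x, u):
    return np.array([x[1], u])  # xdot = v, vdot = u

def defects(z):
    x, u = unpack(z)
    d = []
    for k in range(N):          # trapezoidal defect constraints
        f0, f1 = dyn(x[k], u[k]), dyn(x[k + 1], u[k + 1])
        d.append(x[k + 1] - x[k] - 0.5 * h * (f0 + f1))
    return np.concatenate(d)

def boundary(z):
    x, _ = unpack(z)
    return np.concatenate([x[0] - [0, 0], x[-1] - [1, 0]])

def effort(z):
    _, u = unpack(z)
    return h * np.sum(u**2)

z0 = np.zeros((N + 1) * nx + (N + 1))
cons = [{"type": "eq", "fun": defects}, {"type": "eq", "fun": boundary}]
res = minimize(effort, z0, method="SLSQP", constraints=cons)
print("converged:", res.success, "effort:", res.fun)
```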

  20. Sulfate Deposition in Regolith Exposed in Trenches on the Plains Between the Spirit Landing Site and Columbia Hills in Gusev Crater, Mars

    NASA Technical Reports Server (NTRS)

    Wang, Alian; Haskin, L. A.; Squyres, S. W.; Arvidson, R.; Crumpler, L.; Gellert, R.; Hurowitz, J.; Schroeder, C.; Tosca, N.; Herkenhoff, K.

    2005-01-01

    During its exploration within Gusev crater between sol 01 and sol 158, the Spirit rover dug three trenches (Fig. 1) to expose the subsurface regolith [1, 2, 9]. Laguna trench (approx. 6 cm deep, approx. 203 m from the rim of Bonneville crater) was dug in Laguna Hollow at the boundary of the impact ejecta from Bonneville crater and the surrounding plains. The Big Hole trench (approx. 6-7 cm deep) and The Boroughs trench (approx. 11 cm deep) were dug in the plains between Bonneville crater and the Columbia Hills (approx. 556 m and approx. 1698 m from the rim of Bonneville crater, respectively). The top, wall, and floor regolith of the three trenches were investigated using the entire set of Athena scientific instruments [10].

  1. Multidisciplinary Optimization of a Transport Aircraft Wing using Particle Swarm Optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw; Venter, Gerhard

    2002-01-01

    The purpose of this paper is to demonstrate the application of particle swarm optimization to a realistic multidisciplinary optimization test problem. The paper's new contributions to multidisciplinary optimization are the application of a new algorithm for dealing with the unique challenges associated with multidisciplinary optimization problems, and recommendations as to the utility of the algorithm in future multidisciplinary optimization applications. The selected example is a bi-level optimization problem that demonstrates severe numerical noise and has a combination of continuous and truly discrete design variables. The use of traditional gradient-based optimization algorithms is thus not practical. The numerical results presented indicate that the particle swarm optimization algorithm is able to reliably find the optimum design for the problem presented here. The algorithm is capable of dealing with the unique challenges posed by multidisciplinary optimization as well as the numerical noise and truly discrete variables present in the current example problem.
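
    A minimal, textbook-form particle swarm optimizer (inertia weight plus cognitive and social terms) is sketched below; it is not necessarily the exact variant used in the paper, and the test function and all parameters are illustrative.

```python
# Minimal particle swarm optimizer, generic inertia-weight form.
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest, pbest_f = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[np.argmin(pbest_f)]                    # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia + cognitive (personal best) + social (global best) terms
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], fx[improved]
        g = pbest[np.argmin(pbest_f)]
    return g, pbest_f.min()

sphere = lambda x: np.sum(x**2)
best, val = pso(sphere, (np.full(5, -10.0), np.full(5, 10.0)))
print(best, val)
```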

  2. Status Update on the James Webb Space Telescope Project

    NASA Technical Reports Server (NTRS)

    Rigby, Jane R.

    2012-01-01

    The James Webb Space Telescope (JWST) is a large (6.6 m), cold (<50 K), infrared (IR)-optimized space observatory that will be launched in approx. 2018. The observatory will have four instruments covering 0.6 to 28 microns, including a multi-object spectrograph, two integral field units, and grisms optimized for exoplanets. I will review JWST's key science themes, as well as exciting new ideas from the recent JWST Frontiers Workshop. I will summarize the technical progress and mission status. Recent highlights: All mirrors have been fabricated, polished, and gold-coated; the mirror is expected to be diffraction-limited down to a wavelength of 2 microns. The MIRI instrument just completed its cryogenic testing. STScI has released exposure time calculators and sensitivity charts to enable scientists to start thinking about how to use JWST for their science.

  3. Optimal recombination in genetic algorithms for flowshop scheduling problems

    NASA Astrophysics Data System (ADS)

    Kovalenko, Julia

    2016-10-01

    The optimal recombination problem consists in finding the best possible offspring as a result of a recombination operator in a genetic algorithm, given two parent solutions. We prove NP-hardness of optimal recombination for various variants of the flowshop scheduling problem with the makespan criterion and the criterion of maximum lateness. An algorithm for solving the optimal recombination problem for permutation flowshop problems is built, using enumeration of perfect matchings in a special bipartite graph. The algorithm is adapted for the classical flowshop scheduling problem and for the no-wait flowshop problem. It is shown that the optimal recombination problem for the permutation flowshop scheduling problem is solvable in polynomial time for almost all pairs of parent solutions as the number of jobs tends to infinity.

  4. An efficient and accurate solution methodology for bilevel multi-objective programming problems using a hybrid evolutionary-local-search algorithm.

    PubMed

    Deb, Kalyanmoy; Sinha, Ankur

    2010-01-01

    Bilevel optimization problems involve two optimization tasks (upper and lower level), in which every feasible upper-level solution must correspond to an optimal solution to a lower-level optimization problem. These problems commonly appear in many practical problem-solving tasks including optimal control, process optimization, game-playing strategy development, transportation problems, and others. However, they are commonly converted into a single-level optimization problem by using an approximate solution procedure to replace the lower-level optimization task. Although there exist a number of theoretical, numerical, and evolutionary optimization studies involving single-objective bilevel programming problems, not many studies look at the context of multiple conflicting objectives in each level of a bilevel programming problem. In this paper, we address certain intricate issues related to solving multi-objective bilevel programming problems, present challenging test problems, and propose a viable and hybrid evolutionary-cum-local-search based algorithm as a solution methodology. The hybrid approach performs better than a number of existing methodologies and scales well up to 40-variable difficult test problems used in this study. The population sizing and termination criteria are made self-adaptive, so that no additional parameters need to be supplied by the user. The study indicates a clear niche of evolutionary algorithms in solving such difficult problems of practical importance compared to their usual solution by a computationally expensive nested procedure. The study opens up many issues related to multi-objective bilevel programming, and hopefully this study will motivate EMO and other researchers to pay more attention to this important and difficult problem-solving activity.

  5. PROBING THE FAINT END OF THE QUASAR LUMINOSITY FUNCTION AT z{approx} 4 IN THE COSMOS FIELD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ikeda, H.; Nagao, T.; Matsuoka, K.

    2011-02-20

    We searched for quasars that are approx. 3 mag fainter than the SDSS quasars in the redshift range 3.7 ≲ z ≲ 4.7 in the COSMOS field to constrain the faint end of the quasar luminosity function (QLF). Using optical photometric data, we selected 31 quasar candidates with 22 < i' < 24 at z approx. 4. We obtained optical spectra for most of these candidates using FOCAS on the Subaru telescope and identified eight low-luminosity quasars at z approx. 4. In order to derive the QLF based on our spectroscopic follow-up campaign, we estimated the photometric completeness of our quasar survey through detailed Monte Carlo simulations. Our QLF at z approx. 4 has a much shallower faint-end slope (β = -1.67 +0.11/-0.17) than that obtained by other recent surveys at the same redshift. Our result is consistent with the scenario of downsizing evolution of active galactic nuclei inferred by recent optical and X-ray quasar surveys at lower redshifts.

  6. Economical Fabrication of Thick-Section Ceramic Matrix Composites

    NASA Technical Reports Server (NTRS)

    Babcock, Jason; Ramachandran, Gautham; Williams, Brian; Benander, Robert

    2010-01-01

    A method was developed for producing thick-section [>2 in. (approx. 5 cm)], continuous fiber-reinforced ceramic matrix composites (CMCs). Ultramet-modified fiber interface coating and melt infiltration processing, developed previously for thin-section components, were used for the fabrication of CMCs that were an order of magnitude greater in thickness [up to 2.5 in. (approx. 6.4 cm)]. Melt processing first involves infiltration of a fiber preform with the desired interface coating, and then with carbon to partially densify the preform. A molten refractory metal is then infiltrated and reacts with the excess carbon to form the carbide matrix without damaging the fiber reinforcement. Infiltration occurs from the inside out as the molten metal fills virtually all the available void space. Densification to <5 vol% porosity is a one-step process requiring no intermediate machining steps. The melt infiltration method requires no external pressure. This prevents over-infiltration of the outer surface plies, which can lead to excessive residual porosity in the center of the part. However, processing of thick-section components required modification of the conventional process conditions, and the means by which the large amount of molten metal is introduced into the fiber preform. Modification of the low-temperature, ultraviolet-enhanced chemical vapor deposition process used to apply interface coatings to the fiber preform was also required to accommodate the high preform thickness. The thick-section CMC processing developed in this work proved to be invaluable for component development, fabrication, and testing in two complementary efforts. In a project for the Army, involving SiC/SiC blisk development, nominally 0.8 in. thick x 8 in. diameter (approx. 2 cm thick x 20 cm diameter) components were successfully infiltrated. Blisk hubs were machined using diamond-embedded cutting tools and successfully spin-tested. Good ply uniformity and extremely low residual porosity (<2 percent) were achieved, the latter being far lower than that achieved with SiC matrix composites fabricated via CVI or PIP. The pyrolytic carbon/zirconium nitride interface coating optimized in this work for use on carbon fibers was incorporated in the SiC/SiC composites and yielded a >41 ksi (approx. 283 MPa) flexural strength.

  7. X-Ray Spectroscopy of Optically Bright Planets using the Chandra Observatory

    NASA Technical Reports Server (NTRS)

    Ford, P. G.; Elsner, R. F.

    2005-01-01

    Since its launch in July 1999, Chandra's Advanced CCD Imaging Spectrometer (ACIS) has observed several planets (Venus, Mars, Jupiter, and Saturn) and 6 comets. At 0.5 arc-second spatial resolution, ACIS detects individual x-ray photons with good quantum efficiency (25% at 0.6 keV) and energy resolution (20% FWHM at 0.6 keV). However, the ACIS CCDs are also sensitive to optical and near-infrared light, which is absorbed by optical blocking filters (OBFs) that eliminate optical contamination from all but the brightest extended sources, e.g., planets. Jupiter at opposition subtends approx. 45 arc-seconds (90 CCD pixels). Since Chandra is incapable of tracking a moving target, the planet takes 10-20 kiloseconds to move across the most sensitive ACIS CCD, after which the observatory must be re-pointed. Meanwhile, the OBF covering that CCD adds an optical signal equivalent to approx. 110 eV to each pixel that lies within the outline of the Jovian disk. This has three consequences: (1) the observatory must be pointed away from Jupiter while CCD bias maps are constructed; (2) most x-rays from within the optical image will be misidentified as charged-particle background and ignored; and (3) those x-rays that are reported will be assigned anomalously high energies. The same also applies to the other planets, but is less serious since they are either dimmer at optical wavelengths, or they show less apparent motion across the sky, permitting reduced CCD exposure times: the optical contamination from Saturn adds approx. 15 eV per pixel, and from Mars and Venus approx. 31 eV. After analyzing a series of short Jupiter observations in December 2000, ACIS parameters were optimized for the February 2003 opposition. CCD bias maps were constructed while Chandra pointed away from Jupiter, and the subsequent observations employed on-board software to ignore any pixel that contained less charge than that expected from optical leakage. In addition, ACIS was commanded to report 5 x 5 arrays of pixel values surrounding each x-ray event, and the outlying values were employed during ground processing to correct for the optical contamination.

  8. Impact Lithologies and Post-Impact Hydrothermal Alteration Exposed by the Chicxulub Scientific Drilling Project, Yaxcopoil, Mexico

    NASA Technical Reports Server (NTRS)

    Kring, David A.; Zurcher, Lukas; Horz, Friedrich

    2003-01-01

    The Chicxulub Scientific Drilling Project recovered a continuous core from the Yaxcopoil-1 (YAX-1) borehole, which is approx. 60-65 km from the center of the Chicxulub structure, approx. 15 km beyond the limit of the estimated approx. 50 km radius transient crater (excavation cavity), but within the rim of the estimated approx. 90 km radius final crater. Approximately 100 m of melt-bearing impactites were recovered from a depth of 794 to 895 m, above approx. 600 m of underlying megablocks of Cretaceous target sediments, before bottoming at 1511 m. Compared to lithologies at impact craters like the Ries, the YAX-1 impactite sequence is incredibly rich in impact melts of unusual textural variety and complexity. The impactite sequence has also been altered by hydrothermal activity that may have largely been produced by the impact event.

  9. Potential atmospheric impact of the Toba mega-eruption {approx}71,000 years ago

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zielinski, G.A.; Mayewski, P.A.; Meeker, L.D.

    1996-04-15

    An approx. 6-year-long period of volcanic sulfate recorded in the GISP2 ice core about 71,000 ± 5,000 years ago may provide detailed information on the atmospheric and climate impact of the Toba mega-eruption. Deposition of these aerosols occurred at the beginning of an approx. 1,000-year-long stadial event, but not immediately before the longer glacial period beginning approx. 67,500 years ago. Total stratospheric loading estimates over this approx. 6-year period range from 2,200 to 4,400 Mt of H2SO4 aerosols. The range in values is given to compensate for uncertainties in aerosol transport. The magnitude and longevity of the atmospheric loading may have led directly to enhanced cooling during the initial two centuries of this approx. 1,000-year cooling event. 25 refs., 2 figs., 1 tab.

  10. Finite dimensional approximation of a class of constrained nonlinear optimal control problems

    NASA Technical Reports Server (NTRS)

    Gunzburger, Max D.; Hou, L. S.

    1994-01-01

    An abstract framework for the analysis and approximation of a class of nonlinear optimal control and optimization problems is constructed. Nonlinearities occur in both the objective functional and in the constraints. The framework includes an abstract nonlinear optimization problem posed on infinite dimensional spaces, an approximate problem posed on finite dimensional spaces, together with a number of hypotheses concerning the two problems. The framework is used to show that optimal solutions exist, to show that Lagrange multipliers may be used to enforce the constraints, to derive an optimality system from which optimal states and controls may be deduced, and to derive existence results and error estimates for solutions of the approximate problem. The abstract framework and the results derived from that framework are then applied to three concrete control or optimization problems and their approximation by finite element methods. The first involves the von Karman plate equations of nonlinear elasticity, the second, the Ginzburg-Landau equations of superconductivity, and the third, the Navier-Stokes equations for incompressible, viscous flows.

  11. LDRD Final Report: Global Optimization for Engineering Science Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HART,WILLIAM E.

    1999-12-01

    For a wide variety of scientific and engineering problems the desired solution corresponds to an optimal set of objective function parameters, where the objective function measures a solution's quality. The main goal of the LDRD ''Global Optimization for Engineering Science Problems'' was the development of new robust and efficient optimization algorithms that can be used to find globally optimal solutions to complex optimization problems. This SAND report summarizes the technical accomplishments of this LDRD, discusses lessons learned and describes open research issues.
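
    As a generic illustration of global optimization on a multimodal landscape (not the algorithms developed under this LDRD), a representative stock method such as SciPy's differential evolution can be used:

```python
# Illustrative global optimization of a multimodal test function using
# differential evolution as a representative global optimizer.
import numpy as np
from scipy.optimize import differential_evolution

def rastrigin(x):
    # Many local minima; global minimum 0 at the origin
    return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

bounds = [(-5.12, 5.12)] * 4
result = differential_evolution(rastrigin, bounds, seed=1)
print(result.x, result.fun)
```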

  12. Climate and Health Impacts of US Emissions Reductions Consistent with 2 C

    NASA Technical Reports Server (NTRS)

    Shindell, Drew T.; Lee, Yunha; Faluvegi, Greg

    2016-01-01

    An emissions trajectory for the US consistent with 2 C warming would require marked societal changes, making it crucial to understand the associated benefits. Previous studies have examined technological potentials and implementation costs, and public health benefits have been quantified for less-aggressive potential emissions-reduction policies, but researchers have not yet fully explored the multiple benefits of reductions consistent with 2 C. We examine the impacts of such highly ambitious scenarios for clean energy and vehicles. US transportation emissions reductions avoid approx. 0.03 C global warming in 2030 (0.15 C in 2100), whereas energy emissions reductions avoid approx. 0.05-0.07 C of warming in 2030 (approx. 0.25 C in 2100). Nationally, however, clean energy policies produce climate disbenefits including warmer summers (although these would be eliminated by the remote effects of similar policies if they were undertaken elsewhere). The policies also greatly reduce damaging ambient particulate matter and ozone. By 2030, clean energy policies could prevent approx. 175,000 premature deaths, with approx. 22,000 (11,000-96,000; 95% confidence) fewer annually thereafter, whereas clean transportation could prevent approx. 120,000 premature deaths and approx. 14,000 (9,000-52,000) annually thereafter. Near-term national benefits are valued at approx. US$250 billion (140 billion to 1,050 billion) per year, which is likely to exceed implementation costs. Including longer-term, worldwide climate impacts, benefits roughly quintuple, becoming approx. 5-10 times larger than estimated implementation costs. Achieving the benefits, however, would require both larger and broader emissions reductions than those in current legislation or regulations.

  13. Research on NC laser combined cutting optimization model of sheet metal parts

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Zhang, Y. L.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    The optimization problem for NC laser combined cutting of sheet metal parts was taken as the research object in this paper. The problem comprises two parts: combined packing optimization and combined cutting path optimization. For combined packing optimization, the method of “genetic algorithm + gravity center NFP + geometric transformation” was used to optimize the packing of sheet metal parts. For combined cutting path optimization, a mathematical model of cutting path optimization was established based on the part-cutting constraint rules of internal-contour priority and cross cutting. The model plays an important role in the optimization calculation of NC laser combined cutting.

  14. Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer

    NASA Astrophysics Data System (ADS)

    Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre

    2014-07-01

    We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
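
    A much-reduced illustration of the penalty step in such mappings: a linear equality constraint on binary variables can be folded into the QUBO matrix as a quadratic penalty. The sketch below uses an invented objective and constraint and solves the tiny QUBO by brute force rather than on an AQO; the paper's full mapping, which also eliminates the continuous PDE variables, is not reproduced here.

```python
# Fold the constraint a.x = b into a QUBO via the penalty P*(a.x - b)^2.
# Since x_i^2 = x_i for binary variables, the linear part of the expanded
# penalty goes on the diagonal of the QUBO matrix.
import itertools
import numpy as np

n = 4
Q_obj = np.array([[ 1, -2,  0,  0],
                  [ 0,  3, -1,  0],
                  [ 0,  0,  2, -2],
                  [ 0,  0,  0,  1]], dtype=float)

# Constraint: x0 + x1 + x2 + x3 = 2 (made up for the example)
P = 10.0
a, b = np.ones(n), 2.0
Q = Q_obj + P * np.outer(a, a)            # quadratic penalty term
Q[np.diag_indices(n)] += -2.0 * P * b * a  # linear penalty term on diagonal

best = min(itertools.product([0, 1], repeat=n),
           key=lambda bits: np.array(bits) @ Q @ np.array(bits))
x = np.array(best)
print("x =", x, "objective =", x @ Q_obj @ x, "feasible:", x.sum() == 2)
```

    The penalty weight P must dominate the variation of the objective over the feasible set; the value 10 suffices for these invented data.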

  15. An Enhanced Memetic Algorithm for Single-Objective Bilevel Optimization Problems.

    PubMed

    Islam, Md Monjurul; Singh, Hemant Kumar; Ray, Tapabrata; Sinha, Ankur

    2017-01-01

    Bilevel optimization, as the name reflects, deals with optimization at two interconnected hierarchical levels. The aim is to identify the optimum of an upper-level (leader) problem, subject to the optimality of a lower-level (follower) problem. Several problems from the domain of engineering, logistics, economics, and transportation have an inherent nested structure which requires them to be modeled as bilevel optimization problems. Increasing size and complexity of such problems has prompted active theoretical and practical interest in the design of efficient algorithms for bilevel optimization. Given the nested nature of bilevel problems, the computational effort (number of function evaluations) required to solve them is often quite high. In this article, we explore the use of a Memetic Algorithm (MA) to solve bilevel optimization problems. While MAs have been quite successful in solving single-level optimization problems, there have been relatively few studies exploring their potential for solving bilevel optimization problems. MAs essentially attempt to combine advantages of global and local search strategies to identify optimum solutions with low computational cost (function evaluations). The approach introduced in this article is a nested Bilevel Memetic Algorithm (BLMA). At both upper and lower levels, either a global or a local search method is used during different phases of the search. The performance of BLMA is presented on twenty-five standard test problems and two real-life applications. The results are compared with other established algorithms to demonstrate the efficacy of the proposed approach.

  16. Optimal Price Decision Problem for Simultaneous Multi-article Auction and Its Optimal Price Searching Method by Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Masuda, Kazuaki; Aiyoshi, Eitaro

    We propose a method for solving optimal price decision problems for simultaneous multi-article auctions. An auction problem, originally formulated as a combinatorial problem, determines both whether or not each seller sells his/her article and which article(s) each buyer buys, so that the total utility of buyers and sellers is maximized. By duality theory, we transform it equivalently into a dual problem in which the Lagrange multipliers are interpreted as the articles' transaction prices. As the dual problem is a continuous optimization problem with respect to the multipliers (i.e., the transaction prices), we propose a numerical method to solve it by applying heuristic global search methods. In this paper, Particle Swarm Optimization (PSO) is used to solve the dual problem, and experimental results are presented to show the validity of the proposed method.
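
    A minimal sketch of the dual pricing view: given prices (the Lagrange multipliers), each seller and buyer optimizes independently, and the dual function is minimized over the prices. For brevity the sketch below uses diminishing-step subgradient descent in place of the PSO search proposed in the paper, and the valuations and reservation prices are invented.

```python
# Hedged sketch: dual (pricing) decomposition of a tiny multi-article market.
import numpy as np

s = np.array([2.0, 4.0])            # sellers' reservation prices
v = np.array([[6.0, 3.0],           # v[i, j]: buyer i's value for article j
              [5.0, 5.0],
              [1.0, 4.5]])

def dual_and_subgradient(p):
    sell = p > s                     # each seller sells iff price exceeds cost
    surplus = v - p                  # buyer i's surplus for each article
    best = surplus.argmax(axis=1)
    buy = surplus.max(axis=1) > 0    # buyer participates iff surplus is positive
    demand = np.bincount(best[buy], minlength=p.size)
    dual = (np.sum(np.maximum(p - s, 0))
            + np.sum(np.maximum(surplus.max(axis=1), 0)))
    return dual, sell.astype(float) - demand   # subgradient: supply - demand

p = np.zeros_like(s)
for k in range(500):                 # diminishing-step subgradient descent
    dual, g = dual_and_subgradient(p)
    p = np.maximum(p - (1.0 / (k + 1)) * g, 0)
print("prices:", p, "dual value:", dual)
```

    Prices rise where demand exceeds supply and fall otherwise, so the iteration behaves like a tatonnement process converging toward market-clearing transaction prices.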

  17. A Low Cross-Polarization Smooth-Walled Horn with Improved Bandwidth

    NASA Technical Reports Server (NTRS)

    Zeng, Lingzhen; Bennett, Charles L.; Chuss, David T.; Wollack, Edward J.

    2009-01-01

    Corrugated feed horns offer excellent beam symmetry, main beam efficiency, and cross-polar response over wide bandwidths, but can be challenging to fabricate. An easier-to-manufacture smooth-walled feed is explored that approximates these properties over a finite bandwidth. The design, optimization and measurement of a monotonically-profiled, smooth-walled scalar feedhorn with a diffraction-limited approx. 14 deg FWHM beam is presented. The feed was demonstrated to have low cross polarization (<-30 dB) across the frequency range 33-45 GHz (30% fractional bandwidth). A power reflection below -28 dB was measured across the band.

  18. Rapid and Reliable Damage Proxy Map from InSAR Coherence

    NASA Technical Reports Server (NTRS)

    Yun, Sang-Ho; Fielding, Eric; Simons, Mark; Agram, Piyush; Rosen, Paul; Owen, Susan; Webb, Frank

    2012-01-01

    Future radar satellites will visit SoCal within a day after a disaster event. Data acquisition latency in 2015-2020 is 8 to approx. 15 hours. Data transfer latency, which often involves human/agency intervention, far exceeds the data acquisition latency; interagency cooperation is needed to establish an automatic pipeline for data transfer. The algorithm is tested with ALOS PALSAR data of Pasadena, California. Quantitative quality assessment is being pursued, including a meeting with Pasadena City Hall computer engineers for a complete list of demolition/construction projects, to (1) estimate the probability of detection and probability of false alarm and (2) estimate the optimal threshold value.

  19. Evaluation of Al(x)Ga(1-x)As solar cells

    NASA Technical Reports Server (NTRS)

    Loo, R. Y.; Kamath, G. S.; Knechtli, R. C.; Narayanan, A.; Li, S. S.

    1985-01-01

    Single junction GaAs solar cells have already attained an efficiency of 19% AM0, which could potentially be increased to approx. 20% with some optimization. To achieve higher efficiency, the concept of multibandgap solar cells, which utilizes a wider region of the solar spectrum, should be used. One of the materials for fabricating the top cell in a multibandgap solar cell is AlGaAs, because it is compatible with GaAs in bandgap and lattice match. This is a very important consideration from the materials technology point of view, and the viability of this approach is evaluated.

  20. Techniques for shuttle trajectory optimization

    NASA Technical Reports Server (NTRS)

    Edge, E. R.; Shieh, C. J.; Powers, W. F.

    1973-01-01

    The application of recently developed function-space Davidon-type techniques to the shuttle ascent trajectory optimization problem is discussed along with an investigation of the recently developed PRAXIS algorithm for parameter optimization. At the outset of this analysis, the major deficiency of the function-space algorithms was their potential storage problems. Since most previous analyses of the methods were with relatively low-dimension problems, no storage problems were encountered. However, in shuttle trajectory optimization, storage is a problem, and this problem was handled efficiently. Topics discussed include: the shuttle ascent model and the development of the particular optimization equations; the function-space algorithms; the operation of the algorithm and typical simulations; variable final-time problem considerations; and a modification of Powell's algorithm.

  1. Emission Lines from the Gas Disk Around TW Hydra and the Origin of the Inner Hole

    NASA Technical Reports Server (NTRS)

    Gorti, U.; Hollenbach, D.; Najita, J.; Pascucci, I.

    2011-01-01

    We compare line emission calculated from theoretical disk models with optical to submillimeter wavelength observational data of the gas disk surrounding TW Hya and infer the spatial distribution of mass in the gas disk. The model disk that best matches observations has a gas mass ranging from approx. 10^-4 to 10^-5 M_sun for 0.06 AU < r < 3.5 AU and approx. 0.06 M_sun for 3.5 AU < r < 200 AU. We find that the inner dust hole (r < 3.5 AU) in the disk must be depleted of gas by approx. 1-2 orders of magnitude compared with the extrapolated surface density distribution of the outer disk. Grain growth alone is therefore not a viable explanation for the dust hole. CO vibrational emission arises within r approx. 0.5 AU from thermal excitation of gas. [O I] 6300 Å and 5577 Å forbidden lines and OH mid-infrared emission are mainly due to prompt emission following UV photodissociation of OH and water at r ≲ 0.1 AU and at r approx. 4 AU. [Ne II] emission is consistent with an origin in X-ray heated neutral gas at r ≲ 10 AU, and may not require the presence of a significant extreme-ultraviolet (hν > 13.6 eV) flux from TW Hya. H2 pure rotational line emission comes primarily from r approx. 1 to 30 AU. [O I] 63 μm, HCO+, and CO pure rotational lines all arise from the outer disk at r approx. 30-120 AU. We discuss planet formation and photoevaporation as causes for the decrease in surface density of gas and dust inside 4 AU. If a planet is present, our results suggest a planet mass of approx. 4-7 M_J situated at 3 AU. Using our photoevaporation models and the best surface density profile match to observations, we estimate a current photoevaporative mass loss rate of 4 × 10^-9 M_sun/yr and a remaining disk lifetime of approx. 5 million years.

  2. A HIGHLY ELONGATED PROMINENT LENS AT z = 0.87: FIRST STRONG-LENSING ANALYSIS OF EL GORDO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitrin, Adi; Menanteau, Felipe; Hughes, John P.

    We present the first strong-lensing (SL) analysis of the galaxy cluster ACT-CL J0102-4915 (El Gordo), in recent HST/ACS images, revealing a prominent strong lens at a redshift of z = 0.87. This finding adds to the already-established unique properties of El Gordo: it is the most massive, hot, X-ray luminous, and bright Sunyaev-Zeldovich effect cluster at z ≳ 0.6, and the only 'bullet'-like merging cluster known at these redshifts. The lens consists of two merging massive clumps, where, for a source redshift of z_s approx. 2, each clump exhibits only a small, separate critical area, with a total area of 0.69 ± 0.11 arcmin² over the two clumps. For a higher source redshift, z_s approx. 4, the critical curves of the two clumps merge together into one bigger and very elongated lens (axis ratio approx. 5.5), enclosing an effective area of 1.44 ± 0.22 arcmin². The critical curves continue expanding with increasing redshift so that for high-redshift sources (z_s ≳ 9) they enclose an area of approx. 1.91 ± 0.30 arcmin² (effective θ_e approx. 46.8″ ± 3.7″) and a mass of 6.09 ± 1.04 × 10^14 M_sun. According to our model, the area of high magnification (μ > 10) for such high-redshift sources is approx. 1.2 arcmin², and the area with μ > 5 is approx. 2.3 arcmin², making El Gordo a compelling target for studying the high-redshift universe. We obtain a strong lower limit on the total mass of El Gordo, ≳1.7 × 10^15 M_sun, from the SL regime alone, suggesting a total mass of roughly M_200 approx. 2.3 × 10^15 M_sun. Our results should be revisited when additional spectroscopic and HST imaging data are available.

  3. On l(1): Optimal decentralized performance

    NASA Technical Reports Server (NTRS)

    Sourlas, Dennis; Manousiouthakis, Vasilios

    1993-01-01

    In this paper, the Manousiouthakis parametrization of all decentralized stabilizing controllers is employed in mathematically formulating the l(sup 1) optimal decentralized controller synthesis problem. The resulting optimization problem is infinite dimensional and therefore not directly amenable to computations. It is shown that finite dimensional optimization problems that have value arbitrarily close to the infinite dimensional one can be constructed. Based on this result, an algorithm that solves the l(sup 1) decentralized performance problem is presented. A global optimization approach to the solution of the finite dimensional approximating problems is also discussed.

  4. Execution of Multidisciplinary Design Optimization Approaches on Common Test Problems

    NASA Technical Reports Server (NTRS)

    Balling, R. J.; Wilkinson, C. A.

    1997-01-01

    A class of synthetic problems for testing multidisciplinary design optimization (MDO) approaches is presented. These test problems are easy to reproduce because all functions are given as closed-form mathematical expressions. They are constructed in such a way that the optimal value of all variables and of the objective is unity. The test problems involve three disciplines and allow the user to specify the number of design variables, state variables, coupling functions, design constraints, controlling design constraints, and the strength of coupling. Several MDO approaches were executed on two sample synthetic test problems. These approaches included single-level optimization approaches, collaborative optimization approaches, and concurrent subspace optimization approaches. Execution results are presented, and the robustness and efficiency of these approaches are evaluated for these sample problems.

  5. Time-domain finite elements in optimal control with application to launch-vehicle guidance. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.

    1991-01-01

    A time-domain finite element method is developed for optimal control problems. The theory derived is general enough to handle a large class of problems including optimal control problems that are continuous in the states and controls, problems with discontinuities in the states and/or system equations, problems with control inequality constraints, problems with state inequality constraints, or problems involving any combination of the above. The theory is developed in such a way that no numerical quadrature is necessary regardless of the degree of nonlinearity in the equations. Also, the same shape functions may be employed for every problem because all strong boundary conditions are transformed into natural or weak boundary conditions. In addition, the resulting nonlinear algebraic equations are very sparse. Use of sparse matrix solvers allows for the rapid and accurate solution of very difficult optimization problems. The formulation is applied to launch-vehicle trajectory optimization problems, and results show that real-time optimal guidance is realizable with this method. Finally, a general problem solving environment is created for solving a large class of optimal control problems. The algorithm uses both FORTRAN and a symbolic computation program to solve problems with a minimum of user interaction. The use of symbolic computation eliminates the need for user-written subroutines which greatly reduces the setup time for solving problems.

  6. STAR-GALAXY CLASSIFICATION IN MULTI-BAND OPTICAL IMAGING

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fadely, Ross; Willman, Beth; Hogg, David W.

    2012-11-20

    Ground-based optical surveys such as PanSTARRS, DES, and LSST will produce large catalogs to limiting magnitudes of r > or approx. 24. Star-galaxy separation poses a major challenge to such surveys because galaxies-even very compact galaxies-outnumber halo stars at these depths. We investigate photometric classification techniques on stars and galaxies with intrinsic FWHM < 0.2 arcsec. We consider unsupervised spectral energy distribution template fitting and supervised, data-driven support vector machines (SVMs). For template fitting, we use a maximum likelihood (ML) method and a new hierarchical Bayesian (HB) method, which learns the prior distribution of template probabilities from the data. SVM requires training data to classify unknown sources; ML and HB do not. We consider (1) a best-case scenario (SVM_best) where the training data are (unrealistically) a random sampling of the data in both signal-to-noise and demographics and (2) a more realistic scenario where training is done on higher signal-to-noise data (SVM_real) at brighter apparent magnitudes. Testing with COSMOS ugriz data, we find that HB outperforms ML, delivering approx. 80% completeness, with purity of approx. 60%-90% for both stars and galaxies. We find that no algorithm delivers perfect performance and that studies of metal-poor main-sequence turnoff stars may be challenged by poor star-galaxy separation. Using the Receiver Operating Characteristic curve, we find a best-to-worst ranking of SVM_best, HB, ML, and SVM_real. We conclude, therefore, that a well-trained SVM will outperform template-fitting methods. However, a normally trained SVM performs worse. Thus, HB template fitting may prove to be the optimal classification method in future surveys.

  7. Percolation galaxy groups and clusters in the sdss redshift survey: identification, catalogs, and the multiplicity function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Berlind, Andreas A.; Frieman, Joshua A.; Weinberg, David H.

    2006-01-01

    We identify galaxy groups and clusters in volume-limited samples of the SDSS redshift survey, using a redshift-space friends-of-friends algorithm. We optimize the friends-of-friends linking lengths to recover galaxy systems that occupy the same dark matter halos, using a set of mock catalogs created by populating halos of N-body simulations with galaxies. Extensive tests with these mock catalogs show that no combination of perpendicular and line-of-sight linking lengths is able to yield groups and clusters that simultaneously recover the true halo multiplicity function, projected size distribution, and velocity dispersion. We adopt a linking length combination that yields, for galaxy groups with ten or more members: a group multiplicity function that is unbiased with respect to the true halo multiplicity function; an unbiased median relation between the multiplicities of groups and their associated halos; a spurious group fraction of less than approx. 1%; a halo completeness of more than approx. 97%; the correct projected size distribution as a function of multiplicity; and a velocity dispersion distribution that is approx. 20% too low at all multiplicities. These results hold over a range of mock catalogs that use different input recipes of populating halos with galaxies. We apply our group-finding algorithm to the SDSS data and obtain three group and cluster catalogs for three volume-limited samples that cover 3495.1 square degrees on the sky. We correct for incompleteness caused by fiber collisions and survey edges, and obtain measurements of the group multiplicity function, with errors calculated from realistic mock catalogs. These multiplicity function measurements provide a key constraint on the relation between galaxy populations and dark matter halos.
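
    As a concrete illustration of the core of such a redshift-space group finder (a minimal sketch, not the catalog pipeline), the code below links every pair of galaxies whose perpendicular and line-of-sight separations both fall under the chosen linking lengths and labels the connected components with union-find; the coordinates, linking-length values, and the brute-force O(N^2) pair search are simplifying assumptions.

    import numpy as np

    def friends_of_friends(x, y, z_los, b_perp, b_los):
        """Toy redshift-space friends-of-friends group finder.

        x, y   : transverse coordinates of each galaxy
        z_los  : line-of-sight coordinate
        b_perp : perpendicular linking length
        b_los  : line-of-sight linking length
        Returns one group label per galaxy.
        """
        n = len(x)
        parent = np.arange(n)

        def find(i):                          # union-find with path compression
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        for i in range(n):                    # brute-force O(N^2) pair search
            for j in range(i + 1, n):
                d_perp = np.hypot(x[i] - x[j], y[i] - y[j])
                d_los = abs(z_los[i] - z_los[j])
                if d_perp < b_perp and d_los < b_los:
                    parent[find(i)] = find(j)   # link i and j as "friends"

        return np.array([find(i) for i in range(n)])

    # Example: three galaxies; the first two are close enough to form a group.
    labels = friends_of_friends(np.array([0.0, 0.1, 5.0]),
                                np.array([0.0, 0.1, 5.0]),
                                np.array([0.0, 0.2, 9.0]),
                                b_perp=0.5, b_los=1.0)
    print(labels)   # the first two galaxies share a label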

  8. A simple crunching of the AGS 'bare' machine ORM data - February 2007 - to extract some aspects of AGS transverse coupling at injection and extraction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahrens, L.

    2010-11-01

    The objective of this note is to (once again) explore the AGS 'ORM' (orbit response matrix) data taken (by Operations) early during the 2007 run with an AGS bare machine and gold beam. Indeed, the present motivation is to extract as much information about the AGS inherent transverse coupling as possible from general arguments and the copious ORM data. And taking this one step further (though not accomplished yet), the goal really should be to tell the model how to describe this coupling. 'Bare' as used here means the AGS with no quadrupole, sextupole, or octupole magnets powered. Only the main (combined-function) magnet string and the dipole bumps necessary to optimize beam survival are powered. 'ORM data' means the systematic recording of the equilibrium orbit beam position monitor response to powering individual dipole corrector magnets. The 'matrix' results from looking at the effect of each of the (12 superperiods x 4 dipoles per superperiod) 'kicks' on each of the (12 x 6) pick-up electrodes (pues) in each transverse plane. So then we have two (48 x 72) matrices of numbers from the ORM data. (Though 'pue' usually refers to the hardware in the vacuum chamber and 'bpm' to the beam position monitoring system, the two labels will be used casually here.) The exercise is carried out at two magnet rigidities, injection (AGS field approx. 434 Gauss) and extraction to RHIC (approx. 9730 Gauss), a ratio of rigidities of about 22.4. Since we stick with a bare machine, we are also stuck with the bare tunes, which means the tunes are rather close together and near 8.75. Injection: (h, v) approx. (8.73, 8.76).

  9. A DNA enzyme with Mg(2+)-Dependent RNA Phosphoesterase Activity

    NASA Technical Reports Server (NTRS)

    Breaker, Ronald R.; Joyce, Gerald F.

    1995-01-01

    Previously we demonstrated that DNA can act as an enzyme in the Pb(2+)-dependent cleavage of an RNA phosphoester. This is a facile reaction, with an uncatalyzed rate for a typical RNA phosphoester of approx. 10(exp -4)/min in the presence of 1 mM Pb(OAc)2 at pH 7.0 and 23 C. The Mg(2+)-dependent reaction is more difficult, with an uncatalyzed rate of approx. 10(exp -7)/min under comparable conditions. Mg(2+)-dependent cleavage has special relevance to biology because it is compatible with intracellular conditions. Using in vitro selection, we sought to develop a family of phosphoester-cleaving DNA enzymes that operate in the presence of various divalent metals, focusing particularly on the Mg(2+)-dependent reaction. Results: We generated a population of greater than 10(exp 13) DNAs containing 40 random nucleotides and carried out repeated rounds of selective amplification, enriching for molecules that cleave a target RNA phosphoester in the presence of 1 mM Mg(2+), Mn(2+), Zn(2+), or Pb(2+). Examination of individual clones from the Mg(2+) lineage after the sixth round revealed a catalytic motif comprised of a three-stem junction. This motif was partially randomized and subjected to seven additional rounds of selective amplification, yielding catalysts with a rate of 0.01/min. The optimized DNA catalyst was divided into separate substrate and enzyme domains and shown to have a similar level of activity under multiple turnover conditions. Conclusions: We have generated a Mg(2+)-dependent DNA enzyme that cleaves a target RNA phosphoester with a catalytic rate approx. 10(exp 5)-fold greater than that of the uncatalyzed reaction. This activity is compatible with intracellular conditions, raising the possibility that DNA enzymes might be made to operate in vivo.

  10. K2: A NEW METHOD FOR THE DETECTION OF GALAXY CLUSTERS BASED ON CANADA-FRANCE-HAWAII TELESCOPE LEGACY SURVEY MULTICOLOR IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thanjavur, Karun; Willis, Jon; Crampton, David, E-mail: karun@uvic.c

    2009-11-20

    We have developed a new method, K2, optimized for the detection of galaxy clusters in multicolor images. Based on the Red Sequence approach, K2 detects clusters using simultaneous enhancements in both colors and position. The detection significance is robustly determined through extensive Monte Carlo simulations and through comparison with available cluster catalogs based on two different optical methods, and also on X-ray data. K2 also provides quantitative estimates of the candidate clusters' richness and photometric redshifts. Initially, K2 was applied to the two-color (gri) 161 deg^2 images of the Canada-France-Hawaii Telescope Legacy Survey Wide (CFHTLS-W) data. Our simulations show that the false detection rate for these data, at our selected threshold, is only approx. 1%, and that the cluster catalogs are approx. 80% complete up to a redshift of z = 0.6 for Fornax-like and richer clusters and to z approx. 0.3 for poorer clusters. Based on the g-, r-, and i-band photometric catalogs of the Terapix T05 release, 35 clusters/deg^2 are detected, with 1-2 Fornax-like or richer clusters every 2 deg^2. Catalogs containing data for 6144 galaxy clusters have been prepared, of which 239 are rich clusters. These clusters, especially the latter, are being searched for gravitational lenses - one of our chief motivations for cluster detection in CFHTLS. The K2 method can be easily extended to use additional color information and thus improve overall cluster detection to higher redshifts. The complete set of K2 cluster catalogs, along with the supplementary catalogs for the member galaxies, are available on request from the authors.

  11. DEEP LBT/LUCI SPECTROSCOPY OF A Lyα EMITTER CANDIDATE AT z approx. 7.7

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang Linhua; Bian Fuyan; Fan Xiaohui

    2013-07-01

    We present deep spectroscopic observations of a Lyα emitter (LAE) candidate at z approx. 7.7 using the infrared spectrograph LUCI on the 2 x 8.4 m Large Binocular Telescope (LBT). The candidate is the brightest among the four z approx. 7.7 LAE candidates found in a narrowband imaging survey by Krug et al. Our spectroscopic data include a total of 7.5 hr of integration with LBT/LUCI and are deep enough to significantly (3.2σ-4.9σ) detect the Lyα emission line of this candidate based on its Lyα flux of 1.2 x 10^-17 erg s^-1 cm^-2 estimated from the narrowband photometry. However, we do not find any convincing signal at the expected position of its Lyα emission line, suggesting that this source is not an LAE at z approx. 7.7. The non-detection in this work, together with the previous studies of z approx. 7.7 LAEs, puts a strong constraint on the bright end of the Lyα luminosity function (LF) at z approx. 7.7. We find a rapid evolution of the Lyα LF from z approx. 6.5 to 7.7: the upper limit of the z approx. 7.7 LF is more than five times lower than the z approx. 6.5 LF at the bright end (f >= 1.0 x 10^-17 erg s^-1 cm^-2, or L >= 6.9 x 10^42 erg s^-1). This is likely caused by an increasing neutral fraction in the intergalactic medium that substantially attenuates Lyα emission at z approx. 7.7.

  12. Solar Drivers of 11-yr and Long-Term Cosmic Ray Modulation

    NASA Technical Reports Server (NTRS)

    Cliver, E. W.; Richardson, I. G.; Ling, A. G.

    2011-01-01

    In the current paradigm for the modulation of galactic cosmic rays (GCRs), diffusion is taken to be the dominant process during solar maxima while drift dominates at minima. Observations during the recent solar minimum challenge the pre-eminence of drift at such times: in 2009, the approx. 2 GV GCR intensity measured by the Newark neutron monitor increased by approx. 5% relative to its maximum value two cycles earlier, even though the average tilt angle in 2009 was slightly larger than that in 1986 (approx. 20 deg vs. approx. 14 deg), while solar wind B was significantly lower (approx. 3.9 nT vs. approx. 5.4 nT). A decomposition of the solar wind into high-speed streams, slow solar wind, and coronal mass ejections (CMEs; including postshock flows) reveals that the Sun transmits its message of changing magnetic field (diffusion coefficient) to the heliosphere primarily through CMEs at solar maximum and high-speed streams at solar minimum. Long-term reconstructions of solar wind B are in general agreement for the approx. 1900-present interval and can be used to reliably estimate GCR intensity over this period. For earlier epochs, however, a recent Be-10-based reconstruction covering the past approx. 10(exp 4) years shows nine abrupt and relatively short-lived drops of B to < or approx. 0 nT, with the first of these corresponding to the Sporer minimum. Such dips are at variance with the recent suggestion that B has a minimum or floor value of approx. 2.8 nT. A floor in solar wind B implies a ceiling in the GCR intensity (a permanent modulation of the local interstellar spectrum) at a given energy/rigidity. The 30-40% increase in the intensity of 2.5 GV electrons observed by Ulysses during the recent solar minimum raises an interesting paradox that will need to be resolved.

  13. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics

    NASA Technical Reports Server (NTRS)

    Baluja, Shumeet

    1995-01-01

    This report is a repository of the results obtained from a large-scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes commonly explored in the genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. The algorithms tested and the encodings of the problems are described in detail for reproducibility.

  14. Gravitino-overproduction problem in an inflationary universe

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kawasaki, Masahiro; Takahashi, Fuminobu; Deutsches Elektronen Synchrotron DESY, Notkestrasse 85, 22607 Hamburg

    We show that the gravitino-overproduction problem is prevalent among inflation models in supergravity. An inflaton field φ generically acquires a nonvanishing (effective) auxiliary field G_φ^(eff) if the Kähler potential is nonminimal. The inflaton field then decays into a pair of gravitinos. We extensively study the cosmological constraints on G_φ^(eff) for a wide range of the gravitino mass. For many inflation models we explicitly estimate G_φ^(eff), and show that the gravitino-overproduction problem severely constrains the inflation models, unless an interaction such as K = (κ/2)|φ|^2 z^2 + H.c. is suppressed (here z is the field responsible for the supersymmetry breaking). We find that many of them are already excluded, or on the verge of being so, if κ ~ O(1).

  15. Typical motions in multiple systems

    NASA Technical Reports Server (NTRS)

    Anosova, Joanna P.

    1990-01-01

    In very old times, people counted: one, two, many. The author wants to show that they were right. Consider the motions of isolated bodies: (1) N = 1 - simple motion; (2) N = 2 - Keplerian orbits; and (3) N = 3 - this is the difficult problem. In general, this problem can be studied only by computer simulations. The author studied this problem over many years (see, e.g., Agekian and Anosova, 1967; Anosova, 1986, 1989 a,b). The principal result is that two basic types of dynamics take place in triple systems. The first, special type is the stable hierarchical systems with two almost Keplerian orbits. The second, general type is the unstable triple systems with complicated motions of the bodies. With initial conditions chosen at random by the Monte-Carlo method, the stable systems comprised approx. 10% of the examined cases; the unstable systems comprised the other approx. 90%. For N greater than 3, computer simulations of the dynamics of such systems show that the motions are in general roughly as in cases (1)-(3), with relative negative or positive energies of the bodies. The author's picture shows the typical trajectories of the bodies in unstable triple systems of the general type of dynamics. Such systems are always disrupted after close triple approaches of the bodies. These approaches act like a gravitational slingshot; often, the velocities of escapers are very large. The movie also shows the dynamical processes of formation, dynamical evolution, and disruption of temporary wide binaries in triples, and the formation of final hard massive binaries in the late evolution of triples.

  16. NSLS-II: Nonlinear Model Calibration for Synchrotrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bengtsson, J.

    This tech note is essentially a summary of a lecture we delivered to the Acc. Phys. Journal Club in April 2010. However, since the estimated accuracy of these methods in the field of particle accelerators has been naive and misleading, i.e., it ignores the impact of noise, we elaborate on this in some detail. A prerequisite for a calibration of the nonlinear Hamiltonian is that the quadratic part has been understood, i.e., that the linear optics for the real accelerator has been calibrated. For synchrotron light source operations, this problem has been solved by the interactive LOCO technique/tool (Linear Optics from Closed Orbits). Before that, in the context of hadron accelerators, it was done by signal processing of turn-by-turn BPM data. We have outlined how to make a basic calibration of the nonlinear model for synchrotrons. In particular, we have shown how this was done for LEAR, CERN (antiprotons) in the mid-80s. Specifically, our accuracy for frequency estimation was approx. 1 x 10^-5 for 1024 turns (to calibrate the linear optics) and approx. 1 x 10^-4 for 256 turns for the tune footprint and betatron spectrum. For comparison, the estimated tune footprint for stable beam for NSLS-II is approx. 0.1, and the transverse damping time is approx. 20 msec, i.e., approx. 4,000 turns. There is no fundamental difference between antiprotons, protons, and electrons in this case. Because the estimated accuracy for these methods in the field of particle accelerators has been naive, i.e., ignoring the impact of noise, we have also derived an explicit formula, from first principles, for a quantitative statement: for e.g. N = 256 turns and 5% noise we obtain δν approx. 1 x 10^-5. A comparison with the state of the art in e.g. telecom and electrical engineering since the 60s is quite revealing: consider the Kalman filter (1960), crucial for the Ranger, Mariner, and Apollo (including the Lunar Module) missions during the 60s, or Claude Shannon et al. since the 40s for that matter. Conclusion: what is elementary in the latter disciplines is considered 'advanced', if at all, in the former. It is little surprise, then, that published measurements in the accelerator field typically contain neither error bars (for the random errors) nor estimates of the systematic errors. We have also shown how to estimate the state space from turn-by-turn data from two adjacent BPMs, and how to improve the resolution of the nonlinear resonance spectrum by Fourier analyzing the linear action variables instead of the betatron motion. In fact, the state estimator could be further improved by adding a Kalman filter. For transparency, we have also summarized how these techniques provide a framework and method for a TQM (Total Quality Management) approach for the main ring. Of course, to make the ($2.5M) turn-by-turn data acquisition system that is being implemented (for all the BPMs) useful, a means (approx. 10% contingency for the BPM system) to drive the beam is obviously required.
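
    As an illustration of the noise-limited frequency estimation discussed above (a sketch, not the note's own analysis), the code below estimates a betatron tune from N = 256 turns of simulated turn-by-turn BPM data with 5% noise using a windowed FFT plus parabolic peak interpolation; the tune value, noise level, and window choice are assumptions.

    import numpy as np

    # Toy turn-by-turn BPM signal: a betatron oscillation at fractional
    # tune nu_true plus measurement noise.
    rng = np.random.default_rng(0)
    N, nu_true, noise = 256, 0.2231, 0.05
    turns = np.arange(N)
    signal = np.cos(2 * np.pi * nu_true * turns) + noise * rng.standard_normal(N)

    spectrum = np.abs(np.fft.rfft(signal * np.hanning(N)))
    k = np.argmax(spectrum[1:]) + 1          # dominant spectral line (skip DC)
    # Parabolic interpolation around the peak refines the estimate well
    # below the naive 1/N frequency resolution.
    d = 0.5 * (spectrum[k - 1] - spectrum[k + 1]) / (
        spectrum[k - 1] - 2 * spectrum[k] + spectrum[k + 1])
    nu_est = (k + d) / N
    print(f"true tune {nu_true:.5f}, estimate {nu_est:.5f}")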

  17. Multiobjective optimization of temporal processes.

    PubMed

    Song, Zhe; Kusiak, Andrew

    2010-06-01

    This paper presents a dynamic predictive-optimization framework for a nonlinear temporal process. Data-mining (DM) and evolutionary strategy algorithms are integrated in the framework for solving the optimization model. DM algorithms learn dynamic equations from the process data. An evolutionary strategy algorithm is then applied to solve the optimization problem guided by the knowledge extracted by the DM algorithm. The concept presented in this paper is illustrated with data from a power plant, where the goal is to maximize the boiler efficiency and minimize the limestone consumption. This multiobjective optimization problem can be transformed either into a single-objective optimization problem through preference aggregation approaches or into a Pareto-optimal optimization problem. The computational results have shown the effectiveness of the proposed optimization framework.
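
    A minimal sketch of the preference aggregation route mentioned above: two stand-in surrogate objectives are combined into one scalar objective with a weight and handed to a local optimizer. The functions boiler_efficiency and limestone_use and all coefficients are hypothetical placeholders, not the paper's data-mining models.

    import numpy as np
    from scipy.optimize import minimize

    def boiler_efficiency(u):                 # stand-in surrogate model
        return 0.9 - 0.1 * (u[0] - 0.6) ** 2 - 0.05 * (u[1] - 0.4) ** 2

    def limestone_use(u):                     # stand-in surrogate model
        return 0.2 + 0.5 * u[0] + 0.3 * u[1] ** 2

    def aggregated(u, w=0.7):
        # Preference aggregation: maximize efficiency and minimize limestone
        # use, combined into a single objective via a weight w in [0, 1].
        return -(w * boiler_efficiency(u) - (1 - w) * limestone_use(u))

    res = minimize(aggregated, x0=[0.5, 0.5], bounds=[(0, 1), (0, 1)])
    print(res.x, -res.fun)   # controllable settings and aggregated value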

  18. Toward Large-Area Sub-Arcsecond X-Ray Telescopes

    NASA Technical Reports Server (NTRS)

    O'Dell, Stephen L.; Aldcroft, Thomas L.; Allured, Ryan; Atkins, Carolyn; Burrows, David N.; Cao, Jian; Chalifoux, Brandon D.; Chan, Kai-Wing; Cotroneo, Vincenzo; Elsner, Ronald F.; et al.

    2014-01-01

    The future of x-ray astronomy depends upon development of x-ray telescopes with larger aperture areas (approx. 3 square meters) and fine angular resolution (approx. 1 arcsecond). Combined with the special requirements of nested grazing-incidence optics, the mass and envelope constraints of space-borne telescopes render such advances technologically and programmatically challenging. Achieving this goal will require precision fabrication, alignment, mounting, and assembly of large areas (approx. 600 square meters) of lightweight (approx. 1 kilogram/square meter areal density) high-quality mirrors at an acceptable cost (approx. 1 million dollars/square meter of mirror surface area). This paper reviews relevant technological and programmatic issues, as well as possible approaches for addressing these issues, including active (in-space adjustable) alignment and figure correction.

  19. Laser-driven relativistic electron beam interaction with solid dielectric

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarkisov, G. S.; Ivanov, V. V.; Leblanc, P.

    2012-07-30

    Multi-frame shadowgraphy, interferometry, and polarimetry diagnostics with sub-ps time resolution were used for an investigation of ionization wave dynamics inside a glass target induced by a laser-driven relativistic electron beam. Experiments were done using the 50 TW Leopard laser at the UNR. For a laser flux of approx. 2 x 10^18 W/cm^2, a hemispherical ionization wave propagates at c/3. The maximum electron density inside the glass target is approx. 2 x 10^19 cm^-3. Magnetic and electric fields are less than approx. 15 kG and approx. 1 MV/cm, respectively. The electron temperature has a maximum of approx. 0.5 eV. The 2D interference phase shift shows the 'fountain effect' of the electron beam. The very low ionization inside the glass target, approx. 0.1%, suggests fast recombination on the sub-ps time scale. 2D PIC simulations demonstrate radial spreading of fast electrons by self-consistent electrostatic fields.

  20. Social Emotional Optimization Algorithm for Nonlinear Constrained Optimization Problems

    NASA Astrophysics Data System (ADS)

    Xu, Yuechun; Cui, Zhihua; Zeng, Jianchao

    Nonlinear programming is an important branch of operational research and has been successfully applied to various real-life problems. In this paper, a new approach called the Social Emotional Optimization Algorithm (SEOA), a new swarm intelligence technique that simulates human behavior guided by emotion, is used to solve this problem. Simulation results show that the social emotional optimization algorithm proposed in this paper is effective and efficient for nonlinear constrained programming problems.

  1. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features, such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EAs with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EAs. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for the other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de facto standard test problem set for constrained optimization with EAs. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging; find all possible solutions), a complex real-life accelerator design problem (optimization of the front end section for the future neutrino factory), and a problem of normal form defect function optimization, which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential.
The developed optimization scenarios and tools can be used to approach similar problems.
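
    As a generic illustration of repair-based constrained optimization with an EA (a sketch in the spirit described above, not GATool or REPA themselves), the toy (mu + lambda) evolution strategy below repairs each candidate back into the feasible region before evaluation; the objective, the constraint, and all parameter values are assumptions.

    import numpy as np

    rng = np.random.default_rng(1)

    def objective(x):                 # toy problem: minimize a shifted sphere
        return np.sum((x - 1.0) ** 2)

    def repair(x):                    # naive repair: clip into the box
        x = np.clip(x, -2.0, 2.0)     # [-2, 2]^2, then enforce x0 + x1 <= 1
        excess = x[0] + x[1] - 1.0    # by shifting both coordinates equally
        if excess > 0:
            x[0] -= excess / 2
            x[1] -= excess / 2
        return x

    mu, lam, sigma, dim = 5, 20, 0.3, 2
    pop = [repair(rng.uniform(-2, 2, dim)) for _ in range(mu)]
    for gen in range(100):            # (mu + lambda) selection loop
        offspring = [repair(pop[rng.integers(mu)] + sigma * rng.standard_normal(dim))
                     for _ in range(lam)]
        pop = sorted(pop + offspring, key=objective)[:mu]
    print(pop[0], objective(pop[0]))  # best feasible point found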

  2. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1989-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.
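
    For reference, the classical optimal control problem to which the duality argument above refers leads to the standard first-order necessary conditions below (a textbook summary in LaTeX, not the paper's mixed weak form):

    % Classical optimal control problem and its first-order conditions.
    \begin{align*}
      &\text{minimize } J = \phi(x(t_f)) + \int_{t_0}^{t_f} L(x,u,t)\,dt
       \quad \text{subject to } \dot{x} = f(x,u,t),\\
      &H(x,u,\lambda,t) = L(x,u,t) + \lambda^{T} f(x,u,t),\\
      &\dot{x} = \frac{\partial H}{\partial \lambda},\qquad
       \dot{\lambda} = -\frac{\partial H}{\partial x},\qquad
       0 = \frac{\partial H}{\partial u},\qquad
       \lambda(t_f) = \frac{\partial \phi}{\partial x}\bigg|_{t_f}.
    \end{align*}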

  3. A weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1990-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  4. Weak Hamiltonian finite element method for optimal control problems

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Bless, Robert R.

    1991-01-01

    A temporal finite element method based on a mixed form of the Hamiltonian weak principle is developed for dynamics and optimal control problems. The mixed form of Hamilton's weak principle contains both displacements and momenta as primary variables that are expanded in terms of nodal values and simple polynomial shape functions. Unlike other forms of Hamilton's principle, however, time derivatives of the momenta and displacements do not appear therein; instead, only the virtual momenta and virtual displacements are differentiated with respect to time. Based on the duality that is observed to exist between the mixed form of Hamilton's weak principle and variational principles governing classical optimal control problems, a temporal finite element formulation of the latter can be developed in a rather straightforward manner. Several well-known problems in dynamics and optimal control are illustrated. The example dynamics problem involves a time-marching problem. As optimal control examples, elementary trajectory optimization problems are treated.

  5. THE PRISM MULTI-OBJECT SURVEY (PRIMUS). I. SURVEY OVERVIEW AND CHARACTERISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coil, Alison L.; Moustakas, John; Aird, James

    2011-11-01

    We present the PRIsm MUlti-object Survey (PRIMUS), a spectroscopic faint galaxy redshift survey to z approx. 1. PRIMUS uses a low-dispersion prism and slitmasks to observe approx. 2500 objects at once in a 0.18 deg^2 field of view, using the Inamori Magellan Areal Camera and Spectrograph camera on the Magellan I Baade 6.5 m telescope at Las Campanas Observatory. PRIMUS covers a total of 9.1 deg^2 of sky to a depth of i_AB approx. 23.5 in seven different deep, multi-wavelength fields that have coverage from the Galaxy Evolution Explorer, Spitzer, and either XMM or Chandra, as well as multiple-band optical and near-IR coverage. PRIMUS includes approx. 130,000 robust redshifts of unique objects with a redshift precision of σ_z/(1 + z) approx. 0.005. The redshift distribution peaks at z approx. 0.6 and extends to z = 1.2 for galaxies and z = 5 for broad-line active galactic nuclei. The motivation, observational techniques, fields, target selection, slitmask design, and observations are presented here, with a brief summary of the redshift precision; a forthcoming paper presents the data reduction, redshift fitting, redshift confidence, and survey completeness. PRIMUS is the largest faint galaxy survey undertaken to date. The high targeting fraction (approx. 80%) and large survey size will allow for precise measures of galaxy properties and large-scale structure to z approx. 1.

  6. Optimality conditions for the numerical solution of optimization problems with PDE constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aguilo Valentin, Miguel Alejandro; Ridzal, Denis

    2014-03-01

    A theoretical framework for the numerical solution of partial differential equation (PDE) constrained optimization problems is presented in this report. This theoretical framework embodies the fundamental infrastructure required to efficiently implement and solve this class of problems. Detailed derivations of the optimality conditions required to accurately solve several parameter identification and optimal control problems are also provided in this report. This will allow the reader to further understand how the theoretical abstraction presented in this report translates to applications.
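
    A generic sketch of the kind of optimality conditions such a framework derives, written for an abstract objective J and a discretized PDE constraint c (the notation is assumed here, not taken from the report):

    % Equality-constrained formulation and first-order (KKT) system.
    \begin{align*}
      &\min_{u,z}\ J(u,z) \quad \text{subject to}\quad c(u,z) = 0
       \qquad (u = \text{state},\ z = \text{design/control}),\\
      &\mathcal{L}(u,z,\lambda) = J(u,z) + \lambda^{T} c(u,z),\\
      &\text{state equation:}\quad c(u,z) = 0,\qquad
       \text{adjoint equation:}\quad c_u^{T}\lambda = -J_u,\\
      &\text{design equation:}\quad \nabla_z \hat{J} = J_z + c_z^{T}\lambda = 0.
    \end{align*}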

  7. FRANOPP: Framework for analysis and optimization problems user's guide

    NASA Technical Reports Server (NTRS)

    Riley, K. M.

    1981-01-01

    Framework for Analysis and Optimization Problems (FRANOPP) is a software aid for the study and solution of design (optimization) problems which provides the driving program and plotting capability for a user-generated programming system. In addition to FRANOPP, the programming system also contains the optimization code CONMIN and two user-supplied codes, one for analysis and one for output. With FRANOPP the user is provided with five options for studying a design problem. Three of the options utilize the plot capability and present an in-depth study of the design problem. The study can be focused on a history of the optimization process or on the interaction of variables within the design problem.

  8. An update on the study of high-gradient elliptical SRF cavities at 805 MHz for proton and other applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tajima, Tsuyoshi; Haynes, Brian; Krawczyk, Frank

    2010-09-09

    An update on the study of 805 MHz elliptical SRF cavities that have been optimized for high gradient will be presented. An optimized cell shape, which is still appropriate for easy high-pressure water rinsing, has been designed with the ratios of peak magnetic and electric fields to accelerating gradient being 3.75 mT/(MV/m) and 1.82, respectively. A total of 3 single-cell cavities have been fabricated. Two of the 3 cavities have been tested so far. The second cavity achieved an E_acc of approx. 50 MV/m at a Q_0 of 1.4 x 10^10. This result demonstrates that 805 MHz cavities can, in principle, perform as well as, or even better than, 1.3 GHz high-gradient cavities.

  9. Status Update on the James Webb Space Telescope Project

    NASA Technical Reports Server (NTRS)

    Rigby, Jane R.

    2011-01-01

    The James Webb Space Telescope (JWST) is a large (6.6 m), cold (<50 K), infrared (IR)-optimized space observatory that will be launched in approx. 2018. The observatory will have four instruments covering 0.6 to 28 microns, including a multi-object spectrograph, two integral field units, and grisms optimized for exoplanets. I will review JWST's key science themes, as well as exciting new ideas from the recent JWST Frontiers Workshop. I will summarize the technical progress and mission status. Recent highlights: all mirrors have been fabricated, polished, and gold-coated; the mirror is expected to be diffraction-limited down to a wavelength of 2 microns. The MIRI instrument just completed its cryogenic testing. STScI has released exposure time calculators and sensitivity charts to enable scientists to start thinking about how to use JWST for their science.

  10. AMANDA Observations Constrain the Ultrahigh Energy Neutrino Flux

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Halzen, Francis; /Wisconsin U., Madison; Hooper, Dan

    2006-05-01

    A number of experimental techniques are currently being deployed in an effort to make the first detection of ultra-high energy cosmic neutrinos. To accomplish this goal, techniques using radio and acoustic detectors are being developed, which are optimally designed for studying neutrinos with energies in the PeV-EeV range and above. Data from the AMANDA experiment, in contrast, have been used to place limits on the cosmic neutrino flux at less extreme energies (up to approx. 10 PeV). In this letter, we show that by adopting a different analysis strategy, optimized for much higher energy neutrinos, the same AMANDA data can be used to place a limit competitive with radio techniques at EeV energies. We also discuss the sensitivity of the IceCube experiment, in various stages of deployment, to ultra-high energy neutrinos.

  11. Evaluation of Genetic Algorithm Concepts Using Model Problems. Part 2; Multi-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic algorithm approach suitable for solving multi-objective optimization problems is described and evaluated using a series of simple model problems. Several new features, including a binning selection algorithm and a gene-space transformation procedure, are included. The genetic algorithm is suitable for finding Pareto optimal solutions in search spaces that are defined by any number of genes and that contain any number of local extrema. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all optimization problems attempted. The binning algorithm generally provides Pareto front quality enhancements and moderate convergence efficiency improvements for most of the model problems. The gene-space transformation procedure provides a large convergence efficiency enhancement for problems with non-convoluted Pareto fronts and a degradation in efficiency for problems with convoluted Pareto fronts. The most difficult problems (multi-mode search spaces with a large number of genes and convoluted Pareto fronts) require a large number of function evaluations for GA convergence, but always converge.
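
    The notion of Pareto optimality underlying such an approach can be made concrete with a short dominance check (an illustrative sketch only; the binning selection and gene-space transformation procedures are not reproduced here):

    import numpy as np

    def dominates(a, b):
        """True if objective vector a Pareto-dominates b (all <=, one <)."""
        return np.all(a <= b) and np.any(a < b)

    def pareto_front(objs):
        """Indices of the nondominated rows of objs (minimization)."""
        return [i for i, a in enumerate(objs)
                if not any(dominates(b, a) for j, b in enumerate(objs) if j != i)]

    # Toy two-objective population, both objectives minimized.
    objs = np.array([[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]])
    print(pareto_front(objs))   # [0, 1, 3]; the point [3, 4] is dominated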

  12. Wireless Sensor Network Optimization: Multi-Objective Paradigm.

    PubMed

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-07-20

    Optimization problems relating to wireless sensor network planning, design, deployment, and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Keeping in view the nature of the application, the sensing scenario, and the input/output of the problem, the type of optimization problem changes. To address the differing nature of optimization problems relating to wireless sensor network design, deployment, operation, planning, and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other, or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks which consists of input variables, required output, objectives, and constraints. A list of constraints is also presented to give an overview of the different constraints which are considered while formulating optimization problems in wireless sensor networks. Keeping in view the multi-faceted coverage of this article relating to multi-objective optimization, it should open up new avenues of research in the area of multi-objective optimization relating to wireless sensor networks.

  13. Uncertainty Aware Structural Topology Optimization Via a Stochastic Reduced Order Model Approach

    NASA Technical Reports Server (NTRS)

    Aguilo, Miguel A.; Warner, James E.

    2017-01-01

    This work presents a stochastic reduced order modeling strategy for the quantification and propagation of uncertainties in topology optimization. Uncertainty aware optimization problems can be computationally complex due to the substantial number of model evaluations that are necessary to accurately quantify and propagate uncertainties. This computational complexity is greatly magnified if a high-fidelity, physics-based numerical model is used for the topology optimization calculations. Stochastic reduced order model (SROM) methods are applied here to effectively 1) alleviate the prohibitive computational cost associated with an uncertainty aware topology optimization problem; and 2) quantify and propagate the inherent uncertainties due to design imperfections. A generic SROM framework that transforms the uncertainty aware, stochastic topology optimization problem into a deterministic optimization problem that relies only on independent calls to a deterministic numerical model is presented. This approach facilitates the use of existing optimization and modeling tools to accurately solve the uncertainty aware topology optimization problems in a fraction of the computational demand required by Monte Carlo methods. Finally, an example in structural topology optimization is presented to demonstrate the effectiveness of the proposed uncertainty aware structural topology optimization approach.
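
    A minimal sketch of that transformation, assuming a stand-in one-variable model and illustrative SROM samples and weights (none of this is the paper's formulation): the expectation of a quantity of interest reduces to a small weighted sum of deterministic model runs, compared here against brute-force Monte Carlo.

    import numpy as np

    def deterministic_model(load):
        return load ** 2 / 200.0      # stand-in for, e.g., a compliance run

    # An SROM represents the random input by a few samples x_k with
    # probabilities p_k, so an expectation becomes a finite weighted sum.
    srom_samples = np.array([ 80.0, 100.0, 120.0])
    srom_weights = np.array([ 0.25,  0.50,  0.25])   # sums to one

    expected_qoi = np.dot(srom_weights,
                          [deterministic_model(s) for s in srom_samples])

    # Reference: Monte Carlo needs far more model evaluations.
    mc = np.random.default_rng(0).normal(100.0, 14.0, 20000)
    print(expected_qoi, np.mean(deterministic_model(mc)))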

  14. A Kind of Nonlinear Programming Problem Based on Mixed Fuzzy Relation Equations Constraints

    NASA Astrophysics Data System (ADS)

    Li, Jinquan; Feng, Shuang; Mi, Honghai

    In this work, a kind of nonlinear programming problem with a non-differentiable objective function, under constraints expressed by a system of mixed fuzzy relation equations, is investigated. First, some properties of this kind of optimization problem are obtained. Then, a polynomial-time algorithm for this kind of optimization problem is proposed based on these properties. Furthermore, we show that this algorithm is optimal for the optimization problem considered in this paper. Finally, numerical examples are provided to illustrate our algorithm.

  15. Design and Development of NEA Scout Solar Sail Deployer Mechanism

    NASA Technical Reports Server (NTRS)

    Sobey, Alexander R.; Lockett, Tiffany Russell

    2016-01-01

    The 6U (approx. 10 cm x 20 cm x 30 cm) cubesat Near Earth Asteroid (NEA) Scout, projected for launch in September 2018 aboard the maiden voyage of the Space Launch System, will utilize a solar sail as its main method of propulsion throughout its approx. 3-year mission to a near-Earth asteroid. Due to the extreme volume constraints levied on the mission, a highly compact solar sail deployment mechanism has been designed to meet the volume and mass constraints as well as provide enough propulsive solar sail area and quality to achieve mission success. The design of such a compact system required the development of approximately half a dozen prototypes in order to identify unforeseen problems, advance solutions, and build confidence in the final design. This paper focuses on the obstacles of developing a solar sail deployment mechanism for such an application and the lessons learned from a thorough development process. The lessons presented will have significant applications beyond the NEA Scout mission, such as the development of other deployable boom mechanisms and uses for gossamer-thin films in space.

  16. Electric flux tube in a magnetic plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao Jinfeng; Shuryak, Edward

    2008-06-15

    In this paper we study a methodical problem related to the magnetic scenario recently suggested and initiated by Liao and Shuryak [Phys. Rev. C 75, 054907 (2007)] to understand the strongly coupled quark-gluon plasma (sQGP): the electric flux tube in a monopole plasma. A macroscopic approach, in which we interpolate between a Bose-condensed (dual superconductor) medium and a classical gas medium, is developed first. Then we work out a microscopic approach based on detailed quantum mechanical calculations of monopole scattering on the electric flux tube, evaluating induced currents for all partial waves. As expected, the flux tube loses its stability when particles can penetrate it: we make this condition precise by calculating the critical value for the product of the flux tube size times the particle momentum, above which the flux tube dissolves. Lattice static potentials indicate that flux tubes seem to dissolve at T > T_dissolution approx. 1.3 T_c. Using our criterion, one gets an estimate of the magnetic density n approx. 4.4-6.6 fm^-3 at this temperature.

  17. Initial Assessment of the Excavation and Deposition of Impact Lithologies Exposed by the Chicxulub Scientific Drilling Project, Yaxcopoil, Mexico

    NASA Technical Reports Server (NTRS)

    Kring, David A.; Horz, Friedrich; Zurcher, Lukas

    2003-01-01

    The Chicxulub Scientific Drilling Project (www.icdp-online.de) recovered a continuous core from a depth of 404 m (in Tertiary cover) to 1511 m (in a megablock of Cretaceous target sediments), penetrating approx. 100 m of melt-bearing impactites between 794 and 895 m. The Yaxcopoil-1 (YAX-1) borehole is approx. 60-65 km from the center of the Chicxulub structure, which is approx. 15 km beyond the limit of the estimated approx. 50 km radius transient crater (excavation cavity), but within the rim of the estimated approx. 90 km radius final crater. In general, the impactite sequence is incredibly rich in impact melts of unusual textural variety and complexity, quite unlike melt-bearing impact formations from other terrestrial craters.

  18. Determination of Unfiltered Radiances from the Clouds and the Earth's Radiant Energy System (CERES) Instrument

    NASA Technical Reports Server (NTRS)

    Loeb, N. G.; Priestley, K. J.; Kratz, D. P.; Geier, E. B.; Green, R. N.; Wielicki, B. A.; Hinton, P. OR.; Nolan, S. K.

    2001-01-01

    A new method for determining unfiltered shortwave (SW), longwave (LW), and window (W) radiances from filtered radiances measured by the Clouds and the Earth's Radiant Energy System (CERES) satellite instrument is presented. The method uses theoretically derived regression coefficients between filtered and unfiltered radiances that are a function of viewing geometry, geotype, and whether or not cloud is present. Relative errors in instantaneous unfiltered radiances from this method are generally well below 1% for SW radiances (approx. 0.4% 1-sigma, or approx. 1 W/sq m equivalent flux), < 0.2% for LW radiances (approx. 0.1% 1-sigma, or approx. 0.3 W/sq m equivalent flux), and < 0.2% (approx. 0.1% 1-sigma) for window channel radiances.

  19. Supersymmetric Q-balls: A numerical study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Campanelli, L.; INFN--Sezione di Ferrara, I-44100 Ferrara; Ruggieri, M.

    2008-02-15

    We study numerically a class of nontopological solitons, the Q-balls, arising in a supersymmetric extension of the standard model with low-energy, gauge-mediated symmetry breaking. Taking into account the exact form of the supersymmetric potential giving rise to Q-balls, we find that there is a lower limit on the value of the charge Q in order to make them classically stable: Q > or approx. 5 x 10^2 Q_cr, where Q_cr is a constant depending on the parameters defining the potential and can be in the range 1 < or approx. Q_cr < or approx. 10^(8-16). If Q is the baryon number, stability with respect to decay into protons requires Q > or approx. 10^17 Q_cr, while if the gravitino mass is m_3/2 > or approx. 61 MeV, no stable gauge-mediation supersymmetric Q-balls exist. Finally, we find that the energy and radius of Q-balls can be parametrized as E approx. ξ_E Q^(3/4) and R approx. ξ_R Q^(1/4), where ξ_E and ξ_R are slowly varying functions of the charge.

  20. A relation between the short time variations of cosmic rays and geomagnetic field change

    NASA Technical Reports Server (NTRS)

    Saki, T.; Kato, M.

    1985-01-01

    An event is reported of an approx. 37 min periodicity in cosmic ray intensity observed at Akeno (38 deg 47 N, 138 deg 30 E; 900 m above sea level; cutoff 10.4 GV) during 1300-1900 UT on April 25th, 1984, just a day before the Forbush decrease of April 26th. This event seemed to be followed by periodic variations of the geomagnetic field observed at Kakioka (36 deg 23 N, 140 deg 18 E). The regression coefficient between them was approx. 0.07%/10 nT. It is shown that in general the power spectral density of cosmic rays in the frequency range 0.0001-0.001 Hz correlates positively with the fluctuations of the geomagnetic field (Dst field) around approx. 1.2 x 10^-4 Hz. From the analysis of 47 days of data (April 14th to June 13th, 1984), the regression curve was obtained as y = 0.275 x^0.343 with a correlation coefficient of 0.48, where x is the Fourier component of the Dst field summed over 1.04-1.39 x 10^-3 Hz and y is the cosmic ray power spectral density averaged over 0.0001-0.001 Hz.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    N. Seth Carpenter; Suzette J. Payne; Annette L. Schafer

    We recognize a discrepancy in magnitudes estimated for several Basin and Range, U.S.A. faults. For example, magnitudes predicted for the Wasatch (Utah), Lost River (Idaho), and Lemhi (Idaho) faults from fault segment lengths (L_seg), where lengths are defined between geometrical, structural, and/or behavioral discontinuities assumed to persistently arrest rupture, are consistently less than magnitudes calculated from displacements (D) along these same segments. For self-similarity, empirical relationships (e.g. Wells and Coppersmith, 1994) should predict consistent magnitudes (M) using diverse fault dimension values for a given fault (i.e., M ~ L_seg should equal M ~ D). Typically, the empirical relationships are derived from historical earthquake data, and parameter values used as input into these relationships are determined from field investigations of paleoearthquakes. A commonly used assumption - grounded in the characteristic-earthquake model of Schwartz and Coppersmith (1984) - is equating L_seg with surface rupture length (SRL). Many large historical events yielded secondary and/or sympathetic faulting (e.g. the 1983 Borah Peak, Idaho earthquake), which is included in the measurement of SRL and used to derive empirical relationships. Therefore, calculating magnitude from the M ~ SRL relationship using L_seg as SRL leads to an underestimation of magnitude and the M ~ L_seg and M ~ D discrepancy. Here, we propose an alternative approach to earthquake magnitude estimation involving a relationship between moment magnitude (Mw) and length, where length is L_seg instead of SRL. We analyze seven historical, surface-rupturing, strike-slip and normal faulting earthquakes for which segmentation of the causative fault and displacement data are available and whose rupture included at least one entire fault segment, but not two or more. The preliminary Mw ~ L_seg results are strikingly consistent with Mw ~ D calculations using paleoearthquake data for the Wasatch, Lost River, and Lemhi faults, demonstrating self-similarity and implying that the Mw ~ L_seg relationship should supplant M ~ SRL relationships currently employed in seismic hazard analyses. The relationship will permit reliable use of L_seg data from field investigations and proper use and weighting of multiple-segment-rupture scenarios in seismic hazard analyses, and eliminate the need to reconcile the Mw ~ SRL and Mw ~ D differences in a multiple-parameter relationship for segmented faults.
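
    The form of the empirical relationships at issue can be illustrated with a short sketch. The regression form M = a + b log10(L) follows Wells and Coppersmith (1994), but the coefficient values below are assumptions quoted for illustration and should be checked against the published regressions for the appropriate slip type and length measure.

    import math

    # Empirical magnitude-length regressions take the form M = a + b*log10(L).
    # a = 5.08, b = 1.16 (surface rupture length, all slip types) are quoted
    # here as assumptions; consult Wells and Coppersmith (1994) directly.
    def magnitude_from_length(L_km, a=5.08, b=1.16):
        return a + b * math.log10(L_km)

    # Feeding a 35 km segment length into an SRL regression, versus into a
    # separate M ~ L_seg regression, is exactly the distinction drawn above.
    print(magnitude_from_length(35.0))   # approx. M 6.9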

  2. UV-CONTINUUM SLOPES AT z ~ 4-7 FROM THE HUDF09+ERS+CANDELS OBSERVATIONS: DISCOVERY OF A WELL-DEFINED UV COLOR-MAGNITUDE RELATIONSHIP FOR z >= 4 STAR-FORMING GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bouwens, R. J.; Franx, M.; Labbe, I.

    2012-08-01

    Ultra-deep Advanced Camera for Surveys (ACS) and WFC3/IR HUDF+HUDF09 data, along with the wide-area GOODS+ERS+CANDELS data over the CDF-S GOODS field, are used to measure UV colors, expressed as the UV-continuum slope β, of star-forming galaxies over a wide range of luminosity (0.1 L*_z=3 to 2 L*_z=3) at high redshift (z ~ 7 to z ~ 4). β is measured using all ACS and WFC3/IR passbands uncontaminated by Lyα and spectral breaks. Extensive tests show that our β measurements are only subject to minimal biases. Using a different selection procedure, Dunlop et al. recently found large biases in their β measurements. To reconcile these different results, we simulated both approaches and found that β measurements for faint sources are subject to large biases if the same passbands are used both to select the sources and to measure β. High-redshift galaxies show a well-defined rest-frame UV color-magnitude (CM) relationship that becomes systematically bluer toward fainter UV luminosities. No evolution is seen in the slope of the UV CM relationship in the first 1.5 Gyr, though there is a small evolution in the zero point to redder colors from z ~ 7 to z ~ 4. This suggests that galaxies are evolving along a well-defined sequence in the L_UV-color (β) plane (a 'star-forming sequence'?). Dust appears to be the principal factor driving changes in the UV color β with luminosity. These new larger β samples lead to improved dust extinction estimates at z ~ 4-7 and confirm that the extinction is essentially zero at low luminosities and high redshifts. Inclusion of the new dust extinction results leads to (1) excellent agreement between the star formation rate (SFR) density at z ~ 4-8 and that inferred from the stellar mass density, and (2) higher specific star formation rates (SSFRs) at z ≳ 4, suggesting that the SSFR may evolve modestly (by factors of ~2) from z ~ 4-7 to z ~ 2.

  3. Fast Optimization for Aircraft Descent and Approach Trajectory

    NASA Technical Reports Server (NTRS)

    Luchinsky, Dmitry G.; Schuet, Stefan; Brenton, J.; Timucin, Dogan; Smith, David; Kaneshige, John

    2017-01-01

    We address the problem of on-line scheduling of aircraft descent and approach trajectories. We formulate a general multiphase optimal control problem for optimization of the descent trajectory and review available methods for its solution. We develop a fast algorithm for the solution of this problem using two key components: (i) fast inference of the dynamical and control variables of the descending trajectory from low-dimensional flight profile data, and (ii) efficient local search for the resulting reduced-dimensionality nonlinear optimization problem. We compare the performance of the proposed algorithm with a numerical solution obtained using the optimal control toolbox General Pseudospectral Optimal Control Software. We present results for the aircraft descent scheduling problem obtained with the new fast algorithm and discuss its future applications.
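
    A minimal sketch of the reduced-dimensionality idea (not the authors' algorithm): parameterize the descent profile by a few altitude knots and run a local search on the resulting small nonlinear program. All dynamics, limits, and numbers below are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Reduced-dimensionality descent sketch: a handful of altitude knots replaces
# a full multiphase optimal-control transcription. Numbers are illustrative.
d = np.linspace(0.0, 100.0, 6)          # along-track distance knots [km]
h0, hf = 10000.0, 1000.0                # initial / final altitude [m]

def cost(h_mid):
    h = np.concatenate(([h0], h_mid, [hf]))
    slope = np.diff(h) / np.diff(d * 1000.0)        # flight-path gradient
    # Proxy objective: prefer a smooth, shallow descent (penalize curvature).
    return np.sum(np.diff(slope) ** 2) + 1e-4 * np.sum(slope ** 2)

# Path constraint: the descent gradient stays within an assumed envelope.
cons = [{"type": "ineq",
         "fun": lambda h_mid: 0.06 - np.max(np.abs(
             np.diff(np.concatenate(([h0], h_mid, [hf]))) / np.diff(d * 1000.0)))}]

x0 = np.linspace(h0, hf, 6)[1:-1]       # interior knots as decision variables
res = minimize(cost, x0, method="SLSQP", constraints=cons)  # efficient local search
print(res.success, np.round(res.x, 1))
```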

  4. Research on cutting path optimization of sheet metal parts based on ant colony algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Z. Y.; Ling, H.; Li, L.; Wu, L. H.; Liu, N. B.

    2017-09-01

    In view of the disadvantages of current cutting path optimization methods for sheet metal parts, a new method based on the ant colony algorithm was proposed in this paper. The cutting path optimization problem for sheet metal parts was taken as the research object, and the essence and optimization goal of the problem were presented. The traditional serial cutting constraint rule was improved, and a cutting constraint rule allowing cross cutting was proposed. The contour lines of the parts were discretized and a mathematical model of cutting path optimization was established, converting the problem into the selection of contour lines of the parts. The ant colony algorithm was used to solve the problem, and its principle and steps were analyzed.
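
    A minimal sketch of the ant colony step, under the simplifying assumption that each contour is reduced to a single pierce point, so path planning becomes an ordering problem (the paper's cross-cutting rule and contour discretization are not modeled here):

```python
import numpy as np

# Toy ant-colony ordering of pierce points on a sheet (illustrative data).
rng = np.random.default_rng(1)
pts = rng.uniform(0, 500, size=(12, 2))          # pierce points [mm]
n = len(pts)
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)

tau = np.ones((n, n))                            # pheromone trails
alpha, beta, rho, Q = 1.0, 2.0, 0.5, 100.0

def tour_length(t):
    return sum(dist[t[i], t[i + 1]] for i in range(n - 1))

best, best_len = None, np.inf
for _ in range(200):
    tours = []
    for _ant in range(10):
        tour = [rng.integers(n)]
        while len(tour) < n:
            i = tour[-1]
            mask = np.ones(n, bool); mask[tour] = False
            w = (tau[i] ** alpha) * (dist[i] ** -beta) * mask   # attractiveness
            tour.append(rng.choice(n, p=w / w.sum()))
        tours.append((tour_length(tour), tour))
    tau *= 1.0 - rho                             # evaporation
    for L, t in tours:
        for i in range(n - 1):
            tau[t[i], t[i + 1]] += Q / L         # pheromone deposit
        if L < best_len:
            best_len, best = L, t
print(round(best_len, 1), best)
```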

  5. Large-scale structural optimization

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.

    1983-01-01

    Problems encountered by aerospace designers in attempting to optimize whole aircraft are discussed, along with possible solutions. Large scale optimization, as opposed to component-by-component optimization, is hindered by computational costs, software inflexibility, concentration on a single, rather than trade-off, design methodology and the incompatibility of large-scale optimization with single program, single computer methods. The software problem can be approached by placing the full analysis outside of the optimization loop. Full analysis is then performed only periodically. Problem-dependent software can be removed from the generic code using a systems programming technique, and then embody the definitions of design variables, objective function and design constraints. Trade-off algorithms can be used at the design points to obtain quantitative answers. Finally, decomposing the large-scale problem into independent subproblems allows systematic optimization of the problems by an organization of people and machines.

  6. Application of the gravity search algorithm to multi-reservoir operation optimization

    NASA Astrophysics Data System (ADS)

    Bozorg-Haddad, Omid; Janbaz, Mahdieh; Loáiciga, Hugo A.

    2016-12-01

    Complexities in river discharge, variable rainfall regimes, and drought severity merit the use of advanced optimization tools in multi-reservoir operation. The gravity search algorithm (GSA) is an evolutionary optimization algorithm based on the law of gravity and mass interactions. This paper explores the GSA's efficacy for solving benchmark functions and single-reservoir and four-reservoir operation optimization problems. The GSA's solutions are compared with those of the well-known genetic algorithm (GA) in three optimization problems. The results show that the GSA's results are closer to the optimal solutions than the GA's in minimizing the benchmark functions. The average values of the objective function equal 1.218 and 1.746 with the GSA and GA, respectively, in solving the single-reservoir hydropower operation problem, for which the global solution equals 1.213. The GSA converged to 99.97% of the global solution in its average-performing history, while the GA converged to 97% of the global solution of the four-reservoir problem. Requiring fewer parameters for algorithmic implementation and reaching the optimal solution in a smaller number of function evaluations are additional advantages of the GSA over the GA. The results of the three optimization problems demonstrate a superior performance of the GSA for optimizing general mathematical problems and the operation of reservoir systems.
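
    For reference, a minimal gravitational search algorithm on a standard benchmark; the population size, gravity-constant schedule, and objective are illustrative, not the settings of the reservoir study:

```python
import numpy as np

# Minimal GSA sketch: agent masses derive from fitness, and agents accelerate
# under the summed gravitational pull of the others.
rng = np.random.default_rng(0)

def f(x):                       # benchmark objective: sphere function
    return np.sum(x**2, axis=-1)

n_agents, dim, iters, G0 = 20, 5, 300, 100.0
X = rng.uniform(-10, 10, (n_agents, dim))
V = np.zeros_like(X)

for t in range(iters):
    fit = f(X)
    worst, best = fit.max(), fit.min()
    m = (worst - fit) / (worst - best + 1e-12)      # masses from fitness
    M = m / (m.sum() + 1e-12)
    G = G0 * np.exp(-20.0 * t / iters)              # decaying gravity constant
    A = np.zeros_like(X)
    for i in range(n_agents):
        diff = X - X[i]
        r = np.linalg.norm(diff, axis=1) + 1e-12
        # Sum of stochastic gravitational pulls from every other agent.
        A[i] = np.sum(rng.random((n_agents, 1)) * (G * M[:, None] * diff / r[:, None]), axis=0)
    V = rng.random(X.shape) * V + A                 # velocity update
    X = X + V

print("best objective:", f(X).min())
```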

  7. Discrete particle swarm optimization to solve multi-objective limited-wait hybrid flow shop scheduling problem

    NASA Astrophysics Data System (ADS)

    Santosa, B.; Siswanto, N.; Fiqihesa

    2018-04-01

    This paper proposes a discrete Particle Swarm Optimization (PSO) to solve the limited-wait hybrid flow shop scheduling problem with multiple objectives. Flow shop scheduling represents the condition in which several machines are arranged in series and each job must be processed at each machine in the same sequence. The objective functions are minimizing completion time (makespan), total tardiness time, and total machine idle time. Flow shop scheduling models continue to grow in order to represent real production systems accurately. Since flow shop scheduling is an NP-hard problem, metaheuristics are the most suitable solution methods. One such metaheuristic is Particle Swarm Optimization (PSO), an algorithm based on the behavior of a swarm. Originally, PSO was intended to solve continuous optimization problems; since flow shop scheduling is a discrete optimization problem, we modify PSO to fit the problem using a probability transition matrix mechanism. To handle the multiple objectives, we use a Pareto-optimal approach (MPSO). The results of MPSO are better than those of PSO because the MPSO solution set has a higher probability of containing the optimal solution; the MPSO solution set is also closer to the optimal solution.
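
    A compact sketch of a discrete PSO for a permutation flow shop. The paper's probability transition matrix is replaced here by the simpler random-keys decoding (argsort of continuous positions), and a single makespan objective stands in for the three objectives; processing times are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
n_jobs, n_machines = 8, 4
P = rng.integers(1, 20, (n_jobs, n_machines))    # processing times

def makespan(perm):
    C = np.zeros((n_jobs, n_machines))
    for k, j in enumerate(perm):
        for m in range(n_machines):
            prev_job = C[k - 1, m] if k else 0.0
            prev_mach = C[k, m - 1] if m else 0.0
            C[k, m] = max(prev_job, prev_mach) + P[j, m]
    return C[-1, -1]

n_particles, iters = 20, 200
X = rng.random((n_particles, n_jobs))            # continuous keys; argsort -> permutation
V = np.zeros_like(X)
pbest = X.copy()
pbest_f = np.array([makespan(np.argsort(x)) for x in X])
g = pbest[pbest_f.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(X.shape), rng.random(X.shape)
    V = 0.7 * V + 1.5 * r1 * (pbest - X) + 1.5 * r2 * (g - X)
    X = X + V
    fvals = np.array([makespan(np.argsort(x)) for x in X])
    better = fvals < pbest_f
    pbest[better], pbest_f[better] = X[better], fvals[better]
    g = pbest[pbest_f.argmin()].copy()

print("best makespan:", pbest_f.min(), "order:", np.argsort(g))
```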

  8. CARE AND FEEDING OF FROGS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Margaret; Chiang, Eugene, E-mail: mpan@astro.berkeley.edu

    2012-01-15

    'Propellers' are features in Saturn's A ring associated with moonlets that open partial gaps. They exhibit non-Keplerian motion (Tiscareno et al.); the longitude residuals of the best-observed propeller, 'Bleriot', appear consistent with a sinusoid of period {approx}4 years. Pan and Chiang proposed that propeller moonlets librate in 'frog resonances' with co-orbiting ring material. By analogy with the restricted three-body problem, they treated the co-orbital material as stationary in the rotating frame and neglected non-co-orbital material. Here we use simple numerical experiments to extend the frog model, including feedback due to the gap's motion, and drag associated with the Lindblad diskmore » torques that cause Type I migration. Because the moonlet creates the gap, we expect the gap centroid to track the moonlet, but only after a time delay t{sub delay}, the time for a ring particle to travel from conjunction with the moonlet to the end of the gap. We find that frog librations can persist only if t{sub delay} exceeds the frog libration period P{sub lib}, and if damping from Lindblad torques balances driving from co-orbital torques. If t{sub delay} << Pl{sub ib}, then the libration amplitude damps to zero. In the case of Bleriot, the frog resonance model can reproduce the observed libration period P{sub lib} {approx_equal} 4 yr. However, our simple feedback prescription suggests that Bleriot's t{sub delay} {approx} 0.01P{sub lib}, which is inconsistent with the observed libration amplitude of 260 km. We urge more accurate treatments of feedback to test the assumptions of our toy models.« less

  9. Constraint Optimization Literature Review

    DTIC Science & Technology

    2015-11-01

    Subject terms: high-performance computing, mobile ad hoc network, optimization, constraint, satisfaction. Contents: constraint satisfaction problems; constraint optimization problems (COPs); and constraint optimization algorithms, including brute-force search, constraint propagation, depth-first search, and local search.

  10. Design and Optimization of the SPOT Primary Mirror Segment

    NASA Technical Reports Server (NTRS)

    Budinoff, Jason G.; Michaels, Gregory J.

    2005-01-01

    The 3 m Spherical Primary Optical Telescope (SPOT) will utilize a single ring of 0.86 m point-to-point hexagonal mirror segments. The f/2.85 spherical mirror blanks will be fabricated by the same replication process used for mass-produced commercial telescope mirrors. Diffraction-limited phasing will require segment-to-segment radius of curvature (ROC) variation of approx. 1 micron. Low-cost, replicated segment ROC variations are estimated to be almost 1 mm, necessitating a method for segment ROC adjustment and matching. A mechanical architecture has been designed that allows segment ROC to be adjusted by up to 400 microns while introducing minimal figure error, allowing segment-to-segment ROC matching. A key feature of the architecture is the unique back profile of the mirror segments, developed with shape optimization in MSC.Nastran(TM) using optical performance response equations written with SigFit. A candidate back profile was generated which minimizes ROC-adjustment-induced surface error while meeting the constraints imposed by the fabrication method. Keywords: optimization, radius of curvature, Pyrex spherical mirror, SigFit

  11. Comparison of Optimal Design Methods in Inverse Problems

    PubMed Central

    Banks, H. T.; Holm, Kathleen; Kappel, Franz

    2011-01-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric based theoretical framework that permits one to treat succinctly and rigorously any optimal design criterion based on the Fisher Information Matrix (FIM). A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model [13], the standard harmonic oscillator model [13] and a popular glucose regulation model [16, 19, 29]. PMID:21857762
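
    The FIM-based criteria being compared can be summarized as follows (notation assumed, not taken from the paper verbatim): θ ∈ R^p are the parameters and F(τ; θ) is the Fisher Information Matrix for sampling times τ = {t_1, ..., t_N}:

```latex
\begin{align*}
  \text{D-optimal:}\quad  & \min_{\tau}\ \det F^{-1}(\tau;\theta) \\
  \text{E-optimal:}\quad  & \min_{\tau}\ \lambda_{\max}\!\left(F^{-1}(\tau;\theta)\right) \\
  \text{SE-optimal:}\quad & \min_{\tau}\ \sum_{k=1}^{p} \frac{\left[F^{-1}(\tau;\theta)\right]_{kk}}{\theta_k^{2}}
\end{align*}
```

    Under asymptotic theory, the diagonal of F^{-1} approximates the parameter variances, so SE-optimality directly targets the (normalized) standard errors that the other two criteria control only indirectly.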

  12. A Jet Break in the X-ray Light Curve of Short GRB 111020A: Implications for Energetics and Rates

    NASA Technical Reports Server (NTRS)

    Fong, W.; Berger, E.; Margutti, R.; Zauderer, B. A.; Troja, E.; Czekala, I.; Chornock, R.; Gehrels, N.; Sakamoto, T.; Fox, D. B.; hide

    2012-01-01

    We present broadband observations of the afterglow and environment of the short GRB 111020A. An extensive X-ray light curve from Swift/XRT, XMM-Newton, and Chandra, spanning approx. 100 s to 10 days after the burst, reveals a significant break at dt approx. 2 days with pre- and post-break decline rates of α_X,1 approx. -0.78 and α_X,2 ≲ -1.7, respectively. Interpreted as a jet break, we infer a collimated outflow with an opening angle of θ_j approx. 3-8 deg. The resulting beaming-corrected gamma-ray (10-1000 keV band) and blast-wave kinetic energies are (2-3) x 10^48 erg and (0.3-2) x 10^49 erg, respectively, with the range depending on the unknown redshift of the burst. We report a radio afterglow limit of <39 microJy (3σ) from Expanded Very Large Array observations that, along with our finding that ν_c < ν_X, constrains the circumburst density to n_0 approx. 0.01-0.1 per cu cm. Optical observations provide an afterglow limit of i ≳ 24.4 mag at 18 hr after the burst and reveal a potential host galaxy with i approx. 24.3 mag. The subarcsecond localization from Chandra provides a precise offset of 0.80 +/- 0.11 arcsec (1σ) from this galaxy, corresponding to an offset of 5.7 kpc for z = 0.5-1.5. We find a high excess neutral hydrogen column density of (7.5 +/- 2.0) x 10^21 per sq cm (z = 0). Our observations demonstrate that a growing fraction of short gamma-ray bursts (GRBs) are collimated, which may lead to a true event rate of ≳ 100-1000 per cu Gpc per yr, in good agreement with the NS-NS merger rate of approx. 200-3000 per cu Gpc per yr. This consistency is promising for coincident short GRB-gravitational wave searches in the forthcoming era of Advanced LIGO/VIRGO.

  13. THE REMARKABLE MOLECULAR CONTENT OF THE RED SPIDER NEBULA (NGC 6537)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, J. L.; Ziurys, L. M., E-mail: lziurys@email.arizona.edu

    2013-06-10

    Millimeter and sub-millimeter molecular-line observations of the planetary nebula (PN) NGC 6537 (Red Spider) have been carried out using the Sub-Millimeter Telescope and the 12 m antenna of the Arizona Radio Observatory in the frequency range 86-692 GHz. CN, HCN, HNC, CCH, CS, SO, H2CO, HCO+ and N2H+, along with the J = 3 -> 2 and 6 -> 5 lines of CO and those of several isotopologues, were detected toward the Red Spider, estimated to be approx. 1600 yr old. This extremely high excitation PN evidently fosters a rich molecular environment. The presence of CS and SO suggests that sulfur may be sequestered in molecular form in such nebulae. A radiative transfer analysis of the CO and CS spectra indicates a kinetic temperature of T_K approx. 60-80 K and gas densities of n(H2) approx. 1-8 x 10^5 per cu cm in NGC 6537. Column densities of the molecules in the nebula and their fractional abundances relative to H2 ranged from N_tot approx. 10^16 per sq cm and f approx. 10^-4 for CO, down to approx. 7 x 10^11 per sq cm and f approx. 8 x 10^-9 for the least abundant species, N2H+. For SO and CS, N_tot approx. 2 x 10^12 per sq cm and 10^13 per sq cm, respectively, with f approx. 10^-7 and 2 x 10^-8. It was also found that HCN/HNC ≈ 2. A low 12C/13C ratio of approx. 4 was measured, indicative of hot-bottom burning. These results, coupled with past observations, suggest that molecular abundances in PNe are governed principally by the physical and chemical properties of the individual object and its progenitor star, rather than nebular age.

  14. CLASH: DISCOVERY OF A BRIGHT z ≈ 6.2 DWARF GALAXY QUADRUPLY LENSED BY MACS J0329.6-0211

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitrin, A.; Moustakas, J.; Bradley, L.

    2012-03-15

    We report the discovery of a z_phot = 6.18 (+0.05/-0.07, 95% confidence level) dwarf galaxy, lensed into four images by the galaxy cluster MACS J0329.6-0211 (z_l = 0.45). The galaxy is observed as a high-redshift dropout in HST/ACS/WFC3 CLASH and Spitzer/IRAC imaging. Its redshift is securely determined due to a clear detection of the Lyman break in the 18-band photometry, making this galaxy one of the highest-redshift multiply lensed objects known to date, with an observed magnitude of F125W = 24.00 +/- 0.04 AB mag for its most magnified image. We also present the first strong-lensing analysis of this cluster, uncovering 15 additional multiply imaged candidates of five lower-redshift sources spanning the range z_s ≈ 2-4. The mass model independently supports the high photometric redshift and reveals magnifications of 11.6 (+8.9/-4.1), 17.6 (+6.2/-3.9), 3.9 (+3.0/-1.7), and 3.7 (+1.3/-0.2), respectively, for the four images of the high-redshift galaxy. By delensing the most magnified image we construct an image of the source with a physical resolution of ≈200 pc when the universe was ≈0.9 Gyr old, where the z ≈ 6.2 galaxy occupies a source-plane area of approximately 2.2 sq kpc. Modeling the observed spectral energy distribution using population synthesis models, we find a demagnified stellar mass of ≈10^9 M_Sun, subsolar metallicity (Z/Z_Sun ≈ 0.5), low dust content (A_V ≈ 0.1 mag), a demagnified star formation rate (SFR) of ≈3.2 M_Sun/yr, and a specific SFR of ≈3.4 per Gyr, all consistent with the properties of local dwarf galaxies.

  15. Multiobjective optimization approach: thermal food processing.

    PubMed

    Abakarov, A; Sushkov, Y; Almonacid, S; Simpson, R

    2009-01-01

    The objective of this study was to utilize a multiobjective optimization technique for the thermal sterilization of packaged foods. The multiobjective optimization approach used in this study is based on the optimization of well-known aggregating functions by an adaptive random search algorithm. The applicability of the proposed approach was illustrated by solving widely used multiobjective test problems taken from the literature. The numerical results obtained for the multiobjective test problems and for the thermal processing problem show that the proposed approach can be effectively used for solving multiobjective optimization problems arising in the food engineering field.
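
    Illustrative only: the aggregating-function approach scalarizes the objectives with weights and then improves the scalar cost, here by a simple adaptive random search with a shrinking step. The two objectives are assumed surrogates, not the thermal-processing model:

```python
import numpy as np

rng = np.random.default_rng(0)

def f1(x):                         # surrogate objective, e.g. "process time"
    return np.sum((x - 1.0) ** 2)

def f2(x):                         # surrogate objective, e.g. "quality loss"
    return np.sum((x + 1.0) ** 2)

def aggregate(x, w=0.5):           # weighted-sum aggregating function
    return w * f1(x) + (1.0 - w) * f2(x)

x, step = rng.uniform(-2, 2, 3), 1.0
best = aggregate(x)
for _ in range(2000):
    cand = x + step * rng.normal(size=x.shape)
    val = aggregate(cand)
    if val < best:
        x, best = cand, val
        step *= 1.1                # expand the step on success
    else:
        step *= 0.98               # shrink on failure (adaptive step control)

print(np.round(x, 3), round(best, 4))   # one tradeoff point, for w = 0.5
```

    Sweeping the weight w over (0, 1) and re-running traces out tradeoff points; for convex problems this recovers the Pareto front.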

  16. Replica analysis for the duality of the portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimization problems, we analyze a quenched disordered system involving both of them using the approach developed in statistical mechanical informatics, and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.
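
    A hedged transcription of the primal-dual pair described, in standard mean-variance notation (assumed, not quoted from the paper): w ∈ R^N are the portfolio weights, Σ the return covariance, μ the expected returns, and 1 the all-ones vector, with the budget normalization 1ᵀw = N:

```latex
\begin{align*}
  \text{(primal)}\quad & \min_{w}\ \tfrac{1}{2}\, w^{\top}\Sigma w
      \quad \text{s.t.}\ \ \mathbf{1}^{\top}w = N,\ \ \mu^{\top}w = R \\
  \text{(dual)}\quad   & \max_{w}\ \mu^{\top}w
      \quad \text{s.t.}\ \ \mathbf{1}^{\top}w = N,\ \ \tfrac{1}{2}\, w^{\top}\Sigma w = E
\end{align*}
```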

  17. Replica analysis for the duality of the portfolio optimization problem.

    PubMed

    Shinzato, Takashi

    2016-11-01

    In the present paper, the primal-dual problem consisting of the investment risk minimization problem and the expected return maximization problem in the mean-variance model is discussed using replica analysis. As a natural extension of the investment risk minimization problem under only a budget constraint that we analyzed in a previous study, we herein consider a primal-dual problem in which the investment risk minimization problem with budget and expected return constraints is regarded as the primal problem, and the expected return maximization problem with budget and investment risk constraints is regarded as the dual problem. With respect to these optimization problems, we analyze a quenched disordered system involving both of them using the approach developed in statistical mechanical informatics, and confirm that both optimal portfolios can possess the primal-dual structure. Finally, the results of numerical simulations are shown to validate the effectiveness of the proposed method.

  18. 27 CFR 9.126 - Santa Clara Valley.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ...: (1) The beginning point is at the junction of Elephant Head Creek and Pacheco Creek (approx. .75 mile... point the boundary moves in a northerly direction up Elephant Head Creek approx. 1.2 miles until it....G.S. map; (29) Then it moves northeast along Pacheco Creek approx. .5 mile to Elephant Head Creek at...

  19. 27 CFR 9.126 - Santa Clara Valley.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ...: (1) The beginning point is at the junction of Elephant Head Creek and Pacheco Creek (approx. .75 mile... point the boundary moves in a northerly direction up Elephant Head Creek approx. 1.2 miles until it....G.S. map; (29) Then it moves northeast along Pacheco Creek approx. .5 mile to Elephant Head Creek at...

  20. 27 CFR 9.126 - Santa Clara Valley.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ...: (1) The beginning point is at the junction of Elephant Head Creek and Pacheco Creek (approx. .75 mile... point the boundary moves in a northerly direction up Elephant Head Creek approx. 1.2 miles until it....G.S. map; (29) Then it moves northeast along Pacheco Creek approx. .5 mile to Elephant Head Creek at...

  1. 27 CFR 9.126 - Santa Clara Valley.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ...: (1) The beginning point is at the junction of Elephant Head Creek and Pacheco Creek (approx. .75 mile... point the boundary moves in a northerly direction up Elephant Head Creek approx. 1.2 miles until it....G.S. map; (29) Then it moves northeast along Pacheco Creek approx. .5 mile to Elephant Head Creek at...

  2. 27 CFR 9.126 - Santa Clara Valley.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ...: (1) The beginning point is at the junction of Elephant Head Creek and Pacheco Creek (approx. .75 mile... point the boundary moves in a northerly direction up Elephant Head Creek approx. 1.2 miles until it....G.S. map; (29) Then it moves northeast along Pacheco Creek approx. .5 mile to Elephant Head Creek at...

  3. The coral reefs optimization algorithm: a novel metaheuristic for efficiently solving optimization problems.

    PubMed

    Salcedo-Sanz, S; Del Ser, J; Landa-Torres, I; Gil-López, S; Portilla-Figueras, J A

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting for space in the reef by choking out other corals. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested on several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems.
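
    A minimal sketch of the reef mechanics, simplified relative to the published algorithm (the occupation rate, reproduction rates, and omission of the depredation step are assumptions; the objective is an arbitrary benchmark):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):                                     # objective: sphere function
    return np.sum(x**2)

cells, dim, iters = 30, 5, 300
# The reef: a set of cells, each empty or holding a candidate solution.
reef = [rng.uniform(-5, 5, dim) if rng.random() < 0.6 else None
        for _ in range(cells)]

for _ in range(iters):
    occupied = [c for c in reef if c is not None]
    larvae = []
    for _ in range(len(occupied) // 2):       # broadcast spawning (crossover)
        a, b = rng.choice(len(occupied), 2, replace=False)
        mask = rng.random(dim) < 0.5
        larvae.append(np.where(mask, occupied[a], occupied[b]))
    for c in occupied:                        # brooding (mutation)
        if rng.random() < 0.2:
            larvae.append(c + rng.normal(0, 0.3, dim))
    for larva in larvae:                      # settlement: fight for a cell
        for _try in range(3):
            k = rng.integers(cells)
            if reef[k] is None or f(larva) < f(reef[k]):
                reef[k] = larva               # choke out the weaker occupant
                break

best = min((c for c in reef if c is not None), key=f)
print("best objective:", f(best))
```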

  4. The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems

    PubMed Central

    Salcedo-Sanz, S.; Del Ser, J.; Landa-Torres, I.; Gil-López, S.; Portilla-Figueras, J. A.

    2014-01-01

    This paper presents a novel bioinspired algorithm to tackle complex optimization problems: the coral reefs optimization (CRO) algorithm. The CRO algorithm artificially simulates a coral reef, where different corals (namely, solutions to the optimization problem considered) grow and reproduce in coral colonies, fighting for space in the reef by choking out other corals. This fight for space, along with the specific characteristics of the corals' reproduction, produces a robust metaheuristic algorithm shown to be powerful for solving hard optimization problems. In this research the CRO algorithm is tested on several continuous and discrete benchmark problems, as well as in practical application scenarios (i.e., optimum mobile network deployment and off-shore wind farm design). The obtained results confirm the excellent performance of the proposed algorithm and open a line of research for further application of the algorithm to real-world problems. PMID:25147860

  5. Temperature Responses to Spectral Solar Variability on Decadal Time Scales

    NASA Technical Reports Server (NTRS)

    Cahalan, Robert F.; Wen, Guoyong; Harder, Jerald W.; Pilewskie, Peter

    2010-01-01

    Two scenarios of spectral solar forcing, namely Spectral Irradiance Monitor (SIM)-based out-of-phase variations and conventional in-phase variations, are input to a time-dependent radiative-convective model (RCM), and to the GISS modelE. Both scenarios and models give maximum temperature responses in the upper stratosphere, decreasing to the surface. Upper stratospheric peak-to-peak responses to out-of-phase forcing are approx. 0.6 K and approx. 0.9 K in RCM and modelE, approx. 5 times larger than responses to in-phase forcing. Stratospheric responses are in phase with TSI and UV variations, and resemble HALOE observed 11-year temperature variations. For in-phase forcing, the ocean mixed layer response lags the surface air response by approx. 2 years, and is approx. 0.06 K compared to approx. 0.14 K for the atmosphere. For out-of-phase forcing, lags are similar, but surface responses are significantly smaller. For both scenarios, modelE surface responses are less than 0.1 K in the tropics, and display similar patterns over oceanic regions, but complex responses over land.

  6. Quantum Heterogeneous Computing for Satellite Positioning Optimization

    NASA Astrophysics Data System (ADS)

    Bass, G.; Kumar, V.; Dulny, J., III

    2016-12-01

    Hard optimization problems occur in many fields of academic study and practical situations. We present results in which quantum heterogeneous computing is used to solve a real-world optimization problem: satellite positioning. Optimization problems like this can scale very rapidly with problem size, and become unsolvable with traditional brute-force methods. Typically, such problems have been approximately solved with heuristic approaches; however, these methods can take a long time to calculate and are not guaranteed to find optimal solutions. Quantum computing offers the possibility of producing significant speed-up and improved solution quality. There are now commercially available quantum annealing (QA) devices that are designed to solve difficult optimization problems. These devices have 1000+ quantum bits, but they have significant hardware size and connectivity limitations. We present a novel heterogeneous computing stack that combines QA and classical machine learning and allows the use of QA on problems larger than the quantum hardware could solve in isolation. We begin by analyzing the satellite positioning problem with a heuristic solver, the genetic algorithm. The classical computer's comparatively large available memory can explore the full problem space and converge to a solution relatively close to the true optimum. The QA device can then evolve directly to the optimal solution within this more limited space. Preliminary experiments, using the Quantum Monte Carlo (QMC) algorithm to simulate QA hardware, have produced promising results. Working with problem instances with known global minima, we find a solution within 8% in a matter of seconds, and within 5% in a few minutes. Future studies include replacing QMC with commercially available quantum hardware and exploring more problem sets and model parameters. Our results have important implications for how heterogeneous quantum computing can be used to solve difficult optimization problems in any field.
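
    A toy sketch of the two-stage heterogeneous idea: a classical heuristic (here a tiny genetic algorithm) narrows the search space, then a second stage refines within that region (here simulated annealing standing in for the quantum annealer / QMC stage). The objective and all parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    return np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10)   # Rastrigin test function

# Stage 1: genetic algorithm explores the full box [-5.12, 5.12]^4.
pop = rng.uniform(-5.12, 5.12, (40, 4))
for _ in range(150):
    fit = np.array([f(x) for x in pop])
    parents = pop[np.argsort(fit)[:20]]                      # truncation selection
    kids = []
    for _ in range(20):
        a, b = parents[rng.integers(20)], parents[rng.integers(20)]
        child = np.where(rng.random(4) < 0.5, a, b)          # uniform crossover
        kids.append(child + rng.normal(0, 0.1, 4))           # mutation
    pop = np.vstack([parents, kids])

x = pop[np.argmin([f(x) for x in pop])]

# Stage 2: refine near the stage-1 solution (the restricted search space).
T = 1.0
for _ in range(2000):
    cand = x + rng.normal(0, 0.05, 4)
    if f(cand) < f(x) or rng.random() < np.exp((f(x) - f(cand)) / T):
        x = cand
    T *= 0.998                                               # cooling schedule

print("refined objective:", round(f(x), 4))
```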

  7. Module for Oxygenating Water without Generating Bubbles

    NASA Technical Reports Server (NTRS)

    Gonzalez-Martin, Anuncia; Sidik, Reyimjan; Kim, Jinseong

    2004-01-01

    A module that dissolves oxygen in water at concentrations approaching saturation, without generating bubbles of oxygen gas, has been developed as a prototype of improved oxygenators for water-disinfection and water-purification systems that utilize photocatalyzed redox reactions. Depending on the specific nature of a water-treatment system, it is desirable to prevent the formation of bubbles for one or more reasons: (1) bubbles can remove some organic contaminants from the liquid phase to the gas phase, thereby introducing a gas-treatment problem that complicates the overall water-treatment problem; and/or (2) in some systems (e.g., those that must function in microgravity or in any orientation in normal Earth gravity), bubbles can interfere with the flow of the liquid phase. The present oxygenation module (see Figure 1) is a modified version of a commercial module that contains >100 hollow polypropylene fibers with a nominal pore size of 0.05 micron and a total surface area of 0.5 sq m. The module was originally designed for oxygenation in a bioreactor, with no water flowing around or inside the tubes. The modification, made to enable the use of the module to oxygenate flowing water, consisted mainly in the encapsulation of the fibers in a tube of Tygon polyvinyl chloride (PVC) with an inside diameter of 1 in. (approx. 25 mm). In operation, water is pumped along the insides of the hollow fibers and oxygen gas is supplied to the space outside the hollow fibers inside the PVC tube. In tests, the pressure drops of water and oxygen in the module were found to be close to zero at water-flow rates ranging up to 320 mL/min and oxygen-flow rates up to 27 mL/min. Under all test conditions, no bubbles were observed at the water outlet. In some tests, flow rates were chosen to obtain dissolved-oxygen concentrations between 25 and 31 parts per million (ppm), approaching the saturation level of approx. 35 ppm at a temperature of 20 C and pressure of 1 atm (approx. 0.1 MPa). As one would expect, it was observed that the time needed to bring a flow of water from an initial low dissolved-oxygen concentration (e.g., 5 ppm) to a steady high dissolved-oxygen concentration at or near the saturation level depends on the rates of flow of both oxygen and water, among other things. Figure 2 shows the results of an experiment in which a greater flow of oxygen was used during the first few tens of minutes to bring the concentration up to approx. 25 ppm, after which a lesser flow was used to maintain the concentration.

  8. Interaction of Ethyl Alcohol Vapor with Sulfuric Acid Solutions

    NASA Technical Reports Server (NTRS)

    Leu, Ming-Taun

    2006-01-01

    We investigated the uptake of ethyl alcohol (ethanol) vapor by sulfuric acid solutions over the range approx. 40 to approx. 80 wt % H2SO4 and temperatures of 193-273 K. Laboratory studies used a fast flow-tube reactor coupled to an electron-impact ionization mass spectrometer for detection of ethanol and reaction products. The uptake coefficients (γ) were measured and found to vary from 0.019 to 0.072, depending upon the acid composition and temperature. At concentrations greater than approx. 70 wt % and in dilute solutions colder than 220 K, the values approached approx. 0.07. We also determined the effective solubility constant of ethanol in approx. 40 wt % H2SO4 in the temperature range 203-223 K. The potential implications for the budget of ethanol in the global troposphere are briefly discussed.

  9. Wireless Sensor Network Optimization: Multi-Objective Paradigm

    PubMed Central

    Iqbal, Muhammad; Naeem, Muhammad; Anpalagan, Alagan; Ahmed, Ashfaq; Azam, Muhammad

    2015-01-01

    Optimization problems relating to wireless sensor network planning, design, deployment, and operation often give rise to multi-objective optimization formulations where multiple desirable objectives compete with each other and the decision maker has to select one of the tradeoff solutions. These multiple objectives may or may not conflict with each other. Depending on the nature of the application, the sensing scenario, and the input/output of the problem, the type of optimization problem changes. To address the different kinds of optimization problems relating to wireless sensor network design, deployment, operation, planning, and placement, there exists a plethora of optimization solution types. We review and analyze different desirable objectives to show whether they conflict with each other, support each other, or are design dependent. We also present a generic multi-objective optimization problem relating to wireless sensor networks, consisting of input variables, required output, objectives, and constraints (a generic formulation is sketched below). A list of constraints is also presented to give an overview of the different constraints considered while formulating optimization problems in wireless sensor networks. Given the multifaceted coverage of this article, it should open up new avenues of research in multi-objective optimization for wireless sensor networks. PMID:26205271
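
    The generic form such a problem takes can be written as follows (notation ours, inferred from the description):

```latex
\begin{align*}
  \min_{x \in \mathcal{X}}\ & \big(f_1(x),\, \dots,\, f_k(x)\big) \\
  \text{s.t.}\ & g_j(x) \le 0, \quad j = 1,\dots,m, \\
               & h_l(x) = 0, \quad l = 1,\dots,q,
\end{align*}
```

    where in general no single x minimizes all objectives simultaneously, so the solution concept is the set of Pareto-optimal (nondominated) tradeoffs.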

  10. Cost component analysis.

    PubMed

    Lörincz, András; Póczos, Barnabás

    2003-06-01

    In optimization, the dimension of the problem may severely, sometimes exponentially, increase optimization time. Parametric function approximators (FAPPs) have been suggested to overcome this problem. Here, a novel FAPP, cost component analysis (CCA), is described. In CCA, the search space is resampled according to the Boltzmann distribution generated by the energy landscape; that is, CCA converts the optimization problem to density estimation. The structure of the induced density is searched by independent component analysis (ICA). The advantage of CCA is that each independent ICA component can be optimized separately. In turn, (i) CCA intends to partition the original problem into subproblems, (ii) separating (partitioning) the original optimization problem into subproblems may aid interpretation, and, most importantly, (iii) CCA may give rise to large gains in optimization time. Numerical simulations illustrate the working of the algorithm.
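
    A very loose toy re-implementation of the idea from the abstract (not the authors' code): draw Boltzmann-weighted samples from the energy landscape, let ICA unmix the induced density, then optimize along each independent direction separately. The energy function, temperature, and grid search are all assumptions:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)

def E(x):                     # toy energy landscape, separable after mixing
    A = np.array([[1.0, 0.6], [-0.4, 1.0]])
    y = x @ A.T
    return y[..., 0] ** 2 + 0.5 * np.abs(y[..., 1])

# 1) Resample the search space according to exp(-E/T) (Boltzmann weights).
X = rng.uniform(-3, 3, (20000, 2))
w = np.exp(-E(X) / 0.5)
idx = rng.choice(len(X), size=4000, p=w / w.sum())
samples = X[idx]

# 2) Search the structure of the induced density with ICA.
ica = FastICA(n_components=2, random_state=0)
S = ica.fit_transform(samples)

# 3) Optimize each independent component separately (1-D grid search).
s_best = np.zeros(2)
for k in range(2):
    grid = np.linspace(S[:, k].min(), S[:, k].max(), 400)
    trials = np.tile(s_best, (len(grid), 1))
    trials[:, k] = grid
    vals = E(ica.inverse_transform(trials))
    s_best[k] = grid[np.argmin(vals)]

x_best = ica.inverse_transform(s_best[None, :])[0]
print("E(x*) =", float(E(x_best)))
```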

  11. The design of multirate digital control systems

    NASA Technical Reports Server (NTRS)

    Berg, M. C.

    1986-01-01

    The successive loop closures synthesis method is the only method for multirate (MR) synthesis in common use. A new method for MR synthesis is introduced which requires a gradient-search solution to a constrained optimization problem. Some advantages of this method are that the control laws for all control loops are synthesized simultaneously, taking full advantage of all cross-coupling effects, and that simple, low-order compensator structures are easily accommodated. The algorithm and associated computer program for solving the constrained optimization problem are described. The successive loop closures, optimal control, and constrained optimization synthesis methods are applied to two example design problems, and a series of compensator pairs is synthesized for each. The three synthesis methods are then compared in the context of the two design problems.

  12. Using Data Assimilation Diagnostics to Assess the SMAP Level-4 Soil Moisture Product

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf; Liu, Qing; De Lannoy, Gabrielle; Crow, Wade; Kimball, John; Koster, Randy; Ardizzone, Joe

    2018-01-01

    The Soil Moisture Active Passive (SMAP) mission Level-4 Soil Moisture (L4_SM) product provides 3-hourly, 9-km resolution, global estimates of surface (0-5 cm) and root-zone (0-100 cm) soil moisture and related land surface variables from 31 March 2015 to present with approx. 2.5-day latency. The ensemble-based L4_SM algorithm assimilates SMAP brightness temperature (Tb) observations into the Catchment land surface model. This study describes the spatially distributed L4_SM analysis and assesses the observation-minus-forecast (O-F) Tb residuals and the soil moisture and temperature analysis increments. Owing to the climatological rescaling of the Tb observations prior to assimilation, the analysis is essentially unbiased, with global mean values of approx. 0.37 K for the O-F Tb residuals and practically zero for the soil moisture and temperature increments. There are, however, modest regional (absolute) biases in the O-F residuals (under approx. 3 K), the soil moisture increments (under approx. 0.01 cu m/cu m), and the surface soil temperature increments (under approx. 1 K). Typical instantaneous values are approx. 6 K for O-F residuals, approx. 0.01 (approx. 0.003) cu m/cu m for surface (root-zone) soil moisture increments, and approx. 0.6 K for surface soil temperature increments. The O-F diagnostics indicate that the actual errors in the system are overestimated in deserts and densely vegetated regions and underestimated in agricultural regions and transition zones between dry and wet climates. The O-F auto-correlations suggest that the SMAP observations are used efficiently in western North America, the Sahel, and Australia, but not in many forested regions and the high northern latitudes. A case study in Australia demonstrates that assimilating SMAP observations successfully corrects short-term errors in the L4_SM rainfall forcing.

  13. Global Assessment of the SMAP Level-4 Soil Moisture Product Using Assimilation Diagnostics

    NASA Technical Reports Server (NTRS)

    Reichle, Rolf; Liu, Qing; De Lannoy, Gabrielle; Crow, Wade; Kimball, John; Koster, Randy; Ardizzone, Joe

    2018-01-01

    The Soil Moisture Active Passive (SMAP) mission Level-4 Soil Moisture (L4_SM) product provides 3-hourly, 9-km resolution, global estimates of surface (0-5 cm) and root-zone (0-100 cm) soil moisture and related land surface variables from 31 March 2015 to present with approx. 2.5-day latency. The ensemble-based L4_SM algorithm assimilates SMAP brightness temperature (Tb) observations into the Catchment land surface model. This study describes the spatially distributed L4_SM analysis and assesses the observation-minus-forecast (O-F) Tb residuals and the soil moisture and temperature analysis increments. Owing to the climatological rescaling of the Tb observations prior to assimilation, the analysis is essentially unbiased, with global mean values of approx. 0.37 K for the O-F Tb residuals and practically zero for the soil moisture and temperature increments. There are, however, modest regional (absolute) biases in the O-F residuals (under approx. 3 K), the soil moisture increments (under approx. 0.01 cu m/cu m), and the surface soil temperature increments (under approx. 1 K). Typical instantaneous values are approx. 6 K for O-F residuals, approx. 0.01 (approx. 0.003) cu m/cu m for surface (root-zone) soil moisture increments, and approx. 0.6 K for surface soil temperature increments. The O-F diagnostics indicate that the actual errors in the system are overestimated in deserts and densely vegetated regions and underestimated in agricultural regions and transition zones between dry and wet climates. The O-F auto-correlations suggest that the SMAP observations are used efficiently in western North America, the Sahel, and Australia, but not in many forested regions and the high northern latitudes. A case study in Australia demonstrates that assimilating SMAP observations successfully corrects short-term errors in the L4_SM rainfall forcing.

  14. Lensing corrections to features in the angular two-point correlation function and power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LoVerde, Marilena; Department of Physics, Columbia University, New York, New York 10027; Hui, Lam

    2008-01-15

    It is well known that magnification bias, the modulation of galaxy or quasar source counts by gravitational lensing, can change the observed angular correlation function. We investigate magnification-induced changes to the shape of the observed correlation function w(θ) and the angular power spectrum C_l, paying special attention to the matter-radiation equality peak and the baryon wiggles. Lensing effectively mixes the correlation function of the source galaxies with the matter correlation at the lower redshifts of the lenses, distorting the observed correlation function. We quantify how the lensing corrections depend on the width of the selection function, the galaxy bias b, and the number count slope s. The lensing correction increases with redshift, and larger corrections are present for sources with steep number count slopes and/or broad redshift distributions. The most drastic changes to C_l occur for measurements at high redshifts (z ≳ 1.5) and low multipole moment (l ≲ 100). For the source distributions we consider, magnification bias can shift the location of the matter-radiation equality scale by 1%-6% at z ~ 1.5, and by z ~ 3.5 the shift can be as large as 30%. The baryon bump in θ²w(θ) is shifted by ≲ 1% and the width is typically increased by ~10%. Shifts of ≳ 0.5% and broadening ≳ 20% occur only for very broad selection functions and/or galaxies with (5s-2)/b ≳ 2. However, near the baryon bump the magnification correction is not constant but is a gently varying function which depends on the source population. Depending on how the w(θ) data are fitted, this correction may need to be accounted for when using the baryon acoustic scale for precision cosmology.

  15. High internal inductance for steady-state operation in ITER and a reactor

    DOE PAGES

    Ferron, John R.; Holcomb, Christopher T.; Luce, Timothy C.; ...

    2015-06-26

    Increased confinement and ideal stability limits at relatively high values of the internal inductance (ℓ_i) have enabled an attractive scenario for steady-state tokamak operation to be demonstrated in DIII-D. Normalized plasma pressure in the range appropriate for a reactor has been achieved in high elongation and triangularity double-null divertor discharges with β_N ≈ 5 at ℓ_i ≈ 1.3, near the ideal n = 1 kink stability limit calculated without the effect of a stabilizing vacuum vessel wall, with the ideal-wall limit still higher at β_N > 5.5. Confinement is above the H-mode level with H_98(y,2) ≈ 1.8. At q_95 ≈ 7.5, the current is overdriven, with bootstrap current fraction f_BS ≈ 0.8, noninductive current fraction f_NI > 1, and negative surface voltage. For ITER (which has a single-null divertor shape), operation at ℓ_i ≈ 1 is a promising option with f_BS ≈ 0.5 and the remaining current driven externally near the axis, where the electron cyclotron current drive efficiency is high. This scenario has been tested in the ITER shape in DIII-D at q_95 = 4.8, so far reaching f_NI = 0.7 and f_BS = 0.4 at β_N ≈ 3.5, with performance appropriate for the ITER Q = 5 mission, H_89 β_N / q_95^2 ≈ 0.3. Modeling studies explored how increased current drive power for DIII-D could be applied to maintain a stationary, fully noninductive high-ℓ_i discharge. Lastly, stable solutions in the double-null shape are found without the vacuum vessel wall at β_N = 4, ℓ_i = 1.07, and f_BS = 0.5, and at β_N = 5 with the vacuum vessel wall.

  16. Stellar Populations of Lyman Break Galaxies at z approx. to 1-3 in the HST/WFC3 Early Release Science Observations

    NASA Technical Reports Server (NTRS)

    Hathi, N. P.; Cohen, S. H.; Ryan, R. E., Jr.; Finkelstein, S. L.; McCarthy, P. J.; Windhorst, R. A.; Yan, H.; Koekemoer, A. M.; Rutkowski, M. J.; OConnell, R. W.; hide

    2012-01-01

    We analyze the spectral energy distributions (SEDs) of Lyman break galaxies (LBGs) at z ~ 1-3 selected using the Hubble Space Telescope (HST) Wide Field Camera 3 (WFC3) UVIS channel filters. These HST/WFC3 observations cover about 50 sq arcmin in the GOODS-South field as part of the WFC3 Early Release Science program. These LBGs at z ~ 1-3 are selected using dropout selection criteria similar to those used for high-redshift LBGs. The deep multi-band photometry in this field is used to identify best-fit SED models, from which we infer the following results: (1) the photometric redshift estimates of these dropout-selected LBGs are accurate to within a few percent; (2) the UV spectral slope β is redder than at high redshift (z > 3), where LBGs are less dusty; (3) on average, LBGs at z ~ 1-3 are massive, dustier, and more highly star-forming than LBGs at higher redshifts with similar luminosities, though their median values are similar within 1σ uncertainties, which could imply that an identical dropout selection technique finds physically similar galaxies at all redshifts; and (4) the stellar masses of these LBGs are directly proportional to their UV luminosities with a logarithmic slope of ~0.46, and star formation rates are proportional to their stellar masses with a logarithmic slope of ~0.90. These relations hold true, within the luminosities probed in this study, for LBGs from z ~ 1.5 to 5. Star-forming galaxies selected using other color-based techniques show similar correlations at z ~ 2, but to avoid any selection biases, and for direct comparison with LBGs at z > 3, a true Lyman break selection at z ~ 2 is essential. Future HST UV surveys, both wider and deeper, covering a large luminosity range are important to better understand LBG properties and their evolution.

  17. Spectral and Timing Nature of the Symbiotic X-Ray Binary 4U 1954+319: The Slowest Rotating Neutron Star in AN X-Ray Binary System

    NASA Technical Reports Server (NTRS)

    Enoto, Teruaki; Sasano, Makoto; Yamada, Shin'Ya; Tamagawa, Toru; Makishima, Kazuo; Pottschmidt, Katja; Marcu, Diana; Corbet, Robin H. D.; Fuerst, Felix; Wilms, Jorn

    2014-01-01

    The symbiotic X-ray binary (SyXB) 4U 1954+319 is a rare system hosting a peculiar neutron star (NS) and an M-type optical companion. Its approx. 5.4 hr NS spin period is the longest among all known accretion-powered pulsars and exhibited large (approx. 7%) fluctuations over 8 yr. A spin trend transition was detected with Swift/BAT around an X-ray brightening in 2012. The source was in quiescent and bright states before and after this outburst, based on 60 ks Suzaku observations in 2011 and 2012. The observed continuum is well described by a Comptonized model, with the addition of a narrow 6.4 keV Fe-K alpha line during the outburst. Spectral similarities to slowly rotating pulsars in high-mass X-ray binaries, its high pulsed fraction (approx. 60%-80%), and its location in the Corbet diagram favor a high B-field (≳10^12 G) over a weak field as in low-mass X-ray binaries. The observed low X-ray luminosity (10^33-10^35 erg/s), probable wide orbit, and slow stellar wind of this SyXB make quasi-spherical accretion in the subsonic settling regime a plausible model. Assuming an approx. 10^13 G NS, this scheme can explain the approx. 5.4 hr equilibrium rotation without employing the magnetar-like field (approx. 10^16 G) required in the disk accretion case. The timescales of the multiple irregular flares (approx. 50 s) can also be attributed to the free-fall time from the Alfvén shell for an approx. 10^13 G field. A physical interpretation of SyXBs beyond the canonical binary classifications is discussed.

  18. The optimal location of piezoelectric actuators and sensors for vibration control of plates

    NASA Astrophysics Data System (ADS)

    Kumar, K. Ramesh; Narayanan, S.

    2007-12-01

    This paper considers the optimal placement of collocated piezoelectric actuator-sensor pairs on a thin plate using a model-based linear quadratic regulator (LQR) controller. LQR performance is taken as the objective for finding the optimal locations of the sensor-actuator pairs. The problem is formulated using the finite element method (FEM) as multi-input multi-output (MIMO) model-based control. The discrete optimal sensor and actuator location problem is formulated as a zero-one optimization problem, which is solved using a genetic algorithm (GA). Different classical control strategies, including direct proportional feedback, constant-gain negative velocity feedback, and the LQR optimal control scheme, are applied to study the control effectiveness.
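
    A toy sketch of the zero-one placement idea: a GA selects which candidate actuator sites to use, scored by the LQR cost trace(P) (the expected quadratic cost over random initial states). The plate FEM model is replaced by a small mass-spring chain for brevity; all dimensions and rates are assumptions:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

rng = np.random.default_rng(0)
n = 6                                             # masses in the toy chain
K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
A = np.block([[np.zeros((n, n)), np.eye(n)], [-K, -0.01 * np.eye(n)]])
B_all = np.vstack([np.zeros((n, n)), np.eye(n)])  # one candidate force per mass
Q, n_act = np.eye(2 * n), 2                       # place 2 actuators

def lqr_cost(mask):
    B = B_all[:, mask.astype(bool)]
    P = solve_continuous_are(A, B, Q, np.eye(int(mask.sum())))
    return np.trace(P)                            # smaller is better

def random_mask():
    m = np.zeros(n); m[rng.choice(n, n_act, replace=False)] = 1
    return m

pop = [random_mask() for _ in range(20)]
for _ in range(30):
    pop.sort(key=lqr_cost)
    elite = pop[:10]
    children = []
    for _ in range(10):                           # mutation: relocate one actuator
        child = elite[rng.integers(10)].copy()
        on, off = np.flatnonzero(child), np.flatnonzero(child == 0)
        child[rng.choice(on)], child[rng.choice(off)] = 0, 1
        children.append(child)
    pop = elite + children

best = min(pop, key=lqr_cost)
print("best sites:", np.flatnonzero(best), "cost:", round(lqr_cost(best), 2))
```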

  19. IR luminescence of tellurium-doped silica-based optical fibre

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dianov, Evgenii M; Alyshev, S V; Shubin, Aleksei V

    2012-03-31

    Tellurium-doped germanosilicate fibre has been fabricated by the MCVD process. In contrast to Te-containing glasses studied earlier, it has a broad luminescence band (full width at half maximum of approx. 350 nm) centred at 1500 nm, with a lifetime of approx. 2 microseconds. The luminescence of the fibre has been studied before and after gamma irradiation in a 60Co source to 309 and 992 kGy. The irradiation produced a luminescence band around 1100 nm, with a full width at half maximum of approx. 400 nm and a lifetime of approx. 5 microseconds.

  20. Exploring the quantum speed limit with computer games

    NASA Astrophysics Data System (ADS)

    Sørensen, Jens Jakob W. H.; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F.

    2016-04-01

    Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. ‘Gamification’—the application of game elements in a non-game context—is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.

  1. Exploring the quantum speed limit with computer games.

    PubMed

    Sørensen, Jens Jakob W H; Pedersen, Mads Kock; Munch, Michael; Haikka, Pinja; Jensen, Jesper Halkjær; Planke, Tilo; Andreasen, Morten Ginnerup; Gajdacz, Miroslav; Mølmer, Klaus; Lieberoth, Andreas; Sherson, Jacob F

    2016-04-14

    Humans routinely solve problems of immense computational complexity by intuitively forming simple, low-dimensional heuristic strategies. Citizen science (or crowd sourcing) is a way of exploiting this ability by presenting scientific research problems to non-experts. 'Gamification'--the application of game elements in a non-game context--is an effective tool with which to enable citizen scientists to provide solutions to research problems. The citizen science games Foldit, EteRNA and EyeWire have been used successfully to study protein and RNA folding and neuron mapping, but so far gamification has not been applied to problems in quantum physics. Here we report on Quantum Moves, an online platform gamifying optimization problems in quantum physics. We show that human players are able to find solutions to difficult problems associated with the task of quantum computing. Players succeed where purely numerical optimization fails, and analyses of their solutions provide insights into the problem of optimization of a more profound and general nature. Using player strategies, we have thus developed a few-parameter heuristic optimization method that efficiently outperforms the most prominent established numerical methods. The numerical complexity associated with time-optimal solutions increases for shorter process durations. To understand this better, we produced a low-dimensional rendering of the optimization landscape. This rendering reveals why traditional optimization methods fail near the quantum speed limit (that is, the shortest process duration with perfect fidelity). Combined analyses of optimization landscapes and heuristic solution strategies may benefit wider classes of optimization problems in quantum physics and beyond.

  2. Numerical optimization methods for controlled systems with parameters

    NASA Astrophysics Data System (ADS)

    Tyatyushkin, A. I.

    2017-10-01

    First- and second-order numerical methods for optimizing controlled dynamical systems with parameters are discussed. In unconstrained-parameter problems, the control parameters are optimized by applying the conjugate gradient method. A more accurate numerical solution in these problems is produced by Newton's method based on a second-order functional increment formula. Next, a general optimal control problem with state constraints and parameters involved on the righthand sides of the controlled system and in the initial conditions is considered. This complicated problem is reduced to a mathematical programming one, followed by the search for optimal parameter values and control functions by applying a multimethod algorithm. The performance of the proposed technique is demonstrated by solving application problems.
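
    As a minimal illustration of the first-order approach described above, the sketch below tunes the parameters of a toy controlled system with the conjugate gradient method. The system, cost functional, and regularization weight are our own placeholders, not the paper's formulation.

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize

      def cost(p):
          # Toy controlled system x' = -p0*x + p1 with x(0) = 1; penalize the
          # distance of the terminal state x(1) from a target value of 0.5.
          sol = solve_ivp(lambda t, x: -p[0] * x + p[1], (0.0, 1.0), [1.0])
          return (sol.y[0, -1] - 0.5) ** 2 + 1e-3 * np.sum(p ** 2)

      # Conjugate gradient (first-order method); gradients by finite differences
      res = minimize(cost, x0=np.array([1.0, 0.0]), method="CG")
      print(res.x, res.fun)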

  3. Data Understanding Applied to Optimization

    NASA Technical Reports Server (NTRS)

    Buntine, Wray; Shilman, Michael

    1998-01-01

    The goal of this research is to explore and develop software for supporting visualization and data analysis of search and optimization. Optimization is an ever-present problem in science. The theory of NP-completeness implies that the problems can only be resolved by increasingly smarter problem specific knowledge, possibly for use in some general purpose algorithms. Visualization and data analysis offers an opportunity to accelerate our understanding of key computational bottlenecks in optimization and to automatically tune aspects of the computation for specific problems. We will prototype systems to demonstrate how data understanding can be successfully applied to problems characteristic of NASA's key science optimization tasks, such as central tasks for parallel processing, spacecraft scheduling, and data transmission from a remote satellite.

  4. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
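
    A compact sketch of the two ingredients named above, applied to a toy bi-objective problem of our own choosing: DE/rand/1/bin variation followed by a Pareto-dominance replacement rule. Any additional bookkeeping in the paper's algorithm (archiving, diversity maintenance) is omitted here.

      import numpy as np

      rng = np.random.default_rng(0)
      F, CR, NP, D = 0.5, 0.9, 40, 10

      def objectives(x):                      # simple bi-objective test functions
          return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

      def dominates(a, b):                    # Pareto dominance: no worse everywhere, better somewhere
          return np.all(a <= b) and np.any(a < b)

      pop = rng.uniform(-2, 2, (NP, D))
      for gen in range(200):
          for i in range(NP):
              r1, r2, r3 = rng.choice([j for j in range(NP) if j != i], 3, replace=False)
              v = pop[r1] + F * (pop[r2] - pop[r3])          # DE/rand/1 mutation
              mask = rng.random(D) < CR
              mask[rng.integers(D)] = True                   # binomial crossover, at least one gene
              trial = np.where(mask, v, pop[i])
              if dominates(objectives(trial), objectives(pop[i])):
                  pop[i] = trial                             # replace only if the trial dominates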

  5. On a distinctive feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets

    NASA Astrophysics Data System (ADS)

    Trifonenkov, A. V.; Trifonenkov, V. P.

    2017-01-01

    This article deals with a distinctive feature of problems of calculating time-average characteristics of nuclear reactor optimal control sets. The operation of a nuclear reactor during a threatened period is considered, and the optimal control search problem is analysed. Xenon poisoning limits the variety of statements of the problem of calculating time-average characteristics of a set of optimal reactor power-off controls, since the level of xenon poisoning is bounded. This raises the problem of choosing an appropriate segment of the time axis to ensure that the optimal control problem is consistent. Two procedures for estimating the duration of this segment are considered, and both estimates are plotted as functions of the xenon limitation. Boundaries of the interval of averaging are thereby defined more precisely.

  6. Preliminary structural design of a lunar transfer vehicle aerobrake. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Bush, Lance B.

    1992-01-01

    An aerobrake concept for a Lunar transfer vehicle was weight optimized through the use of the Taguchi design method, structural finite element analyses and structural sizing routines. Six design parameters were chosen to represent the aerobrake structural configuration. The design parameters included honeycomb core thickness, diameter to depth ratio, shape, material, number of concentric ring frames, and number of radial frames. Each parameter was assigned three levels. The minimum weight aerobrake configuration resulting from the study was approx. half the weight of the average of all twenty seven experimental configurations. The parameters having the most significant impact on the aerobrake structural weight were identified.

  7. Sodium-metal chloride batteries

    NASA Technical Reports Server (NTRS)

    Ratnakumar, B. V.; Attia, A. I.; Halpert, G.

    1992-01-01

    It was concluded that rapid development in the technology of sodium metal chloride batteries has been achieved in the last decade, mainly due to the expertise available with the sodium-sulfur system, safety, and flexibility in design and fabrication. Long cycle lives of over 1000 cycles and high energy densities of approx. 100 Wh/kg have been demonstrated in both Na/FeCl2 and Na/NiCl2 cells. Optimization of the porous cathode and solid electrolyte geometries is essential for further enhancing battery performance. Fundamental studies confirm the capabilities of these systems. Nickel dichloride emerges as the candidate cathode material for high power density applications such as electric vehicles and space.

  8. DISCOVERY OF A HIGHLY UNEQUAL-MASS BINARY T DWARF WITH KECK LASER GUIDE STAR ADAPTIVE OPTICS: A COEVALITY TEST OF SUBSTELLAR THEORETICAL MODELS AND EFFECTIVE TEMPERATURES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Michael C.; Dupuy, Trent J.; Leggett, S. K., E-mail: mliu@ifa.hawaii.ed

    Highly unequal-mass ratio binaries are rare among field brown dwarfs, with the mass ratio distribution of the known census described by q(exp 4.9 +/- 0.7). However, such systems enable a unique test of the joint accuracy of evolutionary and atmospheric models, under the constraint of coevality for the individual components (the 'isochrone test'). We carry out this test using two of the most extreme field substellar binaries currently known, the T1 + T6 epsilon Ind Bab binary and a newly discovered 0.14 arcsec T2.0 + T7.5 binary, 2MASS J12095613-1004008AB, identified with Keck laser guide star adaptive optics. The latter is the most extreme tight binary resolved to date (q approx. 0.5). Based on the locations of the binary components on the Hertzsprung-Russell (H-R) diagram, current models successfully indicate that these two systems are coeval, with internal age differences of log(age) = -0.8 +/- 1.3 (-1.0 +1.2/-1.3) dex and 0.5 +0.4/-0.3 (0.3 +0.3/-0.4) dex for 2MASS J1209-1004AB and epsilon Ind Bab, respectively, as inferred from the Lyon (Tucson) models. However, the total mass of epsilon Ind Bab derived from the H-R diagram (approx. 80 M(sub Jup) using the Lyon models) is strongly discrepant with the reported dynamical mass. This problem, which is independent of the assumed age of the epsilon Ind Bab system, can be explained by an approx. 50-100 K systematic error in the model atmosphere fitting, indicating slightly warmer temperatures for both components; bringing the mass determinations from the H-R diagram and the visual orbit into consistency leads to an inferred age of approx. 6 Gyr for epsilon Ind Bab, older than previously assumed. Overall, the two T dwarf binaries studied here, along with recent results from T dwarfs in age and mass benchmark systems, yield evidence for small (approx. 100 K) errors in the evolutionary models and/or model atmospheres, but not significantly larger. Future parallax, resolved spectroscopy, and dynamical mass measurements for 2MASS J1209-1004AB will enable a more stringent application of the isochrone test. Finally, the binary nature of this object reduces its utility as the primary T3 near-IR spectral typing standard; we suggest SDSS J1206+2813 as a replacement.

  9. Robust optimization modelling with applications to industry and environmental problems

    NASA Astrophysics Data System (ADS)

    Chaerani, Diah; Dewanto, Stanley P.; Lesmana, Eman

    2017-10-01

    Robust Optimization (RO) modeling is one of the existing methodologies for handling data uncertainty in optimization problems. The main challenge in the RO methodology is how and when we can reformulate the robust counterpart of an uncertain problem as a computationally tractable optimization problem, or at least approximate the robust counterpart by a tractable problem. By definition, the robust counterpart depends strongly on how we choose the uncertainty set, so this challenge can be met only if that set is chosen in a suitable way. The development of RO has grown fast: in 2004, a new approach called Adjustable Robust Optimization (ARO) was introduced to handle uncertain problems in which some decision variables must be decided as "wait and see" variables, unlike classic RO, which models all decision variables as "here and now" decisions. In ARO, an uncertain problem can be considered as a multistage decision problem whose later-stage variables are of the wait-and-see type. In this paper we present applications of both RO and ARO, and we briefly present all results to underline the importance of RO and ARO in many real-life problems.
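
    To make the notion of a tractable robust counterpart concrete, here is a standard textbook example of our own (not from the paper): a single LP constraint with box (interval) uncertainty in its coefficients. For nonnegative variables the worst case is attained at the upper interval ends, so the robust counterpart remains a linear program.

      import numpy as np
      from scipy.optimize import linprog

      c = np.array([-1.0, -2.0])        # maximize x1 + 2*x2 (minimize the negative)
      a_bar = np.array([1.0, 3.0])      # nominal constraint coefficients
      d = np.array([0.2, 0.5])          # interval half-widths (box uncertainty set)
      b = 6.0

      # With x >= 0, worst-case a^T x <= b over a in [a_bar - d, a_bar + d]
      # reduces to (a_bar + d)^T x <= b, still an LP.
      nominal = linprog(c, A_ub=[a_bar], b_ub=[b], bounds=[(0, None)] * 2)
      robust = linprog(c, A_ub=[a_bar + d], b_ub=[b], bounds=[(0, None)] * 2)
      print("nominal:", nominal.x, "robust:", robust.x)   # robust solution is more conservative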

  10. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  11. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  12. A sequential linear optimization approach for controller design

    NASA Technical Reports Server (NTRS)

    Horta, L. G.; Juang, J.-N.; Junkins, J. L.

    1985-01-01

    A linear optimization approach with a simple real arithmetic algorithm is presented for reliable controller design and vibration suppression of flexible structures. Using first order sensitivity of the system eigenvalues with respect to the design parameters in conjunction with a continuation procedure, the method converts a nonlinear optimization problem into a maximization problem with linear inequality constraints. The method of linear programming is then applied to solve the converted linear optimization problem. The general efficiency of the linear programming approach allows the method to handle structural optimization problems with a large number of inequality constraints on the design vector. The method is demonstrated using a truss beam finite element model for the optimal sizing and placement of active/passive-structural members for damping augmentation. Results using both the sequential linear optimization approach and nonlinear optimization are presented and compared. The insensitivity to initial conditions of the linear optimization approach is also demonstrated.

  13. Design and multi-physics optimization of rotary MRF brakes

    NASA Astrophysics Data System (ADS)

    Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan

    2018-03-01

    Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the calculations for each particle become excessive as the number of particles and the complexity of the problem increase, and the execution speed then becomes too slow to reach the optimized solution. Thus, this paper proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference between the proposed method and conventional PSO is that the original single population is split into several subpopulations according to a division of labor; the distribution of tasks and the transfer of information between parties are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm can overcome the heavy computational burden of multi-physics problems while improving accuracy. Wire type, MR fluid type, magnetic core material, and ideal current inputs have been determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method has shown better performance than conventional PSO and has produced small, lightweight, high-impedance rotary MRF brake designs.
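
    A rough sketch of the division-of-labor idea: independent PSO subpopulations that mostly follow their own best member and periodically receive the global best. The objective, constants, and information-sharing schedule are placeholders, not the paper's tuned scheme.

      import numpy as np

      rng = np.random.default_rng(1)
      def f(x): return np.sum(x ** 2, axis=-1)                # placeholder objective

      n_sub, n_part, D, w, c1, c2 = 4, 10, 5, 0.7, 1.5, 1.5
      x = rng.uniform(-5, 5, (n_sub, n_part, D)); v = np.zeros_like(x)
      pbest = x.copy(); pbest_f = f(x)

      for it in range(300):
          gbest = pbest.reshape(-1, D)[np.argmin(pbest_f)]    # shared global best
          for s in range(n_sub):
              sbest = pbest[s][np.argmin(pbest_f[s])]         # this subpopulation's best
              r1, r2 = rng.random((2, n_part, D))
              guide = gbest if it % 20 == 0 else sbest        # occasional information transfer
              v[s] = w * v[s] + c1 * r1 * (pbest[s] - x[s]) + c2 * r2 * (guide - x[s])
              x[s] += v[s]
          fx = f(x)
          improved = fx < pbest_f                             # update personal bests
          pbest[improved] = x[improved]; pbest_f[improved] = fx[improved]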

  14. Algorithmic Perspectives on Problem Formulations in MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    This work is concerned with an approach to formulating the multidisciplinary optimization (MDO) problem that reflects an algorithmic perspective on MDO problem solution. The algorithmic perspective focuses on formulating the problem in light of the abilities and inabilities of optimization algorithms, so that the resulting nonlinear programming problem can be solved reliably and efficiently by conventional optimization techniques. We propose a modular approach to formulating MDO problems that takes advantage of the problem structure, maximizes the autonomy of implementation, and allows for multiple easily interchangeable problem statements to be used depending on the available resources and the characteristics of the application problem.

  15. A general optimality criteria algorithm for a class of engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Belegundu, Ashok D.

    2015-05-01

    An optimality criteria (OC)-based algorithm for the optimization of a general class of nonlinear programming (NLP) problems is presented. The algorithm is only applicable to problems where the objective and constraint functions satisfy certain monotonicity properties. For multiply constrained problems which satisfy these assumptions, the algorithm is attractive compared with existing NLP methods as well as prevalent OC methods, as the latter involve computationally expensive active set and step-size control strategies. The fixed point algorithm presented here is applicable not only to structural optimization problems but also to certain problems that occur in resource allocation and inventory models. Convergence aspects are discussed. The fixed point update or resizing formula is given physical significance, which brings out a strength and trim feature. The number of function evaluations remains independent of the number of variables, allowing the efficient solution of problems with a large number of variables.
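
    A toy instance of an OC-type fixed-point resizing loop, for a separable monotone problem of our own choosing (minimum compliance of independent bars under a volume constraint). The update x <- x * B**eta with a bisected multiplier shows the flavor of such resizing formulas; the paper's algorithm is more general.

      import numpy as np

      L = np.array([1.0, 2.0, 1.5])          # member lengths (toy data)
      P = np.array([1.0, 3.0, 2.0])          # member loads (toy data)
      V_max, eta = 3.0, 0.5
      x = np.ones(3)                          # cross-section areas (design variables)

      for _ in range(100):
          dc = -(P ** 2) * L / x ** 2         # d(compliance)/dx for c = sum P^2 L / x
          dv = L                              # d(volume)/dx
          lo, hi = 1e-9, 1e9                  # bisect the multiplier to hit the volume limit
          for _ in range(60):
              lam = 0.5 * (lo + hi)
              x_new = np.clip(x * (-dc / (lam * dv)) ** eta, 1e-4, 10.0)
              if np.sum(L * x_new) > V_max: lo = lam
              else: hi = lam
          x = x_new
      print(x, np.sum(L * x))                 # converges to the analytic optimum x_i ~ P_i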

  16. Convex optimization problem prototyping for image reconstruction in computed tomography with the Chambolle-Pock algorithm

    PubMed Central

    Sidky, Emil Y.; Jørgensen, Jakob H.; Pan, Xiaochuan

    2012-01-01

    The primal-dual optimization algorithm developed in Chambolle and Pock (CP), 2011 is applied to various convex optimization problems of interest in computed tomography (CT) image reconstruction. This algorithm allows for rapid prototyping of optimization problems for the purpose of designing iterative image reconstruction algorithms for CT. The primal-dual algorithm is briefly summarized in the article, and its potential for prototyping is demonstrated by explicitly deriving CP algorithm instances for many optimization problems relevant to CT. An example application modeling breast CT with low-intensity X-ray illumination is presented. PMID:22538474
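
    As a flavor of the prototyping recipe, here is a CP instance we derived for a simple 1D denoising problem, min_x 0.5*||x - b||^2 + lam*||Dx||_1 (our example, far simpler than the CT problems in the paper): identify G, F, and the linear operator K, write down the two proximal maps, and iterate.

      import numpy as np

      n, lam = 200, 2.0
      rng = np.random.default_rng(2)
      b = np.repeat([0.0, 4.0, 1.0], [70, 60, 70]) + 0.3 * rng.standard_normal(n)
      D = np.diff(np.eye(n), axis=0)                 # (n-1) x n finite-difference operator

      Lnorm = np.linalg.norm(D, 2)                   # operator norm ||D||
      tau = sigma = 0.9 / Lnorm                      # step sizes with tau*sigma*||D||^2 < 1
      x = x_bar = np.zeros(n); y = np.zeros(n - 1)

      for _ in range(500):
          y = np.clip(y + sigma * D @ x_bar, -lam, lam)       # prox of (lam*||.||_1)^* = projection
          x_old = x
          x = (x - tau * D.T @ y + tau * b) / (1.0 + tau)     # prox of 0.5*||x - b||^2
          x_bar = 2 * x - x_old                               # over-relaxation, theta = 1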

  17. Wind Farm Turbine Type and Placement Optimization

    NASA Astrophysics Data System (ADS)

    Graf, Peter; Dykes, Katherine; Scott, George; Fields, Jason; Lunacek, Monte; Quick, Julian; Rethore, Pierre-Elouan

    2016-09-01

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  18. Wind farm turbine type and placement optimization

    DOE PAGES

    Graf, Peter; Dykes, Katherine; Scott, George; ...

    2016-10-03

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. This document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  19. Gravity inversion of a fault by Particle swarm optimization (PSO).

    PubMed

    Toushmalani, Reza

    2013-01-01

    Particle swarm optimization (PSO) is a heuristic global optimization algorithm based on swarm intelligence, originating from research on the movement behavior of bird flocks and fish schools. In this paper we introduce and use this method for the gravity inverse problem: determining the shape of a fault whose gravity anomaly is known. Application of the proposed algorithm to this problem has proven its capability to deal with difficult optimization problems, and the technique worked efficiently when tested on a number of models.

  20. Post-Optimality Analysis In Aerospace Vehicle Design

    NASA Technical Reports Server (NTRS)

    Braun, Robert D.; Kroo, Ilan M.; Gage, Peter J.

    1993-01-01

    This analysis pertains to the applicability of optimal sensitivity information to aerospace vehicle design. An optimal sensitivity (or post-optimality) analysis refers to computations performed once the initial optimization problem is solved. These computations may be used to characterize the design space about the present solution and infer changes in this solution as a result of constraint or parameter variations, without reoptimizing the entire system. The present analysis demonstrates that post-optimality information generated through first-order computations can be used to accurately predict the effect of constraint and parameter perturbations on the optimal solution. This assessment is based on the solution of an aircraft design problem in which the post-optimality estimates are shown to be within a few percent of the true solution over the practical range of constraint and parameter variations. Through solution of a reusable, single-stage-to-orbit, launch vehicle design problem, this optimal sensitivity information is also shown to improve the efficiency of the design process. For a hierarchically decomposed problem, this computational efficiency is realized by estimating the main-problem objective gradient through optimal sensitivity calculations. By reducing the need for finite differentiation of a re-optimized subproblem, a significant decrease in the number of objective function evaluations required to reach the optimal solution is obtained.
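
    The essence of the post-optimality estimate can be checked on a one-line toy problem (ours, not the aircraft case): the constraint multiplier is the derivative of the optimal objective with respect to the constraint bound, so a first-order extrapolation predicts the perturbed optimum without reoptimizing.

      # For min x^2 s.t. x >= b, the optimum is f*(b) = b^2 and the KKT
      # multiplier lambda = 2b equals df*/db, so f*(b + db) ~ f*(b) + lambda*db.
      b, db = 1.0, 0.1
      f_star = b ** 2
      lam = 2.0 * b                         # multiplier of x >= b at x* = b
      estimate = f_star + lam * db          # post-optimality prediction
      exact = (b + db) ** 2
      print(estimate, exact)                # 1.2 vs 1.21: first-order accurate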

  1. Analysis of a Two-Dimensional Thermal Cloaking Problem on the Basis of Optimization

    NASA Astrophysics Data System (ADS)

    Alekseev, G. V.

    2018-04-01

    For a two-dimensional model of thermal scattering, inverse problems arising in the development of tools for cloaking material bodies on the basis of a mixed thermal cloaking strategy are considered. By applying the optimization approach, these problems are reduced to optimization ones in which the role of controls is played by variable parameters of the medium occupying the cloaking shell and by the heat flux through a boundary segment of the basic domain. The solvability of the direct and optimization problems is proved, and an optimality system is derived. Based on its analysis, sufficient conditions on the input data are established that ensure the uniqueness and stability of optimal solutions.

  2. Solving mixed integer nonlinear programming problems using spiral dynamics optimization algorithm

    NASA Astrophysics Data System (ADS)

    Kania, Adhe; Sidarto, Kuntjoro Adji

    2016-02-01

    Many engineering and practical problems can be modeled as mixed integer nonlinear programs. This paper proposes to solve such problems with a modified version of the spiral dynamics inspired optimization method of Tamura and Yasuda. Four test cases have been examined, including problems in engineering and sport. The method succeeds in obtaining the optimal result in all test cases.
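
    For reference, the continuous core of the spiral dynamics method of Tamura and Yasuda rotates and contracts every search point about the current best. The sketch below shows this in two dimensions on a toy objective; the integer handling and the authors' modifications are omitted.

      import numpy as np

      rng = np.random.default_rng(3)
      def f(x): return (x[..., 0] - 1) ** 2 + (x[..., 1] + 2) ** 2   # toy objective

      r, theta, m = 0.95, np.pi / 4, 30
      R = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])                # rotation matrix
      X = rng.uniform(-5, 5, (m, 2))

      for _ in range(200):
          best = X[np.argmin(f(X))]
          X = best + r * (X - best) @ R.T      # rotate and contract about the best point
      best = X[np.argmin(f(X))]
      print(best, f(best))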

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong Subo; Katz, Boaz; Socrates, Aristotle

    Upcoming direct-imaging experiments may detect a new class of long-period, highly luminous, tidally powered extrasolar gas giants. Even though they are hosted by approx. Gyr-'old' main-sequence stars, they can be as 'hot' as young Jupiters at approx. 100 Myr, the prime targets of direct-imaging surveys. They are on years-long orbits and presently migrating to 'feed' the 'hot Jupiters'. They are expected from 'high-e' migration mechanisms, in which Jupiters are excited to highly eccentric orbits and then shrink their semimajor axes by a factor of approx. 10-100 due to tidal dissipation at close periastron passages. The dissipated orbital energy is converted to heat, and if it is deposited deep enough into the atmosphere, the planet likely radiates steadily at luminosity L approx. 100-1000 L(sub Jup) (2 x 10(exp -7) to 2 x 10(exp -6) L(sub Sun)) during a typical approx. Gyr migration timescale. Their large orbital separations and expected high planet-to-star flux ratios in the IR make them potentially accessible to high-contrast imaging instruments on 10 m class telescopes. Approximately 10 such planets are expected to exist around FGK dwarfs within approx. 50 pc. Long-period radial velocity planets are viable candidates, and the highly eccentric planet HD 20782b at maximum angular separation approx. 0.08 arcsec is a promising candidate. Directly imaging these tidally powered Jupiters would enable a direct test of high-e migration mechanisms. Once detected, the luminosity would provide a direct measurement of the migration rate, and together with mass (and possibly radius) estimates, they would serve as a laboratory to study planetary spectral formation and tidal physics.

  4. Comparisons with Caenorhabditis (approximately 100 Mb) and Drosophila (approximately 175 Mb) using flow cytometry show genome size in Arabidopsis to be approximately 157 Mb and thus approximately 25% larger than the Arabidopsis genome initiative estimate of approximately 125 Mb.

    PubMed

    Bennett, Michael D; Leitch, Ilia J; Price, H James; Johnston, J Spencer

    2003-04-01

    Recent genome sequencing papers have given genome sizes of 180 Mb for Drosophila melanogaster Iso-1 and 125 Mb for Arabidopsis thaliana Columbia. The former agrees with early cytochemical estimates, but numerous cytometric estimates of around 170 Mb imply that a genome size of 125 Mb for arabidopsis is an underestimate. In this study, nuclei of species pairs were compared directly using flow cytometry. Co-run Columbia and Iso-1 female gave a 2C peak for arabidopsis only approx. 15 % below that for drosophila, and 16C endopolyploid Columbia nuclei had approx. 15 % more DNA than 2C chicken nuclei (with >2280 Mb). Caenorhabditis elegans Bristol N2 (genome size approx. 100 Mb) co-run with Columbia or Iso-1 gave a 2C peak for drosophila approx. 75 % above that for 2C C. elegans, and a 2C peak for arabidopsis approx. 57 % above that for C. elegans. This confirms that 1C in drosophila is approx. 175 Mb and, combined with other evidence, leads us to conclude that the genome size of arabidopsis is not approx. 125 Mb, but probably approx. 157 Mb. It is likely that the discrepancy represents extra repeated sequences in unsequenced gaps in heterochromatic regions. Complete sequencing of the arabidopsis genome until no gaps remain at telomeres, nucleolar organizing regions or centromeres is still needed to provide the first precise angiosperm C-value as a benchmark calibration standard for plant genomes, and to ensure that no genes have been missed in arabidopsis, especially in centromeric regions, which are clearly larger than once imagined.
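
    The headline numbers follow from simple ratio arithmetic anchored on the approx. 100 Mb C. elegans genome; a quick check, with values rounded as in the text:

      # Peak-ratio arithmetic from the abstract (C. elegans anchor assumed 100 Mb)
      c_elegans = 100.0                     # Mb, reference genome size
      drosophila = c_elegans * 1.75         # 2C peak approx. 75% above C. elegans
      arabidopsis = c_elegans * 1.57        # 2C peak approx. 57% above C. elegans
      print(drosophila, arabidopsis)        # -> 175.0 and 157.0 Mb, the quoted sizes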

  5. The Infrared Properties of Sources Matched in the Wise All-Sky and Herschel ATLAS Surveys

    NASA Technical Reports Server (NTRS)

    Bond, Nicholas A.; Benford, Dominic J.; Gardner, Jonathan P.; Amblard, Alexandre; Fleuren, Simone; Blain, Andrew W.; Dunne, Loretta; Smith, Daniel J. B.; Maddox, Steve J.; Hoyos, Carlos; hide

    2012-01-01

    We describe the infrared properties of sources detected over approx. 36 sq deg of sky in the GAMA 15-hr equatorial field, using data from both the Herschel Astrophysical Terahertz Large-Area Survey (H-ATLAS) and the Wide-field Infrared Survey Explorer (WISE). With 5-sigma point-source depths of 34 and 0.048 mJy at 250 micron and 3.4 micron, respectively, we are able to identify 50.6% of the H-ATLAS sources in the WISE survey, corresponding to a surface density of approx. 630 deg(exp -2). Approximately two-thirds of these sources have measured spectroscopic or optical/near-IR photometric redshifts of z < 1. For sources with spectroscopic redshifts at z < 0.3, we find a linear correlation between the infrared luminosity at 3.4 micron and that at 250 micron, with +/-50% scatter over approx. 1.5 orders of magnitude in luminosity, approx. 10(exp 9) - 10(exp 10.5) solar luminosities. By contrast, the matched sources without previously measured redshifts (r approx. >20.5) have 250-350 micron flux density ratios that suggest either high-redshift galaxies (z approx. >1.5) or optically faint low-redshift galaxies with unusually low temperatures (T approx. <20). Their small 3.4-250 micron flux ratios favor a high-redshift galaxy population, as only the most actively star-forming galaxies at low redshift (e.g., Arp 220) exhibit comparable flux density ratios. Furthermore, we find a relatively large AGN fraction (approx. 30%) in a 12 micron flux-limited subsample of H-ATLAS sources, also consistent with there being a significant population of high-redshift sources in the no-redshift sample.

  6. TIME-FREQUENCY ANALYSIS OF THE SUPERORBITAL MODULATION OF THE X-RAY BINARY SMC X-1 USING THE HILBERT-HUANG TRANSFORM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hu, Chin-Ping; Chou, Yi; Yang, Ting-Chang

    2011-10-20

    The high-mass X-ray binary SMC X-1 exhibits a superorbital modulation with a dramatically varying period ranging between approx. 40 days and approx. 60 days. This research studies the time-frequency properties of the superorbital modulation of SMC X-1 based on the observations made by the All-Sky Monitor (ASM) onboard the Rossi X-ray Timing Explorer (RXTE). We analyzed the entire ASM database collected since 1996. The Hilbert-Huang transform (HHT), developed for non-stationary and nonlinear time-series analysis, was adopted to derive the instantaneous superorbital frequency. The resultant Hilbert spectrum is consistent with the dynamic power spectrum while showing more detailed information in both the time and frequency domains. The RXTE observations show that the superorbital modulation period was mostly between approx. 50 days and approx. 65 days, whereas it changed to approx. 45 days around MJD 50,800 and MJD 54,000. Our analysis further indicates that the instantaneous frequency changed on a timescale of hundreds of days between approx. MJD 51,500 and approx. MJD 53,500. Based on the instantaneous phase defined by the HHT, we folded the ASM light curve to derive a superorbital profile, from which an asymmetric feature and a low state with barely any X-ray emission (lasting for approx. 0.3 cycles) were observed. We also calculated the correlation between the mean period and the amplitude of the superorbital modulation. The result is similar to the recently discovered relationship between the superorbital cycle length and the mean X-ray flux for Her X-1.
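
    For readers unfamiliar with the HHT, its Hilbert-spectral step assigns an instantaneous frequency from the phase of the analytic signal. A minimal sketch on a synthetic drifting-period signal follows; a real HHT would first isolate oscillatory modes by empirical mode decomposition, which is not shown, and all numbers here are illustrative.

      import numpy as np
      from scipy.signal import hilbert

      fs = 1.0                                           # one sample per day
      t = np.arange(0, 2000) / fs
      period = 50 + 10 * np.sin(2 * np.pi * t / 1500)    # slowly drifting 40-60 d period
      x = np.cos(2 * np.pi * np.cumsum(1.0 / period) / fs)

      phase = np.unwrap(np.angle(hilbert(x)))            # analytic-signal phase
      inst_freq = np.diff(phase) / (2 * np.pi) * fs      # instantaneous frequency, cycles/day
      print(1.0 / inst_freq[::200])                      # recovered instantaneous period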

  7. The Infrared Properties of Sources Matched in the WISE All-Sky and Herschel Atlas Surveys

    NASA Technical Reports Server (NTRS)

    Bond, Nicholas A.; Benford, Dominic J.; Gardner, Jonathan P.; Eisenhardt, Peter; Amblard, Alexandre; Temi, Pasquale; Fleuren, Simone; Blain, Andrew W.; Dunne, Loretta; Smith, Daniel J.; hide

    2012-01-01

    We describe the infrared properties of sources detected over approx. 36 deg2 of sky in the GAMA 15-hr equatorial field, using data from both the Herschel Astrophysical Terahertz Large-Area Survey (H-ATLAS) and Wide-field Infrared Survey (WISE). With 5(sigma) point-source depths of 34 and 0.048 mJy at 250 microns and 3.4 microns, respectively, we are able to identify 50.6% of the H-ATLAS sources in the WISE survey, corresponding to a surface density of approx. 630 deg-2. Approximately two-thirds of these sources have measured spectroscopic or optical/near-IR photometric redshifts of z < 1. For sources with spectroscopic redshifts at z < 0.3, we find a linear correlation between the infrared luminosity at 3.4 microns and that at 250 microns, with +/-50% scatter over approx. 1.5 orders of magnitude in luminosity, approx. 10(exp 9) - 10(exp 10.5) Stellar Luminosity. By contrast, the matched sources without previously measured redshifts (r > or approx. 20.5) have 250-350 microns flux density ratios that suggest either high-redshift galaxies (z > or approx. 1.5) or optically faint low-redshift galaxies with unusually low temperatures (T < or approx. 20). Their small 3.4-250 microns flux ratios favor a high-redshift galaxy population, as only the most actively star-forming galaxies at low redshift (e.g., Arp 220) exhibit comparable flux density ratios. Furthermore, we find a relatively large AGN fraction (approx. 30%) in a 12 microns flux-limited subsample of H-ATLAS sources, also consistent with there being a significant population of high-redshift sources in the no-redshift sample.

  8. The Structural Evolution of Milky-Way-Like Star-Forming Galaxies since z approximately 1.3

    NASA Technical Reports Server (NTRS)

    Patel, Shannon G.; Fumagalli, Mattia; Franx, Marun; VanDokkum, Pieter G.; VanDerWel, Arjen; Leja, Joel; Labbe, Ivo; Brammr, Gabriel; Whitaker, Katherine E.; Skelton, Rosalind E.; hide

    2013-01-01

    We follow the structural evolution of star-forming galaxies (SFGs) like the Milky Way by selecting progenitors to z approx. 1.3 based on the stellar mass growth inferred from the evolution of the star-forming sequence. We select our sample from the 3D-HST survey, which utilizes spectroscopy from the HST-WFC3 G141 near-IR grism and enables precise redshift measurements for our sample of SFGs. Structural properties are obtained from Sersic profile fits to CANDELS WFC3 imaging. The progenitors of z = 0 SFGs with stellar mass M = 10(exp 10.5) solar mass are typically half as massive at z approx. 1. This late-time stellar mass growth is consistent with recent studies that employ abundance matching techniques. The descendant SFGs at z approx. 0 have grown in half-light radius by a factor of approx. 1.4 since z approx. 1. The half-light radius grows with stellar mass as r(sub e) proportional to stellar mass(exp 0.29). While most of the stellar mass is clearly assembling at large radii, the mass surface density profiles reveal ongoing mass growth also in the central regions, where bulges and pseudobulges are common features in present-day late-type galaxies. Some portion of this growth in the central regions is due to star formation, as recent observations of H-alpha maps for SFGs at z approx. 1 are found to be extended but centrally peaked. Connecting our lookback study with galactic archeology, we find the stellar mass surface density at R = 8 kpc to have increased by a factor of approx. 2 since z approx. 1, in good agreement with measurements derived for the solar neighborhood of the Milky Way.

  9. Toward Large-Area Sub-Arcsecond X-Ray Telescopes II

    NASA Technical Reports Server (NTRS)

    O'Dell, Stephen L.; Allured, Ryan; Ames, Andrew O.; Biskach, Michael P.; Broadway David M.; Bruni, Ricardo J.; Burrows, David; Cao, Jian; Chalifoux, Brandon D.; Chan, Kai-Wing; hide

    2016-01-01

    To advance scientific objectives significantly, future x-ray astronomy missions will likely call for x-ray telescopes with large aperture areas (approx. 3 sq m) and fine angular resolution (approx. 1 arcsec). Achieving such performance is programmatically and technologically challenging due to the mass and envelope constraints of space-borne telescopes and to the need for densely nested grazing-incidence optics. Such an x-ray telescope will require precision fabrication, alignment, mounting, and assembly of large areas (approx. 600 sq m) of lightweight (approx. 2 kg/sq m areal density) high-quality mirrors, at an acceptable cost (approx. 1 M$/sq m of mirror surface area). This paper reviews relevant programmatic and technological issues, as well as possible approaches for addressing them, including direct fabrication of monocrystalline silicon mirrors; active (in-space adjustable) figure correction of replicated mirrors; static post-fabrication correction using ion implantation, differential erosion or deposition; and coating-stress manipulation of thin substrates.

  10. Radio Detections During Two State Transitions of the Intermediate-Mass Black Hole HLX-1

    NASA Technical Reports Server (NTRS)

    Webb, Natalie; Cseh, David; Lenc, Emil; Godet, Olivier; Barret, Didier; Corbel, Stephane; Farrell, Sean; Fender, Robert; Gehrels, Neil; Heywood, Ian

    2012-01-01

    Relativistic jets are streams of plasma moving at appreciable fractions of the speed of light. They have been observed from stellar-mass black holes (approx. 3 to 20 solar masses) as well as supermassive black holes (approx. 10(exp 6) to 10(exp 9) solar masses) found in the centers of most galaxies. Jets should also be produced by intermediate-mass black holes (approx. 10(exp 2) to 10(exp 5) solar masses), although evidence for this third class of black hole has, until recently, been weak. We report the detection of transient radio emission at the location of the intermediate-mass black hole candidate ESO 243-49 HLX-1, which is consistent with a discrete jet ejection event. These observations also allow us to refine the mass estimate of the black hole to be between approx. 9 × 10(exp 3) solar masses and approx. 9 × 10(exp 4) solar masses.

  11. Nash equilibrium and multi criterion aerodynamic optimization

    NASA Astrophysics Data System (ADS)

    Tang, Zhili; Zhang, Lianhe

    2016-06-01

    Game theory, and in particular its Nash Equilibrium (NE), has gained importance in solving Multi Criterion Optimization (MCO) problems in engineering over the past decade. The solution of an MCO problem can be viewed as a NE under the concept of competitive games. This paper surveys and proposes four efficient algorithms for calculating a NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved in the paper based on the fixed point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single objective optimization problems. Two numerical examples are presented to verify the proposed algorithms: one is the optimization of mathematical functions, illustrating the detailed numerical procedures of the algorithms; the other is aerodynamic drag reduction of a civil transport wing fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.

  12. Exact solution of large asymmetric traveling salesman problems.

    PubMed

    Miller, D L; Pekny, J F

    1991-02-15

    The traveling salesman problem is one of a class of difficult problems in combinatorial optimization that is representative of a large number of important scientific and engineering problems. A survey is given of recent applications and methods for solving large problems. In addition, an algorithm for the exact solution of the asymmetric traveling salesman problem is presented along with computational results for several classes of problems. The results show that the algorithm performs remarkably well for some classes of problems, determining an optimal solution even for problems with large numbers of cities, yet for other classes, even small problems thwart determination of a provably optimal solution.
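
    The paper's branch-and-bound algorithm is not reproduced here, but for intuition the sketch below solves small asymmetric instances exactly with Held-Karp dynamic programming; its O(n^2 * 2^n) cost is precisely the scaling that makes specialized exact methods necessary for large problems.

      import itertools
      import numpy as np

      rng = np.random.default_rng(4)
      n = 10
      c = rng.integers(1, 100, (n, n)).astype(float)      # asymmetric cost matrix
      np.fill_diagonal(c, np.inf)

      # best[(S, j)] = min cost of a path from city 0 through set S, ending at j
      best = {(frozenset([j]), j): c[0, j] for j in range(1, n)}
      for size in range(2, n):
          for S in itertools.combinations(range(1, n), size):
              Sf = frozenset(S)
              for j in S:
                  best[(Sf, j)] = min(best[(Sf - {j}, k)] + c[k, j] for k in S if k != j)
      full = frozenset(range(1, n))
      tour = min(best[(full, j)] + c[j, 0] for j in range(1, n))   # close the tour
      print("optimal tour cost:", tour)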

  13. Review: Optimization methods for groundwater modeling and management

    NASA Astrophysics Data System (ADS)

    Yeh, William W.-G.

    2015-09-01

    Optimization methods have been used in groundwater modeling as well as for the planning and management of groundwater systems. This paper reviews and evaluates the various optimization methods that have been used for solving the inverse problem of parameter identification (estimation), experimental design, and groundwater planning and management. Various model selection criteria are discussed, as well as criteria used for model discrimination. The inverse problem of parameter identification concerns the optimal determination of model parameters using water-level observations. In general, the optimal experimental design seeks to find sampling strategies for the purpose of estimating the unknown model parameters. A typical objective of optimal conjunctive-use planning of surface water and groundwater is to minimize the operational costs of meeting water demand. The optimization methods include mathematical programming techniques such as linear programming, quadratic programming, dynamic programming, stochastic programming, nonlinear programming, and the global search algorithms such as genetic algorithms, simulated annealing, and tabu search. Emphasis is placed on groundwater flow problems as opposed to contaminant transport problems. A typical two-dimensional groundwater flow problem is used to explain the basic formulations and algorithms that have been used to solve the formulated optimization problems.

  14. A new chaotic multi-verse optimization algorithm for solving engineering optimization problems

    NASA Astrophysics Data System (ADS)

    Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella

    2018-03-01

    Multi-verse optimization algorithm (MVO) is one of the recent meta-heuristic optimization algorithms. The main inspiration of this algorithm came from the multi-verse theory in physics. However, MVO, like most optimization algorithms, suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied to 13 benchmark functions and 7 well-known design problems in the engineering and mechanical field, namely the three-bar truss, speed reducer design, pressure vessel problem, spring design, welded beam, rolling element bearing, and multiple disc clutch brake. In the current study, a modified feasibility-based mechanism is employed to handle constraints. In this mechanism, four rules are used to handle the specific constraint problem by maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. Also, the results reveal that the sine chaotic map is the most appropriate map to significantly boost MVO's performance.
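
    The mechanism behind a 'chaotic' variant is easy to isolate: a deterministic chaotic map (here the sine map, which the authors found most effective) replaces the uniform random draws that drive the search. Below is a deliberately minimal random-search optimizer using this idea; it illustrates the mechanism only and is not the CMVO algorithm.

      import numpy as np

      def f(x): return np.sum(x ** 2)                     # toy objective

      z = 0.7                                             # chaotic state in (0, 1)
      x_best = np.full(5, 3.0); f_best = f(x_best)
      for _ in range(2000):
          draws = np.empty(5)
          for i in range(5):
              z = np.sin(np.pi * z)                       # sine chaotic map step, stays in (0, 1]
              draws[i] = z
          cand = x_best + (draws - 0.5)                   # chaotic perturbation of the best point
          if f(cand) < f_best:
              x_best, f_best = cand, f(cand)
      print(x_best, f_best)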

  15. Optimal control of a harmonic oscillator: Economic interpretations

    NASA Astrophysics Data System (ADS)

    Janová, Jitka; Hampel, David

    2013-10-01

    Optimal control is a popular technique for modelling and solving dynamic decision problems in economics. A standard interpretation of the criterion function and Lagrange multipliers in the profit maximization problem is well known. Through a particular example, we aim at a deeper understanding of the possible economic interpretations of further mathematical and solution features of the optimal control problem: we focus on the solution of the optimal control problem for a harmonic oscillator serving as a model for the Phillips business cycle. We discuss the economic interpretations of the arising mathematical objects with reference to the well-known reasoning for these objects in other problems.
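
    One standard way to make such a problem computable is a linear-quadratic formulation: penalize squared state deviations and control effort, and solve the Riccati equation for the feedback policy. A minimal sketch of that formulation follows; it is our own setup with placeholder coefficients, not Phillips-model estimates.

      import numpy as np
      from scipy.linalg import solve_continuous_are

      omega = 1.0
      A = np.array([[0.0, 1.0], [-omega ** 2, 0.0]])   # oscillator x'' = -omega^2 x + u
      B = np.array([[0.0], [1.0]])
      Q = np.eye(2)                                    # cost on state deviations
      R = np.array([[1.0]])                            # cost on control (policy) effort

      P = solve_continuous_are(A, B, Q, R)             # Riccati solution
      K = np.linalg.solve(R, B.T @ P)                  # optimal feedback u = -K x
      print(K)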

  16. Research in Support of the Use of Rankine Cycle Energy Conversion Systems for Space Power and Propulsion

    NASA Technical Reports Server (NTRS)

    Lahey, Richard T., Jr.; Dhir, Vijay

    2004-01-01

    This is the report of a Scientific Working Group (SWG) formed by NASA to determine the feasibility of using a liquid metal cooled nuclear reactor and Rankine energy conversion cycle for dual purpose power and propulsion in space. This is a high level technical report which is intended for use by NASA management in program planning. The SWG was composed of a team of specialists in nuclear energy and multiphase flow and heat transfer technology from academia, national laboratories, NASA and industry. The SWG has identified the key technology issues that need to be addressed and have recommended an integrated short term (approx. 2 years) and a long term (approx. 10 year) research and development (R&D) program to qualify a Rankine cycle power plant for use in space. This research is ultimately intended to give NASA and its contractors the ability to reliably predict both steady and transient multiphase flow and heat transfer phenomena at reduced gravity, so they can analyze and optimize designs and scale-up experimental data on Rankine cycle components and systems. In addition, some of these results should also be useful for the analysis and design of various multiphase life support and thermal management systems being considered by NASA.

  17. REANALYSIS OF F-STATISTIC GRAVITATIONAL-WAVE SEARCHES WITH THE HIGHER CRITICISM STATISTIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bennett, M. F.; Melatos, A.; Delaigle, A.

    2013-04-01

    We propose a new method of gravitational-wave detection using a modified form of higher criticism, a statistical technique introduced by Donoho and Jin. Higher criticism is designed to detect a group of sparse, weak sources, none of which are strong enough to be reliably estimated or detected individually. We apply higher criticism as a second-pass method to synthetic F-statistic and C-statistic data for a monochromatic periodic source in a binary system and quantify the improvement relative to the first-pass methods. We find that higher criticism on C-statistic data is more sensitive by approx. 6% than the C-statistic alone under optimal conditions (i.e., binary orbit known exactly), and the relative advantage increases as the error in the orbital parameters increases. Higher criticism is robust even when the source is not monochromatic (e.g., phase-wandering in an accreting system). Applying higher criticism to a phase-wandering source over multiple time intervals gives an approx. >30% increase in detectability with few assumptions about the frequency evolution. By contrast, in all-sky searches for unknown periodic sources, which are dominated by the brightest source, second-pass higher criticism does not provide any benefits over a first-pass search.

  18. Singular perturbation analysis of AOTV-related trajectory optimization problems

    NASA Technical Reports Server (NTRS)

    Calise, Anthony J.; Bae, Gyoung H.

    1990-01-01

    The problem of real time guidance and optimal control of Aeroassisted Orbit Transfer Vehicles (AOTV's) was addressed using singular perturbation theory as an underlying method of analysis. Trajectories were optimized with the objective of minimum energy expenditure in the atmospheric phase of the maneuver. Two major problem areas were addressed: optimal reentry, and synergetic plane change with aeroglide. For the reentry problem, several reduced order models were analyzed with the objective of optimal changes in heading with minimum energy loss. It was demonstrated that a further model order reduction to a single state model is possible through the application of singular perturbation theory. The optimal solution for the reduced problem defines an optimal altitude profile dependent on the current energy level of the vehicle. A separate boundary layer analysis is used to account for altitude and flight path angle dynamics, and to obtain lift and bank angle control solutions. By considering alternative approximations to solve the boundary layer problem, three guidance laws were derived, each having an analytic feedback form. The guidance laws were evaluated using a Maneuvering Reentry Research Vehicle model and all three laws were found to be near optimal. For the problem of synergetic plane change with aeroglide, a difficult terminal boundary layer control problem arises which to date is found to be analytically intractable. Thus a predictive/corrective solution was developed to satisfy the terminal constraints on altitude and flight path angle. A composite guidance solution was obtained by combining the optimal reentry solution with the predictive/corrective guidance method. Numerical comparisons with the corresponding optimal trajectory solutions show that the resulting performance is very close to optimal. An attempt was made to obtain numerically optimized trajectories for the case where heating rate is constrained. A first order state variable inequality constraint was imposed on the full order AOTV point mass equations of motion, using a simple aerodynamic heating rate model.

  19. Conceptual Comparison of Population Based Metaheuristics for Engineering Problems

    PubMed Central

    Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes. PMID:25874265

  20. Conceptual comparison of population based metaheuristics for engineering problems.

    PubMed

    Adekanmbi, Oluwole; Green, Paul

    2015-01-01

    Metaheuristic algorithms are well-known optimization tools which have been employed for solving a wide range of optimization problems. Several extensions of differential evolution have been adopted in solving constrained and nonconstrained multiobjective optimization problems, but in this study, the third version of generalized differential evolution (GDE) is used for solving practical engineering problems. GDE3 metaheuristic modifies the selection process of the basic differential evolution and extends DE/rand/1/bin strategy in solving practical applications. The performance of the metaheuristic is investigated through engineering design optimization problems and the results are reported. The comparison of the numerical results with those of other metaheuristic techniques demonstrates the promising performance of the algorithm as a robust optimization tool for practical purposes.

  1. Efficiency of quantum vs. classical annealing in nonconvex learning problems

    PubMed Central

    Zecchina, Riccardo

    2018-01-01

    Quantum annealers aim at solving nonconvex optimization problems by exploiting cooperative tunneling effects to escape local minima. The underlying idea consists of designing a classical energy function whose ground states are the sought optimal solutions of the original optimization problem and adding a controllable quantum transverse field to generate tunneling processes. A key challenge is to identify classes of nonconvex optimization problems for which quantum annealing remains efficient while thermal annealing fails. We show that this happens for a wide class of problems which are central to machine learning. Their energy landscapes are dominated by local minima that cause exponential slowdown of classical thermal annealers while simulated quantum annealing converges efficiently to rare dense regions of optimal solutions. PMID:29382764
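
    For contrast with the quantum annealer, the classical thermal baseline discussed above is ordinary Metropolis simulated annealing. A toy one-dimensional version is sketched below; the landscape and cooling schedule are our own placeholders.

      import numpy as np

      rng = np.random.default_rng(5)
      def E(x): return 0.1 * x ** 2 + np.sin(3.0 * x)      # landscape with many local minima

      x, T = 4.0, 2.0
      for step in range(5000):
          x_new = x + rng.normal(scale=0.5)                # propose a thermal move
          if E(x_new) < E(x) or rng.random() < np.exp(-(E(x_new) - E(x)) / T):
              x = x_new                                    # Metropolis acceptance rule
          T = max(1e-3, T * 0.999)                         # geometric cooling schedule
      print(x, E(x))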

  2. Direct Multiple Shooting Optimization with Variable Problem Parameters

    NASA Technical Reports Server (NTRS)

    Whitley, Ryan J.; Ocampo, Cesar A.

    2009-01-01

    Taking advantage of a novel approach to the design of the orbital transfer optimization problem and advanced non-linear programming algorithms, several optimal transfer trajectories are found for problems with and without known analytic solutions. This method treats the fixed known gravitational constants as optimization variables in order to reduce the need for an advanced initial guess. Complex periodic orbits are targeted with very simple guesses and the ability to find optimal transfers in spite of these bad guesses is successfully demonstrated. Impulsive transfers are considered for orbits in both the 2-body frame as well as the circular restricted three-body problem (CRTBP). The results with this new approach demonstrate the potential for increasing robustness for all types of orbit transfer problems.

  3. The pseudo-Boolean optimization approach to form the N-version software structure

    NASA Astrophysics Data System (ADS)

    Kovalev, I. V.; Kovalev, D. I.; Zelenkov, P. V.; Voroshilova, A. A.

    2015-10-01

    The problem of developing an optimal structure for an N-version software system is a very complex optimization problem, which makes the use of deterministic optimization methods inappropriate. In this view, exploiting heuristic strategies looks more rational. In the field of pseudo-Boolean optimization theory, the so-called method of varied probabilities (MVP) has been developed to solve problems with a large dimensionality. Some additional modifications of MVP have been made to solve the problem of N-version system design; those algorithms take into account the discovered specific features of the objective function. Practical experiments have shown the advantage of using these algorithm modifications because of the reduced search space.
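
    The MVP details are not given in the abstract, but the general idea of probability-driven pseudo-Boolean search can be sketched as follows: sample candidate Boolean vectors from a vector of bit probabilities and shift those probabilities toward the better samples. This is our minimal illustration, with a one-max objective standing in for the N-version design objective; the MVP algorithm itself has additional machinery not reproduced here.

      import numpy as np

      rng = np.random.default_rng(6)
      n = 30
      def f(bits): return bits.sum(axis=-1)                 # toy objective: one-max

      p = np.full(n, 0.5)                                   # bit probabilities
      for _ in range(200):
          pop = (rng.random((20, n)) < p).astype(int)       # sample 20 candidate vectors
          best = pop[np.argsort(f(pop))[-5:]]               # keep the 5 best
          p = 0.9 * p + 0.1 * best.mean(axis=0)             # shift probabilities toward them
          p = p.clip(0.05, 0.95)                            # keep some exploration alive
      print((rng.random(n) < p).astype(int), p.round(2))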

  4. Characterization of high-pressure capacitively coupled hydrogen plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nunomura, S.; Kondo, M.

    2007-11-01

    Capacitively coupled very-high-frequency hydrogen plasmas have been systematically diagnosed over a wide range of gas pressures from 5 mTorr to 10 Torr. The plasma parameters, ion species, and ion energy distributions (IEDs) are measured using a Langmuir probe, optical emission spectroscopy, and an energy-filtered mass spectrometer. The measurement results show that the ion species in a hydrogen plasma are determined by the ionization channels and subsequent ion-molecule reactions. The ions are dominated by H2+ at less-collisional conditions of < or approx. 20 mTorr, whereas they are dominated by H3+ at collisional conditions of > or approx. 20 mTorr. The IED is determined by both the sheath potential drop and ion-neutral collisions in the plasma sheath. The IED is broadened for a collisional sheath at > or approx. 0.3 Torr and the ion bombardment energy is lowered. For high-pressure discharges operated at approx. 10 Torr, plasmas are characterized by a low electron temperature of approx. 0.8 eV and a low ion bombardment energy of < or approx. 15 eV.

  5. Shocklets, SLAMS, and Field-Aligned Ion Beams in the Terrestrial Foreshock

    NASA Technical Reports Server (NTRS)

    Wilson, L. B.; Koval, A.; Sibeck, D. G.; Szabo, A.; Cattell, C. A.; Kasper, J. C.; Maruca, B. A.; Pulupa, M.; Salem, C. S.; Wilber, M.

    2012-01-01

    We present Wind spacecraft observations of ion distributions showing field-aligned beams (FABs) and large-amplitude magnetic fluctuations composed of a series of shocklets and short large-amplitude magnetic structures (SLAMS). The FABs are found to have T_k ≈ 80-850 eV, V_b/V_sw ≈ 1.3-2.4, T_⊥,b/T_∥,b ≈ 1-8, and n_b/n_o ≈ 0.2-11%. Saturation amplitudes for ion/ion resonant and non-resonant instabilities are too small to explain the observed SLAMS amplitudes. We show two examples where groups of SLAMS can act like a local quasi-perpendicular shock reflecting ions to produce the FABs, a scenario distinct from the more-common production at the quasi-perpendicular bow shock. The SLAMS exhibit a foot-like magnetic enhancement with a leading magnetosonic whistler train, consistent with previous observations. Strong ion and electron heating are observed within the series of shocklets and SLAMS, with temperatures increasing by factors ≳5 and ≳3, respectively. Both the core and halo electron components show strong perpendicular heating inside the feature.

  6. DIRECT IMAGING OF FINE STRUCTURES IN GIANT PLANET-FORMING REGIONS OF THE PROTOPLANETARY DISK AROUND AB AURIGAE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hashimoto, J.; Tamura, M.; Fukue, T.

    We report high-resolution 1.6 µm polarized intensity (PI) images of the circumstellar disk around the Herbig Ae star AB Aur at a radial distance of 22 AU (0.15'') up to 554 AU (3.85''), which have been obtained by the high-contrast instrument HiCIAO with the dual-beam polarimetry. We revealed complicated and asymmetrical structures in the inner part (≲140 AU) of the disk while confirming the previously reported outer (r ≳ 200 AU) spiral structure. We have imaged a double ring structure at ~40 and ~100 AU and a ring-like gap between the two. We found a significant discrepancy of inclination angles between the two rings, which may indicate that the disk of AB Aur is warped. Furthermore, we found seven dips (the typical size is ~45 AU or less) within the two rings, as well as three prominent PI peaks at ~40 AU. The observed structures, including a bumpy double ring, a ring-like gap, and a warped disk in the innermost regions, provide essential information for understanding the formation mechanism of recently detected wide-orbit (r > 20 AU) planets.

  7. Exploratory studies on a passively triggered vacuum spark

    NASA Astrophysics Data System (ADS)

    Rout, R. K.; Auluck, S. K. H.; Nagpal, J. S.; Kulkarni, L. V.

    1999-12-01

    The results of an experimental investigation on a passively triggered vacuum spark device are presented. The diagnostics include the current, x-ray and optical emission measurements. The sharp dips in the current derivative signal indicate the occurrence of pinching at an early stage of the discharge (at currents of ~5 kA). A well-confined plasma with a central hot region was recorded using a streak camera. The pinched plasma was observed to undergo kink-type oscillations with a time period of 10-15 ns. Repeated plasma fronts were seen to move from the anode to the cathode with an average velocity of ~5 × 10^6 cm s^-1. Soft x-ray emission having a radiation intensity of a few hundred mR per discharge was observed. The x-ray signals obtained using photodiodes showed multiple bursts. A soft x-ray pinhole camera recorded micro-pinches of ~100 µm. The x-ray emitting regions were confined to the inter-electrode gap. The x-ray emission characteristics were influenced by the electrolytic resistance, which was connected across the spark gap to initiate discharge.

  8. Optimal control of LQR for discrete time-varying systems with input delays

    NASA Astrophysics Data System (ADS)

    Yin, Yue-Zhu; Yang, Zhong-Lian; Yin, Zhi-Xiang; Xu, Feng

    2018-04-01

    In this work, we consider the optimal control problem of linear quadratic regulation for discrete time-variant systems with single input and multiple input delays. An innovative and simple method to derive the optimal controller is given. The studied problem is first equivalently converted into a problem subject to a constraint condition. Finally, with the established duality, the problem is transformed into a static mathematical optimisation problem without input delays. The optimal control input that minimises the performance index function is derived by solving this optimisation problem with two methods. A numerical simulation example is carried out and its results show that our two approaches are both feasible and very effective.
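
    For reference, the delay-free core of finite-horizon discrete-time LQR reduces to a backward Riccati recursion; a standard sketch is given below (the paper's duality-based treatment of input delays is not reproduced, and the system matrices and horizon are illustrative).

      import numpy as np

      def discrete_lqr(A, B, Q, R, N):
          """Backward Riccati recursion for finite-horizon discrete LQR.
          Returns the time-varying feedback gains K[0..N-1]."""
          P = Q.copy()                      # terminal cost weight
          gains = []
          for _ in range(N):
              K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
              P = Q + A.T @ P @ (A - B @ K)
              gains.append(K)
          return gains[::-1]                # reorder to forward time

      # Illustrative double-integrator example.
      A = np.array([[1.0, 1.0], [0.0, 1.0]])
      B = np.array([[0.0], [1.0]])
      K = discrete_lqr(A, B, Q=np.eye(2), R=np.eye(1), N=50)
      u0 = -K[0] @ np.array([1.0, 0.0])    # first optimal input u = -Kx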

  9. Exact solution for an optimal impermeable parachute problem

    NASA Astrophysics Data System (ADS)

    Lupu, Mircea; Scheiber, Ernest

    2002-10-01

    In the paper, direct and inverse boundary problems are solved and analytical solutions are obtained for optimization problems in the case of some nonlinear integral operators. The plane potential flow of an inviscid, incompressible, unbounded fluid jet is modeled, which encounters a symmetrical, curvilinear obstacle--the deflector of maximal drag. Singular integral equations are derived for the direct and inverse problems, and the motion in the auxiliary canonical half-plane is obtained. Next, the optimization problem is solved analytically. The design of the optimal airfoil is performed and, finally, numerical computations concerning the drag coefficient and other geometrical and aerodynamical parameters are carried out. This model corresponds to the Helmholtz impermeable parachute problem.

  10. A DEEP HUBBLE SPACE TELESCOPE SEARCH FOR ESCAPING LYMAN CONTINUUM FLUX AT z ~ 1.3: EVIDENCE FOR AN EVOLVING IONIZING EMISSIVITY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siana, Brian; Bridge, Carrie R.; Teplitz, Harry I.

    We have obtained deep Hubble Space Telescope far-UV images of 15 starburst galaxies at z ~ 1.3 in the GOODS fields to search for escaping Lyman continuum (LyC) photons. These are the deepest far-UV images (m_AB = 28.7, 3σ, 1'' diameter) over this large an area (4.83 arcmin^2) and provide some of the best escape fraction constraints for any galaxies at any redshift. We do not detect any individual galaxies, with 3σ limits to the LyC (~700 Å) flux 50-149 times fainter (in f_ν) than the rest-frame UV (1500 Å) continuum fluxes. Correcting for the mean intergalactic medium (IGM) attenuation (factor ~2), as well as an intrinsic stellar Lyman break (factor ~3), these limits translate to relative escape fraction limits of f_esc,rel < [0.03, 0.21]. The stacked limit is f_esc,rel (3σ) < 0.02. We use a Monte Carlo simulation to properly account for the expected distribution of line-of-sight IGM opacities. When including constraints from previous surveys at z ~ 1.3 we find that, at the 95% confidence level, no more than 8% of star-forming galaxies at z ~ 1.3 can have relative escape fractions greater than 0.50. Alternatively, if the majority of galaxies have low, but non-zero, escaping LyC, the escape fraction cannot be more than 0.04. In light of some evidence for strong LyC emission from UV-faint regions of Lyman break galaxies (LBGs) at z ~ 3, we also stack sub-regions of our galaxies with different surface brightnesses and detect no significant LyC flux at the f_esc,rel < 0.03 level. Both the stacked limits and the limits from the Monte Carlo simulation suggest that the average ionizing emissivity (relative to non-ionizing UV emissivity) at z ~ 1.3 is significantly lower than has been observed in LBGs at z ~ 3. If the ionizing emissivity of star-forming galaxies is in fact increasing with redshift, it would help to explain the high photoionization rates seen in the IGM at z > 4 and reionization of the IGM at z > 6.

  11. From nonlinear optimization to convex optimization through firefly algorithm and indirect approach with applications to CAD/CAM.

    PubMed

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently.
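
    A minimal firefly-algorithm kernel of the kind used for the data-parameterization step might look as follows; the attractiveness model and the coefficients alpha, beta0, and gamma are common textbook defaults, not the authors' exact settings, and the residual being minimized is a placeholder.

      import numpy as np

      def firefly(f, dim, n=25, iters=200, alpha=0.2, beta0=1.0, gamma=1.0):
          """Minimize f over [0, 1]^dim with the firefly algorithm."""
          rng = np.random.default_rng(0)
          x = rng.random((n, dim))
          val = np.array([f(p) for p in x])
          for _ in range(iters):
              for i in range(n):
                  for j in range(n):
                      if val[j] < val[i]:        # j is brighter: move i toward j
                          r2 = np.sum((x[i] - x[j]) ** 2)
                          beta = beta0 * np.exp(-gamma * r2)
                          x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                          x[i] = np.clip(x[i], 0.0, 1.0)
                          val[i] = f(x[i])
          best = int(np.argmin(val))
          return x[best], val[best]

      # Example: minimize a stand-in fitting residual over 5 parameters.
      best, err = firefly(lambda p: np.sum((p - 0.3) ** 2), dim=5)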

  12. From Nonlinear Optimization to Convex Optimization through Firefly Algorithm and Indirect Approach with Applications to CAD/CAM

    PubMed Central

    Gálvez, Akemi; Iglesias, Andrés

    2013-01-01

    Fitting spline curves to data points is a very important issue in many applied fields. It is also challenging, because these curves typically depend on many continuous variables in a highly interrelated nonlinear way. In general, it is not possible to compute these parameters analytically, so the problem is formulated as a continuous nonlinear optimization problem, for which traditional optimization techniques usually fail. This paper presents a new bioinspired method to tackle this issue. In this method, optimization is performed through a combination of two techniques. Firstly, we apply the indirect approach to the knots, in which they are not initially the subject of optimization but precomputed with a coarse approximation scheme. Secondly, a powerful bioinspired metaheuristic technique, the firefly algorithm, is applied to optimization of data parameterization; then, the knot vector is refined by using De Boor's method, thus yielding a better approximation to the optimal knot vector. This scheme converts the original nonlinear continuous optimization problem into a convex optimization problem, solved by singular value decomposition. Our method is applied to some illustrative real-world examples from the CAD/CAM field. Our experimental results show that the proposed scheme can solve the original continuous nonlinear optimization problem very efficiently. PMID:24376380

  13. Analytical and numerical analysis of inverse optimization problems: conditions of uniqueness and computational methods

    PubMed Central

    Zatsiorsky, Vladimir M.

    2011-01-01

    One of the key problems of motor control is the redundancy problem, in particular how the central nervous system (CNS) chooses an action out of infinitely many possible. A promising way to address this question is to assume that the choice is made based on optimization of a certain cost function. A number of cost functions have been proposed in the literature to explain performance in different motor tasks: from force sharing in grasping to path planning in walking. However, the problem of uniqueness of the cost function(s) was not addressed until recently. In this article, we analyze two methods of finding additive cost functions in inverse optimization problems with linear constraints, so-called linear-additive inverse optimization problems. These methods are based on the Uniqueness Theorem for inverse optimization problems that we proved recently (Terekhov et al., J Math Biol 61(3):423–453, 2010). Using synthetic data, we show that both methods allow for determining the cost function. We analyze the influence of noise on both methods. Finally, we show how a violation of the conditions of the Uniqueness Theorem may lead to incorrect solutions of the inverse optimization problem. PMID:21311907

  14. Adaptive Constrained Optimal Control Design for Data-Based Nonlinear Discrete-Time Systems With Critic-Only Structure.

    PubMed

    Luo, Biao; Liu, Derong; Wu, Huai-Ning

    2018-06-01

    Reinforcement learning has proved to be a powerful tool to solve optimal control problems over the past few years. However, the data-based constrained optimal control problem of nonaffine nonlinear discrete-time systems has rarely been studied. To solve this problem, an adaptive optimal control approach is developed by using the value iteration-based Q-learning (VIQL) with the critic-only structure. Most of the existing constrained control methods require the use of a certain performance index and suit only linear or affine nonlinear systems, which is unreasonable in practice. To overcome this problem, the system transformation is first introduced with the general performance index. Then, the constrained optimal control problem is converted to an unconstrained optimal control problem. By introducing the action-state value function, i.e., Q-function, the VIQL algorithm is proposed to learn the optimal Q-function of the data-based unconstrained optimal control problem. The convergence results of the VIQL algorithm are established with an easy-to-realize initial condition. To implement the VIQL algorithm, the critic-only structure is developed, where only one neural network is required to approximate the Q-function. The converged Q-function obtained from the critic-only VIQL method is employed to design the adaptive constrained optimal controller based on the gradient descent scheme. Finally, the effectiveness of the developed adaptive control method is tested on three examples with computer simulation.
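
    In outline, value-iteration-based Q-learning repeatedly regresses a Q-function onto Bellman targets computed from recorded transitions. The sketch below uses a toy tabular setting in place of the paper's neural-network critic; the state/action sets, the batch data, and the discount factor are all illustrative assumptions.

      import numpy as np

      def fitted_q_iteration(transitions, n_states, n_actions, gamma=0.95, sweeps=100):
          """Batch value-iteration Q-learning (cost-minimization form):
          Q_{k+1}(s, a) <- r + gamma * min_a' Q_k(s', a'),
          computed only from recorded (s, a, r, s') tuples."""
          Q = np.zeros((n_states, n_actions))
          for _ in range(sweeps):
              Q_new = Q.copy()
              for s, a, r, s_next in transitions:
                  Q_new[s, a] = r + gamma * Q[s_next].min()
              Q = Q_new
          return Q

      # Tiny illustrative batch: (state, action, cost, next_state).
      data = [(0, 0, 1.0, 1), (0, 1, 0.0, 0), (1, 0, 0.0, 0), (1, 1, 5.0, 1)]
      Q = fitted_q_iteration(data, n_states=2, n_actions=2)
      policy = Q.argmin(axis=1)    # greedy (cost-minimizing) controller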

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kashiyama, Kazumi; Ioka, Kunihito; Kawanaka, Norita

    We suggest that white dwarf (WD) pulsars can compete with neutron star (NS) pulsars for producing the excesses of cosmic ray electrons and positrons (e±) observed by the PAMELA, ATIC/PPB-BETS, Fermi, and H.E.S.S. experiments. A merger of two WDs leads to a rapidly spinning WD with a rotational energy (~10^50 erg) comparable to the NS case. The birth rate (~10^-2-10^-3 per year per galaxy) is also similar, providing the right energy budget for the cosmic ray e±. Applying the NS theory, we suggest that the WD pulsars can in principle produce e± up to ~10 TeV. In contrast to the NS model, the adiabatic and radiative energy losses of e± are negligible since their injection continues after the expansion of the pulsar wind nebula, and hence it is enough that a fraction ~1% of WDs are magnetized (~10^7-10^9 G) as observed. The long activity also increases the number of nearby sources (~100), which reduces the Poisson fluctuation in the flux. The WD pulsars could dominate the quickly cooling e± above TeV energy as a second spectral bump or even surpass the NS pulsars in the observing energy range ~10 GeV-1 TeV, providing a background for the dark matter signals and a nice target for the future AMS-02, CALET, and CTA experiments.

  16. Topology optimization of unsteady flow problems using the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Nørgaard, Sebastian; Sigmund, Ole; Lazarov, Boyan

    2016-02-01

    This article demonstrates and discusses topology optimization for unsteady incompressible fluid flows. The fluid flows are simulated using the lattice Boltzmann method, and a partial bounceback model is implemented to model the transition between fluid and solid phases in the optimization problems. The optimization problem is solved with a gradient-based method, and the design sensitivities are computed by solving the discrete adjoint problem. For moderate Reynolds number flows, it is demonstrated that topology optimization can successfully account for unsteady effects such as vortex shedding and time-varying boundary conditions. Such effects are relevant in several engineering applications, e.g., fluid pumps and control valves.

  17. Artificial bee colony algorithm for constrained possibilistic portfolio optimization problem

    NASA Astrophysics Data System (ADS)

    Chen, Wei

    2015-07-01

    In this paper, we discuss the portfolio optimization problem with real-world constraints under the assumption that the returns of risky assets are fuzzy numbers. A new possibilistic mean-semiabsolute deviation model is proposed, in which transaction costs, cardinality and quantity constraints are considered. Due to such constraints the proposed model becomes a mixed integer nonlinear programming problem and traditional optimization methods fail to find the optimal solution efficiently. Thus, a modified artificial bee colony (MABC) algorithm is developed to solve the corresponding optimization problem. Finally, a numerical example is given to illustrate the effectiveness of the proposed model and the corresponding algorithm.

  18. Multiobjective Aerodynamic Shape Optimization Using Pareto Differential Evolution and Generalized Response Surface Metamodels

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface metamodels based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
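
    The single-objective DE kernel underlying such Pareto extensions is compact; a minimal DE/rand/1/bin sketch follows, with the usual F and CR settings (the multiobjective ranking and the neural-network surrogate of the paper are not reproduced, and the test function is a placeholder).

      import numpy as np

      def differential_evolution(f, bounds, pop_size=30, iters=300, F=0.8, CR=0.9):
          """DE/rand/1/bin minimizer; bounds is a list of (low, high) per dimension."""
          rng = np.random.default_rng(1)
          lo, hi = np.array(bounds).T
          pop = lo + rng.random((pop_size, len(bounds))) * (hi - lo)
          fit = np.array([f(x) for x in pop])
          for _ in range(iters):
              for i in range(pop_size):
                  idx = [j for j in range(pop_size) if j != i]
                  a, b, c = pop[rng.choice(idx, 3, replace=False)]
                  mutant = np.clip(a + F * (b - c), lo, hi)
                  cross = rng.random(len(bounds)) < CR
                  cross[rng.integers(len(bounds))] = True   # ensure one mutated gene
                  trial = np.where(cross, mutant, pop[i])
                  ft = f(trial)
                  if ft <= fit[i]:                           # greedy selection
                      pop[i], fit[i] = trial, ft
          best = fit.argmin()
          return pop[best], fit[best]

      x_best, f_best = differential_evolution(lambda x: np.sum(x ** 2), [(-5, 5)] * 4)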

  19. Parameter meta-optimization of metaheuristics of solving specific NP-hard facility location problem

    NASA Astrophysics Data System (ADS)

    Skakov, E. S.; Malysh, V. N.

    2018-03-01

    The aim of the work is to create an evolutionary method for optimizing the values of the control parameters of metaheuristics of solving the NP-hard facility location problem. A system analysis of the tuning process of optimization algorithms parameters is carried out. The problem of finding the parameters of a metaheuristic algorithm is formulated as a meta-optimization problem. Evolutionary metaheuristic has been chosen to perform the task of meta-optimization. Thus, the approach proposed in this work can be called “meta-metaheuristic”. Computational experiment proving the effectiveness of the procedure of tuning the control parameters of metaheuristics has been performed.

  20. Alternative Optimizations of X-ray TES Arrays: Soft X-rays, High Count Rates, and Mixed-Pixel Arrays

    NASA Technical Reports Server (NTRS)

    Kilbourne, C. A.; Bandler, S. R.; Brown, A.-D.; Chervenak, J. A.; Figueroa-Feliciano, E.; Finkbeiner, F. M.; Iyomoto, N.; Kelley, R. L.; Porter, F. S.; Smith, S. J.

    2007-01-01

    We are developing arrays of superconducting transition-edge sensors (TES) for imaging spectroscopy telescopes such as the XMS on Constellation-X. While our primary focus has been on arrays that meet the XMS requirements (of which, foremost, is an energy resolution of 2.5 eV at 6 keV and a bandpass from approx. 0.3 keV to 12 keV), we have also investigated other optimizations that might be used to extend the XMS capabilities. In one of these optimizations, improved resolution below 1 keV is achieved by reducing the heat capacity. Such pixels can be based on our XMS-style TES's with the separate absorbers omitted. These pixels can added to an array with broadband response either as a separate array or interspersed, depending on other factors that include telescope design and science requirements. In one version of this approach, we have designed and fabricated a composite array of low-energy and broad-band pixels to provide high spectral resolving power over a broader energy bandpass than could be obtained with a single TES design. The array consists of alternating pixels with and without overhanging absorbers. To explore optimizations for higher count rates, we are also optimizing the design and operating temperature of pixels that are coupled to a solid substrate. We will present the performance of these variations and discuss other optimizations that could be used to enhance the XMS or enable other astrophysics experiments.

  1. Optimal control problem for linear fractional-order systems, described by equations with Hadamard-type derivative

    NASA Astrophysics Data System (ADS)

    Postnov, Sergey

    2017-11-01

    Two kinds of optimal control problems are investigated for linear time-invariant fractional-order systems with lumped parameters whose dynamics are described by equations with a Hadamard-type derivative: the problem of control with minimal norm and the problem of control with minimal time under a given restriction on the control norm. A problem setting with nonlocal initial conditions is studied. Admissible controls are allowed to be p-integrable functions (p > 1) on a half-interval. The optimal control problems are studied by the moment method. The correctness and solvability conditions for the corresponding moment problem are derived. For several special cases the stated optimal control problems are solved analytically. Some analogies are pointed out between the results obtained and known results for integer-order systems and for fractional-order systems described by equations with Caputo- and Riemann-Liouville-type derivatives.

  2. Multi-step optimization strategy for fuel-optimal orbital transfer of low-thrust spacecraft

    NASA Astrophysics Data System (ADS)

    Rasotto, M.; Armellin, R.; Di Lizia, P.

    2016-03-01

    An effective method for the design of fuel-optimal transfers in two- and three-body dynamics is presented. The optimal control problem is formulated using calculus of variation and primer vector theory. This leads to a multi-point boundary value problem (MPBVP), characterized by complex inner constraints and a discontinuous thrust profile. The first issue is addressed by embedding the MPBVP in a parametric optimization problem, thus allowing a simplification of the set of transversality constraints. The second problem is solved by representing the discontinuous control function by a smooth function depending on a continuation parameter. The resulting trajectory optimization method can deal with different intermediate conditions, and no a priori knowledge of the control structure is required. Test cases in both the two- and three-body dynamics show the capability of the method in solving complex trajectory design problems.

  3. Optimal perturbations for nonlinear systems using graph-based optimal transport

    NASA Astrophysics Data System (ADS)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.
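
    The Monge-Kantorovich subproblem with quadratic cost has a standard discrete form as a linear program over transport plans; a small sketch using scipy is below. The graph-based pseudo-time flow and the transfer-operator propagation of the paper are not reproduced; the point sets and measures are illustrative.

      import numpy as np
      from scipy.optimize import linprog

      def discrete_ot(mu, nu, points_src, points_dst):
          """Solve min <C, T> s.t. T 1 = mu, T^T 1 = nu, T >= 0,
          with quadratic ground cost C_ij = |x_i - y_j|^2."""
          m, n = len(mu), len(nu)
          C = ((points_src[:, None, :] - points_dst[None, :, :]) ** 2).sum(-1)
          # Row-sum and column-sum constraints on the flattened plan.
          A_eq = np.zeros((m + n, m * n))
          for i in range(m):
              A_eq[i, i * n:(i + 1) * n] = 1.0
          for j in range(n):
              A_eq[m + j, j::n] = 1.0
          res = linprog(C.ravel(), A_eq=A_eq, b_eq=np.concatenate([mu, nu]),
                        bounds=(0, None), method="highs")
          return res.x.reshape(m, n), res.fun

      # Illustrative 1D example: move a two-point measure onto another.
      T, cost = discrete_ot(np.array([0.5, 0.5]), np.array([0.5, 0.5]),
                            np.array([[0.0], [1.0]]), np.array([[2.0], [3.0]]))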

  4. The ESSENCE Supernova Survey: Survey Optimization, Observations, and Supernova Photometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miknaitis, Gajus; Pignata, G.; Rest, A.

    We describe the implementation and optimization of the ESSENCE supernova survey, which we have undertaken to measure the equation of state parameter of the dark energy. We present a method for optimizing the survey exposure times and cadence to maximize our sensitivity to the dark energy equation of state parameter w = P/(ρc^2) for a given fixed amount of telescope time. For our survey on the CTIO 4m telescope, measuring the luminosity distances and redshifts for supernovae at modest redshifts (z ~ 0.5 ± 0.2) is optimal for determining w. We describe the data analysis pipeline, based on reliable and robust image subtraction, used to find supernovae automatically and in near real-time. Since making cosmological inferences with supernovae relies crucially on accurate measurement of their brightnesses, we describe our efforts to establish a thorough calibration of the CTIO 4m natural photometric system. In its first four years, ESSENCE has discovered and spectroscopically confirmed 102 type Ia SNe, at redshifts from 0.10 to 0.78, identified through an impartial, effective methodology for spectroscopic classification and redshift determination. We present the resulting light curves for all type Ia supernovae found by ESSENCE and used in our measurement of w, presented in Wood-Vasey et al. (2007).

  5. Optimization of Pumpkin Oil Recovery by Using Aqueous Enzymatic Extraction and Comparison of the Quality of the Obtained Oil with the Quality of Cold-Pressed Oil

    PubMed Central

    Roszkowska, Beata; Czaplicki, Sylwester; Tańska, Małgorzata

    2016-01-01

    The study was carried out to optimize pumpkin oil recovery in a process of aqueous extraction preceded by enzymatic maceration of seeds, as well as to compare the quality of the obtained oil with that of cold-pressed pumpkin seed oil. Hydrated pulp of hulless pumpkin seeds was macerated using a 2% (by mass) cocktail of commercial pectinolytic, cellulolytic and proteolytic preparations (Rohapect® UF, Rohament® CL and Colorase® 7089). The optimization procedure utilized response surface methodology based on a Box-Behnken design of experiment. The optimized variables of the enzymatic pretreatment were pH, temperature and maceration time. The results showed that a pH value, temperature and maceration time of 4.7, 54 °C and 15.4 h, respectively, maximized the oil yield at 72.64%. Among these variables, the impact of pH was crucial (accounting for over 73% of the determined variation) for oil recovery. The oil obtained by aqueous enzymatic extraction was richer in sterols, squalene and tocopherols, and only slightly less abundant in carotenoids than the cold-pressed one. However, it had a lower oxidative stability, with the induction period shortened by approx. 30% in relation to the cold-pressed oil. PMID:28115898

  6. Comparative Evaluation of Different Optimization Algorithms for Structural Design Applications

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Non-linear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of eight different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using the eight different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the Sequential Unconstrained Minimization Technique, SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and the alleviation of this discrepancy can improve the efficiency of optimizers.

  7. Performance Trend of Different Algorithms for Structural Design Optimization

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Coroneos, Rula M.; Guptill, James D.; Hopkins, Dale A.

    1996-01-01

    Nonlinear programming algorithms play an important role in structural design optimization. Fortunately, several algorithms with computer codes are available. At NASA Lewis Research Center, a project was initiated to assess the performance of different optimizers through the development of a computer code, CometBoards. This paper summarizes the conclusions of that research. CometBoards was employed to solve sets of small, medium and large structural problems, using different optimizers on a Cray-YMP8E/8128 computer. The reliability and efficiency of the optimizers were determined from the performance of these problems. For small problems, the performance of most of the optimizers could be considered adequate. For large problems, however, three optimizers (two sequential quadratic programming routines, DNCONG of IMSL and SQP of IDESIGN, along with the sequential unconstrained minimization technique, SUMT) outperformed others. At optimum, most optimizers captured an identical number of active displacement and frequency constraints, but the number of active stress constraints differed among the optimizers. This discrepancy can be attributed to singularity conditions in the optimization, and the alleviation of this discrepancy can improve the efficiency of optimizers.

  8. Evolutionary Optimization of a Geometrically Refined Truss

    NASA Technical Reports Server (NTRS)

    Hull, P. V.; Tinker, M. L.; Dozier, G. V.

    2007-01-01

    Structural optimization is a field of research that has experienced noteworthy growth for many years. Researchers in this area have developed optimization tools to successfully design and model structures, typically minimizing mass while maintaining certain deflection and stress constraints. Numerous optimization studies have been performed to minimize mass, deflection, and stress on a benchmark cantilever truss problem. Predominantly traditional optimization theory is applied to this problem. The cross-sectional area of each member is optimized to minimize the aforementioned objectives. This Technical Publication (TP) presents a structural optimization technique that has been previously applied to compliant mechanism design. This technique demonstrates a method that combines topology optimization, geometric refinement, finite element analysis, and two forms of evolutionary computation: genetic algorithms and differential evolution to successfully optimize a benchmark structural optimization problem. A nontraditional solution to the benchmark problem is presented in this TP, specifically a geometrically refined topological solution. The design process begins with an alternate control mesh formulation, multilevel geometric smoothing operation, and an elastostatic structural analysis. The design process is wrapped in an evolutionary computing optimization toolset.

  9. The expanded invasive weed optimization metaheuristic for solving continuous and discrete optimization problems.

    PubMed

    Josiński, Henryk; Kostrzewa, Daniel; Michalczuk, Agnieszka; Switoński, Adam

    2014-01-01

    This paper introduces an expanded version of the Invasive Weed Optimization algorithm (exIWO) distinguished by the hybrid strategy of the search space exploration proposed by the authors. The algorithm is evaluated by solving three well-known optimization problems: minimization of numerical functions, feature selection, and the Mona Lisa TSP Challenge as one of the instances of the traveling salesman problem. The achieved results are compared with analogous outcomes produced by other optimization methods reported in the literature.

  10. Particle swarm optimization - Genetic algorithm (PSOGA) on linear transportation problem

    NASA Astrophysics Data System (ADS)

    Rahmalia, Dinita

    2017-08-01

    The Linear Transportation Problem (LTP) is a case of constrained optimization where we want to minimize cost subject to a balance between the amount of supply and the amount of demand. Exact methods such as the northwest corner, Vogel, Russell, and minimal cost methods have been applied to approach the optimal solution. In this paper, we use a heuristic, Particle Swarm Optimization (PSO), to solve the linear transportation problem for any number of decision variables. In addition, we combine the mutation operator of the Genetic Algorithm (GA) with PSO to improve the optimal solution. This method is called Particle Swarm Optimization - Genetic Algorithm (PSOGA). The simulations show that PSOGA can improve the optimal solution produced by PSO.
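
    A minimal PSOGA kernel, i.e. standard PSO with a GA-style mutation applied to the particles, might look as follows; the inertia and acceleration constants, the mutation rate, and the objective are illustrative, and the transportation-specific repair of supply/demand balance is omitted.

      import numpy as np

      def psoga(f, dim, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, pm=0.05):
          """PSO with a GA mutation operator; minimizes f over [0, 1]^dim."""
          rng = np.random.default_rng(2)
          x = rng.random((n, dim))
          v = np.zeros((n, dim))
          pbest = x.copy()
          pval = np.array([f(p) for p in x])
          g = pbest[pval.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = np.clip(x + v, 0.0, 1.0)
              mut = rng.random((n, dim)) < pm        # GA-style uniform mutation
              x[mut] = rng.random(mut.sum())
              val = np.array([f(p) for p in x])
              better = val < pval
              pbest[better], pval[better] = x[better], val[better]
              g = pbest[pval.argmin()].copy()
          return g, pval.min()

      sol, cost = psoga(lambda p: np.sum(np.abs(p - 0.25)), dim=6)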

  11. Analytical and Computational Properties of Distributed Approaches to MDO

    NASA Technical Reports Server (NTRS)

    Alexandrov, Natalia M.; Lewis, Robert Michael

    2000-01-01

    Historical evolution of engineering disciplines and the complexity of the MDO problem suggest that disciplinary autonomy is a desirable goal in formulating and solving MDO problems. We examine the notion of disciplinary autonomy and discuss the analytical properties of three approaches to formulating and solving MDO problems that achieve varying degrees of autonomy by distributing the problem along disciplinary lines. Two of the approaches, Optimization by Linear Decomposition and Collaborative Optimization, are based on bi-level optimization and reflect what we call a structural perspective. The third approach, Distributed Analysis Optimization, is a single-level approach that arises from what we call an algorithmic perspective. The main conclusion of the paper is that disciplinary autonomy may come at a price: in the bi-level approaches, the system-level constraints introduced to relax the interdisciplinary coupling and enable disciplinary autonomy can cause analytical and computational difficulties for optimization algorithms. The single-level alternative we discuss affords a more limited degree of autonomy than that of the bi-level approaches, but without the computational difficulties of the bi-level methods. Key Words: Autonomy, bi-level optimization, distributed optimization, multidisciplinary optimization, multilevel optimization, nonlinear programming, problem integration, system synthesis

  12. Simultaneous optimization of loading pattern and burnable poison placement for PWRs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alim, F.; Ivanov, K.; Yilmaz, S.

    2006-07-01

    To solve the in-core fuel management optimization problem, GARCO-PSU (Genetic Algorithm Reactor Core Optimization - Pennsylvania State Univ.) is developed. This code is applicable to all types and geometries of PWR core structures with an unlimited number of fuel assembly (FA) types in the inventory. For this purpose an innovative genetic algorithm is developed by modifying the classical representation of the genotype. In-core fuel management heuristic rules are introduced into GARCO. The core re-load design optimization has two parts, loading pattern (LP) optimization and burnable poison (BP) placement optimization. These parts depend on each other, but it is difficult to solve the combined problem due to its large size. Separating the problem into two parts provides a practical way to solve it. However, the result of this method does not reflect the true optimal solution. GARCO-PSU manages to solve LP optimization and BP placement optimization simultaneously in an efficient manner. (authors)

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Kuo -Ling; Mehrotra, Sanjay

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadraticmore » programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).« less

  14. A noisy chaotic neural network for solving combinatorial optimization problems: stochastic chaotic simulated annealing.

    PubMed

    Wang, Lipo; Li, Sa; Tian, Fuyu; Fu, Xiuju

    2004-10-01

    Recently Chen and Aihara have demonstrated both experimentally and mathematically that their chaotic simulated annealing (CSA) has better search ability for solving combinatorial optimization problems compared to both the Hopfield-Tank approach and stochastic simulated annealing (SSA). However, CSA may not find a globally optimal solution no matter how slowly annealing is carried out, because the chaotic dynamics are completely deterministic. In contrast, SSA tends to settle down to a global optimum if the temperature is reduced sufficiently slowly. Here we combine the best features of both SSA and CSA, thereby proposing a new approach for solving optimization problems, i.e., stochastic chaotic simulated annealing, by using a noisy chaotic neural network. We show the effectiveness of this new approach with two difficult combinatorial optimization problems, i.e., a traveling salesman problem and a channel assignment problem for cellular mobile communications.

  15. An Algorithm for the Mixed Transportation Network Design Problem

    PubMed Central

    Liu, Xinyu; Chen, Qun

    2016-01-01

    This paper proposes an optimization algorithm, the dimension-down iterative algorithm (DDIA), for solving a mixed transportation network design problem (MNDP), which is generally expressed as a mathematical programming with equilibrium constraint (MPEC). The upper level of the MNDP aims to optimize the network performance via both the expansion of the existing links and the addition of new candidate links, whereas the lower level is a traditional Wardrop user equilibrium (UE) problem. The idea of the proposed solution algorithm (DDIA) is to reduce the dimensions of the problem. A group of variables (discrete/continuous) is fixed to optimize another group of variables (continuous/discrete) alternately; then, the problem is transformed into solving a series of CNDPs (continuous network design problems) and DNDPs (discrete network design problems) repeatedly until the problem converges to the optimal solution. The advantage of the proposed algorithm is that its solution process is very simple and easy to apply. Numerical examples show that for the MNDP without budget constraint, the optimal solution can be found within a few iterations with DDIA. For the MNDP with budget constraint, however, the result depends on the selection of initial values, which leads to different optimal solutions (i.e., different local optimal solutions). Some thoughts are given on how to derive meaningful initial values, such as by considering the budgets of new and reconstruction projects separately. PMID:27626803
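
    The alternating idea behind DDIA can be sketched generically: fix the discrete additions and optimize the continuous expansions (the CNDP step), then fix the continuous part and re-optimize the discrete one (the DNDP step), repeating until the discrete choice stabilizes. The sketch below shows this loop on an abstract objective; the user-equilibrium lower level and the network model are placeholders, not the paper's formulation.

      import itertools
      import numpy as np
      from scipy.optimize import minimize

      def ddia(obj, n_cont, n_disc, rounds=10):
          """Alternate: optimize continuous vars with discrete vars fixed,
          then enumerate discrete vars with continuous vars fixed."""
          y = np.zeros(n_disc, dtype=int)     # candidate links: build (1) or not (0)
          x = np.zeros(n_cont)                # capacity expansions of existing links
          for _ in range(rounds):
              res = minimize(lambda xc: obj(xc, y), x, bounds=[(0, 10)] * n_cont)
              x = res.x                       # CNDP step
              y_best, f_best = y, obj(x, y)
              for cand in itertools.product((0, 1), repeat=n_disc):
                  fc = obj(x, np.array(cand))
                  if fc < f_best:
                      y_best, f_best = np.array(cand), fc
              if np.array_equal(y_best, y):   # converged: discrete part is stable
                  return x, y, f_best
              y = y_best                      # DNDP step
          return x, y, obj(x, y)

      # Placeholder objective standing in for network travel cost (illustrative).
      f = lambda x, y: np.sum((x - y.sum()) ** 2) + 0.5 * y.sum()
      x_opt, y_opt, val = ddia(f, n_cont=3, n_disc=3)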

  16. Optimal Control Problems with Switching Points. Ph.D. Thesis, 1990 Final Report

    NASA Technical Reports Server (NTRS)

    Seywald, Hans

    1991-01-01

    The main idea of this report is to give an overview of the problems and difficulties that arise in solving optimal control problems with switching points. A brief discussion of existing optimality conditions is given and a numerical approach for solving the multipoint boundary value problems associated with the first-order necessary conditions of optimal control is presented. Two real-life aerospace optimization problems are treated explicitly. These are altitude maximization for a sounding rocket (Goddard Problem) in the presence of a dynamic pressure limit, and range maximization for a supersonic aircraft flying in the vertical, also in the presence of a dynamic pressure limit. In the second problem singular control appears along arcs with an active dynamic pressure limit, which, in the context of optimal control, represents a first-order state inequality constraint. An extension of the Generalized Legendre-Clebsch Condition to the case of singular control along state/control constrained arcs is presented and is applied to the aircraft range maximization problem stated above. A contribution to the field of Jacobi Necessary Conditions is made by giving a new proof for the non-optimality of conjugate paths in the Accessory Minimum Problem. Because of its simple and explicit character, the new proof may provide the basis for an extension of Jacobi's Necessary Condition to the case of trajectories with interior point constraints. Finally, the result that touch points cannot occur for first-order state inequality constraints is extended to the case of vector valued control functions.

  17. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  18. 46 CFR 153.372 - Gauges and vapor return for cargo vapor pressures exceeding 100 kPa (approx. 14.7 psia).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 46 Shipping 5 2011-10-01 2011-10-01 false Gauges and vapor return for cargo vapor pressures exceeding 100 kPa (approx. 14.7 psia). 153.372 Section 153.372 Shipping COAST GUARD, DEPARTMENT OF HOMELAND... return for cargo vapor pressures exceeding 100 kPa (approx. 14.7 psia). When table 1 references this...

  19. TARCMO: Theory and Algorithms for Robust, Combinatorial, Multicriteria Optimization

    DTIC Science & Technology

    2016-11-28

    4.6 On the Recoverable Robust Traveling Salesman Problem ... 4.7 A Bicriteria Approach to Robust Optimization ... The traveling salesman problem (TSP) is a well-known combinatorial optimization problem ... procedure for the robust traveling salesman problem. While this iterative algorithm results in an optimal solution to the robust TSP, computation...

  20. A CAUTIONARY TALE: MARVELS BROWN DWARF CANDIDATE REVEALS ITSELF TO BE A VERY LONG PERIOD, HIGHLY ECCENTRIC SPECTROSCOPIC STELLAR BINARY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mack, Claude E. III; Stassun, Keivan G.; De Lee, Nathan

    2013-05-15

    We report the discovery of a highly eccentric, double-lined spectroscopic binary star system (TYC 3010-1494-1), comprising two solar-type stars that we had initially identified as a single star with a brown dwarf companion. At the moderate resolving power of the MARVELS spectrograph and the spectrographs used for subsequent radial-velocity (RV) measurements (R ≲ 30,000), this particular stellar binary mimics a single-lined binary with an RV signal that would be induced by a brown dwarf companion (M sin i ~ 50 M_Jup) to a solar-type primary. At least three properties of this system allow it to masquerade as a single star with a very-low-mass companion: its large eccentricity (e ~ 0.8), its relatively long period (P ~ 238 days), and the approximately perpendicular orientation of the semi-major axis with respect to the line of sight (ω ~ 189°). As a result of these properties, for ~95% of the orbit the two sets of stellar spectral lines are completely blended, and the RV measurements based on centroiding on the apparently single-lined spectrum are very well fit by an orbit solution indicative of a brown dwarf companion on a more circular orbit (e ~ 0.3). Only during the ~5% of the orbit near periastron passage does the true, double-lined nature and large RV amplitude of ~15 km s^-1 reveal itself. The discovery of this binary system is an important lesson for RV surveys searching for substellar companions; at a given resolution and observing cadence, a survey will be susceptible to these kinds of astrophysical false positives for a range of orbital parameters. Finally, for surveys like MARVELS that lack the resolution for a useful line bisector analysis, it is imperative to monitor the peak of the cross-correlation function for suspicious changes in width or shape, so that such false positives can be flagged during the candidate vetting process.

  1. A Mathematical Optimization Problem in Bioinformatics

    ERIC Educational Resources Information Center

    Heyer, Laurie J.

    2008-01-01

    This article describes the sequence alignment problem in bioinformatics. Through examples, we formulate sequence alignment as an optimization problem and show how to compute the optimal alignment with dynamic programming. The examples and sample exercises have been used by the author in a specialized course in bioinformatics, but could be adapted…
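
    A dynamic-programming global alignment in the style described (Needleman-Wunsch) fits in a few lines; the scoring values below (+1 match, -1 mismatch, -2 gap) are common classroom defaults, not values prescribed by the article.

      def align_score(a, b, match=1, mismatch=-1, gap=-2):
          """Global sequence alignment score by dynamic programming."""
          m, n = len(a), len(b)
          # dp[i][j] = best score aligning a[:i] with b[:j]
          dp = [[0] * (n + 1) for _ in range(m + 1)]
          for i in range(1, m + 1):
              dp[i][0] = i * gap
          for j in range(1, n + 1):
              dp[0][j] = j * gap
          for i in range(1, m + 1):
              for j in range(1, n + 1):
                  s = match if a[i - 1] == b[j - 1] else mismatch
                  dp[i][j] = max(dp[i - 1][j - 1] + s,   # match/substitute
                                 dp[i - 1][j] + gap,     # gap in b
                                 dp[i][j - 1] + gap)     # gap in a
          return dp[m][n]

      print(align_score("GATTACA", "GCATGCU"))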

  2. A Problem on Optimal Transportation

    ERIC Educational Resources Information Center

    Cechlarova, Katarina

    2005-01-01

    Mathematical optimization problems are not typical in the classical curriculum of mathematics. In this paper we show how several generalizations of an easy problem on optimal transportation were solved by gifted secondary school pupils in a correspondence mathematical seminar, how they can be used in university courses of linear programming and…

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graf, Peter; Dykes, Katherine; Scott, George

    The layout of turbines in a wind farm is already a challenging nonlinear, nonconvex, nonlinearly constrained continuous global optimization problem. Here we begin to address the next generation of wind farm optimization problems by adding the complexity that there is more than one turbine type to choose from. The optimization becomes a nonlinear constrained mixed integer problem, which is a very difficult class of problems to solve. Furthermore, this document briefly summarizes the algorithm and code we have developed, the code validation steps we have performed, and the initial results for multi-turbine type and placement optimization (TTP_OPT) we have run.

  4. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE PAGES

    Nicholson, Bethany; Siirola, John

    2017-11-11

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.
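
    As a flavor of the high-level constructs discussed, a minimal Pyomo model is sketched below; the toy objective and constraint, and the availability of an ipopt executable, are assumptions on our part, and the dynamic and stochastic extensions from the paper are not shown.

      from pyomo.environ import (ConcreteModel, Var, Objective, Constraint,
                                 NonNegativeReals, SolverFactory, value, minimize)

      # Tiny illustrative model: minimize a quadratic subject to a linear constraint.
      m = ConcreteModel()
      m.x = Var(domain=NonNegativeReals)
      m.y = Var(domain=NonNegativeReals)
      m.balance = Constraint(expr=m.x + m.y >= 1.0)
      m.cost = Objective(expr=(m.x - 0.2) ** 2 + (m.y - 0.3) ** 2, sense=minimize)

      SolverFactory("ipopt").solve(m)   # assumes an ipopt executable is installed
      print(value(m.x), value(m.y))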

  5. Computational alternatives to obtain time optimal jet engine control. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Basso, R. J.; Leake, R. J.

    1976-01-01

    Two computational methods are presented for determining an open-loop time-optimal control sequence for a simple single-spool turbojet engine described by a set of nonlinear differential equations. Both methods are modifications of widely accepted algorithms which can solve fixed-time unconstrained optimal control problems with a free right end. The constrained problems considered here have fixed right ends and free time. Dynamic programming is defined on a standard problem and yields a successive-approximation solution to the time-optimal problem of interest. A feedback control law is obtained and is then used to determine the corresponding open-loop control sequence. The Fletcher-Reeves conjugate gradient method has been selected for adaptation to solve a nonlinear optimal control problem with state variable and control constraints.
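
    The Fletcher-Reeves conjugate gradient update mentioned above is compact; an unconstrained sketch with a simple backtracking line search follows (the state and control constraint handling of the thesis is not reproduced, and the test function is illustrative).

      import numpy as np

      def fletcher_reeves(f, grad, x0, iters=100, tol=1e-8):
          """Nonlinear conjugate gradient (Fletcher-Reeves) with backtracking."""
          x = np.asarray(x0, dtype=float)
          g = grad(x)
          d = -g
          for _ in range(iters):
              # Backtracking (Armijo) line search along the direction d.
              t, fx = 1.0, f(x)
              while f(x + t * d) > fx + 1e-4 * t * (g @ d) and t > 1e-12:
                  t *= 0.5
              x = x + t * d
              g_new = grad(x)
              if np.linalg.norm(g_new) < tol:
                  break
              beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
              d = -g_new + beta * d
              g = g_new
          return x

      x_star = fletcher_reeves(lambda x: np.sum(x ** 2) + x[0] * x[1],
                               lambda x: 2 * x + np.array([x[1], x[0]]),
                               [3.0, -2.0])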

  6. A framework for modeling and optimizing dynamic systems under uncertainty

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nicholson, Bethany; Siirola, John

    Algebraic modeling languages (AMLs) have drastically simplified the implementation of algebraic optimization problems. However, there are still many classes of optimization problems that are not easily represented in most AMLs. These classes of problems are typically reformulated before implementation, which requires significant effort and time from the modeler and obscures the original problem structure or context. In this work we demonstrate how the Pyomo AML can be used to represent complex optimization problems using high-level modeling constructs. We focus on the operation of dynamic systems under uncertainty and demonstrate the combination of Pyomo extensions for dynamic optimization and stochastic programming. We use a dynamic semibatch reactor model and a large-scale bubbling fluidized bed adsorber model as test cases.

  7. Distributed Method to Optimal Profile Descent

    NASA Astrophysics Data System (ADS)

    Kim, Geun I.

    Current ground automation tools for Optimal Profile Descent (OPD) procedures utilize path stretching and speed profile changes to maintain proper merging and spacing requirements in high-traffic terminal areas. However, the low predictability of an aircraft's vertical profile and its path deviation during descent add uncertainty to computing the estimated time of arrival, key information that enables the ground control center to manage airspace traffic effectively. This paper uses an OPD procedure that is based on a constant flight path angle to increase the predictability of the vertical profile and defines an OPD optimization problem that uses both path stretching and speed profile changes while largely maintaining the original OPD procedure. This problem minimizes the cumulative cost of performing OPD procedures for a group of aircraft by assigning a time cost function to each aircraft and a separation cost function to each pair of aircraft. The OPD optimization problem is then solved in a decentralized manner using dual decomposition techniques under an inter-aircraft ADS-B mechanism. This method divides the optimization problem into more manageable sub-problems, which are then distributed to the group of aircraft. Each aircraft solves its assigned sub-problem and communicates the solutions to other aircraft in an iterative process until an optimal solution is achieved, thus decentralizing the computation of the optimization problem.
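
    A generic dual-decomposition loop of the kind described, in which each aircraft solves a local subproblem and a shared multiplier prices the coupling (separation) constraint, can be sketched as follows; the quadratic local costs, the two-aircraft setting, and the step size are illustrative stand-ins for the paper's formulation.

      def dual_decomposition(prefs, min_gap, iters=500, step=0.2):
          """Two aircraft pick arrival times minimizing (t_i - pref_i)^2 subject
          to the separation constraint t1 - t0 >= min_gap, via subgradient
          ascent on the dual variable lam."""
          lam = 0.0                                    # price on the coupling constraint
          for _ in range(iters):
              t0 = prefs[0] - lam / 2.0                # local subproblem of aircraft 0
              t1 = prefs[1] + lam / 2.0                # local subproblem of aircraft 1
              lam = max(0.0, lam + step * (min_gap - (t1 - t0)))   # price update
          return t0, t1

      t0, t1 = dual_decomposition(prefs=(100.0, 100.5), min_gap=2.0)
      # Converges toward t0 = 99.25, t1 = 101.25: the separation is enforced
      # at minimum total deviation from the preferred arrival times.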

  8. Towards Resolving the Crab Sigma-Problem: A Linear Accelerator?

    NASA Technical Reports Server (NTRS)

    Contopoulos, Ioannis; Kazanas, Demosthenes; White, Nicholas E. (Technical Monitor)

    2002-01-01

    Using the exact solution of the axisymmetric pulsar magnetosphere derived in a previous publication and the conservation laws of the associated MHD flow, we show that the Lorentz factor of the outflowing plasma increases linearly with distance from the light cylinder. Therefore, the ratio of the Poynting to particle energy flux, generically referred to as sigma, decreases inversely proportional to distance, from a large value (typically ≳10^4) near the light cylinder to sigma ≈ 1 at a transition distance R_trans. Beyond this distance the inertial effects of the outflowing plasma become important and the magnetic field geometry must deviate from the almost monopolar form it attains between R_lc and R_trans. We anticipate that this is achieved by collimation of the poloidal field lines toward the rotation axis, ensuring that the magnetic field pressure in the equatorial region will fall off faster than 1/R^2 (R being the cylindrical radius). This leads both to a value sigma = sigma_s << 1 at the nebular reverse shock at distance R_s (R_s >> R_trans) and to a component of the flow perpendicular to the equatorial component, as required by observation. The presence of the strong shock at R = R_s allows for the efficient conversion of kinetic energy into radiation. We speculate that the Crab pulsar is unique in requiring sigma_s ≈ 3 × 10^-3 because of its small translational velocity, which allowed the shock distance R_s to grow to values much greater than R_trans.

  9. THE VELOCITY WIDTH FUNCTION OF GALAXIES FROM THE 40% ALFALFA SURVEY: SHEDDING LIGHT ON THE COLD DARK MATTER OVERABUNDANCE PROBLEM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Papastergis, Emmanouil; Martin, Ann M.; Giovanelli, Riccardo

    The ongoing Arecibo Legacy Fast ALFA (ALFALFA) survey is a wide-area, extragalactic HI-line survey conducted at the Arecibo Observatory. Sources have so far been extracted over approx. 3000 sq deg of sky (40% of its final area), resulting in the largest HI-selected sample to date. We measure the space density of HI-bearing galaxies as a function of their observed velocity width (uncorrected for inclination) down to w = 20 km/s, a factor of two lower than the previous-generation HI Parkes All-Sky Survey. We confirm previous results that indicate a substantial discrepancy at low widths between the observational distribution and the theoretical one expected in a cold dark matter (CDM) universe. In particular, a comparison with synthetic galaxy samples populating state-of-the-art CDM simulations implies a factor of approx. 8 difference in the abundance of galaxies with w = 50 km/s (increasing to a factor of approx. 100 when extrapolated to the ALFALFA limit of w = 20 km/s). We furthermore identify possible solutions, including a keV warm dark matter scenario and the fact that HI disks in low-mass galaxies are usually not extended enough to probe the full amplitude of the galactic rotation curve. In this latter case, we can statistically infer the relationship between the measured HI rotational velocity of a galaxy and the mass of its host CDM halo. Observational verification of the presented relationship at low velocities would provide an important test of the validity of the established dark matter model.

  10. Solution of monotone complementarity and general convex programming problems using a modified potential reduction interior point method

    DOE PAGES

    Huang, Kuo -Ling; Mehrotra, Sanjay

    2016-11-08

    We present a homogeneous algorithm equipped with a modified potential function for the monotone complementarity problem. We show that this potential function is reduced by at least a constant amount if a scaled Lipschitz condition (SLC) is satisfied. A practical algorithm based on this potential function is implemented in a software package named iOptimize. The implementation in iOptimize maintains global linear and polynomial time convergence properties, while achieving practical performance. It either successfully solves the problem, or concludes that the SLC is not satisfied. When compared with the mature software package MOSEK (barrier solver version 6.0.0.106), iOptimize solves convex quadratic programming problems, convex quadratically constrained quadratic programming problems, and general convex programming problems in fewer iterations. Moreover, several problems for which MOSEK fails are solved to optimality. In addition, we also find that iOptimize detects infeasibility more reliably than the general nonlinear solvers Ipopt (version 3.9.2) and Knitro (version 8.0).

  11. Dynamic programming and graph algorithms in computer vision.

    PubMed

    Felzenszwalb, Pedro F; Zabih, Ramin

    2011-04-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting since, by carefully exploiting problem structure, they often provide nontrivial guarantees concerning solution quality. In this paper, we review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo, the mid-level problem of interactive object segmentation, and the high-level problem of model-based recognition.
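
    For a flavor of how dynamic programming applies to the scanline stereo problem mentioned above, here is a toy sketch that combines a data term with a disparity-smoothness term along a single scanline; the images, penalty weight, and disparity range are made up for illustration and this is not any specific paper's method.

    ```python
    # Toy scanline-stereo DP: minimize data cost plus smoothness over disparities.
    import numpy as np

    left = np.array([5, 5, 9, 9, 5, 5], dtype=float)
    right = np.array([5, 9, 9, 5, 5, 5], dtype=float)
    max_d, smooth = 2, 1.0

    n = len(left)
    cost = np.full((n, max_d + 1), np.inf)
    for x in range(n):
        for d in range(max_d + 1):
            if x - d < 0:
                continue                                  # disparity out of range
            data = abs(left[x] - right[x - d])            # data (matching) term
            if x == 0:
                cost[x, d] = data
            else:
                # smoothness term penalizes disparity jumps between neighbors
                cost[x, d] = data + min(cost[x - 1, dp] + smooth * abs(d - dp)
                                        for dp in range(max_d + 1))

    # best disparity at the last pixel (full backtracking omitted for brevity)
    print(np.argmin(cost[-1]))
    ```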

  12. An optimization method for the problems of thermal cloaking of material bodies

    NASA Astrophysics Data System (ADS)

    Alekseev, G. V.; Levin, V. A.

    2016-11-01

    Inverse heat-transfer problems related to constructing special thermal devices such as cloaking shells, thermal-illusion or thermal-camouflage devices, and heat-flux concentrators are studied. The heat-diffusion equation with a variable heat-conductivity coefficient is used as the initial heat-transfer model. An optimization method is used to reduce the above inverse problems to respective control problems. The solvability of the control problem is proved, an optimality system that describes necessary extremum conditions is derived, and a numerical algorithm for solving the control problem is proposed.

  13. System design optimization for a Mars-roving vehicle and perturbed-optimal solutions in nonlinear programming

    NASA Technical Reports Server (NTRS)

    Pavarini, C.

    1974-01-01

    Work in two somewhat distinct areas is presented. First, the optimal system design problem for a Mars-roving vehicle is attacked by creating static system models and a system evaluation function and optimizing via nonlinear programming techniques. The second area concerns the problem of perturbed-optimal solutions. Given an initial perturbation in an element of the solution to a nonlinear programming problem, a linear method is determined to approximate the optimal readjustments of the other elements of the solution. Then, the sensitivity of the Mars rover designs is described by application of this method.

  14. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    NASA Astrophysics Data System (ADS)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. Both algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration count, and program simplicity in finding the optimal solution.
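
    To make the algorithm being compared concrete, a minimal global-best PSO sketch follows; the inertia weight and acceleration coefficients are common textbook defaults, not the parameters used in the paper's timetabling experiments.

    ```python
    # Minimal global-best PSO on a 1-D test function (illustrative only).
    import numpy as np
    rng = np.random.default_rng(0)

    f = lambda x: (x - 3.0) ** 2          # objective to minimize
    x = rng.uniform(-10, 10, size=20)     # particle positions
    v = np.zeros_like(x)                  # particle velocities
    pbest, pval = x.copy(), f(x)          # personal bests
    gbest = pbest[np.argmin(pval)]        # global best

    for _ in range(100):
        r1, r2 = rng.random(20), rng.random(20)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        better = f(x) < pval
        pbest[better], pval[better] = x[better], f(x)[better]
        gbest = pbest[np.argmin(pval)]
    print(gbest)  # converges to approx. 3.0
    ```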

  15. Optimal Control of Evolution Mixed Variational Inclusions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alduncin, Gonzalo, E-mail: alduncin@geofisica.unam.mx

    2013-12-15

    Optimal control problems of primal and dual evolution mixed variational inclusions, in reflexive Banach spaces, are studied. The solvability analysis of the mixed state systems is established via duality principles. The optimality analysis is performed in terms of perturbation conjugate duality methods, and proximation penalty-duality algorithms to mixed optimality conditions are further presented. Applications to nonlinear diffusion constrained problems as well as quasistatic elastoviscoplastic bilateral contact problems exemplify the theory.

  16. Optimal control and optimal trajectories of regional macroeconomic dynamics based on the Pontryagin maximum principle

    NASA Astrophysics Data System (ADS)

    Bulgakov, V. K.; Strigunov, V. V.

    2009-05-01

    The Pontryagin maximum principle is used to prove a theorem concerning optimal control in regional macroeconomics. A boundary value problem for optimal trajectories of the state and adjoint variables is formulated, and optimal curves are analyzed. An algorithm is proposed for solving the boundary value problem of optimal control. The performance of the algorithm is demonstrated by computing an optimal control and the corresponding optimal trajectories.
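
    The boundary value problem that the maximum principle produces can be attacked numerically by shooting on the unknown initial costate. The sketch below does this for a toy linear-quadratic problem (minimize the integral of x^2 + u^2 with dx/dt = u, x(0) = 1, and a free endpoint), which is an illustrative assumption and not the regional macroeconomic model of the paper.

    ```python
    # Shooting-method sketch for a PMP two-point boundary value problem.
    # PMP gives u = -p/2, so dx/dt = -p/2, dp/dt = -2x, with p(T) = 0.
    import numpy as np
    from scipy.integrate import solve_ivp
    from scipy.optimize import brentq

    T = 1.0

    def rhs(t, y):
        x, p = y
        return [-p / 2.0, -2.0 * x]       # state and costate dynamics

    def terminal_costate(p0):
        sol = solve_ivp(rhs, (0.0, T), [1.0, p0], rtol=1e-9)
        return sol.y[1, -1]               # p(T); the root makes this zero

    p0 = brentq(terminal_costate, 0.0, 5.0)  # shoot on the initial costate
    print("p(0) =", p0)                   # analytic answer: 2*tanh(1)
    ```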

  17. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    NASA Astrophysics Data System (ADS)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.
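
    The Lagrange-multiplier step in such analyses can be illustrated on the simplest case, the global minimum-variance portfolio under a budget constraint; the covariance matrix below is invented for the example, and the closed form w = C^-1 1 / (1' C^-1 1) follows from stationarity of the Lagrangian.

    ```python
    # Minimum-variance weights under sum(w) = 1, via the Lagrangian's
    # stationarity condition (illustrative covariance, not data from the paper).
    import numpy as np

    C = np.array([[0.04, 0.01, 0.00],
                  [0.01, 0.09, 0.02],
                  [0.00, 0.02, 0.16]])
    ones = np.ones(3)

    # L(w, m) = w'Cw/2 - m(1'w - 1)  =>  w = m C^{-1} 1; budget fixes m.
    Cinv_1 = np.linalg.solve(C, ones)
    w = Cinv_1 / (ones @ Cinv_1)
    print(w, w.sum())  # weights summing to 1
    ```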

  18. Evaluation of Genetic Algorithm Concepts using Model Problems. Part 1; Single-Objective Optimization

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2003-01-01

    A genetic-algorithm-based optimization approach is described and evaluated using a simple hill-climbing model problem. The model problem utilized herein allows for the broad specification of a large number of search spaces, including spaces with an arbitrary number of genes or decision variables and an arbitrary number of hills or modes. In the present study, only single-objective problems are considered. Results indicate that the genetic algorithm optimization approach is flexible in application and extremely reliable, providing optimal results for all problems attempted. The most difficult problems - those with large hyper-volumes and multi-mode search spaces containing a large number of genes - require a large number of function evaluations for GA convergence, but they always converge.

  19. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method applied to the DTOC optimality system.
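
    A generic forward block Gauss-Seidel sweep for a block tridiagonal system A_i x_{i-1} + B_i x_i + C_i x_{i+1} = b_i looks as follows; the random, strongly diagonally dominant blocks are an assumption for the demo and not an actual discrete-time optimality system.

    ```python
    # Forward block Gauss-Seidel sweeps on a block tridiagonal system.
    import numpy as np
    rng = np.random.default_rng(1)

    N, n = 4, 3
    B = [np.eye(n) * 4 for _ in range(N)]                      # invertible diagonal blocks
    A = [rng.standard_normal((n, n)) * 0.1 for _ in range(N)]  # sub-diagonal blocks
    C = [rng.standard_normal((n, n)) * 0.1 for _ in range(N)]  # super-diagonal blocks
    b = [rng.standard_normal(n) for _ in range(N)]
    x = [np.zeros(n) for _ in range(N)]

    for sweep in range(50):
        for i in range(N):
            r = b[i].copy()
            if i > 0:
                r -= A[i] @ x[i - 1]   # uses the already-updated left neighbor
            if i < N - 1:
                r -= C[i] @ x[i + 1]   # uses the previous-sweep right neighbor
            x[i] = np.linalg.solve(B[i], r)
    ```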

  20. Heterogeneous quantum computing for satellite constellation optimization: solving the weighted k-clique problem

    NASA Astrophysics Data System (ADS)

    Bass, Gideon; Tomlin, Casey; Kumar, Vaibhaw; Rihaczek, Pete; Dulny, Joseph, III

    2018-04-01

    NP-hard optimization problems scale very rapidly with problem size, becoming unsolvable with brute-force methods, even with supercomputing resources. Typically, such problems have been approximated with heuristics. However, these methods still take a long time and are not guaranteed to find an optimal solution. Quantum computing offers the possibility of significant speed-up and improved solution quality. Current quantum annealing (QA) devices are designed to solve difficult optimization problems, but they are limited by hardware size and qubit connectivity restrictions. We present a novel heterogeneous computing stack that combines QA and classical machine learning, allowing the use of QA on problems larger than the hardware limits of the quantum device. We report experiments on a real-world satellite constellation problem formulated as the weighted k-clique problem. Through this experiment, we provide insight into the state of quantum machine learning.

  1. Decomposition method for zonal resource allocation problems in telecommunication networks

    NASA Astrophysics Data System (ADS)

    Konnov, I. V.; Kashuba, A. Yu

    2016-11-01

    We consider problems of optimal resource allocation in telecommunication networks. We first give an optimization formulation for the case where the network manager aims to distribute a homogeneous resource (bandwidth) among users of one region with quadratic charge and fee functions, and we present simple and efficient solution methods. Next, we consider a more general problem for a provider of a wireless communication network divided into zones (clusters) with common capacity constraints. We obtain a convex quadratic optimization problem involving capacity and balance constraints. By applying the dual Lagrangian method to the capacity constraint, we propose reducing the initial problem to a one-dimensional optimization problem, where each evaluation of the cost function requires the independent solution of zonal problems that coincide with the single-region problem above. Results of computational experiments confirm the applicability of the new methods.
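
    The reduction described above can be sketched as a one-dimensional search over the multiplier of the shared capacity constraint, with each dual evaluation decomposing into independent zonal subproblems; the quadratic zonal costs and data below are invented for illustration.

    ```python
    # One-dimensional dual search; each evaluation solves the zones independently.
    import numpy as np
    from scipy.optimize import minimize_scalar

    a = np.array([4.0, 6.0, 8.0])   # per-zone demand parameters (illustrative)
    capacity = 12.0                 # shared capacity constraint: sum(x) <= capacity

    def zonal_allocations(lam):
        # each zone solves min_{x >= 0} (x - a_i)^2 + lam * x  =>  x_i = max(a_i - lam/2, 0)
        return np.maximum(a - lam / 2.0, 0.0)

    def neg_dual(lam):
        x = zonal_allocations(lam)   # independent zonal solutions
        return -(np.sum((x - a) ** 2) + lam * (x.sum() - capacity))

    res = minimize_scalar(neg_dual, bounds=(0.0, 20.0), method='bounded')
    print(zonal_allocations(res.x), zonal_allocations(res.x).sum())
    ```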

  2. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.

  3. Topology-changing shape optimization with the genetic algorithm

    NASA Astrophysics Data System (ADS)

    Lamberson, Steven E., Jr.

    The goal is to take a traditional shape optimization problem statement and modify it slightly to allow for prescribed changes in topology. This modification enables greater flexibility in the choice of parameters for the topology optimization problem while improving the direct physical relevance of the results. It involves changing the optimization problem statement from a nonlinear programming problem into a form of mixed-discrete nonlinear programming problem. The present work demonstrates one possible way of using the Genetic Algorithm (GA) to solve such a problem, including the use of "masking bits" and a new modification to the bit-string affinity (BSA) termination criterion specifically designed for problems with "masking bits." A simple ten-bar truss problem proves the utility of the modified BSA for this type of problem. A more complicated two-dimensional bracket problem is solved using both the proposed approach and a more traditional topology optimization approach (Solid Isotropic Microstructure with Penalization, or SIMP) to enable comparison. The proposed approach is able to solve problems with both local and global constraints, which traditional methods cannot do. The proposed approach has a significantly higher computational burden, on the order of 100 times larger than SIMP, although it is able to offset this with parallel computing.

  4. Genetic algorithm parameters tuning for resource-constrained project scheduling problem

    NASA Astrophysics Data System (ADS)

    Tian, Xingke; Yuan, Shengrui

    2018-04-01

    The Resource-Constrained Project Scheduling Problem (RCPSP) is an important class of scheduling problem. To achieve a given optimization goal, such as the shortest duration, the smallest cost, or resource balance, the start and finish of all tasks must be arranged so as to satisfy the project's timing and resource constraints. In theory, the problem is NP-hard, and many model variants exist. Many combinatorial optimization problems are special cases of RCPSP, such as job shop scheduling, flow shop scheduling, and so on. At present, the genetic algorithm (GA) has been used to deal with the classical RCPSP and has achieved remarkable results. Many scholars have also studied improved genetic algorithms for the RCPSP, which solve it more efficiently and accurately. However, these studies did not optimize the selection of the main parameters of the genetic algorithm. Generally, the empirical method is used, but it cannot guarantee optimal parameters. In this paper, we address this blind selection of parameters in the process of solving the RCPSP: we performed a sampling analysis, established a proxy model, and ultimately solved for the optimal parameters.

  5. A new approach to impulsive rendezvous near circular orbit

    NASA Astrophysics Data System (ADS)

    Carter, Thomas; Humi, Mayer

    2012-04-01

    A new approach is presented for the problem of planar optimal impulsive rendezvous of a spacecraft in an inertial frame near a circular orbit in a Newtonian gravitational field. The total characteristic velocity to be minimized is replaced by a related characteristic-value function, and this related optimization problem can be solved in closed form. The solution of this problem is shown to approach the solution of the original problem in the limit as the boundary conditions approach those of a circular orbit. Using a form of primer-vector theory, the problem is formulated in a way that leads to relatively easy calculation of the optimal velocity increments. A certain vector that can easily be calculated from the boundary conditions determines the number of impulses required for solution of the optimization problem and is also useful in the computation of these velocity increments. Necessary and sufficient conditions for boundary conditions to require exactly three nonsingular, non-degenerate impulses for solution of the related optimal rendezvous problem, and a means of calculating these velocity increments, are presented. A simple example of a three-impulse rendezvous problem is solved and the resulting trajectory is depicted. Optimal non-degenerate, nonsingular two-impulse rendezvous for the related problem is found to consist of four categories of solutions, depending on the four ways the primer vector locus intersects the unit circle. Necessary and sufficient conditions for each category of solutions are presented. The regions of boundary values that admit each category of solutions of the related problem are found, and in each case a closed-form solution for the optimal velocity increments is presented. Similar results are presented for the simpler optimal rendezvous that requires only one impulse. For brevity, degenerate and singular solutions are not discussed in detail, but will be presented in a following study. Although this approach is thought to provide simpler computations than existing methods, its main contribution may be in establishing a new approach to the more general problem.

  6. A 70 Kiloparsec X-Ray Tail in the Cluster A3627

    NASA Technical Reports Server (NTRS)

    Sun, M.; Jones, C.; Forman, W.; Nulsen, P. E. J.; Donahue, M.; Voit, G. M.

    2006-01-01

    We present the discovery of a 70 kpc X-ray tail behind the small late-type galaxy ESO 137-001, in the nearby, hot (T = 6.5 keV) merging cluster A3627, from both Chandra and XMM-Newton observations. The tail has a length-to-width ratio of approx. 10. It is luminous (L(0.5-2 keV) approx. 10(exp 41) ergs/s), with a temperature of approx. 0.7 keV and an X-ray gas mass of approx. 10(exp 9) solar masses (approx. 10% of the galaxy's stellar mass). We interpret this tail as the stripped interstellar medium of ESO 137-001 mixed with the hot cluster medium, with this blue galaxy being converted into a gas-poor galaxy. Three X-ray point sources are detected on the axis of the tail, which may imply active star formation there. The straightness and narrowness of the tail also imply that the turbulence in the intracluster medium is not strong on scales of 20-70 kpc.

  7. In Situ Investigation of Iron Meteorites at Meridiani Planum Mars

    NASA Technical Reports Server (NTRS)

    Fleischer, I.; Klingelhoefer, G.; Schroeder, C.; Morris, R. V.; Golombek, M.; Ashley, J. W.

    2010-01-01

    The Mars Exploration Rover Opportunity has encountered four iron meteorites at its landing site in Meridiani Planum. The first one, informally named "Heat Shield Rock", measuring approx. 30 by 15 cm, was encountered in January 2005 [1, 2] and officially recognized as the first iron meteorite on the martian surface with the name "Meridiani Planum", after the location of its find [3]. We will refer to it as "Heat Shield Rock" to avoid confusion with the site. Between July and October 2009, separated by approx. 10 km from Heat Shield Rock, three other iron meteorite fragments were encountered, informally named "Block Island" (approx. 60 cm across), "Shelter Island" (approx. 50 by 20 cm), and "Mackinac Island" (approx. 30 cm across). Heat Shield Rock and Block Island, the two specimens investigated in detail, are shown in Figure 1. Here, we focus on the meteorites' chemistry and mineralogy. An overview in the mission context is given in [4]; other abstracts discuss their morphology [5], photometric properties [6], and their provenance [7].

  8. New evidence favoring multilevel decomposition and optimization

    NASA Technical Reports Server (NTRS)

    Padula, Sharon L.; Polignone, Debra A.

    1990-01-01

    The issue of the utility of multilevel decomposition and optimization remains controversial. To date, only the structural optimization community has actively developed and promoted multilevel optimization techniques. However, even this community acknowledges that multilevel optimization is ideally suited for a rather limited set of problems. It is warned that decomposition typically requires eliminating local variables by using global variables and that this in turn causes ill-conditioning of the multilevel optimization by adding equality constraints. The purpose is to suggest a new multilevel optimization technique. This technique uses behavior variables, in addition to design variables and constraints, to decompose the problem. The new technique removes the need for equality constraints, simplifies the decomposition of the design problem, simplifies the programming task, and improves the convergence speed of multilevel optimization compared to conventional optimization.

  9. Primer-optimized results and trends for circular phasing and other circle-to-circle impulsive coplanar rendezvous

    NASA Astrophysics Data System (ADS)

    Sandrik, Suzannah

    Optimal solutions to the impulsive circular phasing problem, a special class of orbital maneuver in which impulsive thrusts shift a vehicle's orbital position by a specified angle, are found using primer vector theory. The complexities of optimal circular phasing are identified and illustrated using specifically designed Matlab software tools. Information from these new visualizations is applied to explain discrepancies in locally optimal solutions found by previous researchers. Two non-phasing circle-to-circle impulsive rendezvous problems are also examined to show the applicability of the tools developed here to a broader class of problems and to show how optimizing these rendezvous problems differs from the circular phasing case.

  10. On Born's Conjecture about Optimal Distribution of Charges for an Infinite Ionic Crystal

    NASA Astrophysics Data System (ADS)

    Bétermin, Laurent; Knüpfer, Hans

    2018-04-01

    We study the problem of the optimal charge distribution on the sites of a fixed Bravais lattice. In particular, we prove Born's conjecture about the optimality of the rock-salt alternating distribution of charges on a cubic lattice (and more generally on a d-dimensional orthorhombic lattice). Furthermore, we study this problem on the two-dimensional triangular lattice, and we prove the optimality of a two-component honeycomb distribution of charges. The results hold for a class of completely monotone interaction potentials which includes Coulomb-type interactions for d ≥ 3. In a more general setting, we derive a connection between the optimal charge problem and a minimization problem for the translated lattice theta function.
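
    For reference, the translated lattice theta function that such minimization problems involve is conventionally defined as below (a standard form; the exact normalization used by the authors may differ):

    ```latex
    % theta function of the lattice L, translated by x, at parameter \alpha > 0
    \theta_{L}(\alpha; x) = \sum_{p \in L} e^{-\pi \alpha \lvert p + x \rvert^{2}}
    ```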

  11. Multimodal optimization by using hybrid of artificial bee colony algorithm and BFGS algorithm

    NASA Astrophysics Data System (ADS)

    Anam, S.

    2017-10-01

    Optimization has become one of the important fields in mathematics. Many problems in engineering and science can be formulated as optimization problems, and they may have many local optima. The optimization problem with many local optima, known as a multimodal optimization problem, poses the challenge of finding the global solution. Several metaheuristic methods have been proposed to solve multimodal optimization problems, such as Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the Artificial Bee Colony (ABC) algorithm. The performance of the ABC algorithm is better than or similar to that of other population-based algorithms, with the advantage of employing fewer control parameters. The ABC algorithm also has the advantages of strong robustness, fast convergence, and high flexibility. However, it has the disadvantage of premature convergence in the later search period, and the accuracy of the optimal value sometimes cannot meet requirements. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm is a good iterative method for finding a local optimum, and it compares well with other local optimization methods. Based on the advantages of the ABC algorithm and the BFGS algorithm, this paper proposes a hybrid of the two to solve the multimodal optimization problem. In the first step, the ABC algorithm is run to find a point; in the second step, that point is used as the initial point of the BFGS algorithm. The results show that the hybrid method overcomes the problems of the basic ABC algorithm for almost all test functions. However, if the shape of the function is flat, the proposed method does not work well.
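
    The two-stage structure of the hybrid can be mimicked in a few lines. In the hedged sketch below, cheap uniform random sampling stands in for the ABC global stage (the real algorithm is more involved), and scipy's BFGS performs the local refinement; the Rastrigin test function is a standard multimodal example, not necessarily one from the paper.

    ```python
    # Global-then-local hybrid: random sampling stage followed by BFGS polish.
    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        x = np.atleast_1d(x)
        return np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0)  # Rastrigin

    rng = np.random.default_rng(0)
    samples = rng.uniform(-5.12, 5.12, size=(500, 2))     # crude global stage
    start = samples[np.argmin([f(s) for s in samples])]   # best sampled point

    res = minimize(f, start, method='BFGS')               # local refinement stage
    print(res.x, res.fun)
    ```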

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, William E.; Siirola, John Daniel

    We describe new capabilities for modeling MPEC problems within the Pyomo modeling software. These capabilities include new modeling components that represent complementarity conditions, modeling transformations for re-expressing models with complementarity conditions in other forms, and meta-solvers that apply transformations and numeric optimization solvers to optimize MPEC problems. We illustrate the breadth of Pyomo's modeling capabilities for MPEC problems, and we describe how Pyomo's meta-solvers can perform local and global optimization of MPEC problems.
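
    A minimal sketch of the complementarity component in Pyomo's mpec extension is shown below; the tiny model is illustrative, and the commented transformation is one of the re-expression steps the meta-solvers can apply.

    ```python
    # Expressing a complementarity condition with pyomo.mpec (toy model).
    from pyomo.environ import ConcreteModel, Var, Objective
    from pyomo.mpec import Complementarity, complements

    m = ConcreteModel()
    m.x = Var(bounds=(0, None))
    m.y = Var(bounds=(0, None))
    m.obj = Objective(expr=(m.x - 1) ** 2 + (m.y - 1) ** 2)

    # 0 <= y  complementary to  y - x + 2 >= 0
    m.compl = Complementarity(expr=complements(m.y >= 0, m.y - m.x + 2 >= 0))

    # One of Pyomo's transformations re-expresses this for standard NLP solvers:
    # TransformationFactory('mpec.simple_nonlinear').apply_to(m)
    ```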

  13. Genetic algorithms for multicriteria shape optimization of induction furnace

    NASA Astrophysics Data System (ADS)

    Kůs, Pavel; Mach, František; Karban, Pavel; Doležel, Ivo

    2012-09-01

    In this contribution we deal with the multi-criteria shape optimization of an induction furnace. We want to find shape parameters of the furnace such that two different criteria are optimized. Since they cannot be optimized simultaneously, instead of one optimum we find a set of partially optimal designs, the so-called Pareto front. We compare two different approaches to the optimization, one using a nonlinear conjugate gradient method and the second using a variant of a genetic algorithm. As can be seen from the numerical results, the genetic algorithm seems to be the right choice for this problem. The direct problem (a coupled problem consisting of the magnetic and heat fields) is solved using our own code, Agros2D. It uses finite elements of higher order, leading to a fast and accurate solution of a relatively complicated coupled problem. It also provides advanced scripting support, allowing us to prepare a parametric model of the furnace and simply incorporate various types of optimization algorithms.

  14. A One-Layer Recurrent Neural Network for Real-Time Portfolio Optimization With Probability Criterion.

    PubMed

    Liu, Qingshan; Dang, Chuangyin; Huang, Tingwen

    2013-02-01

    This paper presents a decision-making model described by a recurrent neural network for dynamic portfolio optimization. The portfolio-optimization problem is first converted into a constrained fractional programming problem. Since the objective function in the programming problem is not convex, traditional optimization techniques are no longer applicable for solving it. Fortunately, the objective function in the fractional programming is pseudoconvex on the feasible region. This leads to a one-layer recurrent neural network modeled by means of a discontinuous dynamic system. To ensure optimal solutions for portfolio optimization, the convergence of the proposed neural network is analyzed and proved. In fact, the neural network is guaranteed to obtain the optimal solutions for portfolio-investment advice if some mild conditions are satisfied. A numerical example with simulation results substantiates the effectiveness and illustrates the characteristics of the proposed neural network.

  15. Venus: The First Habitable World of Our Solar System?

    NASA Technical Reports Server (NTRS)

    Way, Michael Joseph; Del Genio, Anthony; Kiang, Nancy; Sohl, Linda; Clune, Tom; Aleinov, Igor; Kelley, Maxwell

    2015-01-01

    A great deal of effort in the search for life off-Earth in the past 20+ years has focused on Mars via a plethora of space- and ground-based missions. While there is good evidence that surface liquid water existed on Mars in substantial quantities, it is not clear how long such water existed. Most studies point to this water existing billions of years ago. However, those familiar with the Faint Young Sun hypothesis for Earth will quickly realize that this problem is even more pronounced for Mars. In this context, recent simulations have been completed with the GISS 3-D GCM (1) of paleo Venus (approx. 3 billion years ago), when the Sun was approx. 25% less luminous than today. A combination of a less luminous Sun and a slow rotation rate reveal that Venus could have had conditions on its surface amenable to surface liquid water. Previous work has also provided bounds on how much water Venus could have had using measured D/H ratios. It is possible that fewer assumptions have to be made to make Venus an early habitable world than have to be made for Mars, even though Venus is a much tougher world on which to confirm this hypothesis.

  16. Investigation of a Light Gas Helicon Plasma Source for the VASIMR Space Propulsion System

    NASA Technical Reports Server (NTRS)

    Squire, J. P.; Chang-Diaz, F. R.; Jacobson, V. T.; Glover, T. W.; Baity, F. W.; Carter, M. D.; Goulding, R. H.; Bengtson, R. D.; Bering, E. A., III

    2003-01-01

    An efficient plasma source producing a high-density (approx. 10(exp 19)/cu m) light gas (e.g., H, D, or He) flowing plasma with a high degree of ionization is a critical component of the Variable Specific Impulse Magnetoplasma Rocket (VASIMR) concept. We are developing an antenna to apply ICRF power near the fundamental ion cyclotron resonance to further accelerate the plasma ions to velocities appropriate for space propulsion applications. The high degree of ionization and a low vacuum background pressure are important to eliminate the problem of radial losses due to charge exchange. We have performed parametric studies (e.g., gas flow, power (0.5-3 kW), magnetic field, frequency (25 and 50 MHz)) of a helicon operating with gas (H2, D2, He, N2, and Ar) injected at one end with a high magnetic mirror downstream of the antenna. We have explored operation with a cusp and a mirror field upstream. Plasma flows into a low background vacuum (<10(exp -4) torr) at velocities higher than the ion sound speed. High densities (approx. 10(exp 19)/cu m) have been achieved at the location where ICRF will be applied, just downstream of the magnetic mirror.

  17. A SUCCESSFUL BROADBAND SURVEY FOR GIANT Ly{alpha} NEBULAE. II. SPECTROSCOPIC CONFIRMATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prescott, Moire K. M.; Dey, Arjun; Jannuzi, Buell T., E-mail: mkpresco@physics.ucsb.edu

    2013-01-01

    Using a systematic broadband search technique, we have carried out a survey for large Ly-alpha nebulae (or Ly-alpha "blobs") at 2 less than or approx. z less than or approx. 3 within 8.5 sq deg of the NOAO Deep Wide-Field Survey Bootes field, corresponding to a total survey comoving volume of approx. 10(exp 8) h(sub 70)(exp -3) Mpc(exp 3). Here, we present our spectroscopic observations of candidate giant Ly-alpha nebulae. Of 26 candidates targeted, 5 were confirmed to have Ly-alpha emission at 1.7 less than or approx. z less than or approx. 2.7, 4 of which were new discoveries. The confirmed Ly-alpha nebulae span a range of Ly-alpha equivalent widths, colors, sizes, and line ratios, and most show spatially extended continuum emission. The remaining candidates did not reveal any strong emission lines, but instead exhibit featureless, diffuse, blue continuum spectra. Their nature remains mysterious, but we speculate that some of these might be Ly-alpha nebulae lying within the redshift desert (i.e., 1.2 less than or approx. z less than or approx. 1.6). Our spectroscopic follow-up confirms the power of using deep broadband imaging to search for the bright end of the Ly-alpha nebula population across enormous comoving volumes.

  18. Slow-roll k-essence

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chiba, Takeshi; Dutta, Sourish; Scherrer, Robert J.

    We derive slow-roll conditions for thawing k-essence with a separable Lagrangian p(X, phi) = F(X)V(phi). We examine the evolution of the equation of state parameter, w, as a function of the scale factor a, for the case where w is close to -1. We find two distinct cases, corresponding to X approx. = 0 and F(sub X) approx. = 0, respectively. For the case where X approx. = 0, the evolution of phi and hence w is described by only two parameters, and w(a) is model independent and coincides with similar behavior seen in thawing quintessence models. This result also extends to nonseparable Lagrangians where X approx. = 0. For the case F(sub X) approx. = 0, an expression is derived for w(a), but this expression depends on the potential V(phi), so there is no model-independent limiting behavior. For the X approx. = 0 case, we derive observational constraints on the two parameters of the model, w(sub 0) (the present-day value of w) and K, which parametrizes the curvature of the potential. We find that the observations sharply constrain w(sub 0) to be close to -1, but provide very poor constraints on K.

  19. Inductive current startup in large tokamaks with expanding minor radius and RF assist

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borowski, S.K.

    1983-01-01

    Auxiliary RF heating of electrons before and during the current rise phase of a large tokamak, such as the Fusion Engineering Device, is examined as a means of reducing both the initiation loop voltage and the resistive flux expenditure during startup. Prior to current initiation, 1 to 2 MW of electron cyclotron resonance heating power at approx. 90 GHz is used to create a small volume of high-conductivity plasma (T(sub e) approx. = 100 eV, n(sub e) approx. = 10(exp 19)/cu m) near the upper hybrid resonance (UHR) region. This plasma conditioning permits a small-radius (a(sub 0) less than or approx. 0.4 m) current channel to be established with a relatively low initial loop voltage (less than or approx. 25 V, as opposed to approx. 100 V without RF assist). During the subsequent plasma expansion and current ramp phase, additional RF power is introduced to reduce volt-second consumption due to plasma resistance. To study the preheating phase, a near-classical particle and energy transport model is developed to estimate the electron heating efficiency in a currentless toroidal plasma. The model assumes that preferential electron heating at the UHR leads to the formation of an ambipolar sheath potential between the neutral plasma and the conducting vacuum vessel and limiter.

  20. Temperature Dependence of Radiation Induced Conductivity in Insulators

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennison, J. R.; Gillespie, Jodie; Hodges, Joshua

    2009-03-10

    This study measures the Radiation Induced Conductivity (RIC) of Low Density Polyethylene (LDPE) over temperatures ranging from approx. 110 K to approx. 350 K. RIC occurs when incident ionizing radiation deposits energy and excites electrons into the conduction band of insulators. Conductivity was measured when a voltage was applied across vacuum-baked, thin-film LDPE polymer samples in a parallel-plate geometry. RIC was calculated as the difference in sample conductivity under no incident radiation and under an incident approx. 4 MeV electron beam at low incident fluxes of 10(exp -4) to 10(exp -1) Gr/sec. The steady-state RIC was found to agree well with the standard power-law relation, sigma(sub RIC) = k(sub RIC) * D-dot(exp delta), between the conductivity sigma and the absorbed dose rate D-dot. Both the proportionality constant, k(sub RIC), and the power, delta, were found to be temperature dependent above approx. 250 K, with behavior consistent with photoconductivity models developed for localized trap states in disordered semiconductors. Below approx. 250 K, k(sub RIC) and delta exhibited little change. The observed difference in temperature dependence might be related to a structural phase transition seen at T(sub beta) approx. 256 K in prior studies of the mechanical and thermodynamic properties of LDPE.

  1. Rapid Acceleration of a Coronal Mass Ejection in the Low Corona and Implications of Propagation

    NASA Technical Reports Server (NTRS)

    Gallagher, Peter T.; Lawrence, Gareth R.; Dennis, Brian R.

    2003-01-01

    A high-velocity Coronal Mass Ejection (CME) associated with the 2002 April 21 X1.5 flare is studied using a unique set of observations from the Transition Region and Coronal Explorer (TRACE), the Ultraviolet Coronagraph Spectrometer (UVCS), and the Large-Angle Spectrometric Coronagraph (LASCO). The event is first observed as a rapid rise in GOES X-rays, followed by simultaneous conjugate footpoint brightenings connected by an ascending loop or flux-rope feature. While expanding, the appearance of the feature remains remarkably constant as it passes through the TRACE 195 A passband and LASCO fields of view, allowing its height-time behavior to be accurately determined. An analytic function, having exponential and linear components, is found to represent the height-time evolution of the CME in the range 1.05-26 R. The CME acceleration rises exponentially to approx. 900 m/sq s within approximately 20 min, peaking at approx. 1400 m/sq s when the leading edge is at approx. 1.7 R. The acceleration subsequently falls off as a slowly varying exponential for approx. 90 min. At distances beyond approx. 3.4 R, the height-time profile is approximately linear with a constant velocity of approx. 2400 km/s. These results are briefly discussed in light of recent kinematic models of CMEs.

  2. Paleoclimatic implications of glacial and postglacial refugia for Pinus pumila in western Beringia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, P M; Lozhkin, A V; Solomatkina, T B

    Palynological results from Julietta Lake currently provide the most direct evidence to support the existence of a glacial refugium for Pinus pumila in the mountains of southwestern Beringia. Both percentages and accumulation rates indicate the evergreen shrub survived until at least approx. 19,000 14C yr B.P. in the Upper Kolyma region. Percentage data suggest numbers dwindled into the late glaciation, whereas pollen accumulation rates point towards a more rapid demise shortly after approx. 19,000 14C yr B.P. Pinus pumila did not re-establish in any great numbers until approx. 8100 14C yr B.P., despite the local presence approx. 9800 14C yr B.P. of Larix dahurica, which shares similar summer temperature requirements. The postglacial thermal maximum (in Beringia approx. 11,000-9000 14C yr B.P.) provided Pinus pumila shrubs with equally harsh albeit different conditions for survival than those present during the LGM. Regional records indicate that in this time of maximum warmth Pinus pumila likely sheltered in a second, lower-elevation refugium. Paleoclimatic models and modern ecology suggest that shifts in the nature of seasonal transitions, and not only seasonal extremes, have played important roles in the history of Pinus pumila over the last approx. 21,000 14C yr B.P.

  3. Optimization in First Semester Calculus: A Look at a Classic Problem

    ERIC Educational Resources Information Center

    LaRue, Renee; Infante, Nicole Engelke

    2015-01-01

    Optimization problems in first semester calculus have historically been a challenge for students. Focusing on the classic optimization problem of finding the minimum amount of fencing required to enclose a fixed area, we examine students' activity through the lens of Tall and Vinner's concept image and Carlson and Bloom's multidimensional…

  4. Energy-Efficient Cognitive Radio Sensor Networks: Parametric and Convex Transformations

    PubMed Central

    Naeem, Muhammad; Illanko, Kandasamy; Karmokar, Ashok; Anpalagan, Alagan; Jaseemuddin, Muhammad

    2013-01-01

    Designing energy-efficient cognitive radio sensor networks is important to intelligently use battery energy and to maximize the sensor network life. In this paper, the problem of determining the power allocation that maximizes the energy-efficiency of cognitive radio-based wireless sensor networks is formed as a constrained optimization problem, where the objective function is the ratio of network throughput and the network power. The proposed constrained optimization problem belongs to a class of nonlinear fractional programming problems. The Charnes-Cooper transformation is used to transform the nonlinear fractional problem into an equivalent concave optimization problem. The structure of the power allocation policy for the transformed concave problem is found to be of a water-filling type. The problem is also transformed into a parametric form for which an ε-optimal iterative solution exists. The convergence of the iterative algorithms is proven, and numerical solutions are presented. The iterative solutions are compared with the optimal solution obtained from the transformed concave problem, and the effects of different system parameters (interference threshold level, the number of primary users and secondary sensor nodes) on the performance of the proposed algorithms are investigated. PMID:23966194
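
    The Charnes-Cooper idea can be demonstrated on a toy linear-fractional program: maximizing (c'x)/(d'x) over a polyhedron becomes a linear program in the lifted variables y = t*x and t = 1/(d'x). The data below are invented for illustration; the paper applies the transformation to a nonlinear fractional problem.

    ```python
    # Charnes-Cooper transformation of a linear-fractional toy problem.
    import numpy as np
    from scipy.optimize import linprog

    c = np.array([3.0, 1.0])      # numerator coefficients
    d = np.array([1.0, 2.0])      # denominator coefficients (positive on the feasible set)
    A = np.array([[1.0, 1.0]])    # constraint A x <= b, x >= 0
    b = np.array([4.0])

    # Lifted variables z = (y, t): maximize c'y s.t. A y - b t <= 0, d'y = 1, z >= 0
    c_lift = np.concatenate([c, [0.0]])
    A_lift = np.hstack([A, -b[:, None]])
    d_lift = np.concatenate([d, [0.0]]).reshape(1, -1)

    res = linprog(c=-c_lift, A_ub=A_lift, b_ub=np.zeros(len(b)),
                  A_eq=d_lift, b_eq=[1.0], bounds=[(0, None)] * 3)
    y, t = res.x[:2], res.x[2]
    x = y / t                      # recover the fractional-program solution
    print(x, (c @ x) / (d @ x))
    ```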

  5. Using Animal Instincts to Design Efficient Biomedical Studies via Particle Swarm Optimization.

    PubMed

    Qiu, Jiaheng; Chen, Ray-Bing; Wang, Weichung; Wong, Weng Kee

    2014-10-01

    Particle swarm optimization (PSO) is an increasingly popular metaheuristic algorithm for solving complex optimization problems. Its popularity is due to its repeated successes in finding an optimum or a near optimal solution for problems in many applied disciplines. The algorithm makes no assumption of the function to be optimized and for biomedical experiments like those presented here, PSO typically finds the optimal solutions in a few seconds of CPU time on a garden-variety laptop. We apply PSO to find various types of optimal designs for several problems in the biological sciences and compare PSO performance relative to the differential evolution algorithm, another popular metaheuristic algorithm in the engineering literature.

  6. THE REDSHIFT AND NATURE OF AzTEC/COSMOS 1: A STARBURST GALAXY AT z = 4.6

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolcic, V.; Capak, P.; Blain, A. W.

    2011-04-20

    Based on broadband/narrowband photometry and Keck DEIMOS spectroscopy, we report a redshift of z = 4.64 (+0.06/-0.08) for AzTEC/COSMOS 1, the brightest submillimeter galaxy (SMG) in the AzTEC/COSMOS field. In addition to the COSMOS-survey X-ray to radio data, we report observations of the source with Herschel/PACS (100, 160 microns), CSO/SHARC II (350 microns), and CARMA and PdBI (3 mm). We do not detect CO(5-4) line emission in the covered redshift ranges, 4.56-4.76 (PdBI/CARMA) and 4.94-5.02 (CARMA). If the line is within this bandwidth, this sets 3-sigma upper limits on the gas mass of less than or approx. 8 x 10(exp 9) M(sub sun) and less than or approx. 5 x 10(exp 10) M(sub sun), respectively (assuming conditions similar to those observed in z approx. 2 SMGs). This could be explained by a low CO excitation in the source. Our analysis of the UV-IR spectral energy distribution of AzTEC 1 shows that it is an extremely young (less than or approx. 50 Myr), massive (M(sub *) approx. 10(exp 11) M(sub sun)), but compact (less than or approx. 2 kpc) galaxy, forming stars at a rate of approx. 1300 M(sub sun)/yr. Our results imply that AzTEC 1 is forming stars in a 'gravitationally bound' regime in which gravity prohibits the formation of a superwind, leading to matter accumulation within the galaxy and further generations of star formation.

  7. THE ASTEROID DISTRIBUTION IN THE ECLIPTIC

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan, Erin Lee; Woodward, Charles E.; Dipaolo, Andrea

    2009-06-15

    We present an analysis of the asteroid surface density distribution of main-belt asteroids (mean perihelion delta approx. = 2.404 AU) in five ecliptic latitude fields, -17 less than or approx. beta (deg) less than or approx. +15, derived from deep Large Binocular Telescope V-band (85% completeness limit V = 21.3 mag) and Spitzer Space Telescope IRAC 8.0 micron (80% completeness limit approx. 103 microJy) fields, enabling us to probe the 0.5-1.0 km diameter asteroid population. We discovered 58 new asteroids in the optical survey as well as 41 new bodies in the Spitzer fields. The derived power-law slopes of the number of asteroids per square degree are similar within each approx. 5 deg ecliptic latitude bin, with a mean value of -0.111 +/- 0.077. For the 23 known asteroids detected in all four IRAC channels, mean albedos range from 0.24 +/- 0.07 to 0.10 +/- 0.05. No low-albedo asteroids (p(sub V) less than or approx. 0.1) were detected in the Spitzer FLS fields, whereas in the SWIRE fields they are frequent. The SWIRE data clearly sample asteroids in the middle and outer belts, providing the first estimates of these km-sized asteroids' albedos. Our observed asteroid number densities at optical wavelengths are generally consistent with those derived from the Standard Asteroid Model within the ecliptic plane. However, we find an overdensity at beta greater than or approx. 5 deg in our optical fields, while the infrared number densities are underdense by factors of 2 to 3 at all ecliptic latitudes.

  8. LONG-DURATION X-RAY FLASH AND X-RAY-RICH GAMMA-RAY BURSTS FROM LOW-MASS POPULATION III STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakauchi, Daisuke; Kashiyama, Kazumi; Nakamura, Takashi

    2012-11-10

    Recent numerical simulations suggest that Population III (Pop III) stars were born with masses not larger than approx. 100 M(sub sun) and typically approx. 40 M(sub sun). By self-consistently considering the jet generation and propagation in the envelope of these low-mass Pop III stars, we find that a Pop III blue supergiant star has the possibility of giving rise to a gamma-ray burst (GRB) even though it keeps a massive hydrogen envelope. We evaluate the observational characteristics of Pop III GRBs and predict that Pop III GRBs have a duration of approx. 10(exp 5) s in the observer frame and a peak luminosity of approx. 5 x 10(exp 50) erg/s. Assuming that the E(sub p)-L(sub p) (or E(sub p)-E(sub gamma,iso)) correlation holds for Pop III GRBs, we find that the spectrum peak energy falls at approximately a few keV (or approx. 100 keV) in the observer frame. We discuss the detectability of Pop III GRBs by future satellite missions such as EXIST and Lobster. If the E(sub p)-E(sub gamma,iso) correlation holds, we have the possibility to detect Pop III GRBs at z approx. 9 as long-duration X-ray-rich GRBs by EXIST. Conversely, if the E(sub p)-L(sub p) correlation holds, we have the possibility to detect Pop III GRBs up to z approx. 19 as long-duration X-ray flashes by Lobster.

  9. The Star Formation Rate Efficiency of Neutral Atomic-Dominated Hydrogen Gas in the Outskirts of Star-Forming Galaxies From z approx. 1 to z approx. 3

    NASA Technical Reports Server (NTRS)

    Rafelski, Marc; Gardner, Jonathan P.; Fumagalli, Michele; Neeleman, Marcel; Teplitz, Harry I.; Grogin, Norman; Koekemoer, Anton M.; Scarlata, Claudia

    2016-01-01

    Current observational evidence suggests that the star formation rate (SFR) efficiency of neutral atomic hydrogen gas measured in damped Ly(alpha) systems (DLAs) at z approx. 3 is more than 10 times lower than predicted by the Kennicutt-Schmidt (KS) relation. To understand the origin of this deficit, and to investigate possible evolution with redshift and galaxy properties, we measure the SFR efficiency of atomic gas at z approx. 1, z approx. 2, and z approx. 3 around star-forming galaxies. We use new robust photometric redshifts in the Hubble Ultra Deep Field to create galaxy stacks in these three redshift bins, and measure the SFR efficiency by combining DLA absorber statistics with the observed rest-frame UV emission in the galaxies' outskirts. We find that the SFR efficiency of H I gas at z > 1 is approx. 1%-3% of that predicted by the KS relation. Contrary to simulations and models that predict a reduced SFR efficiency with decreasing metallicity and thus with increasing redshift, we find no significant evolution in the SFR efficiency with redshift. Our analysis instead suggests that the reduced SFR efficiency is driven by the low molecular content of this atomic-dominated phase, with metallicity playing a secondary role in regulating the conversion between atomic and molecular gas. This interpretation is supported by the similarity between the observed SFR efficiency and that observed in local atomic-dominated gas, such as in the outskirts of local spiral galaxies and in local dwarf galaxies.

  10. A Spectroscopic Search for Leaking Lyman Continuum at Zeta Approximately 0.7

    NASA Technical Reports Server (NTRS)

    Bridge, Carrie R.; Teplitz, Harry I.; Siana, Brian; Scarlata, Claudia; Rudie, Gwen C.; Colbert, James; Ferguson, Henry C.; Brown, Thomas M.; Conselice, Christopher J.; Armus, Lee; et al.

    2010-01-01

    We present the results of rest-frame UV slitless spectroscopic observations of a sample of 32 z approx. 0.7 Lyman Break Galaxy (LBG) analogs in the COSMOS field. The spectroscopic search was performed with the Solar Blind Channel (SBC) on HST. While we find no direct detections of the Lyman continuum, we achieve individual limits (3 sigma) on the observed non-ionizing UV to Lyman continuum flux density ratios, f(sub nu)(1500A)/f(sub nu)(830A), of 20 to 204 (median of 73.5), and 378.7 for the stack. Assuming an intrinsic Lyman break of 3.4 and an optical depth of Lyman continuum photons along the line of sight to the galaxy of 85%, we report an upper limit for the relative escape fraction in individual galaxies of 0.02-0.19 and a stacked 3 sigma upper limit of 0.01. We find no indication of a relative escape fraction near unity as seen in some LBGs at z approx. 3. Our UV spectra achieve the deepest limits to date at any redshift on the escape fraction in individual sources. The contrast between these z approx. 0.7 low escape fraction LBG analogs and z approx. 3 LBGs suggests that either the processes conducive to high f(sub esc) are not being selected for in the z less than or approx. 1 samples, or the average escape fraction is decreasing from z approx. 3 to z approx. 1. We discuss possible mechanisms which could affect the escape of Lyman continuum photons.

  11. The effect of nitrogen on the cycling performance in thin-film Si(sub 1-x)N(sub x) anode

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Donggi; Kim, Chunjoong; Lee, Joon-Gon

    2008-09-15

    The effects of nitrogen on the electrochemical properties of silicon-nitrogen (Si(sub 1-x)N(sub x)) thin films were examined in terms of their initial capacities and cycling properties. In particular, Si(sub 0.76)N(sub 0.24) thin films showed negligible initial capacity but an abrupt capacity increase to approx. 2300 mA h/g after approx. 650 cycles. The capacity of pure Si thin films deteriorated to approx. 20% of the initial level after 200 cycles between 0.02 and 1.2 V at 0.5 C (1 C = 4200 mA/g), whereas the Si(sub 0.76)N(sub 0.24) thin films exhibited excellent cycle-life performance after approx. 650 cycles. In addition, the Si(sub 0.76)N(sub 0.24) thin films at 50 deg. C showed an abrupt capacity increase at an earlier stage of only approx. 30 cycles. The abnormal electrochemical behaviors in the Si(sub 0.76)N(sub 0.24) thin films were demonstrated to be correlated with the formation of Li(sub 3)N and Si(sub 3)N(sub 4).

  12. Polarimetry and Flux Distribution in the Debris Disk Around HD 32297

    NASA Technical Reports Server (NTRS)

    Asensio-Torres, R.; Janson, M.; Hashimoto, J.; Thalmann, C.; Currie, T.; Buenzli; Kudo, T.; Kuzuhara, M.; Kusakabe, N.; Akiyama, E.; et al.

    2016-01-01

    We present high-contrast angular differential imaging (ADI) observations of the debris disk around HD 32297 in the H band, as well as the first polarimetric images for this system, taken in polarized differential imaging (PDI) mode with Subaru/HiCIAO. In ADI, we detect the nearly edge-on disk at > or = 5 sigma levels from approx. 0.45" to approx. 1.7" (50-192 AU) from the star and recover the spine deviation from the midplane already found in previous works. We also find, for the first time, imaging and surface brightness (SB) indications for the presence of a gapped structure on both sides of the disk at distances of approx. 0.75" (NE side) and approx. 0.65" (SW side). Global forward-modeling work delivers a best-fit model disk and well-fitting parameter intervals that essentially match previous results, with high forward-scattering grains and a ring located at 110 AU. However, this single-ring model cannot account for the gapped structure seen in our SB profiles. We create simple double-ring models and achieve a satisfactory fit with two rings located at 60 and 95 AU, respectively, low forward-scattering grains, and very sharp inner slopes. In polarized light we retrieve the disk extending from approx. 0.25-1.6", although the central region is quite noisy and high S/N is only found in the range approx. 0.75-1.2". The disk is polarized in the azimuthal direction, as expected, and the departure from the midplane is also clearly observed. Evidence for a gapped scenario is not found in the PDI data. We obtain a linear polarization degree of the grains that increases from approx. 10% at 0.55" to approx. 25% at 1.6". The maximum is found at scattering angles of 90 deg, either from the main components of the disk or from dust grains blown out to larger radii.

  13. A High-Heritage Blunt-Body Entry, Descent, and Landing Concept for Human Mars Exploration

    NASA Technical Reports Server (NTRS)

    Price, Humphrey; Manning, Robert; Sklyanskiy, Evgeniy; Braun, Robert

    2016-01-01

    Human-scale landers require the delivery of much heavier payloads to the surface of Mars than is possible with entry, descent, and landing (EDL) approaches used to date. A conceptual design was developed for a 10 m diameter crewed Mars lander with an entry mass of approx. 75 t that could deliver approx. 28 t of useful landed mass (ULM) to a zero Mars areoid, or lower, elevation. The EDL design centers upon use of a high ballistic coefficient blunt-body entry vehicle and throttled supersonic retro-propulsion (SRP). The design concept includes a 26 t Mars Ascent Vehicle (MAV) that could support a crew of 2 for approx. 24 days, a crew of 3 for approx. 16 days, or a crew of 4 for approx. 12 days. The MAV concept is for a fully-fueled single-stage vehicle that utilizes a single pump-fed 250 kN engine using Mono-Methyl Hydrazine (MMH) and Mixed Oxides of Nitrogen (MON-25) propellants that would deliver the crew to a low Mars orbit (LMO) at the end of the surface mission. The MAV concept could potentially provide abort-to-orbit capability during much of the EDL profile in response to fault conditions and could accommodate return to orbit for cases where the MAV had no access to other Mars surface infrastructure. The design concept for the descent stage utilizes six 250 kN MMH/MON-25 engines that would have very high commonality with the MAV engine. Analysis indicates that the MAV would require approx. 20 t of propellant (including residuals) and the descent stage would require approx. 21 t of propellant. The addition of a 12 m diameter supersonic inflatable aerodynamic decelerator (SIAD), based on a proven flight design, was studied as an optional method to improve the ULM fraction, reducing the required descent propellant by approx. 4 t.

  15. Characterization of particulate cyclic nucleotide phosphodiesterases from bovine brain: Purification of a distinct cGMP-stimulated isoenzyme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murashima, Seiko; Tanaka, Takayuki; Hockman, S.

    1990-06-05

    In the absence of detergent, approx. 80-85% of the total cGMP-stimulated phosphodiesterase (PDE) activity in bovine brain was associated with washed particulate fractions; approx. 85-90% of the calmodulin-sensitive PDE was soluble. Particulate cGMP-stimulated PDE was higher in cerebral cortical gray matter than in other regions. Homogenization of the brain particulate fraction in 1% Lubrol increased cGMP-stimulated activity approx. 100% and calmodulin-stimulated approx. 400-500%. Although 1% Lubrol readily solubilized these PDE activities, approx. 75% of the cAMP PDE activity (0.5 µM [3H]cAMP) that was not affected by cGMP was not solubilized. This cAMP PDE activity was very sensitive to inhibition by Rolipram but not cilostamide. Thus, three different PDE types, i.e., cGMP stimulated, calmodulin sensitive, and Rolipram inhibited, are associated in different ways with crude bovine brain particulate fractions. The brain enzyme exhibited a slightly greater subunit M_r than did soluble forms from calf liver or bovine brain, as evidenced by protein staining or immunoblotting after polyacrylamide gel electrophoresis under denaturing conditions. Incubation of brain particulate and liver soluble cGMP-stimulated PDEs with V8 protease produced several peptides of similar size, as well as at least two distinct fragments of approx. 27 kDa from the brain and approx. 23 kDa from the liver enzyme. After photolabeling in the presence of [32P]cGMP and digestion with V8 protease, [32P]cGMP in each PDE was predominantly recovered with a peptide of approx. 14 kDa. All of these observations are consistent with the existence of at least two discrete forms (isoenzymes) of cGMP-stimulated PDE.

  16. THE BLANCO COSMOLOGY SURVEY: DATA ACQUISITION, PROCESSING, CALIBRATION, QUALITY DIAGNOSTICS, AND DATA RELEASE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, S.; Mohr, J. J.; Semler, D. R.

    2012-09-20

    The Blanco Cosmology Survey (BCS) is a 60 night imaging survey of approx. 80 deg^2 of the southern sky located in two fields: (alpha, delta) = (5 hr, -55 deg) and (23 hr, -55 deg). The survey was carried out between 2005 and 2008 in griz bands with the Mosaic2 imager on the Blanco 4 m telescope. The primary aim of the BCS survey is to provide the data required to optically confirm and measure photometric redshifts for Sunyaev-Zel'dovich effect selected galaxy clusters from the South Pole Telescope and the Atacama Cosmology Telescope. We process and calibrate the BCS data, carrying out point-spread function-corrected model-fitting photometry for all detected objects. The median 10-sigma galaxy (point-source) depths over the survey in griz are approximately 23.3 (23.9), 23.4 (24.0), 23.0 (23.6), and 21.3 (22.1), respectively. The astrometric accuracy relative to the USNO-B survey is approx. 45 mas. We calibrate our absolute photometry using the stellar locus in grizJ bands, and thus our absolute photometric scale derives from the Two Micron All Sky Survey, which has approx. 2% accuracy. The scatter of stars about the stellar locus indicates a systematic floor in the relative stellar photometric scatter in griz that is approx. 1.9%, approx. 2.2%, approx. 2.7%, and approx. 2.7%, respectively. A simple cut in the AstrOmatic star-galaxy classifier spread_model produces a star sample with good spatial uniformity. We use the resulting photometric catalogs to calibrate photometric redshifts for the survey and demonstrate scatter delta z/(1 + z) = 0.054 with an outlier fraction eta < 5% to z approx. 1. We highlight some selected science results to date and provide a full description of the released data products.

  17. The Blanco Cosmology Survey: Data Acquisition, Processing, Calibration, Quality Diagnostics and Data Release

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, S.; Armstrong, R.

    2012-04-01

    The Blanco Cosmology Survey (BCS) is a 60 night imaging survey of approx. 80 deg^2 of the southern sky located in two fields: (alpha, delta) = (5 hr, -55 deg) and (23 hr, -55 deg). The survey was carried out between 2005 and 2008 in griz bands with the Mosaic2 imager on the Blanco 4 m telescope. The primary aim of the BCS survey is to provide the data required to optically confirm and measure photometric redshifts for Sunyaev-Zel'dovich effect selected galaxy clusters from the South Pole Telescope and the Atacama Cosmology Telescope. We process and calibrate the BCS data, carrying out PSF-corrected model-fitting photometry for all detected objects. The median 10-sigma galaxy (point-source) depths over the survey in griz are approximately 23.3 (23.9), 23.4 (24.0), 23.0 (23.6), and 21.3 (22.1), respectively. The astrometric accuracy relative to the USNO-B survey is approx. 45 milliarcsec. We calibrate our absolute photometry using the stellar locus in grizJ bands, and thus our absolute photometric scale derives from 2MASS, which has approx. 2% accuracy. The scatter of stars about the stellar locus indicates a systematic floor in the relative stellar photometric scatter in griz that is approx. 1.9%, approx. 2.2%, approx. 2.7%, and approx. 2.7%, respectively. A simple cut in the AstrOmatic star-galaxy classifier produces a star sample with good spatial uniformity. We use the resulting photometric catalogs to calibrate photometric redshifts for the survey and demonstrate scatter delta z/(1+z) = 0.054 with an outlier fraction eta < 5% to z approx. 1. We highlight some selected science results to date and provide a full description of the released data products.

  18. THE FIRST Hi-GAL OBSERVATIONS OF THE OUTER GALAXY: A LOOK AT STAR FORMATION IN THE THIRD GALACTIC QUADRANT IN THE LONGITUDE RANGE 216.5 deg ≲ l ≲ 225.5 deg

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elia, D.; Molinari, S.; Schisano, E.

    2013-07-20

    We present the first Herschel PACS and SPIRE photometric observations in a portion of the outer Galaxy (216.5 deg ≲ l ≲ 225.5 deg and -2 deg ≲ b ≲ 0 deg) as a part of the Hi-GAL survey. The maps between 70 and 500 µm, the derived column density and temperature maps, and the compact source catalog are presented. NANTEN CO(1-0) line observations are used to derive cloud kinematics and distances so that we can estimate distance-dependent physical parameters of the compact sources (cores and clumps) having a reliable spectral energy distribution, which we separate into 255 proto-stellar and 688 starless sources. Both typologies are found in association with all the distance components observed in the field, up to approx. 5.8 kpc, testifying to the presence of star formation beyond the Perseus arm at these longitudes. Selecting the starless gravitationally bound sources, we identify 590 pre-stellar candidates. Several sources of both proto- and pre-stellar nature are found to exceed the minimum requirement for being compatible with massive star formation based on the mass-radius relation. For the pre-stellar sources belonging to the Local arm (d ≲ 1.5 kpc) we study the mass function, whose high-mass end shows a power law N(log M) ∝ M^(-1.0±0.2). Finally, we use a luminosity versus mass diagram to infer the evolutionary status of the sources, finding that most of the proto-stellar sources are in the early accretion phase (with some cases compatible with a Class I stage), while for pre-stellar sources, in general, accretion has not yet started.

  19. Altered coupling of muscarinic acetylcholine receptors in pancreatic acinar carcinoma of rat

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chien, J.L.; Warren, J.R.

    The structure and function of muscarinic acetylcholine receptors (mAChR) in acinar carcinoma cells have been compared to mAChR in normal pancreatic acinar cells. Similar 80 kDa proteins identified by SDS-PAGE of tumor and normal mAChR affinity-labeled with the muscarinic antagonist [3H]propylbenzilylcholine mustards, and identical binding of the antagonist N-methylscopolamine to tumor and normal cells (K_D approx. 4 x 10^-10 M), indicate conservation of mAChR proteins in carcinoma cells. Carcinoma mAChR display homogeneous binding of the agonists carbamylcholine (CCh), K_D approx. 3 x 10^-5 M, and oxotremorine (Oxo), K_D approx. x10^-6 M, whereas normal cells display heterogeneous binding, with a minor component of high-affinity interactions for CCh, K_D approx. 3 x 10^-6 M, and Oxo, K_D approx. 2 x 10^-7 M, and a major component of low-affinity interactions for CCh, K_D approx. 1 x 10^-4 M, and Oxo, K_D approx. 2 x 10^-5 M. Both carcinoma and normal cells exhibit a concentration-dependent CCh-stimulated increase in cytosolic free Ca^2+, as measured by intracellular Quin 2 fluorescence and 45Ca^2+ efflux. However, carcinoma cells demonstrate 50% maximal stimulation of intracellular Ca^2+ release at a CCh concentration (EC_50 approx. 6 x 10^-7 M) one log below that observed for normal cells. The authors propose an altered coupling of mAChR to intracellular Ca^2+ homeostasis in carcinoma cells, which is manifest as a single activated receptor state for agonist binding, and increased sensitivity to muscarinic receptor stimulation of Ca^2+ release.

  20. NuSTAR Observations of WISE J1036+0449, A Galaxy at Z Approx. 1 Obscured by Hot Dust

    NASA Technical Reports Server (NTRS)

    Ricci, C.; Assef, R. J.; Stern, D.; Nikutta, R.; Alexander, D. M.; Asmus, D.; Ballantyne, D. R.; Bauer, F. E.; Blain, A. W.; Boggs, S.; et al.

    2017-01-01

    Hot dust-obscured galaxies (hot DOGs), selected from Wide-Field Infrared Survey Explorer's all-sky infrared survey, host some of the most powerful active galactic nuclei known and may represent an important stage in the evolution of galaxies. Most known hot DOGs are located at z > 1.5, due in part to a strong bias against identifying them at lower redshift related to the selection criteria. We present a new selection method that identifies 153 hot DOG candidates at z approx. 1, where they are significantly brighter and easier to study. We validate this approach by measuring a redshift z = 1.009 and finding a spectral energy distribution similar to that of higher-redshift hot DOGs for one of these objects, WISE J1036+0449 (L(BOL) approx. = 8 x 10(exp 46) erg/s). We find evidence of a broadened component in Mg II, which would imply a black hole mass of M(BH) approx. = 2 x 10(exp 8) solar masses and an Eddington ratio of lambda(Edd) approx. = 2.7. WISE J1036+0449 is the first hot DOG detected by the Nuclear Spectroscopic Telescope Array, and observations show that the source is heavily obscured, with a column density of N(H) approx. = (2-15) x 10(exp 23)/sq cm. The source has an intrinsic 2-10 keV luminosity of approx. 6 x 10(exp 44) erg/s, a value significantly lower than that expected from the mid-infrared X-ray correlation. We also find that other hot DOGs observed by X-ray facilities show a similar deficiency of X-ray flux. We discuss the origin of the X-ray weakness and the absorption properties of hot DOGs. Hot DOGs at z < or approx. 1 could be excellent laboratories to probe the characteristics of the accretion flow and of the X-ray emitting plasma at extreme values of the Eddington ratio.

  1. NuSTAR Observations of WISE J1036+0449, A Galaxy at z approx. 1 Obscured by Hot Dust

    NASA Technical Reports Server (NTRS)

    Ricci, C.; Assef, R. J.; Stern, Daniel K.; Nikutta, R.; Alexander, D. M.; Asmus, D.; Ballantyne, D. R.; Bauer, F. E.; Blain, A.W.; Zhang, William W.; et al.

    2017-01-01

    Hot dust-obscured galaxies (hot DOGs), selected from Wide-Field Infrared Survey Explorer's all-sky infrared survey, host some of the most powerful active galactic nuclei known and may represent an important stage in the evolution of galaxies. Most known hot DOGs are located at z > 1.5, due in part to a strong bias against identifying them at lower redshift related to the selection criteria. We present a new selection method that identifies 153 hot DOG candidates at z approx. 1, where they are significantly brighter and easier to study. We validate this approach by measuring a redshift z = 1.009 and finding a spectral energy distribution similar to that of higher-redshift hot DOGs for one of these objects, WISE J1036+0449 (L(sub BOL) approx. = 8 x 10(exp 46) erg/s). We find evidence of a broadened component in Mg II, which would imply a black hole mass of M(BH) approx. = 2 x 10(exp 8) solar masses and an Eddington ratio of lambda(sub Edd) approx. = 2.7. WISE J1036+0449 is the first hot DOG detected by the Nuclear Spectroscopic Telescope Array, and observations show that the source is heavily obscured, with a column density of N(sub H) approx. = (2-15) x 10(exp 23)/sq cm. The source has an intrinsic 2-10 keV luminosity of approx. 6 x 10(exp 44) erg/s, a value significantly lower than that expected from the mid-infrared X-ray correlation. We also find that other hot DOGs observed by X-ray facilities show a similar deficiency of X-ray flux. We discuss the origin of the X-ray weakness and the absorption properties of hot DOGs. Hot DOGs at z < or approx. 1 could be excellent laboratories to probe the characteristics of the accretion flow and of the X-ray emitting plasma at extreme values of the Eddington ratio.

  2. NUCLEAR X-RAY PROPERTIES OF THE PECULIAR RADIO-LOUD HIDDEN AGN 4C+29.30

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sobolewska, M. A.; Siemiginowska, Aneta; Migliori, G.

    2012-10-20

    We present results from a study of nuclear emission from a nearby radio galaxy, 4C+29.30, over a broad 0.5-200 keV X-ray band. This study used new XMM-Newton (approx. 17 ks) and Chandra (approx. 300 ks) data, and archival Swift/BAT data from the 58 month catalog. The hard (>2 keV) X-ray spectrum of 4C+29.30 can be decomposed into an intrinsic hard power law (Gamma approx. 1.56) modified by a cold absorber with an intrinsic column density N_H,z approx. 5 x 10^23 cm^-2, and its reflection (|Omega/2pi| approx. 0.3) from neutral matter including a narrow iron K-alpha emission line at a rest-frame energy of approx. 6.4 keV. The reflected component is less absorbed than the intrinsic one, with an upper limit on the absorbing column of N_H,z(refl) < 2.5 x 10^22 cm^-2. The X-ray spectrum varied between the XMM-Newton and Chandra observations. We show that a scenario invoking variations of the normalization of the power law is favored over a model with variable intrinsic column density. X-rays in the 0.5-2 keV band are dominated by diffuse emission modeled with a thermal bremsstrahlung component with temperature approx. 0.7 keV, and contain only a marginal contribution from the scattered power-law component. We hypothesize that 4C+29.30 belongs to a class of 'hidden' active galactic nuclei containing a geometrically thick torus. However, unlike the majority of hidden AGNs, 4C+29.30 is radio-loud. Correlations between the scattering fraction and Eddington luminosity ratio, and between black hole mass and stellar velocity dispersion, imply that 4C+29.30 hosts a black hole with approx. 10^8 M_Sun mass.

  3. Moessbauer Mineralogy of Rock, Soil, and Dust at Gusev Crater, Mars: Spirit's Journey through Weakly Altered Olivine Basalt on the Plains and Pervasively Altered Basalt in the Columbia Hills

    NASA Technical Reports Server (NTRS)

    Morris, R. V.; Klingelhoefer, G.; Schroeder, C.; Rodionov, D. S.; Yen, A.; Ming, D. W.; deSouza, P. A., Jr.; Fleischer, I.; Wdowiak, T.; Gellert, R.; et al.

    2006-01-01

    The Moessbauer spectrometer on Spirit measured the oxidation state of Fe, identified Fe-bearing phases, and measured relative abundances of Fe among those phases for surface materials on the plains and in the Columbia Hills of Gusev crater. Eight Fe-bearing phases were identified: olivine, pyroxene, ilmenite, magnetite, nanophase ferric oxide (npOx), hematite, goethite, and a Fe(3+)-sulfate. Adirondack basaltic rocks on the plains are nearly unaltered (Fe(3+)/Fe(sub T)<0.2) with Fe from olivine, pyroxene (Ol>Px), and minor npOx and magnetite. Columbia Hills basaltic rocks are nearly unaltered (Peace and Backstay), moderately altered (Wooly Patch, Wishstone, and Keystone), and pervasively altered (e.g., Clovis, Uchben, Watchtower, Keel, and Paros with Fe(3+)/Fe(sub T) approx. 0.6-0.9). Fe from pyroxene is greater than Fe from olivine (Ol sometimes absent), and Fe(2+) from Ol+Px is 40-49% and 9-24% for moderately and pervasively altered materials, respectively. Ilmenite (Fe from Ilm approx. 3-6%) is present in Backstay, Wishstone, Keystone, and related rocks along with magnetite (Fe from Mt approx. 10-15%). Remaining Fe is present as npOx, hematite, and goethite in variable proportions. Clovis has the highest goethite content (Fe from Gt=40%). Goethite (alpha-FeOOH) is mineralogical evidence for aqueous processes because it has structural hydroxide and is formed under aqueous conditions. Relatively unaltered basaltic soils (Fe(3+)/Fe(sub T) approx. 0.3) occur throughout Gusev crater (approx. 60-80% Fe from Ol+Px, approx. 10-30% from npOx, and approx. 10% from Mt). Paso Robles soil in the Columbia Hills has a unique occurrence of high concentrations of Fe(3+)-sulfate (approx. 65% of Fe). Magnetite is identified as a strongly magnetic phase in Martian soil and dust.

  4. THE UNUSUAL TEMPORAL AND SPECTRAL EVOLUTION OF THE TYPE IIn SUPERNOVA 2011ht

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roming, P. W. A.; Bayless, A. J.; Pritchard, T. A.

    2012-06-01

    We present very early UV to optical photometric and spectroscopic observations of the peculiar Type IIn supernova (SN) 2011ht in UGC 5460. The UV observations of the rise to peak are only the second ever recorded for a Type IIn SN and are by far the most complete. The SN, first classified as an SN impostor, slowly rose to a peak of M_V approx. -17 in approx. 55 days. In contrast to the approx. 2 mag increase in the v-band light curve from the first observation until peak, the UV flux increased by >7 mag. The optical spectra are dominated by strong Balmer emission with narrow peaks (FWHM approx. 600 km s^-1), very broad asymmetric wings (FWHM approx. 4200 km s^-1), and blueshifted absorption (approx. 300 km s^-1) superposed on a strong blue continuum. The UV spectra are dominated by Fe II, Mg II, Si II, and Si III absorption lines broadened by approx. 1500 km s^-1. Merged X-ray observations reveal L_0.2-10 = (1.0 ± 0.2) x 10^39 erg s^-1. Some properties of SN 2011ht are similar to SN impostors, while others are comparable to Type IIn SNe. Early spectra showed features typical of luminous blue variables at maximum and during giant eruptions. However, the broad emission profiles coupled with the strong UV flux have not been observed in previous SN impostors. The absolute magnitude and energetics (approx. 2.5 x 10^49 erg in the first 112 days) are reminiscent of a normal Type IIn SN, but the spectra are those of a dense wind. We suggest that the mechanism for creating this unusual profile could be a shock interacting with a shell of material that was ejected a year before the discovery of the SN.

  5. Multiobjective optimization in a pseudometric objective space as applied to a general model of business activities

    NASA Astrophysics Data System (ADS)

    Khachaturov, R. V.

    2016-09-01

    It is shown that finding the equivalence set for solving multiobjective discrete optimization problems is advantageous over finding the set of Pareto optimal decisions. An example of a set of key parameters characterizing the economic efficiency of a commercial firm is proposed, and a mathematical model of its activities is constructed. In contrast to the classical problem of finding the maximum profit for any business, this study deals with a multiobjective optimization problem. A method for solving inverse multiobjective problems in a multidimensional pseudometric space is proposed for finding the best project for the firm's activities. The solution of a particular problem of this type is presented.
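
    A toy sketch of the contrast drawn here, assuming numpy (this is illustrative, not the paper's algorithm): first filter the Pareto-optimal decisions, then select a single decision by weighted Chebyshev distance to the ideal point, the flavour of "best" selection one gets once a metric is imposed on the objective space.

        import numpy as np

        def pareto_front(F):
            # F: (n, m) array of objective vectors, all to be maximized;
            # keep the indices of decisions not dominated by any other.
            return [i for i, f in enumerate(F)
                    if not any(np.all(g >= f) and np.any(g > f) for g in F)]

        def closest_to_ideal(F, w):
            ideal = F.max(axis=0)                    # best value per objective
            d = np.max(w * (ideal - F), axis=1)      # weighted Chebyshev distance
            return int(np.argmin(d))

        F = np.array([[3.0, 1.0], [2.0, 2.0], [1.0, 3.0], [1.0, 1.0]])
        print(pareto_front(F))                            # [0, 1, 2]
        print(closest_to_ideal(F, np.array([0.5, 0.5])))  # 1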

  6. Development of a two-stage light gas gun to accelerate hydrogen pellets to high speeds for plasma fueling applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Combs, S.K.; Milora, S.L.; Foust, C.R.

    1988-01-01

    The development of a two-stage light gas gun to accelerate hydrogen isotope pellets to high speeds is under way at Oak Ridge National Laboratory. High velocities (>2 km/s) are desirable for plasma fueling applications, since the faster pellets can penetrate more deeply into large, hot plasmas and deposit atoms of fuel directly in a larger fraction of the plasma volume. In the initial configuration of the two-stage device, a 2.2-liter volume (<= 55 bar) provides the gas to accelerate a 25.4-mm-diam piston in a 1-m-long pump tube; a burst disk or a fast valve initiates the acceleration process in the first stage. As the piston travels the length of the pump tube, the downstream gas (initially at <1 bar) is compressed (to pressures up to 2600 bar) and thus is driven to high temperature (approx. 5000 K). This provides the driving force for acceleration of a 4-mm pellet in a 1-m-long gun barrel. In preliminary tests using helium as the driver in both stages, 35-mg plastic pellets have been accelerated to speeds as high as 3.8 km/s. Projectiles composed of hydrogen ice will have a mass in the range from 5 to 20 mg (rho approx. 0.087, 0.20, and 0.32 g/cm^3 for frozen hydrogen isotopes). However, the use of sabots to encase and protect the cryogenic pellets from the high peak pressures will probably be required to realize speeds of approx. 3 km/s or greater. The experimental plan includes acceleration of hydrogen isotopes as soon as the gun geometry and operating parameters are optimized; theoretical models are being used to aid in this process. The hardware is being designed to accommodate repetitive operation, which is the objective of this research and is required for future applications. 25 refs., 6 figs., 1 tab.

  7. Assessment of spatial variation in drinking water iodine and its implications for dietary intake: a new conceptual model for Denmark.

    PubMed

    Voutchkova, Denitza Dimitrova; Ernstsen, Vibeke; Hansen, Birgitte; Sørensen, Brian Lyngby; Zhang, Chaosheng; Kristiansen, Søren Munch

    2014-09-15

    Iodine is essential for human health. Many countries have therefore introduced universal salt iodising (USI) programmes to ensure adequate intake for the populations. However, little attention has been paid to subnational differences in iodine intake from drinking water caused by naturally occurring spatial variations. To address this issue, we here present the results of a Danish nationwide study of spatial trends of iodine in drinking water and the relevance of these trends for human dietary iodine intake. The data consist of treated drinking water samples from 144 waterworks, representing approx. 45% of the groundwater abstraction for drinking water supply in Denmark. The samples were analysed for iodide, iodate, total iodine (TI) and other major and trace elements. The spatial patterns were investigated with Local Moran's I. TI ranges from <0.2 to 126 μg L(-1) (mean 14.4 μg L(-1), median 11.9 μg L(-1)). Six speciation combinations were found. Half of the samples (n = 71) contain organic iodine; all species were detected in approx. 27% of all samples. The complex spatial variation is attributed both to the geology and the groundwater treatment. TI >40 μg L(-1) originates from postglacial marine and glacial meltwater sand and from Campanian-Maastrichtian chalk aquifers. The estimated drinking water contribution to human intake varies from 0% to >100% of the WHO recommended daily iodine intake for adults and from 0% to approx. 50% for adolescents. The paper presents a new conceptual model based on the observed clustering of high or low drinking-water iodine concentrations, delimiting zones with potentially deficient, excessive or optimal iodine status. Our findings suggest that the present coarse-scale nationwide programme for monitoring the population's iodine status may not offer a sufficiently accurate picture. Local variations in drinking-water iodine should be mapped and incorporated into future adjustment of the monitoring and/or the USI programmes. Copyright © 2014 Elsevier B.V. All rights reserved.

  8. The Sizing and Optimization Language, (SOL): Computer language for design problems

    NASA Technical Reports Server (NTRS)

    Lucas, Stephen H.; Scotti, Stephen J.

    1988-01-01

    The Sizing and Optimization Language, (SOL), a new high level, special purpose computer language was developed to expedite application of numerical optimization to design problems and to make the process less error prone. SOL utilizes the ADS optimization software and provides a clear, concise syntax for describing an optimization problem, the OPTIMIZE description, which closely parallels the mathematical description of the problem. SOL offers language statements which can be used to model a design mathematically, with subroutines or code logic, and with existing FORTRAN routines. In addition, SOL provides error checking and clear output of the optimization results. Because of these language features, SOL is best suited to model and optimize a design concept when the model consists of mathematical expressions written in SOL. For such cases, SOL's unique syntax and error checking can be fully utilized. SOL is presently available for DEC VAX/VMS systems. A SOL package is available which includes the SOL compiler, runtime library routines, and a SOL reference manual.

  9. The 4 Ms CHANDRA Deep Field-South Number Counts Apportioned by Source Class: Pervasive Active Galactic Nuclei and the Ascent of Normal Galaxies

    NASA Technical Reports Server (NTRS)

    Lehmer, Bret D.; Xue, Y. Q.; Brandt, W. N.; Alexander, D. M.; Bauer, F. E.; Brusa, M.; Comastri, A.; Gilli, R.; Hornschemeier, A. E.; Luo, B.; et al.

    2012-01-01

    We present 0.5-2 keV, 2-8 keV, 4-8 keV, and 0.5-8 keV (hereafter soft, hard, ultra-hard, and full bands, respectively) cumulative and differential number-count (log N-log S ) measurements for the recently completed approx. equal to 4 Ms Chandra Deep Field-South (CDF-S) survey, the deepest X-ray survey to date. We implement a new Bayesian approach, which allows reliable calculation of number counts down to flux limits that are factors of approx. equal to 1.9-4.3 times fainter than the previously deepest number-count investigations. In the soft band (SB), the most sensitive bandpass in our analysis, the approx. equal to 4 Ms CDF-S reaches a maximum source density of approx. equal to 27,800 deg(sup -2). By virtue of the exquisite X-ray and multiwavelength data available in the CDF-S, we are able to measure the number counts from a variety of source populations (active galactic nuclei (AGNs), normal galaxies, and Galactic stars) and subpopulations (as a function of redshift, AGN absorption, luminosity, and galaxy morphology) and test models that describe their evolution. We find that AGNs still dominate the X-ray number counts down to the faintest flux levels for all bands and reach a limiting SB source density of approx. equal to 14,900 deg(sup -2), the highest reliable AGN source density measured at any wavelength. We find that the normal-galaxy counts rise rapidly near the flux limits and, at the limiting SB flux, reach source densities of approx. equal to 12,700 deg(sup -2) and make up 46% plus or minus 5% of the total number counts. The rapid rise of the galaxy counts toward faint fluxes, as well as significant normal-galaxy contributions to the overall number counts, indicates that normal galaxies will overtake AGNs just below the approx. equal to 4 Ms SB flux limit and will provide a numerically significant new X-ray source population in future surveys that reach below the approx. equal to 4 Ms sensitivity limit. We show that a future approx. equal to 10 Ms CDF-S would allow for a significant increase in X-ray-detected sources, with many of the new sources being cosmologically distant (z greater than or approx. equal to 0.6) normal galaxies.

  10. Optimal ballistically captured Earth-Moon transfers

    NASA Astrophysics Data System (ADS)

    Ricord Griesemer, Paul; Ocampo, Cesar; Cooley, D. S.

    2012-07-01

    The optimality of a low-energy Earth-Moon transfer terminating in ballistic capture is examined for the first time using primer vector theory. An optimal control problem is formed with the following free variables: the location, time, and magnitude of the transfer insertion burn, and the transfer time. A constraint is placed on the initial state of the spacecraft to bind it to a given initial orbit around a first body, and on the final state of the spacecraft to limit its Keplerian energy with respect to a second body. Optimal transfers in the system are shown to meet certain conditions placed on the primer vector and its time derivative. A two point boundary value problem containing these necessary conditions is created for use in targeting optimal transfers. The two point boundary value problem is then applied to the ballistic lunar capture problem, and an optimal trajectory is shown. Additionally, the problem is then modified to fix the time of transfer, allowing for optimal multi-impulse transfers. The tradeoff between transfer time and fuel cost is shown for Earth-Moon ballistic lunar capture transfers.
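
    For reference, the necessary conditions of primer vector theory invoked here are the classical Lawden conditions; a standard statement (p the primer vector, G the gravity-gradient matrix, t_k the impulse times) is:

        \ddot{\mathbf{p}} = G(\mathbf{r})\,\mathbf{p}, \qquad
        G(\mathbf{r}) = \frac{\partial \mathbf{g}}{\partial \mathbf{r}}, \qquad
        \|\mathbf{p}(t)\| \le 1 \ \text{on coast arcs},
        \|\mathbf{p}(t_k)\| = 1, \qquad
        \hat{\mathbf{u}}_k = \mathbf{p}(t_k) \ \text{at each impulse},
        \text{with } \mathbf{p} \text{ and } \dot{\mathbf{p}} \text{ continuous throughout.}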

  11. An approach for aerodynamic optimization of transonic fan blades

    NASA Astrophysics Data System (ADS)

    Khelghatibana, Maryam

    Aerodynamic design optimization of transonic fan blades is a highly challenging problem due to the complexity of the flow field inside the fan, the conflicting design requirements, and the high-dimensional design space. In order to address all these challenges, an aerodynamic design optimization method is developed in this study. This method automates the design process by integrating a geometrical parameterization method, a CFD solver, and numerical optimization methods that can be applied to both single-point and multi-point optimization design problems. A multi-level blade parameterization is employed to modify the blade geometry. Numerical analyses are performed by solving 3D RANS equations combined with the SST turbulence model. Genetic algorithms and hybrid optimization methods are applied to solve the optimization problem. In order to verify the effectiveness and feasibility of the optimization method, a single-point optimization problem aiming to maximize design efficiency is formulated and applied to redesign a test case. However, transonic fan blade design is inherently a multi-faceted problem that deals with several objectives such as efficiency, stall margin, and choke margin. The proposed multi-point optimization method in the current study is formulated as a bi-objective problem to maximize design and near-stall efficiencies while maintaining the required design pressure ratio. Enhancing these objectives significantly deteriorates the choke margin, specifically at high rotational speeds. Therefore, another constraint is embedded in the optimization problem in order to prevent the reduction of choke margin at high speeds. Since capturing stall inception is numerically very expensive, stall margin has not been considered as an objective in the problem statement. However, improving near-stall efficiency results in a better performance at stall condition, which could enhance the stall margin. An investigation is therefore performed on the Pareto-optimal solutions to demonstrate the relation between near-stall efficiency and stall margin. The proposed method is applied to redesign NASA rotor 67 for single and multiple operating conditions. The single-point design optimization showed a +0.28-point improvement of isentropic efficiency at the design point, while the design pressure ratio and mass flow are, respectively, within 0.12% and 0.11% of the reference blade. Two cases of multi-point optimization are performed: first, the proposed multi-point optimization problem is relaxed by removing the choke margin constraint in order to demonstrate the relation between near-stall efficiency and stall margin. An investigation of the Pareto-optimal solutions of this optimization shows that the stall margin increases with improving near-stall efficiency. The second multi-point optimization case is performed considering all the objectives and constraints. One selected optimized design on the Pareto front presents +0.41-, +0.56-, and +0.9-point improvements in near-peak efficiency, near-stall efficiency, and stall margin, respectively. The design pressure ratio and mass flow are, respectively, within 0.3% and 0.26% of the reference blade. Moreover, the optimized design maintains the required choking margin. Detailed aerodynamic analyses are performed to investigate the effect of shape optimization on shock occurrence, secondary flows, tip leakage, and shock/tip-leakage interactions in both single-point and multi-point optimizations.

  12. OPTIMIZING THROUGH CO-EVOLUTIONARY AVALANCHES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. BOETTCHER; A. PERCUS

    2000-08-01

    We explore a new general-purpose heuristic for finding high-quality solutions to hard optimization problems. The method, called extremal optimization, is inspired by ''self-organized criticality,'' a concept introduced to describe emergent complexity in many physical systems. In contrast to Genetic Algorithms, which operate on an entire ''gene-pool'' of possible solutions, extremal optimization successively replaces extremely undesirable elements of a sub-optimal solution with new, random ones. Large fluctuations, called ''avalanches,'' ensue that efficiently explore many local optima. Drawing upon models used to simulate far-from-equilibrium dynamics, extremal optimization complements approximation methods inspired by equilibrium statistical physics, such as simulated annealing. With only one adjustable parameter, its performance has proved competitive with more elaborate methods, especially near phase transitions. Those phase transitions are found in the parameter space of most optimization problems, and have recently been conjectured to be the origin of some of the hardest instances in computational complexity. We will demonstrate how extremal optimization can be implemented for a variety of combinatorial optimization problems. We believe that extremal optimization will be a useful tool in the investigation of phase transitions in combinatorial optimization problems, hence valuable in elucidating the origin of computational complexity.
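
    A minimal sketch of the basic extremal-optimization move described above, applied to MAX-CUT; the per-variable fitness and the toy graph are our illustrative choices, not the authors':

        import random

        def extremal_optimization(adj, n_steps=1000, seed=0):
            # adj: dict node -> set of neighbours (undirected graph)
            rng = random.Random(seed)
            side = {v: rng.choice((0, 1)) for v in adj}

            def fitness(v):
                # fraction of v's edges crossing the cut (higher is better)
                return sum(side[v] != side[u] for u in adj[v]) / max(len(adj[v]), 1)

            def cut_size():
                return sum(side[u] != side[v] for u in adj for v in adj[u] if u < v)

            best_side, best_cut = dict(side), cut_size()
            for _ in range(n_steps):
                # pick the extremely undesirable element (random tie-break) ...
                worst = min(adj, key=lambda v: (fitness(v), rng.random()))
                side[worst] = rng.choice((0, 1))   # ... and replace it randomly
                if cut_size() > best_cut:
                    best_side, best_cut = dict(side), cut_size()
            return best_side, best_cut

        adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
        print(extremal_optimization(adj, n_steps=200)[1])  # best cut found (optimum is 3)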

  13. Dynamic Programming and Graph Algorithms in Computer Vision

    PubMed Central

    Felzenszwalb, Pedro F.; Zabih, Ramin

    2013-01-01

    Optimization is a powerful paradigm for expressing and solving problems in a wide range of areas, and has been successfully applied to many vision problems. Discrete optimization techniques are especially interesting, since by carefully exploiting problem structure they often provide non-trivial guarantees concerning solution quality. In this paper we briefly review dynamic programming and graph algorithms, and discuss representative examples of how these discrete optimization techniques have been applied to some classical vision problems. We focus on the low-level vision problem of stereo; the mid-level problem of interactive object segmentation; and the high-level problem of model-based recognition. PMID:20660950
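
    As one concrete instance of the discrete-optimization flavour reviewed here, a minimal scanline-stereo-style dynamic program (numpy assumed; the cost volume and smoothness penalty are illustrative): it finds, across the columns x of a cost volume C[d, x], the disparity path minimizing data cost plus a smoothness penalty lam*|d - d_prev|.

        import numpy as np

        def dp_scanline(C, lam=1.0):
            D, X = C.shape
            acc, back = C.copy(), np.zeros((D, X), dtype=int)
            d = np.arange(D)
            for x in range(1, X):
                # cost of moving from every previous disparity to every current one
                trans = acc[:, x - 1][None, :] + lam * np.abs(d[:, None] - d[None, :])
                back[:, x] = trans.argmin(axis=1)
                acc[:, x] += trans.min(axis=1)
            path = np.empty(X, dtype=int)
            path[-1] = acc[:, -1].argmin()
            for x in range(X - 1, 0, -1):       # backtrack the optimal path
                path[x - 1] = back[path[x], x]
            return path

        C = np.array([[1., 9., 9.], [9., 1., 9.], [9., 9., 1.]])
        print(dp_scanline(C, lam=0.5))          # [0 1 2]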

  14. Representations in Problem Solving: A Case Study with Optimization Problems

    ERIC Educational Resources Information Center

    Villegas, Jose L.; Castro, Enrique; Gutierrez, Jose

    2009-01-01

    Introduction: Representations play an essential role in mathematical thinking. They favor the understanding of mathematical concepts and stimulate the development of flexible and versatile thinking in problem solving. Here our focus is on their use in optimization problems, a type of problem considered important in mathematics teaching and…

  15. Class and Home Problems: Optimization Problems

    ERIC Educational Resources Information Center

    Anderson, Brian J.; Hissam, Robin S.; Shaeiwitz, Joseph A.; Turton, Richard

    2011-01-01

    Optimization problems suitable for all levels of chemical engineering students are available. These problems do not require advanced mathematical techniques, since they can be solved using typical software used by students and practitioners. The method used to solve these problems forces students to understand the trends for the different terms…

  16. Parameter Optimization and Operating Strategy of a TEG System for Railway Vehicles

    NASA Astrophysics Data System (ADS)

    Heghmanns, A.; Wilbrecht, S.; Beitelschmidt, M.; Geradts, K.

    2016-03-01

    A thermoelectric generator (TEG) system demonstrator for diesel-electric locomotives, with the objective of reducing the mechanical load on the thermoelectric modules (TEM), is developed and constructed to validate a one-dimensional thermo-fluid flow simulation model. The model is in good agreement with the measurements and is the basis for the optimization of the TEG's geometry by a genetic multi-objective algorithm. The best solution has a maximum power output of approx. 2.7 kW and exceeds neither the maximum back pressure of the diesel engine nor the maximum TEM hot-side temperature. To maximize the reduction of fuel consumption, an operating strategy regarding the system power output is developed for the TEG system. Finally, the potential consumption reduction in passenger and freight traffic operating modes is estimated under realistic driving conditions by means of a power train and lateral dynamics model. The fuel savings are between 0.5% and 0.7%, depending on the driving style.

  17. The plug-based nanovolume Microcapillary Protein Crystallization System (MPCS)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerdts, Cory J.; Elliott, Mark; Lovell, Scott

    2012-02-08

    The Microcapillary Protein Crystallization System (MPCS) embodies a new semi-automated plug-based crystallization technology which enables nanolitre-volume screening of crystallization conditions in a plasticware format that allows crystals to be easily removed for traditional cryoprotection and X-ray diffraction data collection. Protein crystals grown in these plastic devices can be directly subjected to in situ X-ray diffraction studies. The MPCS integrates the formulation of crystallization cocktails with the preparation of the crystallization experiments. Within microfluidic Teflon tubing or the microfluidic circuitry of a plastic CrystalCard, approx. 10-20 nl volume droplets are generated, each representing a microbatch-style crystallization experiment with a different chemical composition. The entire protein sample is utilized in crystallization experiments. Sparse-matrix screening and chemical gradient screening can be combined in one comprehensive 'hybrid' crystallization trial. The technology lends itself well to optimization by high-granularity gradient screening using optimization reagents such as precipitation agents, ligands or cryoprotectants.

  18. Mercury Atomic Frequency Standards for Space Based Navigation and Timekeeping

    NASA Technical Reports Server (NTRS)

    Tjoelker, R. L.; Burt, E. A.; Chung, S.; Hamell, R. L.; Prestage, J. D.; Tucker, B.; Cash, P.; Lutwak, R.

    2012-01-01

    A low-power Mercury Atomic Frequency Standard (MAFS) has been developed and demonstrated on the path towards future space clock applications. A self-contained mercury ion breadboard clock, emulating flight clock interfaces, steering a USO local oscillator, and consuming approx. 40 W, has been operating at JPL for more than a year. This complete, modular ion clock instrument demonstrates that key GNSS size, weight, and power (SWaP) requirements can be achieved while still maintaining the short- and long-term performance demonstrated in previous ground ion clocks. The MAFS breadboard serves as a flexible platform for optimizing further space clock development and guides engineering model design trades towards fabrication of an ion clock for space flight.

  19. Gamma-Ray Imaging for Explosives Detection

    NASA Technical Reports Server (NTRS)

    deNolfo, G. A.; Hunter, S. D.; Barbier, L. M.; Link, J. T.; Son, S.; Floyd, S. R.; Guardala, N.; Skopec, M.; Stark, B.

    2008-01-01

    We describe a gamma-ray imaging camera (GIC) for active interrogation of explosives being developed by NASA/GSFC and NSWC/Carderock. The GIC is based on the Three-dimensional Track Imager (3-DTI) technology developed at GSFC for gamma-ray astrophysics. The 3-DTI, a large-volume time-projection chamber, provides accurate (approx. 0.4 mm resolution) 3-D tracking of charged particles. The incident directions of gamma rays with E > 6 MeV are reconstructed from the momenta and energies of the electron-positron pairs resulting from interactions in the 3-DTI volume. The optimization of the 3-DTI technology for this specific application and the performance of the GIC in laboratory tests are presented.

  20. Rapid Generation of Optimal Asteroid Powered Descent Trajectories Via Convex Optimization

    NASA Technical Reports Server (NTRS)

    Pinson, Robin; Lu, Ping

    2015-01-01

    This paper investigates a convex optimization based method that can rapidly generate the fuel optimal asteroid powered descent trajectory. The ultimate goal is to autonomously design the optimal powered descent trajectory on-board the spacecraft immediately prior to the descent burn. Compared to a planetary powered landing problem, the major difficulty is the complex gravity field near the surface of an asteroid that cannot be approximated by a constant gravity field. This paper uses relaxation techniques and a successive solution process that seeks the solution to the original nonlinear, nonconvex problem through the solutions to a sequence of convex optimal control problems.
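
    A toy instance of the kind of convex subproblem such a successive solution process chains together, sketched with cvxpy (assumed available); this is a simplified one-dimensional, constant-gravity descent rather than the paper's asteroid gravity model, and all numbers are made up:

        import cvxpy as cp
        import numpy as np

        N, dt, g, umax = 60, 0.5, 1.62, 4.0      # horizon, step, gravity, thrust limit
        x = cp.Variable((N + 1, 2))              # columns: altitude, vertical velocity
        u = cp.Variable(N)                       # thrust acceleration
        cons = [x[0] == np.array([100.0, -5.0]), x[N] == 0, x[:, 0] >= 0]
        for k in range(N):                       # discretized double-integrator dynamics
            cons += [x[k + 1, 0] == x[k, 0] + dt * x[k, 1],
                     x[k + 1, 1] == x[k, 1] + dt * (u[k] - g),
                     cp.abs(u[k]) <= umax]
        prob = cp.Problem(cp.Minimize(dt * cp.sum(cp.abs(u))), cons)  # fuel proxy
        prob.solve()
        print(prob.status, float(prob.value))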

  1. A robust optimization methodology for preliminary aircraft design

    NASA Astrophysics Data System (ADS)

    Prigent, S.; Maréchal, P.; Rondepierre, A.; Druot, T.; Belleville, M.

    2016-05-01

    This article focuses on a robust optimization of an aircraft preliminary design under operational constraints. According to engineers' know-how, the aircraft preliminary design problem can be modelled as an uncertain optimization problem whose objective (the cost or the fuel consumption) is almost affine, and whose constraints are convex. It is shown that this uncertain optimization problem can be approximated in a conservative manner by an uncertain linear optimization program, which enables the use of the techniques of robust linear programming of Ben-Tal, El Ghaoui, and Nemirovski [Robust Optimization, Princeton University Press, 2009]. This methodology is then applied to two real cases of aircraft design and numerical results are presented.
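
    The conservative linear counterpart idea can be sketched for row-wise interval (box) uncertainty: with x >= 0, the worst case of each uncertain constraint row is obtained by inflating its coefficients, so the robust problem is again an LP (scipy assumed; the data are illustrative):

        import numpy as np
        from scipy.optimize import linprog

        # Nominal LP: min c@x  s.t.  A@x <= b, x >= 0; every coefficient of
        # row i may deviate by up to rho[i], so the worst case over the box
        # is (A + rho)@x <= b, which is again a linear program.
        c = np.array([1.0, 2.0])
        A = np.array([[-1.0, -3.0], [-2.0, -1.0]])
        b = np.array([-6.0, -4.0])
        rho = np.array([0.1, 0.2])

        nominal = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * 2)
        robust = linprog(c, A_ub=A + rho[:, None], b_ub=b, bounds=[(0, None)] * 2)
        print(nominal.fun, robust.fun)   # the robust optimum is (slightly) worse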

  2. New trends in astrodynamics and applications: optimal trajectories for space guidance.

    PubMed

    Azimov, Dilmurat; Bishop, Robert

    2005-12-01

    This paper presents recent results on the development of optimal analytic solutions to the variational problem of trajectory optimization and their application in the construction of on-board guidance laws. The importance of employing analytically integrated trajectories in mission design is discussed. It is assumed that the spacecraft is equipped with power-limited propulsion and moves in a central Newtonian field. Satisfaction of the necessary and sufficient conditions for optimality of trajectories is analyzed. All possible thrust arcs and corresponding classes of the analytical solutions are classified based on the propulsion system parameters and the performance index of the problem. The solutions are presented in a form convenient for applications in escape, capture, and interorbital transfer problems. Optimal guidance and neighboring optimal guidance problems are considered. It is shown that the analytic solutions can be used as reference trajectories in constructing guidance algorithms for the maneuver problems mentioned above. An illustrative example of a spiral trajectory that terminates on a given elliptical parking orbit is discussed.

  3. Rapid optimization of multiple-burn rocket flights.

    NASA Technical Reports Server (NTRS)

    Brown, K. R.; Harrold, E. F.; Johnson, G. W.

    1972-01-01

    Different formulations of the fuel optimization problem for multiple burn trajectories are considered. It is shown that certain customary idealizing assumptions lead to an ill-posed optimization problem for which no solution exists. Several ways are discussed for avoiding such difficulties by more realistic problem statements. An iterative solution of the boundary value problem is presented together with efficient coast arc computations, the right end conditions for various orbital missions, and some test results.

  4. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems

    PubMed Central

    Tuo, Shouheng; Yong, Longquan; Deng, Fang’an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligent optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems, but they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but low convergence speed, while TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities: HS aims mainly to explore unknown regions, while TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications. PMID:28403224

  5. HSTLBO: A hybrid algorithm based on Harmony Search and Teaching-Learning-Based Optimization for complex high-dimensional optimization problems.

    PubMed

    Tuo, Shouheng; Yong, Longquan; Deng, Fang'an; Li, Yanhai; Lin, Yong; Lu, Qiuju

    2017-01-01

    Harmony Search (HS) and Teaching-Learning-Based Optimization (TLBO), as new swarm intelligent optimization algorithms, have received much attention in recent years. Both have shown outstanding performance in solving NP-hard optimization problems, but they also suffer dramatic performance degradation on some complex high-dimensional optimization problems. Through extensive experiments, we find that HS and TLBO are strongly complementary: HS has strong global exploration power but low convergence speed, while TLBO converges much faster but is easily trapped in local search. In this work, we propose a hybrid search algorithm named HSTLBO that merges the two algorithms for synergistically solving complex optimization problems using a self-adaptive selection strategy. In HSTLBO, both HS and TLBO are modified with the aim of balancing the global exploration and exploitation abilities: HS aims mainly to explore unknown regions, while TLBO aims to rapidly exploit high-precision solutions in known regions. Our experimental results demonstrate better performance and faster speed than five state-of-the-art HS variants, and better exploration power than five good TLBO variants with similar run time, which illustrates that our method is promising for solving complex high-dimensional optimization problems. Experiments on portfolio optimization problems also demonstrate that HSTLBO is effective in solving complex real-world applications.
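
    A compact sketch of the TLBO component (teacher and learner phases) that HSTLBO hybridizes with HS; the HS part and the paper's self-adaptive selection strategy are omitted, and the parameter choices are illustrative (numpy assumed):

        import numpy as np

        def tlbo(f, bounds, pop=20, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds, dtype=float).T
            X = rng.uniform(lo, hi, (pop, lo.size))
            F = np.apply_along_axis(f, 1, X)
            for _ in range(iters):
                # teacher phase: move the class toward the best learner
                teacher, TF = X[F.argmin()], rng.integers(1, 3)
                Y = np.clip(X + rng.random(X.shape) * (teacher - TF * X.mean(0)), lo, hi)
                G = np.apply_along_axis(f, 1, Y)
                improved = G < F
                X[improved], F[improved] = Y[improved], G[improved]
                # learner phase: each learner moves relative to a random peer
                for i in range(pop):
                    j = int(rng.integers(pop))
                    if j == i:
                        continue
                    step = X[i] - X[j] if F[i] < F[j] else X[j] - X[i]
                    y = np.clip(X[i] + rng.random(lo.size) * step, lo, hi)
                    fy = f(y)
                    if fy < F[i]:
                        X[i], F[i] = y, fy
            return X[F.argmin()], F.min()

        x_best, f_best = tlbo(lambda x: float(np.sum(x ** 2)), [(-5, 5)] * 10)
        print(f_best)   # near 0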

  6. A Comparative Theoretical and Computational Study on Robust Counterpart Optimization: I. Robust Linear Optimization and Robust Mixed Integer Linear Optimization

    PubMed Central

    Li, Zukui; Ding, Ran; Floudas, Christodoulos A.

    2011-01-01

    Robust counterpart optimization techniques for linear optimization and mixed integer linear optimization problems are studied in this paper. Different uncertainty sets, including those studied in literature (i.e., interval set; combined interval and ellipsoidal set; combined interval and polyhedral set) and new ones (i.e., adjustable box; pure ellipsoidal; pure polyhedral; combined interval, ellipsoidal, and polyhedral set) are studied in this work and their geometric relationship is discussed. For uncertainty in the left hand side, right hand side, and objective function of the optimization problems, robust counterpart optimization formulations induced by those different uncertainty sets are derived. Numerical studies are performed to compare the solutions of the robust counterpart optimization models and applications in refinery production planning and batch process scheduling problem are presented. PMID:21935263

  7. The Role of Intuition in the Solving of Optimization Problems

    ERIC Educational Resources Information Center

    Malaspina, Uldarico; Font, Vicenc

    2010-01-01

    This article presents the partial results obtained in the first stage of the research, which sought to answer the following questions: (a) What is the role of intuition in university students' solutions to optimization problems? (b) What is the role of rigor in university students' solutions to optimization problems? (c) How is the combination of…

  8. Parametric optimal control of uncertain systems under an optimistic value criterion

    NASA Astrophysics Data System (ADS)

    Li, Bo; Zhu, Yuanguo

    2018-01-01

    It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly, so the optimal feedback control may be a complicated function of time. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered to simplify the expression of the optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
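
    For the classical (certain) linear-quadratic case this abstract builds on, the Riccati-based feedback is a few lines with scipy; the paper's uncertain, optimistic-value variant is not reproduced here:

        import numpy as np
        from scipy.linalg import solve_continuous_are

        # LQR: minimize the integral of x'Qx + u'Ru subject to dx/dt = Ax + Bu;
        # the optimal feedback u = -Kx follows from the algebraic Riccati equation.
        A = np.array([[0.0, 1.0], [0.0, -0.5]])
        B = np.array([[0.0], [1.0]])
        Q, R = np.eye(2), np.array([[1.0]])

        P = solve_continuous_are(A, B, Q, R)   # steady-state Riccati solution
        K = np.linalg.solve(R, B.T @ P)        # optimal gain matrix
        print(K)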

  9. Parameter estimation using meta-heuristics in systems biology: a comprehensive review.

    PubMed

    Sun, Jianyong; Garibaldi, Jonathan M; Hodgman, Charlie

    2012-01-01

    This paper gives a comprehensive review of the application of meta-heuristics to optimization problems in systems biology, mainly focussing on the parameter estimation problem (also called the inverse problem or model calibration). It is intended for either the system biologist who wishes to learn more about the various optimization techniques available and/or the meta-heuristic optimizer who is interested in applying such techniques to problems in systems biology. First, the parameter estimation problems emerging from different areas of systems biology are described from the point of view of machine learning. Brief descriptions of various meta-heuristics developed for these problems follow, along with outlines of their advantages and disadvantages. Several important issues in applying meta-heuristics to the systems biology modelling problem are addressed, including the reliability and identifiability of model parameters, optimal design of experiments, and so on. Finally, we highlight some possible future research directions in this field.
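
    As a minimal example of meta-heuristic model calibration in this spirit, differential evolution (one of the population-based meta-heuristics such reviews cover) can recover the parameters of a toy decay model from noisy data (scipy assumed; the model and data are synthetic):

        import numpy as np
        from scipy.optimize import differential_evolution

        rng = np.random.default_rng(1)
        t = np.linspace(0.0, 5.0, 40)
        y_obs = 2.0 * np.exp(-0.7 * t) + 0.02 * rng.standard_normal(t.size)

        def sse(theta):
            # sum of squared residuals between model y = a*exp(-b*t) and data
            a, b = theta
            return float(np.sum((a * np.exp(-b * t) - y_obs) ** 2))

        result = differential_evolution(sse, bounds=[(0, 10), (0, 5)], seed=1)
        print(result.x)   # close to the true parameters (2.0, 0.7)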

  10. Optimal Control for Stochastic Delay Evolution Equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, Qingxin, E-mail: mqx@hutc.zj.cn; Shen, Yang, E-mail: skyshen87@gmail.com

    2016-08-15

    In this paper, we investigate a class of infinite-dimensional optimal control problems, where the state equation is given by a stochastic delay evolution equation with random coefficients, and the corresponding adjoint equation is given by an anticipated backward stochastic evolution equation. We first prove the continuous dependence theorems for stochastic delay evolution equations and anticipated backward stochastic evolution equations, and show the existence and uniqueness of solutions to anticipated backward stochastic evolution equations. Then we establish necessary and sufficient conditions for optimality of the control problem in the form of Pontryagin's maximum principles. To illustrate the theoretical results, we apply stochastic maximum principles to study two examples, an infinite-dimensional linear-quadratic control problem with delay and an optimal control of a Dirichlet problem for a stochastic partial differential equation with delay. Further applications of the two examples to a Cauchy problem for a controlled linear stochastic partial differential equation and an optimal harvesting problem are also considered.

  11. Variational Trajectory Optimization Tool Set: Technical description and user's manual

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Queen, Eric M.; Cavanaugh, Michael D.; Wetzel, Todd A.; Moerder, Daniel D.

    1993-01-01

    The algorithms that comprise the Variational Trajectory Optimization Tool Set (VTOTS) package are briefly described. The VTOTS is a software package for solving nonlinear constrained optimal control problems from a wide range of engineering and scientific disciplines. The VTOTS package was specifically designed to minimize the amount of user programming; in fact, for problems that may be expressed in terms of analytical functions, the user needs only to define the problem in terms of symbolic variables. This version of the VTOTS does not support tabular data; thus, problems must be expressed in terms of analytical functions. The VTOTS package consists of two methods for solving nonlinear optimal control problems: a time-domain finite-element algorithm and a multiple shooting algorithm. These two algorithms, under the VTOTS package, may be run independently or jointly. The finite-element algorithm generates approximate solutions, whereas the shooting algorithm provides a more accurate solution to the optimization problem. A user's manual, some examples with results, and a brief description of the individual subroutines are included.

  12. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem.

    PubMed

    Rajeswari, M; Amudhavel, J; Pothula, Sujatha; Dhavachelvan, P

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem that assigns a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving it. This work proposes a metaheuristic called the Directed Bee Colony Optimization Algorithm, which uses the Modified Nelder-Mead Method, for solving the NRP. The authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully to solve the multiobjective scheduling problem; it integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria.

  13. Optimal Control of Thermo--Fluid Phenomena in Variable Domains

    NASA Astrophysics Data System (ADS)

    Volkov, Oleg; Protas, Bartosz

    2008-11-01

    This presentation concerns our continued research on adjoint--based optimization of viscous incompressible flows (the Navier--Stokes problem) coupled with heat conduction involving change of phase (the Stefan problem), and occurring in domains with variable boundaries. This problem is motivated by optimization of advanced welding techniques used in automotive manufacturing, where the goal is to determine an optimal heat input, so as to obtain a desired shape of the weld pool surface upon solidification. We argue that computation of sensitivities (gradients) in such free--boundary problems requires the use of the shape--differential calculus as a key ingredient. We also show that, with such tools available, the computational solution of the direct and inverse (optimization) problems can in fact be achieved in a similar manner and in a comparable computational time. Our presentation will address certain mathematical and computational aspects of the method. As an illustration we will consider the two--phase Stefan problem with contact point singularities where our approach allows us to obtain a thermodynamically consistent solution.

  14. Directed Bee Colony Optimization Algorithm to Solve the Nurse Rostering Problem

    PubMed Central

    Amudhavel, J.; Pothula, Sujatha; Dhavachelvan, P.

    2017-01-01

    The Nurse Rostering Problem (NRP) is an NP-hard combinatorial optimization and scheduling problem that assigns a set of nurses to shifts per day while considering both hard and soft constraints. A novel metaheuristic technique is required for solving it. This work proposes a metaheuristic called the Directed Bee Colony Optimization Algorithm, which uses the Modified Nelder-Mead Method, for solving the NRP. The authors used a multiobjective mathematical programming model and proposed a methodology for the adaptation of a Multiobjective Directed Bee Colony Optimization (MODBCO). MODBCO is used successfully to solve the multiobjective scheduling problem; it integrates deterministic local search, a multiagent particle system environment, and the honey bee decision-making process. The performance of the algorithm is assessed using the standard dataset INRC2010, which reflects many real-world cases that vary in size and complexity. The experimental analysis uses statistical tools to show the uniqueness of the algorithm on the assessment criteria. PMID:28473849

  15. Exploring quantum computing application to satellite data assimilation

    NASA Astrophysics Data System (ADS)

    Cheung, S.; Zhang, S. Q.

    2015-12-01

    This is an exploratory study of the potential application of quantum computing to a scientific data optimization problem. On classical computational platforms, the physical domain of a satellite data assimilation problem is represented by a discrete variable transform, and classical minimization algorithms are employed to find the optimal solution of the analysis cost function. The computation becomes intensive and time-consuming when the problem involves a large number of variables and data. Quantum computers open a very different approach, both in conceptual programming and in hardware architecture, for solving optimization problems. In order to explore whether we can utilize the quantum computing machine architecture, we formulate a satellite data assimilation experimental case as a quadratic programming optimization problem. We find a transformation that maps the problem into the Quadratic Unconstrained Binary Optimization (QUBO) framework. A Binary Wavelet Transform (BWT) will be applied to the data assimilation variables for its invertible decomposition, and all calculations in the BWT are performed by Boolean operations. The transformed problem will then be solved as QUBO instances defined on Chimera graphs of the quantum computer.
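
    A QUBO instance, once obtained from such a transformation, is just the minimization of x^T Q x over binary vectors x. The sketch below solves a tiny hypothetical instance by exhaustive enumeration, which is what a quantum annealer would replace for problems of realistic size.

        import itertools
        import numpy as np

        # Hypothetical 3-variable QUBO matrix (upper-triangular convention).
        Q = np.array([[-1.0, 2.0, 0.0],
                      [ 0.0, -1.0, 2.0],
                      [ 0.0,  0.0, -1.0]])

        best_x, best_val = None, np.inf
        for bits in itertools.product([0, 1], repeat=3):   # all 2^n states
            x = np.array(bits, dtype=float)
            val = x @ Q @ x                                # QUBO "energy"
            if val < best_val:
                best_x, best_val = bits, val
        print("optimal assignment:", best_x, "energy:", best_val)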

  16. Optimal rail container shipment planning problem in multimodal transportation

    NASA Astrophysics Data System (ADS)

    Cao, Chengxuan; Gao, Ziyou; Li, Keping

    2012-09-01

    The optimal rail container shipment planning problem in multimodal transportation is studied in this article. The characteristics of the multi-period planning problem are presented, and the problem is formulated as a large-scale 0-1 integer programming model that maximizes the total profit generated by all freight bookings accepted in a multi-period planning horizon, subject to limited capacities. Two heuristic algorithms are proposed to obtain an approximate optimal solution of the problem. Finally, numerical experiments are conducted to demonstrate the proposed formulation and heuristic algorithms.
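
    A heuristic for such booking-acceptance models can be as simple as ranking bookings by profit per unit of consumed capacity. The sketch below does this for a single capacity resource with made-up data; the article's model is multi-period and far larger.

        # Greedy acceptance by profit density: (profit, capacity_used) pairs.
        bookings = [(120, 4), (80, 2), (150, 6), (40, 1), (95, 3)]
        capacity = 10

        order = sorted(range(len(bookings)),
                       key=lambda i: bookings[i][0] / bookings[i][1],
                       reverse=True)
        accepted, used, profit = [], 0, 0
        for i in order:
            p, w = bookings[i]
            if used + w <= capacity:        # accept only if capacity remains
                accepted.append(i)
                used += w
                profit += p
        print("accepted bookings:", sorted(accepted), "total profit:", profit)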

  17. Stochastic Optimization For Water Resources Allocation

    NASA Astrophysics Data System (ADS)

    Yamout, G.; Hatfield, K.

    2003-12-01

    For more than 40 years, water resources allocation problems have been addressed using deterministic mathematical optimization. When data uncertainties exist, these methods can lead to solutions that are sub-optimal or even infeasible. While optimization models have been proposed for water resources decision-making under uncertainty, no attempts have been made to address the uncertainties in water allocation problems in an integrated approach. This paper presents an Integrated Dynamic, Multi-stage, Feedback-controlled, Linear, Stochastic, and Distributed-parameter optimization approach to solve a problem of water resources allocation. It attempts to capture (1) the conflict caused by competing objectives, (2) environmental degradation produced by resource consumption, and (3) the uncertainty and risk generated by the inherently random nature of the state and decision parameters involved in such a problem. A theoretical system is defined through its different elements. These elements, consisting mainly of water resource components and end-users, are described in terms of quantity, quality, and present and future associated risks and uncertainties. Models are identified, modified, and interfaced to constitute an integrated water allocation optimization framework. This effort is a novel approach to the water allocation optimization problem that accounts for the uncertainties associated with all its elements, resulting in a solution that correctly reflects the physical problem at hand.

  18. Genetic algorithms - What fitness scaling is optimal?

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Quintana, Chris; Fuentes, Olac

    1993-01-01

    The problem of choosing the best scaling function is formulated as a mathematical optimization problem and solved under different optimality criteria. A list of functions that are optimal under these criteria is presented; it includes both functions whose merit has already been demonstrated empirically and new functions that may be worth trying.
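
    For concreteness, one classical candidate that such a study compares against is linear fitness scaling, f' = a*f + b, with a and b chosen so that the mean fitness is preserved and the best individual receives c times the mean. A minimal sketch follows; the particular scalings the paper recommends are in the source.

        import numpy as np

        def linear_scaling(fitness, c=2.0):
            """Linear scaling f' = a*f + b preserving the mean and mapping
            the best fitness to c * mean (negatives clipped for selection)."""
            f = np.asarray(fitness, dtype=float)
            f_avg, f_max = f.mean(), f.max()
            if f_max == f_avg:              # degenerate: population all equal
                return f.copy()
            a = (c - 1.0) * f_avg / (f_max - f_avg)
            b = f_avg * (1.0 - a)
            return np.maximum(a * f + b, 0.0)

        print(linear_scaling([1.0, 2.0, 3.0, 10.0]))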

  19. Application of GA, PSO, and ACO algorithms to path planning of autonomous underwater vehicles

    NASA Astrophysics Data System (ADS)

    Aghababa, Mohammad Pourmahmood; Amrollahi, Mohammad Hossein; Borjkhani, Mehdi

    2012-09-01

    In this paper, an underwater vehicle was modeled with six-dimensional nonlinear equations of motion, controlled by DC motors in all degrees of freedom. Near-optimal trajectories in an energetic environment for underwater vehicles were computed using a numerical solution of a nonlinear optimal control problem (NOCP). An energy performance index, a cost function to be minimized, was defined. The resulting problem was a two-point boundary value problem (TPBVP). Genetic algorithm (GA), particle swarm optimization (PSO), and ant colony optimization (ACO) algorithms were applied to solve the resulting TPBVP. Applying an Euler-Lagrange equation to the NOCP, a conjugate gradient penalty method was also adopted to solve the TPBVP. The problem of energetic environments, involving several energy sources, was discussed, and some near-optimal paths were found using the GA, PSO, and ACO algorithms. Finally, the problem of collision avoidance in an energetic environment was also taken into account.

  20. Application of Particle Swarm Optimization Algorithm in the Heating System Planning Problem

    PubMed Central

    Ma, Rong-Jiang; Yu, Nan-Yang; Hu, Jun-Yi

    2013-01-01

    Based on the life cycle cost (LCC) approach, this paper presents an integral mathematical model and a particle swarm optimization (PSO) algorithm for the heating system planning (HSP) problem. The proposed mathematical model minimizes the cost of the heating system over a given life cycle time. Because of the particularities of the HSP problem, the general particle swarm optimization algorithm was improved, and an actual case study was computed to check its feasibility in practical use. The results show that the improved particle swarm optimization (IPSO) algorithm solves the HSP problem better than the standard PSO algorithm. Moreover, the results also show the potential to provide useful information for decisions in the practical planning process. Therefore, it is believed that if this approach is applied correctly and in combination with other elements, it can become a powerful and effective optimization tool for the HSP problem. PMID:23935429
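
    The core of any such PSO variant is the velocity-and-position update driven by each particle's personal best and the global best. Below is a minimal global-best PSO on a generic cost function, a stand-in for the paper's life-cycle-cost objective; all coefficients are conventional defaults, not the authors' tuned values.

        import numpy as np

        def pso(cost, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5,
                lo=-5.0, hi=5.0, seed=0):
            rng = np.random.default_rng(seed)
            x = rng.uniform(lo, hi, (n_particles, dim))    # positions
            v = np.zeros_like(x)                           # velocities
            pbest = x.copy()
            pbest_val = np.array([cost(p) for p in x])
            g = pbest[pbest_val.argmin()].copy()           # global best
            for _ in range(iters):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                vals = np.array([cost(p) for p in x])
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                g = pbest[pbest_val.argmin()].copy()
            return g, pbest_val.min()

        best_x, best_f = pso(lambda p: float(np.sum(p ** 2)), dim=3)
        print(best_x, best_f)    # toy quadratic cost; optimum at the origin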

  1. Improving multi-objective reservoir operation optimization with sensitivity-informed problem decomposition

    NASA Astrophysics Data System (ADS)

    Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.

    2015-04-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
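
    The screening step can be approximated cheaply even without a full Sobol analysis: bin each input and compare the variance of the conditional output means against the total variance. The sketch below uses this crude first-order estimate on synthetic data; the study itself uses Sobol's method proper.

        import numpy as np

        def first_order_indices(X, y, n_bins=20):
            """Estimate Var[E(y|x_j)] / Var[y] for each input column j."""
            y = np.asarray(y, dtype=float)
            total_var = y.var()
            out = []
            for j in range(X.shape[1]):
                edges = np.quantile(X[:, j], np.linspace(0, 1, n_bins + 1))
                which = np.clip(np.digitize(X[:, j], edges[1:-1]), 0, n_bins - 1)
                means = np.array([y[which == b].mean() for b in range(n_bins)])
                out.append(means.var() / total_var)
            return np.array(out)

        rng = np.random.default_rng(1)
        X = rng.uniform(size=(5000, 3))
        y = 5 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(5000)
        print(first_order_indices(X, y))   # the first variable should dominate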

  2. Mass Flux in the Ancient Earth-Moon System and Benign Implications for the Origin of Life on Earth

    NASA Technical Reports Server (NTRS)

    Ryder, Graham

    2002-01-01

    The origin of life on Earth is commonly considered to have been negatively affected by intense impacting in the Hadean, with the potential for the repeated evaporation and sterilization of any ocean. The impact flux is based on scaling from the lunar crater density record, but that record has no tie to any absolute age determination for any identified stratigraphic unit older than approx. 3.9 Ga (Nectaris basin). The flux can be described in terms of mass accretion, and various independent means can be used to estimate the mass flux in different intervals. The critical interval is that between the end of essential crustal formation (approx. 4.4 Ga) and the oldest mare times (approx. 3.8 Ga). The masses of the basin-forming projectiles during Nectarian and early Imbrian times, when the last 15 of the approx. 45 identified impact basins formed, can be reasonably estimated as minima. These in sum provide a minimum of 2 x 10(exp 21) g for the mass flux to the Moon during those times. If the interval was 80 million years (Nectaris 3.90 Ga, Orientale 3.82 Ga), then the flux was approx. 2 x 10(exp 13) g/yr over this period. This is higher by more than an order of magnitude than a flux curve that declines continuously and uniformly from lunar accretion to the rate inferred for the older mare plains. This rate cannot be extrapolated back increasingly into pre-Nectarian times, because the Moon would have added mass far in excess of its own in post-crust-formation time. Thus this episode was a distinct and cataclysmic set of events. There are approx. 30 pre-Nectarian basins, and they were probably part of the same cataclysm (starting at approx. 4.0 Ga?) because the crust is fairly intact, the meteoritic contamination of the pre-Nectarian crust is very low, impact melt rocks older than 3.92 Ga are virtually unknown, and ancient volcanic and plutonic rocks have survived this interval. The accretionary flux from approx. 4.4 to approx. 4.0 Ga was comparatively benign. When scaled to Earth, even the late cataclysm does not produce ocean-evaporating, globally sterilizing events. The rooted concept that such events took place is based on the extrapolation of a nonexistent lunar record to the Hadean. The Earth from approx. 4.4 to approx. 3.8 Ga was comparatively peaceful, and the impacting itself could have been thermally and hydrothermally beneficial. The origin of life could have taken place at any time between 4.4 and 3.85 Ga, given the current impact constraints, and there is no justification for the claim that life originated (or re-originated) as late as 3.85 Ga in response to the end of hostile impact conditions.

  3. Comparison of optimal design methods in inverse problems

    NASA Astrophysics Data System (ADS)

    Banks, H. T.; Holm, K.; Kappel, F.

    2011-07-01

    Typical optimal design methods for inverse or parameter estimation problems are designed to choose optimal sampling distributions through minimization of a specific cost function related to the resulting error in parameter estimates. It is hoped that the inverse problem will produce parameter estimates with increased accuracy using data collected according to the optimal sampling distribution. Here we formulate the classical optimal design problem in the context of general optimization problems over distributions of sampling times. We present a new Prohorov metric-based theoretical framework that permits one to treat succinctly and rigorously any optimal design criteria based on the Fisher information matrix. A fundamental approximation theory is also included in this framework. A new optimal design, SE-optimal design (standard error optimal design), is then introduced in the context of this framework. We compare this new design criterion with the more traditional D-optimal and E-optimal designs. The optimal sampling distributions from each design are used to compute and compare standard errors; the standard errors for parameters are computed using asymptotic theory or bootstrapping and the optimal mesh. We use three examples to illustrate ideas: the Verhulst-Pearl logistic population model (Banks H T and Tran H T 2009 Mathematical and Experimental Modeling of Physical and Biological Processes (Boca Raton, FL: Chapman and Hall/CRC)), the standard harmonic oscillator model (Banks H T and Tran H T 2009), and a popular glucose regulation model (Bergman R N, Ider Y Z, Bowden C R and Cobelli C 1979 Am. J. Physiol. 236 E667-77; De Gaetano A and Arino O 2000 J. Math. Biol. 40 136-68; Toffolo G, Bergman R N, Finegood D T, Bowden C R and Cobelli C 1980 Diabetes 29 979-90).
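
    For a scalar-parameter model the Fisher information reduces to a sum of squared sensitivities, and a D-optimal design simply maximizes it over candidate sampling times. Below is a brute-force sketch for the exponential decay model y(t; k) = exp(-k t) with unit noise; the paper's framework covers far more general criteria and sampling distributions.

        import itertools
        import numpy as np

        k0 = 0.8                                 # nominal parameter value
        candidates = np.linspace(0.1, 5.0, 25)   # candidate sampling times
        n_points = 3

        def fim(times):
            t = np.asarray(times)
            sens = -t * np.exp(-k0 * t)          # sensitivity dy/dk at k0
            return float(np.sum(sens ** 2))      # 1x1 Fisher information

        # D-optimality for a scalar parameter: maximize det(FIM) = FIM.
        best = max(itertools.combinations(candidates, n_points), key=fim)
        print("D-optimal sampling times:", np.round(best, 2))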

  4. New Parameterization of Neutron Absorption Cross Sections

    NASA Technical Reports Server (NTRS)

    Tripathi, Ram K.; Wilson, John W.; Cucinotta, Francis A.

    1997-01-01

    Recent parameterization of absorption cross sections for any system of charged ion collisions, including proton-nucleus collisions, is extended for neutron-nucleus collisions valid from approx. 1 MeV to a few GeV, thus providing a comprehensive picture of absorption cross sections for any system of collision pairs (charged or uncharged). The parameters are associated with the physics of the problem. At lower energies, optical potential at the surface is important, and the Pauli operator plays an increasingly important role at intermediate energies. The agreement between the calculated and experimental data is better than earlier published results.

  5. Process Control in Production-Worthy Plasma Doping Technology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winder, Edmund J.; Fang Ziwei; Arevalo, Edwin

    2006-11-13

    As the semiconductor industry continues to scale devices to smaller dimensions and improved performance, many ion implantation processes require lower energy and higher doses. Achieving these high doses (in some cases approx. 1x10(exp 16) ions/sq cm) at low energies (<3 keV) while maintaining throughput is increasingly challenging for traditional beamline implant tools because of space-charge effects that limit the achievable beam density at low energies. Plasma doping is recognized as a technology that can overcome this problem. In this paper, we highlight the technology available to achieve process control for all implant parameters associated with modern semiconductor manufacturing.

  6. Human Performance on Hard Non-Euclidean Graph Problems: Vertex Cover

    ERIC Educational Resources Information Center

    Carruthers, Sarah; Masson, Michael E. J.; Stege, Ulrike

    2012-01-01

    Recent studies on a computationally hard visual optimization problem, the Traveling Salesperson Problem (TSP), indicate that humans are capable of finding close to optimal solutions in near-linear time. The current study is a preliminary step in investigating human performance on another hard problem, the Minimum Vertex Cover Problem, in which…

  7. Problem Solving through an Optimization Problem in Geometry

    ERIC Educational Resources Information Center

    Poon, Kin Keung; Wong, Hang-Chi

    2011-01-01

    This article adapts the problem-solving model developed by Polya to investigate and give an innovative approach to discuss and solve an optimization problem in geometry: the Regiomontanus Problem and its application to football. Various mathematical tools, such as calculus, inequality and the properties of circles, are used to explore and reflect…

  8. Multiple shooting algorithms for jump-discontinuous problems in optimal control and estimation

    NASA Technical Reports Server (NTRS)

    Mook, D. J.; Lew, Jiann-Shiun

    1991-01-01

    Multiple shooting algorithms are developed for jump-discontinuous two-point boundary value problems arising in optimal control and optimal estimation. Examples illustrating the origin of such problems are given to motivate the development of the solution algorithms. The algorithms convert the necessary conditions, consisting of differential equations and transversality conditions, into algebraic equations. The solution of the algebraic equations provides exact solutions for linear problems. The existence and uniqueness of the solution are proved.
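
    The essence of multiple shooting is to break the interval into segments, integrate each segment from unknown initial states, and solve algebraically for continuity and boundary conditions. Below is a two-segment sketch on a smooth stand-in BVP (y'' = 6t, y(0) = 0, y(1) = 1, exact solution y = t^3); the paper's jump conditions would add extra algebraic equations at the discontinuities.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import fsolve

        def rhs(t, z):
            return [z[1], 6.0 * t]               # state z = (y, y')

        def residuals(u):
            s0, y_m, yp_m = u                    # y'(0) and the state at t = 0.5
            seg1 = solve_ivp(rhs, (0.0, 0.5), [0.0, s0])
            seg2 = solve_ivp(rhs, (0.5, 1.0), [y_m, yp_m])
            return [seg1.y[0, -1] - y_m,         # continuity of y at the node
                    seg1.y[1, -1] - yp_m,        # continuity of y' at the node
                    seg2.y[0, -1] - 1.0]         # terminal boundary condition

        sol = fsolve(residuals, [1.0, 0.1, 1.0])
        print("y'(0) =", sol[0])                 # exact value is 0 for y = t^3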

  9. Aerodynamic Shape Optimization Using A Real-Number-Encoded Genetic Algorithm

    NASA Technical Reports Server (NTRS)

    Holst, Terry L.; Pulliam, Thomas H.

    2001-01-01

    A new method for aerodynamic shape optimization using a genetic algorithm with real number encoding is presented. The algorithm is used to optimize three different problems, a simple hill climbing problem, a quasi-one-dimensional nozzle problem using an Euler equation solver and a three-dimensional transonic wing problem using a nonlinear potential solver. Results indicate that the genetic algorithm is easy to implement and extremely reliable, being relatively insensitive to design space noise.

  10. A hybrid nonlinear programming method for design optimization

    NASA Technical Reports Server (NTRS)

    Rajan, S. D.

    1986-01-01

    Solutions to engineering design problems formulated as nonlinear programming (NLP) problems usually require the use of more than one optimization technique. Moreover, the interaction between the user (analysis/synthesis) program and the NLP system can lead to interface, scaling, or convergence problems. An NLP solution system is presented that seeks to solve these problems by providing a programming system to ease the user-system interface. A simple set of rules is used to select an optimization technique or to switch from one technique to another in an attempt to detect, diagnose, and solve some potential problems. Numerical examples involving finite element based optimal design of space trusses and rotor bearing systems are used to illustrate the applicability of the proposed methodology.

  11. Optimal design of piezoelectric transformers: a rational approach based on an analytical model and a deterministic global optimization.

    PubMed

    Pigache, Francois; Messine, Frédéric; Nogarede, Bertrand

    2007-07-01

    This paper deals with a deterministic and rational way to design piezoelectric transformers in radial mode. The proposed approach is based on the study of the inverse problem of design and on its reformulation as a mixed constrained global optimization problem. The methodology relies on the association of the analytical models for describing the corresponding optimization problem and on an exact global optimization software, named IBBA and developed by the second author to solve it. Numerical experiments are presented and compared in order to validate the proposed approach.
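
    The deterministic global optimization ingredient can be illustrated by a bare-bones interval branch-and-bound in one dimension: bound the objective on each box with a rigorous interval extension, prune boxes that cannot beat the incumbent, and bisect the rest. (IBBA itself handles the mixed constrained multivariate case; the function and bounds here are hypothetical.)

        import math

        def f(x):
            return x * x + math.sin(3.0 * x)

        def lower_bound(lo, hi):
            # Crude but rigorous interval bounds: x^2 >= 0 on boxes
            # straddling zero, and sin(.) >= -1 everywhere.
            sq = 0.0 if lo <= 0.0 <= hi else min(lo * lo, hi * hi)
            return sq - 1.0

        def ibb(lo, hi, tol=1e-4):
            best = f(0.5 * (lo + hi))            # incumbent from the midpoint
            boxes = [(lo, hi)]
            while boxes:
                a, b = boxes.pop()
                if lower_bound(a, b) > best:     # bound: box cannot improve
                    continue
                m = 0.5 * (a + b)
                best = min(best, f(m))           # update the incumbent
                if b - a > tol:                  # branch: bisect the box
                    boxes += [(a, m), (m, b)]
            return best

        print("global minimum (to tolerance):", ibb(-2.0, 2.0))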

  12. Exact and explicit optimal solutions for trajectory planning and control of single-link flexible-joint manipulators

    NASA Technical Reports Server (NTRS)

    Chen, Guanrong

    1991-01-01

    An optimal trajectory planning problem for a single-link, flexible joint manipulator is studied. A global feedback-linearization is first applied to formulate the nonlinear inequality-constrained optimization problem in a suitable way. Then, an exact and explicit structural formula for the optimal solution of the problem is derived and the solution is shown to be unique. It turns out that the optimal trajectory planning and control can be done off-line, so that the proposed method is applicable to both theoretical analysis and real time tele-robotics control engineering.

  13. Graphical models for optimal power flow

    DOE PAGES

    Dvijotham, Krishnamurthy; Chertkov, Michael; Van Hentenryck, Pascal; ...

    2016-09-13

    Optimal power flow (OPF) is the central optimization problem in electric power grids. Although solved routinely in the course of power grid operations, it is known to be strongly NP-hard in general, and weakly NP-hard over tree networks. In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors. We adapt the standard dynamic programming algorithm for inference over a tree-structured graphical model to the OPF problem. Combining this with an interval discretization of the nodal variables, we develop an approximation algorithm for the OPF problem. Further, we use techniques from constraint programming (CP) to perform interval computations and adaptive bound propagation to obtain practically efficient algorithms. Compared to previous algorithms that solve OPF with optimality guarantees using convex relaxations, our approach is able to work for arbitrary tree-structured distribution networks and handle mixed-integer optimization problems. Further, it can be implemented in a distributed message-passing fashion that is scalable and is suitable for “smart grid” applications like control of distributed energy resources. In conclusion, numerical evaluations on several benchmark networks show that practical OPF problems can be solved effectively using this approach.
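
    On a tree, the min-sum dynamic programming pass the authors describe amounts to each node reporting, for every discretized value it might take, the best achievable cost of its subtree. Below is a skeleton with made-up node and edge costs; a real OPF instance would couple voltages, flows, and injections.

        import numpy as np

        levels = np.linspace(0.9, 1.1, 21)            # discretized nodal variable
        children = {0: [1, 2], 1: [], 2: [3], 3: []}  # tree rooted at node 0

        def node_cost(i, v):
            return (v - 1.0) ** 2                     # e.g. deviation penalty

        def edge_cost(i, j, vi, vj):
            return 10.0 * (vi - vj) ** 2              # e.g. coupling penalty

        def message(i):
            """Optimal subtree cost of node i for each of its levels."""
            msgs = [message(c) for c in children[i]]
            out = np.empty(len(levels))
            for a, vi in enumerate(levels):
                total = node_cost(i, vi)
                for c, m in zip(children[i], msgs):
                    total += min(edge_cost(i, c, vi, levels[b]) + m[b]
                                 for b in range(len(levels)))
                out[a] = total
            return out

        root = message(0)
        print("optimal cost:", root.min(), "root level:", levels[root.argmin()])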

  14. A tabu search evolutionary algorithm for multiobjective optimization: Application to a bi-criterion aircraft structural reliability problem

    NASA Astrophysics Data System (ADS)

    Long, Kim Chenming

    Real-world engineering optimization problems often require the consideration of multiple conflicting and noncommensurate objectives, subject to nonconvex constraint regions in a high-dimensional decision space. Further challenges occur for combinatorial multiobjective problems in which the decision variables are not continuous. Traditional multiobjective optimization methods of operations research, such as weighting and epsilon constraint methods, are ill-suited to solving these complex, multiobjective problems. This has given rise to the application of a wide range of metaheuristic optimization algorithms, such as evolutionary, particle swarm, simulated annealing, and ant colony methods, to multiobjective optimization. Several multiobjective evolutionary algorithms have been developed, including the strength Pareto evolutionary algorithm (SPEA) and the non-dominated sorting genetic algorithm (NSGA), for determining the Pareto-optimal set of non-dominated solutions. Although numerous researchers have developed a wide range of multiobjective optimization algorithms, there is a continuing need to construct computationally efficient algorithms with an improved ability to converge to globally non-dominated solutions along the Pareto-optimal front for complex, large-scale, multiobjective engineering optimization problems. This is particularly important when the multiple objective functions and constraints of the real-world system cannot be expressed in explicit mathematical representations. This research presents a novel metaheuristic evolutionary algorithm for complex multiobjective optimization problems, which combines the metaheuristic tabu search algorithm with the evolutionary algorithm (TSEA), as embodied in genetic algorithms. TSEA is successfully applied to bicriteria (i.e., structural reliability and retrofit cost) optimization of the aircraft tail structure fatigue life, which increases its reliability by prolonging fatigue life. A comparison for this application of the proposed algorithm, TSEA, with several state-of-the-art multiobjective optimization algorithms reveals that TSEA outperforms these algorithms by providing retrofit solutions with greater reliability for the same costs (i.e., closer to the Pareto-optimal front) after the algorithms are executed for the same number of generations. This research also demonstrates that TSEA competes with and, in some situations, outperforms state-of-the-art multiobjective optimization algorithms such as NSGA II and SPEA 2 when applied to classic bicriteria test problems in the technical literature and other complex, sizable real-world applications. The successful implementation of TSEA contributes to the safety of aeronautical structures by providing a systematic way to guide aircraft structural retrofitting efforts, as well as a potentially useful algorithm for a wide range of multiobjective optimization problems in engineering and other fields.

  15. QUEST FOR COSMOS SUBMILLIMETER GALAXY COUNTERPARTS USING CARMA AND VLA: IDENTIFYING THREE HIGH-REDSHIFT STARBURST GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolcic, V.; Navarrete, F.; Bertoldi, F.

    2012-05-01

    We report on interferometric observations at 1.3 mm at 2''-3'' resolution using the Combined Array for Research in Millimeter-wave Astronomy. We identify multi-wavelength counterparts of three submillimeter galaxies (SMGs; F(sub 1m) > 5.5 mJy) in the COSMOS field, initially detected with the MAMBO and AzTEC bolometers at low, approx. 10''-30'', resolution. All three sources - AzTEC/C1, Cosbo-3, and Cosbo-8 - are identified to coincide with positions of 20 cm radio sources. Cosbo-3, however, is not associated with the most likely radio counterpart, closest to the MAMBO source position, but with that farther away from it. This illustrates the need for intermediate-resolution (approx. 2'') mm observations to identify the correct counterparts of single-dish-detected SMGs. All three of our sources become prominent only at NIR wavelengths, and their mm-to-radio flux based redshifts suggest that they lie at redshifts z greater than or approx. 2. As a proof of concept, we show that photometric redshifts can be well determined for SMGs, and we find photometric redshifts of 5.6 +/- 1.2, 1.9 (+0.9/-0.5), and approx. 4 for AzTEC/C1, Cosbo-3, and Cosbo-8, respectively. Using these we infer that these galaxies have radio-based star formation rates of greater than or approx. 1000 M(sub Sun)/yr and IR luminosities of approx. 10(exp 13) L(sub Sun), consistent with properties of high-redshift SMGs. In summary, our sources reflect a variety of SMG properties in terms of redshift and clustering, consistent with the framework that SMGs are progenitors of z approx. 2 and today's passive galaxies.

  16. Discovery of an Extreme MeV Blazar with the Swift Burst Alert Telescope

    NASA Technical Reports Server (NTRS)

    Sambruna, R. M.; Markwardt, C. B.; Mushotzky, R. F.; Tueller, J.; Hartman, R.; Brandt, W. N.; Schneider, D. P.; Falcone, A.; Cucchiara, A.

    2006-01-01

    The Burst Alert Telescope (BAT) onboard Swift detected bright 15-195 keV emission from the source SWIFT J0746.3+2548 (J0746 in the following), identified with the optically faint (R approx. 19), z = 2.979 quasar SDSS J074625.87+244901.2. Here we present Swift and multiwavelength observations of this source. The X-ray emission from J0746 is variable on timescales of hours to weeks in 0.5-8 keV and of a few months in 15-195 keV, but there is no accompanying spectral variability in the 0.5-8 keV band. There is a suggestion that the BAT spectrum, initially very hard (photon index Gamma approx. 0.7), steepened to Gamma approx. 1.3 in a few months, together with a decrease of the 15-195 keV flux by a factor of approx. 2. The 0.5-8 keV continuum is well described by a power law with Gamma approx. 1.3 and spectral flattening below 1 keV. The latter can be described with a column density in excess of the Galactic value, with an intrinsic column density N(sub H) approx. 10(exp 22)/sq cm, or with a flatter power law, implying a sharp (Delta(Gamma) less than or approx. 1) break across 16 keV in the quasar's rest frame. The Spectral Energy Distribution of J0746 is double-humped, with the first component peaking at IR wavelengths and the second component at MeV energies. These properties suggest that J0746 is a blazar with high gamma-ray luminosity and low peak energy (MeV), stretching the blazar sequence to an extreme.

  17. A DEEPER LOOK AT LEO IV: STAR FORMATION HISTORY AND EXTENDED STRUCTURE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sand, David J.; Seth, Anil; Olszewski, Edward W.

    We present MMT/Megacam imaging of the Leo IV dwarf galaxy in order to investigate its structure and star formation history, and to search for signs of association with the recently discovered Leo V satellite. Based on parameterized fits, we find that Leo IV is round, with epsilon < 0.23 (at the 68% confidence limit) and a half-light radius of r(sub h) approx. 130 pc. Additionally, we perform a thorough search for extended structures in the plane of the sky and along the line of sight. We derive our surface brightness detection limit by implanting fake structures into our catalog with stellar populations identical to that of Leo IV. We show that we are sensitive to stream-like structures with surface brightness mu(sub r) less than or approx. 29.6 mag/sq arcsec, and at this limit we find no stellar bridge between Leo IV (out to a radius of approx. 0.5 kpc) and the recently discovered, nearby satellite Leo V. Using the color-magnitude fitting package StarFISH, we determine that Leo IV is consistent with a single age (approx. 14 Gyr), single metallicity ([Fe/H] approx. -2.3) stellar population, although we cannot rule out a significant spread in these values. We derive a luminosity of M(sub V) = -5.5 +/- 0.3. Studying both the spatial distribution and frequency of Leo IV's 'blue plume' stars reveals evidence for a young (approx. 2 Gyr) stellar population which makes up approx. 2% of its stellar mass. This sprinkling of star formation, only detectable in this deep study, highlights the need for further imaging of the new Milky Way satellites, along with theoretical work on the expected, detailed properties of these possible 'reionization fossils'.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dullo, Bililign T.; Graham, Alister W., E-mail: Bdullo@astro.swin.edu.au

    We have used the full radial extent of images from the Hubble Space Telescope's Advanced Camera for Surveys and Wide Field Planetary Camera 2 to extract surface brightness profiles from a sample of six local lenticular galaxy candidates. We have modeled these profiles using a core-Sersic bulge plus an exponential disk model. Our fast-rotating lenticular disk galaxies with bulge magnitudes M(sub V) less than or approx. -21.30 mag have central stellar deficits, suggesting that these bulges may have formed from ''dry'' merger events involving supermassive black holes (BHs) while their surrounding disk was subsequently built up, perhaps via cold gas accretion scenarios. The central stellar mass deficits M(sub def) are roughly 0.5-2 M(sub BH) (BH mass), rather than approx. 10-20 M(sub BH) as claimed in some past studies, which is in accord with core-Sersic model mass deficit measurements in elliptical galaxies. Furthermore, these bulges have Sersic indices n approx. 3, half-light radii R(sub e) < 2 kpc, and masses > 10(exp 11) M(sub Sun), and therefore appear to be descendants of the compact galaxies reported at z approx. 1.5-2. Past studies which have searched for these local counterparts by using single-component galaxy models to provide the z approx. 0 size comparisons have overlooked these dense, compact, and massive bulges in today's early-type disk galaxies. This evolutionary scenario not only accounts for what are today generally old bulges, which must be present in z approx. 1.5 images, residing in what are generally young disks, but it eliminates the uncomfortable suggestion of a factor of three to five growth in size for the compact z approx. 1.5 galaxies that are known to possess infant disks.

  19. X-Ray Emission from the Wolf-Rayet Bubble S 308

    NASA Technical Reports Server (NTRS)

    Toala, J. A.; Guerrero, M. A.; Chu, Y.-H.; Gruendl, R. A.; Arthur, S. J.; Smith, R. C.; Snowden, S. L.

    2012-01-01

    The Wolf-Rayet (WR) bubble S 308 around the WR star HD 50896 is one of only two WR bubbles known to possess X-ray emission. We present XMM-Newton observations of three fields of this WR bubble that, in conjunction with an existing observation of its northwest quadrant (Chu et al. 2003), map most of the nebula. The X-ray emission from S 308 displays a limb-brightened morphology, with a central cavity 22' in size and a shell thickness of approx. 8'. This X-ray shell is confined by the optical shell of ionized material. The spectrum is dominated by the He-like triplets of N VI at approx. 0.43 keV and O VII at approx. 0.5 keV, and declines towards high energies, with a faint tail up to 1 keV. This spectrum can be described by a two-temperature optically thin plasma emission model (T1 approx. 1.1 x 10(exp 6) K, T2 approx. 13 x 10(exp 6) K), with a total X-ray luminosity of approx. 3 x 10(exp 33) erg/s at the assumed distance of 1.8 kpc. Qualitative comparison of the X-ray morphology of S 308 with the results of numerical simulations of wind-blown WR bubbles suggests a progenitor mass of 40 solar masses and an age in the WR phase of approx. 20,000 yrs. The X-ray luminosity predicted by simulations including the effects of heat conduction is in agreement with the observations; however, the simulated X-ray spectrum indicates generally hotter gas than is derived from the observations. We suggest that non-equilibrium ionization (NEI) may provide an explanation for this discrepancy.

  20. CLASH: NEW MULTIPLE IMAGES CONSTRAINING THE INNER MASS PROFILE OF MACS J1206.2-0847

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zitrin, A.; Rosati, P.; Nonino, M.

    2012-04-20

    We present a strong-lensing analysis of the galaxy cluster MACS J1206.2-0847 (z = 0.44) using UV, optical, and IR HST/ACS/WFC3 data taken as part of the CLASH multi-cycle treasury program, with VLT/VIMOS spectroscopy for some of the multiply lensed arcs. The CLASH observations, combined with our mass model, allow us to identify 47 new multiply lensed images of 12 distant sources. These images, along with the previously known arc, span the redshift range z approx. 1-5.5, and thus enable us to derive a detailed mass distribution and to accurately constrain, for the first time, the inner mass profile of this cluster. We find an inner profile slope of dlog Sigma/dlog theta approx. -0.55 +/- 0.1 (in the range [1'', 53''], or 5 kpc less than or approx. r less than or approx. 300 kpc), as commonly found for relaxed and well-concentrated clusters. Using the many systems uncovered here, we derive credible critical curves and Einstein radii for different source redshifts. For a source at z(sub s) approx. 2.5, the critical curve encloses a large area with an effective Einstein radius of theta(sub E) = 28'' +/- 3'' and a projected mass of (1.34 +/- 0.15) x 10(exp 14) M(sub Sun). From the current understanding of structure formation in concordance cosmology, these values are relatively high for clusters at z approx. 0.5, so that detailed studies of the inner mass distribution of clusters such as MACS J1206.2-0847 can provide stringent tests of the Lambda-CDM paradigm.

  1. The Thermal Expansion of Ring Particles and the Secular Orbital Evolution of Rings Around Planets and Asteroids

    NASA Technical Reports Server (NTRS)

    Rubincam, David P.

    2013-01-01

    The thermal expansion and contraction of ring particles orbiting a planet or asteroid can cause secular orbit evolution. This effect, called here the thermal expansion effect, depends on ring particles entering and exiting the shadow of the body they orbit. A particle cools off in the shadow and heats up again in the sunshine, suffering thermal contraction and expansion. The changing cross-section it presents to solar radiation pressure plus time lags due to thermal inertia lead to a net along-track force. The effect causes outward drift for rocky particles. For the equatorial orbits considered here, the thermal expansion effect is larger than Poynting-Robertson drag in the inner solar system for particles in the size range approx. 0.001 - 0.02 m. This leads to a net increase in the semimajor axis from the two opposing effects at rates ranging from approx. 0.1 R per million years for Mars to approx. 1 R per million years for Mercury, for distances approx. 2R from the body, where R is the body's radius. Asteroid 243 Ida has approx. 10 R per million years, while a hypothetical Near-Earth Asteroid (NEA) can have faster rates of approx. 0.5 R per thousand years, due chiefly to its small radius compared to the planets. The thermal expansion effect weakens greatly at Jupiter and is overwhelmed by Poynting-Robertson for icy particles orbiting Saturn. Meteoroids in eccentric orbits about the Sun also suffer the thermal expansion effect, but with only approx. 0.0003e2 AU change in semimajor axis over a million years for a 2 m meteoroid orbiting between Mercury and Earth.

  2. AGN UNIFICATION AT z approx. 1: u - R COLORS AND GRADIENTS IN X-RAY AGN HOSTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mark Ammons, S.; Rosario, David J. V.; Koo, David C., E-mail: ammons@as.arizona.edu, E-mail: rosario@ucolick.org, E-mail: koo@ucolick.org

    2011-10-10

    We present uncontaminated rest-frame u - R colors of 78 X-ray-selected active galactic nucleus (AGN) hosts at 0.5 < z < 1.5 in the Chandra Deep Fields, measured with Hubble Space Telescope (HST)/Advanced Camera for Surveys/NICMOS and Very Large Telescope/ISAAC imaging. We also present spatially resolved NUV - R color gradients for a subsample of AGN hosts imaged by HST/Wide Field Camera 3 (WFC3). Integrated, uncorrected photometry is not reliable for comparing the mean properties of soft and hard AGN host galaxies at z approx. 1 due to color contamination from point-source AGN emission. We use a cloning simulation to develop a calibration between concentration and this color contamination, and use this to correct host galaxy colors. The mean u - R color of the unobscured/soft hosts beyond approx. 6 kpc is statistically equivalent to that of the obscured/hard hosts (the soft sources are 0.09 +/- 0.16 mag bluer). Furthermore, the rest-frame V - J colors of the obscured and unobscured hosts beyond approx. 6 kpc are statistically equivalent, suggesting that the two populations have similar distributions of dust extinction. For the WFC3/infrared sample, the mean NUV - R color gradients of unobscured and obscured sources differ by less than approx. 0.5 mag for r > 1.1 kpc. These three observations imply that AGN obscuration is uncorrelated with the star formation rate beyond approx. 1 kpc. These observations favor a unification scenario for intermediate-luminosity AGNs in which obscuration is determined geometrically. Scenarios in which the majority of intermediate-luminosity AGNs at z approx. 1 are undergoing rapid, galaxy-wide quenching due to AGN-driven feedback processes are disfavored.

  3. Gamma-Ray Observations of the Orion Molecular Clouds with the Fermi Large Area Telescope

    NASA Technical Reports Server (NTRS)

    Ackermann, M.; Ajello, M.; Allafort, A.; Antolini, E.; Baldini, L.; Ballet, J.; Barbiellini, G.; Bastieri, D.; Bechtol, K.; Bellazzini, R.

    2012-01-01

    We report on the gamma-ray observations of the giant molecular clouds Orion A and B with the Large Area Telescope (LAT) on board the Fermi Gamma-ray Space Telescope. The gamma-ray emission in the energy band between approx. 100 MeV and approx. 100 GeV is predicted to trace the gas mass distribution in the clouds through nuclear interactions between the Galactic cosmic rays (CRs) and interstellar gas. The gamma-ray production cross-section for the nuclear interaction is known to approx. 10% precision, which makes the LAT a powerful tool to measure the gas mass column density distribution of molecular clouds for a known CR intensity. We present here such distributions for Orion A and B, and correlate them with those of the velocity-integrated CO intensity (W(sub CO)) at a 1 deg x 1 deg pixel level. The correlation is found to be linear over a W(sub CO) range of approx. 10-fold when divided into three regions, suggesting penetration of nuclear CRs to most of the cloud volumes. The W(sub CO)-to-mass conversion factor, X(sub CO), is found to be approx. 2.3 x 10(exp 20)/sq cm (K km/s)(exp -1) for the high-longitude part of Orion A (l > 212 deg), approx. 1.7 times higher than the approx. 1.3 x 10(exp 20) found for the rest of Orion A and B. We interpret the apparent high X(sub CO) in the high-longitude region of Orion A in the light of recent works proposing a nonlinear relation between H2 and CO densities in the diffuse molecular gas. W(sub CO) decreases faster than the H2 column density in the region, making the gas "darker" in W(sub CO).

  4. Aerospace applications of integer and combinatorial optimization

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Kincaid, R. K.

    1995-01-01

    Research supported by NASA Langley Research Center includes many applications of aerospace design optimization and is conducted by teams of applied mathematicians and aerospace engineers. This paper investigates the benefits of this combined expertise in solving combinatorial optimization problems. Applications range from the design of large space antennas to interior noise control. A typical problem, for example, seeks the optimal locations for vibration-damping devices on a large space structure and is expressed as a mixed-integer linear programming problem with more than 1500 design variables.

  5. Control of Finite-State, Finite Memory Stochastic Systems

    NASA Technical Reports Server (NTRS)

    Sandell, Nils R.

    1974-01-01

    A generalized problem of stochastic control is discussed in which multiple controllers with different data bases are present. The vehicle for the investigation is the finite-state, finite-memory (FSFM) stochastic control problem. Optimality conditions are obtained by deriving an equivalent deterministic optimal control problem. An FSFM minimum principle is obtained via the equivalent deterministic problem. The minimum principle suggests the development of a numerical optimization algorithm, the min-H algorithm. The relationship between the sufficiency of the minimum principle and the informational properties of the problem is investigated. A problem of hypothesis testing with 1-bit memory is investigated to illustrate the application of control-theoretic techniques to information processing problems.

  6. Evidence for Doppler-Shifted Iron Emission Lines in Black Hole Candidate 4U 1630-47

    NASA Technical Reports Server (NTRS)

    Cui, Wei; Chen, Wan; Zhang, Shuang Nan

    2000-01-01

    We report the first detection of a pair of correlated emission lines in the X-ray spectrum of the black hole candidate 4U 1630-47 during its 1996 outburst, based on Rossi X-Ray Timing Explorer (RXTE) observations of the source. At the peak plateau of the outburst, the emission lines are detected centered mostly at approx. 5.7 and approx. 7.7 keV, respectively, while the line energies exhibit random variability of approx. 5%. Interestingly, the lines move in a concerted manner to keep their separation roughly constant. The lines also vary greatly in strength, but with the lower energy line always much stronger than the higher energy one. The measured equivalent width ranges from approx. 50 to approx. 270 eV for the former, and from insignificant detection to approx. 140 eV for the latter; the two are reasonably correlated. The correlation between the lines implies a causal connection; perhaps they share a common origin. Both lines may arise from a single K-alpha line of highly ionized iron that is Doppler shifted either in a Keplerian accretion disk or in a bipolar outflow, or even both. In both scenarios, a change in the line energy might simply reflect a change in the ionization state of the line-emitting matter. We discuss the implications of the results and also raise some questions about such interpretations.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kantowski, Ronald; Chen Bin; Dai Xinyu, E-mail: kantowski@nhn.ou.ed, E-mail: Bin.Chen-1@ou.ed, E-mail: dai@nhn.ou.ed

    We compute the deflection angle to order (m/r(sub 0))(exp 2) and (m/r(sub 0)) x Lambda r(sub 0)(exp 2) for a light ray traveling in a flat Lambda-CDM cosmology that encounters a completely condensed mass region. We use a Swiss cheese model for the inhomogeneities and find that the most significant correction to the Einstein angle occurs not because of the nonlinear terms but because the condensed mass is embedded in a background cosmology. The Swiss cheese model predicts a decrease in the deflection angle of approx. 2% for weakly lensed galaxies behind the rich cluster A1689, and that the reduction can be as large as approx. 5% for similar rich clusters at z approx. 1. Weak-lensing deflection angles caused by galaxies can likewise be reduced by as much as approx. 4%. We show that the lowest order correction in which Lambda appears is proportional to (m/r(sub 0)) x sqrt(Lambda r(sub 0)(exp 2)) and could cause as much as an approx. 0.02% increase in the deflection angle for light that passes through a rich cluster. The lowest order nonlinear correction in the mass is proportional to (m/r(sub 0)) x sqrt(m/r(sub 0)) and can increase the deflection angle by approx. 0.005% for weak lensing by galaxies.

  8. Annealing studies of heteroepitaxial InSbN on GaAs grown by molecular beam epitaxy for long-wavelength infrared detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patra, Nimai C.; Bharatan, Sudhakar; Li Jia

    2012-10-15

    We report the effect of annealing on the structural, vibrational, electrical, and optical properties of heteroepitaxially grown InSbN epilayers on a GaAs substrate by molecular beam epitaxy for long-wavelength infrared detector applications. As-grown epilayers exhibited high N incorporation in both substitutional and interstitial sites, with N-induced defects, as evidenced from high-resolution x-ray diffraction, secondary ion mass spectroscopy, and room-temperature micro-Raman studies. The as-grown optical band gap was observed at 0.132 eV (approx. 9.4 micrometers), and the epilayer exhibited a high background carrier concentration in the approx. 10(exp 18)/cu cm range with a corresponding mobility of approx. 10(exp 3) sq cm/Vs. Ex situ and in situ annealing at 430 C led to some loss of N but improved InSb quality due to effective annihilation of N-related defects and other lattice defects, as attested by the enhanced InSb LO phonon modes in the corresponding Raman spectra. Further, annealing resulted in the optical absorption edge red-shifting to 0.12 eV (approx. 10.3 micrometers), and the layers were characterized by a reduced background carrier concentration in the approx. 10(exp 16)/cu cm range with enhanced mobility in the approx. 10(exp 4) sq cm/Vs range.

  9. Discovery of Rapidly Moving Partial X-Ray Absorbers Within Gamma Cassiopeiae

    NASA Technical Reports Server (NTRS)

    Hamaguchi, K.; Oskinova, L.; Russell, C. M. P.; Petre, R.; Enoto, T.; Morihana, K.; Ishida, M.

    2016-01-01

    Gamma Cassiopeiae is an enigmatic Be star with unusually strong hard X-ray emission. The Suzaku observatory detected six rapid X-ray spectral hardening events called "softness dips" in an approx. 100 ks observation in 2011. All the softness dip events show symmetric softness-ratio variations, and some of them have flat bottoms, apparently due to saturation. The softness dip spectra are best described by either approx. 40% or approx. 70% partial covering absorption of kT approx. 12 keV plasma emission by matter with a neutral hydrogen column density of approx. (2-8) x 10(exp 21)/sq cm, while the spectrum outside these dips is almost free of absorption. This result suggests the presence of two distinct X-ray-emitting spots in the gamma Cas system, perhaps on a white dwarf (WD) companion with dipole mass accretion. The partial covering absorbers may be blobs in the Be stellar wind, the Be disk, or material rotating around the WD companion. Weak correlations of the softness ratios with the hard X-ray flux suggest the presence of stable plasmas at kT approx. 0.9 and 5 keV, which may originate from the Be or WD winds. The formation of a Be star and WD binary system requires mass transfer between the two stars; gamma Cas may have experienced such activity in the past.

  10. High-Temperature Properties of Piezoelectric Langatate Single Crystals

    NASA Technical Reports Server (NTRS)

    Sehirlioglu, Alp; Sayir, Ali; Klemenz, Christine

    2007-01-01

    Langasite-type crystals belong to the non-polar point group 32 and show no phase transformations up to the melting temperature. Langatate (La3Ga(5.5)Ta(0.5)O14) demonstrates piezoelectric activity better than quartz and possesses attractive properties for high-temperature sensor, resonator, and filter applications. High-quality, colorless langatate crystals were grown by the Czochralski technique, and their electromechanical and electrical properties in different crystallographic directions were characterized at elevated temperatures. The piezoelectric coefficient along the x-axis was 7 pC/N, as measured by a Berlincourt meter for a plate geometry with an aspect ratio of 10:1. The dielectric constant did not exhibit any significant temperature dependence (K33 approx. 21 at 30 C and K33 approx. 23 at 600 C). The loss tangent at 100 kHz remained <0.003 up to 300 C and <0.65 at 600 C. The dielectric properties along the y-axis were similar, and their temperature dependence was analogous to that of the x-axis. Electromechanically, the inactive z-axis exhibited no resonance, with K33 approx. 84 at room temperature decreasing to approx. 49 at 600 C. The resistivity of these crystals along the x-axis decreased from approx. 6x10(exp 11) omega-cm at room temperature to approx. 1.6x10(exp 6) omega-cm at 600 C.

  11. The Pioneer Anomaly and a Rotating Godel Universe

    NASA Technical Reports Server (NTRS)

    Wilson, Thomas; Blome, Hans-Joachim

    2008-01-01

    The Pioneer Anomaly represents an intriguing problem for fundamental physics whose scope still seems to baffle the best of explanations. It involves one of the most precise fine-scale acceleration measurements possible in the space age, as the Pioneer 10/11 spacecraft reached distances of 20-70 AU from the Sun. An anomalous acceleration directed back toward the Sun of approx. 8 x 10(exp -10) m/sq s was discovered. The problem will be summarized and an up-to-date overview of possible explanations for this surprising result will be given. It may even be possible that our cosmic environment, such as expansion dynamics and/or dark energy, could be influencing the behavior of planets and spacecraft within our local solar system. Then a new possibility, that of a rotating Godel Universe, will be introduced and examined.

  12. Graph Design via Convex Optimization: Online and Distributed Perspectives

    NASA Astrophysics Data System (ADS)

    Meng, De

    Networks and graphs have long been natural abstractions of relations in a variety of applications, e.g., transportation, power systems, social networks, communication, and electrical circuits. As a large number of computation and optimization problems are naturally defined on graphs, graph structures not only enable important properties of these problems but also lead to highly efficient distributed and online algorithms. For example, graph separability enables parallelism for computation and operation and limits the size of local problems. More interestingly, graphs can be defined and constructed to take best advantage of those problem properties. This dissertation focuses on graph structure and design in newly proposed optimization problems, which establish a bridge between graph properties and optimization problem properties. We first study a new optimization problem called the Geodesic Distance Maximization Problem (GDMP). Given a graph with fixed edge weights, finding the shortest path, also known as the geodesic, between two nodes is a well-studied network flow problem. GDMP is the problem of finding the edge weights that maximize the length of the geodesic subject to convex constraints on the weights. We show that GDMP is a convex optimization problem for a wide class of flow costs and provide a physical interpretation using the dual. We present applications of the GDMP in various fields, including optical lens design, network interdiction, and resource allocation in the control of forest fires. We develop an Alternating Direction Method of Multipliers (ADMM) algorithm that exploits specific problem structure to solve large-scale GDMP, and demonstrate its effectiveness in numerical examples. We then turn our attention to distributed optimization on graphs with only local communication. Distributed optimization arises in a variety of applications, e.g., distributed tracking and localization, estimation problems in sensor networks, and multi-agent coordination; it aims to optimize a global objective function, formed by the summation of coupled local functions, over a graph via only local communication and computation. We develop a weighted proximal ADMM for distributed optimization that uses graph structure. This fully distributed, single-loop algorithm allows simultaneous updates and can be viewed as a generalization of existing algorithms. More importantly, we achieve faster convergence by jointly designing graph weights and algorithm parameters. Finally, we propose a new problem on networks called the Online Network Formation Problem: starting with a base graph and a set of candidate edges, at each round of the game player one first chooses a candidate edge and reveals it to player two, and then player two decides whether to accept it; player two can accept only a limited number of edges and must make online decisions with the goal of achieving the best properties of the synthesized network. The network properties considered include the number of spanning trees, algebraic connectivity, and total effective resistance. These network formation games arise in a variety of cooperative multiagent systems. We propose a primal-dual algorithm framework for the general online network formation game and analyze the algorithm performance by the competitive ratio and regret.
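
    The joint maximization over weights and geodesic length admits a compact linear-programming view when the flow cost is the path length itself: node potentials u with u_j - u_i <= w_ij on every edge force u_t - u_s to be at most the length of any source-to-target path, so maximizing u_t (with u_s fixed at zero) jointly with the weights maximizes the shortest-path length. Below is a minimal sketch of that idea on a small directed graph; the graph, budget, and per-edge caps are hypothetical, and large instances would call for the ADMM approach described in the abstract.

      import numpy as np
      from scipy.optimize import linprog

      # Directed graph with source s=0 and target t=3 (hypothetical instance).
      edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
      n_nodes, n_edges, budget = 4, len(edges), 10.0

      # Variables x = [w_0..w_4, u_0..u_3]; maximize u_3 == minimize -u_3.
      c = np.zeros(n_edges + n_nodes)
      c[n_edges + 3] = -1.0

      A_ub, b_ub = [], []
      for k, (i, j) in enumerate(edges):
          row = np.zeros(n_edges + n_nodes)
          row[n_edges + j], row[n_edges + i], row[k] = 1.0, -1.0, -1.0
          A_ub.append(row); b_ub.append(0.0)      # u_j - u_i - w_ij <= 0
      row = np.zeros(n_edges + n_nodes); row[:n_edges] = 1.0
      A_ub.append(row); b_ub.append(budget)       # convex budget on the weights

      bounds = [(0.0, 5.0)] * n_edges + [(0.0, 0.0)] + [(None, None)] * (n_nodes - 1)
      res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds)
      print(np.round(res.x[:n_edges], 3), -res.fun)   # optimal weights, geodesic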

  13. Mixed Integer Programming and Heuristic Scheduling for Space Communication

    NASA Technical Reports Server (NTRS)

    Lee, Charles H.; Cheung, Kar-Ming

    2013-01-01

    An optimal planning and scheduling approach was created for a communication network in which the nodes communicate at the highest possible rates while meeting mission requirements and operational constraints. The planning and scheduling problem was formulated in the framework of Mixed Integer Programming (MIP); a special penalty function was introduced to convert the MIP problem into a continuous optimization problem, and the constrained optimization problem was solved using heuristic optimization. The communication network consists of space and ground assets, with the link dynamics between any two assets varying with time, distance, and telecom configuration. One asset could be communicating with another at very high data rates at one time, while at other times communication is impossible because the asset is inaccessible from the network due to planetary occultation. Based on the network's geometric dynamics and link capabilities, the start time, end time, and link configuration of each view period are selected to maximize the communication efficiency within the network. Mathematical formulations for the constrained mixed integer optimization problem were derived, and efficient analytical and numerical techniques were developed to find the optimal solution. By setting up the problem using MIP, the search space for the optimization problem is reduced significantly, thereby speeding up the solution process. The ratio of the dimension of the traditional method over the proposed formulation is approximately of order N (single) to 2*N (arraying), where N is the number of receiving antennas of a node. By introducing a special penalty function, the MIP problem, with its non-differentiable cost function and nonlinear constraints, can be converted into a continuous-variable problem whose solution is possible.
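
    The special penalty function itself is not spelled out in the abstract; one common way to realize this kind of MIP-to-continuous conversion is a sinusoidal penalty that vanishes exactly at integer points, so a continuous solver is steered toward near-binary schedules. A minimal sketch under that assumption, on an invented link-scheduling toy problem:

      import numpy as np
      from scipy.optimize import minimize

      # Relaxed on/off decisions x[i] in [0, 1] for four view periods, each with
      # a (hypothetical) achievable data rate; maximize total returned volume.
      rate = np.array([3.0, 5.0, 2.0, 4.0])

      def integrality_penalty(x):
          # sin^2(pi*x) is zero exactly at integers and positive in between
          return np.sum(np.sin(np.pi * x) ** 2)

      def objective(x, mu=10.0):
          # mu trades off integrality against the communication objective
          return -rate @ x + mu * integrality_penalty(x)

      # Toy operational constraint: at most two view periods active at once.
      cons = [{"type": "ineq", "fun": lambda x: 2.0 - np.sum(x)}]
      res = minimize(objective, x0=np.full(4, 0.5),
                     bounds=[(0.0, 1.0)] * 4, constraints=cons)
      print(np.round(res.x, 3))   # near-binary selection of the two best rates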

  14. Calculation of Pareto-optimal solutions to multiple-objective problems using threshold-of-acceptability constraints

    NASA Technical Reports Server (NTRS)

    Giesy, D. P.

    1978-01-01

    A technique is presented for the calculation of Pareto-optimal solutions to a multiple-objective constrained optimization problem by solving a series of single-objective problems. Threshold-of-acceptability constraints are placed on the objective functions at each stage, both to limit the area of search and to mathematically guarantee convergence to a Pareto optimum.
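
    A minimal sketch in the spirit of the method: minimize one objective while placing a threshold-of-acceptability constraint on the other, then tighten the threshold stage by stage so each single-objective solve lands on the Pareto front. The objectives and thresholds below are invented for illustration.

      import numpy as np
      from scipy.optimize import minimize

      f1 = lambda x: (x[0] - 0.0) ** 2       # first objective, prefers x = 0
      f2 = lambda x: (x[0] - 1.0) ** 2       # second objective, prefers x = 1

      pareto = []
      for tau in np.linspace(1.0, 0.0, 6):   # tightening threshold on f2
          cons = [{"type": "ineq", "fun": lambda x, t=tau: t - f2(x)}]
          res = minimize(f1, x0=[0.5], bounds=[(0.0, 1.0)], constraints=cons)
          pareto.append((float(f1(res.x)), float(f2(res.x))))
      print(pareto)                          # traces the f1-f2 trade-off curve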

  15. Application of cellular automatons and ant algorithms in avionics

    NASA Astrophysics Data System (ADS)

    Kuznetsov, A. V.; Selvesiuk, N. I.; Platoshin, G. A.; Semenova, E. V.

    2018-03-01

    The paper considers two algorithms for finding quasi-optimal solutions of discrete optimization problems arising in avionics placement tasks. The first solves the problem of optimally assigning devices to installation locations; the second addresses the problem of finding the shortest route between devices. Solutions are constructed using a cellular automaton and the ant colony algorithm.

  16. Dynamic malware containment under an epidemic model with alert

    NASA Astrophysics Data System (ADS)

    Zhang, Tianrui; Yang, Lu-Xing; Yang, Xiaofan; Wu, Yingbo; Tang, Yuan Yan

    2017-03-01

    Alerting at the early stage of malware invasion turns out to be an important complement to malware detection and elimination. This paper addresses the issue of how to dynamically contain the prevalence of malware at a lower cost, provided alerting is feasible. A controlled epidemic model with alert is established, and an optimal control problem based on the epidemic model is formulated. The optimality system for the optimal control problem is derived. The structure of an optimal control for the proposed optimal control problem is characterized under some conditions. Numerical examples show that the cost-efficiency of an optimal control strategy can be enhanced by adjusting the upper and lower bounds on admissible controls.

  17. An inverse dynamics approach to trajectory optimization and guidance for an aerospace plane

    NASA Technical Reports Server (NTRS)

    Lu, Ping

    1992-01-01

    The optimal ascent problem for an aerospace plane is formulated as an optimal inverse dynamics problem. Both minimum-fuel and minimax types of performance indices are considered. Some important features of the optimal trajectory and controls are used to construct a nonlinear feedback midcourse controller, which not only greatly simplifies the difficult constrained optimization problem and yields improved solutions, but is also suited for onboard implementation. Robust ascent guidance is obtained by using a combination of feedback compensation and onboard generation of control through the inverse dynamics approach. Accurate orbital insertion can be achieved with near-optimal control of the rocket through inverse dynamics even in the presence of disturbances.

  18. Performance optimization of the power user electric energy data acquire system based on MOEA/D evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Ding, Zhongan; Gao, Chen; Yan, Shengteng; Yang, Canrong

    2017-10-01

    The power user electric energy data acquire system (PUEEDAS) is an important part of the smart grid. This paper builds a multi-objective optimization model for the performance of the PUEEDAS from the combined standpoint of comprehensive benefits and cost. The Chebyshev decomposition approach is used to decompose the multi-objective optimization problem, and a MOEA/D evolutionary algorithm is designed to solve it. By analyzing the Pareto optimal solution set of the multi-objective optimization problem and comparing it with monitored values, the direction for optimizing the performance of the PUEEDAS is identified. Finally, an example is designed for specific analysis.
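
    The Chebyshev (Tchebycheff) decomposition referenced above scalarizes the multi-objective problem as g(x | lambda, z*) = max_i lambda_i |f_i(x) - z*_i|, giving MOEA/D one scalar subproblem per weight vector lambda. A minimal sketch with invented objectives (benefit shortfall versus cost), solved by grid search for clarity:

      import numpy as np

      def tchebycheff(f_vals, weights, z_star):
          # g(x | lambda, z*) = max_i lambda_i * |f_i(x) - z*_i|
          return np.max(weights * np.abs(f_vals - z_star))

      f = lambda x: np.array([(x - 1.0) ** 2, x ** 2])   # toy bi-objective
      z_star = np.array([0.0, 0.0])                      # ideal point

      xs = np.linspace(0.0, 1.0, 101)
      for w in ([0.9, 0.1], [0.5, 0.5], [0.1, 0.9]):
          w = np.array(w)
          best = min(xs, key=lambda x: tchebycheff(f(x), w, z_star))
          print(w, round(best, 2))   # each weight vector yields a Pareto point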

  19. Production scheduling with ant colony optimization

    NASA Astrophysics Data System (ADS)

    Chernigovskiy, A. S.; Kapulin, D. V.; Noskova, E. E.; Yamskikh, T. N.; Tsarev, R. Yu

    2017-10-01

    The optimum solution of the production scheduling problem for manufacturing processes at an enterprise is crucial, as it allows one to obtain the required amount of production within a specified time frame. An optimum production schedule can be found using a variety of optimization or scheduling algorithms. Ant colony optimization is a well-known technique for solving global multi-objective optimization problems. In the article, the authors present a solution of the production scheduling problem by means of an ant colony optimization algorithm. A case study estimating the algorithm's efficiency against other production scheduling algorithms is presented. Advantages of the ant colony optimization algorithm and its beneficial effect on the manufacturing process are provided.
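
    As a rough illustration of how ant colony optimization assembles schedules, the sketch below sequences jobs on a single machine to reduce total tardiness; the instance data, pheromone update, and due-date heuristic are hypothetical stand-ins for the paper's production setting.

      import numpy as np

      rng = np.random.default_rng(0)
      proc = np.array([4.0, 2.0, 7.0, 3.0, 5.0])    # processing times
      due = np.array([6.0, 4.0, 18.0, 9.0, 12.0])   # due dates

      def total_tardiness(order):
          t, cost = 0.0, 0.0
          for j in order:
              t += proc[j]
              cost += max(0.0, t - due[j])
          return cost

      n, ants, iters, rho, beta = len(proc), 20, 50, 0.1, 2.0
      tau = np.ones((n, n))          # pheromone tau[position, job]
      eta = 1.0 / due                # heuristic: earlier due date, more attractive

      best_order, best_cost = None, np.inf
      for _ in range(iters):
          for _ in range(ants):
              remaining, order = list(range(n)), []
              for pos in range(n):
                  w = np.array([tau[pos, j] * eta[j] ** beta for j in remaining])
                  j = remaining[rng.choice(len(remaining), p=w / w.sum())]
                  order.append(j); remaining.remove(j)
              c = total_tardiness(order)
              if c < best_cost:
                  best_order, best_cost = order, c
          tau *= 1.0 - rho           # evaporation
          for pos, j in enumerate(best_order):
              tau[pos, j] += 1.0 / (1.0 + best_cost)   # reinforce best schedule

      print(best_order, best_cost)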

  20. A Mixed Integer Efficient Global Optimization Framework: Applied to the Simultaneous Aircraft Design, Airline Allocation and Revenue Management Problem

    NASA Astrophysics Data System (ADS)

    Roy, Satadru

    Traditional approaches to design and optimize a new system often use a system-centric objective and do not take into consideration how the operator will use this new system alongside other existing systems. This "hand-off" between the design of the new system and how the new system operates alongside other systems might lead to sub-optimal performance with respect to the operator-level objective. In other words, the system that is optimal for its system-level objective might not be best for the system-of-systems level objective of the operator. Among the few available references that describe attempts to address this hand-off, most follow an MDO-motivated subspace decomposition approach of first designing a very good system and then providing this system to the operator, who decides the best way to use it along with the existing systems. The motivating example in this dissertation presents one such problem that includes aircraft design, airline operations, and revenue management "subspaces". The research here develops an approach that can simultaneously solve these subspaces posed as a monolithic optimization problem. The monolithic approach makes the problem a Mixed Integer/Discrete Non-Linear Programming (MINLP/MDNLP) problem, which is extremely difficult to solve. The presence of expensive, sophisticated engineering analyses further aggravates the problem. To tackle this challenge problem, the work here presents a new optimization framework that simultaneously solves the subspaces to capture the "synergism" in the problem that the previous decomposition approaches may not have exploited, addresses mixed-integer/discrete type design variables in an efficient manner, and accounts for computationally expensive analysis tools. The framework combines concepts from efficient global optimization, Kriging partial least squares, and gradient-based optimization. The approach is demonstrated on an 11-route airline network problem consisting of 94 decision variables, including 33 integer and 61 continuous variables. This application problem is representative of an interacting group of systems and provides key challenges to the optimization framework, as reflected by the presence of a moderate number of integer and continuous design variables and an expensive analysis tool. The results indicate that simultaneously solving the subspaces can lead to significant improvement in the fleet-level objective of the airline when compared to the previously developed sequential subspace decomposition approach. In developing the approach to solve the MINLP/MDNLP challenge problem, several test problems provided the ability to explore the performance of the framework. While solving these test problems, the framework showed that it could solve other MDNLP problems, including those with categorically discrete variables, indicating that the framework could have broader application than the aircraft design-fleet allocation-revenue management problem.

  1. Robust quantum optimizer with full connectivity.

    PubMed

    Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P

    2017-04-01

    Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.
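
    The number partitioning instance mentioned at the end maps directly onto the Ising-type cost function a quantum annealer minimizes: E(s) = (sum_i a_i s_i)^2 with spins s_i = ±1 assigning each number to one of two sets, so E = 0 exactly for a perfect partition. A brute-force check on an invented instance:

      import itertools
      import numpy as np

      a = np.array([4, 7, 1, 5, 3])    # numbers to split into two equal-sum sets

      # Ising cost: E(s) = (sum_i a_i * s_i)^2, s_i in {-1, +1}
      best = min(itertools.product([-1, 1], repeat=len(a)),
                 key=lambda s: float(np.dot(s, a)) ** 2)
      plus = [x for x, s in zip(a, best) if s > 0]
      minus = [x for x, s in zip(a, best) if s < 0]
      print(plus, sum(plus), minus, sum(minus))   # a perfect 10/10 split exists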

  2. Cooperative global optimal preview tracking control of linear multi-agent systems: an internal model approach

    NASA Astrophysics Data System (ADS)

    Lu, Yanrong; Liao, Fucheng; Deng, Jiamei; Liu, Huiyang

    2017-09-01

    This paper investigates the cooperative global optimal preview tracking problem of linear multi-agent systems under the assumption that the output of a leader is a previewable periodic signal and the topology graph contains a directed spanning tree. First, a type of distributed internal model is introduced, and the cooperative preview tracking problem is converted to a global optimal regulation problem of an augmented system. Second, an optimal controller, which can guarantee the asymptotic stability of the augmented system, is obtained by means of the standard linear quadratic optimal preview control theory. Third, on the basis of proving the existence conditions of the controller, sufficient conditions are given for the original problem to be solvable, meanwhile a cooperative global optimal controller with error integral and preview compensation is derived. Finally, the validity of theoretical results is demonstrated by a numerical simulation.

  3. Reliability-based design optimization using a generalized subset simulation method and posterior approximation

    NASA Astrophysics Data System (ADS)

    Ma, Yuan-Zhuo; Li, Hong-Shuang; Yao, Wei-Xing

    2018-05-01

    The evaluation of the probabilistic constraints in reliability-based design optimization (RBDO) problems has always been significant and challenging work, which strongly affects the performance of RBDO methods. This article deals with RBDO problems using a recently developed generalized subset simulation (GSS) method and a posterior approximation approach. The posterior approximation approach is used to transform all the probabilistic constraints into ordinary constraints as in deterministic optimization. The assessment of multiple failure probabilities required by the posterior approximation approach is achieved by GSS in a single run at all supporting points, which are selected by a proper experimental design scheme combining Sobol' sequences and Bucher's design. Sequentially, the transformed deterministic design optimization problem can be solved by optimization algorithms, for example, the sequential quadratic programming method. Three optimization problems are used to demonstrate the efficiency and accuracy of the proposed method.

  4. Tailored parameter optimization methods for ordinary differential equation models with steady-state constraints.

    PubMed

    Fiedler, Anna; Raeth, Sebastian; Theis, Fabian J; Hausser, Angelika; Hasenauer, Jan

    2016-08-22

    Ordinary differential equation (ODE) models are widely used to describe (bio-)chemical and biological processes. To enhance the predictive power of these models, their unknown parameters are estimated from experimental data. These experimental data are mostly collected in perturbation experiments, in which the processes are pushed out of steady state by applying a stimulus. The fact that the initial condition is a steady state of the unperturbed process provides valuable information, as it restricts the dynamics of the process and thereby the parameters. However, implementing steady-state constraints in the optimization often results in convergence problems. In this manuscript, we propose two new methods for solving optimization problems with steady-state constraints. The first method exploits ideas from optimization algorithms on manifolds and introduces a retraction operator, essentially reducing the dimension of the optimization problem. The second method is based on the continuous analogue of the optimization problem. This continuous analogue is an ODE whose equilibrium points are the optima of the constrained optimization problem. This equivalence enables the use of adaptive numerical methods for solving optimization problems with steady-state constraints. Both methods are tailored to the problem structure and exploit the local geometry of the steady-state manifold and its stability properties. A parameterization of the steady-state manifold is not required. The efficiency and reliability of the proposed methods are evaluated using one toy example and two applications. The first application example uses published data while the second uses a novel dataset for Raf/MEK/ERK signaling. The proposed methods demonstrated better convergence properties than state-of-the-art methods employed in systems and computational biology. Furthermore, the average computation time per converged start is significantly lower. In addition to the theoretical results, the analysis of the dataset for Raf/MEK/ERK signaling provides novel biological insights regarding the existence of feedback regulation. Many optimization problems considered in systems and computational biology are subject to steady-state constraints. While most optimization methods have convergence problems if these steady-state constraints are highly nonlinear, the methods presented recover the convergence properties of optimizers which can exploit an analytical expression for the parameter-dependent steady state. This renders them an excellent alternative to the methods currently employed in systems and computational biology.
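
    The paper's two methods (a retraction operator and a continuous analogue) go beyond what fits here, but the underlying difficulty is easy to state: the estimated initial condition must satisfy f(x0, p) = 0. The naive penalty treatment that such tailored methods improve upon looks roughly like the following, on a hypothetical two-state model with synthetic data:

      import numpy as np
      from scipy.integrate import solve_ivp
      from scipy.optimize import minimize

      # Toy model: x1' = k1 - k2*x1, x2' = k2*x1 - k3*x2 (hypothetical rates)
      def rhs(t, x, k1, k2, k3):
          return [k1 - k2 * x[0], k2 * x[0] - k3 * x[1]]

      t_obs = np.linspace(0.0, 10.0, 11)
      k_true = (1.0, 0.5, 0.8)
      x0_true = [k_true[0] / k_true[1], k_true[0] / k_true[2]]  # analytic steady state
      data = solve_ivp(rhs, (0.0, 10.0), x0_true, t_eval=t_obs, args=k_true).y

      def cost(theta, w=1e3):
          k1, k2, k3, x01, x02 = theta
          sim = solve_ivp(rhs, (0.0, 10.0), [x01, x02],
                          t_eval=t_obs, args=(k1, k2, k3)).y
          residual = np.sum((sim - data) ** 2)
          # soft steady-state constraint on the initial condition
          steady = np.sum(np.square(rhs(0.0, [x01, x02], k1, k2, k3)))
          return residual + w * steady

      res = minimize(cost, x0=np.ones(5), method="Powell",
                     bounds=[(1e-3, 5.0)] * 5)
      print(np.round(res.x, 3))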

  5. Optimal Water-Power Flow Problem: Formulation and Distributed Optimal Solution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Zhao, Changhong; Zamzam, Ahmed S.

    This paper formalizes an optimal water-power flow (OWPF) problem to optimize the use of controllable assets across power and water systems while accounting for the couplings between the two infrastructures. Tanks and pumps are optimally managed to satisfy water demand while improving power grid operations; for the power network, an AC optimal power flow formulation is augmented to accommodate the controllability of water pumps. Unfortunately, the physics governing the operation of the two infrastructures and coupling constraints lead to a nonconvex (and, in fact, NP-hard) problem; however, after reformulating OWPF as a nonconvex, quadratically-constrained quadratic problem, a feasible point pursuit-successive convex approximation approach is used to identify feasible and optimal solutions. In addition, a distributed solver based on the alternating direction method of multipliers enables water and power operators to pursue individual objectives while respecting the couplings between the two networks. The merits of the proposed approach are demonstrated for the case of a distribution feeder coupled with a municipal water distribution network.

  6. Trajectory optimization for lunar rover performing vertical takeoff vertical landing maneuvers in the presence of terrain

    NASA Astrophysics Data System (ADS)

    Ma, Lin; Wang, Kexin; Xu, Zuhua; Shao, Zhijiang; Song, Zhengyu; Biegler, Lorenz T.

    2018-05-01

    This study presents a trajectory optimization framework for a lunar rover performing vertical takeoff vertical landing (VTVL) maneuvers in the presence of terrain using variable-thrust propulsion. First, a VTVL trajectory optimization problem with a three-dimensional kinematics and dynamics model, boundary conditions, and path constraints is formulated. Then, a finite-element approach transcribes the formulated trajectory optimization problem into a nonlinear programming (NLP) problem solved by a highly efficient NLP solver. A homotopy-based backtracking strategy is applied to enhance convergence in solving the formulated VTVL trajectory optimization problem. The optimal thrust solution typically has a "bang-bang" profile, considering that bounds are imposed on the magnitude of engine thrust. An adaptive mesh refinement strategy based on a constant Hamiltonian profile is designed to address the difficulty of locating the breakpoints in the thrust profile. Four scenarios are simulated. Simulation results indicate that the proposed trajectory optimization framework has sufficient adaptability to handle VTVL missions efficiently.

  7. THE OPTICAL MICROVARIABILITY AND SPECTRAL CHANGES OF THE BL LACERTAE OBJECT S5 0716+714

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Poon, H.; Fu, J. N.; Fan, J. H.

    We monitored the BL Lac object S5 0716+714 in the optical band during 2008 October and December and 2009 February with a best temporal resolution of about 5 minutes in the BVRI bands. Four fast flares were observed with amplitudes ranging from 0.3 to 0.75 mag. The source remained active during the whole monitoring campaign, showing microvariability on all days except one. The overall variability amplitudes are Delta B approx. 0.89 mag, Delta V approx. 0.80 mag, Delta R approx. 0.73 mag, and Delta I approx. 0.51 mag. Typical timescales of microvariability range from 2 to 8 hr. The overall V - R color index ranges from 0.37 to 0.59. Strong bluer-when-brighter chromatism was found on internight timescales. However, a different spectral behavior was found on intranight timescales. A possible time lag of approx. 11 minutes between the B and I bands was found on one night. The shock-in-jet model and geometric effects can be applied to explain the source's intranight behavior.

  8. Finite element approximation of an optimal control problem for the von Karman equations

    NASA Technical Reports Server (NTRS)

    Hou, L. Steven; Turner, James C.

    1994-01-01

    This paper is concerned with optimal control problems for the von Karman equations with distributed controls. We first show that optimal solutions exist. We then show that Lagrange multipliers may be used to enforce the constraints and derive an optimality system from which optimal states and controls may be deduced. Finally we define finite element approximations of solutions for the optimality system and derive error estimates for the approximations.

  9. Discrete Bat Algorithm for Optimal Problem of Permutation Flow Shop Scheduling

    PubMed Central

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the present discrete bat algorithm for the optimal permutation flow shop scheduling problem. PMID:25243220
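
    The NEH heuristic mentioned above is concrete enough to sketch: order jobs by decreasing total processing time, then insert each job at the position that minimizes the makespan of the partial schedule. A minimal version on an invented 4-job, 3-machine instance:

      import numpy as np

      def makespan(seq, p):
          # p[j, m] is the processing time of job j on machine m
          c = np.zeros(p.shape[1])
          for j in seq:
              for k in range(p.shape[1]):
                  c[k] = max(c[k], c[k - 1] if k else 0.0) + p[j, k]
          return c[-1]

      def neh(p):
          order = np.argsort(-p.sum(axis=1))        # jobs by decreasing total work
          seq = [order[0]]
          for j in order[1:]:
              # try every insertion position, keep the best partial makespan
              cands = [seq[:i] + [j] + seq[i:] for i in range(len(seq) + 1)]
              seq = min(cands, key=lambda s: makespan(s, p))
          return seq

      p = np.array([[3, 4, 2], [5, 1, 3], [2, 6, 4], [4, 2, 5]], dtype=float)
      seq = neh(p)
      print(seq, makespan(seq, p))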

  10. Discrete bat algorithm for optimal problem of permutation flow shop scheduling.

    PubMed

    Luo, Qifang; Zhou, Yongquan; Xie, Jian; Ma, Mingzhi; Li, Liangliang

    2014-01-01

    A discrete bat algorithm (DBA) is proposed for the optimal permutation flow shop scheduling problem (PFSP). Firstly, the discrete bat algorithm is constructed based on the idea of the basic bat algorithm; it divides the whole scheduling problem into many subscheduling problems, and the NEH heuristic is then introduced to solve each subscheduling problem. Secondly, some subsequences are operated on with certain probability in the pulse emission and loudness phases. An intensive virtual population neighborhood search is integrated into the discrete bat algorithm to further improve the performance. Finally, the experimental results show the suitability and efficiency of the present discrete bat algorithm for the optimal permutation flow shop scheduling problem.

  11. Precision of Sensitivity in the Design Optimization of Indeterminate Structures

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Pai, Shantaram S.; Hopkins, Dale A.

    2006-01-01

    Design sensitivity is central to most optimization methods. The analytical sensitivity expression for an indeterminate structural design optimization problem can be factored into a simple determinate term and a complicated indeterminate component. Sensitivity can be approximated by retaining only the determinate term and setting the indeterminate factor to zero. The optimum solution is reached with the approximate sensitivity, and the central processing unit (CPU) time to solution is substantially reduced. The benefit that accrues from using the approximate sensitivity is quantified by solving a set of problems in a controlled environment. Each problem is solved twice: first using the closed-form sensitivity expression, then using the approximation. The problem solutions use the CometBoards testbed as the optimization tool with the integrated force method as the analyzer. The modification that may be required to use the stiffness method as the analysis tool in optimization is discussed. The design optimization problem of an indeterminate structure contains many dependent constraints because of the implicit relationship between stresses, as well as the relationship between the stresses and displacements. The design optimization process can become problematic because the implicit relationship reduces the rank of the sensitivity matrix. The proposed approximation restores the full rank and enhances the robustness of the design optimization method.

  12. Free terminal time optimal control problem of an HIV model based on a conjugate gradient method.

    PubMed

    Jang, Taesoo; Kwon, Hee-Dae; Lee, Jeehyun

    2011-10-01

    The minimum duration of treatment periods and the optimal multidrug therapy for human immunodeficiency virus (HIV) type 1 infection are considered. We formulate an optimal tracking problem, attempting to drive the states of the model to a "healthy" steady state in which the viral load is low and the immune response is strong. We study an optimal time frame as well as HIV therapeutic strategies by analyzing the free terminal time optimal tracking control problem. The minimum duration of treatment periods and the optimal multidrug therapy are found by solving the corresponding optimality systems with the additional transversality condition for the terminal time. We demonstrate by numerical simulations that the optimal dynamic multidrug therapy can lead to the long-term control of HIV by the strong immune response after discontinuation of therapy.

  13. Constrained optimization via simulation models for new product innovation

    NASA Astrophysics Data System (ADS)

    Pujowidianto, Nugroho A.

    2017-11-01

    We consider the problem of constrained optimization in which decision makers aim to optimize a primary performance measure while constraining secondary performance measures. This paper provides a brief overview of stochastically constrained optimization via discrete event simulation. Most review papers tend to be methodology-based; this review attempts to be problem-based, as decision makers may have already decided on the problem formulation. We consider constrained optimization models because there are usually constraints on secondary performance measures as trade-offs in new product development. The paper starts by laying out the different possible methods and the reasons for using constrained optimization via simulation models. It then reviews different simulation optimization approaches to constrained optimization depending on the number of decision variables, the type of constraints, and the risk preferences of the decision makers in handling uncertainties.

  14. Benchmarking optimization software with COPS 3.0.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolan, E. D.; More, J. J.; Munson, T. S.

    2004-05-24

    The authors describe version 3.0 of the COPS set of nonlinearly constrained optimization problems. They have added new problems, as well as streamlined and improved most of the problems. They also provide a comparison of the FILTER, KNITRO, LOQO, MINOS, and SNOPT solvers on these problems.

  15. Optimal birth control of age-dependent competitive species III. Overtaking problem

    NASA Astrophysics Data System (ADS)

    He, Ze-Rong; Cheng, Ji-Shu; Zhang, Chun-Guo

    2008-01-01

    A study is made of an overtaking optimal problem for a population system consisting of two competing species, which is controlled by fertilities. The existence of optimal policy is proved and a maximum principle is carefully derived under less restrictive conditions. Weak and strong turnpike properties of optimal trajectories are established.

  16. Application of ant colony optimization to optimal foragaing theory: comparison of simulation and field results

    USDA-ARS?s Scientific Manuscript database

    Ant Colony Optimization (ACO) refers to the family of algorithms inspired by the behavior of real ants and used to solve combinatorial problems such as the Traveling Salesman Problem (TSP). Optimal Foraging Theory (OFT) is an evolutionary principle wherein foraging organisms or insect parasites seek ...

  17. Proposed SLR Optical Bench Required to Track Debris Using 1550 nm Lasers

    NASA Technical Reports Server (NTRS)

    Shappirio, M.; Coyle, D. B.; McGarry, J. F.; Bufton, J.; Cheek, J. W.; Clarke, G.; Hull, S. M.; Skillman, D. R.; Stysley, P. R.; Sun, X.

    2015-01-01

    A previous study has indicated that by using approx. 1550 nm wavelengths a laser ranging system can track debris objects in an "eye safe" manner, while increasing the expected return rate by a factor of approx. 2 per unit area of the telescope. In this presentation we develop the optical bench required to use approx. 1550 nm lasers and its integration with a 532 nm system. We will use the optical bench configuration for NGSLR as the baseline and indicate a possible injection point for the 1550 nm laser. The presentation will include which elements may need to be changed for transmitting the required power at the approx. 1550 nm wavelength, supporting the alignment of the laser to the telescope, and possible concerns for the telescope optics.

  18. The Small Satellites of Pluto as Observed by New Horizons

    NASA Technical Reports Server (NTRS)

    Weaver, H. A.; Buie, M. W.; Buratti, B. J.; Grundy, W. M.; Lauer, T. R.; Olkin, C. B.; Parker, A. H.; Porter, S. B.; Showalter, M. R.; Spencer, J. R.

    2016-01-01

    The New Horizons mission has provided resolved measurements of Pluto's moons Styx, Nix, Kerberos, and Hydra. All four are small, with equivalent spherical diameters of approx. 40 kilometers for Nix and Hydra and approx. 10 kilometers for Styx and Kerberos. They are also highly elongated, with maximum to minimum axis ratios of approx. 2. All four moons have high albedos (approx. 50 to 90%) suggestive of a water-ice surface composition. Crater densities on Nix and Hydra imply surface ages of at least 4 billion years. The small moons rotate much faster than synchronous, with rotational poles clustered nearly orthogonal to the common pole directions of Pluto and Charon. These results reinforce the hypothesis that the small moons formed in the aftermath of a collision that produced the Pluto-Charon binary.

  19. An Investigation of Generalized Differential Evolution Metaheuristic for Multiobjective Optimal Crop-Mix Planning Decision

    PubMed Central

    Olugbara, Oludayo

    2014-01-01

    This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms—being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem. PMID:24883369

  20. An investigation of generalized differential evolution metaheuristic for multiobjective optimal crop-mix planning decision.

    PubMed

    Adekanmbi, Oluwole; Olugbara, Oludayo; Adeyemo, Josiah

    2014-01-01

    This paper presents an annual multiobjective crop-mix planning as a problem of concurrent maximization of net profit and maximization of crop production to determine an optimal cropping pattern. The optimal crop production in a particular planting season is a crucial decision making task from the perspectives of economic management and sustainable agriculture. A multiobjective optimal crop-mix problem is formulated and solved using the generalized differential evolution 3 (GDE3) metaheuristic to generate a globally optimal solution. The performance of the GDE3 metaheuristic is investigated by comparing its results with the results obtained using epsilon constrained and nondominated sorting genetic algorithms-being two representatives of state-of-the-art in evolutionary optimization. The performance metrics of additive epsilon, generational distance, inverted generational distance, and spacing are considered to establish the comparability. In addition, a graphical comparison with respect to the true Pareto front for the multiobjective optimal crop-mix planning problem is presented. Empirical results generally show GDE3 to be a viable alternative tool for solving a multiobjective optimal crop-mix planning problem.

  1. Trajectory Design Employing Convex Optimization for Landing on Irregularly Shaped Asteroids

    NASA Technical Reports Server (NTRS)

    Pinson, Robin M.; Lu, Ping

    2016-01-01

    Mission proposals that land spacecraft on asteroids are becoming increasingly popular. However, in order to have a successful mission the spacecraft must reliably and softly land at the intended landing site with pinpoint precision. The problem under investigation is how to design a propellant optimal powered descent trajectory that can be quickly computed onboard the spacecraft, without interaction from the ground control. The propellant optimal control problem in this work is to determine the optimal finite thrust vector to land the spacecraft at a specified location, in the presence of a highly nonlinear gravity field, subject to various mission and operational constraints. The proposed solution uses convex optimization, a gravity model with higher fidelity than Newtonian, and an iterative solution process for a fixed final time problem. In addition, a second optimization method is wrapped around the convex optimization problem to determine the optimal flight time that yields the lowest propellant usage over all flight times. Gravity models designed for irregularly shaped asteroids are investigated. Success of the algorithm is demonstrated by designing powered descent trajectories for the elongated binary asteroid Castalia.

  2. Efficient computation of optimal actions.

    PubMed

    Todorov, Emanuel

    2009-07-14

    Optimal choice of actions is a fundamental problem relevant to fields as diverse as neuroscience, psychology, economics, computer science, and control engineering. Despite this broad relevance the abstract setting is similar: we have an agent choosing actions over time, an uncertain dynamical system whose state is affected by those actions, and a performance criterion that the agent seeks to optimize. Solving problems of this kind remains hard, in part, because of overly generic formulations. Here, we propose a more structured formulation that greatly simplifies the construction of optimal control laws in both discrete and continuous domains. An exhaustive search over actions is avoided and the problem becomes linear. This yields algorithms that outperform Dynamic Programming and Reinforcement Learning, and thereby solve traditional problems more efficiently. Our framework also enables computations that were not possible before: composing optimal control laws by mixing primitives, applying deterministic methods to stochastic systems, quantifying the benefits of error tolerance, and inferring goals from behavioral data via convex optimization. Development of a general class of easily solvable problems tends to accelerate progress, as linear systems theory has done, for example. Our framework may have similar impact in fields where optimal choice of actions is relevant.
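
    In the discrete case, the linearity alluded to above can be sketched directly: with passive dynamics P and state costs q, the desirability function z satisfies z = exp(-q) * (P z), so the optimal control follows from a linear fixed-point iteration instead of an exhaustive minimization over actions. A toy first-exit problem on a chain (dynamics and costs invented):

      import numpy as np

      # First-exit problem on a 5-state chain; state 4 is the absorbing goal.
      n = 5
      q = np.array([1.0, 1.0, 0.5, 1.0, 0.0])   # state costs (hypothetical)
      P = np.zeros((n, n))                       # passive (uncontrolled) dynamics
      for i in range(n - 1):
          P[i, max(i - 1, 0)] += 0.5
          P[i, i + 1] += 0.5
      P[n - 1, n - 1] = 1.0

      # Desirability: z = exp(-q) * (P @ z), pinned at the goal state.
      z = np.ones(n)
      for _ in range(1000):
          z = np.exp(-q) * (P @ z)
          z[n - 1] = 1.0                         # boundary value exp(-q_goal) = 1

      u = P * z                                  # optimal law: u*(j|i) ~ P[i,j]*z[j]
      u /= u.sum(axis=1, keepdims=True)
      print(np.round(-np.log(z), 3))             # optimal cost-to-go v = -log z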

  3. An improved genetic algorithm for multidimensional optimization of precedence-constrained production planning and scheduling

    NASA Astrophysics Data System (ADS)

    Dao, Son Duy; Abhary, Kazem; Marian, Romeo

    2017-06-01

    Integration of production planning and scheduling is a class of problems commonly found in the manufacturing industry. This class of problems, associated with precedence constraints, has been previously modeled and optimized by the authors; it requires a multidimensional optimization covering, at the same time, what to make, how many to make, where to make, and the order in which to make them. It is a combinatorial, NP-hard problem, for which no polynomial-time algorithm is known to produce an optimal result on a random graph. In this paper, the further development of a Genetic Algorithm (GA) for this integrated optimization is presented. Because of the dynamic nature of the problem, the size of its solution is variable. To deal with this variability and find an optimal solution to the problem, a GA with new features in chromosome encoding, crossover, mutation, and selection, as well as in algorithm structure, is developed herein. With the proposed structure, the GA is able to "learn" from its experience. The robustness of the proposed GA is demonstrated by a complex numerical example in which its performance is compared with those of three commercial optimization solvers.

  4. OBSERVATIONS OF Arp 220 USING HERSCHEL-SPIRE: AN UNPRECEDENTED VIEW OF THE MOLECULAR GAS IN AN EXTREME STAR FORMATION ENVIRONMENT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rangwala, Naseem; Maloney, Philip R.; Glenn, Jason

    2011-12-10

    We present Herschel Spectral and Photometric Imaging Receiver Fourier Transform Spectrometer (Herschel SPIRE-FTS) observations of Arp 220, a nearby ultra-luminous infrared galaxy. The FTS provides continuous spectral coverage from 190 to 670 µm, a wavelength region that is either very difficult to observe or completely inaccessible from the ground. The spectrum provides a good measurement of the continuum and detection of several molecular and atomic species. We detect luminous CO (J = 4-3 to 13-12) and water rotational transitions with comparable total luminosity approx. 2 x 10^8 L_Sun; very high-J transitions of HCN (J = 12-11 to 17-16) in absorption; strong absorption features of rare species such as OH+, H2O+, and HF; and atomic lines of [C I] and [N II]. The modeling of the continuum shows that the dust is warm, with T = 66 K, and has an unusually large optical depth, with tau_dust approx. 5 at 100 µm. The total far-infrared luminosity of Arp 220 is L_FIR approx. 2 x 10^12 L_Sun. Non-LTE modeling of the extinction-corrected CO rotational transitions shows that the spectral line energy distribution of CO is fit well by two temperature components: cold molecular gas at T approx. 50 K and warm molecular gas at T approx. 1350 (+280/-100) K (the inferred temperatures are much lower if CO line fluxes are not corrected for dust extinction). These two components are not in pressure equilibrium. The mass of the warm gas is 10% of the cold gas, but it dominates the CO luminosity. The ratio of total CO luminosity to the total FIR luminosity is L_CO/L_FIR approx. 10^-4 (the most luminous lines, such as J = 6-5, have L_CO,J=6-5/L_FIR approx. 10^-5). The temperature of the warm gas is in excellent agreement with the observations of H2 rotational lines. At 1350 K, H2 dominates the cooling (approx. 20 L_Sun per M_Sun) in the interstellar medium compared to CO (approx. 0.4 L_Sun per M_Sun). We have ruled out photodissociation regions, X-ray-dominated regions, and cosmic rays as likely sources of excitation of this warm molecular gas, and found that only a non-ionizing source can heat this gas; the mechanical energy from supernovae and stellar winds is able to satisfy the large energy budget of approx. 20 L_Sun per M_Sun. Analysis of the very high-J lines of HCN strongly indicates that they are solely populated by infrared pumping of photons at 14 µm. This mechanism requires an intense radiation field with T > 350 K. We detect a massive molecular outflow in Arp 220 from the analysis of strong P Cygni line profiles observed in OH+, H2O+, and H2O. The outflow has a mass of approx. 10^7 M_Sun or more and is bound to the nuclei with velocity of approx. 250 km/s or less. The large column densities observed for these molecular ions strongly favor the existence of an X-ray luminous AGN (10^44 erg/s) in Arp 220.

  5. Fast, accurate photon beam accelerator modeling using BEAMnrc: A systematic investigation of efficiency enhancing methods and cross-section data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fragoso, Margarida; Kawrakow, Iwan; Faddegon, Bruce A.

    In this work, an investigation of efficiency enhancing methods and cross-section data in the BEAMnrc Monte Carlo (MC) code system is presented. Additionally, BEAMnrc was compared with VMC++, another special-purpose MC code system that has recently been enhanced for the simulation of the entire treatment head. BEAMnrc and VMC++ were used to simulate a 6 MV photon beam from a Siemens Primus linear accelerator (linac), and phase space (PHSP) files were generated at 100 cm source-to-surface distance for the 10x10 and 40x40 cm^2 field sizes. The BEAMnrc parameters/techniques under investigation were grouped by (i) photon and bremsstrahlung cross sections, (ii) approximate efficiency improving techniques (AEITs), (iii) variance reduction techniques (VRTs), and (iv) a VRT (bremsstrahlung photon splitting) in combination with an AEIT (charged particle range rejection). The BEAMnrc PHSP file obtained without the efficiency enhancing techniques under study or, when not possible, with their default values (e.g., the EXACT algorithm for the boundary crossing algorithm) and with the default cross-section data (PEGS4 and Bethe-Heitler) was used as the "base line" for accuracy verification of the PHSP files generated from the different groups described previously. Subsequently, a selection of the PHSP files was used as input for DOSXYZnrc-based water phantom dose calculations, which were verified against measurements. The performance of the different VRTs and AEITs available in BEAMnrc and of VMC++ was specified by the relative efficiency, i.e., by the efficiency of the MC simulation relative to that of the BEAMnrc base-line calculation. The highest relative efficiencies were approx. 935 (approx. 111 min on a single 2.6 GHz processor) and approx. 200 (approx. 45 min on a single processor) for the 10x10 cm^2 field size with 50 million histories and the 40x40 cm^2 field size with 100 million histories, respectively, using the VRT directional bremsstrahlung splitting (DBS) with no electron splitting. When DBS was used with electron splitting and combined with augmented charged particle range rejection, a technique recently introduced in BEAMnrc, relative efficiencies were approx. 420 (approx. 253 min on a single processor) and approx. 175 (approx. 58 min on a single processor) for the 10x10 and 40x40 cm^2 field sizes, respectively. Calculations of the Siemens Primus treatment head with VMC++ produced relative efficiencies of approx. 1400 (approx. 6 min on a single processor) and approx. 60 (approx. 4 min on a single processor) for the 10x10 and 40x40 cm^2 field sizes, respectively. BEAMnrc PHSP calculations with DBS alone or DBS in combination with charged particle range rejection were more efficient than the other efficiency enhancing techniques used. Using VMC++, accurate simulations of the entire linac treatment head were performed within minutes on a single processor. Noteworthy differences (±1%-3%) in the mean energy, planar fluence, and angular and spectral distributions were observed with the NIST bremsstrahlung cross sections compared with those of Bethe-Heitler (the BEAMnrc default bremsstrahlung cross section). However, MC calculated dose distributions in water phantoms (using combinations of VRTs/AEITs and cross-section data) agreed within 2% of measurements. Furthermore, MC calculated dose distributions in a simulated water/air/water phantom, using NIST cross sections, were within 2% agreement with the BEAMnrc Bethe-Heitler default case.

  6. Optimization with artificial neural network systems - A mapping principle and a comparison to gradient based methods

    NASA Technical Reports Server (NTRS)

    Leong, Harrison Monfook

    1988-01-01

    General formulae for mapping optimization problems into systems of ordinary differential equations associated with artificial neural networks are presented. A comparison is made to optimization using gradient-search methods. The performance measure is the settling time from an initial state to a target state. A simple analytical example illustrates a situation where dynamical systems representing artificial neural network methods would settle faster than those representing gradient-search. Settling time was investigated for a more complicated optimization problem using computer simulations. The problem was a simplified version of a problem in medical imaging: determining loci of cerebral activity from electromagnetic measurements at the scalp. The simulations showed that gradient based systems typically settled 50 to 100 times faster than systems based on current neural network optimization methods.
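
    A minimal sketch of the comparison being drawn: map a quadratic objective onto the dynamical system dx/dt = -grad f(x) and let it settle, versus taking explicit gradient-search steps on the same vector field. The test problem is invented; the paper's network mappings are more general.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Quadratic test objective f(x) = 0.5 x^T A x - b^T x
      A = np.array([[3.0, 0.5], [0.5, 1.0]])
      b = np.array([1.0, 2.0])
      grad = lambda x: A @ x - b

      # "Network" view: settle the gradient-flow ODE from an initial state
      sol = solve_ivp(lambda t, x: -grad(x), (0.0, 20.0), [5.0, -5.0])
      print(sol.y[:, -1])                  # approaches the optimum A^{-1} b

      # Gradient-search view: explicit Euler steps on the same vector field
      x = np.array([5.0, -5.0])
      for _ in range(200):
          x = x - 0.1 * grad(x)
      print(x, np.linalg.solve(A, b))      # both settle to the same target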

  7. Simultaneous analysis and design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1984-01-01

    Optimization techniques are increasingly being used for performing nonlinear structural analysis. The development of element by element (EBE) preconditioned conjugate gradient (CG) techniques is expected to extend this trend to linear analysis. Under these circumstances the structural design problem can be viewed as a nested optimization problem. There are computational benefits to treating this nested problem as a large single optimization problem. The response variables (such as displacements) and the structural parameters are all treated as design variables in a unified formulation which performs simultaneously the design and analysis. Two examples are used for demonstration. A seventy-two bar truss is optimized subject to linear stress constraints and a wing box structure is optimized subject to nonlinear collapse constraints. Both examples show substantial computational savings with the unified approach as compared to the traditional nested approach.
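
    A small sketch of the unified formulation on a toy two-bar series structure: cross-sectional areas and nodal displacements are optimized together, with equilibrium K(A)u = f imposed as an equality constraint instead of being satisfied by a nested analysis. All data are hypothetical.

      import numpy as np
      from scipy.optimize import minimize

      E, L, P, s_max, rho = 70e3, 100.0, 10.0, 150.0, 1.0   # invented units

      def weight(x):                       # objective: structural weight
          A1, A2, u1, u2 = x
          return rho * L * (A1 + A2)

      def equilibrium(x):                  # residual of K(A) u = f
          A1, A2, u1, u2 = x
          k1, k2 = E * A1 / L, E * A2 / L
          return [k1 * u1 - k2 * (u2 - u1), k2 * (u2 - u1) - P]

      def stress_margin(x):                # sigma_i <= s_max in each bar
          A1, A2, u1, u2 = x
          return s_max - np.array([E * u1 / L, E * (u2 - u1) / L])

      cons = [{"type": "eq", "fun": equilibrium},
              {"type": "ineq", "fun": stress_margin}]
      res = minimize(weight, x0=[1.0, 1.0, 0.1, 0.2], method="SLSQP",
                     bounds=[(1e-4, None)] * 2 + [(None, None)] * 2,
                     constraints=cons)
      print(np.round(res.x, 4))            # areas shrink until stresses go active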

  8. Closed-Loop Optimal Control Implementations for Space Applications

    DTIC Science & Technology

    2016-12-01

    ... analyses of a series of optimal control problems, several real-time optimal control algorithms are developed that continuously adapt to feedback on the ...

  9. Conformational Space Annealing explained: A general optimization algorithm, with diverse applications

    NASA Astrophysics Data System (ADS)

    Joung, InSuk; Kim, Jong Yun; Gross, Steven P.; Joo, Keehyoung; Lee, Jooyoung

    2018-02-01

    Many problems in science and engineering can be formulated as optimization problems. One way to solve these problems is to develop tailored problem-specific approaches. As such development is challenging, an alternative is to develop good generally-applicable algorithms. Such algorithms are easy to apply, typically function robustly, and reduce development time. Here we provide a description of one such algorithm, called Conformational Space Annealing (CSA), along with its python version, PyCSA. We previously applied it to many optimization problems including protein structure prediction and graph community detection. To demonstrate its utility, we have applied PyCSA to two continuous test functions, namely the Ackley and Eggholder functions. In addition, in order to demonstrate the complete generality of PyCSA for any type of objective function, we show how PyCSA can be applied to a discrete objective function, namely a parameter optimization problem. Based on the benchmarking results for the three problems, the performance of CSA is shown to be better than or similar to that of the most popular optimization method, simulated annealing. For continuous objective functions, we found that L-BFGS-B was the best performing local optimization method, while for a discrete objective function Nelder-Mead was the best. The current version of PyCSA can be run in parallel at the coarse-grained level by calculating multiple independent local optimizations separately. The source code of PyCSA is available from http://lee.kias.re.kr.
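
    For reference, the simulated annealing baseline that CSA is benchmarked against fits in a few lines; here it is on the 2-D Ackley function (the proposal scale and cooling schedule are arbitrary choices, not the settings used in the paper).

      import numpy as np

      def ackley(x):
          x = np.asarray(x)
          n = len(x)
          return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
                  - np.exp(np.sum(np.cos(2.0 * np.pi * x)) / n) + 20.0 + np.e)

      rng = np.random.default_rng(1)
      x = rng.uniform(-5.0, 5.0, size=2)
      f, T = ackley(x), 2.0
      for _ in range(20000):
          cand = x + rng.normal(scale=0.5, size=2)     # random perturbation
          fc = ackley(cand)
          if fc < f or rng.random() < np.exp(-(fc - f) / T):   # Metropolis rule
              x, f = cand, fc
          T *= 0.9997                                  # geometric cooling
      print(np.round(x, 3), round(f, 4))   # near the global minimum at the origin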

  10. Trajectory Optimization Using Adjoint Method and Chebyshev Polynomial Approximation for Minimizing Fuel Consumption During Climb

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe

    2013-01-01

    This paper describes two methods of trajectory optimization to obtain an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method; the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution, resulting in a bang-singular-bang optimal control.
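
    The Chebyshev polynomial part of the direct method can be sketched with numpy's Chebyshev utilities: parameterize the climb profile by a Chebyshev series on a mapped time interval, then differentiate the series for rates. The reference profile and polynomial degree below are invented.

      import numpy as np
      from numpy.polynomial import chebyshev as C

      t_f = 300.0                                     # climb duration, s
      t = np.linspace(0.0, t_f, 50)
      h_ref = 1.0e4 * (t / t_f) ** 2 * (3.0 - 2.0 * t / t_f)   # smooth climb, ft

      s = 2.0 * t / t_f - 1.0                         # map [0, t_f] -> [-1, 1]
      coeffs = C.chebfit(s, h_ref, deg=8)             # least-squares Chebyshev fit
      h_fit = C.chebval(s, coeffs)
      hdot = C.chebval(s, C.chebder(coeffs)) * (2.0 / t_f)   # chain rule for d/dt
      print(np.max(np.abs(h_fit - h_ref)))            # approximation error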

  11. Optimizing Cr(VI) and Tc(VII) remediation through nano-scale biomineral engineering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cutting, R. S.; Coker, V. S.; Telling, N. D.

    2009-09-09

    To optimize the production of biomagnetite for the bioremediation of metal oxyanion contaminated waters, the reduction of aqueous Cr(VI) to Cr(III) by two biogenic magnetites and a synthetic magnetite was evaluated under batch and continuous flow conditions. Results indicate that nano-scale biogenic magnetite produced by incubating synthetic schwertmannite powder in cell suspensions of Geobacter sulfurreducens is more efficient at reducing Cr(VI) than either biogenic nano-magnetite produced from a suspension of ferrihydrite 'gel' or synthetic nano-scale Fe3O4 powder. Although X-ray Photoelectron Spectroscopy (XPS) measurements obtained from post-exposure magnetite samples reveal that both Cr(III) and Cr(VI) are associated with nanoparticle surfaces, X-ray Magnetic Circular Dichroism (XMCD) studies indicate that some Cr(III) has replaced octahedrally coordinated Fe in the lattice of the magnetite. Inductively Coupled Plasma-Atomic Emission Spectrometry (ICP-AES) measurements of total aqueous Cr in the associated solution phase indicated that, although the majority of Cr(III) was incorporated within or adsorbed to the magnetite samples, a proportion (approx. 10-15%) was released back into solution. Studies of Tc(VII) uptake by magnetites produced via the different synthesis routes also revealed significant differences between them as regards effectiveness for remediation. In addition, column studies using a gamma camera to obtain real time images of a 99mTc(VII) radiotracer were performed to directly visualize the relative performances of the magnetite sorbents against ultra-trace concentrations of metal oxyanion contaminants. Again, the magnetite produced from schwertmannite proved capable of retaining more (approx. 20%) 99mTc(VII) than the magnetite produced from ferrihydrite, confirming that biomagnetite production for efficient environmental remediation can be fine-tuned through careful selection of the initial Fe(III) mineral substrate supplied to Fe(III)-reducing bacteria.

  12. Terahertz-Regime, Micro-VEDs: Evaluation of Micromachined TWT Conceptual Designs

    NASA Technical Reports Server (NTRS)

    Booske, John H.; Kory, Carol L.; Gallagher, D.; van der Weide, Daniel W.; Limbach, S; Gustafson, P; Lee, W.-J.; Gallagher, S.; Jain, K.

    2001-01-01

    Summary form only given. The Terahertz (THz) region of the electromagnetic spectrum (approx. 300-3000 GHz) has enormous potential for high-data-rate communications, spectroscopy, astronomy, space research, medicine, biology, surveillance, remote sensing, industrial process control, etc. The most critical roadblock to full exploitation of the THz band is the lack of coherent radiation sources that are powerful (0.01-10.0 W continuous wave), efficient (>1%), frequency agile (instantaneously tunable over 1% bandwidths or more), reliable, and relatively inexpensive. Micro-machined Vacuum Electron Devices (micro-VEDs) represent a promising solution. We describe prospects for miniature, THz-regime TWTs fabricated using micromachining techniques. Several approx. 600 GHz conceptual designs are compared. Their expected performance has been analyzed using 1D, 2.5D, and 3D TWT codes. A folded waveguide (FWG) TWT forward-wave amplifier design is presented based on a Northrop Grumman (NGC) optimized design procedure. This conceptual device is compared to the simulated performance of a novel, micro-VED helix TWT. Conceptual FWG TWT backward-wave amplifiers and oscillators are also discussed. A scaled (100 GHz) FWG TWT operating at a relatively low voltage (approx. 12 kV) is under development at NGC. Also, actual-size micromachining experiments are planned to evaluate the feasibility of arrays of micro-VED TWTs. Progress and results of these efforts are described. This work was supported in part by AFOSR, ONR, and NSF.

  13. Analog Processor To Solve Optimization Problems

    NASA Technical Reports Server (NTRS)

    Duong, Tuan A.; Eberhardt, Silvio P.; Thakoor, Anil P.

    1993-01-01

    Proposed analog processor solves "traveling-salesman" problem, considered paradigm of global-optimization problems involving routing or allocation of resources. Includes electronic neural network and auxiliary circuitry based partly on concepts described in "Neural-Network Processor Would Allocate Resources" (NPO-17781) and "Neural Network Solves 'Traveling-Salesman' Problem" (NPO-17807). Processor based on highly parallel computing solves problem in significantly less time.

  14. Non-adaptive and adaptive hybrid approaches for enhancing water quality management

    NASA Astrophysics Data System (ADS)

    Kalwij, Ineke M.; Peralta, Richard C.

    2008-09-01

    Summary: Using optimization to help solve groundwater management problems cost-effectively is becoming increasingly important. Hybrid optimization approaches, which combine two or more optimization algorithms, will become valuable and common tools for addressing complex nonlinear hydrologic problems. Hybrid heuristic optimizers have capabilities far beyond those of a simple genetic algorithm (SGA), and are continuously improving. SGAs having only parent selection, crossover, and mutation are inefficient and rarely used for optimizing contaminant transport management. Even an advanced genetic algorithm (AGA) that includes elitism (to emphasize using the best strategies as parents) and healing (to help assure optimal strategy feasibility) is undesirably inefficient. Much more efficient than an AGA is the presented hybrid (AGCT), which adds comprehensive tabu search (TS) features to an AGA. TS mechanisms (TS probability, tabu list size, search coarseness and solution space size, and a TS threshold value) force the optimizer to search portions of the solution space that yield superior pumping strategies, and to avoid reproducing similar or inferior strategies. An AGCT characteristic is that TS control parameters are unchanging during optimization. However, TS parameter values that are ideal at the start of optimization can be undesirable when nearing assumed global optimality. The second presented hybrid, termed global converger (GC), is significantly better than the AGCT. GC includes AGCT plus feedback-driven auto-adaptive control that dynamically changes TS parameters at run time. Before comparing AGCT and GC, we empirically derived scaled dimensionless TS control parameter guidelines by evaluating 50 sets of parameter values for a hypothetical optimization problem. For the hypothetical area, AGCT optimized both well locations and pumping rates. The parameters are useful starting values because using trial-and-error to identify an ideal combination of control parameter values for a new optimization problem can be time consuming. For comparison, AGA, AGCT, and GC were applied to optimize pumping rates for assumed well locations of a complex large-scale contaminant transport and remediation optimization problem at Blaine Naval Ammunition Depot (NAD). Both hybrid approaches converged more closely to the optimal solution than the non-hybrid AGA. GC averaged 18.79% better convergence than AGCT and 31.9% better than AGA within the same computation time (12.5 days); AGCT averaged 13.1% better convergence than AGA. The GC can significantly reduce the burden of employing computationally intensive hydrologic simulation models within a limited time period and for real-world optimization problems. Although demonstrated for a groundwater quality problem, it is also applicable to other arenas, such as managing salt water intrusion and surface water contaminant loading.
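
    For readers unfamiliar with the pattern, a toy sketch of the AGCT idea (our simplification, not the authors' code) is a GA with elitism whose offspring are rejected when they fall within a tabu radius of recently visited solutions, forcing exploration of new regions:

        import numpy as np

        rng = np.random.default_rng(0)
        f = lambda x: np.sum(x**2)          # stand-in objective to minimize
        dim, pop_n, tabu_radius = 4, 30, 0.05
        pop = rng.uniform(-5, 5, (pop_n, dim))
        tabu = []                           # recently visited points

        for gen in range(200):
            fit = np.array([f(x) for x in pop])
            elite = pop[fit.argsort()[:pop_n // 2]]       # elitism
            children = []
            while len(children) < pop_n - len(elite):
                p1, p2 = elite[rng.integers(len(elite), size=2)]
                child = 0.5 * (p1 + p2) + rng.normal(0, 0.1, dim)  # crossover + mutation
                # Tabu check: reject near-duplicates of recent solutions.
                if all(np.linalg.norm(child - t) > tabu_radius for t in tabu[-50:]):
                    children.append(child)
                    tabu.append(child)
            pop = np.vstack([elite, children])

        print(min(f(x) for x in pop))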

  15. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination

    PubMed Central

    Duarte, Belmiro P.M.; Wong, Weng Kee; Atkinson, Anthony C.

    2016-01-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way of finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization. PMID:27330230
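
    In the standard notation for T-optimality (our transcription; the design is ξ = {(x_i, w_i)}, η₁ is the fitted model, and η₂ the rival model with parameters θ₂ ∈ Θ₂), the two-layer problem and its semi-infinite reformulation read:

        \max_{\xi}\; \min_{\theta_2 \in \Theta_2} \sum_i w_i \left[ \eta_1(x_i) - \eta_2(x_i, \theta_2) \right]^2
        \;\Longleftrightarrow\;
        \max_{\xi,\, \delta}\; \delta
        \quad \text{s.t.} \quad
        \sum_i w_i \left[ \eta_1(x_i) - \eta_2(x_i, \theta_2) \right]^2 \ge \delta
        \quad \forall\, \theta_2 \in \Theta_2.

    The infinitely many constraints (one per θ₂) are what the exchange method handles: solve the outer program on a finite constraint set, add the most violated θ₂ found by the inner program, and repeat until the bounds meet.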

  16. A Semi-Infinite Programming based algorithm for determining T-optimum designs for model discrimination.

    PubMed

    Duarte, Belmiro P M; Wong, Weng Kee; Atkinson, Anthony C

    2015-03-01

    T-optimum designs for model discrimination are notoriously difficult to find because of the computational difficulty involved in solving an optimization problem that involves two layers of optimization. Only a handful of analytical T-optimal designs are available for the simplest problems; the rest in the literature are found using specialized numerical procedures for a specific problem. We propose a potentially more systematic and general way of finding T-optimal designs using a Semi-Infinite Programming (SIP) approach. The strategy requires that we first reformulate the original minimax or maximin optimization problem into an equivalent semi-infinite program and solve it using an exchange-based method in which lower and upper bounds, produced by solving the outer and the inner programs, are iterated to convergence. A global Nonlinear Programming (NLP) solver is used to handle the subproblems, thus finding the optimal design and the least favorable parametric configuration that minimizes the residual sum of squares from the alternative or test models. We also use a nonlinear program to check the global optimality of the SIP-generated design and automate the construction of globally optimal designs. The algorithm is successfully used to produce results that coincide with several T-optimal designs reported in the literature for various types of model discrimination problems with normally distributed errors. However, our method is more general, merely requiring that the parameters of the model be estimated by a numerical optimization.

  17. A NEW INFRARED COLOR CRITERION FOR THE SELECTION OF 0 < z < 7 AGNs: APPLICATION TO DEEP FIELDS AND IMPLICATIONS FOR JWST SURVEYS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messias, H.; Afonso, J.; Salvato, M.

    2012-08-01

    It is widely accepted that observations at mid-infrared (mid-IR) wavelengths enable the selection of galaxies with nuclear activity, which may not be revealed even in the deepest X-ray surveys. Many mid-IR color-color criteria have been explored to accomplish this goal and tested thoroughly in the literature. Besides missing many low-luminosity active galactic nuclei (AGNs), one of the main conclusions is that, with increasing redshift, the contamination by non-active galaxies becomes significant (especially at z ≳ 2.5). This is problematic for the study of the AGN phenomenon in the early universe, the main goal of many of the current and future deep extragalactic surveys. In this work new near- and mid-IR color diagnostics are explored, aiming for improved efficiency (better completeness and less contamination) in selecting AGNs out to very high redshifts. We restrict our study to the James Webb Space Telescope wavelength range (0.6-27 μm). The criteria are created based on the predictions by state-of-the-art galaxy and AGN templates covering a wide variety of galaxy properties, and tested against control samples with deep multi-wavelength coverage (ranging from the X-rays to radio frequencies). We show that the colors K_s - [4.5], [4.5] - [8.0], and [8.0] - [24] are ideal as AGN/non-AGN diagnostics at, respectively, z ≲ 1, 1 ≲ z ≲ 2.5, and z ≳ 2.5-3. However, when the source redshift is unknown, these colors should be combined. We thus develop an improved IR criterion (using K_s and IRAC bands, KI) as a new alternative at z ≲ 2.5. KI does not show improved completeness (50%-60% overall) in comparison to commonly used Infrared Array Camera (IRAC) based AGN criteria, but is less affected by non-AGN contamination (revealing a >50%-90% level of successful AGN selection). We also propose KIM (using K_s, IRAC, and MIPS 24 μm bands), which aims to select AGN hosts from local distances to as far back as the end of reionization (0 < z ≲ 7) with reduced non-AGN contamination. However, the necessary testing constraints and the small control-sample sizes prevent the confirmation of its improved efficiency at z ≳ 2.5. Overall, KIM shows a ≈30%-40% completeness and a >70%-90% level of successful AGN selection. KI and KIM are built to be reliable against a ≈10%-20% error in flux, are based on existing filters, and are suitable for immediate use.

  18. Inverse Compton X-Ray Halos Around High-z Radio Galaxies: A Feedback Mechanism Powered by Far-Infrared Starbursts or the Cosmic Microwave Background?

    NASA Technical Reports Server (NTRS)

    Small, Ian; Blundell, Katherine M.; Lehmer, B. D.; Alexander, D. M.

    2012-01-01

    We report the detection of extended X-ray emission around two powerful radio galaxies at z approx. 3.6 (4C 03.24 and 4C 19.71) and use these to investigate the origin of extended, inverse Compton (IC) powered X-ray halos at high redshifts. The halos have X-ray luminosities of L(sub X) approx. 3 x 10(exp 44) erg/s and sizes of approx. 60 kpc. Their morphologies are broadly similar to the approx. 60 kpc long radio lobes around these galaxies, suggesting they are formed from IC scattering by relativistic electrons in the radio lobes, of either cosmic microwave background (CMB) photons or far-infrared photons from the dust-obscured starbursts in these galaxies. These observations double the number of z > 3 radio galaxies with X-ray-detected IC halos. We compare the IC X-ray-to-radio luminosity ratios for the two new detections to the two previously detected z approx. 3.8 radio galaxies. Given the similar redshifts, we would expect comparable X-ray IC luminosities if millimeter photons from the CMB are the dominant seed field for the IC emission (assuming all four galaxies have similar ages and jet powers). Instead we see that the two z approx. 3.6 radio galaxies, which are approx. 4x fainter in the far-infrared than those at z approx. 3.8, also have approx. 4x fainter X-ray IC emission. Including data for a further six z > or approx. 2 radio sources with detected IC X-ray halos from the literature, we suggest that in the more compact majority of radio sources, those with lobe sizes < or approx. 100-200 kpc, the bulk of the IC emission may be driven by scattering of locally produced far-infrared photons from luminous, dust-obscured starbursts within these galaxies, rather than millimeter photons from the CMB. The resulting X-ray emission appears sufficient to ionize the gas on approx. 100-200 kpc scales around these systems and thus helps form the extended, kinematically quiescent Ly(alpha) emission-line halos found around some of these systems. The starburst and active galactic nucleus activity in these galaxies are thus combining to produce an even more effective and widespread "feedback" process, acting on the long-term gas reservoir of the galaxy, than either could achieve individually. If episodic radio activity and co-eval starbursts are common in massive, high-redshift galaxies, then this IC-feedback mechanism may play a role in shaping the star formation histories of the most massive galaxies at the present day.

  19. Inversion of Robin coefficient by a spectral stochastic finite element approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin Bangti; Zou Jun

    2008-03-01

    This paper investigates a variational approach to the nonlinear stochastic inverse problem of probabilistically calibrating the Robin coefficient from boundary measurements for steady-state heat conduction. The problem is formulated as an optimization problem, and mathematical properties relevant to its numerical computation are investigated. The spectral stochastic finite element method using polynomial chaos is utilized for the discretization of the optimization problem, and its convergence is analyzed. The nonlinear conjugate gradient method is derived for the optimization system. Numerical results for several two-dimensional problems are presented to illustrate the accuracy and efficiency of the stochastic finite element method.
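
    For orientation (our notation, not necessarily the paper's), the polynomial chaos discretization expands the random Robin coefficient in an orthogonal basis of the stochastic variables:

        \gamma(x, \omega) \approx \sum_{k=0}^{P} \gamma_k(x)\, \Psi_k(\xi(\omega)),
        \qquad
        \gamma_k(x) = \frac{\mathbb{E}\!\left[ \gamma(x, \cdot)\, \Psi_k \right]}{\mathbb{E}\!\left[ \Psi_k^2 \right]},

    where the Ψ_k are orthogonal polynomials in the random vector ξ; the optimization is then carried out over the deterministic coefficient fields γ_k.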

  20. Using Approximations to Accelerate Engineering Design Optimization

    NASA Technical Reports Server (NTRS)

    Torczon, Virginia; Trosset, Michael W.

    1998-01-01

    Optimization problems that arise in engineering design are often characterized by several features that hinder the use of standard nonlinear optimization techniques. Foremost among these features is that the functions used to define the engineering optimization problem often are computationally intensive. Within a standard nonlinear optimization algorithm, the computational expense of evaluating the functions that define the problem would necessarily be incurred for each iteration of the optimization algorithm. Faced with such prohibitive computational costs, an attractive alternative is to make use of surrogates within an optimization context since surrogates can be chosen or constructed so that they are typically much less expensive to compute. For the purposes of this paper, we will focus on the use of algebraic approximations as surrogates for the objective. In this paper we introduce the use of so-called merit functions that explicitly recognize the desirability of improving the current approximation to the objective during the course of the optimization. We define and experiment with the use of merit functions chosen to simultaneously improve both the solution to the optimization problem (the objective) and the quality of the approximation. Our goal is to further improve the effectiveness of our general approach without sacrificing any of its rigor.
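
    One plausible form of such a merit function (a sketch consistent with the description above, not necessarily the authors' exact choice) combines the approximation a(x) with the distance to the set X of already-evaluated points:

        m_{\rho}(x) = a(x) - \rho \min_{y \in X} \lVert x - y \rVert, \qquad \rho \ge 0,

    so minimizing m_ρ trades predicted objective improvement against the information gained by sampling far from known data; ρ = 0 recovers pure surrogate minimization, while larger ρ forces evaluations that improve the quality of the approximation.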

  1. Mixed integer simulation optimization for optimal hydraulic fracturing and production of shale gas fields

    NASA Astrophysics Data System (ADS)

    Li, J. C.; Gong, B.; Wang, H. G.

    2016-08-01

    Optimal development of shale gas fields involves designing a most productive fracturing network for hydraulic stimulation processes and operating wells appropriately throughout the production time. A hydraulic fracturing network design (determining well placement, number of fracturing stages, and fracture lengths) is defined by specifying a set of integer ordered blocks to drill wells and create fractures in a discrete shale gas reservoir model. The well control variables, such as bottom hole pressures or production rates for well operations, are real valued. Shale gas development problems, therefore, can be mathematically formulated as mixed-integer optimization models. A shale gas reservoir simulator is used to evaluate the production performance for a hydraulic fracturing and well control plan. Finding the optimal fracturing design and well operation is challenging because the problem is a mixed-integer optimization problem and entails computationally expensive reservoir simulation. A dynamic simplex interpolation-based alternate subspace (DSIAS) search method is applied for mixed-integer optimization problems associated with shale gas development projects. The optimization performance is demonstrated with the example case of the development of the Barnett Shale field. The optimization results of DSIAS are compared with those of a pattern search algorithm.

  2. Efficient Optimization of Low-Thrust Spacecraft Trajectories

    NASA Technical Reports Server (NTRS)

    Lee, Seungwon; Fink, Wolfgang; Russell, Ryan; Terrile, Richard; Petropoulos, Anastassios; vonAllmen, Paul

    2007-01-01

    A paper describes a computationally efficient method of optimizing trajectories of spacecraft driven by propulsion systems that generate low thrusts and, hence, must be operated for long times. A common goal in trajectory-optimization problems is to find minimum-time, minimum-fuel, or Pareto-optimal trajectories (here, Pareto-optimality signifies that no other solutions are superior with respect to both flight time and fuel consumption). The present method utilizes genetic and simulated-annealing algorithms to search for globally Pareto-optimal solutions. These algorithms are implemented in parallel form to reduce computation time. They are coupled with either of two traditional trajectory-design approaches called "direct" and "indirect." In the direct approach, thrust control is discretized in either arc time or arc length, and the resulting discrete thrust vectors are optimized. The indirect approach involves primer-vector theory (introduced in 1963), in which the thrust control problem is transformed into a co-state control problem and the initial values of the co-state vector are optimized. In application to two example orbit-transfer problems, this method was found to generate solutions comparable to those of other state-of-the-art trajectory-optimization methods while requiring much less computation time.

  3. The solution of private problems for optimization heat exchangers parameters

    NASA Astrophysics Data System (ADS)

    Melekhin, A.

    2017-11-01

    The topic is relevant to the problem of resource economy in building heating systems. To address it, we developed an integrated research method for optimizing heat exchanger parameters. The method solves a multicriteria optimization problem by nonlinear programming, using an array of temperatures obtained by thermography as input. The author developed a mathematical model of heat exchange at the heat-transfer surfaces of the apparatus, solved the multicriteria optimization problem, and checked the model's adequacy against an experimental stand with visualization of thermal fields. The work identifies the optimal range of controlled parameters that influence the heat exchange process with minimal metal consumption and maximum heat output of the finned heat exchanger, establishes the regularities of the heat exchange process through generalized dependencies for the temperature distribution over the heat-release surface of the heat exchangers, and demonstrates convergence between results calculated from theoretical dependencies and the solution of the mathematical model.

  4. Evolutionary Dynamic Multiobjective Optimization Via Kalman Filter Prediction.

    PubMed

    Muruganantham, Arrchana; Tan, Kay Chen; Vadakkepat, Prahlad

    2016-12-01

    Evolutionary algorithms are effective in solving static multiobjective optimization problems resulting in the emergence of a number of state-of-the-art multiobjective evolutionary algorithms (MOEAs). Nevertheless, the interest in applying them to solve dynamic multiobjective optimization problems has only been tepid. Benchmark problems, appropriate performance metrics, as well as efficient algorithms are required to further the research in this field. One or more objectives may change with time in dynamic optimization problems. The optimization algorithm must be able to track the moving optima efficiently. A prediction model can learn the patterns from past experience and predict future changes. In this paper, a new dynamic MOEA using Kalman filter (KF) predictions in decision space is proposed to solve the aforementioned problems. The predictions help to guide the search toward the changed optima, thereby accelerating convergence. A scoring scheme is devised to hybridize the KF prediction with a random reinitialization method. Experimental results and performance comparisons with other state-of-the-art algorithms demonstrate that the proposed algorithm is capable of significantly improving the dynamic optimization performance.
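
    As a hedged illustration of the prediction idea (an assumed simplification, not the authors' exact filter), a constant-velocity Kalman filter can track one decision-space coordinate of the moving optimum and extrapolate it one step ahead to reseed the search:

        import numpy as np

        F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state: [position, velocity]
        H = np.array([[1.0, 0.0]])               # we observe position only
        Q = 0.01 * np.eye(2)                     # process noise covariance
        R = np.array([[0.1]])                    # observation noise covariance

        x, P = np.zeros(2), np.eye(2)
        for z in [0.1, 0.22, 0.29, 0.41]:        # optimum positions found so far
            # Predict one step ahead.
            x, P = F @ x, F @ P @ F.T + Q
            # Update with the newly located optimum.
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ (np.atleast_1d(z) - H @ x)
            P = (np.eye(2) - K @ H) @ P

        print("predicted next optimum position:", (F @ x)[0])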

  5. Quark masses and mixings with hierarchical Friedberg-Lee symmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Araki, Takeshi; Geng, C. Q.

    2010-04-01

    We consider the Friedberg-Lee symmetry for the quark sector and show that the symmetry closely relates to both quark masses and mixing angles. We also extend our scheme to the fourth-generation quark model and find the relation |V_tb'| ≈ |V_t'b| ≈ m_b/m_b' < λ² with λ ≈ 0.22 for m_b = 4.2 GeV and m_b' > 199 GeV.
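
    As a quick arithmetic check of the quoted bound:

        \frac{m_b}{m_{b'}} < \frac{4.2\ \text{GeV}}{199\ \text{GeV}} \approx 0.021,
        \qquad
        \lambda^2 \approx (0.22)^2 \approx 0.048,

    so |V_tb'| ≈ |V_t'b| ≈ m_b/m_b' < λ² holds with roughly a factor of two to spare.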

  6. REDDENING AND EXTINCTION TOWARD THE GALACTIC BULGE FROM OGLE-III: THE INNER MILKY WAY'S R{sub V} {approx} 2.5 EXTINCTION CURVE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nataf, David M.; Gould, Andrew; Johnson, Jennifer A.

    We combine VI photometry from OGLE-III with VISTA Variables in The Via Lactea survey and Two Micron All Sky Survey measurements of E(J - K_s) to resolve the longstanding problem of the non-standard optical extinction toward the Galactic bulge. We show that the extinction is well fit by the relation A_I = 0.7465 × E(V - I) + 1.3700 × E(J - K_s), or, equivalently, A_I = 1.217 × E(V - I)(1 + 1.126 × (E(J - K_s)/E(V - I) - 0.3433)). The optical and near-IR reddening law toward the inner Galaxy approximately follows an R_V ≈ 2.5 extinction curve with a dispersion σ_{R_V} ≈ 0.2, consistent with extragalactic investigations of the hosts of Type Ia SNe. Differential reddening is shown to be significant on scales as small as our mean field size of 6'. The intrinsic luminosity parameters of the Galactic bulge red clump (RC) are derived to be (M_{I,RC}, σ_{I,RC,0}, (V-I)_{RC,0}, σ_{(V-I),RC}, (J-K_s)_{RC,0}) = (-0.12, 0.09, 1.06, 0.121, 0.66). Our measurements of the RC brightness, brightness dispersion, and number counts allow us to estimate several Galactic bulge structural parameters. We estimate a distance to the Galactic center of 8.20 kpc. We measure an upper bound on the tilt α ≈ 40° between the bulge's major axis and the Sun-Galactic center line of sight, though our brightness peaks are consistent with predictions of an N-body model oriented at α ≈ 25°. The number of RC stars suggests a total stellar mass for the Galactic bulge of ≈2.3 × 10^10 M_sun if one assumes a canonical Salpeter initial mass function (IMF), or ≈1.6 × 10^10 M_sun if one assumes a bottom-light Zoccali IMF.
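
    The two quoted forms of the extinction fit are algebraically equivalent, which is easy to verify numerically (a direct transcription of the relations above; inputs are reddenings in magnitudes):

        def a_i(e_vi, e_jk):
            """A_I from E(V-I) and E(J-Ks), first form of the OGLE-III fit."""
            return 0.7465 * e_vi + 1.3700 * e_jk

        def a_i_alt(e_vi, e_jk):
            """Equivalent second form, useful as a cross-check."""
            return 1.217 * e_vi * (1.0 + 1.126 * (e_jk / e_vi - 0.3433))

        # The two forms agree to rounding in the published coefficients.
        print(a_i(1.0, 0.4), a_i_alt(1.0, 0.4))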

  7. Development of the PEBLebl Traveling Salesman Problem Computerized Testbed

    ERIC Educational Resources Information Center

    Mueller, Shane T.; Perelman, Brandon S.; Tan, Yin Yin; Thanasuan, Kejkaew

    2015-01-01

    The traveling salesman problem (TSP) is a combinatorial optimization problem that requires finding the shortest path through a set of points ("cities") that returns to the starting point. Because humans provide heuristic near-optimal solutions to Euclidean versions of the problem, it has sometimes been used to investigate human visual…
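
    Since the testbed's own code is not shown here, the following is merely an illustrative greedy baseline of the kind such TSP experiments are compared against: the nearest-neighbor heuristic, which is fast but typically somewhat above the optimal tour length.

        import math

        def nearest_neighbor_tour(cities):
            # Repeatedly visit the nearest unvisited city, then return to start.
            unvisited = set(range(1, len(cities)))
            tour = [0]
            while unvisited:
                last = cities[tour[-1]]
                nxt = min(unvisited, key=lambda i: math.dist(last, cities[i]))
                tour.append(nxt)
                unvisited.remove(nxt)
            return tour + [0]              # close the loop

        cities = [(0, 0), (2, 1), (1, 3), (4, 2), (3, 0)]
        print(nearest_neighbor_tour(cities))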

  8. Optimization of the interplanetary trajectories of spacecraft with a solar electric propulsion power plant of minimal power

    NASA Astrophysics Data System (ADS)

    Ivanyukhin, A. V.; Petukhov, V. G.

    2016-12-01

    The problem of optimizing the interplanetary trajectories of a spacecraft (SC) with a solar electric propulsion system (SEPS) is examined. The minimum permissible power of the solar electric propulsion plant required for a successful flight is investigated. Permissible ranges of thrust and exhaust velocity are analyzed for the given range of flight time and final mass of the spacecraft. The optimization is performed according to Pontryagin's maximum principle, and the continuation method is used to reduce the boundary value problem of the maximum principle to a Cauchy problem and to study the dependence of the solution on the problem parameters. This combination yields a robust algorithm that reduces the trajectory optimization problem to the numerical integration of differential equations by the continuation method.
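
    The continuation idea can be illustrated with a deliberately simple root-finding problem (an assumed simplification of the paper's method, which applies it to the maximum-principle boundary value problem): ramp a homotopy parameter from an easy problem to the target one, reusing each solution as the next initial guess.

        import numpy as np
        from scipy.optimize import fsolve

        def residual(x, s):
            # s = 0: linear equation x = 2, trivially solvable;
            # s = 1: the nonlinear target x**3 = 2.
            return s * x**3 + (1.0 - s) * x - 2.0

        x = np.array([2.0])                 # solution of the easy problem
        for s in np.linspace(0.0, 1.0, 11):
            x = fsolve(residual, x, args=(s,))  # warm-start from previous step
        print("root of x**3 = 2:", x)       # approx. 1.2599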

  9. Investigations of quantum heuristics for optimization

    NASA Astrophysics Data System (ADS)

    Rieffel, Eleanor; Hadfield, Stuart; Jiang, Zhang; Mandra, Salvatore; Venturelli, Davide; Wang, Zhihui

    We explore the design of quantum heuristics for optimization, focusing on the quantum approximate optimization algorithm, a metaheuristic developed by Farhi, Goldstone, and Gutmann. We develop specific instantiations of the quantum approximate optimization algorithm for a variety of challenging combinatorial optimization problems. Through theoretical analyses and numerical investigations of select problems, we provide insight into parameter setting and Hamiltonian design for quantum approximate optimization algorithms and related quantum heuristics, and into their implementation on hardware realizable in the near term.

  10. An Investigation to Manufacturing Analytical Services Composition using the Analytical Target Cascading Method.

    PubMed

    Tien, Kai-Wen; Kulvatunyou, Boonserm; Jung, Kiwook; Prabhu, Vittaldas

    2017-01-01

    As cloud computing is increasingly adopted, the trend is to offer software functions as modular services and compose them into larger, more meaningful ones. The trend is attractive to analytical problems in the manufacturing system design and performance improvement domain because (1) finding a global optimum for the system is a complex problem, and (2) sub-problems are typically compartmentalized by the organizational structure. However, solving sub-problems by independent services can result in a sub-optimal solution at the system level. This paper investigates the technique called Analytical Target Cascading (ATC) to coordinate the optimization of loosely coupled sub-problems, each of which may be modularly formulated by differing departments and solved by modular analytical services. The result demonstrates that ATC is a promising method in that it offers system-level optimal solutions that can scale up by exploiting distributed and modular executions while allowing easier management of the problem formulation.
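
    A minimal sketch of the coordination pattern (illustrative; the paper's formulation is more general) has a system level that sets a target for a shared quantity and subproblems that match it subject to their own objectives, coupled through quadratic penalties:

        from scipy.optimize import minimize_scalar

        w = 10.0                                 # penalty weight on target mismatch
        t = 0.0                                  # system-level target

        for it in range(30):
            # Subproblem 1: local cost (r - 2)^2, penalized toward target t.
            r1 = minimize_scalar(lambda r: (r - 2.0)**2 + w * (t - r)**2).x
            # Subproblem 2: local cost (r + 1)^2, same coupling.
            r2 = minimize_scalar(lambda r: (r + 1.0)**2 + w * (t - r)**2).x
            # System level: update the target to reconcile the responses.
            t = 0.5 * (r1 + r2)

        print(f"consistent target: {t:.3f}, responses: {r1:.3f}, {r2:.3f}")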

  11. Nonlinear Multidimensional Assignment Problems Efficient Conic Optimization Methods and Applications

    DTIC Science & Technology

    2015-06-24

    The major goals of this project were completed: the exact solution of previously unsolved, challenging combinatorial optimization problems. In particular, a combinatorial optimization problem, the Directional Sensor Problem, was solved in two ways: first, heuristically in an engineering fashion, and second, exactly.

  12. Helping the decision maker effectively promote various experts' views into various optimal solutions to China's institutional problem of health care provider selection through the organization of a pilot health care provider research system.

    PubMed

    Tang, Liyang

    2013-04-04

    The main aim of China's health care system reform was to help the decision maker find the optimal solution to China's institutional problem of health care provider selection. A pilot health care provider research system was recently organized within China's health care system to efficiently collect data on this problem from various experts. The purpose of this study was to apply an optimal implementation methodology to help the decision maker effectively promote various experts' views into various optimal solutions, with the support of this pilot system. After establishing the general framework of China's institutional problem of health care provider selection, this study collaborated with the National Bureau of Statistics of China to commission a large-scale 2009-2010 national expert survey (n = 3,914), conducted for the first time in China through the pilot health care provider research system, and the analytic network process (ANP) implementation methodology was adopted to analyze the survey dataset. The market-oriented health care provider approach was the optimal solution from the doctors' point of view; the traditional government regulation-oriented approach was optimal from the points of view of pharmacists, hospital administrators, and health officials in health administration departments; and the public-private partnership (PPP) approach was optimal from the points of view of nurses, officials in medical insurance agencies, and health care researchers. The data collected through the pilot system in this survey could thus help the decision maker effectively promote various experts' views into various optimal solutions to China's institutional problem of health care provider selection.

  13. The Cosmic History of Hot Gas Cooling and Radio AGN Activity in Massive Early-Type Galaxies

    NASA Technical Reports Server (NTRS)

    Danielson, A. L. R.; Lehmer, B. D.; Alexander, D. M.; Brandt, W. M.; Luo, B.; Miller, N.; Xue, Y. Q.; Stott, J. P.

    2012-01-01

    We study the X-ray properties of 393 optically selected early-type galaxies (ETGs) over the redshift range z approx. 0.0-1.2 in the Chandra Deep Fields. To measure the average X-ray properties of the ETG population, we use X-ray stacking analyses with a subset of 158 passive ETGs (148 of which were individually undetected in X-rays). This ETG subset was constructed to span the redshift ranges z = 0.1-1.2 in the approx. 4 Ms CDF-S and approx. 2 Ms CDF-N and z = 0.1-0.6 in the approx. 250 ks E-CDF-S, where the contribution from individually undetected AGNs is expected to be negligible in our stacking. We find that 55 of the ETGs are detected individually in the X-rays, and 12 of these galaxies have properties consistent with being passive hot-gas-dominated systems (i.e., systems not dominated by an X-ray-bright active galactic nucleus, AGN). On the basis of our analyses, we find little evolution in the mean 0.5-2 keV to B-band luminosity ratio (L(sub X)/L(sub B) varies as [1 + z]) since z approx. 1.2, implying that some heating mechanism prevents the gas from cooling in these systems. We consider that feedback from radio-mode AGN activity could be responsible for heating the gas. We select radio AGNs in the ETG population using their far-infrared/radio flux ratio. Our radio observations allow us to constrain the duty cycle history of radio AGN activity in our ETG sample. We estimate that if scaling relations between radio and mechanical power hold out to z approx. 1.2 for the ETG population studied here, the average mechanical power from AGN activity is a factor of approx. 1.4-2.6 times larger than the average radiative cooling power from hot gas over the redshift range z approx. 0-1.2. The excess of inferred AGN mechanical power from these ETGs is consistent with that found in the local universe for similar types of galaxies.

  14. HST-WFC3 Near-Infrared Spectroscopy of Quenched Galaxies at zeta approx 1.5 from the WISP Survey: Stellar Populations Properties

    NASA Technical Reports Server (NTRS)

    Bedregal, A. G.; Scarlata, C.; Henry, A. L.; Atek, H.; Rafelski, M.; Teplitz, H. I.; Dominguez, A.; Siana, B.; Colbert, J. W.; Malkan, M.; hide

    2013-01-01

    We combine Hubble Space Telescope (HST) G102 and G141 near-IR (NIR) grism spectroscopy with HST/WFC3-UVIS, HST/WFC3-IR, and Spitzer/IRAC [3.6 microns] photometry to assemble a sample of massive (log(M*/M_sun) approx. 11.0) and quenched (specific star formation rate < 0.01 Gyr^-1) galaxies at z approx. 1.5. Our sample of 41 galaxies is the largest with G102+G141 NIR spectroscopy for quenched sources at these redshifts. In contrast to the local universe, z approx. 1.5 quenched galaxies in the high-mass range have a wide range of stellar population properties. We find that their spectral energy distributions (SEDs) are well fitted with exponentially decreasing star formation histories and short star formation timescales (tau <= 100 Myr). Quenched galaxies also show a wide distribution in ages, between 1 and 4 Gyr. In the (u - r)_0-versus-mass space quenched galaxies have a large spread in rest-frame color at a given mass. Most quenched galaxies populate the z approx. 1.5 red sequence (RS), but an important fraction of them (32%) have substantially bluer colors. Although with a large spread, we find that the quenched galaxies on the RS have older median ages (3.1 Gyr) than the quenched galaxies off the RS (1.5 Gyr). We also show that a rejuvenated SED cannot reproduce the observed stacked spectra of (the bluer) quenched galaxies off the RS. We derive the upper limit on the fraction of massive galaxies on the RS at z approx. 1.5 to be <43%. We speculate that the young quenched galaxies off the RS are in a transition phase between vigorous star formation at z > 2 and the z approx. 1.5 RS. According to their estimated ages, the time required for quenched galaxies off the RS to join their counterparts on the z approx. 1.5 RS is of the order of approx. 1 Gyr.

  15. TESTING GALAXY FORMATION MODELS WITH THE GHOSTS SURVEY: THE COLOR PROFILE OF M81's STELLAR HALO

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Monachesi, Antonela; Bell, Eric F.; Bailin, Jeremy

    2013-04-01

    We study the properties of the stellar populations in M81's outermost part, which hereafter we will call the stellar halo, using Hubble Space Telescope (HST) Advanced Camera for Surveys observations of 19 fields from the GHOSTS survey. The observed fields probe the stellar halo out to a projected distance of ≈50 kpc from the galactic center. Each field was observed in both the F606W and F814W filters. The 50% completeness levels of the color-magnitude diagrams (CMDs) are typically at 2 mag below the tip of the red giant branch (TRGB). Fields at distances closer than 15 kpc show evidence of disk-dominated populations, whereas fields at larger distances are mostly populated by halo stars. The red giant branch (RGB) of M81's halo CMDs is well matched with isochrones of ≈10 Gyr and metallicities [Fe/H] ≈ -1.2 dex, suggesting that the dominant stellar population of M81's halo has a similar age and metallicity. The halo of M81 is characterized by a color distribution of width ≈0.4 mag and an approximately constant median value of (F606W - F814W) ≈ 1 mag, measured using stars within the magnitude range 23.7 ≲ F814W ≲ 25.5. When considering only fields located at galactocentric radius R > 15 kpc, we detect no color gradient in the stellar halo of M81. We place a limit of 0.03 ± 0.11 mag on the difference between the median color of RGB M81 halo stars at ≈15 and at 50 kpc, corresponding to a metallicity difference of 0.08 ± 0.35 dex over that radial range for an assumed constant age of 10 Gyr. We compare these results with model predictions for the colors of stellar halos formed purely via accretion of satellite galaxies. When we analyze the cosmologically motivated models in the same way as the HST data, we find that they predict no color gradient for the stellar halos, in good agreement with the observations.

  16. HERSCHEL-ATLAS: TOWARD A SAMPLE OF {approx}1000 STRONGLY LENSED GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonzalez-Nuevo, J.; Lapi, A.; Bressan, S.

    2012-04-10

    While the selection of strongly lensed galaxies (SLGs) with 500 μm flux density S_500 > 100 mJy has proven to be rather straightforward, for many applications it is important to analyze samples larger than the ones obtained when confining ourselves to such a bright limit. Moreover, only by probing to fainter flux densities is it possible to exploit strong lensing to investigate the bulk of the high-z star-forming galaxy population. We describe HALOS (the Herschel-ATLAS Lensed Objects Selection), a method for efficiently selecting fainter candidate SLGs, reaching a surface density of ≈1.5-2 deg^-2, i.e., a factor of about 4-6 higher than that at the 100 mJy flux limit. HALOS will allow the selection of up to ≈1000 candidate SLGs (with amplifications μ ≳ 2) over the full H-ATLAS survey area. Applying HALOS to the H-ATLAS Science Demonstration Phase field (≈14.4 deg^2) we find 31 candidate SLGs, whose candidate lenses are identified in the VIKING near-infrared catalog. Using the available information on candidate sources and candidate lenses we tentatively estimate a ≈72% purity of the sample. As expected, the purity decreases with decreasing flux density of the sources and with increasing angular separation between candidate sources and lenses. The redshift distribution of the candidate lensed sources is close to that reported for most previous surveys for lensed galaxies, while that of candidate lenses extends to redshifts substantially higher than found in the other surveys. The counts of candidate SLGs are also in good agreement with model predictions. Even though a key ingredient of the method is the deep near-infrared VIKING photometry, we show that H-ATLAS data alone allow the selection of a similarly deep sample of candidate SLGs with an efficiency close to 50%; a slightly lower surface density (≈1.45 deg^-2) can be reached with a ≈70% efficiency.

  17. ASTROPHYSICAL PARAMETERS OF LS 2883 AND IMPLICATIONS FOR THE PSR B1259-63 GAMMA-RAY BINARY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Negueruela, Ignacio; Lorenzo, Javier; Ribo, Marc

    2011-05-01

    Only a few binary systems with compact objects display TeV emission. The physical properties of the companion stars represent basic input for understanding the physical mechanisms behind the particle acceleration, emission, and absorption processes in these so-called gamma-ray binaries. Here we present high-resolution and high signal-to-noise optical spectra of LS 2883, the Be star forming a gamma-ray binary with the young non-accreting pulsar PSR B1259-63, showing it to rotate faster and be significantly earlier and more luminous than previously thought. Analysis of the interstellar lines suggests that the system is located at the same distance as (and thus is likely a member of) Cen OB1. Taking the distance to the association, d = 2.3 kpc, and a color excess of E(B - V) = 0.85 for LS 2883 results in M_V ≈ -4.4. Because of fast rotation, LS 2883 is oblate (R_eq ≈ 9.7 R_sun and R_pole ≈ 8.1 R_sun) and presents a temperature gradient (T_eq ≈ 27,500 K, log g_eq = 3.7; T_pole ≈ 34,000 K, log g_pole = 4.1). If the star did not rotate, it would have parameters corresponding to a late O-type star. We estimate its luminosity at log(L*/L_sun) ≈ 4.79 and its mass at M* ≈ 30 M_sun. The mass function then implies an inclination of the binary system i_orb ≈ 23°, slightly smaller than previous estimates. We discuss the implications of these new astrophysical parameters of LS 2883 for the production of high-energy and very high-energy gamma rays in the PSR B1259-63/LS 2883 gamma-ray binary system. In particular, the stellar properties are very important for prediction of the line-like bulk Comptonization component from the unshocked ultrarelativistic pulsar wind.

  18. GAMMA-RAY BURST HOST GALAXY SURVEYS AT REDSHIFT z {approx}> 4: PROBES OF STAR FORMATION RATE AND COSMIC REIONIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Trenti, Michele; Perna, Rosalba; Levesque, Emily M.

    2012-04-20

    Measuring the star formation rate (SFR) at high redshift is crucial for understanding cosmic reionization and galaxy formation. Two common complementary approaches are Lyman break galaxy (LBG) surveys for large samples and gamma-ray burst (GRB) observations for sensitivity to SFR in small galaxies. The z ≳ 4 GRB-inferred SFR is higher than the LBG rate, but this difference is difficult to understand, as both methods rely on several modeling assumptions. Using a physically motivated galaxy luminosity function model, with star formation in dark matter halos with virial temperature T_vir ≳ 2 × 10^4 K (M_DM ≳ 2 × 10^8 M_sun), we show that GRB- and LBG-derived SFRs are consistent if GRBs extend to faint galaxies (M_AB ≲ -11). To test star formation below the detection limit L_lim ≈ 0.05 L*_{z=3} of LBG surveys, we propose to measure the fraction f_det(L > L_lim, z) of GRB hosts with L > L_lim. This fraction quantifies the missing star formation fraction in LBG surveys, constraining the mass-suppression scale for galaxy formation, with weak dependence on modeling assumptions. Because f_det(L > L_lim, z) corresponds to the ratio of SFRs derived from LBG and GRB surveys, if these estimators are unbiased, measuring f_det(L > L_lim, z) also constrains the redshift evolution of the GRB production rate per unit mass of star formation. Our analysis predicts significant success for GRB host detections at z ≈ 5 with f_det(L > L_lim, z) ≈ 0.4, but rarer detections at z > 6. By analyzing the upper limits on host galaxy luminosities of six z > 5 GRBs from literature data, we infer that galaxies with M_AB > -15 were present at z > 5 at 95% confidence, demonstrating the key role played by very faint galaxies during reionization.

  19. MAPPING THE SHORES OF THE BROWN DWARF DESERT. III. YOUNG MOVING GROUPS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, T. M.; Ireland, M. J.; Stewart, P.

    2012-01-10

    We present the results of an aperture-masking interferometry survey for substellar companions around 67 members of the young (≈8-200 Myr) nearby (≈5-86 pc) AB Doradus, β Pictoris, Hercules-Lyra, TW Hya, and Tucana-Horologium stellar associations. Observations were made at near-infrared wavelengths between 1.2 and 3.8 μm using the adaptive optics facilities of the Keck II, Very Large Telescope UT4, and Palomar Hale Telescopes. Typical contrast ratios of ≈100-200 were achieved at angular separations between ≈40 and 320 mas, with our survey being 100% complete for companions with masses below ≈0.25 M_sun across this range. We report the discovery of a 0.52 ± 0.09 M_sun companion to HIP 14807, as well as the detections and orbits of previously known stellar companions to HD 16760, HD 113449, and HD 160934. We show that the companion to HD 16760 is in a face-on orbit, resulting in an upward revision of its mass from M_2 sin i ≈ 14 M_J to M_2 = 0.28 ± 0.04 M_sun. No substellar companions were detected around any of our sample members, despite our ability to detect companions with masses below 80 M_J for 50 of our targets: of these, our sensitivity extended down to 40 M_J around 30 targets, with a subset of 22 subject to the still more stringent limit of 20 M_J. A statistical analysis of our non-detection of substellar companions allows us to place constraints on their frequency around ≈0.2-1.5 M_sun stars. In particular, considering companion mass distributions that have been proposed in the literature, we obtain an upper limit estimate of ≈9%-11% for the frequency of 20-80 M_J companions between 3 and 30 AU at 95% confidence, assuming that their semimajor axes are distributed according to dN/da ∝ a^-1 in this range.

  20. Critical error fields for locked mode instability in tokamaks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    La Haye, R.J.; Fitzpatrick, R.; Hender, T.C.

    1992-07-01

    Otherwise stable discharges can become nonlinearly unstable to disruptive locked modes when subjected to a resonant m=2, n=1 error field from irregular poloidal field coils, as in DIII-D (Nucl. Fusion 31, 875 (1991)), or from resonant magnetic perturbation coils, as in COMPASS-C (Proceedings of the 18th European Conference on Controlled Fusion and Plasma Physics, Berlin (EPS, Petit-Lancy, Switzerland, 1991), Vol. 15C, Part II, p. 61). Experiments in Ohmically heated deuterium discharges with q ≈ 3.5, n̄ ≈ 2 × 10^19 m^-3, and B_T ≈ 1.2 T show that a much larger relative error field (B_r21/B_T ≈ 1 × 10^-3) is required to produce a locked mode in the small, rapidly rotating plasma of COMPASS-C (R_0 = 0.56 m, f ≈ 13 kHz) than in the medium-sized plasmas of DIII-D (R_0 = 1.67 m, f ≈ 1.6 kHz), where the critical relative error field is B_r21/B_T ≈ 2 × 10^-4. This dependence of the threshold for instability is explained by a nonlinear tearing theory of the interaction of resonant magnetic perturbations with rotating plasmas, which predicts that the critical error field scales as (f R_0/B_T)^(4/3) n̄^(2/3). Extrapolating from existing devices, the predicted critical field for locked modes in Ohmic discharges on the International Thermonuclear Experimental Reactor (ITER) (Nucl. Fusion 30, 1183 (1990)) (f = 0.17 kHz, R_0 = 6.0 m, B_T = 4.9 T, n̄ = 2 × 10^19 m^-3) is B_r21/B_T ≈ 2 × 10^-5.
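
    As an order-of-magnitude check (our arithmetic, not the paper's calculation), anchoring the quoted scaling to the COMPASS-C threshold and extrapolating to the ITER parameters above reproduces the predicted critical field to within a factor of about two:

        # Illustrative extrapolation of B_r21/B_T ~ (f*R0/B_T)^(4/3) * n^(2/3).
        def critical_field(f_khz, r0_m, bt_t, n19, ref):
            f0, r00, b0, n0, c0 = ref        # reference machine and its threshold
            return (c0 * ((f_khz * r0_m / bt_t) / (f0 * r00 / b0)) ** (4.0 / 3.0)
                       * (n19 / n0) ** (2.0 / 3.0))

        # f [kHz], R0 [m], B_T [T], n/1e19 [m^-3], observed threshold
        compass_c = (13.0, 0.56, 1.2, 2.0, 1.0e-3)
        # ITER parameters as quoted above; prints ~1e-5 vs. the quoted 2e-5.
        print(f"{critical_field(0.17, 6.0, 4.9, 2.0, compass_c):.1e}")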
