Lan, Yihua; Li, Cunhua; Ren, Haozheng; Zhang, Yong; Min, Zhifang
2012-10-21
A new heuristic algorithm based on a geometric distance sorting technique is proposed for solving fluence map optimization with dose-volume constraints, one of the most essential tasks in inverse planning for IMRT. The proposed method is an iterative process that begins with a simple linearly constrained quadratic optimization model without any dose-volume constraints; dose constraints for the voxels violating the dose-volume constraints are then gradually added to the quadratic optimization model, step by step, until all dose-volume constraints are satisfied. In each iteration, an interior point method is adopted to solve the new linearly constrained quadratic program. To choose suitable candidate voxels for each constraint addition, a geometric distance defined in the transformed standard quadratic form of the fluence map optimization model is used to guide the selection of voxels. This geometric distance sorting technique largely reduces the unwanted increase in the objective function value that constraint addition inevitably causes, and can be regarded as an upgrade of the traditional dose sorting technique. A geometric interpretation of the proposed method is given, and a proposition is proved to support the heuristic. In addition, a smart constraint adding/deleting strategy is designed to ensure stable convergence of the iteration. The new algorithm was tested on four cases (head-and-neck, prostate, lung, and oropharyngeal) and compared with an algorithm based on the traditional dose sorting technique. Experimental results showed that the proposed method is better suited to guiding the selection of new constraints than traditional dose sorting, especially for cases whose target regions have non-convex shapes. To some extent it is a more efficient technique for choosing constraints than the dose sorting method. By integrating the smart constraint adding/deleting scheme within the iteration framework, the new technique builds up an improved algorithm for solving fluence map optimization with dose-volume constraints.
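The iterative constraint-adding scheme described above can be sketched as follows. This is an illustrative toy (random dose matrix, made-up prescription numbers, SciPy's SLSQP in place of the authors' interior point solver, and plain dose sorting in place of geometric distance sorting), not the published algorithm:

```python
# Sketch: iteratively add hard dose caps for voxels violating a
# dose-volume constraint (DVC), re-solving a constrained QP each round.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(30, 8))   # toy dose-deposition matrix
p = np.full(30, 6.0)                       # prescribed voxel doses
d_max, max_frac = 6.5, 0.2                 # DVC: at most 20% of voxels above 6.5

def solve_qp(active):
    """Least-squares dose objective with hard caps on the active voxel set."""
    cons = [{"type": "ineq", "fun": lambda x, i=i: d_max - D[i] @ x}
            for i in active]
    res = minimize(lambda x: np.sum((D @ x - p) ** 2), np.ones(8),
                   bounds=[(0, None)] * 8, constraints=cons, method="SLSQP")
    return res.x

active = set()
for _ in range(len(p)):
    x = solve_qp(active)
    dose = D @ x
    violators = np.flatnonzero(dose > d_max + 1e-4)
    if len(violators) <= max_frac * len(dose):
        break                              # all DVCs satisfied
    free = [i for i in violators if i not in active]
    if not free:
        break
    # dose-sorting stand-in for geometric-distance sorting:
    # cap the currently worst uncapped voxel
    active.add(int(free[np.argmax(dose[free])]))
```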
Chebib, Jobran; Guillaume, Frédéric
2017-10-01
Phenotypic traits do not always respond to selection independently from each other and often show correlated responses to selection. The structure of a genotype-phenotype map (GP map) determines trait covariation, which involves variation in the degree and strength of the pleiotropic effects of the underlying genes. It is still unclear, and debated, how much of that structure can be deduced from variational properties of quantitative traits that are inferred from their genetic (co)variance matrix (G-matrix). Here we aim to clarify how the extent of pleiotropy and the correlation among the pleiotropic effects of mutations differentially affect the structure of a G-matrix and our ability to detect genetic constraints from its eigen decomposition. We show that the eigenvectors of a G-matrix can be predictive of evolutionary constraints when they map to underlying pleiotropic modules with correlated mutational effects. Without mutational correlation, evolutionary constraints caused by the fitness costs associated with increased pleiotropy are harder to infer from evolutionary metrics based on a G-matrix's geometric properties, because uncorrelated pleiotropic effects do not affect traits' genetic correlations. Correlational selection induces much weaker modular partitioning of traits' genetic correlations in the absence than in the presence of underlying modular pleiotropy. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
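The eigen-decomposition argument can be illustrated with a toy G-matrix (our construction, not the authors' simulations): when two pleiotropic modules have correlated mutational effects, the leading eigenvector of G loads almost entirely on one module:

```python
# Toy block-diagonal G-matrix with two modules of correlated effects.
import numpy as np

G = np.block([
    [np.full((2, 2), 0.9) + 0.1 * np.eye(2), np.zeros((2, 2))],
    [np.zeros((2, 2)), np.full((2, 2), 0.5) + 0.5 * np.eye(2)],
])
evals, evecs = np.linalg.eigh(G)   # eigh returns ascending eigenvalues
lead = evecs[:, -1]                # gmax: direction of most genetic variance
# gmax aligns with the strongly correlated module (traits 1-2);
# the other module contributes essentially nothing to it
print(np.round(np.abs(lead), 3))
```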
Moho map of South America from receiver functions and surface waves
NASA Astrophysics Data System (ADS)
Lloyd, Simon; van der Lee, Suzan; França, George Sand; Assumpção, Marcelo; Feng, Mei
2010-11-01
We estimate crustal structure and thickness of South America north of roughly 40°S. To this end, we analyzed receiver functions from 20 relatively new temporary broadband seismic stations deployed across eastern Brazil. In the analysis we include teleseismic and some regional events, particularly for stations that recorded few suitable earthquakes. We first estimate crustal thickness and average Poisson's ratio using two different stacking methods. We then combine the new crustal constraints with results from previous receiver function studies. To interpolate the crustal thickness between the station locations, we jointly invert these Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh waveforms for a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust. Whether or not a positive correlation between crustal thickness and geologic age is derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions. This implies that using only pre-interpolation point constraints (receiver functions) inadequately samples the spatial variation in geologic age. The new Moho map also reveals an anomalously deep Moho beneath the oldest core of the Amazonian Craton.
Learning process mapping heuristics under stochastic sampling overheads
NASA Technical Reports Server (NTRS)
Ieumwananonthachai, Arthur; Wah, Benjamin W.
1991-01-01
A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint. Its goal is to find the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. Here, the statistical selection method is extended to take into consideration variations in the amount of time used to evaluate heuristics on a problem instance. The improvement in performance under this more realistic assumption is presented, along with some methods that alleviate the additional complexity.
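The setting can be caricatured in a few lines (our toy, with made-up quality and overhead distributions; the paper's sequential-selection machinery is far more refined): each evaluation of a heuristic consumes a random amount of the time budget, and the selection must be made from whatever samples fit in that budget.

```python
# Toy selection under stochastic evaluation overheads: sample candidate
# heuristics until the time budget runs out, then pick the best mean.
import random

random.seed(3)

def evaluate(h):
    """Returns (quality, time_spent); both random per problem instance."""
    q = random.gauss(h["mean_q"], 0.5)
    t = random.expovariate(1.0 / h["mean_t"])   # stochastic overhead
    return q, t

heuristics = [{"name": "h1", "mean_q": 1.0, "mean_t": 1.0},
              {"name": "h2", "mean_q": 2.0, "mean_t": 1.5}]
budget = 200.0
scores = {h["name"]: [] for h in heuristics}
while budget > 0:
    h = random.choice(heuristics)               # uniform sampling, for simplicity
    q, t = evaluate(h)
    budget -= t                                 # evaluation time is itself random
    scores[h["name"]].append(q)

best = max(scores, key=lambda n: sum(scores[n]) / len(scores[n]))
```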
Boiret, Mathieu; de Juan, Anna; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel
2015-09-10
Raman chemical imaging provides chemical and spatial information about a pharmaceutical drug product. By using resolution methods on the acquired spectra, the objective is to calculate pure spectra and distribution maps of the image compounds. With multivariate curve resolution-alternating least squares (MCR-ALS), constraints are used to improve the performance of the resolution and to decrease the ambiguity linked to the final solution. Non-negativity and spatial local rank constraints have been identified as the most powerful constraints to be used. In this work, an alternative method to set local rank constraints is proposed. The method is based on an orthogonal projection pretreatment. For each drug product compound, raw Raman spectra are orthogonally projected onto a basis including all the variability from the formulation compounds other than the compound of interest. Presence or absence of the compound of interest is determined by observing the correlations between the orthogonally projected spectra and a pure spectrum projected onto the same basis. By selecting an appropriate threshold, maps of presence/absence can be set up for all the product compounds. This method appears to be a powerful approach for identifying a low-dose compound within a pharmaceutical drug product. The maps of presence/absence of compounds can be used as local rank constraints in resolution methods, such as the multivariate curve resolution-alternating least squares process, in order to improve the resolution of the system. The proposed method is particularly suited to pharmaceutical systems, where the identity of all compounds in the formulation is known and, therefore, the space of interferences can be well defined. Copyright © 2015 Elsevier B.V. All rights reserved.
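The orthogonal-projection pretreatment can be sketched with synthetic spectra (our reading of the approach, with made-up data, not the authors' code): project each pixel spectrum onto the orthogonal complement of the interference subspace, then correlate with the projected pure spectrum.

```python
# Orthogonal projection of spectra onto the complement of the
# interference (other-compounds) subspace, then correlation thresholding.
import numpy as np

rng = np.random.default_rng(1)
n_ch = 200
pure = np.abs(rng.normal(size=n_ch))           # pure spectrum of interest
interf = np.abs(rng.normal(size=(n_ch, 3)))    # other formulation compounds
# projector onto the orthogonal complement of span(interf)
P = np.eye(n_ch) - interf @ np.linalg.pinv(interf)

noise = 0.01 * rng.normal(size=n_ch)
mixed_with = 0.3 * pure + interf @ [0.5, 0.2, 0.1]        # contains compound
mixed_without = interf @ [0.4, 0.3, 0.2] + noise          # does not

r_with = np.corrcoef(P @ mixed_with, P @ pure)[0, 1]
r_without = np.corrcoef(P @ mixed_without, P @ pure)[0, 1]
present = r_with > 0.5      # thresholding yields the presence/absence map
```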
Artificial Intelligence Support for Landing Site Selection on Mars
NASA Astrophysics Data System (ADS)
Rongier, G.; Pankratius, V.
2017-12-01
Mars is a key target for planetary exploration; a better understanding of its evolution and habitability requires roving in situ. Landing site selection is becoming more challenging for scientists as new instruments generate higher data volumes. The engineering and scientific constraints involved make site selection and the anticipation of possible onsite actions a complex optimization problem: there may be multiple acceptable solutions depending on various goals and assumptions. Solutions must also account for missing data, errors, and potential biases. To address these problems, we propose an AI-informed decision support system that allows scientists, mission designers, engineers, and committees to explore alternative site selection choices based on data. In particular, we demonstrate first results of an exploratory case study using fuzzy logic and a simulation of a rover's mobility map based on the fast marching algorithm. Our system computes favorability maps of the entire planet to facilitate landing site selection and allows a definition of different configurations for rovers, science target priorities, landing ellipses, and other constraints. For a rover similar to NASA's Mars 2020 rover, we present results in the form of a site favorability map as well as four derived exploration scenarios that depend on different prioritized scientific targets, all visualizing inherent tradeoffs. Our method uses the NASA PDS Geosciences Node and the NASA/ICA Integrated Database of Planetary Features. Under common assumptions, the data products reveal Eastern Margaritifer Terra and Meridiani Planum to be the most favorable sites due to a high concentration of scientific targets and a flat, easily navigable surface. Our method also allows mission designers to investigate which constraints have the highest impact on the mission exploration potential and to change parameter ranges.
Increasing the elevation limit for landing, for example, provides access to many additional, more interesting sites in the southern terrains of Mars. The speed of current rovers is another limit on exploration capabilities: our system helps quantify how speed increases can improve the number of reachable targets in the search space. We acknowledge support from NASA AIST NNX15AG84G (PI Pankratius) and NSF ACI-1442997 (PI Pankratius).
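The fuzzy-logic side of such a favorability map can be sketched very simply (toy membership functions and grids of our own invention, not the paper's criteria): each engineering constraint becomes a membership map, and a fuzzy AND (minimum) combines them.

```python
# Fuzzy favorability from two toy criteria: terrain slope and elevation.
import numpy as np

slope = np.array([[2.0, 8.0], [20.0, 4.0]])    # degrees
elev = np.array([[-1.0, -2.5], [0.5, -3.0]])   # km

def mu_flat(s):
    # full membership at slopes <= 5 deg, none above 15 deg
    return np.clip((15.0 - s) / 10.0, 0.0, 1.0)

def mu_low(e):
    # full membership at elevations <= -2 km, none above 0 km
    return np.clip(-e / 2.0, 0.0, 1.0)

# fuzzy AND: a cell is only as favorable as its worst criterion
favorability = np.minimum(mu_flat(slope), mu_low(elev))
```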
A constraint optimization based virtual network mapping method
NASA Astrophysics Data System (ADS)
Li, Xiaoling; Guo, Changguo; Wang, Huaimin; Li, Zhendong; Yang, Zhiwen
2013-03-01
The virtual network mapping problem, which maps different virtual networks onto a shared substrate network, is extremely challenging. This paper proposes a constraint-optimization-based mapping method for solving it. The method divides the problem into two phases, node mapping and link mapping, each of which is NP-hard, and proposes an algorithm for each. The node mapping algorithm takes a greedy approach, mainly considering two factors: the available resources supplied by the nodes and the distance between nodes. The link mapping algorithm builds on the result of the node mapping phase and adopts a distributed constraint optimization method, which guarantees an optimal mapping with minimum network cost. Finally, simulation experiments are used to validate the method, and the results show that it performs well.
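The greedy node-mapping phase can be sketched as follows (our simplification with CPU as the only resource and a one-to-one node constraint; the paper's algorithm also weighs inter-node distance):

```python
# Greedy node mapping: place virtual nodes, largest demand first,
# onto the best-resourced feasible substrate node.
def greedy_node_map(virtual_demand, substrate_cpu):
    """virtual_demand: {vnode: cpu needed}, substrate_cpu: {snode: cpu free}."""
    cpu = dict(substrate_cpu)
    mapping = {}
    for v, need in sorted(virtual_demand.items(),
                          key=lambda kv: kv[1], reverse=True):
        # candidates with enough residual CPU, not already used by this VN
        cands = [s for s in cpu
                 if cpu[s] >= need and s not in mapping.values()]
        if not cands:
            return None                    # mapping infeasible
        s = max(cands, key=cpu.get)        # pick the best-resourced node
        mapping[v] = s
        cpu[s] -= need
    return mapping

m = greedy_node_map({"a": 40, "b": 20, "c": 10},
                    {"x": 50, "y": 30, "z": 25})
```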
Bound-preserving Legendre-WENO finite volume schemes using nonlinear mapping
NASA Astrophysics Data System (ADS)
Smith, Timothy; Pantano, Carlos
2017-11-01
We present a new method to enforce field bounds in high-order Legendre-WENO finite volume schemes. The strategy consists of reconstructing each field through an intermediate mapping, which by design satisfies realizability constraints. Determination of the coefficients of the polynomial reconstruction involves nonlinear equations that are solved using Newton's method. The selection between the original or mapped reconstruction is implemented dynamically to minimize computational cost. The method has also been generalized to fields that exhibit interdependencies, requiring multi-dimensional mappings. Further, the method does not depend on the existence of a numerical flux function. We will discuss details of the proposed scheme and show results for systems in conservation and non-conservation form. This work was funded by the NSF under Grant DMS 1318161.
MOLA-Based Landing Site Characterization
NASA Technical Reports Server (NTRS)
Duxbury, T. C.; Ivanov, A. B.
2001-01-01
The Mars Global Surveyor (MGS) Mars Orbiter Laser Altimeter (MOLA) data provide a basis for site characterization and selection never before possible. The basic MOLA information includes absolute radii, elevation, and 1-micrometer albedo, with derived datasets including digital image models (DIMs: illuminated elevation data), slope maps and slope statistics, and small-scale surface roughness maps and statistics. These quantities are useful for narrowing down potential sites against descent engineering constraints and for landing/roving hazard and mobility assessments. Slope baselines at the few-hundred-meter level and surface roughness at the 10-meter level are possible. Additionally, the MOLA-derived Mars surface offers the possibility to precisely register and map-project other instrument datasets (images, ultraviolet, infrared, radar, etc.) taken at different resolutions and viewing and lighting geometries, building multiple layers of an information cube for site characterization and selection. Examples of direct MOLA data, data derived from MOLA, and other instrument data registered to MOLA are given for the Hematite area.
NASA Astrophysics Data System (ADS)
Issa, S. M.; Shehhi, B. Al
2012-07-01
Landfill sites receive 92% of the total annual solid waste produced by municipalities in the emirate of Abu Dhabi. In this study, candidate sites for an appropriate landfill location for the Abu Dhabi municipal area are determined by integrating geographic information systems (GIS) and multi-criteria evaluation (MCE) analysis. To identify appropriate landfill sites, eight input map layers are used in constraint mapping, including proximity to urban areas, proximity to wells and water table depth, geology and topography, proximity to touristic and archeological sites, distance from the road network, distance from drainage networks, and land slope. A final map was generated identifying potential areas suitable for the location of the landfill site. Results revealed that 30% of the study area was identified as highly suitable, 25% as suitable, and 45% as unsuitable. The selection of the final landfill site, however, requires further field research.
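A GIS constraint-mapping overlay of this kind reduces to a Boolean exclusion step followed by a weighted suitability score. A minimal sketch (toy two-by-two grids, made-up thresholds and weights, not the study's actual layers):

```python
# Boolean constraint mapping plus weighted-overlay suitability scoring.
import numpy as np

dist_urban = np.array([[0.5, 6.0], [8.0, 3.0]])   # km to urban areas
slope = np.array([[2.0, 1.0], [15.0, 4.0]])       # percent slope
dist_road = np.array([[1.0, 2.0], [3.0, 12.0]])   # km to road network

# hard constraints: exclude cells failing any criterion
feasible = (dist_urban > 2.0) & (slope < 10.0) & (dist_road < 10.0)

# normalized criteria (higher = better), combined with MCE-style weights
score = 0.5 * np.clip(dist_urban / 10.0, 0, 1) \
      + 0.3 * (1 - np.clip(slope / 20.0, 0, 1)) \
      + 0.2 * (1 - np.clip(dist_road / 15.0, 0, 1))
suitability = np.where(feasible, score, 0.0)      # infeasible cells score 0
```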
Selection of the InSight landing site
Golombek, M.; Kipp, D.; Warner, N.; Daubar, Ingrid J.; Fergason, Robin L.; Kirk, Randolph L.; Beyer, R.; Huertas, A.; Piqueux, Sylvain; Putzig, N.E.; Campbell, B.A.; Morgan, G. A.; Charalambous, C.; Pike, W. T.; Gwinner, K.; Calef, F.; Kass, D.; Mischna, M A; Ashley, J.; Bloom, C.; Wigton, N.; Hare, T.; Schwartz, C.; Gengl, H.; Redmond, L.; Trautman, M.; Sweeney, J.; Grima, C.; Smith, I. B.; Sklyanskiy, E.; Lisano, M.; Benardini, J.; Smrekar, S.E.; Lognonne, P.; Banerdt, W. B.
2017-01-01
The selection of the Discovery Program InSight landing site took over four years from initial identification of possible areas that met engineering constraints, to downselection via targeted data from orbiters (especially Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High-Resolution Imaging Science Experiment (HiRISE) images), to selection and certification via sophisticated entry, descent and landing (EDL) simulations. Constraints on elevation (≤ −2.5 km for sufficient atmosphere to slow the lander), latitude (initially 15°S–5°N and later 3°N–5°N for solar power and thermal management of the spacecraft), ellipse size (130 km by 27 km from ballistic entry and descent), and a load-bearing surface without thick deposits of dust severely limited acceptable areas to western Elysium Planitia. Within this area, 16 prospective ellipses were identified, which lie ∼600 km north of the Mars Science Laboratory (MSL) rover. Mapping of terrains in rapidly acquired CTX images identified especially benign smooth terrain and led to the downselection to four northern ellipses. Acquisition of nearly continuous HiRISE, additional Thermal Emission Imaging System (THEMIS), and High Resolution Stereo Camera (HRSC) images, along with radar data, confirmed that ellipse E9 met all landing site constraints: slopes <15° at 84 m and 2 m length scales for radar tracking and touchdown stability; low rock abundance (<10%) to avoid impact and spacecraft tip-over; instrument deployment constraints, which included identical slope and rock abundance constraints; a radar-reflective and load-bearing surface; and a fragmented regolith ∼5 m thick for full penetration of the heat flow probe. Unlike other Mars landers, science objectives did not directly influence landing site selection.
Analysis of Temperature Maps of Selected Dawn Data Over the Surface of Vesta
NASA Technical Reports Server (NTRS)
Tosi, F.; Capria, M. T.; DeSanctis, M. C.; Palomba, E.; Grassi, D.; Capaccioni, F.; Ammannito, E.; Combe, J.-Ph.; Sunshine, J. M.; McCord, T. B.;
2012-01-01
The thermal behavior of areas of unusual albedo at the surface of Vesta can be related to physical properties that may provide some information about the origin of those materials. Dawn's Visible and Infrared Mapping Spectrometer (VIR) [1] hyperspectral cubes can be used to retrieve surface temperatures. Due to instrumental constraints, high accuracy is obtained only if temperatures are greater than 180 K. Bright and dark surface materials on Vesta are currently investigated by the Dawn team [e.g., 2 and 3, respectively]. Here we present temperature maps of several local-scale features that were observed by Dawn under different illumination conditions and different local solar times.
Constraints on somatosensory map development: mutants lead the way.
Gaspar, Patricia; Renier, Nicolas
2018-05-09
In the rodent somatosensory system, the disproportionately large whisker representation and its specialization into barrel-shaped units in the different sensory relays has offered experimentalists an ideal tool to identify mechanisms involved in brain map formation. These mechanisms combine three intertwined constraints: first, fasciculation of the incoming axons; second, early neural activity; and finally, molecular patterning. Sophisticated genetic manipulations in mice have now allowed these mechanisms to be dissected with greater accuracy. Here we discuss some recent papers that provide novel insights into how these different mapping rules and constraints interact to shape the barrel map. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Raup, B. H.; Khalsa, S. S.; Armstrong, R.
2007-12-01
The Global Land Ice Measurements from Space (GLIMS) project has built a geospatial and temporal database of glacier data, composed of glacier outlines and various scalar attributes. These data are being derived primarily from satellite imagery, such as from ASTER and Landsat. Each "snapshot" of a glacier is from a specific time, and the database is designed to store multiple snapshots representative of different times. We have implemented two web-based interfaces to the database; one enables exploration of the data via interactive maps (web map server), while the other allows searches based on text-field constraints. The web map server is an Open Geospatial Consortium (OGC) compliant Web Map Server (WMS) and Web Feature Server (WFS). This means that other web sites can display glacier layers from our site over the Internet, or retrieve glacier features in vector format. All components of the system are implemented using Open Source software: Linux, PostgreSQL, PostGIS (geospatial extensions to the database), MapServer (WMS and WFS), and several supporting components such as Proj.4 (a geographic projection library) and PHP. These tools are robust and provide a flexible and powerful framework for web mapping applications. As a service to the GLIMS community, the database contains metadata on all ASTER imagery acquired over glacierized terrain. Reduced-resolution versions of the images (browse imagery) can be viewed either as a layer in the MapServer application or overlaid on the virtual globe within Google Earth. The interactive map application allows the user to constrain by time what data appear on the map. For example, ASTER imagery or glacier outlines from 2002 only, or from autumn of any year, can be displayed. The system allows users to download their selected glacier data in a choice of formats.
The results of a query based on spatial selection (using a mouse) or text-field constraints can be downloaded in any of these formats: ESRI shapefiles, KML (Google Earth), MapInfo, GML (Geography Markup Language), and GMT (Generic Mapping Tools). This "clip-and-ship" function allows users to download only the data they are interested in. Our flexible web interfaces to the database, which includes various support layers (e.g. a layer to help collaborators identify satellite imagery over their region of expertise), will facilitate enhanced analysis of glacier systems, their distribution, and their impacts on other Earth systems.
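Because the map server is OGC-compliant, any client can request glacier layers with a standard WMS GetMap query string. A minimal sketch (hypothetical endpoint and layer name; the parameter names are from the WMS 1.1.1 specification):

```python
# Building a WMS GetMap request URL with the standard query parameters.
from urllib.parse import urlencode

params = {
    "SERVICE": "WMS", "VERSION": "1.1.1", "REQUEST": "GetMap",
    "LAYERS": "glacier_outlines",          # hypothetical layer name
    "SRS": "EPSG:4326",                    # lon/lat coordinate system
    "BBOX": "86.0,27.5,87.5,28.5",         # minx,miny,maxx,maxy
    "WIDTH": "512", "HEIGHT": "512",
    "FORMAT": "image/png",
}
url = "https://example.org/glims/wms?" + urlencode(params)
```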
MPEG-7-based description infrastructure for an audiovisual content analysis and retrieval system
NASA Astrophysics Data System (ADS)
Bailer, Werner; Schallauer, Peter; Hausenblas, Michael; Thallinger, Georg
2005-01-01
We present a case study of establishing a description infrastructure for an audiovisual content-analysis and retrieval system. The description infrastructure consists of an internal metadata model and an access tool for using it. Based on an analysis of requirements, we selected, out of a set of candidates, MPEG-7 as the basis of our metadata model. The openness and generality of MPEG-7 allow it to be used in a broad range of applications, but they increase complexity and hinder interoperability. Profiling has been proposed as a solution, with the focus on selecting and constraining description tools. Semantic constraints are currently described only in textual form; conformance in terms of semantics thus cannot be evaluated automatically, and mappings between different profiles can only be defined manually. As a solution, we propose an approach to formalize the semantic constraints of an MPEG-7 profile using a formal vocabulary expressed in OWL, which allows automated processing of semantic constraints. We have defined the Detailed Audiovisual Profile as the profile to be used in our metadata model, and we show how some of the semantic constraints of this profile can be formulated using ontologies. To work practically with the metadata model, we have implemented an MPEG-7 library and a client/server document access infrastructure.
Expected Utility Distributions for Flexible, Contingent Execution
NASA Technical Reports Server (NTRS)
Bresina, John L.; Washington, Richard
2000-01-01
This paper presents a method for using expected utility distributions in the execution of flexible, contingent plans. A utility distribution maps the possible start times of an action to the expected utility of the plan suffix starting with that action. The contingent plan encodes a tree of possible courses of action and includes flexible temporal constraints and resource constraints. When execution reaches a branch point, the eligible option with the highest expected utility at that point in time is selected. The utility distributions make this selection sensitive to the runtime context, yet still efficient. Our approach uses predictions of action-duration uncertainty as well as expectations of resource usage and availability to determine when an action can execute and with what probability. Execution windows and probabilities inevitably change as execution proceeds, but such changes do not invalidate the cached utility distributions; thus, dynamic updating of utility information is minimized.
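The branch-point selection rule can be sketched with piecewise-constant utility distributions (hypothetical action names and numbers; the paper's distributions are derived from duration and resource models):

```python
# At a branch point, pick the eligible option whose utility
# distribution gives the highest expected utility at the current time.
def expected_utility(dist, t):
    """dist: list of (t_start, t_end, utility) pieces, piecewise constant."""
    for lo, hi, u in dist:
        if lo <= t < hi:
            return u
    return 0.0        # outside the action's execution window

options = {
    "drive": [(0, 50, 8.0), (50, 100, 3.0)],   # utility decays if started late
    "image": [(0, 100, 5.0)],
}
now = 60
best = max(options, key=lambda o: expected_utility(options[o], now))
```

Because each option's distribution is cached, re-evaluating the choice as the clock advances costs only a lookup, matching the efficiency argument above.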
Blind beam-hardening correction from Poisson measurements
NASA Astrophysics Data System (ADS)
Gu, Renliang; Dogandžić, Aleksandar
2016-02-01
We develop a sparse image reconstruction method for Poisson-distributed polychromatic X-ray computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. We employ our mass-attenuation spectrum parameterization of the noiseless measurements and express the mass-attenuation spectrum as a linear combination of B-spline basis functions of order one. A block coordinate-descent algorithm is developed for constrained minimization of a penalized Poisson negative log-likelihood (NLL) cost function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and nonnegativity and sparsity of the density-map image; the image sparsity is imposed using a convex total-variation (TV) norm penalty term. This algorithm alternates between a Nesterov proximal-gradient (NPG) step for estimating the density-map image and a limited-memory Broyden-Fletcher-Goldfarb-Shanno with box constraints (L-BFGS-B) step for estimating the incident-spectrum parameters. To accelerate convergence of the density-map NPG steps, we apply function restart and a step-size selection scheme that accounts for varying local Lipschitz constants of the Poisson NLL. Real X-ray CT reconstruction examples demonstrate the performance of the proposed scheme.
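A much-simplified analogue of the density-map step (our toy: linear forward model, plain projected gradient descent with a fixed step, no TV penalty, no Nesterov acceleration or restart) shows the core idea of minimizing a Poisson NLL under nonnegativity:

```python
# Projected gradient descent on a Poisson negative log-likelihood
# sum_i (lam_i - y_i * log lam_i), lam = A x, with x >= 0.
import numpy as np

rng = np.random.default_rng(2)
A = rng.uniform(0.5, 1.5, size=(40, 10))       # toy system matrix
x_true = rng.uniform(0.5, 2.0, size=10)
y = rng.poisson(A @ x_true).astype(float)      # Poisson measurements

def nll_grad(x):
    lam = A @ x
    return A.T @ (1.0 - y / lam)               # gradient of the Poisson NLL

x = np.ones(10)
step = 1e-3
for _ in range(2000):
    # gradient step followed by projection onto the nonnegative orthant
    x = np.maximum(x - step * nll_grad(x), 1e-8)
```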
NASA Technical Reports Server (NTRS)
Vidal, A.; Mueller, K.; Golombek, M. P.
2003-01-01
We undertook axial surface mapping of selected wrinkle ridges on Solis Planum, Mars in order to assess the subsurface geometry of blind thrusts proposed to exist beneath them. This work builds on previous work that defined structural families of wrinkle ridges based on their surface morphology in this region. Although a growing consensus exists for models of wrinkle ridge kinematics and mechanics, a number of current problems remain. These include the origin of topographic offset across the edges of wrinkle ridges, the relationship between broad arches and superposed ridges, the origin of smaller wrinkles, and perhaps most importantly, the trajectory of blind thrusts that underlie wrinkle ridges and accommodate shortening at deeper crustal levels. We are particularly interested in defining the depths at which blind thrusts flatten under wrinkle ridges in order to provide constraints on the brittle-ductile transition during Early Hesperian time. We also seek to test whether wrinkle ridges on Solis Planum develop above reactivated faults or newly formed ones.
Geological and technological assessment of artificial reef sites, Louisiana outer continental shelf
Pope, D.L.; Moslow, T.F.; Wagner, J.B.
1993-01-01
This paper describes the general procedures used to select sites for obsolete oil and gas platforms as artificial reefs on the Louisiana outer continental shelf (OCS). The methods employed incorporate six basic steps designed to resolve multiple-use conflicts that might otherwise arise with daily industry and commercial fishery operations, and to identify and assess both geological and technological constraints that could affect placement of the structures. These steps include: (1) exclusion mapping; (2) establishment of artificial reef planning areas; (3) database compilation; (4) assessment and interpretation of the database; (5) mapping of geological and man-made features within each proposed reef site; and (6) site selection. Nautical charts, bathymetric maps, and offshore oil and gas maps were used for exclusion mapping, and to select nine regional planning areas. Pipeline maps were acquired from federal agencies and private industry to determine their general locations within each planning area, and to establish exclusion fairways along each pipeline route. Approximately 1600 line kilometers of high-resolution geophysical data collected by federal agencies and private industry were acquired for the nine planning areas. These data were interpreted to determine the nature and extent of near-surface geologic features that could affect placement of the structures. Seismic reflection patterns were also characterized to evaluate near-bottom sedimentation processes in the vicinity of each reef site. Geotechnical borings were used to determine the lithological and physical properties of the sediment, and for correlation with the geophysical data. Since 1987, five sites containing 10 obsolete production platforms have been selected on the Louisiana OCS using these procedures. Industry participants have realized a total savings of approximately US $1,500,000 in salvaging costs by converting these structures into artificial reefs. © 1993.
Poster - 52: Smoothing constraints in Modulated Photon Radiotherapy (XMRT) fluence map optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGeachy, Philip; Villarreal-Barajas, Jose Eduardo
Purpose: Modulated Photon Radiotherapy (XMRT), which simultaneously optimizes photon beamlet energy (6 and 18 MV) and fluence, has recently shown dosimetric improvement in comparison to conventional IMRT. That said, the degree of smoothness of the resulting fluence maps (FMs) has yet to be investigated and could impact the deliverability of XMRT. This study investigates FM smoothness and imposes smoothing constraints in the fluence map optimization. Methods: Smoothing constraints were modeled in the XMRT algorithm with the sum of positive gradient (SPG) technique. XMRT solutions, with and without SPG constraints, were generated for a clinical prostate scan using standard dosimetric prescriptions, constraints, and a seven-beam coplanar arrangement. The smoothness, with and without SPG constraints, was assessed by examining the absolute and relative maximum SPG scores for each fluence map. Dose-volume histograms were used to evaluate the impact on the dose distribution. Results: Imposing SPG constraints reduced the absolute and relative maximum SPG values by factors of up to 5 and 2, respectively, compared with their non-SPG-constrained counterparts. This leads to a more seamless conversion of FMs to their respective MLC sequences. The improved smoothness resulted in an increase in organ-at-risk (OAR) dose, but the increase is not clinically significant. Conclusions: For a clinical prostate case, there was a noticeable improvement in the smoothness of the XMRT FMs when SPG constraints were applied, with a minor increase in dose to OARs. This increase in OAR dose is not clinically meaningful.
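The SPG score itself is easy to illustrate: along a fluence map row, only positive (rising) gradients accumulate, so a row that repeatedly re-climbs scores higher than a monotone ramp with the same endpoints. A one-dimensional sketch (our reading of the metric, with made-up numbers):

```python
# Sum-of-positive-gradients (SPG) score for a 1-D fluence map row.
import numpy as np

def spg(row):
    d = np.diff(row)
    return float(np.sum(d[d > 0]))   # only increases count toward the score

jagged = np.array([0.0, 3.0, 0.5, 4.0, 1.0, 5.0])
smooth = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
# both rows rise from 0 to 5, but the jagged row re-climbs repeatedly,
# so its SPG score is larger
print(spg(jagged), spg(smooth))
```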
Registration of heat capacity mapping mission day and night images
NASA Technical Reports Server (NTRS)
Watson, K.; Hummer-Miller, S.; Sawatzky, D. L.
1982-01-01
Registration of thermal images is complicated by distinctive differences in the appearance of day and night features needed as control in the registration process. These changes are unlike those that occur between Landsat scenes and pose unique constraints. Experimentation with several potentially promising techniques has led to selection of a fairly simple scheme for registration of data from the experimental thermal satellite HCMM using an affine transformation. Two registration examples are provided.
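The affine-transformation step reduces to a linear least-squares fit from day/night control-point pairs. A minimal sketch (synthetic control points, not HCMM data):

```python
# Least-squares fit of a 2-D affine transform from matched control points.
import numpy as np

# "true" affine transform: near-identity linear part plus a translation
M_true = np.array([[1.01, 0.02, 5.0],
                   [-0.03, 0.99, -2.0]])
src = np.array([[10.0, 20.0], [200.0, 40.0],
                [50.0, 180.0], [220.0, 210.0]])       # control points (day)
dst = src @ M_true[:, :2].T + M_true[:, 2]            # matches (night)

# solve dst = [src, 1] @ M.T in the least-squares sense
A = np.hstack([src, np.ones((len(src), 1))])
M_est, *_ = np.linalg.lstsq(A, dst, rcond=None)
M_est = M_est.T
```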
Systematicity As a Selection Constraint in Analogical Mapping
1989-09-15
Defining a genetic ideotype for crop improvement.
Trethowan, Richard M
2014-01-01
While plant breeders traditionally base selection on phenotype, the development of genetic ideotypes can help focus the selection process. This chapter provides a road map for the establishment of a refined genetic ideotype. The first step is an accurate definition of the target environment, including the underlying constraints, their probability of occurrence, and their impact on phenotype. Once the environmental constraints are established, the wealth of information on plant physiological responses to stresses, known gene information, and knowledge of genotype × environment and gene × environment interaction help refine the target ideotype and form a basis for cross prediction. Once a genetic ideotype is defined, the challenge remains to build the ideotype in a plant breeding program. A number of strategies, including marker-assisted recurrent selection and genomic selection, can be used that also provide valuable information for the optimization of the genetic ideotype. However, the informatics required to underpin the realization of the genetic ideotype then becomes crucial. The reduced cost of genotyping and the need to combine pedigree, phenotypic, and genetic data in a structured way for analysis and interpretation often become the rate-limiting steps, thus reducing genetic gain. Systems for managing these data and an example of ideotype construction for a defined environment type are discussed.
Dell'Acqua, Fabio; Iannelli, Gianni Cristian; Torres, Marco A.; Martina, Mario L. V.
2018-01-01
Cash crops are agricultural crops intended to be sold for profit as opposed to subsistence crops, meant to support the producer, or to support livestock. Since cash crops are intended for future sale, they translate into large financial value when considered on a wide geographical scale, so their production directly involves financial risk. At a national level, extreme weather events including destructive rain or hail, as well as drought, can have a significant impact on the overall economic balance. It is thus important to map such crops in order to set up insurance and mitigation strategies. Using locally generated data—such as municipality-level records of crop seeding—for mapping purposes implies facing a series of issues like data availability, quality, homogeneity, etc. We thus opted for a different approach relying on global datasets. Global datasets ensure homogeneity and availability of data, although sometimes at the expense of precision and accuracy. A typical global approach makes use of spaceborne remote sensing, for which different land cover classification strategies are available in literature at different levels of cost and accuracy. We selected the optimal strategy in the perspective of a global processing chain. Thanks to a specifically developed strategy for fusing unsupervised classification results with environmental constraints and other geospatial inputs including ground-based data, we managed to obtain good classification results despite the constraints placed. The overall production process was composed using "good-enough" algorithms at each step, ensuring that the precision, accuracy, and data-hunger of each algorithm was commensurate to the precision, accuracy, and amount of data available. This paper describes the tailored strategy developed on the occasion as a cooperation among different groups with diverse backgrounds, a strategy which is believed to be profitably reusable in other, similar contexts.
The paper presents the problem, the constraints and the adopted solutions; it then summarizes the main findings including that efforts and costs can be saved on the side of Earth Observation data processing when additional ground-based data are available to support the mapping task. PMID:29443919
An atlas of ShakeMaps for selected global earthquakes
Allen, Trevor I.; Wald, David J.; Hotovec, Alicia J.; Lin, Kuo-Wan; Earle, Paul S.; Marano, Kristin D.
2008-01-01
An atlas of maps of peak ground motions and intensity 'ShakeMaps' has been developed for almost 5,000 recent and historical global earthquakes. These maps are produced using established ShakeMap methodology (Wald and others, 1999c; Wald and others, 2005) and constraints from macroseismic intensity data, instrumental ground motions, regional topographically-based site amplifications, and published earthquake-rupture models. Applying the ShakeMap methodology allows a consistent approach to combine point observations with ground-motion predictions to produce descriptions of peak ground motions and intensity for each event. We also calculate an estimated ground-motion uncertainty grid for each earthquake. The Atlas of ShakeMaps provides a consistent and quantitative description of the distribution and intensity of shaking for recent global earthquakes (1973-2007) as well as selected historic events. As such, the Atlas was developed specifically for calibrating global earthquake loss estimation methodologies to be used in the U.S. Geological Survey Prompt Assessment of Global Earthquakes for Response (PAGER) Project. PAGER will employ these loss models to rapidly estimate the impact of global earthquakes as part of the USGS National Earthquake Information Center's earthquake-response protocol. The development of the Atlas of ShakeMaps has also led to several key improvements to the Global ShakeMap system. The key upgrades include: addition of uncertainties in the ground motion mapping, introduction of modern ground-motion prediction equations, improved estimates of global seismic-site conditions (VS30), and improved definition of stable continental region polygons. Finally, we have merged all of the ShakeMaps in the Atlas to provide a global perspective of earthquake ground shaking for the past 35 years, allowing comparison with probabilistic hazard maps. The online Atlas and supporting databases can be found at http://earthquake.usgs.gov/eqcenter/shakemap/atlas.php/.
NASA Astrophysics Data System (ADS)
Clarkin, T. J.; Kasprzyk, J. R.; Raseman, W. J.; Herman, J. D.
2015-12-01
This study contributes a diagnostic assessment of multiobjective evolutionary algorithm (MOEA) search on a set of water resources problem formulations with different configurations of constraints. Unlike constraints in classical optimization modeling, constraints within MOEA simulation-optimization represent limits on acceptable performance that delineate whether solutions within the search problem are feasible. Constraints are relevant because of the emergent pressures on water resources systems: increasing public awareness of their sustainability, coupled with regulatory pressures on water management agencies. In this study, we test several state-of-the-art MOEAs that utilize restricted tournament selection for constraint handling on varying configurations of water resources planning problems. For example, a problem that has no constraints on performance levels is compared with a problem with several severe constraints and with a problem whose constraint thresholds are less severe. One such problem, Lower Rio Grande Valley (LRGV) portfolio planning, has been solved with a suite of constraints that ensure high reliability, low cost variability, and acceptable performance in a single-year severe drought. To date, however, it is unclear whether these constraints negatively affect MOEAs' ability to solve the problem effectively. Two categories of results are explored. The first uses control maps of algorithm performance to determine whether the algorithm's performance is sensitive to user-defined parameters. The second uses run-time performance metrics to determine the time required for the algorithm to reach sufficient levels of convergence and diversity on the solution sets.
Our work exploring the effect of constraints will better enable practitioners to define MOEA problem formulations for real-world systems, especially when stakeholders are concerned with achieving fixed levels of performance according to one or more metrics.
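The feasibility-first comparison at the heart of constraint-handling tournament selection can be sketched minimally as follows; real MOEAs layer dominance checks and niching (as in restricted tournament selection) on top of this rule.

```python
import random

def constrained_tournament(pop, k=2, rng=random):
    """Pick one solution by a feasibility-first binary tournament.

    Each solution is a (objective, violation) pair, where violation == 0
    means all performance constraints are met. Any feasible solution
    beats any infeasible one; among feasible solutions the (minimised)
    objective decides, among infeasible ones the total violation does.
    Illustrative sketch only, not a specific MOEA's implementation.
    """
    contenders = rng.sample(pop, k)
    def key(s):
        obj, viol = s
        return (viol > 0, viol if viol > 0 else obj)
    return min(contenders, key=key)

# a feasible but mediocre solution beats an infeasible but 'better' one
pop = [(1.0, 0.0), (0.1, 5.0)]
assert constrained_tournament(pop) == (1.0, 0.0)
```

Tightening or loosening constraint thresholds changes which solutions carry violation > 0, which is exactly the knob the study varies.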
MapMaker and PathTracer for tracking carbon in genome-scale metabolic models
Tervo, Christopher J.; Reed, Jennifer L.
2016-01-01
Constraint-based reconstruction and analysis (COBRA) modeling results can be difficult to interpret given the large numbers of reactions in genome-scale models. While paths in metabolic networks can be found, existing methods are not easily combined with constraint-based approaches. To address this limitation, two tools (MapMaker and PathTracer) were developed to find paths (including cycles) between metabolites, where each step transfers carbon from reactant to product. MapMaker predicts carbon transfer maps (CTMs) between metabolites using only information on molecular formulae and reaction stoichiometry, effectively determining which reactants and products share carbon atoms. MapMaker correctly assigned CTMs for over 97% of the 2,251 reactions in an Escherichia coli metabolic model (iJO1366). Using CTMs as inputs, PathTracer finds paths between two metabolites. PathTracer was applied to iJO1366 to investigate the importance of using CTMs and COBRA constraints when enumerating paths, to find active and high flux paths in flux balance analysis (FBA) solutions, to identify paths for putrescine utilization, and to elucidate a potential CO2 fixation pathway in E. coli. These results illustrate how MapMaker and PathTracer can be used in combination with constraint-based models to identify feasible, active, and high flux paths between metabolites. PMID:26771089
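The path search over carbon-transfer edges can be illustrated with a breadth-first search on toy reactant-to-product pairs; the metabolite abbreviations below are invented for illustration, not iJO1366 identifiers, and this is a stand-in for PathTracer's search, not its code.

```python
from collections import deque

def shortest_carbon_path(ctm_edges, start, goal):
    """Shortest path between two metabolites over directed
    carbon-transfer edges (reactant -> product pairs that share
    carbon atoms), found by breadth-first search."""
    adj = {}
    for u, v in ctm_edges:
        adj.setdefault(u, []).append(v)
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in adj.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # goal unreachable via carbon-transfer edges

edges = [("glc", "g6p"), ("g6p", "f6p"), ("f6p", "pyr"), ("g6p", "6pg")]
assert shortest_carbon_path(edges, "glc", "pyr") == ["glc", "g6p", "f6p", "pyr"]
```

Restricting edges to those that actually transfer carbon (the CTM step) is what prevents spurious paths through shared cofactors such as ATP or NADH.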
Craton Heterogeneity in the South American Lithosphere
NASA Astrophysics Data System (ADS)
Lloyd, S.; Van der Lee, S.; Assumpcao, M.; Feng, M.; Franca, G. S.
2012-04-01
We investigate the structure of the lithosphere beneath South America using receiver functions, surface wave dispersion analysis, and seismic tomography. The data used include recordings from 20 temporary broadband seismic stations deployed across eastern Brazil (BLSP02) and from the Chile Ridge Subduction Project seismic array in southern Chile (CRSP). By jointly inverting Moho point constraints, Rayleigh wave group velocities, and regional S and Rayleigh wave forms, we obtain a continuous map of Moho depth. The new tomographic Moho map suggests that Moho depth and Moho relief vary slightly with age within the Precambrian crust. Whether a correlation between crustal thickness and geologic age can be derived from the pre-interpolation point constraints depends strongly on the selected subset of receiver functions, implying that the point constraints (receiver functions) alone inadequately sample the spatial variation in geologic age. We also invert for S velocity structure and estimate the depth of the lithosphere-asthenosphere boundary (LAB) in Precambrian South America. The new model reveals a relatively thin lithosphere throughout most of Precambrian South America (< 140 km). Comparing LAB depth with lithospheric age shows that they are overall positively correlated, whereby the thickest lithosphere occurs in the relatively small São Francisco craton (200 km). However, within the larger Amazonian craton the younger lithosphere is thicker, indicating that locally even large cratons are not protected from erosion or reworking of the lithosphere.
STK Integrated Message Production List Editor (SIMPLE) for CEO Operations
NASA Technical Reports Server (NTRS)
Trenchard, Mike; Heydorn, James
2014-01-01
Late in fiscal year 2011, the Crew Earth Observations (CEO) team was tasked to upgrade and replace its mission planning and mission operations software systems, which were developed in the Space Shuttle era of the 1980s and 1990s. The impetuses for this change were the planned transition of all workstations to the Windows 7 64-bit operating system and the desire for more efficient and effective use of Satellite Tool Kit (STK) software required for reliable International Space Station (ISS) Earth location tracking. An additional requirement of this new system was the use of the same SQL database of CEO science sites from the SMMS, which was also being developed. STK Integrated Message Production List Editor (SIMPLE) is the essential, all-in-one tool now used by CEO staff to perform daily ISS mission planning to meet its requirement to acquire astronaut photography of specific sites on Earth. The sites are part of a managed, long-term database that has been defined and developed for scientific, educational, and public interest. SIMPLE's end product is a set of basic time and location data computed for an operator-selected set of targets that the ISS crew will be asked to photograph (photography is typically planned 12 to 36 hours out). 
The CEO operator uses SIMPLE to (a) specify a payload operations planning period; (b) acquire and validate the best available ephemeris data (vectors) for the ISS during the planning period; (c) ingest and display mission-specific site information from the CEO database; (d) identify and display potential current dynamic event targets as map features; (e) compute and display time and location information for each target; (f) screen and select targets based on known crew availability constraints, obliquity constraints, and real-time evaluated constraints to target visibility due to illumination (sun elevation) and atmospheric conditions (weather); and finally (g) incorporate basic, computed time and location information for each selected target into the daily CEO Target List product (message) for submission to ISS payload planning and integration teams for their review and approval prior to uplink. SIMPLE requires and uses the following resources: an ISS mission planning period (Greenwich Mean Time start date/time and end date/time), the best available ISS mission ephemeris data (vectors) for that planning period, the STK software package configured for the ISS, and an ISS mission-specific subset of the CEO sites database.
The primary advantages realized by the development and implementation of SIMPLE into the CEO payload operations support activity are a smooth transition to the Windows 7 operating system upon scheduled workstation refresh; streamlining of the input and verification of the current ISS ephemeris (vector data); seamless incorporation of selected contents of the SQL database of science sites; the ability to tag and display potential dynamic event opportunities on orbit track maps; simplification of the display and selection of encountered sites based on crew availability, illumination, obliquity, and weather constraints; the incorporation of high-quality mapping of the Earth with various satellite-based datasets for use in describing targets; and the ability to encapsulate and export the essential selected target elements in XML format for use by onboard Earth-location systems, such as Worldmap. SIMPLE is a carefully designed and crafted in-house software package that includes detailed help files for the user and meticulous internal documentation for future modifications. It was delivered in February 2012 for test and evaluation. Following acceptance, it was implemented for CEO mission operations support in May 2012.
Landfill site selection by using geographic information systems
NASA Astrophysics Data System (ADS)
Şener, Başak; Süzen, M. Lütfi; Doyuran, Vedat
2006-01-01
One of the serious and growing potential problems in most large urban areas is the shortage of land for waste disposal. Although there are some efforts to reduce and recover waste, disposal in landfills is still the most common waste management method. An inappropriate landfill site may have negative environmental, economic and ecological impacts. Therefore, it should be selected carefully by considering both regulations and constraints on other sources. In this study, candidate sites for an appropriate landfill area in the vicinity of Ankara are determined using the integration of geographic information systems and multicriteria decision analysis (MCDA). For this purpose, 16 input map layers, including topography, settlements (urban centers and villages), roads (Highway E90 and village roads), railways, airport, wetlands, infrastructure (pipelines and power lines), slope, geology, land use, floodplains, aquifers and surface water, are prepared, and two different MCDA methods (simple additive weighting and analytic hierarchy process) are implemented in a geographic information system. Comparison of the maps produced by the two methods shows that both yield conformable results. Field checks also confirm that the candidate sites agree well with the selected criteria.
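The simple additive weighting (SAW) step can be sketched as a weighted sum over criterion layers that have already been normalised to a common 0-1 scale (larger is better). The site names, criteria, and weights below are invented for illustration; the real workflow applies this per raster cell across the 16 map layers.

```python
def saw_rank(scores, weights):
    """Rank alternatives by simple additive weighting.

    scores: {site: {criterion: value in [0, 1], larger is better}}
    weights: {criterion: relative importance}
    Returns site names sorted best-first by the normalised weighted sum.
    """
    total = sum(weights.values())
    def score(site):
        return sum(weights[c] * v for c, v in scores[site].items()) / total
    return sorted(scores, key=score, reverse=True)

sites = {
    "A": {"slope": 0.9, "distance": 0.2, "geology": 0.8},
    "B": {"slope": 0.5, "distance": 0.9, "geology": 0.6},
}
weights = {"slope": 0.5, "distance": 0.2, "geology": 0.3}
assert saw_rank(sites, weights) == ["A", "B"]
```

The analytic hierarchy process differs mainly in how the weights are derived (from pairwise comparison matrices) rather than in this aggregation step.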
Smith, Erik A.; Sanocki, Chris A.; Lorenz, David L.; Jacobsen, Katrin E.
2017-12-27
Streamflow distribution maps for the Cannon River and St. Louis River drainage basins were developed by the U.S. Geological Survey, in cooperation with the Legislative-Citizen Commission on Minnesota Resources, to illustrate relative and cumulative streamflow distributions. The Cannon River was selected to provide baseline data to assess the effects of potential surficial sand mining, and the St. Louis River was selected to determine the effects of ongoing Mesabi Iron Range mining. Each drainage basin (Cannon, St. Louis) was subdivided into nested drainage basins: the Cannon River was subdivided into 152 nested drainage basins, and the St. Louis River was subdivided into 353 nested drainage basins. For each smaller drainage basin, the estimated volumes of groundwater discharge (as base flow) and surface runoff flowing into all surface-water features were displayed under the following conditions: (1) extreme low-flow conditions, comparable to an exceedance-probability quantile of 0.95; (2) low-flow conditions, comparable to an exceedance-probability quantile of 0.90; (3) a median condition, comparable to an exceedance-probability quantile of 0.50; and (4) a high-flow condition, comparable to an exceedance-probability quantile of 0.02. Streamflow distribution maps were developed using flow-duration curve exceedance-probability quantiles in conjunction with Soil-Water-Balance model outputs; both the flow-duration curve and Soil-Water-Balance models were built upon previously published U.S. Geological Survey reports. The selected streamflow distribution maps provide a proactive water management tool for State cooperators by illustrating flow rates during a range of hydraulic conditions.
Furthermore, after the nested drainage basins are highlighted in terms of surface-water flows, the streamflows can be evaluated in the context of meeting specific ecological flows under different flow regimes and potentially assist with decisions regarding groundwater and surface-water appropriations. The presented streamflow distribution maps are foundational work intended to support the development of additional streamflow distribution maps that include statistical constraints on the selected flow conditions.
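Reading a flow condition off the flow-duration curve amounts to taking a quantile of the daily streamflow record: the flow exceeded with probability p is the (1 - p) quantile. A minimal sketch with a toy record:

```python
import numpy as np

def exceedance_flow(daily_flows, p):
    """Flow exceeded with probability p, read off the flow-duration
    curve as the (1 - p) quantile of the daily streamflow record.
    E.g. p = 0.90 gives a low-flow condition, p = 0.02 a high flow.
    """
    return float(np.quantile(np.asarray(daily_flows, float), 1.0 - p))

flows = np.arange(1, 101)              # toy record: 1..100 flow units
low = exceedance_flow(flows, 0.90)     # exceeded 90% of the time
high = exceedance_flow(flows, 0.02)    # exceeded only 2% of the time
assert low < float(np.median(flows)) < high
```

The 0.95, 0.90, 0.50, and 0.02 conditions in the report are exactly these quantiles evaluated on much longer observed and modeled records.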
Scheduling Results for the THEMIS Observation Scheduling Tool
NASA Technical Reports Server (NTRS)
Mclaren, David; Rabideau, Gregg; Chien, Steve; Knight, Russell; Anwar, Sadaat; Mehall, Greg; Christensen, Philip
2011-01-01
We describe a scheduling system intended to assist in the development of instrument data acquisitions for the THEMIS instrument, onboard the Mars Odyssey spacecraft, and compare results from multiple scheduling algorithms. This tool creates observations of both (a) targeted geographical regions of interest and (b) general mapping observations, while respecting spacecraft constraints such as data volume, observation timing, visibility, lighting, season, and science priorities. This tool therefore must address both geometric and state/timing/resource constraints. We describe a tool that maps geometric polygon overlap constraints to set covering constraints using a grid-based approach. These set covering constraints are then incorporated into a greedy optimization scheduling algorithm incorporating operations constraints to generate feasible schedules. The resultant tool generates schedules of hundreds of observations per week out of potential thousands of observations. This tool is currently under evaluation by the THEMIS observation planning team at Arizona State University.
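The mapping of polygon-coverage goals onto set covering, followed by greedy selection, can be sketched as follows. Observation names and grid cells are invented; the real scheduler also checks timing, data volume, and other operations constraints before accepting each pick.

```python
def greedy_cover(grid_cells, observations):
    """Greedy set cover: repeatedly pick the observation covering the
    most still-uncovered grid cells.

    grid_cells: iterable of cells discretising the regions of interest.
    observations: {name: set of cells that observation would cover}.
    Returns (chosen observation names, cells left uncovered).
    """
    uncovered = set(grid_cells)
    chosen = []
    while uncovered:
        best = max(observations, key=lambda o: len(observations[o] & uncovered))
        gain = observations[best] & uncovered
        if not gain:
            break  # remaining cells are unreachable by any observation
        chosen.append(best)
        uncovered -= gain
    return chosen, uncovered

obs = {"o1": {1, 2, 3}, "o2": {3, 4}, "o3": {4, 5, 6}}
chosen, left = greedy_cover({1, 2, 3, 4, 5, 6}, obs)
assert left == set() and chosen == ["o1", "o3"]
```

Greedy set cover gives a logarithmic approximation guarantee, which is typically good enough when selecting hundreds of observations from thousands of candidates each week.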
Cai, Wenyu; Zhang, Meiyan; Zheng, Yahong Rosa
2017-01-01
This paper investigates the task assignment and path planning problem for multiple AUVs in three dimensional (3D) underwater wireless sensor networks where nonholonomic motion constraints of underwater AUVs in 3D space are considered. The multi-target task assignment and path planning problem is modeled by the Multiple Traveling Sales Person (MTSP) problem and the Genetic Algorithm (GA) is used to solve the MTSP problem with Euclidean distance as the cost function and the Tour Hop Balance (THB) or Tour Length Balance (TLB) constraints as the stop criterion. The resulting tour sequences are mapped to 2D Dubins curves in the X−Y plane, and then interpolated linearly to obtain the Z coordinates. We demonstrate that the linear interpolation fails to achieve G1 continuity in the 3D Dubins path for multiple targets. Therefore, the interpolated 3D Dubins curves are checked against the AUV dynamics constraint and the ones satisfying the constraint are accepted to finalize the 3D Dubins curve selection. Simulation results demonstrate that the integration of the 3D Dubins curve with the MTSP model is successful and effective for solving the 3D target assignment and path planning problem. PMID:28696377
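One way to impose the Tour Hop Balance criterion when decoding an MTSP solution is to split a single ordered tour of targets into sub-tours (one per AUV) whose hop counts differ by at most one. This is a toy decoding sketch, not the paper's GA operator.

```python
def split_tour(tour, k):
    """Split one ordered tour of targets into k consecutive sub-tours
    whose lengths (hop counts) differ by at most one, in the spirit of
    the Tour Hop Balance stop criterion."""
    n, rem = divmod(len(tour), k)
    subs, i = [], 0
    for j in range(k):
        size = n + (1 if j < rem else 0)  # spread the remainder evenly
        subs.append(tour[i:i + size])
        i += size
    return subs

subs = split_tour(["t1", "t2", "t3", "t4", "t5"], 2)
assert [len(s) for s in subs] == [3, 2]
```

Tour Length Balance replaces the hop count with each sub-tour's Euclidean length, which requires target coordinates and an optimisation over split points rather than this fixed partition.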
Lankford, Christopher L; Does, Mark D
2018-02-01
Quantitative MRI may require correcting for nuisance parameters, which can or must be constrained to independently measured or assumed values. The noise and/or bias in these constraints propagate to fitted parameters. As an example, the case of refocusing pulse flip angle constraint in multiple spin echo T2 mapping is explored. An analytical expression for the mean-squared error of a parameter of interest was derived as a function of the accuracy and precision of an independent estimate of a nuisance parameter. The expression was validated by simulations and then used to evaluate the effects of flip angle (θ) constraint on the accuracy and precision of T̂2 for a variety of multi-echo T2 mapping protocols. Constraining θ improved T̂2 precision when the θ-map signal-to-noise ratio was greater than approximately one-half that of the first spin echo image. For many practical scenarios, constrained fitting was calculated to reduce not just the variance but the full mean-squared error of T̂2, for bias in θ̂ ≲ 6%. The analytical expression derived in this work can be applied to inform experimental design in quantitative MRI. The example application to T2 mapping provided specific cases, depending on θ̂ accuracy and precision, in which θ̂ measurement and constraint would be beneficial to T̂2 variance or mean-squared error. Magn Reson Med 79:673-682, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
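The propagation idea (the mean-squared error of the parameter of interest picks up the variance and squared bias of the constrained nuisance estimate) can be checked with a toy Monte Carlo on a linear model, deliberately simpler than the T2 signal model; all values below are invented for illustration.

```python
import random

def mse_of_constrained_fit(bias, sigma_nuis, sigma_y=0.1, n=200_000, seed=1):
    """Monte Carlo check of MSE propagation for a constrained nuisance.

    Toy model: y = a + b with true a = 1, b = 2. The nuisance b is
    constrained to an external estimate b_hat = b + bias + noise, and
    the parameter of interest is recovered as a_hat = y - b_hat, so
    MSE(a_hat) should equal var(y) + var(b_hat) + bias^2.
    """
    rng = random.Random(seed)
    a, b = 1.0, 2.0
    se = 0.0
    for _ in range(n):
        y = a + b + rng.gauss(0, sigma_y)
        b_hat = b + bias + rng.gauss(0, sigma_nuis)
        se += (y - b_hat - a) ** 2
    return se / n

mse = mse_of_constrained_fit(bias=0.05, sigma_nuis=0.1)
predicted = 0.1**2 + 0.1**2 + 0.05**2   # var(y) + var(b_hat) + bias^2
assert abs(mse - predicted) < 1e-3
```

The same variance-plus-squared-bias bookkeeping is what determines whether constraining θ helps or hurts T̂2 in the paper's analysis.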
Improving ontology matching with propagation strategy and user feedback
NASA Astrophysics Data System (ADS)
Li, Chunhua; Cui, Zhiming; Zhao, Pengpeng; Wu, Jian; Xin, Jie; He, Tianxu
2015-07-01
Markov logic networks, which unify probabilistic graphical models and first-order logic, provide an excellent framework for ontology matching. The existing approach requires a threshold to produce matching candidates and uses a small set of constraints acting as a filter to select the final alignments. We introduce a novel match propagation strategy to model the influences between potential entity mappings across ontologies, which can help to identify correct correspondences and recover missed ones. Because estimating an appropriate threshold is difficult, we propose an interactive method for threshold selection through which we obtain an additional measurable improvement. Experiments on a public dataset demonstrate the effectiveness of the proposed approach in terms of the quality of the resulting alignment.
LPmerge: an R package for merging genetic maps by linear programming.
Endelman, Jeffrey B; Plomion, Christophe
2014-06-01
Consensus genetic maps constructed from multiple populations are an important resource for both basic and applied research, including genome-wide association analysis, genome sequence assembly and studies of evolution. The LPmerge software uses linear programming to efficiently minimize the mean absolute error between the consensus map and the linkage maps from each population. This minimization is performed subject to linear inequality constraints that ensure the ordering of the markers in the linkage maps is preserved. When marker order is inconsistent between linkage maps, a minimum set of ordinal constraints is deleted to resolve the conflicts. LPmerge is on CRAN at http://cran.r-project.org/web/packages/LPmerge.
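The core idea of fitting consensus marker positions subject to ordering constraints can be sketched with a pool-adjacent-violators pass over averaged positions. Note this minimises squared rather than absolute error, so it is a least-squares stand-in for LPmerge's linear-programming step, not a reimplementation; the marker positions below are invented.

```python
def monotone_consensus(positions):
    """Pool-adjacent-violators: nondecreasing consensus positions
    minimising squared error to the input per-marker positions
    (e.g. cM positions averaged across linkage maps)."""
    blocks = [[p, 1] for p in positions]  # each block holds [sum, count]
    i = 0
    while i < len(blocks) - 1:
        # merge adjacent blocks while their means violate the ordering
        if blocks[i][0] / blocks[i][1] > blocks[i + 1][0] / blocks[i + 1][1]:
            s, c = blocks.pop(i + 1)
            blocks[i][0] += s
            blocks[i][1] += c
            if i > 0:
                i -= 1  # the merged block may now violate its left neighbour
        else:
            i += 1
    out = []
    for s, c in blocks:
        out.extend([s / c] * c)
    return out

# markers whose averaged positions disagree on order get pooled
assert monotone_consensus([0.0, 10.0, 8.0, 20.0]) == [0.0, 9.0, 9.0, 20.0]
```

LPmerge instead solves the L1 version with an LP solver and, when maps disagree on marker order, deletes a minimum set of ordinal constraints rather than pooling them.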
NASA Astrophysics Data System (ADS)
Rao, Xiong; Tang, Yunwei
2014-11-01
Land surface deformation is evident along a newly built high-speed railway in southeast China. In this study, we utilize the Small BAseline Subsets (SBAS) Differential Synthetic Aperture Radar Interferometry (DInSAR) technique to detect land surface deformation along the railway. Forty Cosmo-SkyMed satellite images were selected to analyze the spatial distribution and velocity of the deformation in the study area. First, 88 image pairs with high coherence were chosen with an appropriate threshold and used to derive a deformation velocity map and its variation in time series; this result provides information for orbit correction and ground control point (GCP) selection in the following steps. Then, more image pairs were selected to tighten the constraint in the time dimension and to improve the final result by reducing phase unwrapping error; 171 SAR pair combinations were ultimately selected. Reliable GCPs were re-selected according to the previously derived deformation velocity map, orbital residual errors were corrected using these GCPs, and nonlinear deformation components were estimated, yielding a more accurate surface deformation velocity map. Precise geodetic leveling was carried out in the meantime. We compared the leveling result with the geocoded SBAS product using the nearest neighbour method: the mean error and standard deviation of the error were 0.82 mm and 4.17 mm, respectively. This result demonstrates the effectiveness of the DInSAR technique for monitoring land surface deformation, which can provide reliable decision support for high-speed railway design, construction, operation and maintenance.
Constraining the interaction between dark sectors with future HI intensity mapping observations
NASA Astrophysics Data System (ADS)
Xu, Xiaodong; Ma, Yin-Zhe; Weltman, Amanda
2018-04-01
We study a model of interacting dark matter and dark energy, in which the two components are coupled. We calculate the predictions for the 21-cm intensity mapping power spectra, and forecast the detectability with future single-dish intensity mapping surveys (BINGO, FAST and SKA-I). Since dark energy is turned on at z ∼ 1, which falls into the sensitivity range of these radio surveys, the HI intensity mapping technique is an efficient tool to constrain the interaction. By comparing with current constraints on dark sector interactions, we find that future radio surveys will produce tight and reliable constraints on the coupling parameters.
An RBF-based reparameterization method for constrained texture mapping.
Yu, Hongchuan; Lee, Tong-Yee; Yeh, I-Cheng; Yang, Xiaosong; Li, Wenxi; Zhang, Jian J
2012-07-01
Texture mapping has long been used in computer graphics to enhance the realism of virtual scenes. To match the 3D model feature points with the corresponding pixels in a texture image, however, surface parameterization must satisfy specific positional constraints. Despite numerous research efforts, the construction of a mathematically robust, foldover-free parameterization subject to positional constraints continues to be a challenge. In the present paper, this foldover problem is addressed by developing a radial basis function (RBF)-based reparameterization. Given an initial 2D embedding of a 3D surface, the proposed method can reparameterize it into a foldover-free 2D mesh satisfying a set of user-specified constraint points. In addition, the approach is mesh-free, so smooth texture mapping results can be generated without extra smoothing optimization.
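A minimal sketch of an RBF displacement field that interpolates user-specified constraint points exactly: solve a small linear system for kernel weights so each source constraint lands on its target. A Gaussian kernel is assumed for simplicity; the paper's foldover-free guarantees require additional machinery beyond this interpolation step.

```python
import numpy as np

def rbf_warp(src, dst, eps=1.0):
    """Gaussian-RBF displacement field mapping each 2D constraint
    point src[i] exactly to dst[i]; returns a warp(points) function
    applicable to arbitrary 2D points of the embedding."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    def phi(a, b):
        # pairwise Gaussian kernel between point sets a and b
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    # weights chosen so the displacement interpolates dst - src at src
    W = np.linalg.solve(phi(src, src), dst - src)
    return lambda pts: np.asarray(pts, float) + phi(np.asarray(pts, float), src) @ W

src = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
dst = [[0.1, 0.0], [1.0, 0.2], [0.0, 1.0]]
warp = rbf_warp(src, dst)
assert np.allclose(warp(src), dst)
```

Because the field is defined everywhere in the plane rather than on mesh elements, the approach is mesh-free, mirroring the property highlighted in the abstract.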
Caseys, Celine; Stritt, Christoph; Glauser, Gaetan; Blanchard, Thierry; Lexer, Christian
2015-01-01
The mechanisms responsible for the origin, maintenance and evolution of plant secondary metabolite diversity remain largely unknown. Decades of phenotypic studies suggest hybridization as a key player in generating chemical diversity in plants. Knowledge of the genetic architecture and selective constraints of phytochemical traits is key to understanding the effects of hybridization on plant chemical diversity and ecological interactions. Using the European Populus species P. alba (White poplar) and P. tremula (European aspen) and their hybrids as a model, we examined levels of inter- and intraspecific variation, heritabilities, phenotypic correlations, and the genetic architecture of 38 compounds of the phenylpropanoid pathway measured by liquid chromatography and mass spectrometry (UHPLC-MS). We detected 41 quantitative trait loci (QTL) for chlorogenic acids, salicinoids and flavonoids by genetic mapping in natural hybrid crosses. We show that these three branches of the phenylpropanoid pathway exhibit different geographic patterns of variation, heritabilities, and genetic architectures, and that they are affected differently by hybridization and evolutionary constraints. Flavonoid abundances present high species specificity, clear geographic structure, and strong genetic determination, contrary to salicinoids and chlorogenic acids. Salicinoids, which represent important defence compounds in Salicaceae, exhibited pronounced genetic correlations on the QTL map. Our results suggest that interspecific phytochemical differentiation is concentrated in downstream sections of the phenylpropanoid pathway. In particular, our data point to glycosyltransferase enzymes as likely targets of rapid evolution and interspecific differentiation in the ‘model forest tree’ Populus. PMID:26010156
2011-01-01
Background A gene's position in regulatory, protein interaction or metabolic networks can be predictive of the strength of purifying selection acting on it, but these relationships are neither universal nor invariably strong. Following work in bacteria, fungi and invertebrate animals, we explore the relationship between selective constraint and metabolic function in mammals. Results We measure the association between selective constraint, estimated by the ratio of nonsynonymous (Ka) to synonymous (Ks) substitutions, and several, primarily metabolic, measures of gene function. We find significant differences between the selective constraints acting on enzyme-coding genes from different cellular compartments, with the nucleus showing higher constraint than genes from either the cytoplasm or the mitochondria. Among metabolic genes, the centrality of an enzyme in the metabolic network is significantly correlated with Ka/Ks. In contrast to yeasts, gene expression magnitude does not appear to be the primary predictor of selective constraint in these organisms. Conclusions Our results imply that the relationship between selective constraint and enzyme centrality is complex: the strength of selective constraint acting on mammalian genes is quite variable and does not appear to exclusively follow patterns seen in other organisms. PMID:21470417
Liu, Yuangang; Guo, Qingsheng; Sun, Yageng; Ma, Xiaoya
2014-01-01
Scale reduction from source to target maps inevitably leads to conflicts of map symbols in cartography and geographic information systems (GIS). Displacement is one of the most important map generalization operators and it can be used to resolve the problems that arise from conflict among two or more map objects. In this paper, we propose a combined approach based on a constrained Delaunay triangulation (CDT) skeleton and an improved elastic beam algorithm for automated building displacement. In this approach, map data sets are first partitioned. The displacement operation is then conducted in each partition as a cyclic and iterative process of conflict detection and resolution. In each iteration, the skeleton of the gap spaces is extracted using the CDT; it serves as an enhanced data model to detect conflicts and construct the proximity graph. The proximity graph is then adjusted using local grouping information. Under the action of forces derived from the detected conflicts, the proximity graph is deformed using the improved elastic beam algorithm. In this way, buildings are displaced to find an optimal compromise between related cartographic constraints. To validate this approach, two topographic map data sets (i.e., urban and suburban areas) were tested. The results were reasonable with respect to each constraint when the density of the map was not extremely high. In summary, the improvements include (1) an automated parameter-setting method for elastic beams, (2) explicit enforcement of the positional accuracy constraint, added by introducing drag forces, (3) preservation of local building groups through displacement over an adjusted proximity graph, and (4) an iterative strategy that is more likely to resolve the proximity conflicts than the one used in the existing elastic beam algorithm. PMID:25470727
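The cyclic detect-and-resolve structure of such a displacement operator can be illustrated with a deliberately simplified toy: conflicts are found by plain pairwise distance (standing in for the CDT-skeleton conflict detection) and resolved by symmetric repulsion (standing in for the elastic beam forces). None of the paper's actual machinery is reproduced.

```python
import numpy as np

def displace(points, min_sep, step=0.5, max_iter=200):
    """Toy iterative displacement: push building centroids apart until
    every pair is at least `min_sep` apart. Distance-based conflict
    detection and symmetric repulsion only mirror the iterative
    detect-and-resolve structure of the approach."""
    pts = np.asarray(points, dtype=float).copy()
    for _ in range(max_iter):
        moved = False
        for i in range(len(pts)):
            for j in range(i + 1, len(pts)):
                d = pts[j] - pts[i]
                dist = np.linalg.norm(d)
                if dist < min_sep - 1e-9:          # conflict detected
                    unit = d / max(dist, 1e-9)
                    push = step * (min_sep - dist) / 2 * unit
                    pts[i] -= push                 # resolve symmetrically
                    pts[j] += push
                    moved = True
        if not moved:                              # all conflicts resolved
            break
    return pts

resolved = displace([[0.0, 0.0], [0.4, 0.0], [3.0, 3.0]], min_sep=1.0)
```

The partial step (`step=0.5`) shrinks each conflict geometrically over the iterations, which is what makes the cyclic formulation converge rather than oscillate.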
Supporting Multiple Cognitive Processing Styles Using Tailored Support Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuan Q. Tran; Karen M. Feigh; Amy R. Pritchett
According to theories of cognitive processing style or cognitive control mode, human performance is more effective when an individual's cognitive state (e.g., intuitive/scrambled vs. deliberate/strategic) matches his or her ecological constraints or context (e.g., relying on intuition to reach a "good-enough" response, rather than deliberating for the "best" response, under high time pressure). A mismatch between cognitive state and ecological constraints is believed to degrade task performance. Consequently, incorporating support systems that are designed to address multiple cognitive and functional states (e.g., high workload, stress, boredom) and to initiate appropriate mitigation strategies (e.g., reduce information load) is essential to reducing plant risk. Utilizing the concept of Cognitive Control Models, this paper discusses the importance of tailoring support systems to match an operator's cognitive state, and the importance of these ecological constraints in selecting and implementing mitigation strategies for safe and effective system performance. An example from the nuclear power plant industry illustrating how a support system might be tailored to support different cognitive states is included.
NuSTAR constraints on coronal cutoffs in Swift-BAT selected Seyfert 1 AGN
NASA Astrophysics Data System (ADS)
Kamraj, Nikita; Harrison, Fiona; Balokovic, Mislav; Brightman, Murray; Stern, Daniel
2017-08-01
The continuum X-ray emission from Active Galactic Nuclei (AGN) is believed to originate in a hot, compact corona above the accretion disk. Compton upscattering of UV photons from the inner accretion disk by coronal electrons produces a power law X-ray continuum with a cutoff at energies determined by the electron temperature. The NuSTAR observatory, with its high sensitivity in hard X-rays, has enabled detailed broadband modeling of the X-ray spectra of AGN, thereby allowing tight constraints to be placed on the high-energy cutoff of the X-ray continuum. Recent detections of low cutoff energies in Seyfert 1 AGN in the NuSTAR band have motivated us to pursue a systematic search for low cutoff candidates in Swift-BAT detected Seyfert 1 AGN that have been observed with NuSTAR. We use our constraints on the cutoff energy to map out the location of these sources on the compactness - temperature diagram for AGN coronae, and discuss the implications of low cutoff energies for the cooling and thermalization mechanisms in the corona.
Reformulating Constraints for Compilability and Efficiency
NASA Technical Reports Server (NTRS)
Tong, Chris; Braudaway, Wesley; Mohan, Sunil; Voigt, Kerstin
1992-01-01
KBSDE is a knowledge compiler that uses a classification-based approach to map solution constraints in a task specification onto particular search algorithm components that will be responsible for satisfying those constraints (e.g., local constraints are incorporated in generators; global constraints are incorporated in either testers or hill-climbing patchers). Associated with each type of search algorithm component is a subcompiler that specializes in mapping constraints into components of that type. Each of these subcompilers in turn uses a classification-based approach, matching a constraint passed to it against one of several schemas, and applying a compilation technique associated with that schema. While much progress has occurred in our research since we first laid out our classification-based approach [Ton91], we focus in this paper on our reformulation research. Two important reformulation issues that arise out of the choice of a schema-based approach are: (1) compilability: can a constraint that does not directly match any of a particular subcompiler's schemas be reformulated into one that does? and (2) efficiency: if the efficiency of the compiled search algorithm depends on the compiler's performance, and the compiler's performance depends on the form in which the constraint was expressed, can we find forms for constraints which compile better, or reformulate constraints whose forms can be recognized as ones that compile poorly? In this paper, we describe a set of techniques we are developing for partially addressing these issues.
Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm
NASA Astrophysics Data System (ADS)
Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.
2017-03-01
Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data are excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.
Using Data-Driven Model-Brain Mappings to Constrain Formal Models of Cognition
Borst, Jelmer P.; Nijboer, Menno; Taatgen, Niels A.; van Rijn, Hedderik; Anderson, John R.
2015-01-01
In this paper we propose a method to create data-driven mappings from components of cognitive models to brain regions. Cognitive models are notoriously hard to evaluate, especially based on behavioral measures alone. Neuroimaging data can provide additional constraints, but this requires a mapping from model components to brain regions. Although such mappings can be based on the experience of the modeler or on a reading of the literature, a formal method is preferred to prevent researcher-based biases. In this paper we used model-based fMRI analysis to create a data-driven model-brain mapping for five modules of the ACT-R cognitive architecture. We then validated this mapping by applying it to two new datasets with associated models. The new mapping was at least as powerful as an existing mapping that was based on the literature, and indicated where the models were supported by the data and where they have to be improved. We conclude that data-driven model-brain mappings can provide strong constraints on cognitive models, and that model-based fMRI is a suitable way to create such mappings. PMID:25747601
Simard, G.; et al.
2018-06-20
We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg$^2$ of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant ($$\\Lambda$$CDM), and to models with single-parameter extensions to $$\\Lambda$$CDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find $$\\sigma_8 \\Omega_{\\rm m}^{0.25}=0.598 \\pm 0.024$$ from the lensing data alone with relatively weak priors placed on the other $$\\Lambda$$CDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the $$\\Lambda$$CDM model. We find $$\\Omega_k = -0.012^{+0.021}_{-0.023}$$ or $$M_{\
Cai, Congbo; Chen, Zhong; van Zijl, Peter C.M.
2017-01-01
The reconstruction of MR quantitative susceptibility mapping (QSM) from local phase measurements is an ill-posed inverse problem, and different regularization strategies incorporating a priori information extracted from magnitude and phase images have been proposed. However, the anatomy observed in magnitude and phase images does not always coincide spatially with that in susceptibility maps, which can lead to erroneous estimates in the reconstructed susceptibility map. In this paper, we develop a structural feature based collaborative reconstruction (SFCR) method for QSM that includes both magnitude- and susceptibility-based information. The SFCR algorithm is composed of two consecutive steps corresponding to complementary reconstruction models, each with a structural feature based l1-norm constraint and a voxel fidelity based l2-norm constraint, which allows both structure edges and tiny features to be recovered while noise and artifacts are reduced. In the M-step, the initial susceptibility map is reconstructed by employing a k-space based compressed sensing model incorporating the magnitude prior. In the S-step, the susceptibility map is fitted in the spatial domain using weighted constraints derived from the initial susceptibility map of the M-step. Simulations and in vivo human experiments at 7T MRI show that the SFCR method provides high-quality susceptibility maps with improved RMSE and MSSIM. Finally, the susceptibility values of deep gray matter are analyzed in multiple head positions, with the supine position closest to the gold-standard COSMOS result. PMID:27019480
Lunar prospector mission design and trajectory support
NASA Technical Reports Server (NTRS)
Lozier, David; Galal, Ken; Folta, David; Beckman, Mark
1998-01-01
The Lunar Prospector mission is the first dedicated NASA lunar mapping mission since the Apollo Orbiter program which was flown over 25 years ago. Competitively selected under the NASA Discovery Program, Lunar Prospector was launched on January 7, 1998 on the new Lockheed Martin Athena 2 launch vehicle. The mission design of Lunar Prospector is characterized by a direct minimum energy transfer trajectory to the moon with three scheduled orbit correction maneuvers to remove launch and cislunar injection errors prior to lunar insertion. At lunar encounter, a series of three lunar orbit insertion maneuvers and a small circularization burn were executed to achieve a 100 km altitude polar mapping orbit. This paper will present the design of the Lunar Prospector transfer, lunar insertion and mapping orbits, including maneuver and orbit determination strategies in the context of mission goals and constraints. Contingency plans for handling transfer orbit injection and lunar orbit insertion anomalies are also summarized. Actual flight operations results are discussed and compared to pre-launch support analysis.
A dual-route approach to orthographic processing.
Grainger, Jonathan; Ziegler, Johannes C
2011-01-01
In the present theoretical note we examine how different learning constraints, thought to be involved in optimizing the mapping of print to meaning during reading acquisition, might shape the nature of the orthographic code involved in skilled reading. On the one hand, optimization is hypothesized to involve selecting combinations of letters that are the most informative with respect to word identity (diagnosticity constraint), and on the other hand to involve the detection of letter combinations that correspond to pre-existing sublexical phonological and morphological representations (chunking constraint). These two constraints give rise to two different kinds of prelexical orthographic code, a coarse-grained and a fine-grained code, associated with the two routes of a dual-route architecture. Processing along the coarse-grained route optimizes fast access to semantics by using minimal subsets of letters that maximize information with respect to word identity, while coding for approximate within-word letter position independently of letter contiguity. Processing along the fine-grained route, on the other hand, is sensitive to the precise ordering of letters, as well as to position with respect to word beginnings and endings. This enables the chunking of frequently co-occurring contiguous letter combinations that form relevant units for morpho-orthographic processing (prefixes and suffixes) and for the sublexical translation of print to sound (multi-letter graphemes).
NASA Astrophysics Data System (ADS)
Head, James W.
1999-01-01
The Site Selection Process: Site selection as a process can be subdivided into several main elements and these can be represented as the corners of a tetrahedron. Successful site selection outcome requires the interactions between these elements or corners, and should also take into account several other external factors or considerations. In principle, elements should be defined in approximately the following order: (1) major scientific and programmatic goals and objectives: What are the major questions that are being asked, goals that should be achieved, and objectives that must be accomplished. Do programmatic goals (e.g., sample return) differ from mission goals (e.g., precursor to sample return)? It is most helpful if these questions can be placed in the context of site characterization and hypothesis testing (e.g., Was Mars warm and wet in the Noachian? Land at a Noachian-aged site that shows evidence of surface water and characterize it specifically to address this question). Goals and objectives, then, help define important engineering factors such as type of payload, landing regions of interest (highlands, lowlands, smooth, rough, etc.), mobility, mission duration, etc. Goals and objectives then lead to: (2) spacecraft design and engineering landing site constraints: the spacecraft is designed to optimize the areas that will meet the goals and objectives, but this in turn introduces constraints that must be met in the selection of a landing site. Scientific and programmatic goals and objectives also help to define (3) the specific lander scientific payload requirements and capabilities. For example, what observations and experiments are required to address the major questions? How do we characterize the site in reference to the specific questions? Is mobility required and if so, how much? Which experiments are on the spacecraft, which on the rover?
The results of these deliberations should lead to a surface exploration strategy, in which the goals and objectives can in principle be achieved through the exploration of a site meeting the basic engineering constraints. Armed with all of this important background information, one can then proceed to (4) the selection of optimum sites to address major scientific and programmatic objectives. Following the successful completion of this process and the selection of a site or region, there is a further step of mission optimization, in which a detailed mission profile and surface exploration plan is developed. In practice, the process never works in a linear fashion. Scientific goals are influenced by ongoing discoveries and developments and simple crystallization of thinking. Programmatic goals are influenced by evolving fiscal constraints, perspectives on program duration, and roles of specific missions in the context of the larger program. Engineering constraints are influenced by evolving fiscal constraints, decisions on hardware design that may have little to do with scientific goals (e.g., lander clearance; size of landing ellipse), and evolving understanding (e.g., assessment of engineering constraint space reveals further the degree to which mission duration is severely influenced by available solar energy and thus latitude). Lander scientific payload is influenced by fiscal constraints, total mass, evolving complexity, technological developments, and a payload selection process that may involve very long-term goals (e.g., human exploration) as well as shorter term scientific and programmatic goals. Site selection activities commonly involve scientists who are actively trying to decipher the complex geology of the crust of Mars and to unravel its geologic history through geological mapping. 
By the nature of the process, they are thinking in terms of broad morphostratigraphic units which may have multiple possible origins, defined using images with resolutions of many tens to hundreds of meters, and whose surfaces at the scale of the lander and rover are virtually unknown; this approach and effort is crucially important but does not necessarily readily lend itself to integration with the other elements.
A VS30 map for California with geologic and topographic constraints
Thompson, Eric; Wald, David J.; Worden, Charles
2014-01-01
For many earthquake engineering applications, site response is estimated through empirical correlations with the time‐averaged shear‐wave velocity to 30 m depth (VS30). These applications therefore depend on the availability of either site‐specific VS30 measurements or VS30 maps at local, regional, and global scales. Because VS30 measurements are sparse, a proxy frequently is needed to estimate VS30 at unsampled locations. We present a new VS30 map for California, which accounts for observational constraints from multiple sources and spatial scales, such as geology, topography, and site‐specific VS30 measurements. We apply the geostatistical approach of regression kriging (RK) to combine these constraints for predicting VS30. For the VS30 trend, we start with geology‐based VS30 values and identify two distinct trends between topographic gradient and the residuals from the geology VS30 model. One trend applies to deep and fine Quaternary alluvium, whereas the second trend is slightly stronger and applies to Pleistocene sedimentary units. The RK framework ensures that the resulting map of California is locally refined to reflect the rapidly expanding database of VS30 measurements throughout California. We compare the accuracy of the new mapping method to a previously developed map of VS30 for California. We also illustrate the sensitivity of ground motions to the new VS30 map by comparing real and scenario ShakeMaps with VS30 values from our new map to those for existing VS30 maps.
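The two-part structure of regression kriging, a deterministic trend plus an interpolated residual surface, can be sketched as below. This is schematic only: a log-linear trend on topographic gradient stands in for the paper's geology-plus-gradient model, and inverse-distance weighting stands in for the kriging of residuals; all names are hypothetical.

```python
import numpy as np

def rk_predict(xy_obs, grad_obs, vs30_obs, xy_new, grad_new, power=2.0):
    """Regression-'kriging' sketch for VS30: fit a log-linear trend on
    topographic gradient, then add an interpolated residual surface.
    IDW replaces true kriging, and the geology-based starting values
    of the actual map are omitted."""
    # 1. Trend: log(VS30) = a + b * log(topographic gradient)
    G = np.column_stack([np.ones_like(grad_obs), np.log(grad_obs)])
    coef, *_ = np.linalg.lstsq(G, np.log(vs30_obs), rcond=None)
    resid = np.log(vs30_obs) - G @ coef
    # 2. Residual surface: inverse-distance weights from measurement sites
    d = np.linalg.norm(xy_new[:, None] - xy_obs[None, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    w /= w.sum(axis=1, keepdims=True)
    trend = np.column_stack([np.ones_like(grad_new), np.log(grad_new)]) @ coef
    return np.exp(trend + w @ resid)
```

A key property shared with true regression kriging is exactness at the data sites: predicting back at a measurement location reproduces the measured VS30, which is what lets the map stay "locally refined" as new measurements arrive.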
Web-based Tool Suite for Plasmasphere Information Discovery
NASA Astrophysics Data System (ADS)
Newman, T. S.; Wang, C.; Gallagher, D. L.
2005-12-01
A suite of tools that enable discovery of terrestrial plasmasphere characteristics from NASA IMAGE Extreme Ultraviolet (EUV) images is described. The tool suite is web-accessible, allowing easy remote access without the need for any software installation on the user's computer. The features supported by the tool include reconstruction of the plasmasphere plasma density distribution from a short sequence of EUV images, semi-automated selection of the plasmapause boundary in an EUV image, and mapping of the selected boundary to the geomagnetic equatorial plane. EUV image upload and result download is also supported. The tool suite's plasmapause mapping feature is achieved via the Roelof and Skinner (2000) Edge Algorithm. The plasma density reconstruction is achieved through a tomographic technique that exploits physical constraints to allow for a moderate resolution result. The tool suite's software architecture uses Java Server Pages (JSP) and Java Applets on the front side for user-software interaction and Java Servlets on the server side for task execution. The compute-intensive components of the tool suite are implemented in C++ and invoked by the server via the Java Native Interface (JNI).
Measuring energy-saving retrofits: Experiences from the Texas LoanSTAR program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Haberl, J.S.; Reddy, T.A.; Claridge, D.E.
1996-02-01
In 1988 the Governor's Energy Management Center of Texas received approval from the US Department of Energy to establish a $98.6 million state-wide retrofit demonstration revolving loan program to fund energy-conserving retrofits in state, public school, and local government buildings. As part of this program, a first-of-its-kind, statewide Monitoring and Analysis Program (MAP) was established to verify energy and dollar savings of the retrofits, reduce energy costs by identifying operational and maintenance improvements, improve retrofit selection in future rounds of the LoanSTAR program, and initiate a database of energy use in institutional and commercial buildings located in Texas. This report discusses the LoanSTAR MAP with an emphasis on the process of acquiring and analyzing data to measure savings from energy conservation retrofits when budgets are a constraint. It includes a discussion of the program structure, basic measurement techniques, data archiving and handling, data reporting and analysis, and selected examples from LoanSTAR agencies. A summary of the program results for the first two years of monitoring is also included.
Mechanistic Explanations for Restricted Evolutionary Paths That Emerge from Gene Regulatory Networks
Cotterell, James; Sharpe, James
2013-01-01
The extent and the nature of the constraints to evolutionary trajectories are central issues in biology. Constraints can be the result of systems dynamics causing a non-linear mapping between genotype and phenotype. How prevalent are these developmental constraints and what is their mechanistic basis? Although this has been extensively explored at the level of epistatic interactions between nucleotides within a gene, or amino acids within a protein, selection acts at the level of the whole organism, and therefore epistasis between disparate genes in the genome is expected due to their functional interactions within gene regulatory networks (GRNs) which are responsible for many aspects of organismal phenotype. Here we explore epistasis within GRNs capable of performing a common developmental function – converting a continuous morphogen input into discrete spatial domains. By exploring the full complement of GRN wiring designs that are able to perform this function, we analyzed all possible mutational routes between functional GRNs. Through this study we demonstrate that mechanistic constraints are common for GRNs that perform even a simple function. We demonstrate a common mechanistic cause for such a constraint involving complementation between counter-balanced gene-gene interactions. Furthermore we show how such constraints can be bypassed by means of “permissive” mutations that buffer changes in a direct route between two GRN topologies that would normally be unviable. We show that such bypasses are common and thus we suggest that unlike what was observed in protein sequence-function relationships, the “tape of life” is less reproducible when one considers higher levels of biological organization. PMID:23613807
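The idea of analyzing mutational routes between functional network wirings, and of constraints that block direct routes unless a permissive intermediate exists, can be illustrated with a small breadth-first search over genotype space. This is a toy abstraction with made-up fitness functions, not the paper's GRN dynamics.

```python
from collections import deque

def path_exists(start, goal, is_functional):
    """BFS over single-mutation neighbours that visits only functional
    genotypes -- a toy version of analysing mutational routes between
    functional GRN wirings. Genotypes are tuples of edge weights drawn
    from {-1, 0, +1} (repression, absence, activation)."""
    if not (is_functional(start) and is_functional(goal)):
        return False
    seen, queue = {start}, deque([start])
    while queue:
        g = queue.popleft()
        if g == goal:
            return True
        for i in range(len(g)):          # mutate one edge at a time
            for v in (-1, 0, 1):
                if v != g[i]:
                    n = g[:i] + (v,) + g[i + 1:]
                    if n not in seen and is_functional(n):
                        seen.add(n)
                        queue.append(n)
    return False

# A permissive fitness criterion leaves a viable route open ...
open_path = path_exists((1, 0, 0), (0, 0, 1), lambda g: sum(map(abs, g)) <= 2)
# ... while a stricter one (only these two wirings work) blocks it:
# every single-mutation intermediate is non-functional.
blocked = path_exists((1, 0), (0, 1), lambda g: g in {(1, 0), (0, 1)})
```

The blocked case mirrors the paper's mechanistic constraint: two functional designs whose direct route passes only through unviable intermediates, which a "permissive" mutation elsewhere in the genotype could bypass.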
Enforcement of entailment constraints in distributed service-based business processes.
Hummer, Waldemar; Gaubatz, Patrick; Strembeck, Mark; Zdun, Uwe; Dustdar, Schahram
2013-11-01
A distributed business process is executed in a distributed computing environment. The service-oriented architecture (SOA) paradigm is a popular option for the integration of software services and execution of distributed business processes. Entailment constraints, such as mutual exclusion and binding constraints, are important means to control process execution. Mutually exclusive tasks result from the division of powerful rights and responsibilities to prevent fraud and abuse. In contrast, binding constraints define that a subject who performed one task must also perform the corresponding bound task(s). We aim to provide a model-driven approach for the specification and enforcement of task-based entailment constraints in distributed service-based business processes. Based on a generic metamodel, we define a domain-specific language (DSL) that maps the different modeling-level artifacts to the implementation level. The DSL integrates elements from role-based access control (RBAC) with the tasks that are performed in a business process. Process definitions are annotated using the DSL, and our software platform uses automated model transformations to produce executable WS-BPEL specifications which enforce the entailment constraints. We evaluate the impact of constraint enforcement on runtime performance for five selected service-based processes from existing literature. Our evaluation demonstrates that the approach correctly enforces task-based entailment constraints at runtime. The performance experiments illustrate that the runtime enforcement operates with an overhead that scales well up to the order of several tens of thousands of logged invocations. Using our DSL annotations, the user-defined process definition remains declarative and clean of security enforcement code. Our approach decouples the concerns of (non-technical) domain experts from technical details of entailment constraint enforcement.
The developed framework integrates seamlessly with WS-BPEL and the Web services technology stack. Our prototype implementation shows the feasibility of the approach, and the evaluation points to future work and further performance optimizations.
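The two constraint types can be illustrated with a minimal runtime checker. This is a hypothetical sketch, not the authors' DSL or WS-BPEL tooling; the class and method names are invented for illustration:

```python
# Sketch of runtime enforcement of task-based entailment constraints
# (mutual exclusion and binding) over an in-memory execution history.
# Hypothetical API, not the paper's model-driven implementation.

class EntailmentChecker:
    def __init__(self, mutual_exclusions, bindings):
        self.mutual_exclusions = mutual_exclusions  # set of frozenset({taskA, taskB})
        self.bindings = bindings                    # set of frozenset({taskA, taskB})
        self.history = {}                           # task -> subject who performed it

    def can_perform(self, subject, task):
        for pair in self.mutual_exclusions:
            if task in pair:
                other = next(t for t in pair if t != task)
                # mutual exclusion: the same subject must not perform both tasks
                if self.history.get(other) == subject:
                    return False
        for pair in self.bindings:
            if task in pair:
                other = next(t for t in pair if t != task)
                # binding: whoever performed the bound task must also perform this one
                if other in self.history and self.history[other] != subject:
                    return False
        return True

    def perform(self, subject, task):
        if not self.can_perform(subject, task):
            raise PermissionError(f"{subject} may not perform {task}")
        self.history[task] = subject
```

In the paper this logic is generated from DSL annotations rather than hand-coded, which keeps the process definition itself free of enforcement code.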
Ding, Yi; Peng, Kai; Yu, Miao; Lu, Lei; Zhao, Kun
2017-08-01
The performance of phase unwrapping methods based on two selected spatial frequencies is limited by a phase error bound beyond which errors occur in the fringe order, leading to a significant error in the recovered absolute phase map. In this paper, we propose a method to detect and correct the wrong fringe orders. Two constraints are introduced during the fringe order determination of the two-selected-spatial-frequency phase unwrapping methods. A strategy to detect and correct the wrong fringe orders is also described. Compared with existing methods, we do not need to estimate a threshold on absolute phase values to determine the fringe order error, which makes the approach more reliable and avoids a search procedure when detecting and correcting successive fringe order errors. The effectiveness of the proposed method is validated by experimental results.
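The underlying two-frequency scheme determines a fringe order from a wrapped high-frequency phase and a reference low-frequency phase. A minimal sketch, assuming a noise-free reference (it is precisely the breakdown of this assumption that produces the fringe-order errors the paper corrects):

```python
import math

def wrap(phi):
    # wrap a phase into (-pi, pi]
    return math.atan2(math.sin(phi), math.cos(phi))

def unwrap_two_freq(phi_high_wrapped, phi_low, ratio):
    """Recover the absolute high-frequency phase from its wrapped value,
    using an absolute low-frequency phase as the reference.
    ratio = f_high / f_low. Standard two-frequency scheme; fringe-order
    errors appear when phase noise exceeds pi/ratio-scale bounds."""
    absolute = []
    for ph, pl in zip(phi_high_wrapped, phi_low):
        k = round((ratio * pl - ph) / (2 * math.pi))  # fringe order
        absolute.append(ph + 2 * math.pi * k)
    return absolute
```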
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koposov, Sergey E.; Rix, Hans-Walter; Hogg, David W., E-mail: koposov@ast.cam.ac.u
2010-03-20
The narrow GD-1 stream of stars, spanning 60° on the sky at a distance of ~10 kpc from the Sun and ~15 kpc from the Galactic center, is presumed to be debris from a tidally disrupted star cluster that traces out a test-particle orbit in the Milky Way halo. We combine Sloan Digital Sky Survey (SDSS) photometry, USNO-B astrometry, and SDSS and Calar Alto spectroscopy to construct a complete, empirical six-dimensional (6D) phase-space map of the stream. We find that an eccentric orbit in a flattened isothermal potential describes this phase-space map well. Even after marginalizing over the stream orbital parameters and the distance from the Sun to the Galactic center, the orbital fit to GD-1 places strong constraints on the circular velocity at the Sun's radius, V_c = 224 ± 13 km s⁻¹, and the total potential flattening, q_Φ = 0.87 (+0.07/−0.04). When we drop any informative priors on V_c, the GD-1 constraint becomes V_c = 221 ± 18 km s⁻¹. Our 6D map of GD-1, therefore, yields the best current constraint on V_c and the only strong constraint on q_Φ at Galactocentric radii near R ~ 15 kpc. Much, if not all, of the total potential flattening may be attributed to the mass in the stellar disk, so the GD-1 constraints on the flattening of the halo itself are weak: q_Φ,halo > 0.89 at 90% confidence. The greatest uncertainty in the 6D map and the orbital analysis stems from the photometric distances, which will be obviated by GAIA.
A new mosaic method for three-dimensional surface
NASA Astrophysics Data System (ADS)
Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun
2011-08-01
Three-dimensional (3-D) data mosaicking is an indispensable step in surface measurement and digital terrain map generation. To address the problem of mosaicking locally unorganized point clouds with only crude registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each iteration of the method proceeds through random sampling with an additional shape constraint, point-cloud data normalization, absolute orientation, point-cloud data denormalization, inlier counting, etc. After N random sample trials the largest consensus set is selected, and the model is finally re-estimated using all the points in the selected subset. The minimal subset consists of three non-collinear points forming a triangle; the triangle's shape is taken into account during random sample selection to keep the samples well conditioned. A new coordinate-system transformation algorithm presented in this paper is used to avoid singularity: the whole rotation between the two coordinate systems is solved as two successive rotations expressed by Euler angle vectors, each with an explicit physical meaning. Both simulation and real data are used to demonstrate the correctness and validity of this mosaic method. The method has better noise immunity owing to its robust estimation property, and high accuracy because the shape constraint is added to the random sampling and data normalization to the absolute orientation. It is applicable to high-precision measurement of three-dimensional surfaces as well as to 3-D terrain mosaicking.
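The RANSAC loop described above can be sketched in miniature. For brevity this hypothetical example aligns 2-D point sets: a closed-form planar rigid fit stands in for the paper's 3-point absolute-orientation step, and a simple separation check stands in for the triangle shape constraint:

```python
import math, random

def fit_rigid_2d(src, dst):
    # Closed-form 2-D rigid (rotation + translation) estimate, the planar
    # analogue of the paper's absolute-orientation step.
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    sxx = sxy = 0.0
    for (x, y), (u, v) in zip(src, dst):
        ax, ay = x - csx, y - csy
        bx, by = u - cdx, v - cdy
        sxx += ax * bx + ay * by   # "dot" accumulator
        sxy += ax * by - ay * bx   # "cross" accumulator
    theta = math.atan2(sxy, sxx)
    c, s = math.cos(theta), math.sin(theta)
    return theta, cdx - (c * csx - s * csy), cdy - (s * csx + c * csy)

def apply_t(t, p):
    theta, tx, ty = t
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1] + tx, s * p[0] + c * p[1] + ty)

def ransac_align(src, dst, trials=200, tol=0.05, seed=0):
    rng = random.Random(seed)
    best_inliers = []
    for _ in range(trials):
        i, j = rng.sample(range(len(src)), 2)
        if math.dist(src[i], src[j]) < 1e-6:   # shape constraint: reject degenerate samples
            continue
        t = fit_rigid_2d([src[i], src[j]], [dst[i], dst[j]])
        inliers = [k for k in range(len(src))
                   if math.dist(apply_t(t, src[k]), dst[k]) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # final step of the paper's scheme: re-estimate on the largest consensus set
    return fit_rigid_2d([src[k] for k in best_inliers],
                        [dst[k] for k in best_inliers])
```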
Hugelier, Siewert; Vitale, Raffaele; Ruckebusch, Cyril
2018-03-01
This article explores smoothing with edge-preserving properties as a spatial constraint for the resolution of hyperspectral images with multivariate curve resolution-alternating least squares (MCR-ALS). For each constrained component image (distribution map), irrelevant spatial details and noise are smoothed applying an L 1 - or L 0 -norm penalized least squares regression, highlighting in this way big changes in intensity of adjacent pixels. The feasibility of the constraint is demonstrated on three different case studies, in which the objects under investigation are spatially clearly defined, but have significant spectral overlap. This spectral overlap is detrimental for obtaining a good resolution and additional spatial information should be provided. The final results show that the spatial constraint enables better image (map) abstraction, artifact removal, and better interpretation of the results obtained, compared to a classical MCR-ALS analysis of hyperspectral images.
Why don't zebras have machine guns? Adaptation, selection, and constraints in evolutionary theory.
Shanahan, Timothy
2008-03-01
In an influential paper, Stephen Jay Gould and Richard Lewontin (1979) contrasted selection-driven adaptation with phylogenetic, architectural, and developmental constraints as distinct causes of phenotypic evolution. In subsequent publications Gould (e.g., 1997a,b, 2002) has elaborated this distinction into one between a narrow "Darwinian Fundamentalist" emphasis on "external functionalist" processes, and a more inclusive "pluralist" emphasis on "internal structuralist" principles. Although theoretical integration of functionalist and structuralist explanations is the ultimate aim, natural selection and internal constraints are treated as distinct causes of evolutionary change. This distinction is now routinely taken for granted in the literature in evolutionary biology. I argue that this distinction is problematic because the effects attributed to non-selective constraints are more parsimoniously explained as the ordinary effects of selection itself. Although it may still be a useful shorthand to speak of phylogenetic, architectural, and developmental constraints on phenotypic evolution, it is important to understand that such "constraints" do not constitute an alternative set of causes of evolutionary change. The result of this analysis is a clearer understanding of the relationship between adaptation, selection and constraints as explanatory concepts in evolutionary theory.
Mapping of the Moon by Clementine
McEwen, A.S.; Robinson, M.S.
1997-01-01
The "faster, cheaper, better" Clementine spacecraft mission mapped the Moon from February 19 to May 3, 1994. Global coverage was acquired in 11 spectral bandpasses from 415 to 2792 nm and at resolutions of 80-330 m/pixel; a thermal-infrared camera sampled ???20% of the surface; a high-resolution camera sampled selected areas (especially the polar regions); and a lidar altimeter mapped the large-scale topography up to latitudes of ??75??. The spacecraft was in a polar, elliptical orbit, 400-450 km periselene altitude. Periselene latitude was -28.5?? for the first month of mapping, then moved to +28.5??. NASA is supporting the archiving, systematic processing, and analysis of the ???1.8 million lunar images and other datasets. A new global positional network has been constructed from 43,000 images and ???0.5 million match points; new digital maps will facilitate future lunar exploration. In-flight calibrations now enable photometry to a high level of precision for the uv-visible CCD camera. Early science results include: (1) global models of topography, gravity, and crustal thicknesses; (2) new information on the topography and structure of multiring impact basins; (3) evidence suggestive of water ice in large permanent shadows near the south pole; (4) global mapping of iron abundances; and (5) new constraints on the Phanerozoic cratering rate of the Earth. Many additional results are expected following completion of calibration and systematic processing efforts. ?? 1997 COSPAR. Published by Elsevier Science Ltd.
Thompson, E.M.; Wald, D.J.
2012-01-01
Despite obvious limitations as a proxy for site amplification, the use of time-averaged shear-wave velocity over the top 30 m (VS30) remains widely practiced, most notably through its use as an explanatory variable in ground motion prediction equations (and thus hazard maps and ShakeMaps, among other applications). As such, we are developing an improved strategy for producing VS30 maps given the common observational constraints. Using the abundant VS30 measurements in Taiwan, we compare alternative mapping methods that combine topographic slope, surface geology, and spatial correlation structure. The different VS30 mapping algorithms are distinguished by the way that slope and geology are combined to define a spatial model of VS30. We consider the globally applicable slope-only model as a baseline to which we compare two methods of combining both slope and geology. For both hybrid approaches, we model spatial correlation structure of the residuals using the kriging-with-a-trend technique, which brings the map into closer agreement with the observations. Cross validation indicates that we can reduce the uncertainty of the VS30 map by up to 16% relative to the slope-only approach.
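A hybrid slope-plus-observations map of this kind can be sketched with hypothetical trend coefficients, using inverse-distance weighting of station residuals as a simplified stand-in for the kriging-with-a-trend step:

```python
import math

def vs30_map_value(site, slope_at, stations, a=5.0, b=0.3, power=2.0):
    """Sketch of a hybrid VS30 map value: a slope-based trend plus
    inverse-distance-weighted residuals from measured stations.
    The coefficients a, b and the IDW interpolator are illustrative
    assumptions, not the paper's fitted model or kriging system."""
    def trend(p):
        return a + b * math.log(slope_at(p))     # ln(VS30) trend from topographic slope
    num = den = 0.0
    for p, vs30_obs in stations:
        r = math.log(vs30_obs) - trend(p)        # station residual from the trend
        d = max(math.dist(site, p), 1e-9)
        w = d ** -power
        num += w * r; den += w
    resid = num / den if den else 0.0
    return math.exp(trend(site) + resid)
```

As in the paper, the residual field pulls the map toward the observations near stations and relaxes back to the slope-only trend far from them.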
ERIC Educational Resources Information Center
Batty, Kimberly A.
2011-01-01
The purpose of this study was to document the factors (i.e., motivation and perceived constraints) and processes (i.e., constraint negotiation) that influence students' selection of and satisfaction with their internship choice. The study was conducted using a quantitative approach, which included a focus group, a pilot study, and a…
Mapping uncharted territory in ice from zeolite networks to ice structures.
Engel, Edgar A; Anelli, Andrea; Ceriotti, Michele; Pickard, Chris J; Needs, Richard J
2018-06-05
Ice is one of the most extensively studied condensed matter systems. Yet several new phases have been discovered, both experimentally and theoretically, in recent years. Here we report a large-scale density-functional-theory study of the configuration space of water ice. We geometry optimise 74,963 ice structures, which are selected and constructed from over five million tetrahedral networks listed in the databases of Treacy, Deem, and the International Zeolite Association. All prior knowledge of ice is set aside and we introduce "generalised convex hulls" to identify configurations stabilised by appropriate thermodynamic constraints. We thereby rediscover all known phases (I-XVII, i, 0 and the quartz phase) except the metastable ice IV. Crucially, we also find promising candidates for ices XVIII through LI. Using the "sketch-map" dimensionality-reduction algorithm we construct an a priori, navigable map of configuration space, which reproduces similarity relations between structures and highlights the novel candidates. By relating the known phases to the tractably small, yet structurally diverse set of synthesisable candidate structures, we provide an excellent starting point for identifying formation pathways.
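The stability test behind a conventional convex-hull construction can be sketched in two dimensions: structures lying on the lower hull of (coordinate, energy) points are stable against decomposition into their neighbours. The paper's "generalised convex hulls" extend the coordinate axis beyond simple variables like density to learned structural descriptors:

```python
def lower_hull(points):
    """Lower convex hull of (x, energy) points via Andrew's monotone
    chain. Points on the returned hull are the thermodynamically stable
    ones in this simple two-dimensional picture."""
    pts = sorted(points)
    hull = []
    for p in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop hull[-1] if it lies on or above the segment hull[-2] -> p
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```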
Geologic Mapping of the NW Rim of Hellas Basin, Mars
NASA Astrophysics Data System (ADS)
Crown, D. A.; Bleamaster, L. F.; Mest, S. C.; Mustard, J. F.
2009-03-01
Geologic mapping of the NW rim of Hellas basin is providing new constraints on the magnitudes, extents, and history of volatile-driven processes as well as a geologic context for mineralogic identifications.
Kobayashi, Amane; Sekiguchi, Yuki; Takayama, Yuki; Oroguchi, Tomotaka; Nakasako, Masayoshi
2014-11-17
Coherent X-ray diffraction imaging (CXDI) is a lensless imaging technique that is suitable for visualizing the structures of non-crystalline particles with micrometer to sub-micrometer dimensions from material science and biology. One of the difficulties inherent to CXDI structural analyses is the reconstruction of electron density maps of specimen particles from diffraction patterns because saturated detector pixels and a beam stopper result in missing data in small-angle regions. To overcome this difficulty, the dark-field phase-retrieval (DFPR) method has been proposed. The DFPR method reconstructs electron density maps from diffraction data, which are modified by multiplying Gaussian masks with an observed diffraction pattern in the high-angle regions. In this paper, we incorporated Friedel centrosymmetry for diffraction patterns into the DFPR method to provide a constraint for the phase-retrieval calculation. A set of model simulations demonstrated that this constraint dramatically improved the probability of reconstructing correct electron density maps from diffraction patterns that were missing data in the small-angle region. In addition, the DFPR method with the constraint was applied successfully to experimentally obtained diffraction patterns with significant quantities of missing data. We also discuss this method's limitations with respect to the level of Poisson noise in X-ray detection.
Constrained proper sampling of conformations of transition state ensemble of protein folding
Lin, Ming; Zhang, Jian; Lu, Hsiao-Mei; Chen, Rong; Liang, Jie
2011-01-01
Characterizing the conformations of proteins in the transition state ensemble (TSE) is important for studying protein folding. A promising approach pioneered by Vendruscolo [Nature (London) 409, 641 (2001)] to study the TSE is to generate conformations that satisfy all constraints imposed by the experimentally measured ϕ values that provide information about the native likeness of the transition states. Faísca [J. Chem. Phys. 129, 095108 (2008)] generated conformations of the TSE based on the criterion that, starting from a TS conformation, the probabilities of folding and unfolding are about equal through Markov Chain Monte Carlo (MCMC) simulations. In this study, we use the constrained sequential Monte Carlo technique [Lin, J. Chem. Phys. 129, 094101 (2008); Zhang, Proteins 66, 61 (2007)] to generate TSE conformations of acylphosphatase of 98 residues that satisfy the ϕ-value constraints, as well as the criterion that each conformation has a folding probability of 0.5 by Monte Carlo simulations. We adopt a two-stage process and first generate 5000 contact maps satisfying the ϕ-value constraints. Each contact map is then used to generate 1000 properly weighted conformations. After clustering similar conformations, we obtain a set of properly weighted samples of 4185 candidate clusters. A representative conformation of each of these clusters is then selected and 50 runs of Markov chain Monte Carlo (MCMC) simulation are carried out using a regrowth move set. We then select a subset of 1501 conformations that have equal probabilities to fold and to unfold as the set of TSE. These 1501 samples characterize well the distribution of transition state ensemble conformations of acylphosphatase. Compared with previous studies, our approach can access a much wider conformational space and can objectively generate conformations that satisfy the ϕ-value constraints and the criterion of 0.5 folding probability without bias. 
In contrast to previous studies, our results show that transition state conformations are very diverse and far from native-like when measured by Cartesian root-mean-square deviation (cRMSD): the average cRMSD between TSE conformations and the native structure is 9.4 Å for this short protein, instead of the 6 Å reported in previous studies. In addition, we found that the average fraction of native contacts in the TSE is 0.37, with enrichment of native-like β-sheets and a shortage of long-range contacts, suggesting such contacts form at a later stage of folding. We further calculate the first passage time of folding of TSE conformations by computing the physical time associated with the regrowth moves in the MCMC simulation, mapping such moves to a Markovian state model whose transition times were obtained by Langevin dynamics simulations. Our results indicate that despite the large structural diversity of the TSE, the conformations are characterized by similar folding times. Our approach is general and can be used to study the TSE in other macromolecules. PMID:21341875
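The fraction-of-native-contacts statistic quoted above (0.37 on average) is straightforward to compute from contact maps; a minimal sketch, representing each contact as an unordered residue pair:

```python
def fraction_native_contacts(conf_contacts, native_contacts):
    """Fraction of native contacts Q: the share of the native structure's
    residue-residue contacts that are also present in a conformation.
    Contacts are frozensets of residue indices, so order is irrelevant."""
    if not native_contacts:
        return 0.0
    return len(conf_contacts & native_contacts) / len(native_contacts)
```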
Volkmann, Niels
2004-01-01
Reduced representation templates are used in a real-space pattern matching framework to facilitate automatic particle picking from electron micrographs. The procedure consists of five parts. First, reduced templates are constructed either from models or directly from the data. Second, a real-space pattern matching algorithm is applied using the reduced representations as templates. Third, peaks are selected from the resulting score map using peak-shape characteristics. Fourth, the surviving peaks are tested for distance constraints. Fifth, a correlation-based outlier screening is applied. Test applications to a data set of keyhole limpet hemocyanin particles indicate that the method is robust and reliable.
Hyperbolic Harmonic Mapping for Surface Registration
Shi, Rui; Zeng, Wei; Su, Zhengyu; Jiang, Jian; Damasio, Hanna; Lu, Zhonglin; Wang, Yalin; Yau, Shing-Tung; Gu, Xianfeng
2016-01-01
Automatic computation of surface correspondence via harmonic map is an active research field in computer vision, computer graphics and computational geometry. It may help document and understand physical and biological phenomena and also has broad applications in the biometrics, medical imaging and motion capture industries. Although numerous studies have been devoted to harmonic map research, limited progress has been made on computing a diffeomorphic harmonic map on general topology surfaces with landmark constraints. This work addresses the problem by changing the Riemannian metric on the target surface to a hyperbolic metric, so that the harmonic mapping is guaranteed to be a diffeomorphism under landmark constraints. The computational algorithms are based on Ricci flow and nonlinear heat diffusion methods. The approach is general and robust. We employ our algorithm to study the constrained surface registration problem which applies to both computer vision and medical imaging applications. Experimental results demonstrate that, by changing the Riemannian metric, the registrations are always diffeomorphic and achieve relatively high performance when evaluated with some popular surface registration evaluation standards. PMID:27187948
Local search for optimal global map generation using mid-decadal landsat images
Khatib, L.; Gasch, J.; Morris, Robert; Covington, S.
2007-01-01
NASA and the US Geological Survey (USGS) are seeking to generate a map of the entire globe using Landsat 5 Thematic Mapper (TM) and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) sensor data from the "mid-decadal" period of 2004 through 2006. The global map comprises thousands of scene locations and, for each location, tens of different images of varying quality to choose from. Furthermore, it is desirable for images of adjacent scenes to be acquired close together in time, to avoid obvious discontinuities due to seasonal changes. These characteristics make it desirable to formulate an automated solution to the problem of generating the complete map. This paper formulates the Global Map Generator problem as a Constraint Optimization Problem (GMG-COP) and describes an approach to solving it using local search. Preliminary results of running the algorithm on image data sets are summarized. The results suggest a significant improvement in map quality using constraint-based solutions. Copyright © 2007, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
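The GMG-COP objective and a hill-climbing local search can be sketched as follows. The cost terms, a per-image quality penalty plus a seasonal day-of-year mismatch between adjacent scenes, are illustrative assumptions, not the paper's exact formulation:

```python
import random

def map_cost(choice, scenes, adjacency, w_season=1.0):
    """Objective: sum of per-image quality penalties plus the circular
    day-of-year gap between the images chosen for adjacent scenes."""
    cost = sum(scenes[s][choice[s]][0] for s in choice)
    for s, t in adjacency:
        d = abs(scenes[s][choice[s]][1] - scenes[t][choice[t]][1])
        cost += w_season * min(d, 365 - d)       # circular seasonal distance
    return cost

def local_search(scenes, adjacency, iters=2000, seed=0):
    """Hill-climbing sketch of a GMG-COP solver: repeatedly try swapping
    one scene's selected image and keep the move if the cost decreases.
    scenes: {scene: [(quality_penalty, day_of_year), ...]}."""
    rng = random.Random(seed)
    choice = {s: 0 for s in scenes}
    cost = map_cost(choice, scenes, adjacency)
    for _ in range(iters):
        s = rng.choice(list(scenes))
        alt = rng.randrange(len(scenes[s]))
        if alt == choice[s]:
            continue
        trial = dict(choice); trial[s] = alt
        c = map_cost(trial, scenes, adjacency)
        if c < cost:
            choice, cost = trial, c
    return choice, cost
```

A production solver would add restarts or simulated-annealing moves to escape local optima; plain hill climbing is shown here for clarity.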
A neural-network based estimator to search for primordial non-Gaussianity in Planck CMB maps
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, C.P.; Bernui, A.; Ferreira, I.S.
2015-09-01
We present an upgraded combined estimator, based on Minkowski Functionals and Neural Networks, with excellent performance in detecting primordial non-Gaussianity in simulated maps that also contain a weighted mixture of Galactic contaminations, in addition to realistic pixel noise from Planck cosmic microwave background radiation data. We rigorously test the efficiency of our estimator considering several plausible scenarios for residual non-Gaussianities in the foreground-cleaned Planck maps, with the aim of optimizing the training procedure of the Neural Network to discriminate between contaminations with primordial and secondary non-Gaussian signatures. We look for constraints on primordial local non-Gaussianity at large angular scales in the foreground-cleaned Planck maps. For the SMICA map we found f_NL = 33 ± 23, at the 1σ confidence level, in excellent agreement with the WMAP-9yr and Planck results. In addition, for the other three Planck maps we obtain similar constraints, with values in the interval f_NL ∈ [33, 41], concomitant with the fact that these maps manifest distinct features in reported analyses, such as different pixel noise intensities.
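The Minkowski Functionals that feed such an estimator can be computed exactly for a binary (thresholded) pixel map. A small sketch of the three 2D functionals, area, perimeter, and Euler characteristic, treating each filled pixel as a unit square:

```python
def minkowski_2d(mask):
    """The three 2D Minkowski functionals of a binary pixel map:
    area (filled pixels), perimeter (boundary edges), and Euler
    characteristic computed as V - E + F over the union of unit
    squares. A flat-grid sketch; CMB pipelines work on the sphere
    and sweep these over many thresholds."""
    faces = {(i, j) for i, row in enumerate(mask)
             for j, v in enumerate(row) if v}
    edges, verts = set(), set()
    for (i, j) in faces:
        edges.update({((i, j), (i, j + 1)), ((i + 1, j), (i + 1, j + 1)),
                      ((i, j), (i + 1, j)), ((i, j + 1), (i + 1, j + 1))})
        verts.update({(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)})
    area = len(faces)
    # boundary edges border exactly one filled pixel
    perimeter = sum(1 for (i, j) in faces
                    for nb in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                    if nb not in faces)
    euler = len(verts) - len(edges) + len(faces)
    return area, perimeter, euler
```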
Infrared Extinction and Stellar Populations in the Milky Way Midplane
NASA Astrophysics Data System (ADS)
Zasowski, Gail; Majewski, S. R.; Benjamin, R. A.; Nidever, D. L.; Skrutskie, M. F.; Indebetouw, R.; Patterson, R. J.; Meade, M. R.; Whitney, B. A.; Babler, B.; Churchwell, E.; Watson, C.
2012-01-01
The primary laboratory for developing and testing models of galaxy formation, structure, and evolution is our own Milky Way, the closest large galaxy and the only one in which we can resolve large numbers of individual stars. The recent availability of extensive stellar surveys, particularly infrared ones, has enabled precise, contiguous measurement of large-scale Galactic properties, a major improvement over inferences based on selected, but scattered, sightlines. However, our ability to fully exploit the Milky Way as a galactic laboratory is severely hampered by the fact that its midplane and central bulge -- where most of the Galactic stellar mass lies -- is heavily obscured by interstellar dust. Therefore, proper consideration of the interstellar extinction is crucial. This thesis describes a new extinction-correction method (the RJCE method) that measures the foreground extinction towards each star and, in many cases, enables recovery of its intrinsic stellar type. We have demonstrated the RJCE Method's validity and used it to produce new, reliable extinction maps of the heavily-reddened Galactic midplane. Taking advantage of the recovered stellar type information, we have generated maps probing the extinction at different heliocentric distances, thus yielding information on the elusive three-dimensional distribution of the interstellar dust. We also performed a study of the interstellar extinction law itself which revealed variations previously undetected in the diffuse ISM and established constraints on models of ISM grain formation and evolution. Furthermore, we undertook a study of large-scale stellar structure in the inner Galaxy -- the bar(s), bulge(s), and inner spiral arms. We used observed and extinction-corrected infrared photometry to map the coherent stellar features in these heavily-obscured parts of the Galaxy, placing constraints on models of the central stellar mass distribution.
Variational Bayesian Learning for Wavelet Independent Component Analysis
NASA Astrophysics Data System (ADS)
Roussos, E.; Roberts, S.; Daubechies, I.
2005-11-01
In an exploratory approach to data analysis, it is often useful to consider the observations as generated from a set of latent generators or "sources" via a generally unknown mapping. For the noisy overcomplete case, where we have more sources than observations, the problem becomes extremely ill-posed. Solutions to such inverse problems can, in many cases, be achieved by incorporating prior knowledge about the problem, captured in the form of constraints. This setting is a natural candidate for the application of the Bayesian methodology, allowing us to incorporate "soft" constraints in a natural manner. The work described in this paper is mainly driven by problems in functional magnetic resonance imaging of the brain, for the neuro-scientific goal of extracting relevant "maps" from the data. This can be stated as a `blind' source separation problem. Recent experiments in the field of neuroscience show that these maps are sparse, in some appropriate sense. The separation problem can be solved by independent component analysis (ICA), viewed as a technique for seeking sparse components, assuming appropriate distributions for the sources. We derive a hybrid wavelet-ICA model, transforming the signals into a domain where the modeling assumption of sparsity of the coefficients with respect to a dictionary is natural. We follow a graphical modeling formalism, viewing ICA as a probabilistic generative model. We use hierarchical source and mixing models and apply Bayesian inference to the problem. This allows us to perform model selection in order to infer the complexity of the representation, as well as automatic denoising. Since exact inference and learning in such a model is intractable, we follow a variational Bayesian mean-field approach in the conjugate-exponential family of distributions, for efficient unsupervised learning in multi-dimensional settings. The performance of the proposed algorithm is demonstrated on some representative experiments.
Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Xu, Chenyang; Ayache, Nicholas
2010-07-01
We propose a framework for the nonlinear spatiotemporal registration of 4D time-series of images based on the Diffeomorphic Demons (DD) algorithm. In this framework, the 4D spatiotemporal registration is decoupled into a 4D temporal registration, defined as mapping physiological states, and a 4D spatial registration, defined as mapping trajectories of physical points. Our contribution focuses more specifically on the 4D spatial registration that should be consistent over time as opposed to 3D registration that solely aims at mapping homologous points at a given time-point. First, we estimate in each sequence the motion displacement field, which is a dense representation of the point trajectories we want to register. Then, we perform simultaneously 3D registrations of corresponding time-points with the constraints to map the same physical points over time called the trajectory constraints. Under these constraints, we show that the 4D spatial registration can be formulated as a multichannel registration of 3D images. To solve it, we propose a novel version of the Diffeomorphic Demons (DD) algorithm extended to vector-valued 3D images, the Multichannel Diffeomorphic Demons (MDD). For evaluation, this framework is applied to the registration of 4D cardiac computed tomography (CT) sequences and compared to other standard methods with real patient data and synthetic data simulated from a physiologically realistic electromechanical cardiac model. Results show that the trajectory constraints act as a temporal regularization consistent with motion whereas the multichannel registration acts as a spatial regularization. Finally, using these trajectory constraints with multichannel registration yields the best compromise between registration accuracy, temporal and spatial smoothness, and computation times. 
A prospective example of application is also presented with the spatiotemporal registration of 4D cardiac CT sequences of the same patient before and after radiofrequency ablation (RFA) in a case of atrial fibrillation (AF). The intersequence spatial transformations over a cardiac cycle make it possible to analyze and quantify the regression of left ventricular hypertrophy and its impact on cardiac function.
Connecting CO intensity mapping to molecular gas and star formation in the epoch of galaxy assembly
Li, Tony Y.; Wechsler, Risa H.; Devaraj, Kiruthika; ...
2016-01-29
Intensity mapping, which images a single spectral line from unresolved galaxies across cosmological volumes, is a promising technique for probing the early universe. Here we present predictions for the intensity map and power spectrum of the CO(1–0) line from galaxies at z ~ 2.4–2.8, based on a parameterized model for the galaxy–halo connection, and demonstrate the extent to which properties of high-redshift galaxies can be directly inferred from such observations. We find that our fiducial prediction should be detectable by a realistic experiment. Motivated by significant modeling uncertainties, we demonstrate the effect on the power spectrum of varying each parameter in our model. Using simulated observations, we infer constraints on our model parameter space with an MCMC procedure, and show corresponding constraints on the L_IR–L_CO relation and the CO luminosity function. These constraints would be complementary to current high-redshift galaxy observations, which can detect the brightest galaxies but not complete samples from the faint end of the luminosity function. Furthermore, by probing these populations in aggregate, CO intensity mapping could be a valuable tool for probing molecular gas and its relation to star formation in high-redshift galaxies.
NASA Astrophysics Data System (ADS)
Rodriguez, S.; Cornet, T.; Maltagliati, L.; Appéré, T.; Le Mouelic, S.; Sotin, C.; Barnes, J. W.; Brown, R. H.
2017-12-01
Mapping Titan's surface albedo is a necessary step toward reliable constraints on its composition. However, even after the end of the Cassini mission, surface albedo maps of Titan, especially over large regions, remain very rare, because the surface windows are strongly affected by atmospheric contributions (absorption, scattering). A full radiative transfer model is the essential tool for removing these effects, but it is too time-consuming to apply systematically to the roughly 50,000 hyperspectral images VIMS has acquired since the beginning of the mission. We developed a massive inversion of VIMS data based on lookup tables computed from a state-of-the-art radiative transfer model in pseudo-spherical geometry, updated with new aerosol properties derived from our analysis of observations recently acquired by VIMS (solar occultations and emission phase curves). Once the physical properties of gases, aerosols and surface are fixed, the lookup tables are built over the remaining free parameters: the incidence, emergence and azimuth angles, given by navigation, and two products (the aerosol opacity and the surface albedo at all wavelengths). The lookup table grid was carefully selected after thorough testing. Data inversion on these pre-computed spectra (suitably interpolated) is more than 1000 times faster than rerunning the full radiative transfer model at each minimization step. We present here the results from selected flybys. We invert mosaics composed of pairs of flybys that observed the same area at two different times. The composite albedo maps do not show significant discontinuities in any of the surface windows, suggesting a robust correction of the effects of the geometry (and thus the aerosols) on the observations. Maps of aerosol and albedo uncertainties are also provided, along with absolute errors. We are thus able to provide reliable surface albedo maps at pixel scale for entire regions of Titan and for the whole VIMS spectral range.
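The table-based inversion described above (fit against pre-computed spectra instead of rerunning the forward model) can be sketched as follows. This is a minimal illustration, not the authors' code: `toy_forward`, the grid ranges, and all numbers are hypothetical stand-ins for the real radiative transfer model and its (opacity, albedo) lookup table.

```python
import numpy as np

# Hypothetical toy forward model: reflectance as a function of aerosol
# opacity tau and surface albedo. In the real pipeline each table entry
# would come from a full radiative transfer computation.
def toy_forward(tau, albedo):
    return albedo * np.exp(-tau) + 0.1 * (1.0 - np.exp(-tau))

# Build the lookup table once, up front, over the free-parameter grid.
taus = np.linspace(0.0, 2.0, 41)
albedos = np.linspace(0.0, 1.0, 51)
table = np.array([[toy_forward(t, a) for a in albedos] for t in taus])

def invert(observed):
    """Brute-force table search: return the (tau, albedo) grid point whose
    pre-computed reflectance best matches the observation."""
    i, j = np.unravel_index(np.argmin((table - observed) ** 2), table.shape)
    return taus[i], albedos[j]

tau_fit, alb_fit = invert(toy_forward(0.5, 0.3))
```

The paper additionally interpolates between grid points and inverts one spectrum per pixel per wavelength window; the principle (pay the forward-model cost once, then search cheaply) is the same.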
Genome-wide association mapping of crown rust resistance in oat elite germplasm
USDA-ARS?s Scientific Manuscript database
Oat crown rust, caused by Puccinia coronata f. sp. avenae, is a major constraint to oat production in many parts of the world. In this first comprehensive multi-environment genome-wide association map of oat crown rust, we used 2,972 SNPs genotyped on 631 oat lines for association mapping of quantit...
Subsurface Investigation of the Neogene Mygdonian Basin, Greece Using Magnetic Data
NASA Astrophysics Data System (ADS)
Ibraheem, Ismael M.; Gurk, Marcus; Tougiannidis, Nikolaos; Tezkan, Bülent
2018-02-01
A high-resolution ground and marine magnetic survey was executed to determine the structure of the subsurface and the thickness of the sedimentary cover in the Mygdonian Basin. A spacing of approximately 250 m or 500 m between measurement stations was selected to cover an area of 15 km × 22 km. Edge detectors such as the total horizontal derivative (THDR), analytic signal (AS), tilt derivative (TDR), and enhanced total horizontal gradient of the tilt derivative (ETHDR) were applied to map the subsurface structure. Depth was estimated by power spectrum analysis, the tilt derivative, source parameter imaging (SPI), and 2D forward modeling techniques. Spectral analysis and SPI suggest a depth to the basement ranging from near surface to 600 m. For some selected locations, depth was also calculated using the TDR technique, suggesting depths from 160 to 400 m. 2D forward magnetic modeling using existing boreholes as constraints was carried out along four selected profiles and confirmed the presence of alternating horsts and grabens formed by parallel normal faults. The dominant structural trends inferred from THDR, AS, TDR, and ETHDR are N-S, NW-SE, NE-SW and E-W, which corresponds with the known structural trends in the area. Finally, a detailed structural map showing the magnetic blocks and the structural architecture of the Mygdonian Basin was drawn up by collating all of the results.
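As a rough illustration of one of the edge detectors named above, the tilt derivative is the arctangent of the ratio of the vertical derivative to the total horizontal derivative. The sketch below assumes the vertical derivative grid is supplied pre-computed (in practice it is obtained with an FFT operator); the function name and test field are ours, not from the paper.

```python
import numpy as np

def tilt_derivative(field, dz, dx=1.0, dy=1.0):
    """Tilt derivative TDR = arctan(VDR / THDR) of a gridded magnetic
    anomaly map `field`. `dz` is the vertical derivative grid, assumed
    pre-computed; dx, dy are grid spacings."""
    gy, gx = np.gradient(field, dy, dx)   # horizontal derivatives
    thdr = np.hypot(gx, gy)               # total horizontal derivative
    return np.arctan2(dz, thdr)           # bounded to (-pi/2, pi/2)
```

Because the arctangent bounds the output, TDR equalizes responses from shallow and deep sources, which is why it is useful both as an edge detector (zero crossings track source edges) and, via its half-width, as a depth estimator.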
Wang, Cong; Wang, Shuai; Fu, Bojie; Li, Zongshan; Wu, Xing; Tang, Qiang
2017-01-01
A tight coupling exists between biogeochemical cycles and water availability in drylands. However, studies regarding the coupling among soil moisture (SM), soil carbon/nitrogen, and plants are rare in the literature, and clarifying how these relationships change along a climate gradient is challenging. Thus, soil organic carbon (SOC), total nitrogen (TN), and species richness (SR) were selected as soil-plant system variables, and the tradeoff relationships between SM and these variables, and their variations along the precipitation gradient, were quantified in the Loess Plateau, China. Results showed that these variables increased linearly along the precipitation gradient in the woodland, shrubland, and grassland, respectively, except for SR in the woodland and grassland and SOC in the grassland (p > 0.05). Correlation analysis showed that the SM-SOC and SM-TN tradeoffs were significantly correlated with mean annual precipitation (MAP) across the three vegetation types, and the SM-SR tradeoff was significantly correlated with MAP in the grassland and woodland. Linear piece-wise quantile regression was applied to determine the inflection points of these tradeoffs' responses to the precipitation gradient. The inflection point for the SM-SOC tradeoff was detected at MAP = 570 mm; no inflection point was detected for the SM-TN tradeoff; the SM-SR tradeoff variation trends differed between the woodland and grassland, with inflection points detected at MAP = 380 mm and MAP = 570 mm, respectively. Before the turning point, soil moisture constrained SOC and SR in the relatively arid regions, while the constraint disappeared or lessened in the relatively humid regions of this study. The results demonstrate that the tradeoffs revealed clear trends along the precipitation gradient and were affected by vegetation type. Consequently, tradeoffs could serve as an ecological indicator and tool for restoration management in the Loess Plateau.
Future study should clarify the mechanism by which the tradeoffs are affected by the precipitation gradient and vegetation type. Copyright © 2016 Elsevier B.V. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hajian, Amir; Bond, J. Richard; Battaglia, Nicholas
We measure a significant correlation between the thermal Sunyaev-Zel'dovich effect in the Planck and WMAP maps and an X-ray cluster map based on ROSAT. We use the 100, 143 and 353 GHz Planck maps and the WMAP 94 GHz map to obtain this cluster cross spectrum. We check our measurements for contamination from dusty galaxies using cross correlations with the 217, 545 and 857 GHz maps from Planck. Our measurement yields a direct characterization of the cluster power spectrum over a wide range of angular scales that is consistent with large cosmological simulations. The amplitude of this signal depends on cosmological parameters that determine the growth of structure (σ8 and ΩM) and scales as σ8^7.4 and ΩM^1.9 around multipole ℓ ∼ 1000. We constrain σ8 and ΩM from the cross-power spectrum to be σ8(ΩM/0.30)^0.26 = 0.8 ± 0.02. Since this cross spectrum produces a tight constraint in the σ8–ΩM plane, the errors on a σ8 constraint will be limited mostly by the uncertainties from external constraints. Future cluster catalogs, like those from eROSITA and LSST, and pointed multi-wavelength observations of clusters will improve the constraining power of this cross-spectrum measurement. In principle this analysis can be extended beyond σ8 and ΩM to constrain dark energy or the sum of the neutrino masses.
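The quoted degeneracy can be read as a one-parameter family of allowed (σ8, ΩM) pairs. A few lines make the arithmetic concrete; the function name is ours, and the amplitude, pivot, and slope are taken directly from the constraint σ8(ΩM/0.30)^0.26 = 0.8 stated above.

```python
def sigma8_from_constraint(omega_m, amplitude=0.8, pivot=0.30, slope=0.26):
    """Solve sigma8 * (Omega_M / pivot)**slope = amplitude for sigma8,
    i.e. walk along the tSZ--X-ray cross-spectrum degeneracy direction."""
    return amplitude / (omega_m / pivot) ** slope

# At the pivot Omega_M = 0.30 this returns 0.8 by construction; a lower
# Omega_M must be compensated by a higher sigma8, and vice versa.
```

The shallow slope (0.26) is why the cross spectrum alone pins down only this combination, so external data are needed to break the degeneracy.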
Portfolios with nonlinear constraints and spin glasses
NASA Astrophysics Data System (ADS)
Gábor, Adrienn; Kondor, I.
1999-12-01
In a recent paper Galluccio, Bouchaud and Potters demonstrated that a certain portfolio problem with a nonlinear constraint maps exactly onto finding the ground states of a long-range spin glass, with the concomitant nonuniqueness and instability of the optimal portfolios. Here we put forward geometric arguments that lead to qualitatively similar conclusions, without recourse to the methods of spin glass theory, and give two more examples of portfolio problems with convex nonlinear constraints.
Compromise Approach-Based Genetic Algorithm for Constrained Multiobjective Portfolio Selection Model
NASA Astrophysics Data System (ADS)
Li, Jun
In this paper, fuzzy set theory is incorporated into a multiobjective portfolio selection model in which investors consider three criteria: return, risk and liquidity. The cardinality constraint, the buy-in threshold constraint and the round-lot constraints are considered in the proposed model. To overcome the difficulty of evaluating a large set of efficient solutions and selecting the best one on the non-dominated surface, a compromise approach-based genetic algorithm is presented to obtain a compromise solution for the proposed constrained multiobjective portfolio selection model.
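The three discrete constraints named above can be checked on a candidate weight vector as follows. This is a minimal sketch with illustrative thresholds (the `k_max`, `min_buy`, and `lot` values are our assumptions, not the paper's), of the kind a genetic algorithm would use as a feasibility test.

```python
import numpy as np

def feasible(weights, k_max=10, min_buy=0.05, lot=0.01):
    """Check cardinality, buy-in threshold, and round-lot constraints
    on a portfolio weight vector. Thresholds are illustrative."""
    held = weights[weights > 0]
    cardinality = held.size <= k_max                 # at most k_max assets held
    buy_in = bool(np.all(held >= min_buy))           # each position above threshold
    round_lots = bool(np.allclose(held / lot,        # weights are multiples of a lot
                                  np.round(held / lot)))
    return cardinality and buy_in and round_lots
```

In a GA these checks typically drive repair operators or penalty terms rather than outright rejection, so infeasible offspring can still contribute genetic material.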
Shape optimization of disc-type flywheels
NASA Technical Reports Server (NTRS)
Nizza, R. S.
1976-01-01
Techniques were developed for presenting an analytical and graphical means of selecting an optimum flywheel system design, based on system requirements, geometric constraints, and weight limitations. The techniques for creating an analytical solution are formulated from energy and structural principles. The resulting flywheel design relates stress and strain pattern distribution, operating speeds, geometry, and specific energy levels. The design techniques yield the lowest-stressed flywheel for any particular application and achieve the highest possible specific energy per unit flywheel weight. Stress and strain contour mapping and sectional profile plotting reflect the structural behavior manifested under rotating conditions. This approach toward flywheel design is applicable to any metal flywheel, and permits the selection of the flywheel design to be based solely on the criteria of the system requirements that must be met, those that must be optimized, and those system parameters that may be permitted to vary.
Naranjo, Yandi; Pons, Miquel; Konrat, Robert
2012-01-01
The number of existing protein sequences spans a very small fraction of sequence space. Natural proteins have overcome a strong negative selective pressure to avoid the formation of insoluble aggregates. Stably folded globular proteins and intrinsically disordered proteins (IDPs) use alternative solutions to the aggregation problem. While in globular proteins folding minimizes the access to aggregation prone regions, IDPs on average display large exposed contact areas. Here, we introduce the concept of average meta-structure correlation maps to analyze sequence space. Using this novel conceptual view we show that representative ensembles of folded and ID proteins show distinct characteristics and respond differently to sequence randomization. By studying the way evolutionary constraints act on IDPs to disable a negative function (aggregation) we might gain insight into the mechanisms by which function-enabling information is encoded in IDPs.
Are Synonymous Sites in Primates and Rodents Functionally Constrained?
Price, Nicholas; Graur, Dan
2016-01-01
It has been claimed that synonymous sites in mammals are under selective constraint. Furthermore, in many studies the selective constraint at such sites in primates was claimed to be more stringent than that in rodents. Given the larger effective population sizes in rodents than in primates, the theoretical expectation is that selection in rodents would be more effective than that in primates. To resolve this contradiction between expectations and observations, we used processed pseudogenes as a model for strictly neutral evolution, and estimated selective constraint on synonymous sites using the rate of substitution at pseudosynonymous and pseudononsynonymous sites in pseudogenes as the neutral expectation. After controlling for the effects of GC content, our results were similar to those from previous studies, i.e., synonymous sites in primates exhibited evidence of higher selective constraint than those in rodents. Specifically, our results indicated that in primates up to 24% of synonymous sites could be under purifying selection, while in rodents synonymous sites evolved neutrally. To further control for shifts in GC content, we estimated selective constraint at fourfold degenerate sites using a maximum parsimony approach. This allowed us to estimate selective constraint using mutational patterns that cause a shift in GC content (GT ↔ TG, CT ↔ TC, GA ↔ AG, and CA ↔ AC) and ones that do not (AT ↔ TA and CG ↔ GC). Using this approach, we found that synonymous sites evolve neutrally in both primates and rodents. Apparent deviations from neutrality were caused by a higher rate of C → A and C → T mutations in pseudogenes. Such differences are most likely caused by the shift in GC content experienced by pseudogenes. We conclude that previous estimates according to which 20-40% of synonymous sites in primates were under selective constraint were most likely artifacts of the biased pattern of mutation.
Eskandari, Mahnaz; Homaee, Mehdi; Mahmodi, Shahla
2012-08-01
Landfill site selection is a complicated multi-criteria land use planning problem that should convince all related stakeholders with different insights. This paper addresses an integrated approach to landfill siting based on conflicting opinions among environmental, economic and socio-cultural experts. In order to reach an optimized siting decision, the issue was investigated from different viewpoints. In the first step, based on opinion sampling and questionnaire results from 35 experts familiar with local conditions, the national environmental legislation and international practices, 13 constraints and 15 factors were built into a hierarchical structure. Factors were divided into environmental, economic and socio-cultural groups. In the next step, the GIS database was developed based on the designated criteria. In the third stage, criteria standardization and criteria weighting were accomplished. The relative importance weights of criteria and subcriteria were estimated using the analytical hierarchy process and rank ordering methods, respectively, based on the different experts' opinions. Thereafter, using the simple additive weighting method, suitability maps for landfill siting in Marvdasht, Iran, were produced from the environmental, economic and socio-cultural perspectives. The importance of each group of criteria in its own perspective was assigned to be higher than that of the two other groups. In the fourth stage, the final suitability map was obtained by crossing the three resulting maps and was reported in five suitability classes for landfill construction. This map indicated that almost 1224 ha of the study area can be considered the most suitable class for landfill siting considering all perspectives. In the last stage, a comprehensive field visit was performed to verify the site selected by the proposed model. This field inspection confirmed the proposed integrated approach for landfill siting. Copyright © 2012 Elsevier Ltd. All rights reserved.
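The simple additive weighting (SAW) step described above reduces to a weighted sum of standardized criterion scores per candidate cell or site. A minimal sketch, with entirely made-up weights and scores (the paper's actual 15 factors and AHP weights are not reproduced here):

```python
import numpy as np

# Illustrative only: three candidate sites (rows) scored on three
# standardized criteria, with AHP-style weights summing to one.
weights = np.array([0.5, 0.3, 0.2])      # e.g. environmental, economic, socio-cultural
scores = np.array([[0.9, 0.4, 0.7],      # site A
                   [0.6, 0.8, 0.5],      # site B
                   [0.3, 0.9, 0.9]])     # site C

saw = scores @ weights                   # simple additive weighting score per site
best = int(np.argmax(saw))               # index of the most suitable site
```

In the GIS setting each "site" is a raster cell, so the same matrix product produces a suitability map, which is then binned into classes as in the paper.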
Re-Assembling Formal Features in Second Language Acquisition: Beyond Minimalism
ERIC Educational Resources Information Center
Carroll, Susanne E.
2009-01-01
In this commentary, Lardiere's discussion of features is compared with the use of features in constraint-based theories, and it is argued that constraint-based theories might offer a more elegant account of second language acquisition (SLA). Further evidence is reported to question the accuracy of Chierchia's (1998) Nominal Mapping Parameter.…
MethPrimer: designing primers for methylation PCRs.
Li, Long-Cheng; Dahiya, Rajvir
2002-11-01
DNA methylation is an epigenetic mechanism of gene regulation. Bisulfite-conversion-based PCR methods, such as bisulfite sequencing PCR (BSP) and methylation-specific PCR (MSP), remain the most commonly used techniques for methylation mapping. Existing primer design programs developed for standard PCR cannot handle primer design for bisulfite-conversion-based PCRs, owing to changes in DNA sequence context caused by bisulfite treatment and to the many special constraints on both the primers and the region to be amplified in such experiments. Therefore, the present study was designed to develop a program for such applications. MethPrimer, based on Primer3, is a program for designing PCR primers for methylation mapping. It first takes a DNA sequence as its input and searches the sequence for potential CpG islands. Primers are then picked around the predicted CpG islands or around regions specified by users. MethPrimer can design primers for BSP and MSP. Results of primer selection are delivered through a web browser in text and graphic views.
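The CpG island search that precedes primer picking is typically based on windowed GC content and the observed/expected CpG ratio (the classic Gardiner-Garden & Frommer statistics). The sketch below computes those two statistics for a single window; it is a toy stand-in, and MethPrimer's exact thresholds and windowing may differ.

```python
def cpg_stats(seq):
    """GC fraction and observed/expected CpG ratio for one sequence window.
    obs/exp = N(CpG) * length / (N(C) * N(G))."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = seq.count("CG")
    gc_frac = (c + g) / n
    obs_exp = cpg * n / (c * g) if c and g else 0.0
    return gc_frac, obs_exp

def is_cpg_island(seq, min_gc=0.5, min_ratio=0.6):
    """Classic island criteria: GC > 50% and obs/exp CpG > 0.6
    (length and window-merging rules omitted for brevity)."""
    gc, ratio = cpg_stats(seq)
    return gc > min_gc and ratio > min_ratio
```

Bisulfite treatment converts unmethylated C to T, which is why these islands must be located on the original sequence before primers are designed against the converted one.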
Efficient robust reconstruction of dynamic PET activity maps with radioisotope decay constraints.
Gao, Fei; Liu, Huafeng; Shi, Pengcheng
2010-01-01
Dynamic PET imaging performs a sequence of data acquisitions in order to provide visualization and quantification of physiological changes in specific tissues and organs. Reconstruction of activity maps is generally the first step in dynamic PET. State-space H∞ approaches have proved to be robust methods for PET image reconstruction; however, temporal constraints are not considered during the reconstruction process. In addition, state-space strategies for PET image reconstruction have been computationally prohibitive for practical use because of the need for matrix inversion. In this paper, we present a minimax formulation of the dynamic PET imaging problem in which a radioisotope decay model is employed as a physics-based temporal constraint on the photon counts. Furthermore, a robust steady-state H∞ filter is developed to significantly improve computational efficiency with minimal loss of accuracy. Experiments are conducted on Monte Carlo simulated image sequences for quantitative analysis and validation.
Statistical Inference in Hidden Markov Models Using k-Segment Constraints
Titsias, Michalis K.; Holmes, Christopher C.; Yau, Christopher
2016-01-01
Hidden Markov models (HMMs) are among the most widely used statistical methods for analyzing sequence data. However, the reporting of output from HMMs has largely been restricted to the presentation of the most-probable (MAP) hidden state sequence, found via the Viterbi algorithm, or the sequence of most-probable marginals, found using the forward–backward algorithm. In this article, we expand the amount of information that can be obtained from the posterior distribution of an HMM by introducing linear-time dynamic programming recursions that, conditional on a user-specified constraint on the number of segments, allow us to (i) find MAP sequences, (ii) compute posterior probabilities, and (iii) simulate sample paths. We collectively call these recursions k-segment algorithms and illustrate their utility using simulated and real examples. We also highlight the prospective and retrospective use of k-segment constraints for fitting HMMs or exploring existing model fits. Supplementary materials for this article are available online. PMID:27226674
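For context, the standard Viterbi recursion that the k-segment algorithms extend can be written in a few lines. The sketch below is plain MAP decoding in log space; the paper's contribution is to augment this same dynamic program with a segment-count index, which is not shown here.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """MAP state sequence of an HMM. log_init: (K,), log_trans: (K, K)
    with [from, to] convention, log_emit: (T, K) per-step log-likelihoods."""
    T, K = log_emit.shape
    delta = log_init + log_emit[0]          # best log-prob ending in each state
    back = np.zeros((T, K), dtype=int)      # backpointers
    for t in range(1, T):
        scores = delta[:, None] + log_trans           # (from, to)
        back[t] = np.argmax(scores, axis=0)
        delta = scores[back[t], np.arange(K)] + log_emit[t]
    path = [int(np.argmax(delta))]
    for t in range(T - 1, 0, -1):           # trace backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

The k-segment variant carries an extra dimension counting state changes so far, so the recursion cost grows only linearly in the segment bound k.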
NASA Astrophysics Data System (ADS)
She, Yuchen; Li, Shuang
2018-01-01
A planning algorithm to calculate a satellite's optimal slew trajectory under a given keep-out constraint is proposed. An energy-optimal formulation is developed for the Space-based multiband astronomical Variable Objects Monitor Mission Analysis and Planning (MAP) system. The innovation of the proposed planning algorithm lies in the fact that the satellite structure and control limitations are not treated as optimization constraints but are instead formulated into the cost function. This modification relieves the burden on the optimizer and increases optimization efficiency, which is the major challenge in designing the MAP system. A mathematical analysis is given to prove that there is a proportional mapping between the formulation and the satellite controller output. Simulations with different scenarios demonstrate the efficiency of the developed algorithm.
NASA Astrophysics Data System (ADS)
BICEP2 Collaboration; Keck Array Collaboration; Ade, P. A. R.; Ahmed, Z.; Aikin, R. W.; Alexander, K. D.; Barkats, D.; Benton, S. J.; Bischoff, C. A.; Bock, J. J.; Bowens-Rubin, R.; Brevik, J. A.; Buder, I.; Bullock, E.; Buza, V.; Connors, J.; Crill, B. P.; Duband, L.; Dvorkin, C.; Filippini, J. P.; Fliescher, S.; Germaine, T. St.; Ghosh, T.; Grayson, J.; Harrison, S.; Hildebrandt, S. R.; Hilton, G. C.; Hui, H.; Irwin, K. D.; Kang, J.; Karkare, K. S.; Karpel, E.; Kaufman, J. P.; Keating, B. G.; Kefeli, S.; Kernasovskiy, S. A.; Kovac, J. M.; Kuo, C. L.; Larson, N.; Leitch, E. M.; Megerian, K. G.; Moncelsi, L.; Namikawa, T.; Netterfield, C. B.; Nguyen, H. T.; O'Brient, R.; Ogburn, R. W.; Pryke, C.; Richter, S.; Schillaci, A.; Schwarz, R.; Sheehy, C. D.; Staniszewski, Z. K.; Steinbach, B.; Sudiwala, R. V.; Teply, G. P.; Thompson, K. L.; Tolan, J. E.; Tucker, C.; Turner, A. D.; Vieregg, A. G.; Weber, A. C.; Wiebe, D. V.; Willmert, J.; Wong, C. L.; Wu, W. L. K.; Yoon, K. W.
2017-11-01
We present the strongest constraints to date on anisotropies of cosmic microwave background (CMB) polarization rotation, derived from 150 GHz data taken by the BICEP2 & Keck Array CMB experiments up to and including the 2014 observing season (BK14). The definition of the polarization angle in BK14 maps has gone through self-calibration, in which the overall angle is adjusted to minimize the observed TB and EB power spectra. After this procedure, the Q/U maps lose sensitivity to a uniform polarization rotation but are still sensitive to anisotropies of polarization rotation. This analysis places constraints on the anisotropies of polarization rotation, which could be generated by CMB photons interacting with axionlike pseudoscalar fields or by Faraday rotation induced by primordial magnetic fields. The sensitivity of the BK14 maps (~3 μK-arcmin) makes it possible to reconstruct anisotropies of the polarization rotation angle and measure their angular power spectrum much more precisely than previous attempts. Our data are found to be consistent with no polarization rotation anisotropies, improving the upper bound on the amplitude of the rotation angle spectrum by roughly an order of magnitude compared to the previous best constraints. Our results lead to an order-of-magnitude better constraint on the coupling constant of the Chern-Simons electromagnetic term, g_aγ ≤ 7.2 × 10^-2/H_I (95% confidence), than the constraint derived from the B-mode spectrum, where H_I is the inflationary Hubble scale. This constraint leads to a limit on the decay constant of 10^-6 ≲ f_a/M_pl in the mass range 10^-33 ≤ m_a ≤ 10^-28 eV for r = 0.01, assuming g_aγ ∼ α/(2π f_a), with α denoting the fine structure constant. The upper bound on the amplitude of the primordial magnetic fields is 30 nG (95% confidence) from the polarization rotation anisotropies.
Lucas, Lauren K; Nice, Chris C; Gompert, Zachariah
2018-03-13
Patterns of phenotypic variation within and among species can be shaped and constrained by trait genetic architecture. This is particularly true for complex traits, such as butterfly wing patterns, that consist of multiple elements. Understanding the genetics of complex trait variation across species boundaries is difficult, as it necessitates mapping in structured populations and can involve many loci with small or variable phenotypic effects. Here, we investigate the genetic architecture of complex wing pattern variation in Lycaeides butterflies as a case study of mapping multivariate traits in wild populations that include multiple nominal species or groups. We identify conserved modules of integrated wing pattern elements within populations and species. We show that trait covariances within modules have a genetic basis and thus represent genetic constraints that can channel evolution. Consistent with this, we find evidence that evolutionary changes in wing patterns among populations and species occur in the directions of genetic covariances within these groups. Thus, we show that genetic constraints affect patterns of biological diversity (wing pattern) in Lycaeides, and we provide an analytical template for similar work in other systems. © 2018 John Wiley & Sons Ltd.
ISSUES IN DIGITAL IMAGE PROCESSING OF AERIAL PHOTOGRAPHY FOR MAPPING SUBMERSED AQUATIC VEGETATION
The paper discusses the numerous issues that needed to be addressed when developing a methodology for mapping Submersed Aquatic Vegetation (SAV) from digital aerial photography. Specifically, we discuss 1) choice of film; 2) consideration of tide and weather constraints; 3) in-s...
Health Information System Role-Based Access Control Current Security Trends and Challenges.
de Carvalho Junior, Marcelo Antonio; Bandiera-Paiva, Paulo
2018-01-01
This article's objective is to highlight implementation characteristics, concerns, and limitations of role-based access control (RBAC) use in health information systems (HIS), through an industry-focused literature review of current publications. Based on the findings, we assess whether there is an indication that RBAC is obsolete given HIS authorization control needs. We selected articles related to our investigation theme, "RBAC trends and limitations", in 4 different sources related to health informatics or to the engineering technical field. To do so, we applied the following search query string: "Role-Based Access Control" OR "RBAC" AND "Health information System" OR "EHR" AND "Trends" OR "Challenges" OR "Security" OR "Authorization" OR "Attacks" OR "Permission Assignment" OR "Permission Relation" OR "Permission Mapping" OR "Constraint". We followed the applicable PRISMA flow and the general methodology used in software engineering for systematic reviews. 20 articles were selected after applying inclusion and exclusion criteria, resulting in contributions from 10 different countries. 17 articles advocate RBAC adaptations. The main security trends and limitations mapped were related to emergency access, grant delegation, and interdomain access control. Several publications proposed RBAC adaptations and enhancements in order to cope with current HIS use characteristics. Most of the existing RBAC studies are not related to the health informatics industry, though. There is no clear indication of RBAC obsolescence for HIS use.
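The core RBAC model under review (users are assigned roles; roles carry permissions; access checks resolve through the role mapping) can be shown in a few lines. All role, permission, and user names below are made up for illustration.

```python
# Minimal RBAC: users -> roles -> permissions. Names are hypothetical.
ROLE_PERMS = {
    "physician": {"read_record", "write_record", "prescribe"},
    "nurse": {"read_record", "write_vitals"},
    "clerk": {"read_demographics"},
}
USER_ROLES = {"alice": {"physician"}, "bob": {"nurse", "clerk"}}

def has_permission(user, perm):
    """True if any role assigned to `user` grants `perm`."""
    return any(perm in ROLE_PERMS.get(r, set())
               for r in USER_ROLES.get(user, set()))
```

The limitations the review maps out are visible even in this toy: a static role-to-permission table has no notion of emergency ("break-the-glass") access, delegation of a grant to another user, or access across administrative domains, which is precisely what the surveyed adaptations add.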
Vast Portfolio Selection with Gross-exposure Constraints*
Fan, Jianqing; Zhang, Jingjin; Yu, Ke
2012-01-01
We introduce large portfolio selection using gross-exposure constraints. We show that with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices perform similarly to the theoretical optimal ones, and there is no error-accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. The applications to portfolio selection, tracking, and improvement are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and 600 stocks randomly selected from the Russell 3000. PMID:23293404
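Gross exposure is simply the L1 norm of the portfolio weight vector; a fully invested no-short-sale portfolio has gross exposure exactly 1, and relaxing the constraint ||w||_1 ≤ c to c slightly above 1 is what "allowing some short positions" means above. A minimal sketch (the example weights are ours):

```python
import numpy as np

def gross_exposure(w):
    """L1 norm of the weight vector: long plus |short| exposure."""
    return float(np.abs(w).sum())

# Fully invested (weights sum to 1) with one small short position:
w = np.array([0.6, 0.5, -0.1])
# gross_exposure(w) exceeds 1 by twice the short exposure.
```

Tightening c toward 1 shrinks the feasible set toward the no-short-sale portfolio, which is the mechanism behind the paper's no-error-accumulation result.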
Strong Purifying Selection at Synonymous Sites in D. melanogaster
Lawrie, David S.; Messer, Philipp W.; Hershberg, Ruth; Petrov, Dmitri A.
2013-01-01
Synonymous sites are generally assumed to be subject to weak selective constraint. For this reason, they are often neglected as a possible source of important functional variation. We use site frequency spectra from deep population sequencing data to show that, contrary to this expectation, 22% of four-fold synonymous (4D) sites in Drosophila melanogaster evolve under very strong selective constraint while few, if any, appear to be under weak constraint. Linking polymorphism with divergence data, we further find that the fraction of synonymous sites exposed to strong purifying selection is higher for those positions that show slower evolution on the Drosophila phylogeny. The function underlying the inferred strong constraint appears to be separate from splicing enhancers, nucleosome positioning, and the translational optimization generating canonical codon bias. The fraction of synonymous sites under strong constraint within a gene correlates well with gene expression, particularly in the mid-late embryo, pupae, and adult developmental stages. Genes enriched in strongly constrained synonymous sites tend to be particularly functionally important and are often involved in key developmental pathways. Given that the observed widespread constraint acting on synonymous sites is likely not limited to Drosophila, the role of synonymous sites in genetic disease and adaptation should be reevaluated. PMID:23737754
NASA Astrophysics Data System (ADS)
Pásztor, László; Bakacsi, Zsófia; Laborczi, Annamária; Takács, Katalin; Szatmári, Gábor; Tóth, Tibor; Szabó, József
2016-04-01
One of the main objectives of the EU's Common Agricultural Policy is to encourage the maintenance of agricultural production in Areas Facing Natural Constraints (ANC), in order to sustain agricultural production and the use of natural resources in a way that secures both stable production and income for farmers and protects the environment. ANC assignment has both ecological and serious economic aspects. Recently, it has been suggested that the delimitation of ANCs be carried out using common biophysical diagnostic criteria on low soil productivity and poor climate conditions all over Europe. The criterion system was elaborated, and has been repeatedly upgraded, by the JRC. Operational implementation is under member state competence. This process requires the application of available soil databases and proper thematic and spatial inference methods. In this paper we present the inferences applied for the latest identification and delineation of areas with low soil productivity in Hungary, according to the JRC biophysical criteria related to soil: limited soil drainage; texture and stoniness (coarse texture, heavy clay, vertic properties); shallow rooting depth; and chemical properties (salinity, sodicity, low pH). The compilation of target-specific maps was based on the available legacy and recently collected data. In the present work three different data sources were used. The most relevant available data were queried from the datasets for each mapped criterion, either for direct application or for the compilation of a suitable synthetic (non-measured) parameter. In some cases the values of the target variable originated from only one database, in other cases from several. The reference dataset used in the mapping process was set up after substantial statistical analysis and filtering. It consisted of the values of the target variable attributed to the finally selected georeferenced locations. For spatial inference, regression kriging was applied.
Accuracy assessment was carried out by Leave-One-Out Cross-Validation (LOOCV). In some cases the DSM product directly provided the delineation result by simple querying; in other cases further interpretation of the map was necessary. As a result of our work, not only was the spatial fulfilment of the European biophysical criteria assessed and provided to decision makers, but unique digital soil map products were also elaborated, regionalizing specific soil features that had never been mapped before, even nationally, at 1 ha spatial resolution. Acknowledgement: Our work was supported by the "European Fund for Agricultural and Rural Development: Europe investing in rural areas" with the support of the European Union and Hungary and by the Hungarian National Scientific Research Foundation (OTKA, Grant No. K105167).
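The leave-one-out scheme mentioned above can be sketched in a few lines. This is only an illustration, not the authors' implementation: a simple inverse-distance predictor stands in for regression kriging, and the sample locations and values are hypothetical.

```python
# Minimal LOOCV sketch: each sample is predicted from all the others,
# and the prediction errors are pooled into a single RMSE figure.
import math

def idw_predict(train, x, power=2.0):
    """Inverse-distance-weighted prediction at location x from (loc, value) pairs."""
    num = den = 0.0
    for loc, val in train:
        d = math.dist(loc, x)
        if d == 0.0:
            return val  # exact hit on a training location
        w = 1.0 / d ** power
        num += w * val
        den += w
    return num / den

def loocv_rmse(samples):
    """RMSE over all folds: each point is predicted from the remaining ones."""
    errors = []
    for i, (loc, val) in enumerate(samples):
        train = samples[:i] + samples[i + 1:]
        errors.append(idw_predict(train, loc) - val)
    return math.sqrt(sum(e * e for e in errors) / len(errors))

# Hypothetical soil-property samples at four georeferenced locations.
samples = [((0.0, 0.0), 5.1), ((1.0, 0.0), 5.4), ((0.0, 1.0), 4.9), ((1.0, 1.0), 5.2)]
rmse = loocv_rmse(samples)
```

In a real workflow the predictor would be the fitted regression-kriging model and the samples the reference dataset described above; the cross-validation loop itself is unchanged.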
Science objectives and observing strategy for the OMEGA imaging spectrometer on Mars-Express
NASA Astrophysics Data System (ADS)
Erard, S.; Bibring, J.-P.; Drossart, P.; Forget, F.; Schmitt, B.; OMEGA Team
2003-04-01
The science objectives of OMEGA, which were first defined at the time of instrument selection for Mars-Express, were recently updated to integrate new results from MGS and Odyssey concerning three main fields: the Martian surface, the atmosphere, and polar processes. Thematic categories of observations are derived from the scientific objectives wherever spectral observations from OMEGA are expected to provide insights into Mars' present situation and evolution. Targets within these categories are selected on the basis of their expected usefulness, which is related to their intrinsic properties and to the instrument's capabilities. The whole surface will be mapped at low resolution (~5 km/pixel) in the course of the nominal mission, and possibly routinely at very coarse resolution to monitor time-varying processes from apocenter. However, only 5% of the surface can be observed at high resolution (up to 350 m/pixel) owing to constraints on telemetry rate. HR targets are therefore selected on the basis of telemetry constraints, orbital parameters, observing opportunities (visibility under given conditions), and spacecraft functionalities (e.g., depointing capacity), then prioritized within each category according to the probability of performing significant observations with OMEGA (in many situations, according to the estimated dust coverage). Target selection is performed interactively between OMEGA co-Is, in close contact with teams from other MEx experiments (mostly HRSC, PFS and SPICAM) and other missions (e.g., MER and MRO). Most HR surface targets are selected on the basis of detailed examination of Viking, THEMIS, and MOC HR images. Other surface targets include areas presenting unusual spectral properties in previous observations, or suspected to exhibit signatures of hydrothermal activity. Proposed landing sites and suggested source areas for the SNC meteorites are also included.
Atmospheric/polar objectives more often translate into particular observing modes, sometimes at HR (e.g., limb observations, EPF sequences). The constraints are related to local time and seasonal occurrence of particular processes, and to spacecraft pointing. About 1000 HR targets are currently identified in the Southern hemisphere (first six months in orbit). The targets are described in a database with geographic coordinates in the IAU-2000 system, context and detailed images, optimum observing conditions, science rationale and references. This database is currently being interfaced with ESA's MAPPS planning software.
Spielman, Stephanie J; Wilke, Claus O
2016-11-01
The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. 
Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution.
Constraint-based Attribute and Interval Planning
NASA Technical Reports Server (NTRS)
Jonsson, Ari; Frank, Jeremy
2013-01-01
In this paper we describe Constraint-based Attribute and Interval Planning (CAIP), a paradigm for representing and reasoning about plans. The paradigm enables the description of planning domains with time, resources, concurrent activities, mutual exclusions among sets of activities, disjunctive preconditions and conditional effects. We provide a theoretical foundation for the paradigm, based on temporal intervals and attributes. We then show how plans are naturally expressed by networks of constraints, and show that the process of planning maps directly to dynamic constraint reasoning. In addition, we define compatibilities, a compact mechanism for describing planning domains. We describe how this framework can incorporate the use of constraint reasoning technology to improve planning. Finally, we describe EUROPA, an implementation of the CAIP framework.
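The idea of a plan as timed intervals on attributes, with mutual exclusion expressed as a constraint, can be illustrated with a toy checker. This is a hedged sketch, not the EUROPA implementation; the attribute names, predicates and times below are invented.

```python
# Toy interval/attribute plan model: two tokens on the same attribute with
# different predicate values must not overlap in time (mutual exclusion).
from dataclasses import dataclass

@dataclass
class Interval:
    attribute: str   # which attribute/timeline the token occupies
    start: int
    end: int
    value: str       # predicate holding over the interval

def overlaps(a: Interval, b: Interval) -> bool:
    return a.start < b.end and b.start < a.end

def mutex_violations(plan):
    """Pairs of distinct-value intervals that overlap on the same attribute."""
    bad = []
    for i, a in enumerate(plan):
        for b in plan[i + 1:]:
            if a.attribute == b.attribute and a.value != b.value and overlaps(a, b):
                bad.append((a, b))
    return bad

plan = [Interval("camera", 0, 5, "imaging"),
        Interval("camera", 4, 8, "calibrating"),   # overlaps "imaging"
        Interval("antenna", 0, 8, "transmitting")]
violations = mutex_violations(plan)
```

In a full CAIP-style system such checks become constraints in a dynamic constraint network rather than a post-hoc scan, but the feasibility condition they encode is the same.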
Constraints on the optical depth of galaxy groups and clusters
Flender, Samuel; Nagai, Daisuke; McDonald, Michael
2017-03-10
Here, future data from galaxy redshift surveys, combined with high-resolution maps of the cosmic microwave background, will enable measurements of the pairwise kinematic Sunyaev–Zel'dovich (kSZ) signal with unprecedented statistical significance. This signal probes the matter-velocity correlation function, scaled by the average optical depth (τ) of the galaxy groups and clusters in the sample, and is thus of fundamental importance for cosmology. However, in order to translate pairwise kSZ measurements into cosmological constraints, external constraints on τ are necessary. In this work, we present a new model for the intracluster medium, which takes into account star formation, feedback, non-thermal pressure, and gas cooling. Our semi-analytic model is computationally efficient and can reproduce results of recent hydrodynamical simulations of galaxy cluster formation. We calibrate the free parameters in the model using recent X-ray measurements of gas density profiles of clusters, and gas masses of groups and clusters. Our observationally calibrated model predicts the average τ_500 (i.e., the integrated τ within a disk of size R_500) to better than 6% modeling uncertainty (at 95% confidence level). If the remaining uncertainties associated with other astrophysical effects and X-ray selection effects can be better understood, our model for the optical depth should break the degeneracy between optical depth and cluster velocity in the analysis of future pairwise kSZ measurements and improve cosmological constraints from the combination of upcoming galaxy and CMB surveys, including on the nature of dark energy, modified gravity, and neutrino mass.
Empty tracks optimization based on Z-Map model
NASA Astrophysics Data System (ADS)
Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao
2017-12-01
For parts with many features, there are many empty tracks during machining; if these tracks are not optimized, machining efficiency is seriously affected. In this paper, the characteristics of empty tracks are studied in detail. Combining this with an existing optimization algorithm, a new track optimization method based on the Z-Map model is proposed. In this method, the tool tracks are divided into unit processing segments, and the Z-Map model simulation technique is used to analyze the order constraints between the unit segments. The empty-stroke optimization problem is transformed into a TSP with sequential constraints, which is then solved with a genetic algorithm. This optimization method can handle not only simple structural parts but also complex structural parts, effectively planning the empty tracks and greatly improving processing efficiency.
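The "TSP with sequential constraints" formulation above can be made concrete with a small example. The paper solves it with a genetic algorithm; for brevity this sketch enumerates feasible visiting orders directly, and the segment coordinates and precedence constraint are invented.

```python
# Empty-track minimization as a precedence-constrained TSP (path version):
# choose an order over unit segments that respects machining precedence and
# minimizes the total empty travel between segment entry points.
import itertools, math

segments = {"A": (0, 0), "B": (4, 0), "C": (4, 3), "D": (0, 3)}  # entry points
precedence = [("A", "C")]   # A must be machined before C (order constraint)

def empty_track_length(order):
    pts = [segments[s] for s in order]
    return sum(math.dist(p, q) for p, q in zip(pts, pts[1:]))

def feasible(order):
    return all(order.index(a) < order.index(b) for a, b in precedence)

best = min((o for o in itertools.permutations(segments) if feasible(o)),
           key=empty_track_length)
```

A genetic algorithm replaces the exhaustive `min` with crossover/mutation over permutations, repairing or penalizing orders that violate `feasible`; the objective and constraint check are the same.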
Optimum Strategies for Selecting Descent Flight-Path Angles
NASA Technical Reports Server (NTRS)
Wu, Minghong G. (Inventor); Green, Steven M. (Inventor)
2016-01-01
An information processing system and method for adaptively selecting an aircraft descent flight path are provided. The system receives flight adaptation parameters, including aircraft flight descent time period, aircraft flight descent airspace region, and aircraft flight descent flyability constraints. The system queries a plurality of flight data sources and retrieves flight information including any of winds and temperatures aloft data, airspace/navigation constraints, airspace traffic demand, and an airspace arrival delay model. The system calculates a set of candidate descent profiles, each defined by at least one of a flight path angle and a descent rate, and each including an aggregated total fuel consumption value for the aircraft following a calculated trajectory, and a flyability constraints metric for the calculated trajectory. The system selects the best candidate descent profile having the least fuel consumption value while the flyability constraints metric remains within the aircraft flight descent flyability constraints.
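The selection rule described above (minimum fuel among profiles whose flyability metric stays within limits) reduces to a constrained argmin. The profile numbers and the scalar flyability metric below are invented for illustration only.

```python
# Candidate descent profiles: flight path angle, total fuel, flyability metric.
candidates = [
    {"fpa_deg": -2.5, "fuel_kg": 310.0, "flyability": 0.9},
    {"fpa_deg": -3.0, "fuel_kg": 295.0, "flyability": 1.4},  # exceeds the limit below
    {"fpa_deg": -2.8, "fuel_kg": 301.0, "flyability": 1.0},
]

def select_profile(cands, flyability_limit=1.2):
    """Least-fuel profile among those within the flyability limit, else None."""
    ok = [c for c in cands if c["flyability"] <= flyability_limit]
    return min(ok, key=lambda c: c["fuel_kg"]) if ok else None

best = select_profile(candidates)   # the -2.8 deg profile: cheapest feasible one
```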
Action-Based Dynamical Modelling For The Milky Way Disk
NASA Astrophysics Data System (ADS)
Trick, Wilma; Rix, Hans-Walter; Bovy, Jo
2016-09-01
We present RoadMapping, a full-likelihood dynamical modelling machinery that aims to recover the Milky Way's (MW) gravitational potential from large samples of stars in the Galactic disk. RoadMapping models the observed positions and velocities of stars with a parameterized, action-based distribution function (DF) in a parameterized axisymmetric gravitational potential (Binney & McMillan 2011, Binney 2012, Bovy & Rix 2013). In anticipation of the Gaia data release in autumn, we have fully tested RoadMapping and demonstrated its robustness against the breakdown of its assumptions. Using large suites of mock data, we investigated in isolated test cases how the modelling would be affected if the data's true potential or DF was not included in the families of potentials and DFs assumed by RoadMapping, or if we misjudged measurement errors or the spatial selection function (SF) (Trick et al., submitted to ApJ). We found that the potential can be robustly recovered (within the limitations of the assumed potential model), even for minor misjudgments in DF or SF, or for proper motion errors or distances known to within 10%. We were also able to demonstrate that RoadMapping is still successful if the strong assumption of axisymmetry breaks down (Trick et al., in preparation). Data drawn from a high-resolution simulation (D'Onghia et al. 2013) of a MW-like galaxy with pronounced spiral arms neither follows the assumed simple DF nor comes from an axisymmetric potential. We found that as long as the survey volume is large enough, RoadMapping gives good average constraints on the galaxy's potential. We are planning to apply RoadMapping to a real data set, the Tycho-2 catalogue (Høg et al. 2000), very soon, and may be able to present preliminary results at the conference.
Approach for scene reconstruction from the analysis of a triplet of still images
NASA Astrophysics Data System (ADS)
Lechat, Patrick; Le Mestre, Gwenaelle; Pele, Danielle
1997-03-01
Three-dimensional modeling of a scene from the automatic analysis of 2D image sequences is a major challenge for future interactive audiovisual services based on 3D content manipulation, such as virtual visits, 3D teleconferencing and interactive television. We propose a scheme that computes 3D object models from stereo analysis of image triplets shot by calibrated cameras. After matching the different views with a correlation-based algorithm, a depth map referring to a given view is built using a fusion criterion that takes into account depth coherency, visibility constraints and correlation scores. Because luminance segmentation helps to compute accurate object borders and to detect and improve unreliable depth values, a two-step segmentation algorithm using both the depth map and the graylevel image is applied to extract the object masks. First, an edge detection segments the luminance image into regions, and a multimodal thresholding method selects depth classes from the depth map. Then the regions are merged and labelled with the different depth class numbers by using a coherence test on depth values, according to the rate of reliable and dominant depth values and the size of the regions. The structures of the segmented objects are obtained with a constrained Delaunay triangulation followed by a refining stage. Finally, texture mapping is performed using Open Inventor or VRML 1.0 tools.
Constraint programming based biomarker optimization.
Zhou, Manli; Luo, Youxi; Sun, Guoquan; Mai, Guoqin; Zhou, Fengfeng
2015-01-01
Efficient and intuitive characterization of biological big data is becoming a major challenge for modern bio-OMIC based scientists. Interactive visualization and exploration of big data has proven to be one of the successful solutions. Most existing feature selection algorithms do not allow interactive inputs from users during the optimization process of feature selection. This study investigates this question by fixing a few user-input features in the final selected feature subset, formulating these user-input features as constraints for a programming model. The proposed algorithm, fsCoP (feature selection based on constrained programming), performs similarly to or much better than the existing feature selection algorithms, even with constraints from both the literature and the existing algorithms. An fsCoP biomarker may be intriguing for further wet-lab validation, since it satisfies both the classification optimization function and the biomedical knowledge. fsCoP may also be used for the interactive exploration of bio-OMIC big data by interactively adding user-defined constraints for modeling.
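The core constraint idea (user-pinned features are forced into the selected subset, the rest compete on merit) can be sketched with a greedy toy version. This is not the fsCoP algorithm itself, which solves a constrained programming model; the scoring function and feature names below are hypothetical.

```python
# Greedy stand-in for constrained feature selection: pinned features are
# always included, remaining slots go to the best-scoring other features.
def select_features(scores, pinned, k):
    """Pick k features: all user-pinned ones first, then best-scoring others."""
    pinned = [f for f in pinned if f in scores]
    rest = sorted((f for f in scores if f not in pinned),
                  key=lambda f: scores[f], reverse=True)
    return pinned + rest[: max(0, k - len(pinned))]

# Hypothetical per-feature relevance scores.
scores = {"geneA": 0.91, "geneB": 0.40, "geneC": 0.75, "geneD": 0.62}
subset = select_features(scores, pinned=["geneB"], k=3)  # geneB kept despite low score
```

The point of the constraint is visible in the result: `geneB` survives selection even though an unconstrained selector would drop it.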
Alsubaie, Naif M; Youssef, Ahmed A; El-Sheimy, Naser
2017-09-30
This paper introduces a new method which facilitates the use of smartphones as a handheld low-cost mobile mapping system (MMS). Smartphones are becoming more sophisticated and smarter, and are quickly closing the gap between computers and portable tablet devices. The current generation of smartphones is equipped with low-cost GPS receivers, high-resolution digital cameras, and micro-electro-mechanical systems (MEMS)-based navigation sensors (e.g., accelerometers, gyroscopes, magnetic compasses, and barometers). These sensors are in fact the essential components of an MMS. However, smartphone navigation sensors suffer from the poor accuracy of global navigation satellite systems (GNSS), accumulated drift, and high signal noise. These issues affect the accuracy of the initial Exterior Orientation Parameters (EOPs) that are input into the bundle adjustment algorithm, which then produces inaccurate 3D mapping solutions. This paper proposes new methodologies for increasing the accuracy of direct geo-referencing of smartphones using relative orientation and smartphone motion sensor measurements, as well as integrating geometric scene constraints into free network bundle adjustment. The new methodologies fuse the relative orientations of the captured images and their corresponding motion sensor measurements to improve the initial EOPs. Then, the geometric features (e.g., horizontal and vertical linear lines) visible in each image are extracted and used as constraints in the bundle adjustment procedure, which corrects the relative position and orientation of the 3D mapping solution.
Arkell, Karolina; Knutson, Hans-Kristian; Frederiksen, Søren S; Breil, Martin P; Nilsson, Bernt
2018-01-12
With the shift of focus of the regulatory bodies, from fixed process conditions towards flexible ones based on process understanding, model-based optimization is becoming an important tool for process development within the biopharmaceutical industry. In this paper, a multi-objective optimization study of separation of three insulin variants by reversed-phase chromatography (RPC) is presented. The decision variables were the load factor, the concentrations of ethanol and KCl in the eluent, and the cut points for the product pooling. In addition to the purity constraints, a solubility constraint on the total insulin concentration was applied. The insulin solubility is a function of the ethanol concentration in the mobile phase, and the main aim was to investigate the effect of this constraint on the maximal productivity. Multi-objective optimization was performed with and without the solubility constraint, and visualized as Pareto fronts, showing the optimal combinations of the two objectives, productivity and yield, for each case. Comparison of the constrained and unconstrained Pareto fronts showed that the former diverges when the constraint becomes active, because the increase in productivity with decreasing yield is almost halted. Consequently, we suggest the operating point at which the total outlet concentration of insulin reaches the solubility limit as the most suitable one. According to the results from the constrained optimizations, the maximal productivity on the C4 adsorbent (0.41 kg/(m³ column h)) is less than half of that on the C18 adsorbent (0.87 kg/(m³ column h)). This is partly caused by the higher selectivity between the insulin variants on the C18 adsorbent, but the main reason is the difference in how the solubility constraint affects the processes. Since the optimal ethanol concentration for elution on the C18 adsorbent is higher than for the C4 one, the insulin solubility is also higher, allowing a higher pool concentration.
An alternative method of finding the suggested operating point was also evaluated, and it was shown to give very satisfactory results for well-mapped Pareto fronts.
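Extracting a Pareto front from candidate operating points, as visualized in the study above, amounts to filtering out dominated points. The (productivity, yield) pairs below are invented for illustration; real values would come from the chromatography model.

```python
# Pareto front for two objectives that are both maximized: keep the points
# that no other point beats (or ties) in both coordinates simultaneously.
def pareto_front(points):
    """Points not dominated by any other point (maximize both coordinates)."""
    front = []
    for p in points:
        if not any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points):
            front.append(p)
    return sorted(front)

# Hypothetical (productivity, yield) operating points.
points = [(0.87, 0.80), (0.60, 0.95), (0.50, 0.90), (0.41, 0.99), (0.30, 0.85)]
front = pareto_front(points)   # (0.50, 0.90) and (0.30, 0.85) are dominated
```

Picking the "most suitable" operating point, as the authors do with the solubility limit, then means choosing one point along this front by an external criterion rather than by dominance alone.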
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lazkoz, Ruth; Escamilla-Rivera, Celia; Salzano, Vincenzo
Cosmography provides a model-independent way to map the expansion history of the Universe. In this paper we simulate a Euclid-like survey and explore cosmographic constraints from future Baryonic Acoustic Oscillations (BAO) observations. We derive general expressions for the BAO transverse and radial modes and discuss the optimal order of the cosmographic expansion that provides reliable cosmological constraints. Through constraints on the deceleration and jerk parameters, we show that future BAO data have the potential to provide a model-independent check of the cosmic acceleration as well as a discrimination between the standard ΛCDM model and alternative mechanisms of cosmic acceleration.
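For context, the deceleration and jerk parameters mentioned above enter the standard low-redshift cosmographic expansion of the luminosity distance; one common form (quoted here for a spatially flat universe, as found throughout the cosmography literature, not taken from this paper) is:

```latex
q_0 = -\left.\frac{\ddot{a}\,a}{\dot{a}^2}\right|_{0}, \qquad
j_0 = \left.\frac{\dddot{a}\,a^2}{\dot{a}^3}\right|_{0},
\qquad
d_L(z) = \frac{c\,z}{H_0}\left[\,1
  + \frac{1}{2}\left(1 - q_0\right)z
  - \frac{1}{6}\left(1 - q_0 - 3q_0^2 + j_0\right)z^2
  + \mathcal{O}(z^3)\right].
```

Fitting BAO (or supernova) distances with such a truncated series is what makes the constraints model-independent: only the Taylor coefficients $q_0$ and $j_0$, not a specific dark-energy model, are assumed.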
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simard, G.; et al.
We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg$^2$ of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the corresponding lensing angular power spectrum to a model including cold dark matter and a cosmological constant ($\Lambda$CDM), and to models with single-parameter extensions to $\Lambda$CDM. We find constraints that are comparable to and consistent with constraints found using the full-sky Planck CMB lensing data. Specifically, we find $\sigma_8 \Omega_{\rm m}^{0.25} = 0.598 \pm 0.024$ from the lensing data alone with relatively weak priors placed on the other $\Lambda$CDM parameters. In combination with primary CMB data from Planck, we explore single-parameter extensions to the $\Lambda$CDM model. We find $\Omega_k = -0.012^{+0.021}_{-0.023}$ or $$M_{\
NASA Astrophysics Data System (ADS)
Kuznetsov, Sergey P.
2017-04-01
We consider motions of the Chaplygin sleigh on a plane, supposing that the nonholonomic constraint is applied periodically, in turn, at each of the three legs supporting the sleigh. We assume that when the constraint is switched on, the respective element (the "knife-edge") is directed along the local velocity vector and becomes fixed relative to the sleigh for a certain time interval until the next switch. Differential equations of the mathematical model are formulated, and an analytical derivation of a 2D map for the state transformation over the switching period is provided. The dynamics takes place with conservation of the mechanical energy. Numerical simulations show phenomena characteristic of nonholonomic systems with complex dynamics. In particular, attractors may occur on the energy surface that are responsible for regular sustained motions, settling in domains where the map predominantly compresses area. In addition, chaotic and quasi-periodic regimes take place, similar to those observed in conservative nonlinear dynamics.
Identification of saline soils with multi-year remote sensing of crop yields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lobell, D; Ortiz-Monasterio, I; Gurrola, F C
2006-10-17
Soil salinity is an important constraint to agricultural sustainability, but accurate information on its variation across agricultural regions, or its impact on regional crop productivity, remains sparse. We evaluated the relationships between remotely sensed wheat yields and salinity in an irrigation district in the Colorado River Delta Region. The goals of this study were to (1) document the relative importance of salinity as a constraint to regional wheat production and (2) develop techniques to accurately identify saline fields. Estimates of wheat yield from six years of Landsat data agreed well with ground-based records on individual fields (R^2 = 0.65). Salinity measurements on 122 randomly selected fields revealed that average 0-60 cm salinity levels > 4 dS/m reduced wheat yields, but the relative scarcity of such fields resulted in less than 1% regional yield loss attributable to salinity. Moreover, low yield was not a reliable indicator of high salinity, because many other factors contributed to yield variability in individual years. However, temporal analysis of yield images showed that a significant fraction of fields exhibited consistently low yields over the six-year period. A subsequent survey of 60 additional fields, half of which were consistently low yielding, revealed that this targeted subset had significantly higher salinity at 30-60 cm depth than the control group (p = 0.02). These results suggest that high subsurface salinity is associated with consistently low yields in this region, and that multi-year yield maps derived from remote sensing therefore provide an opportunity to map salinity across agricultural regions.
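The multi-year targeting idea above (a field is suspect only if it is low-yielding in every year, not just one) can be sketched as a simple filter. The yield values are synthetic; in the study the inputs would be per-field yields estimated from Landsat imagery, and the quantile cutoff is an assumption of this sketch.

```python
# Flag fields whose yield falls in the bottom quantile in every required year.
def consistently_low(yields_by_year, quantile=0.25, min_years=None):
    """Return field ids that are low-yielding in at least min_years years."""
    years = list(yields_by_year)
    min_years = len(years) if min_years is None else min_years
    counts = {}
    for yr in years:
        vals = sorted(yields_by_year[yr].values())
        cutoff = vals[int(quantile * (len(vals) - 1))]  # bottom-quantile threshold
        for field, y in yields_by_year[yr].items():
            if y <= cutoff:
                counts[field] = counts.get(field, 0) + 1
    return sorted(f for f, c in counts.items() if c >= min_years)

# Synthetic yields (t/ha) for four fields over three seasons.
yields_by_year = {
    2000: {"f1": 4.1, "f2": 6.0, "f3": 5.8, "f4": 6.2},
    2001: {"f1": 3.9, "f2": 5.7, "f3": 4.0, "f4": 6.1},
    2002: {"f1": 4.3, "f2": 6.3, "f3": 5.9, "f4": 6.0},
}
suspect = consistently_low(yields_by_year)   # only "f1" is low in all three years
```

A field like "f3", low in a single year, is excluded, mirroring the paper's finding that single-year low yield is not a reliable salinity indicator.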
Road-corridor planning in the EIA procedure in Spain. A review of case studies
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loro, Manuel, E-mail: manuel.loro@upm.es; Transport Research Centre; Centro de investigación del transporte, TRANSyT-UPM, ETSI Caminos, Canales y Puertos, Universidad Politécnica de Madrid, Prof. Aranguren s/n, 28040 Madrid
The assessment of different alternatives in road-corridor planning must be based on a number of well-defined territorial variables that serve as decision-making criteria, and this requires a high-quality preliminary environmental assessment study. In Spain the formal specifications for the technical requirements stipulate the constraints that must be considered in the early stages of defining road corridors, but not how they should be analyzed and ranked. As part of the feasibility study of a new road definition, the most common methodology is to establish different levels of Territorial Carrying Capacity (TCC) in the study area in order to summarize the territorial variables on thematic maps and to ease the tracing process of road-corridor layout alternatives. This paper explores the variables used in 22 road-construction projects conducted by the Ministry of Public Works that were subject to the Spanish EIA regulation and published between 2006 and 2008. The aim was to evaluate the quality of the methods applied and the homogeneity and suitability of the variables used for defining the TCC. The variables were clustered into physical, environmental, land-use and cultural constraints for the purpose of comparing the TCC values assigned in the studies reviewed. We found the average quality of the studies to be generally acceptable in terms of the justification of the methodology, the weighting and classification of the variables, and the creation of a synthesis map. Nevertheless, the methods for assessing the TCC are not sufficiently standardized; there is a lack of uniformity in the cartographic information sources and methodologies for the TCC valuation.
Highlights:
• We explore 22 road-corridor planning studies subjected to the Spanish EIA regulation.
• We analyze the variables selected for defining territorial carrying capacity.
• The quality of the studies is acceptable (methodology, variable weighting, mapping).
• There is heterogeneity in the methods for territorial carrying capacity valuation.
Trimodal interpretation of constraints for planning
NASA Technical Reports Server (NTRS)
Krieger, David; Brown, Richard
1987-01-01
Constraints are used in the CAMPS knowledge-based planning system to represent those propositions that must be true for a plan to be acceptable. CAMPS introduces the make-mode for interpreting a constraint. Given an unsatisfied constraint, make-mode evaluation suggests planning actions which, if taken, would result in a modified plan in which the constraint in question may be satisfied. These suggested planning actions, termed delta-tuples, are the raw material of intelligent plan repair. They are used both in debugging an almost-right plan and in replanning due to changing situations. Given a defective plan in which some set of constraints is violated, a problem-solving strategy selects one or more constraints as a focus of attention. These selected constraints are evaluated in make-mode to produce delta-tuples. The problem-solving strategy then reviews the delta-tuples according to its application- and problem-specific criteria to find the most acceptable change in terms of success likelihood and plan disruption. Finally, the problem-solving strategy makes the suggested alteration to the plan and then rechecks constraints to find any unexpected consequences.
Thayer, Edward C.; Olson, Maynard V.; Karp, Richard M.
1999-01-01
Genetic and physical maps display the relative positions of objects or markers occurring within a target DNA molecule. In constructing maps, the primary objective is to determine the ordering of these objects. A further objective is to assign a coordinate to each object, indicating its distance from a reference end of the target molecule. This paper describes a computational method and a body of software for assigning coordinates to map objects, given a solution or partial solution to the ordering problem. We describe our method in the context of multiple–complete–digest (MCD) mapping, but it should be applicable to a variety of other mapping problems. Because of errors in the data or insufficient clone coverage to uniquely identify the true ordering of the map objects, a partial ordering is typically the best one can hope for. Once a partial ordering has been established, one often seeks to overlay a metric along the map to assess the distances between the map objects. This problem often proves intractable because of data errors such as erroneous local length measurements (e.g., large clone lengths on low-resolution physical maps). We present a solution to the coordinate assignment problem for MCD restriction-fragment mapping, in which a coordinated set of single-enzyme restriction maps are simultaneously constructed. We show that the coordinate assignment problem can be expressed as the solution of a system of linear constraints. If the linear system is free of inconsistencies, it can be solved using the standard Bellman–Ford algorithm. In the more typical case where the system is inconsistent, our program perturbs it to find a new consistent system of linear constraints, close to those of the given inconsistent system, using a modified Bellman–Ford algorithm. Examples are provided of simple map inconsistencies and the methods by which our program detects candidate data errors and directs the user to potential suspect regions of the map. PMID:9927487
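The reduction described above, coordinate assignment as a system of linear (difference) constraints solved with Bellman-Ford, is a standard technique and can be sketched compactly. This is not the authors' software: the constraints below are invented, and the consistent-case repair step they describe (perturbing an inconsistent system) is omitted.

```python
# Each constraint x_v - x_u <= c becomes an edge u -> v with weight c.
# Bellman-Ford shortest paths from a virtual source yield a feasible
# coordinate assignment; a relaxable edge after n passes means the system
# is inconsistent (a negative cycle in the constraint graph).
def solve_difference_constraints(n, constraints):
    """constraints: list of (u, v, c) meaning x_v - x_u <= c. Coords or None."""
    # Virtual source node n with 0-weight edges to every variable.
    edges = list(constraints) + [(n, v, 0) for v in range(n)]
    dist = [0.0] * (n + 1)
    for _ in range(n):
        for u, v, c in edges:
            if dist[u] + c < dist[v]:
                dist[v] = dist[u] + c
    # One more pass: any further improvement means an inconsistent system.
    if any(dist[u] + c < dist[v] for u, v, c in edges):
        return None
    return dist[:n]

# x1 - x0 <= 5, x0 - x1 <= -3 (i.e. x1 >= x0 + 3), x2 - x1 <= 4
coords = solve_difference_constraints(3, [(0, 1, 5), (1, 0, -3), (1, 2, 4)])
```

In the MCD setting the variables are restriction-fragment coordinates and the inequalities encode measured fragment lengths with error bounds; detecting which constraints make the system inconsistent is what points the user at suspect data.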
Ade, P. A. R.; Ahmed, Z.; Aikin, R. W.; ...
2017-11-09
We present the strongest constraints to date on anisotropies of cosmic microwave background (CMB) polarization rotation derived from 150 GHz data taken by the BICEP2 & Keck Array CMB experiments up to and including the 2014 observing season (BK14). The definition of the polarization angle in BK14 maps has gone through self-calibration in which the overall angle is adjusted to minimize the observed TB and EB power spectra. After this procedure, the QU maps lose sensitivity to a uniform polarization rotation but are still sensitive to anisotropies of polarization rotation. This analysis places constraints on the anisotropies of polarization rotation, which could be generated by CMB photons interacting with axionlike pseudoscalar fields or Faraday rotation induced by primordial magnetic fields. The sensitivity of BK14 maps (~3 μK·arcmin) makes it possible to reconstruct anisotropies of the polarization rotation angle and measure their angular power spectrum much more precisely than previous attempts. Our data are found to be consistent with no polarization rotation anisotropies, improving the upper bound on the amplitude of the rotation angle spectrum by roughly an order of magnitude compared to the previous best constraints. Our results lead to an order of magnitude better constraint on the coupling constant of the Chern-Simons electromagnetic term, g_aγ ≤ 7.2 × 10⁻² / H_I (95% confidence), than the constraint derived from the B-mode spectrum, where H_I is the inflationary Hubble scale. This constraint leads to a limit on the decay constant of 10⁻⁶ ≲ f_a / M_pl in the mass range 10⁻³³ eV ≤ m_a ≤ 10⁻²⁸ eV for r = 0.01, assuming g_aγ ~ α/(2π f_a), with α denoting the fine-structure constant. The upper bound on the amplitude of the primordial magnetic fields is 30 nG (95% confidence) from the polarization rotation anisotropies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, W; Zhang, Y; Ren, L
2014-06-01
Purpose: To investigate the feasibility of using nanoparticle markers to validate liver tumor motion together with a deformation field map-based four dimensional (4D) cone-beam computed tomography (CBCT) reconstruction method. Methods: A technique for lung 4D-CBCT reconstruction has been previously developed using a deformation field map (DFM)-based strategy. In this method, each phase of the 4D-CBCT is considered as a deformation of a prior CT volume. The DFM is solved by a motion modeling and free-form deformation (MM-FD) technique, using a data fidelity constraint and deformation energy minimization. For liver imaging, the contrast of a liver tumor in on-board projections is low. A validation of liver tumor motion using implanted gold nanoparticles, along with the MM-FD deformation technique, is implemented to reconstruct on-board 4D-CBCT liver radiotherapy images. These nanoparticles were placed around the liver tumor to reflect the tumor positions in both CT simulation and on-board image acquisition. When reconstructing each phase of the 4D-CBCT, the migrations of the gold nanoparticles act as a constraint to regularize the deformation field, along with the data fidelity and the energy minimization constraints. In this study, multiple tumor diameters and positions were simulated within the liver for on-board 4D-CBCT imaging. The on-board 4D-CBCT reconstructed by the proposed method was compared with the "ground truth" image. Results: The preliminary data, which use the reconstruction developed for lung radiotherapy, suggest that the advanced reconstruction algorithm including the gold nanoparticle constraint will result in volume percentage differences (VPD) between lesions in reconstructed images by MM-FD and "ground truth" on-board images of 11.5% (± 9.4%) and a center of mass shift of 1.3 mm (± 1.3 mm) for liver radiotherapy.
Conclusion: The advanced MM-FD technique, enforcing the additional constraints from gold nanoparticles, results in improved accuracy for reconstructing on-board 4D-CBCT of liver tumors. Varian Medical Systems research grant.
McMurray, Bob; Horst, Jessica S; Samuelson, Larissa K
2012-10-01
Classic approaches to word learning emphasize referential ambiguity: In naming situations, a novel word could refer to many possible objects, properties, actions, and so forth. To solve this, researchers have posited constraints, and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative in which referent selection is an online process and independent of long-term learning. We illustrate this theoretical approach with a dynamic associative model in which referent selection emerges from real-time competition between referents and learning is associative (Hebbian). This model accounts for a range of findings including the differences in expressive and receptive vocabulary, cross-situational learning under high degrees of ambiguity, accelerating (vocabulary explosion) and decelerating (power law) learning, fast mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between speed of processing and learning. Together it suggests that (a) association learning buttressed by dynamic competition can account for much of the literature; (b) familiar word recognition is subserved by the same processes that identify the referents of novel words (fast mapping); (c) online competition may allow the children to leverage information available in the task to augment performance despite slow learning; (d) in complex systems, associative learning is highly multifaceted; and (e) learning and referent selection, though logically distinct, can be subtly related. It suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development. PsycINFO Database Record (c) 2012 APA, all rights reserved.
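The division of labor the model proposes, online competition for referent selection and slow Hebbian association for learning, can be caricatured in a few lines (a toy of my own construction, not the authors' dynamic associative model):

```python
import numpy as np

# Toy sketch: referent selection as real-time competition over an
# associative weight matrix, with Hebbian strengthening of co-present
# word-object pairs. All sizes and schedules here are invented.
n_words = n_objects = 3
W = np.zeros((n_words, n_objects))      # word x object association strengths

def select_referent(word, present_objects):
    # Competition: among the objects present, the strongest associate wins.
    scores = [W[word, o] for o in present_objects]
    return present_objects[int(np.argmax(scores))]

def hebbian_update(word, objects_present, lr=0.1):
    # Associative (Hebbian) learning: every co-present word-object pair is
    # strengthened, bounded so weights stay in [0, 1].
    for o in objects_present:
        W[word, o] += lr * (1.0 - W[word, o])

# Cross-situational training: each word always co-occurs with its target
# object plus one rotating foil, so the target link grows fastest even
# though learning itself never "decides" the referent.
for t in range(60):
    for word in range(n_words):
        present = {word, t % n_objects}  # target + foil (sometimes identical)
        hebbian_update(word, present)

# After training, competition among all objects picks each word's target.
for word in range(n_words):
    assert select_referent(word, list(range(n_objects))) == word
```

The point of the sketch is that `select_referent` (online competition) and `hebbian_update` (long-term learning) are logically distinct processes, yet correct referent selection emerges from their interaction.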
Zhang, Zhao; Zhao, Mingbo; Chow, Tommy W S
2012-12-01
In this work, sub-manifold projections based semi-supervised dimensionality reduction (DR) problem learning from partial constrained data is discussed. Two semi-supervised DR algorithms termed Marginal Semi-Supervised Sub-Manifold Projections (MS³MP) and orthogonal MS³MP (OMS³MP) are proposed. MS³MP in the singular case is also discussed. We also present the weighted least squares view of MS³MP. Based on specifying the types of neighborhoods with pairwise constraints (PC) and the defined manifold scatters, our methods can preserve the local properties of all points and discriminant structures embedded in the localized PC. The sub-manifolds of different classes can also be separated. In PC guided methods, exploring and selecting the informative constraints is challenging and random constraint subsets significantly affect the performance of algorithms. This paper also introduces an effective technique to select the informative constraints for DR with consistent constraints. The analytic form of the projection axes can be obtained by eigen-decomposition. The connections between this work and other related work are also elaborated. The validity of the proposed constraint selection approach and DR algorithms are evaluated by benchmark problems. Extensive simulations show that our algorithms can deliver promising results over some widely used state-of-the-art semi-supervised DR techniques. Copyright © 2012 Elsevier Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Genetic linkage maps in plants are usually constructed using segregating populations obtained from crosses between two inbred lines such as rice, maize, or soybean. Such populations are generally not available for forest trees because of time constraints. But tree species have the property of outcro...
NASA Astrophysics Data System (ADS)
Xu, Shuo; Ji, Ze; Truong Pham, Duc; Yu, Fan
2011-11-01
The simultaneous mission assignment and home allocation for hospital service robots studied is a Multidimensional Assignment Problem (MAP) with multiobjectives and multiconstraints. A population-based metaheuristic, the Binary Bees Algorithm (BBA), is proposed to optimize this NP-hard problem. Inspired by the foraging mechanism of honeybees, the BBA's most important feature is an explicit functional partitioning between global search and local search for exploration and exploitation, respectively. Its key parts consist of adaptive global search, three-step elitism selection (constraint handling, non-dominated solutions selection, and diversity preservation), and elites-centred local search within a Hamming neighbourhood. Two comparative experiments were conducted to investigate its single objective optimization, optimization effectiveness (indexed by the S-metric and C-metric) and optimization efficiency (indexed by computational burden and CPU time) in detail. The BBA outperformed its competitors in almost all the quantitative indices. Hence, the above overall scheme, and particularly the searching history-adapted global search strategy was validated.
NASA Astrophysics Data System (ADS)
Debats, Stephanie Renee
Smallholder farms dominate in many parts of the world, including Sub-Saharan Africa. These systems are characterized by small, heterogeneous, and often indistinct field patterns, requiring a specialized methodology to map agricultural landcover. In this thesis, we developed a benchmark labeled data set of high-resolution satellite imagery of agricultural fields in South Africa. We presented a new approach to mapping agricultural fields, based on efficient extraction of a vast set of simple, highly correlated, and interdependent features, followed by a random forest classifier. The algorithm achieved similar high performance across agricultural types, including spectrally indistinct smallholder fields, and demonstrated the ability to generalize across large geographic areas. In sensitivity analyses, we determined multi-temporal images provided greater performance gains than the addition of multi-spectral bands. We also demonstrated how active learning can be incorporated in the algorithm to create smaller, more efficient training data sets, which reduced computational resources, minimized the need for humans to hand-label data, and boosted performance. We designed a patch-based uncertainty metric to drive the active learning framework, based on the regular grid of a crowdsourcing platform, and demonstrated how subject matter experts can be replaced with fleets of crowdsourcing workers. Our active learning algorithm achieved similar performance as an algorithm trained with randomly selected data, but with 62% fewer data samples. This thesis furthers the goal of providing accurate agricultural landcover maps, at a scale that is relevant for the dominant smallholder class. Accurate maps are crucial for monitoring and promoting agricultural production.
Furthermore, improved agricultural landcover maps will aid a host of other applications, including landcover change assessments, cadastral surveys to strengthen smallholder land rights, and constraints for crop modeling and famine prediction.
NASA Astrophysics Data System (ADS)
Samson, Philippe
2005-05-01
The constant evolution of the satellite market demands better technical performance and reliability at reduced cost, and the solar array is on the front line of this challenge. This can be achieved through progressive cost-reduction improvements to present technologies or through technological breakthroughs. Reaching an effective End-Of-Life solar array performance of 100 W/kg is not easy, even if one supposes that the mass of everything else is negligible. Thin-film cells are potential candidates to contribute to this challenge, given a suitable confidence level and a consequent development plan with validation and qualification on ground and in flight. Based on a strong flight heritage in flexible solar array design, work in recent years has paved the way on the road map of thin-film technologies, encouraged by ESA through many technological contracts run in concurrent engineering. CIGS was the selected cell, and the corresponding design strategy, contributions and results will be presented. Trade-off results and design-to-cost solutions will be discussed. The main technical drivers, system design constraints, market access and key technologies needed will be detailed in this paper, and the resulting road map and development plan will be presented.
Cross-correlating 2D and 3D galaxy surveys
Passaglia, Samuel; Manzotti, Alessandro; Dodelson, Scott
2017-06-08
Galaxy surveys probe both structure formation and the expansion rate, making them promising avenues for understanding the dark universe. Photometric surveys accurately map the 2D distribution of galaxy positions and shapes in a given redshift range, while spectroscopic surveys provide sparser 3D maps of the galaxy distribution. We present a way to analyse overlapping 2D and 3D maps jointly and without loss of information. We represent 3D maps using spherical Fourier-Bessel (sFB) modes, which preserve radial coverage while accounting for the spherical sky geometry, and we decompose 2D maps in a spherical harmonic basis. In these bases, a simple expression exists for the cross-correlation of the two fields. One very powerful application is the ability to simultaneously constrain the redshift distribution of the photometric sample, the sample biases, and cosmological parameters. We use our framework to show that combined analysis of DESI and LSST can improve cosmological constraints by factors of ~1.2 to ~1.8 on the region where they overlap relative to identically sized disjoint regions. We also show that in the overlap of DES and SDSS-III in Stripe 82, cross-correlating improves photo-z parameter constraints by factors of ~2 to ~12 over internal photo-z reconstructions.
Spectral edge: gradient-preserving spectral mapping for image fusion.
Connah, David; Drew, Mark S; Finlayson, Graham D
2015-12-01
This paper describes a novel approach to image fusion for color display. Our goal is to generate an output image whose gradient matches that of the input as closely as possible. We achieve this using a constrained contrast mapping paradigm in the gradient domain, where the structure tensor of a high-dimensional gradient representation is mapped exactly to that of a low-dimensional gradient field which is then reintegrated to form an output. Constraints on output colors are provided by an initial RGB rendering. Initially, we motivate our solution with a simple "ansatz" (educated guess) for projecting higher-D contrast onto color gradients, which we expand to a more rigorous theorem to incorporate color constraints. The solution to these constrained optimizations is closed-form, allowing for simple and hence fast and efficient algorithms. The approach can map any N-D image data to any M-D output and can be used in a variety of applications using the same basic algorithm. In this paper, we focus on the problem of mapping N-D inputs to 3D color outputs. We present results in five applications: hyperspectral remote sensing, fusion of color and near-infrared or clear-filter images, multilighting imaging, dark flash, and color visualization of magnetic resonance imaging diffusion-tensor imaging.
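The core tensor-matching idea, choosing a low-dimensional gradient whose structure tensor exactly reproduces that of the high-dimensional input, can be sketched at a single pixel (my illustration of the general construction; it omits the paper's color constraints and closed-form projection):

```python
import numpy as np

def structure_tensor(G):
    """G: (N, 2) array of per-channel image gradients (d/dx, d/dy) at one
    pixel. The 2x2 structure tensor J = G^T G summarizes the total contrast
    and its orientation across all N channels."""
    return G.T @ G

def match_contrast(G_high, M=3):
    """Build an (M, 2) output gradient whose structure tensor equals that of
    the high-dimensional input, via a symmetric square root of J. Any M x 2
    frame B with orthonormal columns yields a valid solution."""
    J = structure_tensor(G_high)
    w, V = np.linalg.eigh(J)                       # J is PSD, so w >= 0
    J_sqrt = V @ np.diag(np.sqrt(np.clip(w, 0, None))) @ V.T
    B = np.eye(M)[:, :2]                           # one convenient choice of frame
    return B @ J_sqrt                              # (B J^1/2)^T (B J^1/2) = J

# A 4-channel input gradient mapped to a 3-channel (color) gradient.
G_high = np.array([[1.0, 0.2], [0.3, -0.5], [0.0, 0.7], [0.4, 0.4]])
G_low = match_contrast(G_high)
assert np.allclose(structure_tensor(G_low), structure_tensor(G_high))
```

In the paper the freedom in the frame `B` is what the initial RGB rendering and color constraints pin down; here it is left arbitrary to show only the exact contrast-preservation property.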
Radiofrequency pulse design in parallel transmission under strict temperature constraints.
Boulant, Nicolas; Massire, Aurélien; Amadon, Alexis; Vignaud, Alexandre
2014-09-01
To gain radiofrequency (RF) pulse performance by directly addressing the temperature constraints, as opposed to the specific absorption rate (SAR) constraints, in parallel transmission at ultra-high field. The magnitude least-squares RF pulse design problem under hard SAR constraints was solved repeatedly by using the virtual observation points and an active-set algorithm. The SAR constraints were updated at each iteration based on the result of a thermal simulation. The numerical study was performed for an SAR-demanding and simplified time of flight sequence using B1 and ΔB0 maps obtained in vivo on a human brain at 7T. The proposed adjustment of the SAR constraints combined with an active-set algorithm provided higher flexibility in RF pulse design within a reasonable time. The modifications of those constraints acted directly upon the thermal response as desired. Although further confidence in the thermal models is needed, this study shows that RF pulse design under strict temperature constraints is within reach, allowing better RF pulse performance and faster acquisitions at ultra-high fields at the cost of higher sequence complexity. Copyright © 2013 Wiley Periodicals, Inc.
On the evolutionary constraint surface of hydra
NASA Technical Reports Server (NTRS)
Slobodkin, L. B.; Dunn, K.
1983-01-01
Food consumption, body size, and budding rate were measured simultaneously in isolated individual hydra of six strains. For each individual hydra the three measurements define a point in the three-dimensional space with axes: food consumption, budding rate, and body size. These points lie on a single surface, regardless of species. Floating rate and incidence of sexuality map onto this surface. It is suggested that this surface is an example of a general class of evolutionary constraint surfaces derived from the conjunction of evolutionary theory and the theory of ecological resource budgets. These constraint surfaces correspond to microevolutionary domains.
Geodetic satellite observations in North American (solution NA-9)
NASA Technical Reports Server (NTRS)
Mueller, I. I.; Reilly, J. P.; Soler, T.
1972-01-01
A new detailed geoidal map with claimed accuracies of plus or minus 2 meters (on land), based on gravimetric and satellite data, was presented. With the new geoid and the orthometric heights given, more reliable height constraints were calculated and applied. The basic purpose of this experiment was to compute the new solution NA9 by defining the origin of the system, from the point of view of error propagation, in the most favorable position applying inner constraints and imposing new weighted height constraints to all of the stations. The major differences with respect to formerly published adjustments are presented.
Multitracer CMB delensing maps from Planck and WISE data
NASA Astrophysics Data System (ADS)
Yu, Byeonghee; Hill, J. Colin; Sherwin, Blake D.
2017-12-01
Delensing, the removal of the limiting lensing B-mode background, is crucial for the success of future cosmic microwave background (CMB) surveys in constraining inflationary gravitational waves (IGWs). In recent work, delensing with large-scale structure tracers has emerged as a promising method both for improving constraints on IGWs and for testing delensing methods for future use. However, the delensing fractions (i.e., the fraction of the lensing B-mode power removed) achieved by recent efforts have been only 20%-30%. In this work, we provide a detailed characterization of a full-sky, dust-cleaned cosmic infrared background (CIB) map for delensing and construct a further-improved delensing template by adding additional tracers to increase delensing performance. In particular, we build a multitracer delensing template by combining the dust-cleaned Planck CIB map with a reconstructed CMB lensing map from Planck and a galaxy number density map from the Wide-field Infrared Survey Explorer (WISE) satellite. For this combination, we calculate the relevant weightings by fitting smooth templates to measurements of all the cross-spectra and autospectra of these maps. On a large fraction of the sky (f_sky = 0.43), we demonstrate that our maps are capable of providing a delensing factor of 43 ± 1%; using a more restrictive mask (f_sky = 0.11), the delensing factor reaches 48 ± 1%. For low-noise surveys, our delensing maps, which cover much of the sky, can thus improve constraints on the tensor-to-scalar ratio (r) by nearly a factor of 2. The delensing tracer maps are made publicly available, and we encourage their use in ongoing and upcoming B-mode surveys.
NASA Astrophysics Data System (ADS)
Benson, B. A.; Ade, P. A. R.; Ahmed, Z.; Allen, S. W.; Arnold, K.; Austermann, J. E.; Bender, A. N.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Cho, H. M.; Cliche, J. F.; Crawford, T. M.; Cukierman, A.; de Haan, T.; Dobbs, M. A.; Dutcher, D.; Everett, W.; Gilbert, A.; Halverson, N. W.; Hanson, D.; Harrington, N. L.; Hattori, K.; Henning, J. W.; Hilton, G. C.; Holder, G. P.; Holzapfel, W. L.; Irwin, K. D.; Keisler, R.; Knox, L.; Kubik, D.; Kuo, C. L.; Lee, A. T.; Leitch, E. M.; Li, D.; McDonald, M.; Meyer, S. S.; Montgomery, J.; Myers, M.; Natoli, T.; Nguyen, H.; Novosad, V.; Padin, S.; Pan, Z.; Pearson, J.; Reichardt, C.; Ruhl, J. E.; Saliwanchik, B. R.; Simard, G.; Smecher, G.; Sayre, J. T.; Shirokoff, E.; Stark, A. A.; Story, K.; Suzuki, A.; Thompson, K. L.; Tucker, C.; Vanderlinde, K.; Vieira, J. D.; Vikhlinin, A.; Wang, G.; Yefremenko, V.; Yoon, K. W.
2014-07-01
We describe the design of a new polarization sensitive receiver, SPT-3G, for the 10-meter South Pole Telescope (SPT). The SPT-3G receiver will deliver a factor of ~20 improvement in mapping speed over the current receiver, SPTpol. The sensitivity of the SPT-3G receiver will enable the advance from statistical detection of B-mode polarization anisotropy power to high signal-to-noise measurements of the individual modes, i.e., maps. This will lead to precise (~0.06 eV) constraints on the sum of neutrino masses with the potential to directly address the neutrino mass hierarchy. It will allow a separation of the lensing and inflationary B-mode power spectra, improving constraints on the amplitude and shape of the primordial signal, either through SPT-3G data alone or in combination with BICEP2/Keck, which is observing the same area of sky. The measurement of small-scale temperature anisotropy will provide new constraints on the epoch of reionization. Additional science from the SPT-3G survey will be significantly enhanced by the synergy with the ongoing optical Dark Energy Survey (DES), including: a 1% constraint on the bias of optical tracers of large-scale structure, a measurement of the differential Doppler signal from pairs of galaxy clusters that will test General Relativity on ~200 Mpc scales, and improved cosmological constraints from the abundance of clusters of galaxies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beverly Seyler; John Grube
2004-12-10
Oil and gas have been commercially produced in Illinois for over 100 years. Existing commercial production is from more than fifty-two named pay horizons in Paleozoic rocks ranging in age from Middle Ordovician to Pennsylvanian. Over 3.2 billion barrels of oil have been produced. Recent calculations indicate that remaining mobile resources in the Illinois Basin may be on the order of several billion barrels. Thus, large quantities of oil, potentially recoverable using current technology, remain in Illinois oil fields despite a century of development. Many opportunities for increased production may have been missed due to complex development histories, multiple stacked pays, and commingled production, which makes thorough exploitation of pays and the application of secondary or improved/enhanced recovery strategies difficult. Access to data, and the techniques required to evaluate and manage large amounts of diverse data, are major barriers to increased production of critical reserves in the Illinois Basin. These constraints are being alleviated by the development of a database access system using a Geographic Information System (GIS) approach for evaluation and identification of underdeveloped pays. The Illinois State Geological Survey has developed a methodology that is being used by industry to identify underdeveloped areas (UDAs) in and around petroleum reservoirs in Illinois using a GIS approach. This project utilizes a statewide oil and gas Oracle® database to develop a series of Oil and Gas Base Maps with well location symbols that are color-coded by producing horizon. Producing horizons are displayed as layers and can be selected as separate or combined layers that can be turned on and off. Map views can be customized to serve individual needs and page size maps can be printed. A core analysis database with over 168,000 entries has been compiled and assimilated into the ISGS Enterprise Oracle database.
Maps of wells with core data have been generated. Data from over 1,700 Illinois waterflood units and waterflood areas have been entered into an Access® database. The waterflood area data have also been assimilated into the ISGS Oracle database for mapping and dissemination on the ArcIMS website. Formation depths for the Beech Creek Limestone, Ste. Genevieve Limestone and New Albany Shale in all of the oil producing region of Illinois have been calculated and entered into a digital database. Digital contoured structure maps have been constructed, edited and added to the ILoil website as map layers. This technology/methodology addresses the long-standing constraints related to information access and data management in Illinois by significantly simplifying the laborious process that industry presently must use to identify underdeveloped pay zones in Illinois.
Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery
NASA Technical Reports Server (NTRS)
Chien, Steve A.; Hayden, David; Thompson, David R.; Castano, Rebecca
2013-01-01
Many current and future NASA missions are capable of collecting enormous amounts of data, of which only a small portion can be transmitted to Earth. Communications are limited due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. To address this situation, computationally efficient algorithms were developed for analyzing science imagery onboard the spacecraft. These algorithms autonomously cluster the data into classes of similar imagery, enabling selective downlink of representatives of each class, and a map classifying the terrain imaged rather than the full dataset, reducing the volume of the downlinked data. A range of approaches was examined, including k-means clustering using image features based on color, texture, temporal, and spatial arrangement.
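The clustering-for-downlink idea can be sketched as follows (an illustrative toy with invented feature vectors, not the flight algorithms, which also used texture, temporal, and spatial features):

```python
import numpy as np

# Sketch of onboard summarization: cluster per-image feature vectors with
# k-means, then keep only one representative image per class plus the
# class-label "map" for downlink instead of the full dataset.
def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # Assign each feature vector to its nearest center.
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = np.argmin(d2, axis=1)
        # Move each center to the mean of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def representatives(X, labels, centers):
    # For each class, the member image closest to the class center.
    return [int(np.argmin(((X - c) ** 2).sum(axis=1))) for c in centers]

# Four images described by toy (color, texture) features: two terrain types.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
labels, centers = kmeans(X, k=2)
reps = representatives(X, labels, centers)   # one downlink candidate per class
```

Downlinking `reps` plus `labels` preserves a summary of what was imaged while shrinking the transmitted volume from all images to k representatives and a small label array.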
A conserved supergene locus controls colour pattern diversity in Heliconius butterflies.
Joron, Mathieu; Papa, Riccardo; Beltrán, Margarita; Chamberlain, Nicola; Mavárez, Jesús; Baxter, Simon; Abanto, Moisés; Bermingham, Eldredge; Humphray, Sean J; Rogers, Jane; Beasley, Helen; Barlow, Karen; ffrench-Constant, Richard H; Mallet, James; McMillan, W Owen; Jiggins, Chris D
2006-10-01
We studied whether similar developmental genetic mechanisms are involved in both convergent and divergent evolution. Mimetic insects are known for their diversity of patterns as well as their remarkable evolutionary convergence, and they have played an important role in controversies over the respective roles of selection and constraints in adaptive evolution. Here we contrast three butterfly species, all classic examples of Müllerian mimicry. We used a genetic linkage map to show that a locus, Yb, which controls the presence of a yellow band in geographic races of Heliconius melpomene, maps precisely to the same location as the locus Cr, which has very similar phenotypic effects in its co-mimic H. erato. Furthermore, the same genomic location acts as a "supergene", determining multiple sympatric morphs in a third species, H. numata. H. numata is a species with a very different phenotypic appearance, whose many forms mimic different unrelated ithomiine butterflies in the genus Melinaea. Other unlinked colour pattern loci map to a homologous linkage group in the co-mimics H. melpomene and H. erato, but they are not involved in mimetic polymorphism in H. numata. Hence, a single region from the multilocus colour pattern architecture of H. melpomene and H. erato appears to have gained control of the entire wing-pattern variability in H. numata, presumably as a result of selection for mimetic "supergene" polymorphism without intermediates. Although we cannot at this stage confirm the homology of the loci segregating in the three species, our results imply that a conserved yet relatively unconstrained mechanism underlying pattern switching can affect mimicry in radically different ways. We also show that adaptive evolution, both convergent and diversifying, can occur by the repeated involvement of the same genomic regions.
Constraining the CO intensity mapping power spectrum at intermediate redshifts
NASA Astrophysics Data System (ADS)
Padmanabhan, Hamsa
2018-04-01
We compile available constraints on the carbon monoxide (CO) 1-0 luminosity functions and abundances at redshifts 0-3. These are used to develop a data-driven halo model for the evolution of CO galaxy abundances and clustering across intermediate redshifts. We find that the recent constraints from the CO Power Spectrum Survey (z ˜ 3; Keating et al. 2016), when combined with existing observations of local galaxies (z ˜ 0; Keres, Yun & Young 2003), lead to predictions that are consistent with the results of smaller surveys at intermediate redshifts (z ˜ 1-2). We provide convenient fitting forms for the evolution of the CO luminosity-halo mass relation, and estimates of the mean and uncertainties in the CO power spectrum in the context of future intensity mapping experiments.
Cha, Seungman; Hong, Sung-Tae; Lee, Young-Ha; Lee, Keon Hoon; Cho, Dae Seong; Lee, Jinmoo; Chai, Jong-Yil; Elhag, Mousab Siddig; Khaled, Soheir Gabralla Ahmad; Elnimeiri, Mustafa Khidir Mustafa; Siddig, Nahid Abdelgadeir Ali; Abdelrazig, Hana; Awadelkareem, Sarah; Elshafie, Azza Tag Eldin; Ismail, Hassan Ahmed Hassan Ahmed; Amin, Mutamad
2017-09-12
Schistosomiasis and soil-transmitted helminthiasis (STHs) are target neglected tropical diseases (NTDs) of preventive chemotherapy, but the control and elimination of these diseases have been impeded by resource constraints. Few reports have described a study protocol to draw on when conducting a nationwide survey. We present a detailed methodological description of the integrated mapping of schistosomiasis and STHs on the basis of our experiences, hoping that this protocol can be applied to future surveys in similar settings. In addition to determining the ecological zones requiring mass drug administration interventions, we aim to provide precise estimates of the prevalence of these diseases. A school-based cross-sectional design will be applied for the nationwide survey across Sudan. The survey is designed to cover all districts in every state. We have divided each district into 3 different ecological zones depending on proximity to bodies of water. We will employ a probability-proportional-to-size sampling method for schools and systematic sampling for student selection to provide adequate data on the prevalence of schistosomiasis and STHs in Sudan at the state level. A total of 108,660 students will be selected from 1811 schools across Sudan. After the survey is completed, 391 ecological zones will be mapped out. To carry out the survey, 655 staff members were recruited. Fecal and urine samples will be examined microscopically for helminth eggs, by the Kato-Katz method and by sediment smears, respectively. For quality control, a minimum of 10% of the slides will be rechecked by the federal supervisors in each state, and 5% of the smears will be validated again within one day by independent supervisors. This nationwide mapping is expected to generate important epidemiological information and indicators about schistosomiasis and STHs that will be useful for monitoring and evaluating the control program.
The mapping data will also be used to review programme status and to inform policy formulation and updates to the control strategies. This paper, which describes a feasible and practical study protocol, is to be shared with the global health community, especially those who are planning to perform nationwide mapping of NTDs by feces or urine sampling.
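The school-selection step above is a standard probability-proportional-to-size (PPS) design. As a minimal sketch (the enrolment figures below are hypothetical, not taken from the Sudan survey), systematic PPS can be implemented by walking cumulative school sizes with a fixed sampling interval:

```python
import random

def pps_systematic(sizes, n, seed=0):
    """Systematic probability-proportional-to-size sampling.

    sizes: enrolment of each school; n: number of schools to draw.
    Returns indices of selected schools; larger schools are selected
    with probability proportional to their enrolment.
    """
    rng = random.Random(seed)
    total = sum(sizes)
    step = total / n                      # sampling interval
    start = rng.uniform(0, step)          # random start in first interval
    targets = [start + k * step for k in range(n)]

    # Walk the cumulative size totals and pick the school whose
    # cumulative range contains each target point.
    chosen, cum, i = [], 0.0, 0
    for t in targets:
        while cum + sizes[i] < t:
            cum += sizes[i]
            i += 1
        chosen.append(i)
    return chosen

schools = [120, 800, 60, 950, 300, 40, 500]   # hypothetical enrolments
sample = pps_systematic(schools, n=3)
```

Within each selected school, the survey then applies ordinary systematic sampling of students, which is the same cumulative-walk idea with unit sizes.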
WE-AB-209-10: Optimizing the Delivery of Sequential Fluence Maps for Efficient VMAT Delivery
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craft, D; Balvert, M
2016-06-15
Purpose: To develop an optimization model and solution approach for computing MLC leaf trajectories and dose rates for high quality matching of a set of optimized fluence maps to be delivered sequentially around a patient in a VMAT treatment. Methods: We formulate the fluence map matching problem as a nonlinear optimization problem where time is discretized but dose rates and leaf positions are continuous variables. For a given allotted time, which is allocated across the fluence maps based on the complexity of each map, the optimization problem searches for the best leaf trajectories and dose rates such that the original fluence maps are closely recreated. Constraints include maximum leaf speed, maximum dose rate, and leaf collision avoidance, as well as the constraint that the ending leaf positions for one map are the starting leaf positions for the next map. The resulting model is non-convex but smooth, and therefore we solve it by local searches from a variety of starting positions. We improve solution time by a custom decomposition approach that allows us to decouple the rows of the fluence maps and solve each leaf pair individually. This decomposition also makes the problem easily parallelized. Results: We demonstrate the method on a prostate case and a head-and-neck case and show that one can recreate fluence maps to a high degree of fidelity in modest total delivery time (minutes). Conclusion: We present a VMAT sequencing method that reproduces optimal fluence maps by searching over a vast number of possible leaf trajectories. By varying the total allotted time, this approach is the first of its kind to allow users to produce VMAT solutions that span the range from wide-field coarse VMAT deliveries to narrow-field high-MU sliding-window-like approaches.
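The decomposition described above reduces the problem to one leaf pair per fluence map row. The sketch below illustrates that subproblem under stated assumptions: a toy 1-D target profile, discretized time, and a simple greedy local search with penalty terms for leaf-order and leaf-speed violations. It is not the authors' solver, only an illustration of the per-leaf-pair structure.

```python
import random

# One leaf pair, discretized positions 0..7 and 4 time steps.
# Decision variables per step: left edge, right edge, dose rate.
POSITIONS, STEPS = 8, 4
TARGET = [0, 1, 2, 2, 2, 1, 0, 0]      # fluence profile to recreate
MAX_SPEED = 3                          # max leaf travel per time step
MAX_RATE = 2

def delivered(plan):
    """Accumulate fluence: each step adds its dose rate to the
    positions lying between the left and right leaf edges."""
    f = [0.0] * POSITIONS
    for left, right, rate in plan:
        for x in range(POSITIONS):
            if left <= x < right:
                f[x] += rate
    return f

def cost(plan):
    f = delivered(plan)
    err = sum((a - b) ** 2 for a, b in zip(f, TARGET))
    # Penalise physical violations: leaf order and leaf speed.
    pen = 0.0
    for (l, r, _) in plan:
        pen += 100.0 * max(0, l - r)
    for (l0, r0, _), (l1, r1, _) in zip(plan, plan[1:]):
        pen += 100.0 * max(0, abs(l1 - l0) - MAX_SPEED)
        pen += 100.0 * max(0, abs(r1 - r0) - MAX_SPEED)
    return err + pen

def local_search(iters=2000, seed=1):
    rng = random.Random(seed)
    plan = [[0, POSITIONS, 1.0] for _ in range(STEPS)]  # start wide open
    best = cost(plan)
    for _ in range(iters):
        t, k = rng.randrange(STEPS), rng.randrange(3)
        old = plan[t][k]
        if k < 2:                       # move a leaf edge
            plan[t][k] = rng.randrange(POSITIONS + 1)
        else:                           # change the dose rate
            plan[t][k] = rng.uniform(0, MAX_RATE)
        c = cost(plan)
        if c <= best:
            best = c                    # keep improving moves
        else:
            plan[t][k] = old            # revert worsening moves
    return plan, best

plan, err = local_search()
```

Because each row is independent under this decomposition, the rows can be dispatched to separate workers, which is what makes the approach easy to parallelize.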
NASA Astrophysics Data System (ADS)
Johnson, Traci L.; Sharon, Keren
2016-11-01
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
ERIC Educational Resources Information Center
Halberda, Justin
2006-01-01
Many authors have argued that word-learning constraints help guide a word-learner's hypotheses as to the meaning of a newly heard word. One such class of constraints derives from the observation that word-learners of all ages prefer to map novel labels to novel objects in situations of referential ambiguity. In this paper I use eye-tracking to…
Cunnane, Stephen C; Crawford, Michael A
2014-12-01
The human brain confronts two major challenges during its development: (i) meeting a very high energy requirement, and (ii) reliably accessing an adequate dietary source of specific brain selective nutrients needed for its structure and function. Implicitly, these energetic and nutritional constraints to normal brain development today would also have been constraints on human brain evolution. The energetic constraint was solved in large measure by the evolution in hominins of a unique and significant layer of body fat on the fetus starting during the third trimester of gestation. By providing fatty acids for ketone production that are needed as brain fuel, this fat layer supports the brain's high energy needs well into childhood. This fat layer also contains an important reserve of the brain selective omega-3 fatty acid, docosahexaenoic acid (DHA), not available in other primates. Foremost amongst the brain selective minerals are iodine and iron, with zinc, copper and selenium also being important. A shore-based diet, i.e., fish, molluscs, crustaceans, frogs, bird's eggs and aquatic plants, provides the richest known dietary sources of brain selective nutrients. Regular access to these foods by the early hominin lineage that evolved into humans would therefore have helped free the nutritional constraint on primate brain development and function. Inadequate dietary supply of brain selective nutrients still has a deleterious impact on human brain development on a global scale today, demonstrating the brain's ongoing vulnerability. The core of the shore-based paradigm of human brain evolution proposes that sustained access by certain groups of early Homo to freshwater and marine food resources would have helped surmount both the nutritional as well as the energetic constraints on mammalian brain development. Copyright © 2014 Elsevier Ltd. All rights reserved.
Non-supervised method for early forest fire detection and rapid mapping
NASA Astrophysics Data System (ADS)
Artés, Tomás; Boca, Roberto; Liberta, Giorgio; San-Miguel, Jesús
2017-09-01
Natural hazards are a challenge for society. The scientific community has substantially increased its efforts in prevention and damage mitigation. The most important points in minimizing natural hazard damage are monitoring and prevention. This work focuses particularly on forest fires. This phenomenon depends on small-scale factors, and fire behavior is strongly related to the local weather. Forest fire spread forecasting is a complex task because of the scale of the phenomenon, input data uncertainty and time constraints in forest fire monitoring. Forest fire simulators have been improved, including calibration techniques that reduce data uncertainty and take into account complex factors such as the atmosphere. Such techniques dramatically increase the computational cost in a context where the time available to provide a forecast is a hard constraint. Furthermore, an early mapping of the fire is crucial for assessing it. In this work, a non-supervised method for early forest fire detection and mapping is proposed. As main sources, the method uses daily thermal anomalies from MODIS and VIIRS, combined with a land cover map, to identify and monitor forest fires with very few resources. The method relies on a clustering technique (the DBSCAN algorithm) and on filtering thermal anomalies to detect forest fires. In addition, a concave hull (alpha shape algorithm) is applied to obtain a rapid mapping of the fire area (very coarse accuracy mapping). Therefore, the method leads to a potential use for high-resolution forest fire rapid mapping based on satellite imagery, using the extent of each early fire detection. It shows the way to an automatic rapid mapping of the fire at high resolution while processing as little data as possible.
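The clustering step named above is DBSCAN. A minimal pure-Python version, run here on hypothetical hotspot coordinates rather than real MODIS/VIIRS anomalies (the alpha-shape hull step is omitted):

```python
import math

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: label each point with a cluster id, -1 = noise."""
    labels = [None] * len(points)

    def neighbors(i):
        return [j for j, q in enumerate(points)
                if math.dist(points[i], q) <= eps]

    cluster = -1
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:
            labels[i] = -1              # noise (may be claimed later)
            continue
        cluster += 1
        labels[i] = cluster
        seeds = list(nbrs)
        while seeds:
            j = seeds.pop()
            if labels[j] == -1:
                labels[j] = cluster     # former noise becomes border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(j)
            if len(jn) >= min_pts:      # core point: expand the cluster
                seeds.extend(jn)
    return labels

# Two synthetic hotspot groups plus one isolated thermal anomaly.
fires = ([(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)] +
         [(5.0, 5.0), (5.1, 5.0), (5.0, 5.1)] +
         [(10.0, 0.0)])
labels = dbscan(fires, eps=0.5, min_pts=3)
```

Each resulting cluster would correspond to one candidate fire; the isolated anomaly is flagged as noise and filtered out, which is the filtering role the abstract describes.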
Timber production in selectively logged tropical forests in South America.
Michael Keller; Gregory P. Asner; Geoffrey Blate; Frank McGlocklin; John Merry; Marielos Peña-Claros; Johan Zweede
2007-01-01
Selective logging is an extensive land-use practice in South America. Governments in the region have enacted policies to promote the establishment and maintenance of economically productive and sustainable forest industries.However, both biological and policy constraints threaten to limit the viability of the industry over the long term.Biological constraints, such as...
NASA Astrophysics Data System (ADS)
Cadenas, P.; Fernández-Viejo, G.; Pulgar, J. A.; Tugend, J.; Manatschal, G.; Minshull, T. A.
2018-03-01
The Alpine Pyrenean-Cantabrian orogen developed along the plate boundary between Iberia and Europe, involving the inversion of Mesozoic hyperextended basins along the southern Biscay margin. Thus, this margin represents a natural laboratory to analyze the control of structural rift inheritance on the compressional reactivation of a continental margin. With the aim of identifying former rift domains and investigating their role during the subsequent compression, we performed a structural analysis of the central and western North Iberian margin, based on the interpretation of seismic reflection profiles and local constraints from drill-hole data. Seismic interpretations and published seismic velocity models enabled the development of crustal thickness maps that helped to further constrain the offshore and onshore segmentation. Based on all these constraints, we present a rift domain map across the central and western North Iberian margin, as far as the adjacent western Cantabrian Mountains. Furthermore, we provide a first-order description of the margin segmentation resulting from its polyphase tectonic evolution. The most striking result is the presence of a hyperthinned domain (e.g., the Asturian Basin) along the central continental platform, bounded to the north by the Le Danois High, interpreted as a rift-related continental block separating two distinctive hyperextended domains. From the analysis of the rift domain map and the distribution of reactivation structures, we conclude that the landward limit of the necking domain and the hyperextended domains, respectively, guide and localize the compressional overprint. The Le Danois block acted as a local buttress, conditioning the inversion of the Asturian Basin.
NASA Astrophysics Data System (ADS)
Simard, G.; Omori, Y.; Aylor, K.; Baxter, E. J.; Benson, B. A.; Bleem, L. E.; Carlstrom, J. E.; Chang, C. L.; Cho, H.-M.; Chown, R.; Crawford, T. M.; Crites, A. T.; de Haan, T.; Dobbs, M. A.; Everett, W. B.; George, E. M.; Halverson, N. W.; Harrington, N. L.; Henning, J. W.; Holder, G. P.; Hou, Z.; Holzapfel, W. L.; Hrubes, J. D.; Knox, L.; Lee, A. T.; Leitch, E. M.; Luong-Van, D.; Manzotti, A.; McMahon, J. J.; Meyer, S. S.; Mocanu, L. M.; Mohr, J. J.; Natoli, T.; Padin, S.; Pryke, C.; Reichardt, C. L.; Ruhl, J. E.; Sayre, J. T.; Schaffer, K. K.; Shirokoff, E.; Staniszewski, Z.; Stark, A. A.; Story, K. T.; Vanderlinde, K.; Vieira, J. D.; Williamson, R.; Wu, W. L. K.
2018-06-01
We report constraints on cosmological parameters from the angular power spectrum of a cosmic microwave background (CMB) gravitational lensing potential map created using temperature data from 2500 deg2 of South Pole Telescope (SPT) data supplemented with data from Planck in the same sky region, with the statistical power in the combined map primarily from the SPT data. We fit the lensing power spectrum to a model including cold dark matter and a cosmological constant (ΛCDM), and to models with single-parameter extensions to ΛCDM. We find constraints that are comparable to and consistent with those found using the full-sky Planck CMB lensing data, e.g., σ8 Ωm^0.25 = 0.598 ± 0.024 from the lensing data alone with weak priors placed on other parameters. Combining with primary CMB data, we explore single-parameter extensions to ΛCDM. We find Ωk = -0.012 (+0.021/-0.023) or Mν < 0.70 eV at 95% confidence, in good agreement with results including the lensing potential as measured by Planck. We include two parameters that scale the effect of lensing on the CMB: AL, which scales the lensing power spectrum in both the lens reconstruction power and in the smearing of the acoustic peaks, and Aφφ, which scales only the amplitude of the lensing reconstruction power spectrum. We find Aφφ × AL = 1.01 ± 0.08 for the lensing map made from combined SPT and Planck data, indicating that the amount of lensing is in excellent agreement with expectations from the observed CMB angular power spectrum when not including the information from smearing of the acoustic peaks.
Interactive Maps on War and Peace: A WebGIS Application for Civic Education
NASA Astrophysics Data System (ADS)
Wirkus, Lars; Strunck, Alexander
2013-04-01
War and violent conflict are omnipresent, be it war in the Middle East, violent conflicts in failed states or increasing military expenditures and exports/imports of military goods. To understand certain conflicts or peace processes and their possible interrelation, to conduct a well-founded political discussion and to support or influence decision-making, one matter is of special importance: easily accessible and, in particular, reliable data and information. Against this background, the Bonn International Center for Conversion (BICC), in close cooperation with the German Federal Agency for Civic Education (bpb), has been developing a map-based information portal on war and peace with various thematic modules for the latter's online service (http://sicherheitspolitik.bpb.de). The portal will eventually offer nine such modules that are intended to give various target groups, such as interested members of the public, teachers and learners, policymakers and representatives of the media, access to the required information in the form of an interactive and country-based global overview or a comparison of different issues. Five thematic modules have been completed so far: war and conflict, peace and demobilization, military capacities, resources and conflict, and conventional weapons. The portal offers a broad spectrum of different data processing and visualization tools. Its central feature is an interactive mapping component based on WebGIS and a relational database. Content and data provided through thematic maps in the form of WebGIS layers are generally supplemented by infographics, data tables and short articles providing deeper knowledge on the respective issue. All modules and their sub-chapters are introduced by background texts. They put all interactive maps of a module into an appropriate context and help users to understand the interrelation between various layers.
If a layer is selected, all corresponding texts and graphics are shown automatically below the map. Data tables are offered where the copyright of the datasets allows such use. All data of all thematic modules are presented in country profiles in a consolidated manner. The portal has been created with open source software: PostgreSQL and PostGIS, MapServer, OpenLayers, MapProxy and CMS Made Simple are combined to manipulate and transform global data sets into interactive thematic maps. A purpose-programmed layer selection menu enables users to select single layers or to combine up to three matching layers from all possible pre-set layer combinations. This applies both to fields of topics within a module and across various modules. Due to the complexity of the structure and to visualization constraints, no more than three layers can be combined. The WebGIS-based information portal on war and peace is an excellent example of how GIS technologies can be used for education and outreach. Not only can they play a crucial role in supporting the educational mandate and mission of certain institutions, they can also directly support various target groups in obtaining the knowledge they need by providing a collection of straightforward, ready-to-use data, infographics and maps.
USDA-ARS?s Scientific Manuscript database
Root rot diseases of bean (Phaseolus vulgaris L.) are a constraint to dry and snap bean production. We developed the RR138 RIL mapping population from the cross of OSU5446, a susceptible line that meets current snap bean processing industry standards, and RR6950, a root rot resistant dry bean in th...
Evolutionary constraints and the neutral theory. [mutation-caused nucleotide substitutions in DNA
NASA Technical Reports Server (NTRS)
Jukes, T. H.; Kimura, M.
1984-01-01
The neutral theory of molecular evolution postulates that nucleotide substitutions inherently take place in DNA as a result of point mutations followed by random genetic drift. In the absence of selective constraints, the substitution rate reaches the maximum value set by the mutation rate. The rate in globin pseudogenes is about 5 × 10^-9 substitutions per site per year in mammals. Rates slower than this indicate the presence of constraints imposed by negative (natural) selection, which rejects and discards deleterious mutations.
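The argument above reduces to one line of arithmetic: if the pseudogene rate is the neutral maximum, the fraction of mutations removed by negative selection in a sequence is roughly one minus the ratio of its observed rate to the neutral rate. A sketch with a hypothetical observed rate:

```python
NEUTRAL_RATE = 5e-9   # substitutions per site per year (pseudogene rate)

def constrained_fraction(observed_rate):
    """Fraction of mutations eliminated by negative selection,
    assuming the remaining substitutions are effectively neutral."""
    return 1.0 - observed_rate / NEUTRAL_RATE

# A hypothetical gene evolving at 1e-9 substitutions per site per year
# would have ~80% of its mutations rejected by selection.
f = constrained_fraction(1e-9)
```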
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cao Daliang; Earl, Matthew A.; Luan, Shuang
2006-04-15
A new leaf-sequencing approach has been developed that is designed to reduce the number of required beam segments for step-and-shoot intensity modulated radiation therapy (IMRT). This approach to leaf sequencing is called continuous-intensity-map-optimization (CIMO). Using a simulated annealing algorithm, CIMO seeks to minimize differences between the optimized and sequenced intensity maps. Two distinguishing features of the CIMO algorithm are (1) CIMO does not require that each optimized intensity map be clustered into discrete levels and (2) CIMO is not rule-based but rather simultaneously optimizes both the aperture shapes and weights. To test the CIMO algorithm, ten IMRT patient cases were selected (four head-and-neck, two pancreas, two prostate, one brain, and one pelvis). For each case, the optimized intensity maps were extracted from the Pinnacle3 treatment planning system. The CIMO algorithm was applied, and the optimized aperture shapes and weights were loaded back into Pinnacle. A final dose calculation was performed using Pinnacle's convolution/superposition based dose calculation. On average, the CIMO algorithm provided a 54% reduction in the number of beam segments as compared with Pinnacle's leaf sequencer. The plans sequenced using the CIMO algorithm also provided improved target dose uniformity and a reduced discrepancy between the optimized and sequenced intensity maps. For ten clinical intensity maps, comparisons were performed between the CIMO algorithm and the power-of-two reduction algorithm of Xia and Verhey [Med. Phys. 25(8), 1424-1434 (1998)]. When the constraints of a Varian Millennium multileaf collimator were applied, the CIMO algorithm resulted in a 26% reduction in the number of segments. For an Elekta multileaf collimator, the CIMO algorithm resulted in a 67% reduction in the number of segments. An average leaf sequencing time of less than one minute per beam was observed.
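CIMO's core loop, minimizing the difference between the optimized and sequenced maps by simulated annealing over aperture shapes and weights jointly, can be sketched on a toy 1-D intensity profile. The segment count, perturbation moves, and cooling schedule below are illustrative assumptions, not the published algorithm.

```python
import math
import random

TARGET = [1.0, 3.0, 4.0, 4.0, 2.0, 1.0]   # optimized 1-D intensity profile
N_SEG = 3                                  # allowed beam segments

def sequenced(segments):
    """Sum weighted apertures: each segment is [start, end, weight]."""
    f = [0.0] * len(TARGET)
    for s, e, w in segments:
        for x in range(s, e):
            f[x] += w
    return f

def error(segments):
    return sum((a - b) ** 2 for a, b in zip(sequenced(segments), TARGET))

def anneal(steps=5000, t0=2.0, seed=7):
    rng = random.Random(seed)
    n = len(TARGET)
    segs = [[0, n, 1.0] for _ in range(N_SEG)]
    best, best_err = [s[:] for s in segs], error(segs)
    cur_err = best_err
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-6   # linear cooling
        i = rng.randrange(N_SEG)
        old = segs[i][:]
        # Perturb either the aperture edges or the weight.
        if rng.random() < 0.5:
            s = rng.randrange(n)
            segs[i][0], segs[i][1] = s, rng.randrange(s + 1, n + 1)
        else:
            segs[i][2] = max(0.0, segs[i][2] + rng.uniform(-0.5, 0.5))
        new_err = error(segs)
        # Metropolis acceptance: keep improvements, and occasionally
        # accept worse moves to escape local minima.
        if new_err <= cur_err or rng.random() < math.exp((cur_err - new_err) / temp):
            cur_err = new_err
            if new_err < best_err:
                best, best_err = [s[:] for s in segs], new_err
        else:
            segs[i] = old
    return best, best_err

segs, err = anneal()
```

Because shapes and weights are perturbed within one state vector, no pre-clustering of the intensity map into discrete levels is needed, which mirrors the first distinguishing feature claimed for CIMO.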
Platz, Thomas; Michael, Gregory; Tanaka, Kenneth L.; Skinner, James A.; Fortezzo, Corey M.
2013-01-01
The new, post-Viking generation of Mars orbital imaging and topographical data provides significantly higher-resolution detail of surface morphologies, which prompted a new effort to photo-geologically map the surface of Mars at 1:20,000,000 scale. Although a relative stratigraphical framework can be compiled from unit superposition relations, it was the ambition of this mapping project to provide absolute unit age constraints through crater statistics. In this study, the crater counting method is described in detail, starting with the selection of image data and type locations (both from the mapper's and the crater counter's perspectives) and the identification of impact craters. We describe the criteria used to validate and analyse measured crater populations, and to derive and interpret crater model ages. We provide examples of how geological information about a unit's resurfacing history can be retrieved from crater size-frequency distributions. Three cases illustrate short-, intermediate-, and long-term resurfacing histories. In addition, we introduce an interpretation-independent visualisation of the crater resurfacing history that uses the reduction of the crater population in a given size range relative to the population expected given the observed crater density at larger sizes. From a set of potential type locations, 48 areas from 22 globally mapped units were deemed suitable for crater counting. Because resurfacing ages were derived from crater statistics, these secondary ages were used to define the unit age rather than the base age. Using the methods described herein, we modelled ages that are consistent with the interpreted stratigraphy. Our derived model ages allow age assignments to be included in unit names. We discuss the limitations of using the crater dating technique for global-scale geological mapping. Finally, we present recommendations for the documentation and presentation of crater statistics in publications.
Single Point vs. Mapping Approach for Spectral Cytopathology (SCP)
Schubert, Jennifer M.; Mazur, Antonella I.; Bird, Benjamin; Miljković, Miloš; Diem, Max
2011-01-01
In this paper we describe the advantages of collecting infrared microspectral data in imaging mode as opposed to point mode. Imaging data are processed using the PapMap algorithm, which co-adds pixel spectra that have been scrutinized for R-Mie scattering effects as well as other constraints. The signal-to-noise quality of PapMap spectra is compared to point spectra for oral mucosa cells deposited onto low-e slides. The effects of software-based atmospheric correction are also discussed. Combined with the PapMap algorithm, data collection in imaging mode proves to be a superior method for spectral cytopathology. PMID:20449833
Balancing Flexible Constraints and Measurement Precision in Computerized Adaptive Testing
ERIC Educational Resources Information Center
Moyer, Eric L.; Galindo, Jennifer L.; Dodd, Barbara G.
2012-01-01
Managing test specifications--both multiple nonstatistical constraints and flexibly defined constraints--has become an important part of designing item selection procedures for computerized adaptive tests (CATs) in achievement testing. This study compared the effectiveness of three procedures: constrained CAT, flexible modified constrained CAT,…
Intelligent process mapping through systematic improvement of heuristics
NASA Technical Reports Server (NTRS)
Ieumwananonthachai, Arthur; Aizawa, Akiko N.; Schwartz, Steven R.; Wah, Benjamin W.; Yan, Jerry C.
1992-01-01
A system is presented for the automatic learning and evaluation of novel heuristic methods applicable to the mapping of communicating-process sets onto a computer network; it is based on testing a population of competing heuristic methods within a fixed time constraint. The TEACHER 4.1 prototype learning system, implemented for learning new post-game analysis heuristic methods, iteratively generates and refines the mappings of a set of communicating processes on a computer network. A systematic exploration of the space of possible heuristic methods is shown to promise significant improvement.
Optimization techniques for integrating spatial data
Herzfeld, U.C.; Merriam, D.F.
1995-01-01
Two optimization techniques to predict a spatial variable from any number of related spatial variables are presented. The applicability of the two different methods for petroleum-resource assessment is tested in a mature oil province of the Midcontinent (USA). The information on petroleum productivity, usually not directly accessible, is related indirectly to geological, geophysical, petrographical, and other observable data. This paper presents two approaches based on constructing a multivariate spatial model from the available data to determine a relationship for prediction. In the first approach, the variables are combined into a spatial model by an algebraic map-comparison/integration technique. Optimal weights for the map comparison function are determined by the Nelder-Mead downhill simplex algorithm in multiple dimensions. Geologic knowledge is necessary to provide a first guess of weights to start the automation, because the solution is not unique. In the second approach, active set optimization for linear prediction of the target under positivity constraints is applied. Here, the procedure seems to select one variable from each data type (structural, isopachous, and petrophysical), eliminating data redundancy. Automating the determination of optimum combinations of different variables by applying optimization techniques is a valuable extension of the algebraic map-comparison/integration approach to analyzing spatial data. Because of the capability of handling multivariate data sets and partial retention of geographical information, the approaches can be useful in mineral-resource exploration. © 1995 International Association for Mathematical Geology.
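The first approach, an algebraic map integration with weights fitted against a target variable, can be sketched on toy 1-D "maps". A coarse grid search over the weight simplex stands in here for the Nelder-Mead downhill simplex step, and all values are hypothetical:

```python
from itertools import product

# Toy 1-D "maps": three predictor variables and a target variable
# sampled at the same five locations (hypothetical values).
structure = [0.2, 0.4, 0.6, 0.8, 1.0]
isopach   = [1.0, 0.8, 0.6, 0.4, 0.2]
petrophys = [0.1, 0.5, 0.9, 0.5, 0.1]
target    = [0.5, 0.6, 0.7, 0.6, 0.5]

def combine(weights):
    """Algebraic map integration: weighted sum of the predictor maps."""
    w1, w2, w3 = weights
    return [w1 * a + w2 * b + w3 * c
            for a, b, c in zip(structure, isopach, petrophys)]

def misfit(weights):
    return sum((p - t) ** 2 for p, t in zip(combine(weights), target))

# Coarse grid search over weights summing to 1, in steps of 0.1.
# In practice such a first guess would seed the Nelder-Mead refinement.
grid = [i / 10 for i in range(11)]
best_w = min((w for w in product(grid, repeat=3)
              if abs(sum(w) - 1) < 1e-6),
             key=misfit)
```

The non-uniqueness the abstract mentions is visible here too: several weight triples can give nearly identical misfit, which is why a geologically informed first guess matters.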
NASA Astrophysics Data System (ADS)
Steinel, Anke; Schelkes, Klaus; Subah, Ali; Himmelsbach, Thomas
2016-11-01
In (semi-)arid regions, available water resources are scarce and groundwater resources are often overused. Therefore, the option to increase available water resources by managed aquifer recharge (MAR) via infiltration of captured surface runoff was investigated for two basins in northern Jordan. This study evaluated the general suitability of catchments to generate sufficient runoff and tried to identify promising sites to harvest and infiltrate the runoff into the aquifer for later recovery. Large sets of available data were used to create regional thematic maps, which were then combined into constraint maps using Boolean logic and into suitability maps using weighted linear combination. This approach might serve as a blueprint that could be adapted and applied to similar regions. The evaluation showed that the availability of non-committed source water is the most restrictive factor for successful water harvesting in regions with <200 mm/a rainfall. Experience with existing structures showed that the sediment loads of runoff are high. Therefore, the effectiveness of any existing MAR scheme will decrease rapidly, to the point where it results in an overall negative impact due to increased evaporation, if maintenance is not undertaken. It is recommended to improve system operation and maintenance, as well as monitoring, in order to achieve a better and constant effectiveness of the infiltration activities.
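The map-combination scheme described, Boolean constraint maps plus weighted linear combination, can be sketched on a toy raster. The layer names, scores and weights below are hypothetical, not the study's actual criteria:

```python
# Toy 2x3 raster layers scored 0-1 (hypothetical): runoff availability,
# soil infiltration capacity, and a distance-to-demand score.
runoff = [[0.9, 0.4, 0.1], [0.8, 0.6, 0.2]]
soil   = [[0.7, 0.9, 0.3], [0.5, 0.8, 0.4]]
demand = [[0.6, 0.5, 0.9], [0.7, 0.4, 0.8]]

# Boolean constraint map: cells excluded outright, e.g. protected areas
# or rainfall below the viability threshold.
allowed = [[True, True, False], [True, False, True]]

WEIGHTS = {"runoff": 0.5, "soil": 0.3, "demand": 0.2}  # sum to 1

def suitability(r, c):
    """Weighted linear combination, masked by the Boolean constraints."""
    if not allowed[r][c]:
        return 0.0
    return (WEIGHTS["runoff"] * runoff[r][c]
            + WEIGHTS["soil"] * soil[r][c]
            + WEIGHTS["demand"] * demand[r][c])

score_map = [[suitability(r, c) for c in range(3)] for r in range(2)]
```

In a GIS workflow the same two-stage logic is applied cell-by-cell to full rasters: the Boolean overlay removes infeasible cells first, and the weighted sum then ranks the remaining candidates.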
Genome-Wide Association Mapping of Crown Rust Resistance in Oat Elite Germplasm.
Klos, Kathy Esvelt; Yimer, Belayneh A; Babiker, Ebrahiem M; Beattie, Aaron D; Bonman, J Michael; Carson, Martin L; Chong, James; Harrison, Stephen A; Ibrahim, Amir M H; Kolb, Frederic L; McCartney, Curt A; McMullen, Michael; Fetch, Jennifer Mitchell; Mohammadi, Mohsen; Murphy, J Paul; Tinker, Nicholas A
2017-07-01
Oat crown rust, caused by Puccinia coronata f. sp. avenae, is a major constraint to oat (Avena sativa L.) production in many parts of the world. In this first comprehensive multienvironment genome-wide association map of oat crown rust, we used 2972 single-nucleotide polymorphisms (SNPs) genotyped on 631 oat lines for association mapping of quantitative trait loci (QTL). Seedling reaction to crown rust in these lines was assessed as infection type (IT) with each of 10 crown rust isolates. Adult plant reaction was assessed in the field in a total of 10 location-years as percentage severity (SV) and as infection reaction (IR) on a 0-to-1 scale. Overall, 29 SNPs on 12 linkage groups were predictive of crown rust reaction in at least one experiment at a genome-wide level of statistical significance. The QTL identified here include those in regions previously shown to be linked with named seedling resistance genes and also with adult-plant resistance and adaptation-related QTL. In addition, QTL on linkage groups Mrg03, Mrg08, and Mrg23 were identified in regions not previously associated with crown rust resistance. Evaluation of marker genotypes in a set of crown rust differential lines supported the identity of one of these resistance genes. The SNPs with rare alleles associated with lower disease scores may be suitable for use in marker-assisted selection of oat lines for crown rust resistance. Copyright © 2017 Crop Science Society of America.
Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang
2017-04-26
This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model when mapping a specific land cover class, by integrating training and window-based validation sets. Compared to the conventional approach, in which the validation set consists of target and outlier pixels selected visually and at random, the validation set derived from WVS-SVDD constructed a tighter hypersphere because of the compactness constraint imposed by outlier pixels located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD was much less sensitive to untrained classes than the multi-class support vector machine (SVM) method. The developed method thus showed its merits in using the optimal parameters, the tradeoff coefficient (C) and kernel width (s), to map a homogeneous specific land cover class.
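The parameter-tuning idea in this abstract — scoring candidate model parameters against a validation set that mixes target pixels with outlier pixels lying near the target class — can be illustrated with a toy sketch. This is not the WVS-SVDD method itself: the synthetic "spectra", the crude hypersphere data description (centre plus a distance quantile, standing in for the C/s-controlled SVDD boundary), and all numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy spectra: target class clustered at the origin, outliers neighbouring it
target_train = rng.normal(0.0, 1.0, size=(200, 4))
val_target   = rng.normal(0.0, 1.0, size=(50, 4))
val_outlier  = rng.normal(3.0, 1.0, size=(50, 4))   # class adjacent to the target

def sphere_model(train, quantile):
    """Fit a crude data description: centre + radius at a distance quantile."""
    c = train.mean(axis=0)
    r = np.quantile(np.linalg.norm(train - c, axis=1), quantile)
    return c, r

def accuracy(c, r, val_t, val_o):
    """Validation accuracy: targets accepted inside, outliers rejected outside."""
    inside  = np.linalg.norm(val_t - c, axis=1) <= r
    outside = np.linalg.norm(val_o - c, axis=1) >  r
    return (inside.sum() + outside.sum()) / (len(val_t) + len(val_o))

# Grid-search the free parameter against the mixed validation set
best = max(
    (accuracy(*sphere_model(target_train, q), val_target, val_outlier), q)
    for q in np.linspace(0.80, 0.999, 20)
)
print(f"best validation accuracy {best[0]:.2f} at quantile {best[1]:.3f}")
```

Because the outlier pixels sit close to the target class, the search is pushed toward a tight boundary, which is the intuition behind the tightened hypersphere described above.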
Drawing Road Networks with Mental Maps.
Lin, Shih-Syun; Lin, Chao-Hung; Hu, Yan-Jhang; Lee, Tong-Yee
2014-09-01
Tourist and destination maps are thematic maps designed to represent specific themes. The road network topologies in these maps are generally more important than the geometric accuracy of the roads. A road network warping method is proposed to facilitate map generation and improve theme representation. The basic idea is to deform a road network to match a user-specified mental map, while an optimization process propagates the distortions introduced by the warping. To generate a map, the proposed method includes algorithms for estimating road significance and for deforming a road network according to various geometric and aesthetic constraints. The method can produce an iconic mark of a theme from a road network while meeting a user-specified mental map. The resulting map can therefore serve as a tourist or destination map that not only provides visual aids for route planning and navigation, but also visually emphasizes the presentation of a theme for advertising purposes. In the experiments, demonstrations of map generation show that our method enables map generation systems to produce deformed tourist and destination maps efficiently.
Preschoolers' encoding of rational actions: the role of task features and verbal information.
Pfeifer, Caroline; Elsner, Birgit
2013-10-01
In the current study, we first investigated whether preschoolers imitate selectively across three imitation tasks. Second, we examined whether preschoolers' selective imitation is influenced by differences in the modeled actions and/or by the situational context. Finally, we investigated how verbal cues given by the model affect preschoolers' imitation. Participants (3- to 5-year-olds) watched an adult performing an unusual action in three imitation tasks (touch light, house, and obstacle). In two conditions, the model either was or was not restricted by situational constraints. In addition, the model verbalized either the goal that was to be achieved, the movement, or none of the action components. Preschoolers always acted on the objects without constraints. Results revealed differences in preschoolers' selective imitation across the tasks. In the house task, they showed the selective imitation pattern that has been interpreted as rational, imitating the unusual action more often in the no-constraint condition than in the constraint condition. In contrast, in the touch light task, preschoolers imitated the unusual head touch irrespective of the model's constraints or of the verbal cues that had been presented. Finally, in the obstacle task, children mostly emulated the observed goal irrespective of the presence of the constraint, but they increased their imitation of the unusual action when the movement had been emphasized. Overall, our data suggest that preschoolers adjust their imitative behavior to context-specific information about objects, actions, and their interpretations of the model's intention to teach something. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
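The core diagnostic in this abstract — re-modelling with many random selections of constraints and measuring the scatter of the recovered quantity — can be mimicked with a toy Monte Carlo. Everything here (the per-image-system "mass estimates", the noise level, the subset sizes) is invented; the sketch only illustrates why the systematic scatter shrinks as more image systems are used.

```python
import numpy as np

rng = np.random.default_rng(42)

true_mass = 1.0
# Toy per-image-system mass estimates; the scatter stands in for modelling noise
estimates = true_mass + rng.normal(0.0, 0.10, size=40)

def subset_rms(estimates, n_systems, n_trials=500):
    """rms scatter of the mean estimate over random constraint selections."""
    means = [rng.choice(estimates, size=n_systems, replace=False).mean()
             for _ in range(n_trials)]
    return float(np.std(means))

for k in (5, 10, 15, 20):
    print(f"{k:2d} systems: scatter {subset_rms(estimates, k):.4f}")
```

The scatter falls roughly as one over the square root of the number of systems, consistent with the abstract's observation that gains flatten out once many systems are included.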
Digit loss in archosaur evolution and the interplay between selection and constraints.
de Bakker, Merijn A G; Fowler, Donald A; den Oude, Kelly; Dondorp, Esther M; Navas, M Carmen Garrido; Horbanczuk, Jaroslaw O; Sire, Jean-Yves; Szczerbińska, Danuta; Richardson, Michael K
2013-08-22
Evolution involves interplay between natural selection and developmental constraints. This is seen, for example, when digits are lost from the limbs during evolution. Extant archosaurs (crocodiles and birds) show several instances of digit loss under different selective regimes, and show limbs with one, two, three, four or the ancestral number of five digits. The 'lost' digits sometimes persist for millions of years as developmental vestiges. Here we examine digit loss in the Nile crocodile and five birds, using markers of three successive stages of digit development. In two independent lineages under different selection, wing digit I and all its markers disappear. In contrast, hindlimb digit V persists in all species sampled, both as cartilage, and as Sox9-expressing precartilage domains, 250 million years after the adult digit disappeared. There is therefore a mismatch between evolution of the embryonic and adult phenotypes. All limbs, regardless of digit number, showed similar expression of sonic hedgehog (Shh). Even in the one-fingered emu wing, expression of posterior genes Hoxd11 and Hoxd12 was conserved, whereas expression of anterior genes Gli3 and Alx4 was not. We suggest that the persistence of digit V in the embryo may reflect constraints, particularly the conserved posterior gene networks associated with the zone of polarizing activity (ZPA). The more rapid and complete disappearance of digit I may reflect its ZPA-independent specification, and hence, weaker developmental constraints. Interacting with these constraints are selection pressures for limb functions such as flying and perching. This model may help to explain the diverse patterns of digit loss in tetrapods. Our study may also help to understand how selection on adults leads to changes in development.
Spontaneously broken spacetime symmetries and the role of inessential Goldstones
NASA Astrophysics Data System (ADS)
Klein, Remko; Roest, Diederik; Stefanyszyn, David
2017-10-01
In contrast to internal symmetries, there is no general proof that the coset construction for spontaneously broken spacetime symmetries leads to universal dynamics. One key difference lies in the role of Goldstone bosons, which for spacetime symmetries includes a subset which are inessential for the non-linear realisation and hence can be eliminated. In this paper we address two important issues that arise when eliminating inessential Goldstones. The first concerns the elimination itself, which is often performed by imposing so-called inverse Higgs constraints. Contrary to claims in the literature, there are a series of conditions on the structure constants which must be satisfied to employ the inverse Higgs phenomenon, and we discuss which parametrisation of the coset element is the most effective in this regard. We also consider generalisations of the standard inverse Higgs constraints, which can include integrating out inessential Goldstones at low energies, and prove that under certain assumptions these give rise to identical effective field theories for the essential Goldstones. Secondly, we consider mappings between non-linear realisations that differ both in the coset element and the algebra basis. While these can always be related to each other by a point transformation, remarkably, the inverse Higgs constraints are not necessarily mapped onto each other under this transformation. We discuss the physical implications of this non-mapping, with a particular emphasis on the coset space corresponding to the spontaneous breaking of the Anti-De Sitter isometries by a Minkowski probe brane.
An end-to-end X-IFU simulator: constraints on ICM kinematics
NASA Astrophysics Data System (ADS)
Roncarelli, M.; Gaspari, M.; Ettori, S.; Brighenti, F.
2017-10-01
In the coming years, the study of ICM physics will benefit from a completely new type of observation made available by the X-IFU microcalorimeter of the ATHENA X-ray telescope. X-IFU will combine energy and spatial resolution (2.5 eV and 5 arcsec), making it possible to map line emission and, potentially, to characterise the ICM dynamics in unprecedented detail. I will present an end-to-end simulator aimed at describing the ability of X-IFU to characterise ICM velocity features. Starting from hydrodynamical simulations of ICM turbulence (Gaspari et al. 2013), we carried out a detailed and realistic spectral analysis of simulated observations to derive maps of gas density, temperature, metallicity and, most notably, the centroid shift and velocity broadening of the emission lines, together with their relative errors. Our results show that X-IFU will be able to map ICM velocity features in great detail and provide precise measurements of the broadening power spectrum. This will place interesting constraints on the characteristics of turbulent motions, on both large and small scales.
Transformational and derivational strategies in analogical problem solving.
Schelhorn, Sven-Eric; Griego, Jacqueline; Schmid, Ute
2007-03-01
Analogical problem solving is mostly described as the transfer of a source solution to a target problem based on the structural correspondences (mapping) between source and target. Derivational analogy (Carbonell, in Machine Learning: An Artificial Intelligence Approach, Los Altos: Morgan Kaufmann, 1986) proposes an alternative view: a target problem is solved by replaying a remembered problem-solving episode. Thus, the experience with the source problem is used to guide the search for the target solution by applying the same solution technique, rather than by transferring the complete solution. We report an empirical study using the path-finding problems presented in Novick and Hmelo (J Exp Psychol Learn Mem Cogn 20:1296-1321, 1994) as material. We show that both transformational and derivational analogy are problem-solving strategies realized by human problem solvers. Which strategy is evoked in a given problem-solving context depends on the constraints guiding object-to-object mapping between the source and target problems. Specifically, if constraints facilitating mapping are available, subjects are more likely to employ a transformational strategy; otherwise, they are more likely to use a derivational strategy.
Cluster and constraint analysis in tetrahedron packings
NASA Astrophysics Data System (ADS)
Jin, Weiwei; Lu, Peng; Liu, Lufeng; Li, Shuixiang
2015-04-01
The disordered packings of tetrahedra often show no obvious macroscopic orientational or positional order over a wide range of packing densities, and it has been found that local order in particle clusters is the main form of order in tetrahedron packings. A cluster analysis is therefore carried out in this work to investigate the local structures and properties of tetrahedron packings. We obtain a distribution of differently sized clusters, with peaks observed at two special clusters, the dimer and the wagon wheel. We then calculate the numbers of dimers and wagon wheels, which are observed to correlate linearly or approximately linearly with packing density. Following our previous work, the number of particles participating in dimers is used as an order metric to evaluate the degree of order in the hierarchical packing structure of tetrahedra, and an order map is then constructed. Furthermore, a constraint analysis is performed to determine the isostatic and hyperstatic regions in the order map. We employ a Monte Carlo algorithm to test jamming and then propose a new maximally random jammed packing of hard tetrahedra from the order map, with a packing density of 0.6337.
Time domain localization technique with sparsity constraint for imaging acoustic sources
NASA Astrophysics Data System (ADS)
Padois, Thomas; Doutres, Olivier; Sgard, Franck; Berry, Alain
2017-09-01
This paper addresses a source localization technique in the time domain for broadband acoustic sources. The objective is to detect the position and amplitude of noise sources in workplaces accurately and quickly, in order to propose adequate noise control options and prevent workers' hearing loss or safety risks. First, the generalized cross-correlation associated with a spherical microphone array is used to generate an initial noise source map. Then a linear inverse problem is defined to improve this initial map. Commonly, such a linear inverse problem is solved with l2-regularization. In this study, two sparsity constraints are used instead to solve the inverse problem: orthogonal matching pursuit and the truncated Newton interior-point method. Synthetic data are used to highlight the performance of the technique. High-resolution imaging is achieved for various acoustic source configurations. Moreover, the amplitudes of the acoustic sources are correctly estimated. A comparison of computation times shows that the technique is compatible with quasi-real-time generation of noise source maps. Finally, the technique is tested with real data.
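Of the two sparsity solvers named in this abstract, orthogonal matching pursuit (OMP) is simple enough to sketch. The following is a generic OMP on a made-up toy problem — a random propagation matrix and three point sources on a grid — not the authors' spherical-array setup; the matrix G, grid size and amplitudes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy inverse problem: p = G q, with q sparse (a few point sources on a grid)
n_mics, n_grid = 32, 120
G = rng.normal(size=(n_mics, n_grid))            # propagation ("steering") matrix
q_true = np.zeros(n_grid)
q_true[[10, 57, 99]] = [1.0, 0.7, 0.5]           # three source amplitudes
p = G @ q_true + 0.01 * rng.normal(size=n_mics)  # measured array pressures

def omp(G, p, n_sources):
    """Orthogonal matching pursuit: greedy sparse solution of p ≈ G q."""
    residual, support = p.copy(), []
    for _ in range(n_sources):
        # Pick the grid point whose column best explains the residual
        support.append(int(np.argmax(np.abs(G.T @ residual))))
        # Re-fit amplitudes on the selected support by least squares
        coef, *_ = np.linalg.lstsq(G[:, support], p, rcond=None)
        residual = p - G[:, support] @ coef
    q = np.zeros(G.shape[1])
    q[support] = coef
    return q

q_hat = omp(G, p, 3)
print(sorted(np.flatnonzero(q_hat)))   # recovered source positions on the grid
```

The greedy selection plus least-squares re-fit is what gives OMP both the sharp maps and the correct amplitude estimates that the abstract reports.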
Kim, Yusung; Tomé, Wolfgang A
2008-01-01
Voxel-based iso-tumor control probability (TCP) maps and iso-complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an equivalent uniform dose (EUD) of 84 Gy to the entire PTV and selective boosting delivering an EUD of 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over iso-dose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when voxel-based iso-TCP maps were employed, selective boosting exhibited a more uniform tumor control probability map than could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-complication maps are presented for the rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that, as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-complication maps will become an important tool for assessing the integrity of such treatment plans.
Language at Three Timescales: The Role of Real-Time Processes in Language Development and Evolution.
McMurray, Bob
2016-04-01
Evolutionary developmental systems (evo-devo) theory stresses that selection pressures operate on entire developmental systems rather than just genes. This study extends this approach to language evolution, arguing that selection pressure may operate on two quasi-independent timescales. First, children clearly must acquire language successfully (as acknowledged in traditional evo-devo accounts), and evolution must equip them with the tools to do so. Second, while language is developing, children must also communicate with others in the moment using partially developed knowledge. These pressures may require different solutions, and their combination may underlie the evolution of complex mechanisms for language development and processing. I present two case studies to illustrate how the demands of real-time communication and language acquisition may be subtly different (and interact). The first case study examines infant-directed speech (IDS). A recent view is that IDS underwent cultural evolution to support the statistical learning mechanisms that infants use to acquire the speech categories of their language. However, recent data suggest it may not have evolved to enhance development, but rather to serve a more real-time communicative function. The second case study examines the argument for seemingly specialized mechanisms for learning word meanings (e.g., fast mapping). Both behavioral and computational work suggest that learning may be much slower and served by general-purpose mechanisms like associative learning. Fast mapping, then, may be a real-time process meant to serve immediate communication, not learning, by augmenting incomplete vocabulary knowledge with constraints from the current context. Together, these studies suggest that evolutionary accounts should consider selection pressure arising both from real-time communicative demands and from the need for accurate language development. Copyright © 2016 Cognitive Science Society, Inc.
Applying CBR to machine tool product configuration design oriented to customer requirements
NASA Astrophysics Data System (ADS)
Wang, Pengjia; Gong, Yadong; Xie, Hualong; Liu, Yongxian; Nee, Andrew Yehching
2017-01-01
Product customization is a trend in the current market-oriented manufacturing environment. However, the deduction of design results from customer requirements and the evaluation of design alternatives are still heavily reliant on the designer's experience and knowledge. To address the fuzziness and uncertainty of customer requirements in product configuration, an analysis method based on the grey rough model is presented, with which customer requirements can be converted into technical characteristics effectively. In addition, an optimization decision model for product planning is established to help enterprises select the key technical characteristics, under cost and time constraints, that maximize customer satisfaction. A new case retrieval approach that combines the self-organizing map and the fuzzy similarity priority ratio method is proposed for case-based design: the self-organizing map reduces the retrieval range and increases retrieval efficiency, while the fuzzy similarity priority ratio method evaluates the similarity of cases comprehensively. To ensure that the final case has the best overall performance, an evaluation method based on grey correlation analysis is proposed to evaluate similar cases and select the most suitable one. Furthermore, a computer-aided system is developed using the MATLAB GUI to assist product configuration design. An application to an ETC series machine tool product shows that the proposed method is effective, rapid and accurate in the product configuration process. The proposed methodology provides detailed guidance for product configuration design oriented to customer requirements.
Genomics-assisted breeding for boosting crop improvement in pigeonpea (Cajanus cajan)
Pazhamala, Lekha; Saxena, Rachit K.; Singh, Vikas K.; Sameerkumar, C. V.; Kumar, Vinay; Sinha, Pallavi; Patel, Kishan; Obala, Jimmy; Kaoneka, Seleman R.; Tongoona, P.; Shimelis, Hussein A.; Gangarao, N. V. P. R.; Odeny, Damaris; Rathore, Abhishek; Dharmaraj, P. S.; Yamini, K. N.; Varshney, Rajeev K.
2015-01-01
Pigeonpea is an important pulse crop grown predominantly in the tropical and sub-tropical regions of the world. Although pigeonpea growing area has considerably increased, yield has remained stagnant for the last six decades mainly due to the exposure of the crop to various biotic and abiotic constraints. In addition, low level of genetic variability and limited genomic resources have been serious impediments to pigeonpea crop improvement through modern breeding approaches. In recent years, however, due to the availability of next generation sequencing and high-throughput genotyping technologies, the scenario has changed tremendously. The reduced sequencing costs resulting in the decoding of the pigeonpea genome has led to the development of various genomic resources including molecular markers, transcript sequences and comprehensive genetic maps. Mapping of some important traits including resistance to Fusarium wilt and sterility mosaic disease, fertility restoration, determinacy with other agronomically important traits have paved the way for applying genomics-assisted breeding (GAB) through marker assisted selection as well as genomic selection (GS). This would accelerate the development and improvement of both varieties and hybrids in pigeonpea. Particularly for hybrid breeding programme, mitochondrial genomes of cytoplasmic male sterile (CMS) lines, maintainers and hybrids have been sequenced to identify genes responsible for cytoplasmic male sterility. Furthermore, several diagnostic molecular markers have been developed to assess the purity of commercial hybrids. In summary, pigeonpea has become a genomic resources-rich crop and efforts have already been initiated to integrate these resources in pigeonpea breeding. PMID:25741349
Alexander C. Vibrans; Ronald E. McRoberts; Paolo Moser; Adilson L. Nicoletti
2013-01-01
Estimation of large area forest attributes, such as area of forest cover, from remote sensing-based maps is challenging because of image processing, logistical, and data acquisition constraints. In addition, techniques for estimating and compensating for misclassification and estimating uncertainty are often unfamiliar. Forest area for the state of Santa Catarina in...
A. L. Hammett; James L. Chamberlain
2002-01-01
This paper discusses the legacy of nontimber forest products (NTFPs) and, more specifically, medicinal plant use in North America. It also discusses briefly MAP markets both in North America and throughout the world, and describes the constraints to the sustainable use and development of MAP resources. Lastly, the paper relates some lessons that may be appropriate for...
Vesta Mineralogy: VIR maps Vesta's surface
NASA Technical Reports Server (NTRS)
Coradina, A.; DeSanctis, M.; Ammannito, E.; Capaccioni, F.; Capria, T.; Carraro, F.; Cartacci, M.; Filacchione, G.; Fonte, S.; Magni, G.;
2011-01-01
The Dawn mission will have completed its Survey orbit around 4 Vesta by the end of August 2011. We present a preliminary analysis of data acquired by the Visual and InfraRed Spectrometer (VIR) to map Vesta's mineralogy. Thermal properties and mineralogical data are combined to provide constraints on Vesta's formation and thermal evolution, the delivery of exogenic materials, space weathering processes, and the origin of the howardite, eucrite, and diogenite (HED) meteorites.
Planarity constrained multi-view depth map reconstruction for urban scenes
NASA Astrophysics Data System (ADS)
Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie
2018-05-01
Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, challenges arise when this technique is applied to urban scenes, where man-made regular shapes are prominent. To address this need, this paper proposes a planarity-constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under constraints of planar geometry and smoothness. A set of candidate local planes is first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. Vertical aerial images, oblique aerial images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that PMVD outperforms popular multi-view depth map reconstruction methods, with twice the accuracy for the aerial datasets and an outcome comparable to the state of the art for ground images. As expected, PMVD is able to preserve the planarity of piecewise-flat structures in urban scenes and restore the edges in depth-discontinuous areas.
NASA Technical Reports Server (NTRS)
Ider, Sitki Kemal
1989-01-01
Conventionally, kinematical constraints in multibody systems are treated similarly to geometrical constraints and are modeled by constraint reaction forces that are perpendicular to the constraint surfaces. In reality, however, one may want to achieve the desired kinematical conditions by control forces having different directions relative to the constraint surfaces. Here, the conventional equations of motion for multibody systems subject to kinematical constraints are generalized by introducing general-direction control forces. Conditions for the selection of the control force directions are also discussed. A redundant robotic system subject to prescribed end-effector motion is analyzed to illustrate the proposed methods.
Qian, Wei; Fan, Guiyan; Liu, Dandan; Zhang, Helong; Wang, Xiaowu; Wu, Jian; Xu, Zhaosheng
2017-04-04
Cultivated spinach (Spinacia oleracea L.) is one of the most widely cultivated leafy vegetables in the world, and it has a high nutritional value. Spinach is also an ideal plant for investigating the mechanism of sex determination because it is a dioecious species with separate male and female plants. Several reports on sex-linked molecular markers and their localization in spinach have appeared; however, only two genetic maps of spinach have been reported. The lack of rich and reliable molecular markers and the shortage of high-density linkage maps are important constraints on spinach research. In this study, a high-density genetic map of spinach based on the Specific-Locus Amplified Fragment Sequencing (SLAF-seq) technique was constructed, and the sex-determining gene was finely mapped. Through bioinformatic analysis, 50.75 Gb of data in total was obtained, comprising 207.58 million paired-end reads. Altogether, 145,456 high-quality SLAF markers were obtained, of which 27,800 were polymorphic; after linkage analysis, 4080 SLAF markers were finally placed on the genetic map. The map spanned 1,125.97 cM with an average distance of 0.31 cM between adjacent marker loci, and it was divided into 6 linkage groups corresponding to the number of spinach chromosomes. In addition, Bulked Segregant Analysis (BSA) combined with SLAF-seq technology (super-BSA) was employed to generate markers linked to the sex-determining gene. Combined with the high-density genetic map, the sex-determining gene X/Y was located in linkage group (LG) 4 (66.98 cM-69.72 cM and 75.48 cM-92.96 cM), which may be the region harboring the sex-determining gene. In summary, a high-density genetic map of spinach based on the SLAF-seq technique was constructed with a backcross (BC1) population; it is the highest-density genetic map of spinach reported to date. At the same time, the sex-determining gene X/Y was mapped to LG4 with super-BSA. This map will offer a suitable basis for further studies of spinach, such as gene mapping, map-based cloning of specific genes, quantitative trait locus (QTL) mapping and marker-assisted selection (MAS). It will also provide an efficient reference for studies on the mechanism of sex determination in other dioecious plants.
Magnetic Doppler imaging of Ap stars
NASA Astrophysics Data System (ADS)
Silvester, J.; Wade, G. A.; Kochukhov, O.; Landstreet, J. D.; Bagnulo, S.
2008-04-01
Historically, the magnetic field geometries of the chemically peculiar Ap stars were modelled as simple dipole fields. However, with the acquisition of increasingly sophisticated diagnostic data, it has become clear that the large-scale field topologies exhibit important departures from this simple model. Recently, new high-resolution circular and linear polarisation spectroscopy has even hinted at the presence of strong, small-scale field structures, which were completely unexpected based on earlier modelling. This project investigates the detailed structure of these strong fossil magnetic fields, in particular the large-scale field geometry as well as small-scale magnetic structures, by mapping the magnetic and chemical surface structure of a selected sample of Ap stars. These maps will be used to investigate the relationship between the local field vector and the local surface chemistry, looking for the influence the field may have on the various chemical transport mechanisms (i.e., diffusion, convection and mass loss). This will lead to better constraints on the origin and evolution of these fields, as well as a refined magnetic field model for Ap stars. Mapping will be performed using high-resolution, high signal-to-noise ratio time series of spectra in both circular and linear polarisation obtained with the new-generation ESPaDOnS (CFHT, Mauna Kea, Hawaii) and NARVAL (Pic du Midi Observatory) spectropolarimeters. With these data we will perform tomographic inversion of Doppler-broadened Stokes IQUV Zeeman profiles of a large variety of spectral lines using the INVERS10 magnetic Doppler imaging code, simultaneously recovering detailed surface maps of the vector magnetic field and chemical abundances.
NASA Astrophysics Data System (ADS)
Daskalou, Olympia; Karanastasi, Maria; Markonis, Yannis; Dimitriadis, Panayiotis; Koukouvinos, Antonis; Efstratiadis, Andreas; Koutsoyiannis, Demetris
2016-04-01
Following the legislative EU targets and taking advantage of its high renewable energy potential, Greece can obtain significant benefits from developing its water, solar and wind energy resources. In this context we present a GIS-based methodology for the optimal sizing and siting of solar and wind energy systems at the regional scale, which is tested in the Prefecture of Thessaly. First, we assess the wind and solar potential, taking into account the stochastic nature of the associated meteorological processes (wind speed and solar radiation, respectively), which is an essential component both for planning (i.e., type selection and sizing of photovoltaic panels and wind turbines) and for management purposes (i.e., real-time operation of the system). For the optimal siting, we assess the efficiency and economic performance of the energy system, also accounting for a number of constraints associated with topographic limitations (e.g., terrain slope, proximity to the road and electricity grid networks), environmental legislation and other land use constraints. Based on this analysis, we investigate favorable alternatives using technical, environmental and financial criteria. The final outcome is a set of GIS maps that depict the available energy potential and the optimal layout of photovoltaic panels and wind turbines over the study area. We also consider a hypothetical scenario of future development of the study area, in which we assume the combined operation of the above renewables with major hydroelectric dams and pumped-storage facilities, thus providing a unique hybrid renewable system extended to the regional scale.
NASA Astrophysics Data System (ADS)
Lambrakos, S. G.
2017-08-01
An inverse thermal analysis of Alloy 690 laser and hybrid laser-GMA welds is presented that uses numerical-analytical basis functions and boundary constraints based on measured solidification cross sections. In particular, the inverse analysis procedure uses three-dimensional constraint conditions such that two-dimensional projections of calculated solidification boundaries are constrained to map within experimentally measured solidification cross sections. Temperature histories calculated by this analysis are input data for computational procedures that predict solid-state phase transformations and mechanical response. These temperature histories can be used for inverse thermal analysis of welds corresponding to other welding processes whose process conditions are within similar regimes.
Extinction Mapping and Dust-to-Gas Ratios of Nearby Galaxies using LEGUS
NASA Astrophysics Data System (ADS)
Kahre, Lauren; Walterbos, Rene; Kim, Hwihyun; Thilker, David; Lee, Janice; LEGUS Team
2018-01-01
Dust is commonly used as a tracer for cold dense gas, either through IR and NIR emission maps or through extinction mapping, and dust abundance and gas metallicity are critical constraints for chemical and galaxy evolution models. Extinction mapping has been used to trace dust column densities in the Milky Way, the Magellanic Clouds, and M31. The maps for M31 use IR and NIR photometry of red giant branch stars, which is more difficult to obtain for more distant galaxies. Work by Kahre et al. (in prep) uses the extinctions derived for individual massive stars using the isochrone-matching method described by Kim et al. (2012) to generate extinction maps for these more distant galaxies. Isochrones of massive stars lie in the same location on a color-color diagram with little dependence on metallicity and luminosity class, so the extinction can be directly derived from the observed photometry. We generate extinction maps using photometry of massive stars from the Hubble Space Telescope for several of the nearly 50 galaxies observed by the Legacy Extragalactic Ultraviolet Survey (LEGUS). The derived extinction maps will allow us to correct ground-based and HST Halpha maps for extinction, and will be used to constrain changes in the dust-to-gas ratio across the galaxy sample and in different star formation, metallicity and morphological environments. Previous studies have found links between galaxy metallicity and the dust-to-gas mass ratio. We present a study of LEGUS galaxies spanning a range of distances, metallicities, and galaxy morphologies, expanding on our previous study of the metal-poor dwarfs Holmberg I and II and the giant spirals NGC 6503 and NGC 628. We see clear evidence for changes in the dust-to-gas mass ratio with changing metallicity. We also examine changes in the dust-to-gas mass ratio with galactocentric radius. Ultimately, we will provide constraints on the dust-to-gas mass ratio across a wide range of galaxy environments.
NASA Astrophysics Data System (ADS)
Fassett, C.; Levy, J.; Head, J.
2013-09-01
Landforms inferred to have formed from glacial processes are abundant on Mars and include features such as concentric crater fill (CCF), lobate debris aprons (LDA), and lineated valley fill (LVF). Here, we present new mapping of the spatial extent of these landforms derived from CTX and THEMIS VIS image data, and new geometric constraints on the volume of glaciogenic fill material present in concentric crater fill deposits.
The structure of creative cognition in the human brain
Jung, Rex E.; Mead, Brittany S.; Carrasco, Jessica; Flores, Ranee A.
2013-01-01
Creativity is a vast construct, seemingly intractable to scientific inquiry—perhaps due to the vague concepts applied to the field of research. One attempt to limit the purview of creative cognition formulates the construct in terms of evolutionary constraints, namely that of blind variation and selective retention (BVSR). Behaviorally, one can limit the “blind variation” component to idea generation tests as manifested by measures of divergent thinking. The “selective retention” component can be represented by measures of convergent thinking, as represented by measures of remote associates. We summarize results from measures of creative cognition, correlated with structural neuroimaging measures including structural magnetic resonance imaging (sMRI), diffusion tensor imaging (DTI), and proton magnetic resonance spectroscopy (1H-MRS). We also review lesion studies, considered to be the “gold standard” of brain-behavioral studies. What emerges is a picture consistent with theories of disinhibitory brain features subserving creative cognition, as described previously (Martindale, 1981). We provide a perspective, involving aspects of the default mode network (DMN), which might provide a “first approximation” regarding how creative cognition might map onto the human brain. PMID:23847503
Xylenes transformation over zeolites ZSM-5 ruled by acidic properties
NASA Astrophysics Data System (ADS)
Gołąbek, Kinga; Tarach, Karolina A.; Góra-Marek, Kinga
2018-03-01
The studies presented in this work offer an insight into the xylene isomerization process, followed by 2D COS analysis, in terms of the different acidity of microporous zeolites ZSM-5. The isomerization reaction proceeded effectively over zeolites ZSM-5 with Si/Al equal to 12 and 32. Among them, the Al-poorer zeolite (Si/Al = 32) was found to offer the highest conversion and selectivity to p-xylene with the lowest number of disproportionation products, in both ortho- and meta-xylene transformation. Further reduction of Brønsted acidity facilitated the disproportionation path (zeolites of Si/Al = 48 and 750). The formation of intermediate species induced by the diffusion constraints for m-xylene in 10-ring channels was rationalized in terms of methylbenzenium ion formation inside the rigid micropore environment. Finally, both the microporous character of the zeolite and the optimised acidity were found to be crucial for high selectivity to the most desired product, i.e., p-xylene. The analysis of asynchronous maps allowed us to determine the order of appearance of the respective products on the zeolite surface.
Spinfoam cosmology with the proper vertex amplitude
NASA Astrophysics Data System (ADS)
Vilensky, Ilya
2017-11-01
The proper vertex amplitude is derived from the Engle-Pereira-Rovelli-Livine vertex by restricting to a single gravitational sector in order to achieve the correct semi-classical behaviour. We apply the proper vertex to calculate a cosmological transition amplitude that can be viewed as the Hartle-Hawking wavefunction. To perform this calculation we deduce the integral form of the proper vertex and use extended stationary phase methods to estimate the large-volume limit. We show that the resulting amplitude satisfies an operator constraint whose classical analogue is the Hamiltonian constraint of the Friedmann-Robertson-Walker cosmology. We find that the constraint dynamically selects the relevant family of coherent states and demonstrate a similar dynamic selection in standard quantum mechanics. We investigate the effects of dynamical selection on long-range correlations.
Automated Derivation of Complex System Constraints from User Requirements
NASA Technical Reports Server (NTRS)
Foshee, Mark; Murey, Kim; Marsh, Angela
2010-01-01
The Payload Operations Integration Center (POIC) located at the Marshall Space Flight Center has the responsibility of integrating US payload science requirements for the International Space Station (ISS). All payload operations must request ISS system resources so that the resource usage will be included in the ISS on-board execution timelines. The scheduling of resources and building of the timeline is performed using the Consolidated Planning System (CPS). The ISS resources are quite complex due to the large number of components that must be accounted for. The planners at the POIC simplify the process for Payload Developers (PDs) by providing the PDs with an application that has the basic functionality PDs need, as well as a list of simplified resources, in the User Requirements Collection (URC) application. The planners maintain a mapping of the URC resources to the CPS resources. The process of manually converting a PD's science requirements from a simplified representation to a more complex CPS representation is time-consuming and tedious. The goal is to provide a software solution that allows the planners to build a mapping of the complex CPS constraints to the basic URC constraints and automatically convert the PDs' requirements into system requirements during export to CPS.
Symbolic control of visual attention: semantic constraints on the spatial distribution of attention.
Gibson, Bradley S; Scheutz, Matthias; Davis, Gregory J
2009-02-01
Humans routinely use spatial language to control the spatial distribution of attention. In so doing, spatial information may be communicated from one individual to another across opposing frames of reference, which in turn can lead to inconsistent mappings between symbols and directions (or locations). These inconsistencies may have important implications for the symbolic control of attention because they can be translated into differences in cue validity, a manipulation that is known to influence the focus of attention. This differential validity hypothesis was tested in Experiment 1 by comparing spatial word cues that were predicted to have high learned spatial validity ("above/below") and low learned spatial validity ("left/right"). Consistent with this prediction, when two measures of selective attention were used, the results indicated that attention was less focused in response to "left/right" cues than in response to "above/below" cues, even when the actual validity of each of the cues was equal. In addition, Experiment 2 predicted that spatial words such as "left/right" would have lower spatial validity than would other directional symbols that specify direction along the horizontal axis, such as "<--/-->" cues. The results were also consistent with this hypothesis. Altogether, the present findings demonstrate important semantic-based constraints on the spatial distribution of attention.
Brain evolution and development: adaptation, allometry and constraint
Barton, Robert A.
2016-01-01
Phenotypic traits are products of two processes: evolution and development. But how do these processes combine to produce integrated phenotypes? Comparative studies identify consistent patterns of covariation, or allometries, between brain and body size, and between brain components, indicating the presence of significant constraints limiting independent evolution of separate parts. These constraints are poorly understood, but in principle could be either developmental or functional. The developmental constraints hypothesis suggests that individual components (brain and body size, or individual brain components) tend to evolve together because natural selection operates on relatively simple developmental mechanisms that affect the growth of all parts in a concerted manner. The functional constraints hypothesis suggests that correlated change reflects the action of selection on distributed functional systems connecting the different sub-components, predicting more complex patterns of mosaic change at the level of the functional systems and more complex genetic and developmental mechanisms. These hypotheses are not mutually exclusive but make different predictions. We review recent genetic and neurodevelopmental evidence, concluding that functional rather than developmental constraints are the main cause of the observed patterns. PMID:27629025
Kim, Yusung; Tomé, Wolfgang A.
2010-01-01
Summary Voxel-based iso-Tumor Control Probability (TCP) maps and iso-Complication maps are proposed as a plan-review tool, especially for functional image-guided intensity-modulated radiotherapy (IMRT) strategies such as selective boosting (dose painting) and conformal avoidance IMRT. The maps employ voxel-based phenomenological biological dose-response models for target volumes and normal organs. Two IMRT strategies for prostate cancer, namely conventional uniform IMRT delivering an EUD = 84 Gy (equivalent uniform dose) to the entire PTV and selective boosting delivering an EUD = 82 Gy to the entire PTV, are investigated to illustrate the advantages of this approach over isodose maps. Conventional uniform IMRT yielded a more uniform isodose map over the entire PTV, while selective boosting resulted in a nonuniform isodose map. However, when employing voxel-based iso-TCP maps, selective boosting exhibited a more uniform tumor control probability map than could be achieved using conventional uniform IMRT, which showed TCP cold spots in high-risk tumor subvolumes despite delivering a higher EUD to the entire PTV. Voxel-based iso-Complication maps are presented for rectum and bladder, and their utilization for selective avoidance IMRT strategies is discussed. We believe that as the need for functional image-guided treatment planning grows, voxel-based iso-TCP and iso-Complication maps will become an important tool to assess the integrity of such treatment plans. PMID:21151734
Tulchinsky, Alexander Y; Johnson, Norman A; Watt, Ward B; Porter, Adam H
2014-11-01
Postzygotic isolation between incipient species results from the accumulation of incompatibilities that arise as a consequence of genetic divergence. When phenotypes are determined by regulatory interactions, hybrid incompatibility can evolve even as a consequence of parallel adaptation in parental populations because interacting genes can produce the same phenotype through incompatible allelic combinations. We explore the evolutionary conditions that promote and constrain hybrid incompatibility in regulatory networks using a bioenergetic model (combining thermodynamics and kinetics) of transcriptional regulation, considering the bioenergetic basis of molecular interactions between transcription factors (TFs) and their binding sites. The bioenergetic parameters consider the free energy of formation of the bond between the TF and its binding site and the availability of TFs in the intracellular environment. Together these determine the fractional occupancy of the TF on the promoter site, the degree of subsequent gene expression and, in diploids, the degree of dominance among allelic interactions. This results in a sigmoid genotype-phenotype map and fitness landscape, with the details of the shape determining the degree of bioenergetic evolutionary constraint on hybrid incompatibility. Using individual-based simulations, we subjected two allopatric populations to parallel directional or stabilizing selection. Misregulation of hybrid gene expression occurred under either type of selection, although it evolved faster under directional selection. Under directional selection, the extent of hybrid incompatibility increased with the slope of the genotype-phenotype map near the derived parental expression level. Under stabilizing selection, hybrid incompatibility arose from compensatory mutations and was greater when the bioenergetic properties of the interaction caused the space of nearly neutral genotypes around the stable expression level to be wide.
F2s showed higher hybrid incompatibility than F1s to the extent that the bioenergetic properties favored dominant regulatory interactions. The present model is a mechanistically explicit case of the Bateson-Dobzhansky-Muller model, connecting environmental selective pressure to hybrid incompatibility through the molecular mechanism of regulatory divergence. The bioenergetic parameters that determine expression represent measurable properties of transcriptional regulation, providing a predictive framework for empirical studies of how phenotypic evolution results in epistatic incompatibility at the molecular level in hybrids. Copyright © 2014 by the Genetics Society of America.
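The two-state thermodynamic occupancy underlying such bioenergetic models can be written as θ = [TF] / ([TF] + K_d), with K_d = exp(ΔG/kT). The sketch below is a generic illustration of how binding free energy and TF availability combine into a sigmoid expression response; the parameter values (TF concentration, free energies, kT) are hypothetical and are not the paper's calibration.

```python
import math

def fractional_occupancy(tf_conc, delta_g, kT=0.593):
    """Equilibrium occupancy of a TF binding site (simple two-state model).

    delta_g: binding free energy in kcal/mol (more negative = tighter binding);
    kT defaults to ~0.593 kcal/mol (room temperature). Generic thermodynamic
    sketch only, not the specific parameterization of the study.
    """
    k_d = math.exp(delta_g / kT)       # dissociation constant from the free energy
    return tf_conc / (tf_conc + k_d)   # fraction of time the site is bound

# Occupancy rises sigmoidally with binding strength at fixed TF availability:
for dg in (-1.0, -3.0, -5.0):
    print(dg, round(fractional_occupancy(0.01, dg), 3))
```

Because downstream expression follows occupancy, small mutational changes in ΔG near the steep part of this curve move expression a lot, while changes on the flat shoulders are nearly neutral, which is the geometry behind the constraint arguments above.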
Mapping Queer Bioethics: Space, Place, and Locality.
Wahlert, Lance
2016-01-01
This article, which introduces the special issue of the Journal of Homosexuality on "Mapping Queer Bioethics," begins by offering an overview of the analytical scope of the issue. Specifically, the first half of this essay raises critical questions central to the concept of a space-related queer bioethics, such as: How do we appreciate and understand the special needs of queer parties given the constraints of location, space, and geography? The second half of this article describes each feature article in the issue, as well as the subsequent special sections on the ethics of reading literal, health-related maps ("Cartographies") and scrutinizing the history of this journal as concerns LGBT health ("Mapping the Journal of Homosexuality").
A bibliography of planetary geology principal investigators and their associates, 1976-1978
NASA Technical Reports Server (NTRS)
1978-01-01
This bibliography cites publications submitted by 484 principal investigators and their associates who were supported through NASA's Office of Space Sciences Planetary Geology Program. Subject classifications include: solar system formation, comets, and asteroids; planetary satellites, planetary interiors, geological and geochemical constraints on planetary evolution; impact crater studies, volcanism, eolian studies, fluvial studies, Mars geological mapping; Mercury geological mapping; planetary cartography; and instrument development and techniques. An author/editor index is provided.
Selection and constraint underlie irreversibility of tooth loss in cypriniform fishes
Aigler, Sharon R.; Jandzik, David; Hatta, Kohei; Uesugi, Kentaro; Stock, David W.
2014-01-01
The apparent irreversibility of the loss of complex traits in evolution (Dollo’s Law) has been explained either by constraints on generating the lost traits or the complexity of selection required for their return. Distinguishing between these explanations is challenging, however, and little is known about the specific nature of potential constraints. We investigated the mechanisms underlying the irreversibility of trait loss using reduction of dentition in cypriniform fishes, a lineage that includes the zebrafish (Danio rerio) as a model. Teeth were lost from the mouth and upper pharynx in this group at least 50 million y ago and retained only in the lower pharynx. We identified regional loss of expression of the Ectodysplasin (Eda) signaling ligand as a likely cause of dentition reduction. In addition, we found that overexpression of this gene in the zebrafish is sufficient to restore teeth to the upper pharynx but not to the mouth. Because both regions are competent to respond to Eda signaling with transcriptional output, the likely constraint on the reappearance of oral teeth is the alteration of multiple genetic pathways required for tooth development. The upper pharyngeal teeth are fully formed, but do not exhibit the ancestral relationship to other pharyngeal structures, suggesting that they would not be favored by selection. Our results illustrate an underlying commonality between constraint and selection as explanations for the irreversibility of trait loss; multiple genetic changes would be required to restore teeth themselves to the oral region and optimally functioning ones to the upper pharynx. PMID:24821783
Mapping the CMB with the Wilkinson Microwave Anisotropy Probe
NASA Technical Reports Server (NTRS)
Hinshaw, Gary
2007-01-01
The data from the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide detailed full-sky maps of the cosmic microwave background temperature anisotropy and new full-sky maps of the polarization. Together, the data provide a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time. These and other aspects of the mission results will be discussed and commented on.
Great Basin NP and USGS cooperate on a geologic mapping program
Brown, Janet L.; Davila, Vidal
1993-01-01
The GRBA draft General Management Plan proposes development in several locations in the Kious Spring and Lehman Caves 1:24,000 topographic quadrangles, and these proposed developments need geologic evaluation before construction. Brown will act as project manager to coordinate the IA with time frames, budget constraints, and the timely preparation of required maps, reports, and GIS data sets. In addition to having been an interpretive Ranger-Naturalist in two National Parks, Brown has published USGS interpretive geologic maps and USGS bulletins. Her research includes sedimentologic, stratigraphic, and structural analyses of Laramide intermontane basins in the Western Interior.
Varanka, Dalia
2006-01-01
Historical topographic maps are the only systematically collected data resource covering the entire nation for long-term landscape change studies over the 20th century for geographical and environmental research. The paper discusses aspects of the historical U.S. Geological Survey topographic maps that present constraints on the design of a database for such studies. Problems involved in this approach include locating the required maps, understanding land-feature classification differences between topographic and land use/land cover maps, the approximation of error between different map editions of the same area, and the identification of true changes on the landscape between time periods. Suggested approaches to these issues are illustrated using an example of such a study by the author.
Landing Area Narrowed for 2016 InSight Mission to Mars
2013-09-04
The process of selecting a site for NASA's next landing on Mars, planned for September 2016, has narrowed to four semifinalist sites located close together in the Elysium Planitia region of Mars. The mission known by the acronym InSight will study the Red Planet's interior, rather than surface features, to advance understanding of the processes that formed and shaped the rocky planets of the inner solar system, including Earth. The location of the cluster of semifinalist landing sites for InSight is indicated on this near-global topographic map of Mars, which also indicates landing sites of current and past NASA missions to the surface of Mars. The mission's full name is Interior Exploration Using Seismic Investigations, Geodesy and Heat Transport. The location of Elysium Planitia close to the Martian equator meets an engineering requirement for the stationary InSight lander to receive adequate solar irradiation year-round on its photovoltaic array. The location also meets an engineering constraint for low elevation, optimizing the amount of atmosphere the spacecraft can use for deceleration during its descent to the surface. The number of candidate landing sites for InSight was trimmed from 22 down to four in August 2013. This down-selection facilitates focusing the efforts to further evaluate the four sites. Cameras on NASA's Mars Reconnaissance Orbiter will be used to gather more information about them before the final selection. The topographic map uses data from the Mars Orbiter Laser Altimeter on NASA's Mars Global Surveyor spacecraft. The color coding on this map indicates elevation relative to a reference datum, since Mars has no "sea level." The lowest elevations are presented as dark blue; the highest as white. The difference between green and orange in the color coding is about 2.5 miles (4 kilometers) vertically. 
Note: After thorough examination, NASA managers have decided to suspend the planned March 2016 launch of the Interior Exploration using Seismic Investigations Geodesy and Heat Transport (InSight) mission. The decision follows unsuccessful attempts to repair a leak in a section of the prime instrument in the science payload. http://photojournal.jpl.nasa.gov/catalog/PIA17357
NASA Technical Reports Server (NTRS)
Smith, Jeffrey, S.; Aronstein, David L.; Dean, Bruce H.; Lyon, Richard G.
2012-01-01
The performance of an optical system (for example, a telescope) is limited by the misalignments and manufacturing imperfections of the optical elements in the system. The impact of these misalignments and imperfections can be quantified by the phase variations imparted on light traveling through the system. Phase retrieval is a methodology for determining these variations. Phase retrieval uses images taken with the optical system, using a light source of known shape and characteristics. Unlike interferometric methods, which require an optical reference for comparison, and unlike Shack-Hartmann wavefront sensors, which require special optical hardware at the optical system's exit pupil, phase retrieval is an in situ, image-based method for determining the phase variations of light at the system's exit pupil. Phase retrieval can be used both as an optical metrology tool (during fabrication of optical surfaces and assembly of optical systems) and as a sensor used in active, closed-loop control of an optical system, to optimize performance. One class of phase-retrieval algorithms is the iterative transform algorithm (ITA). ITAs estimate the phase variations by iteratively enforcing known constraints in the exit pupil and at the detector, determined from modeled or measured data. The Variable Sampling Mapping (VSM) technique is a new method for enforcing these constraints in ITAs. VSM is an open framework for addressing a wide range of issues that have previously been considered detrimental to high-accuracy phase retrieval, including undersampled images, broadband illumination, images taken at or near best focus, chromatic aberrations, jitter or vibration of the optical system or detector, and dead or noisy detector pixels. The VSM is a model-to-data mapping procedure.
In VSM, fully sampled electric fields at multiple wavelengths are modeled inside the phase-retrieval algorithm, and then these fields are mapped to intensities on the light detector, using the properties of the detector and optical system, for comparison with measured data. Ultimately, this model-to-data mapping procedure enables a more robust and accurate way of incorporating the exit-pupil and image detector constraints, which are fundamental to the general class of ITA phase retrieval algorithms.
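As a point of reference for the ITA class described above, a minimal Gerchberg-Saxton-style loop alternates between enforcing the known pupil support and the measured detector amplitude. The sketch below illustrates only this generic constraint-enforcement cycle on synthetic data; the VSM model-to-data mapping is a considerably more elaborate extension (multi-wavelength fields, detector properties) that is not reproduced here.

```python
import numpy as np

# Build a synthetic "truth": a square pupil with a random phase, and the
# amplitude that a detector in the focal plane would measure.
rng = np.random.default_rng(1)
n = 64
support = np.zeros((n, n))
support[24:40, 24:40] = 1.0                        # known pupil mask
true_phase = rng.uniform(-0.5, 0.5, (n, n)) * support
measured_amp = np.abs(np.fft.fft2(support * np.exp(1j * true_phase)))

field = support.astype(complex)                    # flat-phase starting guess
for _ in range(200):
    focal = np.fft.fft2(field)
    focal = measured_amp * np.exp(1j * np.angle(focal))   # detector constraint
    field = np.fft.ifft2(focal)
    field = support * np.exp(1j * np.angle(field))        # pupil constraint

err = np.abs(np.abs(np.fft.fft2(field)) - measured_amp).mean()
print("mean amplitude residual:", float(err))
```

Each pass projects the current estimate onto the two constraint sets in turn, which is exactly the "iteratively enforcing known constraints in the exit pupil and at the detector" step that defines an ITA; the residual against the measured amplitude shrinks as the phase estimate improves.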
Method for Viterbi decoding of large constraint length convolutional codes
NASA Technical Reports Server (NTRS)
Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Reed, Irving S. (Inventor); Jing, Sun (Inventor)
1988-01-01
A new method of Viterbi decoding of convolutional codes lends itself to a pipeline VLSI architecture using a single sequential processor to compute the path metrics in the Viterbi trellis. An array method is used to store the path information for NK intervals, where N is a number and K is the constraint length. The surviving path at the end of each NK interval is then selected from the last entry in the array. A trace-back method is used to return to the beginning of the selected path, i.e., to the first time unit of the interval NK, to read out the stored branch metrics of the selected path which correspond to the message bits. The decoding decision made in this way is no longer maximum likelihood, but can be almost as good, provided that the constraint length K is not too small. The advantage is that for a long message, it is not necessary to provide a large memory to store the trellis-derived information until the end of the message to select the path that is to be decoded; the selection is made at the end of every NK time units, thus decoding a long message in successive blocks.
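The trellis search and trace-back array described above can be illustrated with a toy hard-decision Viterbi decoder for a rate-1/2, K = 3 code (generator polynomials 7 and 5 octal). This sketch performs a single full trace-back at the end of the message rather than the memory-saving blockwise every-NK-intervals selection of the method above, but the path-metric update and predecessor array are the same ingredients.

```python
G1, G2 = 0b111, 0b101  # generator polynomials (octal 7 and 5)

def _branch(reg):
    """Two output bits produced by a 3-bit encoder register."""
    return [bin(reg & G1).count("1") % 2, bin(reg & G2).count("1") % 2]

def conv_encode(bits):
    state, out = 0, []
    for b in bits:
        reg = (b << 2) | state          # new input bit plus two state bits
        out += _branch(reg)
        state = reg >> 1
    return out

def viterbi_decode(received):
    INF = float("inf")
    metrics = [0, INF, INF, INF]        # start in state 0, like the encoder
    trace = []                          # per-step predecessor (trace-back) arrays
    for t in range(len(received) // 2):
        r = received[2 * t:2 * t + 2]
        new, pred = [INF] * 4, [0] * 4
        for s in range(4):
            if metrics[s] == INF:
                continue
            for b in (0, 1):
                reg = (b << 2) | s
                exp, ns = _branch(reg), reg >> 1
                m = metrics[s] + (exp[0] != r[0]) + (exp[1] != r[1])
                if m < new[ns]:         # keep the survivor into state ns
                    new[ns], pred[ns] = m, s
        metrics = new
        trace.append(pred)
    # Trace back from the best final state; the input bit is the MSB of each state.
    state, bits = metrics.index(min(metrics)), []
    for pred in reversed(trace):
        bits.append(state >> 1)
        state = pred[state]
    return bits[::-1]

msg = [1, 0, 1, 1, 0, 0, 1, 0]
print(viterbi_decode(conv_encode(msg)) == msg)
```

The trace-back array `trace` stores one predecessor table per time step, which is what grows with message length; the blockwise scheme in the patent bounds that growth by reading the array out and restarting it every NK time units.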
Prediction of Vehicle Mobility on Large-Scale Soft-Soil Terrain Maps Using Physics-Based Simulation
2016-08-02
PREDICTION OF VEHICLE MOBILITY ON LARGE-SCALE SOFT-SOIL TERRAIN MAPS USING PHYSICS-BASED SIMULATION. Tamer M. Wasfy, Paramsothy Jayakumar, Dave... Briefing outline: NRMM; Objectives; Soft Soils; Review of Physics-Based Soil Models; MBD/DEM Modeling Formulation (Joint & Contact Constraints; DEM Cohesive Soil Model); Cone Penetrometer Experiment; Vehicle-Soil Model; Vehicle Mobility DOE Procedure; Simulation Results; Concluding Remarks.
2014-12-01
TEMPORALLY ADJUSTED COMPLEX AMBIGUITY... (Approved for public release; distribution is unlimited.) Introduction, 1.1 Background: In today's world of high-tech warfare, we have developed the ability to deploy virtually any type of ordnance quickly and... ...this time due to time constraints and the high computational complexity involved in the current implementation of the Moss algorithm. Full maps, with
Wifall, Tim; Hazeltine, Eliot; Toby Mordkoff, J
2016-07-01
Hick/Hyman Law describes one of the core phenomena in the study of human information processing: mean response time is a linear function of average uncertainty. In the original work of Hick (1952) and Hyman (1953), along with many follow-up studies, uncertainty regarding the stimulus and uncertainty regarding the response were confounded, such that the relative importance of these two factors remains mostly unknown. The present work first replicates Hick/Hyman Law with a new set of stimuli and then goes on to separately estimate the roles of stimulus and response uncertainty. The results demonstrate that, for a popular type of task (visual stimuli mapped to vocal responses), response uncertainty accounts for the majority of the effect. The results justify a revised expression of Hick/Hyman Law and place strong constraints on theoretical accounts of the law, as well as on models of response selection in general.
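The law itself is compactly expressed as RT = a + b·H, where H is the average uncertainty in bits (H = log2 N for N equally likely alternatives, or the Shannon entropy for unequal probabilities). The sketch below uses illustrative coefficients (a = 0.2 s intercept, b = 0.15 s/bit slope), not values fitted from the study:

```python
import math

def entropy(probs):
    """Average uncertainty in bits for a set of alternative probabilities."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mean_rt(probs, a=0.2, b=0.15):
    """Hick/Hyman prediction: mean RT in seconds is linear in uncertainty."""
    return a + b * entropy(probs)

# Each doubling of equally likely alternatives adds one bit, hence b seconds:
for n in (1, 2, 4, 8):
    print(n, round(mean_rt([1 / n] * n), 3))
```

The entropy form also covers Hyman's manipulations of unequal stimulus probabilities, where uncertainty (and hence predicted RT) drops below log2 N.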
Incentive compatibility in kidney exchange problems.
Villa, Silvia; Patrone, Fioravante
2009-12-01
The problem of kidney exchanges shares common features with the classical problem of exchange of indivisible goods studied in the mechanism design literature, while presenting additional constraints on the size of feasible exchanges. The solution of a kidney exchange problem can be summarized in a mapping from the relevant underlying characteristics of the players (patients and their donors) to the set of matchings. The goal is to select only matchings maximizing a chosen welfare function. Since the final outcome heavily depends on the private information possessed by the players, a basic requirement for reaching efficiency is the truthful revelation of this information. We show that for the kidney exchange problem, a class of (in principle) efficient mechanisms does not enjoy the incentive compatibility property and is therefore subject to possible manipulations by players seeking to profit from the misrepresentation of their private information.
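The matching stage of such a mechanism, restricted to pairwise exchanges, amounts to choosing a maximum vertex-disjoint set of mutually compatible patient-donor pairs. The brute-force toy below uses hypothetical pair labels and a reported compatibility relation; it illustrates only the size-two feasibility constraint, not the incentive (truthful-revelation) issue analyzed in the paper.

```python
from itertools import combinations

def max_pairwise_exchanges(pairs, compatible):
    """compatible[(i, j)] is True if pair i's donor suits pair j's patient."""
    def mutually_ok(i, j):
        return compatible.get((i, j), False) and compatible.get((j, i), False)

    edges = [e for e in combinations(pairs, 2) if mutually_ok(*e)]
    # Brute force over edge subsets, largest first (fine for toy instances).
    for k in range(len(edges), 0, -1):
        for subset in combinations(edges, k):
            used = [p for e in subset for p in e]
            if len(used) == len(set(used)):      # exchanges must be disjoint
                return list(subset)
    return []

pairs = ["A", "B", "C", "D"]
compat = {("A", "B"): True, ("B", "A"): True,   # A and B can swap donors
          ("C", "D"): True, ("D", "C"): True,   # C and D can swap donors
          ("A", "C"): True}                     # one-way only: no exchange
matching = max_pairwise_exchanges(pairs, compat)
print(matching)
```

Because `compat` here is reported by the players, a mechanism that always picks a welfare-maximizing matching over the reported graph is exactly the kind of rule whose manipulability the abstract addresses.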
Heffel, James W [Lake Matthews, CA; Scott, Paul B [Northridge, CA; Park, Chan Seung [Yorba Linda, CA
2011-11-01
An apparatus and method for utilizing any arbitrary mixture ratio of multiple fuel gases having differing combustion characteristics, such as natural gas and hydrogen gas, within an internal combustion engine. The gaseous fuel composition ratio is first sensed, such as by thermal conductivity, infrared signature, sound propagation speed, or equivalent mixture differentiation mechanisms and combinations thereof which are utilized as input(s) to a "multiple map" engine control module which modulates selected operating parameters of the engine, such as fuel injection and ignition timing, in response to the proportions of fuel gases available so that the engine operates correctly and at high efficiency irrespective of the gas mixture ratio being utilized. As a result, an engine configured according to the teachings of the present invention may be fueled from at least two different fuel sources without admixing constraints.
Heffel, James W.; Scott, Paul B.
2003-09-02
An apparatus and method for utilizing any arbitrary mixture ratio of multiple fuel gases having differing combustion characteristics, such as natural gas and hydrogen gas, within an internal combustion engine. The gaseous fuel composition ratio is first sensed, such as by thermal conductivity, infrared signature, sound propagation speed, or equivalent mixture differentiation mechanisms and combinations thereof which are utilized as input(s) to a "multiple map" engine control module which modulates selected operating parameters of the engine, such as fuel injection and ignition timing, in response to the proportions of fuel gases available so that the engine operates correctly and at high efficiency irrespective of the gas mixture ratio being utilized. As a result, an engine configured according to the teachings of the present invention may be fueled from at least two different fuel sources without admixing constraints.
Navigation, behaviors, and control modes in an autonomous vehicle
NASA Astrophysics Data System (ADS)
Byler, Eric A.
1995-01-01
An Intelligent Mobile Sensing System (IMSS) has been developed for the automated inspection of radioactive and hazardous waste storage containers in warehouse facilities at Department of Energy sites. A 2D space of control modes was used that provides a combined view of reactive and planning approaches, wherein the situation space is defined by dimensions representing the predictability of the agent's task environment and the constraint imposed by its goals. In this sense, selection of appropriate systems for planning, navigation, and control depends on the problem at hand. The IMSS vehicle navigation system is based on a combination of feature-based motion, landmark sightings, and an a priori logical map of the mockup storage facility. Motions for the inspection activities are composed of different interactions of several available control modes, several obstacle-avoidance modes, and several feature-identification modes. Features used to drive these behaviors are both visual and acoustic.
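The 2D situation space described above can be sketched as a simple mode selector over the two axes. The thresholds and mode names below are invented for illustration, not the IMSS calibration.

```python
def select_mode(predictability, goal_constraint):
    """Pick a control regime from a 2D situation space (both axes in 0..1).
    Quadrant boundaries and labels are illustrative, not the IMSS design."""
    if predictability > 0.5 and goal_constraint > 0.5:
        return "plan"          # predictable world, tight goals: deliberate planning
    if predictability > 0.5:
        return "scripted"      # predictable world, loose goals: canned behaviors
    if goal_constraint > 0.5:
        return "guarded-plan"  # unpredictable world, tight goals: plan and monitor
    return "reactive"          # unpredictable world, loose goals: pure reaction

print(select_mode(0.9, 0.9))   # plan
print(select_mode(0.1, 0.1))   # reactive
```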
The Structure Of The Gaia Deployable Sunshield Assembly
NASA Astrophysics Data System (ADS)
Pereira, Carlos; Urgoiti, Eduardo; Pinto, Inaki
2012-07-01
GAIA is an ESA mission with a launch date in 2013. Its main objective is to map the stars. After launch it will unfold a 10.2 m diameter sunshield. The structure of this shield consists of twelve 3.5-meter-long composite trusses which act as scaffold for two multilayer insulation blankets. Due to thermal stability constraints, the planarity of the shield must be better than 1.0 mm. The trusses are therefore lightweight structures capable of withstanding the launch loads and, once deployed, the thermal environment of the spacecraft with a minimum of distortion. This paper details: • the material selection for the composite structure; • validation of the chosen materials and truss layout; • the modification of the manufacturing process in order to lightweight the structure; • the extensive structural and thermal stability testing. The sunshield has been delivered to the satellite prime after successful mechanical, thermal and deployment tests.
Tidal dissipation, surface heat flow, and figure of viscoelastic models of Io
NASA Technical Reports Server (NTRS)
Segatz, M.; Spohn, T.; Ross, M. N.; Schubert, G.
1988-01-01
The deformation of Io, the tidal dissipation rate, and its interior spatial distribution are investigated by means of numerical simulations based on (1) a three-layer model (with dissipation in the mantle) or (2) a four-layer model (with dissipation in the asthenosphere). The mathematical derivation of the models is outlined; the selection of the input-parameter values is explained; the results are presented in extensive graphs and contour maps; and the constraints imposed on the models by observational data on the hot-spot distribution, tidal deformation, and gravity field are discussed in detail. It is found that both dissipation mechanisms may play a role on Io: model (2) is better able to explain the concentration of hot spots near the equator, while the presence of a large hot spot near the south pole (if confirmed by observations) would favor model (1).
Egocentric Mapping of Body Surface Constraints.
Molla, Eray; Debarba, Henrique Galvan; Boulic, Ronan
2018-07-01
The relative location of human body parts often materializes the semantics of on-going actions, intentions and even emotions expressed, or performed, by a human being. However, traditional methods of performance animation fail to correctly and automatically map the semantics of performer postures involving self-body contacts onto characters with different sizes and proportions. Our method proposes an egocentric normalization of the body-part relative distances to preserve the consistency of self contacts for a large variety of human-like target characters. Egocentric coordinates are character independent and encode the whole posture space, i.e., it ensures the continuity of the motion with and without self-contacts. We can transfer classes of complex postures involving multiple interacting limb segments by preserving their spatial order without depending on temporal coherence. The mapping process exploits a low-cost constraint relaxation technique relying on analytic inverse kinematics; thus, we can achieve online performance animation. We demonstrate our approach on a variety of characters and compare it with the state of the art in online retargeting with a user study. Overall, our method performs better than the state of the art, especially when the proportions of the animated character deviate from those of the performer.
Pose and motion recovery from feature correspondences and a digital terrain map.
Lerner, Ronen; Rivlin, Ehud; Rotstein, Héctor P
2006-09-01
A novel algorithm for pose and motion estimation using corresponding features and a Digital Terrain Map is proposed. Using a Digital Terrain (or Digital Elevation) Map (DTM/DEM) as a global reference enables the elimination of the ambiguity present in vision-based algorithms for motion recovery. As a consequence, the absolute position and orientation of a camera can be recovered with respect to the external reference frame. In order to do this, the DTM is used to formulate a constraint between corresponding features in two consecutive frames. Explicit reconstruction of the 3D world is not required. When considering a number of feature points, the resulting constraints can be solved using nonlinear optimization in terms of position, orientation, and motion. Such a procedure requires an initial guess of these parameters, which can be obtained from dead-reckoning or any other source. The feasibility of the algorithm is established through extensive experimentation. Performance is compared with a state-of-the-art alternative algorithm, which intermediately reconstructs the 3D structure and then registers it to the DTM. A clear advantage for the novel algorithm is demonstrated in a variety of scenarios.
This map service contains data from aerial radiological surveys of 41 potential uranium mining areas (1,144 square miles) within the Navajo Nation that were conducted during the period from October 1994 through October 1999. The US Environmental Protection Agency (USEPA) Region 9 funded the surveys and the US Department of Energy (USDOE) Remote Sensing Laboratory (RSL) in Las Vegas, Nevada conducted the aerial surveys. The aerial survey data were used to characterize the overall radioactivity and excess Bismuth 214 levels within the surveyed areas. This US EPA Region 9 web service contains the following map layers: Total Terrestrial Gamma Activity Polygons, Total Terrestrial Gamma Activity Contours, Excess Bismuth 214 Contours, Excess Bismuth 214 Polygons, and Flight Areas. Full FGDC metadata records for each layer can be found by clicking the layer name at the web service endpoint and viewing the layer description. Security Classification: Public. Access Constraints: None. Use Constraints: None. Please check sources, scale, accuracy, currentness and other available information. Please confirm that you are using the most recent copy of both data and metadata. Acknowledgement of the EPA would be appreciated.
A new chaotic multi-verse optimization algorithm for solving engineering optimization problems
NASA Astrophysics Data System (ADS)
Sayed, Gehad Ismail; Darwish, Ashraf; Hassanien, Aboul Ella
2018-03-01
Multi-verse optimization (MVO) is one of the recent meta-heuristic optimization algorithms. The main inspiration of this algorithm came from the multi-verse theory in physics. However, MVO, like most optimization algorithms, suffers from a low convergence rate and entrapment in local optima. In this paper, a new chaotic multi-verse optimization algorithm (CMVO) is proposed to overcome these problems. The proposed CMVO is applied to 13 benchmark functions and 7 well-known design problems in the engineering and mechanical field, namely: three-bar truss, speed reducer design, pressure vessel, spring design, welded beam, rolling element bearing, and multiple disc clutch brake. In the current study, a modified feasibility-based mechanism is employed to handle constraints. In this mechanism, four rules were used to handle the specific constraint problem by maintaining a balance between feasible and infeasible solutions. Moreover, 10 well-known chaotic maps are used to improve the performance of MVO. The experimental results showed that CMVO outperforms other meta-heuristic optimization algorithms on most of the optimization problems. The results also reveal that the sine chaotic map is the most appropriate map to significantly boost MVO's performance.
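The chaotic maps substitute deterministic but chaotic sequences for the pseudo-random draws inside MVO's update rules. A sine map iteration, the variant the abstract singles out, looks like this; the seed value and the number of draws are arbitrary choices for illustration.

```python
import math

def sine_map(x):
    """Sine chaotic map: x_{k+1} = sin(pi * x_k); stays in (0, 1] for x in (0, 1)."""
    return math.sin(math.pi * x)

def chaotic_draws(x0=0.7, n=5):
    """Generate n chaotic values, usable in place of uniform random numbers."""
    out, x = [], x0
    for _ in range(n):
        x = sine_map(x)
        out.append(x)
    return out

print(chaotic_draws())   # five values in (0, 1], e.g. 0.809..., 0.565..., ...
```

The appeal over a plain PRNG is the map's ergodic, non-repeating coverage of (0, 1), which in the paper's experiments helps the search escape local optima.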
Polarimetric Study of Jupiter's Atmosphere
NASA Astrophysics Data System (ADS)
Yanamandra-Fisher, P. A.; McLean, W.; Wesley, A.; Miles, P.; Masding, P.
2017-12-01
Jupiter's atmosphere displays polarization, attributed to changes in the clouds/thermal field that can be brought about by endogenic dynamical processes such as the merger of vortices or global, planetary-scale upheavals, and by external factors such as celestial collisions (e.g., the D/Shoemaker-Levy 9 impact with Jupiter in 1994). Although the range of phase angles available from Earth for Jupiter is restricted to a narrow range, limb polarization measurements provide constraints on the polarimetric properties. Jupiter is known to exhibit a strong polar limb polarization and a low equatorial limb polarization due to the presence of haze particles and Rayleigh scattering at the poles. In contrast, at the equator, the concentration of particulates in the high atmosphere might change, changing the polarimetric signature and aurorae at both poles. The polarimetric maps, in conjunction with thermal maps and albedo maps, can provide constraints on modeling efforts to understand the nature of the aerosols/hazes in the Jovian atmosphere. With Jupiter experiencing morphological changes at many latitudes, we have initiated a polarimetric observing campaign of Jupiter in conjunction with The PACA Project; preliminary results will be discussed. Some of our observations are acquired by a team of professional and amateur planetary imagers and astronomers.
The landscape of sex-differential transcriptome and its consequent selection in human adults.
Gershoni, Moran; Pietrokovski, Shmuel
2017-02-07
The prevalence of several human morbid phenotypes is sometimes much higher than intuitively expected. This can directly arise from the presence of two sexes, male and female, in one species. Men and women have almost identical genomes but are distinctly dimorphic, with dissimilar disease susceptibilities. Sexually dimorphic traits mainly result from differential expression of genes present in both sexes. Such genes can be subject to different, and even opposing, selection constraints in the two sexes. This can impact human evolution by differential selection on mutations with dissimilar effects on the two sexes. We comprehensively mapped human sex-differential genetic architecture across 53 tissues. Analyzing available RNA-sequencing data from 544 adults revealed thousands of genes differentially expressed in the reproductive tracts and tissues common to both sexes. Sex-differential genes are related to various biological systems, and suggest new insights into the pathophysiology of diverse human diseases. We also identified a significant association between sex-specific gene transcription and reduced selection efficiency and accumulation of deleterious mutations, which might affect the prevalence of different traits and diseases. Interestingly, many of the sex-specific genes that also undergo reduced selection efficiency are essential for successful reproduction in men or women. This seeming paradox might partially explain the high incidence of human infertility. This work provides a comprehensive overview of the sex-differential transcriptome and its importance to human evolution and human physiology in health and in disease.
Selection of sleeping trees in pileated gibbons (Hylobates pileatus).
Phoonjampa, Rungnapa; Koenig, Andreas; Borries, Carola; Gale, George A; Savini, Tommaso
2010-06-01
Selection and use patterns of sleeping sites in nonhuman primates are suggested to have multiple functions, such as predation avoidance, but they might be further affected by range defense as well as foraging constraints or other factors. Here, we investigate sleeping tree selection by the male and female members of one group of pileated gibbons (Hylobates pileatus) at Khao Ang Rue Nai Wildlife Sanctuary, Thailand. Data were collected on 113 nights, between September 2006 and January 2009, yielding data on 201 sleeping tree choices (107 by the female and 94 by the male) and on the characteristics of 71 individual sleeping trees. Each sleeping tree and all trees ≥40 cm diameter at breast height (DBH) in the home range were assessed (height, DBH, canopy structure, liana load) and mapped using a GPS. The gibbons preferentially selected tall (mean=38.5 m), emergent trees without lianas. The majority of the sleeping trees (53.5%) were used only once and consecutive reuse was rare (9.5%). Sleeping trees were closer to the last feeding tree of the evening than to the first feeding tree in the morning, and sleeping trees were located in the overlap areas with neighbors less often than expected based on time spent in these areas. These results suggest avoidance of predators as the main factor influencing sleeping tree selection in pileated gibbons. However, other non-mutually exclusive factors may be involved as well. (c) 2010 Wiley-Liss, Inc.
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
NASA Astrophysics Data System (ADS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization of the solution via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove convergence of PAPA. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality.
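Stripped of the TV regularizer and the EM-preconditioner, the alternating-projection core is successive projection onto convex sets via their proximity operators. The two toy sets below (nonnegativity and a sum constraint) are stand-ins chosen because their projections have closed forms; this is the bare fixed-point skeleton, not the authors' PAPA.

```python
import numpy as np

def project_nonneg(x):
    """Projection onto the nonnegative orthant (a proximity operator)."""
    return np.maximum(x, 0.0)

def project_hyperplane(x, a, b):
    """Projection onto the hyperplane {x : a.x = b}."""
    return x - a * ((a @ x - b) / (a @ a))

# alternate the two projections until the iterate lies in both sets
a, b = np.array([1.0, 1.0, 1.0]), 1.0
x = np.array([2.0, -1.0, 0.5])
for _ in range(200):
    x = project_nonneg(project_hyperplane(x, a, b))
print(x)   # converges to a point in the intersection, here [1. 0. 0.]
```

For nonempty intersections of closed convex sets, this iteration converges to a point in the intersection; the paper's contribution is making each projection cheap for the dense ECT system via the EM-preconditioner.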
Animal movement constraints improve resource selection inference in the presence of telemetry error
Brost, Brian M.; Hooten, Mevin B.; Hanks, Ephraim M.; Small, Robert J.
2016-01-01
Multiple factors complicate the analysis of animal telemetry location data. Recent advancements address issues such as temporal autocorrelation and telemetry measurement error, but additional challenges remain. Difficulties introduced by complicated error structures or barriers to animal movement can weaken inference. We propose an approach for obtaining resource selection inference from animal location data that accounts for complicated error structures, movement constraints, and temporally autocorrelated observations. We specify a model for telemetry data observed with error conditional on unobserved true locations that reflects prior knowledge about constraints in the animal movement process. The observed telemetry data are modeled using a flexible distribution that accommodates extreme errors and complicated error structures. Although constraints to movement are often viewed as a nuisance, we use constraints to simultaneously estimate and account for telemetry error. We apply the model to simulated data, showing that it outperforms common ad hoc approaches used when confronted with measurement error and movement constraints. We then apply our framework to an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that is constrained to move within the marine environment and adjacent coastlines.
Regional magnetic anomaly constraints on continental breakup
DOE Office of Scientific and Technical Information (OSTI.GOV)
von Frese, R.R.B.; Hinze, W.J.; Olivier, R.
1986-01-01
Continental lithosphere magnetic anomalies mapped by the Magsat satellite are related to tectonic features associated with regional compositional variations of the crust and upper mantle and crustal thickness and thermal perturbations. These continental-scale anomaly patterns when corrected for varying observation elevation and the global change in the direction and intensity of the geomagnetic field show remarkable correlation of regional lithospheric magnetic sources across rifted continental margins when plotted on a reconstruction of Pangea. Accordingly, these anomalies provide new and fundamental constraints on the geologic evolution and dynamics of the continents and oceans.
Interior point techniques for LP and NLP
DOE Office of Scientific and Technical Information (OSTI.GOV)
Evtushenko, Y.
By using surjective mapping the initial constrained optimization problem is transformed to a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain the family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case after some simplification we obtain Dikin's algorithm, affine scaling algorithm and generalized primal dual interior point linear programming algorithm.
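The affine-scaling family the passage ends with can be sketched in a few lines for the LP case. The step rule below is a textbook Dikin-style iteration (rescale by the current iterate, project the cost, damp the step to stay interior), not the author's derivation via surjective mappings; the problem data are a made-up example.

```python
import numpy as np

def affine_scaling(A, b, c, x, alpha=0.5, iters=50):
    """Dikin-style affine-scaling iteration for min c.x s.t. Ax = b, x > 0.
    The starting point x must be strictly feasible."""
    assert np.allclose(A @ x, b)
    for _ in range(iters):
        D2 = np.diag(x ** 2)                           # rescale by the current iterate
        w = np.linalg.solve(A @ D2 @ A.T, A @ D2 @ c)  # dual estimate
        s = c - A.T @ w                                # reduced costs
        dx = -D2 @ s                                   # scaled steepest-descent step
        if np.linalg.norm(dx) < 1e-12:
            break
        step = alpha / max(np.max(-dx / x), 1e-12)     # damped: keep x strictly positive
        x = x + step * dx
    return x

# min x0 + 2*x1  subject to  x0 + x1 = 1, x >= 0; optimum at (1, 0)
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
c = np.array([1.0, 2.0])
print(affine_scaling(A, b, c, np.array([0.5, 0.5])))   # approaches [1. 0.]
```

Because A·dx = 0 by construction of w, every iterate stays feasible; the scaling matrix D2 is what keeps the trajectory in the interior, which is the common thread of the interior point methods surveyed here.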
Monitoring in the nearshore: A process for making reasoned decisions
Bodkin, James L.; Dean, T.A.
2003-01-01
Over the past several years, a conceptual framework for the GEM nearshore monitoring program has been developed through a series of workshops. However, details of the proposed monitoring program, e.g. what to sample, where to sample, when to sample and at how many sites, have yet to be determined. In FY 03 we were funded under Project 03687 to outline a process whereby specific alternatives to monitoring are developed and presented to the EVOS Trustee Council for consideration. As part of this process, two key elements are required before reasoned decisions can be made. These are: 1) a comprehensive historical perspective of locations and types of past studies conducted in the nearshore marine communities within Gulf of Alaska, and 2) estimates of costs for each element of a proposed monitoring program. We have developed a GIS database that details available information from past studies of selected nearshore habitats and species in the Gulf of Alaska and provide a visual means of selecting sites based (in part) on the locations for which historical data of interest are available. We also provide cost estimates for specific monitoring plan alternatives and outline several alternative plans that can be accomplished within reasonable budgetary constraints. The products that we will provide are: 1) A GIS database and maps showing the location and types of information available from the nearshore in the Gulf of Alaska; 2) A list of several specific monitoring alternatives that can be conducted within reasonable budgetary constraints; and 3) Cost estimates for proposed tasks to be conducted as part of the nearshore program. Because data compilation and management will not be completed until late in FY03 we are requesting support for close-out of this project in FY 04.
Assessing Multivariate Constraints to Evolution across Ten Long-Term Avian Studies
Teplitsky, Celine; Tarka, Maja; Møller, Anders P.; Nakagawa, Shinichi; Balbontín, Javier; Burke, Terry A.; Doutrelant, Claire; Gregoire, Arnaud; Hansson, Bengt; Hasselquist, Dennis; Gustafsson, Lars; de Lope, Florentino; Marzal, Alfonso; Mills, James A.; Wheelwright, Nathaniel T.; Yarrall, John W.; Charmantier, Anne
2014-01-01
Background: In a rapidly changing world, it is of fundamental importance to understand processes constraining or facilitating adaptation through microevolution. As different traits of an organism covary, genetic correlations are expected to affect evolutionary trajectories. However, only limited empirical data are available. Methodology/Principal Findings: We investigate the extent to which multivariate constraints affect the rate of adaptation, focusing on four morphological traits often shown to harbour large amounts of genetic variance and considered to be subject to limited evolutionary constraints. Our data set includes unique long-term data for seven bird species and a total of 10 populations. We estimate population-specific matrices of genetic correlations and multivariate selection coefficients to predict evolutionary responses to selection. Using Bayesian methods that facilitate the propagation of errors in estimates, we compare (1) the rate of adaptation based on predicted response to selection when including genetic correlations with predictions from models where these genetic correlations were set to zero and (2) the multivariate evolvability in the direction of current selection to the average evolvability in random directions of the phenotypic space. We show that genetic correlations on average decrease the predicted rate of adaptation by 28%. Multivariate evolvability in the direction of current selection was systematically lower than average evolvability in random directions of space. These significant reductions in the rate of adaptation and reduced evolvability were due to a general nonalignment of selection and genetic variance, notably orthogonality of directional selection with the size axis along which most (60%) of the genetic variance is found. Conclusions: These results suggest that genetic correlations can impose significant constraints on the evolution of avian morphology in wild populations. 
This could have important impacts on evolutionary dynamics and hence population persistence in the face of rapid environmental change. PMID:24608111
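The comparison described above, response to selection with and without genetic correlations, is the multivariate breeder's equation Δz̄ = Gβ evaluated twice. The G matrix and selection gradient below are made-up numbers chosen to mimic the nonalignment the study reports; they are not estimates from the avian data.

```python
import numpy as np

# Hypothetical genetic variance-covariance matrix G and selection gradient beta
G = np.array([[1.0, 0.8],
              [0.8, 1.0]])        # strong positive genetic correlation
beta = np.array([1.0, -1.0])      # antagonistic directional selection

resp_full = G @ beta                       # multivariate breeder's equation
resp_nocorr = np.diag(np.diag(G)) @ beta   # same G, correlations set to zero

print(resp_full)     # [ 0.2 -0.2]  response throttled by the correlation
print(resp_nocorr)   # [ 1. -1. ]   response without the constraint

# evolvability in the direction of selection vs. the average eigenvalue of G
u = beta / np.linalg.norm(beta)
print(u @ G @ u)     # 0.2, far below the mean eigenvalue of G (1.0)
```

When selection pushes two positively correlated traits in opposite directions, most of the genetic variance lies orthogonal to the selection gradient, which is exactly the mechanism behind the reported 28% reduction.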
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loebman, Sarah R.; Ivezic, Zeljko; Quinn, Thomas R.
2012-10-10
We search for evidence of dark matter in the Milky Way by utilizing the stellar number density distribution and kinematics measured by the Sloan Digital Sky Survey (SDSS) to heliocentric distances exceeding ~10 kpc. We employ the cylindrically symmetric form of Jeans equations and focus on the morphology of the resulting acceleration maps, rather than the normalization of the total mass as done in previous, mostly local, studies. Jeans equations are first applied to a mock catalog based on a cosmologically derived N-body+SPH simulation, and the known acceleration (gradient of gravitational potential) is successfully recovered. The same simulation is also used to quantify the impact of dark matter on the total acceleration. We use Galfast, a code designed to quantitatively reproduce SDSS measurements and selection effects, to generate a synthetic stellar catalog. We apply Jeans equations to this catalog and produce two-dimensional maps of stellar acceleration. These maps reveal that in a Newtonian framework, the implied gravitational potential cannot be explained by visible matter alone. The acceleration experienced by stars at galactocentric distances of ~20 kpc is three times larger than what can be explained by purely visible matter. The application of an analytic method for estimating the dark matter halo axis ratio to SDSS data implies an oblate halo with q_DM = 0.47 ± 0.14 within the same distance range. These techniques can be used to map the dark matter halo to much larger distances from the Galactic center using upcoming deep optical surveys, such as LSST.
NASA Astrophysics Data System (ADS)
Krogh, J.; Dalton, C. A.; Ma, Z.
2017-12-01
Rayleigh wave dispersion extracted from ambient seismic noise has been widely used to image crustal and uppermost mantle structure in continents, but there have been relatively few studies within ocean basins. Here, we extract Rayleigh wave dispersion from ambient noise across the Arctic basin and surrounding continents. Continuous time series were collected from 427 broadband stations for the time period 1990-2016. Following the method described by Ma and Dalton (2017), we cross-correlated the noise records for 57,782 pairs of stations and measured phase arrival times for the frequency range 5-30 mHz. After data selection, which utilized criteria for path length, signal-to-noise ratio, and waveform quality, between 670 and 20,284 paths remained. Phase-velocity maps for the study region were determined from only the ambient noise Rayleigh waves and from a combined data set of ambient noise and earthquakes. Resolution tests and hit count maps illustrate the enhanced path coverage and resolution that is afforded by combining the two data sets. The maps show a clear association with tectonic features, including: fast velocities associated with the Siberian, Baltic, and North American cratons; very slow velocities associated with Iceland and the Alaska-Aleutian subduction zone; and an abrupt transition between the low-velocity North American Cordillera and fast-velocity craton that corresponds nearly perfectly with surface topography. The ultra-slow spreading Gakkel Ridge has only a weak seismic signature, although the dependence of seismic velocity on seafloor age is apparent in the maps. These results will be used to investigate the variations in temperature, composition, and melt and volatile content in the Arctic lithosphere and asthenosphere.
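The core of the ambient-noise method is that cross-correlating long noise records at two stations recovers the inter-station travel time. The synthetic records below are invented (one noise wavefield recorded at station A and, delayed, at station B), not the Arctic data, but they show the lag emerging from pure noise.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, n, lag = 100.0, 4000, 25        # sample rate (Hz), record length, true delay (samples)

# one noise wavefield, recorded first at station A, then 0.25 s later at B
wave = rng.standard_normal(n + lag)
sta_a = wave[lag:lag + n]                        # A[t] = wave[t + lag]
sta_b = wave[:n] + 0.1 * rng.standard_normal(n)  # B[t] = A[t - lag], plus local noise

xcorr = np.correlate(sta_b, sta_a, mode="full")  # lags -(n-1) .. (n-1)
lags = np.arange(-(n - 1), n)
print(lags[np.argmax(xcorr)] / fs)               # 0.25 s: the travel time pops out
```

Repeating this over many station pairs and measuring the frequency dependence of the recovered arrivals yields the phase-velocity measurements that the maps in this study are built from.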
Darwinian demons, evolutionary complexity, and information maximization.
Krakauer, David C
2011-09-01
Natural selection is shown to be an extended instance of a Maxwell's demon device. A demonic selection principle is introduced that states that organisms cannot exceed the complexity of their selective environment. Thermodynamic constraints on error repair impose a fundamental limit to the rate that information can be transferred from the environment (via the selective demon) to the genome. Evolved mechanisms of learning and inference can overcome this limitation, but remain subject to the same fundamental constraint, such that plastic behaviors cannot exceed the complexity of reward signals. A natural measure of evolutionary complexity is provided by mutual information, and niche construction activity--the organismal contribution to the construction of selection pressures--might in principle lead to its increase, bounded by thermodynamic free energy required for error correction.
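The complexity measure proposed here, mutual information between environment and organism, can be computed directly from samples. The environment/phenotype toy pairs below are invented for illustration; they just show the measure distinguishing an informative mapping from an uninformative one.

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Estimate I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# a perfectly matched environment/phenotype pairing carries 1 bit
print(mutual_information([("wet", "gills"), ("dry", "lungs")] * 50))   # 1.0
# an organism that ignores its environment extracts 0 bits
print(mutual_information([("wet", "gills"), ("dry", "gills")] * 50))   # 0.0
```

In the abstract's terms, the demonic selection principle bounds the first quantity: the organism cannot carry more bits about the environment than the selective channel transfers.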
Genetic approaches in comparative and evolutionary physiology
Bridgham, Jamie T.; Kelly, Scott A.; Garland, Theodore
2015-01-01
Whole animal physiological performance is highly polygenic and highly plastic, and the same is generally true for the many subordinate traits that underlie performance capacities. Quantitative genetics, therefore, provides an appropriate framework for the analysis of physiological phenotypes and can be used to infer the microevolutionary processes that have shaped patterns of trait variation within and among species. In cases where specific genes are known to contribute to variation in physiological traits, analyses of intraspecific polymorphism and interspecific divergence can reveal molecular mechanisms of functional evolution and can provide insights into the possible adaptive significance of observed sequence changes. In this review, we explain how the tools and theory of quantitative genetics, population genetics, and molecular evolution can inform our understanding of mechanism and process in physiological evolution. For example, lab-based studies of polygenic inheritance can be integrated with field-based studies of trait variation and survivorship to measure selection in the wild, thereby providing direct insights into the adaptive significance of physiological variation. Analyses of quantitative genetic variation in selection experiments can be used to probe interrelationships among traits and the genetic basis of physiological trade-offs and constraints. We review approaches for characterizing the genetic architecture of physiological traits, including linkage mapping and association mapping, and systems approaches for dissecting intermediary steps in the chain of causation between genotype and phenotype. We also discuss the promise and limitations of population genomic approaches for inferring adaptation at specific loci. We end by highlighting the role of organismal physiology in the functional synthesis of evolutionary biology. PMID:26041111
Genetic approaches in comparative and evolutionary physiology.
Storz, Jay F; Bridgham, Jamie T; Kelly, Scott A; Garland, Theodore
2015-08-01
Whole animal physiological performance is highly polygenic and highly plastic, and the same is generally true for the many subordinate traits that underlie performance capacities. Quantitative genetics, therefore, provides an appropriate framework for the analysis of physiological phenotypes and can be used to infer the microevolutionary processes that have shaped patterns of trait variation within and among species. In cases where specific genes are known to contribute to variation in physiological traits, analyses of intraspecific polymorphism and interspecific divergence can reveal molecular mechanisms of functional evolution and can provide insights into the possible adaptive significance of observed sequence changes. In this review, we explain how the tools and theory of quantitative genetics, population genetics, and molecular evolution can inform our understanding of mechanism and process in physiological evolution. For example, lab-based studies of polygenic inheritance can be integrated with field-based studies of trait variation and survivorship to measure selection in the wild, thereby providing direct insights into the adaptive significance of physiological variation. Analyses of quantitative genetic variation in selection experiments can be used to probe interrelationships among traits and the genetic basis of physiological trade-offs and constraints. We review approaches for characterizing the genetic architecture of physiological traits, including linkage mapping and association mapping, and systems approaches for dissecting intermediary steps in the chain of causation between genotype and phenotype. We also discuss the promise and limitations of population genomic approaches for inferring adaptation at specific loci. We end by highlighting the role of organismal physiology in the functional synthesis of evolutionary biology. Copyright © 2015 the American Physiological Society.
A theory of photometric stereo for a class of diffuse non-Lambertian surfaces
NASA Technical Reports Server (NTRS)
Tagare, Hemant D.; Defigueiredo, Rui J. P.
1991-01-01
A theory of photometric stereo is proposed for a large class of non-Lambertian reflectance maps. The authors review the different reflectance maps proposed in the literature for modeling reflection from real-world surfaces. From this, they obtain a mathematical class of reflectance maps to which these maps belong. They show that three lights can be sufficient for a unique inversion of the photometric stereo equation for the entire class of reflectance maps. They also obtain a constraint on the positions of light sources for obtaining this solution. They investigate the sufficiency of three light sources to estimate the surface normal and the illuminant strength. The issue of completeness of reconstruction is addressed. They show that if k lights are sufficient for a unique inversion, 2k lights are necessary for a complete inversion.
Choi, Kate H.; Tienda, Marta
2016-01-01
Despite theoretical consensus that marriage markets constrain mate selection behavior, few studies directly evaluate how local marriage market conditions influence intermarriage patterns. Using data from the American Community Survey, we examine what aspects of marriage markets influence mate selection; assess whether the associations between marriage market conditions and intermarriage are uniform by gender and across pan-ethnic groups; and investigate the extent to which marriage market conditions account for group differences in intermarriage patterns. Relative group size is the most salient and consistent determinant of intermarriage patterns across pan-ethnic groups and by gender. Marriage market constraints typically explain a larger share of pan-ethnic differences in intermarriage rates than individual traits, suggesting that scarcity of co-ethnic partners is a key reason behind decisions to intermarry. When faced with market constraints, men are more willing or more successful than women in crossing racial and ethnic boundaries in marriage. PMID:28579638
Quantitative Variation in Responses to Root Spatial Constraint within Arabidopsis thaliana
Joseph, Bindu; Lau, Lillian; Kliebenstein, Daniel J.
2015-01-01
Among the myriad of environmental stimuli that plants utilize to regulate growth and development to optimize fitness are signals obtained from various sources in the rhizosphere that give an indication of the nutrient status and volume of media available. These signals include chemical signals from other plants, nutrient signals, and thigmotropic interactions that reveal the presence of obstacles to growth. Little is known about the genetics underlying the response of plants to physical constraints present within the rhizosphere. In this study, we show that there is natural variation among Arabidopsis thaliana accessions in their growth response to physical rhizosphere constraints and competition. We mapped growth quantitative trait loci that regulate a positive response of foliar growth to short physical constraints surrounding the root. This is a highly polygenic trait and, using quantitative validation studies, we showed that natural variation in EARLY FLOWERING3 (ELF3) controls the link between root constraint and altered shoot growth. This provides an entry point to study how root and shoot growth are integrated to respond to environmental stimuli. PMID:26243313
Invalid-point removal based on epipolar constraint in the structured-light method
NASA Astrophysics Data System (ADS)
Qi, Zhaoshuai; Wang, Zhao; Huang, Junhui; Xing, Chao; Gao, Jianmin
2018-06-01
In structured-light measurement, there unavoidably exist many invalid points caused by shadows, image noise and ambient light. According to the property of the epipolar constraint, because the retrieved phase of the invalid point is inaccurate, the corresponding projector image coordinate (PIC) will not satisfy the epipolar constraint. Based on this fact, a new invalid-point removal method based on the epipolar constraint is proposed in this paper. First, the fundamental matrix of the measurement system is calculated, which will be used for calculating the epipolar line. Then, according to the retrieved phase map of the captured fringes, the PICs of each pixel are retrieved. Subsequently, the epipolar line in the projector image plane of each pixel is obtained using the fundamental matrix. The distance between the corresponding PIC and the epipolar line of a pixel is defined as the invalidation criterion, which quantifies the satisfaction degree of the epipolar constraint. Finally, all pixels with a distance larger than a certain threshold are removed as invalid points. Experiments verified that the method is easy to implement and demonstrates better performance than state-of-the-art measurement systems.
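The invalidation criterion described above, the distance from a pixel's retrieved projector image coordinate (PIC) to its epipolar line, can be sketched in a few lines. The fundamental matrix and all coordinates below are hypothetical stand-ins, not calibration data from the paper:

```python
# A pixel is kept only if its retrieved projector image coordinate (PIC)
# lies close to the epipolar line predicted by the fundamental matrix F.

def epipolar_distance(F, cam_pt, proj_pt):
    """Distance from proj_pt to the epipolar line l = F @ (u, v, 1)."""
    x = (cam_pt[0], cam_pt[1], 1.0)
    # Epipolar line coefficients: l[0]*u' + l[1]*v' + l[2] = 0.
    l = [sum(F[i][j] * x[j] for j in range(3)) for i in range(3)]
    num = abs(l[0] * proj_pt[0] + l[1] * proj_pt[1] + l[2])
    return num / (l[0] ** 2 + l[1] ** 2) ** 0.5

def is_valid(F, cam_pt, proj_pt, threshold=2.0):
    """Invalidation criterion: remove points farther than `threshold` pixels."""
    return epipolar_distance(F, cam_pt, proj_pt) <= threshold

# Hypothetical F for a pure horizontal camera-projector translation with
# identity intrinsics: the epipolar line of (u, v) is simply v' = v.
F = [[0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 1.0, 0.0]]
```

The threshold trades false removals against missed invalid points; its value in practice depends on the calibration accuracy of the measurement system.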
Quadratic constrained mixed discrete optimization with an adiabatic quantum optimizer
NASA Astrophysics Data System (ADS)
Chandra, Rishabh; Jacobson, N. Tobias; Moussa, Jonathan E.; Frankel, Steven H.; Kais, Sabre
2014-07-01
We extend the family of problems that may be implemented on an adiabatic quantum optimizer (AQO). When a quadratic optimization problem has at least one set of discrete controls and the constraints are linear, we call this a quadratic constrained mixed discrete optimization (QCMDO) problem. QCMDO problems are NP-hard, and no efficient classical algorithm for their solution is known. Included in the class of QCMDO problems are combinatorial optimization problems constrained by a linear partial differential equation (PDE) or system of linear PDEs. An essential complication commonly encountered in solving this type of problem is that the linear constraint may introduce many intermediate continuous variables into the optimization while the computational cost grows exponentially with problem size. We resolve this difficulty by developing a constructive mapping from QCMDO to quadratic unconstrained binary optimization (QUBO) such that the size of the QUBO problem depends only on the number of discrete control variables. With a suitable embedding, taking into account the physical constraints of the realizable coupling graph, the resulting QUBO problem can be implemented on an existing AQO. The mapping itself is efficient, scaling cubically with the number of continuous variables in the general case and linearly in the PDE case if an efficient preconditioner is available.
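The basic idea of folding linear equality constraints into an unconstrained quadratic objective can be illustrated with a quadratic penalty on a toy binary problem. This brute-force sketch is not the paper's constructive QCMDO-to-QUBO mapping (which keeps the QUBO size independent of the continuous variables); the matrices are invented, and the penalty weight P must exceed the energy scale of the objective:

```python
from itertools import product

def energy(Q, x):
    """Quadratic objective x^T Q x over binary variables."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def violation(A, b, x):
    """Squared residual ||A x - b||^2 of the linear equality constraints."""
    return sum((sum(A[k][i] * x[i] for i in range(len(x))) - b[k]) ** 2
               for k in range(len(b)))

def solve_constrained(Q, A, b):
    """Exhaustive minimum over feasible bitstrings (reference answer)."""
    n = len(Q)
    feasible = [x for x in product((0, 1), repeat=n) if violation(A, b, x) == 0]
    return min(feasible, key=lambda x: energy(Q, x))

def solve_penalty(Q, A, b, P=10.0):
    """Unconstrained (QUBO-style) minimum of objective + P * ||Ax - b||^2."""
    n = len(Q)
    return min(product((0, 1), repeat=n),
               key=lambda x: energy(Q, x) + P * violation(A, b, x))

# Toy instance: pick exactly one of three binary controls.
Q = [[2, -2, 0], [0, 1, 0], [0, 0, 3]]
A = [[1, 1, 1]]
b = [1]
```

For a sufficiently large P the penalized minimum coincides with the constrained one, which is the property an adiabatic optimizer exploits when the constrained problem is recast as a QUBO.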
Probing Primordial Non-Gaussianity with Weak-lensing Minkowski Functionals
NASA Astrophysics Data System (ADS)
Shirasaki, Masato; Yoshida, Naoki; Hamana, Takashi; Nishimichi, Takahiro
2012-11-01
We study the cosmological information contained in the Minkowski functionals (MFs) of weak gravitational lensing convergence maps. We show that the MFs provide strong constraints on the local-type primordial non-Gaussianity parameter f_NL. We run a set of cosmological N-body simulations and perform ray-tracing simulations of weak lensing to generate 100 independent convergence maps of a 25 deg² field of view for f_NL = -100, 0 and 100. We perform a Fisher analysis to study the degeneracy among other cosmological parameters such as the dark energy equation of state parameter w and the fluctuation amplitude σ_8. We use fully nonlinear covariance matrices evaluated from 1000 ray-tracing simulations. For upcoming wide-field observations such as those from the Subaru Hyper Suprime-Cam survey with a proposed survey area of 1500 deg², the primordial non-Gaussianity can be constrained with a level of f_NL ~ 80 and w ~ 0.036 by weak-lensing MFs. If simply scaled by the effective survey area, a 20,000 deg² lensing survey using the Large Synoptic Survey Telescope will yield constraints of f_NL ~ 25 and w ~ 0.013. We show that these constraints can be further improved by a tomographic method using source galaxies in multiple redshift bins.
NASA Astrophysics Data System (ADS)
Karlstrom, L.; Morriss, M. C.; Nasholds, M. W.
2016-12-01
The Miocene Columbia River Flood Basalts (CRFB) are the youngest, best preserved, and most thoroughly studied Large Igneous Province on Earth. The Grande Ronde basalts erupted 150,000 km³ in less than 100 kyr (~72% of the CRFB volume) from a network of feeder dikes, the Chief Joseph dike swarm, exposed in SE Washington, NE Oregon, and W Idaho, USA. William H. Taubeneck (1923-2016) spent several decades mapping CRFB dikes. His extensive, meticulous field work defined the spatial extent and dominant trends in the Chief Joseph dike swarm, providing a key constraint for theories of CRFB emplacement and their deep origin. However, these measurements were never published nor made public. We are revitalizing Taubeneck's maps, notebooks, and numerous unpublished geochemical analyses, synthesizing his work with other published and mapped dikes and field checking select measurements to ensure accurate interpretation. This dataset should lead to an increased understanding of the CRFB shallow plumbing system and of flood basalt eruptive dynamics in general. Preliminary analysis of 4,410 mapped CRFB feeder dike segments from Taubeneck and other workers reveals systematic trends in both dike orientation and lithology of host rock. Average dike orientation strikes north-northwest across 400 km. Orientations are generally parallel to the cratonic boundary, but appear generally unaffected by a major transition in craton position and also exhibit minor trends with near-orthogonal orientations. Dike spatial density peaks in Paleozoic to Cenozoic accreted terranes. Exposed dikes are concentrated among Jurassic and Cretaceous plutons, which host 53% of mapped dikes and accommodate the largest variability in dike orientation.
Preliminary investigations suggest variations of feeder dike thickness with depth in the plumbing system as preserved through exposure in the uplifted Wallowa Mountains, although this is complicated by evidence for dikes that accommodated multiple injections and uncertain duration of flow. Ongoing work aims to resolve these issues. Summary figure: (a) Dikes mapped by Taubeneck and others versus latitude. (b) Dike orientation. (c) Paleozoic and Mesozoic accreted terranes and the cratonic margin. Dikes are mostly exposed in the Baker and Wallowa Terranes. (d) Dike host rock lithology.
Inference of Evolutionary Forces Acting on Human Biological Pathways
Daub, Josephine T.; Dupanloup, Isabelle; Robinson-Rechavi, Marc; Excoffier, Laurent
2015-01-01
Because natural selection is likely to act on multiple genes underlying a given phenotypic trait, we study here the potential effect of ongoing and past selection on the genetic diversity of human biological pathways. We first show that genes included in gene sets are generally under stronger selective constraints than other genes and that their evolutionary response is correlated. We then introduce a new procedure to detect selection at the pathway level based on a decomposition of the classical McDonald–Kreitman test extended to multiple genes. This new test, called 2DNS, detects outlier gene sets and takes into account past demographic effects and evolutionary constraints specific to gene sets. Selective forces acting on gene sets can be easily identified by a mere visual inspection of the position of the gene sets relative to their two-dimensional null distribution. We thus find several outlier gene sets that show signals of positive, balancing, or purifying selection but also others showing an ancient relaxation of selective constraints. The principle of the 2DNS test can also be applied to other genomic contrasts. For instance, the comparison of patterns of polymorphisms private to African and non-African populations reveals that most pathways show a higher proportion of nonsynonymous mutations in non-Africans than in Africans, potentially due to different demographic histories and selective pressures. PMID:25971280
Bolstad, Geir H.; Cassara, Jason A.; Márquez, Eladio; Hansen, Thomas F.; van der Linde, Kim; Houle, David; Pélabon, Christophe
2015-01-01
Precise exponential scaling with size is a fundamental aspect of phenotypic variation. These allometric power laws are often invariant across taxa and have long been hypothesized to reflect developmental constraints. Here we test this hypothesis by investigating the evolutionary potential of an allometric scaling relationship in drosophilid wing shape that is nearly invariant across 111 species separated by at least 50 million years of evolution. In only 26 generations of artificial selection in a population of Drosophila melanogaster, we were able to drive the allometric slope to the outer range of those found among the 111 sampled species. This response was rapidly lost when selection was suspended. Only a small proportion of this reversal could be explained by breakup of linkage disequilibrium, and direct selection on wing shape is also unlikely to explain the reversal, because the more divergent wing shapes produced by selection on the allometric intercept did not revert. We hypothesize that the reversal was instead caused by internal selection arising from pleiotropic links to unknown traits. Our results also suggest that the observed selection response in the allometric slope was due to a component expressed late in larval development and that variation in earlier development did not respond to selection. Together, these results are consistent with a role for pleiotropic constraints in explaining the remarkable evolutionary stability of allometric scaling. PMID:26371319
Development of Generation System of Simplified Digital Maps
NASA Astrophysics Data System (ADS)
Uchimura, Keiichi; Kawano, Masato; Tokitsu, Hiroki; Hu, Zhencheng
In recent years, digital maps have been used in a variety of scenarios, including car navigation systems and map information services over the Internet. These digital maps are formed by multiple layers of maps of different scales; the map data most suitable for the specific situation are used. Currently, the production of map data of different scales is done by hand due to constraints related to processing time and accuracy. We conducted research concerning technologies for automatic generation of simplified map data from detailed map data. In the present paper, the authors propose the following: (1) a method to transform data related to streets, rivers, etc. containing widths into line data, (2) a method to eliminate the component points of the data, and (3) a method to eliminate data that lie below a certain threshold. In addition, in order to evaluate the proposed method, a user survey was conducted; in this survey we compared maps generated using the proposed method with the commercially available maps. From the viewpoint of the amount of data reduction and processing time, and on the basis of the results of the survey, we confirmed the effectiveness of the automatic generation of simplified maps using the proposed methods.
Darmon, Nicole; Ferguson, Elaine L; Briend, André
2002-12-01
Economic constraints may contribute to the unhealthy food choices observed among low socioeconomic groups in industrialized countries. The objective of the present study was to predict the food choices a rational individual would make to reduce his or her food budget, while retaining a diet as close as possible to the average population diet. Isoenergetic diets were modeled by linear programming. To ensure these diets were consistent with habitual food consumption patterns, departure from the average French diet was minimized and constraints that limited portion size and the amount of energy from food groups were introduced into the models. A cost constraint was introduced and progressively strengthened to assess the effect of cost on the selection of foods by the program. Strengthening the cost constraint reduced the proportion of energy contributed by fruits and vegetables, meat and dairy products and increased the proportion from cereals, sweets and added fats, a pattern similar to that observed among low socioeconomic groups. This decreased the nutritional quality of modeled diets, notably the lowest cost linear programming diets had lower vitamin C and beta-carotene densities than the mean French adult diet (i.e., <25% and 10% of the mean density, respectively). These results indicate that a simple cost constraint can decrease the nutrient densities of diets and influence food selection in ways that reproduce the food intake patterns observed among low socioeconomic groups. They suggest that economic measures will be needed to effectively improve the nutritional quality of diets consumed by these populations.
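The cost-constraint mechanism can be illustrated in miniature: choose daily portions that minimize squared departure from an "average diet" subject to a fixed energy intake and a cost ceiling that is progressively tightened. The foods, energies, prices and portions below are invented for the sketch; the paper itself models the full French diet with linear programming over many more foods:

```python
from itertools import product

# Illustrative data, not from the paper.
foods   = ["vegetables", "meat", "cereals", "added_fat"]
energy  = [0.3, 2.5, 3.5, 9.0]   # MJ per portion (illustrative)
cost    = [1.2, 2.0, 0.3, 0.2]   # euros per portion (illustrative)
average = [4, 2, 3, 1]           # portions in the reference diet

def best_diet(max_cost, total_energy=25.7, tol=1.0):
    """Grid search over portion counts; returns the closest-to-average
    feasible diet, or None if no diet satisfies both constraints."""
    candidates = []
    for portions in product(range(9), repeat=len(foods)):
        e = sum(p * q for p, q in zip(portions, energy))
        c = sum(p * q for p, q in zip(portions, cost))
        if abs(e - total_energy) <= tol and c <= max_cost:
            departure = sum((p - a) ** 2 for p, a in zip(portions, average))
            candidates.append((departure, portions))
    return min(candidates)[1] if candidates else None
```

With a loose budget the model simply returns the average diet; tightening `max_cost` reproduces the qualitative shift the paper reports, with portions drifting away from vegetables and meat toward cheap energy-dense foods such as cereals and added fats.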
Mobile robot exploration and navigation of indoor spaces using sonar and vision
NASA Technical Reports Server (NTRS)
Kortenkamp, David; Huber, Marcus; Koss, Frank; Belding, William; Lee, Jaeho; Wu, Annie; Bidlack, Clint; Rodgers, Seth
1994-01-01
Integration of skills into an autonomous robot that performs a complex task is described. Time constraints prevented complete integration of all the described skills. The biggest problem was tuning the sensor-based region-finding algorithm to the environment involved. Since localization depended on matching regions found with the a priori map, the robot became lost very quickly. If the low level sensing of the world is not working, then high level reasoning or map making will be unsuccessful.
Statistical density modification using local pattern matching
Terwilliger, Thomas C.
2007-01-23
A computer implemented method modifies an experimental electron density map. A set of selected known experimental and model electron density maps is provided, and standard templates of electron density are created from the selected experimental and model electron density maps by clustering and averaging values of electron density in a spherical region about each point in a grid that defines each selected known experimental and model electron density map. Histograms are also created from the selected experimental and model electron density maps that relate the value of electron density at the center of each of the spherical regions to a correlation coefficient of the density surrounding each corresponding grid point in each one of the standard templates. The standard templates and the histograms are applied to grid points on the experimental electron density map to form new estimates of electron density at each grid point in the experimental electron density map.
How biochemical constraints of cellular growth shape evolutionary adaptations in metabolism.
Berkhout, Jan; Bosdriesz, Evert; Nikerel, Emrah; Molenaar, Douwe; de Ridder, Dick; Teusink, Bas; Bruggeman, Frank J
2013-06-01
Evolutionary adaptations in metabolic networks are fundamental to evolution of microbial growth. Studies on unneeded-protein synthesis indicate reductions in fitness upon nonfunctional protein synthesis, showing that cell growth is limited by constraints acting on cellular protein content. Here, we present a theory for optimal metabolic enzyme activity when cells are selected for maximal growth rate given such growth-limiting biochemical constraints. We show how optimal enzyme levels can be understood to result from an enzyme benefit minus cost optimization. The constraints we consider originate from different biochemical aspects of microbial growth, such as competition for limiting amounts of ribosomes or RNA polymerases, or limitations in available energy. Enzyme benefit is related to its kinetics and its importance for fitness, while enzyme cost expresses to what extent resource consumption reduces fitness through constraint-induced reductions of other enzyme levels. A metabolic fitness landscape is introduced to define the fitness potential of an enzyme. This concept is related to the selection coefficient of the enzyme and can be expressed in terms of its fitness benefit and cost.
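The benefit-minus-cost picture has a simple closed form in a toy model: let the benefit of an enzyme saturate with its expression level (Michaelis-Menten-like, with parameters b and K) while the cost of claiming shared resources grows linearly with slope c. All parameters below are illustrative, not taken from the paper's theory:

```python
import math

def fitness(e, b, K, c):
    """Toy fitness: saturating benefit b*e/(K+e) minus linear cost c*e."""
    return b * e / (K + e) - c * e

def optimal_enzyme_level(b, K, c):
    """Analytic optimum: set d/de [b*e/(K+e)] = b*K/(K+e)^2 equal to c,
    giving e* = sqrt(b*K/c) - K (clipped at zero when cost dominates)."""
    e = math.sqrt(b * K / c) - K
    return max(0.0, e)
```

At the optimum the marginal benefit of expressing one more unit of enzyme exactly equals its marginal cost, which is the intuition behind the selection coefficient and fitness-potential concepts discussed in the abstract.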
IntegratedMap: a Web interface for integrating genetic map data.
Yang, Hongyu; Wang, Hongyu; Gingle, Alan R
2005-05-01
IntegratedMap is a Web application and database schema for storing and interactively displaying genetic map data. Its Web interface includes a menu for direct chromosome/linkage group selection, a search form for selection based on mapped object location and linkage group displays. An overview display provides convenient access to the full range of mapped and anchored object types with genetic locus details, such as numbers, types and names of mapped/anchored objects displayed in a compact scrollable list box that automatically updates based on selected map location and object type. Also, multilinkage group and localized map views are available along with links that can be configured for integration with other Web resources. IntegratedMap is implemented in C#/ASP.NET and the package, including a MySQL schema creation script, is available from http://cggc.agtec.uga.edu/Data/download.asp
Salehi, Mojtaba; Bahreininejad, Ardeshir
2011-08-01
Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously.
Salehi, Mojtaba
2010-01-01
Optimization of process planning is considered as the key technology for computer-aided process planning which is a rather complex and difficult procedure. A good process plan of a part is built up based on two elements: (1) the optimized sequence of the operations of the part; and (2) the optimized selection of the machine, cutting tool and Tool Access Direction (TAD) for each operation. In the present work, the process planning is divided into preliminary planning, and secondary/detailed planning. In the preliminary stage, based on the analysis of order and clustering constraints as a compulsive constraint aggregation in operation sequencing and using an intelligent searching strategy, the feasible sequences are generated. Then, in the detailed planning stage, using the genetic algorithm which prunes the initial feasible sequences, the optimized operation sequence and the optimized selection of the machine, cutting tool and TAD for each operation based on optimization constraints as an additive constraint aggregation are obtained. The main contribution of this work is the optimization of sequence of the operations of the part, and optimization of machine selection, cutting tool and TAD for each operation using the intelligent search and genetic algorithm simultaneously. PMID:21845020
Abriata, Luciano A; Bovigny, Christophe; Dal Peraro, Matteo
2016-06-17
Protein variability can now be studied by measuring high-resolution tolerance-to-substitution maps and fitness landscapes in saturated mutational libraries. But these rich and expensive datasets are typically interpreted coarsely, restricting detailed analyses to positions of extremely high or low variability or dubbed important beforehand based on existing knowledge about active sites, interaction surfaces, (de)stabilizing mutations, etc. Our new webserver PsychoProt (freely available without registration at http://psychoprot.epfl.ch or at http://lucianoabriata.altervista.org/psychoprot/index.html ) helps to detect, quantify, and sequence/structure map the biophysical and biochemical traits that shape amino acid preferences throughout a protein as determined by deep-sequencing of saturated mutational libraries or from large alignments of naturally occurring variants. We exemplify how PsychoProt helps to (i) unveil protein structure-function relationships from experiments and from alignments that are consistent with structures according to coevolution analysis, (ii) recall global information about structural and functional features and identify hitherto unknown constraints to variation in alignments, and (iii) point at different sources of variation among related experimental datasets or between experimental and alignment-based data. Remarkably, metabolic costs of the amino acids pose strong constraints to variability at protein surfaces in nature but not in the laboratory. This and other differences call for caution when extrapolating results from in vitro experiments to natural scenarios in, for example, studies of protein evolution. We show through examples how PsychoProt can be a useful tool for the broad communities of structural biology and molecular evolution, particularly for studies about protein modeling, evolution and design.
van Dongen, J M; Ketheswaran, J; Tordrup, D; Ostelo, R W J G; Bertollini, R; van Tulder, M W
2016-12-01
Despite the increased interest in economic evaluations, there are difficulties in applying the results of such studies in practice. Therefore, the "Research Agenda for Health Economic Evaluation" (RAHEE) project was initiated, which aimed to improve the use of health economic evidence in practice for the 10 highest burden conditions in the European Union (including low back pain [LBP] and neck pain [NP]). This was done by undertaking literature mapping and convening an Expert Panel meeting, during which the literature mapping results were discussed and evidence gaps and methodological constraints were identified. The current paper is a part of the RAHEE project and aimed to identify economic evidence gaps and methodological constraints in the LBP and NP literature, in particular. The literature mapping revealed that economic evidence was unavailable for various commonly used LBP and NP treatments (e.g., injections, traction, and discography). Even if economic evidence was available, many treatments were only evaluated in a single study or studies for the same intervention were highly heterogeneous in terms of their patient population, control condition, follow-up duration, setting, and/or economic perspective. Up until now, this has prevented economic evaluation results from being statistically pooled in the LBP and NP literature, and strong conclusions about the cost-effectiveness of LBP and NP treatments can therefore not be made. The Expert Panel identified the need for further high-quality economic evaluations, especially on surgery versus conservative care and competing treatment options for chronic LBP. Handling of uncertainty and reporting quality were considered the most important methodological challenges. Copyright © 2017. Published by Elsevier Ltd.
Ade, P A R; Ahmed, Z; Aikin, R W; Alexander, K D; Barkats, D; Benton, S J; Bischoff, C A; Bock, J J; Bowens-Rubin, R; Brevik, J A; Buder, I; Bullock, E; Buza, V; Connors, J; Crill, B P; Duband, L; Dvorkin, C; Filippini, J P; Fliescher, S; Grayson, J; Halpern, M; Harrison, S; Hilton, G C; Hui, H; Irwin, K D; Karkare, K S; Karpel, E; Kaufman, J P; Keating, B G; Kefeli, S; Kernasovskiy, S A; Kovac, J M; Kuo, C L; Leitch, E M; Lueker, M; Megerian, K G; Netterfield, C B; Nguyen, H T; O'Brient, R; Ogburn, R W; Orlando, A; Pryke, C; Richter, S; Schwarz, R; Sheehy, C D; Staniszewski, Z K; Steinbach, B; Sudiwala, R V; Teply, G P; Thompson, K L; Tolan, J E; Tucker, C; Turner, A D; Vieregg, A G; Weber, A C; Wiebe, D V; Willmert, J; Wong, C L; Wu, W L K; Yoon, K W
2016-01-22
We present results from an analysis of all data taken by the BICEP2 and Keck Array cosmic microwave background (CMB) polarization experiments up to and including the 2014 observing season. This includes the first Keck Array observations at 95 GHz. The maps reach a depth of 50 nK deg in Stokes Q and U in the 150 GHz band and 127 nK deg in the 95 GHz band. We take auto- and cross-spectra between these maps and publicly available maps from WMAP and Planck at frequencies from 23 to 353 GHz. An excess over lensed ΛCDM is detected at modest significance in the 95×150 BB spectrum, and is consistent with the dust contribution expected from our previous work. No significant evidence for synchrotron emission is found in spectra such as 23×95, or for correlation between the dust and synchrotron sky patterns in spectra such as 23×353. We take the likelihood of all the spectra for a multicomponent model including lensed ΛCDM, dust, synchrotron, and a possible contribution from inflationary gravitational waves (as parametrized by the tensor-to-scalar ratio r) using priors on the frequency spectral behaviors of dust and synchrotron emission from previous analyses of WMAP and Planck data in other regions of the sky. This analysis yields an upper limit r_{0.05}<0.09 at 95% confidence, which is robust to variations explored in analysis and priors. Combining these B-mode results with the (more model-dependent) constraints from Planck analysis of CMB temperature plus baryon acoustic oscillations and other data yields a combined limit r_{0.05}<0.07 at 95% confidence. These are the strongest constraints to date on inflationary gravitational waves.
Optimized detection of shear peaks in weak lensing maps
NASA Astrophysics Data System (ADS)
Marian, Laura; Smith, Robert E.; Hilbert, Stefan; Schneider, Peter
2012-06-01
We present a new method to extract cosmological constraints from weak lensing (WL) peak counts, which we denote as ‘the hierarchical algorithm’. The idea of this method is to combine information from WL maps sequentially smoothed with a series of filters of different size, from the largest down to the smallest, thus increasing the cosmological sensitivity of the resulting peak function. We compare the cosmological constraints resulting from the peak abundance measured in this way and the abundance obtained by using a filter of fixed size, which is the standard practice in WL peak studies. For this purpose, we employ a large set of WL maps generated by ray tracing through N-body simulations, and the Fisher matrix formalism. We find that if low signal-to-noise ratio (?) peaks are included in the analysis (?), the hierarchical method yields constraints significantly better than the single-sized filtering. For a large future survey such as Euclid or the Large Synoptic Survey Telescope, combined with information from a cosmic microwave background experiment like Planck, the results for the hierarchical (single-sized) method are Δn_s = 0.0039 (0.004), ΔΩ_m = 0.002 (0.0045), Δσ_8 = 0.003 (0.006) and Δw = 0.019 (0.0525). This forecast is conservative, as we assume no knowledge of the redshifts of the lenses, and consider a single broad bin for the redshifts of the sources. If only peaks with ? are considered, then there is little difference between the results of the two methods. We also examine the statistical properties of the hierarchical peak function: its covariance matrix has off-diagonal terms for bins with ? and aperture mass of M < 3 × 10^14 h^-1 M_⊙, the higher bins being largely uncorrelated and therefore well described by a Poisson distribution.
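The hierarchical idea, counting peaks at every smoothing scale from the largest filter down to the smallest rather than at one fixed scale, can be sketched on a synthetic map. This toy uses box (mean) filters and strict local maxima; the paper itself applies aperture-mass filters to ray-traced convergence maps:

```python
def mean_filter(grid, radius):
    """Box smoothing; radius 0 returns a copy of the map."""
    n, m = len(grid), len(grid[0])
    out = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            vals = [grid[x][y]
                    for x in range(max(0, i - radius), min(n, i + radius + 1))
                    for y in range(max(0, j - radius), min(m, j + radius + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def count_peaks(grid, threshold):
    """Strict local maxima above `threshold` (map borders excluded)."""
    n, m = len(grid), len(grid[0])
    peaks = 0
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            v = grid[i][j]
            if v > threshold and all(v > grid[i + di][j + dj]
                                     for di in (-1, 0, 1) for dj in (-1, 0, 1)
                                     if (di, dj) != (0, 0)):
                peaks += 1
    return peaks

def hierarchical_peak_function(grid, radii, threshold):
    """Peak counts for smoothing scales ordered from largest to smallest."""
    return [count_peaks(mean_filter(grid, r), threshold)
            for r in sorted(radii, reverse=True)]

# Synthetic map with two isolated bumps standing in for convergence peaks.
grid = [[0.0] * 11 for _ in range(11)]
for ci, cj in ((3, 3), (7, 7)):
    grid[ci][cj] = 5.0
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        grid[ci + di][cj + dj] = 2.0
```

The resulting vector of per-scale counts is the "peak function" whose covariance enters a Fisher forecast; the gain over a single scale comes from peaks that are prominent at one smoothing length but washed out at another.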
Automated Derivation of Complex System Constraints from User Requirements
NASA Technical Reports Server (NTRS)
Muery, Kim; Foshee, Mark; Marsh, Angela
2006-01-01
International Space Station (ISS) payload developers submit their payload science requirements for the development of on-board execution timelines. The ISS systems required to execute the payload science operations must be represented as constraints for the execution timeline. Payload developers use a software application, User Requirements Collection (URC), to submit their requirements by selecting a simplified representation of ISS system constraints. To fully represent the complex ISS systems, the constraints require a level of detail that is beyond the insight of the payload developer. To provide the complex representation of the ISS system constraints, HOSC operations personnel, specifically the Payload Activity Requirements Coordinators (PARC), manually translate the payload developers' simplified constraints into detailed ISS system constraints used for scheduling the payload activities in the Consolidated Planning System (CPS). This paper describes the implementation of a software application, User Requirements Integration (URI), developed to automate the manual ISS constraint translation process.
Evolutionary stasis in Euphorbiaceae pollen: selection and constraints.
Matamoro-Vidal, A; Furness, C A; Gouyon, P-H; Wurdack, K J; Albert, B
2012-06-01
Although much attention has been paid to the role of stabilizing selection, empirical analyses testing the role of developmental constraints in evolutionary stasis remain rare, particularly for plants. This topic is studied here with a focus on the evolution of a pollen ontogenetic feature, the last points of callose deposition (LPCD) pattern, involved in the determination of an adaptive morphological pollen character (aperture pattern). The LPCD pattern exhibits a low level of evolution in eudicots, as compared to the evolution observed in monocots. Stasis in this pattern might be explained by developmental constraints expressed during male meiosis (microsporogenesis) or by selective pressures expressed through the adaptive role of the aperture pattern. Here, we demonstrate that the LPCD pattern is conserved in Euphorbiaceae s.s. and that this conservatism is primarily due to selective pressures. A phylogenetic association was found between the putative removal of selective pressures on pollen morphology after the origin of inaperturate pollen, and the appearance of variation in microsporogenesis and in the resulting LPCD pattern, suggesting that stasis was due to these selective pressures. However, even in a neutral context, variation in microsporogenesis was biased. This should therefore favour the appearance of some developmental and morphological phenotypes rather than others. © 2012 The Authors. Journal of Evolutionary Biology © 2012 European Society For Evolutionary Biology.
On-board computational efficiency in real time UAV embedded terrain reconstruction
NASA Astrophysics Data System (ADS)
Partsinevelos, Panagiotis; Agadakos, Ioannis; Athanasiou, Vasilis; Papaefstathiou, Ioannis; Mertikas, Stylianos; Kyritsis, Sarantis; Tripolitsiotis, Achilles; Zervos, Panagiotis
2014-05-01
In the last few years, there is a surge of applications for object recognition, interpretation and mapping using unmanned aerial vehicles (UAV). Specifications in constructing those UAVs are highly diverse with contradictory characteristics including cost-efficiency, carrying weight, flight time, mapping precision, real time processing capabilities, etc. In this work, a hexacopter UAV is employed for near real time terrain mapping. The main challenge addressed is to retain a low cost flying platform with real time processing capabilities. The UAV weight limitation, which affects the overall flight time, makes the selection of the on-board processing components particularly critical. On the other hand, surface reconstruction, as a computationally demanding task, calls for a highly capable processing unit on board. To merge these two contradicting aspects along with customized development, a System on a Chip (SoC) integrated circuit is proposed as a low-power, low-cost processor, which natively supports camera sensors and positioning and navigation systems. Modern SoCs, such as Omap3530 or Zynq, are classified as heterogeneous devices and provide a versatile platform, allowing access to both general purpose processors, such as the ARM11, as well as specialized processors, such as a digital signal processor and a field-programmable gate array. A UAV equipped with the proposed embedded processors allows on-board terrain reconstruction using stereo vision in near real time. Furthermore, according to the frame rate required, additional image processing may concurrently take place, such as image rectification and object detection. Lastly, the onboard positioning and navigation (e.g., GNSS) chip may further improve the quality of the generated map. The resulting terrain maps are compared to ground truth geodetic measurements in order to assess the accuracy limitations of the overall process.
It is shown that with our proposed novel system, there is much potential for on-board computational efficiency under optimized time constraints.
The Conceptual Framework of Thematic Mapping in Case Conceptualization.
Ridley, Charles R; Jeffrey, Christina E
2017-04-01
This article, the 3rd in a series of 5, introduces the conceptual framework for thematic mapping, a novel approach to case conceptualization. The framework is transtheoretical in that it is not constrained by the tenets or concepts of any one therapeutic orientation and transdiagnostic in that it conceptualizes clients outside the constraints of diagnostic criteria. Thematic mapping comprises 4 components: a definition, foundational principles, defining features, and core concepts. These components of the framework, deemed building blocks, are explained in this article. Like the foundation of any structure, the heuristic value of the method requires that the building blocks have integrity, coherence, and sound anchoring. We assert that the conceptual framework provides a solid foundation, making thematic mapping a potential asset in mental health treatment. © 2017 Wiley Periodicals, Inc.
Composite boson mapping for lattice boson systems.
Huerga, Daniel; Dukelsky, Jorge; Scuseria, Gustavo E
2013-07-26
We present a canonical mapping transforming physical boson operators into quadratic products of cluster composite bosons that preserves matrix elements of operators when a physical constraint is enforced. We map the 2D lattice Bose-Hubbard Hamiltonian into 2×2 composite bosons and solve it within a generalized Hartree-Bogoliubov approximation. The resulting Mott insulator-superfluid phase diagram reproduces well quantum Monte Carlo results. The Higgs boson behavior in the superfluid phase along the unit density line is unraveled and in remarkable agreement with experiments. Results for the properties of the ground and excited states are competitive with other state-of-the-art approaches, but at a fraction of their computational cost. The composite boson mapping here introduced can be readily applied to frustrated many-body systems where most methodologies face significant hurdles.
Complete denture tooth arrangement technology driven by a reconfigurable rule.
Dai, Ning; Yu, Xiaoling; Fan, Qilei; Yuan, Fulai; Liu, Lele; Sun, Yuchun
2018-01-01
The conventional technique for the fabrication of complete dentures is complex, with a long fabrication process and difficult-to-control restoration quality. In recent years, digital complete denture design has become a research focus. Digital complete denture tooth arrangement is a challenging issue that is difficult to efficiently implement under the constraints of complex tooth arrangement rules and the patient's individualized functional aesthetics. The present study proposes a complete denture automatic tooth arrangement method driven by a reconfigurable rule; it uses four typical operators, including a position operator, a scaling operator, a posture operator, and a contact operator, to establish the constraint mapping association between the teeth and the constraint set of the individual patient. By using the process reorganization of different constraint operators, this method can flexibly implement different clinical tooth arrangement rules. When combined with a virtual occlusion algorithm based on progressive iterative Laplacian deformation, the proposed method can achieve automatic and individual tooth arrangement. Finally, the experimental results verify that the proposed method is flexible and efficient.
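The reconfigurable-rule idea, i.e. composing a tooth-arrangement rule from an ordered list of constraint operators, can be sketched as follows. The arch model, field names, and numeric constraints are invented for illustration; only the four operator roles (position, scaling, posture, contact) come from the abstract.

```python
import math
from functools import reduce

def apply_rule(tooth, operators, constraints):
    # A reconfigurable rule is just an ordered list of constraint operators;
    # reordering the list changes the clinical arrangement rule.
    return reduce(lambda t, op: op(t, constraints), operators, tooth)

def position_op(t, c):
    # Drop the tooth onto a (hypothetical) parabolic arch curve y = a*x^2.
    return {**t, "y": c["arch_a"] * t["x"] ** 2}

def scaling_op(t, c):
    # Shrink the tooth if it exceeds its width budget on the arch.
    s = min(1.0, c["width_budget"] / t["width"])
    return {**t, "width": t["width"] * s}

def posture_op(t, c):
    # Align the tooth axis with the arch tangent at its position.
    return {**t, "angle": math.atan(2.0 * c["arch_a"] * t["x"])}

def contact_op(t, c):
    # Slide the tooth mesially until it just contacts its neighbour.
    gap = t["x"] - c["neighbour_x"] - (t["width"] + c["neighbour_width"]) / 2.0
    return {**t, "x": t["x"] - gap} if gap > 0 else t
```

A rule such as `[scaling_op, contact_op, position_op, posture_op]` is then one point in the space of operator reorderings the method can explore.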
Training feed-forward neural networks with gain constraints
Hartman
2000-04-01
Inaccurate input-output gains (partial derivatives of outputs with respect to inputs) are common in neural network models when input variables are correlated or when data are incomplete or inaccurate. Accurate gains are essential for optimization, control, and other purposes. We develop and explore a method for training feedforward neural networks subject to inequality or equality-bound constraints on the gains of the learned mapping. Gain constraints are implemented as penalty terms added to the objective function, and training is done using gradient descent. Adaptive and robust procedures are devised for balancing the relative strengths of the various terms in the objective function, which is essential when the constraints are inconsistent with the data. The approach has the virtue that the model domain of validity can be extended via extrapolation training, which can dramatically improve generalization. The algorithm is demonstrated here on artificial and real-world problems with very good results and has been advantageously applied to dozens of models currently in commercial use.
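The gain-penalty idea above can be sketched minimally: a tiny one-input network, the input-output gain estimated by central differences, and a penalty added wherever the gain leaves an allowed band. The network size, penalty weight, and numerical-gradient training loop are illustrative assumptions; the adaptive term-balancing and extrapolation training described in the abstract are omitted.

```python
import numpy as np

def net(p, x):
    # Tiny 1-input/1-output feedforward network with 4 tanh hidden units;
    # parameters packed as [w1(4), b1(4), w2(4), b2].
    w1, b1, w2, b2 = p[:4], p[4:8], p[8:12], p[12]
    return np.tanh(np.outer(x, w1) + b1) @ w2 + b2

def gains(p, x, eps=1e-4):
    # Input-output gain dy/dx via central differences.
    return (net(p, x + eps) - net(p, x - eps)) / (2 * eps)

def objective(p, x, y, lo, hi, lam=5.0):
    # Data misfit plus a penalty wherever the gain leaves [lo, hi].
    g = gains(p, x)
    pen = np.maximum(0.0, g - hi) ** 2 + np.maximum(0.0, lo - g) ** 2
    return np.mean((net(p, x) - y) ** 2) + lam * np.mean(pen)

def train(x, y, lo, hi, steps=400, lr=0.02, seed=0):
    # Gradient descent with numerical gradients (fine for 13 parameters);
    # keep the best iterate seen, since the step size is fixed.
    rng = np.random.default_rng(seed)
    p = 0.1 * rng.standard_normal(13)
    best_p, best_f = p.copy(), objective(p, x, y, lo, hi)
    for _ in range(steps):
        grad = np.zeros_like(p)
        for i in range(p.size):
            d = np.zeros_like(p)
            d[i] = 1e-5
            grad[i] = (objective(p + d, x, y, lo, hi)
                       - objective(p - d, x, y, lo, hi)) / 2e-5
        p = p - lr * grad
        f = objective(p, x, y, lo, hi)
        if f < best_f:
            best_p, best_f = p.copy(), f
    return best_p
```

Fitting y = x with the gain constrained to [0.5, 1.5] drives both the misfit and the gain violation down from their initial values.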
Screening of Potential Landing Gear Noise Control Devices at Virginia Tech For QTD II Flight Test
NASA Technical Reports Server (NTRS)
Ravetta, Patricio A.; Burdisso, Ricardo A.; Ng, Wing F.; Khorrami, Mehdi R.; Stoker, Robert W.
2007-01-01
In support of the QTD II (Quiet Technology Demonstrator) program, aeroacoustic measurements of a 26%-scale, Boeing 777 main landing gear model were conducted in the Virginia Tech Stability Tunnel. The objective of these measurements was to perform risk mitigation studies on noise control devices for a flight test performed at Glasgow, Montana in 2005. The noise control devices were designed to target the primary main gear noise sources as observed in several previous tests. To accomplish this task, devices to reduce noise were built using stereo lithography for landing gear components such as the brakes, the forward cable harness, the shock strut, the door/strut gap and the lower truck. The most promising device was down selected from test results. In subsequent stages, the initial design of the selected lower truck fairing was improved to account for all the implementation constraints encountered in the full-scale airplane. The redesigned truck fairing was then retested to assess the impact of the modifications on the noise reduction potential. From extensive acoustic measurements obtained using a 63-element microphone phased array, acoustic source maps and integrated spectra were generated in order to estimate the noise reduction achievable with each device.
Evolutionary heritage influences Amazon tree ecology.
Coelho de Souza, Fernanda; Dexter, Kyle G; Phillips, Oliver L; Brienen, Roel J W; Chave, Jerome; Galbraith, David R; Lopez Gonzalez, Gabriela; Monteagudo Mendoza, Abel; Pennington, R Toby; Poorter, Lourens; Alexiades, Miguel; Álvarez-Dávila, Esteban; Andrade, Ana; Aragão, Luis E O C; Araujo-Murakami, Alejandro; Arets, Eric J M M; Aymard C, Gerardo A; Baraloto, Christopher; Barroso, Jorcely G; Bonal, Damien; Boot, Rene G A; Camargo, José L C; Comiskey, James A; Valverde, Fernando Cornejo; de Camargo, Plínio B; Di Fiore, Anthony; Elias, Fernando; Erwin, Terry L; Feldpausch, Ted R; Ferreira, Leandro; Fyllas, Nikolaos M; Gloor, Emanuel; Herault, Bruno; Herrera, Rafael; Higuchi, Niro; Honorio Coronado, Eurídice N; Killeen, Timothy J; Laurance, William F; Laurance, Susan; Lloyd, Jon; Lovejoy, Thomas E; Malhi, Yadvinder; Maracahipes, Leandro; Marimon, Beatriz S; Marimon-Junior, Ben H; Mendoza, Casimiro; Morandi, Paulo; Neill, David A; Vargas, Percy Núñez; Oliveira, Edmar A; Lenza, Eddie; Palacios, Walter A; Peñuela-Mora, Maria C; Pipoly, John J; Pitman, Nigel C A; Prieto, Adriana; Quesada, Carlos A; Ramirez-Angulo, Hirma; Rudas, Agustin; Ruokolainen, Kalle; Salomão, Rafael P; Silveira, Marcos; Stropp, Juliana; Ter Steege, Hans; Thomas-Caesar, Raquel; van der Hout, Peter; van der Heijden, Geertje M F; van der Meer, Peter J; Vasquez, Rodolfo V; Vieira, Simone A; Vilanova, Emilio; Vos, Vincent A; Wang, Ophelia; Young, Kenneth R; Zagt, Roderick J; Baker, Timothy R
2016-12-14
Lineages tend to retain ecological characteristics of their ancestors through time. However, for some traits, selection during evolutionary history may have also played a role in determining trait values. To address the relative importance of these processes requires large-scale quantification of traits and evolutionary relationships among species. The Amazonian tree flora comprises a high diversity of angiosperm lineages and species with widely differing life-history characteristics, providing an excellent system to investigate the combined influences of evolutionary heritage and selection in determining trait variation. We used trait data related to the major axes of life-history variation among tropical trees (e.g. growth and mortality rates) from 577 inventory plots in closed-canopy forest, mapped onto a phylogenetic hypothesis spanning more than 300 genera including all major angiosperm clades to test for evolutionary constraints on traits. We found significant phylogenetic signal (PS) for all traits, consistent with evolutionarily related genera having more similar characteristics than expected by chance. Although there is also evidence for repeated evolution of pioneer and shade tolerant life-history strategies within independent lineages, the existence of significant PS allows clearer predictions of the links between evolutionary diversity, ecosystem function and the response of tropical forests to global change. © 2016 The Authors.
Evolutionary heritage influences Amazon tree ecology
Coelho de Souza, Fernanda; Dexter, Kyle G.; Phillips, Oliver L.; Brienen, Roel J. W.; Chave, Jerome; Galbraith, David R.; Lopez Gonzalez, Gabriela; Monteagudo Mendoza, Abel; Pennington, R. Toby; Poorter, Lourens; Alexiades, Miguel; Álvarez-Dávila, Esteban; Andrade, Ana; Aragão, Luis E. O. C.; Araujo-Murakami, Alejandro; Arets, Eric J. M. M.; Aymard C, Gerardo A.; Baraloto, Christopher; Barroso, Jorcely G.; Bonal, Damien; Boot, Rene G. A.; Camargo, José L. C.; Comiskey, James A.; Valverde, Fernando Cornejo; de Camargo, Plínio B.; Di Fiore, Anthony; Erwin, Terry L.; Feldpausch, Ted R.; Ferreira, Leandro; Fyllas, Nikolaos M.; Gloor, Emanuel; Herault, Bruno; Herrera, Rafael; Higuchi, Niro; Honorio Coronado, Eurídice N.; Killeen, Timothy J.; Laurance, William F.; Laurance, Susan; Lloyd, Jon; Lovejoy, Thomas E.; Malhi, Yadvinder; Maracahipes, Leandro; Marimon, Beatriz S.; Marimon-Junior, Ben H.; Mendoza, Casimiro; Morandi, Paulo; Neill, David A.; Vargas, Percy Núñez; Oliveira, Edmar A.; Lenza, Eddie; Palacios, Walter A.; Peñuela-Mora, Maria C.; Pipoly, John J.; Pitman, Nigel C. A.; Prieto, Adriana; Quesada, Carlos A.; Ramirez-Angulo, Hirma; Rudas, Agustin; Ruokolainen, Kalle; Salomão, Rafael P.; Silveira, Marcos; ter Steege, Hans; Thomas-Caesar, Raquel; van der Hout, Peter; van der Heijden, Geertje M. F.; van der Meer, Peter J.; Vasquez, Rodolfo V.; Vieira, Simone A.; Vilanova, Emilio; Vos, Vincent A.; Wang, Ophelia; Young, Kenneth R.; Zagt, Roderick J.; Baker, Timothy R.
2016-01-01
Lineages tend to retain ecological characteristics of their ancestors through time. However, for some traits, selection during evolutionary history may have also played a role in determining trait values. To address the relative importance of these processes requires large-scale quantification of traits and evolutionary relationships among species. The Amazonian tree flora comprises a high diversity of angiosperm lineages and species with widely differing life-history characteristics, providing an excellent system to investigate the combined influences of evolutionary heritage and selection in determining trait variation. We used trait data related to the major axes of life-history variation among tropical trees (e.g. growth and mortality rates) from 577 inventory plots in closed-canopy forest, mapped onto a phylogenetic hypothesis spanning more than 300 genera including all major angiosperm clades to test for evolutionary constraints on traits. We found significant phylogenetic signal (PS) for all traits, consistent with evolutionarily related genera having more similar characteristics than expected by chance. Although there is also evidence for repeated evolution of pioneer and shade tolerant life-history strategies within independent lineages, the existence of significant PS allows clearer predictions of the links between evolutionary diversity, ecosystem function and the response of tropical forests to global change. PMID:27974517
A System for Automatically Generating Scheduling Heuristics
NASA Technical Reports Server (NTRS)
Morris, Robert
1996-01-01
The goal of this research is to improve the performance of automated schedulers by designing and implementing an algorithm for automatically generating heuristics for schedule selection. The particular application selected for this method is the problem of scheduling telescope observations, within a system called the Associate Principal Astronomer (APA). The input to the APA scheduler is a set of observation requests submitted by one or more astronomers. Each observation request specifies an observation program as well as scheduling constraints and preferences associated with the program. The scheduler employs greedy heuristic search to synthesize a schedule that satisfies all hard constraints of the domain and achieves a good score with respect to soft constraints expressed as an objective function established by an astronomer-user.
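The greedy heuristic search described above can be illustrated with a toy scheduler. The request fields (duration, time window, priority) and the priority-then-duration ordering heuristic are illustrative assumptions, not the APA's actual rules: hard constraints (windows, no overlap) are never violated, while the heuristic only decides placement order.

```python
def greedy_schedule(requests, horizon):
    # Greedy heuristic search: place requests one by one at the earliest
    # feasible slot, never violating the hard constraints (time window,
    # no overlap with already-scheduled observations).
    busy = []        # occupied intervals (start, end)
    schedule = {}

    def free(s, e):
        return all(e <= bs or s >= be for bs, be in busy)

    # Soft-constraint heuristic (assumed): high priority first, short first.
    for name, dur, w_start, w_end, priority in sorted(
            requests, key=lambda r: (-r[4], r[1])):
        t = max(0.0, w_start)
        while t + dur <= min(w_end, horizon):
            if free(t, t + dur):
                busy.append((t, t + dur))
                schedule[name] = (t, t + dur)
                break
            # Jump past the earliest busy interval that conflicts with t.
            t = min(be for bs, be in busy if t < be and t + dur > bs)
    return schedule
```

A request whose window closes before any feasible slot opens is simply left unscheduled, which is how the hard constraints dominate the soft-score heuristic.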
Horndeski: beyond, or not beyond?
NASA Astrophysics Data System (ADS)
Crisostomi, Marco; Hull, Matthew; Koyama, Kazuya; Tasinato, Gianmassimo
2016-03-01
Determining the most general, consistent scalar-tensor theory of gravity is important for building models of inflation and dark energy. In this work we investigate the number of degrees of freedom present in the theory of beyond Horndeski. We discuss how to construct the theory from the extrinsic curvature of the constant scalar field hypersurface, and find a simple expression for the action which guarantees the existence of the primary constraint necessary to avoid the Ostrogradsky instability. Our analysis is completely gauge-invariant. However, we confirm that mixing beyond Horndeski with a different order of Horndeski obstructs the construction of this primary constraint. Instead, when the mixing is between actions of the same order, the theory can be mapped to Horndeski through a generalised disformal transformation. This mapping, however, is impossible with beyond Horndeski alone, since we find that the theory is invariant under such a transformation. The picture that emerges is that beyond Horndeski is a healthy but isolated theory: combined with Horndeski, it either becomes Horndeski, or likely propagates a ghost.
NASA Astrophysics Data System (ADS)
O'Connell, Julia; Frinchaboy, Peter M.; Shetrone, Matthew D.; Melendez, Matthew; Cunha, Katia M. L.; Majewski, Steven R.; Zasowski, Gail; APOGEE Team
2017-01-01
The evolution of elements, as a function of age, throughout the Milky Way disk provides a key constraint for galaxy evolution models. In an effort to provide these constraints, we have conducted an investigation into the r- and s-process elemental abundances for a large sample of open clusters as part of an optical follow-up to the SDSS-III/APOGEE-1 survey. Stars were identified as cluster members by the Open Cluster Chemical Abundance & Mapping (OCCAM) survey, which culls member candidates by radial velocity, metallicity, and proper motion from the observed APOGEE sample. To obtain data for neutron-capture elements in these clusters, we conducted a long-term observing campaign covering three years (2013-2016) using the McDonald Observatory Otto Struve 2.1-m telescope and Sandiford Cass Echelle Spectrograph (R ~ 60,000). We present Galactic neutron-capture abundance gradients using 30+ clusters, within 6 kpc of the Sun, covering a range of ages from ~80 Myr to ~10 Gyr.
NASA Astrophysics Data System (ADS)
O'Connell, Julia; Frinchaboy, Peter M.; Shetrone, Matthew D.; Melendez, Matthew; Cunha, Katia; Majewski, Steven R.; Zasowski, Gail; APOGEE Team
2017-06-01
The evolution of elements, as a function of age, throughout the Milky Way disk provides a key constraint for galaxy evolution models. In an effort to provide these constraints, we have conducted an investigation into the r- and s-process elemental abundances for a large sample of open clusters as part of an optical follow-up to the SDSS-III/APOGEE-1 survey. Stars were identified as cluster members by the Open Cluster Chemical Abundance & Mapping (OCCAM) survey, which culls member candidates by radial velocity, metallicity and proper motion from the observed APOGEE sample. To obtain data for neutron-capture elements in these clusters, we conducted a long-term observing campaign covering three years (2013-2016) using the McDonald Observatory Otto Struve 2.1-m telescope and Sandiford Cass Echelle Spectrograph (R ~ 60,000). We present Galactic neutron-capture abundance gradients using 30+ clusters, within 6 kpc of the Sun, covering a range of ages from ~80 Myr to ~10 Gyr.
A scene-analysis approach to remote sensing. [San Francisco, California
NASA Technical Reports Server (NTRS)
Tenenbaum, J. M. (Principal Investigator); Fischler, M. A.; Wolf, H. C.
1978-01-01
The author has identified the following significant results. Geometric correspondence between a sensed image and a symbolic map is established in an initial stage of processing by adjusting parameters of a sensed model so that the image features predicted from the map optimally match corresponding features extracted from the sensed image. Information in the map is then used to constrain where to look in an image, what to look for, and how to interpret what is seen. For simple monitoring tasks involving multispectral classification, these constraints significantly reduce computation, simplify interpretation, and improve the utility of the resulting information. Previously intractable tasks requiring spatial and textural analysis may become straightforward in the context established by the map knowledge. The use of map-guided image analysis in monitoring the volume of water in a reservoir, the number of boxcars in a railyard, and the number of ships in a harbor is demonstrated.
Future Cosmological Constraints From Fast Radio Bursts
NASA Astrophysics Data System (ADS)
Walters, Anthony; Weltman, Amanda; Gaensler, B. M.; Ma, Yin-Zhe; Witzemann, Amadeus
2018-03-01
We consider the possible observation of fast radio bursts (FRBs) with planned future radio telescopes, and investigate how well the dispersions and redshifts of these signals might constrain cosmological parameters. We construct mock catalogs of FRB dispersion measure (DM) data and employ Markov Chain Monte Carlo analysis, with which we forecast and compare with existing constraints in the flat ΛCDM model, as well as some popular extensions that include dark energy equation of state and curvature parameters. We find that the scatter in DM observations caused by inhomogeneities in the intergalactic medium (IGM) poses a big challenge to the utility of FRBs as a cosmic probe. Only in the most optimistic case, with a high number of events and low IGM variance, do FRBs aid in improving current constraints. In particular, when FRBs are combined with CMB+BAO+SNe+H_0 data, we find the biggest improvement comes in the Ω_b h² constraint. Also, we find that the dark energy equation of state is poorly constrained, while the constraint on the curvature parameter, Ω_k, shows some improvement when combined with current constraints. When FRBs are combined with future baryon acoustic oscillation (BAO) data from 21 cm Intensity Mapping, we find little improvement over the constraints from BAOs alone. However, the inclusion of FRBs introduces an additional parameter constraint, Ω_b h², which turns out to be comparable to existing constraints. This suggests that FRBs provide valuable information about the cosmological baryon density in the intermediate-redshift universe, independent of high-redshift CMB data.
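As a toy illustration of why the DM-redshift relation constrains the baryon density: assuming a linear low-redshift relation DM_IGM(z) ≈ A·z whose slope A scales with Ω_b h², the slope and its forecast error under IGM scatter follow from simple least squares. The fiducial slope, scatter, and catalog size below are hypothetical, not the paper's, and this replaces the paper's MCMC with a one-parameter fit.

```python
import numpy as np

# Toy Macquart-style relation: DM_IGM(z) ~ A * z at low redshift, with the
# slope A proportional to the cosmic baryon density Omega_b h^2.
# A_true = 1000 pc/cm^3 per unit z and sigma = 100 pc/cm^3 are hypothetical.
rng = np.random.default_rng(0)
z = np.linspace(0.1, 2.0, 100)           # mock FRB redshifts
A_true, sigma = 1000.0, 100.0
dm = A_true * z + rng.normal(0.0, sigma, z.size)   # IGM scatter dominates

# Least-squares slope through the origin: A_hat = sum(z*DM) / sum(z^2),
# with forecast 1-sigma error sigma_A = sigma / sqrt(sum(z^2)).
A_hat = np.sum(z * dm) / np.sum(z * z)
sigma_A = sigma / np.sqrt(np.sum(z * z))
```

Shrinking the per-burst scatter or enlarging the catalog tightens sigma_A, mirroring the abstract's point that the IGM variance and the number of events control how useful FRBs are as a probe.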
An algorithm for converting a virtual-bond chain into a complete polypeptide backbone chain
NASA Technical Reports Server (NTRS)
Luo, N.; Shibata, M.; Rein, R.
1991-01-01
A systematic analysis is presented of the algorithm for converting a virtual-bond chain, defined by the coordinates of the alpha-carbons of a given protein, into a complete polypeptide backbone. An alternative algorithm, based upon the same set of geometric parameters used in the Purisima-Scheraga algorithm but with a different "linkage map" of the algorithmic procedures, is proposed. The global virtual-bond chain geometric constraints are more easily separable from the local peptide geometric and energetic constraints derived from, for example, the Ramachandran criterion, within the framework of this approach.
Tsuchimochi, Takashi; Henderson, Thomas M; Scuseria, Gustavo E; Savin, Andreas
2010-10-07
Our previously developed constrained-pairing mean-field theory (CPMFT) is shown to map onto an unrestricted Hartree-Fock (UHF) type method if one imposes a corresponding pair constraint to the correlation problem that forces occupation numbers to occur in pairs adding to one. In this new version, CPMFT has all the advantages of standard independent particle models (orbitals and orbital energies, to mention a few), yet unlike UHF, it can dissociate polyatomic molecules to the correct ground-state restricted open-shell Hartree-Fock atoms or fragments.
Generating Multi-Destination Maps.
Zhang, Junsong; Fan, Jiepeng; Luo, Zhenshan
2017-08-01
Multi-destination maps are a kind of navigation map intended to guide visitors to multiple destinations within a region, and they can be of great help to urban visitors. However, they have not been developed in current online map services. To address this issue, we introduce a novel layout model designed especially for generating multi-destination maps, which considers both the global and local layout of a multi-destination map. We model the layout problem as a graph drawing that satisfies a set of hard and soft constraints. In the global layout phase, we balance the scale factor between ROIs. In the local layout phase, we make all edges have good visibility and optimize the map layout to preserve the relative length and angle of roads. We also propose a perturbation-based optimization method to find an optimal layout in the complex solution space. The multi-destination maps generated by our system are potentially feasible on modern mobile devices, and our results can show an overview and a detail view of the whole map at the same time. In addition, we perform a user study to evaluate the effectiveness of our method, and the results prove that the multi-destination maps achieve our goals well.
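The perturbation-based optimization mentioned above can be sketched as hill climbing over node positions. The energy below keeps only two of the paper's soft constraints (relative edge length and direction) and ignores visibility and the global/local split, so it is an illustrative toy rather than the authors' model.

```python
import math
import random

def energy(pos, edges):
    # Soft constraints: penalize deviation from each edge's target length
    # and direction (no angle wrap-around handling in this toy).
    e = 0.0
    for a, b, length, theta in edges:
        dx = pos[b][0] - pos[a][0]
        dy = pos[b][1] - pos[a][1]
        e += (math.hypot(dx, dy) - length) ** 2
        e += (math.atan2(dy, dx) - theta) ** 2
    return e

def perturb_layout(pos, edges, steps=2000, sigma=0.1, seed=1):
    # Perturbation search: jitter one node at a time with Gaussian noise
    # and keep only moves that lower the layout energy.
    rng = random.Random(seed)
    pos = {k: list(v) for k, v in pos.items()}
    best = energy(pos, edges)
    nodes = list(pos)
    for _ in range(steps):
        n = rng.choice(nodes)
        old = list(pos[n])
        pos[n][0] += rng.gauss(0.0, sigma)
        pos[n][1] += rng.gauss(0.0, sigma)
        e = energy(pos, edges)
        if e < best:
            best = e
        else:
            pos[n] = old   # reject worsening moves
    return pos, best
```

Accepting only improving moves makes the energy monotonically non-increasing; an annealing-style acceptance rule would be the natural extension for escaping local minima in the complex solution space the abstract mentions.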
Biological function in the twilight zone of sequence conservation.
Ponting, Chris P
2017-08-16
Strong DNA conservation among divergent species is an indicator of enduring functionality. With weaker sequence conservation we enter a vast 'twilight zone' in which sequence subject to transient or lower constraint cannot be distinguished easily from neutrally evolving, non-functional sequence. Twilight zone functional sequence is illuminated instead by principles of selective constraint and positive selection using genomic data acquired from within a species' population. Application of these principles reveals that despite being biochemically active, most twilight zone sequence is not functional.
An Ontology for State Analysis: Formalizing the Mapping to SysML
NASA Technical Reports Server (NTRS)
Wagner, David A.; Bennett, Matthew B.; Karban, Robert; Rouquette, Nicolas; Jenkins, Steven; Ingham, Michel
2012-01-01
State Analysis is a methodology developed over the last decade for architecting, designing and documenting complex control systems. Although it was originally conceived for designing robotic spacecraft, recent applications include the design of control systems for large ground-based telescopes. The European Southern Observatory (ESO) began a project to design the European Extremely Large Telescope (E-ELT), which will require coordinated control of over a thousand articulated mirror segments. The designers are using State Analysis as a methodology and the Systems Modeling Language (SysML) as a modeling and documentation language in this task. To effectively apply the State Analysis methodology in this context it became necessary to provide ontological definitions of the concepts and relations in State Analysis and greater flexibility through a mapping of State Analysis into a practical extension of SysML. The ontology provides the formal basis for verifying compliance with State Analysis semantics including architectural constraints. The SysML extension provides the practical basis for applying the State Analysis methodology with SysML tools. This paper will discuss the method used to develop these formalisms (the ontology), the formalisms themselves, the mapping to SysML and approach to using these formalisms to specify a control system and enforce architectural constraints in a SysML model.
NASA Astrophysics Data System (ADS)
Renqi, L.; Wu, J. E.; Suppe, J.; Kanda, R. V.
2013-12-01
It is well known from seafloor spreading and hotspot data that the Australian plate has moved ~2500 km northward in a mantle reference frame since 43 Ma, during which time the Pacific plate moved approximately orthogonally ~3000 km in a WNW direction. In addition, the Australian plate has expanded up to 2000 km as a result of back-arc spreading associated with evolving subduction systems on its northern and eastern margins. Here we attempt to account for this plate motion and subduction using new quantitative constraints from mapped slabs of subducted mantle lithosphere underlying the Australian plate and its surroundings. We have mapped a large swath of sub-horizontal slabs in the lower mantle under onshore and offshore NE Australia using global mantle seismic tomography. When restored together with other mapped slabs from the Asia-Pacific region, these slabs reveal the existence of a major ocean between NE Australia, East Asia, and the Pacific at 43 Ma, which we call the East Asian Sea. The southern half of this East Asian Sea was overrun and completely subducted by northward-moving Australia and the expanding Melanesian arcs, and the WNW-converging Pacific. This lost ocean fills a major gap in plate tectonic reconstructions and also constrains the possible motion of the Caroline Sea and New Guinea arcs. Slabs were mapped from MITP08 global P-wave seismic tomography data (Li and Hilst, 2008) and the TX2011 S-wave seismic tomography data (Grand and Simmons, 2011) using Gocad software. The mapped slabs were unfolded to the spherical Earth surface to assess their pre-subduction geometry. Gplates software was used to constrain plate tectonic reconstructions within a fully animated, globally consistent framework.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C.; Pujol, A.; Gaztañaga, E.
We measure the redshift evolution of galaxy bias from a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg² area of the Dark Energy Survey (DES) Science Verification data. This method was first developed in Amara et al. (2012) and later re-examined in a companion paper (Pujol et al., in prep) with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for a magnitude-limited galaxy sample. We find the galaxy bias and 1σ error bars in 4 photometric redshift bins to be 1.33±0.18 (z=0.2-0.4), 1.19±0.23 (z=0.4-0.6), 0.99±0.36 (z=0.6-0.8), and 1.66±0.56 (z=0.8-1.0). These measurements are consistent at the 1-2σ level with measurements on the same dataset using galaxy clustering and cross-correlation of galaxies with CMB lensing. In addition, our method provides the only σ_8-independent constraint among the three. We forward-model the main observational effects using mock galaxy catalogs by including shape noise, photo-z errors and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Furthermore, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.
Developmental constraints shape the evolution of the nematode mid-developmental transition.
Zalts, Harel; Yanai, Itai
2017-03-27
Evolutionary theory assumes that genetic variation is uniform and gradual in nature, yet morphological and gene expression studies have revealed that different life-stages exhibit distinct levels of cross-species conservation. In particular, a stage in mid-embryogenesis is highly conserved across species of the same phylum, suggesting that this stage is subject to developmental constraints, either by increased purifying selection or by a strong mutational bias. An alternative explanation, however, holds that the same 'hourglass' pattern of variation may result from increased positive selection at the earlier and later stages of development. To distinguish between these scenarios, we examined gene expression variation in a population of the nematode Caenorhabditis elegans using an experimental design that eliminated the influence of positive selection. By measuring gene expression for all genes throughout development in 20 strains, we found that variations were highly uneven throughout development, with a significant depletion during mid-embryogenesis. In particular, the family of homeodomain transcription factors, whose expression generally coincides with mid-embryogenesis, evolved under high constraint. Our data further show that genes responsible for the integration of germ layers during morphogenesis are the most constrained class of genes. Together, these results provide strong evidence for developmental constraints as the mechanism underlying the hourglass model of animal evolution. Understanding the pattern and mechanism of developmental constraints provides a framework to understand how evolutionary processes have interacted with embryogenesis and led to the diversity of animal life on Earth.
Will, Jessica L; Kim, Hyun Seok; Clarke, Jessica; Painter, John C; Fay, Justin C; Gasch, Audrey P
2010-04-01
A major goal in evolutionary biology is to understand how adaptive evolution has influenced natural variation, but identifying loci subject to positive selection has been a challenge. Here we present the adaptive loss of a pair of paralogous genes in specific Saccharomyces cerevisiae subpopulations. We mapped natural variation in freeze-thaw tolerance to two water transporters, AQY1 and AQY2, previously implicated in freeze-thaw survival. However, whereas freeze-thaw-tolerant strains harbor functional aquaporin genes, the set of sensitive strains lost aquaporin function at least 6 independent times. Several genomic signatures at AQY1 and/or AQY2 reveal low variation surrounding these loci within strains of the same haplotype, but high variation between strain groups. This is consistent with recent adaptive loss of aquaporins in subgroups of strains, leading to incipient balancing selection. We show that, although aquaporins are critical for surviving freeze-thaw stress, loss of both genes provides a major fitness advantage on high-sugar substrates common to many strains' natural niche. Strikingly, strains with non-functional alleles have also lost the ancestral requirement for aquaporins during spore formation. Thus, the antagonistic effect of aquaporin function-providing an advantage in freeze-thaw tolerance but a fitness defect for growth in high-sugar environments-contributes to the maintenance of both functional and nonfunctional alleles in S. cerevisiae. This work also shows that gene loss through multiple missense and nonsense mutations, hallmarks of pseudogenization presumed to emerge after loss of constraint, can arise through positive selection.
Constraints on Variables in Syntax.
ERIC Educational Resources Information Center
Ross, John Robert
In attempting to define "syntactic variable," the author bases his discussion on the assumption that syntactic facts are a collection of two types of rules--context-free phrase structure rules (generating underlying or deep phrase markers) and grammatical transformations, which map underlying phrase markers onto superficial (or surface) phrase…
William Smith's Mapping Milestone
ERIC Educational Resources Information Center
Clary, Renee
2015-01-01
Interactive Historical Vignettes (IHVs) can serve as introductions to scientific content, pique students' interest, and reveal the nature of science to students (Clary and Wandersee 2006). Additionally, pivotal episodes in the life of a scientist can reveal the humanness of science, and the cultural and societal constraints in which the scientist…
Pellegrino Vidal, Rocío B; Allegrini, Franco; Olivieri, Alejandro C
2018-03-20
Multivariate curve resolution-alternating least-squares (MCR-ALS) is the model of choice when dealing with some non-trilinear arrays, specifically when the data are of chromatographic origin. To drive the iterative procedure to chemically interpretable solutions, the use of constraints becomes essential. In this work, both simulated and experimental data have been analyzed by MCR-ALS, applying chemically reasonable constraints, and investigating the relationship between selectivity, analytical sensitivity (γ) and root mean square error of prediction (RMSEP). As the selectivity in the instrumental modes decreases, the estimated values for γ did not fully represent the predictive model capabilities, judged from the obtained RMSEP values. Since the available sensitivity expressions have been developed by error propagation theory in unconstrained systems, there is a need of developing new expressions or analytical indicators. They should not only consider the specific profiles retrieved by MCR-ALS, but also the constraints under which the latter ones have been obtained. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Chakraborty, A.; Goto, H.
2017-12-01
The 2011 off the Pacific coast of Tohoku earthquake caused severe damage in many areas far inland because of site amplification. Furukawa district in Miyagi Prefecture, Japan recorded significant spatial differences in ground motion even at sub-kilometer scales. The site responses in the damage zone far exceeded the levels in the hazard maps. One reason for the mismatch is that mapping follows only the mean value at the measurement locations, with no regard to the data uncertainties, and thus is not always reliable. Our research objective is to develop a methodology that incorporates data uncertainties in mapping and produces a reliable map. The methodology is based on hierarchical Bayesian modeling of normally-distributed site responses in space, where the mean (μ), site-specific variance (σ²) and between-sites variance (s²) parameters are treated as unknowns with a prior distribution. The observation data are artificially created site responses with varying means and variances for 150 seismic events across 50 locations in one-dimensional space. Spatially auto-correlated random effects were added to the mean (μ) using a conditionally autoregressive (CAR) prior. Inferences on the unknown parameters are drawn from the posterior distribution using Markov chain Monte Carlo methods. The goal is to find reliable estimates of μ that are sensitive to uncertainties. During initial trials, we observed that the tau (=1/s²) parameter of the CAR prior controls the μ estimation. Using a constraint, s = 1/(k×σ), five spatial models with varying k-values were created. We define reliability to be measured by the model likelihood and propose the maximum-likelihood model as highly reliable. The model with maximum likelihood was selected using a 5-fold cross-validation technique. The results show that the maximum-likelihood model (μ*) follows the site-specific mean at low uncertainties and converges to the model mean at higher uncertainties (Fig. 1).
This result is highly significant as it successfully incorporates the effect of data uncertainties in mapping. This novel approach can be applied to any research field using mapping techniques. The methodology is now being applied to real records from a very dense seismic network in Furukawa district, Miyagi Prefecture, Japan to generate a reliable map of the site responses.
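The shrinkage behaviour described above (site-specific mean at low uncertainty, model mean at high uncertainty) can be illustrated with a simple normal-normal posterior mean; this is a hedged sketch, not the authors' MCMC/CAR implementation, and all numbers are hypothetical.

```python
def shrunk_mean(x_bar, sigma2, n, m, s2):
    """Posterior mean of a site response under a normal-normal model:
    site mean x_bar from n observations with variance sigma2,
    combined with a between-sites prior N(m, s2)."""
    prec_data = n / sigma2      # precision contributed by the site's own data
    prec_prior = 1.0 / s2       # precision contributed by the between-sites prior
    return (prec_data * x_bar + prec_prior * m) / (prec_data + prec_prior)

# Low site uncertainty: the estimate stays near the site-specific mean (2.0).
low = shrunk_mean(x_bar=2.0, sigma2=0.01, n=150, m=0.0, s2=1.0)
# High site uncertainty: the estimate is pulled toward the model (global) mean (0.0).
high = shrunk_mean(x_bar=2.0, sigma2=100.0, n=150, m=0.0, s2=1.0)
```

The interpolation weight is simply the ratio of data precision to total precision, which is the mechanism behind the μ* behaviour reported in the abstract.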
Neurocomputing strategies in decomposition based structural design
NASA Technical Reports Server (NTRS)
Szewczyk, Z.; Hajela, P.
1993-01-01
The present paper explores the applicability of neurocomputing strategies in decomposition based structural optimization problems. It is shown that the modeling capability of a backpropagation neural network can be used to detect weak couplings in a system, and to effectively decompose it into smaller, more tractable, subsystems. When such partitioning of a design space is possible, parallel optimization can be performed in each subsystem, with a penalty term added to its objective function to account for constraint violations in all other subsystems. Dependencies among subsystems are represented in terms of global design variables, and a neural network is used to map the relations between these variables and all subsystem constraints. A vector quantization technique, referred to as a z-Network, can effectively be used for this purpose. The approach is illustrated with applications to minimum weight sizing of truss structures with multiple design constraints.
HI Intensity Mapping with FAST
NASA Astrophysics Data System (ADS)
Bigot-Sazy, M.-A.; Ma, Y.-Z.; Battye, R. A.; Browne, I. W. A.; Chen, T.; Dickinson, C.; Harper, S.; Maffei, B.; Olivari, L. C.; Wilkinson, P. N.
2016-02-01
We discuss the detectability of large-scale HI intensity fluctuations using the FAST telescope. We present forecasts for the accuracy of measuring the Baryonic Acoustic Oscillations and constraining the properties of dark energy. The FAST 19-beam L-band receivers (1.05-1.45 GHz) can provide constraints on the matter power spectrum and dark energy equation of state parameters (w0, wa) that are comparable to the BINGO and CHIME experiments. For one year of integration time we find that the optimal survey area is 6000 deg². However, observing with larger frequency coverage at higher redshift (0.95-1.35 GHz) improves the projected error bars on the HI power spectrum at more than the 2σ confidence level. The combined constraints from FAST, CHIME, BINGO and Planck CMB observations can provide reliable, stringent constraints on the dark energy equation of state.
A neurocomputational account of taxonomic responding and fast mapping in early word learning.
Mayor, Julien; Plunkett, Kim
2010-01-01
We present a neurocomputational model with self-organizing maps that accounts for the emergence of taxonomic responding and fast mapping in early word learning, as well as a rapid increase in the rate of acquisition of words observed in late infancy. The quality and efficiency of generalization of word-object associations is directly related to the quality of prelexical, categorical representations in the model. We show how synaptogenesis supports coherent generalization of word-object associations and show that later synaptic pruning minimizes metabolic costs without being detrimental to word learning. The role played by joint-attentional activities is identified in the model, both at the level of selecting efficient cross-modal synapses and at the behavioral level, by accelerating and refining overall vocabulary acquisition. The model can account for the qualitative shift in the way infants use words, from an associative to a referential-like use, for the pattern of overextension errors in production and comprehension observed during early childhood and typicality effects observed in lexical development. Interesting by-products of the model include a potential explanation of the shift from prototype to exemplar-based effects reported for adult category formation, an account of mispronunciation effects in early lexical development, and extendability to include accounts of individual differences in lexical development and specific disorders such as Williams syndrome. The model demonstrates how an established constraint on lexical learning, which has often been regarded as domain-specific, can emerge from domain-general learning principles that are simultaneously biologically, psychologically, and socially plausible.
Design sensitivity analysis of rotorcraft airframe structures for vibration reduction
NASA Technical Reports Server (NTRS)
Murthy, T. Sreekanta
1987-01-01
Optimization of rotorcraft structures for vibration reduction was studied. The objective of this study is to develop practical computational procedures for structural optimization of airframes subject to steady-state vibration response constraints. One of the key elements of any such computational procedure is design sensitivity analysis. A method for design sensitivity analysis of airframes under vibration response constraints is presented. The mathematical formulation of the method and its implementation as a new solution sequence in MSC/NASTRAN are described. The results of the application of the method to a simple finite element stick model of the AH-1G helicopter airframe are presented and discussed. Selection of design variables that are most likely to bring about changes in the response at specified locations in the airframe is based on consideration of forced response strain energy. Sensitivity coefficients are determined for the selected design variable set. Constraints on the natural frequencies are also included in addition to the constraints on the steady-state response. Sensitivity coefficients for these constraints are determined. Results of the analysis and insights gained in applying the method to the airframe model are discussed. The general nature of future work to be conducted is described.
Simultaneous multislice refocusing via time optimal control.
Rund, Armin; Aigner, Christoph Stefan; Kunisch, Karl; Stollberger, Rudolf
2018-02-09
We present the joint design of minimum-duration RF pulses and slice-selective gradient shapes for MRI via time optimal control with strict physical constraints, and its application to simultaneous multislice imaging. The minimization of the pulse duration is cast as a time optimal control problem with inequality constraints describing the refocusing quality and physical constraints. It is solved with a bilevel method, where the pulse length is minimized in the upper level and the constraints are satisfied in the lower level. To address the inherent nonconvexity of the optimization problem, the upper level is enhanced with new heuristics for finding a near-global optimizer based on a second optimization problem. A large set of optimized examples shows an average temporal reduction of 87.1% for double diffusion and 74% for turbo spin echo pulses compared to power independent number of slices (PINS) pulses. The optimized results are validated on a 3T scanner with phantom measurements. The presented design method computes minimum-duration RF pulse and slice-selective gradient shapes subject to physical constraints. The shorter pulse duration can be used to decrease the effective echo time in existing echo-planar imaging or the echo spacing in turbo spin echo sequences.
Xiao, Youping; Kavanau, Christopher; Bertin, Lauren; Kaplan, Ehud
2011-01-01
Many studies have provided evidence for the existence of universal constraints on color categorization or naming in various languages, but the biological basis of these constraints is unknown. A recent study of the pattern of color categorization across numerous languages has suggested that these patterns tend to avoid straddling a region in color space at or near the border between the English composite categories of "warm" and "cool". This fault line in color space represents a fundamental constraint on color naming. Here we report that the two-way categorization along the fault line is correlated with the sign of the L- versus M-cone contrast of a stimulus color. Moreover, we found that the sign of the L-M cone contrast also accounted for the two-way clustering of the spatially distributed neural responses in small regions of the macaque primary visual cortex, visualized with optical imaging. These small regions correspond to the hue maps, where our previous study found a spatially organized representation of stimulus hue. Altogether, these results establish a direct link between a universal constraint on color naming and the cone-specific information that is represented in the primate early visual system.
Migrate small, sound big: functional constraints on body size promote tracheal elongation in cranes.
Jones, M R; Witt, C C
2014-06-01
Organismal traits often represent the outcome of opposing selection pressures. Although social or sexual selection can cause the evolution of traits that constrain function or survival (e.g. ornamental feathers), it is unclear how the strength and direction of selection respond to ecological shifts that increase the severity of the constraint. For example, reduced body size might evolve by natural selection to enhance flight performance in migratory birds, but social or sexual selection favouring large body size may provide a countervailing force. Tracheal elongation is a potential outcome of these opposing pressures because it allows birds to convey an auditory signal of exaggerated body size. We predicted that the evolution of migration in cranes has coincided with a reduction in body size and a concomitant intensification of social or sexual selection for apparent large body size via tracheal elongation. We used a phylogenetic comparative approach to examine the relationships among migration distance, body mass and trachea length in cranes. As predicted, we found that migration distance correlated negatively with body size and positively with proportional trachea length. This result was consistent with our hypothesis that evolutionary reductions in body size led to intensified selection for trachea length. The most likely ultimate causes of intensified positive selection on trachea length are the direct benefits of conveying a large body size in intraspecific contests for mates and territories. We conclude that the strength of social or sexual selection on crane body size is linked to the degree of functional constraint.
Meter-scale slopes of candidate MER landing sites from point photoclinometry
Beyer, R.A.; McEwen, A.S.; Kirk, R.L.
2003-01-01
Photoclinometry was used to analyze the small-scale roughness of areas that fall within the proposed Mars Exploration Rover (MER) 2003 landing ellipses. The landing ellipses presented in this study were those in Athabasca Valles, Elysium Planitia, Eos Chasma, Gusev Crater, Isidis Planitia, Melas Chasma, and Meridiani Planum. We were able to constrain surface slopes on length scales comparable to the image resolution (1.5 to 12 m/pixel). The MER 2003 mission has various engineering constraints that each candidate landing ellipse must satisfy. These constraints indicate that the statistical slope values at 5 m baselines are an important criterion. We used our technique to constrain maximum surface slopes across large swaths of each image, and built up slope statistics for the images in each landing ellipse. We are confident that all MER 2003 landing site ellipses in this study, with the exception of the Melas Chasma ellipse, are within the small-scale roughness constraints. Our results have provided input into the landing hazard assessment process. In addition to evaluating the safety of the landing sites, our mapping of small-scale roughnesses can also be used to better define and map morphologic units. The morphology of a surface is characterized by the slope distribution and magnitude of slopes. In looking at how slopes are distributed, we can better define landforms and determine the boundaries of morphologic units.
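Baseline slope statistics of the kind used above can be sketched as follows; this is a hedged illustration of percentile slope statistics on a gridded elevation model, not the photoclinometry pipeline of the study, and the ramp DEM is purely synthetic.

```python
import numpy as np

def slope_stats(dem, spacing, baseline):
    """Bidirectional slopes (degrees) at a given baseline from a gridded DEM.
    dem: 2-D elevation array (m); spacing: grid spacing (m); baseline: lag (m)."""
    lag = max(1, int(round(baseline / spacing)))   # baseline expressed in pixels
    dz_x = dem[:, lag:] - dem[:, :-lag]            # elevation differences along x
    dz_y = dem[lag:, :] - dem[:-lag, :]            # elevation differences along y
    run = lag * spacing
    rise = np.abs(np.concatenate([dz_x.ravel(), dz_y.ravel()]))
    slopes = np.degrees(np.arctan(rise / run))
    return {"rms": float(np.sqrt(np.mean(slopes ** 2))),
            "p99": float(np.percentile(slopes, 99))}

# A planar ramp rising 1 m per 10 m has a uniform ~5.7 degree slope along x.
y, x = np.mgrid[0:50, 0:50]
stats = slope_stats(0.1 * x.astype(float), spacing=1.0, baseline=5.0)
```

In practice such statistics would be accumulated over every image in a landing ellipse and compared against the engineering slope threshold.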
The conformational preferences of γ-lactam and its role in constraining peptide structure
NASA Astrophysics Data System (ADS)
Paul, P. K. C.; Burney, P. A.; Campbell, M. M.; Osguthorpe, D. J.
1990-09-01
The conformational constraints imposed by γ-lactams in peptides have been studied using valence force field energy calculations and flexible geometry maps. It has been found that while cyclisation restrains the Ψ of the lactam, non-bonded interactions contribute to the constraints on ϕ of the lactam. The γ-lactam also affects the (ϕ,Ψ) of the residue after it in a peptide sequence. For an l-lactam, the ring geometry restricts Ψ to about −120°, and ϕ has two minima, the lowest energy around −140° and a higher minimum (5 kcal/mol higher) at 60°, making an l-γ-lactam more favourably accommodated in a near-extended conformation than in position 2 of a type II' β-turn. The energy of the ϕ ≈ +60° minimum can be lowered substantially, until it is more favoured than the −140° minimum, by progressive substitution of bulkier groups on the amide N of the l-γ-lactam. The (ϕ,Ψ) maps of the residue succeeding a γ-lactam show subtle differences from those of standard N-methylated residues. The dependence of the constraints on the chirality of γ-lactams and N-substituted γ-lactams, in terms of the formation of secondary structures like β-turns, is discussed, and the comparison of the theoretical conformations with experimental results is highlighted.
Locally Linear Embedding of Local Orthogonal Least Squares Images for Face Recognition
NASA Astrophysics Data System (ADS)
Hafizhelmi Kamaru Zaman, Fadhlan
2018-03-01
Dimensionality reduction is very important in face recognition since it ensures that high-dimensional data can be mapped to a lower-dimensional space without losing salient and integral facial information. Locally Linear Embedding (LLE) has previously been used to serve this purpose; however, the process of acquiring LLE features requires high computation and resources. To overcome this limitation, we propose a locally-applied Local Orthogonal Least Squares (LOLS) model that can be used as initial feature extraction before the application of LLE. By constructing least squares regression under orthogonal constraints, we can preserve more discriminant information in the local subspace of facial features while reducing the overall features into a more compact form that we call LOLS images. LLE can then be applied to the LOLS images to map their representation into a global coordinate system of much lower dimensionality. Several experiments carried out using publicly available face datasets such as AR, ORL, YaleB, and FERET under the Single Sample Per Person (SSPP) constraint demonstrate that our proposed method can reduce the time required to compute LLE features while delivering better accuracy than when either LLE or OLS alone is used. Comparison against several other feature extraction methods and a more recent feature-learning method, state-of-the-art Convolutional Neural Networks (CNN), also reveals the superiority of the proposed method under the SSPP constraint.
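The LLE mapping step referred to above can be sketched in a few lines of numpy (neighbourhood reconstruction weights, then a global eigen-decomposition); this is a minimal illustration of standard LLE, not the authors' LOLS+LLE pipeline, and the 3-D curve data are synthetic.

```python
import numpy as np

def lle(X, n_neighbors=5, n_components=2, reg=1e-3):
    """Minimal Locally Linear Embedding: reconstruct each point from its
    neighbours, then find low-dimensional coordinates preserving the weights."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    W = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:n_neighbors + 1]       # skip the point itself
        Z = X[nbrs] - X[i]                                # centre the neighbourhood
        G = Z @ Z.T
        G += reg * np.trace(G) * np.eye(len(nbrs))        # regularise the Gram matrix
        w = np.linalg.solve(G, np.ones(len(nbrs)))
        W[i, nbrs] = w / w.sum()                          # weights sum to one
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]                    # drop the constant eigenvector

# Embed a noisy 3-D curve into 2-D.
t = np.linspace(0, 3, 60)
rng = np.random.default_rng(0)
X = np.c_[np.cos(t), np.sin(t), t] + 0.01 * rng.normal(size=(60, 3))
Y = lle(X)
```

The cost of the weight step is what a compact initial representation such as LOLS images would reduce, since each neighbourhood solve scales with the feature dimension.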
Estimating landscape carrying capacity through maximum clique analysis
Donovan, Therese; Warrington, Greg; Schwenk, W. Scott; Dinitz, Jeffrey H.
2012-01-01
Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in an 1153-km² study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m² HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points).
Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be broken into several, smaller problems), or for species with large home ranges relative to grid scale where resampling the points to a coarser resolution can reduce the problem to manageable proportions.
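The graph step described above can be sketched with a small Bron-Kerbosch maximum-clique search; this is an illustrative stand-in for the program Cliquer, and the five-vertex compatibility graph below is hypothetical.

```python
def max_clique(adj):
    """Bron-Kerbosch search (with pivoting) for the largest clique in an
    undirected graph. adj: dict mapping each vertex to its neighbour set."""
    best = set()
    def expand(r, p, x):
        nonlocal best
        if not p and not x:
            if len(r) > len(best):
                best = set(r)            # r is a maximal clique; keep the largest
            return
        pivot = max(p | x, key=lambda v: len(adj[v] & p))  # pivot prunes branches
        for v in list(p - adj[pivot]):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.remove(v)
            x.add(v)
    expand(set(), set(adj), set())
    return best

# Vertices are pseudo-home-range centres; an edge means the two home ranges
# can coexist without violating territory boundaries. The maximum clique size
# is then the carrying-capacity estimate N(k) for this toy landscape.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
clique = max_clique(adj)
```

Exact search is exponential in the worst case, which is why the abstract notes the method is best suited to clustered distributions that break the graph into smaller components.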
Cosmology and Astrophysics using the Post-Reionization HI
NASA Astrophysics Data System (ADS)
Sarkar, Tapomoy Guha; Sen, Anjan A.
2016-12-01
We discuss the prospects of using the redshifted 21-cm emission from neutral hydrogen in the post-reionization epoch to study our Universe. The main aim of the article is to highlight the efforts of Indian scientists in this area with the SKA in mind. It turns out that the intensity mapping surveys from SKA can be instrumental in obtaining tighter constraints on the dark energy models. Cross-correlation of the HI intensity maps with the Lyα forest data can also be useful in measuring the BAO scale.
Aeromagnetic Map with Geology of the Los Angeles 30 x 60 Minute Quadrangle, Southern California
Langenheim, V.E.; Hildenbrand, T.G.; Jachens, R.C.; Campbell, R.H.; Yerkes, R.F.
2006-01-01
Introduction: An important objective of geologic mapping is to project surficial structures and stratigraphy into the subsurface. Geophysical data and analysis are useful tools for achieving this objective. This aeromagnetic anomaly map provides a three-dimensional perspective to the geologic mapping of the Los Angeles 30 by 60 minute quadrangle. Aeromagnetic maps show the distribution of magnetic rocks, primarily those containing magnetite (Blakely, 1995). In the Los Angeles quadrangle, the magnetic sources are Tertiary and Mesozoic igneous rocks and Precambrian crystalline rocks. Aeromagnetic anomalies mark abrupt spatial contrasts in magnetization that can be attributed to lithologic boundaries, perhaps caused by faulting of these rocks or by intrusive contacts. This aeromagnetic map overlain on geology, with information from wells and other geophysical data, provides constraints on the subsurface geology by allowing us to trace faults beneath surficial cover and estimate fault dip and offset. This map supersedes Langenheim and Jachens (1997) because of its digital form and the added value of overlaying the magnetic data on a geologic base. The geologic base for this map is from Yerkes and Campbell (2005); some of their subunits have been merged into one on this map.
NASA Astrophysics Data System (ADS)
Antonakos, Andreas K.; Voudouris, Konstantinos S.; Lambrakis, Nikolaos I.
2014-12-01
The implementation of a geographic information system (GIS)/fuzzy spatial decision support system in the selection of sites for drinking-water pumping boreholes is described. Groundwater is the main source of domestic supply and irrigation in Korinthia prefecture, south-eastern Greece. Water demand has increased considerably over the last 30 years and is mainly met by groundwater abstracted via numerous wells and boreholes. The definition of the most "suitable" site for the drilling of new boreholes is a major issue in this area. A method of allocating suitable locations has been developed based on multicriteria analysis and fuzzy logic. Twelve parameters were ultimately included in the model, grouped into three categories: borehole yield, groundwater quality, and economic and technical constraints. GIS was used to create a classification map of the research area, based on the suitability of each point for the placement of new borehole fields. The coastal part of the study area is completely unsuitable, whereas high values of suitability are recorded in the south-western part. The study demonstrated that the method of multicriteria analysis in combination with fuzzy logic is a useful tool for selecting the best sites for new borehole drilling on a regional scale. The results could be used by local authorities and decision-makers for integrated groundwater resources management.
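A hedged sketch of the weighted fuzzy overlay idea (not the authors' twelve-parameter GIS model): criterion maps expressed as [0, 1] memberships are combined with weights and masked by a hard exclusion constraint; all maps, weights and the mask below are invented.

```python
import numpy as np

def suitability(criteria, weights, constraint_mask):
    """Fuzzy weighted overlay: criteria are [0, 1] membership maps,
    weights sum to 1, and a boolean mask zeroes out excluded areas."""
    stack = np.stack(criteria)
    w = np.asarray(weights)[:, None, None]
    score = (w * stack).sum(axis=0)          # weighted fuzzy aggregation per cell
    return np.where(constraint_mask, score, 0.0)

# Toy 2x2 membership maps for two criterion groups (hypothetical values).
yield_map = np.array([[0.9, 0.2], [0.6, 0.8]])
quality = np.array([[0.7, 0.9], [0.5, 0.4]])
mask = np.array([[True, True], [True, False]])   # e.g. exclude a coastal cell
s = suitability([yield_map, quality], [0.6, 0.4], mask)
```

The resulting grid plays the role of the classification map, with excluded cells forced to zero suitability.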
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, T; Zhou, L; Li, Y
Purpose: For intensity modulated radiotherapy, plan optimization is time consuming, with difficulties in selecting objectives and constraints and their relative weights. A fast and automatic multi-objective optimization algorithm with the ability to predict optimal constraints and manage their trade-offs can help to solve this problem. Our purpose is to develop such a framework and algorithm for general inverse planning. Methods: Three main components are contained in the proposed multi-objective optimization framework: prediction of initial dosimetric constraints, further adjustment of constraints, and plan optimization. We first use our previously developed in-house geometry-dosimetry correlation model to predict the optimal patient-specific dosimetric endpoints, and treat them as initial dosimetric constraints. Second, we build an endpoint (organ) priority list and a constraint adjustment rule to repeatedly tune these constraints from their initial values, until no endpoint has room for further improvement. Last, we implement a voxel-independent FMO algorithm for optimization. During the optimization, a model for tuning the voxel weighting factors with respect to the constraints is created. For framework and algorithm evaluation, we randomly selected 20 IMRT prostate cases from the clinic and compared them with our automatically generated plans, in both efficiency and plan quality. Results: For each evaluated plan, the proposed multi-objective framework ran fluently and automatically. The number of voxel weighting factor iterations varied from 10 to 30 under an updated constraint, and the number of constraint tuning steps varied from 20 to 30 for every case, until no stricter constraint was allowed. The average total computation time for the whole optimization procedure was ∼30 min.
Comparing the DVHs, better OAR dose sparing was observed in the automatically generated plans for 13 of the 20 cases, while the others gave competitive results. Conclusion: We have successfully developed a fast and automatic multi-objective optimization framework for intensity modulated radiotherapy. This work is supported by the National Natural Science Foundation of China (No: 81571771)
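The adjust-until-no-improvement loop described in the Methods can be sketched as a greedy constraint-tightening procedure; the feasibility rule and dose numbers below are hypothetical stand-ins for the real plan optimizer.

```python
def tighten_constraints(endpoints, feasible, step=1.0):
    """Greedy constraint-tightening sketch: walk an organ priority list and
    lower each dosimetric constraint stepwise while a plan stays feasible.
    endpoints: {organ: initial constraint}; feasible: callable on such a dict."""
    limits = dict(endpoints)
    improved = True
    while improved:                       # stop when no endpoint can improve
        improved = False
        for organ in list(limits):        # priority order = dict insertion order
            trial = dict(limits)
            trial[organ] = limits[organ] - step
            if feasible(trial):
                limits = trial
                improved = True
    return limits

# Toy feasibility rule (hypothetical): the constraints must sum to at least
# 40 Gy and no single constraint may drop below 10 Gy.
start = {"rectum": 30.0, "bladder": 25.0}
final = tighten_constraints(
    start, lambda c: sum(c.values()) >= 40 and min(c.values()) >= 10)
```

In the actual framework the feasibility check would be a full FMO solve; the loop structure (priority list, stepwise tightening, termination when no stricter constraint is allowed) is what the sketch illustrates.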
Application of a sparseness constraint in multivariate curve resolution - Alternating least squares.
Hugelier, Siewert; Piqueras, Sara; Bedia, Carmen; de Juan, Anna; Ruckebusch, Cyril
2018-02-13
The use of sparseness in chemometrics is a concept that has increased in popularity. The advantage is, above all, a better interpretability of the results obtained. In this work, sparseness is implemented as a constraint in multivariate curve resolution-alternating least squares (MCR-ALS), which aims at reproducing raw (mixed) data by a bilinear model of chemically meaningful profiles. In many cases, the mixed raw data analyzed are not sparse by nature, but their decomposition profiles can be, as is the case in some instrumental responses, such as mass spectra, or in concentration profiles linked to scattered distribution maps of powdered samples in hyperspectral images. To induce sparseness in the constrained profiles, one-dimensional and/or two-dimensional numerical arrays can be fitted using a basis of Gaussian functions with a penalty on the coefficients. In this work, a least squares regression framework with an L0-norm penalty is applied. This L0-norm penalty constrains the number of non-null coefficients in the fit of the constrained array, without prior assumptions on their number or positions. It has been shown that the sparseness constraint induces the suppression of values linked to uninformative channels and noise in MS spectra and improves the location of scattered compounds in distribution maps, resulting in a better interpretability of the constrained profiles. An additional benefit of the sparseness constraint is a lower ambiguity in the bilinear model, since the major presence of null coefficients in the constrained profiles also helps to limit the solutions for the profiles in the counterpart matrix of the MCR bilinear model.
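An L0-style fit with a Gaussian basis can be sketched with greedy selection capped at a fixed number of non-null coefficients; this is a hedged stand-in for the penalized regression described above, and the centres, widths and amplitudes are synthetic.

```python
import numpy as np

def sparse_gaussian_fit(y, centers, width, k_max):
    """Greedy L0-style fit: approximate a 1-D profile with at most k_max
    Gaussian basis functions, chosen one at a time by residual reduction."""
    x = np.arange(len(y), dtype=float)
    B = np.exp(-0.5 * ((x[:, None] - np.asarray(centers)[None, :]) / width) ** 2)
    support = []
    coef = np.zeros(0)
    residual = y.copy()
    for _ in range(k_max):
        j = int(np.argmax(np.abs(B.T @ residual)))   # best-correlated basis function
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(B[:, support], y, rcond=None)
        residual = y - B[:, support] @ coef          # refit, then update residual
    out = np.zeros(B.shape[1])
    out[support] = coef                              # all other coefficients stay null
    return out

# A profile that truly contains two Gaussian peaks is recovered with only
# two non-null coefficients out of ten candidate centres.
centers = np.arange(0, 100, 10)
x = np.arange(100, dtype=float)
y = 3 * np.exp(-0.5 * ((x - 20) / 3) ** 2) + 2 * np.exp(-0.5 * ((x - 70) / 3) ** 2)
c = sparse_gaussian_fit(y, centers, width=3.0, k_max=2)
```

As in the abstract, coefficients for uninformative channels are exactly zero, which is what yields the interpretability and reduced-ambiguity benefits.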
Evolutionary stasis in pollen morphogenesis due to natural selection.
Matamoro-Vidal, Alexis; Prieu, Charlotte; Furness, Carol A; Albert, Béatrice; Gouyon, Pierre-Henri
2016-01-01
The contribution of developmental constraints and selective forces to the determination of evolutionary patterns is an important and unsolved question. We test whether the long-term evolutionary stasis observed for pollen morphogenesis (microsporogenesis) in eudicots is due to developmental constraints or to selection on a morphological trait shaped by microsporogenesis: the equatorial aperture pattern. Most eudicots have three equatorial apertures but several taxa have independently lost the equatorial pattern and have microsporogenesis decoupled from aperture pattern determination. If selection on the equatorial pattern limits variation, we expect to see increased variation in microsporogenesis in the nonequatorial clades. Variation of microsporogenesis was studied using phylogenetic comparative analyses in 83 species dispersed throughout eudicots including species with and without equatorial apertures. The species that have lost the equatorial pattern have highly variable microsporogenesis at the intra-individual and inter-specific levels regardless of their pollen morphology, whereas microsporogenesis remains stable in species with the equatorial pattern. The observed burst of variation upon loss of equatorial apertures shows that there are no strong developmental constraints precluding variation in microsporogenesis, and that the stasis is likely to be due principally to selective pressure acting on pollen morphogenesis because of its implication in the determination of the equatorial aperture pattern. © 2015 The Authors. New Phytologist © 2015 New Phytologist Trust.
NASA Astrophysics Data System (ADS)
Fan, Qingbiao; Xu, Caijun; Yi, Lei; Liu, Yang; Wen, Yangmao; Yin, Zhi
2017-10-01
When ill-posed problems are inverted, the regularization process is equivalent to adding constraint equations or prior information from a Bayesian perspective. The veracity of the constraints (or the regularization matrix R) significantly affects the solution, and a smoothness constraint is usually added in seismic slip inversions. In this paper, an adaptive smoothness constraint (ASC) based on the classic Laplacian smoothness constraint (LSC) is proposed. The ASC not only improves the smoothness constraint, but also helps constrain the slip direction. A series of experiments are conducted in which different magnitudes of noise are imposed and different densities of observation are assumed, and the results indicated that the ASC was superior to the LSC. Using the proposed ASC, the Helmert variance component estimation method is highlighted as the best for selecting the regularization parameter compared with other methods, such as generalized cross-validation or the mean squared error criterion method. The ASC may also benefit other ill-posed problems in which a smoothness constraint is required.
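The role of the regularization matrix R can be sketched for a classic smoothness constraint: the normal equations (I + beta^2 L'L) m = d trade data fit against roughness. The toy 1-D version below (observation operator = identity, L = first differences) is our own illustration, not the ASC of the paper, which additionally adapts the constraint and the slip direction.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (small dense systems only)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for col in range(k, n + 1):
                M[r][col] -= f * M[k][col]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][j] * x[j] for j in range(k + 1, n))) / M[k][k]
    return x

def smoothed_inversion(d, beta):
    # minimize ||m - d||^2 + beta^2 ||L m||^2, with L the first-difference
    # operator, via the normal equations (I + beta^2 L'L) m = d
    n = len(d)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0
    for i in range(n - 1):  # accumulate beta^2 * L'L
        A[i][i] += beta * beta
        A[i + 1][i + 1] += beta * beta
        A[i][i + 1] -= beta * beta
        A[i + 1][i] -= beta * beta
    return solve(A, d)
```

With beta = 0 the data are reproduced exactly; as beta grows, the model is pulled toward a constant (the smoothest profile), which is how the regularization parameter controls the constraint strength.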
Genome-wide association mapping of canopy wilting in diverse soybean genotypes
USDA-ARS?s Scientific Manuscript database
Drought stress is a major global constraint for crop production, and slow canopy wilting has been shown to be a promising trait for improving drought tolerance. The objective of this study was to identify genetic loci associated with canopy wilting and confirm those loci with previously reported can...
Mapping Languaging in Digital Spaces: Literacy Practices at Borderlands
ERIC Educational Resources Information Center
Dahlberg, Giulia Messina; Bagga-Gupta, Sangeeta
2016-01-01
The study presented in this article explores the ways in which discursive-technologies shape interaction in "digitally-mediated" educational settings in terms of affordances and constraints for the participants. Our multi-scale sociocultural-dialogical analysis of the interactional order in the online sessions of an "Italian for…
The Role of Novelty in Early Word Learning
ERIC Educational Resources Information Center
Mather, Emily; Plunkett, Kim
2012-01-01
What mechanism implements the mutual exclusivity bias to map novel labels to objects without names? Prominent theoretical accounts of mutual exclusivity (e.g., Markman, 1989, 1990) propose that infants are guided by their knowledge of object names. However, the mutual exclusivity constraint could be implemented via monitoring of object novelty…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Deguchi, Akira; Tsuchi, Hiroyuki; Kitayama, Kazumi
2007-07-01
Available in abstract form only. Full text of publication follows: A stepwise site selection process has been adopted for geological disposal of HLW in Japan. Literature surveys (LS), followed by preliminary investigations (PI) and, finally, detailed investigations (DI) in underground facilities will be carried out in the successive selection stages. In the PI stage, surface-based investigations such as borehole surveys and geophysical prospecting will be implemented with two main objectives. The first is to obtain information relating to legal requirements on siting, such as the occurrence of igneous or fault activity, and to confirm the extremely low likelihood of adverse impacts on the candidate site resulting from such phenomena. The second is to obtain the information required for the design and performance assessment of the engineered barrier system and the repository. In order to implement these preliminary investigations rigorously and efficiently within the constraints of a limited time period, budget and resources, PI planning before commencing investigations and on-site PI management during the investigation phase are very important issues. The planning and management of PI have to be performed by NUMO staff, but not all staff have sufficient experience in the range of disciplines involved. NUMO therefore decided to compile existing knowledge and experience in the planning and management of investigations in the form of manuals to be used to improve and maintain internal expertise. Experts with experience in overseas investigation programs were requested to prepare these manuals. This paper outlines the structure and scope of the upper level manual (road-map) and discusses NUMO's experience in applying it in 'dry-runs' to model sites. (authors)
NASA Astrophysics Data System (ADS)
Goodenough, Anne E.; Hart, Adam G.; Elliot, Simon L.
2011-01-01
Phenological studies have demonstrated changes in the timing of seasonal events across multiple taxonomic groups as the climate warms. Some northern European migrant bird populations, however, show little or no significant change in breeding phenology, resulting in synchrony with key food sources becoming mismatched. This phenological inertia has often been ascribed to migration constraints (i.e. arrival date at breeding grounds preventing earlier laying). This has been based primarily on research in The Netherlands and Germany where time between arrival and breeding is short (often as few as 9 days). Here, we test the arrival constraint hypothesis over a 15-year period for a U.K. pied flycatcher ( Ficedula hypoleuca) population where laying date is not constrained by arrival as the period between arrival and breeding is substantial and consistent (average 27 ± 4.57 days SD). Despite increasing spring temperatures and quantifiably stronger selection for early laying on the basis of number of offspring to fledge, we found no significant change in breeding phenology, in contrast with co-occurring resident blue tits ( Cyanistes caeruleus). We discuss possible non-migratory constraints on phenological adjustment, including limitations on plasticity, genetic constraints and competition, as well as the possibility of counter-selection pressures relating to adult survival, longevity or future reproductive success. We propose that such factors need to be considered in conjunction with the arrival constraint hypothesis.
Goodenough, Anne E; Hart, Adam G; Elliot, Simon L
2011-01-01
Phenological studies have demonstrated changes in the timing of seasonal events across multiple taxonomic groups as the climate warms. Some northern European migrant bird populations, however, show little or no significant change in breeding phenology, resulting in synchrony with key food sources becoming mismatched. This phenological inertia has often been ascribed to migration constraints (i.e. arrival date at breeding grounds preventing earlier laying). This has been based primarily on research in The Netherlands and Germany where time between arrival and breeding is short (often as few as 9 days). Here, we test the arrival constraint hypothesis over a 15-year period for a U.K. pied flycatcher (Ficedula hypoleuca) population where laying date is not constrained by arrival as the period between arrival and breeding is substantial and consistent (average 27 ± 4.57 days SD). Despite increasing spring temperatures and quantifiably stronger selection for early laying on the basis of number of offspring to fledge, we found no significant change in breeding phenology, in contrast with co-occurring resident blue tits (Cyanistes caeruleus). We discuss possible non-migratory constraints on phenological adjustment, including limitations on plasticity, genetic constraints and competition, as well as the possibility of counter-selection pressures relating to adult survival, longevity or future reproductive success. We propose that such factors need to be considered in conjunction with the arrival constraint hypothesis.
Zheng, Hong-Xiang; Li, Lei; Jiang, Xiao-Yan; Yan, Shi; Qin, Zhendong; Wang, Xiaofeng; Jin, Li
2017-10-01
Considerable attention has been focused on the effect of deleterious mutations caused by the recent relaxation of selective constraints on human health, including the prevalence of obesity, which might represent an adaptive response of energy-conserving metabolism under the conditions of modern society. Mitochondrial DNA (mtDNA) encoding 13 core subunits of oxidative phosphorylation plays an important role in metabolism. Therefore, we hypothesized that a relaxation of selection constraints on mtDNA and an increase in the proportion of deleterious mutations have played a role in obesity prevalence. In this study, we collected and sequenced the mtDNA genomes of 722 Uyghurs, a typical population with a high prevalence of obesity. We identified the variants that occurred in the Uyghur population for each sample and found that the number of nonsynonymous mutations carried by Uyghur individuals declined with elevation of their BMI (P = 0.015). We further calculated the nonsynonymous and synonymous ratio (N/S) of the high-BMI and low-BMI haplogroups, and the results showed that a significantly higher N/S occurred in the whole mtDNA genomes of the low-BMI haplogroups (0.64) than in that of the high-BMI haplogroups (0.35, P = 0.030) and ancestor haplotypes (0.41, P = 0.032); these findings indicated that low-BMI individuals showed a recent relaxation of selective constraints. In addition, we investigated six clinical characteristics and found that fasting plasma glucose might be correlated with the N/S and selective pressures. We hypothesized that a higher proportion of deleterious mutations led to mild mitochondrial dysfunction, which helps to drive glucose consumption and thereby prevents obesity. Our results provide new insights into the relationship between obesity predisposition and mitochondrial genome evolution.
Multi-task feature selection in microarray data by binary integer programming.
Lan, Liang; Vucetic, Slobodan
2013-12-20
A major challenge in microarray classification is that the number of features is typically orders of magnitude larger than the number of examples. In this paper, we propose a novel feature filter algorithm to select the feature subset with maximal discriminative power and minimal redundancy by solving a quadratic objective function with binary integer constraints. To improve the computational efficiency, the binary integer constraints are relaxed and a low-rank approximation to the quadratic term is applied. The proposed feature selection algorithm was extended to solve multi-task microarray classification problems. We compared the single-task version of the proposed feature selection algorithm with 9 existing feature selection methods on 4 benchmark microarray data sets. The empirical results show that the proposed method achieved the most accurate predictions overall. We also evaluated the multi-task version of the proposed algorithm on 8 multi-task microarray datasets. The multi-task feature selection algorithm resulted in significantly higher accuracy than when using the single-task feature selection methods.
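The relaxed binary-integer objective described above (discriminative power minus pairwise redundancy, under a cardinality constraint) can be approximated with a simple greedy surrogate. This sketch is illustrative only; the paper instead relaxes the binary constraints and applies a low-rank approximation to the quadratic term.

```python
def select_features(relevance, redundancy, k):
    # Greedy surrogate for: maximize  r.x - x'Qx  over binary x with sum(x) = k,
    # where r holds per-feature discriminative power and Q pairwise redundancy.
    selected, candidates = [], set(range(len(relevance)))
    for _ in range(k):
        gain = lambda j: relevance[j] - sum(redundancy[j][i] for i in selected)
        best = max(candidates, key=gain)
        selected.append(best)
        candidates.remove(best)
    return sorted(selected)
```

A feature highly redundant with one already selected is skipped in favor of a less relevant but complementary one, which is the trade-off the quadratic objective encodes.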
Coffino, Jaime A; Hormes, Julia M
2018-06-01
This study aimed to examine the feasibility and initial efficacy of a novel default option intervention targeting nutritional quality of online grocery purchases within the financial constraints of food insecurity. Female undergraduates (n = 59) without eating disorder symptoms or dietary restrictions selected foods online with a budget corresponding to maximum Supplemental Nutrition Assistance Program benefits. Before completing the task again, participants were randomly assigned to receive a $10 incentive for selecting nutritious groceries (n = 17), education about nutrition (n = 24), or a default online shopping cart containing a nutritionally balanced selection of groceries (n = 18) to which they could make changes. Nutritional quality was quantified by using the Thrifty Food Plan Calculator. Compared with the education condition, participants in the default condition selected significantly more whole grains and fruits and foods lower in cholesterol, saturated fats, sodium, and overall calories. There were no statistically significant differences in nutritional outcomes between the incentive condition and the other two groups. Findings provide initial support for the efficacy of a default option in facilitating healthier food choice behaviors within financial constraints. © 2018 The Obesity Society.
A Novel Face-on-Face Contact Method for Nonlinear Solid Mechanics
NASA Astrophysics Data System (ADS)
Wopschall, Steven Robert
The implicit solution to contact problems in nonlinear solid mechanics poses many difficulties. Traditional node-to-segment methods may suffer from locking and experience contact force chatter in the presence of sliding. More recent developments include mortar based methods, which resolve local contact interactions over face-pairs and feature a kinematic constraint in integral form that smoothes contact behavior, especially in the presence of sliding. These methods have been shown to perform well in the presence of geometric nonlinearities and are demonstratively more robust than node-to-segment methods. These methods are typically biased, however, interpolating contact tractions and gap equations on a designated non-mortar face, which leads to an asymmetry in the formulation. Another challenge is constraint enforcement. The general selection of the active set of constraints is fraught with difficulty, often leading to non-physical solutions and easily resulting in missed face-pair interactions. Details on reliable constraint enforcement methods are lacking in the greater contact literature. This work presents an unbiased contact formulation utilizing a median-plane methodology. Up to linear polynomials are used for the discrete pressure representation and integral gap constraints are enforced using a novel subcycling procedure. This procedure reliably determines the active set of contact constraints, leading to physical and kinematically admissible solutions devoid of heuristics and user action. The contact method presented herein successfully solves difficult quasi-static contact problems in the implicit computational setting. These problems feature finite deformations, material nonlinearity, and complex interface geometries, all of which are challenging characteristics for contact implementations and constraint enforcement algorithms.
The subcycling procedure is a key feature of this method, handling active constraint selection for complex interfaces and mesh geometries.
Hadoux, Xavier; Kumar, Dinesh Kant; Sarossy, Marc G; Roger, Jean-Michel; Gorretta, Nathalie
2016-05-19
Visible and near-infrared (Vis-NIR) spectra are generated by the combination of numerous low resolution features. Spectral variables are thus highly correlated, which can cause problems for selecting the most appropriate ones for a given application. Some decomposition bases such as Fourier or wavelet generally help highlighting spectral features that are important, but are by nature constrained to have both positive and negative components. Thus, in addition to complicating the selected features' interpretability, it impedes their use for application-dedicated sensors. In this paper we have proposed a new method for feature selection: Application-Dedicated Selection of Filters (ADSF). This method relaxes the shape constraint by enabling the selection of any type of user defined custom features. By considering only relevant features, based on the underlying nature of the data, high regularization of the final model can be obtained, even in the small sample size context often encountered in spectroscopic applications. For larger scale deployment of application-dedicated sensors, these predefined feature constraints can lead to application specific optical filters, e.g., lowpass, highpass, bandpass or bandstop filters with positive only coefficients. In a similar fashion to Partial Least Squares, ADSF successively selects features using covariance maximization and deflates their influences using orthogonal projection in order to optimally tune the selection to the data with limited redundancy. ADSF is well suited for spectroscopic data as it can deal with large numbers of highly correlated variables in supervised learning, even with many correlated responses. Copyright © 2016 Elsevier B.V. All rights reserved.
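The covariance-maximization-plus-deflation loop that ADSF shares with Partial Least Squares can be sketched as follows. This is a simplified version operating on raw centered features; the function name and toy data are ours, and the real method selects among user-defined filter shapes rather than individual variables.

```python
def adsf_like_select(features, y, k):
    # Successively pick the feature with maximal |covariance| with the response,
    # then deflate the remaining features by orthogonal projection so that
    # redundant (collinear) features are not selected again.
    n = len(y)
    def center(v):
        m = sum(v) / len(v)
        return [x - m for x in v]
    yc = center(y)
    F = [center(f) for f in features]
    selected = []
    for _ in range(k):
        covs = [abs(sum(fi * yi for fi, yi in zip(f, yc))) / n for f in F]
        j = max((i for i in range(len(F)) if i not in selected), key=lambda i: covs[i])
        selected.append(j)
        fj = F[j]
        denom = sum(x * x for x in fj)  # assumes the picked feature is non-constant
        for i in range(len(F)):
            if i in selected:
                continue
            coef = sum(a * b for a, b in zip(F[i], fj)) / denom
            F[i] = [a - coef * b for a, b in zip(F[i], fj)]
    return selected
```

In the test, feature 1 is an exact duplicate of feature 0; after deflation its covariance drops to zero, so the second pick falls on the complementary feature 2 instead.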
Chiu, Yi-Yuan; Lin, Chih-Ta; Huang, Jhang-Wei; Hsu, Kai-Cheng; Tseng, Jen-Hu; You, Syuan-Ren; Yang, Jinn-Moon
2013-01-01
Kinases play central roles in signaling pathways and are promising therapeutic targets for many diseases. Designing selective kinase inhibitors is an emergent and challenging task, because kinases share an evolutionary conserved ATP-binding site. KIDFamMap (http://gemdock.life.nctu.edu.tw/KIDFamMap/) is the first database to explore kinase-inhibitor families (KIFs) and kinase-inhibitor-disease (KID) relationships for kinase inhibitor selectivity and mechanisms. This database includes 1208 KIFs, 962 KIDs, 55 603 kinase-inhibitor interactions (KIIs), 35 788 kinase inhibitors, 399 human protein kinases, 339 diseases and 638 disease allelic variants. Here, a KIF can be defined as follows: (i) the kinases in the KIF with significant sequence similarity, (ii) the inhibitors in the KIF with significant topology similarity and (iii) the KIIs in the KIF with significant interaction similarity. The KIIs within a KIF are often conserved on some consensus KIDFamMap anchors, which represent conserved interactions between the kinase subsites and consensus moieties of their inhibitors. Our experimental results reveal that the members of a KIF often possess similar inhibition profiles. The KIDFamMap anchors can reflect kinase conformations types, kinase functions and kinase inhibitor selectivity. We believe that KIDFamMap provides biological insights into kinase inhibitor selectivity and binding mechanisms. PMID:23193279
A transformation method for constrained-function minimization
NASA Technical Reports Server (NTRS)
Park, S. K.
1975-01-01
A direct method for constrained-function minimization is discussed. The method involves the construction of an appropriate function mapping all of one finite dimensional space onto the region defined by the constraints. Functions which produce such a transformation are constructed for a variety of constraint regions including, for example, those arising from linear and quadratic inequalities and equalities. In addition, the computational performance of this method is studied in the situation where the Davidon-Fletcher-Powell algorithm is used to solve the resulting unconstrained problem. Good performance is demonstrated for 19 test problems by achieving rapid convergence to a solution from several widely separated starting points.
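The core idea, constructing a map from all of an unconstrained space onto the feasible region and minimizing the composed function, can be shown for a single box constraint. In this hedged sketch the sin^2 transform is one standard choice for an interval constraint, and plain gradient descent with a numeric derivative stands in for the Davidon-Fletcher-Powell algorithm used in the paper.

```python
import math

def box_transform(u, lo, hi):
    # maps all of R onto [lo, hi], so any u yields a feasible x
    return lo + (hi - lo) * math.sin(u) ** 2

def minimize_on_box(f, lo, hi, u0=0.3, lr=0.05, steps=2000, h=1e-6):
    # unconstrained gradient descent in u-space; the iterate is always feasible
    u = u0
    for _ in range(steps):
        g = (f(box_transform(u + h, lo, hi)) - f(box_transform(u - h, lo, hi))) / (2 * h)
        u -= lr * g
    return box_transform(u, lo, hi)
```

Minimizing (x - 2)^2 over [0, 1] correctly lands on the boundary x = 1, even though the unconstrained minimizer x = 2 lies outside the region.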
A Ground Flash Fraction Retrieval Algorithm for GLM
NASA Technical Reports Server (NTRS)
Koshak, William J.
2010-01-01
A Bayesian inversion method is introduced for retrieving the fraction of ground flashes in a set of N lightning observed by a satellite lightning imager (such as the Geostationary Lightning Mapper, GLM). An exponential model is applied as a physically reasonable constraint to describe the measured lightning optical parameter distributions. Population statistics (i.e., the mean and variance) are invoked to add additional constraints to the retrieval process. The Maximum A Posteriori (MAP) solution is employed. The approach is tested by performing simulated retrievals, and retrieval error statistics are provided. The approach is feasible for N greater than 2000, and retrieval errors decrease as N is increased.
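A stripped-down version of such a retrieval treats the measured optical parameter as a two-component exponential mixture and maximizes the flat-prior posterior for the ground-flash fraction on a grid. All parameter values here are invented for illustration; the actual GLM algorithm adds the population-statistics constraints that this sketch omits.

```python
import math

def map_ground_fraction(obs, mu_ground, mu_cloud, grid=201):
    # Grid-search MAP estimate of the ground-flash fraction alpha under a
    # two-component exponential mixture and a flat prior (so MAP = ML here).
    def loglike(alpha):
        ll = 0.0
        for x in obs:
            p = (alpha / mu_ground * math.exp(-x / mu_ground)
                 + (1 - alpha) / mu_cloud * math.exp(-x / mu_cloud))
            ll += math.log(p)
        return ll
    alphas = [i / (grid - 1) for i in range(grid)]
    return max(alphas, key=loglike)
```

On simulated data with a true fraction of 0.3 and well-separated component means, the grid MAP recovers the fraction closely, consistent with the abstract's claim of decreasing retrieval error for large N.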
Martin, Sébastien; Troccaz, Jocelyne; Daanenc, Vincent
2010-04-01
The authors present a fully automatic algorithm for the segmentation of the prostate in three-dimensional magnetic resonance (MR) images. The approach requires the use of an anatomical atlas which is built by computing transformation fields mapping a set of manually segmented images to a common reference. These transformation fields are then applied to the manually segmented structures of the training set in order to get a probabilistic map on the atlas. The segmentation is then realized through a two stage procedure. In the first stage, the processed image is registered to the probabilistic atlas. Subsequently, a probabilistic segmentation is obtained by mapping the probabilistic map of the atlas to the patient's anatomy. In the second stage, a deformable surface evolves toward the prostate boundaries by merging information coming from the probabilistic segmentation, an image feature model and a statistical shape model. During the evolution of the surface, the probabilistic segmentation allows the introduction of a spatial constraint that prevents the deformable surface from leaking in an unlikely configuration. The proposed method is evaluated on 36 exams that were manually segmented by a single expert. A median Dice similarity coefficient of 0.86 and an average surface error of 2.41 mm are achieved. By merging prior knowledge, the presented method achieves a robust and completely automatic segmentation of the prostate in MR images. Results show that the use of a spatial constraint is useful to increase the robustness of the deformable model comparatively to a deformable surface that is only driven by an image appearance model.
Selecting PPE for the Workplace (Personal Protective Equipment for the Eyes and Face)
Lecours, Vincent; Brown, Craig J; Devillers, Rodolphe; Lucieer, Vanessa L; Edinger, Evan N
2016-01-01
Selecting appropriate environmental variables is a key step in ecology. Terrain attributes (e.g. slope, rugosity) are routinely used as abiotic surrogates of species distribution and to produce habitat maps that can be used in decision-making for conservation or management. Selecting appropriate terrain attributes for ecological studies may be a challenging process that can lead users to select a subjective, potentially sub-optimal combination of attributes for their applications. The objective of this paper is to assess the impacts of subjectively selecting terrain attributes for ecological applications by comparing the performance of different combinations of terrain attributes in the production of habitat maps and species distribution models. Seven different selections of terrain attributes, alone or in combination with other environmental variables, were used to map benthic habitats of German Bank (off Nova Scotia, Canada). 29 maps of potential habitats based on unsupervised classifications of biophysical characteristics of German Bank were produced, and 29 species distribution models of sea scallops were generated using MaxEnt. The performances of the 58 maps were quantified and compared to evaluate the effectiveness of the various combinations of environmental variables. One of the combinations of terrain attributes (recommended in a related study, and including a measure of relative position, slope, two measures of orientation, topographic mean and a measure of rugosity) yielded better results than the other selections for both methodologies, confirming that they together best describe terrain properties. Important differences in performance (up to 47% in accuracy measurement) and spatial outputs (up to 58% in spatial distribution of habitats) highlighted the importance of carefully selecting variables for ecological applications.
This paper demonstrates that making a subjective choice of variables may reduce map accuracy and produce maps that do not adequately represent habitats and species distributions, thus having important implications when these maps are used for decision-making.
The artificial-free technique along the objective direction for the simplex algorithm
NASA Astrophysics Data System (ADS)
Boonperm, Aua-aree; Sinapiromsaran, Krung
2014-03-01
The simplex algorithm is a popular algorithm for solving linear programming problems. If the origin satisfies all constraints then the simplex algorithm can be started. Otherwise, artificial variables must be introduced to start the simplex algorithm. If we can start the simplex algorithm without using artificial variables then the simplex iterations will require less time. In this paper, we present an artificial-free technique for the simplex algorithm that maps the problem into the objective plane and splits the constraints into three groups. In the objective plane, one of the variables which has a nonzero coefficient in the objective function is fixed in terms of another variable. The constraints can then be split into three groups: the positive coefficient group, the negative coefficient group and the zero coefficient group. Along the objective direction, some constraints from the positive coefficient group will form the optimal solution. If the positive coefficient group is nonempty, the algorithm starts by relaxing the constraints from the negative coefficient group and the zero coefficient group. We guarantee that the feasible region obtained from the positive coefficient group is nonempty. The transformed problem is solved using the simplex algorithm. Constraints from the negative coefficient group and the zero coefficient group are then added to the solved problem, and the dual simplex method is used to determine the new optimal solution. An example shows the effectiveness of our algorithm.
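A simplified reading of the splitting step can be sketched by grouping constraint rows by the sign of their component along the objective direction (the paper performs the split after fixing a variable in the objective plane, which this toy version skips).

```python
def split_constraints(A, c, tol=1e-12):
    # Partition constraint rows a_i by the sign of a_i . c, the component
    # of each constraint along the objective direction c.
    pos, neg, zero = [], [], []
    for i, a in enumerate(A):
        d = sum(ai * ci for ai, ci in zip(a, c))
        (pos if d > tol else neg if d < -tol else zero).append(i)
    return pos, neg, zero
```

The positive group would seed the initial relaxed problem; the other two groups are reintroduced afterwards, as the abstract describes.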
Gemperline, Paul J; Cash, Eric
2003-08-15
A new algorithm for self-modeling curve resolution (SMCR) that yields improved results by incorporating soft constraints is described. The method uses least squares penalty functions to implement constraints in an alternating least squares algorithm, including nonnegativity, unimodality, equality, and closure constraints. By using least squares penalty functions, soft constraints are formulated rather than hard constraints. Significant benefits are obtained using soft constraints, especially in the form of fewer distortions due to noise in resolved profiles. Soft equality constraints can also be used to introduce incomplete or partial reference information into SMCR solutions. Four different examples demonstrating application of the new method are presented, including resolution of overlapped HPLC-DAD peaks, flow injection analysis data, and batch reaction data measured by UV/visible and near-infrared spectroscopy (NIR). Each example was selected to show one aspect of the significant advantages of soft constraints over traditionally used hard constraints. The introduction of incomplete or partial reference information into self-modeling curve resolution models is also demonstrated. The method offers a substantial improvement in the ability to resolve time-dependent concentration profiles from mixture spectra recorded as a function of time.
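The penalty-function idea can be illustrated on a single profile: a least-squares fit in which negative coefficients are discouraged by a quadratic penalty instead of being clipped to zero, so the constraint stays soft. This is a sketch with plain gradient descent, not the paper's alternating least squares machinery, and the toy design matrix is ours.

```python
def soft_nonneg_lstsq(X, y, lam=100.0, lr=0.005, steps=5000):
    # minimize ||X c - y||^2 + lam * sum(min(c_j, 0)^2): the penalty only
    # activates for negative coefficients, so small violations remain possible.
    n, d = len(X), len(X[0])
    c = [0.0] * d
    for _ in range(steps):
        r = [sum(X[i][j] * c[j] for j in range(d)) - y[i] for i in range(n)]
        for j in range(d):
            g = 2 * sum(r[i] * X[i][j] for i in range(n))
            if c[j] < 0:
                g += 2 * lam * c[j]  # gradient of the soft penalty
            c[j] -= lr * g
    return c
```

In the test, the unpenalized least-squares solution has c[1] = -1/3; the soft constraint pulls it to a small residual negative value (about -0.005) rather than exactly zero, which is the characteristic difference from a hard constraint.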
Near-Seafloor Magnetic Exploration of Submarine Hydrothermal Systems in the Kermadec Arc
NASA Astrophysics Data System (ADS)
Caratori Tontini, F.; de Ronde, C. E. J.; Tivey, M.; Kinsey, J. C.
2014-12-01
Magnetic data can provide important information about hydrothermal systems because hydrothermal alteration can drastically reduce the magnetization of the host volcanic rocks. Near-seafloor data (≤70 m altitude) are required to map hydrothermal systems in detail; Autonomous Underwater Vehicles (AUVs) are the ideal platform to provide this level of resolution. Here, we show the results of high-resolution magnetic surveys by the ABE and Sentry AUVs for selected submarine volcanoes of the Kermadec arc. 3-D magnetization models derived from the inversion of magnetic data, when combined with high resolution seafloor bathymetry derived from multibeam surveys, provide important constraints on the subseafloor geometry of hydrothermal upflow zones and the structural control on the development of seafloor hydrothermal vent sites as well as being a tool for the discovery of previously unknown hydrothermal sites. Significant differences exist between the magnetic expressions of hydrothermal sites at caldera volcanoes ("donut" pattern) and cones ("Swiss cheese" pattern), respectively. Subseafloor 3-D magnetization models also highlight structural differences between focused and diffuse vent sites.
Resolving Supercritical Orion Cores
NASA Astrophysics Data System (ADS)
Li, Di; Chapman, N.; Goldsmith, P.; Velusamy, T.
2009-01-01
The theoretical framework for high mass star formation (HMSF) is unclear. Observations reveal a seeming dichotomy between high- and low-mass star formation, with HMSF occurring only in Giant Molecular Clouds (GMC), mostly in clusters, and with higher star formation efficiencies than low-mass star formation. One crucial constraint on any theoretical model is the dynamical state of massive cores, in particular, whether a massive core is in supercritical collapse. Based on the mass-size relation of dust emission, we select likely unstable targets from a sample of massive cores (Li et al. 2007 ApJ 655, 351) in the nearest GMC, Orion. We have obtained N2H+ (1-0) maps using CARMA with resolution (2.5", 0.006 pc) significantly better than existing observations. We present observational and modeling results for ORI22. By revealing the dynamic structure down to the Jeans scale, the CARMA data confirm the dominance of gravity over turbulence in this core. This work was performed by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
3-D model-based Bayesian classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Soenneland, L.; Tenneboe, P.; Gehrmann, T.
1994-12-31
The challenging task of the interpreter is to integrate different pieces of information and combine them into an earth model. The sophistication level of this earth model might vary from the simplest geometrical description to the most complex set of reservoir parameters related to the geometrical description. Obviously the sophistication level also depends on the completeness of the available information. The authors describe the interpreter's task as a mapping between the observation space and the model space. The information available to the interpreter exists in observation space and the task is to infer a model in model space. It is well known that this inversion problem is non-unique. Therefore any attempt to find a solution depends on constraints being added in some manner. The solution will obviously depend on which constraints are introduced and it would be desirable to allow the interpreter to modify the constraints in a problem-dependent manner. The authors present a probabilistic framework that gives the interpreter the tools to integrate the different types of information and produce constrained solutions. The constraints can be adapted to the problem at hand.
A proto-architecture for innate directionally selective visual maps.
Adams, Samantha V; Harris, Chris M
2014-01-01
Self-organizing artificial neural networks are a popular tool for studying visual system development, in particular the cortical feature maps present in real systems that represent properties such as ocular dominance (OD), orientation-selectivity (OR) and direction selectivity (DS). They are also potentially useful in artificial systems, for example robotics, where the ability to extract and learn features from the environment in an unsupervised way is important. In this computational study we explore a DS map that is already latent in a simple artificial network. This latent selectivity arises purely from the cortical architecture without any explicit coding for DS and prior to any self-organising process facilitated by spontaneous activity or training. We find DS maps with local patchy regions that exhibit features similar to maps derived experimentally and from previous modeling studies. We explore the consequences of changes to the afferent and lateral connectivity to establish the key features of this proto-architecture that support DS.
2011-01-01
Background Pigeonpea [Cajanus cajan (L.) Millsp.] is an important legume crop of rainfed agriculture. Despite concerted research efforts directed at pigeonpea improvement, the stagnated productivity of pigeonpea during the last several decades may be attributed to the prevalence of various biotic and abiotic constraints, and the situation is exacerbated by the availability of inadequate genomic resources to undertake any molecular breeding programme for accelerated crop improvement. With the objective of enhancing genomic resources for pigeonpea, this study reports, for the first time, large-scale development of SSR markers from BAC-end sequences and their subsequent use for genetic mapping and hybridity testing in pigeonpea. Results A set of 88,860 BAC (bacterial artificial chromosome)-end sequences (BESs) were generated after constructing two BAC libraries by using HindIII (34,560 clones) and BamHI (34,560 clones) restriction enzymes. Clustering based on sequence identity of BESs yielded a set of >52K non-redundant sequences, comprising 35 Mbp or >4% of the pigeonpea genome. These sequences were analyzed to develop annotation lists and subdivide the BESs into genome fractions (e.g., genes, retroelements, transposons and non-annotated sequences). Parallel analysis of BESs for microsatellites or simple sequence repeats (SSRs) identified 18,149 SSRs, from which a set of 6,212 SSRs were selected for further analysis. A total of 3,072 novel SSR primer pairs were synthesized and tested for length polymorphism on a set of 22 parental genotypes of 13 mapping populations segregating for traits of interest. In total, we identified 842 polymorphic SSR markers that will have utility in pigeonpea improvement. Based on these markers, the first SSR-based genetic map comprising 239 loci was developed for this previously uncharacterized genome.
Utility of developed SSR markers was also demonstrated by identifying a set of 42 markers each for two hybrids (ICPH 2671 and ICPH 2438) for genetic purity assessment in commercial hybrid breeding programme. Conclusion In summary, while BAC libraries and BESs should be useful for genomics studies, BES-SSR markers, and the genetic map should be very useful for linking the genetic map with a future physical map as well as for molecular breeding in pigeonpea. PMID:21447154
Trajectory Design for the Microwave Anisotropy Probe (MAP)
NASA Technical Reports Server (NTRS)
Newman, Lauri Kraft; Rohrbaugh, David; Bauer, Frank H. (Technical Monitor)
2001-01-01
The Microwave Anisotropy Probe (MAP) is a Medium Class Explorer (MIDEX) mission produced in partnership between Goddard Space Flight Center (GSFC) and Princeton University. The goal of the MAP mission is to produce an accurate full-sky map of the cosmic microwave background temperature fluctuations (anisotropy). The mission orbit is a Lissajous orbit about the L2 Sun-Earth Lagrange point. The trajectory design for MAP is complex, having many requirements that must be met, including shadow avoidance, sun angle constraints, Lissajous size and shape characteristics, and a limited Delta-V budget. In order to find a trajectory that met the design requirements for the entire 4-year mission lifetime goal, GSFC Flight Dynamics engineers performed many analyses, the results of which are presented herein. The paper discusses the preliminary trade-offs to establish a baseline trajectory, analysis to establish the nominal daily trajectory, and the launch window determination to widen the opportunity from instantaneous to several minutes for each launch date.
Multi-instrument data analysis for interpretation of the Martian North polar layered deposits
NASA Astrophysics Data System (ADS)
Mirino, Melissa; Sefton-Nash, Elliot; Witasse, Olivier; Frigeri, Alessandro
2017-04-01
The Martian polar caps have engendered substantial study due to their spiral morphology, layered structure and the seasonal variability in thickness of the uppermost H2O and CO2 ice layers. We demonstrate a multi-instrument study of exposed and buried north polar layers using data from ESA's Mars Express (MEx) and NASA's Mars Reconnaissance Orbiter (MRO) missions. We perform analysis of high resolution images from MRO's HiRISE, which provide textural and morphological information about surface features larger than 0.3 m, with NIR hyperspectral data from MRO CRISM, which allows study of surface mineralogy at a maximum resolution of 18 m/pixel. Stereo-derived topography is provided by MEx's HRSC. Together with these surficial observations we interpret radargrams from MRO SHARAD to obtain information about layered structures at a horizontal resolution between 0.3 and 3 kilometers and a free-space vertical resolution of 15 meters (the vertical resolution depends on the dielectric properties of the medium). This combination of datasets allows us to attempt to correlate polar layering, made visible by dielectric interfaces between beds, with surface mineralogies and structures outcropping at specific stratigraphic levels. We analyse two opposite areas of the north polar cap with the intention of characterising, in multiple datasets, each geologic unit identified in the north polar cap's stratigraphy (mapped by e.g. [1]). We selected deposits observed in Chasma Boreale and Olympia Cavi because these areas allow us to observe and map strata at opposing sides of the north polar cap. Using the CRISM Analysis Tool and spectral summary parameters [2] we map the spectral characteristics of the two areas that show H2O and CO2 ice layering exposed on polar scarps. Through spatial registration in a GIS with HRSC topography and HiRISE imagery we assess the mineralogical and morphological characteristics of exposed layers.
In order to constrain the cross section between the two selected localities we choose SHARAD radargrams that most closely align with the transect between the sites. We interpret sub-horizontal features to be due to dielectric interfaces involving the deposits analysed. Our interpretation of radargrams in the context of compositional and structural constraints, from areas where pertinent beds outcrop, illustrates how joint analysis of surface and sub-surface data can benefit geological interpretation of planetary surfaces and subsurfaces. This technique applied to Mars' north polar layered deposits may offer additional constraint on the morphology of internal layering resulting from seasonal deposition/sublimation cycles over varying obliquity [3]. References: [1] Tanaka et al. (2008), Icarus, 196, p. 318-358, doi:10.1016/j.icarus.2008.01.021. [2] Viviano-Beck et al. (2014), J. Geophys. Res. Planets, 119, p. 1403-1431, doi:10.1002/2014JE004627. [3] Putzig et al. (2009), Icarus, 204, p. 443-457, doi:10.1016/j.icarus.2009.07.034.
Harpur, Brock A; Zayed, Amro
2013-07-01
The genomes of eusocial insects have a reduced complement of immune genes - an unusual finding considering that sociality provides ideal conditions for disease transmission. The following three hypotheses have been invoked to explain this finding: 1) social insects are attacked by fewer pathogens, 2) social insects have effective behavioral mechanisms for combating pathogens, or 3) social insects have novel molecular mechanisms for combating pathogens. At the molecular level, these hypotheses predict that canonical innate immune pathways experience a relaxation of selective constraint. A recent study of several innate immune genes in ants and bees showed a pattern of accelerated amino acid evolution, which is consistent with either positive selection or a relaxation of constraint. We studied the population genetics of innate immune genes in the honey bee Apis mellifera by partially sequencing 13 genes from the bee's Toll pathway (∼10.5 kb) and 20 randomly chosen genes (∼16.5 kb) sequenced in 43 diploid workers. Relative to the random gene set, Toll pathway genes had significantly higher levels of amino acid replacement mutations segregating within A. mellifera and fixed between A. mellifera and A. cerana. However, levels of diversity and divergence at synonymous sites did not differ between the two gene sets. Although we detect strong signs of balancing selection on the pathogen recognition gene pgrp-sa, many of the genes in the Toll pathway show signatures of relaxed selective constraint. These results are consistent with the reduced complement of innate immune genes found in social insects and support the hypothesis that some aspect of eusociality renders canonical innate immunity superfluous.
Milky Way Tomography IV: Dissecting Dust
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Michael; /Washington U., Seattle, Astron. Dept. /Rutgers U., Piscataway; Ivezic, Zeljko
2011-11-01
We use SDSS photometry of 73 million stars to simultaneously obtain the best-fit main-sequence stellar energy distribution (SED) and the amount of dust extinction along the line of sight towards each star. Using a subsample of 23 million stars with 2MASS photometry, whose addition enables more robust results, we show that SDSS photometry alone is sufficient to break degeneracies between intrinsic stellar color and dust amount when the shape of the extinction curve is fixed. When using both SDSS and 2MASS photometry, the ratio of total to selective absorption, R_V, can be determined with an uncertainty of about 0.1 for most stars in high-extinction regions. These fits enable detailed studies of the dust properties and its spatial distribution, and of the stellar spatial distribution at low Galactic latitudes (|b| < 30°). Our results are in good agreement with the extinction normalization given by the Schlegel et al. (1998, SFD) dust maps at high northern Galactic latitudes, but indicate that the SFD extinction map appears to be consistently overestimated by about 20% in the southern sky, in agreement with a recent study by Schlafly et al. (2010). The constraints on the shape of the dust extinction curve across the SDSS and 2MASS bandpasses disfavor the reddening law of O'Donnell (1994), but support the models by Fitzpatrick (1999) and Cardelli et al. (1989). For the latter, we find the ratio of total to selective absorption to be R_V = 3.0 ± 0.1 (random) ± 0.1 (systematic) over most of the high-latitude sky. At low Galactic latitudes (|b| < 5°), we demonstrate that the SFD map cannot be reliably used to correct for extinction because most stars are embedded in dust, rather than behind it, as is the case at high Galactic latitudes. We analyze three-dimensional maps of the best-fit R_V and find that R_V = 3.1 cannot be ruled out in any of the ten SEGUE stripes at a precision level of ~0.1-0.2.
Our best estimate for the intrinsic scatter of R_V in the regions probed by SEGUE stripes is ~0.2. We introduce a method for efficient selection of candidate red giant stars in the disk, dubbed the 'dusty parallax relation', which utilizes a correlation between distance and the extinction along the line of sight. We make these best-fit parameters, as well as all the input SDSS and 2MASS data, publicly available in a user-friendly format. These data can be used for studies of the stellar number density distribution, the distribution of dust properties, for selecting sources whose SED differs from the SEDs of high-latitude main sequence stars, and for estimating distances to dust clouds and, in turn, to molecular gas clouds.
THE MILKY WAY TOMOGRAPHY WITH SLOAN DIGITAL SKY SURVEY. IV. DISSECTING DUST
DOE Office of Scientific and Technical Information (OSTI.GOV)
Berry, Michael; Ivezic, Zeljko; Brooks, Keira J.
2012-10-01
We use Sloan Digital Sky Survey (SDSS) photometry of 73 million stars to simultaneously constrain the best-fit main-sequence stellar spectral energy distribution (SED) and the amount of dust extinction along the line of sight toward each star. Using a subsample of 23 million stars with Two Micron All Sky Survey (2MASS) photometry, whose addition enables more robust results, we show that SDSS photometry alone is sufficient to break degeneracies between intrinsic stellar color and dust amount when the shape of the extinction curve is fixed. When using both SDSS and 2MASS photometry, the ratio of total to selective absorption, R_V, can be determined with an uncertainty of about 0.1 for most stars in high-extinction regions. These fits enable detailed studies of the dust properties and its spatial distribution, and of the stellar spatial distribution at low Galactic latitudes (|b| < 30°). Our results are in good agreement with the extinction normalization given by the Schlegel et al. (SFD) dust maps at high northern Galactic latitudes, but indicate that the SFD extinction map appears to be consistently overestimated by about 20% in the southern sky, in agreement with a recent study by Schlafly et al. The constraints on the shape of the dust extinction curve across the SDSS and 2MASS bandpasses disfavor the reddening law of O'Donnell, but support the models by Fitzpatrick and Cardelli et al. For the latter, we find the ratio of total to selective absorption to be R_V = 3.0 ± 0.1 (random) ± 0.1 (systematic) over most of the high-latitude sky. At low Galactic latitudes (|b| < 5°), we demonstrate that the SFD map cannot be reliably used to correct for extinction because most stars are embedded in dust, rather than behind it, as is the case at high Galactic latitudes.
We analyze three-dimensional maps of the best-fit R_V and find that R_V = 3.1 cannot be ruled out in any of the 10 SEGUE stripes at a precision level of ~0.1-0.2. Our best estimate for the intrinsic scatter of R_V in the regions probed by SEGUE stripes is ~0.2. We introduce a method for efficient selection of candidate red giant stars in the disk, dubbed the 'dusty parallax relation', which utilizes a correlation between distance and the extinction along the line of sight. We make these best-fit parameters, as well as all the input SDSS and 2MASS data, publicly available in a user-friendly format. These data can be used for studies of the stellar number density distribution, the distribution of dust properties, for selecting sources whose SED differs from the SEDs of high-latitude main-sequence stars, and for estimating distances to dust clouds and, in turn, to molecular gas clouds.
Designing a space-based galaxy redshift survey to probe dark energy
NASA Astrophysics Data System (ADS)
Wang, Yun; Percival, Will; Cimatti, Andrea; Mukherjee, Pia; Guzzo, Luigi; Baugh, Carlton M.; Carbone, Carmelita; Franzetti, Paolo; Garilli, Bianca; Geach, James E.; Lacey, Cedric G.; Majerotto, Elisabetta; Orsi, Alvaro; Rosati, Piero; Samushia, Lado; Zamorani, Giovanni
2010-12-01
A space-based galaxy redshift survey would have enormous power in constraining dark energy and testing general relativity, provided that its parameters are suitably optimized. We study viable space-based galaxy redshift surveys, exploring the dependence of the Dark Energy Task Force (DETF) figure-of-merit (FoM) on redshift accuracy, redshift range, survey area, target selection and forecast method. Fitting formulae are provided for convenience. We also consider the dependence on the information used: the full galaxy power spectrum P(k), P(k) marginalized over its shape, or just the Baryon Acoustic Oscillations (BAO). We find that the inclusion of growth rate information (extracted using redshift space distortion and galaxy clustering amplitude measurements) leads to a factor of ~3 improvement in the FoM, assuming general relativity is not modified. This inclusion partially compensates for the loss of information when only the BAO are used to give geometrical constraints, rather than using the full P(k) as a standard ruler. We find that a space-based galaxy redshift survey covering ~20,000 deg² with σz/(1 + z) <= 0.001 exploits a redshift range that is only easily accessible from space, extends to sufficiently low redshifts to allow both a vast 3D map of the universe using a single tracer population and overlap with ground-based surveys to enable robust modelling of systematic effects. We argue that these parameters are close to their optimal values given current instrumental and practical constraints.
Being and Becoming a University Teacher
ERIC Educational Resources Information Center
McMillan, Wendy; Gordon, Natalie
2017-01-01
This study examined how one academic framed the enablements and constraints to her project of being and becoming an academic. Complexity facilitated reflection in that it provided a visual representation of data, which was used to generate a concept map that represented all the component parts of her landscape as equal. Five spaces with…
#WomenInSTEM: Making a Cleaner Future
Lindgren, Mallory
2018-01-16
Mallory Lindgren uses geographic information systems or GIS - a mapping software that she compares to "a real-life videogame" - to assess how various constraints, such as wetlands or an airport, may interact with potential renewable energy projects. Her aim is to site and design projects that can effectively co-exist with the surrounding environment.
Frequency of Input Effects on Word Comprehension of Children with Specific Language Impairment.
ERIC Educational Resources Information Center
Rice, Mabel L.; And Others
1994-01-01
This study compared factors contributing to Quick Incidental Learning of new vocabulary by 50 five-year-olds with specific language impairment (SLI) and two comparison groups. Although the SLI children exhibited a robust representational mapping ability, performance was modulated by a minimum input constraint and apparent problems with storage into…
Duckworth, Renée A
2015-12-01
Personality traits are behaviors that show limited flexibility over time and across contexts, and thus understanding their origin requires an understanding of what limits behavioral flexibility. Here, I suggest that insight into the evolutionary origin of personality traits requires determining the relative importance of selection and constraint in producing limits to behavioral flexibility. Natural selection as the primary cause of limits to behavioral flexibility assumes that the default state of behavior is one of high flexibility and predicts that personality variation arises through evolution of buffering mechanisms to stabilize behavioral expression, whereas the constraint hypothesis assumes that the default state is one of limited flexibility and predicts that the neuroendocrine components that underlie personality variation are those most constrained in flexibility. Using recent work on the neurobiology of sensitive periods and maternal programming of offspring behavior, I show that some of the most stable aspects of the neuroendocrine system are structural components and maternally induced epigenetic effects. Evidence of numerous constraints to changes in structural features of the neuroendocrine system and far fewer constraints to flexibility of epigenetic systems suggests that structural constraints play a primary role in the origin of behavioral stability and that epigenetic programming may be more important in generating adaptive variation among individuals. © 2015 New York Academy of Sciences.
Efficient QoS-aware Service Composition
NASA Astrophysics Data System (ADS)
Alrifai, Mohammad; Risse, Thomas
Web service composition requests are usually combined with end-to-end QoS requirements, which are specified in terms of non-functional properties (e.g. response time, throughput and price). The goal of QoS-aware service composition is to find the best combination of services such that their aggregated QoS values meet these end-to-end requirements. Local selection techniques are very efficient but fall short in handling global QoS constraints. Global optimization techniques, on the other hand, can handle global constraints, but their poor performance renders them inappropriate for applications with dynamic and real-time requirements. In this paper we address this problem and propose a solution that combines global optimization with local selection techniques to achieve better performance. The proposed solution consists of two steps: first, we use mixed integer linear programming (MILP) to find the optimal decomposition of global QoS constraints into local constraints. Second, we use local search to find the best web services that satisfy these local constraints. Unlike existing MILP-based global planning solutions, the size of the MILP model in our case is much smaller and independent of the number of available services, yielding faster computation and better scalability. Preliminary experiments have been conducted to evaluate the performance of the proposed solution.
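The two-step scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the task names, candidate services and QoS numbers are invented, and the MILP decomposition step is replaced by a naive even split of the global bound (the paper instead solves a small MILP over discretized local QoS levels).

```python
# Hypothetical candidate services per task: (name, response_time_ms, price).
SERVICES = {
    "payment":  [("p1", 200, 30), ("p2", 120, 55)],
    "shipping": [("s1", 300, 20), ("s2", 150, 45)],
}
GLOBAL_RT = 400  # end-to-end response-time bound (ms)

def decompose(global_bound, tasks):
    """Stand-in for the MILP step: split the global bound evenly per task."""
    return {t: global_bound / len(tasks) for t in tasks}

def local_select(services, local_bounds):
    """Per task, pick the cheapest service meeting its local bound."""
    plan = {}
    for task, candidates in services.items():
        feasible = [s for s in candidates if s[1] <= local_bounds[task]]
        if not feasible:
            return None  # this split is infeasible; the MILP would avoid it
        plan[task] = min(feasible, key=lambda s: s[2])
    return plan

plan = local_select(SERVICES, decompose(GLOBAL_RT, SERVICES))
```

With these invented numbers the plan selects the cheapest feasible service per task while the summed response time stays within the global bound, which is exactly the property the MILP-derived local constraints are meant to guarantee.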
Regression Model Optimization for the Analysis of Experimental Data
NASA Technical Reports Server (NTRS)
Ulbrich, N.
2009-01-01
A candidate math model search algorithm was developed at Ames Research Center that determines a recommended math model for the multivariate regression analysis of experimental data. The search algorithm is applicable to classical regression analysis problems as well as wind tunnel strain gage balance calibration analysis applications. The algorithm compares the predictive capability of different regression models using the standard deviation of the PRESS residuals of the responses as a search metric. This search metric is minimized during the search. Singular value decomposition is used during the search to reject math models that lead to a singular solution of the regression analysis problem. Two threshold-dependent constraints are also applied. The first constraint rejects math models with insignificant terms. The second constraint rejects math models with near-linear dependencies between terms. The math term hierarchy rule may also be applied as an optional constraint during or after the candidate math model search. The final term selection of the recommended math model depends on the regressor and response values of the data set, the user's function class combination choice, the user's constraint selections, and the result of the search metric minimization. A frequently used regression analysis example from the literature is used to illustrate the application of the search algorithm to experimental data.
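The core of such a search - ranking candidate models by the standard deviation of their PRESS (leave-one-out) residuals and rejecting rank-deficient designs via an SVD-based check - can be sketched as below. This is a generic illustration of the metric, not Ulbrich's code; the candidate models here are simple sets of polynomial powers chosen for the example.

```python
import numpy as np

def press_std(X, y):
    """Std. dev. of PRESS residuals; inf if the design matrix is singular."""
    if np.linalg.matrix_rank(X) < X.shape[1]:   # SVD-based singularity check
        return np.inf
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    h = np.diag(X @ np.linalg.pinv(X.T @ X) @ X.T)  # hat-matrix diagonal
    return np.std(resid / (1.0 - h), ddof=1)        # leave-one-out residuals

def search(candidates, x, y):
    """Return the candidate term set (powers of x) minimizing the metric."""
    scores = {terms: press_std(np.column_stack([x**p for p in terms]), y)
              for terms in candidates}
    return min(scores, key=scores.get)
```

For data generated by a straight line, any candidate containing the linear term beats an intercept-only model on this metric, and a duplicated regressor column is rejected outright.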
Perceived constraints to art museum attendance
Jinhee Jun; Gerard Kyle; Joseph T. O'Leary
2007-01-01
We explored selected socio-demographic factors that influence the perception of constraints to art museum attendance among a sample of interested individuals who were currently not enjoying art museum visitation. Data from the Survey of Public Participation in the Arts (SPPA), a nationwide survey were used for this study. Using multivariate analysis of variance, we...
Economopoulou, M A; Economopoulou, A A; Economopoulos, A P
2013-11-01
The paper describes a software system capable of formulating alternative optimal Municipal Solid Waste (MSW) management plans, each of which meets a set of constraints that may reflect selected objections and/or wishes of local communities. The objective function to be minimized in each plan is the sum of the annualized capital investment and annual operating cost of all transportation, treatment and final disposal operations involved, taking into consideration the possible income from the sale of products and any other financial incentives or disincentives that may exist. For each plan formulated, the system generates several reports that define the plan, analyze its cost elements and yield an indicative profile of selected types of installations, as well as data files that facilitate the geographic representation of the optimal solution in maps through the use of GIS. A number of these reports compare the technical and economic data from all scenarios considered at the study area, municipality and installation level, constituting in effect a sensitivity analysis. The generation of alternative plans offers local authorities the opportunity of choice, and the results of the sensitivity analysis allow them to choose wisely and with consensus. The paper also presents an application of this software system in the capital Region of Attica in Greece, for the purpose of developing an optimal waste transportation system in line with its approved waste management plan.
The formulated plan was able to: (a) serve 113 Municipalities and Communities that generate nearly 2 million t/y of commingled MSW with distinctly different waste collection patterns, (b) take into consideration several existing waste transfer stations (WTS) and optimize their use within the overall plan, (c) select the most appropriate sites among the potentially suitable (new and in-use) ones, (d) generate the optimal profile of each WTS proposed, and (e) perform sensitivity analysis so as to define the impact of selected sets of constraints (limitations in the availability of sites and in the capacity of their installations) on the design and cost of the ensuing optimal waste transfer system. The results show that optimal planning offers significant economic savings to municipalities, while reducing at the same time the present levels of traffic, fuel consumption and air emissions in the congested Athens basin. Copyright © 2013 Elsevier Ltd. All rights reserved.
Model-independent curvature determination with 21 cm intensity mapping experiments
NASA Astrophysics Data System (ADS)
Witzemann, Amadeus; Bull, Philip; Clarkson, Chris; Santos, Mario G.; Spinelli, Marta; Weltman, Amanda
2018-06-01
Measurements of the spatial curvature of the Universe have improved significantly in recent years, but still tend to require strong assumptions to be made about the equation of state of dark energy (DE) in order to reach sub-percent precision. When these assumptions are relaxed, strong degeneracies arise that make it hard to disentangle DE and curvature, degrading the constraints. We show that forthcoming 21 cm intensity mapping experiments such as Hydrogen Intensity and Real-time Analysis eXperiment (HIRAX) are ideally designed to carry out model-independent curvature measurements, as they can measure the clustering signal at high redshift with sufficient precision to break many of the degeneracies. We consider two different model-independent methods, based on `avoiding' the DE-dominated regime and non-parametric modelling of the DE equation of state, respectively. Our forecasts show that HIRAX will be able to improve upon current model-independent constraints by around an order of magnitude, reaching percent-level accuracy even when an arbitrary DE equation of state is assumed. In the same model-independent analysis, the sample variance limit for a similar survey is another order of magnitude better.
Model-independent curvature determination with 21cm intensity mapping experiments
NASA Astrophysics Data System (ADS)
Witzemann, Amadeus; Bull, Philip; Clarkson, Chris; Santos, Mario G.; Spinelli, Marta; Weltman, Amanda
2018-04-01
Measurements of the spatial curvature of the Universe have improved significantly in recent years, but still tend to require strong assumptions to be made about the equation of state of dark energy (DE) in order to reach sub-percent precision. When these assumptions are relaxed, strong degeneracies arise that make it hard to disentangle DE and curvature, degrading the constraints. We show that forthcoming 21cm intensity mapping experiments such as HIRAX are ideally designed to carry out model-independent curvature measurements, as they can measure the clustering signal at high redshift with sufficient precision to break many of the degeneracies. We consider two different model-independent methods, based on `avoiding' the DE-dominated regime and non-parametric modelling of the DE equation of state respectively. Our forecasts show that HIRAX will be able to improve upon current model-independent constraints by around an order of magnitude, reaching percent-level accuracy even when an arbitrary DE equation of state is assumed. In the same model-independent analysis, the sample variance limit for a similar survey is another order of magnitude better.
The Feasibility of 3d Point Cloud Generation from Smartphones
NASA Astrophysics Data System (ADS)
Alsubaie, N.; El-Sheimy, N.
2016-06-01
This paper proposes a new technique for increasing the accuracy of direct geo-referenced image-based 3D point clouds generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone motion sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein, in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
A novel neural network for variational inequalities with linear and nonlinear constraints.
Gao, Xing-Bao; Liao, Li-Zhi; Qi, Liqun
2005-11-01
Variational inequality is a uniform approach for many important optimization and equilibrium problems. Based on the sufficient and necessary conditions of the solution, this paper presents a novel neural network model for solving variational inequalities with linear and nonlinear constraints. Three sufficient conditions are provided to ensure that the proposed network with an asymmetric mapping is stable in the sense of Lyapunov and converges to an exact solution of the original problem. Meanwhile, the proposed network with a gradient mapping is also proved to be stable in the sense of Lyapunov and to have a finite-time convergence under some mild condition by using a new energy function. Compared with the existing neural networks, the new model can be applied to solve some nonmonotone problems, has no adjustable parameter, and has lower complexity. Thus, the structure of the proposed network is very simple. Since the proposed network can be used to solve a broad class of optimization problems, it has great application potential. The validity and transient behavior of the proposed neural network are demonstrated by several numerical examples.
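As a generic illustration of the projection-network idea (not the specific model of Gao, Liao and Qi), the classical dynamics dx/dt = P_Ω(x − F(x)) − x, whose equilibria are exactly the solutions of the variational inequality VI(F, Ω), can be integrated with a forward Euler step. The mapping F and the box constraint set below are invented for the example.

```python
import numpy as np

def project_box(x, lo, hi):
    """Euclidean projection onto the box constraint set Omega = [lo, hi]^n."""
    return np.clip(x, lo, hi)

def solve_vi(F, x0, lo=0.0, hi=1.0, step=0.1, iters=500):
    """Euler-discretized projection dynamics dx/dt = P(x - F(x)) - x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x + step * (project_box(x - F(x), lo, hi) - x)
    return x

# Example: F(x) = 2x - 1 is strongly monotone; the VI solution over
# [0, 1]^2 is the interior point x* = (0.5, 0.5), where F(x*) = 0.
x_star = solve_vi(lambda x: 2.0 * x - 1.0, x0=[0.9, 0.1])
```

For monotone mappings such projection dynamics are Lyapunov stable; the abstract's contribution lies in conditions under which stability also holds for certain asymmetric and nonmonotone mappings, with finite-time convergence for gradient mappings.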
Horndeski: beyond, or not beyond?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Crisostomi, Marco; Hull, Matthew; Koyama, Kazuya
2016-03-01
Determining the most general, consistent scalar tensor theory of gravity is important for building models of inflation and dark energy. In this work we investigate the number of degrees of freedom present in the theory of beyond Horndeski. We discuss how to construct the theory from the extrinsic curvature of the constant scalar field hypersurface, and find a simple expression for the action which guarantees the existence of the primary constraint necessary to avoid the Ostrogradsky instability. Our analysis is completely gauge-invariant. However, we confirm that mixing together beyond Horndeski with a different order of Horndeski obstructs the construction of this primary constraint. Instead, when the mixing is between actions of the same order, the theory can be mapped to Horndeski through a generalised disformal transformation. This mapping however is impossible with beyond Horndeski alone, since we find that the theory is invariant under such a transformation. The picture that emerges is that beyond Horndeski is a healthy but isolated theory: combined with Horndeski, it either becomes Horndeski, or likely propagates a ghost.
Sliding Window Generalized Kernel Affine Projection Algorithm Using Projection Mappings
NASA Astrophysics Data System (ADS)
Slavakis, Konstantinos; Theodoridis, Sergios
2008-12-01
Very recently, a solution to the kernel-based online classification problem has been given by the adaptive projected subgradient method (APSM). The developed algorithm can be considered as a generalization of a kernel affine projection algorithm (APA) and the kernel normalized least mean squares (NLMS). Furthermore, sparsification of the resulting kernel series expansion was achieved by imposing a closed ball (convex set) constraint on the norm of the classifiers. This paper presents another sparsification method for the APSM approach to the online classification task by generating a sequence of linear subspaces in a reproducing kernel Hilbert space (RKHS). To cope with the inherent memory limitations of online systems and to embed tracking capabilities to the design, an upper bound on the dimension of the linear subspaces is imposed. The underlying principle of the design is the notion of projection mappings. Classification is performed by metric projection mappings, sparsification is achieved by orthogonal projections, while the online system's memory requirements and tracking are attained by oblique projections. The resulting sparsification scheme shows strong similarities with the classical sliding window adaptive schemes. The proposed design is validated by the adaptive equalization problem of a nonlinear communication channel, and is compared with classical and recent stochastic gradient descent techniques, as well as with the APSM's solution where sparsification is performed by a closed ball constraint on the norm of the classifiers.
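The sliding-window idea in kernel adaptive filtering can be sketched with a much simpler relative of the APSM scheme: a kernel NLMS filter whose kernel expansion is truncated to the M most recent samples. This is a crude stand-in for the paper's projection-mapping design (no metric, orthogonal, or oblique projections); the kernel width, step size, and toy nonlinearity below are assumptions chosen only for illustration.

```python
import numpy as np

def gauss(x, y, sigma=1.0):
    # Gaussian (RBF) kernel between two input vectors
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma**2))

class SlidingWindowKNLMS:
    """Kernel NLMS whose expansion keeps only the M most recent samples --
    a sliding-window stand-in for the projection-based sparsification."""
    def __init__(self, window=50, mu=0.5, eps=1e-3):
        self.window, self.mu, self.eps = window, mu, eps
        self.centers, self.alphas = [], []

    def predict(self, x):
        return sum(a * gauss(c, x) for c, a in zip(self.centers, self.alphas))

    def update(self, x, d):
        e = d - self.predict(x)              # a-priori error
        k_xx = gauss(x, x)                   # equals 1 for the Gaussian kernel
        self.centers.append(x)
        self.alphas.append(self.mu * e / (self.eps + k_xx))
        if len(self.centers) > self.window:  # slide the window
            self.centers.pop(0)
            self.alphas.pop(0)
        return e

# Track a static nonlinearity online: d = tanh(2x) + small noise.
rng = np.random.default_rng(0)
f = SlidingWindowKNLMS()
errors = []
for _ in range(400):
    x = rng.uniform(-1, 1, size=1)
    d = np.tanh(2 * x[0]) + 0.01 * rng.normal()
    errors.append(abs(f.update(x, d)))
```

The dictionary size stays capped at the window length, mimicking the bounded-memory requirement of an online system, while the a-priori error shrinks as the expansion adapts.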
Bounded relative motion under zonal harmonics perturbations
NASA Astrophysics Data System (ADS)
Baresi, Nicola; Scheeres, Daniel J.
2017-04-01
The problem of finding natural bounded relative trajectories between the different units of a distributed space system is of great interest to the astrodynamics community. This is because most popular initialization methods still fail to establish long-term bounded relative motion when gravitational perturbations are involved. Recent numerical searches based on dynamical systems theory and ergodic maps have demonstrated that bounded relative trajectories not only exist but may extend up to hundreds of kilometers, i.e., well beyond the reach of currently available techniques. To remedy this, we introduce a novel approach that relies on neither linearized equations nor mean-to-osculating orbit element mappings. The proposed algorithm applies to rotationally symmetric bodies and is based on a numerical method for computing quasi-periodic invariant tori via stroboscopic maps, including extra constraints to fix the average of the nodal period and RAAN drift between two consecutive equatorial plane crossings of the quasi-periodic solutions. In this way, bounded relative trajectories of arbitrary size can be found with great accuracy as long as these are allowed by the natural dynamics and the physical constraints of the system (e.g., the surface of the gravitational attractor). This holds under any number of zonal harmonics perturbations and for arbitrary time intervals as demonstrated by numerical simulations about an Earth-like planet and the highly oblate primary of the binary asteroid (66391) 1999 KW4.
Underwater Multi-Vehicle Trajectory Alignment and Mapping Using Acoustic and Optical Constraints
Campos, Ricard; Gracias, Nuno; Ridao, Pere
2016-01-01
Multi-robot formations are an important advance in recent robotic developments, as they allow a group of robots to merge their capacities and perform surveys in a more convenient way. With the aim of keeping the costs and acoustic communications to a minimum, cooperative navigation of multiple underwater vehicles is usually performed at the control level. In order to maintain the desired formation, individual robots just react to simple control directives extracted from range measurements or ultra-short baseline (USBL) systems. Thus, the robots are unaware of their global positioning, which presents a problem for the further processing of the collected data. The aim of this paper is two-fold. First, we present a global alignment method to correct the dead reckoning trajectories of multiple vehicles to resemble the paths followed during the mission using the acoustic messages passed between vehicles. Second, we focus on the optical mapping application of these types of formations and extend the optimization framework to allow for multi-vehicle geo-referenced optical 3D mapping using monocular cameras. The inclusion of optical constraints is not performed using the common bundle adjustment techniques, but in a form improving the computational efficiency of the resulting optimization problem and presenting a generic process to fuse optical reconstructions with navigation data. We show the performance of the proposed method on real datasets collected within the Morph EU-FP7 project. PMID:26999144
NASA Technical Reports Server (NTRS)
Boughn, S. P.; Crittenden, R. G.; Turok, N. G.
1998-01-01
In universes with significant curvature or cosmological constant, cosmic microwave background (CMB) anisotropies are created very recently via the Rees-Sciama or integrated Sachs-Wolfe effects. This causes the CMB anisotropies to become partially correlated with the local matter density (z < 4). We examine the prospects of using the hard (2-10 keV) X-ray background as a probe of the local density and the measured correlation between the HEAO1 A2 X-ray survey and the 4-year COBE-DMR map to obtain a constraint on the cosmological constant. The 95% confidence level upper limit on the cosmological constant is Omega_Lambda <= 0.5, assuming that the observed fluctuations in the X-ray map result entirely from large scale structure. (This would also imply that the X-rays trace matter with a bias factor of b_x ~= 5.6 Omega_m^0.53.) This bound is weakened considerably if a large portion of the X-ray fluctuations arise from Poisson noise from unresolved sources. For example, if one assumes that the X-ray bias is b_x = 2, then the 95% confidence level upper limit is weaker, Omega_Lambda <= 0.7. More stringent limits should be attainable with data from the next generation of CMB and X-ray background maps.
Herlin, Antoine; Jacquemet, Vincent
2012-05-01
Phase singularity analysis provides a quantitative description of spiral wave patterns observed in chemical or biological excitable media. The configuration of phase singularities (locations and directions of rotation) is easily derived from phase maps in two-dimensional manifolds. The question arises whether one can construct a phase map with a given configuration of phase singularities. The existence of such a phase map is guaranteed provided that the phase singularity configuration satisfies a certain constraint associated with the topology of the supporting medium. This paper presents a constructive mathematical approach to numerically solve this problem in the plane and on the sphere as well as in more general geometries relevant to atrial anatomy including holes and a septal wall. This tool can notably be used to create initial conditions with a controllable spiral wave configuration for cardiac propagation models and thus help in the design of computer experiments in atrial electrophysiology.
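The phase-singularity configurations the paper starts from can be extracted from a discrete 2D phase map by computing winding numbers around grid plaquettes. The sketch below is the standard discrete topological-charge computation (the inverse direction of the paper's constructive problem), with an illustrative spiral field as test data.

```python
import numpy as np

def wrap(d):
    # Wrap phase differences into [-pi, pi)
    return (d + np.pi) % (2 * np.pi) - np.pi

def phase_singularities(phase):
    """Topological charge (+1, -1, or 0) of each 2x2 plaquette of a 2D
    phase map, obtained by summing wrapped phase differences around the
    plaquette loop (a discrete winding number)."""
    d1 = wrap(phase[:-1, 1:] - phase[:-1, :-1])   # east edge
    d2 = wrap(phase[1:, 1:] - phase[:-1, 1:])     # north edge
    d3 = wrap(phase[1:, :-1] - phase[1:, 1:])     # west edge
    d4 = wrap(phase[:-1, :-1] - phase[1:, :-1])   # south edge
    return np.rint((d1 + d2 + d3 + d4) / (2 * np.pi)).astype(int)

# A single rotor: the angle field around a point between grid nodes
# carries exactly one phase singularity of charge +1.
y, x = np.mgrid[-10:11, -10:11]
theta = np.arctan2(y - 0.5, x - 0.5)
charges = phase_singularities(theta)
```

Because the wrapped differences telescope, the sum of all plaquette charges equals the winding number around the boundary of the medium, which is the topological constraint the paper exploits.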
Liu, Yanbin; Liu, Mengying; Sun, Peihua
2014-01-01
A typical model of hypersonic vehicle has the complicated dynamics such as the unstable states, the nonminimum phases, and the strong coupling input-output relations. As a result, designing a robust stabilization controller is essential to implement the anticipated tasks. This paper presents a robust stabilization controller based on the guardian maps theory for hypersonic vehicle. First, the guardian maps theories are provided to explain the constraint relations between the open subsets of complex plane and the eigenvalues of the state matrix of closed-loop control system. Then, a general control structure in relation to the guardian maps theories is proposed to achieve the respected design demands. Furthermore, the robust stabilization control law depending on the given general control structure is designed for the longitudinal model of hypersonic vehicle. Finally, a simulation example is provided to verify the effectiveness of the proposed methods. PMID:24795535
Multimodal interaction for human-robot teams
NASA Astrophysics Data System (ADS)
Burke, Dustin; Schurr, Nathan; Ayers, Jeanine; Rousseau, Jeff; Fertitta, John; Carlin, Alan; Dumond, Danielle
2013-05-01
Unmanned ground vehicles have the potential for supporting small dismounted teams in mapping facilities, maintaining security in cleared buildings, and extending the team's reconnaissance and persistent surveillance capability. In order for such autonomous systems to integrate with the team, we must move beyond current interaction methods using heads-down teleoperation which require intensive human attention and affect the human operator's ability to maintain local situational awareness and ensure their own safety. This paper focuses on the design, development and demonstration of a multimodal interaction system that incorporates naturalistic human gestures, voice commands, and a tablet interface. By providing multiple, partially redundant interaction modes, our system degrades gracefully in complex environments and enables the human operator to robustly select the most suitable interaction method given the situational demands. For instance, the human can silently use arm and hand gestures for commanding a team of robots when it is important to maintain stealth. The tablet interface provides an overhead situational map allowing waypoint-based navigation for multiple ground robots in beyond-line-of-sight conditions. Using lightweight, wearable motion sensing hardware either worn comfortably beneath the operator's clothing or integrated within their uniform, our non-vision-based approach enables an accurate, continuous gesture recognition capability without line-of-sight constraints. To reduce the training necessary to operate the system, we designed the interactions around familiar arm and hand gestures.
A methodology for quantifying and mapping ecosystem services provided by watersheds
Villamagna, Amy M.; Angermeier, Paul L.
2015-01-01
Watershed processes – physical, chemical, and biological – are the foundation for many benefits that ecosystems provide for human societies. A crucial step toward accurately representing those benefits, so they can ultimately inform decisions about land and water management, is the development of a coherent methodology that can translate available data into the ecosystem services (ES) produced by watersheds. ES provide an intuitive way to understand the tradeoffs associated with natural resource management. We provide a synthesis of common terminology and explain a rationale and framework for distinguishing among the components of ecosystem service delivery, including: an ecosystem’s capacity to produce a service; societal demand for the service; ecological pressures on this service; and flow of the service to people. We discuss how interpretation and measurement of these components can differ among provisioning, regulating, and cultural services and describe selected methods for quantifying ES components as well as constraints on data availability. We also present several case studies to illustrate our methods, including mapping capacity of several water purification services and demand for two forms of wildlife-based recreation, and discuss future directions for ecosystem service assessments. Our flexible framework treats service capacity, demand, ecological pressure, and flow as separate but interactive entities to better evaluate the sustainability of service provision across space and time and to help guide management decisions.
Gaitán-Espitia, Juan Diego; Marshall, Dustin; Dupont, Sam; Bacigalupe, Leonardo D.; Bodrossy, Levente; Hobday, Alistair J.
2017-01-01
Geographical gradients in selection can shape different genetic architectures in natural populations, reflecting potential genetic constraints for adaptive evolution under climate change. Investigation of natural pH/pCO2 variation in upwelling regions reveals different spatio-temporal patterns of natural selection, generating genetic and phenotypic clines in populations, and potentially leading to local adaptation, relevant to understanding effects of ocean acidification (OA). Strong directional selection, associated with intense and continuous upwellings, may have depleted genetic variation in populations within these upwelling regions, favouring increased tolerances to low pH but with an associated cost in other traits. In contrast, diversifying or weak directional selection in populations with seasonal upwellings or outside major upwelling regions may have resulted in higher genetic variances and the lack of genetic correlations among traits. Testing this hypothesis in geographical regions with similar environmental conditions to those predicted under climate change will build insights into how selection may act in the future and how populations may respond to stressors such as OA. PMID:28148831
Computational analysis of sequence selection mechanisms.
Meyerguz, Leonid; Grasso, Catherine; Kleinberg, Jon; Elber, Ron
2004-04-01
Mechanisms leading to gene variations are responsible for the diversity of species and are important components of the theory of evolution. One constraint on gene evolution is that of protein foldability; the three-dimensional shapes of proteins must be thermodynamically stable. We explore the impact of this constraint and calculate properties of foldable sequences using 3660 structures from the Protein Data Bank. We seek a selection function that receives sequences as input, and outputs survival probability based on sequence fitness to structure. We compute the number of sequences that match a particular protein structure with energy lower than the native sequence, the density of the number of sequences, the entropy, and the "selection" temperature. The mechanism of structure selection for sequences longer than 200 amino acids is approximately universal. For shorter sequences, it is not. We speculate on concrete evolutionary mechanisms that show this behavior.
Genetic constraints predict evolutionary divergence in Dalechampia blossoms.
Bolstad, Geir H; Hansen, Thomas F; Pélabon, Christophe; Falahati-Anbaran, Mohsen; Pérez-Barrales, Rocío; Armbruster, W Scott
2014-08-19
If genetic constraints are important, then rates and direction of evolution should be related to trait evolvability. Here we use recently developed measures of evolvability to test the genetic constraint hypothesis with quantitative genetic data on floral morphology from the Neotropical vine Dalechampia scandens (Euphorbiaceae). These measures were compared against rates of evolution and patterns of divergence among 24 populations in two species in the D. scandens species complex. We found clear evidence for genetic constraints, particularly among traits that were tightly phenotypically integrated. This relationship between evolvability and evolutionary divergence is puzzling, because the estimated evolvabilities seem too large to constitute real constraints. We suggest that this paradox can be explained by a combination of weak stabilizing selection around moving adaptive optima and small realized evolvabilities relative to the observed additive genetic variance.
McMurray, Bob; Horst, Jessica S.; Samuelson, Larissa K.
2013-01-01
Classic approaches to word learning emphasize the problem of referential ambiguity: in any naming situation the referent of a novel word must be selected from many possible objects, properties, actions, etc. To solve this problem, researchers have posited numerous constraints and inference strategies, but assume that determining the referent of a novel word is isomorphic to learning. We present an alternative model in which referent selection is an online process that is independent of long-term learning. This two-timescale approach creates significant power in the developing system. We illustrate this with a dynamic associative model in which referent selection is simulated as dynamic competition between competing referents, and learning is simulated using associative (Hebbian) learning. This model can account for a range of findings including the delay in expressive vocabulary relative to receptive vocabulary, learning under high degrees of referential ambiguity using cross-situational statistics, accelerating (vocabulary explosion) and decelerating (power-law) learning rates, fast-mapping by mutual exclusivity (and differences in bilinguals), improvements in familiar word recognition with development, and correlations between individual differences in speed of processing and learning. Five theoretical points are illustrated. 1) Word learning does not require specialized processes – general association learning buttressed by dynamic competition can account for much of the literature. 2) The processes of recognizing familiar words are no different from those that support novel words (e.g., fast-mapping). 3) Online competition may allow the network (or child) to leverage information available in the task to augment performance or behavior despite what might be relatively slow learning or poor representations.
4) Even associative learning is more complex than previously thought – a major contributor to performance is the pruning of incorrect associations between words and referents. 5) Finally, the model illustrates that learning and referent selection/word recognition, though logically distinct, can be deeply and subtly related as phenomena like speed of processing and mutual exclusivity may derive in part from the way learning shapes the system. As a whole, this suggests more sophisticated ways of describing the interaction between situation- and developmental-time processes and points to the need for considering such interactions as a primary determinant of development and processing in children. PMID:23088341
Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi
2013-12-19
Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation.
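A drastically simplified version of delay-constrained polling-point selection can be sketched as greedy admission: candidates are considered in decreasing order of residual energy and kept only if the actuator's tour stays within the delay budget. This is not the paper's dynamic algorithm; the nearest-neighbour tour heuristic, the candidate data, and the budget below are all illustrative assumptions, with delay taken as proportional to tour length.

```python
import math

def tour_length(points, depot=(0.0, 0.0)):
    """Nearest-neighbour tour from the depot through all points and back --
    a cheap heuristic standing in for proper actuator path planning."""
    unvisited, pos, total = list(points), depot, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda p: math.dist(pos, p))
        total += math.dist(pos, nxt)
        unvisited.remove(nxt)
        pos = nxt
    return total + math.dist(pos, depot)

def select_polling_points(nodes, max_tour):
    """Greedily admit candidate polling points in decreasing order of node
    residual energy, skipping any point whose inclusion would push the
    actuator's tour past the delay budget (max_tour)."""
    chosen = []
    for energy, xy in sorted(nodes, reverse=True):
        if tour_length(chosen + [xy]) <= max_tour:
            chosen.append(xy)
    return chosen

# Candidates as (residual_energy, (x, y)); delay budget of 30 length units.
nodes = [(5.0, (10.0, 0.0)), (4.0, (0.0, 10.0)),
         (3.0, (20.0, 20.0)), (2.0, (1.0, 1.0))]
chosen = select_polling_points(nodes, max_tour=30.0)
```

High-energy nodes are favoured as polling points (they can afford the extra relaying load), while the budget check enforces the collection-delay constraint by construction.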
Carpenter, G.B.; Cardinell, A.P.; Francois, D.K.; Good, L.K.; Lewis, R.L.; Stiles, N.T.
1982-01-01
Analysis of high-resolution geophysical data collected over 540 blocks tentatively selected for leasing in proposed OCS Oil and Gas Lease Sale 52 (Georges Bank) revealed a number of potential geologic hazards to oil and gas exploration and development activities: evidence of mass movements and shallow gas deposits on the continental slope. No potential hazards were observed on the continental shelf or rise. Other geology-related problems, termed constraints because they pose a relatively low degree of risk and can be routinely dealt with by the use of existing technology, have been observed on the continental shelf. Constraints identified in the proposed sale area are erosion, sand waves, filled channels and deep faults. Piston cores were collected for geotechnical analysis at selected locations on the continental slope in the proposed lease sale area. The core locations were selected to provide information on slope stability and to establish the general geotechnical properties of the sediments. Preliminary results of a testing program suggest that the surficial sediment cover is stable with respect to mass movement.
A synoptic description of coal basins via image processing
NASA Technical Reports Server (NTRS)
Farrell, K. W., Jr.; Wherry, D. B.
1978-01-01
An existing image processing system is adapted to describe the geologic attributes of a regional coal basin. This scheme handles a map as if it were a matrix, in contrast to more conventional approaches which represent map information in terms of linked polygons. The utility of the image processing approach is demonstrated by a multiattribute analysis of the Herrin No. 6 coal seam in Illinois. Findings include the location of a resource and estimation of tonnage corresponding to constraints on seam thickness, overburden, and Btu value, which are illustrative of the need for new mining technology.
UVMAS: Venus ultraviolet-visual mapping spectrometer
NASA Astrophysics Data System (ADS)
Bellucci, G.; Zasova, L.; Altieri, F.; Nuccilli, F.; Ignatiev, N.; Moroz, V.; Khatuntsev, I.; Korablev, O.; Rodin, A.
This paper summarizes the capabilities and technical solutions of an Ultraviolet Visual Mapping Spectrometer designed for remote sensing of Venus from a planetary orbiter. The UVMAS consists of a multichannel camera with a spectral range of 0.19-0.49 μm which acquires data in several spectral channels (up to 400) with a spectral resolution of 0.58 nm. The instantaneous field of view of the instrument is 0.244 × 0.244 mrad. These characteristics allow: (a) study of the upper cloud dynamics and chemistry; (b) constraints on the unknown absorber; (c) observation of the night-side airglow.
Taking the Measure of the Universe: Cosmology from the WMAP Mission
NASA Technical Reports Server (NTRS)
Hinshaw, Gary F.
2007-01-01
The data from the first three years of operation of the Wilkinson Microwave Anisotropy Probe (WMAP) satellite provide detailed full-sky maps of the cosmic microwave background temperature anisotropy and new full-sky maps of the polarization. Together, the data provide a wealth of cosmological information, including the age of the universe, the epoch when the first stars formed, and the overall composition of baryonic matter, dark matter, and dark energy. The results also provide constraints on the period of inflationary expansion in the very first moments of time. These and other aspects of the mission will be discussed.
NASA Technical Reports Server (NTRS)
Leake, M. A.
1982-01-01
The relative ages of various geologic units and structures place tight constraints on the origin of the Moon and the planet Mercury, and thus provide a better understanding of the geologic histories of these bodies. Crater statistics, a reexamination of lunar geologic maps, and the compilation of a geologic map of a quarter of Mercury's surface based on plains units dated relative to crater degradation classes were used to determine relative ages. This provided the basis for deducing the origin of intercrater plains and their role in terrestrial planet evolution.
Growing a hypercubical output space in a self-organizing feature map.
Bauer, H U; Villmann, T
1997-01-01
Neural maps project data from an input space onto a neuron position in a (often lower dimensional) output space grid in a neighborhood preserving way, with neighboring neurons in the output space responding to neighboring data points in the input space. A map-learning algorithm can achieve optimal neighborhood preservation only if the output space topology roughly matches the effective structure of the data in the input space. We here present a growth algorithm, called the GSOM or growing self-organizing map, which enhances a widespread map self-organization process, Kohonen's self-organizing feature map (SOFM), by an adaptation of the output space grid during learning. The GSOM restricts the output space structure to a general hypercubical shape, with the overall dimensionality of the grid and its extensions along the different directions being subject to adaptation. This constraint meets the demands of many larger information processing systems, of which the neural map can be a part. We apply our GSOM-algorithm to three examples, two of which involve real world data. Using recently developed methods for measuring the degree of neighborhood preservation in neural maps, we find the GSOM-algorithm to produce maps which preserve neighborhoods in a nearly optimal fashion.
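The fixed-topology SOFM that the GSOM extends can be sketched in a few lines: a best-matching unit is found for each sample, and all neurons are pulled toward the sample with a strength that decays with grid distance. The grid size, decay schedules, and data below are illustrative assumptions; the GSOM itself would additionally grow or shrink the grid dimensions during this loop.

```python
import numpy as np

def train_som(data, grid_shape=(8, 8), epochs=20, lr0=0.5, sigma0=3.5, seed=0):
    """Minimal Kohonen SOFM on a fixed 2D output grid."""
    rng = np.random.default_rng(seed)
    rows, cols = grid_shape
    # Output-space coordinates of each neuron, used by the neighborhood function.
    coords = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    w = rng.normal(size=(rows * cols, data.shape[1]))
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)               # decaying learning rate
        sigma = sigma0 * (1 - t / epochs) + 0.5   # shrinking neighborhood radius
        for x in rng.permutation(data):
            bmu = np.argmin(((w - x) ** 2).sum(axis=1))   # best-matching unit
            h = np.exp(-((coords - coords[bmu]) ** 2).sum(axis=1)
                       / (2 * sigma**2))
            w += lr * h[:, None] * (x - w)        # pull neighbors toward x
    return w.reshape(rows, cols, -1)

# 2D data filling the unit square: an 8x8 grid should spread over it so that
# neighboring neurons respond to neighboring points.
data = np.random.default_rng(1).uniform(size=(400, 2))
w = train_som(data)
```

The neighborhood function `h` is what makes the mapping topology-preserving: units close on the output grid receive similar updates and therefore end up responding to nearby inputs.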
Predicting protein contact map using evolutionary and physical constraints by integer programming.
Wang, Zhiyong; Xu, Jinbo
2013-07-01
Protein contact map describes the pairwise spatial and functional relationship of residues in a protein and contains key information for protein 3D structure prediction. Although studied extensively, it remains challenging to predict contact map using only sequence information. Most existing methods predict the contact map matrix element-by-element, ignoring correlation among contacts and the physical feasibility of the whole contact map. A couple of recent methods predict contact map by using mutual information, taking into consideration contact correlation and enforcing a sparsity restraint, but these methods demand a very large number of sequence homologs for the protein under consideration and the resultant contact map may be still physically infeasible. This article presents a novel method PhyCMAP for contact map prediction, integrating both evolutionary and physical restraints by machine learning and integer linear programming. The evolutionary restraints are much more informative than mutual information, and the physical restraints specify more concrete relationship among contacts than the sparsity restraint. As such, our method greatly reduces the solution space of the contact map matrix and, thus, significantly improves prediction accuracy. Experimental results confirm that PhyCMAP outperforms currently popular methods no matter how many sequence homologs are available for the protein under consideration. PhyCMAP is available at http://raptorx.uchicago.edu.
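The effect of physical restraints can be illustrated with a toy greedy filter over predicted contact probabilities, enforcing two simple feasibility rules (a minimum sequence separation and a per-residue contact cap). This is only a hand-rolled stand-in for illustration; PhyCMAP's actual integer linear programming formulation encodes far richer restraints, and the probabilities below are hypothetical.

```python
def select_contacts(probs, min_sep=6, max_per_residue=4, top=20):
    """Greedy stand-in for the integer program: scan candidate residue
    pairs in decreasing predicted probability and keep those satisfying
    two toy physical restraints -- sequence separation |i - j| >= min_sep
    and at most max_per_residue contacts per residue."""
    degree = {}
    chosen = []
    candidates = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)
    for (i, j), p in candidates:
        if abs(i - j) < min_sep:
            continue  # short-range pair: trivially near in sequence
        if degree.get(i, 0) >= max_per_residue or degree.get(j, 0) >= max_per_residue:
            continue  # residue already saturated with contacts
        chosen.append((i, j))
        degree[i] = degree.get(i, 0) + 1
        degree[j] = degree.get(j, 0) + 1
        if len(chosen) == top:
            break
    return chosen

# Hypothetical predicted probabilities for residue pairs (i, j).
probs = {(1, 2): 0.9, (1, 8): 0.8, (2, 9): 0.7, (1, 9): 0.6, (3, 4): 0.95}
chosen = select_contacts(probs, min_sep=6, max_per_residue=1, top=20)
```

Even this crude filter shows the point made in the abstract: joint constraints prune many high-probability but physically implausible element-wise predictions from the solution space.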
Non-Gaussian shape discrimination with spectroscopic galaxy surveys
DOE Office of Scientific and Technical Information (OSTI.GOV)
Byun, Joyce; Bean, Rachel, E-mail: byun@astro.cornell.edu, E-mail: rbean@astro.cornell.edu
2015-03-01
We consider how galaxy clustering data, from Mpc to Gpc scales, from upcoming large scale structure surveys, such as Euclid and DESI, can provide discriminating information about the bispectrum shape arising from a variety of inflationary scenarios. Through exploring in detail the weighting of shape properties in the calculation of the halo bias and halo mass function, we show how they probe a broad range of configurations, beyond those in the squeezed limit, that can help distinguish between shapes with similar large scale bias behaviors. We assess the impact, on constraints for a diverse set of non-Gaussian shapes, of galaxy clustering information in the mildly non-linear regime, and surveys that span multiple redshifts and employ different galactic tracers of the dark matter distribution. Fisher forecasts are presented for a Euclid-like spectroscopic survey of Hα-selected emission line galaxies (ELGs), and a DESI-like survey, of luminous red galaxies (LRGs) and [O-II] doublet-selected ELGs, in combination with Planck-like CMB temperature and polarization data. While ELG samples provide better probes of shapes that are divergent in the squeezed limit, LRG constraints, centered below z<1, yield stronger constraints on shapes with scale-independent large-scale halo biases, such as the equilateral template. The ELG and LRG samples provide complementary degeneracy directions for distinguishing between different shapes. For Hα-selected galaxies, we note that recent revisions of the expected Hα luminosity function reduce the halo bias constraints on the local shape, relative to the CMB. For galaxy clustering constraints to be comparable to those from the CMB, additional information about the Gaussian galaxy bias is needed, such as can be determined from the galaxy clustering bispectrum or by probing the halo power spectrum directly through weak lensing. If the Gaussian galaxy bias is constrained to better than the percent level, then the LSS and CMB data could provide complementary constraints that will enable differentiation of bispectra with distinct theoretical origins but with similar large scale, squeezed-limit properties.
NASA Astrophysics Data System (ADS)
Wu, J. E.; Suppe, J.; Renqi, L.; Lin, C.; Kanda, R. V.
2013-12-01
The past locations, shapes and polarity of subduction trenches provide first-order constraints for plate tectonic reconstructions. Analogue and numerical models of subduction zones suggest that relative subducting (Vs) and overriding (Vor) plate velocities may strongly influence final subducted slab geometries. Here we have mapped the 3D geometries of subducted slabs in the upper and lower mantle of Asia from global seismic tomography. We have incorporated these slabs into plate tectonic models, which allows us to infer the subducting and overriding plate velocities. We describe two distinct slab geometry styles, ';flat slabs' and ';slab curtains', and show their implications for paleo-trench positions and subduction geometries in plate tectonic reconstructions. When compared to analogue and numerical models, the mapped slab styles show similarities to modeled slabs that occupy very different locations within Vs:Vor parameter space. ';Flat slabs' include large swaths of sub-horizontal slabs in the lower mantle that underlie the well-known northward paths of India and Australia from Eastern Gondwana, viewed in a moving hotspot reference. At India the flat slabs account for a significant proportion of the predicted lost Ceno-Tethys Ocean since ~100 Ma, whereas at Australia they record the existence of a major 8000km by 2500-3000km ocean that existed at ~43 Ma between East Asia, the Pacific and Australia. Plate reconstructions incorporating the slab constraints imply these flat slab geometries were generated when continent overran oceanic lithosphere to produce rapid trench retreat, or in other words, when subducting and overriding velocities were equal (i.e. Vs ~ Vor). ';Slab curtains' include subvertical Pacific slabs near the Izu-Bonin and Marianas trenches that extend from the surface down to 1500 km in the lower mantle and are 400 to 500 km thick. Reconstructed slab lengths were assessed from tomographic volumes calculated at serial cross-sections. 
The 'slab curtain' geometry and restored slab lengths indicate a nearly stationary Pacific trench since ~43 Ma. In contrast to the flat slabs, here the reconstructed subduction zone had large subducting plate velocities relative to very small overriding plate velocities (i.e. Vs >> Vor). In addition to flat slabs and slab curtains, we also find other less widespread local subduction settings that lie at other locations in Vs:Vor parameter space or involved other processes. Slabs were mapped using Gocad software. Mapped slabs were restored to a spherical model Earth surface by two approaches: unfolding (i.e. piecewise flattening) to minimize shape and area distortions, and by evaluating mapped slab volumes. Gplates software was used to integrate the mapped slabs with plate tectonic reconstructions.
Searching for quantum optimal controls under severe constraints
Riviello, Gregory; Tibbetts, Katharine Moore; Brif, Constantin; ...
2015-04-06
The success of quantum optimal control for both experimental and theoretical objectives is connected to the topology of the corresponding control landscapes, which are free from local traps if three conditions are met: (1) the quantum system is controllable, (2) the Jacobian of the map from the control field to the evolution operator is of full rank, and (3) there are no constraints on the control field. This paper investigates how the violation of assumption (3) affects gradient searches for globally optimal control fields. The satisfaction of assumptions (1) and (2) ensures that the control landscape lacks fundamental traps, but certain control constraints can still prevent successful optimization of the objective. Using optimal control simulations, we show that the most severe field constraints are those that limit essential control resources, such as the number of control variables, the control duration, and the field strength. Proper management of these resources is an issue of great practical importance for optimization in the laboratory. For each resource, we show that constraints exceeding quantifiable limits can introduce artificial traps to the control landscape and prevent gradient searches from reaching a globally optimal solution. These results demonstrate that careful choice of relevant control parameters helps to eliminate artificial traps and facilitate successful optimization.
Wavelength selection of rolling-grain ripples in the laboratory
NASA Astrophysics Data System (ADS)
Rousseaux, Germain; Stegner, Alexandre; Wesfreid, José Eduardo
2004-03-01
We have performed an experimental study, at very high resolution, of the wavelength selection and the evolution of rolling-grain ripples. A clear distinction is made between the flat sand bed instability and the ripple coarsening. The observation of the initial wavelength for the rolling-grain ripples is only possible close to the threshold for movement, which imposes a constraint on the parameters. Moreover, we have proposed a law for the selection of the unstable wavelength under the latter constraint. Our results suggest that the initial wavelength depends on the amplitude of oscillation, the grain diameter, and the Stokes layer. In addition, during the coarsening, we observe no self-similarity of the ripple shape and, in a few cases, a logarithmic growth of the wavelength.
FOR Allocation to Distribution Systems based on Credible Improvement Potential (CIP)
NASA Astrophysics Data System (ADS)
Tiwary, Aditya; Arya, L. D.; Arya, Rajesh; Choube, S. C.
2017-02-01
This paper describes an algorithm for forced outage rate (FOR) allocation to each section of an electrical distribution system, subject to the satisfaction of reliability constraints at each load point. These constraints include threshold values of basic reliability indices at load points, for example failure rate, interruption duration, and interruption duration per year. A component improvement potential measure has been used for FOR allocation. The component with the greatest magnitude of the credible improvement potential (CIP) measure is selected for improving reliability performance. The approach adopted is a monovariable method, in which one component is selected for FOR allocation and, in the next iteration, another component is selected for FOR allocation based on the magnitude of CIP. The developed algorithm is implemented on a sample radial distribution system.
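The monovariable loop described above can be sketched as follows: pick the component with the greatest credible improvement potential, improve its forced outage rate, and re-check the reliability constraints. In this sketch the CIP measure, the improvement step, and the aggregate constraint check are simplified stand-ins, not the paper's formulation:

```python
def allocate_for(rates, threshold, step=0.9, max_iter=1000):
    """Monovariable greedy loop (sketch): while the aggregate constraint is
    violated, pick the component with the largest credible improvement
    potential (simplified here to its failure-rate contribution) and reduce
    its forced outage rate by a fixed factor."""
    rates = dict(rates)
    for _ in range(max_iter):
        if sum(rates.values()) <= threshold:   # load-point check (simplified)
            break
        worst = max(rates, key=rates.get)      # greatest-CIP component
        rates[worst] *= step                   # allocate an improved FOR
    return rates

# Illustrative run: three feeder sections with assumed initial failure rates.
improved = allocate_for({"s1": 0.40, "s2": 0.25, "s3": 0.10}, threshold=0.5)
```

Only the components dominating the violated constraint are touched; already-reliable sections keep their original FOR, which mirrors the one-component-per-iteration character of the method.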
Questioning the social intelligence hypothesis.
Holekamp, Kay E
2007-02-01
The social intelligence hypothesis posits that complex cognition and enlarged "executive brains" evolved in response to challenges that are associated with social complexity. This hypothesis has been well supported, but some recent data are inconsistent with its predictions. It is becoming increasingly clear that multiple selective agents, and non-selective constraints, must have acted to shape cognitive abilities in humans and other animals. The task now is to develop a larger theoretical framework that takes into account both inter-specific differences and similarities in cognition. This new framework should facilitate consideration of how selection pressures that are associated with sociality interact with those that are imposed by non-social forms of environmental complexity, and how both types of functional demands interact with phylogenetic and developmental constraints.
Constraining the crustal root geometry beneath Northern Morocco
NASA Astrophysics Data System (ADS)
Díaz, J.; Gil, A.; Carbonell, R.; Gallart, J.; Harnafi, M.
2016-10-01
Consistent constraints of an over-thickened crust beneath the Rif Cordillera (N. Morocco) are inferred from analyses of recently acquired seismic datasets including controlled-source wide-angle reflections and receiver functions from teleseismic events. Offline arrivals of Moho-reflected phases recorded in the RIFSIS project provide estimations of the crustal thicknesses in 3D. Additional constraints on the onshore-offshore transition are inferred from shots in a coeval experiment in the Alboran Sea recorded at land stations in northern Morocco. A regional crustal thickness map is computed from all these results. In parallel, we use natural seismicity data collected throughout the TopoIberia and PICASSO experiments, and from a new RIFSIS deployment, to obtain receiver functions and explore the crustal thickness variations with an H-κ grid-search approach. This larger dataset provides better resolution constraints and reveals a number of abrupt crustal changes. A gridded surface is built up by interpolating the Moho depths inferred for each seismic station, then compared with the map from controlled-source experiments. A remarkably consistent image is observed in both maps, derived from completely independent data and methods. Both approaches document a large crustal root, exceeding 50 km depth in the central part of the Rif, in contrast with the rather small topographic elevations. This large crustal thickness, consistent with the available Bouguer anomaly data, favors models proposing that the high-velocity slab imaged by seismic tomography beneath the Alboran Sea is still attached to the lithosphere beneath the Rif, hence pulling down the lithosphere and thickening the crust. The thickened area corresponds to a quiet seismic zone located between the western Morocco arcuate seismic zone, the deep seismicity area beneath the western Alboran Sea and the superficial seismicity in the Alhoceima area. 
Therefore, the presence of a crustal root also seems to play a major role in the seismicity distribution of northern Morocco.
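The H-κ grid search mentioned above is a standard receiver-function technique: for each trial crustal thickness H and Vp/Vs ratio κ, the receiver-function amplitude is stacked at the predicted delay times of the Ps conversion and its crustal multiples, and the (H, κ) pair that maximizes the stack is kept. A minimal sketch with illustrative velocities, weights, and grid ranges, not the study's actual settings:

```python
import numpy as np

def hk_search(rf_time, rf_amp, p, vp=6.3,
              h_grid=np.arange(20.0, 61.0, 0.5),
              k_grid=np.arange(1.60, 2.01, 0.01),
              weights=(0.7, 0.2, 0.1)):
    """H-kappa stacking (sketch): for each trial thickness H (km) and Vp/Vs
    ratio kappa, sum the receiver-function amplitude at the predicted delay
    times of Ps and its crustal multiples (PpPs positive, PpSs+PsPs
    negative), and keep the (H, kappa) that maximizes the stack.
    p is the ray parameter in s/km."""
    w1, w2, w3 = weights
    best, best_hk = -np.inf, None
    for h in h_grid:
        for k in k_grid:
            vs = vp / k
            qs = np.sqrt(1.0 / vs**2 - p**2)   # vertical S slowness
            qp = np.sqrt(1.0 / vp**2 - p**2)   # vertical P slowness
            s = (w1 * np.interp(h * (qs - qp), rf_time, rf_amp)
                 + w2 * np.interp(h * (qs + qp), rf_time, rf_amp)
                 - w3 * np.interp(2.0 * h * qs, rf_time, rf_amp))
            if s > best:
                best, best_hk = s, (h, k)
    return best_hk
```

Feeding in a synthetic receiver function with arrivals placed for a known (H, κ) recovers that pair at the grid resolution.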
Explanation Constraint Programming for Model-based Diagnosis of Engineered Systems
NASA Technical Reports Server (NTRS)
Narasimhan, Sriram; Brownston, Lee; Burrows, Daniel
2004-01-01
We can expect to see an increase in the deployment of unmanned air and land vehicles for autonomous exploration of space. In order to maintain autonomous control of such systems, it is essential to track the current state of the system. When the system includes safety-critical components, failures or faults in the system must be diagnosed as quickly as possible, and their effects compensated for so that control and safety are maintained under a variety of fault conditions. The Livingstone fault diagnosis and recovery kernel and its temporal extension L2 are examples of model-based reasoning engines for health management. Livingstone has been shown to be effective; it is in demand and is being further developed. It was part of the successful Remote Agent demonstration on Deep Space One in 1999. It has been and is being utilized by several projects involving groups from various NASA centers, including the In Situ Propellant Production (ISPP) simulation at Kennedy Space Center, the X-34 and X-37 experimental reusable launch vehicle missions, Techsat-21, and advanced life support projects. Model-based and consistency-based diagnostic systems like Livingstone work only with discrete and finite domain models. When quantitative and continuous behaviors are involved, these are abstracted to discrete form using some mapping. This mapping from the quantitative domain to the qualitative domain is sometimes very involved and requires the design of highly sophisticated and complex monitors. We propose a diagnostic methodology that deals directly with quantitative models and behaviors, thereby removing the need for these sophisticated mappings. Our work brings together ideas from model-based diagnosis systems like Livingstone and concurrent constraint programming concepts. The system uses explanations derived from the propagation of quantitative constraints to generate conflicts. 
Fast conflict generation algorithms are used to generate and maintain multiple candidates whose consistency can be tracked across multiple time steps.
NASA Astrophysics Data System (ADS)
Taie Semiromi, M.; Koch, M.
2017-12-01
Although linear/regression statistical downscaling methods are very straightforward and widely used, and they can be applied to a single predictor-predictand pair or to spatial fields of predictors-predictands, their greatest constraint is the requirement of a normal distribution of the predictor and predictand values, which means they cannot be used to predict the distribution of daily rainfall because it is typically non-normal. To tackle this limitation, the current study introduces a newly developed hybrid technique that takes advantage of Artificial Neural Networks (ANNs), wavelets and Quantile Mapping (QM) for downscaling of daily precipitation for 10 rain-gauge stations located in the Gharehsoo River Basin, Iran. For daily precipitation downscaling, the study makes use of the Second Generation Canadian Earth System Model (CanESM2) developed by the Canadian Centre for Climate Modelling and Analysis (CCCma). Climate projections are available for three representative concentration pathways (RCPs), namely RCP 2.6, RCP 4.5 and RCP 8.5, up to 2100. In this regard, 26 National Centers for Environmental Prediction (NCEP) reanalysis large-scale variables with potential physical relationships to precipitation were selected as candidate predictors. Afterwards, predictor screening was conducted using correlation, partial correlation and explained variance between predictors and the predictand (precipitation). Depending on the rain-gauge station, two to three predictors were selected, whose decomposed detail (D) and approximation (A) components obtained from discrete wavelet analysis were fed as inputs to the neural networks. After downscaling of daily precipitation, bias correction was conducted using quantile mapping. Of the complete time series available (1978-2005), two thirds (1978-1996) were used for calibration of QM and the remainder (1997-2005) for validation. 
Results showed that the proposed hybrid method, supported by QM for bias correction, could quite satisfactorily simulate daily precipitation. Results also indicated that under all RCPs, precipitation will have decreased by roughly 12% by 2100, although the decrease will be smaller under RCP 8.5 than under RCP 4.5.
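The quantile mapping step used here for bias correction replaces each downscaled value with the observed calibration-period value at the same empirical quantile. A minimal empirical sketch; the paper's exact QM variant and any wet-day or tail treatment are not specified in the abstract:

```python
import numpy as np

def quantile_map(model_cal, obs_cal, model_values):
    """Empirical quantile mapping (sketch): find each raw value's quantile
    in the model's calibration-period distribution, then return the
    observed calibration-period value at that same quantile."""
    q = np.linspace(0.0, 1.0, 101)
    model_q = np.quantile(model_cal, q)   # model empirical CDF
    obs_q = np.quantile(obs_cal, q)       # observed empirical CDF
    probs = np.interp(model_values, model_q, q)
    return np.interp(probs, q, obs_q)
```

For a model that is uniformly biased low by a factor of two over the calibration period, the mapping doubles each raw value, i.e. it recovers the observed distribution.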
Constrained optimization for position calibration of an NMR field camera.
Chang, Paul; Nassirpour, Sahar; Eschelbach, Martin; Scheffler, Klaus; Henning, Anke
2018-07-01
Knowledge of the positions of field probes in an NMR field camera is necessary for monitoring the B0 field. The typical method of estimating these positions is by switching the gradients with known strengths and calculating the positions using the phases of the FIDs. We investigated improving the accuracy of estimating the probe positions and analyzed the effect of inaccurate estimations on field monitoring. The field probe positions were estimated by 1) assuming ideal gradient fields, 2) using measured gradient fields (including nonlinearities), and 3) using measured gradient fields with relative position constraints. The fields measured with the NMR field camera were compared to fields acquired using a dual-echo gradient recalled echo B0 mapping sequence. Comparisons were done for shim fields from second- to fourth-order shim terms. The position estimation was the most accurate when relative position constraints were used in conjunction with measured (nonlinear) gradient fields. The effect of more accurate position estimates was seen when compared to fields measured using a B0 mapping sequence (up to 10%-15% more accurate for some shim fields). The models acquired from the field camera are sensitive to noise due to the low number of spatial sample points. Position estimation of field probes in an NMR camera can be improved using relative position constraints and nonlinear gradient fields. Magn Reson Med 80:380-390, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
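The baseline estimation method, switching a gradient of known strength and reading each probe coordinate off the accrued FID phase, reduces under the ideal-gradient assumption (method 1 above) to a one-line relation, phi_i = gamma * G_i * r_i * t. A sketch with illustrative numbers; the constrained and nonlinear-gradient variants the paper favors require a measured field model and are not shown:

```python
import numpy as np

GAMMA = 2.675e8  # proton gyromagnetic ratio, rad s^-1 T^-1

def probe_position(phases, grad_amp, duration):
    """Ideal-gradient position estimate (sketch): the extra phase a probe's
    FID accrues while gradient axis i is switched on is
    phi_i = GAMMA * G_i * r_i * t, so each coordinate follows by division."""
    return np.asarray(phases) / (GAMMA * grad_amp * duration)

# Illustrative round trip: phases generated at a known probe position.
r_true = np.array([0.01, -0.02, 0.03])        # m
phases = GAMMA * 0.005 * r_true * 1e-3        # G = 5 mT/m, t = 1 ms
r_est = probe_position(phases, grad_amp=0.005, duration=1e-3)
```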
Sarro-Ramírez, Andrea; Sánchez, Daniel; Tejeda-Padrón, Alma; Buenfil-Canto, Linda Vianey; Valladares-García, Jorge; Pacheco-Pantoja, Elda; Arias-Carrión, Oscar; Murillo-Rodríguez, Eric
2016-01-01
Obesity is a world-wide health problem that requires different experimental perspectives to understand the onset of this disease, including the neurobiological basis of food selection. From a molecular perspective, obesity has been related to the activity of several endogenous molecules, including the mitogen-activated protein kinases (MAP-K). The aim of this study was to characterize MAP-K expression in hedonic and learning- and memory-associated brain areas, namely the nucleus accumbens (AcbC) and hippocampus (HIPP), after food selection. We show that animals fed a cafeteria diet for 14 days displayed an increase in p38 MAP-K activity in the AcbC if they chose cheese. Conversely, a decrease was observed in the AcbC of animals that preferred chocolate. A decrease in p38 MAP-K phosphorylation was also found in the HIPP of rats that selected either cheese or chocolate. Our data demonstrate a putative role of MAP-K expression in food selection. These findings advance our understanding of the neuromolecular basis engaged in obesity.
Development of echolocation calls and neural selectivity for echolocation calls in the pallid bat.
Razak, Khaleel A; Fuzessery, Zoltan M
2015-10-01
Studies of birdsongs and neural selectivity for songs have provided important insights into principles of concurrent behavioral and auditory system development. Relatively little is known about mammalian auditory system development in terms of vocalizations or other behaviorally relevant sounds. This review suggests echolocating bats are suitable mammalian model systems to understand development of auditory behaviors. The simplicity of echolocation calls with known behavioral relevance and strong neural selectivity provides a platform to address how natural experience shapes cortical receptive field (RF) mechanisms. We summarize recent studies in the pallid bat that followed development of echolocation calls and cortical processing of such calls. We also discuss similar studies in the mustached bat for comparison. These studies suggest: (1) there are different developmental sensitive periods for different acoustic features of the same vocalization. The underlying basis is the capacity for some components of the RF to be modified independently of others. Some RF computations and maps involved in call processing are present even before the cochlea is mature and well before use of echolocation in flight. Others develop over a much longer time course. (2) Normal experience is required not just for refinement, but also for maintenance, of response properties that develop in an experience-independent manner. (3) Experience utilizes millisecond-range changes in timing of inhibitory and excitatory RF components as substrates to shape vocalization selectivity. We suggest that bat species and call diversity provide a unique opportunity to address developmental constraints in the evolution of neural mechanisms of vocalization processing. © 2014 Wiley Periodicals, Inc.
Will, Jessica L.; Kim, Hyun Seok; Clarke, Jessica; Painter, John C.; Fay, Justin C.; Gasch, Audrey P.
2010-01-01
A major goal in evolutionary biology is to understand how adaptive evolution has influenced natural variation, but identifying loci subject to positive selection has been a challenge. Here we present the adaptive loss of a pair of paralogous genes in specific Saccharomyces cerevisiae subpopulations. We mapped natural variation in freeze-thaw tolerance to two water transporters, AQY1 and AQY2, previously implicated in freeze-thaw survival. However, whereas freeze-thaw–tolerant strains harbor functional aquaporin genes, the set of sensitive strains lost aquaporin function at least 6 independent times. Several genomic signatures at AQY1 and/or AQY2 reveal low variation surrounding these loci within strains of the same haplotype, but high variation between strain groups. This is consistent with recent adaptive loss of aquaporins in subgroups of strains, leading to incipient balancing selection. We show that, although aquaporins are critical for surviving freeze-thaw stress, loss of both genes provides a major fitness advantage on high-sugar substrates common to many strains' natural niche. Strikingly, strains with non-functional alleles have also lost the ancestral requirement for aquaporins during spore formation. Thus, the antagonistic effect of aquaporin function—providing an advantage in freeze-thaw tolerance but a fitness defect for growth in high-sugar environments—contributes to the maintenance of both functional and nonfunctional alleles in S. cerevisiae. This work also shows that gene loss through multiple missense and nonsense mutations, hallmarks of pseudogenization presumed to emerge after loss of constraint, can arise through positive selection. PMID:20369021
NASA Technical Reports Server (NTRS)
Price, Kevin P.; Nellis, M. Duane
1996-01-01
The purpose of this project was to develop a practical protocol that employs multitemporal remotely sensed imagery, integrated with environmental parameters to model and monitor agricultural and natural resources in the High Plains Region of the United States. The value of this project would be extended throughout the region via workshops targeted at carefully selected audiences and designed to transfer remote sensing technology and the methods and applications developed. Implementation of such a protocol using remotely sensed satellite imagery is critical for addressing many issues of regional importance, including: (1) Prediction of rural land use/land cover (LULC) categories within a region; (2) Use of rural LULC maps for successive years to monitor change; (3) Crop types derived from LULC maps as important inputs to water consumption models; (4) Early prediction of crop yields; (5) Multi-date maps of crop types to monitor patterns related to crop change; (6) Knowledge of crop types to monitor condition and improve prediction of crop yield; (7) More precise models of crop types and conditions to improve agricultural economic forecasts; (8) Prediction of biomass for estimating vegetation production, soil protection from erosion forces, nonpoint source pollution, wildlife habitat quality and other related factors; (9) Crop type and condition information to more accurately predict production of biogeochemicals such as CO2, CH4, and other greenhouse gases that are inputs to global climate models; (10) Provision of information regarding limiting factors (i.e., economic constraints of pumping, fertilizing, etc.) used in conjunction with other factors, such as changes in climate for predicting changes in rural LULC; (11) Accurate prediction of rural LULC used to assess the effectiveness of government programs such as the U.S. 
Soil Conservation Service (SCS) Conservation Reserve Program; and (12) Prediction of water demand based on rural LULC that can be related to rates of draw-down of underground water supplies.
Effective Scenarios for Exploring Asteroid Surfaces
NASA Astrophysics Data System (ADS)
Clark, Pamela E.; Clark, C.; Weisbin, C.
2010-10-01
In response to the proposal that asteroids be the next targets for exploration, we attempt to develop scenarios for exploring previously mapped asteroid 433 Eros, harnessing our recent experience gained planning such activity for return to the lunar surface. The challenges faced in planning Apollo led to the development of a baseline methodology for extraterrestrial field science. What 'lessons learned' can be applied for asteroids? Effective reconnaissance (advanced mapping at <0.5 m, photos with plotted routes as in-field reference maps), training/simulating/planning (highly interactive abundant field time for extended crew), and documentation (hands-free audio and visual systematic description) procedures are still valid. The use of Constant Scale Natural Boundary rather than standard projection maps eases the challenge of navigating and interpreting a highly irregular object. Lunar and asteroid surfaces are dominated by bombardment and space radiation/dust/charged particle/regolith interactions, with similar implications for sampling. Asteroid work stations are selected on the basis of impact-induced exposure of 'outcrops' from prominent ridges (e.g., Himeros, the noses) potentially representing underlying material, supplemented by sampling of areas of especially thin or deep regolith (ponds). Unlike the Moon, an asteroid lacks sufficient gravity and most likely the necessary stability to support 'normal' driving or walking. In fact, the crew delivery vehicle might not even be 'tetherable' and would most likely 'station keep' to maintain a position. The most convenient local mobility mechanism for astronauts/robots would be 'hand over hand' above the surface at a field station supplemented by a 'tetherless' (small rocket-pack) control system for changing station or return to vehicle. Thus, we assume similar mobility constraints (meters to hundreds of meters at a local station, kilometers between stations) as those used for Apollo. 
We also assume the vehicle could 'station keep' at more than one location separated by tens of kilometers.
Cost-driven materials selection criteria for redox flow battery electrolytes
NASA Astrophysics Data System (ADS)
Dmello, Rylan; Milshtein, Jarrod D.; Brushett, Fikile R.; Smith, Kyle C.
2016-10-01
Redox flow batteries show promise for grid-scale energy storage applications but are presently too expensive for widespread adoption. Electrolyte material costs constitute a sizeable fraction of the redox flow battery price. As such, this work develops a techno-economic model for redox flow batteries that accounts for redox-active material, salt, and solvent contributions to the electrolyte cost. Benchmark values for electrolyte constituent costs guide identification of design constraints. Nonaqueous battery design is sensitive to all electrolyte component costs, cell voltage, and area-specific resistance. Design challenges for nonaqueous batteries include minimizing salt content and dropping redox-active species concentration requirements. Aqueous battery design is sensitive to only redox-active material cost and cell voltage, due to low area-specific resistance and supporting electrolyte costs. Increasing cell voltage and decreasing redox-active material cost present major materials selection challenges for aqueous batteries. This work minimizes cost-constraining variables by mapping the battery design space with the techno-economic model, through which we highlight pathways towards low price and moderate concentration. Furthermore, the techno-economic model calculates quantitative iterations of battery designs to achieve the Department of Energy battery price target of $100 per kWh and highlights cost-cutting strategies to drive battery prices down further.
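The sort of bookkeeping such a techno-economic model performs for the electrolyte can be sketched as a chemical cost floor: dollars spent per mole of stored electrons divided by the energy one mole of electrons delivers at the cell voltage. All names and numbers below are illustrative assumptions; the published model adds reactor, area-specific-resistance, and other system terms omitted here:

```python
F = 96485.0  # Faraday constant, C/mol

def chemical_cost_per_kwh(cell_voltage, species):
    """Chemical cost floor in $/kWh (sketch): dollars per mole of stored
    electrons divided by the energy one mole of electrons delivers at the
    cell voltage. `species` maps each electrolyte component to
    (cost in $/kg, kg required per mole of electrons)."""
    kwh_per_mol_e = F * cell_voltage / 3.6e6        # 1 kWh = 3.6e6 J
    dollars_per_mol_e = sum(cost * kg for cost, kg in species.values())
    return dollars_per_mol_e / kwh_per_mol_e

# Illustrative aqueous-like chemistry (every figure here is an assumption).
price = chemical_cost_per_kwh(
    cell_voltage=1.5,
    species={"active": (5.0, 0.2), "salt": (3.0, 0.1), "solvent": (0.5, 1.0)})
```

The structure makes the abstract's sensitivities visible: raising the cell voltage grows the denominator, while cheaper or lighter active material shrinks the numerator.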
2012-01-01
Background An important question in the analysis of biochemical data is that of identifying subsets of molecular variables that may jointly influence a biological response. Statistical variable selection methods have been widely used for this purpose. In many settings, it may be important to incorporate ancillary biological information concerning the variables of interest. Pathway and network maps are one example of a source of such information. However, although ancillary information is increasingly available, it is not always clear how it should be used nor how it should be weighted in relation to primary data. Results We put forward an approach in which biological knowledge is incorporated using informative prior distributions over variable subsets, with prior information selected and weighted in an automated, objective manner using an empirical Bayes formulation. We employ continuous, linear models with interaction terms and exploit biochemically-motivated sparsity constraints to permit exact inference. We show an example of priors for pathway- and network-based information and illustrate our proposed method on both synthetic response data and by an application to cancer drug response data. Comparisons are also made to alternative Bayesian and frequentist penalised-likelihood methods for incorporating network-based information. Conclusions The empirical Bayes method proposed here can aid prior elicitation for Bayesian variable selection studies and help to guard against mis-specification of priors. Empirical Bayes, together with the proposed pathway-based priors, results in an approach with a competitive variable selection performance. In addition, the overall procedure is fast, deterministic, and has very few user-set parameters, yet is capable of capturing interplay between molecular players. The approach presented is general and readily applicable in any setting with multiple sources of biological prior knowledge. PMID:22578440
Fuzzy Multi-Objective Vendor Selection Problem with Modified S-CURVE Membership Function
NASA Astrophysics Data System (ADS)
Díaz-Madroñero, Manuel; Peidro, David; Vasant, Pandian
2010-06-01
In this paper, the S-curve membership function methodology is used in a vendor selection (VS) problem. An interactive method for solving multi-objective VS problems with fuzzy goals is developed. The proposed method attempts simultaneously to minimize the total order costs, the number of rejected items and the number of late delivered items, subject to several constraints such as meeting buyers' demand, vendors' capacity, vendors' quota flexibility, vendors' allocated budget, etc. In an industrial case, we compare the performance of S-curve membership functions, representing uncertainty in goals and constraints in VS problems, with that of linear membership functions.
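A commonly used parameterization of the modified S-curve membership function maps a fuzzy quantity from about 0.999 at its lower bound to about 0.001 at its upper bound, with vagueness controlled by a shape parameter. The constants below follow that common parameterization and are assumptions, since the paper's exact values are not given in the abstract:

```python
import math

# Constants often quoted for the modified S-curve (assumed here).
B, C, ALPHA = 1.0, 0.001001001, 13.813

def s_curve(x, lo, hi):
    """Modified S-curve membership function (sketch): degree of satisfaction
    of a fuzzy goal or constraint, falling smoothly from ~0.999 at the lower
    bound `lo` to ~0.001 at the upper bound `hi`."""
    if x < lo:
        return 0.999
    if x > hi:
        return 0.001
    t = (x - lo) / (hi - lo)          # normalize to [0, 1]
    return B / (1.0 + C * math.exp(ALPHA * t))
```

Compared with a linear membership function, the S-curve concentrates the drop in satisfaction in the middle of the tolerance interval, which is what makes it attractive for modeling vague goals.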
A Constraint-Based Planner for Data Production
NASA Technical Reports Server (NTRS)
Pang, Wanlin; Golden, Keith
2005-01-01
This paper presents a graph-based backtracking algorithm designed to support constraint-based planning in data production domains. This algorithm performs backtracking at two nested levels: the outer backtracking follows the structure of the planning graph to select planner subgoals and the actions to achieve them, while the inner backtracking searches inside a subproblem associated with a selected action to find action parameter values. We show this algorithm works well in a planner applied to automating data production in an ecological forecasting system. We also discuss how the idea of multi-level backtracking may improve the efficiency of solving semi-structured constraint problems.
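The two nested backtracking levels can be sketched generically: the outer level picks an action for each planner subgoal, the inner level searches for that action's parameter values, and an inner failure backtracks the outer choice. All callables and names here are illustrative stand-ins, not the planner's actual interfaces:

```python
def plan(subgoals, actions_for, param_domains, consistent):
    """Two-level backtracking (sketch): outer search selects an action per
    subgoal; inner search binds that action's parameters; if no consistent
    binding exists, the outer level tries the next candidate action."""
    def inner(action, params, bound):
        if len(bound) == len(params):
            return dict(bound)
        name = params[len(bound)]
        for value in param_domains[(action, name)]:
            trial = {**bound, name: value}
            if consistent(action, trial):
                result = inner(action, params, trial)
                if result is not None:
                    return result
        return None                      # inner backtrack

    def outer(i, chosen):
        if i == len(subgoals):
            return chosen
        for action, params in actions_for(subgoals[i]):
            binding = inner(action, params, {})
            if binding is not None:      # action grounded; go deeper
                result = outer(i + 1, chosen + [(action, binding)])
                if result is not None:
                    return result
        return None                      # outer backtrack

    return outer(0, [])
```

Keeping the parameter search inside a per-action subproblem is what lets an inner failure discard one action without rewinding choices made for earlier subgoals.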
User Guide for the Anvil Threat Corridor Forecast Tool V2.4 for AWIPS
NASA Technical Reports Server (NTRS)
Barett, Joe H., III; Bauman, William H., III
2008-01-01
The Anvil Tool GUI allows users to select a Data Type, toggle the map refresh on/off, place labels, and choose the Profiler Type (source of the KSC 50 MHz profiler data), the Date-Time of the data, the Center of Plot, and the Station (location of the RAOB or 50 MHz profiler). If the Data Type is Models, the user selects a Fcst Hour (forecast hour) instead of Station. There are menus for User Profiles, Circle Label Options, and Frame Label Options. Labels can be placed near the center circle of the plot and/or at a specified distance and direction from the center of the circle (Center of Plot). The default selection for the map refresh is "ON". When the user creates a new Anvil Tool map with Refresh Map "ON", the plot is automatically displayed in the AWIPS frame. If another Anvil Tool map is already displayed and the user does not change the existing map number shown at the bottom of the GUI, the new Anvil Tool map will overwrite the old one. If the user turns the Refresh Map "OFF", the new Anvil Tool map is created but not automatically displayed. The user can still display the Anvil Tool map through the Maps dropdown menu as shown in Figure 4.
Townsend, Erik M.; Schrock, Richard R.; Hoveyda, Amir H.
2012-01-01
Molybdenum or tungsten MAP complexes that contain OHIPT as the aryloxide (hexaisopropylterphenoxide) are effective catalysts for homocoupling of simple (E)-1,3-dienes to give (E,Z,E)-trienes in high yield and with high Z selectivities. A vinylalkylidene MAP species was shown to have the expected syn structure in an X-ray study. MAP catalysts that contain OHMT (hexamethylterphenoxide) are relatively inefficient. PMID:22734508
USGS ShakeMap Developments, Implementation, and Derivative Tools
NASA Astrophysics Data System (ADS)
Wald, D. J.; Lin, K.; Quitoriano, V.; Worden, B.
2007-12-01
We discuss ongoing development and enhancements of ShakeMap, a system for automatically generating maps of ground shaking and intensity in the minutes following an earthquake. The rapid availability of these maps is of particular value to emergency response organizations, utilities, insurance companies, government decision-makers, the media, and the general public. ShakeMap Version 3.2 was released in March 2007 on a download site that allows ShakeMap developers to track operators' updates and provide follow-up information; V3.2 has now been downloaded in 15 countries. The V3.2 release supports LINUX in addition to other UNIX operating systems and adds enhancements to XML, KML, metadata, and other products. We have also added an uncertainty measure, quantified as a function of spatial location. Uncertainty is essential for evaluating the range of possible losses. Though not released in V3.2, we will describe a new quantitative uncertainty letter grading for each ShakeMap produced, allowing users to gauge the appropriate level of confidence when using rapidly produced ShakeMaps as part of their post-earthquake critical decision-making process. Since the V3.2 release, several new ground motion prediction equations have also been added to the prediction equation modules. ShakeMap is implemented in several new regions, as reported in this Session. Within the U.S., robust systems serve California, Nevada, Utah, Washington and Oregon, Hawaii, and Anchorage. Additional systems are in development, and efforts to provide backup capabilities for all Advanced National Seismic System (ANSS) regions at the National Earthquake Information Center are underway. Outside the U.S., this Session has descriptions of ShakeMap systems in Italy, Switzerland, Romania, and Turkey, among other countries.
We also describe our predictive global ShakeMap system for the rapid evaluation of significant earthquakes globally for the Prompt Assessment of Global Earthquakes for Response (PAGER) system. These global ShakeMaps are constrained by rapidly gathered intensity data via the Internet and by finite fault and aftershock analyses for portraying fault rupture dimensions. As part of the PAGER loss calibration process we have produced an Atlas of ShakeMaps for significant earthquakes around the globe since 1973 (Allen and others, this Session); these Atlas events have additional constraints provided by archival strong motion, faulting dimensions, and macroseismic intensity data. We also describe derivative tools for further utilizing ShakeMap, including ShakeCast, a fully automated system for delivering specific ShakeMap products to critical users and triggering established post-earthquake response protocols. We have released ShakeCast Version 2.0 (Lin and others, this Session), which allows RSS feeds for automatically receiving ShakeMap files, auto-launching of post-download processing scripts, and delivery of notifications based on users' likely facility damage states derived from ShakeMap shaking parameters. As part of our efforts to produce estimated ShakeMaps globally, we have developed a procedure for deriving Vs30 estimates from correlations with topographic slope, and we have now implemented a global Vs30 Server, allowing users to generate Vs30 maps for custom user-selected regions around the globe (Allen and Wald, this Session). Finally, as a further derivative product of the ShakeMap Atlas project, we will present a shaking hazard map for the past 30 years based on approximately 3,900 ShakeMaps of historic earthquakes.
OxfordGrid: a web interface for pairwise comparative map views.
Yang, Hongyu; Gingle, Alan R
2005-12-01
OxfordGrid is a web application and database schema for storing and interactively displaying genetic map data in a comparative, dot-plot fashion. Its display is composed of a matrix of cells, each representing a pairwise comparison of mapped probe data for two linkage groups or chromosomes. These are arranged along the axes, with one forming the grid columns and the other the grid rows; the degree and pattern of synteny/colinearity between the two linkage groups is manifested in a cell's dot density and structure. A mouse click over a grid cell launches an image map-based display for the selected cell. Both individual and linear groups of mapped probes can be selected and displayed. Also, configurable links can be used to access other web resources for mapped probe information. OxfordGrid is implemented in C#/ASP.NET, and the package, including MySQL schema creation scripts, is available at ftp://cggc.agtec.uga.edu/OxfordGrid/.
Evolution of sparsity and modularity in a model of protein allostery
NASA Astrophysics Data System (ADS)
Hemery, Mathieu; Rivoire, Olivier
2015-04-01
The sequence of a protein is not only constrained by its physical and biochemical properties under current selection, but also by features of its past evolutionary history. Understanding the extent and the form that these evolutionary constraints may take is important to interpret the information in protein sequences. To study this problem, we introduce a simple but physical model of protein evolution where selection targets allostery, the functional coupling of distal sites on protein surfaces. This model shows how the geometrical organization of couplings between amino acids within a protein structure can depend crucially on its evolutionary history. In particular, two scenarios are found to generate a spatial concentration of functional constraints: high mutation rates and fluctuating selective pressures. This second scenario offers a plausible explanation for the high tolerance of natural proteins to mutations and for the spatial organization of their least tolerant amino acids, as revealed by sequence analysis and mutagenesis experiments. It also implies a faculty to adapt to new selective pressures that is consistent with observations. The model illustrates how several independent functional modules may emerge within the same protein structure, depending on the nature of past environmental fluctuations. Our model thus relates the evolutionary history of proteins to the geometry of their functional constraints, with implications for decoding and engineering protein sequences.
NASA Astrophysics Data System (ADS)
Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven
2017-04-01
Long-term hydrological data are key to understanding catchment behaviour and to decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes, using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular for rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject.
The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for a basin assumed to be ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
The evolution of mimicry under constraints.
Holen, Øistein Haugsten; Johnstone, Rufus A
2004-11-01
The resemblance between mimetic organisms and their models varies from near perfect to very crude. One possible explanation, which has received surprisingly little attention, is that evolution can improve mimicry only at some cost to the mimetic organism. In this article, an evolutionary game theory model of mimicry is presented that incorporates such constraints. The model generates novel and testable predictions. First, Batesian mimics that are very common and/or mimic very weakly defended models should evolve either inaccurate mimicry (by stabilizing selection) or mimetic polymorphism. Second, Batesian mimics that are very common and/or mimic very weakly defended models are more likely to evolve mimetic polymorphism if they encounter predators at high rates and/or are bad at evading predator attacks. The model also examines how cognitive constraints acting on signal receivers may help determine evolutionarily stable levels of mimicry. Surprisingly, improved discrimination abilities among signal receivers may sometimes select for less accurate mimicry.
Poverty of the stimulus revisited.
Berwick, Robert C; Pietroski, Paul; Yankama, Beracah; Chomsky, Noam
2011-01-01
A central goal of modern generative grammar has been to discover invariant properties of human languages that reflect "the innate schematism of mind that is applied to the data of experience" and that "might reasonably be attributed to the organism itself as its contribution to the task of the acquisition of knowledge" (Chomsky, 1971). Candidates for such invariances include the structure dependence of grammatical rules, and in particular, certain constraints on question formation. Various "poverty of stimulus" (POS) arguments suggest that these invariances reflect an innate human endowment, as opposed to common experience: Such experience warrants selection of the grammars acquired only if humans assume, a priori, that selectable grammars respect substantive constraints. Recently, several researchers have tried to rebut these POS arguments. In response, we illustrate why POS arguments remain an important source of support for appeal to a priori structure-dependent constraints on the grammars that humans naturally acquire. Copyright © 2011 Cognitive Science Society, Inc.
Fine mapping of RYMV3: a new resistance gene to Rice yellow mottle virus from Oryza glaberrima.
Pidon, Hélène; Ghesquière, Alain; Chéron, Sophie; Issaka, Souley; Hébrard, Eugénie; Sabot, François; Kolade, Olufisayo; Silué, Drissa; Albar, Laurence
2017-04-01
A new resistance gene against Rice yellow mottle virus was identified and mapped in a 15-kb interval. The best candidate is a CC-NBS-LRR gene. Rice yellow mottle virus (RYMV) disease is a serious constraint to the cultivation of rice in Africa and selection for resistance is considered to be the most effective management strategy. The aim of this study was to characterize the resistance of Tog5307, a highly resistant accession belonging to the African cultivated rice species (Oryza glaberrima), that has none of the previously identified resistance genes to RYMV. The specificity of Tog5307 resistance was analyzed using 18 RYMV isolates. While three of them were able to infect Tog5307 very rapidly, resistance against the others was effective despite infection events attributed to resistance-breakdown or incomplete penetrance of the resistance. Segregation of resistance in an interspecific backcross population derived from a cross between Tog5307 and the susceptible Oryza sativa variety IR64 showed that resistance is dominant and is controlled by a single gene, named RYMV3. RYMV3 was mapped in an approximately 15-kb interval in which two candidate genes, coding for a putative transmembrane protein and a CC-NBS-LRR domain-containing protein, were annotated. Sequencing revealed non-synonymous polymorphisms between Tog5307 and the O. glaberrima susceptible accession CG14 in both candidate genes. An additional resistant O. glaberrima accession, Tog5672, was found to have the Tog5307 genotype for the CC-NBS-LRR gene but not for the putative transmembrane protein gene. Analysis of the cosegregation of Tog5672 resistance with the RYMV3 locus suggests that RYMV3 is also involved in Tog5672 resistance, thereby supporting the CC-NBS-LRR gene as the best candidate for RYMV3.
The Earth Gravitational Observatory (EGO): Nanosat Constellations For Advanced Gravity Mapping
NASA Astrophysics Data System (ADS)
Yunck, T.; Saltman, A.; Bettadpur, S. V.; Nerem, R. S.; Abel, J.
2017-12-01
The trend to nanosats for space-based remote sensing is transforming system architectures: fleets of "cellular" craft scanning Earth with exceptional precision and economy. GeoOptics Inc. has been selected by NASA to develop a vision for that transition, with an initial focus on advanced gravity field mapping. Building on our spaceborne GNSS technology, we introduce innovations that will improve gravity mapping roughly tenfold over previous missions at a fraction of the cost. The power of EGO is realized in its N-satellite form, where all satellites in a cluster receive dual-frequency crosslinks from all other satellites, yielding N(N-1)/2 independent measurements. Twelve "cells" thus yield 66 independent links. Because the cells form a 2D arc with spacings ranging from 200 km to 3,000 km, EGO senses a wider range of gravity wavelengths and offers greater geometrical observing strength. The benefits are two-fold: improved time resolution enables observation of sub-seasonal processes, as from hydro-meteorological phenomena, and improved measurement quality enhances all gravity solutions. For the GRACE mission, key limitations arise from such spacecraft factors as long-term accelerometer error, attitude knowledge and thermal stability, which are largely independent from cell to cell. Data from a dozen cells reduce their impact by about 3x through the "root-N" averaging effect. Multi-cell closures improve on this further. The many closure paths among 12 cells provide strong constraints to correct for observed range changes not compatible with a gravity source, including accelerometer errors in measuring non-conservative forces. Perhaps more significantly from a science standpoint, system-level estimates with data from diverse orbits can attack the many scientifically limiting sources of temporal aliasing.
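The crosslink and averaging arithmetic quoted in this abstract follows from simple pair counting; a minimal sketch (function names are ours, for illustration only):

```python
def crosslinks(n_sats):
    # Each pair of satellites shares one dual-frequency crosslink,
    # giving n*(n-1)/2 independent range measurements.
    return n_sats * (n_sats - 1) // 2

def error_reduction(n_cells):
    # Cell-level errors that are independent from cell to cell average
    # down roughly as sqrt(N) -- the "root-N" effect in the abstract.
    return n_cells ** 0.5

print(crosslinks(12))                  # 66 independent links for twelve cells
print(round(error_reduction(12), 1))   # 3.5, consistent with the quoted ~3x
```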
Structural geology mapping using PALSAR data in the Bau gold mining district, Sarawak, Malaysia
NASA Astrophysics Data System (ADS)
Pour, Amin Beiranvand; Hashim, Mazlan
2014-08-01
The application of optical remote sensing data for geological mapping is difficult in the tropical environment. Persistent cloud coverage, a vegetation-dominated landscape and limited bedrock exposures are constraints imposed by the tropical climate. Structural geology investigations searching for epithermal or polymetallic vein-type ore deposits can be developed using Synthetic Aperture Radar (SAR) remote sensing data in tropical/sub-tropical regions. The Bau gold mining district in the State of Sarawak, East Malaysia, on the island of Borneo, was selected for this study. Bau is a gold field similar to Carlin-style gold deposits, but gold mineralization at Bau is much more structurally controlled. Geological analyses coupled with Phased Array type L-band Synthetic Aperture Radar (PALSAR) remote sensing data were used to detect structural elements associated with gold mineralization. The PALSAR data were used to perform lithological-structural mapping of mineralized zones in the study area and surrounding terrain. Structural elements were detected along the SSW to NNE trend of the Tuban fault zone and the Tai Parit fault, which corresponds to the areas of occurrence of gold mineralization in the Bau Limestone. Most of the quartz-gold-bearing veins occur in high-angle faults, fractures and joints within massive units of the Bau Limestone. The results show that four deformation events (D1-D4) in the structures of the Bau district and structurally controlled gold mineralization indicators, including faults, joints and fractures, are detectable using PALSAR data at both regional and district scales. The approach used in this study can be applied more broadly to provide preliminary information for exploration of potentially interesting areas of epithermal or polymetallic vein-type mineralization using PALSAR data in tropical/sub-tropical regions.
Mapping Interaction Sites on Human Chemokine Receptors by Deep Mutational Scanning.
Heredia, Jeremiah D; Park, Jihye; Brubaker, Riley J; Szymanski, Steven K; Gill, Kevin S; Procko, Erik
2018-06-01
Chemokine receptors CXCR4 and CCR5 regulate WBC trafficking and are engaged by the HIV-1 envelope glycoprotein gp120 during infection. We combine a selection of human CXCR4 and CCR5 libraries comprising nearly all of ∼7000 single amino acid substitutions with deep sequencing to define sequence-activity landscapes for surface expression and ligand interactions. After consideration of sequence constraints for surface expression, known interaction sites with HIV-1-blocking Abs were appropriately identified as conserved residues following library sorting for Ab binding, validating the use of deep mutational scanning to map functional interaction sites in G protein-coupled receptors. Chemokine CXCL12 was found to interact with residues extending asymmetrically into the CXCR4 ligand-binding cavity, similar to the binding surface of CXCR4 recognized by an antagonistic viral chemokine previously observed crystallographically. CXCR4 mutations distal from the chemokine binding site were identified that enhance chemokine recognition. This included disruptive mutations in the G protein-coupling site that diminished calcium mobilization, as well as conservative mutations to a membrane-exposed site (CXCR4 residues H79^2.45 and W161^4.50) that increased ligand binding without loss of signaling. Compared with CXCR4-CXCL12 interactions, CCR5 residues conserved for gp120 (HIV-1 BaL strain) interactions map to a more expansive surface, mimicking how the cognate chemokine CCL5 makes contacts across the entire CCR5 binding cavity. Acidic substitutions in the CCR5 N terminus and extracellular loops enhanced gp120 binding. This study demonstrates how comprehensive mutational scanning can define functional interaction sites on receptors, and novel mutations that enhance receptor activities can be found simultaneously. Copyright © 2018 by The American Association of Immunologists, Inc.
Constraints on the Energy Content of the Universe from a Combination of Galaxy Cluster Observables
NASA Technical Reports Server (NTRS)
Molnar, Sandor M.; Haiman, Zoltan; Birkinshaw, Mark; Mushotzky, Richard F.
2003-01-01
We demonstrate that constraints on cosmological parameters from the distribution of clusters as a function of redshift (dN/dz) are complementary to accurate angular diameter distance (D(sub A)) measurements to clusters, and their combination significantly tightens constraints on the energy density content of the Universe. The number counts can be obtained from X-ray and/or SZ (Sunyaev-Zel'dovich effect) surveys, and the angular diameter distances can be determined from deep observations of the intra-cluster gas using their thermal bremsstrahlung X-ray emission and the SZ effect. We combine constraints from simulated cluster number counts expected from a 12 deg(sup 2) SZ cluster survey and constraints from simulated angular diameter distance measurements based on the X-ray/SZ method, assuming a statistical accuracy of 10% in the angular diameter distance determination of 100 clusters with redshifts less than 1.5. We find that Omega(sub m) can be determined within about 25%, Omega(sub lambda) within 20%, and w within 16%. We show that combined dN/dz + D(sub A) constraints can be used to constrain the different energy densities in the Universe even in the presence of a few percent redshift-dependent systematic error in D(sub A). We also address the question of how best to select clusters of galaxies for accurate diameter distance determinations. We show that the joint dN/dz + D(sub A) constraints on cosmological parameters for a fixed target accuracy in the energy density parameters are optimized by selecting clusters with redshift upper cut-offs in the range of approximately 0.55 to 1. Subject headings: cosmological parameters - cosmology: theory - galaxies: clusters: general
Constraints faced by urban poor in managing diabetes care: patients' perspectives from South India.
Bhojani, Upendra; Mishra, Arima; Amruthavalli, Subramani; Devadasan, Narayanan; Kolsteren, Patrick; De Henauw, Stefaan; Criel, Bart
2013-10-03
Four out of five adults with diabetes live in low- and middle-income countries (LMIC). India has the second highest number of diabetes patients in the world. Despite a huge burden, diabetes care remains suboptimal. While patients (and families) play an important role in managing chronic conditions, there is a dearth of studies in LMIC, and virtually none in India, capturing the perspectives and experiences of patients with regard to diabetes care. The objective of this study was to better understand the constraints faced by patients from urban slums in managing care for type 2 diabetes in India. We conducted in-depth interviews, using a phenomenological approach, with 16 type 2 diabetes patients from a poor urban neighbourhood in South India. These patients were selected with the help of four community health workers (CHWs) and were interviewed by two trained researchers exploring patients' experiences of living with and seeking care for diabetes. The sampling followed the principle of saturation. Data were initially coded using the NVivo software. Emerging themes were periodically discussed among the researchers and were refined over time through an iterative process using a mind-mapping tool. Despite an abundance of healthcare facilities in the vicinity, diabetes patients faced several constraints in accessing healthcare, such as financial hardship, negative attitudes and inadequate communication by healthcare providers, and a fragmented healthcare service system offering inadequate care. Strongly defined gender-based family roles disadvantaged women by restricting their mobility and autonomy to access healthcare. The prevailing nuclear family structure and inter-generational conflicts limited support and care for elderly adults. There is a need to strengthen primary care services with a special focus on improving the availability and integration of health services for diabetes at the community level, enhancing patient-centredness and continuity in the delivery of care.
Our findings also point to the need to provide social services in conjunction with health services aiming at improving status of women and elderly in families and society.
Evaluation of an artificial intelligence guided inverse planning system: clinical case study.
Yan, Hui; Yin, Fang-Fang; Willett, Christopher
2007-04-01
An artificial intelligence (AI) guided method for parameter adjustment of inverse planning was implemented on a commercial inverse treatment planning system. For evaluation purposes, four typical clinical cases were tested, and the results of the plans achieved by the automated and manual methods were compared. The procedure of parameter adjustment consists of three major loops. Each loop is in charge of modifying parameters of one category, which is carried out by a specially customized fuzzy inference system. Multiple physician-prescribed constraints for a selected volume were adopted to account for the tradeoff between the prescription dose to the PTV and dose-volume constraints for critical organs. The search for an optimal parameter combination began with the first constraint and proceeded to the next until a plan with an acceptable dose was achieved. The initial setup of the plan parameters was the same for each case and was adjusted independently by both the manual and automated methods. After the parameters of one category were updated, the intensity maps of all fields were re-optimized and the plan dose was subsequently re-calculated. When the final plan was reached, dose statistics were calculated from both plans and compared. For the planning target volume (PTV), the dose for 95% of the volume is up to 10% higher in plans using the automated method than in those using the manual method. For critical organs, an average decrease of the plan dose was achieved. However, the automated method cannot improve the plan dose for some critical organs due to limitations of the inference rules currently employed. For normal tissue, there was no significant difference between plan doses achieved by either the automated or the manual method. With the application of the AI-guided method, the basic parameter adjustment task can be accomplished automatically, and a comparable plan dose was achieved relative to the manual method.
Future improvements to incorporate case-specific inference rules are essential to fully automate the inverse planning process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moresco, M.; Cimatti, A.; Jimenez, R.
2012-08-01
We present new improved constraints on the Hubble parameter H(z) in the redshift range 0.15 < z < 1.1, obtained from the differential spectroscopic evolution of early-type galaxies as a function of redshift. We extract a large sample of early-type galaxies (∼11000) from several spectroscopic surveys, spanning almost 8 billion years of cosmic lookback time (0.15 < z < 1.42). We select the most massive, red elliptical galaxies, passively evolving and without signatures of ongoing star formation. Those galaxies can be used as standard cosmic chronometers, as first proposed by Jimenez and Loeb (2002), whose differential age evolution as a function of cosmic time directly probes H(z). We analyze the 4000 Å break (D4000) as a function of redshift, use stellar population synthesis models to theoretically calibrate the dependence of the differential age evolution on the differential D4000, and estimate the Hubble parameter taking into account both statistical and systematic errors. We provide 8 new measurements of H(z), determined to a precision of 5-12%, mapping homogeneously the redshift range up to z ∼ 1.1; for the first time, we place a constraint on H(z) at z ≠ 0 with a precision comparable to the one achieved for the Hubble constant (about 5-6% at z ∼ 0.2), and cover a redshift range (0.5 < z < 0.8) which is crucial for distinguishing many different quintessence cosmologies. These measurements have been tested to best match a ΛCDM model, clearly providing a statistically robust indication that the Universe is undergoing an accelerated expansion. This method shows the potential to open a new avenue for constraining a variety of alternative cosmologies, especially when future surveys (e.g. Euclid) open the possibility to extend it up to z ∼ 2.
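The cosmic chronometer approach rests on the standard differential-age relation of FRW cosmology, summarized here for the reader's convenience rather than quoted from the paper:

```latex
% With scale factor a = 1/(1+z) and H = \dot{a}/a, differentiating
% z(t) gives dz/dt = -(1+z)\,H(z), so the Hubble parameter follows from
% the measured differential age \Delta t of passively evolving galaxies
% separated by a small redshift interval \Delta z:
H(z) \;=\; -\frac{1}{1+z}\,\frac{dz}{dt}
      \;\approx\; -\frac{1}{1+z}\,\frac{\Delta z}{\Delta t}
```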
Constraints and Suggestions in Adopting Seasonal Climate Forecasts by Farmers in South India
ERIC Educational Resources Information Center
Shankar, K. Ravi; Nagasree, K.; Venkateswarlu, B.; Maraty, Pochaiah
2011-01-01
The main objective of this study was to determine constraints and suggestions of farmers towards adopting seasonal climate forecasts. It addresses the question: Which forms of providing forecasts will be helpful to farmers in agricultural decision making? For the study, farmers were selected from Andhra Pradesh state of South India. One hundred…
Constrained spectral clustering under a local proximity structure assumption
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri; Xu, Qianjun; des Jardins, Marie
2005-01-01
This work focuses on incorporating pairwise constraints into a spectral clustering algorithm. A new constrained spectral clustering method is proposed, as well as an active constraint acquisition technique and a heuristic for parameter selection. We demonstrate that our constrained spectral clustering method, CSC, works well when the data exhibits what we term local proximity structure.
Dawn Maps the Surface Composition of Vesta
NASA Technical Reports Server (NTRS)
Prettyman, T.; Palmer, E.; Reedy, R.; Sykes, M.; Yingst, R.; McSween, H.; DeSanctis, M. C.; Capaccioni, F.; Capria, M. T.; Filacchione, G.;
2011-01-01
By 7 October 2011, the Dawn mission will have completed Survey orbit and commenced high-altitude mapping of 4 Vesta. We present a preliminary analysis of data acquired by Dawn's Framing Camera (FC) and the Visual and InfraRed Spectrometer (VIR) to map mineralogy and surface temperature, and to detect and quantify surficial OH. The radiometric calibration of VIR and FC is described. Background counting data acquired by GRaND are used to determine elemental detection limits from measurements at low altitude, which will commence in November. Geochemical models used in the interpretation of the data are described. Thermal properties and mineralogical and geochemical data are combined to provide constraints on Vesta's formation and thermal evolution, the delivery of exogenic materials, space weathering processes, and the origin of the howardite, eucrite, and diogenite (HED) meteorites.
Drawing out the Resistance Narrative via Mapping in "The Selected Works of T. S. Spivet"
ERIC Educational Resources Information Center
Hameed, Alya
2017-01-01
Though many children's texts include maps that visually demarcate their journeys, modern texts rarely involve active mapping by child characters themselves, suggesting that children cannot (or should not) conceptualise the world for themselves, but require an adult's guidance to traverse it. Reif Larsen's "The Selected Works of T. S.…
The research of selection model based on LOD in multi-scale display of electronic map
NASA Astrophysics Data System (ADS)
Zhang, Jinming; You, Xiong; Liu, Yingzhen
2008-10-01
This paper proposes a selection model based on LOD to aid the display of electronic maps. The ratio of display scale to map scale is regarded as the LOD operator. Rules for setting the LOD operator, covering categorization, classification, elementary rules and spatial geometry character rules, are also presented.
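The LOD operator described in this abstract is just the ratio of display scale to source map scale, and features can be gated on it. The following is a minimal sketch under our own assumptions; the feature structure, thresholds and rule form are illustrative, not taken from the paper:

```python
def lod_operator(display_scale, map_scale):
    # Scales expressed as fractions, e.g. 1:50,000 -> 1/50000.
    # The LOD operator is the ratio of display scale to map scale,
    # so zooming in (larger display scale) raises the operator value.
    return display_scale / map_scale

def select_features(features, display_scale, map_scale):
    # Keep only features whose minimum LOD does not exceed the current
    # operator value (this threshold rule is our assumption).
    lod = lod_operator(display_scale, map_scale)
    return [f for f in features if f["min_lod"] <= lod]

features = [
    {"name": "country_border", "min_lod": 0.1},  # visible even zoomed out
    {"name": "street",         "min_lod": 2.0},  # only when zoomed in
]

# Zoomed in: display scale 1:10,000 on a 1:50,000 source map -> LOD = 5.
print([f["name"] for f in select_features(features, 1 / 10_000, 1 / 50_000)])
# Zoomed out: display scale 1:100,000 -> LOD = 0.5, streets are dropped.
print([f["name"] for f in select_features(features, 1 / 100_000, 1 / 50_000)])
```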
Genetic constraints predict evolutionary divergence in Dalechampia blossoms
Bolstad, Geir H.; Hansen, Thomas F.; Pélabon, Christophe; Falahati-Anbaran, Mohsen; Pérez-Barrales, Rocío; Armbruster, W. Scott
2014-01-01
If genetic constraints are important, then rates and direction of evolution should be related to trait evolvability. Here we use recently developed measures of evolvability to test the genetic constraint hypothesis with quantitative genetic data on floral morphology from the Neotropical vine Dalechampia scandens (Euphorbiaceae). These measures were compared against rates of evolution and patterns of divergence among 24 populations in two species in the D. scandens species complex. We found clear evidence for genetic constraints, particularly among traits that were tightly phenotypically integrated. This relationship between evolvability and evolutionary divergence is puzzling, because the estimated evolvabilities seem too large to constitute real constraints. We suggest that this paradox can be explained by a combination of weak stabilizing selection around moving adaptive optima and small realized evolvabilities relative to the observed additive genetic variance. PMID:25002700
Garadat, Soha N.; Zwolan, Teresa A.; Pfingst, Bryan E.
2013-01-01
Previous studies in our laboratory showed that temporal acuity as assessed by modulation detection thresholds (MDTs) varied across activation sites and that this site-to-site variability was subject specific. Using two 10-channel MAPs, the previous experiments showed that processor MAPs that had better across-site mean (ASM) MDTs yielded better speech recognition than MAPs with poorer ASM MDTs tested in the same subject. The current study extends our earlier work on developing more optimal fitting strategies to test the feasibility of using a site-selection approach in the clinical domain. This study examined the hypothesis that revising the clinical speech processor MAP for cochlear implant (CI) recipients by turning off selected sites that have poorer temporal acuity and reallocating frequencies to the remaining electrodes would lead to improved speech recognition. Twelve CI recipients participated in the experiments. We found that a site-selection procedure based on MDTs in the presence of a masker resulted in improved performance on consonant recognition and recognition of sentences in noise. In contrast, vowel recognition was poorer with the experimental MAP than with the clinical MAP, possibly due to reduced spectral resolution when sites were removed from the experimental MAP. Overall, these results suggest a promising path for improving recipient outcomes using personalized processor-fitting strategies based on a psychophysical measure of temporal acuity. PMID:23881208
Robust and Accurate Image-Based Georeferencing Exploiting Relative Orientation Constraints
NASA Astrophysics Data System (ADS)
Cavegn, S.; Blaser, S.; Nebiker, S.; Haala, N.
2018-05-01
Urban environments with extended areas of poor GNSS coverage as well as indoor spaces that often rely on real-time SLAM algorithms for camera pose estimation require sophisticated georeferencing in order to fulfill our high requirements of a few centimeters for absolute 3D point measurement accuracies. Since we focus on image-based mobile mapping, we extended the structure-from-motion pipeline COLMAP with georeferencing capabilities by integrating exterior orientation parameters from direct sensor orientation or SLAM as well as ground control points into bundle adjustment. Furthermore, we exploit constraints for relative orientation parameters among all cameras in bundle adjustment, which leads to a significant robustness and accuracy increase especially by incorporating highly redundant multi-view image sequences. We evaluated our integrated georeferencing approach on two data sets, one captured outdoors by a vehicle-based multi-stereo mobile mapping system and the other captured indoors by a portable panoramic mobile mapping system. We obtained mean RMSE values for check point residuals between image-based georeferencing and tachymetry of 2 cm in an indoor area, and 3 cm in an urban environment where the measurement distances are a multiple compared to indoors. Moreover, in comparison to a solely image-based procedure, our integrated georeferencing approach showed a consistent accuracy increase by a factor of 2-3 at our outdoor test site. Due to pre-calibrated relative orientation parameters, images of all camera heads were oriented correctly in our challenging indoor environment. By performing self-calibration of relative orientation parameters among respective cameras of our vehicle-based mobile mapping system, remaining inaccuracies from suboptimal test field calibration were successfully compensated.
Microprobe monazite geochronology: new techniques for dating deformation and metamorphism
NASA Astrophysics Data System (ADS)
Williams, M.; Jercinovic, M.; Goncalves, P.; Mahan, K.
2003-04-01
High-resolution compositional mapping, age mapping, and precise dating of monazite on the electron microprobe are powerful additions to microstructural and petrologic analysis and important tools for tectonic studies. The in-situ nature and high spatial resolution of the technique offer an entirely new level of structurally and texturally specific geochronologic data that can be used to put absolute time constraints on P-T-D paths, constrain the rates of sedimentary, metamorphic, and deformational processes, and provide new links between metamorphism and deformation. New analytical techniques (including background modeling, sample preparation, and interference analysis) have significantly improved the precision and accuracy of the technique and new mapping and image analysis techniques have increased the efficiency and strengthened the correlation with fabrics and textures. Microprobe geochronology is particularly applicable to three persistent microstructural-microtextural problem areas: (1) constraining the chronology of metamorphic assemblages; (2) constraining the timing of deformational fabrics; and (3) interpreting other geochronological results. In addition, authigenic monazite can be used to date sedimentary basins, and detrital monazite can fingerprint sedimentary source areas, both critical for tectonic analysis. Although some monazite generations can be directly tied to metamorphism or deformation, at present, the most common constraints rely on monazite inclusion relations in porphyroblasts that, in turn, can be tied to the deformation and/or metamorphic history. Examples will be presented from deep-crustal rocks of northern Saskatchewan and from mid-crustal rocks from the southwestern USA. Microprobe monazite geochronology has been used in both regions to deconvolute overprinting deformation and metamorphic events and to clarify the interpretation of other geochronologic data. 
Microprobe mapping and dating are powerful companions to mass spectroscopic dating techniques. They allow geochronology to be incorporated into the microstructural analytical process, resulting in a new level of integration of time (t) into P-T-D histories.
Wordform Similarity Increases with Semantic Similarity: An Analysis of 100 Languages
ERIC Educational Resources Information Center
Dautriche, Isabelle; Mahowald, Kyle; Gibson, Edward; Piantadosi, Steven T.
2017-01-01
Although the mapping between form and meaning is often regarded as arbitrary, there are in fact well-known constraints on words which are the result of functional pressures associated with language use and its acquisition. In particular, languages have been shown to encode meaning distinctions in their sound properties, which may be important for…
Narrative Inquiry as Travel Study Method: Affordances and Constraints
ERIC Educational Resources Information Center
Craig, Cheryl J.; Zou, Yali; Poimbeauf, Rita
2014-01-01
This article maps how narrative inquiry--the use of story to study human experience--has been employed as both method and form to capture cross-cultural learning associated with Western doctoral students' travel study to eastern destinations. While others were the first to employ this method in the travel study domain, we are the first to…
ERIC Educational Resources Information Center
Cordova, Ralph A.; Matthiesen, Amanda L.
2010-01-01
An inner-city second grade teacher-researcher and her university-based partner examine how she and her inner-city second graders learned to resist and expand the constraints of their mandated, scripted reading curriculum by exploring their neighborhood and classrooms as communities spaces for literacies learning. Drawing on an interactional…
Ontology-Based Adaptive Dynamic e-Learning Map Planning Method for Conceptual Knowledge Learning
ERIC Educational Resources Information Center
Chen, Tsung-Yi; Chu, Hui-Chuan; Chen, Yuh-Min; Su, Kuan-Chun
2016-01-01
E-learning improves the shareability and reusability of knowledge, and surpasses the constraints of time and space to achieve remote asynchronous learning. Since the depth of learning content often varies, it is thus often difficult to adjust materials based on the individual levels of learners. Therefore, this study develops an ontology-based…
LF "Wh"-Movement and Its Locality Constraints in Child Japanese
ERIC Educational Resources Information Center
Sugisaki, Koji
2012-01-01
In natural languages, the mapping from surface form to meaning is often quite complex, and hence the acquisition of the phenomena at the boundary between syntax and semantics has been one of the central issues in current acquisition research. This study addresses the issue of whether children have adult-like knowledge of LF "wh"-movement and its…
Freedom of Speech on Campus: Rights and Responsibilities in UK Universities
ERIC Educational Resources Information Center
Universities UK, 2011
2011-01-01
This report considers the role of universities in promoting academic freedom and freedom of speech, and some of the constraints surrounding these freedoms. These issues are not straightforward and are often contested. The report does not offer easy solutions or absolute rules but seeks to map out the different considerations that might need to be…
NASA Astrophysics Data System (ADS)
Hiremath, Varun; Pope, Stephen B.
2013-04-01
The Rate-Controlled Constrained-Equilibrium (RCCE) method is a thermodynamics-based dimension-reduction method which enables representation of chemistry involving n_s species in terms of fewer n_r constraints. Here we focus on the application of the RCCE method to Lagrangian particle probability density function based computations. In these computations, at every reaction fractional step, given the initial particle composition (represented using RCCE), we need to compute the reaction mapping, i.e. the particle composition at the end of the time step. In this work we study three different implementations of RCCE for computing this reaction mapping, and compare their relative accuracy and efficiency. These implementations include: (1) RCCE/TIFS (Trajectory In Full Space): this involves solving a system of n_s rate-equations for all the species in the full composition space to obtain the reaction mapping. The other two implementations obtain the reaction mapping by solving a reduced system of n_r rate-equations obtained by projecting the n_s rate-equations for species evaluated in the full space onto the constrained subspace. These implementations include (2) RCCE: this is the classical implementation of RCCE which uses a direct projection of the rate-equations for species onto the constrained subspace; and (3) RCCE/RAMP (Reaction-mixing Attracting Manifold Projector): this is a new implementation introduced here which uses an alternative projector obtained using the RAMP approach. We test these three implementations of RCCE for methane/air premixed combustion in the partially-stirred reactor with chemistry represented using the n_s = 31 species GRI-Mech 1.2 mechanism with n_r = 13 to 19 constraints.
We show that: (a) the classical RCCE implementation involves an inaccurate projector which yields large errors (over 50%) in the reaction mapping; (b) both RCCE/RAMP and RCCE/TIFS approaches yield significantly lower errors (less than 2%); and (c) overall the RCCE/TIFS approach is the most accurate, efficient (by orders of magnitude) and robust implementation.
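The direct-projection step described above might be sketched as follows. A minimal illustration under stated assumptions: a constraint matrix B of size n_s x n_r whose transpose maps full-space species rates onto the n_r constrained rates; the matrix and rate values here are illustrative, not from the paper.

```python
def constrained_rates(B, S):
    # Project full-space species rates S (length n_s) onto the constrained
    # subspace via the transpose of the constraint matrix B (n_s x n_r):
    #   dr/dt = B^T S
    # B is a list of n_s rows, each with n_r entries.
    n_r = len(B[0])
    return [sum(B[i][j] * S[i] for i in range(len(S))) for j in range(n_r)]
```

Each constrained rate is simply a weighted sum of species rates, which is why the choice of projector (classical RCCE versus RAMP) matters: it decides how errors in the full-space rates propagate into the reduced system.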
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, C.; Pujol, A.; Gaztañaga, E.
We measure the redshift evolution of galaxy bias for a magnitude-limited galaxy sample by combining the galaxy density maps and weak lensing shear maps for a ~116 deg² area of the Dark Energy Survey (DES) Science Verification (SV) data. This method was first developed in Amara et al. and later re-examined in a companion paper with rigorous simulation tests and analytical treatment of tomographic measurements. In this work we apply this method to the DES SV data and measure the galaxy bias for an i < 22.5 galaxy sample. We find the galaxy bias and 1σ error bars in four photometric redshift bins to be 1.12 ± 0.19 (z = 0.2–0.4), 0.97 ± 0.15 (z = 0.4–0.6), 1.38 ± 0.39 (z = 0.6–0.8), and 1.45 ± 0.56 (z = 0.8–1.0). These measurements are consistent at the 2σ level with measurements on the same data set using galaxy clustering and cross-correlation of galaxies with cosmic microwave background lensing, with most of the redshift bins consistent within the 1σ error bars. In addition, our method provides the only σ8-independent constraint among the three. We forward model the main observational effects using mock galaxy catalogues by including shape noise, photo-z errors, and masking effects. We show that our bias measurement from the data is consistent with that expected from simulations. With the forthcoming full DES data set, we expect this method to provide additional constraints on the galaxy bias measurement from more traditional methods. Moreover, in the process of our measurement, we build up a 3D mass map that allows further exploration of the dark matter distribution and its relation to galaxy evolution.
2016-04-15
Constraints on the Energy Density Content of the Universe Using Only Clusters of Galaxies
NASA Technical Reports Server (NTRS)
Molnar, Sandor M.; Haiman, Zoltan; Birkinshaw, Mark
2003-01-01
We demonstrate that it is possible to constrain the energy content of the Universe with high accuracy using observations of clusters of galaxies only. The degeneracies in the cosmological parameters are lifted by combining constraints from different observables of galaxy clusters. We show that constraints on cosmological parameters from galaxy cluster number counts as a function of redshift and accurate angular diameter distance measurements to clusters are complementary to each other, and their combination can constrain the energy density content of the Universe well. The number counts can be obtained from X-ray and/or SZ (Sunyaev-Zel'dovich effect) surveys; the angular diameter distances can be determined from deep observations of the intra-cluster gas using their thermal bremsstrahlung X-ray emission and the SZ effect (X-SZ method). In this letter we combine constraints from simulated cluster number counts expected from a 12 deg² SZ cluster survey with constraints from simulated angular diameter distance measurements based on the X-SZ method, assuming an expected accuracy of 7% in the angular diameter distance determination of 70 clusters with redshifts less than 1.5. We find that Ω_M can be determined within about 25%, Ω_Λ within 20%, and w within 16%. Any cluster survey can be used to select clusters for high-accuracy distance measurements, but we assumed accurate angular diameter distance measurements for only 70 clusters, since long observations are necessary to achieve high accuracy in distance measurements. Thus the question naturally arises: how should clusters of galaxies be selected for accurate angular diameter distance determinations? In this letter, as an example, we demonstrate that it is possible to optimize this selection by changing the number of clusters observed and the upper cut-off of their redshift range.
We show that constraints on cosmological parameters from combining cluster number counts and angular diameter distance measurements, contrary to general expectations, will not improve substantially when clusters with redshifts higher than one are selected. This important conclusion allows us to restrict our cluster sample to clusters closer than redshift one, a range in which the observational time required for accurate distance measurements is more manageable. Subject headings: cosmological parameters - cosmology: theory - galaxies: clusters: general - X-rays: galaxies: clusters
Using perceptual rules in interactive visualization
NASA Astrophysics Data System (ADS)
Rogowitz, Bernice E.; Treinish, Lloyd A.
1994-05-01
In visualization, data are represented as variations in grayscale, hue, shape, and texture. They can be mapped to lines, surfaces, and glyphs, and can be represented statically or in animation. In modern visualization systems, the choices for representing data seem unlimited. This is both a blessing and a curse, however, since the visual impression created by the visualization depends critically on which dimensions are selected for representing the data (Bertin, 1967; Tufte, 1983; Cleveland, 1991). In modern visualization systems, the user can interactively select many different mapping and representation operations, including processing operations (e.g., applying a color map), realization operations (e.g., generating geometric structures such as contours or streamlines), and rendering operations (e.g., shading or ray-tracing). The user can, for example, map data to a color map, then apply contour lines, then shift the viewing angle, then change the color map again, etc. In many systems, the user can vary the choices for each operation, selecting, for example, particular color maps, contour characteristics, and shading techniques. The hope is that this process will eventually converge on a visual representation which expresses the structure of the data and effectively communicates its message in a way that meets the user's goals. Sometimes, however, it results in visual representations which are confusing, misleading, and garish.
Constraining the ensemble Kalman filter for improved streamflow forecasting
NASA Astrophysics Data System (ADS)
Maxwell, Deborah H.; Jackson, Bethanna M.; McGregor, James
2018-05-01
Data assimilation techniques such as the Ensemble Kalman Filter (EnKF) are often applied to hydrological models with minimal state volume/capacity constraints enforced during ensemble generation. Flux constraints are rarely, if ever, applied. Consequently, model states can be adjusted beyond physically reasonable limits, compromising the integrity of model output. In this paper, we investigate the effect of constraining the EnKF on forecast performance. A "free run" in which no assimilation is applied is compared to a completely unconstrained EnKF implementation, a 'typical' hydrological implementation (in which mass constraints are enforced to ensure non-negativity and capacity thresholds of model states are not exceeded), and then to a more tightly constrained implementation where flux as well as mass constraints are imposed to force the rate of water movement to/from ensemble states to be within physically consistent boundaries. A three year period (2008-2010) was selected from the available data record (1976-2010). This was specifically chosen as it had no significant data gaps and represented well the range of flows observed in the longer dataset. Over this period, the standard implementation of the EnKF (no constraints) contained eight hydrological events where (multiple) physically inconsistent state adjustments were made. All were selected for analysis. Mass constraints alone did little to improve forecast performance; in fact, several were significantly degraded compared to the free run. In contrast, the combined use of mass and flux constraints significantly improved forecast performance in six events relative to all other implementations, while the remaining two events showed no significant difference in performance. Placing flux as well as mass constraints on the data assimilation framework encourages physically consistent state estimation and results in more accurate and reliable forward predictions of streamflow for robust decision-making. 
We also experiment with the observation error, which has a profound effect on filter performance. We note an interesting tension exists between specifying an error which reflects known uncertainties and errors in the measurement versus an error that allows "optimal" filter updating.
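The mass and flux constraints described above might be sketched, in heavily simplified form, as a post-update correction applied to each ensemble member. The store capacities, flux bound, and time step here are hypothetical placeholders, not values from the study.

```python
def constrain_state(prior, updated, capacity, max_flux, dt):
    # Post-update constraints for one ensemble member:
    # 1) flux constraint: bound the rate of change implied by the EnKF update
    # 2) mass constraint: clamp each store to [0, capacity]
    constrained = []
    for x0, x1, cap in zip(prior, updated, capacity):
        flux = (x1 - x0) / dt
        flux = max(-max_flux, min(max_flux, flux))  # physically plausible flux
        x = x0 + flux * dt
        constrained.append(max(0.0, min(cap, x)))   # non-negativity, capacity
    return constrained
```

The point of the flux check is that a mass-only scheme would accept any state inside [0, capacity] even if reaching it required an impossibly fast transfer of water between timesteps.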
König, S; Tsehay, F; Sitzenstock, F; von Borstel, U U; Schmutz, M; Preisinger, R; Simianer, H
2010-04-01
Due to consistent increases in inbreeding, averaging 0.95% per generation in layer populations, selection tools should consider both genetic gain and genetic relationships in the long term. The optimum genetic contribution theory using official estimated breeding values for egg production was applied to 3 different lines of a layer breeding program to find the optimal allocations of hens and sires. Constraints in different scenarios encompassed restrictions related to additive genetic relationships, the increase of inbreeding, the number of selected sires and hens, and the number of selected offspring per mating. All these constraints enabled higher genetic gain up to 10.9% at the same level of additive genetic relationships, or lower relationships at the same gain, when compared with conventional selection schemes ignoring relationships. Increases of inbreeding and genetic gain were associated with the number of selected sires. For the lowest level of the allowed average relationship at 10%, the optimal number of sires was 70 and the estimated breeding value for egg production of the selected group was 127.9. At the highest relationship constraint (16%), the optimal number of sires decreased to 15, and the average genetic value increased to 139.7. Contributions from selected sires and hens were used to develop specific mating plans to minimize inbreeding in the following generation by applying a simulated annealing algorithm. The additional reduction of average additive genetic relationships for matings was up to 44.9%. An innovative deterministic approach to estimate kinship coefficients between and within defined selection groups based on gene flow theory was applied to compare increases of inbreeding from random matings with layer populations undergoing selection. Large differences in rates of inbreeding were found, and they underline the necessity to establish selection tools controlling long-term relationships.
Furthermore, it was suggested to use optimum genetic contribution theory for conservation schemes or, for example, the experimental line in our study.
A Higher Harmonic Optimal Controller to Optimise Rotorcraft Aeromechanical Behaviour
NASA Technical Reports Server (NTRS)
Leyland, Jane Anne
1996-01-01
Three methods to optimize rotorcraft aeromechanical behavior for those cases where the rotorcraft plant can be adequately represented by a linear model system matrix were identified and implemented in a stand-alone code. These methods determine the optimal control vector which minimizes the vibration metric subject to constraints at discrete time points, and differ from the commonly used non-optimal constraint penalty methods such as those employed by conventional controllers in that the constraints are handled as actual constraints to an optimization problem rather than as just additional terms in the performance index. The first method is to use a Non-linear Programming algorithm to solve the problem directly. The second method is to solve the full set of non-linear equations which define the necessary conditions for optimality. The third method is to solve each of the possible reduced sets of equations defining the necessary conditions for optimality when the constraints are pre-selected to be either active or inactive, and then to simply select the best solution. The effects of maneuvers and aeroelasticity on the systems matrix are modelled by using a pseudo-random pseudo-row-dependency scheme to define the systems matrix. Cases run to date indicate that the first method of solution is reliable, robust, easiest to use, and superior to the conventional controllers considered.
A virtual pebble game to ensemble average graph rigidity.
González, Luis C; Wang, Hui; Livesay, Dennis R; Jacobs, Donald J
2015-01-01
The body-bar Pebble Game (PG) algorithm is commonly used to calculate network rigidity properties in proteins and polymeric materials. To account for fluctuating interactions such as hydrogen bonds, an ensemble of constraint topologies is sampled, and average network properties are obtained by averaging PG characterizations. At a simpler level of sophistication, Maxwell constraint counting (MCC) provides a rigorous lower bound for the number of internal degrees of freedom (DOF) within a body-bar network, and it is commonly employed to test if a molecular structure is globally under-constrained or over-constrained. MCC is a mean field approximation (MFA) that ignores spatial fluctuations of distance constraints by replacing the actual molecular structure by an effective medium that has distance constraints globally distributed with perfect uniform density. The Virtual Pebble Game (VPG) algorithm is a MFA that retains spatial inhomogeneity in the density of constraints on all length scales. Network fluctuations due to distance constraints that may be present or absent based on binary random dynamic variables are suppressed by replacing all possible constraint topology realizations with the probabilities that distance constraints are present. The VPG algorithm is isomorphic to the PG algorithm, where integers for counting "pebbles" placed on vertices or edges in the PG map to real numbers representing the probability to find a pebble. In the VPG, edges are assigned pebble capacities, and pebble movements become a continuous flow of probability within the network. Comparisons between the VPG and average PG results over a test set of proteins and disordered lattices demonstrate the VPG quantitatively estimates the ensemble average PG results well. The VPG performs about 20% faster than one PG, and it provides a pragmatic alternative to averaging PG rigidity characteristics over an ensemble of constraint topologies.
The utility of the VPG falls in between the most accurate but slowest method of ensemble averaging over hundreds to thousands of independent PG runs, and the fastest but least accurate MCC.
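Maxwell constraint counting, the fastest method in the spectrum described above, reduces to simple arithmetic. A minimal sketch for a body-bar network, assuming 6 DOF per rigid body, at most one DOF removed per bar, and 6 global rigid-body motions subtracted:

```python
def maxwell_count(n_bodies, n_bars):
    # Maxwell constraint counting: mean-field lower bound on the number of
    # internal DOF of a body-bar network. Each body carries 6 DOF, each bar
    # removes at most 1, and 6 DOF of overall rigid-body motion are subtracted.
    return max(0, 6 * n_bodies - n_bars - 6)
```

Because redundant bars remove nothing in reality but are still subtracted here, the count is only a lower bound, which is exactly the spatial-fluctuation information the VPG is designed to recover.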
Rapid independent trait evolution despite a strong pleiotropic genetic correlation.
Conner, Jeffrey K; Karoly, Keith; Stewart, Christy; Koelling, Vanessa A; Sahli, Heather F; Shaw, Frank H
2011-10-01
Genetic correlations are the most commonly studied of all potential constraints on adaptive evolution. We present a comprehensive test of constraints caused by genetic correlation, comparing empirical results to predictions from theory. The additive genetic correlation between the filament and the corolla tube in wild radish flowers is very high in magnitude, is estimated with good precision (0.85 ± 0.06), and is caused by pleiotropy. Thus, evolutionary changes in the relative lengths of these two traits should be constrained. Still, artificial selection produced rapid evolution of these traits in opposite directions, so that in one replicate relative to controls, the difference between them increased by six standard deviations in only nine generations. This would result in a 54% increase in relative fitness on the basis of a previous estimate of natural selection in this population, and it would produce the phenotypes found in the most extreme species in the family Brassicaceae in less than 100 generations. These responses were within theoretical expectations and were much slower than if the genetic correlation was zero; thus, there was evidence for constraint. These results, coupled with comparable results from other species, show that evolution can be rapid despite the constraints caused by genetic correlations.
On Quantile Regression in Reproducing Kernel Hilbert Spaces with Data Sparsity Constraint
Zhang, Chong; Liu, Yufeng; Wu, Yichao
2015-01-01
For spline regressions, it is well known that the choice of knots is crucial for the performance of the estimator. As a general learning framework covering the smoothing splines, learning in a Reproducing Kernel Hilbert Space (RKHS) has a similar issue. However, the selection of training data points for kernel functions in the RKHS representation has not been carefully studied in the literature. In this paper we study quantile regression as an example of learning in a RKHS. In this case, the regular squared norm penalty does not perform training data selection. We propose a data sparsity constraint that imposes thresholding on the kernel function coefficients to achieve a sparse kernel function representation. We demonstrate that the proposed data sparsity method can have competitive prediction performance for certain situations, and have comparable performance in other cases compared to that of the traditional squared norm penalty. Therefore, the data sparsity method can serve as a competitive alternative to the squared norm penalty method. Some theoretical properties of our proposed method using the data sparsity constraint are obtained. Both simulated and real data sets are used to demonstrate the usefulness of our data sparsity constraint. PMID:27134575
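The loss underlying quantile regression can be written out directly. This is the standard check ("pinball") loss, not the paper's RKHS estimator or its data-sparsity penalty:

```python
def pinball_loss(y_true, y_pred, tau):
    # Check (pinball) loss whose population minimizer is the tau-th
    # conditional quantile: tau * r for under-prediction (r >= 0),
    # (tau - 1) * r for over-prediction.
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        r = yt - yp
        total += tau * r if r >= 0 else (tau - 1.0) * r
    return total / len(y_true)
```

The asymmetry is the point: for tau = 0.9, under-predicting costs nine times as much as over-predicting, pushing the fitted function up toward the 90th percentile.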
Assembly flow simulation of a radar
NASA Technical Reports Server (NTRS)
Rutherford, W. C.; Biggs, P. M.
1994-01-01
A discrete event simulation model has been developed to predict the assembly flow time of a new radar product. The simulation was the key tool employed to identify flow constraints. The radar, production facility, and equipment complement were designed, arranged, and selected to provide the most manufacturable assembly possible. A goal was to reduce the assembly and testing cycle time from twenty-six weeks to six weeks. A computer software simulation package (SLAM 2) was utilized as the foundation for simulating the assembly flow time. FORTRAN subroutines were incorporated into the software to deal with unique flow circumstances that were not accommodated by the software. Detailed information relating to the assembly operations was provided by a team selected from the engineering, manufacturing management, inspection, and production assembly staff. The simulation verified that it would be possible to achieve the cycle time goal of six weeks. Equipment and manpower constraints were identified during the simulation process and adjusted as required to achieve the flow with a given monthly production requirement. The simulation is being maintained as a planning tool to be used to identify constraints in the event that monthly output is increased. "What-if" studies have been conducted to identify the cost of reducing constraints caused by increases in output requirement.
Solving multiconstraint assignment problems using learning automata.
Horn, Geir; Oommen, B John
2010-02-01
This paper considers the NP-hard problem of object assignment with respect to multiple constraints: assigning a set of elements (or objects) into mutually exclusive classes (or groups), where the elements which are "similar" to each other are hopefully located in the same class. The literature reports solutions in which the similarity constraint consists of a single index that is inappropriate for the type of multiconstraint problems considered here and where the constraints could simultaneously be contradictory. This feature, where we permit possibly contradictory constraints, distinguishes this paper from the state of the art. Indeed, we are aware of no learning automata (or other heuristic) solutions which solve this problem in its most general setting. Such a scenario is illustrated with the static mapping problem, which consists of distributing the processes of a parallel application onto a set of computing nodes. This is a classical and yet very important problem within the areas of parallel computing, grid computing, and cloud computing. We have developed four learning-automata (LA)-based algorithms to solve this problem: First, a fixed-structure stochastic automata algorithm is presented, where the processes try to form pairs to go onto the same node. This algorithm solves the problem, although it requires some centralized coordination. As it is desirable to avoid centralized control, we subsequently present three different variable-structure stochastic automata (VSSA) algorithms, which have superior partitioning properties in certain settings, although they forfeit some of the scalability features of the fixed-structure algorithm. All three VSSA algorithms model the processes as automata having first the hosting nodes as possible actions; second, the processes as possible actions; and, third, attempting to estimate the process communication digraph prior to probabilistically mapping the processes. 
This paper, which, we believe, comprehensively reports the pioneering LA solutions to this problem, unequivocally demonstrates that LA can play an important role in solving complex combinatorial and integer optimization problems.
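As a rough illustration of the variable-structure automata this record builds on, a linear reward-inaction (L_RI) update rule can be sketched as follows. The two-action toy environment, reward probabilities, and learning rate here are invented for illustration and are not taken from the paper:

```python
import random

class LinearRewardInaction:
    """Minimal L_RI variable-structure learning automaton sketch.

    On reward, probability mass shifts toward the chosen action;
    on penalty, probabilities are left unchanged (reward-inaction).
    """

    def __init__(self, n_actions, learning_rate=0.1):
        self.p = [1.0 / n_actions] * n_actions
        self.lr = learning_rate

    def choose(self, rng):
        return rng.choices(range(len(self.p)), weights=self.p)[0]

    def reward(self, action):
        for i in range(len(self.p)):
            if i == action:
                self.p[i] += self.lr * (1.0 - self.p[i])
            else:
                self.p[i] *= 1.0 - self.lr

# Toy environment: action 0 is rewarded 90% of the time, action 1 only 10%.
rng = random.Random(42)
automaton = LinearRewardInaction(n_actions=2)
for _ in range(500):
    a = automaton.choose(rng)
    if rng.random() < (0.9 if a == 0 else 0.1):
        automaton.reward(a)

print(automaton.p)  # the probability of the better action should dominate
```

The update conserves total probability, so the vector `p` remains a distribution throughout.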
Selecting a Good Conference Location Based on Participants' Interests
ERIC Educational Resources Information Center
Miah, Muhammed
2011-01-01
Selecting a good conference location within budget constraints that attracts paper authors and participants is a difficult job for conference organizers. The location is important alongside other factors, such as the ranking of the conference. Selecting a bad conference location may reduce the number of paper submissions and…
Gaitán-Espitia, Juan Diego; Marshall, Dustin; Dupont, Sam; Bacigalupe, Leonardo D; Bodrossy, Levente; Hobday, Alistair J
2017-02-01
Geographical gradients in selection can shape different genetic architectures in natural populations, reflecting potential genetic constraints for adaptive evolution under climate change. Investigation of natural pH/pCO2 variation in upwelling regions reveals different spatio-temporal patterns of natural selection, generating genetic and phenotypic clines in populations, and potentially leading to local adaptation, relevant to understanding effects of ocean acidification (OA). Strong directional selection, associated with intense and continuous upwellings, may have depleted genetic variation in populations within these upwelling regions, favouring increased tolerances to low pH but with an associated cost in other traits. In contrast, diversifying or weak directional selection in populations with seasonal upwellings or outside major upwelling regions may have resulted in higher genetic variances and the lack of genetic correlations among traits. Testing this hypothesis in geographical regions with similar environmental conditions to those predicted under climate change will build insights into how selection may act in the future and how populations may respond to stressors such as OA. © 2017 The Author(s).
Constraints and Approach for Selecting the Mars Surveyor '01 Landing Site
NASA Technical Reports Server (NTRS)
Golombek, M.; Bridges, N.; Gilmore, M.; Haldemann, A.; Parker, T.; Saunders, R.; Spencer, D.; Smith, J.; Weitz, C.
1999-01-01
There are many similarities between the Mars Surveyor '01 (MS '01) landing site selection process and that of Mars Pathfinder. The selection process includes two parallel activities in which engineers define and refine the capabilities of the spacecraft through design, testing and modeling and scientists define a set of landing site constraints based on the spacecraft design and landing scenario. As for Pathfinder, the safety of the site is without question the single most important factor, for the simple reason that failure to land safely yields no science and exposes the mission and program to considerable risk. The selection process must be thorough, defensible, and capable of surviving multiple withering reviews similar to the Pathfinder decision. On Pathfinder, this was accomplished by attempting to understand the surface properties of sites using available remote sensing data sets and models based on them. Science objectives are factored into the selection process only after the safety of the site is validated. Finally, as for Pathfinder, the selection process is being done in an open environment with multiple opportunities for community involvement including open workshops, with education and outreach opportunities.
Constraints, Approach and Present Status for Selecting the Mars Surveyor 2001 Landing Site
NASA Technical Reports Server (NTRS)
Golombek, M.; Anderson, F.; Bridges, N.; Briggs, G.; Gilmore, M.; Gulick, V.; Haldemann, A.; Parker, T.; Saunders, R.; Spencer, D.;
1999-01-01
There are many similarities between the Mars Surveyor '01 (MS '01) landing site selection process and that of Mars Pathfinder. The selection process includes two parallel activities in which engineers define and refine the capabilities of the spacecraft through design, testing and modeling and scientists define a set of landing site constraints based on the spacecraft design and landing scenario. As for Pathfinder, the safety of the site is without question the single most important factor, for the simple reason that failure to land safely yields no science and exposes the mission and program to considerable risk. The selection process must be thorough, defensible and capable of surviving multiple withering reviews similar to the Pathfinder decision. On Pathfinder, this was accomplished by attempting to understand the surface properties of sites using available remote sensing data sets and models based on them. Science objectives are factored into the selection process only after the safety of the site is validated. Finally, as for Pathfinder, the selection process is being done in an open environment with multiple opportunities for community involvement including open workshops, with education and outreach opportunities.
Mapping Resource Selection Functions in Wildlife Studies: Concerns and Recommendations
Morris, Lillian R.; Proffitt, Kelly M.; Blackburn, Jason K.
2018-01-01
Predicting the spatial distribution of animals is an important and widely used tool with applications in wildlife management, conservation, and population health. Wildlife telemetry technology coupled with the availability of spatial data and GIS software have facilitated advancements in species distribution modeling. There are also challenges related to these advancements including the accurate and appropriate implementation of species distribution modeling methodology. Resource Selection Function (RSF) modeling is a commonly used approach for understanding species distributions and habitat usage, and mapping the RSF results can enhance study findings and make them more accessible to researchers and wildlife managers. Currently, there is no consensus in the literature on the most appropriate method for mapping RSF results, methods are frequently not described, and mapping approaches are not always related to accuracy metrics. We conducted a systematic review of the RSF literature to summarize the methods used to map RSF outputs, discuss the relationship between mapping approaches and accuracy metrics, perform a case study on the implications of employing different mapping methods, and provide recommendations as to appropriate mapping techniques for RSF studies. We found extensive variability in methodology for mapping RSF results. Our case study revealed that the most commonly used approaches for mapping RSF results led to notable differences in the visual interpretation of RSF results, and there is a concerning disconnect between accuracy metrics and mapping methods. We make 5 recommendations for researchers mapping the results of RSF studies, which are focused on carefully selecting and describing the method used to map RSF results, and relating mapping approaches to accuracy metrics. PMID:29887652
Automating the selection of standard parallels for conic map projections
NASA Astrophysics Data System (ADS)
Šavrič, Bojan; Jenny, Bernhard
2016-05-01
Conic map projections are appropriate for mapping regions at medium and large scales with east-west extents at intermediate latitudes. Conic projections are appropriate for these cases because they show the mapped area with less distortion than other projections. In order to minimize the distortion of the mapped area, the two standard parallels of conic projections need to be selected carefully. Rules of thumb exist for placing the standard parallels based on the width-to-height ratio of the map. These rules of thumb are simple to apply, but do not result in maps with minimum distortion. There also exist more sophisticated methods that determine standard parallels such that distortion in the mapped area is minimized. These methods are computationally expensive and cannot be used for real-time web mapping and GIS applications where the projection is adjusted automatically to the displayed area. This article presents a polynomial model that quickly provides the standard parallels for the three most common conic map projections: the Albers equal-area, the Lambert conformal, and the equidistant conic projection. The model defines the standard parallels with polynomial expressions based on the spatial extent of the mapped area. The spatial extent is defined by the length of the mapped central meridian segment, the central latitude of the displayed area, and the width-to-height ratio of the map. The polynomial model was derived from 3825 maps, each with a different spatial extent and computationally determined standard parallels that minimize the mean scale distortion index. The resulting model is computationally simple and can be used for the automatic selection of the standard parallels of conic map projections in GIS software and web mapping applications.
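The fitted polynomial coefficients are not reproduced in the abstract, but the rule-of-thumb placement it contrasts against is easy to sketch: a common heuristic puts each standard parallel a fixed fraction of the latitude range inside the southern and northern map edges. The one-sixth fraction below is a conventional choice used for illustration, not the paper's minimum-distortion model:

```python
def standard_parallels(lat_south, lat_north, inset_fraction=1 / 6):
    """Rule-of-thumb standard parallels for a conic projection.

    Places each parallel `inset_fraction` of the latitude range inside
    the corresponding map edge (a heuristic, not the paper's fitted
    polynomial model).
    """
    span = lat_north - lat_south
    return (lat_south + inset_fraction * span,
            lat_north - inset_fraction * span)

# Conterminous United States, roughly 24°N to 49°N:
phi1, phi2 = standard_parallels(24.0, 49.0)
print(round(phi1, 2), round(phi2, 2))  # → 28.17 44.83
```

A distortion-minimizing method would instead search (or, as in the paper, fit polynomials for) the parallels that minimize a scale distortion index over the mapped area.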
New leads for selective GSK-3 inhibition: pharmacophore mapping and virtual screening studies.
Patel, Dhilon S; Bharatam, Prasad V
2006-01-01
Glycogen Synthase Kinase-3 is a regulatory serine/threonine kinase, which is being targeted for the treatment of a number of human diseases including type-2 diabetes mellitus, neurodegenerative diseases, cancer and chronic inflammation. Selective GSK-3 inhibition is an important requirement owing to the possibility of side effects arising from other kinases. A pharmacophore mapping strategy is employed in this work to identify new leads for selective GSK-3 inhibition. Ligands known to show selective GSK-3 inhibition were employed in generating a pharmacophore map using the distance comparison method (DISCO). The derived pharmacophore map was validated using (i) important interactions involved in selective GSK-3 inhibition, and (ii) an in-house database containing different classes of GSK-3 selective, non-selective and inactive molecules. New lead identification was carried out by performing virtual screening using the validated pharmacophoric query and three chemical databases, namely NCI, Maybridge and Leadquest. Further data reduction was carried out by employing virtual filters based on (i) Lipinski's rule of 5, (ii) van der Waals bumps, and (iii) restricting the number of rotatable bonds to seven. Final screening was carried out using a FlexX-based molecular docking study.
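The virtual filters described above are simple property cut-offs. A sketch of the Lipinski rule-of-5 check plus the rotatable-bond limit might look like the following; the descriptor dictionary schema is hypothetical, and a real pipeline would compute these values with a cheminformatics toolkit:

```python
def passes_filters(mol, max_rotatable_bonds=7):
    """Lipinski rule-of-5 plus a rotatable-bond cut-off.

    `mol` is a dict of precomputed descriptors (hypothetical field names).
    """
    lipinski_ok = (mol["mol_weight"] <= 500
                   and mol["logp"] <= 5
                   and mol["h_bond_donors"] <= 5
                   and mol["h_bond_acceptors"] <= 10)
    return lipinski_ok and mol["rotatable_bonds"] <= max_rotatable_bonds

# An invented, drug-like candidate that passes all cut-offs:
candidate = {"mol_weight": 412.5, "logp": 3.1, "h_bond_donors": 2,
             "h_bond_acceptors": 6, "rotatable_bonds": 5}
print(passes_filters(candidate))  # → True
```

Candidates failing any single cut-off (for example, nine rotatable bonds) would be discarded before the docking stage.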
Detecting and Quantifying Topography in Neural Maps
Yarrow, Stuart; Razak, Khaleel A.; Seitz, Aaron R.; Seriès, Peggy
2014-01-01
Topographic maps are an often-encountered feature in the brains of many species, yet there are no standard, objective procedures for quantifying topography. Topographic maps are typically identified and described subjectively, but in cases where the scale of the map is close to the resolution limit of the measurement technique, identifying the presence of a topographic map can be a challenging subjective task. In such cases, an objective topography detection test would be advantageous. To address these issues, we assessed seven measures (Pearson distance correlation, Spearman distance correlation, Zrehen's measure, topographic product, topological correlation, path length and wiring length) by quantifying topography in three classes of cortical map model: linear, orientation-like, and clusters. We found that all but one of these measures were effective at detecting statistically significant topography even in weakly-ordered maps, based on simulated noisy measurements of neuronal selectivity and sparse sampling of the maps. We demonstrate the practical applicability of these measures by using them to examine the arrangement of spatial cue selectivity in pallid bat A1. This analysis shows that significantly topographic arrangements of interaural intensity difference and azimuth selectivity exist at the scale of individual binaural clusters. PMID:24505279
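Of the measures listed, the Pearson distance correlation is the simplest to sketch: correlate pairwise anatomical distances with pairwise distances in selectivity space, so that values near 1 indicate strong topographic order. This is a toy 1-D example, not the authors' implementation:

```python
import numpy as np

def distance_correlation_pearson(positions, selectivities):
    """Pearson correlation between pairwise anatomical distances and
    pairwise selectivity differences; near 1 for strongly topographic maps."""
    positions = np.asarray(positions, dtype=float)
    selectivities = np.asarray(selectivities, dtype=float)
    i, j = np.triu_indices(len(positions), k=1)  # all unordered pairs
    d_anat = np.abs(positions[i] - positions[j])
    d_feat = np.abs(selectivities[i] - selectivities[j])
    return np.corrcoef(d_anat, d_feat)[0, 1]

# A perfectly linear map is maximally topographic:
r = distance_correlation_pearson([0, 1, 2, 3], [10, 20, 30, 40])
print(round(float(r), 6))  # → 1.0
```

In practice the paper assesses statistical significance of such a measure against noisy, sparsely sampled maps, which a permutation test over the selectivity labels would approximate.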
JIGSAW: Joint Inhomogeneity estimation via Global Segment Assembly for Water-fat separation.
Lu, Wenmiao; Lu, Yi
2011-07-01
Water-fat separation in magnetic resonance imaging (MRI) is of great clinical importance, and the key to uniform water-fat separation lies in field map estimation. This work deals with three-point field map estimation, in which water and fat are modelled as two single-peak spectral lines, and field inhomogeneities shift the spectrum by an unknown amount. Due to the simplified spectrum modelling, there exists inherent ambiguity in forming field maps from multiple locally feasible field map values at each pixel. To resolve such ambiguity, spatial smoothness of field maps has been incorporated as a constraint of an optimization problem. However, there are two issues: the optimization problem is computationally intractable and even when it is solved exactly, it does not always separate water and fat images. Hence, robust field map estimation remains challenging in many clinically important imaging scenarios. This paper proposes a novel field map estimation technique called JIGSAW. It extends a loopy belief propagation (BP) algorithm to obtain an approximate solution to the optimization problem. The solution produces locally smooth segments and avoids error propagation associated with greedy methods. The locally smooth segments are then assembled into a globally consistent field map by exploiting the periodicity of the feasible field map values. In vivo results demonstrate that JIGSAW outperforms existing techniques and produces correct water-fat separation in challenging imaging scenarios.
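The core ambiguity described above, where each pixel admits several locally feasible field-map values and spatial smoothness must pick among them, can be illustrated on a 1-D row of pixels with a small dynamic program. This is a toy stand-in for JIGSAW's loopy belief propagation, not the algorithm itself:

```python
def smoothest_path(candidates):
    """Pick one feasible field-map value per pixel so that the summed
    absolute difference between neighbours is minimal (1-D Viterbi-style
    dynamic program; a toy stand-in for loopy BP on a 2-D field map)."""
    cost = [0.0] * len(candidates[0])  # best roughness ending at each choice
    back = []
    for prev, cur in zip(candidates, candidates[1:]):
        new_cost, pointers = [], []
        for v in cur:
            steps = [c + abs(v - p) for c, p in zip(cost, prev)]
            best = min(range(len(steps)), key=steps.__getitem__)
            new_cost.append(steps[best])
            pointers.append(best)
        cost, back = new_cost, back + [pointers]
    # Trace back the smoothest assignment.
    k = min(range(len(cost)), key=cost.__getitem__)
    path = [candidates[-1][k]]
    for pointers, row in zip(reversed(back), reversed(candidates[:-1])):
        k = pointers[k]
        path.append(row[k])
    return path[::-1]

# Each pixel has two feasible values (e.g. true shift vs. a water/fat swap):
pixels = [[10, 150], [12, 148], [11, 152], [140, 13]]
print(smoothest_path(pixels))  # → [10, 12, 11, 13]
```

A greedy left-to-right choice could lock in the wrong branch at the first pixel and propagate the error, which is exactly the failure mode the abstract attributes to greedy methods.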
NASA Astrophysics Data System (ADS)
Williams, Michael L.; Jercinovic, Michael J.; Terry, Michael P.
1999-11-01
High-resolution X-ray mapping and dating of monazite on the electron microprobe are powerful geochronological tools for structural, metamorphic, and tectonic analysis. X-ray maps commonly show complex Th, U, and Pb zoning that reflects monazite growth and overgrowth events. Age maps constructed from the X-ray maps simplify the zoning and highlight age domains. Microprobe dating offers a rapid, in situ method for estimating ages of mapped domains. Application of these techniques has placed new constraints on the tectonic history of three areas. In western Canada, age mapping has revealed multiphase monazite, with older cores and younger rims, included in syntectonic garnet. Microprobe ages show that tectonism occurred ca. 1.9 Ga, 700 m.y. later than mylonitization in the adjacent Snowbird tectonic zone. In New Mexico, age mapping and dating show that the dominant fabric and triple-point metamorphism occurred during a 1.4 Ga reactivation, not during the 1.7 Ga Yavapai-Mazatzal orogeny. In Norway, monazite inclusions in garnet constrain high-pressure metamorphism to ca. 405 Ma, and older cores indicate a previously unrecognized component of ca. 1.0 Ga monazite. In all three areas, microprobe dating and age mapping have provided a critical textural context for geochronologic data and a better understanding of the complex age spectra of these multistage orogenic belts.
Stability of the Kepler-11 system and its origin
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahajan, Nikhil; Wu, Yanqin
2014-11-01
A significant fraction of Kepler systems are closely packed, largely coplanar, and circular. We study the stability of a six-planet system, Kepler-11, to gain insights on the dynamics and formation history of such systems. Using a technique called 'frequency maps' as fast indicators of long-term stability, we explore the stability of the Kepler-11 system by analyzing the neighborhood space around its orbital parameters. Frequency maps provide a visual representation of chaos and stability, and their dependence on orbital parameters. We find that the current system is stable, but lies within a few percent of several dynamically dangerous two-body mean-motion resonances. Planet eccentricities are restricted below a small value, ∼0.04, for long-term stability, but planet masses can be more than twice their reported values (thus allowing for the possibility of mass loss by past photoevaporation). Based on our frequency maps, we speculate on the origin of instability in closely packed systems. We then proceed to investigate how the system could have been assembled. The stability constraints on Kepler-11 (mainly eccentricity constraints) suggest that if the system were assembled in situ, a dissipation mechanism must have been at work to neutralize the eccentricity excitation. On the other hand, if migration was responsible for assembling the planets, there has to be little differential migration among the planets to avoid them either getting trapped into mean motion resonances, or crashing into each other.
Ecological constraint and the evolution of sexual dichromatism in darters.
Bossu, Christen M; Near, Thomas J
2015-05-01
It is not known how environmental pressures and sexual selection interact to influence the evolution of extravagant male traits. Sexual and natural selection are often viewed as antagonistic forces shaping the evolution of visual signals, where conspicuousness is favored by sexual selection and crypsis is favored by natural selection. Although typically investigated independently, the interaction between natural and sexual selection remains poorly understood. Here, we investigate whether sexual dichromatism evolves stochastically, independent from, or in concert with habitat use in darters, a species-rich lineage of North American freshwater fish. We find the evolution of sexual dichromatism is coupled to habitat use in darter species. Comparative analyses reveal that mid-water darter lineages exhibit a narrow distribution of dichromatism trait space surrounding a low optimum, suggesting a constraint imposed on the evolution of dichromatism, potentially through predator-mediated selection. Alternatively, the transition to benthic habitats coincides with greater variability in the levels of dichromatism that surround a higher optimum, likely due to relaxation of the predator-mediated selection and heterogeneous microhabitat dependent selection regimes. These results suggest a complex interaction of sexual selection with potentially two mechanisms of natural selection, predation and sensory drive, that influence the evolution of diverse male nuptial coloration in darters. © 2015 The Author(s).
Prabha, Ratna; Singh, Dhananjaya P; Sinha, Swati; Ahmad, Khurshid; Rai, Anil
2017-04-01
With the increasing accumulation of genomic sequence information of prokaryotes, the study of codon usage bias has gained renewed attention. The purpose of this study was to examine codon selection pattern within and across cyanobacterial species belonging to diverse taxonomic orders and habitats. We performed detailed comparative analysis of cyanobacterial genomes with respect to codon bias. Our analysis reflects that in cyanobacterial genomes, A- and/or T-ending codons were used predominantly in the genes whereas G- and/or C-ending codons were largely avoided. Variation in the codon context usage of cyanobacterial genes corresponded to the clustering of cyanobacteria as per their GC content. Analysis of codon adaptation index (CAI) and synonymous codon usage order (SCUO) revealed that the majority of genes are associated with low codon bias. Codon selection pattern in cyanobacterial genomes reflected compositional constraints as the major influencing factor. Although mutational constraint may play some role in affecting codon usage bias in cyanobacteria, compositional constraint in terms of genomic GC composition, coupled with environmental factors, affected the codon selection pattern in cyanobacterial genomes. Copyright © 2016 Elsevier B.V. All rights reserved.
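The codon adaptation index mentioned above is the geometric mean, over a gene's codons, of each codon's relative adaptiveness w: its frequency in a highly expressed reference set divided by that of the most-used synonymous codon. A sketch with a hypothetical two-amino-acid reference table (the w values are invented):

```python
from math import exp, log

# Hypothetical relative-adaptiveness (w) table; in practice these values
# are derived from codon counts in highly expressed reference genes.
W = {"AAA": 1.0, "AAG": 0.4,   # Lys codons
     "GAA": 1.0, "GAG": 0.25}  # Glu codons

def cai(sequence):
    """Codon adaptation index: geometric mean of w over the gene's codons."""
    codons = [sequence[i:i + 3] for i in range(0, len(sequence), 3)]
    return exp(sum(log(W[c]) for c in codons) / len(codons))

print(round(cai("AAAGAAAAA"), 3))  # all most-preferred codons → 1.0
print(round(cai("AAGGAG"), 3))     # exp((ln 0.4 + ln 0.25) / 2) → 0.316
```

Genes with CAI near 1 use the preferred codons almost exclusively; the "low codon bias" reported for most cyanobacterial genes corresponds to CAI values well below that.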
Li, Shuo; Peng, Jun; Liu, Weirong; Zhu, Zhengfa; Lin, Kuo-Chi
2014-01-01
Recent research has indicated that using the mobility of the actuator in wireless sensor and actuator networks (WSANs) to achieve mobile data collection can greatly increase the sensor network lifetime. However, mobile data collection may result in unacceptable collection delays in the network if the path of the actuator is too long. Because real-time network applications require meeting data collection delay constraints, planning the path of the actuator is a very important issue to balance the prolongation of the network lifetime and the reduction of the data collection delay. In this paper, a multi-hop routing mobile data collection algorithm is proposed based on dynamic polling point selection with delay constraints to address this issue. The algorithm can actively update the selection of the actuator's polling points according to the sensor nodes' residual energies and their locations while also considering the collection delay constraint. It also dynamically constructs the multi-hop routing trees rooted by these polling points to balance the sensor node energy consumption and the extension of the network lifetime. The effectiveness of the algorithm is validated by simulation. PMID:24451455
Implementation of a Water Flow Control System into the ISS'S Planned Fluids & Combustion Facility
NASA Technical Reports Server (NTRS)
Edwards, Daryl A.
2003-01-01
The Fluids and Combustion Facility (FCF) will become an ISS facility capable of performing basic combustion and fluids research. The facility consists of two independent payload racks specifically configured to support multiple experiments over the life of the ISS. Both racks will depend upon the ISS's Moderate Temperature Loop (MTL) for removing waste heat generated by the avionics and experiments operating within the racks. By using the MTL, constraints are imposed by the ISS vehicle on how the coolant resource is used. On the other hand, the FCF depends upon effective thermal control for maximizing the life of the hardware and for supplying proper boundary conditions for the experiments. In the implementation of a design solution, significant factors in the selection of the hardware included the ability to measure and control relatively low flow rates, the ability to throttle flow within the time constraints of the ISS MTL, low energy usage, and low mass and small volume requirements. An additional factor in the final design selection was how the system would respond to a loss-of-power event. This paper describes the method selected to satisfy the FCF design requirements while maintaining the constraints applied by the ISS vehicle.
Large-Scale, High-Resolution Neurophysiological Maps Underlying fMRI of Macaque Temporal Lobe
Papanastassiou, Alex M.; DiCarlo, James J.
2013-01-01
Maps obtained by functional magnetic resonance imaging (fMRI) are thought to reflect the underlying spatial layout of neural activity. However, previous studies have not been able to directly compare fMRI maps to high-resolution neurophysiological maps, particularly in higher level visual areas. Here, we used a novel stereo microfocal x-ray system to localize thousands of neural recordings across monkey inferior temporal cortex (IT), construct large-scale maps of neuronal object selectivity at subvoxel resolution, and compare those neurophysiology maps with fMRI maps from the same subjects. While neurophysiology maps contained reliable structure at the sub-millimeter scale, fMRI maps of object selectivity contained information at larger scales (>2.5 mm) and were only partly correlated with raw neurophysiology maps collected in the same subjects. However, spatial smoothing of neurophysiology maps more than doubled that correlation, while a variety of alternative transforms led to no significant improvement. Furthermore, raw spiking signals, once spatially smoothed, were as predictive of fMRI maps as local field potential signals. Thus, fMRI of the inferior temporal lobe reflects a spatially low-passed version of neurophysiology signals. These findings strongly validate the widespread use of fMRI for detecting large (>2.5 mm) neuronal domains of object selectivity but show that a complete understanding of even the most pure domains (e.g., faces vs nonface objects) requires investigation at fine scales that can currently only be obtained with invasive neurophysiological methods. PMID:24048850
Measuring Dark Energy with CHIME
NASA Astrophysics Data System (ADS)
Newburgh, Laura; Chime Collaboration
2015-04-01
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a new radio transit interferometer currently being built at the Dominion Radio Astrophysical Observatory (DRAO) in Penticton, BC, Canada. We will use the 21 cm emission line of neutral hydrogen to map baryon acoustic oscillations between 400-800 MHz across 3/4 of the sky. These measurements will yield sensitive constraints on the dark energy equation of state between redshifts 0.8 - 2.5, a fascinating but poorly probed era corresponding to when dark energy began to impact the expansion history of the Universe. I will describe the CHIME instrument, the analysis challenges, the calibration requirements, and current status.
Calibrating CHIME: a new radio interferometer to probe dark energy
NASA Astrophysics Data System (ADS)
Newburgh, Laura B.; Addison, Graeme E.; Amiri, Mandana; Bandura, Kevin; Bond, J. Richard; Connor, Liam; Cliche, Jean-François; Davis, Greg; Deng, Meiling; Denman, Nolan; Dobbs, Matt; Fandino, Mateus; Fong, Heather; Gibbs, Kenneth; Gilbert, Adam; Griffin, Elizabeth; Halpern, Mark; Hanna, David; Hincks, Adam D.; Hinshaw, Gary; Höfer, Carolin; Klages, Peter; Landecker, Tom; Masui, Kiyoshi; Parra, Juan Mena; Pen, Ue-Li; Peterson, Jeff; Recnik, Andre; Shaw, J. Richard; Sigurdson, Kris; Sitwell, Micheal; Smecher, Graeme; Smegal, Rick; Vanderlinde, Keith; Wiebe, Don
2014-07-01
The Canadian Hydrogen Intensity Mapping Experiment (CHIME) is a transit interferometer currently being built at the Dominion Radio Astrophysical Observatory (DRAO) in Penticton, BC, Canada. We will use CHIME to map neutral hydrogen in the frequency range 400-800 MHz over half of the sky, producing a measurement of baryon acoustic oscillations (BAO) at redshifts between 0.8 and 2.5 to probe dark energy. We have deployed a pathfinder version of CHIME that will yield constraints on the BAO power spectrum and provide a test-bed for our calibration scheme. I will discuss the CHIME calibration requirements and describe instrumentation we are developing to meet these requirements.
Locating Sequence on FPC Maps and Selecting a Minimal Tiling Path
Engler, Friedrich W.; Hatfield, James; Nelson, William; Soderlund, Carol A.
2003-01-01
This study discusses three software tools: the first two aid in integrating sequence with an FPC physical map, and the third automatically selects a minimal tiling path given genomic draft sequence and BAC end sequences. The first tool, FSD (FPC Simulated Digest), takes a sequenced clone and adds it back to the map based on a fingerprint generated by an in silico digest of the clone. This allows verification of sequenced clone positions and the integration of sequenced clones that were not originally part of the FPC map. The second tool, BSS (Blast Some Sequence), takes a query sequence and positions it on the map based on sequence associated with the clones in the map. BSS has multiple uses as follows: (1) When the query is a file of marker sequences, they can be added as electronic markers. (2) When the query is draft sequence, the results of BSS can be used to close gaps in a sequenced clone or the physical map. (3) When the query is a sequenced clone and the target is BAC end sequences, one may select the next clone for sequencing using both sequence comparison results and map location. (4) When the query is whole-genome draft sequence and the target is BAC end sequences, the results can be used to select many clones for a minimal tiling path at once. The third tool, pickMTP, automates the majority of this last usage of BSS. Results are presented using the rice FPC map, BAC end sequences, and whole-genome shotgun from Syngenta. PMID:12915486
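Once every clone has been positioned on the draft sequence, choosing a minimal tiling path reduces to a classic greedy interval cover: repeatedly take the clone that starts at or before the current covered endpoint and reaches furthest right. This is a sketch of the general idea, not the pickMTP implementation, and the clone placements are invented:

```python
def minimal_tiling_path(clones, genome_start, genome_end):
    """Greedy minimal set of clone intervals covering [genome_start, genome_end].

    `clones` is a list of (name, start, end) placements, e.g. from mapping
    BAC end sequences onto draft sequence. Raises ValueError on a gap.
    """
    covered = genome_start
    path = []
    remaining = sorted(clones, key=lambda c: c[1])
    while covered < genome_end:
        # Clones that overlap the current frontier...
        reachable = [c for c in remaining if c[1] <= covered and c[2] > covered]
        if not reachable:
            raise ValueError(f"coverage gap at position {covered}")
        # ...of which we keep the one extending furthest right.
        best = max(reachable, key=lambda c: c[2])
        path.append(best[0])
        covered = best[2]
    return path

clones = [("B1", 0, 40), ("B2", 10, 90), ("B3", 35, 120),
          ("B4", 80, 160), ("B5", 100, 200)]
print(minimal_tiling_path(clones, 0, 200))  # → ['B1', 'B3', 'B5']
```

This greedy choice is optimal for interval covering, which is why a minimal tiling path can be selected in a single left-to-right pass once clone coordinates are known.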
Development and application of a unified balancing approach with multiple constraints
NASA Technical Reports Server (NTRS)
Zorzi, E. S.; Lee, C. C.; Giordano, J. C.
1985-01-01
The development of a general analytic approach to constrained balancing that is consistent with past influence coefficient methods is described. The approach uses Lagrange multipliers to impose orbit and/or weight constraints; these constraints are combined with the least squares minimization process to provide a set of coupled equations that result in a single solution form for determining correction weights. Proper selection of constraints results in the capability to: (1) balance higher speeds without disturbing previously balanced modes, through the use of modal trial weight sets; (2) balance off-critical speeds; and (3) balance decoupled modes by use of a single balance plane. If no constraints are imposed, this solution form reduces to the general weighted least squares influence coefficient method. A test facility used to examine the use of the general constrained balancing procedure and application of modal trial weight ratios is also described.
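The least-squares-with-Lagrange-multipliers formulation can be sketched for a generic influence-coefficient problem: minimize ||A w - b||² (residual vibration) subject to C w = d (for example, a zero net correction weight), by solving the KKT system. This is generic linear algebra with invented numbers, not the authors' balancing code:

```python
import numpy as np

def constrained_least_squares(A, b, C, d):
    """Minimise ||A w - b||^2 subject to C w = d via Lagrange multipliers.

    Solves the KKT system  [2 A^T A  C^T] [w]   [2 A^T b]
                           [C        0  ] [l] = [d      ].
    """
    n, m = A.shape[1], C.shape[0]
    kkt = np.block([[2 * A.T @ A, C.T],
                    [C, np.zeros((m, m))]])
    rhs = np.concatenate([2 * A.T @ b, d])
    return np.linalg.solve(kkt, rhs)[:n]

# Influence coefficients (rows: sensors, columns: balance planes) and a
# measured vibration vector to cancel; constrain the weights to sum to zero.
A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.3]])
b = np.array([1.0, 2.0, 1.5])
C = np.array([[1.0, 1.0]])
d = np.array([0.0])
w = constrained_least_squares(A, b, C, d)
print(w, C @ w)  # constraint residual C w is ≈ 0
```

Dropping C and d recovers the ordinary least-squares influence-coefficient solution, matching the abstract's remark that the constrained form reduces to the classical method when no constraints are imposed.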
SModelS v1.1 user manual: Improving simplified model constraints with efficiency maps
NASA Astrophysics Data System (ADS)
Ambrogi, Federico; Kraml, Sabine; Kulkarni, Suchita; Laa, Ursula; Lessa, Andre; Magerl, Veronika; Sonneveld, Jory; Traub, Michael; Waltenberger, Wolfgang
2018-06-01
SModelS is an automatized tool for the interpretation of simplified model results from the LHC. It allows one to decompose models of new physics obeying a Z2 symmetry into simplified model components, and to compare these against a large database of experimental results. The first release of SModelS, v1.0, used only cross section upper limit maps provided by the experimental collaborations. In this new release, v1.1, we extend the functionality of SModelS to efficiency maps. This increases the constraining power of the software, as efficiency maps make it possible to combine contributions to the same signal region from different simplified models. Other new features of version 1.1 include likelihood and χ2 calculations, extended information on the topology coverage, an extended database of experimental results as well as major speed upgrades for both the code and the database. We describe in detail the concepts and procedures used in SModelS v1.1, explaining in particular how upper limits and efficiency map results are dealt with in parallel. Detailed instructions for code usage are also provided.
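The gain from efficiency maps over upper-limit maps is that predicted contributions to one signal region add: the predicted event count is the sum over topologies of cross section times efficiency, times luminosity, compared against the observed limit for that region. A back-of-the-envelope sketch, with all numbers invented and no relation to any real analysis:

```python
def signal_region_events(contributions, luminosity_fb):
    """Total predicted events in one signal region: sum over simplified-model
    topologies of cross section [fb] x efficiency, times luminosity [fb^-1]."""
    return luminosity_fb * sum(xsec_fb * eff for xsec_fb, eff in contributions)

# Two simplified-model topologies feeding the same signal region
# (invented cross sections and efficiencies):
contributions = [(0.8, 0.12), (0.5, 0.20)]
predicted = signal_region_events(contributions, luminosity_fb=36.0)
excluded = predicted > 8.5  # invented 95% CL observed upper limit on events
print(round(predicted, 3), excluded)  # → 7.056 False
```

With upper-limit maps alone, each topology would be tested separately against its own limit, losing the combined sensitivity illustrated by the summation here.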
Geologic map of the Sunshine 7.5' quadrangle, Taos County, New Mexico
Thompson, Ren A.; Turner, Kenzie J.; Shroba, Ralph R.; Cosca, Michael A.; Ruleman, Chester A.; Lee, John P.; Brandt, Theodore R.
2014-01-01
Pliocene and younger basin deposition was accommodated along predominantly north-trending fault-bounded grabens and is preserved as poorly exposed fault scarps that cut lava flows of Ute Mountain volcano, north of the map area. The Servilleta Basalt and younger surficial deposits record largely down-to-east basinward displacement. Faults are identified with varying confidence levels in the map area. Recognizing and mapping faults developed near the surface in relatively young, brittle volcanic rocks is difficult because: (1) they tend to form fractured zones tens of meters wide rather than discrete fault planes, (2) the relative youth of the deposits has resulted in only modest displacements on most faults, and (3) some of the faults may have significant strike-slip components that do not result in large vertical offsets that are readily apparent in offset of sub-horizontal contacts. Those faults characterized as “certain” either have distinct offset of map units or had slip planes that were directly observed in the field. Lineaments defined from magnetic anomalies form an additional constraint on potential fault locations.
NASA Astrophysics Data System (ADS)
Liao, Haitao; Wu, Wenwang; Fang, Daining
2018-07-01
A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject only to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.
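The null-space elimination at the heart of the reduced-space idea can be illustrated on a small equality-constrained quadratic program (a toy problem chosen for illustration; the reduced-SQP and harmonic-balance machinery of the paper is not reproduced here):

```python
import numpy as np

def nullspace_qp(H, g, A, b):
    """Minimize 0.5 x'Hx + g'x subject to Ax = b by eliminating the equality
    constraints with a null-space basis: x = x_p + Z y, where A x_p = b and
    the columns of Z span null(A); y solves an unconstrained reduced problem."""
    x_p = np.linalg.lstsq(A, b, rcond=None)[0]       # particular solution
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))
    Z = Vt[rank:].T                                   # basis of null(A)
    # reduced (unconstrained) problem in null-space coordinates y
    y = np.linalg.solve(Z.T @ H @ Z, -Z.T @ (H @ x_p + g))
    return x_p + Z @ y

H = np.diag([2.0, 4.0, 2.0])
g = np.array([-2.0, 0.0, 0.0])
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([1.0])
x = nullspace_qp(H, g, A, b)
print(np.allclose(A @ x, b))   # True: constraint holds exactly
```

For this toy problem the KKT conditions give the minimizer x = (1, 0, 0); the point is that after the null-space transformation only an unconstrained (here linear) reduced problem remains, which is the simplification the abstract describes.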
Phenotypic convergence in bacterial adaptive evolution to ethanol stress.
Horinouchi, Takaaki; Suzuki, Shingo; Hirasawa, Takashi; Ono, Naoaki; Yomo, Tetsuya; Shimizu, Hiroshi; Furusawa, Chikara
2015-09-03
Bacterial cells have a remarkable ability to adapt to environmental changes, a phenomenon known as adaptive evolution. During adaptive evolution, phenotypes and genotypes change dynamically; however, the relationship between these changes and the associated constraints is yet to be fully elucidated. In this study, we analyzed phenotypic and genotypic changes in Escherichia coli cells during adaptive evolution to ethanol stress. Phenotypic changes were quantified by transcriptome and metabolome analyses and were similar among independently evolved ethanol-tolerant populations, indicating the existence of evolutionary constraints on the dynamics of adaptive evolution. Furthermore, the contribution of the mutations identified in one of the tolerant strains was evaluated using site-directed mutagenesis. The results demonstrated that introducing all identified mutations could not fully explain the observed tolerance of that strain. Overall, the results demonstrated convergence of adaptive phenotypic changes despite diverse genotypic changes, suggesting that the phenotype-genotype mapping is complex. The integration of transcriptome and genome data provides a quantitative understanding of evolutionary constraints.
Real-Time Large-Scale Dense Mapping with Surfels
Fu, Xingyin; Zhu, Feng; Wu, Qingxiao; Sun, Yunlei; Lu, Rongrong; Yang, Ruigang
2018-01-01
Real-time dense mapping systems have been developed since the birth of consumer RGB-D cameras. Currently, there are two commonly used models in dense mapping systems: truncated signed distance function (TSDF) and surfel. State-of-the-art dense mapping systems usually work well in small-sized regions, but the generated dense surface may be unsatisfactory around loop closures when the system tracking drift grows large. In addition, the efficiency of a surfel-based system degrades when the number of model points in the map becomes large. In this paper, we propose to use two maps in the dense mapping system. The RGB-D images are integrated into a local surfel map. Old surfels that were reconstructed earlier and lie far from the camera frustum are moved from the local map to the global map, so the number of surfels updated in the local map as each frame arrives remains bounded. Therefore, in our system, the scene that can be reconstructed is very large, and the frame rate of our system remains high. We detect loop closures and optimize the pose graph to distribute system tracking drift. The positions and normals of the surfels in the map are also corrected using an embedded deformation graph so that they are consistent with the updated poses. In order to deal with large surface deformations, we propose a new method for constructing constraints with system trajectories and loop closure keyframes. The proposed method stabilizes large-scale surface deformation. Experimental results show that our novel system performs better than prior state-of-the-art dense mapping systems. PMID:29747450
Existence of Lipschitz selections of the Steiner map
NASA Astrophysics Data System (ADS)
Bednov, B. B.; Borodin, P. A.; Chesnokova, K. V.
2018-02-01
This paper is concerned with the problem of the existence of Lipschitz selections of the Steiner map St_n, which associates with n points of a Banach space X the set of their Steiner points. The answer to this problem depends on the geometric properties of the unit sphere S(X) of X, its dimension, and the number n. For n ≥ 4 general conditions are obtained on the space X under which St_n admits no Lipschitz selection. When X is finite dimensional it is shown that, if n ≥ 4 is even, the map St_n has a Lipschitz selection if and only if S(X) is a finite polytope; this is not true if n ≥ 3 is odd. For n = 3 the (single-valued) map St_3 is shown to be Lipschitz continuous in any smooth strictly convex two-dimensional space; this ceases to be true in three-dimensional spaces. Bibliography: 21 titles.
Velocity selection in coupled-map lattices
NASA Astrophysics Data System (ADS)
Parekh, Nita; Puri, Sanjay
1993-02-01
We investigate the phenomenon of velocity selection for traveling wave fronts in a class of coupled-map lattices, derived by discretizations of the Fisher equation [Ann. Eugenics 7, 355 (1937)]. We find that the velocity selection can be understood in terms of a discrete analog of the marginal-stability hypothesis. A perturbative approach also enables us to estimate the selected velocity accurately for small values of the discretization mesh sizes.
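A minimal numerical sketch of the phenomenon, assuming a simple explicit Euler discretization of the Fisher equation u_t = D u_xx + u(1 − u) rather than the paper's exact coupled-map lattice: a localized seed develops a traveling front whose selected velocity can be estimated by tracking a level crossing and compared with the continuum marginal-stability value 2√D (here 2 for unit growth rate; the discrete mesh shifts it slightly).

```python
import numpy as np

# Estimate the selected front velocity of a discretized Fisher equation
# (illustrative assumption, not the paper's exact lattice map).
def front_velocity(D=1.0, dt=0.05, dx=0.5, n=400, steps=1500):
    u = np.zeros(n)
    u[:5] = 1.0                            # localized seed on the left
    c = D * dt / dx**2                     # diffusion number (stable: c <= 0.5)
    positions, times = [], []
    for t in range(steps):
        lap = np.roll(u, -1) - 2 * u + np.roll(u, 1)
        lap[0] = lap[-1] = 0.0             # crude fixed ends
        u = np.clip(u + c * lap + dt * u * (1 - u), 0.0, 1.0)
        if t >= steps // 2:                # skip initial transients
            positions.append(np.argmax(u < 0.5) * dx)   # u = 0.5 crossing
            times.append(t * dt)
    return np.polyfit(times, positions, 1)[0]   # slope = front speed

v = front_velocity()
print(abs(v - 2.0) < 0.3)                  # close to the continuum value 2
```

The residual gap between the measured speed and 2 reflects the finite mesh sizes, which is exactly the discrete correction the marginal-stability analysis of the abstract quantifies.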
Registration of 4D time-series of cardiac images with multichannel Diffeomorphic Demons.
Peyrat, Jean-Marc; Delingette, Hervé; Sermesant, Maxime; Pennec, Xavier; Xu, Chenyang; Ayache, Nicholas
2008-01-01
In this paper, we propose a generic framework for intersubject non-linear registration of 4D time-series images. In this framework, spatio-temporal registration is defined by mapping trajectories of physical points as opposed to spatial registration that solely aims at mapping homologous points. First, we determine the trajectories we want to register in each sequence using a motion tracking algorithm based on the Diffeomorphic Demons algorithm. Then, we perform simultaneously pairwise registrations of corresponding time-points with the constraint to map the same physical points over time. We show this trajectory registration can be formulated as a multichannel registration of 3D images. We solve it using the Diffeomorphic Demons algorithm extended to vector-valued 3D images. This framework is applied to the inter-subject non-linear registration of 4D cardiac CT sequences.
Geologic Mapping of MTM -30247, -35247 and -40247 Quadrangles, Reull Vallis Region, Mars
NASA Technical Reports Server (NTRS)
Mest, S. C.; Crown, D. A.
2009-01-01
Geologic mapping of MTM -30247, -35247, and -40247 quadrangles is being used to characterize Reull Vallis (RV) and to determine the history of the eastern Hellas region of Mars. Studies of RV examine the roles and timing of volatile-driven erosional and depositional processes and provide constraints on potential associated climatic changes. This study complements earlier investigations of the eastern Hellas region, including regional analyses [1-6], mapping studies of circum-Hellas canyons [7-10], and volcanic studies of Hadriaca and Tyrrhena Paterae [11-13]. Key scientific objectives include 1) characterizing RV in its "fluvial zone," 2) analysis of channels in the surrounding plains and potential connections to and interactions with RV, 3) examining young, presumably sedimentary plains along RV, and 4) determining the nature of the connection between the segments of RV.
Structure-aware depth super-resolution using Gaussian mixture model
NASA Astrophysics Data System (ADS)
Kim, Sunok; Oh, Changjae; Kim, Youngjung; Sohn, Kwanghoon
2015-03-01
This paper presents a probabilistic optimization approach to enhance the resolution of a depth map. Conventionally, a high-resolution color image is considered as a cue for depth super-resolution under the assumption that pixels with similar colors likely have similar depths. This assumption may induce texture transfer from the color image into the depth map and edge-blurring artifacts at depth boundaries. In order to alleviate these problems, we propose an efficient depth prior exploiting a Gaussian mixture model, in which an estimated depth map is considered as a feature for computing the affinity between two pixels. Furthermore, a fixed-point iteration scheme is adopted to address the non-linearity of a constraint derived from the proposed prior. The experimental results show that the proposed method outperforms state-of-the-art methods both quantitatively and qualitatively.
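The fixed-point iteration scheme mentioned above can be shown generically on a scalar toy equation (an assumption for illustration; the paper applies the scheme to the nonlinear constraint arising from its depth prior, not to this example):

```python
import numpy as np

# Generic fixed-point iteration: solve x = g(x) by repeated substitution
# x_{k+1} = g(x_k), which converges locally when |g'(x*)| < 1.
def fixed_point(g, x0, tol=1e-10, max_iter=200):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# x = cos(x) has a unique fixed point near 0.739 (the Dottie number)
x_star = fixed_point(np.cos, 1.0)
print(abs(np.cos(x_star) - x_star) < 1e-9)  # True
```

The same substitution pattern is what makes a non-linear constraint tractable: the non-linear part is frozen at the previous iterate, a simpler problem is solved, and the process is repeated to convergence.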
A test of the size-constraint hypothesis for a limit to sexual dimorphism in plants.
Labouche, Anne-Marie; Pannell, John R
2016-07-01
In flowering plants, many dioecious species display a certain degree of sexual dimorphism in non-reproductive traits, but this dimorphism tends to be much less striking than that found in animals. Sexual size dimorphism in plants may be limited because competition for light in crowded environments so strongly penalises small plants. The idea that competition for light constrains the evolution of strong sexual size dimorphism in plants (the size-constraint hypothesis) implies a strong dependency of the expression of sexual size dimorphism on neighbouring density, as a result of the capacity of plants to adjust their reproductive effort and investment in growth in response to their local environment. Here, we tested this hypothesis by experimentally altering the context of competition for light among male-female pairs of the light-demanding dioecious annual plant Mercurialis annua. We found that males were smaller than females across all treatments, but sexual size dimorphism was diminished for pairs grown at higher densities. This result is consistent with the size-constraint hypothesis. We discuss our results in terms of the tension between sexual selection, under which males and females have different size optima, and stabilizing viability selection for similar sizes in the two sexes, acting on the plasticity of size expression under different density conditions.
NASA Astrophysics Data System (ADS)
Martinet, Nicolas; Schneider, Peter; Hildebrandt, Hendrik; Shan, HuanYuan; Asgari, Marika; Dietrich, Jörg P.; Harnois-Déraps, Joachim; Erben, Thomas; Grado, Aniello; Heymans, Catherine; Hoekstra, Henk; Klaes, Dominik; Kuijken, Konrad; Merten, Julian; Nakajima, Reiko
2018-02-01
We study the statistics of peaks in a weak-lensing reconstructed mass map of the first 450 deg2 of the Kilo Degree Survey (KiDS-450). The map is computed with aperture masses directly applied to the shear field with an NFW-like compensated filter. We compare the peak statistics in the observations with that of simulations for various cosmologies to constrain the cosmological parameter S_8 = σ_8 √(Ω_m/0.3), which probes the (Ω_m, σ_8) plane perpendicularly to its main degeneracy. We estimate S_8 = 0.750 ± 0.059, using peaks in the signal-to-noise range 0 ≤ S/N ≤ 4, and accounting for various systematics, such as multiplicative shear bias, mean redshift bias, baryon feedback, intrinsic alignment, and shear-position coupling. These constraints are ~25 per cent tighter than the constraints from the high-significance peaks alone (3 ≤ S/N ≤ 4), which typically trace single massive haloes. This demonstrates the gain of information from low-S/N peaks. However, we find that including S/N < 0 peaks does not add further information. Our results are in good agreement with the tomographic shear two-point correlation function measurement in KiDS-450. Combining shear peaks with non-tomographic measurements of the shear two-point correlation functions yields a ~20 per cent improvement in the uncertainty on S_8 compared to the shear two-point correlation functions alone, highlighting the great potential of peaks as a cosmological probe.
An Analysis of the Neighborhood Impacts of a Mortgage Assistance Program: A Spatial Hedonic Model
ERIC Educational Resources Information Center
Di, Wenhua; Ma, Jielai; Murdoch, James C.
2010-01-01
Down payment or closing cost assistance is an effective program in addressing the wealth constraints of low-and moderate-income homebuyers. However, the spillover effect of such programs on the neighborhood is unknown. This paper estimates the impact of the City of Dallas Mortgage Assistance Program (MAP) on nearby home values using a hedonic…
ERIC Educational Resources Information Center
Nguyen, Hien Thu
2009-01-01
Research shows both benefits and challenges of online discussion as a collaborative learning activity. Online discussion is especially challenging for novice college students who have limited metacognitive skills as well as limited knowledge of the subject domain. With limited metacognitive skills, it can be challenging for novice students to…
Estimating true instead of apparent survival using spatial Cormack-Jolly-Seber models
Schaub, Michael; Royle, J. Andrew
2014-01-01
Spatial CJS models enable study of dispersal and survival independent of study design constraints such as imperfect detection and size of the study area provided that some of the dispersing individuals remain in the study area. We discuss possible extensions of our model: alternative dispersal models and the inclusion of covariates and of a habitat suitability map.
Integrating Planning and Control for Constrained Dynamical Systems
2007-12-01
4.2 Mapping from polygonal cell to disk ... 4.3 Convergent potential ... some idealized potential function. The feedback control policies defined in this thesis are specifically designed to satisfy the low-level constraints ... problem into different parts, only focusing on one part, and leaving the rest to others. Some techniques work only in ideal conditions, while others solve ...
Wu, Jin-Gen; Liu, Man-Chi; Tsai, Ming-Fei; Yu, Wei-Shun; Chen, Jian-Zhang; Cheng, I-Chun; Lin, Pei-Chun
2012-04-01
We demonstrate a novel, vertical temperature-mapping incubator utilizing eight layers of thermoelectric (TE) modules mounted around a test tube. The temperature at each layer of the TE module is individually controlled to simulate the vertical temperature profile of geo-temperature variations with depth. Owing to the constraint of non-intrusion to the filled geo-samples, the temperature on the tube wall is adopted for measurement feedback. The design considerations for the incubator include the spatial arrangement of the energy transfer mechanism, the heating capacity of the TE modules, the minimum sample amount required for follow-up instrumental or chemical analysis, and the constraint of non-intrusion to the geo-samples during incubation. The performance of the incubator is experimentally evaluated with two tube conditions and under four preset temperature profiles. Test tubes are either empty or filled with quartz sand, which has thermal properties comparable to the materials in the geo-environment. The applied temperature profiles include uniform, constant temperature gradient, monotonic-increasing parabolic, and parabolic. The temperature on the tube wall can be controlled between 20 °C and 90 °C with an average root-mean-squared error of 1 °C. © 2012 American Institute of Physics
NASA Astrophysics Data System (ADS)
Watts, Duncan; CLASS Collaboration
2018-01-01
The Cosmology Large Angular Scale Surveyor (CLASS) will use large-scale measurements of the polarized cosmic microwave background (CMB) to constrain the physics of inflation, reionization, and massive neutrinos. The experiment is designed to characterize the largest scales, which are inaccessible to most ground-based experiments, and to remove Galactic foregrounds from the CMB maps. In this dissertation talk, I present simulations of CLASS data and demonstrate their ability to constrain the simplest single-field models of inflation and to reduce the uncertainty of the optical depth to reionization, τ, to near the cosmic variance limit, significantly improving on current constraints. These constraints will bring a qualitative shift in our understanding of standard ΛCDM cosmology. In particular, CLASS's measurement of τ breaks cosmological parameter degeneracies. Probes of large-scale structure (LSS) test the effect of neutrino free-streaming at small scales, which depends on the mass of the neutrinos. CLASS's τ measurement, when combined with next-generation LSS and BAO measurements, will enable a 4σ detection of neutrino mass, compared with 2σ without CLASS data. I will also briefly discuss the CLASS experiment's measurements of circular polarization of the CMB and the implications of the first such near-all-sky map.
Structured Kernel Dictionary Learning with Correlation Constraint for Object Recognition.
Wang, Zhengjue; Wang, Yinghua; Liu, Hongwei; Zhang, Hao
2017-06-21
In this paper, we propose a new discriminative non-linear dictionary learning approach, called correlation-constrained structured kernel KSVD, for object recognition. The objective function for dictionary learning contains a reconstructive term and a discriminative term. In the reconstructive term, signals are implicitly non-linearly mapped into a space in which a structured kernel dictionary, each sub-dictionary of which lies in the span of the mapped signals from the corresponding class, is established. In the discriminative term, by analyzing the classification mechanism, a correlation constraint is proposed in kernel form, constraining the correlations between different discriminative codes and restricting the coefficient vectors to be transformed into a feature space where the features are highly correlated within a class and nearly independent between classes. The objective function is optimized by the proposed structured kernel KSVD. During the classification stage, the specific form of the discriminative feature need not be known; the inner product of discriminative features, with the kernel matrix embedded, is available and suitable for a linear SVM classifier. Experimental results demonstrate that the proposed approach outperforms many state-of-the-art dictionary learning approaches for face, scene, and synthetic aperture radar (SAR) vehicle target recognition.
Jones, David B; Jerry, Dean R; Khatkar, Mehar S; Raadsma, Herman W; Zenger, Kyall R
2013-11-20
The silver-lipped pearl oyster, Pinctada maxima, is an important tropical aquaculture species extensively farmed for the highly sought "South Sea" pearls. Traditional breeding programs have been initiated for this species in order to select for improved pearl quality, but many economic traits under selection are complex, polygenic and confounded with environmental factors, limiting the accuracy of selection. The incorporation of a marker-assisted selection (MAS) breeding approach would greatly benefit pearl breeding programs by allowing the direct selection of genes responsible for pearl quality. However, before MAS can be incorporated, substantial genomic resources such as genetic linkage maps need to be generated. The construction of a high-density genetic linkage map for P. maxima is not only essential for unravelling the genomic architecture of complex pearl quality traits, but also provides indispensable information on the genome structure of pearl oysters. A total of 1,189 informative genome-wide single nucleotide polymorphisms (SNPs) were incorporated into linkage map construction. The final linkage map consists of 887 SNPs in 14 linkage groups, spans a total genetic distance of 831.7 centimorgans (cM), and covers an estimated 96% of the P. maxima genome. Assessment of sex-specific recombination across all linkage groups revealed limited overall heterochiasmy between the sexes (i.e. 1.15:1 F/M map length ratio). However, there were pronounced localised differences throughout the linkage groups, whereby male recombination was suppressed near the centromeres compared to female recombination, but inflated towards telomeric regions. Mean values of LD for adjacent SNP pairs suggest that a higher density of markers will be required for powerful genome-wide association studies. Finally, numerous nacre biomineralization genes were localised, providing novel positional information for these genes.
This high-density SNP genetic map is the first comprehensive linkage map for any pearl oyster species. It provides an essential genomic tool facilitating studies investigating the genomic architecture of complex trait variation and identifying quantitative trait loci for economically important traits useful in genetic selection programs within the P. maxima pearling industry. Furthermore, this map provides a foundation for further research aiming to improve our understanding of the dynamic process of biomineralization, and pearl oyster evolution and synteny.
National Maps - Pacific - NOAA's National Weather Service
NASA Astrophysics Data System (ADS)
Deng, Shuang; Xiang, Wenting; Tian, Yangge
2009-10-01
Map coloring is a hard task even for experienced map experts. In GIS projects, maps usually must be colored according to customer requirements, which makes the work more complex. As GIS has developed, more and more programmers who lack training in cartography have joined project teams, and their colored maps often fail to meet customer requirements. Experience shows that customers with similar backgrounds usually have similar tastes in map coloring. We therefore developed a GIS color-scheme decision-making system that retrieves the color schemes of similar customers from a case base for customers to select and adjust. The system mixes browser/server and client/server architectures; the client side uses JSP, enabling system developers to remotely call the color-scheme cases on the database server and to communicate with customers. Unlike general case-based reasoning, even very similar customers may make different selections, so it is hard to provide a single "best" option. We therefore use the Simulated Annealing Algorithm (SAA) to arrange the order in which different color schemes are presented. Customers can also dynamically adjust the colors of particular features based on an existing case. The results show that the system facilitates communication between designers and customers and improves the quality and efficiency of map coloring.
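The simulated-annealing idea can be sketched on a toy map-coloring instance (illustrative only: the paper anneals the presentation order of color-scheme cases, whereas this sketch applies the same accept/reject schedule to coloring a tiny map so that neighboring regions differ):

```python
import math
import random

# Simulated annealing on a toy map-coloring problem: minimize the number of
# adjacent regions sharing a color.
def anneal_coloring(edges, n_regions, n_colors, steps=20000, seed=1):
    rng = random.Random(seed)
    colors = [rng.randrange(n_colors) for _ in range(n_regions)]
    conflicts = lambda c: sum(c[a] == c[b] for a, b in edges)
    cost, T = conflicts(colors), 2.0
    for _ in range(steps):
        i = rng.randrange(n_regions)
        old = colors[i]
        colors[i] = rng.randrange(n_colors)
        new_cost = conflicts(colors)
        # always accept improvements; accept uphill moves with Boltzmann prob.
        if new_cost <= cost or rng.random() < math.exp((cost - new_cost) / T):
            cost = new_cost
        else:
            colors[i] = old                # undo the rejected move
        T *= 0.999                         # geometric cooling schedule
    return colors, cost

# 4 regions forming a ring with one diagonal adjacency; 3 colors suffice
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
colors, cost = anneal_coloring(edges, 4, 3)
print(cost)
```

The cooling schedule lets early iterations explore freely and late iterations settle into a conflict-free assignment, the same exploration-then-commitment behavior the system exploits when ordering candidate color schemes.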
Pixel-by-pixel absolute phase retrieval using three phase-shifted fringe patterns without markers
NASA Astrophysics Data System (ADS)
Jiang, Chufan; Li, Beiwen; Zhang, Song
2017-04-01
This paper presents a method that can recover absolute phase pixel by pixel without embedding markers in the three phase-shifted fringe patterns, acquiring additional images, or introducing additional hardware component(s). The proposed three-dimensional (3D) absolute shape measurement technique includes the following major steps: (1) segment the measured object into different regions using rough prior knowledge of the surface geometry; (2) artificially create phase maps at different z planes using the geometric constraints of the structured light system; (3) unwrap the phase pixel by pixel for each region by properly referring to the artificially created phase map; and (4) merge the unwrapped phases from all regions into a complete absolute phase map for 3D reconstruction. We demonstrate that conventional three-step phase-shifted fringe patterns can be used to create an absolute phase map pixel by pixel even for large depth range objects. We have successfully implemented our proposed computational framework to achieve absolute 3D shape measurement at 40 Hz.
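For reference, the standard three-step phase-shifting relation (a textbook identity, not the paper's full pipeline) recovers the wrapped phase per pixel from fringe images with phase shifts of −2π/3, 0, and +2π/3:

```python
import numpy as np

# With fringe intensities I_k = A + B*cos(phi + 2*pi*k/3) for k = -1, 0, +1,
# the wrapped phase is phi = atan2(sqrt(3)*(I_-1 - I_+1), 2*I_0 - I_-1 - I_+1).
def wrapped_phase(Im, I0, Ip):
    return np.arctan2(np.sqrt(3.0) * (Im - Ip), 2.0 * I0 - Im - Ip)

# synthetic check on one scan line
phi_true = np.linspace(-np.pi + 0.01, np.pi - 0.01, 256)
A, B = 0.5, 0.4                         # background and modulation
Im = A + B * np.cos(phi_true - 2 * np.pi / 3)
I0 = A + B * np.cos(phi_true)
Ip = A + B * np.cos(phi_true + 2 * np.pi / 3)
phi = wrapped_phase(Im, I0, Ip)
print(np.allclose(phi, phi_true, atol=1e-9))   # True
```

This yields only the phase wrapped into (−π, π]; steps (2)-(4) of the abstract are what turn such a wrapped map into an absolute phase map without markers.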
One- and two-objective approaches to an area-constrained habitat reserve site selection problem
Stephanie Snyder; Charles ReVelle; Robert Haight
2004-01-01
We compare several ways to model a habitat reserve site selection problem in which an upper bound on the total area of the selected sites is included. The models are cast as optimization coverage models drawn from the location science literature. Classic covering problems typically include a constraint on the number of sites that can be selected. If potential reserve...
Mars, John L.; Zientek, M.L.; Hammarstrom, J.M.; Johnson, K.M.; Pierce, F.W.
2014-01-01
The ASTER alteration map and corresponding geologic maps were used to select circular to elliptical patterns of argillic- and phyllic-altered volcanic and intrusive rocks as potential porphyry copper sites. One hundred and seventy eight potential porphyry copper sites were mapped along the UDVB, and 23 sites were mapped along the CVB. The potential sites were selected to assist in further exploration and assessments of undiscovered porphyry copper deposits.
Le Cunff, Loïc; Garsmeur, Olivier; Raboin, Louis Marie; Pauquet, Jérome; Telismart, Hugues; Selvi, Athiappan; Grivet, Laurent; Philippe, Romain; Begum, Dilara; Deu, Monique; Costet, Laurent; Wing, Rod; Glaszmann, Jean Christophe; D'Hont, Angélique
2008-01-01
The genome of modern sugarcane cultivars is highly polyploid (∼12x), aneuploid, of interspecific origin, and contains 10 Gb of DNA. Its size and complexity represent a major challenge for the isolation of agronomically important genes. Here we report on the first attempt to isolate a gene from sugarcane by map-based cloning, targeting a durable major rust resistance gene (Bru1). We describe the genomic strategies that we have developed to overcome constraints associated with high polyploidy in the successive steps of map-based cloning approaches, including diploid/polyploid syntenic shuttle mapping with two model diploid species (sorghum and rice) and haplotype-specific chromosome walking. Their applications allowed us (i) to develop a high-resolution map including markers at 0.28 and 0.14 cM on both sides and 13 markers cosegregating with Bru1 and (ii) to develop a physical map of the target haplotype that still includes two gaps at this stage due to the discovery of an insertion specific to this haplotype. These approaches will pave the way for the development of future map-based cloning approaches for sugarcane and other complex polyploid species. PMID:18757946
Reichenbach, Stephen E; Kottapalli, Visweswara; Ni, Mingtian; Visvanathan, Arvind
2005-04-15
This paper describes a language for expressing criteria for chemical identification with comprehensive two-dimensional gas chromatography paired with mass spectrometry (GC x GC-MS) and presents computer-based tools implementing the language. The Computer Language for Identifying Chemicals (CLIC) allows expressions that describe rules (or constraints) for selecting chemical peaks or data points based on multi-dimensional chromatographic properties and mass spectral characteristics. CLIC offers chromatographic functions of retention times, functions of mass spectra, numbers for quantitative and relational evaluation, and logical and arithmetic operators. The language is demonstrated with the compound-class selection rules described by Welthagen et al. [W. Welthagen, J. Schnelle-Kreis, R. Zimmermann, J. Chromatogr. A 1019 (2003) 233-249]. A software implementation of CLIC provides a calculator-like graphical user interface (GUI) for building and applying selection expressions. From the selection calculator, expressions can be used to select chromatographic peaks that meet the criteria or to create selection chromatograms that mask data points inconsistent with the criteria. Selection expressions can be combined with graphical, geometric constraints in the retention-time plane as a powerful component for chemical identification with template matching, or used to speed and improve mass spectrum library searches.
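The flavor of such selection expressions can be sketched in Python (an illustrative analogue; this is not CLIC's actual syntax or API, and the field names and threshold below are assumptions):

```python
# A CLIC-style selection rule as a boolean predicate over a peak's retention
# times and mass spectrum: select peaks in a first-dimension retention-time
# window whose spectrum shows a given m/z above a relative-intensity cutoff.
def make_rule(rt1_range, mz, min_rel_intensity):
    lo, hi = rt1_range
    def rule(peak):
        base = max(peak["spectrum"].values())              # base-peak intensity
        rel = peak["spectrum"].get(mz, 0.0) / base         # relative intensity
        return lo <= peak["rt1"] <= hi and rel >= min_rel_intensity
    return rule

peaks = [
    {"rt1": 12.5, "rt2": 1.8, "spectrum": {57: 100.0, 91: 20.0}},
    {"rt1": 14.0, "rt2": 2.1, "spectrum": {91: 100.0, 57: 5.0}},
]
# peaks eluting between 10 and 13 min with m/z 57 as a strong ion
rule = make_rule((10.0, 13.0), 57, 0.5)
selected = [p for p in peaks if rule(p)]
print(len(selected))  # 1
```

Because rules are just predicates, they compose with logical operators, matching the abstract's description of combining chromatographic and mass-spectral criteria in one expression.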
Ureba, A; Salguero, F J; Barbeiro, A R; Jimenez-Ortega, E; Baeza, J A; Miras, H; Linares, R; Perucha, M; Leal, A
2014-08-01
The authors present a hybrid direct multileaf collimator (MLC) aperture optimization model exclusively based on sequencing of patient imaging data, to be implemented on a Monte Carlo treatment planning system (MC-TPS) to allow the explicit radiation transport simulation of advanced radiotherapy treatments with optimal results in times efficient enough for clinical practice. The planning system (called CARMEN) is a full MC-TPS, controlled through a MATLAB interface, which is based on the sequencing of a novel map, called the "biophysical" map, generated from enhanced image data of patients to obtain a set of actually deliverable segments. In order to reduce the required computation time, the conventional fluence map is replaced by the biophysical map, which is sequenced to provide direct apertures that are later weighted by means of an optimization algorithm based on linear programming. A ray-casting algorithm throughout the patient CT assembles information about the structures found, the mass thickness crossed, as well as PET values. Data are recorded to generate a biophysical map for each gantry angle. These maps are the input files for a home-made sequencer developed to take into account the interactions of photons and electrons with the MLC. For each linac (Axesse of Elekta and Primus of Siemens) and energy beam studied (6, 9, 12, 15 MeV and 6 MV), phase space files were simulated with the EGSnrc/BEAMnrc code. The dose calculation in the patient was carried out with the BEAMDOSE code. This code is a modified version of EGSnrc/DOSXYZnrc able to calculate the beamlet dose in order to combine beamlets with different weights during the optimization process.
Three complex radiotherapy treatments were selected to check the reliability of CARMEN in situations where the MC calculation can offer an added value: a head-and-neck case (Case I) with three targets delineated on PET/CT images and a demanding dose escalation; a partial breast irradiation case (Case II) solved with photon and electron modulated beams (IMRT + MERT); and a prostatic bed case (Case III) with a pronounced concave-shaped PTV, treated using volumetric modulated arc therapy. In the three cases, the required target prescription doses and constraints on organs at risk were fulfilled in a time short enough to allow routine clinical implementation. The quality assurance protocol followed to check the CARMEN system showed high agreement with the experimental measurements. A Monte Carlo treatment planning model exclusively based on maps generated from patient imaging data has been presented. The sequencing of these maps yields deliverable apertures, which are then weighted for modulation under a linear programming formulation. The model is able to solve complex radiotherapy treatments with high accuracy in an efficient computation time.
Maier, Uwe-G; Zauner, Stefan; Woehle, Christian; Bolte, Kathrin; Hempel, Franziska; Allen, John F.; Martin, William F.
2013-01-01
Plastid and mitochondrial genomes have undergone parallel evolution to encode the same functional set of genes. These encode conserved protein components of the electron transport chain in their respective bioenergetic membranes and genes for the ribosomes that express them. This highly convergent aspect of organelle genome evolution is partly explained by the redox regulation hypothesis, which predicts a separate plastid or mitochondrial location for genes encoding bioenergetic membrane proteins of either photosynthesis or respiration. Here we show that convergence in organelle genome evolution is far stronger than previously recognized, because the same set of genes for ribosomal proteins is independently retained by both plastid and mitochondrial genomes. A hitherto unrecognized selective pressure retains genes for the same ribosomal proteins in both organelles. On the Escherichia coli ribosome assembly map, the retained proteins are implicated in 30S and 50S ribosomal subunit assembly and initial rRNA binding. We suggest that ribosomal assembly imposes functional constraints that govern the retention of ribosomal protein coding genes in organelles. These constraints are subordinate to redox regulation for electron transport chain components, which anchor the ribosome to the organelle genome in the first place. As organelle genomes undergo reduction, the rRNAs also become smaller. Below size thresholds of approximately 1,300 nucleotides (16S rRNA) and 2,100 nucleotides (26S rRNA), all ribosomal protein coding genes are lost from organelles, while electron transport chain components remain organelle encoded as long as the organelles use redox chemistry to generate a proton motive force. PMID:24259312
Avena-Koenigsberger, Andrea; Goñi, Joaquín; Solé, Ricard; Sporns, Olaf
2015-01-01
The structure of complex networks has attracted much attention in recent years. It has been noted that many real-world examples of networked systems share a set of common architectural features. This raises important questions about their origin, for example whether such network attributes reflect common design principles or constraints imposed by selectional forces that have shaped the evolution of network topology. Is it possible to place the many patterns and forms of complex networks into a common space that reveals their relations, and what are the main rules and driving forces that determine which positions in such a space are occupied by systems that have actually evolved? We suggest that these questions can be addressed by combining concepts from two currently relatively unconnected fields. One is theoretical morphology, which has conceptualized the relations between morphological traits defined by mathematical models of biological form. The second is network science, which provides numerous quantitative tools to measure and classify different patterns of local and global network architecture across disparate types of systems. Here, we explore a new theoretical concept that lies at the intersection between both fields, the ‘network morphospace’. Defined by axes that represent specific network traits, each point within such a space represents a location occupied by networks that share a set of common ‘morphological’ characteristics related to aspects of their connectivity. Mapping a network morphospace reveals the extent to which the space is filled by existing networks, thus allowing a distinction between actual and impossible designs and highlighting the generative potential of rules and constraints that pervade the evolution of complex systems. PMID:25540237
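To make the 'network morphospace' idea concrete, here is a minimal sketch (the choice of traits, edge density and mean clustering, is illustrative; any set of measurable topology traits could serve as axes):

```python
# Map each network (adjacency dict of neighbor sets) to a point in a
# two-dimensional morphospace whose axes are edge density and mean
# local clustering. Plain Python; traits are illustrative choices.

def density(adj):
    """Fraction of possible undirected edges that are present."""
    n = len(adj)
    m = sum(len(nbrs) for nbrs in adj.values()) / 2
    return 2 * m / (n * (n - 1)) if n > 1 else 0.0

def clustering(adj):
    """Mean local clustering coefficient over all nodes."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def morphospace_point(adj):
    return (density(adj), clustering(adj))

triangle = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}   # fully clustered
path = {1: {2}, 2: {1, 3}, 3: {2}}              # no clustering
```

Plotting many real networks this way reveals which regions of the trait space are occupied and which remain empty, the distinction between actual and impossible designs discussed above.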
Suresh K. Shrestha; Robert C. Burns
2012-01-01
We conducted a self-administered mail survey in September 2009 with randomly selected Oregon hunters who had purchased big game hunting licenses/tags for the 2008 hunting season. Survey questions explored hunting practices, the meanings of and motivations for big game hunting, the constraints to big game hunting participation, and the effects of age, years of hunting...
Constraint-Free Theories of Gravitation
NASA Technical Reports Server (NTRS)
Estabrook, Frank B.; Robinson, R. Steve; Wahlquist, Hugo D.
1998-01-01
Lovelock actions (more precisely, extended Gauss-Bonnet forms), when varied as Cartan forms on subspaces of higher dimensional flat Riemannian manifolds, generate well-set, causal exterior differential systems. In particular, the Einstein-Hilbert action 4-form, varied on a 4-dimensional subspace of E_10, yields a well-set generalized theory of gravity having no constraints. Ricci-flat solutions are selected by initial conditions on a bounding 3-space.
Mapping the developmental constraints on working memory span performance.
Bayliss, Donna M; Jarrold, Christopher; Baddeley, Alan D; Gunn, Deborah M; Leigh, Eleanor
2005-07-01
This study investigated the constraints underlying developmental improvements in complex working memory span performance among 120 children of between 6 and 10 years of age. Independent measures of processing efficiency, storage capacity, rehearsal speed, and basic speed of processing were assessed to determine their contribution to age-related variance in complex span. Results showed that developmental improvements in complex span were driven by 2 age-related but separable factors: 1 associated with general speed of processing and 1 associated with storage ability. In addition, there was an age-related contribution shared between working memory, processing speed, and storage ability that was important for higher level cognition. These results pose a challenge for models of complex span performance that emphasize the importance of processing speed alone.
Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform
NASA Astrophysics Data System (ADS)
Gato-Rivera, B.; Semikhatov, A. M.
1992-08-01
A direct relation between the conformal formalism for 2D quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the W(l)-constrained KP hierarchy to the (p′, p) minimal model, with the tau function being given by the correlator of a product of (dressed) (l, 1) [or (1, l)] operators, provided the Miwa parameter n_i and the free parameter (an abstract bc spin) present in the constraint are expressed through the ratio p′/p and the level l.
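As a hedged aside, in one common convention (not necessarily the exact normalization of this paper), the Miwa parametrization expresses the KP times through the variables z_i, and the simplest (Virasoro) case of the W constraints annihilates the tau function:

```latex
% Miwa parametrization of the KP times (common convention):
\[
  t_k \;=\; \frac{1}{k}\sum_i n_i\, z_i^{-k}, \qquad k \ge 1 ,
\]
% Virasoro (W^{(2)}) constraints on the tau function:
\[
  \mathcal{L}_n\,\tau \;=\; 0, \qquad n \ge -1 ,
\]
% where the operators close into (half of) the Virasoro algebra:
\[
  [\mathcal{L}_m, \mathcal{L}_n] \;=\; (m-n)\,\mathcal{L}_{m+n} .
\]
```

The paper's identification then matches these constraint equations, rewritten in the z_i variables, with the decoupling equations for Virasoro null vectors in the minimal model.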
System Considerations and Challenges in 3D Mapping and Modeling Using Low-Cost UAV Systems
NASA Astrophysics Data System (ADS)
Lari, Z.; El-Sheimy, N.
2015-08-01
In the last few years, low-cost UAV systems have been acknowledged as an affordable technology for geospatial data acquisition that can meet the needs of a variety of traditional and non-traditional mapping applications. In spite of its proven potential, UAV-based mapping is still lacking in terms of what is needed for it to become an acceptable mapping tool. In other words, a well-designed system architecture that considers payload restrictions as well as the specifications of the utilized direct geo-referencing component and the imaging systems, in light of the required mapping accuracy and intended application, is still required. Moreover, efficient data processing workflows, capable of delivering the mapping products with the specified quality while considering the synergistic characteristics of the sensors onboard, the wide range of potential users who might lack deep knowledge in mapping activities, and the time constraints of emerging applications, still need to be adopted. Therefore, the challenges introduced by having low-cost imaging and georeferencing sensors onboard UAVs with limited payload capability, the necessity of efficient data processing techniques for delivering the required products for the intended applications, and the diversity of potential users with insufficient mapping-related expertise need to be fully investigated and addressed by UAV-based mapping research efforts. This paper addresses these challenges and reviews system considerations, adaptive processing techniques, and quality assurance/quality control procedures for the achievement of accurate mapping products from these systems.
Continuous-time mean-variance portfolio selection with value-at-risk and no-shorting constraints
NASA Astrophysics Data System (ADS)
Yan, Wei
2012-01-01
An investment problem is considered with a dynamic mean-variance (M-V) portfolio criterion under discontinuous prices which follow jump-diffusion processes, in keeping with the actual prices of stocks and the normality and stability of the financial market. Short-selling of stocks is prohibited in this mathematical model. The corresponding stochastic Hamilton-Jacobi-Bellman (HJB) equation of the problem is presented, and its solution is obtained based on the theory of stochastic LQ control and viscosity solutions. The efficient frontier and optimal strategies of the original dynamic M-V portfolio selection problem are also provided. The effects of the value-at-risk constraint on the efficient frontier are then illustrated. Finally, an example illustrating the discontinuous prices based on M-V portfolio selection is presented.
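As a toy, single-period analogue of the mean-variance idea (not the paper's continuous-time jump-diffusion model; the asset statistics below are invented), the no-shorting constraint can be imposed by restricting weights to [0, 1] and tracing the frontier by brute force:

```python
# Two risky assets, long-only: sweep the weight of asset 1 over [0, 1]
# and record (portfolio mean, portfolio variance) for each weight.

def frontier(mu, cov, n_grid=1001):
    """Return (mean, variance) pairs for long-only weights (w, 1 - w)."""
    pts = []
    for i in range(n_grid):
        w = i / (n_grid - 1)          # no-shorting: 0 <= w <= 1
        m = w * mu[0] + (1 - w) * mu[1]
        v = (w * w * cov[0][0] + (1 - w) ** 2 * cov[1][1]
             + 2 * w * (1 - w) * cov[0][1])
        pts.append((m, v))
    return pts

mu = [0.10, 0.06]                      # invented expected returns
cov = [[0.04, 0.004], [0.004, 0.01]]   # invented covariance matrix
pts = frontier(mu, cov)
min_var_point = min(pts, key=lambda p: p[1])
```

For these two assets the minimum-variance long-only portfolio puts roughly 14% in the first asset; a value-at-risk cap of the kind studied in the paper would, in this discrete picture, simply discard frontier points whose loss quantile exceeds the cap.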
Developing a decision support system for R&D project portfolio selection with interdependencies
NASA Astrophysics Data System (ADS)
Ashrafi, Maryam; Davoudpour, Hamid; Abbassi, Mohammad
2012-11-01
Although investment in research and technology is a promising way for technology-centered organizations to achieve their objectives, resource constraints force them to choose among their pool of research and technology projects. R&D project portfolio selection techniques support this choice, mitigating the corresponding risks and enhancing the overall value of the project portfolio.
A New Item Selection Procedure for Mixed Item Type in Computerized Classification Testing.
ERIC Educational Resources Information Center
Lau, C. Allen; Wang, Tianyou
This paper proposes a new Information-Time index as the basis for item selection in computerized classification testing (CCT) and investigates how this new item selection algorithm can help improve test efficiency for item pools with mixed item types. It also investigates how practical constraints such as item exposure rate control, test…
Curtiss, W C; Vournakis, J N
1984-01-01
Eukaryotic 5S rRNA sequences from 34 diverse species were compared by the following method: (1) The sequences were aligned; (2) the positions of substitutions were located by comparison of all possible pairs of sequences; (3) the substitution sites were mapped to an assumed general base pairing model; and (4) the R-Y model of base stacking was used to study stacking pattern relationships in the structure. An analysis of the sequence and structure variability in each region of the molecule is presented. It was found that the degree of base substitution varies over a wide range, from absolute conservation to occurrence of over 90% of the possible observable substitutions. The substitutions are located primarily in stem regions of the 5S rRNA secondary structure. More than 88% of the substitutions in helical regions maintain base pairing. The disruptive substitutions are primarily located at the edges of helical regions, resulting in shortening of the helical regions and lengthening of the adjacent nonpaired regions. Base stacking patterns determined by the R-Y model are mapped onto the general secondary structure. Intrastrand and interstrand stacking could stabilize alternative coaxial structures and limit the conformational flexibility of nonpaired regions. Two short contiguous regions are 100% conserved in all species. This may reflect evolutionary constraints imposed at the DNA level by the requirement for binding of a 5S gene transcription initiation factor during gene expression.
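Steps (2) and (3) of the comparison method above, locating substitution positions by all-pairs comparison and checking whether substitutions falling in helices maintain base pairing, can be sketched as follows (the sequences and the pairing map used in the example are illustrative, not real 5S rRNA data):

```python
# All-pairs substitution mapping over pre-aligned sequences, plus a check
# that helix positions remain paired (Watson-Crick or G-U wobble).
from itertools import combinations

PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def substitution_sites(aligned):
    """Positions where any pair of aligned sequences differ (gaps ignored)."""
    sites = set()
    for s1, s2 in combinations(aligned, 2):
        for pos, (a, b) in enumerate(zip(s1, s2)):
            if a != b and a != "-" and b != "-":
                sites.add(pos)
    return sites

def pairing_maintained(seq, stem):
    """stem: list of (i, j) paired positions in the secondary-structure model."""
    return all((seq[i], seq[j]) in PAIRS for i, j in stem)
```

Mapping the resulting sites onto the assumed pairing model, as in step (3), is then a matter of counting how many substitution positions fall inside stems and whether the bases at those positions still satisfy `pairing_maintained`.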
NASA Astrophysics Data System (ADS)
Tapete, Deodato; Cigna, Francesca
2016-08-01
Timely availability of images of suitable spatial resolution, temporal frequency and coverage is currently one of the major technical constraints on the application of satellite SAR remote sensing for the conservation of heritage assets in urban environments that are impacted by human-induced transformation. TerraSAR-X and Sentinel-1A, in this regard, represent two different models of SAR data provision: very high resolution on-demand imagery with end-user-selected acquisition parameters, on one side, and freely accessible GIS-ready products with an intended regular temporal coverage, on the other. What this means for change detection analyses in urban areas is demonstrated in this paper via an experiment over Homs, the third largest city of Syria, with a history of settlement since 2300 BCE, where the impacts of the recent civil war combine with pre- and post-conflict urban transformation. The potential performance of Sentinel-1A StripMap scenes acquired in an emergency context is simulated via the matching StripMap beam mode offered by TerraSAR-X. Benefits and limitations of the different radar frequency bands, spatial resolutions and single/multi-channel polarizations are discussed, as a proof of concept of the regular monitoring currently achievable with space-borne SAR in historic urban settings. Urban transformation observed across Homs in 2009, 2014 and 2015 shows the impact of the Syrian conflict on the cityscape and proves that operator-driven interpretation is required to understand the complexity of multiple and overlapping urban changes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gomez-Cardona, D; Li, K; Lubner, M G
Purpose: The introduction of the highly nonlinear MBIR algorithm to clinical CT systems has made CNR an invalid metric for kV optimization. The purpose of this work was to develop a task-based framework to unify kV and mAs optimization for both FBP- and MBIR-based CT systems. Methods: The kV-mAs optimization was formulated as a constrained minimization problem: select kV and mAs to minimize dose under the constraint of maintaining the detection performance as clinically prescribed. To experimentally solve this optimization problem, exhaustive measurements of the detectability index (d') for a hepatic lesion detection task were performed at 15 different mA levels and 4 kV levels using an anthropomorphic phantom. The measured d' values were used to generate an iso-detectability map; similarly, dose levels recorded at different kV-mAs combinations were used to generate an iso-dose map. The iso-detectability map was overlaid on top of the iso-dose map so that, for a prescribed detectability level d', the optimal kV-mA combination can be determined from the crossing between the d' contour and the dose contour that corresponds to the minimum dose. Results: Taking d'=16 as an example: the kV-mA combinations on the measured iso-d' line of MBIR are 80–150 (3.8), 100–140 (6.6), 120–150 (11.3), and 140–160 (17.2), where the values in parentheses are measured dose values in mGy. As a result, the optimal kV was 80 and the optimal mA was 150. In comparison, the optimal kV and mA for FBP were 100 and 500, which corresponded to a dose level of 24 mGy. Results of in vivo animal experiments were consistent with the phantom results. Conclusion: A new method to optimize kV and mAs selection has been developed. This method is applicable to both linear and nonlinear CT systems such as those using MBIR. Additional dose savings can be achieved by combining MBIR with this method. This work was partially supported by an NIH grant R01CA169331 and GE Healthcare. K. Li, D. Gomez-Cardona, M. G. Lubner: Nothing to disclose. P. J. Pickhardt: Co-founder, VirtuoCTC, LLC; Stockholder, Cellectar Biosciences, Inc. G.-H. Chen: Research funded, GE Healthcare; Research funded, Siemens AX.
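The selection step itself reduces to picking the minimum-dose point on the prescribed iso-detectability contour. A sketch using the MBIR values quoted in the abstract for d' = 16 (each tuple is kV, mA, dose in mGy):

```python
# Measured MBIR iso-d' points for d' = 16, taken from the abstract:
# (kV, mA, dose_mGy) combinations that all reach the prescribed d'.
measurements = [
    (80, 150, 3.8),
    (100, 140, 6.6),
    (120, 150, 11.3),
    (140, 160, 17.2),
]

def optimal_setting(iso_dprime_points):
    """Return the (kV, mA, dose) triple with minimum dose on an iso-d' contour."""
    return min(iso_dprime_points, key=lambda p: p[2])

best = optimal_setting(measurements)
```

This reproduces the abstract's result that 80 kV / 150 mA is optimal for MBIR at this detectability level; in the full method the contour points come from interpolating the measured iso-detectability and iso-dose maps rather than a short hand-entered list.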
Constraints on food chain length arising from regional metacommunity dynamics
Calcagno, Vincent; Massol, François; Mouquet, Nicolas; Jarne, Philippe; David, Patrice
2011-01-01
Classical ecological theory has proposed several determinants of food chain length, but the role of metacommunity dynamics has not yet been fully considered. By modelling patchy predator–prey metacommunities with extinction–colonization dynamics, we identify two distinct constraints on food chain length. First, finite colonization rates limit predator occupancy to a subset of prey-occupied sites. Second, intrinsic extinction rates accumulate along trophic chains. We show how both processes concur to decrease maximal and average food chain length in metacommunities. This decrease is mitigated if predators track their prey during colonization (habitat selection) and can be reinforced by top-down control of prey vital rates (especially extinction). Moreover, top-down control of colonization and habitat selection can interact to produce a counterintuitive positive relationship between perturbation rate and food chain length. Our results show how novel limits to food chain length emerge in spatially structured communities. We discuss the connections between these constraints and the ones commonly discussed, and suggest ways to test for metacommunity effects in food webs. PMID:21367786
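A minimal Levins-type patch-occupancy sketch captures the two constraints (this is an assumed illustrative form, not the authors' exact model): predators colonize only prey-occupied patches, and a patch losing its prey also loses its predator, so the prey's extinction rate adds to the predator's and equilibrium occupancy shrinks up the chain:

```python
# Euler integration of a two-level patch-occupancy model:
#   prey:     dp/dt = c_prey * p * (1 - p) - e_prey * p
#   predator: dq/dt = c_pred * q * (p - q) - (e_pred + e_prey) * q
# Predators can only occupy the fraction (p - q) of prey-occupied, empty
# patches, and inherit the prey's extinction rate on top of their own.

def equilibrium(c_prey, e_prey, c_pred, e_pred, dt=0.01, steps=50000):
    p, q = 0.5, 0.1   # initial occupancies
    for _ in range(steps):
        dp = c_prey * p * (1 - p) - e_prey * p
        dq = c_pred * q * (p - q) - (e_pred + e_prey) * q
        p += dt * dp
        q += dt * dq
    return p, q

p_eq, q_eq = equilibrium(c_prey=1.0, e_prey=0.2, c_pred=1.0, e_pred=0.2)
```

With c = 1.0 and e = 0.2 at both levels, prey equilibrate at 0.8 of patches but predators at only 0.4; under this sketch a hypothetical third level with the same rates would be squeezed to zero occupancy, a cap on food chain length of exactly the kind the paper analyzes.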
Lourdais, Olivier; Dupoué, Andréaz; Guillon, Michaël; Guiller, Gaëtan; Michaud, Bruno; DeNardo, Dale F
Water constraints can mediate evolutionary conflict either among individuals (e.g., parent-offspring conflict, sexual conflict) or within an individual (e.g., cost of reproduction). During pregnancy, water is of particular importance because the female provides all water needed for embryonic development and experiences important maternal shifts in behavior and physiology that, together, can compromise female water balance if water availability is limited. We examined the effect of pregnancy on evaporative water loss and microhabitat selection in a viviparous snake, the aspic viper. We found that both physiological (increased metabolism and body temperature) and morphological (body distension) changes contribute to an increased evaporative water loss in pregnant females. We also found that pregnant females in the wild select warmer and moister basking locations than nonreproductive females, likely to mitigate the conflict between thermal needs and water loss. Water resources likely induce significant reproductive constraints across diverse taxa and thus warrant further consideration in ecological research. From an evolutionary perspective, water constraints during reproduction may contribute to shaping reproductive effort.
Characteristics and significance of intergenic polyadenylated RNA transcription in Arabidopsis.
Moghe, Gaurav D; Lehti-Shiu, Melissa D; Seddon, Alex E; Yin, Shan; Chen, Yani; Juntawong, Piyada; Brandizzi, Federica; Bailey-Serres, Julia; Shiu, Shin-Han
2013-01-01
The Arabidopsis (Arabidopsis thaliana) genome is the most well-annotated plant genome. However, transcriptome sequencing in Arabidopsis continues to suggest the presence of polyadenylated (polyA) transcripts originating from presumed intergenic regions. It is not clear whether these transcripts represent novel noncoding or protein-coding genes. To understand the nature of intergenic polyA transcription, we first assessed its abundance using multiple messenger RNA sequencing data sets. We found 6,545 intergenic transcribed fragments (ITFs) occupying 3.6% of Arabidopsis intergenic space. In contrast to transcribed fragments that map to protein-coding and RNA genes, most ITFs are significantly shorter, are expressed at significantly lower levels, and tend to be more data set specific. A surprisingly large number of ITFs (32.1%) may be protein coding based on evidence of translation. However, our results indicate that these "translated" ITFs tend to be close to and are likely associated with known genes. To investigate if ITFs are under selection and are functional, we assessed ITF conservation through cross-species as well as within-species comparisons. Our analysis reveals that 237 ITFs, including 49 with translation evidence, are under strong selective constraint and relatively distant from annotated features. These ITFs are likely parts of novel genes. However, the selective pressure imposed on most ITFs is similar to that of randomly selected, untranscribed intergenic sequences. Our findings indicate that despite the prevalence of ITFs, apart from the possibility of genomic contamination, many may be background or noisy transcripts derived from "junk" DNA, whose production may be inherent to the process of transcription and which, on rare occasions, may act as catalysts for the creation of novel genes.
The Application of Computer-Aided Discovery to Spacecraft Site Selection
NASA Astrophysics Data System (ADS)
Pankratius, V.; Blair, D. M.; Gowanlock, M.; Herring, T.
2015-12-01
The selection of landing and exploration sites for interplanetary robotic or human missions is a complex task. Historically it has been labor-intensive, with large groups of scientists manually interpreting a planetary surface across a variety of datasets to identify potential sites based on science and engineering constraints. This search process can be lengthy, and excellent sites may get overlooked when the aggregate value of site selection criteria is non-obvious or non-intuitive. As planetary data collection leads to Big Data repositories and a growing set of selection criteria, scientists will face a combinatorial search space explosion that requires scalable, automated assistance. We are currently exploring more general computer-aided discovery techniques in the context of planetary surface deformation phenomena that can lend themselves to application in the landing site search problem. In particular, we are developing a general software framework that addresses key difficulties: characterizing a given phenomenon or site based on data gathered from multiple instruments (e.g. radar interferometry, gravity, thermal maps, or GPS time series), and examining a variety of possible workflows whose individual configurations are optimized to isolate different features. The framework allows algorithmic pipelines and hypothesized models to be perturbed or permuted automatically within well-defined bounds established by the scientist. For example, even simple choices for outlier and noise handling or data interpolation can drastically affect the detectability of certain features. These techniques aim to automate repetitive tasks that scientists routinely perform in exploratory analysis, and make them more efficient and scalable by executing them in parallel in the cloud. We also explore ways in which machine learning can be combined with human feedback to prune the search space and converge to desirable results. 
Acknowledgements: We acknowledge support from NASA AIST NNX15AG84G (PI V. Pankratius)
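A hedged sketch of the framework idea described above (all names and the scoring are illustrative stand-ins, not the project's actual API): enumerate every pipeline configuration within scientist-defined bounds, run each workflow variant, and rank the results:

```python
# Exhaustive exploration of a bounded configuration space of workflow
# variants. The "pipeline" here is a toy two-stage example: an outlier
# cut followed by a choice of aggregation, standing in for real choices
# like noise handling and interpolation.
from itertools import product

def run_pipeline(data, outlier_cut, aggregation):
    """Stand-in for one workflow variant; returns a detection score."""
    cleaned = [x for x in data if abs(x) <= outlier_cut]
    if not cleaned:
        return 0.0
    if aggregation == "mean":
        return sum(cleaned) / len(cleaned)
    return sorted(cleaned)[len(cleaned) // 2]   # "median"

def explore(data, bounds):
    """Run every configuration within the given bounds, best score first."""
    configs = product(bounds["outlier_cut"], bounds["aggregation"])
    results = [(run_pipeline(data, cut, agg),
                {"outlier_cut": cut, "aggregation": agg})
               for cut, agg in configs]
    return sorted(results, key=lambda r: r[0], reverse=True)
```

In a real deployment each configuration would be dispatched to a cloud worker in parallel, and the ranked results (or human feedback on them) would be fed back to prune the bounds, as the abstract describes.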
NASA Technical Reports Server (NTRS)
Dunkley, J.; Komatsu, E.; Nolta, M.R.; Spergel, D.N.; Larson, D.; Hinshaw, G.; Page, L.; Bennett, C.L.; Gold, B.; Jarosik, N.;
2008-01-01
The Wilkinson Microwave Anisotropy Probe (WMAP), launched in 2001, has mapped out the Cosmic Microwave Background with unprecedented accuracy over the whole sky. Its observations have led to the establishment of a simple concordance cosmological model for the contents and evolution of the universe, consistent with virtually all other astronomical measurements. The WMAP first-year and three-year data have allowed us to place strong constraints on the parameters describing the ΛCDM model: a flat universe filled with baryons, cold dark matter, neutrinos, and a cosmological constant, with initial fluctuations described by nearly scale-invariant power law fluctuations, as well as placing limits on extensions to this simple model (Spergel et al. 2003, 2007). With all-sky measurements of the polarization anisotropy (Kogut et al. 2003; Page et al. 2007), two orders of magnitude smaller than the intensity fluctuations, WMAP has not only given us an additional picture of the universe as it transitioned from ionized to neutral at redshift z ≈ 1100, but also an observation of the later reionization of the universe by the first stars. In this paper we present cosmological constraints from WMAP alone, for both the ΛCDM model and a set of possible extensions. We also consider the consistency of WMAP constraints with other recent astronomical observations. This is one of seven five-year WMAP papers. Hinshaw et al. (2008) describe the data processing and basic results. Hill et al. (2008) present new beam models and window functions, Gold et al. (2008) describe the emission from Galactic foregrounds, and Wright et al. (2008) the emission from extra-Galactic point sources. The angular power spectra are described in Nolta et al. (2008), and Komatsu et al. (2008) present and interpret cosmological constraints based on combining WMAP with other data.
WMAP observations are used to produce full-sky maps of the CMB in five frequency bands centered at 23, 33, 41, 61, and 94 GHz (Hinshaw et al. 2008). With five years of data, we are now able to place better limits on the ΛCDM model, as well as to move beyond it to test the composition of the universe, details of reionization, sub-dominant components, characteristics of inflation, and primordial fluctuations. We have more than doubled the amount of polarized data used for cosmological analysis, allowing a better measure of the large-scale E-mode signal (Nolta et al. 2008). To this end we describe an alternative way to remove Galactic foregrounds from low resolution polarization maps in which Galactic emission is marginalized over, providing a cross-check of our results. With longer integration we also better probe the second and third acoustic peaks in the temperature angular power spectrum, and have many more year-to-year difference maps available for cross-checking systematic effects (Hinshaw et al. 2008).