Sample records for distributed object models

  1. ON A POSSIBLE SIZE/COLOR RELATIONSHIP IN THE KUIPER BELT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, R. E.; Kavelaars, J. J., E-mail: repike@uvic.ca

    2013-10-01

    Color measurements and albedo distributions introduce non-intuitive observational biases in size-color relationships among Kuiper Belt Objects (KBOs) that cannot be disentangled without a well-characterized sample population with systematic photometry. Peixinho et al. report that the form of the KBO color distribution varies with absolute magnitude, H. However, Tegler et al. find that KBO color distributions are a property of object classification. We construct synthetic models of observed KBO colors based on two B-R color distribution scenarios: color distribution dependent on H magnitude (H-Model) and color distribution based on object classification (Class-Model). These synthetic B-R color distributions were modified to account for observational flux biases. We compare our synthetic B-R distributions to the observed "Hot" and "Cold" detected objects from the Canada-France Ecliptic Plane Survey and the Meudon Multicolor Survey. For both surveys, the Hot population color distribution rejects the H-Model but is well described by the Class-Model. The Cold objects reject the H-Model, but the Class-Model (while not statistically rejected) also does not provide a compelling match for the data. Although we formally reject models where the structure of the color distribution is a strong function of H magnitude, we also do not find that a simple dependence of color distribution on orbit classification is sufficient to describe the color distribution of classical KBOs.
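
    As a rough illustration of the model-rejection step described above, the sketch below compares a synthetic color sample against a mock "observed" sample with a two-sample Kolmogorov-Smirnov test. The distribution parameters and survey size are invented for illustration, not the paper's values.

    ```python
    # Sketch: test whether an observed color sample is consistent with a
    # synthetic B-R color distribution, as a stand-in for the rejection step.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Hypothetical Class-Model colors for the "Hot" population (illustrative).
    synthetic_hot = rng.normal(loc=1.5, scale=0.25, size=5000)
    observed_hot = rng.normal(loc=1.52, scale=0.24, size=60)  # mock survey data

    stat, p_value = stats.ks_2samp(observed_hot, synthetic_hot)
    print(f"KS statistic = {stat:.3f}, p = {p_value:.3f}")
    if p_value < 0.05:
        print("Model rejected at the 95% level")
    ```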

  2. An object-based storage model for distributed remote sensing images

    NASA Astrophysics Data System (ADS)

    Yu, Zhanwu; Li, Zhongmin; Zheng, Sheng

    2006-10-01

    It is very difficult to design an integrated storage solution for distributed remote sensing images that offers high-performance network storage services and secure data sharing across platforms using current network storage models such as direct attached storage, network attached storage, and storage area networks. Object-based storage, a new generation of network storage technology that has emerged recently, separates the data path, the control path, and the management path, which solves the metadata bottleneck problem of traditional storage models, and has the characteristics of parallel data access, data sharing across platforms, intelligence of storage devices, and security of data access. We apply object-based storage to the storage management of remote sensing images to construct an object-based storage model for distributed remote sensing images. In the storage model, remote sensing images are organized as remote sensing objects stored in the object-based storage devices. Based on the storage model, we present the architecture of a distributed remote sensing image application system built on object-based storage, and give test results comparing the write performance of a traditional network storage model and the object-based storage model.

  3. A Bayesian Alternative for Multi-objective Ecohydrological Model Specification

    NASA Astrophysics Data System (ADS)

    Tang, Y.; Marshall, L. A.; Sharma, A.; Ajami, H.

    2015-12-01

    Process-based ecohydrological models combine the hydrological, physical, biogeochemical and ecological processes of catchments, and are usually more complex and more heavily parameterized than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological framework. In our study, a formal Bayesian approach is implemented in an ecohydrological model which combines a hydrological model (HyMOD) and a dynamic vegetation model (DVM). Simulations focused on a single-objective likelihood (streamflow/LAI) and multi-objective likelihoods (streamflow and LAI) with different weights are compared. Uniform, weakly informative and strongly informative prior distributions are used in different simulations. The Kullback-Leibler divergence (KLD) is used to measure the (dis)similarity between different priors and the corresponding posterior distributions to examine parameter sensitivity. Results show that different prior distributions can strongly influence the posterior distributions of parameters, especially when the available data are limited or parameters are insensitive to the available data. We demonstrate differences in optimized parameters and uncertainty limits between cases based on multi-objective likelihoods and single-objective likelihoods. We also demonstrate the importance of appropriately defining the weights of objectives in multi-objective calibration according to the different data types.
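
    The KLD between a prior and its posterior can be estimated directly from samples. A minimal sketch using shared histogram bins follows; the uniform prior and beta "posterior" are placeholders, not the study's MCMC output.

    ```python
    # Sketch: estimate KLD(posterior || prior) from parameter samples via
    # shared histogram bins, as a simple parameter-sensitivity measure.
    import numpy as np

    def kld_from_samples(prior, posterior, bins=50):
        lo = min(prior.min(), posterior.min())
        hi = max(prior.max(), posterior.max())
        p, edges = np.histogram(posterior, bins=bins, range=(lo, hi), density=True)
        q, _ = np.histogram(prior, bins=edges, density=True)
        width = np.diff(edges)
        mask = (p > 0) & (q > 0)         # avoid log(0); assumes overlapping support
        return np.sum(width[mask] * p[mask] * np.log(p[mask] / q[mask]))

    rng = np.random.default_rng(1)
    prior = rng.uniform(0.0, 1.0, 20000)      # uniform prior on a parameter
    posterior = rng.beta(8, 3, 20000)         # stand-in for an MCMC posterior
    print(f"KLD = {kld_from_samples(prior, posterior):.3f}")
    ```

    A larger divergence means the data moved the parameter far from its prior, i.e. the parameter is sensitive to the available data.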

  4. A Bayesian alternative for multi-objective ecohydrological model specification

    NASA Astrophysics Data System (ADS)

    Tang, Yating; Marshall, Lucy; Sharma, Ashish; Ajami, Hoori

    2018-01-01

    Recent studies have identified the importance of vegetation processes in terrestrial hydrologic systems. Process-based ecohydrological models combine hydrological, physical, biochemical and ecological processes of the catchments, and as such are generally more complex and parametric than conceptual hydrological models. Thus, appropriate calibration objectives and model uncertainty analysis are essential for ecohydrological modeling. In recent years, Bayesian inference has become one of the most popular tools for quantifying the uncertainties in hydrological modeling with the development of Markov chain Monte Carlo (MCMC) techniques. The Bayesian approach offers an appealing alternative to traditional multi-objective hydrologic model calibrations by defining proper prior distributions that can be considered analogous to the ad-hoc weighting often prescribed in multi-objective calibration. Our study aims to develop appropriate prior distributions and likelihood functions that minimize the model uncertainties and bias within a Bayesian ecohydrological modeling framework based on a traditional Pareto-based model calibration technique. In our study, a Pareto-based multi-objective optimization and a formal Bayesian framework are implemented in a conceptual ecohydrological model that combines a hydrological model (HYMOD) and a modified Bucket Grassland Model (BGM). Simulations focused on one objective (streamflow/LAI) and multiple objectives (streamflow and LAI) with different emphases defined via the prior distribution of the model error parameters. Results show more reliable outputs for both predicted streamflow and LAI using Bayesian multi-objective calibration with specified prior distributions for error parameters based on results from the Pareto front in the ecohydrological modeling. The methodology implemented here provides insight into the usefulness of multi-objective Bayesian calibration for ecohydrologic systems and the importance of appropriate prior distributions in such approaches.

  5. Performance of the Heavy Flavor Tracker (HFT) detector in star experiment at RHIC

    NASA Astrophysics Data System (ADS)

    Alruwaili, Manal

    With growing technology, the number of processors is becoming massive. Current supercomputer processing will be available on desktops in the next decade. For mass-scale application software development on the massively parallel computing available on desktops, existing popular languages with large libraries have to be augmented with new constructs and paradigms that exploit massively parallel computing and distributed memory models while retaining user-friendliness. Currently available object-oriented languages for massively parallel computing, such as Chapel, X10 and UPC++, exploit distributed computing, data-parallel computing and thread-parallelism at the process level in the PGAS (Partitioned Global Address Space) memory model. However, they do not incorporate: 1) any extension for object distribution to exploit the PGAS model; 2) the flexibility of migrating or cloning an object between places to exploit load balancing; and 3) the programming paradigms that result from the integration of data- and thread-level parallelism with object distribution. In the proposed thesis, I compare different languages in the PGAS model; propose new constructs that extend C++ with object distribution, object cloning and object migration; and integrate PGAS-based process constructs with these extensions on distributed objects. A new paradigm, MIDD (Multiple Invocation Distributed Data), is also presented, in which different copies of the same class can be invoked and work on different elements of distributed data concurrently using remote method invocations. I present the new constructs, their grammar and their behavior. The new constructs are explained using simple programs that utilize them.

  6. Emerald: an object-based language for distributed programming

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, N.C.

    1987-01-01

    Distributed systems have become more common; however, constructing distributed applications remains a very difficult task. Numerous operating systems and programming languages have been proposed that attempt to simplify the programming of distributed applications. Here a programming language called Emerald is presented that simplifies distributed programming by extending the concepts of object-based languages to the distributed environment. Emerald supports a single model of computation: the object. Emerald objects include private entities such as integers and Booleans, as well as shared, distributed entities such as compilers, directories, and entire file systems. Emerald objects may move between machines in the system, but object invocation is location independent. The uniform semantic model used for describing all Emerald objects makes the construction of distributed applications in Emerald much simpler than in systems where the differences in implementation between local and remote entities are visible in the language semantics. Emerald incorporates a type system that deals only with the specification of objects, ignoring differences in implementation. Thus, two different implementations of the same abstraction may be freely mixed.

  7. Multi-objective possibilistic model for portfolio selection with transaction cost

    NASA Astrophysics Data System (ADS)

    Jana, P.; Roy, T. K.; Mazumder, S. K.

    2009-06-01

    In this paper, we introduce the possibilistic mean value and variance of continuous distributions, rather than probability distributions. We propose a multi-objective portfolio model and add an entropy objective function to generate a well-diversified asset portfolio within an optimal asset allocation. To quantify potential return and risk, portfolio liquidity is taken into account, and a multi-objective non-linear programming model for portfolio rebalancing with transaction cost is proposed. The models are illustrated with numerical examples.
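
    A minimal sketch of the core idea, adding an entropy term to a mean-variance portfolio objective to encourage diversification, scalarized into a single function. The returns, covariances and weights below are invented, and the transaction-cost and liquidity terms are omitted for brevity.

    ```python
    # Sketch: mean-variance portfolio with an entropy diversification term.
    import numpy as np
    from scipy.optimize import minimize

    mu = np.array([0.08, 0.12, 0.10])                 # expected returns (assumed)
    cov = np.array([[0.04, 0.01, 0.00],
                    [0.01, 0.09, 0.02],
                    [0.00, 0.02, 0.06]])              # return covariance (assumed)
    lam, gamma = 2.0, 0.5                             # risk and entropy weights

    def objective(w):
        entropy = -np.sum(w * np.log(w + 1e-12))      # higher = more diversified
        return -(mu @ w) + lam * (w @ cov @ w) - gamma * entropy

    cons = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    res = minimize(objective, x0=np.full(3, 1/3), bounds=[(0, 1)] * 3,
                   constraints=cons)
    print("weights:", res.x.round(3))
    ```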

  8. Coordinating complex problem-solving among distributed intelligent agents

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1992-01-01

    A process-oriented control model is described for distributed problem solving. The model coordinates the transfer and manipulation of information across independent networked applications, both intelligent and conventional. The model was implemented using SOCIAL, a set of object-oriented tools for distributed computing. Complex sequences of distributed tasks are specified in terms of high-level scripts. Scripts are executed by SOCIAL objects called Manager Agents, which realize an intelligent coordination model that routes individual tasks to suitable server applications across the network. These tools are illustrated in a prototype distributed system for decision support of ground operations for NASA's Space Shuttle fleet.

  9. Imprecision and Uncertainty in the UFO Database Model.

    ERIC Educational Resources Information Center

    Van Gyseghem, Nancy; De Caluwe, Rita

    1998-01-01

    Discusses how imprecision and uncertainty are dealt with in the UFO (Uncertainty and Fuzziness in an Object-oriented) database model. Such information is expressed by means of possibility distributions, and modeled by means of the proposed concept of "role objects." The role objects model uncertain, tentative information about objects,…

  10. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    The following reports are presented on this project: a first-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; a second-year progress report on Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation; An Extensible, Interchangeable and Sharable Database Model for Improving Multidisciplinary Aircraft Design; Interactive, Secure Web-enabled Aircraft Engine Simulation Using XML Databinding Integration; and Improving the Aircraft Design Process Using Web-based Modeling and Simulation.

  11. None of the above: A Bayesian account of the detection of novel categories.

    PubMed

    Navarro, Daniel J; Kemp, Charles

    2017-10-01

    Every time we encounter a new object, action, or event, there is some chance that we will need to assign it to a novel category. We describe and evaluate a class of probabilistic models that detect when an object belongs to a category that has not previously been encountered. The models incorporate a prior distribution that is influenced by the distribution of previous objects among categories, and we present 2 experiments that demonstrate that people are also sensitive to this distributional information. Two additional experiments confirm that distributional information is combined with similarity when both sources of information are available. We compare our approach to previous models of unsupervised categorization and to several heuristic-based models, and find that a hierarchical Bayesian approach provides the best account of our data. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
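
    One common way to give a prior probability to a never-before-seen category is a Chinese-restaurant-process style rule, in which the chance of a novel category depends on how objects have been distributed so far. This is a generic sketch of that idea, not the authors' exact model; the concentration parameter alpha is an assumption.

    ```python
    # Sketch: CRP-style prior for "none of the above" -- the probability that
    # the next object starts a brand-new category, given past category counts.
    def p_novel_category(category_counts, alpha=1.0):
        """Prior probability that the next object belongs to a new category."""
        n = sum(category_counts)
        return alpha / (n + alpha)

    counts = [7, 2, 1]                       # objects seen per known category
    n = sum(counts)
    for k, c in enumerate(counts):
        print(f"P(category {k}) = {c / (n + 1.0):.2f}")
    print(f"P(novel category) = {p_novel_category(counts):.2f}")
    ```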

  12. Numerical modeling of magnetic moments for UXO applications

    USGS Publications Warehouse

    Sanchez, V.; Li, Y.; Nabighian, M.; Wright, D.

    2006-01-01

    The surface magnetic anomaly observed in UXO clearance is mainly dipolar and, consequently, the dipole is the only magnetic moment regularly recovered in UXO applications. The dipole moment contains information about intensity of magnetization but lacks information about shape. In contrast, higher-order moments, such as quadrupole and octupole, encode asymmetry properties of the magnetization distribution within the buried targets. In order to improve our understanding of magnetization distribution within UXO and non-UXO objects and its potential utility in UXO clearance, we present a 3D numerical modeling study for highly susceptible metallic objects. The basis for the modeling is the solution of a nonlinear integral equation describing magnetization within isolated objects. A solution for magnetization distribution then allows us to compute magnetic moments of the object, analyze their relationships, and provide a depiction of the surface anomaly produced by different moments within the object. Our modeling results show significant high-order moments for more asymmetric objects situated at depths typical of UXO burial, and suggest that the increased relative contribution to magnetic gradient data from these higher-order moments may provide a practical tool for improved UXO discrimination.
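
    Once a magnetization distribution is known on a grid, the dipole moment is simply the volume integral of the magnetization; higher-order moments additionally weight the magnetization by position. A toy sketch with made-up grid values, not the solution of the paper's integral equation:

    ```python
    # Sketch: dipole moment m = sum(M_i * dV) over the voxels of an object.
    import numpy as np

    dx = 0.01                                 # voxel edge length in metres
    shape = (10, 10, 30)                      # elongated object, like a casing
    M = np.zeros(shape + (3,))
    M[..., 2] = 1e4                           # uniform magnetization along z (A/m)

    dV = dx**3
    dipole = M.sum(axis=(0, 1, 2)) * dV       # dipole moment in A m^2
    print("dipole moment:", dipole)
    ```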

  13. Accessing and distributing EMBL data using CORBA (common object request broker architecture).

    PubMed

    Wang, L; Rodriguez-Tomé, P; Redaschi, N; McNeil, P; Robinson, A; Lijnzaad, P

    2000-01-01

    The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems.
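
    A minimal sketch of the "live object cache with an evictor" pattern described above: objects are created on demand by a loader, and the least recently used entries are evicted. This mimics the pattern, not the EBI implementation; the loader and accession keys are hypothetical.

    ```python
    # Sketch: on-demand object creation with LRU eviction (evictor pattern).
    from collections import OrderedDict

    class LiveObjectCache:
        def __init__(self, loader, capacity=1000):
            self._loader = loader
            self._capacity = capacity
            self._cache = OrderedDict()

        def get(self, key):
            if key in self._cache:
                self._cache.move_to_end(key)        # mark as recently used
                return self._cache[key]
            obj = self._loader(key)                 # create the object on demand
            self._cache[key] = obj
            if len(self._cache) > self._capacity:   # evictor: drop the LRU entry
                self._cache.popitem(last=False)
            return obj

    cache = LiveObjectCache(loader=lambda acc: {"accession": acc}, capacity=2)
    print(cache.get("X56734"), cache.get("U00096"), cache.get("X56734"))
    ```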

  14. Accessing and distributing EMBL data using CORBA (common object request broker architecture)

    PubMed Central

    Wang, Lichun; Rodriguez-Tomé, Patricia; Redaschi, Nicole; McNeil, Phil; Robinson, Alan; Lijnzaad, Philip

    2000-01-01

    Background: The EMBL Nucleotide Sequence Database is a comprehensive database of DNA and RNA sequences and related information traditionally made available in flat-file format. Queries through tools such as SRS (Sequence Retrieval System) also return data in flat-file format. Flat files have a number of shortcomings, however, and the resources therefore currently lack a flexible environment to meet individual researchers' needs. The Object Management Group's common object request broker architecture (CORBA) is an industry standard that provides platform-independent programming interfaces and models for portable distributed object-oriented computing applications. Its independence from programming languages, computing platforms and network protocols makes it attractive for developing new applications for querying and distributing biological data. Results: A CORBA infrastructure developed by EMBL-EBI provides an efficient means of accessing and distributing EMBL data. The EMBL object model is defined such that it provides a basis for specifying interfaces in interface definition language (IDL) and thus for developing the CORBA servers. The mapping from the object model to the relational schema in the underlying Oracle database uses the facilities provided by Persistence™, an object/relational tool. The techniques of developing loaders and 'live object caching' with persistent objects achieve a smart live object cache where objects are created on demand. The objects are managed by an evictor pattern mechanism. Conclusions: The CORBA interfaces to the EMBL database address some of the problems of traditional flat-file formats and provide an efficient means for accessing and distributing EMBL data. CORBA also provides a flexible environment for users to develop their applications by building clients to our CORBA servers, which can be integrated into existing systems. PMID:11178259

  15. A hierarchical distributed control model for coordinating intelligent systems

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.

    1991-01-01

    A hierarchical distributed control (HDC) model for coordinating cooperative problem-solving among intelligent systems is described. The model was implemented using SOCIAL, an innovative object-oriented tool for integrating heterogeneous, distributed software systems. SOCIAL embeds applications in 'wrapper' objects called Agents, which supply predefined capabilities for distributed communication, control, data specification, and translation. The HDC model is realized in SOCIAL as a 'Manager' Agent that coordinates interactions among application Agents. The HDC Manager: indexes the capabilities of application Agents; routes request messages to suitable server Agents; and stores results in a commonly accessible 'Bulletin-Board'. This centralized control model is illustrated in a fault diagnosis application for launch operations support of the Space Shuttle fleet at NASA Kennedy Space Center.
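
    A minimal sketch of the Manager pattern just described: a capability index routes request messages to suitable server agents and posts results on a shared bulletin board. The class, names and the diagnosis payload are hypothetical, not SOCIAL's API.

    ```python
    # Sketch: capability-indexed routing with a commonly accessible result store.
    class ManagerAgent:
        def __init__(self):
            self.index = {}              # capability -> server agent (callable)
            self.bulletin_board = {}     # request id -> result

        def register(self, capability, agent):
            self.index[capability] = agent

        def request(self, req_id, capability, payload):
            result = self.index[capability](payload)   # route to a server agent
            self.bulletin_board[req_id] = result       # post on bulletin board
            return result

    mgr = ManagerAgent()
    mgr.register("fault-diagnosis", lambda p: f"diagnosis for {p}: sensor drift")
    print(mgr.request("r1", "fault-diagnosis", "LOX valve telemetry"))
    ```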

  16. Model of Distributed Learning Objects Repository for a Heterogenic Internet Environment

    ERIC Educational Resources Information Center

    Kaczmarek, Jerzy; Landowska, Agnieszka

    2006-01-01

    In this article, an extension of the existing structure of learning objects is described. The solution addresses the problem of the access and discovery of educational resources in the distributed Internet environment. An overview of e-learning standards, reference models, and problems with educational resources delivery is presented. The paper…

  17. The Population Consequences of Disturbance Model Application to North Atlantic Right Whales (Eubalaena Glacialis)

    DTIC Science & Technology

    2013-09-30

    ...the revised approach is called PCOD (Population Consequences of Disturbance). In North Atlantic right whales (Eubalaena glacialis), extensive data on... disturbance and prey variability into the PCOD model. ... Figure 1: Modified model of population consequences of disturbance (PCOD) (Thomas et al. 2011). OBJECTIVES: The objectives for this study are...

  18. Analysis of the In-Water and Sky Radiance Distribution Data Acquired During the Radyo Project

    DTIC Science & Technology

    2013-09-30

    ...radiative transfer to model the BRDF of particulate surfaces. OBJECTIVES: The major objective of this research is to understand the downwelling... of image and radiative transfer models used in the ocean. My near-term ocean optics objectives have been: 1) to improve the measurement capability... directional Reflectance Distribution Function (BRDF) of benthic surfaces in the ocean, and 4) to understand the capabilities and limitations of using...

  19. Distribution of compact object mergers around galaxies

    NASA Astrophysics Data System (ADS)

    Bulik, T.; Belczyński, K.; Zbijewski, W.

    1999-09-01

    Compact object mergers are one of the favoured models of gamma-ray bursts (GRBs). Using a binary population synthesis code we calculate the properties of the population of compact object binaries, e.g., lifetimes and velocities. We then propagate them in galactic potentials and find their distribution in relation to the host galaxies.
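
    Propagation in a galactic potential amounts to integrating the binary's centre-of-mass orbit from its birth place and kick velocity until the merger time. A sketch using a Plummer potential, a leapfrog integrator and natural units; all values are illustrative, not the paper's population-synthesis output.

    ```python
    # Sketch: propagate a kicked binary in a spherical (Plummer) potential.
    import numpy as np

    def accel(r, gm=1.0, a=0.1):
        """Plummer-sphere acceleration (natural units)."""
        return -gm * r / (r @ r + a**2) ** 1.5

    def propagate(r, v, dt=1e-3, steps=20000):
        """Leapfrog (kick-drift-kick) orbit integration."""
        acc = accel(r)
        for _ in range(steps):
            v_half = v + 0.5 * dt * acc
            r = r + dt * v_half
            acc = accel(r)
            v = v_half + 0.5 * dt * acc
        return r, v

    # Binary born at one scale radius with a kick added to circular motion.
    r0 = np.array([1.0, 0.0, 0.0])
    v0 = np.array([0.0, 0.9, 0.3])     # tangential + vertical kick (illustrative)
    r_final, _ = propagate(r0, v0)
    print("position at merger time:", r_final.round(3))
    ```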

  20. Data Analysis, Modeling, and Ensemble Forecasting to Support NOWCAST and Forecast Activities at the Fallon Naval Station

    DTIC Science & Technology

    2010-09-30

    ...and climate forecasting and use of satellite data assimilation for model evaluation. He is a task leader on another NSF EPSCoR project for the... ...observations including remotely sensed data. OBJECTIVES: The main objectives of the study are: 1) to further develop, test, and continue twice-daily...

  1. Snow on Sea Ice Workshop - An Icy Meeting of the Minds: Modelers and Measurers

    DTIC Science & Technology

    2015-09-30

    ...workshop was to promote more seamless and better integration between measurements and modeling of snow on sea ice, thereby improving our predictive... capabilities for sea ice. OBJECTIVES: The key objective was to improve the ability of modelers and measurers to work together closely. To that end, we...

  2. COAGULATION CALCULATIONS OF ICY PLANET FORMATION AT 15-150 AU: A CORRELATION BETWEEN THE MAXIMUM RADIUS AND THE SLOPE OF THE SIZE DISTRIBUTION FOR TRANS-NEPTUNIAN OBJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kenyon, Scott J.; Bromley, Benjamin C., E-mail: skenyon@cfa.harvard.edu, E-mail: bromley@physics.utah.edu

    2012-03-15

    We investigate whether coagulation models of planet formation can explain the observed size distributions of trans-Neptunian objects (TNOs). Analyzing published and new calculations, we demonstrate robust relations between the size of the largest object and the slope of the size distribution for sizes 0.1 km and larger. These relations yield clear, testable predictions for TNOs and other icy objects throughout the solar system. Applying our results to existing observations, we show that a broad range of initial disk masses, planetesimal sizes, and fragmentation parameters can explain the data. Adding dynamical constraints on the initial semimajor axis of 'hot' Kuiper Belt objects along with probable TNO formation times of 10-700 Myr restricts the viable models to those with a massive disk composed of relatively small (1-10 km) planetesimals.

  3. Developing the snow component of a distributed hydrological model: a step-wise approach based on multi-objective analysis

    NASA Astrophysics Data System (ADS)

    Dunn, S. M.; Colohan, R. J. E.

    1999-09-01

    A snow component has been developed for the distributed hydrological model, DIY, using an approach that sequentially evaluates the behaviour of different functions as they are implemented in the model. The evaluation is performed using multi-objective functions to ensure that the internal structure of the model is correct. The development of the model, using a sub-catchment in the Cairngorm Mountains in Scotland, demonstrated that the degree-day model can be enhanced for hydroclimatic conditions typical of those found in Scotland, without increasing meteorological data requirements. An important element of the snow model is a function to account for wind re-distribution. This causes large accumulations of snow in small pockets, which are shown to be important in sustaining baseflows in the rivers during the late spring and early summer, long after the snowpack has melted from the bulk of the catchment. The importance of the wind function would not have been identified using a single objective function of total streamflow to evaluate the model behaviour.
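
    A sketch of the two ingredients named above, a degree-day melt step and a crude wind-redistribution step that moves snow from exposed cells into sheltered pockets. The factor values and the exposure threshold are assumptions for illustration, not DIY's calibrated functions.

    ```python
    # Sketch: degree-day melt plus simple wind redistribution of snow.
    import numpy as np

    def degree_day_melt(swe, temp, ddf=3.0, t0=0.0):
        """Melt (mm/day) = ddf * max(T - T0, 0), limited by available snow."""
        melt = np.clip(ddf * np.maximum(temp - t0, 0.0), 0.0, swe)
        return swe - melt, melt

    def wind_redistribute(swe, exposure, frac=0.2):
        """Move a fraction of snow from exposed cells into sheltered ones."""
        exposed = exposure > 0.5
        moved = frac * swe[exposed].sum()
        swe = swe.copy()
        swe[exposed] *= (1.0 - frac)
        swe[~exposed] += moved / max((~exposed).sum(), 1)
        return swe

    swe = np.array([120.0, 80.0, 200.0, 50.0])    # snow water equivalent, mm
    exposure = np.array([0.9, 0.3, 0.8, 0.1])     # 1 = ridge, 0 = hollow
    swe = wind_redistribute(swe, exposure)
    swe, melt = degree_day_melt(swe, temp=4.5)
    print("SWE after melt:", swe.round(1), "melt:", melt.round(1))
    ```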

  4. Numerical modeling of higher order magnetic moments in UXO discrimination

    USGS Publications Warehouse

    Sanchez, V.; Yaoguo, L.; Nabighian, M.N.; Wright, D.L.

    2008-01-01

    The surface magnetic anomaly observed in unexploded ordnance (UXO) clearance is mainly dipolar, and consequently, the dipole is the only magnetic moment regularly recovered in UXO discrimination. The dipole moment contains information about the intensity of magnetization but lacks information about the shape of the target. In contrast, higher order moments, such as quadrupole and octupole, encode asymmetry properties of the magnetization distribution within the buried targets. In order to improve our understanding of magnetization distribution within UXO and non-UXO objects and to show its potential utility in UXO clearance, we present a numerical modeling study of UXO and related metallic objects. The tool for the modeling is a nonlinear integral equation describing magnetization within isolated compact objects of high susceptibility. A solution for magnetization distribution then allows us to compute the magnetic multipole moments of the object, analyze their relationships, and provide a depiction of the anomaly produced by different moments within the object. Our modeling results show the presence of significant higher order moments for more asymmetric objects, and the fields of these higher order moments are well above the noise level of magnetic gradient data. The contribution from higher order moments may provide a practical tool for improved UXO discrimination. © 2008 IEEE.

  5. Performance Analysis of Distributed Object-Oriented Applications

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1998-01-01

    The purpose of this research was to evaluate the efficiency of a distributed simulation architecture in which individual modules are made self-scheduling through a message-based communication system used for requesting input data from the module that is the source of that data. To make the architecture as general as possible, the message-based communication architecture was implemented using standard remote object architectures (Common Object Request Broker Architecture (CORBA) and/or Distributed Component Object Model (DCOM)). A series of experiments was run in which different systems were distributed in a variety of ways across multiple computers and the performance evaluated. The experiments were duplicated in each case so that the overhead due to message communication and data transmission could be separated from the time required to actually perform the computational update of a module each iteration. The software used to distribute the modules across multiple computers was developed in the first year of the current grant and was modified considerably to add a message-based communication scheme supported by the DCOM distributed object architecture. The resulting performance was analyzed using a model created during the first year of this grant which predicts the overhead due to CORBA and DCOM remote procedure calls and includes the effects of data passed to and from the remote objects. A report covering the distributed simulation software and the results of the performance experiments has been submitted separately. The above report also discusses possible future work to apply the methodology to dynamically distribute the simulation modules so as to minimize overall computation time.

  6. GOSSIP: SED fitting code

    NASA Astrophysics Data System (ADS)

    Franzetti, Paolo; Scodeggio, Marco

    2012-10-01

    GOSSIP fits the electromagnetic emission of an object (the SED, Spectral Energy Distribution) against synthetic models to find the simulated one that best reproduces the observed data. It builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and, optionally, a spectrum; it then performs a chi-square minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters, such as the star formation history, absolute magnitudes and stellar mass, together with their probability distribution functions.
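
    The core fitting step can be sketched as a chi-square minimization over a grid of synthetic SEDs, with an analytic best-fit scale factor per model. The fluxes and model grids below are invented placeholders, not GOSSIP's actual interface.

    ```python
    # Sketch: chi-square SED fitting against a set of synthetic models.
    import numpy as np

    def best_fit(obs, err, models):
        chi2 = []
        for m in models:
            # Analytic scale minimizing sum(((obs - s*m) / err)^2).
            scale = np.sum(obs * m / err**2) / np.sum(m**2 / err**2)
            chi2.append(np.sum(((obs - scale * m) / err) ** 2))
        best = int(np.argmin(chi2))
        return best, chi2[best]

    obs = np.array([1.0, 1.8, 2.5, 2.2])           # observed fluxes in 4 bands
    err = np.array([0.1, 0.15, 0.2, 0.2])
    models = [np.array([1.0, 1.5, 2.0, 1.8]),      # synthetic SEDs (placeholders)
              np.array([1.0, 2.0, 2.6, 2.3])]
    idx, chi2 = best_fit(obs, err, models)
    print(f"best model: {idx}, chi2 = {chi2:.2f}")
    ```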

  7. Parallel and distributed computation for fault-tolerant object recognition

    NASA Technical Reports Server (NTRS)

    Wechsler, Harry

    1988-01-01

    The distributed associative memory (DAM) model is suggested for distributed and fault-tolerant computation as it relates to object recognition tasks. The fault tolerance is with respect to geometrical distortions (scale and rotation), noisy inputs, occlusion/overlap, and memory faults. An experimental system was developed for fault-tolerant structure recognition which shows the feasibility of such an approach. The approach is further extended to the problem of multisensory data integration and applied successfully to the recognition of colored polyhedral objects.
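
    A distributed associative memory can be sketched as a correlation-matrix memory: patterns are stored as a sum of outer products, and recall from a corrupted key degrades gracefully, which is the fault-tolerance property at issue. The patterns below are random toy data, not the paper's experimental system.

    ```python
    # Sketch: correlation-matrix (Hebbian) associative memory with noisy recall.
    import numpy as np

    rng = np.random.default_rng(2)
    keys = rng.choice([-1.0, 1.0], size=(3, 64))    # bipolar key patterns
    vals = rng.choice([-1.0, 1.0], size=(3, 16))    # associated output patterns

    W = sum(np.outer(v, k) for k, v in zip(keys, vals))   # Hebbian storage

    noisy = keys[0] * rng.choice([1, 1, 1, -1], size=64)  # flip ~25% of the bits
    recall = np.sign(W @ noisy)
    print("bits correct:", int(np.sum(recall == vals[0])), "of 16")
    ```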

  8. Extended Relation Metadata for SCORM-Based Learning Content Management Systems

    ERIC Educational Resources Information Center

    Lu, Eric Jui-Lin; Horng, Gwoboa; Yu, Chia-Ssu; Chou, Ling-Ying

    2010-01-01

    To increase the interoperability and reusability of learning objects, Advanced Distributed Learning Initiative developed a model called Content Aggregation Model (CAM) to describe learning objects and express relationships between learning objects. However, the suggested relations defined in the CAM can only describe structure-oriented…

  9. a Heuristic Approach for Multi Objective Distribution Feeder Reconfiguration: Using Fuzzy Sets in Normalization of Objective Functions

    NASA Astrophysics Data System (ADS)

    Milani, Armin Ebrahimi; Haghifam, Mahmood Reza

    2008-10-01

    Reconfiguration is an operation process used for optimization with specific objectives by changing the status of switches in a distribution network. In this paper each objective is normalized, with inspiration from fuzzy sets, to make the optimization more flexible, and the objectives are formulated as a single multi-objective function. The genetic algorithm is used for solving the suggested model, in which there is no risk from non-linear objective functions and constraints. The effectiveness of the proposed method is demonstrated through examples.
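
    The fuzzy normalization idea can be sketched as mapping each raw objective value onto a [0, 1] membership using per-objective best/worst bounds, then aggregating the memberships into one fitness value. The bounds, values and min-aggregation below are illustrative assumptions, not the paper's exact formulation.

    ```python
    # Sketch: fuzzy normalization of incommensurable feeder objectives.
    def membership(value, best, worst):
        """Linear fuzzy membership: 1 at the best value, 0 at the worst."""
        if best == worst:
            return 1.0
        mu = (worst - value) / (worst - best)
        return max(0.0, min(1.0, mu))

    # A candidate switch configuration, scored on losses and voltage deviation.
    losses_mu = membership(140.0, best=100.0, worst=250.0)   # losses in kW
    voltage_mu = membership(0.03, best=0.0, worst=0.10)      # p.u. deviation
    fitness = min(losses_mu, voltage_mu)    # conservative fuzzy aggregation
    print(f"memberships: {losses_mu:.2f}, {voltage_mu:.2f}; "
          f"fitness = {fitness:.2f}")
    ```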

  10. A knowledge base architecture for distributed knowledge agents

    NASA Technical Reports Server (NTRS)

    Riedesel, Joel; Walls, Bryan

    1990-01-01

    A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
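
    A minimal in-process sketch of the LINDA-style tuple space the model builds on, with a blocking take(); in a real distributed setting a database or network layer would back the store, as the abstract describes.

    ```python
    # Sketch: tuple space with put() and blocking, matching take().
    import threading

    class TupleSpace:
        def __init__(self):
            self._tuples = []
            self._cond = threading.Condition()

        def put(self, tup):
            with self._cond:
                self._tuples.append(tup)
                self._cond.notify_all()

        def take(self, match):
            """Remove and return the first tuple satisfying `match`; block if none."""
            with self._cond:
                while True:
                    for t in self._tuples:
                        if match(t):
                            self._tuples.remove(t)
                            return t
                    self._cond.wait()

    space = TupleSpace()
    space.put(("task", "diagnose", "bus-7"))       # hypothetical agent task
    print(space.take(lambda t: t[0] == "task"))
    ```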

  11. [Location selection for Shenyang urban parks based on GIS and multi-objective location allocation model].

    PubMed

    Zhou, Yuan; Shi, Tie-Mao; Hu, Yuan-Man; Gao, Chang; Liu, Miao; Song, Lin-Qi

    2011-12-01

    Based on geographic information system (GIS) technology and a multi-objective location-allocation (LA) model, and considering four relatively independent objective factors (population density level, air pollution level, urban heat island effect level, and urban land use pattern), an optimized location selection for the urban parks within the Third Ring of Shenyang was conducted, and the selection results were compared with the spatial distribution of existing parks, aiming to evaluate the rationality of the spatial distribution of urban green spaces. In the location selection of urban green spaces in the study area, the air pollution factor was the most important, and, compared with a single objective factor, the weighted analysis of multiple objective factors could provide an optimized spatial location selection of new urban green spaces. The combination of GIS technology with the LA model offers a new approach for the spatial optimization of urban green spaces.

  12. Group-oriented coordination models for distributed client-server computing

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Hughes, Craig S.

    1994-01-01

    This paper describes group-oriented control models for distributed client-server interactions. These models transparently coordinate requests for services that involve multiple servers, such as queries across distributed databases. Specific capabilities include: decomposing and replicating client requests; dispatching request subtasks or copies to independent, networked servers; and combining server results into a single response for the client. The control models were implemented by combining request broker and process group technologies with an object-oriented communication middleware tool. The models are illustrated in the context of a distributed operations support application for space-based systems.

  13. The implementation of aerial object recognition algorithm based on contour descriptor in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Babayan, Pavel; Smirnov, Sergey; Strotov, Valery

    2017-10-01

    This paper describes an aerial object recognition algorithm for on-board and stationary vision systems. The suggested algorithm is intended to recognize objects of a specific kind using a set of reference objects defined by 3D models. The proposed algorithm is based on building an outer contour descriptor. The algorithm consists of two stages: learning and recognition. The learning stage is devoted to exploring the reference objects. Using the 3D models we build a database of training images by rendering each 3D model from viewpoints evenly distributed on a sphere. The sphere point distribution follows the geosphere principle. The gathered training image set is used to calculate the descriptors that will be used in the recognition stage of the algorithm. The recognition stage focuses on estimating the similarity of the captured object and the reference objects by matching an observed image descriptor against the reference object descriptors. The experimental research was performed using a set of models of aircraft of different types (airplanes, helicopters, UAVs). The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in an FPGA-based vision system was demonstrated.
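
    Viewpoints "evenly distributed on a sphere" can be generated in a few lines. The sketch below uses a Fibonacci lattice as a simple stand-in for the geosphere (subdivided icosahedron) construction mentioned in the abstract; it gives approximately even coverage, not the paper's exact point set.

    ```python
    # Sketch: approximately even viewpoints on the unit sphere for rendering.
    import numpy as np

    def sphere_viewpoints(n=100):
        i = np.arange(n)
        phi = np.pi * (3.0 - np.sqrt(5.0)) * i      # golden-angle increments
        z = 1.0 - 2.0 * (i + 0.5) / n               # even steps in z
        r = np.sqrt(1.0 - z * z)
        return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

    views = sphere_viewpoints(100)                  # unit direction vectors
    print(views.shape, np.linalg.norm(views, axis=1).round(3)[:3])
    ```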

  14. ESPC Common Model Architecture

    DTIC Science & Technology

    2014-09-30

    ...Earth System Modeling... Operational Prediction Capability (NUOPC) was established between NOAA and Navy to develop common software architecture for easy and efficient... development under a common model architecture and other software-related standards in this project. OBJECTIVES: NUOPC proposes to accelerate...

  15. Studying the Impact of Distributed Solar PV on Power Systems using Integrated Transmission and Distribution Models: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jain, Himanshu; Palmintier, Bryan S; Krad, Ibrahim

    This paper presents the results of a distributed solar PV impact assessment study that was performed using a synthetic integrated transmission (T) and distribution (D) model. The primary objective of the study was to present a new approach for distributed solar PV impact assessment in which, along with detailed models of transmission and distribution networks, consumer loads were modeled using the physics of end-use equipment, and distributed solar PV was geographically dispersed and connected to the secondary distribution networks. The highlights of the study results were (i) an increase in the Area Control Error (ACE) at high penetration levels of distributed solar PV; and (ii) differences in distribution voltage profiles and voltage regulator operations between integrated T&D and distribution-only simulations.

  16. Design of object-oriented distributed simulation classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D. (Principal Investigator)

    1995-01-01

    Distributed simulation of aircraft engines as part of a computer aided design package is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for 'Numerical Propulsion Simulation System'. NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies a need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT 'Actor' model of a concurrent object and uses 'connectors' to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.

  17. Design of Object-Oriented Distributed Simulation Classes

    NASA Technical Reports Server (NTRS)

    Schoeffler, James D.

    1995-01-01

    Distributed simulation of aircraft engines, as part of a computer aided design package, is being developed by NASA Lewis Research Center for the aircraft industry. The project is called NPSS, an acronym for "Numerical Propulsion Simulation System". NPSS is a flexible object-oriented simulation of aircraft engines requiring high computing speed. It is desirable to run the simulation on a distributed computer system with multiple processors executing portions of the simulation in parallel. The purpose of this research was to investigate object-oriented structures such that individual objects could be distributed. The set of classes used in the simulation must be designed to facilitate parallel computation. Since the portions of the simulation carried out in parallel are not independent of one another, there is the need for communication among the parallel executing processors which in turn implies a need for their synchronization. Communication and synchronization can lead to decreased throughput as parallel processors wait for data or synchronization signals from other processors. As a result of this research, the following have been accomplished. The design and implementation of a set of simulation classes which result in a distributed simulation control program have been completed. The design is based upon the MIT "Actor" model of a concurrent object and uses "connectors" to structure dynamic connections between simulation components. Connectors may be dynamically created according to the distribution of objects among machines at execution time without any programming changes. Measurements of the basic performance have been carried out with the result that communication overhead of the distributed design is swamped by the computation time of modules unless modules have very short execution times per iteration or time step. An analytical performance model based upon queuing network theory has been designed and implemented. Its application to realistic configurations has not been carried out.

  18. A support architecture for reliable distributed computing systems

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1988-01-01

    The Clouds project is well underway toward its goal of building a unified distributed operating system supporting the object model. The operating system design uses the object concept for structuring software at all levels of the system. The basic operating system has been developed, and work is in progress to build a usable system.

  19. Object-oriented biomedical system modelling--the language.

    PubMed

    Hakman, M; Groth, T

    1999-11-01

    The paper describes a new object-oriented biomedical continuous system modelling language (OOBSML). It is fully object-oriented and supports model inheritance, encapsulation, and model component instantiation and behaviour polymorphism. Besides the traditional differential and algebraic equation expressions, the language also includes formal expressions for documenting models and defining model quantity types and quantity units. It supports explicit definition of model input, output and state quantities, model components and component connections. The OOBSML model compiler produces self-contained, independent, executable model components that can be instantiated and used within other OOBSML models and/or stored within model and model component libraries. In this way complex models can be structured as multilevel, multi-component model hierarchies. Technically, the model components produced by the OOBSML compiler are executable computer code objects based on distributed object and object request broker technology. This paper includes both the language tutorial and the formal language syntax and semantic description.

  20. Advances in the spatially distributed ages-w model: parallel computation, java connection framework (JCF) integration, and streamflow/nitrogen dynamics assessment

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic and water quality (H/WQ) simulation components under the Java Connection Framework (JCF) and the Object Modeling System (OMS) environmental modeling framework. AgES-W is implicitly scala...

  1. The Spatially-Distributed Agroecosystem-Watershed (Ages-W) Hydrologic/Water Quality (H/WQ) model for assessment of conservation effects

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality (H/WQ) simulation components under the Object Modeling System (OMS3) environmental modeling framework. AgES-W has recently been enhanced with the addition of nitrogen (N) a...

  2. Mapping the current and potential distribution of red spruce in Virginia: implications for the restoration of degraded high elevation habitat

    Treesearch

    Heather Griscom; Helmut Kraenzle; Zachary. Bortolot

    2010-01-01

    The objective of our project is to create a habitat suitability model to predict potential and future red spruce forest distributions. This model will be used to better understand the influence of climate change on red spruce distribution and to help guide forest restoration efforts.

  3. A risk-based multi-objective model for optimal placement of sensors in water distribution system

    NASA Astrophysics Data System (ADS)

    Naserizade, Sareh S.; Nikoo, Mohammad Reza; Montaseri, Hossein

    2018-02-01

    In this study, a new stochastic model based on Conditional Value at Risk (CVaR) and multi-objective optimization methods is developed for optimal placement of sensors in a water distribution system (WDS). This model minimizes the risk caused by simultaneous multi-point contamination injection in the WDS using the CVaR approach. The CVaR considers uncertainties of contamination injection in the form of a probability distribution function and captures low-probability extreme events, in which extreme losses occur at the tail of the loss distribution. A four-objective optimization model based on the NSGA-II algorithm is developed to minimize the losses of contamination injection (through the CVaR of affected population and detection time) and also to minimize the two other main criteria of optimal sensor placement, the probability of undetected events and cost. Finally, to determine the best solution, the Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE), a Multi Criteria Decision Making (MCDM) approach, is utilized to rank the alternatives on the trade-off curve among the objective functions. A sensitivity analysis is also performed to investigate the importance of each criterion on the PROMETHEE results under three relative weighting scenarios. The effectiveness of the proposed methodology is examined by applying it to the Lamerd WDS in the southwestern part of Iran. PROMETHEE suggests six sensors whose distribution approximately covers all regions of the WDS. The optimal values of the CVaR of affected population and detection time, as well as the probability of undetected events, for the best solution are 17,055 persons, 31 minutes and 0.045%, respectively. The results obtained for the Lamerd WDS show the applicability of the CVaR-based multi-objective simulation-optimization model for incorporating the main uncertainties of contamination injection in order to evaluate extreme losses in a WDS.
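
    CVaR itself is straightforward to compute from a sample of losses: it is the mean loss in the worst (1 - alpha) tail, beyond the Value at Risk. A sketch with synthetic affected-population losses, not the study's simulation output:

    ```python
    # Sketch: Conditional Value at Risk of a loss sample.
    import numpy as np

    def cvar(losses, alpha=0.95):
        var = np.quantile(losses, alpha)      # Value at Risk at level alpha
        tail = losses[losses >= var]          # the worst (1 - alpha) of cases
        return var, tail.mean()

    rng = np.random.default_rng(3)
    losses = rng.lognormal(mean=8.0, sigma=1.0, size=100000)  # affected persons
    var, cv = cvar(losses, alpha=0.95)
    print(f"VaR95 = {var:.0f}, CVaR95 = {cv:.0f} persons")
    ```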

  4. IR characteristic simulation of city scenes based on radiosity model

    NASA Astrophysics Data System (ADS)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between objects. A method based on a radiosity model describes these complex effects; it has been developed to enable an accurate simulation of the radiance distribution of city scenes. First, the physical processes affecting the IR characteristics of city scenes are described. Second, heat balance equations are formed by combining the atmospheric conditions, shadow maps and the geometry of the scene. Finally, a finite difference method is used to calculate the kinetic temperature of object surfaces. A radiosity model is introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of objects in the infrared range, we can obtain the IR characteristics of the scene. Real infrared images and model predictions are shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes; it effectively reproduces infrared shadow effects and the radiative interactions between objects in city scenes.
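
    The radiosity balance can be written B = E + rho (F B) and solved iteratively for the patch radiosities. A toy sketch with invented emittances, reflectances and form factors, just to show the iteration converging:

    ```python
    # Sketch: Jacobi iteration for the radiosity equation B = E + rho * (F @ B).
    import numpy as np

    E = np.array([500.0, 350.0, 300.0])   # self-emitted exitance per patch (W/m^2)
    rho = np.array([0.3, 0.5, 0.4])       # surface reflectances
    F = np.array([[0.0, 0.4, 0.2],        # form factors F[i, j]: patch i sees j
                  [0.3, 0.0, 0.3],
                  [0.2, 0.4, 0.0]])

    B = E.copy()
    for _ in range(100):                  # converges: rho * row sums of F < 1
        B = E + rho * (F @ B)
    print("radiosities:", B.round(1))
    ```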

  5. Space Object Classification and Characterization Via Multiple Model Adaptive Estimation

    DTIC Science & Technology

    2014-07-14

    ...(BRDF), which models the light distribution scattered from the surface due to the incident light. The BRDF at any point on the surface is a function of two... [Fig. 2: Reflection Geometry and Space Object Shape Model] ...of the BRDF is ρdiff(i... Space Object Classification and Characterization via Multiple Model Adaptive Estimation. Richard Linares, Director's Postdoctoral Fellow, Space Science...

  6. A POSSIBLE DIVOT IN THE SIZE DISTRIBUTION OF THE KUIPER BELT'S SCATTERING OBJECTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shankman, C.; Gladman, B. J.; Kaib, N.

    Via joint analysis of a calibrated telescopic survey, which found scattering Kuiper Belt objects, and models of their expected orbital distribution, we explore the scattering-object (SO) size distribution. Although for D > 100 km the number of objects quickly rises as diameters decrease, we find a relative lack of smaller objects, ruling out a single power law at greater than 99% confidence. After studying traditional 'knees' in the size distribution, we explore other formulations and find that, surprisingly, our analysis is consistent with a very sudden decrease (a divot) in the number distribution as diameters decrease below 100 km, which then rises again as a power law. Motivated by other dynamically hot populations and the Centaurs, we argue for a divot size distribution where the number of smaller objects rises again as expected via collisional equilibrium. Extrapolation yields enough kilometer-scale SOs to supply the nearby Jupiter-family comets. Our interpretation is that this divot feature is a preserved relic of the size distribution made by planetesimal formation, now 'frozen in' to portions of the Kuiper Belt sharing a 'hot' orbital inclination distribution, explaining several puzzles in Kuiper Belt science. Additionally, we show that to match today's SO inclination distribution, the supply source that was scattered outward must have already been vertically heated to of order 10°.

  7. A Verification System for Distributed Objects with Asynchronous Method Calls

    NASA Astrophysics Data System (ADS)

    Ahrendt, Wolfgang; Dylla, Maximilian

    We present a verification system for Creol, an object-oriented modeling language for concurrent distributed applications. The system is an instance of KeY, a framework for object-oriented software verification, which has so far been applied foremost to sequential Java. Building on KeY's characteristic concepts, such as dynamic logic, sequent calculus, explicit substitutions, and the taclet rule language, the system presented in this paper addresses functional correctness of Creol models featuring local cooperative thread parallelism and global communication via asynchronous method calls. The calculus heavily operates on communication histories which describe the interfaces of Creol units. Two example scenarios demonstrate the usage of the system.

  8. Common Object Library Description

    DTIC Science & Technology

    2012-08-01

    For Building Information Modeling (BIM) technology to be successful, it must be consistently applied across many projects, by many teams. The National Building Information... ...BIM standards and for future research projects. SUBJECT TERMS: building information modeling (BIM), object...

  9. Surrogate Based Uni/Multi-Objective Optimization and Distribution Estimation Methods

    NASA Astrophysics Data System (ADS)

    Gong, W.; Duan, Q.; Huo, X.

    2017-12-01

    Parameter calibration has been demonstrated as an effective way to improve the performance of dynamic models, such as hydrological models, land surface models, and weather and climate models. Traditional optimization algorithms usually cost a huge number of model evaluations, making dynamic model calibration very difficult, or even computationally prohibitive. With the help of a series of recently developed adaptive surrogate-modelling based optimization methods (the uni-objective optimization method ASMO, the multi-objective optimization method MO-ASMO, and the probability distribution estimation method ASMO-PODE), the number of model evaluations can be significantly reduced to several hundred, making it possible to calibrate very expensive dynamic models, such as regional high-resolution land surface models, weather forecast models such as WRF, and intermediate-complexity earth system models such as LOVECLIM. This presentation provides a brief introduction to the common framework of the adaptive surrogate-based optimization algorithms ASMO, MO-ASMO and ASMO-PODE, a case study of Common Land Model (CoLM) calibration in the Heihe river basin in Northwest China, and an outlook on potential applications of surrogate-based optimization methods.
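
    The common framework referred to can be sketched as a loop: fit a cheap surrogate to the points evaluated so far, search the surrogate, run the expensive model at the suggested point, and refit. The sketch below stands in an RBF interpolant for the surrogate and a toy function for the expensive model; it illustrates the loop, not the ASMO code itself.

    ```python
    # Sketch: adaptive surrogate-based optimization of an "expensive" model.
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    def expensive_model(x):                      # placeholder for a real model run
        return float((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)

    rng = np.random.default_rng(4)
    X = rng.uniform(0, 1, size=(10, 2))          # initial design points
    y = np.array([expensive_model(x) for x in X])

    for _ in range(15):                          # adaptive refinement iterations
        surrogate = RBFInterpolator(X, y)        # cheap stand-in model
        cand = rng.uniform(0, 1, size=(2000, 2)) # cheap search on the surrogate
        x_new = cand[np.argmin(surrogate(cand))]
        X = np.vstack([X, x_new])                # evaluate the true model once
        y = np.append(y, expensive_model(x_new))

    print("best point:", X[np.argmin(y)].round(3), "value:", y.min().round(4))
    ```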

  10. DAVE: A plug and play model for distributed multimedia application development

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mines, R.F.; Friesen, J.A.; Yang, C.L.

    1994-07-01

    This paper presents a model being used for the development of distributed multimedia applications. The Distributed Audio Video Environment (DAVE) was designed to support the development of a wide range of distributed applications. The implementation of this model is described. DAVE is unique in that it combines a simple ''plug and play'' programming interface, supports both centralized and fully distributed applications, provides device and media extensibility, promotes object reusability, and supports interoperability and network independence. This model enables application developers to easily develop distributed multimedia applications and create reusable multimedia toolkits. DAVE was designed for developing applications such as video conferencing, media archival, remote process control, and distance learning.

  11. Sediment Acoustics: Wideband Model, Reflection Loss and Ambient Noise Inversion

    DTIC Science & Technology

    2010-01-01

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Sediment acoustics: wideband model, reflection loss and ... Physically sound models of acoustic interaction with the ocean floor, including penetration, reflection and scattering, in support of MCM and ASW needs ... OBJECTIVES: (1) Consolidation of the BIC08 model of sediment acoustics, its verification in a variety of sediment types, parameter reduction and

  12. Analysis of the In-Water and Sky Radiance Distribution Data Acquired During the RaDyO Project

    DTIC Science & Technology

    2011-09-30

    ... radiative transfer to model the BRDF of particulate surfaces. OBJECTIVES: The major objective of this research is to understand the downwelling spectral ... in the water, was also used by the two major modeling groups in RaDyO to successfully validate their radiative transfer models. This work is ... image and radiative transfer models used in the ocean. My near-term ocean optics objectives have been: 1) to improve the measurement capability of

  13. Evaluation of Rock Surface Characterization by Means of Temperature Distribution

    NASA Astrophysics Data System (ADS)

    Seker, D. Z.; Incekara, A. H.; Acar, A.; Kaya, S.; Bayram, B.; Sivri, N.

    2017-12-01

    Rocks occur in many different types formed over many years. Close-range photogrammetry is a technique widely used and often preferred over other conventional methods. In this method, overlapping photographs are the basic data source for the point cloud, which is in turn the main data source for a 3D model and offers analysts the possibility of automation. Due to the irregular and complex structures of rocks, representing their surfaces with a large number of points is more effective. Color differences on rock surfaces, whether caused by weathering or naturally occurring, make it possible to produce a sufficient number of points from the photographs. Objects such as small trees, shrubs and weeds on and around the surface also contribute to this. These differences and properties are important for the efficient operation of the pixel-matching algorithms that generate an adequate point cloud from photographs. In this study, the possibility of using the temperature distribution to interpret the roughness of a rock surface, one of the parameters representing the surface, was investigated. For the study, a small rock, about 3 m x 1 m in size, located at the ITU Ayazaga Campus was selected as the study object. Two different methods were used. The first is the production of a choropleth map by interpolation, using temperature values at control points marked on the object which were also used in the 3D model. The 3D object model was created with the help of terrestrial photographs and 12 control points marked on the object and coordinated. Temperature values at the control points were measured with an infrared thermometer and used as the basic data source for creating the choropleth map by interpolation. Temperature values range from 32 to 37.2 degrees. In the second method, a 3D object model was produced by means of terrestrial thermal photographs. For this purpose, several terrestrial photographs were taken with a thermal camera and a 3D object model showing the temperature distribution was created. The temperature distributions in the two applications are almost identical in position. The areas on the rock surface where roughness values are higher than the surroundings can be clearly identified. When the temperature distributions produced by both methods are evaluated, it is observed that as the roughness on the surface increases, the temperature increases.
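
    The first method reduces to scattered-data interpolation of pointwise temperatures. A small Python sketch using inverse-distance weighting over hypothetical control points (the coordinates and values below are invented; the paper does not specify its interpolator beyond 'interpolation'):

        import numpy as np

        # Hypothetical control points: (x, y) in metres, temperature in degrees C.
        pts = np.array([[0.2, 0.1], [1.5, 0.4], [2.8, 0.2], [1.0, 0.8], [2.2, 0.9]])
        temp = np.array([32.0, 34.5, 33.1, 37.2, 35.6])

        def idw(grid_xy, pts, vals, power=2.0, eps=1e-12):
            """Inverse-distance-weighted estimate at each grid point."""
            d = np.linalg.norm(grid_xy[:, None, :] - pts[None, :, :], axis=2)
            w = 1.0 / (d ** power + eps)
            return (w * vals).sum(axis=1) / w.sum(axis=1)

        # Raster covering a 3 m x 1 m rock face.
        gx, gy = np.meshgrid(np.linspace(0, 3, 61), np.linspace(0, 1, 21))
        grid = np.column_stack([gx.ravel(), gy.ravel()])
        tmap = idw(grid, pts, temp).reshape(gx.shape)   # choropleth raster
        print(tmap.min(), tmap.max())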

  14. Guest Editors' Introduction

    NASA Astrophysics Data System (ADS)

    Guerraoui, Rachid; Vinoski, Steve

    1997-09-01

    The organization of a distributed system can have a tremendous impact on its capabilities, its performance, and its ability to evolve to meet changing requirements. For example, the client-server organization model has proven to be adequate for organizing a distributed system as a number of distributed servers that offer various functions to client processes across the network. However, it lacks peer-to-peer capabilities, and experience with the model has been predominantly in the context of local networks. To achieve peer-to-peer cooperation in a more global context, systems issues of scale, heterogeneity, configuration management, accounting and sharing are crucial, and the complexity of migrating from locally distributed to more global systems demands new tools and techniques. An emphasis on interfaces and modules leads to the modelling of a complex distributed system as a collection of interacting objects that communicate with each other only using requests sent to well-defined interfaces. Although object granularity typically varies at different levels of a system architecture, the same object abstraction can be applied to various levels of a computing architecture. Since 1989, the Object Management Group (OMG), an international software consortium, has been defining an architecture for distributed object systems called the Object Management Architecture (OMA). At the core of the OMA is a `software bus' called an Object Request Broker (ORB), which is specified by the OMG Common Object Request Broker Architecture (CORBA) specification. The OMA distributed object model fits the structure of heterogeneous distributed applications, and is applied in all layers of the OMA. For example, each of the OMG Object Services, such as the OMG Naming Service, is structured as a set of distributed objects that communicate using the ORB. Similarly, higher-level OMA components such as Common Facilities and Domain Interfaces are also organized as distributed objects that can be layered over both Object Services and the ORB. The OMG creates specifications, not code, but the interfaces it standardizes are always derived from demonstrated technology submitted by member companies. The specified interfaces are written in a neutral Interface Definition Language (IDL) that defines contractual interfaces with potential clients. Interfaces written in IDL can be translated to a number of programming languages via OMG standard language mappings so that they can be used to develop components. The resulting components can transparently communicate with other components written in different languages and running on different operating systems and machine types. The ORB is responsible for providing the illusion of `virtual homogeneity' regardless of the programming languages, tools, operating systems and networks used to realize and support these components. With the adoption of the CORBA 2.0 specification in 1995, these components are able to interoperate across multi-vendor CORBA-based products. More than 700 member companies have joined the OMG, including Hewlett-Packard, Digital, Siemens, IONA Technologies, Netscape, Sun Microsystems, Microsoft and IBM, which makes it the largest standards body in existence. These companies continue to work together within the OMG to refine and enhance the OMA and its components.
This special issue of Distributed Systems Engineering publishes five papers that were originally presented at the `Distributed Object-Based Platforms' track of the 30th Hawaii International Conference on System Sciences (HICSS), which was held in Wailea on Maui on 6-10 January 1997. The papers, which were selected based on their quality and the range of topics they cover, address different aspects of CORBA, including advanced aspects such as fault tolerance and transactions. These papers discuss the use of CORBA and evaluate CORBA-based development for different types of distributed object systems and architectures. The first paper, by S Rahkila and S Stenberg, discusses the application of CORBA to telecommunication management networks. In the second paper, P Narasimhan, L E Moser and P M Melliar-Smith present a fault-tolerant extension of an ORB. The third paper, by J Liang, S Sédillot and B Traverson, provides an overview of the CORBA Transaction Service and its integration with the ISO Distributed Transaction Processing protocol. In the fourth paper, D Sherer, T Murer and A Würtz discuss the evolution of a cooperative software engineering infrastructure to a CORBA-based framework. The fifth paper, by R Fatoohi, evaluates the communication performance of a commercially-available Object Request Broker (Orbix from IONA Technologies) on several networks, and compares the performance with that of more traditional communication primitives (e.g., BSD UNIX sockets and PVM). We wish to thank both the referees and the authors of these papers, as their cooperation was fundamental in ensuring timely publication.

  15. Coupling Behavior and Vertical Distribution of Pteropods in Coastal Waters Using Data from the Video Plankton Recorder

    DTIC Science & Technology

    1997-09-30

    COUPLING BEHAVIOR AND VERTICAL DISTRIBUTION OF PTEROPODS IN COASTAL WATERS USING DATA FROM THE VIDEO PLANKTON RECORDER. Scott M. Gallager, Woods Hole ... OBJECTIVES: The general hypothesis being tested is that the vertical distribution of the pteropod Limacina retroversa is predictable as a function of light ... the plankton, to a dynamic description of its instantaneous swimming behavior. 3) To couple objectives 1 and 2 through numerical modeling of pteropod

  16. Coupling Behavior and Vertical Distribution of Pteropods in Coastal Waters using Data from the Video Plankton Recorder

    DTIC Science & Technology

    1999-09-30

    Coupling Behavior and Vertical Distribution of Pteropods in Coastal Waters using Data from the Video Plankton Recorder. Scott M. Gallager, Woods Hole ... gradients. OBJECTIVES: My objective in this project is to test the hypothesis that the vertical distribution of the pteropod Limacina retroversa ... the mini-VPR are being used to infer behavior of individual pteropods. Third, a random walk turbulence model with behavioral feedback is providing

  17. Relative importance of magnetic moments in UXO clearance applications

    USGS Publications Warehouse

    Sanchez, V.; Li, Y.; Nabighian, M.; Wright, D.

    2006-01-01

    Surface magnetic anomaly observed in UXO clearance is mainly dipolar and, as a result, the dipole is the only moment used regularly in UXO applications. The dipole moment contains intensity-of-magnetization information but lacks shape information. Unlike the dipole, higher-order moments, such as the quadrupole and octupole, encode asymmetry properties of the magnetization distribution within buried targets. In order to improve our understanding of the magnetization distribution within UXO and non-UXO objects and its potential utility in UXO clearance, we present results of a 3D numerical modeling study for highly susceptible metallic objects. The basis for modeling is the solution of a nonlinear integral equation describing magnetization within isolated objects, allowing us to compute magnetic moments of the object, analyze their relationships, and provide a depiction of the surface anomaly produced by the different moments within the object. Our modeling results show significant high-order moments for more asymmetric objects situated at typical UXO burial depths, and suggest that the increased relative contribution to magnetic gradient data from these higher-order moments may provide a practical tool for improved UXO discrimination. © 2005 Society of Exploration Geophysicists.
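
    For orientation, the leading term here is the dipole field, and the reason higher-order moments matter mainly at shallow burial depths is their faster fall-off with distance. In standard notation (a textbook expression, not taken from the paper):

        \mathbf{B}_{\mathrm{dip}}(\mathbf{r}) = \frac{\mu_0}{4\pi}\,
          \frac{3\hat{\mathbf{r}}\,(\mathbf{m}\cdot\hat{\mathbf{r}}) - \mathbf{m}}{r^{3}},
        \qquad
        B_{\mathrm{dip}} \sim r^{-3}, \quad
        B_{\mathrm{quad}} \sim r^{-4}, \quad
        B_{\mathrm{oct}} \sim r^{-5}.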

  18. [Optimal solution and analysis of muscular force during standing balance].

    PubMed

    Wang, Hongrui; Zheng, Hui; Liu, Kun

    2015-02-01

    The present study was aimed at the optimal solution of the distribution of the main muscular forces in the lower extremity during standing balance in humans. The musculoskeletal system of the lower extremity was simplified to a physical model with 3 joints and 9 muscles. On the basis of this model, an optimization model was built to solve the problem of redundant muscle forces. The particle swarm optimization (PSO) algorithm was used to solve the single-objective and multi-objective problems, respectively. The numerical results indicated that multi-objective optimization obtains a more reasonable distribution and variation of the 9 muscular forces. Finally, the coordination of each muscle group while maintaining standing balance under passive movement was qualitatively analyzed using the simulation results obtained.
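
    A single-objective PSO of the kind used here fits in a few lines. The sketch below minimizes a stand-in effort cost (sum of squared muscle forces) under a joint-torque equality constraint handled by a quadratic penalty; the moment arms and target torque are invented for illustration and are not the paper's 3-joint, 9-muscle model:

        import numpy as np

        rng = np.random.default_rng(1)
        r = np.array([0.04, 0.03, 0.05, 0.02])   # hypothetical moment arms (m)
        tau = 30.0                               # hypothetical joint torque (N*m)

        def cost(F):                             # effort + penalty for torque balance
            return (F ** 2).sum(axis=1) + 1e3 * (F @ r - tau) ** 2

        n, dim, iters = 40, 4, 300
        x = rng.uniform(0, 1500, (n, dim)); v = np.zeros_like(x)
        pbest, pval = x.copy(), cost(x)
        gbest = pbest[pval.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n, dim))
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, 0.0, None)        # muscle forces are non-negative
            val = cost(x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            gbest = pbest[pval.argmin()].copy()

        print("forces:", gbest.round(1), " torque:", float(gbest @ r))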

  19. Incorporating location, routing, and inventory decisions in a bi-objective supply chain design problem with risk-pooling

    NASA Astrophysics Data System (ADS)

    Tavakkoli-Moghaddam, Reza; Forouzanfar, Fateme; Ebrahimnejad, Sadoullah

    2013-07-01

    This paper considers a single-sourcing network design problem for a three-level supply chain. For the first time, a novel mathematical model is presented that simultaneously considers risk-pooling, inventory at distribution centers (DCs) under demand uncertainty, the existence of several alternatives for transporting the product between facilities, and routing of vehicles from distribution centers to customers in a stochastic supply chain system. This problem is formulated as a bi-objective stochastic mixed-integer nonlinear programming model. The aim of this model is to determine the number of located distribution centers, their locations and capacity levels, and the allocation of customers to distribution centers and of distribution centers to suppliers. It also determines the inventory control decisions on the amount of ordered products and the amount of safety stock at each opened DC, and selects a type of vehicle for transportation. Moreover, it determines routing decisions, such as the vehicles' routes starting from an opened distribution center to serve its allocated customers and returning to that distribution center. All are done in a way that minimizes the total system cost and the total transportation time. The Lingo software is used to solve the presented model. The computational results are illustrated in this paper.

  20. Evaluation of Low-Voltage Distribution Network Index Based on Improved Principal Component Analysis

    NASA Astrophysics Data System (ADS)

    Fan, Hanlu; Gao, Suzhou; Fan, Wenjie; Zhong, Yinfeng; Zhu, Lei

    2018-01-01

    In order to evaluate the development level of the low-voltage distribution network objectively and scientifically, a hierarchical analysis method is utilized to construct the evaluation index model of the low-voltage distribution network. Based on principal component analysis and the characteristic logarithmic distribution of the index data, a logarithmic centralization method is adopted to improve the principal component analysis algorithm. The algorithm decorrelates and reduces the dimensions of the evaluation model, and the comprehensive score has a better degree of dispersion. A clustering method is adopted to analyse the comprehensive scores because the comprehensive scores of the courts are concentrated. The stratified evaluation of the courts is thereby realized. An example is given to verify the objectivity and scientificity of the evaluation method.
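
    The described improvement amounts to log-transforming the index data before centering and running ordinary PCA. A short Python sketch of that pipeline on synthetic data (the paper's actual index set and weighting are not reproduced here):

        import numpy as np

        rng = np.random.default_rng(2)
        X = rng.lognormal(mean=1.0, sigma=0.6, size=(200, 6))  # log-distributed indices

        Xc = np.log(X)                       # logarithmic transform ...
        Xc -= Xc.mean(axis=0)                # ... then centralization
        U, s, Vt = np.linalg.svd(Xc, full_matrices=False)      # PCA via SVD
        var_ratio = s ** 2 / (s ** 2).sum()
        k = int(np.searchsorted(np.cumsum(var_ratio), 0.90)) + 1  # keep ~90% variance
        scores = Xc @ Vt[:k].T               # decorrelated, reduced scores
        composite = scores @ var_ratio[:k]   # variance-weighted comprehensive score
        print(k, composite[:5].round(3))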

  1. An object-oriented framework for distributed hydrologic and geomorphic modeling using triangulated irregular networks

    NASA Astrophysics Data System (ADS)

    Tucker, Gregory E.; Lancaster, Stephen T.; Gasparini, Nicole M.; Bras, Rafael L.; Rybarczyk, Scott M.

    2001-10-01

    We describe a new set of data structures and algorithms for dynamic terrain modeling using a triangulated irregular network (TIN). The framework provides an efficient method for storing, accessing, and updating a Delaunay triangulation and its associated Voronoi diagram. The basic data structure consists of three interconnected data objects: triangles, nodes, and directed edges. Encapsulating each of these geometric elements within a data object makes it possible to essentially decouple the TIN representation from the modeling applications that make use of it. Both the triangulation and its corresponding Voronoi diagram can be rapidly retrieved or updated, making these methods well suited to adaptive remeshing schemes. We develop a set of algorithms for defining drainage networks and identifying closed depressions (e.g., lakes) for hydrologic and geomorphic modeling applications. We also outline simple numerical algorithms for solving network routing and 2D transport equations within the TIN framework. The methods are illustrated with two example applications, a landscape evolution model and a distributed rainfall-runoff model.
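
    The three interconnected data objects can be sketched directly; the fields below follow the description in the abstract, while the names and exact pointers are my own shorthand:

        from dataclasses import dataclass, field

        @dataclass
        class Node:
            x: float; y: float; z: float
            edge: "Edge | None" = None          # one outgoing directed edge

        @dataclass
        class Edge:
            origin: "Node"                      # directed: origin -> dest
            dest: "Node"
            ccw: "Edge | None" = None           # next edge CCW around origin
            tri: "Triangle | None" = None       # face to the left of this edge

        @dataclass
        class Triangle:
            nodes: tuple                        # the three vertices
            neighbors: list = field(default_factory=list)  # adjacent triangles

        # The Voronoi cell of a node can be recovered from the circumcentres of
        # the triangles reached by walking its outgoing edges, so the Voronoi
        # diagram needs no separate storage and updates with the triangulation.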

  2. Road screening and distribution route multi-objective robust optimization for hazardous materials based on neural network and genetic algorithm.

    PubMed

    Ma, Changxi; Hao, Wei; Pan, Fuquan; Xiang, Wang

    2018-01-01

    Route optimization of hazardous materials transportation is one of the basic steps in ensuring the safety of hazardous materials transportation. The optimization scheme may pose a security risk if road screening is not completed before the distribution route is optimized. For the road screening problem in hazardous materials transportation, a road screening algorithm is built based on a genetic algorithm and a Levenberg-Marquardt neural network (GA-LM-NN), analyzing 15 attributes of each road network section. A multi-objective robust optimization model with adjustable robustness is constructed for the hazardous materials transportation problem of a single distribution center, to minimize transportation risk and time. A multi-objective genetic algorithm is designed to solve the problem according to the characteristics of the model. The algorithm uses an improved strategy to complete the selection operation, applies partially matched crossover and single ortho swap methods to complete the crossover and mutation operations, and employs an exclusive method to construct Pareto optimal solutions. Studies show that the sets of roads suitable for hazardous materials transportation can be found quickly with the proposed road screening algorithm based on GA-LM-NN, whereas distribution route Pareto solutions with different levels of robustness can be found rapidly with the proposed multi-objective robust optimization model and algorithm.

  3. Distributed Hypothesis Testing in Distributed Sensor Networks

    DTIC Science & Technology

    1984-07-01

    ... single structure, object is itself an important task in many applications. At least at the conceptual level, there is no difficulty in treating targets ... First, we need to provide a modeling framework within which the models of the various nodes, constructed as discussed above, can be embedded. It is within

  4. Information Interaction Study for DER and DMS Interoperability

    NASA Astrophysics Data System (ADS)

    Liu, Haitao; Lu, Yiming; Lv, Guangxian; Liu, Peng; Chen, Yu; Zhang, Xinhui

    The Common Information Model (CIM) is an abstract data model that can be used to represent the major objects in Distribution Management System (DMS) applications. Because the CIM does not model Distributed Energy Resources (DERs), it cannot meet the requirements of DER operation and management for DMS advanced applications. Modeling of DERs was studied from a system point of view, and the article initially proposes a CIM-extended information model. By analyzing the basic structure of the message interaction between DMS and DER, a bidirectional message-mapping method based on data exchange is proposed.

  5. Modeling the spatial distribution of African buffalo (Syncerus caffer) in the Kruger National Park, South Africa

    PubMed Central

    Hughes, Kristen; Budke, Christine M.; Ward, Michael P.; Kerry, Ruth; Ingram, Ben

    2017-01-01

    The population density of wildlife reservoirs contributes to disease transmission risk for domestic animals. The objective of this study was to model the African buffalo distribution of the Kruger National Park. A secondary objective was to collect field data to evaluate models and determine environmental predictors of buffalo detection. Spatial distribution models were created using buffalo census information and archived data from previous research. Field data were collected during the dry (August 2012) and wet (January 2013) seasons using a random walk design. The fit of the prediction models was assessed descriptively and formally by calculating the root mean square error (rMSE) of deviations from field observations. Logistic regression was used to estimate the effects of environmental variables on the detection of buffalo herds and linear regression was used to identify predictors of larger herd sizes. A zero-inflated Poisson model produced distributions that were most consistent with expected buffalo behavior. Field data confirmed that environmental factors including season (P = 0.008), vegetation type (P = 0.002), and vegetation density (P = 0.010) were significant predictors of buffalo detection. Bachelor herds were more likely to be detected in dense vegetation (P = 0.005) and during the wet season (P = 0.022) compared to the larger mixed-sex herds. Static distribution models for African buffalo can produce biologically reasonable results, but environmental factors have significant effects and therefore could be used to improve model performance. Accurate distribution models are critical for the evaluation of disease risk and to model disease transmission. PMID:28902858

  6. A 3D object-based model to simulate highly-heterogeneous, coarse, braided river deposits

    NASA Astrophysics Data System (ADS)

    Huber, E.; Huggenberger, P.; Caers, J.

    2016-12-01

    There is a critical need in hydrogeological modeling for geologically more realistic representations of the subsurface. Indeed, widely used representations of subsurface heterogeneity based on smooth basis functions, such as cokriging or the pilot-point approach, fail to reproduce the connectivity of the highly permeable geological structures that control subsurface solute transport. To realistically model the connectivity of the highly permeable structures of coarse, braided river deposits, multiple-point statistics and object-based models are promising alternatives. We therefore propose a new object-based model that, according to a sedimentological model, mimics the dominant processes of floodplain dynamics. Contrary to existing models, this object-based model possesses the following properties: (1) it is consistent with field observations (outcrops, ground-penetrating radar data, etc.), (2) it allows different sedimentological dynamics to be modeled that result in different subsurface heterogeneity patterns, (3) it is light in memory and computationally fast, and (4) it can be conditioned to geophysical data. In this model, the main sedimentological elements (scour fills with open-framework-bimodal gravel cross-beds, gravel sheet deposits, open-framework and sand lenses) and their internal structures are described by geometrical objects. Several spatial distributions are proposed that allow simulation of the horizontal position of the objects on the floodplain as well as the net rate of sediment deposition. The model is grid-independent and any vertical section can be computed algebraically. Furthermore, model realizations can serve as training images for multiple-point statistics. The significance of this model is shown by its impact on the subsurface flow distribution, which strongly depends on the sedimentological dynamics modeled. The code will be provided as a free and open-source R-package.

  7. Integrating satellite actual evapotranspiration patterns into distributed model parametrization and evaluation for a mesoscale catchment

    NASA Astrophysics Data System (ADS)

    Demirel, M. C.; Mai, J.; Stisen, S.; Mendiguren González, G.; Koch, J.; Samaniego, L. E.

    2016-12-01

    Distributed hydrologic models are traditionally calibrated and evaluated against observations of streamflow. Spatially distributed remote sensing observations offer a great opportunity to enhance spatial model calibration schemes. To that end, it is important to identify, before satellite-based hydrologic model calibration, the model parameters that can change spatial patterns. Our study rests on two main pillars: first, we use spatial sensitivity analysis to identify the key parameters controlling the spatial distribution of actual evapotranspiration (AET). Second, we investigate the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale Hydrologic Model (mHM). This distributed model is selected as it allows for a change in the spatial distribution of key soil parameters through the calibration of pedo-transfer function (PTF) parameters and includes options for using fully distributed daily Leaf Area Index (LAI) directly as input. In addition, the simulated AET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed in MODIS data. We introduce a new dynamic scaling function employing remotely sensed vegetation to downscale coarse reference evapotranspiration. In total, 17 of 47 mHM parameters are identified using both sequential screening and Latin hypercube one-at-a-time sampling methods. The spatial patterns are found to be sensitive to the vegetation parameters, whereas streamflow dynamics are sensitive to the PTF parameters. The results of multi-objective model calibration show that calibrating mHM against observed streamflow alone does not reduce the spatial errors in AET; it improves only the streamflow simulations. We will further examine the results of model calibration using only spatial objective functions measuring the association between observed and simulated AET maps, and another case including spatial and streamflow metrics together.

  8. Considering Decision Variable Diversity in Multi-Objective Optimization: Application in Hydrologic Model Calibration

    NASA Astrophysics Data System (ADS)

    Sahraei, S.; Asadzadeh, M.

    2017-12-01

    Any modern multi-objective global optimization algorithm should be able to archive a well-distributed set of solutions. While solution diversity in the objective space has been explored extensively in the literature, little attention has been given to solution diversity in the decision space. Selection metrics such as the hypervolume contribution and the crowding distance calculated in the objective space guide the search toward solutions that are well-distributed across the objective space. In this study, the diversity of solutions in the decision space is used as the main selection criterion, beside the dominance check, in multi-objective optimization. To this end, currently archived solutions are clustered in the decision space and the ones in less crowded clusters are given more chance to be selected for generating new solutions. The proposed approach is first tested on benchmark mathematical test problems. Second, it is applied to a hydrologic model calibration problem with more than three objective functions. Results show that the chance of finding a more sparse set of high-quality solutions increases, and therefore the analyst receives a well-diversified set of options with the maximum amount of information. Pareto Archived-Dynamically Dimensioned Search, an efficient and parsimonious multi-objective optimization algorithm for model calibration, is utilized in this study.
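
    The selection rule described, clustering the archive in decision space and drawing parents preferentially from sparse clusters, can be sketched as follows (k-means stands in for whatever clustering the authors used; the cluster count and sampling rule are illustrative):

        import numpy as np
        from scipy.cluster.vq import kmeans2

        def select_parent(archive, k=5, rng=None):
            """Pick a parent from the least crowded decision-space cluster."""
            rng = rng or np.random.default_rng()
            X = np.asarray(archive)                 # rows = archived solutions
            _, labels = kmeans2(X, k, minit="++")
            sizes = np.bincount(labels, minlength=k).astype(float)
            sizes[sizes == 0] = np.inf              # ignore empty clusters
            members = np.flatnonzero(labels == sizes.argmin())
            return X[rng.choice(members)]

        archive = np.random.default_rng(4).random((50, 8))  # 50 solutions, 8 variables
        print(select_parent(archive))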

  9. The structure of the distant Kuiper belt in a Nice model scenario

    NASA Astrophysics Data System (ADS)

    Pike, Rosemary E.; Lawler, Samantha; Brasser, Ramon; Shankman, Cory; Alexandersen, Mike; Kavelaars, J. J.

    2016-10-01

    By utilizing a well-sampled migration model and characterized survey detections, we demonstrate that the Nice-model scenario results in consistent populations of scattering trans-Neptunian objects (TNOs) and several resonant TNO populations, but fails to reproduce the large population of 5:1 resonators discovered in surveys. We examine in detail the TNO populations implanted by the Nice model simulation from Brasser and Morbidelli (2013, B&M). This analysis focuses on the region from 25-155 AU, probing the classical, scattering, detached, and major resonant populations. Additional integrations were necessary to classify the test particles and determine population sizes and characteristics. The classified simulation objects are compared to the real TNOs from the Canada-France Ecliptic Plane Survey (CFEPS), CFEPS high latitude fields, and the Alexandersen (2016) survey. These surveys all include a detailed characterization of survey depth, pointing, and tracking efficiency, which allows detailed testing of this independently produced model of TNO populations. In the B&M model, the regions of the outer Solar System populated via capture of scattering objects are consistent with survey constraints. The scattering TNOs and most n:1 resonant populations have consistent orbital distributions and population sizes with the real detections, as well as a starting disk mass consistent with expectations. The B&M 5:1 resonators have a consistent orbital distribution with the real detections and previous models. However, the B&M 5:1 Neptune resonance is underpopulated by a factor of ~100 and would require a starting proto-planetesimal disk with a mass of ~100 Earth masses. The large population in the 5:1 Neptune resonance is unexplained by scattering capture in a Nice-model scenario, however this model accurately produces the TNO subpopulations that result from scattering object capture and provides additional insight into sub-population orbital distributions.

  10. A Variance Distribution Model of Surface EMG Signals Based on Inverse Gamma Distribution.

    PubMed

    Hayashi, Hideaki; Furui, Akira; Kurita, Yuichi; Tsuji, Toshio

    2017-11-01

    Objective: This paper describes the formulation of a surface electromyogram (EMG) model capable of representing the variance distribution of EMG signals. Methods: In the model, EMG signals are handled based on a Gaussian white noise process with a mean of zero for each variance value. EMG signal variance is taken as a random variable that follows inverse gamma distribution, allowing the representation of noise superimposed onto this variance. Variance distribution estimation based on marginal likelihood maximization is also outlined in this paper. The procedure can be approximated using rectified and smoothed EMG signals, thereby allowing the determination of distribution parameters in real time at low computational cost. Results: A simulation experiment was performed to evaluate the accuracy of distribution estimation using artificially generated EMG signals, with results demonstrating that the proposed model's accuracy is higher than that of maximum-likelihood-based estimation. Analysis of variance distribution using real EMG data also suggested a relationship between variance distribution and signal-dependent noise. Conclusion: The study reported here was conducted to examine the performance of a proposed surface EMG model capable of representing variance distribution and a related distribution parameter estimation method. Experiments using artificial and real EMG data demonstrated the validity of the model. Significance: Variance distribution estimated using the proposed model exhibits potential in the estimation of muscle force.
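
    The key closure property behind such a model is standard: a zero-mean Gaussian whose variance follows an inverse gamma distribution has a Student-t marginal, which keeps marginal-likelihood computations tractable. In the usual parameterization (a textbook identity, not copied from the paper):

        x \mid \sigma^{2} \sim \mathcal{N}(0, \sigma^{2}), \qquad
        \sigma^{2} \sim \mathrm{Inv\text{-}Gamma}(\alpha, \beta)
        \;\Longrightarrow\;
        x \sim t_{2\alpha}\!\left(0, \sqrt{\beta/\alpha}\right),

    i.e. a Student-t with 2*alpha degrees of freedom and scale sqrt(beta/alpha).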

  11. Sensor fusion V; Proceedings of the Meeting, Boston, MA, Nov. 15-17, 1992

    NASA Technical Reports Server (NTRS)

    Schenker, Paul S. (Editor)

    1992-01-01

    Topics addressed include 3D object perception, human-machine interface in multisensor systems, sensor fusion architecture, fusion of multiple and distributed sensors, interface and decision models for sensor fusion, computational networks, simple sensing for complex action, multisensor-based control, and metrology and calibration of multisensor systems. Particular attention is given to controlling 3D objects by sketching 2D views, the graphical simulation and animation environment for flexible structure robots, designing robotic systems from sensorimotor modules, cylindrical object reconstruction from a sequence of images, an accurate estimation of surface properties by integrating information using Bayesian networks, an adaptive fusion model for a distributed detection system, multiple concurrent object descriptions in support of autonomous navigation, robot control with multiple sensors and heuristic knowledge, and optical array detectors for image sensors calibration. (No individual items are abstracted in this volume)

  12. The DICOM-based radiation therapy information system

    NASA Astrophysics Data System (ADS)

    Law, Maria Y. Y.; Chan, Lawrence W. C.; Zhang, Xiaoyan; Zhang, Jianguo

    2004-04-01

    Similar to DICOM for PACS (Picture Archiving and Communication System), standards for radiotherapy (RT) information have been ratified with seven DICOM-RT objects and their IODs (Information Object Definitions), which are more than just images. This presentation describes how a DICOM-based RT Information System Server can be built based on PACS technology and its data model for web-based distribution. Methods: The RT Information System consists of a Modality Simulator, a data format translator, a RT Gateway, the DICOM RT Server, and the Web-based Application Server. The DICOM RT Server was designed based on a PACS data model and was connected to a Web Application Server for distribution of the RT information, including therapeutic plans, structures, dose distributions, images and records. The various DICOM RT objects of the patient transmitted to the RT Server were routed to the Web Application Server, where the contents of the DICOM RT objects were decoded and mapped to the corresponding locations of the RT data model for display in the specially designed graphical user interface. The non-DICOM objects were first rendered to DICOM RT objects in the translator before they were sent to the RT Server. Results: Ten clinical cases were collected from different hospitals for evaluation of the DICOM-based RT Information System. They were successfully routed through the data flow and displayed in the client workstation of the RT Information System. Conclusion: Using the DICOM-RT standards, integration of RT data from different vendors is possible.

  13. Multi-objective Calibration of DHSVM Based on Hydrologic Key Elements in Jinhua River Basin, East China

    NASA Astrophysics Data System (ADS)

    Pan, S.; Liu, L.; Xu, Y. P.

    2017-12-01

    In physically based distributed hydrological models, a large number of parameters, representing the spatial heterogeneity of the watershed and the various processes in the hydrologic cycle, are involved. Because the Distributed Hydrology Soil Vegetation Model (DHSVM) lacks a calibration module, this study developed a multi-objective calibration module using the Epsilon-Dominance Non-Dominated Sorted Genetic Algorithm II (ɛ-NSGAII), based on parallel computing on a Linux cluster (ɛP-DHSVM). In this study, two hydrologic key elements (i.e., runoff and evapotranspiration) are used as objectives in the multi-objective calibration of the model. MODIS evapotranspiration obtained with SEBAL is adopted to compensate for the lack of evapotranspiration observations. The results show that good runoff performance in single-objective calibration does not ensure good simulation of other hydrologic key elements. The self-developed ɛP-DHSVM makes multi-objective calibration more efficient and effective: running speed increases by a factor of 20-30. In addition, runoff and evapotranspiration can be simulated well simultaneously by ɛP-DHSVM, with good values for the two efficiency coefficients (0.74 for NS of runoff and 0.79 for NS of evapotranspiration, and -10.5% and -8.6% for PBIAS of runoff and evapotranspiration, respectively).

  14. Integrated models to support multiobjective ecological restoration decisions.

    PubMed

    Fraser, Hannah; Rumpff, Libby; Yen, Jian D L; Robinson, Doug; Wintle, Brendan A

    2017-12-01

    Many objectives motivate ecological restoration, including improving vegetation condition, increasing the range and abundance of threatened species, and improving species richness and diversity. Although models have been used to examine the outcomes of ecological restoration, few researchers have attempted to develop models to account for multiple, potentially competing objectives. We developed a combined state-and-transition, species-distribution model to predict the effects of restoration actions on vegetation condition and extent, bird diversity, and the distribution of several bird species in southeastern Australian woodlands. The actions reflected several management objectives. We then validated the models against an independent data set and investigated how the best management decision might change when objectives were valued differently. We also used model results to identify effective restoration options for vegetation and bird species under a constrained budget. In the examples we evaluated, no one action (improving vegetation condition and extent, increasing bird diversity, or increasing the probability of occurrence for threatened species) provided the best outcome across all objectives. In agricultural lands, the optimal management actions for promoting the occurrence of the Brown Treecreeper (Climacteris picumnus), an iconic threatened species, resulted in little improvement in the extent of the vegetation and a high probability of decreased vegetation condition. This result highlights that the best management action in any situation depends on how much the different objectives are valued. In our example scenario, no management or weed control were most likely to be the best management options to satisfy multiple restoration objectives. Our approach to exploring trade-offs in management outcomes through integrated modeling and structured decision-support approaches has wide application for situations in which trade-offs exist between competing conservation objectives. © 2017 Society for Conservation Biology.

  15. The implementation of contour-based object orientation estimation algorithm in FPGA-based on-board vision system

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel; Ershov, Maksim; Strotov, Valery

    2016-10-01

    This paper describes the implementation of an orientation estimation algorithm in an FPGA-based vision system. An approach to estimate the orientation of objects lacking axial symmetry is proposed. The suggested algorithm is intended to estimate the orientation of a specific known 3D object based on its 3D model. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, we can gather a set of training images by capturing the 3D model from viewpoints evenly distributed on a sphere. The sphere points are distributed according to the geosphere principle. The gathered training image set is used for calculating descriptors, which are used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy in all case studies. The real-time performance of the algorithm in the FPGA-based vision system was demonstrated.
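
    Viewpoints 'evenly distributed on a sphere' can be generated in several ways; the geosphere (subdivided icosahedron) used by the authors is one, and the Fibonacci lattice below is a common, easy-to-write stand-in (illustrative, not the paper's construction):

        import numpy as np

        def fibonacci_sphere(n):
            """n nearly uniform viewpoints on the unit sphere."""
            i = np.arange(n)
            phi = np.pi * (3.0 - np.sqrt(5.0)) * i        # golden-angle increment
            z = 1.0 - 2.0 * (i + 0.5) / n                 # uniform in z
            r = np.sqrt(1.0 - z * z)
            return np.column_stack([r * np.cos(phi), r * np.sin(phi), z])

        views = fibonacci_sphere(162)  # 162 = vertex count of a frequency-4 geosphere
        print(views.shape, np.linalg.norm(views, axis=1).round(6)[:3])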

  16. Contour-based object orientation estimation

    NASA Astrophysics Data System (ADS)

    Alpatov, Boris; Babayan, Pavel

    2016-04-01

    Real-time object orientation estimation is a topical problem in computer vision. In this paper we propose an approach to estimate the orientation of objects lacking axial symmetry. The proposed algorithm is intended to estimate the orientation of a specific known 3D object, so a 3D model is required for learning. The proposed orientation estimation algorithm consists of two stages: learning and estimation. The learning stage is devoted to exploring the studied object. Using the 3D model, we can gather a set of training images by capturing the 3D model from viewpoints evenly distributed on a sphere. The sphere points are distributed according to the geosphere principle, which minimizes the training image set. The gathered training image set is used for calculating descriptors, which are used in the estimation stage of the algorithm. The estimation stage focuses on the matching process between an observed image descriptor and the training image descriptors. The experimental research was performed using a set of images of an Airbus A380. The proposed orientation estimation algorithm showed good accuracy (mean error value less than 6°) in all case studies. The real-time performance of the algorithm was also demonstrated.

  17. Development of a Dynamically Configurable, Object-Oriented Framework for Distributed, Multi-modal Computational Aerospace Systems Simulation

    NASA Technical Reports Server (NTRS)

    Afjeh, Abdollah A.; Reed, John A.

    2003-01-01

    This research is aimed at developing a new and advanced simulation framework that will significantly improve the overall efficiency of aerospace systems design and development. This objective will be accomplished through an innovative integration of object-oriented and Web-based technologies with both new and proven simulation methodologies. The basic approach involves three major areas of research: (1) aerospace system and component representation using a hierarchical object-oriented component model, which enables the use of multimodels and enforces component interoperability; (2) a collaborative software environment that streamlines the process of developing, sharing and integrating aerospace design and analysis models; and (3) development of a distributed infrastructure which enables Web-based exchange of models to simplify the collaborative design process, and to support computationally intensive aerospace design and analysis processes. Research for the first year dealt with the design of the basic architecture and supporting infrastructure, an initial implementation of that design, and a demonstration of its application to an example aircraft engine system simulation.

  18. Space Object Radiometric Modeling for Hardbody Optical Signature Database Generation

    DTIC Science & Technology

    2009-09-01

    Introduction: This presentation summarizes recent activity in monitoring spacecraft health status using passive remote optical nonimaging ... Approved for public release; distribution is unlimited. Space Object Radiometric Modeling for Hardbody Optical Signature Database Generation ... It is beneficial to the observer/analyst to understand the fundamental optical signature variability associated with these detection and

  19. Design of Magnetic Charged Particle Lens Using Analytical Potential Formula

    NASA Astrophysics Data System (ADS)

    Al-Batat, A. H.; Yaseen, M. J.; Abbas, S. R.; Al-Amshani, M. S.; Hasan, H. S.

    2018-05-01

    The aim of the current research was to benefit from the potential of two cylindrical electric lenses to produce a mathematical model from which one can determine the magnetic field distribution of a charged-particle objective lens. With the aid of Simulink in the MATLAB environment, Simulink models were built to determine the distribution of the target function and the related axial functions along the optical axis of the charged-particle lens. The present study showed that the physical parameters (i.e., the maximum value, Bmax, and the half width W of the field distribution) and the objective properties of the charged-particle lens are affected by varying the main geometrical parameter of the lens, namely the bore radius R.

  20. Gravitational lensing by a smoothly variable surface mass density

    NASA Technical Reports Server (NTRS)

    Paczynski, Bohdan; Wambsganss, Joachim

    1989-01-01

    The statistical properties of gravitational lensing due to smooth but nonuniform distributions of matter are considered. It is found that a majority of triple images had a parity characteristic for 'shear-induced' lensing. Almost all cases of triple or multiple imaging were associated with large surface density enhancements, and lensing objects were present between the images. Thus, the observed gravitational lens candidates for which no lensing object has been detected between the images are unlikely to be a result of asymmetric distribution of mass external to the image circle. In a model with smoothly variable surface mass density, moderately and highly amplified images tended to be single rather than multiple. An opposite trend was found in models which had singularities in the surface mass distribution.

  1. The Structure of the Distant Kuiper Belt in a Nice Model Scenario

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pike, R. E.; Shankman, C. J.; Kavelaars, J. J.

    2017-03-01

    This work explores the orbital distribution of minor bodies in the outer Solar System emplaced as a result of a Nice model migration from the simulations of Brasser and Morbidelli. This planetary migration scatters a planetesimal disk from between 29 and 34 au and emplaces a population of objects into the Kuiper Belt region. From the 2:1 Neptune resonance and outward, the test particles analyzed populate the outer resonances with orbital distributions consistent with trans-Neptunian object (TNO) detections in semimajor axis, inclination, and eccentricity, while capture into the closest resonances is too efficient. The relative populations of the simulated scattering objects and resonant objects in the 3:1 and 4:1 resonances are also consistent with observed populations based on debiased TNO surveys, but the 5:1 resonance is severely underpopulated compared to population estimates from survey results. Scattering emplacement results in the expected orbital distribution for the majority of the TNO populations; however, the origin of the large observed population in the 5:1 resonance remains unexplained.

  2. 3D shape representation with spatial probabilistic distribution of intrinsic shape keypoints

    NASA Astrophysics Data System (ADS)

    Ghorpade, Vijaya K.; Checchin, Paul; Malaterre, Laurent; Trassoudaine, Laurent

    2017-12-01

    The accelerated advancement in modeling, digitizing, and visualizing techniques for 3D shapes has led to an increasing amount of 3D model creation and usage, thanks to 3D sensors that are readily available and easy to use. As a result, determining the similarity between 3D shapes has become consequential and is a fundamental task in shape-based recognition, retrieval, clustering, and classification. Several decades of research in Content-Based Information Retrieval (CBIR) have resulted in diverse techniques for 2D and 3D shape or object classification/retrieval and many benchmark data sets. In this article, a novel technique for 3D shape representation and object classification is proposed, based on analysis of the spatial, geometric distributions of 3D keypoints. These distributions capture the intrinsic geometric structure of 3D objects. The result of the approach is a probability distribution function (PDF) produced from the spatial disposition of 3D keypoints, keypoints which are stable on the object surface and invariant to pose changes. Each class/instance of an object can be uniquely represented by a PDF. This shape representation is robust yet conceptually simple, easy to implement, and fast to compute. Both Euclidean and topological space on the object's surface are considered in building the PDFs. Topology-based geodesic distances between keypoints exploit the non-planar surface properties of the object. The performance of the novel shape signature is tested on object classification accuracy. The classification efficacy of the new shape analysis method is evaluated on a new dataset acquired with a Time-of-Flight camera, and a comparative evaluation on a standard benchmark dataset against state-of-the-art methods is performed. Experimental results demonstrate superior classification performance of the new approach on the RGB-D dataset and depth data.
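
    Stripped to its core, the signature is a normalized histogram of pairwise keypoint distances. A minimal Euclidean version in Python (the geodesic variant would replace the distance computation; the keypoint clouds here are random stand-ins):

        import numpy as np

        def shape_pdf(keypoints, bins=32):
            """Normalized histogram of pairwise keypoint distances."""
            d = np.linalg.norm(keypoints[:, None, :] - keypoints[None, :, :], axis=2)
            d = d[np.triu_indices_from(d, k=1)]        # unique pairs only
            hist, _ = np.histogram(d / d.max(), bins=bins, range=(0.0, 1.0))
            return hist / hist.sum()                   # scale-normalized PDF

        def similarity(p, q):
            return float(np.minimum(p, q).sum())       # histogram intersection

        a = np.random.default_rng(5).random((60, 3))
        b = np.random.default_rng(6).random((60, 3))
        print(similarity(shape_pdf(a), shape_pdf(b)))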

  3. Sensitivity analysis, calibration, and testing of a distributed hydrological model using error‐based weighting and one objective function

    USGS Publications Warehouse

    Foglia, L.; Hill, Mary C.; Mehl, Steffen W.; Burlando, P.

    2009-01-01

    We evaluate the utility of three interrelated means of using data to calibrate the fully distributed rainfall‐runoff model TOPKAPI as applied to the Maggia Valley drainage area in Switzerland. The use of error‐based weighting of observation and prior information data, local sensitivity analysis, and single‐objective function nonlinear regression provides quantitative evaluation of sensitivity of the 35 model parameters to the data, identification of data types most important to the calibration, and identification of correlations among parameters that contribute to nonuniqueness. Sensitivity analysis required only 71 model runs, and regression required about 50 model runs. The approach presented appears to be ideal for evaluation of models with long run times or as a preliminary step to more computationally demanding methods. The statistics used include composite scaled sensitivities, parameter correlation coefficients, leverage, Cook's D, and DFBETAS. Tests suggest predictive ability of the calibrated model typical of hydrologic models.

  4. Optical Characterization of Deep-Space Object Rotation States

    DTIC Science & Technology

    2014-09-01

    surface bi-directional reflectance distribution function ( BRDF ), and then estimate the asteroid’s shape via a best-fit parameterized model . This hybrid...approach can be used because asteroid BRDFs are relatively well studied, but their shapes are generally unknown [17]. Asteroid shape models range...can be accomplished using a shape-dependent method that employs a model of the shape and reflectance characteristics of the object. Our analysis

  5. Trojans and Plutinos as probes of planet building

    NASA Astrophysics Data System (ADS)

    Alexandersen, Mike; Gladman, B.; Kavelaars, J. J.; Petit, J.; Gwyn, S.; Greenstreet, S.

    2013-10-01

    Planetesimals formed during planet formation are the building blocks of giant planet cores; some are preserved as large trans-neptunian objects (TNOs). Previous work has shown steep power-law distributions for TNOs of diameters > 100 km. Recent results claim a dramatic roll-over or divot in the size distribution of Neptunian Trojans and scattering TNOs, with a significant lack of intermediate-size D<100 km planetesimals. One theoretical explanation for this is that planetesimals were born big, skipping the intermediate sizes, contrary to the classical understanding of planetesimal formation. Exploration of the TNO size distribution requires more precisely calibrated detections in order to improve statistics on these results. We have searched a 32 sq.deg. area near RA=2 hr to a r-band limiting magnitude of m_r=24.6 using the Canada-France-Hawaii Telescope. This coverage was near the Neptunian L4 region to maximise our detection rate, as this is where Trojans reside and where Plutinos (and several other resonant populations) come to perihelion. Our program successfully detected, tracked and characterized 77 TNOs and Centaurs for up to 17 months, giving us the high-quality orbits needed for precise modelling. Among our detections were one Uranian Trojan (see Alexandersen et al. 2013 & abstract by Greenstreet et al.), two Neptunian Trojans, 18 Plutinos and many other resonant objects. This meticulously calibrated survey and the high-quality orbits obtained for the detected objects allow us to create and test models of TNO size and orbital distributions. We test these models using a survey simulator, which simulates the detectability of model objects, accounting for the constraints and biases of our survey. Thus, we set precise constraints on the size and orbital distributions of the Neptunian Trojans, Plutinos and other resonant populations. We show that the Plutino inclination distribution is dynamically colder than found by the Canada-France Ecliptic Plane Survey. We also show that the Plutino size distribution cannot continue with the same slope for diameters < 100 km; a best-fit alternative will be presented. This research was supported by the Canadian National Sciences and Engineering Research Council.
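
    The survey-simulator step works by forward-biasing a model: draw objects from proposed size and orbit distributions, compute apparent magnitudes, and keep those the survey could have detected. A bare-bones Python sketch (single power-law H distribution and a flux limit only; the exponent, H range, and toy orbits are assumptions, and real simulators also model pointings, rates of motion, and tracking efficiency):

        import numpy as np

        rng = np.random.default_rng(7)
        m_limit = 24.6                        # survey limiting magnitude (r band)
        alpha, H_min, H_max = 0.9, 6.0, 12.0  # hypothetical power-law H distribution

        # Draw H with dN/dH proportional to 10**(alpha*H) (inverse-CDF sampling).
        u = rng.random(100_000)
        lo, hi = 10.0 ** (alpha * H_min), 10.0 ** (alpha * H_max)
        H = np.log10(lo + u * (hi - lo)) / alpha

        r = rng.uniform(30.0, 50.0, H.size)   # heliocentric distance (au), toy orbits
        delta = r - 1.0                       # geocentric distance at opposition (au)
        m = H + 5.0 * np.log10(r * delta)     # apparent magnitude, zero phase angle

        print(f"simulated detection fraction: {(m <= m_limit).mean():.4f}")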

  6. Research on vehicle routing optimization for the terminal distribution of B2C E-commerce firms

    NASA Astrophysics Data System (ADS)

    Zhang, Shiyun; Lu, Yapei; Li, Shasha

    2018-05-01

    In this paper, we established a half-open multi-objective optimization model for the vehicle routing problem of B2C (business-to-customer) e-commerce firms. To minimize the current transport distance as well as the disparity between the expected shipments and the transport capacity in the next distribution, we applied the concepts of dominated and Pareto solutions to standard particle swarm optimization and proposed a MOPSO (multi-objective particle swarm optimization) algorithm to support the model. We also obtained the optimized solution of the MOPSO algorithm on data randomly generated through the system, which verified the validity of the model.

  7. An High Resolution Near-Earth Objects Population Enabling Next-Generation Search Strategies

    NASA Technical Reports Server (NTRS)

    Tricaico, Pasquale; Beshore, E. C.; Larson, S. M.; Boattini, A.; Williams, G. V.

    2010-01-01

    Over the past decade, the dedicated search for kilometer-size near-Earth objects (NEOs), potentially hazardous objects (PHOs), and potential Earth impactors has led to a boost in the rate of discoveries of these objects. The catalog of known NEOs is the fundamental ingredient used to develop a model for the NEO population, either by assessing and correcting for the observational bias (Jedicke et al., 2002), or by evaluating the migration rates from the NEO source regions (Bottke et al., 2002). The modeled NEO population is a necessary tool used to track progress in the search for large NEOs (Jedicke et al., 2003), to try to predict the distribution of the ones still undiscovered, and to study the sky distribution of potential Earth impactors (Chesley & Spahr, 2004). We present a method to model the NEO population in all six orbital elements, on a finely grained grid, allowing us to design and test targeted and optimized search strategies. This method relies on the observational data routinely reported to the Minor Planet Center (MPC) by the Catalina Sky Survey (CSS) and by other active NEO surveys over the past decade, to determine on a nightly basis the efficiency in detecting moving objects as a function of observable quantities including apparent magnitude, rate of motion, airmass, and galactic latitude. The cumulative detection probability is then computed for objects within a small range in orbital elements and absolute magnitude, and the comparison with the number of known NEOs within the same range allows us to model the population. When propagated to the present epoch and projected on the sky plane, this provides the distribution of the missing large NEOs, PHOs, and potential impactors.
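
    The abstract does not give the nightly efficiency function in closed form; the sketch below assumes a commonly used smooth magnitude roll-off with made-up parameters (eta0, m50, w) to illustrate how a per-night efficiency and a cumulative detection probability over repeated visits could be combined:

```python
import numpy as np

def detection_efficiency(m, eta0=0.9, m50=21.5, w=0.3):
    """Hypothetical smooth roll-off: probability of detecting an object of
    apparent magnitude m on a single night (parameters are illustrative)."""
    return eta0 / (1.0 + np.exp((m - m50) / w))

def cumulative_probability(m, n_visits):
    """Cumulative detection probability over repeated independent visits."""
    return 1.0 - (1.0 - detection_efficiency(m)) ** n_visits

print(cumulative_probability(np.array([20.0, 21.5, 23.0]), n_visits=10))
```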

  8. Informational model verification of ZVS Buck quasi-resonant DC-DC converter

    NASA Astrophysics Data System (ADS)

    Vakovsky, Dimiter; Hinov, Nikolay

    2016-12-01

    The aim of this paper is to create a polymorphic informational model of a ZVS Buck quasi-resonant DC-DC converter for modeling purposes. Flexible open standards for setting, storing, publishing and exchanging data in a distributed information environment are applied in the creation of the model. The resulting model is useful for creating many variants of different types, with different configurations of the composing elements and different inner models of the examined object.

  9. Network-based Modeling of Mesoscale Catchments - The Hydrology Perspective of GLOWA-Danube

    NASA Astrophysics Data System (ADS)

    Ludwig, R.; Escher-Vetter, H.; Hennicker, R.; Mauser, W.; Niemeyer, S.; Reichstein, M.; Tenhunen, J.

    Within the GLOWA initiative of the German Ministry for Research and Education (BMBF), the project GLOWA-Danube is funded to establish a transdisciplinary network-based decision support tool for water related issues in the Upper Danube watershed. It aims to develop and validate integration techniques, integrated models and integrated monitoring procedures and to implement them in the network-based Decision Support System DANUBIA. An accurate description of processes involved in energy, water and matter fluxes and turnovers requires an intense collaboration and exchange of water related expertise of different scientific disciplines. DANUBIA is conceived as a distributed expert network and is developed on the basis of re-useable, refineable, and documented sub-models. In order to synthesize a common understanding between the project partners, a standardized notation of parameters and functions and a platform-independent structure of computational methods and interfaces has been established using the Unified Modeling Language UML. DANUBIA is object-oriented, spatially distributed and raster-based at its core. It applies the concept of "proxels" (Process Pixel) as its basic object, which has different dimensions depending on the viewing scale and connects to its environment through fluxes. The presented study excerpts the hydrological view point of GLOWA-Danube, its approach of model coupling and network based communication (using the Remote Method Invocation RMI), the object-oriented technology to simulate physical processes and interactions at the land surface and the methodology to treat the issue of spatial and temporal scaling in large, heterogeneous catchments. The mechanisms applied to communicate data and model parameters across the typical discipline borders will be demonstrated from the perspective of a land-surface object, which comprises the capabilities of interdependent expert models for snowmelt, soil water movement, runoff formation, plant growth and radiation balance in a distributed JAVA-based modeling environment. The coupling to the adjacent physical objects of atmosphere, groundwater and river network will also be addressed.

  10. Photometric model of diffuse surfaces described as a distribution of interfaced Lambertian facets.

    PubMed

    Simonot, Lionel

    2009-10-20

    The Lambertian model for diffuse reflection is widely used for the sake of its simplicity. Nevertheless, this model is known to be inaccurate in describing many real-world objects, including those that present a matte surface. To overcome this difficulty, we propose a photometric model where the surfaces are described as a distribution of facets, each consisting of a flat interface on a Lambertian background. Compared to the Lambertian model, it includes two additional physical parameters: an interface roughness parameter and the ratio between the refractive indices of the background binder and of the upper medium. The Torrance-Sparrow model--a distribution of strictly specular facets--and the Oren-Nayar model--a distribution of strictly Lambertian facets--appear as special cases.

  11. Virtual gonio-spectrophotometer for validation of BRDF designs

    NASA Astrophysics Data System (ADS)

    Mihálik, Andrej; Ďurikovič, Roman

    2011-10-01

    Measurement of the appearance of an object consists of a group of measurements to characterize the color and surface finish of the object. This group of measurements involves the spectral energy distribution of propagated light measured in terms of reflectance and transmittance, and the spatial energy distribution of that light measured in terms of the bidirectional reflectance distribution function (BRDF). In this article we present the virtual gonio-spectrophotometer, a device that measures flux (power) as a function of illumination and observation. Virtual gonio-spectrophotometer measurements allow the determination of the scattering profile of specimens that can be used to verify the physical characteristics of the computer model used to simulate the scattering profile. Among the characteristics that we verify is the energy conservation of the computer model. A virtual gonio-spectrophotometer is utilized to find the correspondence between industrial measurements obtained from gloss meters and the parameters of a computer reflectance model.
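
    One such verification, the energy-conservation check, amounts to integrating the BRDF times the projected solid angle over the outgoing hemisphere. A minimal sketch, using a Lambertian BRDF as a stand-in for the reflectance model under test:

```python
import numpy as np

def hemispherical_albedo(brdf, n_theta=256, n_phi=256):
    """Crude Riemann-sum integral of brdf * cos(theta) over the hemisphere."""
    theta = np.linspace(0.0, np.pi / 2, n_theta)
    phi = np.linspace(0.0, 2 * np.pi, n_phi)
    t, p = np.meshgrid(theta, phi, indexing="ij")
    integrand = brdf(t, p) * np.cos(t) * np.sin(t)  # sin(t) from the solid angle
    return integrand.sum() * (np.pi / 2 / (n_theta - 1)) * (2 * np.pi / (n_phi - 1))

rho = 0.8  # assumed diffuse albedo of the test model
print(hemispherical_albedo(lambda t, p: rho / np.pi))  # ~0.8, and must stay <= 1
```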

  12. Mentat: An object-oriented macro data flow system

    NASA Technical Reports Server (NTRS)

    Grimshaw, Andrew S.; Liu, Jane W. S.

    1988-01-01

    Mentat, an object-oriented macro data flow system designed to facilitate parallelism in distributed systems, is presented. The macro data flow model is a model of computation similar to the data flow model with two principal differences: the computational complexity of the actors is much greater than in traditional data flow systems, and there are persistent actors that maintain state information between executions. Mentat is a system that combines the object-oriented programming paradigm and the macro data flow model of computation. Mentat programs use a dynamic structure called a future list to represent the future of computations.
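
    A rough analogy to the future-list idea, sketched with Python's standard thread pool rather than Mentat's runtime: actor invocations return futures immediately, and values are only forced when a downstream computation consumes them.

```python
from concurrent.futures import ThreadPoolExecutor

def actor(x):
    # A "macro data flow actor": in Mentat this would be a persistent object
    # doing substantial work, not a trivial function.
    return x * x

with ThreadPoolExecutor() as pool:
    future_list = [pool.submit(actor, x) for x in range(5)]
    # Downstream computation forces the futures only when it consumes them.
    total = sum(f.result() for f in future_list)

print(total)  # 30
```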

  13. Timing of repetition suppression of event-related potentials to unattended objects.

    PubMed

    Stefanics, Gabor; Heinzle, Jakob; Czigler, István; Valentini, Elia; Stephan, Klaas Enno

    2018-05-26

    Current theories of object perception emphasize the automatic nature of perceptual inference. Repetition suppression (RS), the successive decrease of brain responses to repeated stimuli, is thought to reflect the optimization of perceptual inference through neural plasticity. While functional imaging studies revealed brain regions that show suppressed responses to the repeated presentation of an object, little is known about the intra-trial time course of repetition effects to everyday objects. Here we used event-related potentials (ERP) to task-irrelevant line-drawn objects, while participants engaged in a distractor task. We quantified changes in ERPs over repetitions using three general linear models (GLM) that modelled RS by an exponential, linear, or categorical "change detection" function in each subject. Our aim was to select the model with highest evidence and determine the within-trial time-course and scalp distribution of repetition effects using that model. Model comparison revealed the superiority of the exponential model indicating that repetition effects are observable for trials beyond the first repetition. Model parameter estimates revealed a sequence of RS effects in three time windows (86-140ms, 322-360ms, and 400-446ms) and with occipital, temporo-parietal, and fronto-temporal distribution, respectively. An interval of repetition enhancement (RE) was also observed (320-340ms) over occipito-temporal sensors. Our results show that automatic processing of task-irrelevant objects involves multiple intervals of RS with distinct scalp topographies. These sequential intervals of RS and RE might reflect the short-term plasticity required for optimization of perceptual inference and the associated changes in prediction errors (PE) and predictions, respectively, over stimulus repetitions during automatic object processing.

  14. Deployment strategy for battery energy storage system in distribution network based on voltage violation regulation

    NASA Astrophysics Data System (ADS)

    Wu, H.; Zhou, L.; Xu, T.; Fang, W. L.; He, W. G.; Liu, H. M.

    2017-11-01

    In order to improve the voltage violations caused by the grid-connection of photovoltaic (PV) systems in a distribution network, a bi-level programming model is proposed for battery energy storage system (BESS) deployment. The objective of the inner-level programming is to minimize voltage violation, with the power of the PV and BESS as the variables. The objective of the outer-level programming is to minimize a comprehensive function derived from the inner-level programming and the BESS operating parameters, with the capacity and rated power of the BESS as the variables. The differential evolution (DE) algorithm is applied to solve the model. Based on distribution network operation scenarios with photovoltaic generation under multiple alternative output modes, simulation results on the IEEE 33-bus system prove that the BESS deployment strategy proposed in this paper is well adapted to voltage violation regulation in variable distribution network operation scenarios. It contributes to regulating voltage violations in the distribution network, as well as to improving the utilization of PV systems.
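
    As a hedged illustration of the outer level, the sketch below runs SciPy's differential evolution on a toy objective; the cost terms and the voltage-violation penalty are invented stand-ins for the inner-level power-flow computation, not the paper's model:

```python
from scipy.optimize import differential_evolution

def outer_objective(x):
    capacity_mwh, rated_mw = x
    # Stand-in for the inner level: violations shrink as rated power grows.
    violation_penalty = 50.0 / (1.0 + 2.0 * rated_mw)
    # Hypothetical investment/operating cost proxy for the BESS.
    bess_cost = 10.0 * capacity_mwh + 25.0 * rated_mw
    return violation_penalty + bess_cost

result = differential_evolution(outer_objective,
                                bounds=[(0.1, 5.0), (0.1, 2.0)], seed=1)
print(result.x, result.fun)  # optimal (capacity, rated power) for the toy model
```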

  15. ADHydro: A Parallel Implementation of a Large-scale High-Resolution Multi-Physics Distributed Water Resources Model Using the Charm++ Run Time System

    NASA Astrophysics Data System (ADS)

    Steinke, R. C.; Ogden, F. L.; Lai, W.; Moreno, H. A.; Pureza, L. G.

    2014-12-01

    Physics-based watershed models are useful tools for hydrologic studies, water resources management and economic analyses in the contexts of climate, land-use, and water-use changes. This poster presents a parallel implementation of a quasi 3-dimensional, physics-based, high-resolution, distributed water resources model suitable for simulating large watersheds in a massively parallel computing environment. Developing this model is one of the objectives of the NSF EPSCoR RII Track II CI-WATER project, a joint effort between the Wyoming and Utah EPSCoR jurisdictions. The model, which we call ADHydro, is aimed at simulating important processes in the Rocky Mountain west, including: rainfall and infiltration, snowfall and snowmelt in complex terrain, vegetation and evapotranspiration, soil heat flux and freezing, overland flow, channel flow, groundwater flow, water management and irrigation. Model forcing is provided by the Weather Research and Forecasting (WRF) model, and ADHydro is coupled with the NOAH-MP land-surface scheme for calculating fluxes between the land and atmosphere. The ADHydro implementation uses the Charm++ parallel run time system. Charm++ is based on location-transparent message passing between migratable C++ objects. Each object represents an entity in the model such as a mesh element. These objects can be migrated between processors or serialized to disk, allowing the Charm++ system to automatically provide capabilities such as load balancing and checkpointing. Objects interact with each other by passing messages that the Charm++ system routes to the correct destination object regardless of its current location. This poster discusses the algorithms, communication patterns, and caching strategies used to implement ADHydro with Charm++. The ADHydro model code will be released to the hydrologic community in late 2014.

  16. Quantifying evenly distributed states in exclusion and nonexclusion processes

    NASA Astrophysics Data System (ADS)

    Binder, Benjamin J.; Landman, Kerry A.

    2011-04-01

    Spatial-point data sets, generated from a wide range of physical systems and mathematical models, can be analyzed by counting the number of objects in equally sized bins. We find that the bin counts are related to the Pólya distribution. New measures are developed which indicate whether or not a spatial data set, generated from an exclusion process, is at its most evenly distributed state, the complete spatial randomness (CSR) state. To this end, we define an index in terms of the variance between the bin counts. Limiting values of the index are determined when objects have access to the entire domain and when there are subregions of the domain that are inaccessible to objects. Using three case studies (Lagrangian fluid particles in chaotic laminar flows, cellular automata agents in discrete models, and biological cells within colonies), we calculate the indexes and verify that our theoretical CSR limit accurately predicts the state of the system. These measures should prove useful in many biological applications.
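
    A minimal sketch of the variance-based indexing idea (the paper's exact normalization may differ): bin a point pattern, then compare the observed variance of the bin counts against the binomial variance expected under CSR.

```python
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((500, 2))            # 500 objects in the unit square

k = 10                                    # 10 x 10 grid of equally sized bins
counts, _, _ = np.histogram2d(points[:, 0], points[:, 1], bins=k)
n = points.shape[0]

observed_var = counts.var()
csr_var = n * (1 / k**2) * (1 - 1 / k**2)  # binomial variance per bin under CSR

index = observed_var / csr_var
print(f"dispersion index = {index:.3f}  (~1 for CSR, <1 for more even states)")
```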

  17. Does the Sound of a Barking Dog Activate its Corresponding Visual Form? An fMRI Investigation of Modality-Specific Semantic Access

    PubMed Central

    Reilly, Jamie; Garcia, Amanda; Binney, Richard J.

    2016-01-01

    Much remains to be learned about the neural architecture underlying word meaning. Fully distributed models of semantic memory predict that the sound of a barking dog will conjointly engage a network of distributed sensorimotor spokes. An alternative framework holds that modality-specific features additionally converge within transmodal hubs. Participants underwent functional MRI while covertly naming familiar objects versus newly learned novel objects from only one of their constituent semantic features (visual form, characteristic sound, or point-light motion representation). Relative to the novel object baseline, familiar concepts elicited greater activation within association regions specific to that presentation modality. Furthermore, visual form elicited activation within high-level auditory association cortex. Conversely, environmental sounds elicited activation in regions proximal to visual association cortex. Both conditions commonly engaged a putative hub region within lateral anterior temporal cortex. These results support hybrid semantic models in which local hubs and distributed spokes are dually engaged in service of semantic memory. PMID:27289210

  18. The Clouds distributed operating system - Functional description, implementation details and related work

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.; Appelbe, William F.

    1988-01-01

    Clouds is an operating system in a novel class of distributed operating systems providing the integration, reliability, and structure that make a distributed system usable. Clouds is designed to run on a set of general purpose computers that are connected via a medium- to high-speed local area network. The system structuring paradigm chosen for the Clouds operating system, after substantial research, is an object/thread model. All instances of services, programs and data in Clouds are encapsulated in objects. The concept of persistent objects does away with the need for file systems, and replaces it with a more powerful concept, namely the object system. The facilities in Clouds include integration of resources through location transparency; support for various types of atomic operations, including conventional transactions; advanced support for achieving fault tolerance; and provisions for dynamic reconfiguration.

  19. Derivation of WECC Distributed PV System Model Parameters from Quasi-Static Time-Series Distribution System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry A; Boemer, Jens C.; Vittal, Eknath

    The response of low voltage networks with high penetration of PV systems to transmission network faults will, in the future, determine the overall power system performance during certain hours of the year. The WECC distributed PV system model (PVD1) is designed to represent small-scale distribution-connected systems. Although default values are provided by WECC for the model parameters, tuning of those parameters seems to become important in order to accurately estimate the partial loss of distributed PV systems for bulk system studies. The objective of this paper is to describe a new methodology to determine the WECC distributed PV system (PVD1) model parameters and to derive parameter sets obtained for six distribution circuits of a Californian investor-owned utility with large amounts of distributed PV systems. The results indicate that the parameters for the partial loss of distributed PV systems may differ significantly from the default values provided by WECC.

  20. Taking a(c)count of eye movements: Multiple mechanisms underlie fixations during enumeration.

    PubMed

    Paul, Jacob M; Reeve, Robert A; Forte, Jason D

    2017-03-01

    We habitually move our eyes when we enumerate sets of objects. It remains unclear whether saccades are directed for numerosity processing as distinct from object-oriented visual processing (e.g., object saliency, scanning heuristics). Here we investigated the extent to which enumeration eye movements are contingent upon the location of objects in an array, and whether fixation patterns vary with enumeration demands. Twenty adults enumerated random dot arrays twice: first to report the set cardinality and second to judge the perceived number of subsets. We manipulated the spatial location of dots by presenting arrays at 0°, 90°, 180°, and 270° orientations. Participants required a similar time to enumerate the set or the perceived number of subsets in the same array. Fixation patterns were systematically shifted in the direction of array rotation, and distributed across similar locations when the same array was shown on multiple occasions. We modeled fixation patterns and dot saliency using a simple filtering model and show participants judged groups of dots in close proximity (2°-2.5° visual angle) as distinct subsets. Modeling results are consistent with the suggestion that enumeration involves visual grouping mechanisms based on object saliency, and specific enumeration demands affect spatial distribution of fixations. Our findings highlight the importance of set computation, rather than object processing per se, for models of numerosity processing.
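
    A sketch of a simple filtering model of this kind, assuming a hypothetical display calibration: blur the dot map at a scale comparable to the reported ~2 degree grouping range and count connected supra-threshold regions as perceived subsets.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

PIXELS_PER_DEGREE = 30                      # hypothetical display calibration

dots = np.zeros((300, 300))
rng = np.random.default_rng(1)
xy = rng.integers(0, 300, size=(25, 2))
dots[xy[:, 0], xy[:, 1]] = 1.0              # 25 dots as unit impulses

# Blur at a FWHM of roughly 2 degrees of visual angle (sigma = FWHM / 2.355).
saliency = gaussian_filter(dots, sigma=2.0 * PIXELS_PER_DEGREE / 2.355)

# Connected supra-threshold regions stand in for perceived subsets.
subsets, n_subsets = label(saliency > saliency.max() * 0.3)
print(f"perceived subsets: {n_subsets}")
```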

  1. Mapping Sub-Antarctic Cushion Plants Using Random Forests to Combine Very High Resolution Satellite Imagery and Terrain Modelling

    PubMed Central

    Bricher, Phillippa K.; Lucieer, Arko; Shaw, Justine; Terauds, Aleks; Bergstrom, Dana M.

    2013-01-01

    Monitoring changes in the distribution and density of plant species often requires accurate and high-resolution baseline maps of those species. Detecting such change at the landscape scale is often problematic, particularly in remote areas. We examine a new technique to improve accuracy and objectivity in mapping vegetation, combining species distribution modelling and satellite image classification on a remote sub-Antarctic island. In this study, we combine spectral data from very high resolution WorldView-2 satellite imagery and terrain variables from a high resolution digital elevation model to improve mapping accuracy, in both pixel- and object-based classifications. Random forest classification was used to explore the effectiveness of these approaches on mapping the distribution of the critically endangered cushion plant Azorella macquariensis Orchard (Apiaceae) on sub-Antarctic Macquarie Island. Both pixel- and object-based classifications of the distribution of Azorella achieved very high overall validation accuracies (91.6–96.3%, κ = 0.849–0.924). Both two-class and three-class classifications were able to accurately and consistently identify the areas where Azorella was absent, indicating that these maps provide a suitable baseline for monitoring expected change in the distribution of the cushion plants. Detecting such change is critical given the threats this species is currently facing under altering environmental conditions. The method presented here has applications to monitoring a range of species, particularly in remote and isolated environments. PMID:23940805
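
    The classification step might look like the following sketch, where the band and terrain layers are placeholders rather than the study's actual predictors:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_pixels = 1000
features = np.column_stack([
    rng.normal(size=n_pixels),   # e.g. a WorldView-2 NIR band (placeholder)
    rng.normal(size=n_pixels),   # e.g. a red-edge band (placeholder)
    rng.normal(size=n_pixels),   # e.g. slope from the DEM (placeholder)
    rng.normal(size=n_pixels),   # e.g. topographic wetness index (placeholder)
])
# Synthetic presence/absence labels standing in for field observations.
labels = (features[:, 0] + 0.5 * features[:, 2] + rng.normal(size=n_pixels) > 0)

clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
clf.fit(features, labels)
print(f"out-of-bag accuracy: {clf.oob_score_:.3f}")
print("feature importances:", clf.feature_importances_.round(3))
```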

  2. Delving into α-stable distribution in noise suppression for seizure detection from scalp EEG

    NASA Astrophysics Data System (ADS)

    Wang, Yueming; Qi, Yu; Wang, Yiwen; Lei, Zhen; Zheng, Xiaoxiang; Pan, Gang

    2016-10-01

    Objective. There is serious noise in EEG caused by eye blinks and muscle activity. The noise exhibits morphologies similar to epileptic seizure signals, leading to relatively high false alarm rates in most existing seizure detection methods. The objective of this paper is to develop an effective noise suppression method for seizure detection and to explore the reason why it works. Approach. Based on a state-space model containing a non-linear observation function and multiple features as the observations, this paper delves deeply into the effect of the α-stable distribution on noise suppression in seizure detection from scalp EEG. Compared with the Gaussian distribution, the α-stable distribution is asymmetric and has relatively heavy tails. These properties make it more powerful in modeling impulsive noise in EEG, which usually cannot be handled by the Gaussian distribution. Specifically, we give a detailed analysis of the state estimation process to show why the α-stable distribution can suppress impulsive noise. Main results. To justify each component of our model, we compare our method with 4 different models with different settings on 331 hours of collected epileptic EEG data. To show the superiority of our method, we compare it with existing approaches on both our 331-hour dataset and an 892-hour public dataset. The results demonstrate that our method is the most effective in terms of both detection rate and false alarm rate. Significance. This is the first attempt to incorporate the α-stable distribution into a state-space model for noise suppression in seizure detection, and it achieves state-of-the-art performance.
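
    The heavy-tail property can be made concrete with a small comparison; alpha and beta below are hypothetical parameter choices, not the fitted values:

```python
from scipy.stats import levy_stable, norm

# An impulsive artefact 8 noise SDs out is astronomically unlikely under a
# Gaussian noise model, but only moderately rare under an alpha-stable one,
# so it pulls the state estimate far less.
alpha, beta = 1.5, 0.0            # hypothetical stability/skewness parameters
x = 8.0

print("Gaussian tail     P(|X|>8):", 2 * norm.sf(x))
print("alpha-stable tail P(|X|>8):", 2 * levy_stable.sf(x, alpha, beta))
```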

  3. A Bayesian approach to modeling 2D gravity data using polygon states

    NASA Astrophysics Data System (ADS)

    Titus, W. J.; Titus, S.; Davis, J. R.

    2015-12-01

    We present a Bayesian Markov chain Monte Carlo (MCMC) method for the 2D gravity inversion of a localized subsurface object with constant density contrast. Our models have four parameters: the density contrast, the number of vertices in a polygonal approximation of the object, an upper bound on the ratio of the perimeter squared to the area, and the vertices of a polygon container that bounds the object. Reasonable parameter values can be estimated prior to inversion using a forward model and geologic information. In addition, we assume that the field data have a common random uncertainty that lies between two bounds but no systematic uncertainty. Finally, we assume that there is no uncertainty in the spatial locations of the measurement stations. For any set of model parameters, we use MCMC methods to generate an approximate probability distribution of polygons for the object. We then compute various probability distributions for the object, including the variance between the observed and predicted fields (an important quantity in the MCMC method), the area, the center of area, and the occupancy probability (the probability that a spatial point lies within the object). In addition, we compare probabilities of different models using parallel tempering, a technique which also mitigates trapping in local optima that can occur in certain model geometries. We apply our method to several synthetic data sets generated from objects of varying shape and location. We also analyze a natural data set collected across the Rio Grande Gorge Bridge in New Mexico, where the object (i.e. the air below the bridge) is known and the canyon is approximately 2D. Although there are many ways to view results, the occupancy probability proves quite powerful. We also find that the choice of the container is important: large containers should be avoided, because the more closely a container confines the object, the better the predictions match the properties of the object.

  4. Enabling Object Storage via shims for Grid Middleware

    NASA Astrophysics Data System (ADS)

    Cadellin Skipsey, Samuel; De Witt, Shaun; Dewhurst, Alastair; Britton, David; Roy, Gareth; Crooks, David

    2015-12-01

    The Object Store model has quickly become the basis of most commercially successful mass storage infrastructure, backing so-called "Cloud" storage such as Amazon S3, but also underlying the implementation of most parallel distributed storage systems. Many of the assumptions in Object Store design are similar, but not identical, to concepts in the design of Grid Storage Elements, although the requirement for "POSIX-like" filesystem structures on top of SEs makes the disjunction seem larger. As modern Object Stores provide many features that most Grid SEs do not (block level striping, parallel access, automatic file repair, etc.), it is of interest to see how easily we can provide interfaces to typical Object Stores via plugins and shims for Grid tools, and how well experiments can adapt their data models to them. We present evaluation of, and first-deployment experiences with, (for example) Xrootd-Ceph interfaces for direct object-store access, as part of an initiative within GridPP[1] hosted at RAL. Additionally, we discuss the tradeoffs and experience of developing plugins for the currently-popular Ceph parallel distributed filesystem for the GFAL2 access layer, at Glasgow.

  5. Body posture differentially impacts on visual attention towards tool, graspable, and non-graspable objects.

    PubMed

    Ambrosini, Ettore; Costantini, Marcello

    2017-02-01

    Viewed objects have been shown to afford suitable actions, even in the absence of any intention to act. However, little is known as to whether gaze behavior (i.e., the way we simply look at objects) is sensitive to action afforded by the seen object and how our actual motor possibilities affect this behavior. We recorded participants' eye movements during the observation of tools, graspable and ungraspable objects, while their hands were either freely resting on the table or tied behind their back. The effects of the observed object and hand posture on gaze behavior were measured by comparing the actual fixation distribution with that predicted by 2 widely supported models of visual attention, namely the Graph-Based Visual Saliency and the Adaptive Whitening Salience models. Results showed that saliency models did not accurately predict participants' fixation distributions for tools. Indeed, participants mostly fixated the action-related, functional part of the tools, regardless of its visual saliency. Critically, the restriction of the participants' action possibility led to a significant reduction of this effect and significantly improved the model prediction of the participants' gaze behavior. We suggest, first, that action-relevant object information at least in part guides gaze behavior. Second, postural information interacts with visual information in the generation of priority maps of fixation behavior. We support the view that the kind of information we access from the environment is constrained by our readiness to act.

  6. Multi-objective optimization to predict muscle tensions in a pinch function using genetic algorithm

    NASA Astrophysics Data System (ADS)

    Bensghaier, Amani; Romdhane, Lotfi; Benouezdou, Fethi

    2012-03-01

    This work is focused on the determination of the thumb and the index finger muscle tensions in a tip pinch task. A biomechanical model of the musculoskeletal system of the thumb and the index finger is developed. Due to the assumptions made in carrying out the biomechanical model, the formulated force analysis problem is indeterminate leading to an infinite number of solutions. Thus, constrained single and multi-objective optimization methodologies are used in order to explore the muscular redundancy and to predict optimal muscle tension distributions. Various models are investigated using the optimization process. The basic criteria to minimize are the sum of the muscle stresses, the sum of individual muscle tensions and the maximum muscle stress. The multi-objective optimization is solved using a Pareto genetic algorithm to obtain non-dominated solutions, defined as the set of optimal distributions of muscle tensions. The results show the advantage of the multi-objective formulation over the single objective one. The obtained solutions are compared to those available in the literature demonstrating the effectiveness of our approach in the analysis of the fingers musculoskeletal systems when predicting muscle tensions.

  7. Object Management Group object transaction service based on an X/Open and International Organization for Standardization open systems interconnection transaction processing kernel

    NASA Astrophysics Data System (ADS)

    Liang, J.; Sédillot, S.; Traverson, B.

    1997-09-01

    This paper addresses federation of a transactional object standard - Object Management Group (OMG) object transaction service (OTS) - with the X/Open distributed transaction processing (DTP) model and International Organization for Standardization (ISO) open systems interconnection (OSI) transaction processing (TP) communication protocol. The two-phase commit propagation rules within a distributed transaction tree are similar in the X/Open, ISO and OMG models. Building an OTS on an OSI TP protocol machine is possible because the two specifications are somewhat complementary: OTS defines a set of external interfaces without a specific internal protocol machine, while OSI TP specifies an internal protocol machine without any application programming interface. Given these observations, and having already implemented an X/Open two-phase commit transaction toolkit based on an OSI TP protocol machine, we analyse the feasibility of using this implementation as a transaction service provider for OMG interfaces. Based on the favourable result of this feasibility study, we are implementing an OTS-compliant system which, by building on the extensibility and openness strengths of OSI TP, is able to provide interoperability between the X/Open DTP and OMG OTS models.
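
    The shared two-phase commit propagation rule can be sketched generically (a minimal illustration of the voting and decision phases, not the OTS/OSI TP protocol machines themselves):

```python
class Participant:
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit = name, can_commit

    def prepare(self):
        return self.can_commit          # phase 1: vote

    def commit(self):
        print(f"{self.name}: committed")

    def rollback(self):
        print(f"{self.name}: rolled back")

def two_phase_commit(participants):
    if all(p.prepare() for p in participants):   # phase 1: collect votes
        for p in participants:                   # phase 2: broadcast decision
            p.commit()
        return True
    for p in participants:                       # any "no" vote aborts all
        p.rollback()
    return False

two_phase_commit([Participant("rm1"), Participant("rm2", can_commit=False)])
```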

  8. Free-form geometric modeling by integrating parametric and implicit PDEs.

    PubMed

    Du, Haixia; Qin, Hong

    2007-01-01

    Parametric PDE techniques, which use partial differential equations (PDEs) defined over a 2D or 3D parametric domain to model graphical objects and processes, can unify geometric attributes and functional constraints of the models. PDEs can also model implicit shapes defined by level sets of scalar intensity fields. In this paper, we present an approach that integrates parametric and implicit trivariate PDEs to define geometric solid models containing both geometric information and intensity distribution subject to flexible boundary conditions. The integrated formulation of second-order or fourth-order elliptic PDEs permits designers to manipulate PDE objects of complex geometry and/or arbitrary topology through direct sculpting and free-form modeling. We developed a PDE-based geometric modeling system for shape design and manipulation of PDE objects. The integration of implicit PDEs with parametric geometry offers more general and arbitrary shape blending and free-form modeling for objects with intensity attributes than pure geometric models.
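
    For the second-order elliptic case, the core computation reduces to relaxing interior values against the boundary conditions; a minimal Laplace-equation sketch (the fourth-order PDEs used for free-form design follow the same pattern with a wider stencil):

```python
import numpy as np

n = 50
u = np.zeros((n, n))
u[0, :], u[-1, :], u[:, 0], u[:, -1] = 1.0, 0.0, 0.5, 0.5   # boundary curves

for _ in range(5000):
    # Jacobi relaxation: each interior value moves toward the average of its
    # four neighbours until the surface defined by the boundaries settles.
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] +
                            u[1:-1, :-2] + u[1:-1, 2:])

print(u[n // 2, n // 2])   # the interior interpolates the boundary smoothly
```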

  9. Assessment of Forest Conservation Value Using a Species Distribution Model and Object-based Image Analysis

    NASA Astrophysics Data System (ADS)

    Jin, Y.; Lee, D. K.; Jeong, S. G.

    2015-12-01

    The ecological and social values of forests have recently been highlighted. Assessments of the biodiversity of forests, as well as their other ecological values, play an important role in regional and national conservation planning. The preservation of habitats is linked to the protection of biodiversity. For mapping habitats, a species distribution model (SDM) is used to predict suitable habitat for significant species, and such distribution modeling is increasingly being used in conservation science. However, pixel-based analysis does not contain contextual or topological information. In order to provide more accurate habitat predictions, a continuous-field view that better approximates the real world is required. Here we analyze and compare, at different scales, habitats of the yellow-throated marten (Martes flavigula), a top predator and umbrella species in South Korea. The object scale, where an object is a group of pixels with similar spatial and spectral characteristics, and the pixel scale were used for the SDM. Our analysis using the SDM at different scales suggests that object-scale analysis provides a superior representation of continuous habitat, and will thus be useful in forest conservation planning as well as for species habitat monitoring.

  10. Algorithms and Object-Oriented Software for Distributed Physics-Based Modeling

    NASA Technical Reports Server (NTRS)

    Kenton, Marc A.

    2001-01-01

    The project seeks to develop methods to more efficiently simulate aerospace vehicles. The goals are to reduce model development time, increase accuracy (e.g.,by allowing the integration of multidisciplinary models), facilitate collaboration by geographically- distributed groups of engineers, support uncertainty analysis and optimization, reduce hardware costs, and increase execution speeds. These problems are the subject of considerable contemporary research (e.g., Biedron et al. 1999; Heath and Dick, 2000).

  11. On numerical model of time-dependent processes in three-dimensional porous heat-releasing objects

    NASA Astrophysics Data System (ADS)

    Lutsenko, Nickolay A.

    2016-10-01

    The gas flows in the gravity field through porous objects with heat-releasing sources are investigated when the self-regulation of the flow rate of the gas passing through the porous object takes place. Such objects can appear after various natural or man-made disasters (like the exploded unit of the Chernobyl NPP). The mathematical model and the original numerical method, based on a combination of explicit and implicit finite difference schemes, are developed for investigating the time-dependent processes in 3D porous energy-releasing objects. The advantage of the numerical model is its ability to describe unsteady processes under both natural convection and forced filtration. The gas cooling of 3D porous objects with different distribution of heat sources is studied using computational experiment.
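
    The explicit building block of such a combined scheme can be sketched in one dimension with made-up coefficients (the paper's model is three-dimensional and couples gas filtration to the heat balance):

```python
import numpy as np

nx, dx, dt = 100, 0.01, 1e-5
kappa = 1.0                                  # assumed effective diffusivity
T = np.zeros(nx)                             # temperature along a porous column
source = np.zeros(nx)
source[40:60] = 100.0                        # localized heat-releasing zone

for _ in range(2000):
    # Forward-Euler step of the heat equation (dt*kappa/dx**2 = 0.1, stable).
    lap = (T[:-2] - 2 * T[1:-1] + T[2:]) / dx**2
    T[1:-1] += dt * (kappa * lap + source[1:-1])
    T[0] = T[-1] = 0.0                       # boundaries held at ambient

print(T.max())
```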

  12. Overt attention in natural scenes: objects dominate features.

    PubMed

    Stoll, Josef; Thrun, Michael; Nuthmann, Antje; Einhäuser, Wolfgang

    2015-02-01

    Whether overt attention in natural scenes is guided by object content or by low-level stimulus features has become a matter of intense debate. Experimental evidence seemed to indicate that once object locations in a scene are known, salience models provide little extra explanatory power. This approach has recently been criticized for using inadequate models of early salience; and indeed, state-of-the-art salience models outperform trivial object-based models that assume a uniform distribution of fixations on objects. Here we propose to use object-based models that take a preferred viewing location (PVL) close to the centre of objects into account. In experiment 1, we demonstrate that, when including this comparably subtle modification, object-based models again are at par with state-of-the-art salience models in predicting fixations in natural scenes. One possible interpretation of these results is that objects rather than early salience dominate attentional guidance. In this view, early-salience models predict fixations through the correlation of their features with object locations. To test this hypothesis directly, in two additional experiments we reduced low-level salience in image areas of high object content. For these modified stimuli, the object-based model predicted fixations significantly better than early salience. This finding held in an object-naming task (experiment 2) and a free-viewing task (experiment 3). These results provide further evidence for object-based fixation selection--and by inference object-based attentional guidance--in natural scenes. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  13. Combining satellite data and appropriate objective functions for improved spatial pattern performance of a distributed hydrologic model

    NASA Astrophysics Data System (ADS)

    Demirel, Mehmet C.; Mai, Juliane; Mendiguren, Gorka; Koch, Julian; Samaniego, Luis; Stisen, Simon

    2018-02-01

    Satellite-based earth observations offer great opportunities to improve spatial model predictions by means of spatial-pattern-oriented model evaluations. In this study, observed spatial patterns of actual evapotranspiration (AET) are utilised for spatial model calibration tailored to target the pattern performance of the model. The proposed calibration framework combines temporally aggregated observed spatial patterns with a new spatial performance metric and a flexible spatial parameterisation scheme. The mesoscale hydrologic model (mHM) is used to simulate streamflow and AET and has been selected due to its soil parameter distribution approach based on pedo-transfer functions and its built-in multi-scale parameter regionalisation. In addition, two new spatial parameter distribution options have been incorporated in the model in order to increase the flexibility of the root fraction coefficient and potential evapotranspiration correction parameterisations, based on soil type and vegetation density. These parameterisations are utilised as they are most relevant for the AET patterns simulated by the hydrologic model. Due to the fundamental challenges encountered when evaluating spatial pattern performance using standard metrics, we developed a simple but highly discriminative spatial metric, i.e. one comprised of three easily interpretable components measuring co-location, variation and distribution of the spatial data. The study shows that with flexible spatial model parameterisation used in combination with the appropriate objective functions, the simulated spatial patterns of actual evapotranspiration become substantially more similar to the satellite-based estimates. Overall, 26 parameters are identified for calibration through a sequential screening approach based on a combination of streamflow and spatial pattern metrics. The robustness of the calibrations is tested using an ensemble of nine calibrations based on different seed numbers using the shuffled complex evolution optimiser. The calibration results reveal a limited trade-off between streamflow dynamics and spatial patterns, illustrating the benefit of combining separate observation types and objective functions. At the same time, the simulated spatial patterns of AET improved significantly when an objective function based on the observed AET patterns and the novel spatial performance metric was included, compared to traditional streamflow-only calibration. Since the overall water balance is usually a crucial goal in hydrologic modelling, spatial-pattern-oriented optimisation should always be accompanied by traditional discharge measurements. In such a multi-objective framework, the current study promotes the use of a novel bias-insensitive spatial pattern metric, which exploits the key information contained in the observed patterns while allowing the water balance to be informed by discharge observations.
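
    A hedged sketch of a three-component metric of this general kind (co-location, variation, distribution); the paper's exact formulation may differ, but the bias insensitivity comes from using correlation, relative variability and z-scored histograms rather than absolute values:

```python
import numpy as np

def spatial_pattern_efficiency(obs, sim, bins=100):
    a = np.corrcoef(obs.ravel(), sim.ravel())[0, 1]                  # co-location
    b = (np.std(sim) / np.mean(sim)) / (np.std(obs) / np.mean(obs))  # variation
    zo = (obs - obs.mean()) / obs.std()      # z-scoring removes the bias before
    zs = (sim - sim.mean()) / sim.std()      # comparing the distributions
    ho, edges = np.histogram(zo, bins=bins)
    hs, _ = np.histogram(zs, bins=edges)
    c = np.minimum(ho, hs).sum() / ho.sum()  # histogram overlap
    return 1.0 - np.sqrt((a - 1) ** 2 + (b - 1) ** 2 + (c - 1) ** 2)

rng = np.random.default_rng(0)
aet_obs = rng.gamma(4.0, 0.5, size=(100, 100))
aet_sim = aet_obs * 1.3 + rng.normal(0, 0.2, size=(100, 100))  # biased but similar
print(spatial_pattern_efficiency(aet_obs, aet_sim))            # stays high despite bias
```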

  14. Estimating migratory fish distribution from altitude and basin area: a case study in a large Neotropical river

    Treesearch

    Jose Ricardo Barradas; Lucas G. Silva; Bret C. Harvey; Nelson F. Fontoura

    2012-01-01

    1. The objective of this study was to identify longitudinal distribution patterns of large migratory fish species in the Uruguay River basin, southern Brazil, and construct statistical distribution models for Salminus brasiliensis, Prochilodus lineatus, Leporinus obtusidens and Pseudoplatystoma corruscans. 2. The sampling programme resulted in 202 interviews with old...

  15. Distributed Architecture for the Object-Oriented Method for Interoperability

    DTIC Science & Technology

    2003-03-01

    Collaborative Environment. ......................121 Figure V-2. Distributed OOMI And The Collaboration Centric Paradigm. .....................123 Figure V...of systems are formed into a system federation to resolve differences in modeling. An OOMI Integrated Development Environment (OOMI IDE) lends ...space for the creation of possible distributed systems is partitioned into User Centric systems, Processing/Storage Centric systems, Implementation

  16. A System-Level Throughput Model for Quantum Key Distribution

    DTIC Science & Technology

    2015-09-17

    object. In quantum entanglement, the physical properties of particle pairs or groups of particles are correlated – the quantum state of each particle ... One-Time Pad Algorithm ... Figure 2. Photon Polarization [19 ... Poisson distribution for multi-photon probability (29 ...
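
    The last fragment refers to the standard weak-coherent-pulse calculation: for a pulse of mean photon number mu, the multi-photon fraction follows directly from the Poisson distribution (mu = 0.1 is an illustrative choice, not a value from the report):

```python
import numpy as np

mu = 0.1                                 # mean photon number per pulse
p_zero = np.exp(-mu)                     # P(n = 0)
p_one = mu * np.exp(-mu)                 # P(n = 1)
p_multi = 1.0 - p_zero - p_one           # P(n >= 2): exploitable by an eavesdropper

print(f"P(n>=2) = {p_multi:.4e}")
print(f"multi-photon fraction of non-empty pulses = {p_multi / (1 - p_zero):.4%}")
```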

  17. Direct statistical modeling and its implications for predictive mapping in mining exploration

    NASA Astrophysics Data System (ADS)

    Sterligov, Boris; Gumiaux, Charles; Barbanson, Luc; Chen, Yan; Cassard, Daniel; Cherkasov, Sergey; Zolotaya, Ludmila

    2010-05-01

    Recent advances in geosciences make more and more multidisciplinary data available for mining exploration. This has allowed the development of methodologies for computing forecast ore maps from the statistical combination of such different input parameters, all based on inverse problem theory. Numerous statistical methods (e.g. the algebraic method, weights of evidence, the Siris method, etc.), with varying degrees of complexity in their development and implementation, have been proposed and/or adapted for ore geology purposes. In the literature, such approaches are often presented through applications to natural examples, and the results obtained can present specificities due to local characteristics. Moreover, though crucial for statistical computations, the "minimum requirements" on input parameters (minimum number of data points, spatial distribution of objects, etc.) are often only poorly expressed. Problems therefore often arise when one has to choose between one method and another for a specific question. In this study, a direct statistical modeling approach is developed in order to (i) evaluate the constraints on the input parameters and (ii) test the validity of different existing inversion methods. The approach focuses particularly on the analysis of spatial relationships between the locations of points and various objects (e.g. polygons and/or polylines), which is particularly well adapted to constraining the influence of intrusive bodies - such as a granite - and faults or ductile shear zones on the spatial location of ore deposits (point objects). The method is designed to ensure non-dimensionality with respect to scale. In this approach, both the spatial distribution and the topology of objects (polygons and polylines) can be parametrized by the user (e.g. density of objects, length, surface, orientation, clustering). The distance of points with respect to a given type of object (polygons or polylines) is then given by a probability distribution. The location of points is computed assuming either independence or different grades of dependency between the two probability distributions. The results show that (i) the mean polygon surface, the mean polyline length, the number of objects and their clustering are critical, and (ii) the validity of the different tested inversion methods depends strongly on the relative importance of, and the dependency between, the parameters used. In addition, this combined approach of direct and inverse modeling offers an opportunity to test the robustness of inferred point-distribution laws with respect to the quality of the input data set.

  18. On the effect of model parameters on forecast objects

    NASA Astrophysics Data System (ADS)

    Marzban, Caren; Jones, Corinne; Li, Ning; Sandgathe, Scott

    2018-04-01

    Many physics-based numerical models produce a gridded, spatial field of forecasts, e.g., a temperature map. The field for some quantities generally consists of spatially coherent and disconnected objects. Such objects arise in many problems, including precipitation forecasts in atmospheric models, eddy currents in ocean models, and models of forest fires. Certain features of these objects (e.g., location, size, intensity, and shape) are generally of interest. Here, a methodology is developed for assessing the impact of model parameters on the features of forecast objects. The main ingredients of the methodology include the use of (1) Latin hypercube sampling for varying the values of the model parameters, (2) statistical clustering algorithms for identifying objects, (3) multivariate multiple regression for assessing the impact of multiple model parameters on the distribution (across the forecast domain) of object features, and (4) methods for reducing the number of hypothesis tests and controlling the resulting errors. The final output of the methodology is a series of box plots and confidence intervals that visually display the sensitivities. The methodology is demonstrated on precipitation forecasts from a mesoscale numerical weather prediction model.
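
    Step (1) can be sketched with SciPy's quasi-Monte Carlo module; the parameter names and ranges below are illustrative only:

```python
from scipy.stats import qmc

# Latin hypercube sample of three hypothetical model parameters: each range
# is stratified so the 20 runs jointly cover parameter space efficiently.
sampler = qmc.LatinHypercube(d=3, seed=0)
unit_sample = sampler.random(n=20)                 # 20 model runs, 3 parameters

lower = [0.1, 1e-4, 0.5]   # e.g. entrainment, autoconversion, mixing length
upper = [0.9, 1e-3, 2.0]   # (names and ranges are illustrative assumptions)
runs = qmc.scale(unit_sample, lower, upper)
print(runs[:3])            # each row parameterizes one forecast run
```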

  19. A Rational Model of the Effects of Distributional Information on Feature Learning

    ERIC Educational Resources Information Center

    Austerweil, Joseph L.; Griffiths, Thomas L.

    2011-01-01

    Most psychological theories treat the features of objects as being fixed and immediately available to observers. However, novel objects have an infinite array of properties that could potentially be encoded as features, raising the question of how people learn which features to use in representing those objects. We focus on the effects of…

  20. Selecting the right statistical model for analysis of insect count data by using information theoretic measures.

    PubMed

    Sileshi, G

    2006-10-01

    Researchers and regulatory agencies often make statistical inferences from insect count data using modelling approaches that assume homogeneous variance. Such models do not allow for formal appraisal of variability which in its different forms is the subject of interest in ecology. Therefore, the objectives of this paper were to (i) compare models suitable for handling variance heterogeneity and (ii) select optimal models to ensure valid statistical inferences from insect count data. The log-normal, standard Poisson, Poisson corrected for overdispersion, zero-inflated Poisson, the negative binomial distribution and zero-inflated negative binomial models were compared using six count datasets on foliage-dwelling insects and five families of soil-dwelling insects. Akaike's and Schwarz Bayesian information criteria were used for comparing the various models. Over 50% of the counts were zeros even in locally abundant species such as Ootheca bennigseni Weise, Mesoplatys ochroptera Stål and Diaecoderus spp. The Poisson model after correction for overdispersion and the standard negative binomial distribution model provided better description of the probability distribution of seven out of the 11 insects than the log-normal, standard Poisson, zero-inflated Poisson or zero-inflated negative binomial models. It is concluded that excess zeros and variance heterogeneity are common data phenomena in insect counts. If not properly modelled, these properties can invalidate the normal distribution assumptions resulting in biased estimation of ecological effects and jeopardizing the integrity of the scientific inferences. Therefore, it is recommended that statistical models appropriate for handling these data properties be selected using objective criteria to ensure efficient statistical inference.
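
    The model-selection step can be sketched as follows, with simulated overdispersed counts standing in for the insect data:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
lam = rng.gamma(shape=0.8, scale=5.0, size=n)   # heterogeneity -> overdispersion
y = rng.poisson(lam)                            # marginally negative binomial
X = np.ones((n, 1))                             # intercept-only model

poisson_fit = sm.Poisson(y, X).fit(disp=0)
negbin_fit = sm.NegativeBinomial(y, X).fit(disp=0)

print(f"Poisson AIC: {poisson_fit.aic:.1f}  BIC: {poisson_fit.bic:.1f}")
print(f"NegBin  AIC: {negbin_fit.aic:.1f}  BIC: {negbin_fit.bic:.1f}")
# The lower AIC/BIC (here the negative binomial) indicates the better model.
```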

  1. Time-Series INSAR: An Integer Least-Squares Approach For Distributed Scatterers

    NASA Astrophysics Data System (ADS)

    Samiei-Esfahany, Sami; Hanssen, Ramon F.

    2012-01-01

    The objective of this research is to extend the geodetic mathematical model which was developed for persistent scatterers to a model which can exploit distributed scatterers (DS). The main focus is on the integer least-squares framework, and the main challenge is to include the decorrelation effect in the mathematical model. In order to adapt the integer least-squares mathematical model for DS we altered the model from a single-master to a multi-master configuration and introduced the decorrelation effect stochastically. This effect is described in our model by a full covariance matrix. We propose to derive this covariance matrix by numerical integration of the (joint) probability distribution function (PDF) of interferometric phases. This PDF is a function of coherence values and can be directly computed from radar data. We show that the use of this model can improve the performance of temporal phase unwrapping of distributed scatterers.

  2. From sixty-two interviews on 'the worst and the best episode of your life'. Relationships between internal working models and a grammatical scale of subject-object affective connections.

    PubMed

    Seganti, A; Carnevale, G; Mucelli, R; Solano, L; Target, M

    2000-06-01

    The authors address the issue of inferring unconscious internal working models of interaction through language. After reviewing Main's seminal work of linguistic assessment through the 'adult attachment interview', they stress the idea of adults' internal working models (IWMs) as information-processing devices, which give moment-to-moment sensory orientation in the face of any past or present, animate or inanimate object. They propose that a selective perception of the objects could match expected with actual influence of objects on the subject's self, through very simple 'parallel-processed' categories of internal objects. They further hypothesise that the isomorphism between internal working models of interaction and grammatical connections between subjects and objects within a clause could be a key to tracking positive and negative images of self and other during discourse. An experiment is reported applying the authors' 'scale of subject/object affective connection' to the narratives of sixty-two subjects asked to write about the 'worst' and 'best' episodes of their lives. Participants had previously been classified using Hazan & Shaver's self-reported 'attachment types' (avoidant, anxious and secure) categorising individuals' general expectations in relation to others. The findings were that the subject/object distribution of positive and negative experience, through verbs defined for this purpose as either performative or state verbs, did significantly differ between groups. In addition, different groups tended, during the best episodes, significantly to invert the trend of positive/negative subject/object distribution shown during the worst episode. Results are discussed in terms of a psychoanalytic theory of improvement through co-operative elaboration of negative relational issues.

  3. A service-oriented data access control model

    NASA Astrophysics Data System (ADS)

    Meng, Wei; Li, Fengmin; Pan, Juchen; Song, Song; Bian, Jiali

    2017-01-01

    The development of mobile computing, cloud computing and distributed computing meets growing individual service needs. Faced with complex application systems, ensuring real-time, dynamic, and fine-grained data access control is an urgent problem. By analyzing common data access control models, and on the basis of the mandatory access control model, the paper proposes a service-oriented access control model. By regarding system services as subjects and database data as objects, the model defines access levels and access identifications for subject and object, and ensures that system services access databases securely.
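
    A minimal sketch of the subject/object level check described (level names and rule details are illustrative assumptions, not the paper's specification):

```python
# Mandatory-access-control idea: a service (subject) may read a data object
# only if its access level dominates the object's level and the object is
# within its access identification.
LEVELS = {"public": 0, "internal": 1, "confidential": 2, "secret": 3}

def may_read(service_level, service_ids, object_level, object_id):
    """Bell-LaPadula-style 'no read up' plus an identification check."""
    return (LEVELS[service_level] >= LEVELS[object_level]
            and object_id in service_ids)

print(may_read("confidential", {"orders", "billing"}, "internal", "orders"))  # True
print(may_read("internal", {"orders"}, "secret", "orders"))                   # False
```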

  4. A possible divot in the Kuiper belt's scattered-object size distribution

    NASA Astrophysics Data System (ADS)

    Shankman, C.; Kavelaars, J.; Gladman, B.; Petit, J.

    2014-07-01

    The formation and evolution history of the Solar System, while not directly accessible, has measurable signatures in the present-day size distributions of the Trans-Neptunian Object (TNO) populations. The form of the size distribution is modelled as a power law, with number going as size to some characteristic slope. Recent works have shown that a single power law does not match the observations across all sizes; the power law breaks to a different form [1, 2, 3]. The large-size objects record the accretion history, while the small-size objects record the collision history. The changes of size-distribution shape and slope as one moves from 'large' to 'medium' to 'small' KBOs are the signature needed to constrain the formation and collision history of the Solar System. The scattering TNOs are those TNOs undergoing strong (scattering) interactions with Neptune. The scattering objects can come to pericentre in the giant planet region. This close-in pericentre passage allows for the observation of smaller objects, and thus for the constraint of the small-size end of the size distribution. Our recent analysis of the Canada France Ecliptic Plane Survey's (CFEPS) scattering objects revealed an exciting potential form for the scattering object size distribution - a divot (see Figure). Our divot (a sharp drop in the number of objects per unit size which then returns at a potentially different slope) matches our observations well and can simultaneously explain observed features in other inclined (so-called "hot") Kuiper Belt populations. In this scenario all of the hot populations would share the same source and have been implanted in the outer solar system through scattering processes. If confirmed, our divot would represent a new exciting paradigm for the formation history of the Kuiper Belt. Here we present the results of an extension of our previous work to include a new, deeper Kuiper Belt survey. With the addition of two new faint scattering objects from this survey, in tandem with the full characterization of the survey's biases (acting like non-detection limits), we better constrain the form of the scattering-object size distribution.
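
    A hedged sketch of what a divot distribution looks like generatively; the slopes, divot location and contrast below are illustrative values, not the survey's fit:

```python
import numpy as np

alpha_bright, alpha_faint = 0.9, 0.5     # logarithmic slopes (illustrative)
h_divot, contrast = 9.0, 6.0             # divot magnitude and depth (illustrative)

h = np.linspace(5.0, 12.0, 1000)
# Differential number density: exponential in H, dropping by 'contrast' at the
# divot and continuing with a shallower slope faintward of it.
density = np.where(h <= h_divot,
                   10 ** (alpha_bright * h),
                   10 ** (alpha_bright * h_divot) / contrast
                   * 10 ** (alpha_faint * (h - h_divot)))

rng = np.random.default_rng(0)
sample = rng.choice(h, size=100_000, p=density / density.sum())
# 'sample' can now be pushed through a survey simulator's flux biases.
```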

  5. Application of the Conway-Maxwell-Poisson generalized linear model for analyzing motor vehicle crashes.

    PubMed

    Lord, Dominique; Guikema, Seth D; Geedipally, Srinivas Reddy

    2008-05-01

    This paper documents the application of the Conway-Maxwell-Poisson (COM-Poisson) generalized linear model (GLM) for modeling motor vehicle crashes. The COM-Poisson distribution, originally developed in 1962, has recently been re-introduced by statisticians for analyzing count data subject to over- and under-dispersion. This innovative distribution is an extension of the Poisson distribution. The objectives of this study were to evaluate the application of the COM-Poisson GLM for analyzing motor vehicle crashes and to compare the results with the traditional negative binomial (NB) model. The comparison analysis was carried out using the most common functional forms employed by transportation safety analysts, which link crashes to the entering flows at intersections or on segments. To accomplish the objectives of the study, several NB and COM-Poisson GLMs were developed and compared using two datasets. The first dataset contained crash data collected at signalized four-legged intersections in Toronto, Ont. The second dataset included data collected for rural four-lane divided and undivided highways in Texas. Several methods were used to assess the statistical fit and predictive performance of the models. The results of this study show that COM-Poisson GLMs perform as well as NB models in terms of goodness-of-fit statistics and predictive performance. Given that the COM-Poisson distribution can also handle under-dispersed data (which the NB distribution cannot, or only with convergence difficulties), and that under-dispersion has sometimes been observed in crash databases, the COM-Poisson GLM offers a better alternative to the NB model for modeling motor vehicle crashes, especially given the important limitations recently documented in the safety literature about the latter type of model.
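
    The distribution itself is simple to compute. A minimal sketch of the COM-Poisson probability mass function follows, showing how the extra parameter nu moves the variance above or below the mean (nu = 1 recovers the Poisson distribution); this illustrates the distribution only, not the paper's GLM fitting, and the parameter values are arbitrary.

    ```python
    import math

    def com_poisson_pmf(k_max, lam, nu):
        """Normalized COM-Poisson weights lam**k / (k!)**nu.

        nu = 1 is Poisson; nu < 1 gives over-dispersion, nu > 1
        under-dispersion. k_max truncates the infinite normalizing series.
        """
        w = [lam ** k / math.factorial(k) ** nu for k in range(k_max + 1)]
        z = sum(w)
        return [wk / z for wk in w]

    for nu in (0.5, 1.0, 2.0):
        p = com_poisson_pmf(60, lam=3.0, nu=nu)
        mean = sum(k * pk for k, pk in enumerate(p))
        var = sum((k - mean) ** 2 * pk for k, pk in enumerate(p))
        print(f"nu={nu}: mean={mean:.2f}, variance={var:.2f}")
    ```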

  6. A Neural Network Architecture For Rapid Model Indexing In Computer Vision Systems

    NASA Astrophysics Data System (ADS)

    Pawlicki, Ted

    1988-03-01

    Models of objects stored in memory have been shown to be useful for guiding the processing of computer vision systems. A major consideration in such systems, however, is how stored models are initially accessed and indexed by the system. As the number of stored models increases, the time required to search memory for the correct model becomes high. Parallel, distributed, connectionist neural networks have been shown to have appealing content-addressable memory properties. This paper discusses an architecture for efficient storage and reference of model memories stored as stable patterns of activity in a parallel, distributed, connectionist neural network. The emergent properties of content addressability and resistance to noise are exploited to perform indexing of the appropriate object-centered model from image-centered primitives. The system consists of three network modules, each of which represents information relative to a different frame of reference. The model memory network is a large state-space vector where fields in the vector correspond to ordered component objects and relative, object-based spatial relationships between the component objects. The component assertion network represents evidence about the existence of object primitives in the input image. It establishes local frames of reference for object primitives relative to the image-based frame of reference. The spatial relationship constraint network is an intermediate representation which enables the association between the object-based and the image-based frames of reference. This intermediate level represents information about possible object orderings and establishes relative spatial relationships from the image-based information in the component assertion network below. It is also constrained by the lawful object orderings in the model memory network above. The system design is consistent with current psychological theories of recognition-by-components. It also seems to support Marr's notions of hierarchical indexing (i.e., the specificity, adjunct, and parent indices). It supports the notion that multiple canonical views of an object may have to be stored in memory to enable its efficient identification. The use of variable fields in the state-space vectors appears to keep the number of required nodes in the network down to a tractable number while imposing a semantic value on different areas of the state space. This semantic imposition supports an interface between the analogical aspects of neural networks and the propositional paradigms of symbolic processing.
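
    The content-addressable behaviour the paper relies on can be illustrated with a basic Hopfield-style network (a standard textbook construction, not the author's three-module architecture): stored patterns become stable states, and a noisy cue settles toward the nearest stored model. The pattern sizes and noise level below are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def train_hopfield(patterns):
        # Hebbian outer-product rule with zero self-connections
        n = patterns.shape[1]
        w = sum(np.outer(p, p) for p in patterns).astype(float)
        np.fill_diagonal(w, 0.0)
        return w / n

    def recall(w, state, steps=10):
        # synchronous updates of bipolar (+1/-1) units
        for _ in range(steps):
            state = np.where(w @ state >= 0, 1, -1)
        return state

    patterns = rng.choice([-1, 1], size=(3, 64))   # three stored "models"
    w = train_hopfield(patterns)

    noisy = patterns[0].copy()
    flip = rng.choice(64, size=8, replace=False)   # corrupt 8 of 64 units
    noisy[flip] *= -1
    restored = recall(w, noisy)
    # overlap of 64 means the stored model was recovered exactly
    print("overlap with stored model:", int(restored @ patterns[0]))
    ```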

  7. Time-dependent inhomogeneous jet models for BL Lac objects

    NASA Technical Reports Server (NTRS)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-01-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  8. Time-dependent inhomogeneous jet models for BL Lac objects

    NASA Astrophysics Data System (ADS)

    Marlowe, A. T.; Urry, C. M.; George, I. M.

    1992-05-01

    Relativistic beaming can explain many of the observed properties of BL Lac objects (e.g., rapid variability, high polarization, etc.). In particular, the broadband radio through X-ray spectra are well modeled by synchrotron-self Compton emission from an inhomogeneous relativistic jet. We have done a uniform analysis on several BL Lac objects using a simple but plausible inhomogeneous jet model. For all objects, we found that the assumed power-law distribution of the magnetic field and the electron density can be adjusted to match the observed BL Lac spectrum. While such models are typically unconstrained, consideration of spectral variability strongly restricts the allowed parameters, although to date the sampling has generally been too sparse to constrain the current models effectively. We investigate the time evolution of the inhomogeneous jet model for a simple perturbation propagating along the jet. The implications of this time evolution model and its relevance to observed data are discussed.

  9. Distributed service-based approach for sensor data fusion in IoT environments.

    PubMed

    Rodríguez-Valenzuela, Sandra; Holgado-Terriza, Juan A; Gutiérrez-Guerrero, José M; Muros-Cobos, Jesús L

    2014-10-15

    The Internet of Things (IoT) enables the communication among smart objects promoting the pervasive presence around us of a variety of things or objects that are able to interact and cooperate jointly to reach common goals. IoT objects can obtain data from their context, such as the home, office, industry or body. These data can be combined to obtain new and more complex information applying data fusion processes. However, to apply data fusion algorithms in IoT environments, the full system must deal with distributed nodes and decentralized communication, and must support scalability and node dynamicity, among other restrictions. In this paper, a novel method to manage data acquisition and fusion based on a distributed service composition model is presented, improving the data treatment in IoT pervasive environments.
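
    A minimal sketch of the idea of composing distributed sensor services into one fusion pipeline follows. The service names, the registration interface, and the simple averaging fusion are invented for illustration; they are not the paper's composition model.

    ```python
    from statistics import mean
    from typing import Callable, Dict, List

    # each "service" is a callable returning one reading from an IoT node
    SensorService = Callable[[], float]

    class FusionComposer:
        """Compose independently registered sensor services into one source."""

        def __init__(self) -> None:
            self.services: Dict[str, SensorService] = {}

        def register(self, name: str, service: SensorService) -> None:
            # nodes can join (or leave) at runtime, mirroring node dynamicity
            self.services[name] = service

        def fuse(self) -> float:
            readings: List[float] = []
            for name, service in self.services.items():
                try:
                    readings.append(service())
                except Exception:
                    pass  # a failed node must not break the composition
            return mean(readings)

    composer = FusionComposer()
    composer.register("temp-livingroom", lambda: 21.4)
    composer.register("temp-kitchen", lambda: 23.1)
    composer.register("temp-bedroom", lambda: 20.2)
    print(f"fused temperature: {composer.fuse():.2f} C")
    ```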

  10. Distributed Service-Based Approach for Sensor Data Fusion in IoT Environments

    PubMed Central

    Rodríguez-Valenzuela, Sandra; Holgado-Terriza, Juan A.; Gutiérrez-Guerrero, José M.; Muros-Cobos, Jesús L.

    2014-01-01

    The Internet of Things (IoT) enables the communication among smart objects promoting the pervasive presence around us of a variety of things or objects that are able to interact and cooperate jointly to reach common goals. IoT objects can obtain data from their context, such as the home, office, industry or body. These data can be combined to obtain new and more complex information applying data fusion processes. However, to apply data fusion algorithms in IoT environments, the full system must deal with distributed nodes and decentralized communication, and must support scalability and node dynamicity, among other restrictions. In this paper, a novel method to manage data acquisition and fusion based on a distributed service composition model is presented, improving the data treatment in IoT pervasive environments. PMID:25320907

  11. The variance of dispersion measure of high-redshift transient objects as a probe of ionized bubble size during reionization

    NASA Astrophysics Data System (ADS)

    Yoshiura, Shintaro; Takahashi, Keitaro

    2018-01-01

    The dispersion measure (DM) of high-redshift (z ≳ 6) transient objects such as fast radio bursts can be a powerful tool to probe the intergalactic medium during the Epoch of Reionization. In this paper, we study the variance of the DMs of objects with the same redshift as a potential probe of the size distribution of ionized bubbles. We calculate the DM variance with a simple model with randomly distributed spherical bubbles. It is found that the DM variance reflects the characteristics of the probability distribution of the bubble size. We find that the variance can be measured precisely enough to obtain the information on the typical size with a few hundred sources at a single redshift.
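
    The bubble-model calculation lends itself to a quick Monte Carlo. The sketch below draws random spherical ionized bubbles along many sightlines and measures how the DM variance responds to the assumed bubble-size scale; all parameters are illustrative and in arbitrary units, not the paper's cosmological setup.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def dm_variance(mean_radius, path_length=1000.0, fill_fraction=0.3,
                    n_sightlines=2000):
        """Variance of DM over sightlines through random spherical bubbles.

        Each sightline accumulates chord lengths through bubbles with
        exponentially distributed radii until the mean ionized column is
        reached; DM is taken proportional to the ionized path. A toy model.
        """
        dms = np.empty(n_sightlines)
        for i in range(n_sightlines):
            ionized = 0.0
            while ionized < fill_fraction * path_length:
                r = rng.exponential(mean_radius)
                # impact parameter b < r gives chord length 2*sqrt(r^2 - b^2)
                b = rng.uniform(0.0, r)
                ionized += 2.0 * np.sqrt(r * r - b * b)
            dms[i] = ionized
        return dms.var()

    for radius in (1.0, 5.0, 20.0):
        print(f"mean bubble radius {radius:5.1f} -> "
              f"DM variance {dm_variance(radius):9.1f}")
    ```

    Larger bubbles produce fewer, chunkier contributions per sightline, so the variance grows with the typical bubble size - the signature the paper proposes to exploit.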

  12. An ant colony optimization heuristic for an integrated production and distribution scheduling problem

    NASA Astrophysics Data System (ADS)

    Chang, Yung-Chia; Li, Vincent C.; Chiang, Chia-Ju

    2014-04-01

    Make-to-order or direct-order business models that require close interaction between production and distribution activities have been adopted by many enterprises in order to be competitive in demanding markets. This article considers an integrated production and distribution scheduling problem in which jobs are first processed by one of the unrelated parallel machines and then distributed to corresponding customers by capacitated vehicles without intermediate inventory. The objective is to find a joint production and distribution schedule so that the weighted sum of total weighted job delivery time and the total distribution cost is minimized. This article presents a mathematical model for describing the problem and designs an algorithm using ant colony optimization. Computational experiments illustrate that the algorithm developed is capable of generating near-optimal solutions. The computational results also demonstrate the value of integrating production and distribution in the model for the studied problem.
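
    To show the mechanics of the ant colony heuristic, here is a stripped-down sketch for the production half of the problem only (assigning jobs to parallel machines to reduce weighted completion time); the vehicle routing, capacities, and delivery costs of the actual model are omitted, and all data are invented.

    ```python
    import random

    random.seed(42)

    JOBS = [(3, 2.0), (5, 1.0), (2, 3.0), (7, 1.5), (4, 2.5)]  # (time, weight)
    N_MACHINES, N_ANTS, N_ITER, RHO = 2, 10, 50, 0.1

    # pheromone[j][m]: desirability of putting job j on machine m
    pheromone = [[1.0] * N_MACHINES for _ in JOBS]

    def cost(assignment):
        # weighted sum of completion times, jobs run in index order per machine
        total, clock = 0.0, [0.0] * N_MACHINES
        for j, m in enumerate(assignment):
            clock[m] += JOBS[j][0]
            total += JOBS[j][1] * clock[m]
        return total

    best, best_cost = None, float("inf")
    for _ in range(N_ITER):
        for _ant in range(N_ANTS):
            assignment = [random.choices(range(N_MACHINES),
                                         weights=pheromone[j])[0]
                          for j in range(len(JOBS))]
            c = cost(assignment)
            if c < best_cost:
                best, best_cost = assignment, c
        # evaporate, then reinforce the best-so-far solution
        for j in range(len(JOBS)):
            for m in range(N_MACHINES):
                pheromone[j][m] *= (1.0 - RHO)
            pheromone[j][best[j]] += 1.0

    print("best assignment:", best, "weighted completion time:", best_cost)
    ```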

  13. The structure of the clouds distributed operating system

    NASA Technical Reports Server (NTRS)

    Dasgupta, Partha; Leblanc, Richard J., Jr.

    1989-01-01

    A novel system architecture, based on the object model, is the central structuring concept used in the Clouds distributed operating system. This architecture makes Clouds attractive over a wide class of machines and environments. Clouds is a native operating system, designed and implemented at Georgia Tech, and runs on a set of general-purpose computers connected via a local area network. The system architecture of Clouds is composed of a system-wide global set of persistent (long-lived) virtual address spaces, called objects, that contain persistent data and code. The object concept is implemented at the operating system level, thus presenting a single-level storage view to the user. Lightweight threads carry computational activity through the code stored in the objects. The persistent objects and threads give rise to a programming environment composed of shared permanent memory, dispensing with the need for hardware-derived concepts such as file systems and message systems. Though the hardware may be distributed and may have disks and networks, Clouds provides applications with a logically centralized system, based on a shared, structured, single-level store. The current design of Clouds uses a minimalist philosophy with respect to both the kernel and the operating system: the kernel and the operating system support a bare minimum of functionality. Clouds also adheres to the concept of separation of policy and mechanism. Most low-level operating system services are implemented above the kernel and most high-level services are implemented at the user level. From the measured performance of the kernel mechanisms, we demonstrate that efficient implementations of the object model are feasible on commercially available hardware. Clouds provides a rich environment for conducting research in distributed systems. Topics addressed in this paper include distributed programming environments, consistency of persistent data, and fault tolerance.

  14. Integrated supply chain design for commodity chemicals production via woody biomass fast pyrolysis and upgrading.

    PubMed

    Zhang, Yanan; Hu, Guiping; Brown, Robert C

    2014-04-01

    This study investigates the optimal supply chain design for commodity chemicals (BTX, etc.) production via woody biomass fast pyrolysis and hydroprocessing pathway. The locations and capacities of distributed preprocessing hubs and integrated biorefinery facilities are optimized with a mixed integer linear programming model. In this integrated supply chain system, decisions on the biomass chipping methods (roadside chipping vs. facility chipping) are also explored. The economic objective of the supply chain model is to maximize the profit for a 20-year chemicals production system. In addition to the economic objective, the model also incorporates an environmental objective of minimizing life cycle greenhouse gas emissions, analyzing the trade-off between the economic and environmental considerations. The capital cost, operating cost, and revenues for the biorefinery facilities are based on techno-economic analysis, and the proposed approach is illustrated through a case study of Minnesota, with Minneapolis-St. Paul serving as the chemicals distribution hub. Copyright © 2014 Elsevier Ltd. All rights reserved.
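
    A toy version of the kind of facility-location MILP the study describes can be stated with the PuLP modeling library. The sites, costs, and capacities below are invented, and the real model's multi-objective and life-cycle emission terms are omitted; this only shows the open/ship decision structure.

    ```python
    import pulp

    # candidate biorefinery sites and biomass supply counties (invented data)
    sites = {"A": (50.0, 120), "B": (70.0, 200)}        # fixed cost, capacity
    counties = {"c1": 80, "c2": 90, "c3": 60}           # available biomass
    ship = {("c1", "A"): 1.0, ("c1", "B"): 2.0, ("c2", "A"): 1.5,
            ("c2", "B"): 0.8, ("c3", "A"): 2.2, ("c3", "B"): 1.1}
    revenue_per_unit = 3.0

    prob = pulp.LpProblem("supply_chain", pulp.LpMaximize)
    open_site = pulp.LpVariable.dicts("open", sites, cat="Binary")
    flow = pulp.LpVariable.dicts("flow", ship, lowBound=0)

    # profit = (revenue - shipping cost) * flow - fixed facility costs
    prob += (pulp.lpSum(flow[k] * (revenue_per_unit - c) for k, c in ship.items())
             - pulp.lpSum(sites[s][0] * open_site[s] for s in sites))

    for c, supply in counties.items():                  # cannot ship beyond supply
        prob += pulp.lpSum(flow[(c, s)] for s in sites) <= supply
    for s, (_, cap) in sites.items():                   # capacity only if open
        prob += pulp.lpSum(flow[(c, s)] for c in counties) <= cap * open_site[s]

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({s: int(open_site[s].value()) for s in sites},
          pulp.value(prob.objective))
    ```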

  15. Integrated control strategy for autonomous decentralized conveyance systems based on distributed MEMS arrays

    NASA Astrophysics Data System (ADS)

    Zhou, Lingfei; Chapuis, Yves-Andre; Blonde, Jean-Philippe; Bervillier, Herve; Fukuta, Yamato; Fujita, Hiroyuki

    2004-07-01

    In this paper, the authors propose to study a model and a control strategy for a two-dimensional conveyance system based on the principles of Autonomous Decentralized Microsystems (ADM). The microconveyance system is based on distributed cooperative MEMS actuators which can produce a force field on the surface of the device to grip and move a micro-object. The modeling approach proposed here is based on a simple model of a microconveyance system represented by a 5 x 5 matrix of cells. Each cell consists of a microactuator, a microsensor, and a microprocessor that provide actuation, autonomy, and decentralized intelligence to the cell. Thus, each cell is able to identify a micro-object crossing it and to decide on its own the appropriate control strategy to convey the micro-object to its destination target. The control strategy is established through five simple decision rules that each cell has to respect at every computation cycle. Simulation and FPGA implementation results are given at the end of the paper to validate the model and the control approach of the microconveyance system.
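
    A toy version of such locally decided routing is easy to sketch. The rule set below (reduce the x error first, then the y error, stop at the target) is invented for illustration; the paper's five decision rules are not given in the abstract.

    ```python
    # A toy decentralized conveyance grid: per cycle, only the cell holding
    # the object decides locally how to push it one step toward the target.
    GRID = 5

    def step(pos, target):
        """One computation cycle of the cell currently holding the object."""
        x, y = pos
        tx, ty = target
        if (x, y) == (tx, ty):        # rule: at target -> stop actuating
            return pos
        if x != tx:                   # rule: reduce the x error first
            return (x + (1 if tx > x else -1), y)
        return (x, y + (1 if ty > y else -1))  # rule: then reduce the y error

    pos, target = (0, 0), (4, 2)
    trace = [pos]
    while pos != target:
        pos = step(pos, target)       # only the occupied cell computes
        trace.append(pos)
    print(" -> ".join(map(str, trace)))
    ```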

  16. Data analysis environment (DASH2000) for the Subaru telescope

    NASA Astrophysics Data System (ADS)

    Mizumoto, Yoshihiko; Yagi, Masafumi; Chikada, Yoshihiro; Ogasawara, Ryusuke; Kosugi, George; Takata, Tadafumi; Yoshida, Michitoshi; Ishihara, Yasuhide; Yanaka, Hiroshi; Yamamoto, Tadahiro; Morita, Yasuhiro; Nakamoto, Hiroyuki

    2000-06-01

    A new data analysis framework (DASH) has been developed for the SUBARU Telescope. It is designed using object-oriented methodology and adopts a 'restaurant model'. DASH shares CPU and I/O load among distributed heterogeneous computers. The distributed object environment of the system is implemented with Java and CORBA. DASH has been evaluated through several prototypes. DASH2000 is the latest version, which will be released as the beta version of the data analysis system for the SUBARU Telescope.

  17. The orbital distribution of Near-Earth Objects inside Earth's orbit

    NASA Astrophysics Data System (ADS)

    Greenstreet, Sarah; Ngo, Henry; Gladman, Brett

    2012-01-01

    Canada's Near-Earth Object Surveillance Satellite (NEOSSat), set to launch in early 2012, will search for and track Near-Earth Objects (NEOs), tuning its search to best detect objects with a < 1.0 AU. In order to construct an optimal pointing strategy for NEOSSat, we needed more detailed information in the a < 1.0 AU region than the best current model (Bottke, W.F., Morbidelli, A., Jedicke, R., Petit, J.M., Levison, H.F., Michel, P., Metcalfe, T.S. [2002]. Icarus 156, 399-433) provides. We present here the NEOSSat-1.0 NEO orbital distribution model with larger statistics that permit finer resolution and less uncertainty, especially in the a < 1.0 AU region. We find that Amors = 30.1 ± 0.8%, Apollos = 63.3 ± 0.4%, Atens = 5.0 ± 0.3%, Atiras (0.718 < Q < 0.983 AU) = 1.38 ± 0.04%, and Vatiras (0.307 < Q < 0.718 AU) = 0.22 ± 0.03% of the steady-state NEO population. Vatiras are a previously undiscussed NEO population clearly defined in our integrations, whose orbits lie completely interior to that of Venus. Our integrations also uncovered the unexpected production of retrograde orbits from main-belt asteroid sources; this retrograde NEA population makes up ≃0.1% of the steady-state NEO population. The relative NEO impact rate onto Mercury, Venus, and Earth, as well as the normalized distribution of impact speeds, was calculated from the NEOSSat-1.0 orbital model under the assumption of a steady-state. The new model predicts a slightly higher Mercury impact flux.

  18. Virtual memory support for distributed computing environments using a shared data object model

    NASA Astrophysics Data System (ADS)

    Huang, F.; Bacon, J.; Mapp, G.

    1995-12-01

    Conventional storage management systems provide one interface for accessing memory segments and another for accessing secondary storage objects. This hinders application programming and affects overall system performance due to mandatory data copying and user/kernel boundary crossings, which in the microkernel case may involve context switches. Memory-mapping techniques may be used to provide programmers with a unified view of the storage system. This paper extends such techniques to support a shared data object model for distributed computing environments in which good support for coherence and synchronization is essential. The approach is based on a microkernel, typed memory objects, and integrated coherence control. A microkernel architecture is used to support multiple coherence protocols and the addition of new protocols. Memory objects are typed and applications can choose the most suitable protocols for different types of object to avoid protocol mismatch. Low-level coherence control is integrated with high-level concurrency control so that the number of messages required to maintain memory coherence is reduced and system-wide synchronization is realized without severely impacting the system performance. These features together contribute a novel approach to the support for flexible coherence under application control.
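
    Memory-mapping as a unified view of storage can be shown in miniature. In the sketch below, Python's standard mmap module stands in for the paper's typed memory objects: the persistent object is mutated through ordinary memory operations rather than a separate file interface. Coherence protocols, typing, and the microkernel are not modeled.

    ```python
    import mmap
    import os

    path = "object.dat"
    with open(path, "wb") as f:          # create a small persistent "object"
        f.write(b"\x00" * 4096)

    # map the storage object into memory: reads and writes go through
    # memory operations instead of explicit read()/write() calls
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as mem:
            mem[0:5] = b"hello"          # mutate the object as if it were memory
            mem.flush()                  # make the change durable

    with open(path, "rb") as f:
        print(f.read(5))                 # b'hello' -- visible via the file view
    os.remove(path)
    ```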

  19. Development of water allocation Model Based on ET-Control and Its Application in Haihe River Basin

    NASA Astrophysics Data System (ADS)

    You, Jinjun; Gan, Hong; Gan, Zhiguo; Wang, Lin

    2010-05-01

    Traditionally, water allocation distributes water to different regions and sectors without sufficient consideration of the amount of water consumed after distribution. Water allocation based on ET (evaporation and transpiration) control changes this idea and emphasizes the absolute amount of evaporation and transpiration in a specific area. Under this approach, the ET involved in water allocation includes not only the water consumed by the sectors but also natural ET. The allocation therefore consists of two steps: the first is to estimate a reasonable ET quantum for each region, and the second is to allocate water to more detailed regions and various sectors within that quantum according to the operational rules. To obtain a sound ET distribution and water allocation across regions, a framework is put forward in this paper in which two models are applied to analyze different scenarios with predefined economic growth and ecological objectives. The first model derives a rational ET objective through multi-objective analysis, seeking a compromise between economic growth and ecological maintenance; food security and environmental protection are also taken as constraints in this optimization. The second model provides hydraulic simulation and water balance to allocate the ET objective to the corresponding regions under operational rules. These two models are combined into an integrated ET-control water allocation. Scenario analysis with the ET-Control Model can reveal the relations between economy and ecology, and further suggest measures to control water use under changing socio-economic growth and ecological objectives. To confirm the methodology, the Haihe River is taken as a case study. Rational water allocation is an important branch of decision making in water planning and management in the Haihe River Basin, since water scarcity and a deteriorating environment compete dramatically for water there, and a reasonable allocation between economy and ecology is a central concern. Considering the water scarcity in the Haihe River Basin, an ET quota is taken as the objective for water allocation among provinces in order to meet the requirement of water inflow into the Bohai Sea. Scenario analysis provides the resulting water evaporation from the natural water cycle and from artificial use. A trade-off curve based on the fulfilment of ecological and economic objectives in different scenarios reveals the competitive relation between human activities and nature.
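
    The first-stage idea - sweep the relative weight on economic versus ecological objectives and read off a trade-off curve - can be caricatured in a few lines. The benefit functions and the single decision variable below are invented purely to show the mechanics; the paper's models involve hydraulic simulation and many more constraints.

    ```python
    import numpy as np

    # toy objectives over one decision: the ET share allocated to the economy
    et_total = 100.0                     # illustrative ET quantum for a region

    def economic_benefit(et_econ):
        return 4.0 * np.sqrt(et_econ)    # diminishing returns to economic use

    def ecological_benefit(et_econ):
        return 2.0 * np.sqrt(et_total - et_econ)  # the remainder sustains ecology

    # sweep the weight between objectives to trace the trade-off curve
    shares = np.linspace(0.0, et_total, 401)
    for w in (0.2, 0.5, 0.8):
        score = w * economic_benefit(shares) + (1 - w) * ecological_benefit(shares)
        best = shares[np.argmax(score)]
        print(f"economic weight {w:.1f}: allocate {best:.1f} of "
              f"{et_total:.0f} ET units to the economy")
    ```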

  20. Distributed run of a one-dimensional model in a regional application using SOAP-based web services

    NASA Astrophysics Data System (ADS)

    Smiatek, Gerhard

    This article describes the setup of a distributed computing system in Perl. It facilitates the parallel run of a one-dimensional environmental model on a number of simple network PC hosts. The system uses Simple Object Access Protocol (SOAP) driven web services offering the model run on remote hosts and a multi-thread environment distributing the work and accessing the web services. Its application is demonstrated in a regional run of a process-oriented biogenic emission model for the area of Germany. Within a network consisting of up to seven web services implemented on Linux and MS-Windows hosts, a performance increase of approximately 400% has been reached compared to a model run on the fastest single host.
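
    A compact analogue of the article's setup can be written with the Python standard library: XML-RPC stands in for the SOAP web services, three in-process servers emulate the remote hosts, and a thread pool farms out the per-cell model runs. The squaring "model" and the host/port choices are placeholders, not the article's emission model.

    ```python
    import time
    from concurrent.futures import ThreadPoolExecutor
    from threading import Thread
    from xmlrpc.client import ServerProxy
    from xmlrpc.server import SimpleXMLRPCServer

    # -- worker side: each host would run a server like this one ------------
    def serve(port):
        server = SimpleXMLRPCServer(("localhost", port), logRequests=False)
        server.register_function(lambda cell: cell * cell, "run_model")
        server.serve_forever()

    PORTS = [8001, 8002, 8003]
    for p in PORTS:                      # three local servers emulate hosts
        Thread(target=serve, args=(p,), daemon=True).start()
    time.sleep(0.2)                      # give the servers time to start

    # -- master side: a thread pool distributes cells over the "hosts" ------
    HOSTS = [f"http://localhost:{p}" for p in PORTS]
    CELLS = list(range(30))              # work units, e.g. grid cells

    def run_cell(job):
        host, cell = job
        return ServerProxy(host).run_model(cell)

    with ThreadPoolExecutor(max_workers=len(HOSTS)) as pool:
        jobs = [(HOSTS[i % len(HOSTS)], c) for i, c in enumerate(CELLS)]
        results = list(pool.map(run_cell, jobs))
    print(f"collected {len(results)} results, first five: {results[:5]}")
    ```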

  1. Comparison of Unscented Kalman Filter and Unscented Schmidt Kalman Filter in Predicting Attitude and Associated Uncertainty of a Geosynchronous Satellite

    DTIC Science & Technology

    2014-09-01

    the MLI coating, and similarly, the surface model as represented by the bidirectional reflectance distribution function (BRDF) will never be identical to that found on actual space objects... BRDF model and how it compares to the Ashikhmin-Shirley BRDF [14] using similar nomenclature can be found in Ref. [15]. In this scenario, the state

  2. Modeling of Fragmentation of Asteroids

    NASA Technical Reports Server (NTRS)

    Agrawal, Parul; Prabhu, Dinesh K.; Carlozzi, Alexander; Hart, Kenneth; Bryson, Katie; Sears, Derek

    2015-01-01

    The objective of this study is to understand the fragmentation and fracture of a given asteroid and the mechanisms of break-up. The focus of the present work is to develop modeling techniques for stony asteroids in the 10 m to 100 m range to answer two questions: 1) What is the role of the material makeup of an asteroid in the stress distribution? 2) How is the stress distribution altered in the presence of pre-existing defects?

  3. Monitoring Distributed Systems: A Relational Approach.

    DTIC Science & Technology

    1982-12-01

    relationship, and time. The first two are modeled directly in the relational model. The third is perhaps the most fundamental, for without the system...of another, newly created file. The approach adopted here applies to object-based operating systems, and will support capability addressing at the...in certainties. -- Francis Bacon, in The Advancement of Learning. The thesis of this research is that monitoring distributed systems is fundamentally a

  4. An object-oriented software approach for a distributed human tracking motion system

    NASA Astrophysics Data System (ADS)

    Micucci, Daniela L.

    2003-06-01

    Tracking is a composite job involving the co-operation of autonomous activities which exploit a complex information model and rely on a distributed architecture. Both information and activities must be classified and related in several dimensions: abstraction levels (what is modelled and how information is processed); topology (where the modelled entities are); time (when entities exist); strategy (why something happens); responsibilities (who is in charge of processing the information). A proper object-oriented analysis and design approach leads to a modular architecture where information about conceptual entities is modelled at each abstraction level via classes and intra-level associations, whereas inter-level associations between classes model the abstraction process. Both information and computation are partitioned according to level-specific topological models. They are also placed in a temporal framework modelled by suitable abstractions. Domain-specific strategies control the execution of the computations. Computational components perform both intra-level processing and inter-level information conversion. The paper overviews the phases of the analysis and design process, presents the major concepts at each abstraction level, and shows how the resulting design turns into a modular, flexible and adaptive architecture. Finally, the paper sketches how the conceptual architecture can be deployed onto a concrete distributed architecture by relying on an experimental framework.

  5. A model independent search for new physics in final states containing leptons at the DO experiment

    NASA Astrophysics Data System (ADS)

    Piper, Joel M.

    The standard model is known to be the low energy limit of a more general theory. Several consequences of the standard model point to a strong probability of new physics becoming experimentally visible in high energy collisions of a few TeV, resulting in high momentum objects. The specific signatures of these collisions are topics of much debate. Rather than choosing a specific signature, this analysis broadly searches the data, preferring breadth over sensitivity. In searching for new physics, several different approaches are used. These include the comparison of data with standard model background expectation in overall number of events, comparisons of distributions of many kinematic variables, and finally comparisons on the tails of distributions that sum the momenta of the objects in an event. With 1.07 fb-1 at the DO experiment, we find no evidence of physics beyond the standard model. Several discrepancies from the standard model were found, but none of these provide a compelling case for new physics.

  6. Ionic Liquids for Utilization of Waste Heat from Distributed Power Generation Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joan F. Brennecke; Mihir Sen; Edward J. Maginn

    2009-01-11

    The objective of this research project was the development of ionic liquids to capture and utilize waste heat from distributed power generation systems. Ionic liquids (ILs) are organic salts that are liquid at room temperature, and they have the potential to make fundamental and far-reaching changes in the way we use energy. In particular, the focus of this project was fundamental research on the potential use of IL/CO2 mixtures in absorption-refrigeration systems. Such systems can provide cooling by utilizing waste heat from various sources, including distributed power generation. The basic objectives of the research were to design and synthesize ILs appropriate for the task, to measure and model thermophysical properties and phase behavior of ILs and IL/CO2 mixtures, and to model the performance of IL/CO2 absorption-refrigeration systems.

  7. Energy Technology Allocation for Distributed Energy Resources: A Technology-Policy Framework

    NASA Astrophysics Data System (ADS)

    Mallikarjun, Sreekanth

    Distributed energy resources (DER) are emerging rapidly. New engineering technologies, materials, and designs improve the performance and extend the range of locations for DER. In contrast, constructing new or modernizing existing high-voltage transmission lines for centralized generation is expensive and challenging. In addition, customer demand for reliability has increased, and concerns about climate change have created a pull for swift renewable energy penetration. In this context, DER policy makers, developers, and users are interested in determining which energy technologies to use to accommodate different end-use energy demands. We present a two-stage multi-objective strategic technology-policy framework for determining the optimal energy technology allocation for DER. The framework simultaneously considers economic, technical, and environmental objectives. The first stage utilizes a Data Envelopment Analysis model for each end-use to evaluate the performance of each energy technology based on the three objectives. The second stage incorporates the factor efficiencies determined in the first stage, capacity limitations, dispatchability, and renewable penetration for each technology, and demand for each end-use into a bottleneck multi-criteria decision model which provides the Pareto-optimal energy resource allocation. We conduct several case studies to understand the roles of various distributed energy technologies in different scenarios, and construct policy implications based on the model results of a set of case studies.
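
    The first-stage building block, an input-oriented DEA efficiency score, reduces to a small linear program per technology. The sketch below uses scipy with one invented input (cost) and two invented outputs per technology; the framework's actual factors and the second-stage bottleneck model are omitted.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # toy data: one input column (cost) and two outputs per technology
    X = np.array([[10.0], [8.0], [12.0], [9.0]])                    # inputs
    Y = np.array([[5.0, 3.0], [4.0, 4.0], [6.0, 2.0], [3.0, 5.0]])  # outputs

    def ccr_efficiency(o):
        """Input-oriented CCR DEA score for unit o: minimize theta such that
        a nonnegative combination of all units uses <= theta * inputs of o
        while producing >= outputs of o."""
        n, m = X.shape
        s = Y.shape[1]
        c = np.r_[1.0, np.zeros(n)]                   # minimize theta
        # inputs:  X^T lambda - theta * x_o <= 0
        A_in = np.hstack([-X[o].reshape(m, 1), X.T])
        b_in = np.zeros(m)
        # outputs: -Y^T lambda <= -y_o  (i.e. Y^T lambda >= y_o)
        A_out = np.hstack([np.zeros((s, 1)), -Y.T])
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]),
                      b_ub=np.r_[b_in, b_out],
                      bounds=[(None, None)] + [(0, None)] * n)
        return res.fun

    for o in range(len(X)):
        print(f"technology {o}: efficiency {ccr_efficiency(o):.3f}")
    ```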

  8. Research into a distributed fault diagnosis system and its application

    NASA Astrophysics Data System (ADS)

    Qian, Suxiang; Jiao, Weidong; Lou, Yongjian; Shen, Xiaomei

    2005-12-01

    CORBA (Common Object Request Broker Architecture) is a solution for distributed computing over heterogeneous systems which establishes a communication protocol between distributed objects, with strong emphasis on interoperation between them. However, only after developing suitable application approaches and practical monitoring and diagnosis technology can customers share monitoring and diagnosis information, so that remote multi-expert cooperative diagnosis online can be achieved. This paper aims at building an open fault monitoring and diagnosis platform combining CORBA, the Web, and agents. Heterogeneous diagnosis objects interoperate in independent threads through the CORBA 'soft bus', enabling resource sharing and online multi-expert cooperative diagnosis, and overcoming shortcomings such as limited diagnosis knowledge, reliance on a single diagnosis technique, and incomplete analysis functions, so that more complicated and deeper diagnosis can be carried out. Taking a high-speed centrifugal air compressor set as an example, we demonstrate distributed diagnosis based on CORBA. The case shows that more efficient approaches to problems such as real-time monitoring and diagnosis over the network and the decomposition of complicated tasks can be found by integrating CORBA, Web technology, and an agent framework model in complementary research. In this system, a multi-diagnosis intelligent agent helps improve diagnosis efficiency. Besides, the system offers an open environment which makes it easy for diagnosis objects to be upgraded and for new diagnosis server objects to join.

  9. Relativistic compact stars with charged anisotropic matter

    NASA Astrophysics Data System (ADS)

    Maurya, S. K.; Banerjee, Ayan; Channuie, Phongpichit

    2018-05-01

    In this article, we perform a detailed theoretical analysis of new exact solutions with anisotropic fluid distribution of matter for compact objects subject to hydrostatic equilibrium. We present a family of solutions to the Einstein-Maxwell equations describing a spherically symmetric, static distribution of a fluid with pressure anisotropy. We implement an embedding class one condition to obtain a relation between the metric functions. We generalize the properties of a spherical star in hydrostatic equilibrium using the generalised Tolman-Oppenheimer-Volkoff (TOV) equation. We match the interior solution to an exterior Reissner-Nordström one, and study the energy conditions, speed of sound, and mass-radius relation of the star. We also show that the obtained solutions are compatible with observational data for the compact object Her X-1. Regarding our results, the physical behaviour of the present model may serve for the modeling of ultra-compact objects.

  10. Brightness variation distributions among main belt asteroids from sparse light-curve sampling with Pan-STARRS 1

    NASA Astrophysics Data System (ADS)

    McNeill, A.; Fitzsimmons, A.; Jedicke, R.; Wainscoat, R.; Denneau, L.; Vereš, P.; Magnier, E.; Chambers, K. C.; Kaiser, N.; Waters, C.

    2016-07-01

    The rotational state of asteroids is controlled by various physical mechanisms including collisions, internal damping and the Yarkovsky-O'Keefe-Radzievskii-Paddack effect. We have analysed the changes in magnitude between consecutive detections of ~60,000 asteroids measured by the Panoramic Survey Telescope and Rapid Response System (Pan-STARRS) 1 survey during its first 18 months of operations. We have attempted to explain the derived brightness changes physically and through the application of a simple model. We have found a tendency towards smaller magnitude variations with decreasing diameter for objects of 1 < D < 8 km. Assuming the shape distribution of objects in this size range to be independent of size and composition, our model suggests a population with average axial ratios 1 : 0.85 ± 0.13 : 0.71 ± 0.13, with larger objects more likely to have spin axes perpendicular to the orbital plane.

  11. Infrared near-Earth-object survey modeling for observatories interior to the Earth's orbit

    NASA Astrophysics Data System (ADS)

    Buie, M.

    2014-07-01

    The search for and dynamical characterization of the near-Earth population of objects (NEOs) has been a busy topic for surveys for many years. Most of the work thus far has been from ground-based optical surveys such as the Catalina Sky Survey and LINEAR. These surveys have essentially reached a complete inventory of objects down to 1 km diameter and have shown that the known objects do not pose any significant impact threat. Smaller objects are correspondingly smaller threats but there are more of them and fewer of them have so far been discovered. The next generation of surveys is looking to extend their reach down to much smaller sizes. From an impact risk perspective, those objects as small as 30--40 m are still of interest (similar in size to the Tunguska bolide). Smaller objects than this are largely of interest from a space resource or in-situ analysis efforts. A recent mission concept promoted by the B612 Foundation and Ball Aerospace calls for an infrared survey telescope in a Venus-like orbit, known as the Sentinel Mission. This wide-field facility has been designed to complete the inventory down to a 140 m diameter while also providing substantial constraints on the NEO population down to a Tunguska-sized object. I have been working to develop a suite of tools to provide survey modeling for this class of survey telescope. The purpose of the tool is to uncover hidden complexities that govern mission design and operation while also working to quantitatively understand the orbit quality provided on its catalog of objects without additional followup assets. The baseline mission design calls for a 6.5 year survey lifetime. This survey model is a statistically based tool for establishing completeness as a function of object size and survey duration. Effects modeled include the ability to adjust the field-of-regard (includes all pointing restrictions), field-of-view, focal plane array fill factor, and the observatory orbit. Consequences tracked include time-tagged detection times from which orbit quality can be derived and efficiency by dynamical class. The dominant noise term in the simulations comes from the noise in the background flux caused by thermal emission from zodiacal dust. The model used is sufficient for the study of reasonably low-inclination spacecraft orbits such as are being considered. Results to date are based on the 2002 Bottke NEA orbit-distribution model. The system can work with any orbit-distribution model and with any size-frequency distribution. This tool also serves to quantify the amount of data that will also be collected on main-belt objects by simply testing against the known catalog of bodies. The orbit quality work clearly shows the benefit of a self-followup survey such as Sentinel. Most objects discovered will be seen in multiple observing epochs and the resulting orbits will preclude losing track of them for decades to come (or longer). All of the ephemeris calculations, including investigation of orbit determination quality, are done with the OpenOrb software package. The presentation for this meeting will be based on results of modeling the Sentinel Mission and other similar variants. The focus will be on evaluating the survey completion for different dynamical classes as well as for different sized objects. Within the fidelity of such statistically-based models, the planned Sentinel observatory is well capable of a huge step forward in the efforts to build a complete catalog of all objects that could pose future harm to planet Earth.
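
    The statistically based style of completeness modeling described here is easy to caricature: give each synthetic object a size-dependent per-epoch detection probability and track the fraction detected at least once as the survey proceeds. The logistic detection law, epoch cadence, and size range below are invented, not the Sentinel model's parameters.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    N_OBJ = 20_000
    diam_km = 10 ** rng.uniform(-1.5, 0.5, N_OBJ)     # ~0.03 km to ~3 km

    def per_epoch_detection_prob(d):
        # larger objects are brighter and easier to detect;
        # a crude logistic in log10(diameter), purely illustrative
        return 1.0 / (1.0 + np.exp(-3.0 * (np.log10(d) + 0.5)))

    p = per_epoch_detection_prob(diam_km)
    detected = np.zeros(N_OBJ, dtype=bool)
    for half_year in range(13):                       # 6.5 yr, one epoch / 6 months
        detected |= rng.random(N_OBJ) < p
        if half_year in (1, 5, 12):
            big = diam_km > 0.14                      # the 140 m benchmark
            print(f"after {0.5 * (half_year + 1):4.1f} yr: "
                  f"completeness >140 m = {detected[big].mean():.2f}, "
                  f"all sizes = {detected.mean():.2f}")
    ```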

  12. On the objective identification of flood seasons

    NASA Astrophysics Data System (ADS)

    Cunderlik, Juraj M.; Ouarda, Taha B. M. J.; BobéE, Bernard

    2004-01-01

    The determination of seasons of high and low probability of flood occurrence is a task with many practical applications in contemporary hydrology and water resources management. Flood seasons are generally identified subjectively by visually assessing the temporal distribution of flood occurrences and then, at a regional scale, verified by comparing the temporal distribution with distributions obtained at hydrologically similar neighboring sites. This approach is subjective, time consuming, and potentially unreliable. The main objective of this study is therefore to introduce a new, objective, and systematic method for the identification of flood seasons. The proposed method tests the significance of flood seasons by comparing the observed variability of flood occurrences with the theoretical flood variability in a nonseasonal model. The method also addresses the uncertainty resulting from sampling variability by quantifying the probability associated with the identified flood seasons. The performance of the method was tested on an extensive number of samples with different record lengths generated from several theoretical models of flood seasonality. The proposed approach was then applied to real data from a large set of sites with different flood regimes across Great Britain. The results show that the method can efficiently identify flood seasons from both theoretical and observed distributions of flood occurrence. The results were used for the determination of the main flood seasonality types in Great Britain.
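
    The core idea - compare observed seasonality against a nonseasonal null model and attach a probability - can be sketched with a standard circular statistic and a Monte Carlo null. The mean resultant length used below and the invented sample of flood dates are illustrative; they are not the authors' exact test statistic or data.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # invented sample: flood occurrence dates as days of year
    observed = np.array([12, 25, 20, 300, 310, 295, 15, 8, 305, 330, 18, 22])

    def seasonal_statistic(days):
        # mean resultant length of dates mapped to angles:
        # ~0 for uniform occurrence, ->1 for a single tight season
        theta = 2 * np.pi * days / 365.25
        return np.hypot(np.cos(theta).mean(), np.sin(theta).mean())

    r_obs = seasonal_statistic(observed)

    # null model: same sample size, dates uniform over the year (nonseasonal)
    null = np.array([seasonal_statistic(rng.uniform(0, 365.25, observed.size))
                     for _ in range(10_000)])
    p_value = (null >= r_obs).mean()
    print(f"observed r = {r_obs:.2f}, p under nonseasonal model = {p_value:.4f}")
    ```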

  13. Distributed Object Oriented Programming

    DTIC Science & Technology

    1990-02-01

    of the object oriented model of computation. Therefore, object oriented programming can provide the programmer with good conceptual tools to divide his...LABOR SALES-COMMISSION). The symbol + refers to the addition function and takes any number of numeric arguments. The third subtype of list forms is the... [fragmentary Lisp message-passing code omitted]

  14. A Model of Objective Weighting for EIA.

    ERIC Educational Resources Information Center

    Ying, Long Gen; Liu, You Ci

    1995-01-01

    In the research of environmental impact assessment (EIA), the problem of weight distribution for a set of parameters has not yet been properly solved. Presents an approach of objective weighting by using a procedure of Pij principal component-factor analysis (Pij PCFA), which suits specifically those parameters measured directly by physical…

  15. Object Lesson: Discovering and Learning to Recognize Objects

    DTIC Science & Technology

    2002-01-01

    4 x 4 grid represents the possible appearance of an edge, quantized to just two luminance levels. The dark line centered in the grid is the average...11):33-38, 1995. [16] Maja J. Mataric. A distributed model for mobile robot environment-learning and navigation. Technical Report AITR-1228

  16. Marketing and Distribution: Better Learning Experiences through Proper Coordination.

    ERIC Educational Resources Information Center

    Coakley, Carroll B.

    1979-01-01

    Presents a cooperative education model that correlates the student's occupational objective with his/her training station. Components of the model discussed are (1) the task analysis, (2) the job description, (3) training plans, and (4) student evaluation. (LRA)

  17. Mathematical Modelling of Continuous Biotechnological Processes

    ERIC Educational Resources Information Center

    Pencheva, T.; Hristozov, I.; Shannon, A. G.

    2003-01-01

    Biotechnological processes (BTP) are characterized by a complicated structure of organization and interdependent characteristics. Partial differential equations or systems of partial differential equations are used for their behavioural description as objects with distributed parameters. Modelling of substrate without regard to dispersion…

  18. Simulation Models for the Electric Power Requirements in a Guideway Transit System

    DOT National Transportation Integrated Search

    1980-04-01

    This report describes a computer simulation model developed at the Transportation Systems Center to study the electrical power distribution characteristics of Automated Guideway Transit (AGT) systems. The objective of this simulation effort is to pro...

  19. Confronting Models of Massive Star Evolution and Explosions with Remnant Mass Measurements

    NASA Astrophysics Data System (ADS)

    Raithel, Carolyn A.; Sukhbold, Tuguldur; Özel, Feryal

    2018-03-01

    The mass distribution of compact objects provides a fossil record that can be studied to uncover information on the late stages of massive star evolution, the supernova explosion mechanism, and the dense matter equation of state. Observations of neutron star masses indicate a bimodal Gaussian distribution, while the observed black hole mass distribution decays exponentially for stellar-mass black holes. We use these observed distributions to directly confront the predictions of stellar evolution models and the neutrino-driven supernova simulations of Sukhbold et al. We find strong agreement between the black hole and low-mass neutron star distributions created by these simulations and the observations. We show that a large fraction of the stellar envelope must be ejected, either during the formation of stellar-mass black holes or prior to the implosion through tidal stripping due to a binary companion, in order to reproduce the observed black hole mass distribution. We also determine the origins of the bimodal peaks of the neutron star mass distribution, finding that the low-mass peak (centered at ~1.4 M_⊙) originates from progenitors with M_ZAMS ≈ 9-18 M_⊙. The simulations fail to reproduce the observed peak of high-mass neutron stars (centered at ~1.8 M_⊙) and we explore several possible explanations. We argue that the close agreement between the observed and predicted black hole and low-mass neutron star mass distributions provides new, promising evidence that these stellar evolution and explosion models capture the majority of relevant stellar, nuclear, and explosion physics involved in the formation of compact objects.

  20. Engineering the object-relation database model in O-Raid

    NASA Technical Reports Server (NTRS)

    Dewan, Prasun; Vikram, Ashish; Bhargava, Bharat

    1989-01-01

    Raid is a distributed database system based on the relational model. O-Raid is an extension of the Raid system and will support complex data objects. The design of O-Raid is evolutionary and retains all features of relational database systems and those of a general-purpose object-oriented programming language. O-Raid has several novel properties. Objects, classes, and inheritance are supported together with a predicate-based relational query language. O-Raid objects are compatible with C++ objects and may be read and manipulated by a C++ program without any 'impedance mismatch'. Relations and columns within relations may themselves be treated as objects with associated variables and methods. Relations may contain heterogeneous objects, that is, objects of more than one class in a certain column, which can individually evolve by being reclassified. Special facilities are provided to reduce the data search in a relation containing complex objects.

  1. Defense Simulation Internet: next generation information highway.

    PubMed

    Lilienthal, M G

    1995-06-01

    The Department of Defense has been engaged in the Defense Modeling and Simulation Initiative (DMSI) to provide advanced distributed simulation to warfighters in geographically distributed localities. Lessons learned from the Defense Simulation Internet (DSI) concerning architecture, standards, protocols, interoperability, information sharing, and distributed databases are equally applicable to telemedicine. Much of the vision and objectives of the DMSI are easily translated into a vision for worldwide telemedicine.

  2. The orbit and size distribution of small Solar System objects orbiting the Sun interior to the Earth's orbit

    NASA Astrophysics Data System (ADS)

    Zavodny, Maximilian; Jedicke, Robert; Beshore, Edward C.; Bernardi, Fabrizio; Larson, Stephen

    2008-12-01

    We present the first observational measurement of the orbit and size distribution of small Solar System objects whose orbits are wholly interior to the Earth's (Inner Earth Objects, IEOs, with aphelion <0.983 AU). We show that we are able to model the detections of near-Earth objects (NEOs) by the Catalina Sky Survey (CSS) using a detailed parameterization of the CSS survey cadence and detection efficiencies as implemented within the Jedicke et al. [Jedicke, R., Morbidelli, A., Spahr, T., Petit, J.M., Bottke, W.F., 2003. Icarus 161, 17-33] survey simulator and utilizing the Bottke et al. [Bottke, W.F., Morbidelli, A., Jedicke, R., Petit, J.-M., Levison, H.F., Michel, P., Metcalfe, T.S., 2002. Icarus 156, 399-433] model of the NEO population's size and orbit distribution. We then show that the CSS detections of 4 IEOs are consistent with the Bottke et al. (2002) IEO model. Observational selection effects for the IEOs discovered by the CSS were then determined using the survey simulator in order to calculate the corrected number and H distribution of the IEOs. The actual number of IEOs with H < 18 (21) is 36 ± 26 (530 ± 240), and the slope of the H magnitude distribution (N ∝ 10^{αH}) for the IEOs is α = 0.44 (+0.23/-0.22). The slope is consistent with previous measurements for the NEO population of α = 0.35 ± 0.02 [Bottke et al., 2002] and α = 0.39 ± 0.013 [Stuart, J.S., Binzel, R.P., 2004. Icarus 170, 295-311]. Based on the agreement between the predicted and observed IEO orbit and absolute magnitude distributions there is no indication of any non-gravitational effects (e.g. Yarkovsky, tidal disruption) affecting the known IEO population.

  3. Electromagnetic Resonances of Metallic Bodies.

    DTIC Science & Technology

    1997-06-01

    complex objects. MOM creates a discrete model of the object by dividing the object into electrically small charge and current segments referred to as the... ELECTROMAGNETIC RESONANCES OF METALLIC BODIES, William A. Lintz, Lieutenant, United States Navy, B.E.E., Villanova University, 1992... Submitted in partial fulfillment of the requirements for the degree of Master of Science in Electrical Engineering from the Naval Postgraduate School, June

  4. A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems

    DTIC Science & Technology

    2005-05-01

    Tabu Search. Mathematical and Computer Modeling 39: 599-616. Daskin, M.S., E. Stern. 1981. A Hierarchical Objective Set Covering Model for EMS... A Group Theoretic Approach to Metaheuristic Local Search for Partitioning Problems, by Gary W. Kinney Jr., B.G.S., M.S. Dissertation presented to the... The University of Texas at Austin, May 2005

  5. Matrix Determination of Reflectance of Hidden Object via Indirect Photography

    DTIC Science & Technology

    2012-03-01

    the hidden object. This thesis provides an alternative method of processing the camera images by modeling the system as a set of transport and... Distribution Function (BRDF). Figure 1: Indirect photography with camera field of view dictated by point of illumination. ...would need to be modeled using radiometric principles. A large amount of the improvement in this process was due to the use of a blind

  6. The correlation function for density perturbations in an expanding universe. III The three-point and predictions of the four-point and higher order correlation functions

    NASA Technical Reports Server (NTRS)

    Mcclelland, J.; Silk, J.

    1978-01-01

    Higher-order correlation functions for the large-scale distribution of galaxies in space are investigated. It is demonstrated that the three-point correlation function observed by Peebles and Groth (1975) is not consistent with a distribution of perturbations that at present are randomly distributed in space. The two-point correlation function is shown to be independent of how the perturbations are distributed spatially, and a model of clustered perturbations is developed which incorporates a nonuniform perturbation distribution and which explains the three-point correlation function. A model with hierarchical perturbations incorporating the same nonuniform distribution is also constructed; it is found that this model also explains the three-point correlation function, but predicts different results for the four-point and higher-order correlation functions than does the model with clustered perturbations. It is suggested that the model of hierarchical perturbations might be explained by the single assumption of having density fluctuations or discrete objects all of the same mass randomly placed at some initial epoch.

  7. Research on the semi-distributed monthly rainfall runoff model at the Lancang River basin based on DEM

    NASA Astrophysics Data System (ADS)

    Liu, Gang; Zhao, Rong; Liu, Jiping; Zhang, Qingpu

    2007-06-01

    The Lancang River Basin is narrow, and its hydrological and meteorological conditions are highly variable. Rainfall, evaporation, glacial meltwater, and groundwater all affect the runoff, whose sources of replenishment change notably with the season in different areas of the basin. The characteristics of different kinds of distributed models and of conceptual hydrological models are analyzed. A semi-distributed hydrological model relating monthly runoff to rainfall, temperature, and soil type has been built for Changdu County based on Visual Basic and ArcObjects. The model uses the discretization approach of distributed hydrological models while taking the principles of conceptual models into account. The sub-catchment of Changdu is divided into regular cells, and hydrological and meteorological information, land-use classes, and slope extracted from 1:250,000 digital elevation models are distributed over the cells. The model does not represent the physical rainfall-runoff process in detail but uses a conceptual model to simulate the area's total contribution to runoff. The effects of evapotranspiration loss and groundwater are taken into account at the same time. The spatial distribution characteristics of monthly runoff in the area are simulated and analyzed with a few parameters.
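
    A toy version of such a semi-distributed monthly scheme is sketched below: the basin is a small grid of cells, and each cell turns monthly rainfall and temperature into runoff with a simple conceptual rule. All coefficients and the melt/evapotranspiration terms are invented; the paper's calibrated relations are not given in the abstract.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    rain = rng.gamma(2.0, 40.0, size=(5, 5))         # monthly rainfall per cell, mm
    temp = rng.uniform(-2.0, 18.0, size=(5, 5))      # monthly mean temperature, C
    soil_coeff = rng.uniform(0.3, 0.7, size=(5, 5))  # from land use / soil class

    def cell_runoff(rain, temp, coeff):
        melt = np.where(temp > 0.0, 2.0 * temp, 0.0)    # crude melt-water term
        et_loss = np.clip(0.5 * temp, 0.0, None)        # crude evapotranspiration
        return np.clip(coeff * rain + melt - et_loss, 0.0, None)

    runoff = cell_runoff(rain, temp, soil_coeff)
    print(f"basin monthly runoff: {runoff.sum():.0f} mm-cells "
          f"(wettest cell {runoff.max():.0f}, driest {runoff.min():.0f})")
    ```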

  8. Rigorous Results for the Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas; Reed, Stephanie

    2018-05-01

    This paper is concerned with general spatially explicit versions of three stochastic models for the dynamics of money that have been introduced and studied numerically by statistical physicists: the uniform reshuffling model, the immediate exchange model and the model with saving propensity. All three models consist of systems of economical agents that consecutively engage in pairwise monetary transactions. Computer simulations performed in the physics literature suggest that, when the number of agents and the average amount of money per agent are large, the limiting distribution of money as time goes to infinity approaches the exponential distribution for the first model, the gamma distribution with shape parameter two for the second model and a distribution similar but not exactly equal to a gamma distribution whose shape parameter depends on the saving propensity for the third model. The main objective of this paper is to give rigorous proofs of these conjectures and also extend these conjectures to generalizations of the first two models and a variant of the third model that include local rather than global interactions, i.e., instead of choosing the two interacting agents uniformly at random from the system, the agents are located on the vertex set of a general connected graph and can only interact with their neighbors.
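
    The spatially explicit uniform reshuffling model is straightforward to simulate. The sketch below runs it on a cycle graph (a simple connected graph): each transaction picks an agent and a neighbor and redistributes their combined money uniformly between them; parameters are illustrative.

    ```python
    import random

    random.seed(5)

    N, T = 200, 400_000
    money = [10.0] * N                       # average 10 units per agent

    for _ in range(T):
        i = random.randrange(N)
        j = (i + 1) % N                      # a neighbor on the cycle graph
        pot = money[i] + money[j]
        share = random.uniform(0.0, pot)     # uniform reshuffling of the pair
        money[i], money[j] = share, pot - share

    # the limiting distribution should look exponential: many poor, few rich
    bins = [0, 5, 10, 20, 40, 1e9]
    counts = [sum(lo <= m < hi for m in money) for lo, hi in zip(bins, bins[1:])]
    print(dict(zip(["0-5", "5-10", "10-20", "20-40", "40+"], counts)))
    ```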

  9. Detection of a Divot in the Scattering Population's Size Distribution

    NASA Astrophysics Data System (ADS)

    Shankman, Cory; Gladman, B.; Kaib, N.; Kavelaars, J.; Petit, J.

    2012-10-01

    Via joint analysis of the calibrated Canada France Ecliptic Plane Survey (CFEPS, Petit et al. 2011, AJ 142, 131), which found scattering Kuiper Belt objects, and models of their orbital distribution, we show that there should be enough kilometer-scale scattering objects to supply the Jupiter Family Comets (JFCs). Surprisingly, our analysis favours a divot (an abrupt drop and then recovery) in the size distribution at a diameter of 100 km, which results in a temporary flattening of the cumulative size distribution until it returns to a collisional equilibrium slope. Using the absolutely calibrated CFEPS survey we estimate that there are 2 x 10^9 scattering objects with H_g < 18, which is sufficient to provide the currently estimated JFC resupply rate. We also find that the primordial disk from which the scattering objects came must have had a "hot" initial inclination distribution before the giant planets scattered it out. We find that a divot in the absolute magnitude number distribution, with a bright-end logarithmic slope of 0.8, a drop at a g-band H magnitude of 9, and a faint-side logarithmic slope of 0.5, satisfies our data and simultaneously explains several existing nagging puzzles about Kuiper Belt luminosity functions (see Gladman et al., this meeting). Multiple explanations of how such a feature could have arisen will be discussed. This research was supported by the Natural Sciences and Engineering Research Council of Canada.

  10. Next Generation Multimedia Distributed Data Base Systems

    NASA Technical Reports Server (NTRS)

    Pendleton, Stuart E.

    1997-01-01

    The paradigm of client/server computing is changing. The model of a server running a monolithic application and supporting clients at the desktop is giving way to a different model that blurs the line between client and server. We are on the verge of plunging into the next generation of computing technology: distributed object-oriented computing. This is not only a change in requirements but a change in opportunities, and it requires a new way of thinking for Information System (IS) developers. The demands on information systems caused by global competition are requiring even more access to decision-making tools. Simply put, object-oriented technology has been developed to supersede the current design process of information systems, which is not capable of handling next-generation multimedia.

  11. LP II--A GOAL PROGRAMMING MODEL FOR MEDIA.

    ERIC Educational Resources Information Center

    CHARNES, A.; AND OTHERS

    A goal programming model for selecting media is presented which alters the objective and extends previous media models by accounting for cumulative duplicating audiences over a variety of time periods. This permits detailed control of the distribution of message frequencies directed at each of numerous marketing targets over a sequence of…

  12. Modeling Cometary Coma with a Three Dimensional, Anisotropic Multiple Scattering Distributed Processing Code

    NASA Technical Reports Server (NTRS)

    Luchini, Chris B.

    1997-01-01

    Development of camera and instrument simulations for space exploration requires the development of scientifically accurate models of the objects to be studied. Several planned cometary missions have prompted the development of a three dimensional, multi-spectral, anisotropic multiple scattering model of cometary coma.

  13. Hierarchical representation of shapes in visual cortex—from localized features to figural shape segregation

    PubMed Central

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1–V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy. PMID:25157228

  14. Hierarchical representation of shapes in visual cortex-from localized features to figural shape segregation.

    PubMed

    Tschechne, Stephan; Neumann, Heiko

    2014-01-01

    Visual structures in the environment are segmented into image regions and those combined to a representation of surfaces and prototypical objects. Such a perceptual organization is performed by complex neural mechanisms in the visual cortex of primates. Multiple mutually connected areas in the ventral cortical pathway receive visual input and extract local form features that are subsequently grouped into increasingly complex, more meaningful image elements. Such a distributed network of processing must be capable to make accessible highly articulated changes in shape boundary as well as very subtle curvature changes that contribute to the perception of an object. We propose a recurrent computational network architecture that utilizes hierarchical distributed representations of shape features to encode surface and object boundary over different scales of resolution. Our model makes use of neural mechanisms that model the processing capabilities of early and intermediate stages in visual cortex, namely areas V1-V4 and IT. We suggest that multiple specialized component representations interact by feedforward hierarchical processing that is combined with feedback signals driven by representations generated at higher stages. Based on this, global configurational as well as local information is made available to distinguish changes in the object's contour. Once the outline of a shape has been established, contextual contour configurations are used to assign border ownership directions and thus achieve segregation of figure and ground. The model, thus, proposes how separate mechanisms contribute to distributed hierarchical cortical shape representation and combine with processes of figure-ground segregation. Our model is probed with a selection of stimuli to illustrate processing results at different processing stages. We especially highlight how modulatory feedback connections contribute to the processing of visual input at various stages in the processing hierarchy.

  15. Application of an object-oriented programming paradigm in three-dimensional computer modeling of mechanically active gastrointestinal tissues.

    PubMed

    Rashev, P Z; Mintchev, M P; Bowes, K L

    2000-09-01

    The aim of this study was to develop a novel three-dimensional (3-D) object-oriented modeling approach incorporating knowledge of the anatomy, electrophysiology, and mechanics of externally stimulated excitable gastrointestinal (GI) tissues and emphasizing the "stimulus-response" principle of extracting the modeling parameters. The modeling method used clusters of class hierarchies representing GI tissues from three perspectives: 1) anatomical; 2) electrophysiological; and 3) mechanical. We elaborated on the first four phases of the object-oriented system development life-cycle: 1) analysis; 2) design; 3) implementation; and 4) testing. Generalized cylinders were used for the implementation of 3-D tissue objects modeling the cecum, the descending colon, and the colonic circular smooth muscle tissue. The model was tested using external neural electrical tissue excitation of the descending colon with virtual implanted electrodes and the stimulating current density distributions over the modeled surfaces were calculated. Finally, the tissue deformations invoked by electrical stimulation were estimated and represented by a mesh-surface visualization technique.
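
    A minimal sketch of what such a three-perspective class cluster might look like; all class names, attributes and the toy stimulus-response rule below are hypothetical illustrations, not the authors' implementation.

    ```python
    from dataclasses import dataclass

    # Hypothetical class cluster mirroring the paper's three modeling
    # perspectives: anatomical, electrophysiological, and mechanical.

    @dataclass
    class AnatomicalModel:
        length_mm: float
        radius_mm: float                      # generalized-cylinder geometry

    @dataclass
    class ElectrophysiologicalModel:
        resting_potential_mv: float = -60.0
        stimulus_current_ma: float = 0.0

    @dataclass
    class MechanicalModel:
        passive_stiffness: float = 1.0

    @dataclass
    class GITissue:
        name: str
        anatomy: AnatomicalModel
        electrophysiology: ElectrophysiologicalModel
        mechanics: MechanicalModel

        def stimulate(self, current_ma: float) -> float:
            """Toy 'stimulus-response' rule: deformation grows with current."""
            self.electrophysiology.stimulus_current_ma = current_ma
            return current_ma / self.mechanics.passive_stiffness

    colon = GITissue("descending colon",
                     AnatomicalModel(length_mm=250, radius_mm=30),
                     ElectrophysiologicalModel(),
                     MechanicalModel(passive_stiffness=2.5))
    print(colon.stimulate(4.0))  # toy deformation estimate
    ```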

  16. Small impact craters in the lunar regolith - Their morphologies, relative ages, and rates of formation

    USGS Publications Warehouse

    Moore, H.J.; Boyce, J.M.; Hahn, D.A.

    1980-01-01

    Apparently, there are two types of size-frequency distributions of small lunar craters (~1-100 m across): (1) crater production distributions, for which the cumulative frequency of craters is an inverse function of diameter to a power near 2.8, and (2) steady-state distributions, for which the cumulative frequency of craters is inversely proportional to the square of their diameters. According to theory, cumulative frequencies of craters in each morphologic category within the steady state should also be an inverse function of the square of their diameters. Some data on the frequency distribution of craters by morphologic type are approximately consistent with theory, whereas other data are inconsistent with it. A flux of crater-producing objects can be inferred from size-frequency distributions of small craters on the flanks and ejecta of craters of known age. Crater frequency distributions and data on the craters Tycho, North Ray, Cone, and South Ray, when compared with the flux of objects measured by the Apollo Passive Seismometer, suggest that the flux of objects has been relatively constant over the last 100 m.y. (within 1/3 to 3 times the flux estimated for Tycho). Steady-state frequency distributions for craters in several morphologic categories formed the basis for estimating the relative ages of craters and surfaces in a system used during the Apollo landing site mapping program of the U.S. Geological Survey. The relative ages in this system are converted to model absolute ages that have a rather broad range of values, between about 1/3 and 3 times the assigned model absolute age. © 1980 D. Reidel Publishing Co.
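
    In symbols, the two cumulative forms described above can be sketched as follows (proportionality constants are survey-dependent and not given in the abstract):

    ```latex
    % Production and steady-state cumulative size-frequency distributions:
    N_{\mathrm{production}}(>D) \propto D^{-2.8}, \qquad
    N_{\mathrm{steady\ state}}(>D) \propto D^{-2}
    ```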

  17. Calibration of a distributed hydrologic model using observed spatial patterns from MODIS data

    NASA Astrophysics Data System (ADS)

    Demirel, Mehmet C.; González, Gorka M.; Mai, Juliane; Stisen, Simon

    2016-04-01

    Distributed hydrologic models are typically calibrated against streamflow observations at the outlet of the basin. Along with these observations from gauging stations, satellite-based estimates offer independent evaluation data, such as remotely sensed actual evapotranspiration (aET) and land surface temperature. The primary objective of the study is to compare model calibrations against traditional downstream discharge measurements with calibrations against simulated spatial patterns and combinations of both types of observations. While discharge-based model calibration typically improves the temporal dynamics of the model, it yields minimal improvement of the simulated spatial patterns. In contrast, objective functions specifically targeting the spatial pattern performance could potentially increase the spatial model performance. However, most modeling studies, including the model formulations and parameterization, are not designed to actually change the simulated spatial pattern during calibration. This study investigates the potential benefits of incorporating spatial patterns from MODIS data to calibrate the mesoscale hydrologic model (mHM). This model is selected as it allows for a change in the spatial distribution of key soil parameters through the optimization of pedo-transfer function parameters and includes options for using fully distributed daily Leaf Area Index (LAI) values directly as input. In addition, the simulated aET can be estimated at a spatial resolution suitable for comparison to the spatial patterns observed with MODIS data. To increase our control over spatial calibration, we introduced three additional parameters to the model. These new parameters are part of an empirical equation used to calculate the crop coefficient (Kc) from daily LAI maps and to update potential evapotranspiration (PET) as model input; this replaces the uniform (or aspect-driven) PET correction factor used in the mHM model (version 5.3). We selected the 20 most important parameters out of 53 mHM parameters based on a comprehensive sensitivity analysis (Cuntz et al., 2015). We calibrated the 1 km daily mHM for the Skjern basin in Denmark using the Shuffled Complex Evolution (SCE) algorithm and inputs at different spatial scales, i.e., meteorological data at 10 km and morphological data at 250 m. We used correlation coefficients between observed monthly (summer months only) MODIS data, calculated from cloud-free days over the calibration period from 2001 to 2008, and simulated aET from mHM over the same period. Other metrics, e.g., mapcurves and the fractions skill score, are also included in our objective function to assess the co-location of the grid cells. The preliminary results show that multi-objective calibration of mHM against observed streamflow and spatial patterns together does not significantly reduce the spatial errors in aET, while it improves the streamflow simulations. This is a strong signal for further investigation of the multi-parameter regionalization affecting spatial aET patterns and of the weighting of the spatial metrics in the objective function relative to the streamflow metrics.
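
    A minimal sketch of the kind of LAI-based PET scaling described above; the saturating functional form and all parameter values (kc_min, kc_max, k_extinction) are assumptions for illustration, not mHM's actual equation or calibrated values.

    ```python
    import numpy as np

    def crop_coefficient(lai, kc_min=0.6, kc_max=1.3, k_extinction=0.6):
        """Assumed empirical LAI -> Kc relation: saturates as canopy closes."""
        return kc_min + (kc_max - kc_min) * (1.0 - np.exp(-k_extinction * lai))

    def adjust_pet(pet_grid, lai_grid):
        """Scale reference PET cell by cell instead of with one uniform factor."""
        return crop_coefficient(lai_grid) * pet_grid

    lai = np.array([[0.5, 2.0], [4.0, 6.0]])  # daily LAI map (toy 2x2 grid)
    pet = np.full_like(lai, 3.0)              # reference PET, mm/day
    print(adjust_pet(pet, lai))               # spatially distributed PET
    ```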

  18. Governance and assessment in a widely distributed medical education program in Australia.

    PubMed

    Solarsh, Geoff; Lindley, Jennifer; Whyte, Gordon; Fahey, Michael; Walker, Amanda

    2012-06-01

    The learning objectives, curriculum content, and assessment standards for distributed medical education programs must be aligned across the health care systems and community contexts in which their students train. In this article, the authors describe their experiences at Monash University implementing a distributed medical education program at metropolitan, regional, and rural Australian sites and an offshore Malaysian site, using four different implementation models. Standardizing learning objectives, curriculum content, and assessment standards across all sites while allowing for site-specific implementation models created challenges for educational alignment. At the same time, this diversity created opportunities to customize the curriculum to fit a variety of settings and for innovations that have enriched the educational system as a whole. Developing these distributed medical education programs required a detailed review of Monash's learning objectives and curriculum content and their relevance to the four different sites. It also required a review of assessment methods to ensure an identical and equitable system of assessment for students at all sites. It additionally demanded changes to the systems of governance and the management of the educational program away from a centrally constructed and mandated curriculum to more collaborative approaches to curriculum design and implementation involving discipline leaders at multiple sites. Distributed medical education programs, like that at Monash, in which cohorts of students undertake the same curriculum in different contexts, provide potentially powerful research platforms to compare different pedagogical approaches to medical education and the impact of context on learning outcomes.

  19. Influence of Trabecular Bone on Peri-Implant Stress and Strain Based on Micro-CT Finite Element Modeling of Beagle Dog

    PubMed Central

    Liao, Sheng-hui; Zhu, Xing-hao; Xie, Jing; Sohodeb, Vikesh Kumar; Ding, Xi

    2016-01-01

    The objective of this investigation is to analyze the influence of trabecular microstructure modeling on the biomechanical distribution of the implant-bone interface. Two three-dimensional finite element mandible models, one with trabecular microstructure (a refined model) and one with macrostructure (a simplified model), were built. The equivalent stress values at the implant-bone interface were higher in the refined model than in the simplified model, whereas the strain values were lower. The distributions of stress and strain were more uniform in the refined model of trabecular microstructure, in which stress and strain were mainly concentrated in trabecular bone. It was concluded that simulation of trabecular bone microstructure had a significant effect on the distribution of stress and strain at the implant-bone interface. These results suggest that trabecular structures could disperse stress and strain and serve as load buffers. PMID:27403424

  20. Influence of Trabecular Bone on Peri-Implant Stress and Strain Based on Micro-CT Finite Element Modeling of Beagle Dog.

    PubMed

    Liao, Sheng-Hui; Zhu, Xing-Hao; Xie, Jing; Sohodeb, Vikesh Kumar; Ding, Xi

    2016-01-01

    The objective of this investigation is to analyze the influence of trabecular microstructure modeling on the biomechanical distribution of the implant-bone interface. Two three-dimensional finite element mandible models, one with trabecular microstructure (a refined model) and one with macrostructure (a simplified model), were built. The equivalent stress values at the implant-bone interface were higher in the refined model than in the simplified model, whereas the strain values were lower. The distributions of stress and strain were more uniform in the refined model of trabecular microstructure, in which stress and strain were mainly concentrated in trabecular bone. It was concluded that simulation of trabecular bone microstructure had a significant effect on the distribution of stress and strain at the implant-bone interface. These results suggest that trabecular structures could disperse stress and strain and serve as load buffers.

  1. Asymptotic Behavior of the Stock Price Distribution Density and Implied Volatility in Stochastic Volatility Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gulisashvili, Archil, E-mail: guli@math.ohiou.ed; Stein, Elias M., E-mail: stein@math.princeton.ed

    2010-06-15

    We study the asymptotic behavior of distribution densities arising in stock price models with stochastic volatility. The main objects of our interest in the present paper are the density of time averages of the squared volatility process and the density of the stock price process in the Stein-Stein and the Heston model. We find explicit formulas for leading terms in asymptotic expansions of these densities and give error estimates. As an application of our results, sharp asymptotic formulas for the implied volatility in the Stein-Stein and the Heston model are obtained.
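
    For reference, the two models named above are commonly written as the following SDE systems; the notation (μ, κ, θ, σ, q, m, β) is the standard one and is assumed here rather than taken from the abstract.

    ```latex
    % Heston: the squared volatility v_t follows a square-root diffusion
    dS_t = \mu S_t \, dt + \sqrt{v_t}\, S_t \, dW_t^{(1)}, \qquad
    dv_t = \kappa(\theta - v_t)\, dt + \sigma \sqrt{v_t}\, dW_t^{(2)}
    % Stein-Stein: the volatility is |\sigma_t|, with \sigma_t an
    % Ornstein-Uhlenbeck process
    d\sigma_t = q(m - \sigma_t)\, dt + \beta \, dW_t^{(2)}
    ```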

  2. Evolution of Scientific and Technical Information Distribution

    NASA Technical Reports Server (NTRS)

    Esler, Sandra; Nelson, Michael L.

    1998-01-01

    World Wide Web (WWW) and related information technologies are transforming the distribution of scientific and technical information (STI). We examine 11 recent, functioning digital libraries focusing on the distribution of STI publications, including journal articles, conference papers, and technical reports. We introduce four main categories of digital library projects, classified by architecture (distributed vs. centralized) and contributor (traditional publisher vs. authoring individual/organization). Many digital library prototypes merely automate existing publishing practices or focus solely on digitizing the output of the publishing cycle, without sampling and capturing elements of its input. Still others do not consider the large body of "gray literature" for distribution. We address these deficiencies in the current model of STI exchange by suggesting methods for expanding the scope and target of digital libraries: focusing on a greater source of technical publications and using "buckets," an object-oriented construct for grouping logically related information objects, to include holdings other than technical publications.

  3. Predicting objective function weights from patient anatomy in prostate IMRT treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.

    2013-12-15

    Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the l₂ distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.
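
    A minimal sketch of the prediction-plus-validation recipe described above, assuming scikit-learn is available; the overlap ratios and weights below are fabricated placeholders purely to make the example run, not the study's patient data.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut

    # Single predictor: rectum/bladder overlap-with-PTV ratio (placeholder data).
    overlap_ratio = np.array([0.4, 0.7, 1.1, 1.5, 2.2, 3.0]).reshape(-1, 1)
    rectum_weight = np.array([0.15, 0.22, 0.33, 0.41, 0.55, 0.63])

    # Leave-one-out cross validation of the linear prediction model.
    errors = []
    for train, test in LeaveOneOut().split(overlap_ratio):
        model = LinearRegression().fit(overlap_ratio[train], rectum_weight[train])
        pred = model.predict(overlap_ratio[test])[0]
        errors.append(abs(pred - rectum_weight[test][0]))
    print("mean LOO error:", np.mean(errors))

    # Remaining weights follow the paper's convention: femoral heads fixed at
    # 1% each, bladder = 1 - rectum - femoral head weights.
    ```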

  4. Predicting objective function weights from patient anatomy in prostate IMRT treatment planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Taewoo, E-mail: taewoo.lee@utoronto.ca; Hammad, Muhannad; Chan, Timothy C. Y.

    Purpose: Intensity-modulated radiation therapy (IMRT) treatment planning typically combines multiple criteria into a single objective function by taking a weighted sum. The authors propose a statistical model that predicts objective function weights from patient anatomy for prostate IMRT treatment planning. This study provides a proof of concept for geometry-driven weight determination. Methods: A previously developed inverse optimization method (IOM) was used to generate optimal objective function weights for 24 patients using their historical treatment plans (i.e., dose distributions). These IOM weights were around 1% for each of the femoral heads, while bladder and rectum weights varied greatly between patients. A regression model was developed to predict a patient's rectum weight using the ratio of the overlap volume of the rectum and bladder with the planning target volume at a 1 cm expansion as the independent variable. The femoral head weights were fixed to 1% each and the bladder weight was calculated as one minus the rectum and femoral head weights. The model was validated using leave-one-out cross validation. Objective values and dose distributions generated through inverse planning using the predicted weights were compared to those generated using the original IOM weights, as well as an average of the IOM weights across all patients. Results: The IOM weight vectors were on average six times closer to the predicted weight vectors than to the average weight vector, using the l₂ distance. Likewise, the bladder and rectum objective values achieved by the predicted weights were more similar to the objective values achieved by the IOM weights. The difference in objective value performance between the predicted and average weights was statistically significant according to a one-sided sign test. For all patients, the difference in rectum V54.3 Gy, rectum V70.0 Gy, bladder V54.3 Gy, and bladder V70.0 Gy values between the dose distributions generated by the predicted weights and IOM weights was less than 5 percentage points. Similarly, the difference in femoral head V54.3 Gy values between the two dose distributions was less than 5 percentage points for all but one patient. Conclusions: This study demonstrates a proof of concept that patient anatomy can be used to predict appropriate objective function weights for treatment planning. In the long term, such geometry-driven weights may serve as a starting point for iterative treatment plan design or may provide information about the most clinically relevant region of the Pareto surface to explore.

  5. Body Mass Index, Nutrient Intakes, Health Behaviours and Nutrition Knowledge: A Quantile Regression Application in Taiwan

    ERIC Educational Resources Information Center

    Chen, Shih-Neng; Tseng, Jauling

    2010-01-01

    Objective: To assess various marginal effects of nutrient intakes, health behaviours and nutrition knowledge on the entire distribution of body mass index (BMI) across individuals. Design: Quantitative and distributional study. Setting: Taiwan. Methods: This study applies Becker's (1965) model of health production to construct an individual's BMI…

  6. Locating Object Knowledge in the Brain: Comment on Bowers's (2009) Attempt to Revive the Grandmother Cell Hypothesis

    ERIC Educational Resources Information Center

    Plaut, David C.; McClelland, James L.

    2010-01-01

    According to Bowers, the finding that there are neurons with highly selective responses to familiar stimuli supports theories positing localist representations over approaches positing the type of distributed representations typically found in parallel distributed processing (PDP) models. However, his conclusions derive from an overly narrow view…

  7. An evolving model of online bipartite networks

    NASA Astrophysics Data System (ADS)

    Zhang, Chu-Xu; Zhang, Zi-Ke; Liu, Chuang

    2013-12-01

    Understanding the structure and evolution of online bipartite networks is a significant task since they play a crucial role in various e-commerce services nowadays. Recently, various models have been proposed, resulting in either power-law or exponential degree distributions. However, many empirical results show that the user degree distribution actually follows a shifted power law, the so-called Mandelbrot's law, which cannot be fully described by previous models. In this paper, we propose an evolving model that considers two different user behaviors: random and preferential attachment. Extensive empirical results on two real bipartite networks, Delicious and CiteULike, show that the theoretical model can well characterize the structure of real networks for both user and object degree distributions. In addition, we introduce a structural parameter p to demonstrate that the hybrid user behavior leads to the shifted power-law degree distribution, and that the extent of the power-law tail increases with p. The proposed model may shed some light on the underlying laws governing the structure of real online bipartite networks.
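
    A minimal sketch of a hybrid attachment rule of the kind described, under the assumption that each new user-object link is preferential with probability p and uniformly random otherwise; the network sizes and the value of p are illustrative only.

    ```python
    import random

    def grow_bipartite(n_users, n_objects, links_per_user, p, seed=0):
        """Grow object degrees as users arrive; each link is preferential
        (proportional to current object degree) with probability p, and
        uniformly random otherwise."""
        rng = random.Random(seed)
        degree = [1] * n_objects          # degree-1 smoothing so PA is defined
        targets = list(range(n_objects))  # candidate objects for each link
        for _ in range(n_users):
            for _ in range(links_per_user):
                if rng.random() < p:      # preferential attachment
                    idx = rng.choices(targets, weights=degree, k=1)[0]
                else:                     # uniform random attachment
                    idx = rng.randrange(n_objects)
                degree[idx] += 1
        return degree

    degrees = grow_bipartite(n_users=2000, n_objects=300, links_per_user=5, p=0.7)
    print(sorted(degrees)[-10:])  # hubs produced by the preferential component
    ```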

  8. Statistical procedures for evaluating daily and monthly hydrologic model predictions

    USGS Publications Warehouse

    Coffey, M.E.; Workman, S.R.; Taraba, J.L.; Fogle, A.W.

    2004-01-01

    The overall study objective was to evaluate the applicability of different qualitative and quantitative methods for comparing daily and monthly SWAT computer model hydrologic streamflow predictions to observed data, and to recommend statistical methods for use in future model evaluations. Statistical methods were tested using daily streamflows and monthly equivalent runoff depths. The statistical techniques included linear regression, Nash-Sutcliffe efficiency, nonparametric tests, t-test, objective functions, autocorrelation, and cross-correlation. None of the methods specifically addressed the non-normal distribution of, and the dependence between, data points in the daily predicted and observed series. Of the tested methods, median objective functions, sign test, autocorrelation, and cross-correlation were most applicable for the daily data. The robust coefficient of determination (CD*) and robust modeling efficiency (EF*) objective functions were the preferred methods for daily model results due to the ease of comparing these values with a fixed ideal reference value of one. Predicted and observed monthly totals were more normally distributed, and there was less dependence between individual monthly totals than was observed for the corresponding predicted and observed daily values. More statistical methods were available for comparing SWAT model-predicted and observed monthly totals. The 1995 monthly SWAT model predictions and observed data had a regression r² of 0.70 and a Nash-Sutcliffe efficiency of 0.41, and the t-test failed to reject the hypothesis of equal data means. The Nash-Sutcliffe coefficient and the r² coefficient were the preferred methods for monthly results due to the ability to compare these coefficients to a fixed ideal value of one.
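
    As a concrete example of one of the quantitative measures above, a sketch of the standard Nash-Sutcliffe efficiency; the runoff values are placeholders, not the study's data.

    ```python
    import numpy as np

    def nash_sutcliffe(observed, simulated):
        """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
        predicts no better than the mean of the observations."""
        observed, simulated = np.asarray(observed), np.asarray(simulated)
        return 1.0 - np.sum((observed - simulated) ** 2) / \
                     np.sum((observed - observed.mean()) ** 2)

    # Toy monthly runoff depths (placeholder values only):
    obs = [12.0, 30.5, 22.1, 8.4, 4.2, 15.8]
    sim = [10.2, 28.0, 25.3, 9.1, 5.0, 13.9]
    print(round(nash_sutcliffe(obs, sim), 3))
    ```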

  9. The Effect of Enhanced Diabatic Heating on Stratospheric Circulation. Degree awarded by Michigan University, 1997.

    NASA Technical Reports Server (NTRS)

    Kleb, Mary M.

    1997-01-01

    The objective of this research focuses on the stratospheric dynamical response to the increase in aerosol loading and the subsequent enhanced diabatic heating resulting from the eruption of Mt. Pinatubo. The Langley Research Center three-dimensional general circulation model and the modifications made to that model for this study (the addition of a hydrogen fluoride tracer and a diabatic heating enhancement) are described. The unperturbed hydrogen fluoride distribution is compared to the hydrogen fluoride distribution measured by HALOE. A comparison of control and perturbed model runs is presented.

  10. Insertion algorithms for network model database management systems

    NASA Astrophysics Data System (ADS)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems and develop a new sequential algorithm for it. We also suggest a distributed version of the algorithm.

  11. Simulation of the communication system between an AUV group and a surface station

    NASA Astrophysics Data System (ADS)

    Burtovaya, D.; Demin, A.; Demeshko, M.; Moiseev, A.; Kudryashova, A.

    2017-01-01

    An object model for simulating the communication system between a group of autonomous underwater vehicles (AUVs) and a surface station is proposed in the paper. The model is implemented on the basis of the software package "Object Distribution Simulation". All structural relationships and behavioral details are described. An application was developed on the basis of the proposed model and is now used for computational experiments simulating the communication system between the AUV group and a surface station.

  12. Vegetable parenting practices scale: Item response modeling analyses

    USDA-ARS?s Scientific Manuscript database

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  13. Experimental Verification of Modeled Thermal Distribution Produced by a Piston Source in Physiotherapy Ultrasound

    PubMed Central

    Lopez-Haro, S. A.; Leija, L.

    2016-01-01

    Objectives. To present a quantitative comparison of thermal patterns produced by the piston-in-a-baffle approach with those generated by a physiotherapy ultrasonic device, and to show the dependency between thermal patterns and acoustic intensity distributions. Methods. The finite element (FE) method was used to model an ideal acoustic field and the resulting thermal pattern, which were compared with the experimental acoustic and temperature distributions produced by a real ultrasonic applicator. A thermal model using the measured acoustic profile as input is also presented for comparison. Temperature measurements were carried out with thermocouples inserted in a muscle phantom. The insertion site of the thermocouples was monitored with ultrasound imaging. Results. Modeled and measured thermal profiles were compared within the first 10 cm of depth. The ideal acoustic field did not adequately represent the measured field, yielding different temperature profiles (errors of 10% to 20%). The experimental field was concentrated near the transducer, producing a region with higher temperatures, while the modeled ideal temperature was linearly distributed along the depth. The error was reduced to 7% when the measured acoustic field was introduced as the input variable in the FE temperature modeling. Conclusions. Temperature distributions are strongly related to the acoustic field distributions. PMID:27999801

  14. Memory-Based Multiagent Coevolution Modeling for Robust Moving Object Tracking

    PubMed Central

    Wang, Yanjiang; Qi, Yujuan; Li, Yongping

    2013-01-01

    The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robustly tracking moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system, based on its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object by their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance changes or the occlusion recovers, and it outperforms traditional particle filter-based tracking methods. PMID:23843739

  15. Memory-based multiagent coevolution modeling for robust moving object tracking.

    PubMed

    Wang, Yanjiang; Qi, Yujuan; Li, Yongping

    2013-01-01

    The three-stage human brain memory model is incorporated into a multiagent coevolutionary process for finding the best match of the appearance of an object, and a memory-based multiagent coevolution algorithm for robustly tracking moving objects is presented in this paper. Each agent can remember, retrieve, or forget the appearance of the object through its own memory system, based on its own experience. A number of such memory-based agents are randomly distributed near the located object region and then mapped onto a 2D lattice-like environment for predicting the new location of the object by their coevolutionary behaviors, such as competition, recombination, and migration. Experimental results show that the proposed method can deal with large appearance changes and heavy occlusions when tracking a moving object. It can locate the correct object after the appearance changes or the occlusion recovers, and it outperforms traditional particle filter-based tracking methods.

  16. Real-time physiological monitoring with distributed networks of sensors and object-oriented programming techniques

    NASA Astrophysics Data System (ADS)

    Wiesmann, William P.; Pranger, L. Alex; Bogucki, Mary S.

    1998-05-01

    Remote monitoring of physiologic data from individual high-risk workers distributed over time and space is a considerable challenge. This is often due to an inadequate capability to accurately integrate large amounts of data into usable information in real time. In this report, we have used the vertical and horizontal organization of the 'fireground' as a framework to design a distributed network of sensors. In this system, sensor output is linked through a hierarchical object-oriented programming process to accurately interpret physiological data, incorporate these data into a synchronous model, and relay processed data, trends, and predictions to members of the fire incident command structure. There are several unique aspects to this approach. The first is a process to account for variability in vital parameter values for each individual's normal physiologic response by including an adaptive network in each data process. This information is used by the model in an iterative process to baseline a 'normal' physiologic response to a given stress for each individual and to detect deviations that indicate dysfunction or a significant insult. The second unique capability of the system is that it orders the information for each user, including the subject, local company officers, medical personnel, and the incident commanders. Information can be retrieved and used for training exercises and after-action analysis. Finally, this system can easily be adapted to existing communication and processing links, while incorporating the best parts of current models through the use of object-oriented programming techniques. These modern software techniques are well suited to handling multiple data processes independently over time in a distributed network.

  17. SED Modeling of 20 Massive Young Stellar Objects

    NASA Astrophysics Data System (ADS)

    Tanti, Kamal Kumar

    In this paper, we present spectral energy distribution (SED) modeling of twenty massive young stellar objects (MYSOs) and subsequently estimate different physical and structural/geometrical parameters for each of the twenty central YSO outflow candidates, along with their associated circumstellar disks and infalling envelopes. The SEDs for each of the MYSOs have been reconstructed using 2MASS, MSX, IRAS, IRAC & MIPS, SCUBA, WISE, SPIRE and IRAM data, with the help of an SED fitting tool that uses a grid of 2D radiative transfer models. Using the detailed analysis of the SEDs and the subsequent estimation of physical and geometrical parameters for the central YSO sources along with their circumstellar disks and envelopes, the cumulative distributions of the stellar, disk and envelope parameters can be analyzed. This leads to a better understanding of massive star formation processes in the respective star-forming regions in different molecular clouds.

  18. Analytical modeling of relative luminescence efficiency of Al2O3:C optically stimulated luminescence detectors exposed to high-energy heavy charged particles.

    PubMed

    Sawakuchi, Gabriel O; Yukihara, Eduardo G

    2012-01-21

    The objective of this work is to test analytical models to calculate the luminescence efficiency of Al2O3:C optically stimulated luminescence detectors (OSLDs) exposed to heavy charged particles with energies relevant to space dosimetry and particle therapy. We used the track structure model to obtain an analytical expression for the relative luminescence efficiency based on the average radial dose distribution produced by the heavy charged particle. We compared the relative luminescence efficiency calculated using seven different radial dose distribution models, including a modified model introduced in this work, with experimental data. The results obtained using the modified radial dose distribution function agreed within 20% with experimental Al2O3:C OSLD relative luminescence efficiency data for particles with atomic number ranging from 1 to 54 and linear energy transfer in water from 0.2 up to 1368 keV µm⁻¹. In spite of the significant improvement over other radial dose distribution models, understanding of the underlying physical processes associated with these radial dose distribution models remains elusive and may represent a limitation of the track structure model.

  19. Decision Support for Renewal of Wastewater Collection and Water Distribution Systems

    EPA Science Inventory

    The objective of this study was to identify the current decision support methodologies, models and approaches being used for determining how to rehabilitate or replace underground utilities; identify the critical gaps of these current models through comparison with case history d...

  20. STOCHASTIC SIMULATION OF FIELD-SCALE PESTICIDE TRANSPORT USING OPUS AND GLEAMS

    EPA Science Inventory

    Incorporating variability in soil and chemical properties into root zone leaching models should provide a better representation of pollutant distribution in natural field conditions. Our objective was to determine if a more mechanistic rate-based model (Opus) would predict soil w...

  1. A 14 h⁻³ Gpc³ study of cosmic homogeneity using BOSS DR12 quasar sample

    NASA Astrophysics Data System (ADS)

    Laurent, Pierre; Le Goff, Jean-Marc; Burtin, Etienne; Hamilton, Jean-Christophe; Hogg, David W.; Myers, Adam; Ntelis, Pierros; Pâris, Isabelle; Rich, James; Aubourg, Eric; Bautista, Julian; Delubac, Timothée; du Mas des Bourboux, Hélion; Eftekharzadeh, Sarah; Palanque Delabrouille, Nathalie; Petitjean, Patrick; Rossi, Graziano; Schneider, Donald P.; Yeche, Christophe

    2016-11-01

    The BOSS quasar sample is used to study cosmic homogeneity with a 3D survey in the redshift range 2.2 < z < 2.8. We measure the count-in-sphere, N(<r), i.e. the average number of objects around a given object, and its logarithmic derivative, the fractal correlation dimension, D₂(r). For a homogeneous distribution N(<r) ∝ r³ and D₂(r) = 3. Due to the uncertainty on tracer density evolution, 3D surveys can only probe homogeneity up to a redshift dependence, i.e. they probe so-called "spatial isotropy". Our data demonstrate spatial isotropy of the quasar distribution in the redshift range 2.2 < z < 2.8 in a model-independent way, independent of any FLRW fiducial cosmology, resulting in 3 − ⟨D₂⟩ < 1.7 × 10⁻³ (2σ) over the range 250 < r < 1200 h⁻¹ Mpc for the quasar distribution. If we assume that quasars do not have a bias much less than unity, this implies spatial isotropy of the matter distribution on large scales. Then, combining with the Copernican principle, we finally get homogeneity of the matter distribution on large scales. Alternatively, using a flat ΛCDM fiducial cosmology with CMB-derived parameters, and measuring the quasar bias relative to this ΛCDM model, our data provide a consistency check of the model, in terms of how homogeneous the Universe is on different scales. D₂(r) is found to be compatible with our ΛCDM model on the whole 10 < r < 1200 h⁻¹ Mpc range. For the matter distribution we obtain 3 − ⟨D₂⟩ < 5 × 10⁻⁵ (2σ) over the range 250 < r < 1200 h⁻¹ Mpc, consistent with homogeneity on large scales.
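
    A minimal numerical sketch of the count-in-sphere statistic and its logarithmic slope, using a brute-force pair count on a mock uniform catalog; the survey's actual estimator (with completeness weights and a fiducial cosmology for distances) is considerably more involved.

    ```python
    import numpy as np

    def correlation_dimension(points, radii):
        """Estimate D2(r) as the local log-slope of the mean count-in-sphere."""
        diffs = points[:, None, :] - points[None, :, :]
        dists = np.sqrt((diffs ** 2).sum(-1))
        np.fill_diagonal(dists, np.inf)      # exclude self-pairs
        n_in_sphere = [(dists < r).sum(axis=1).mean() for r in radii]
        return np.gradient(np.log(n_in_sphere), np.log(radii))

    rng = np.random.default_rng(0)
    points = rng.uniform(0, 1000, size=(500, 3))  # homogeneous mock catalog
    radii = np.logspace(1.5, 2.5, 10)
    # Should hover near 3 for a homogeneous sample (edge effects bias the
    # largest radii downward in this toy box).
    print(correlation_dimension(points, radii))
    ```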

  2. Simulation on Poisson and negative binomial models of count road accident modeling

    NASA Astrophysics Data System (ADS)

    Sapuan, M. S.; Razali, A. M.; Zamzuri, Z. H.; Ibrahim, K.

    2016-11-01

    Accident count data have often been shown to exhibit overdispersion. On the other hand, the data might contain excess zero counts. A simulation study was conducted to create scenarios in which accidents happen at a T-junction, with the assumption that the dependent variables of the generated data follow a certain distribution, namely the Poisson or negative binomial distribution, with different sample sizes from n=30 to n=500. The study objective was accomplished by fitting a Poisson regression, a negative binomial regression and a hurdle negative binomial model to the simulated data. Model validation was compared, and the simulation results show that, for each sample size, not all models fit the data well even when the data were generated from the model's own distribution, especially when the sample size is large. Furthermore, larger sample sizes yield more zero accident counts in the dataset.
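
    A minimal sketch in the spirit of the study, assuming statsmodels is available: counts are generated from a Poisson model with a hypothetical traffic covariate, then Poisson and negative binomial GLMs are fitted and compared by AIC. The covariate and coefficient values are illustrative only.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 200
    traffic = rng.uniform(1, 5, n)          # hypothetical exposure covariate
    y = rng.poisson(lam=0.8 * traffic)      # counts truly generated as Poisson

    X = sm.add_constant(traffic)
    poisson_fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    negbin_fit = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=1.0)).fit()
    # The Poisson fit should win on AIC here, since the data are not overdispersed.
    print("Poisson AIC:", poisson_fit.aic, " NegBin AIC:", negbin_fit.aic)
    ```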

  3. Dispersion models and sampling of cacao mirid bug Sahlbergella singularis (Hemiptera: Miridae) on Theobroma Cacao in southern Cameroon.

    PubMed

    Bisseleua, D H B; Vidal, Stefan

    2011-02-01

    The spatio-temporal distribution of Sahlbergella singularis Haglung, a major pest of cacao trees (Theobroma cacao) (Malvaceae), was studied for 2 yr in traditional cacao forest gardens in the humid forest area of southern Cameroon. The first objective was to analyze the dispersion of this insect on cacao trees. The second objective was to develop sampling plans based on fixed levels of precision for estimating S. singularis populations. The following models were used to analyze the data: Taylor's power law, Iwao's patchiness regression, the Nachman model, and the negative binomial distribution. Our results document that Taylor's power law was a better fit for the data than the Iwao and Nachman models. Taylor's b and Iwao's β were both significantly >1, indicating that S. singularis aggregated on specific trees. This result was further supported by the calculated common k of 1.75444. Iwao's α was significantly <0, indicating that the basic distribution component of S. singularis was the individual insect. Comparison of negative binomial (NBD) and Nachman models indicated that the NBD model was appropriate for studying S. singularis distribution. Optimal sample sizes for fixed precision levels of 0.10, 0.15, and 0.25 were estimated with Taylor's regression coefficients. Required sample sizes increased dramatically with increasing levels of precision. This is the first study on S. singularis dispersion in cacao plantations. Sampling plans, presented here, should be a tool for research on population dynamics and pest management decisions of mirid bugs on cacao. © 2011 Entomological Society of America
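
    A sketch of how Taylor's power law is typically fitted, regressing log sample variance on log sample mean across sampling units; the per-tree counts below are placeholders, not the study's data.

    ```python
    import numpy as np

    # Taylor's power law: s^2 = a * m^b, fitted by OLS on log-log values.
    counts = [np.array(c) for c in (
        [0, 1, 0, 2, 0, 0], [1, 3, 0, 5, 2, 1],
        [2, 6, 1, 9, 4, 2], [0, 0, 1, 0, 0, 1])]
    means = np.array([c.mean() for c in counts])
    variances = np.array([c.var(ddof=1) for c in counts])

    b, log_a = np.polyfit(np.log(means), np.log(variances), 1)
    print("a =", np.exp(log_a), " b =", b)  # b > 1 indicates aggregation
    ```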

  4. Hyperspectral imaging simulation of object under sea-sky background

    NASA Astrophysics Data System (ADS)

    Wang, Biao; Lin, Jia-xuan; Gao, Wei; Yue, Hui

    2016-10-01

    Remote sensing image simulation plays an important role in spaceborne/airborne payload demonstration and algorithm development. Hyperspectral imaging is valuable in marine monitoring and search and rescue. To meet the demand for spectral imaging of objects in complex sea scenes, a physics-based method for simulating spectral images of objects in a sea scene is proposed. Building on an imaging simulation model that considers the object, background, atmospheric conditions and sensor, it is possible to examine the influence of wind speed, atmospheric conditions and other environmental factors on spectral image quality in a complex sea scene. Firstly, the sea scattering model is established based on the Phillips sea spectral model, rough-surface scattering theory and the volume scattering characteristics of water. The measured bidirectional reflectance distribution function (BRDF) data of objects are fitted to a statistical model. MODTRAN software is used to obtain the solar illumination on the sea, the sky brightness, the atmospheric transmittance from sea to sensor and the atmospheric backscattered radiance, and a Monte Carlo ray tracing method is used to calculate the composite scattering of the sea-surface object and the spectral image. Finally, the object spectrum is obtained by spatial transformation, radiometric degradation and the addition of noise. The model connects the spectral image with the environmental parameters, the object parameters and the sensor parameters, providing a tool for payload demonstration and algorithm development.

  5. The redshift distribution of cosmological samples: a forward modeling approach

    NASA Astrophysics Data System (ADS)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam; Refregier, Alexandre; Bruderer, Claudio; Nicola, Andrina

    2017-08-01

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  6. The redshift distribution of cosmological samples: a forward modeling approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herbel, Jörg; Kacprzak, Tomasz; Amara, Adam

    Determining the redshift distribution n(z) of galaxy samples is essential for several cosmological probes including weak lensing. For imaging surveys, this is usually done using photometric redshifts estimated on an object-by-object basis. We present a new approach for directly measuring the global n(z) of cosmological galaxy samples, including uncertainties, using forward modeling. Our method relies on image simulations produced using UFig (Ultra Fast Image Generator) and on ABC (Approximate Bayesian Computation) within the MCCL (Monte-Carlo Control Loops) framework. The galaxy population is modeled using parametric forms for the luminosity functions, spectral energy distributions, sizes and radial profiles of both blue and red galaxies. We apply exactly the same analysis to the real data and to the simulated images, which also include instrumental and observational effects. By adjusting the parameters of the simulations, we derive a set of acceptable models that are statistically consistent with the data. We then apply the same cuts to the simulations that were used to construct the target galaxy sample in the real data. The redshifts of the galaxies in the resulting simulated samples yield a set of n(z) distributions for the acceptable models. We demonstrate the method by determining n(z) for a cosmic shear like galaxy sample from the 4-band Subaru Suprime-Cam data in the COSMOS field. We also complement this imaging data with a spectroscopic calibration sample from the VVDS survey. We compare our resulting posterior n(z) distributions to the one derived from photometric redshifts estimated using 36 photometric bands in COSMOS and find good agreement. This offers good prospects for applying our approach to current and future large imaging surveys.

  7. Distributive Effects of Forest Service Attempts to Maintain Community Stability

    Treesearch

    Steven E. Daniels; William F. Hyde; David N. Wear

    1991-01-01

    Community stability is an objective of USDA Forest Service timber sales. This paper examines that objective, and the success the Forest Service can have in attaining it, through its intended maintenance of a constant volume timber harvest schedule. We apply a three-factor, two-sector modified general equilibrium model with empirical evidence from the timber-based...

  8. The generalized truncated exponential distribution as a model for earthquake magnitudes

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-04-01

    The random distribution of small, medium and large earthquake magnitudes follows an exponential distribution (ED) according to the Gutenberg-Richter relation. But a magnitude distribution is truncated in the range of very large magnitudes because the earthquake energy is finite, and the upper tail of the exponential distribution does not fit observations well. Hence the truncated exponential distribution (TED) is frequently applied for modelling magnitude distributions in seismic hazard and risk analysis. The TED has a weak point: when two TEDs with equal parameters, except for the upper bound magnitude, are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters, except for the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. It also applies to alternative distribution models. The presented generalized truncated exponential distribution (GTED) overcomes this weakness. The ED and the TED are special cases of the GTED. Different issues of statistical inference are also discussed, and an example with empirical data is presented.
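
    As context for the weak point discussed above, a sketch of the standard TED density (notation assumed); a mixture of two such densities with different upper bounds is visibly no longer of this form.

    ```latex
    % Truncated exponential density for magnitudes m_0 <= m <= m_max:
    f(m) = \frac{\beta\, e^{-\beta (m - m_0)}}{1 - e^{-\beta (m_{\max} - m_0)}},
    \qquad m_0 \le m \le m_{\max}
    ```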

  9. AgroEcoSystem-Watershed (AgES-W) model evaluation for streamflow and nitrogen/sediment dynamics on a midwest agricultural watershed

    USDA-ARS?s Scientific Manuscript database

    AgroEcoSystem-Watershed (AgES-W) is a modular, Java-based spatially distributed model which implements hydrologic/water quality simulation components under the Object Modeling System Version 3 (OMS3). The AgES-W model was previously evaluated for streamflow and recently has been enhanced with the ad...

  10. A Latent Class Multidimensional Scaling Model for Two-Way One-Mode Continuous Rating Dissimilarity Data

    ERIC Educational Resources Information Center

    Vera, J. Fernando; Macias, Rodrigo; Heiser, Willem J.

    2009-01-01

    In this paper, we propose a cluster-MDS model for two-way one-mode continuous rating dissimilarity data. The model aims at partitioning the objects into classes and simultaneously representing the cluster centers in a low-dimensional space. Under the normal distribution assumption, a latent class model is developed in terms of the set of…

  11. Model-checking techniques based on cumulative residuals.

    PubMed

    Lin, D Y; Wei, L J; Ying, Z

    2002-03-01

    Residuals have long been used for graphical and numerical examinations of the adequacy of regression models. Conventional residual analysis based on the plots of raw residuals or their smoothed curves is highly subjective, whereas most numerical goodness-of-fit tests provide little information about the nature of model misspecification. In this paper, we develop objective and informative model-checking techniques by taking the cumulative sums of residuals over certain coordinates (e.g., covariates or fitted values) or by considering some related aggregates of residuals, such as moving sums and moving averages. For a variety of statistical models and data structures, including generalized linear models with independent or dependent observations, the distributions of these stochastic processes under the assumed model can be approximated by the distributions of certain zero-mean Gaussian processes whose realizations can be easily generated by computer simulation. Each observed process can then be compared, both graphically and numerically, with a number of realizations from the Gaussian process. Such comparisons enable one to assess objectively whether a trend seen in a residual plot reflects model misspecification or natural variation. The proposed techniques are particularly useful in checking the functional form of a covariate and the link function. Illustrations with several medical studies are provided.
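
    An illustrative sketch of the idea, not Lin, Wei and Ying's exact construction: the observed cumulative-residual path is compared with paths built from sign-randomized residuals, standing in for the zero-mean Gaussian realizations described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    x = np.sort(rng.uniform(0, 1, 300))
    y = 2.0 * x + rng.normal(0, 0.3, 300)   # data truly linear in x

    beta = np.polyfit(x, y, 1)              # fit the assumed linear model
    resid = y - np.polyval(beta, x)
    observed_path = np.cumsum(resid)        # residuals cumulated along x

    # Null reference paths from sign-randomized residuals (a crude stand-in
    # for the simulated Gaussian processes in the paper).
    null_paths = np.array([np.cumsum(resid * rng.choice([-1, 1], resid.size))
                           for _ in range(1000)])
    p_value = (np.abs(null_paths).max(axis=1) >=
               np.abs(observed_path).max()).mean()
    print("approximate supremum-test p-value:", p_value)
    ```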

  12. A method for optimizing multi-objective reservoir operation upon human and riverine ecosystem demands

    NASA Astrophysics Data System (ADS)

    Ai, Xueshan; Dong, Zuo; Mo, Mingzhu

    2017-04-01

    Optimal reservoir operation is generally a multi-objective problem. In practice, most reservoir operation optimization problems involve conflicting objectives for which no single solution can simultaneously optimize all purposes; rather, there exists a set of well-distributed non-inferior solutions, the Pareto frontier. Moreover, most reservoir operation rules pursue greater social and economic benefits at the expense of the ecological environment, resulting in the degradation of riverine ecology and a reduction in aquatic biodiversity. To overcome these drawbacks, this study developed a multi-objective model for reservoir operation with the conflicting functions of hydroelectric energy generation, irrigation, and ecological protection. To solve the model, whose objectives are to maximize energy production and to maximize the water-demand satisfaction rates of irrigation and ecology, we proposed a multi-objective optimization method with variable penalty coefficients (VPC), which integrates dynamic programming (DP) with discrete differential dynamic programming (DDDP) and generates well-distributed non-inferior solutions along the Pareto front by varying the penalty coefficients of the different objectives. The method was applied over the course of a year to Donggu, an existing multi-annual storage reservoir in China with multiple purposes. The case study results showed a clear trade-off relationship between each pair of objectives and a good set of Pareto-optimal solutions, providing a reference for reservoir decision makers.
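
    The idea of tracing non-inferior solutions by varying coefficients can be illustrated on a toy two-objective allocation problem, as in the sketch below. This uses a simple weighted scalarization rather than the paper's VPC method embedded in a DP/DDDP recursion, and both objective functions are invented.

```python
import numpy as np

# Toy stand-in for two conflicting objectives (e.g. energy vs. ecology):
# allocating a fixed water volume w in [0, 1] between two uses.
def f_energy(w):   # benefit from hydropower release
    return np.sqrt(w)

def f_ecology(w):  # benefit from environmental flow
    return np.sqrt(1.0 - w)

# Sweep the weighting coefficient to trace non-inferior solutions,
# mimicking how varying penalty coefficients exposes the Pareto front.
grid = np.linspace(0.0, 1.0, 1001)
front = []
for lam in np.linspace(0.05, 0.95, 19):
    scores = lam * f_energy(grid) + (1.0 - lam) * f_ecology(grid)
    w_best = grid[np.argmax(scores)]
    front.append((lam, f_energy(w_best), f_ecology(w_best)))

for lam, fe, fc in front[::6]:
    print(f"lambda={lam:.2f}: energy={fe:.3f}, ecology={fc:.3f}")
```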

  13. IN-RESIDENCE, MULTIPLE ROUTE EXPOSURES TO CHLORPYRIFOS AND DIAZINON ESTIMATED BY INDIRECT METHOD MODELS

    EPA Science Inventory

    One of the objectives of the National Human Exposure Assessment Survey (NHEXAS) is to estimate exposures to several pollutants in multiple media and determine their distributions for the population of Arizona. This paper presents modeling methods used to estimate exposure dist...

  14. A diffuse radar scattering model from Martian surface rocks

    NASA Technical Reports Server (NTRS)

    Calvin, W. M.; Jakosky, B. M.; Christensen, P. R.

    1987-01-01

    Remote sensing of Mars has been done with a variety of instrumentation at various wavelengths. Many of these data sets can be reconciled with a surface model of bonded fines (or duricrust) which varies widely across the surface and a surface rock distribution which varies less so. A surface rock distribution map from -60 to +60 deg latitude has been generated by Christensen. Our objective is to model the diffuse component of radar reflection based on this surface distribution of rocks. The diffuse, rather than specular, scattering is modeled because the diffuse component arises due to scattering from rocks with sizes on the order of the wavelength of the radar beam. Scattering for radio waves of 12.5 cm is then indicative of the meter scale and smaller structure of the surface. The specular term is indicative of large scale surface undulations and should not be causally related to other surface physical properties. A simplified model of diffuse scattering is described along with two rock distribution models. The results of applying the models to a planet of uniform fractional rock coverage with values ranging from 5 to 20% are discussed.

  15. Extension of Ostwald Ripening Theory

    NASA Technical Reports Server (NTRS)

    Baird, J.; Naumann, R.

    1985-01-01

    The objective is to develop models based on the mean field approximation of Ostwald ripening to describe the growth of second phase droplets or crystallites. The models will include time variations in nucleation rate, control of saturation through addition of solute, precipitating agents, changes in temperature, and various surface kinetic effects. Numerical integration schemes have been developed and tested against the asymptotic solution of Lifshitz, Slyozov and Wagner (LSW). A second attractor (in addition to the LSW distribution) has been found and, contrary to the LSW theory, the final distribution is dependent on the initial distribution. A series of microgravity experiments is being planned to test this and other results from this work.

  16. Ontology for Life-Cycle Modeling of Electrical Distribution Systems: Application of Model View Definition Attributes

    DTIC Science & Technology

    2013-06-01

    Building information exchange (COBie), Building Information Modeling (BIM) ... to develop a life-cycle building model have resulted in the definition of a "core" building information model that contains general information de... develop an information-exchange Model View Definition (MVD) for building electrical systems. The objective of the current work was to document the...

  17. Analytical YORP torques model with an improved temperature distribution function

    NASA Astrophysics Data System (ADS)

    Breiter, S.; Vokrouhlický, D.; Nesvorný, D.

    2010-01-01

    Previous models of the Yarkovsky-O'Keefe-Radzievskii-Paddack (YORP) effect relied either on the zero thermal conductivity assumption, or on the solutions of the heat conduction equations assuming an infinite body size. We present the first YORP solution accounting for a finite size and non-radial direction of the surface normal vectors in the temperature distribution. The new thermal model implies the dependence of the YORP effect in rotation rate on the asteroid's conductivity. It is shown that the effect on small objects does not scale as the inverse square of the diameter, but rather as the first power of the inverse diameter.

  18. What do we gain from simplicity versus complexity in species distribution models?

    USGS Publications Warehouse

    Merow, Cory; Smith, Matthew J.; Edwards, Thomas C.; Guisan, Antoine; McMahon, Sean M.; Normand, Signe; Thuiller, Wilfried; Wuest, Rafael O.; Zimmermann, Niklaus E.; Elith, Jane

    2014-01-01

    Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence–environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy, etc.), and the variety of ways that such models can be implemented, permits substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence–environment relationships and the number of parameters used to describe them, and search for insights into whether additional complexity is informative or superfluous. By building ‘under fit’ models, having insufficient flexibility to describe observed occurrence–environment relationships, we risk misunderstanding the factors shaping species distributions. By building ‘over fit’ models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objective, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing under fitting with over fitting and consequently how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinions that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM building approaches best advances our knowledge of current and future species ranges.

  19. Deep Residual Network Predicts Cortical Representation and Organization of Visual Features for Rapid Categorization.

    PubMed

    Wen, Haiguang; Shi, Junxing; Chen, Wei; Liu, Zhongming

    2018-02-28

    The brain represents visual objects with topographic cortical patterns. To address how distributed visual representations enable object categorization, we established predictive encoding models based on a deep residual network, and trained them to predict cortical responses to natural movies. Using this predictive model, we mapped human cortical representations to 64,000 visual objects from 80 categories with high throughput and accuracy. Such representations covered both the ventral and dorsal pathways, reflected multiple levels of object features, and preserved semantic relationships between categories. In the entire visual cortex, object representations were organized into three clusters of categories: biological objects, non-biological objects, and background scenes. In a finer scale specific to each cluster, object representations revealed sub-clusters for further categorization. Such hierarchical clustering of category representations was mostly contributed by cortical representations of object features from middle to high levels. In summary, this study demonstrates a useful computational strategy to characterize the cortical organization and representations of visual features for rapid categorization.

  20. A modular approach to addressing model design, scale, and parameter estimation issues in distributed hydrological modelling

    USGS Publications Warehouse

    Leavesley, G.H.; Markstrom, S.L.; Restrepo, Pedro J.; Viger, R.J.

    2002-01-01

    A modular approach to model design and construction provides a flexible framework in which to focus the multidisciplinary research and operational efforts needed to facilitate the development, selection, and application of the most robust distributed modelling methods. A variety of modular approaches have been developed, but with little consideration for compatibility among systems and concepts. Several systems are proprietary, limiting any user interaction. The US Geological Survey modular modelling system (MMS) is a modular modelling framework that uses an open source software approach to enable all members of the scientific community to address collaboratively the many complex issues associated with the design, development, and application of distributed hydrological and environmental models. Implementation of a common modular concept is not a trivial task. However, it brings the resources of a larger community to bear on the problems of distributed modelling, provides a framework in which to compare alternative modelling approaches objectively, and provides a means of sharing the latest modelling advances. The concepts and components of the MMS are described and an example application of the MMS, in a decision-support system context, is presented to demonstrate current system capabilities. Copyright © 2002 John Wiley and Sons, Ltd.

  1. Transneptunians as probes of planet building: The Plutino size distribution

    NASA Astrophysics Data System (ADS)

    Alexandersen, M.; Gladman, B.; Kavelaars, J.; Petit, J.; Gwyn, S.

    2014-07-01

    Planetesimals that formed during planet formation are the building blocks of giant planet cores; some are preserved as large transneptunian objects (TNOs). Previous work has shown steep power-law size distributions for TNOs of diameters > 100 km. Recent results claim a dramatic roll-over or divot in the size distribution of Neptunian Trojans (1:1 resonance with Neptune) and scattering TNOs, with a significant lack of intermediate-size D < 100 km planetesimals [1,2,3]. One theoretical explanation for this is that planetesimals were born big, skipping the intermediate sizes, contrary to the expectation of bottom-up planetesimal formation. Exploration of the TNO size distribution requires more precisely calibrated detections in order to improve statistics on these results. We have searched a 32 sq. deg. area near RA=2 hr to an r-band limiting magnitude of m_r=24.6 using the Canada-France-Hawaii Telescope. This coverage was near the Neptunian L4 region to maximise our detection rate, as this is where Neptunian Trojans reside and where Plutinos (and several other resonant populations) come to perihelion. This program successfully detected and tracked 77 TNOs and Centaurs for up to 17 months, giving us both the high-quality orbits and the quantitative detection efficiency needed for precise modelling. Among our detections were one Uranian Trojan, two Neptunian Trojans, 18 Plutinos (3:2 resonance with Neptune) and other resonant objects. We test TNO size and orbital-distribution models using a survey simulator, which simulates the detectability of model objects, accounting for the survey biases. We show that the Plutino size distribution cannot continue as a rising power law past H_r ~ 8.3 (equivalent to ~100 km). A single power law is rejected at 99.5% confidence, and a knee (a broken power law to a softer slope) is also rejected. A divot (a sudden drop in the number of objects at a transition size), with parameters found independently for scattering TNOs by Shankman et al. [2], provides an excellent match. Due to our study's high-quality detection efficiency and sensitivity to H magnitudes well past the transition, we show that the Plutino population shares an abrupt deficit of TNOs with D slightly below about 100 km.
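
    To make the notion of a divot concrete, the sketch below draws absolute magnitudes H from a distribution that rises exponentially to a transition magnitude, drops by a contrast factor, and continues with a shallower slope. All parameter values (slopes, transition point, contrast) are illustrative assumptions, not the values fitted by Shankman et al.

```python
import numpy as np

def sample_divot_H(n, alpha_b=0.9, alpha_f=0.5, h_t=8.3, c=6.0,
                   h_min=5.0, h_max=11.0, rng=None):
    """Draw absolute magnitudes H from a 'divot' distribution.

    Differential density: dN/dH ~ 10**(alpha_b*H) for H < h_t; at h_t the
    density drops by the contrast factor c and continues with the
    shallower faint-side slope alpha_f. Sampled by simple rejection.
    """
    rng = rng or np.random.default_rng()

    def density(h):
        bright = 10.0 ** (alpha_b * h)
        faint = (10.0 ** (alpha_b * h_t) / c) * 10.0 ** (alpha_f * (h - h_t))
        return np.where(h < h_t, bright, faint)

    # Each branch is increasing, so the maximum sits at a branch endpoint.
    peak = density(np.array([h_t - 1e-9, h_max])).max()
    out = np.empty(0)
    while out.size < n:
        h = rng.uniform(h_min, h_max, size=4 * n)
        keep = rng.uniform(0.0, peak, size=h.size) < density(h)
        out = np.concatenate([out, h[keep]])
    return out[:n]

H = sample_divot_H(50_000)
print(H.mean(), (H > 8.3).mean())  # fraction of objects past the divot
```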

  2. Distribution Route Planning of Clean Coal Based on Nearest Insertion Method

    NASA Astrophysics Data System (ADS)

    Wang, Yunrui

    2018-01-01

    Clean coal technology has seen achievements over several decades, but research on its distribution is scarce, even though distribution efficiency directly affects the comprehensive development of clean coal technology; rational planning of distribution routes is the key to improving that efficiency. The object of this paper was a clean coal distribution system built in a county. A survey of customer demand, distribution routes, and vehicle deployment in previous years revealed that vehicles had been assigned purely by experience and that the number of vehicles used each day varied, resulting in wasted transport capacity and increased energy consumption. A mathematical model was therefore established with the shortest path as the objective function, and the distribution routes were re-planned using an improved nearest-insertion method. The results showed that the transportation distance was reduced by 37 km and that the number of vehicles used decreased from a past average of 5 per day to a fixed 4, while the real loading of vehicles increased by 16.25% with the distribution volume unchanged. This realized efficient distribution of clean coal and achieved the purpose of saving energy and reducing consumption.
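
    For reference, a generic nearest-insertion heuristic for route planning looks like the sketch below. The paper uses an improved variant whose details are not given in the abstract, so this shows only the textbook algorithm on made-up coordinates.

```python
import math

def nearest_insertion(points):
    """Build a closed route over `points` with the nearest-insertion
    heuristic: repeatedly pick the unrouted point nearest to the current
    tour and insert it where it increases the tour length the least."""
    def d(a, b):
        return math.dist(points[a], points[b])

    n = len(points)
    tour = [0, 1]                      # start with a trivial two-point tour
    remaining = set(range(2, n))
    while remaining:
        # Nearest unrouted point to any point already on the tour.
        k = min(remaining, key=lambda p: min(d(p, t) for t in tour))
        # Cheapest insertion position for k.
        best_i, best_cost = 0, float("inf")
        for i in range(len(tour)):
            a, b = tour[i], tour[(i + 1) % len(tour)]
            cost = d(a, k) + d(k, b) - d(a, b)
            if cost < best_cost:
                best_i, best_cost = i, cost
        tour.insert(best_i + 1, k)
        remaining.remove(k)
    return tour

stops = [(0, 0), (2, 5), (5, 2), (6, 6), (1, 3), (4, 1)]
print(nearest_insertion(stops))  # visiting order, e.g. [0, 4, 1, 3, 5, 2]
```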

  3. A SCORM Odyssey.

    ERIC Educational Resources Information Center

    Shackelford, Bill

    2002-01-01

    Discusses the Shareable Content Object Reference Model (SCORM), which integrates electronic learning standards to provide a common ground for course development. Describes the Advanced Distributed Learning Co-Laboratory at the University of Wisconsin- Madison campus. (JOW)

  4. SPATIAL FOREST SOIL PROPERTIES FOR ECOLOGICAL MODELING IN THE WESTERN OREGON CASCADES

    EPA Science Inventory

    The ultimate objective of this work is to provide a spatially distributed database of soil properties to serve as inputs to model ecological processes in western forests at the landscape scale. The Central Western Oregon Cascades are rich in biodiversity and they are a fascinati...

  5. Efficient multi-objective calibration of a computationally intensive hydrologic model with parallel computing software in Python

    USDA-ARS?s Scientific Manuscript database

    With enhanced data availability, distributed watershed models for large areas with high spatial and temporal resolution are increasingly used to understand water budgets and examine effects of human activities and climate change/variability on water resources. Developing parallel computing software...

  6. Flexibility on storage-release based distributed hydrologic modeling with object-oriented approach

    USDA-ARS?s Scientific Manuscript database

    With the availability of advanced hydrologic data in the public domain such as remotely sensed and climate change scenario data, there is a need for a modeling framework that is capable of using these data to simulate and extend hydrologic processes with multidisciplinary approaches for sustainable ...

  7. Collaborative mining and transfer learning for relational data

    NASA Astrophysics Data System (ADS)

    Levchuk, Georgiy; Eslami, Mohammed

    2015-06-01

    Many real-world problems, including human knowledge, communication, biological, and cyber network analysis, deal with data entities for which the essential information is contained in the relations among those entities. Such data must be modeled and analyzed as graphs, with attributes on both objects and relations encoding and differentiating their semantics. Traditional data mining algorithms were originally designed for analyzing discrete objects for which a set of features can be defined, and thus cannot be easily adapted to deal with graph data. This gave rise to the field of relational data mining, of which graph pattern learning is a key sub-domain [11]. In this paper, we describe a model for learning graph patterns in a collaborative, distributed manner. Distributed pattern learning is challenging due to dependencies between the nodes and relations in the graph, and variability across graph instances. We present three algorithms that trade off the benefits of parallelization and data aggregation, compare their performance to centralized graph learning, and discuss the individual benefits and weaknesses of each model. The presented algorithms are designed for linear speedup in distributed computing environments, and learn graph patterns that are both closer to ground truth and provide higher detection rates than a centralized mining algorithm.

  8. Salient object detection based on discriminative boundary and multiple cues integration

    NASA Astrophysics Data System (ADS)

    Jiang, Qingzhu; Wu, Zemin; Tian, Chang; Liu, Tao; Zeng, Mingyong; Hu, Lei

    2016-01-01

    In recent years, many saliency models have achieved good performance by taking the image boundary as the background prior. However, if all boundaries of an image are equally and artificially treated as background, misjudgment may occur when an object touches the boundary. We propose an algorithm called weighted contrast optimization based on discriminative boundary (wCODB). First, a background estimation model is reliably constructed by discriminating each boundary via the Hausdorff distance. Second, the background-only weighted contrast is improved to fore-background weighted contrast, which is optimized through a weight-adjustable optimization framework. Then, to objectively estimate the quality of a saliency map, a simple but effective metric, the spatial distribution of saliency map and mean saliency in covered window ratio (MSR), is designed. Finally, to further improve the detection result using MSR as the weight, we propose a saliency fusion framework that integrates three other cues (uniqueness, distribution, and coherence) from three representative methods into our wCODB model. Extensive experiments on six public datasets demonstrate that wCODB performs favorably against most boundary-based methods, and that the integrated result outperforms all state-of-the-art methods.

  9. Source detection in astronomical images by Bayesian model comparison

    NASA Astrophysics Data System (ADS)

    Frean, Marcus; Friedlander, Anna; Johnston-Hollitt, Melanie; Hollitt, Christopher

    2014-12-01

    The next generation of radio telescopes will generate exabytes of data on hundreds of millions of objects, making automated methods for the detection of astronomical objects ("sources") essential. Of particular importance are faint, diffuse objects embedded in noise. There is a pressing need for source finding software that identifies these sources, involves little manual tuning, yet is tractable to calculate. We first give a novel image discretisation method that incorporates uncertainty about how an image should be discretised. We then propose a hierarchical prior for astronomical images, which leads to a Bayes factor indicating how well a given region conforms to a model of source that is exceptionally unconstrained, compared to a model of background. This enables the efficient localisation of regions that are "suspiciously different" from the background distribution, so our method looks not for brightness but for anomalous distributions of intensity, which is much more general. The model of background can be iteratively improved by removing the influence on it of sources as they are discovered. The approach is evaluated by identifying sources in real and simulated data, and performs well on these measures: the Bayes factor is maximized at most real objects, while returning only a moderate number of false positives. In comparison to a catalogue constructed by widely-used source detection software with manual post-processing by an astronomer, our method found a number of dim sources that were missing from the "ground truth" catalogue.
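
    A minimal illustration of scoring a region by a Bayes factor: a tightly constrained background model is compared with a weakly constrained source model whose mean intensity is integrated out analytically. The Gaussian forms, parameter values, and data below are assumptions for the sketch, not the hierarchical prior or discretisation scheme of the paper.

```python
import numpy as np

def log_bayes_factor(region, mu0, sigma, tau):
    """Log Bayes factor for 'source' vs 'background' in one image region.

    Background model: pixels ~ N(mu0, sigma^2) with known parameters.
    Source model: pixels ~ N(mu, sigma^2) with a weakly constrained
    prior mu ~ N(mu0, tau^2), integrated out in closed form.
    """
    x = np.asarray(region, dtype=float).ravel()
    n, xbar = x.size, x.mean()
    s2 = sigma**2 / n                      # variance of the sample mean
    log_bf = 0.5 * np.log(s2 / (s2 + tau**2))
    log_bf += 0.5 * (xbar - mu0) ** 2 * (1.0 / s2 - 1.0 / (s2 + tau**2))
    return log_bf

rng = np.random.default_rng(1)
background = rng.normal(0.0, 1.0, size=(8, 8))
source = rng.normal(0.4, 1.0, size=(8, 8))   # faint diffuse excess
print(log_bayes_factor(background, 0.0, 1.0, 2.0))  # typically < 0
print(log_bayes_factor(source, 0.0, 1.0, 2.0))      # typically > 0
```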

  10. Statistical Analyses of Satellite Cloud Object Data from CERES. Part II; Tropical Convective Cloud Objects During 1998 El Nino and Validation of the Fixed Anvil Temperature Hypothesis

    NASA Technical Reports Server (NTRS)

    Xu, Kuan-Man; Wong, Takmeng; Wielicki, Bruce a.; Parker, Lindsay; Lin, Bing; Eitzen, Zachary A.; Branson, Mark

    2006-01-01

    Characteristics of tropical deep convective cloud objects observed over the tropical Pacific during January-August 1998 are examined using the Tropical Rainfall Measuring Mission/Clouds and the Earth's Radiant Energy System single scanner footprint (SSF) data. These characteristics include the frequencies of occurrence and statistical distributions of cloud physical properties. Their variations with cloud-object size, sea surface temperature (SST), and satellite precessing cycle are analyzed in detail. A cloud object is defined as a contiguous patch of the Earth composed of satellite footprints within a single dominant cloud-system type. It is found that statistical distributions of cloud physical properties are significantly different among three size categories of cloud objects with equivalent diameters of 100-150 km (small), 150-300 km (medium), and > 300 km (large), respectively, except for the distributions of ice particle size. The distributions for the larger-size category of cloud objects are more skewed towards high SSTs, high cloud tops, low cloud-top temperature, large ice water path, high cloud optical depth, low outgoing longwave (LW) radiation, and high albedo than the smaller-size category. As SST varied from one satellite precessing cycle to another, the changes in macrophysical properties of cloud objects over the entire tropical Pacific were small for the large-size category of cloud objects, relative to those of the small- and medium-size categories. This result suggests that the fixed anvil temperature hypothesis of Hartmann and Larson may be valid for the large-size category. Combined with the result that a higher percentage of the large-size category of cloud objects occurs during higher SST subperiods, this implies that macrophysical properties of cloud objects would be less sensitive to further warming of the climate. On the other hand, when cloud objects are classified according to SSTs where large-scale dynamics plays important roles, statistical characteristics of cloud microphysical properties, optical depth and albedo are not sensitive to the SST, but those of cloud macrophysical properties are strongly dependent upon the SST. Frequency distributions of vertical velocity from the European Center for Medium-range Weather Forecasts model that is matched to each cloud object are used to interpret some of the findings in this study.

  11. Modeling of biodynamic responses distributed at the fingers and the palm of the human hand-arm system.

    PubMed

    Dong, Ren G; Dong, Jennie H; Wu, John Z; Rakheja, Subhash

    2007-01-01

    The objective of this study is to develop analytical models for simulating driving-point biodynamic responses distributed at the fingers and palm of the hand under vibration along the forearm direction (z(h)-axis). Two different clamp-like model structures are formulated to analyze the distributed responses at the fingers-handle and palm-handle interfaces, as opposed to the single driving point invariably considered in the reported models. The parameters of the proposed four- and five-degrees-of-freedom models are identified through minimization of an rms error function between the model and measured responses under different hand actions, namely, fingers pull, push only, grip only, and combined push and grip. The results show that the responses predicted from both models agree reasonably well with the measured data in terms of distributed as well as total impedance magnitude and phase. The variations in the identified model parameters under different hand actions are further discussed in view of the biological system behavior. The proposed models are considered to serve as useful tools for the design and assessment of vibration isolation methods, and for developing a hand-arm simulator for vibration analysis of power tools.
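
    The parameter-identification step, minimizing an rms error between model and measured responses, can be sketched for a single degree-of-freedom impedance model as follows. The four- and five-DOF hand-arm structures of the paper are beyond this toy example, and all parameter values are invented.

```python
import numpy as np
from scipy.optimize import least_squares

# Driving-point impedance of one mass-spring-damper: Z(w) = c + j(m*w - k/w).
def impedance(params, w):
    m, c, k = params
    return c + 1j * (m * w - k / w)

# Synthetic "measured" data from assumed true parameters plus noise.
rng = np.random.default_rng(2)
w = 2 * np.pi * np.linspace(10, 500, 60)          # rad/s
z_meas = impedance([1.2, 40.0, 5.0e4], w)
z_meas += rng.normal(scale=2.0, size=w.size) * (1 + 1j)

# Error between model and measurement, stacked real/imag parts, so that
# minimizing its sum of squares minimizes the rms error.
def residual(params):
    err = impedance(params, w) - z_meas
    return np.concatenate([err.real, err.imag])

fit = least_squares(residual, x0=[1.0, 10.0, 1.0e4],
                    bounds=([0, 0, 0], np.inf))
print(fit.x)  # recovered [m, c, k]
```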

  12. Simulation Methods for Design of Networked Power Electronics and Information Systems

    DTIC Science & Technology

    2014-07-01

    Insertion of latency in every branch and at every node permits the system model to be efficiently distributed across many separate computing cores. ... the system. We demonstrated extensibility and generality of the Virtual Test Bed (VTB) framework to support multiple solvers and their associated... Objectives: The overarching objective of this program is to develop methods for fast...

  13. Modeling of LEO Orbital Debris Populations in Centimeter and Millimeter Size Regimes

    NASA Technical Reports Server (NTRS)

    Xu, Y.-L.; Hill, M.; Horstman, M.; Krisko, P. H.; Liou, J.-C.; Matney, M.; Stansbery, E. G.

    2010-01-01

    The building of the NASA Orbital Debris Engineering Model, whether ORDEM2000 or its recently updated version ORDEM2010, uses as its foundation a number of model debris populations, each truncated at a minimum object-size ranging from 10 micron to 1 m. This paper discusses the development of the ORDEM2010 model debris populations in LEO (low Earth orbit), focusing on centimeter (smaller than 10 cm) and millimeter size regimes. Primary data sets used in the statistical derivation of the cm- and mm-size model populations are from the Haystack radar operated in a staring mode. Unlike cataloged objects of sizes greater than approximately 10 cm, ground-based radars monitor smaller-size debris only in a statistical manner instead of tracking every piece. The mono-static Haystack radar can detect debris as small as approximately 5 mm at moderate LEO altitudes. Estimation of millimeter debris populations (for objects smaller than approximately 6 mm) rests largely on Goldstone radar measurements. The bi-static Goldstone radar can detect 2- to 3-mm objects. The modeling of the cm- and mm-debris populations follows the general approach to developing other ORDEM2010-required model populations for various components and types of debris. It relies on appropriate reference populations to provide necessary prior information on the orbital structures and other important characteristics of the debris objects. NASA's LEO-to-GEO Environment Debris (LEGEND) model is capable of furnishing such reference populations in the desired size range. A Bayesian statistical inference process, commonly adopted in ORDEM2010 model-population derivations, changes a priori distribution into a posteriori distribution and thus refines the reference populations in terms of data. This paper describes key elements and major steps in the statistical derivations of the cm- and mm-size debris populations and presents results. Due to lack of data for near 1-mm sizes, the model populations of 1- to 3.16-mm objects are an empirical extension from larger debris. The extension takes into account the results of micro-debris (from 10 micron to 1 mm) population modeling that is based on shuttle impact data, in the hope of making a smooth transition between micron and millimeter size regimes. This paper also includes a brief discussion on issues and potential future work concerning the analysis and interpretation of Goldstone radar data.

  14. Attitude Estimation for Unresolved Agile Space Objects with Shape Model Uncertainty

    DTIC Science & Technology

    2012-09-01

    Simulated lightcurve data using the Cook-Torrance [8] Bidirectional Reflectivity Distribution Function (BRDF) model was first applied in a batch estimation... framework to ellipsoidal SO models in geostationary orbits [9]. The Ashikhmin-Shirley [10] BRDF has also been used to study estimation of specular... non-convex 300 facet model and simulated lightcurves using a combination of Lambertian and Cook-Torrance (specular) BRDF models with an Unscented

  15. Dynamic Terrain

    DTIC Science & Technology

    1991-12-30

    York, 1985. [Serway 86]: Raymond Serway, Physics for Scientists and Engineers, 2nd Edition, Saunders College Publishing, Philadelphia, 1986, pp. 200... Physical Modeling System 3.4 Realtime Hydrology 3.5 Soil Dynamics and Kinematics 4. Database Issues 4.1 Goals 4.2 Object Oriented Databases 4.3 Distributed... Animation System F. Constraints and Physical Modeling G. The PM Physical Modeling System H. Realtime Hydrology I. A Simplified Model of Soil Slumping

  16. Method for distributed object communications based on dynamically acquired and assembled software components

    NASA Technical Reports Server (NTRS)

    Sundermier, Amy (Inventor)

    2002-01-01

    A method for acquiring and assembling software components at execution time into a client program, where the components may be acquired from remote networked servers is disclosed. The acquired components are assembled according to knowledge represented within one or more acquired mediating components. A mediating component implements knowledge of an object model. A mediating component uses its implemented object model knowledge, acquired component class information and polymorphism to assemble components into an interacting program at execution time. The interactions or abstract relationships between components in the object model may be implemented by the mediating component as direct invocations or indirect events or software bus exchanges. The acquired components may establish communications with remote servers. The acquired components may also present a user interface representing data to be exchanged with the remote servers. The mediating components may be assembled into layers, allowing arbitrarily complex programs to be constructed at execution time.
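
    A minimal sketch of the mediating-component idea in Python: the mediator holds the object-model knowledge and assembles independently acquired components at execution time, invoking them polymorphically. The class and role names are hypothetical, and remote acquisition is reduced to a local class lookup or a dynamic module import.

```python
import importlib

# Two toy components that could equally be acquired from remote servers.
class CsvSource:
    def fetch(self):
        return [1, 2, 3]

class ConsoleView:
    def render(self, data):
        print("sum =", sum(data))

class MediatingComponent:
    """Holds the object-model knowledge and assembles components at
    execution time. Specs may be class objects or hypothetical
    'package.module:Class' strings resolved by dynamic import."""
    def __init__(self, object_model):
        self.object_model = object_model
        self.components = {}

    def acquire(self):
        for role, spec in self.object_model.items():
            if isinstance(spec, str):          # acquire via module import
                module_name, class_name = spec.split(":")
                cls = getattr(importlib.import_module(module_name), class_name)
            else:                              # already-available class object
                cls = spec
            self.components[role] = cls()

    def assemble(self):
        # The mediator, not the components, knows the interaction: fetch
        # from the source, hand the result to the view (polymorphic call).
        data = self.components["source"].fetch()
        self.components["view"].render(data)

mediator = MediatingComponent({"source": CsvSource, "view": ConsoleView})
mediator.acquire()
mediator.assemble()   # prints: sum = 6
```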

  17. Uncertainty analysis in fault tree models with dependent basic events.

    PubMed

    Pedroni, Nicola; Zio, Enrico

    2013-06-01

    In general, two types of dependence need to be considered when estimating the probability of the top event (TE) of a fault tree (FT): "objective" dependence between the (random) occurrences of different basic events (BEs) in the FT and "state-of-knowledge" (epistemic) dependence between estimates of the epistemically uncertain probabilities of some BEs of the FT model. In this article, we study the effects of objective and epistemic dependences on the TE probability. The well-known Fréchet bounds and the distribution envelope determination (DEnv) method are used to model all kinds of (possibly unknown) objective and epistemic dependences, respectively. For exemplification, the analyses are carried out on a FT with six BEs. Results show that both types of dependence significantly affect the TE probability; however, the effects of epistemic dependence are likely to be overwhelmed by those of objective dependence (if present). © 2012 Society for Risk Analysis.
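
    For the simplest case, an AND gate over two basic events, the Fréchet bounds on the top-event probability under completely unknown objective dependence reduce to two lines; the probabilities below are made up for illustration.

```python
# Fréchet bounds for the probability that both basic events occur (an
# AND gate) when the dependence between the events is unknown. A minimal
# sketch of the "objective dependence" idea, not the full DEnv method.
def frechet_and(p1, p2):
    lower = max(0.0, p1 + p2 - 1.0)   # maximal negative dependence
    upper = min(p1, p2)               # maximal positive dependence
    return lower, upper

p_be1, p_be2 = 0.30, 0.20
lo, hi = frechet_and(p_be1, p_be2)
independent = p_be1 * p_be2
print(f"P(TE) in [{lo:.2f}, {hi:.2f}]; independence gives {independent:.3f}")
# -> P(TE) in [0.00, 0.20]; independence gives 0.060
```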

  18. Towards a distributed information architecture for avionics data

    NASA Technical Reports Server (NTRS)

    Mattmann, Chris; Freeborn, Dana; Crichton, Dan

    2003-01-01

    Avionics data at the National Aeronautics and Space Administration's (NASA) Jet Propulsion Laboratory (JPL) consists of distributed, unmanaged, and heterogeneous information that is hard for flight system design engineers to find and use on new NASA/JPL missions. The development of a systematic approach for capturing, accessing and sharing avionics data critical to the support of NASA/JPL missions and projects is required. We propose a general information architecture for managing the existing distributed avionics data sources and a method for querying and retrieving avionics data using the Object Oriented Data Technology (OODT) framework. OODT uses an XML messaging infrastructure that profiles data products and their locations using the ISO-11179 data model for describing data products. Queries against a common data dictionary (which implements the ISO model) are translated to domain-dependent source data models, and distributed data products are returned asynchronously through the OODT middleware. Further work will include the ability to 'plug and play' new manufacturer data sources, which are distributed at avionics component manufacturer locations throughout the United States.

  19. TH-CD-202-07: A Methodology for Generating Numerical Phantoms for Radiation Therapy Using Geometric Attribute Distribution Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolly, S; Chen, H; Mutic, S

    Purpose: A persistent challenge for the quality assessment of radiation therapy treatments (e.g. contouring accuracy) is the absence of the known, ground truth for patient data. Moreover, assessment results are often patient-dependent. Computer simulation studies utilizing numerical phantoms can be performed for quality assessment with a known ground truth. However, previously reported numerical phantoms do not include the statistical properties of inter-patient variations, as their models are based on only one patient. In addition, these models do not incorporate tumor data. In this study, a methodology was developed for generating numerical phantoms which encapsulate the statistical variations of patients within radiation therapy, including tumors. Methods: Based on previous work in contouring assessment, geometric attribute distribution (GAD) models were employed to model both the deterministic and stochastic properties of individual organs via principal component analysis. Using pre-existing radiation therapy contour data, the GAD models are trained to model the shape and centroid distributions of each organ. Then, organs with different shapes and positions can be generated by assigning statistically sound weights to the GAD model parameters. Organ contour data from 20 retrospective prostate patient cases were manually extracted and utilized to train the GAD models. As a demonstration, computer-simulated CT images of generated numerical phantoms were calculated and assessed subjectively and objectively for realism. Results: A cohort of numerical phantoms of the male human pelvis was generated. CT images were deemed realistic both subjectively and objectively in terms of image noise power spectrum. Conclusion: A methodology has been developed to generate realistic numerical anthropomorphic phantoms using pre-existing radiation therapy data. The GAD models guarantee that generated organs span the statistical distribution of observed radiation therapy patients, according to the training dataset. The methodology enables radiation therapy treatment assessment with multi-modality imaging and a known ground truth, and without patient-dependent bias.
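
    The GAD idea of generating new organs by assigning statistically sound weights to principal components can be sketched as follows; the contour data here are synthetic stand-ins, not the 20 retrospective prostate cases used to train the models.

```python
import numpy as np

# Sketch of the geometric-attribute-distribution idea: learn a PCA shape
# model from training contours and generate new, statistically plausible
# shapes by sampling the component weights. Training data are synthetic.
rng = np.random.default_rng(7)
n_cases, n_points = 20, 50                 # 20 patients, 50 contour points
shapes = rng.normal(size=(n_cases, 2 * n_points))  # flattened (x, y) contours

mean_shape = shapes.mean(axis=0)
centered = shapes - mean_shape
# Principal components of the inter-patient shape variation.
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
std = S / np.sqrt(n_cases - 1)             # per-component standard deviations

# Generate a new shape with weights drawn within +/- 2 standard deviations.
k = 5
w = rng.uniform(-2.0, 2.0, size=k) * std[:k]
new_shape = mean_shape + w @ Vt[:k]
print(new_shape.shape)                     # (100,) -> 50 (x, y) contour points
```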

  20. Determination of deuterium–tritium critical burn-up parameter by four temperature theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nazirzadeh, M.; Ghasemizad, A.; Khanbabei, B.

    Conditions for thermonuclear burn-up of an equimolar mixture of deuterium-tritium in non-equilibrium plasma have been investigated by four temperature theory. The photon distribution shape significantly affects the nature of thermonuclear burn. In the three temperature model the photon distribution is Planckian, but in four temperature theory it has a pure Planck form below a certain cut-off energy and, for photon energies above this cut-off, makes a transition to a Bose-Einstein distribution with a finite chemical potential. The objective was to develop four temperature theory in a plasma to calculate the critical burn-up parameter, which depends upon the initial density, the initial temperatures of the plasma components, and the hot spot size. All results obtained from the four temperature theory model are compared with the three temperature model. It is shown that the values of the critical burn-up parameter calculated by four temperature theory are smaller than those of the three temperature model.

  1. Visual saliency detection based on modeling the spatial Gaussianity

    NASA Astrophysics Data System (ADS)

    Ju, Hongbin

    2015-04-01

    In this paper, a novel salient object detection method based on modeling spatial anomalies is presented. The proposed framework is inspired by the biological mechanism that human eyes are sensitive to unusual and anomalous objects among a complex background. It is supposed that a natural image can be seen as a combination of similar or dissimilar basic patches, and that there is a direct relationship between saliency and anomaly. Some patches share a high degree of similarity and are very numerous; they usually make up the background of an image. Other patches exhibit strong rarity and specificity; we name these patches "anomalies". Generally, an anomalous patch reflects an edge or some special colors and textures in an image, and these patterns cannot be well "explained" by their surroundings. Human eyes show great interest in these anomalous patterns, and will automatically pick out the anomalous parts of an image as the salient regions. To better evaluate the anomaly degree of the basic patches and exploit their nonlinear statistical characteristics, a multivariate Gaussian distribution saliency evaluation model is proposed. In this way, objects with anomalous patterns appear as outliers in the Gaussian distribution, and we identify these anomalous objects as salient ones. Experiments are conducted on the well-known MSRA saliency detection dataset. Compared with other recently developed visual saliency detection methods, our method shows significant advantages.
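
    A minimal sketch of the underlying idea, scoring each patch by how poorly it is "explained" by a multivariate Gaussian fitted to all patches, is given below; the feature extraction and the full evaluation model of the paper are replaced by synthetic features.

```python
import numpy as np

def gaussian_saliency(patch_features):
    """Score each patch by its anomaly under a multivariate Gaussian
    fitted to all patches: background-like patches are numerous and
    similar, so outliers (large Mahalanobis distance) are salient."""
    X = np.asarray(patch_features, dtype=float)       # (n_patches, n_dims)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(X.shape[1])
    prec = np.linalg.inv(cov)
    diff = X - mu
    # Squared Mahalanobis distance of each patch from the fitted model.
    return np.einsum("ij,jk,ik->i", diff, prec, diff)

rng = np.random.default_rng(3)
background = rng.normal(0.0, 1.0, size=(95, 8))       # many similar patches
anomalies = rng.normal(4.0, 1.0, size=(5, 8))         # few "salient" patches
scores = gaussian_saliency(np.vstack([background, anomalies]))
print(scores[:95].mean(), scores[95:].mean())         # anomalies score higher
```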

  2. flexsurv: A Platform for Parametric Survival Modeling in R

    PubMed Central

    Jackson, Christopher H.

    2018-01-01

    flexsurv is an R package for fully-parametric modeling of survival data. Any parametric time-to-event distribution may be fitted if the user supplies a probability density or hazard function, and ideally also their cumulative versions. Standard survival distributions are built in, including the three and four-parameter generalized gamma and F distributions. Any parameter of any distribution can be modeled as a linear or log-linear function of covariates. The package also includes the spline model of Royston and Parmar (2002), in which both baseline survival and covariate effects can be arbitrarily flexible parametric functions of time. The main model-fitting function, flexsurvreg, uses the familiar syntax of survreg from the standard survival package (Therneau 2016). Censoring or left-truncation are specified in ‘Surv’ objects. The models are fitted by maximizing the full log-likelihood, and estimates and confidence intervals for any function of the model parameters can be printed or plotted. flexsurv also provides functions for fitting and predicting from fully-parametric multi-state models, and connects with the mstate package (de Wreede, Fiocco, and Putter 2011). This article explains the methods and design principles of the package, giving several worked examples of its use. PMID:29593450

  3. Rigorous Proof of the Boltzmann-Gibbs Distribution of Money on Connected Graphs

    NASA Astrophysics Data System (ADS)

    Lanchier, Nicolas

    2017-04-01

    Models in econophysics, i.e., the emerging field of statistical physics that applies the main concepts of traditional physics to economics, typically consist of large systems of economic agents who are characterized by the amount of money they have. In the simplest model, at each time step, one agent gives one dollar to another agent, with both agents being chosen independently and uniformly at random from the system. Numerical simulations of this model suggest that, at least when the number of agents and the average amount of money per agent are large, the distribution of money converges to an exponential distribution reminiscent of the Boltzmann-Gibbs distribution of energy in physics. The main objective of this paper is to give a rigorous proof of this result and show that the convergence to the exponential distribution holds more generally when the economic agents are located on the vertices of a connected graph and interact locally with their neighbors rather than globally with all the other agents. We also study a closely related model where, at each time step, agents buy with a probability proportional to the amount of money they have, and prove that in this case the limiting distribution of money is Poissonian.
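
    The numerical simulations referred to above are easy to reproduce in miniature; the sketch below runs the simplest exchange model with a no-debt rule and produces a money distribution that relaxes toward an exponential shape. Population size and step count are arbitrary choices.

```python
import numpy as np

# Minimal simulation of the one-dollar exchange model described above:
# at each step a randomly chosen agent (if it has at least one dollar)
# gives one dollar to another randomly chosen agent.
rng = np.random.default_rng(4)
n_agents, avg_money, steps = 10_000, 10, 1_000_000

money = np.full(n_agents, avg_money)
for _ in range(steps):
    giver, taker = rng.integers(n_agents, size=2)
    if money[giver] > 0:                 # no debts allowed
        money[giver] -= 1
        money[taker] += 1

# Compare the empirical distribution with (1/<m>) * exp(-m / <m>).
hist, edges = np.histogram(money, bins=range(0, 60), density=True)
print(hist[:5])   # should roughly track 0.1 * exp(-m / 10)
```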

  4. Bi-criteria evaluation of the MIKE SHE model for a forested watershed on the South Carolina coastal plain

    Treesearch

    Z. Dai; C. Li; C. Trettin; G. Sun; D. Amatya; H. Li

    2010-01-01

    Hydrological models are important tools for effective management, conservation and restoration of forested wetlands. The objective of this study was to test a distributed hydrological model, MIKE SHE, by using bi-criteria (i.e., two measurable variables, streamflow and water table depth) to describe the hydrological processes in a forested watershed that is...

  5. Calibration and validation of the SWAT model for a forested watershed in coastal South Carolina

    Treesearch

    Devendra M. Amatya; Elizabeth B. Haley; Norman S. Levine; Timothy J. Callahan; Artur Radecki-Pawlik; Manoj K. Jha

    2008-01-01

    Modeling the hydrology of low-gradient coastal watersheds on shallow, poorly drained soils is a challenging task due to the complexities in watershed delineation, runoff generation processes and pathways, flooding, and submergence caused by tropical storms. The objective of the study is to calibrate and validate a GIS-based spatially-distributed hydrologic model, SWAT...

  6. An Objective Approach to Select Climate Scenarios when Projecting Species Distribution under Climate Change

    PubMed Central

    Casajus, Nicolas; Périé, Catherine; Logan, Travis; Lambert, Marie-Claude; de Blois, Sylvie; Berteaux, Dominique

    2016-01-01

    An impressive number of new climate change scenarios have recently become available to assess the ecological impacts of climate change. Among these impacts, shifts in species range analyzed with species distribution models are the most widely studied. Whereas it is widely recognized that the uncertainty in future climatic conditions must be taken into account in impact studies, many assessments of species range shifts still rely on just a few climate change scenarios, often selected arbitrarily. We describe a method to select objectively a subset of climate change scenarios among a large ensemble of available ones. Our k-means clustering approach reduces the number of climate change scenarios needed to project species distributions, while retaining the coverage of uncertainty in future climate conditions. We first show, for three biologically-relevant climatic variables, that a reduced number of six climate change scenarios generates average climatic conditions very close to those obtained from a set of 27 scenarios available before reduction. A case study on potential gains and losses of habitat by three northeastern American tree species shows that potential future species distributions projected from the selected six climate change scenarios are very similar to those obtained from the full set of 27, although with some spatial discrepancies at the edges of species distributions. In contrast, projections based on just a few climate models vary strongly according to the initial choice of climate models. We give clear guidance on how to reduce the number of climate change scenarios while retaining the central tendencies and coverage of uncertainty in future climatic conditions. This should be particularly useful during future climate change impact studies as more than twice as many climate models were reported in the fifth assessment report of IPCC compared to the previous one. PMID:27015274
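
    The k-means reduction can be sketched in a few lines: cluster the scenario ensemble in a climate-variable space and keep, for each cluster, the scenario nearest its centroid. The synthetic scenario matrix below stands in for the 27 real climate change scenarios.

```python
import numpy as np
from sklearn.cluster import KMeans

# Sketch of the scenario-reduction idea: cluster a large ensemble of
# climate change scenarios (here 27 synthetic ones described by three
# climate variables) and keep the scenario closest to each centroid.
rng = np.random.default_rng(5)
scenarios = rng.normal(size=(27, 3))   # e.g. changes in T, precipitation, etc.

k = 6
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(scenarios)

selected = []
for j in range(k):
    members = np.flatnonzero(km.labels_ == j)
    dists = np.linalg.norm(scenarios[members] - km.cluster_centers_[j], axis=1)
    selected.append(members[np.argmin(dists)])

print(sorted(selected))   # indices of the 6 representative scenarios
```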

  7. An Objective Approach to Select Climate Scenarios when Projecting Species Distribution under Climate Change.

    PubMed

    Casajus, Nicolas; Périé, Catherine; Logan, Travis; Lambert, Marie-Claude; de Blois, Sylvie; Berteaux, Dominique

    2016-01-01

    An impressive number of new climate change scenarios have recently become available to assess the ecological impacts of climate change. Among these impacts, shifts in species range analyzed with species distribution models are the most widely studied. Whereas it is widely recognized that the uncertainty in future climatic conditions must be taken into account in impact studies, many assessments of species range shifts still rely on just a few climate change scenarios, often selected arbitrarily. We describe a method to select objectively a subset of climate change scenarios among a large ensemble of available ones. Our k-means clustering approach reduces the number of climate change scenarios needed to project species distributions, while retaining the coverage of uncertainty in future climate conditions. We first show, for three biologically-relevant climatic variables, that a reduced number of six climate change scenarios generates average climatic conditions very close to those obtained from a set of 27 scenarios available before reduction. A case study on potential gains and losses of habitat by three northeastern American tree species shows that potential future species distributions projected from the selected six climate change scenarios are very similar to those obtained from the full set of 27, although with some spatial discrepancies at the edges of species distributions. In contrast, projections based on just a few climate models vary strongly according to the initial choice of climate models. We give clear guidance on how to reduce the number of climate change scenarios while retaining the central tendencies and coverage of uncertainty in future climatic conditions. This should be particularly useful during future climate change impact studies as more than twice as many climate models were reported in the fifth assessment report of IPCC compared to the previous one.

  8. Adding Biotic Interactions into Paleodistribution Models: A Host-Cleptoparasite Complex of Neotropical Orchid Bees

    PubMed Central

    Silva, Daniel Paiva; Varela, Sara; Nemésio, André; De Marco, Paulo

    2015-01-01

    Orchid bees compose an exclusive Neotropical pollinator group with bright body coloration. Several of these species build their own nests, while others are reported as nest cleptoparasites. Here, the objective was to evaluate whether the inclusion of a strong biotic interaction, such as the presence of a host species, improved the ability of species distribution models (SDMs) to predict the geographic range of the cleptoparasite species. The target species were Aglae caerulea and its host species Eulaema nigrita. Additionally, since A. caerulea is more frequently found in the Amazon than in Cerrado areas, a secondary objective was to evaluate whether this species is increasing or decreasing its distribution given South American past and current climatic conditions. SDM methods (Maxent and Bioclim), together with current and past South American climatic conditions and the occurrences of A. caerulea and E. nigrita, were used to generate the distribution models. The distribution of A. caerulea was generated with and without the inclusion of the distribution of E. nigrita as a predictor variable. The results indicate that A. caerulea was barely affected by past climatic conditions and that the populations from the Cerrado savanna, like the Amazonian ones, could be at least 21,000 years old (dating to the last glacial maximum). On the other hand, in this study the inclusion of the host-cleptoparasite interaction complex did not statistically improve the quality of the produced models, which means that the geographic range of this cleptoparasite species is mainly constrained by climate and not by the presence of the host species. Nonetheless, this could also be caused by unknown complexes of other Euglossini hosts with A. caerulea, which remain to be described by science. PMID:26069956

  9. MPL-net at ARM Sites

    NASA Technical Reports Server (NTRS)

    Spinhirne, J. D.; Welton, E. J.; Campbell, J. R.; Berkoff, T. A.; Starr, David OC. (Technical Monitor)

    2002-01-01

    The goal of the NASA MPL-net project is consistent data products of the vertical distribution of clouds and aerosol from globally distributed lidar observation sites. The four ARM micro pulse lidars form the basis of a network that will consist of more than twelve sites. The science objective is ground truth for global satellite retrievals and accurate vertical distribution information, in combination with surface radiation measurements, for aerosol and cloud models. The project involves improvements in instruments and data processing, and cooperation with ARM and other partners.

  10. Comparison of two paradigms for distributed shared memory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levelt, W.G.; Kaashoek, M.F.; Bal, H.E.

    1990-08-01

    The paper compares two paradigms for Distributed Shared Memory on loosely coupled computing systems: the shared data-object model as used in Orca, a programming language specially designed for loosely coupled computing systems, and the Shared Virtual Memory model. For both paradigms the authors have implemented two systems, one using only point-to-point messages, the other using broadcasting as well. They briefly describe the two paradigms and their implementations, then compare their performance on four applications: the traveling salesman problem, alpha-beta search, matrix multiplication, and the all-pairs shortest paths problem. The measurements show that both paradigms can be used efficiently for programming large-grain parallel applications, with significant speedups obtained on all applications. The unstructured Shared Virtual Memory paradigm achieves the best absolute performance, although this is largely due to the preliminary nature of the Orca compiler used. The structured shared data-object model achieves the highest speedups and is much easier to program and to debug.

  11. The Generation of the Distant Kuiper Belt by Planet Nine from an Initially Broad Perihelion Distribution

    NASA Astrophysics Data System (ADS)

    Khain, Tali; Batygin, Konstantin; Brown, Michael E.

    2018-04-01

    The observation that the orbits of long-period Kuiper Belt objects are anomalously clustered in physical space has recently prompted the Planet Nine hypothesis - the proposed existence of a distant and eccentric planetary member of our Solar System. Within the framework of this model, a Neptune-like perturber sculpts the orbital distribution of distant Kuiper Belt objects through a complex interplay of resonant and secular effects, such that the surviving orbits get organized into apsidally aligned and anti-aligned configurations with respect to Planet Nine's orbit. We present results on the role of Kuiper Belt initial conditions on the evolution of the outer Solar System using numerical simulations. Intriguingly, we find that the final perihelion distance distribution depends strongly on the primordial state of the system, and demonstrate that a bimodal structure corresponding to the existence of both aligned and anti-aligned clusters is only reproduced if the initial perihelion distribution is assumed to extend well beyond 36 AU. The bimodality in the final perihelion distance distribution is due to the permanently stable objects, with the lower perihelion peak corresponding to the anti-aligned orbits and the higher perihelion peak corresponding to the aligned orbits. We identify the mechanisms that enable the persistent stability of these objects and locate the regions of phase space in which they reside. The obtained results contextualize the Planet Nine hypothesis within the broader narrative of solar system formation, and offer further insight into the observational search for Planet Nine.

  12. Experimental Concepts for Testing Seismic Hazard Models

    NASA Astrophysics Data System (ADS)

    Marzocchi, W.; Jordan, T. H.

    2015-12-01

    Seismic hazard analysis is the primary interface through which useful information about earthquake rupture and wave propagation is delivered to society. To account for the randomness (aleatory variability) and limited knowledge (epistemic uncertainty) of these natural processes, seismologists must formulate and test hazard models using the concepts of probability. In this presentation, we will address the scientific objections that have been raised over the years against probabilistic seismic hazard analysis (PSHA). Owing to the paucity of observations, we must rely on expert opinion to quantify the epistemic uncertainties of PSHA models (e.g., in the weighting of individual models from logic-tree ensembles of plausible models). The main theoretical issue is a frequentist critique: subjectivity is immeasurable; ergo, PSHA models cannot be objectively tested against data; ergo, they are fundamentally unscientific. We have argued (PNAS, 111, 11973-11978) that the Bayesian subjectivity required for casting epistemic uncertainties can be bridged with the frequentist objectivity needed for pure significance testing through "experimental concepts." An experimental concept specifies collections of data, observed and not yet observed, that are judged to be exchangeable (i.e., with a joint distribution independent of the data ordering) when conditioned on a set of explanatory variables. We illustrate, through concrete examples, experimental concepts useful in the testing of PSHA models for ontological errors in the presence of aleatory variability and epistemic uncertainty. In particular, we describe experimental concepts that lead to exchangeable binary sequences that are statistically independent but not identically distributed, showing how the Bayesian concept of exchangeability generalizes the frequentist concept of experimental repeatability. We also address the issue of testing PSHA models using spatially correlated data.

  13. Distributional Cost-Effectiveness Analysis

    PubMed Central

    Asaria, Miqdad; Griffin, Susan; Cookson, Richard

    2015-01-01

    Distributional cost-effectiveness analysis (DCEA) is a framework for incorporating health inequality concerns into the economic evaluation of health sector interventions. In this tutorial, we describe the technical details of how to conduct DCEA, using an illustrative example comparing alternative ways of implementing the National Health Service (NHS) Bowel Cancer Screening Programme (BCSP). The 2 key stages in DCEA are 1) modeling social distributions of health associated with different interventions, and 2) evaluating social distributions of health with respect to the dual objectives of improving total population health and reducing unfair health inequality. As well as describing the technical methods used, we also identify the data requirements and the social value judgments that have to be made. Finally, we demonstrate the use of sensitivity analyses to explore the impacts of alternative modeling assumptions and social value judgments. PMID:25908564

  14. Locating object knowledge in the brain: comment on Bowers's (2009) attempt to revive the grandmother cell hypothesis.

    PubMed

    Plaut, David C; McClelland, James L

    2010-01-01

    According to Bowers, the finding that there are neurons with highly selective responses to familiar stimuli supports theories positing localist representations over approaches positing the type of distributed representations typically found in parallel distributed processing (PDP) models. However, his conclusions derive from an overly narrow view of the range of possible distributed representations and of the role that PDP models can play in exploring their properties. Although it is true that current distributed theories face challenges in accounting for both neural and behavioral data, the proposed localist account--to the extent that it is articulated at all--runs into more fundamental difficulties. Central to these difficulties is the problem of specifying the set of entities a localist unit represents.

  15. Frequent Statement and Dereference Elimination for Imperative and Object-Oriented Distributed Programs

    PubMed Central

    El-Zawawy, Mohamed A.

    2014-01-01

    This paper introduces new approaches for the analysis of frequent statement and dereference elimination for imperative and object-oriented distributed programs running on parallel machines equipped with hierarchical memories. The paper uses languages whose address spaces are globally partitioned. Distributed programs allow defining data layout and threads writing to and reading from other thread memories. Three type systems (for imperative distributed programs) are the tools of the proposed techniques. The first type system defines for every program point a set of calculated (ready) statements and memory accesses. The second type system uses an enriched version of the types of the first system and determines which of the ready statements and memory accesses are used later in the program. The third type system uses the information gathered so far to eliminate unnecessary statement computations and memory accesses (the analysis of frequent statement and dereference elimination). Extensions to these type systems are also presented to cover object-oriented distributed programs. Our work has two advantages over related work. First, the hierarchical memory model used in this paper matches the hierarchical style of contemporary concurrent parallel computers. Second, in our approach each analysis result is assigned a type derivation that serves as a correctness proof. PMID:24892098
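
    To make the optimization target concrete, the following toy sketch (Python, with an invented RemoteArray class standing in for memory owned by another thread; nothing here reproduces the paper's formal type systems) shows the kind of redundant remote dereference the third type system would eliminate by reusing a value that is already "ready":

    ```python
    # Hypothetical stand-in for another thread's partitioned address space;
    # the read counter exists only to make the savings visible.
    class RemoteArray:
        def __init__(self, data):
            self._data = data
            self.reads = 0

        def get(self, i):
            self.reads += 1  # each call models one remote memory access
            return self._data[i]

    def scale_naive(remote, i, factors):
        # The dereference remote.get(i) is re-evaluated for every factor.
        return [remote.get(i) * f for f in factors]

    def scale_optimized(remote, i, factors):
        # An analysis that knows the dereference is "ready" and used later
        # hoists it into a local, so the remote access happens exactly once.
        x = remote.get(i)
        return [x * f for f in factors]

    r = RemoteArray([10, 20, 30])
    scale_naive(r, 1, [1, 2, 3])
    print("naive remote reads:", r.reads)       # 3
    r.reads = 0
    scale_optimized(r, 1, [1, 2, 3])
    print("optimized remote reads:", r.reads)   # 1
    ```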

  16. Tracking of multiple targets using online learning for reference model adaptation.

    PubMed

    Pernkopf, Franz

    2008-12-01

    Much work has been done recently on multiple-object tracking on the one hand and on reference model adaptation for single-object trackers on the other. In this paper, we both track multiple objects (faces of people) in a meeting scenario and use online learning to incrementally update the models of the tracked objects to account for appearance changes during tracking. Additionally, we automatically initialize and terminate tracking of individual objects based on low-level features, i.e., face color, face size, and object movement. Unlike our approach, many methods assume that the target region has been initialized by hand in the first frame. For tracking, a particle filter is incorporated to propagate sample distributions over time. We discuss the close relationship between our particle-filter tracker and genetic algorithms. Numerous experiments on meeting data demonstrate the capabilities of our tracking approach. Additionally, we provide an empirical verification of the reference model learning during tracking of indoor and outdoor scenes, which supports more robust tracking. To this end, we report the average standard deviation of the trajectories over numerous tracking runs as a function of the learning rate.
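
    A minimal sketch of the propagate/weight/resample loop such a tracker is built on (a one-dimensional bootstrap particle filter with Gaussian motion and observation models; the face-tracking specifics of the paper are not reproduced here):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def particle_filter_step(particles, weights, observation,
                             motion_std=1.0, obs_std=2.0):
        """One propagate/weight/resample cycle of a bootstrap particle filter."""
        # Propagate: diffuse particles with the motion model.
        particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
        # Weight: Gaussian likelihood of the observation given each particle.
        weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Resample: multinomial resampling concentrates on likely states.
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        return particles[idx], np.full(len(particles), 1.0 / len(particles))

    # Toy run: track a target drifting to the right.
    particles = rng.uniform(-10, 10, size=500)
    weights = np.full(500, 1.0 / 500)
    for t, z in enumerate([1.0, 2.1, 3.2, 4.0]):
        particles, weights = particle_filter_step(particles, weights, z)
        print(f"t={t}: estimate={particles.mean():.2f}")
    ```

    The resampling step is where the relationship to genetic algorithms noted in the abstract is most visible: high-weight particles are selected and duplicated while low-weight particles die out.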

  17. Continuum modeling of catastrophic collisions

    NASA Technical Reports Server (NTRS)

    Ryan, Eileen V.; Asphaug, Erik; Melosh, H. J.

    1991-01-01

    A two-dimensional hydrocode based on 2-D SALE was modified to include strength effects and fragmentation equations for fracture resulting from tensile stress in one dimension. Output from this code includes a complete fragmentation summary for each cell of the modeled object: fragment size (mass) distribution, vector velocities of particles, peak values of pressure and tensile stress, and peak strain rates associated with fragmentation. Contour plots showing pressure and temperature at given times within the object are also produced. By invoking axial symmetry, three-dimensional events, such as zero-impact-parameter collisions between asteroids, can be modeled. The code was tested against the one-dimensional model and the analytical solution for a linearly increasing tensile stress under constant strain rate.

  18. Modeling of two-dimensional overland flow in a vegetative filter

    Treesearch

    Matthew J. Helmers; Dean E. Eisenhauer; Thomas G. Franti; Michael G. Dosskey

    2002-01-01

    Water transports sediment and other pollutants through vegetative filters. It is often assumed that the overland flow is uniformly distributed across the vegetative filter, but this research indicates otherwise. The objective of this study was to model the two-dimensional overland water flow through a vegetative filter, accounting for variation in microtopography,...

  19. Tracking Electroencephalographic Changes Using Distributions of Linear Models: Application to Propofol-Based Depth of Anesthesia Monitoring.

    PubMed

    Kuhlmann, Levin; Manton, Jonathan H; Heyse, Bjorn; Vereecke, Hugo E M; Lipping, Tarmo; Struys, Michel M R F; Liley, David T J

    2017-04-01

    Tracking brain states with electrophysiological measurements often relies on short-term averages of extracted features, and this may not adequately capture the variability of brain dynamics. The objective is to assess the hypotheses that this can be overcome by tracking distributions of linear models using anesthesia data, and that the anesthetic brain state tracking performance of linear models is comparable to that of a high-performing depth of anesthesia monitoring feature. Individuals' brain states are classified by comparing the distribution of linear (auto-regressive moving average, ARMA) model parameters estimated from electroencephalographic (EEG) data obtained with a sliding window to distributions of linear model parameters for each brain state. The method is applied to frontal EEG data from 15 subjects undergoing propofol anesthesia and classified by the Observer's Assessment of Alertness/Sedation (OAA/S) scale. Classification of the OAA/S score was performed using distributions of either ARMA parameters or the benchmark feature, Higuchi fractal dimension. The highest average testing sensitivity of 59% (chance sensitivity: 17%) was found for ARMA (2,1) models; Higuchi fractal dimension achieved 52%, and no statistical difference was observed between the two. For the same ARMA case, there was no statistical difference if medians are used instead of distributions (sensitivity: 56%). The model-based distribution approach is not necessarily more effective than a median/short-term average approach; however, it performs well compared with a distribution approach based on a high-performing anesthesia monitoring measure. These techniques hold potential for anesthesia monitoring and may be generally applicable for tracking brain states.

  20. Study on Differentiation Management of Grid Energy Metering Device under High Permeability by Distributed Energy and Smart Grid Technology

    NASA Astrophysics Data System (ADS)

    Wang, Haiyuan; Huang, Rui; Yang, Maotao; Chen, Hao

    2017-12-01

    At present, electric energy metering devices are classified according to the amount of electric energy and the degree of importance of the measurement object, and the measuring device is selected according to the characteristics of the traditional metering object. With the continuous development of the smart grid, measurement objects are increasingly diversified, and the traditional classification can no longer meet their personalized, differentiated needs. Accordingly, this paper constructs a subdivision model based on object features and system evaluation, classifies metering points according to the characteristics of the measurement object, and carries out an empirical analysis on one class of measurement objects. The results show that the model works well and can be used to subdivide metering objects into distinct customer groups, for which the metering devices can then be reasonably configured and managed. This research effectively improves the economy and rationality of energy metering device management as well as working efficiency.

  1. Contribution of climate, soil, and MODIS predictors when modeling forest inventory invasive species distribution using forest inventory data

    Treesearch

    Dumitru Salajanu; Dennis Jacobs

    2010-01-01

    Forest inventory and analysis data are used to monitor the presence and extent of certain non-native invasive species. Effective control of their spread requires quality spatial distribution information. There is no clear consensus why some ecosystems are more favorable to non-native species. The objective of this study is to evaluate the relative contribution of geo-...

  2. APL-UW Deep Water Propagation 2015-2017: Philippine Sea Data Analysis

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A: Approved for public release: distribution is unlimited APL-UW Deep Water Propagation 2015-2017: Philippine Sea Data...the fundamental statistics of broadband low-frequency acoustical signals evolve during propagation through a dynamically-varying deep ocean. OBJECTIVES...Current models of signal randomization over long ranges in the deep ocean were developed for and tested in the North Pacific Ocean gyre. The

  3. A database for reproducible manipulation research: CapriDB - Capture, Print, Innovate.

    PubMed

    Pokorny, Florian T; Bekiroglu, Yasemin; Pauwels, Karl; Butepage, Judith; Scherer, Clara; Kragic, Danica

    2017-04-01

    We present a novel approach and database which combines the inexpensive generation of 3D object models via monocular or RGB-D camera images with 3D printing and a state-of-the-art object tracking algorithm. Unlike recent efforts towards the creation of 3D object databases for robotics, our approach does not require expensive and controlled 3D scanning setups and aims to enable anyone with a camera to scan, print and track complex objects for manipulation research. The proposed approach results in detailed textured mesh models whose 3D printed replicas provide close approximations of the originals. A key motivation for utilizing 3D printed objects is the ability to precisely control and vary object properties such as size, material properties and mass distribution in the 3D printing process, to obtain reproducible conditions for robotic manipulation research. We present CapriDB - an extensible database resulting from this approach - containing initially 40 textured and 3D printable mesh models together with tracking features to facilitate the adoption of the proposed approach.

  4. Accelerating the Integration of Distributed Water Solutions: A Conceptual Financing Model from the Electricity Sector

    NASA Astrophysics Data System (ADS)

    Quesnel, Kimberly J.; Ajami, Newsha K.; Wyss, Noemi

    2017-11-01

    Modern challenges require new approaches to urban water management. One solution in the portfolio of potential strategies is the integration of distributed water infrastructure, practices, and technologies into existing systems. However, many practical barriers have prevented the widespread adoption of these systems in the US. The objective of this paper is to address these challenges by developing a conceptual model encompassing regulatory, financial, and governance components that can be used to incorporate new distributed water solutions into our current network. To construct the model, case studies of successfully implemented distributed electricity systems, specifically energy efficiency and renewable energy technologies, were examined to determine how these solutions have become prominent in recent years and what lessons can be applied to the water sector in a similar pursuit. The proposed model includes four action-oriented elements: catalyzing change, establishing funding sources, using resource pathways, and creating innovative governance structures. As illustrated in the model, the water sector should use a suite of coordinated policies to promote change, engage end users through fiscal incentives, and encourage research, development and dissemination of new technologies over time.

  5. Accelerating the Integration of Distributed Water Solutions: A Conceptual Financing Model from the Electricity Sector.

    PubMed

    Quesnel, Kimberly J; Ajami, Newsha K; Wyss, Noemi

    2017-11-01

    Modern challenges require new approaches to urban water management. One solution in the portfolio of potential strategies is the integration of distributed water infrastructure, practices, and technologies into existing systems. However, many practical barriers have prevented the widespread adoption of these systems in the US. The objective of this paper is to address these challenges by developing a conceptual model encompassing regulatory, financial, and governance components that can be used to incorporate new distributed water solutions into our current network. To construct the model, case studies of successfully implemented distributed electricity systems, specifically energy efficiency and renewable energy technologies, were examined to determine how these solutions have become prominent in recent years and what lessons can be applied to the water sector in a similar pursuit. The proposed model includes four action-oriented elements: catalyzing change, establishing funding sources, using resource pathways, and creating innovative governance structures. As illustrated in the model, the water sector should use a suite of coordinated policies to promote change, engage end users through fiscal incentives, and encourage research, development and dissemination of new technologies over time.

  6. Magnetic Field of Conductive Objects as Superposition of Elementary Eddy Currents and Eddy Current Tomography

    NASA Astrophysics Data System (ADS)

    Sukhanov, D. Ya.; Zav'yalova, K. V.

    2018-03-01

    The paper represents induced currents in an electrically conductive object as a superposition of elementary eddy currents. The proposed scanning method includes measurements of only one component of the secondary magnetic field. Reconstruction of the current distribution is performed by deconvolution with regularization. Numerical modeling supported by field experiments shows that this approach is of direct practical relevance.

  7. Load allocation of power plant using multi echelon economic dispatch

    NASA Astrophysics Data System (ADS)

    Wahyuda, Santosa, Budi; Rusdiansyah, Ahmad

    2017-11-01

    In this paper, the allocation of power plant load, which is usually done with a single echelon as in load flow calculations, is expanded into a multi-echelon formulation. A plant load allocation model based on the integration of economic dispatch and the multi-echelon problem is proposed. The resulting model is called Single Objective Multi Echelon Economic Dispatch (SOME ED). This model allows the distribution of electrical power to be resolved in more detail across the transmission and distribution substations along the existing network. Because in an interconnection system the distance between a plant and a load center is usually large, losses in this model are treated as a function of distance. The advantage of this model is its capability of allocating electrical loads properly, while the multi-echelon structure provides economic dispatch information together with flexibility of the electric power system. In this model, flexibility can be viewed from two sides, namely supply and demand, so that the security of the power system is maintained. The model was tested on a small artificial data set and demonstrated good performance. It remains open to further development, for example integration with renewable energy, multi-objective formulations with environmental issues, and application to larger-scale cases.

  8. A Simple Model for the Orbital Debris Environment in GEO

    NASA Astrophysics Data System (ADS)

    Anilkumar, A. K.; Ananthasayanam, M. R.; Subba Rao, P. V.

    The increase of space debris and its threat to commercial space activities in the Geosynchronous Earth Orbit (GEO) predictably causes concern regarding the environment over the long term. A variety of space debris studies, covering detection, modeling, protection and mitigation measures, has been pursued for the past couple of decades. Due to the absence of atmospheric drag to remove debris in GEO and the increasing number of utility satellites therein, the number of objects in GEO will continue to increase. The characterization of the GEO environment is critical for risk assessment and protection of future satellites, and also for incorporating effective debris mitigation measures into design and operations. Debris measurements in GEO have been limited to objects larger than about 60 cm. This paper provides an engineering model of the GEO environment by utilizing the philosophy and approach laid out in the SIMPLE model recently proposed for LEO by the authors. The present study analyses the statistical characteristics of the GEO catalogued objects in order to arrive at a model for the GEO space debris environment. The roughly 800 objects catalogued by USSPACECOM across the years 1998 to 2004 share the same semi-major axis mode (highest number density) around 35750 km above the earth. After removing the objects in the small bin (35700, 35800) km around the mode, which contains around 40 percent of the objects (a fraction that is nearly constant across the years), the number density of the remaining objects follows a single Laplace distribution with two parameters, namely location and scale. Across the years the location parameter of this distribution does not vary significantly, but the scale parameter shows a definite trend. These observations are utilized in proposing a simple model for the GEO debris environment. References: Ananthasayanam, M. R., Anil Kumar, A. K., and Subba Rao, P. V., ``A New Stochastic Impressionistic Low Earth (SIMPLE) Model of the Space Debris Scenario'', Conference Abstract COSPAR 02-A-01772, 2002. Ananthasayanam, M. R., Anilkumar, A. K., Subba Rao, P. V., and Adimurthy, V., ``Characterization of Eccentricity and Ballistic Coefficients of Space Debris in Altitude and Perigee Bins'', IAC-03-IAA5.p.04, presented at the IAF Conference, Bremen, October 2003; also in Proceedings of the IAF Conference, Science and Technology Series, 2003.
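
    A minimal sketch of the distribution fit described above (synthetic semi-major axes stand in for the catalogue values, which the abstract does not give; for the Laplace distribution the maximum-likelihood location is the sample median and the scale is the mean absolute deviation about it):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Synthetic stand-in for catalogued GEO semi-major axes (km);
    # the real values would come from the USSPACECOM catalogue.
    sma = rng.laplace(loc=35750.0, scale=120.0, size=800)
    sma = sma[(sma < 35700) | (sma > 35800)]  # drop the modal bin

    # Maximum-likelihood estimates for the two Laplace parameters:
    # location = sample median, scale = mean absolute deviation about it.
    loc = np.median(sma)
    scale = np.mean(np.abs(sma - loc))
    print(f"location ≈ {loc:.0f} km, scale ≈ {scale:.0f} km")
    ```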

  9. Theoretical cratering rates on Ida, Mathilde, Eros and Gaspra

    NASA Astrophysics Data System (ADS)

    Jeffers, S. V.; Asher, D. J.; Bailey, M. E.

    2002-11-01

    We investigate the main influences on crater size distributions by deriving results for four example target objects: (951) Gaspra, (243) Ida, (253) Mathilde and (433) Eros. The dynamical history of each of these asteroids is modelled using the MERCURY (Chambers 1999) numerical integrator. The use of an efficient, Öpik-type collision code enables the calculation of a velocity histogram and the probability of impact. Combining these with a crater scaling law and an impactor size distribution through a Monte Carlo method yields a crater size distribution. The resulting crater probability distributions are in good agreement with observed crater distributions on these asteroids.
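
    A minimal sketch of the Monte Carlo step (the power-law exponent, velocity histogram, and crater scaling law below are illustrative assumptions, not the paper's calibrated values):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n = 100_000

    # Impactor diameters from a truncated power law N(>D) ∝ D^-q
    # (exponent and bounds assumed), sampled by inverse-CDF.
    q, d_min, d_max = 2.5, 0.01, 10.0  # km; illustrative values only
    u = rng.uniform(size=n)
    d_imp = (d_min**-q + u * (d_max**-q - d_min**-q)) ** (-1.0 / q)

    # Impact velocities drawn from a coarse histogram standing in for
    # the velocity distribution from the collision code.
    v_bins = np.array([3.0, 5.0, 7.0, 9.0])  # km/s
    v_prob = np.array([0.2, 0.4, 0.3, 0.1])
    v_imp = rng.choice(v_bins, size=n, p=v_prob)

    # Placeholder scaling law: crater diameter grows with impactor size
    # and a weak power of velocity (real laws depend on target properties).
    d_crater = 10.0 * d_imp**0.78 * (v_imp / 5.0)**0.44

    print("median crater diameter: %.2f km" % np.median(d_crater))
    print("craters larger than 1 km: %d of %d" % ((d_crater > 1.0).sum(), n))
    ```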

  10. Investigation on magnetoacoustic signal generation with magnetic induction and its application to electrical conductivity reconstruction.

    PubMed

    Ma, Qingyu; He, Bin

    2007-08-21

    A theoretical study on the magnetoacoustic signal generation with magnetic induction and its applications to electrical conductivity reconstruction is conducted. An object with a concentric cylindrical geometry is located in a static magnetic field and a pulsed magnetic field. Driven by Lorentz force generated by the static magnetic field, the magnetically induced eddy current produces acoustic vibration and the propagated sound wave is received by a transducer around the object to reconstruct the corresponding electrical conductivity distribution of the object. A theory on the magnetoacoustic waveform generation for a circular symmetric model is provided as a forward problem. The explicit formulae and quantitative algorithm for the electrical conductivity reconstruction are then presented as an inverse problem. Computer simulations were conducted to test the proposed theory and assess the performance of the inverse algorithms for a multi-layer cylindrical model. The present simulation results confirm the validity of the proposed theory and suggest the feasibility of reconstructing electrical conductivity distribution based on the proposed theory on the magnetoacoustic signal generation with magnetic induction.

  11. A Variational Approach to Simultaneous Image Segmentation and Bias Correction.

    PubMed

    Zhang, Kaihua; Liu, Qingshan; Song, Huihui; Li, Xuelong

    2015-08-01

    This paper presents a novel variational approach for simultaneous estimation of the bias field and segmentation of images with intensity inhomogeneity. We model the intensities of inhomogeneous objects as Gaussian distributed with different means and variances, and then introduce a sliding window to map the original image intensity onto another domain, where the intensity distribution of each object is still Gaussian but can be better separated. The means of the Gaussian distributions in the transformed domain can be adaptively estimated by multiplying the bias field with a piecewise constant signal within the sliding window. A maximum likelihood energy functional is then defined on each local region, which combines the bias field, the membership function of the object region, and the constant approximating the true signal from its corresponding object. The energy functional is then extended to the whole image domain by the Bayesian learning approach. An efficient iterative algorithm is proposed for energy minimization, via which the image segmentation and bias field correction are simultaneously achieved. Furthermore, the smoothness of the obtained optimal bias field is ensured by normalized convolutions without extra cost. Experiments on real images demonstrated the superiority of the proposed algorithm over other state-of-the-art representative methods.

  12. Effect diffraction on a viewed object has on improvement of object optical image quality in a turbulent medium

    NASA Astrophysics Data System (ADS)

    Banakh, Viktor A.; Sazanovich, Valentina M.; Tsvik, Ruvim S.

    1997-09-01

    The influence of diffraction on an object, coherently illuminated and viewed through a random medium from the same point, on the image quality improvement caused by counter-wave correlation is studied experimentally. The measurements were carried out using a setup modeling artificial convective turbulence. It is shown that, for a spatially limited reflector with the Fresnel number of the reflector surface radius r ranging from 3 to 12, the contribution of the counter-wave correlation to the image intensity distribution is maximal as compared with point objects.

  13. Statistics of high-level scene context.

    PubMed

    Greene, Michelle R

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag-of-words level, where scenes are described by the list of objects contained within them; and the structural level, where the spatial distribution of and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag-of-words classifier had performance similar to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition.

  14. Profile fitting in crowded astronomical images

    NASA Astrophysics Data System (ADS)

    Manish, Raja

    Around 18,000 known objects currently populate near-Earth space. These constitute active space assets as well as space debris objects. The tracking and cataloging of such objects relies on observations, most of which are ground based. Because of the great distance to the objects, only non-resolved object images can be obtained from the observations. Optical systems consist of telescope optics and a detector; nowadays, usually CCD detectors are used. The information to be extracted from the frames is each object's astrometric position. In order to do so, the center of the object's image on the CCD frame has to be found. However, the observation frames that are read out of the detector are subject to noise from three different sources: celestial background sources, the object signal itself, and the sensor noise. The noise statistics are usually modeled as Gaussian or Poisson distributed, or their combined distribution. In order to achieve near real-time processing, computationally fast and reliable methods for the so-called centroiding are desired; analytical methods are preferred over numerical ones of comparable accuracy. In this work, an analytic method for centroiding is investigated and compared to numerical methods. Though the work focuses mainly on astronomical images, the same principle could be applied to non-celestial images containing similar data. The method is based on minimizing the weighted least-squares (LS) error between observed data and the theoretical model of point sources in a novel yet simple way. Synthetic image frames have been simulated, and the newly developed method is tested in both crowded and non-crowded fields, where the former needs additional image handling procedures to separate closely packed objects. Subsequent analysis on real celestial images corroborates the effectiveness of the approach.
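
    A minimal numerical counterpart of the centroiding problem (a weighted least-squares fit of a 2-D Gaussian point-spread function to a synthetic frame using scipy; the thesis's analytic method itself is not reproduced here, and all parameter values are invented):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(3)

    def gaussian_psf(params, xx, yy):
        amp, x0, y0, sigma, bg = params
        return bg + amp * np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * sigma**2))

    # Synthetic 32x32 frame: one point source plus Gaussian sensor noise.
    xx, yy = np.meshgrid(np.arange(32), np.arange(32))
    truth = (500.0, 14.3, 17.8, 1.6, 20.0)
    frame = gaussian_psf(truth, xx, yy) + rng.normal(0, 5, size=xx.shape)

    # Weighted LS: residuals scaled by an (assumed) per-pixel noise sigma.
    noise_sigma = 5.0
    def residuals(p):
        return ((gaussian_psf(p, xx, yy) - frame) / noise_sigma).ravel()

    fit = least_squares(residuals, x0=[400, 16, 16, 2.0, 10.0])
    print("centroid: x=%.3f y=%.3f (truth 14.3, 17.8)" % (fit.x[1], fit.x[2]))
    ```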

  15. 1988/1989 household travel survey

    DOT National Transportation Integrated Search

    1989-07-01

    The primary objectives of this study were to provide the data: (1) : to update the trip generation rates used in the Maricopa Association of Governments (MAG) travel demand forecasting process, and; (2) to validate the MAG trip distribution model. Th...

  16. A model for a drug distribution system in remote Australia as a social determinant of health using event structure analysis.

    PubMed

    Rovers, John P; Mages, Michelle D

    2017-09-25

    The social determinants of health include the health systems under which people live and utilize health services. One social determinant, for which pharmacists are responsible, is designing drug distribution systems that ensure patients have safe and convenient access to medications. This is critical for settings with poor access to health care. Rural and remote Australia is one example of a setting where the pharmacy profession, schools of pharmacy, and regulatory agencies require pharmacists to assure medication access. Studies of drug distribution systems in such settings are uncommon. This study describes a model for a drug distribution system in an Aboriginal Health Service in remote Australia. The results may be useful for policy setting, pharmacy system design, health professions education, benchmarking, or quality assurance efforts for health system managers in similarly remote locations. The results also suggest that pharmacists can promote access to medications as a social determinant of health. The primary objective of this study was to propose a model for a drug procurement, storage, and distribution system in a remote region of Australia. The secondary objective was to learn the opinions and experiences of healthcare workers under the model. Qualitative research methods were used. Semi-structured interviews were performed with a convenience sample of 11 individuals employed by an Aboriginal health service. Transcripts were analyzed using Event Structure Analysis (ESA) to develop the model. Transcripts were also analyzed to determine the opinions and experiences of health care workers. The model was comprised of 24 unique steps with seven distinct components: choosing a supplier; creating a list of preferred medications; budgeting and ordering; supply and shipping; receipt and storage in the clinic; prescribing process; dispensing and patient counseling. Interviewees described opportunities for quality improvement in choosing suppliers, legal issues and staffing, cold chain integrity, medication shortages and wastage, and adherence to policies. The model illustrates how pharmacists address medication access as a social determinant of health, and may be helpful for policy setting, system design, benchmarking, and quality assurance by health system designers. ESA is an effective and novel method of developing such models.

  17. From radio to TeV: the surprising spectral energy distribution of AP Librae

    DOE PAGES

    Sanchez, D. A.; Giebels, B.; Fortin, P.; ...

    2015-10-17

    Following the discovery of high-energy (HE; E > 10 MeV) and very-high-energy (VHE; E > 100 GeV) γ-ray emission from the low-frequency-peaked BL Lac (LBL) object AP Librae, its electromagnetic spectrum is studied over 60 octaves in energy. Contemporaneous data in radio, optical and UV together with the (non-simultaneous) γ-ray data are used to construct the most precise spectral energy distribution of this source. We found that the data are difficult to model with single-zone homogeneous leptonic synchrotron self-Compton (SSC) radiative scenarios due to the unprecedented width of the HE component when compared to the lower-energy component. Furthermore, the two other LBL objects also detected at VHE appear to pose similar modelling difficulties. Nevertheless, VHE γ-rays produced in the extended jet could account for the VHE flux observed by HESS.

  18. Experimental investigation of mode I fracture for brittle tube-shaped particles

    NASA Astrophysics Data System (ADS)

    Stasiak, Marta; Combe, Gaël; Desrues, Jacques; Richefeu, Vincent; Villard, Pascal; Armand, Gilles; Zghondi, Jad

    2017-06-01

    We focus herein on the mechanical behavior of highly crushable grains. The object of our interest, named a shell, is a hollow cylindrical grain with a ring cross-section, made of baked clay. The objective is to model the fragmentation of such shells by means of a discrete element (DE) approach. To this end, fracture modes I (opening fracture) and II (in-plane shear fracture) have to be investigated experimentally. This paper is essentially dedicated to mode I fracture; therefore, a campaign of Brazilian-like compression tests, which result in crack opening, has been performed. The distribution of tensile strength is shown to obey a Weibull distribution for the studied shells, and the Weibull modulus was quantified. Finally, an estimate of the numerical/physical parameters required in a DE model (local strength) is proposed on the basis of the energy required to fracture through a given surface in mode I or II.
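
    A minimal sketch of quantifying a Weibull modulus from fracture strengths (synthetic data; the median-rank plotting position and linearized-CDF regression below are one standard recipe, not necessarily the authors'):

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Synthetic fracture strengths (MPa) standing in for the test campaign.
    m_true, scale = 8.0, 30.0
    strengths = scale * rng.weibull(m_true, size=40)

    # Linearize the Weibull CDF: ln(-ln(1-F)) = m*ln(sigma) - m*ln(scale).
    s = np.sort(strengths)
    n = len(s)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # median-rank estimator
    x = np.log(s)
    y = np.log(-np.log(1.0 - F))
    m_fit, intercept = np.polyfit(x, y, 1)
    print(f"Weibull modulus ≈ {m_fit:.1f} (true {m_true})")
    ```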

  19. The 11.2 μm emission of PAHs in astrophysical objects

    NASA Astrophysics Data System (ADS)

    Candian, A.; Sarre, P. J.

    2015-04-01

    The 11.2-μm emission band belongs to the family of the `unidentified' infrared emission bands seen in many astronomical environments. In this work, we present a theoretical interpretation of the band characteristics and profile variation for a number of astrophysical sources in which the carriers are subject to a range of physical conditions. The results of Density Functional Theory calculations for the solo out-of-plane vibrational bending modes of large polycyclic aromatic hydrocarbon (PAH) molecules are used as input for a detailed emission model which includes the temperature and mass dependence of PAH band wavelength, and a PAH mass distribution that varies with object. Comparison of the model with astronomical spectra indicates that the 11.2-μm band asymmetry and profile variation can be explained principally in terms of the mass distribution of neutral PAHs with a small contribution from anharmonic effects.

  20. A probabilistic asteroid impact risk model: assessment of sub-300 m impacts

    NASA Astrophysics Data System (ADS)

    Mathias, Donovan L.; Wheeler, Lorien F.; Dotson, Jessie L.

    2017-06-01

    A comprehensive asteroid threat assessment requires the quantification of both the impact likelihood and resulting consequence across the range of possible events. This paper presents a probabilistic asteroid impact risk (PAIR) assessment model developed for this purpose. The model incorporates published impact frequency rates with state-of-the-art consequence assessment tools, applied within a Monte Carlo framework that generates sets of impact scenarios from uncertain input parameter distributions. Explicit treatment of atmospheric entry is included to produce energy deposition rates that account for the effects of thermal ablation and object fragmentation. These energy deposition rates are used to model the resulting ground damage, and affected populations are computed for the sampled impact locations. The results for each scenario are aggregated into a distribution of potential outcomes that reflect the range of uncertain impact parameters, population densities, and strike probabilities. As an illustration of the utility of the PAIR model, the results are used to address the question of what minimum size asteroid constitutes a threat to the population. To answer this question, complete distributions of results are combined with a hypothetical risk tolerance posture to provide the minimum size, given sets of initial assumptions for objects up to 300 m in diameter. Model outputs demonstrate how such questions can be answered and provide a means for interpreting the effect that input assumptions and uncertainty can have on final risk-based decisions. Model results can be used to prioritize investments to gain knowledge in critical areas or, conversely, to identify areas where additional data have little effect on the metrics of interest.
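
    A minimal sketch of the Monte Carlo skeleton described above (the input distributions and the damage model are illustrative placeholders, not the PAIR model's calibrated relations):

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    n = 200_000

    # Sample uncertain impactor properties (distributions assumed).
    diameter = rng.uniform(20.0, 300.0, n)                    # m
    density = rng.normal(2500.0, 500.0, n).clip(1000)         # kg/m^3
    velocity = rng.normal(20_000.0, 4_000.0, n).clip(11_000)  # m/s

    # Kinetic energy of a spherical impactor, converted to megatons TNT
    # (1 Mt = 4.184e15 J).
    mass = density * (np.pi / 6.0) * diameter**3
    energy_mt = 0.5 * mass * velocity**2 / 4.184e15

    # Placeholder damage model: affected radius grows as a cube root of
    # energy; the real model uses entry physics and energy deposition.
    damage_radius_km = 2.0 * energy_mt ** (1.0 / 3.0)

    print("median energy: %.1f Mt" % np.median(energy_mt))
    print("99th-percentile damage radius: %.0f km" %
          np.percentile(damage_radius_km, 99))
    ```

    Aggregating such per-scenario outcomes over the sampled inputs is what produces the outcome distributions from which risk-tolerance questions, such as the minimum threatening size, can be answered.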

  1. Advances in Modal Analysis Using a Robust and Multiscale Method

    NASA Astrophysics Data System (ADS)

    Picard, Cécile; Frisson, Christian; Faure, François; Drettakis, George; Kry, Paul G.

    2010-12-01

    This paper presents a new approach to modal synthesis for rendering sounds of virtual objects. We propose a generic method that preserves sound variety across the surface of an object at different scales of resolution and for a variety of complex geometries. The technique performs automatic voxelization of a surface model and automatic tuning of the parameters of hexahedral finite elements, based on the distribution of material in each cell. The voxelization is performed using a sparse regular grid embedding of the object, which permits the construction of plausible lower resolution approximations of the modal model. We can compute the audible impulse response of a variety of objects. Our solution is robust and can handle nonmanifold geometries that include both volumetric and surface parts. We present a system which allows us to manipulate and tune sounding objects in an appropriate way for games, training simulations, and other interactive virtual environments.

  2. Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling

    PubMed Central

    Barnhart, Paul R.; Gillam, Erin H.

    2016-01-01

    Individuals along the periphery of a species' distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species' occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping, allowing regional managers to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and to determine the vegetative and climatic characteristics driving these models. Mistnetting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of update. Maximum-entropy modeling showed that temperature, and not precipitation, was the variable most important for model production. This fine-scale result highlights the importance of habitat suitability modelling, as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species' distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936

  3. Serendipitous occultations by kilometer-size Kuiper Belt objects with MIOSOTYS

    NASA Astrophysics Data System (ADS)

    Doressoundiram, A.; Liu, C.-Y.; Maquet, L.; Roques, F.

    2017-09-01

    MIOSOTYS (Multi-object Instrument for Occultations in the SOlar system and TransitorY Systems) is a multi-fiber positioner coupled with a fast photometry camera. It is a visitor instrument mounted on the 193 cm telescope at the Observatoire de Haute-Provence, France and on the 123 cm telescope at the Calar Alto Observatory, Spain. Our immediate goal is to characterize the spatial distribution and extension of the Kuiper Belt, and the physical size distribution of TNOs. We present the 2010-2013 observation campaigns, their objectives and observing strategy, and report the detection of potential candidates for occultation events by TNOs. We discuss more specifically the method used to process the data and the modelling of diffraction patterns, and finally present the results obtained concerning the distribution of sub-kilometer TNOs in the Kuiper Belt.

  4. Window of visibility - A psychophysical theory of fidelity in time-sampled visual motion displays

    NASA Technical Reports Server (NTRS)

    Watson, A. B.; Ahumada, A. J., Jr.; Farrell, J. E.

    1986-01-01

    A film of an object in motion presents on the screen a sequence of static views, while the human observer sees the object moving smoothly across the screen. Questions related to the perceptual identity of continuous and stroboscopic displays are examined. Time-sampled moving images are considered along with the contrast distribution of continuous motion, the contrast distribution of stroboscopic motion, the frequency spectrum of continuous motion, the frequency spectrum of stroboscopic motion, the approximation of the limits of human visual sensitivity to spatial and temporal frequencies by a window of visibility, the critical sampling frequency, the contrast distribution of staircase motion and the frequency spectrum of this motion, and the spatial dependence of the critical sampling frequency. Attention is given to apparent motion, models of motion, image recording, and computer-generated imagery.

  5. Nearest neighbor density ratio estimation for large-scale applications in astronomy

    NASA Astrophysics Data System (ADS)

    Kremer, J.; Gieseke, F.; Steenstrup Pedersen, K.; Igel, C.

    2015-09-01

    In astronomical applications of machine learning, the distribution of objects used for building a model is often different from the distribution of the objects the model is later applied to. This is known as sample selection bias, which is a major challenge for statistical inference as one can no longer assume that the labeled training data are representative. To address this issue, one can re-weight the labeled training patterns to match the distribution of unlabeled data that are available already in the training phase. There are many examples in practice where this strategy yielded good results, but estimating the weights reliably from a finite sample is challenging. We consider an efficient nearest neighbor density ratio estimator that can exploit large samples to increase the accuracy of the weight estimates. To solve the problem of choosing the right neighborhood size, we propose to use cross-validation on a model selection criterion that is unbiased under covariate shift. The resulting algorithm is our method of choice for density ratio estimation when the feature space dimensionality is small and sample sizes are large. The approach is simple and, because of the model selection, robust. We empirically find that it is on a par with established kernel-based methods on relatively small regression benchmark datasets. However, when applied to large-scale photometric redshift estimation, our approach outperforms the state-of-the-art.
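
    A minimal sketch of a nearest neighbor density ratio estimator of this kind (the paper's exact estimator and model selection are not reproduced; here the radius to the k-th nearest test-set neighbor of each training point is compared against training-set counts in the same ball):

    ```python
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(6)

    # Labeled (training) and unlabeled (test) samples under covariate shift.
    x_train = rng.normal(0.0, 1.0, size=(1000, 2))
    x_test = rng.normal(0.5, 1.0, size=(4000, 2))
    k = 50

    # Distance from each training point to its k-th nearest test neighbor.
    nn_test = NearestNeighbors(n_neighbors=k).fit(x_test)
    dist, _ = nn_test.kneighbors(x_train)
    r_k = dist[:, -1]

    # Training-set counts inside the same radius (the query point itself
    # is included, which is acceptable for a sketch).
    nn_train = NearestNeighbors().fit(x_train)
    counts = np.array([
        nn_train.radius_neighbors(x_train[i:i + 1], radius=r)[1][0].size
        for i, r in enumerate(r_k)
    ])

    # Same volume for both counts, so the size-normalized count ratio
    # estimates w(x) ~ p_test(x) / p_train(x).
    weights = (k / len(x_test)) / (counts / len(x_train))
    print("mean weight: %.2f" % weights.mean())
    ```

    Training points lying where the test data are dense receive large weights, which is exactly the re-weighting of labeled training patterns the abstract describes.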

  6. A 14 h⁻³ Gpc³ study of cosmic homogeneity using BOSS DR12 quasar sample

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laurent, Pierre; Goff, Jean-Marc Le; Burtin, Etienne

    2016-11-01

    The BOSS quasar sample is used to study cosmic homogeneity with a 3D survey in the redshift range 2.2 < z < 2.8. We measure the count-in-sphere, N(<r), i.e. the average number of objects around a given object, and its logarithmic derivative, the fractal correlation dimension, D₂(r). For a homogeneous distribution N(<r) ∝ r³ and D₂(r) = 3. Due to the uncertainty on tracer density evolution, 3D surveys can only probe homogeneity up to a redshift dependence, i.e. they probe so-called 'spatial isotropy'. Our data demonstrate spatial isotropy of the quasar distribution in the redshift range 2.2 < z < 2.8 in a model-independent way, independent of any FLRW fiducial cosmology, resulting in 3 − ⟨D₂⟩ < 1.7 × 10⁻³ (2σ) over the range 250 < r < 1200 h⁻¹ Mpc for the quasar distribution. If we assume that quasars do not have a bias much less than unity, this implies spatial isotropy of the matter distribution on large scales. Combining this with the Copernican principle, we finally obtain homogeneity of the matter distribution on large scales. Alternatively, using a flat ΛCDM fiducial cosmology with CMB-derived parameters, and measuring the quasar bias relative to this ΛCDM model, our data provide a consistency check of the model, in terms of how homogeneous the Universe is on different scales. D₂(r) is found to be compatible with our ΛCDM model on the whole 10 < r < 1200 h⁻¹ Mpc range. For the matter distribution we obtain 3 − ⟨D₂⟩ < 5 × 10⁻⁵ (2σ) over the range 250 < r < 1200 h⁻¹ Mpc, consistent with homogeneity on large scales.

  7. Nanomechanical characterization of heterogeneous and hierarchical biomaterials and tissues using nanoindentation: the role of finite mixture models.

    PubMed

    Zadpoor, Amir A

    2015-03-01

    Mechanical characterization of biological tissues and biomaterials at the nano-scale is often performed using nanoindentation experiments. The different constituents of the characterized materials then appear in the histogram that shows the probability of measuring a certain range of mechanical properties. An objective technique is needed to separate the probability distributions that are mixed together in such a histogram. In this paper, finite mixture models (FMMs) are proposed as a tool capable of performing this type of analysis. Finite Gaussian mixture models assume that the measured probability distribution is a weighted combination of a finite number of Gaussian distributions with separate mean and standard deviation values. Dedicated optimization algorithms are available for fitting such a weighted mixture model to experimental data, and objective criteria are available to determine the optimum number of Gaussian distributions. In this paper, FMMs are used for interpreting the probability distribution functions representing the distributions of the elastic moduli of osteoarthritic human cartilage and co-polymeric microspheres. For the cartilage experiments, FMMs indicate that at least three mixture components are needed to describe the measured histogram. While the mechanical properties of the softer mixture component, often assumed to be associated with glycosaminoglycans, were found to be more or less constant regardless of whether two or three mixture components were used, those of the second mixture component (i.e. the collagen network) changed considerably depending on the number of mixture components. Regarding the co-polymeric microspheres, the optimum number of mixture components estimated by the FMM theory, i.e. 3, nicely matches the number of co-polymeric components used in the structure of the polymer. The computer programs used for the presented analyses are made freely available online for other researchers to use.
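
    A minimal sketch of this workflow with scikit-learn (the moduli are synthetic and the three components are invented for illustration; BIC stands in here for the objective selection criteria the paper advocates):

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(7)

    # Synthetic elastic moduli (GPa): three constituents, different means.
    moduli = np.concatenate([
        rng.normal(0.5, 0.1, 300),  # soft phase
        rng.normal(2.0, 0.4, 300),  # intermediate phase
        rng.normal(6.0, 1.0, 200),  # stiff phase
    ]).reshape(-1, 1)

    # Fit mixtures with 1..6 components and select the model by BIC.
    models = [GaussianMixture(n_components=c, random_state=0).fit(moduli)
              for c in range(1, 7)]
    best = min(models, key=lambda m: m.bic(moduli))
    print("selected components:", best.n_components)
    print("means (GPa):", np.sort(best.means_.ravel()).round(2))
    ```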

  8. RockFall analyst: A GIS extension for three-dimensional and spatially distributed rockfall hazard modeling

    NASA Astrophysics Data System (ADS)

    Lan, Hengxing; Derek Martin, C.; Lim, C. H.

    2007-02-01

    Geographic information system (GIS) modeling is used in combination with three-dimensional (3D) rockfall process modeling to assess rockfall hazards. A GIS extension, RockFall Analyst (RA), which is capable of effectively handling large amounts of geospatial information relative to rockfall behaviors, has been developed in ArcGIS using ArcObjects and C#. The 3D rockfall model considers dynamic processes on a cell plane basis. It uses inputs of distributed parameters in terms of raster and polygon features created in GIS. Two major components are included in RA: particle-based rockfall process modeling and geostatistics-based rockfall raster modeling. Rockfall process simulation results, 3D rockfall trajectories and their velocity features either for point seeders or polyline seeders are stored in 3D shape files. Distributed raster modeling, based on 3D rockfall trajectories and a spatial geostatistical technique, represents the distribution of spatial frequency, the flying and/or bouncing height, and the kinetic energy of falling rocks. A distribution of rockfall hazard can be created by taking these rockfall characteristics into account. A barrier analysis tool is also provided in RA to aid barrier design. An application of these modeling techniques to a case study is provided. The RA has been tested in ArcGIS 8.2, 8.3, 9.0 and 9.1.

  9. Googling Service Boundaries for Endovascular Clot Retrieval Hub Hospitals in a Metropolitan Setting: Proof-of-Concept Study.

    PubMed

    Phan, Thanh G; Beare, Richard; Chen, Jian; Clissold, Benjamin; Ly, John; Singhal, Shaloo; Ma, Henry; Srikanth, Velandai

    2017-05-01

    There is great interest in how endovascular clot retrieval hubs provide services to a population. We applied a computational method to objectively generate service boundaries, defined by traveling time to hub, for such endovascular clot retrieval hubs. Stroke incidence data were merged with the population census to estimate numbers of strokes in metropolitan Melbourne, Australia. Traveling times from randomly generated addresses to 4 endovascular clot retrieval-capable hubs (Royal Melbourne Hospital [RMH], Monash Medical Center [MMC], Alfred Hospital [ALF], and Austin Hospital [AUS]) were estimated using the Google Maps application program interface. Boundary maps were generated based on traveling time at various times of day for combinations of hubs. In a 2-hub model, catchment was best distributed when RMH was paired with MMC (model 1a, RMH 1765 km² and MMC 1164 km²) or with AUS (model 1c, RMH 1244 km² and AUS 1685 km²), with no statistical difference between models (P=0.20). Catchment was poorly distributed when RMH was paired with ALF (model 1b, RMH 2252 km² and ALF 676 km²), significantly different from both models 1a and 1c (both P<0.05). Model 1a had the greatest proportion of patients arriving within the ideal time of 30 minutes, followed by model 1c (P<0.001). In a 3-hub model, the combination of RMH, MMC, and AUS was superior to that of RMH, MMC, and ALF in catchment distribution and travel time. The method was also successfully applied to the city of Adelaide, demonstrating wider applicability. We provide proof of concept for a novel computational method to objectively designate service boundaries for endovascular clot retrieval hubs.
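
    A minimal sketch of the kind of query such a method automates, using the publicly documented Google Maps Distance Matrix web service (the key, coordinates, and hub list below are placeholders; the study's actual implementation is not given in the abstract):

    ```python
    import requests

    API_KEY = "YOUR_KEY"       # placeholder
    ORIGIN = "-37.95,145.05"   # one randomly generated address (placeholder)
    HUBS = {
        "RMH": "-37.799,144.956",  # approximate coordinates, illustrative only
        "MMC": "-37.921,145.120",
    }

    def travel_minutes(origin, dest):
        resp = requests.get(
            "https://maps.googleapis.com/maps/api/distancematrix/json",
            params={"origins": origin, "destinations": dest,
                    "departure_time": "now", "key": API_KEY},
            timeout=10,
        )
        elem = resp.json()["rows"][0]["elements"][0]
        return elem["duration"]["value"] / 60.0  # seconds -> minutes

    # Assign the address to the hub with the shortest drive.
    times = {name: travel_minutes(ORIGIN, loc) for name, loc in HUBS.items()}
    print(min(times, key=times.get), times)
    ```

    Repeating such queries over many sampled addresses and times of day, then taking the minimum per address, is what traces out the catchment boundaries.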

  10. Software Engineering Support of the Third Round of Scientific Grand Challenge Investigations: An Earth Modeling System Software Framework Strawman Design that Integrates Cactus and UCLA/UCB Distributed Data Broker

    NASA Technical Reports Server (NTRS)

    Talbot, Bryan; Zhou, Shu-Jia; Higgins, Glenn

    2002-01-01

    One of the most significant challenges in large-scale climate modeling, as well as in high-performance computing in other scientific fields, is that of effectively integrating many software models from multiple contributors. A software framework facilitates the integration task, both in the development and runtime stages of the simulation. Effective software frameworks reduce the programming burden for the investigators, freeing them to focus more on the science and less on the parallel communication implementation, while maintaining high performance across numerous supercomputer and workstation architectures. This document proposes a strawman framework design for the climate community based on the integration of Cactus, from the relativistic physics community, and the UCLA/UCB Distributed Data Broker (DDB) from the climate community. This design is the result of an extensive survey of climate models and frameworks in the climate community as well as frameworks from many other scientific communities. The design addresses fundamental development and runtime needs using Cactus, a framework with interfaces for FORTRAN and C-based languages, and high-performance model communication needs using DDB. This document also specifically explores object-oriented design issues in the context of climate modeling, as well as climate modeling issues in terms of object-oriented design.

  11. Accurate reconstruction of the optical parameter distribution in participating medium based on the frequency-domain radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Qiao, Yao-Bin; Qi, Hong; Zhao, Fang-Zhou; Ruan, Li-Ming

    2016-12-01

    Reconstructing the distribution of optical parameters in a participating medium based on the frequency-domain radiative transfer equation (FD-RTE), in order to probe the internal structure of the medium, is investigated in the present work. The forward model of FD-RTE is solved via the finite volume method (FVM). A regularization term formulated with the generalized Gaussian Markov random field model is used in the objective function to overcome the ill-posed nature of the inverse problem. The multi-start conjugate gradient (MCG) method is employed to search for the minimum of the objective function and increase the efficiency of convergence. A modified adjoint differentiation technique using the collimated radiative intensity is developed to calculate the gradient of the objective function with respect to the optical parameters. All simulation results show that the proposed reconstruction algorithm based on FD-RTE can obtain accurate distributions of the absorption and scattering coefficients. The reconstructed images of the scattering coefficient have fewer errors than those of the absorption coefficient, which indicates the former is more suitable for probing the inner structure. Project supported by the National Natural Science Foundation of China (Grant No. 51476043), the Major National Scientific Instruments and Equipment Development Special Foundation of China (Grant No. 51327803), and the Foundation for Innovative Research Groups of the National Natural Science Foundation of China (Grant No. 51121004).
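
    The structure of the inverse problem can be sketched on a toy linear forward model (a Gaussian blur matrix standing in for the FVM solution of FD-RTE, a first-difference smoothness penalty in place of the generalized Gaussian Markov random field term, and plain conjugate gradients in place of MCG):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(8)

    # Toy linear forward model A standing in for the FD-RTE solver.
    n = 60
    A = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 3.0)**2)
    mu_true = np.zeros(n)
    mu_true[20:35] = 1.0  # a box "inclusion" to recover
    data = A @ mu_true + rng.normal(0, 0.05, n)

    lam = 1.0
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]  # first-difference operator

    def objective(mu):
        r = A @ mu - data
        return r @ r + lam * np.sum((D @ mu)**2)  # misfit + regularizer

    def gradient(mu):
        return 2 * A.T @ (A @ mu - data) + 2 * lam * D.T @ (D @ mu)

    res = minimize(objective, np.zeros(n), jac=gradient, method="CG")
    print("relative error: %.2f" % (np.linalg.norm(res.x - mu_true)
                                    / np.linalg.norm(mu_true)))
    ```

    In the real problem the gradient is not available in closed form, which is why the adjoint differentiation technique described above is needed.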

  12. [Evaluating the performance of species distribution models Biomod2 and MaxEnt using the giant panda distribution data].

    PubMed

    Luo, Mei; Wang, Hao; Lyu, Zhi

    2017-12-01

    Species distribution models (SDMs) are widely used by researchers and conservationists. Results of predictions from different models vary significantly, which makes it difficult for users to select a model. In this study, we evaluated the performance of two commonly used SDMs, Biomod2 and Maximum Entropy (MaxEnt), with real presence/absence data of the giant panda, and used three indicators, i.e., area under the ROC curve (AUC), true skill statistic (TSS), and Cohen's Kappa, to evaluate the accuracy of the two models' predictions. The results showed that both models could produce accurate predictions with adequate occurrence inputs and simulation repeats. Compared to MaxEnt, Biomod2 made more accurate predictions, especially when occurrence inputs were few. However, Biomod2 was more difficult to apply, required longer running time, and had less data processing capability. To choose the right model, users should refer to the error requirements of their objectives. MaxEnt should be considered if the error requirement is clear and both models can achieve it; otherwise, we recommend the use of Biomod2 as much as possible.
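
    A minimal sketch of computing the three accuracy indicators on stand-in predictions (TSS is derived from the confusion matrix as sensitivity + specificity - 1, since scikit-learn does not provide it directly):

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, cohen_kappa_score, confusion_matrix

    rng = np.random.default_rng(9)

    # Stand-ins for observed presence/absence and predicted probabilities.
    y_true = rng.integers(0, 2, 500)
    y_prob = np.clip(y_true * 0.6 + rng.uniform(0, 0.6, 500), 0, 1)
    y_pred = (y_prob >= 0.5).astype(int)

    auc = roc_auc_score(y_true, y_prob)
    kappa = cohen_kappa_score(y_true, y_pred)
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    tss = tp / (tp + fn) + tn / (tn + fp) - 1  # sensitivity + specificity - 1
    print(f"AUC={auc:.2f}  TSS={tss:.2f}  Kappa={kappa:.2f}")
    ```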

  13. Modeling of magnitude distributions by the generalized truncated exponential distribution

    NASA Astrophysics Data System (ADS)

    Raschke, Mathias

    2015-01-01

    The probability distribution of the magnitude can be modeled by an exponential distribution according to the Gutenberg-Richter relation. Two alternatives are the truncated exponential distribution (TED) and the cutoff exponential distribution (CED). The TED is frequently used in seismic hazard analysis although it has a weak point: when two TEDs with equal parameters except the upper bound magnitude are mixed, the resulting distribution is not a TED. Inversely, it is also not possible to split a TED of a seismic region into TEDs of subregions with equal parameters except the upper bound magnitude. This weakness is a principal problem, as seismic regions are constructed scientific objects and not natural units. We overcome it by generalizing the abovementioned exponential distributions into the generalized truncated exponential distribution (GTED), in which identical exponential distributions are mixed according to the probability distribution of the cutoff points. This distribution model is flexible in the vicinity of the upper bound magnitude and is equal to the exponential distribution for smaller magnitudes. Additionally, the exponential distributions TED and CED are special cases of the GTED. We discuss possible ways of estimating its parameters and introduce the normalized spacing for this purpose. Furthermore, we present methods for geographic aggregation and differentiation of the GTED and demonstrate the potential and universality of our simple approach by applying it to empirical data. The considerable improvement of the GTED over the TED is indicated by a large difference between the corresponding values of the Akaike information criterion.
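
    In symbols, the distributions involved can be sketched as follows (notation assumed; the mixture form is reconstructed from the verbal definition above, not copied from the paper):

    ```latex
    % TED: exponential law truncated to m_0 \le m \le m_{\max}
    F_{\mathrm{TED}}(m \mid m_{\max}) =
      \frac{1 - e^{-\beta (m - m_0)}}{1 - e^{-\beta (m_{\max} - m_0)}}

    % GTED: identical exponential laws mixed over a distribution G of the
    % cutoff point, which restores closure under mixing:
    F_{\mathrm{GTED}}(m) = \int F_{\mathrm{TED}}(m \mid m_{\max}) \, dG(m_{\max})
    ```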

  14. Verification of road databases using multiple road models

    NASA Astrophysics Data System (ADS)

    Ziems, Marcel; Rottensteiner, Franz; Heipke, Christian

    2017-08-01

    In this paper a new approach for automatic road database verification based on remote sensing images is presented. In contrast to existing methods, the applicability of the new approach is not restricted to specific road types, context areas, or geographic regions. This is achieved by combining several state-of-the-art road detection and road verification approaches that work well under different circumstances. Each one serves as an independent module representing a unique road model and a specific processing strategy. All modules provide independent solutions for the verification problem of each road object stored in the database in the form of two probability distributions: the first for the state of a database object (correct or incorrect), and the second for the state of the underlying road model (applicable or not applicable). In accordance with Dempster-Shafer theory, both distributions are mapped to a new state space comprising the classes correct, incorrect, and unknown. Statistical reasoning is applied to obtain the optimal state of a road object. A comparison with state-of-the-art road detection approaches using benchmark datasets shows that in general the proposed approach provides results with greater completeness. Additional experiments reveal that a highly reliable semi-automatic approach for road database verification can be designed based on the proposed method.
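
    One plausible reading of the per-module mapping, sketched below: a module whose road model is inapplicable pushes its mass to the class unknown, and modules are fused with Dempster's rule of combination. The exact mapping and decision rule used in the paper may differ; this illustrates only the mechanics.

```python
def module_mass(p_correct, p_applicable):
    """Map a module's two distributions onto {correct, incorrect, unknown}.
    Mass not backed by an applicable road model goes to 'unknown'."""
    return {"C": p_applicable * p_correct,
            "I": p_applicable * (1 - p_correct),
            "U": 1 - p_applicable}

def dempster(m1, m2):
    """Dempster's rule of combination on the frame {correct, incorrect},
    with 'U' playing the role of the full frame (total ignorance)."""
    k = m1["C"] * m2["I"] + m1["I"] * m2["C"]            # conflicting mass
    c = m1["C"] * m2["C"] + m1["C"] * m2["U"] + m1["U"] * m2["C"]
    i = m1["I"] * m2["I"] + m1["I"] * m2["U"] + m1["U"] * m2["I"]
    u = m1["U"] * m2["U"]
    return {s: v / (1 - k) for s, v in (("C", c), ("I", i), ("U", u))}

# Two road-model modules voting on one database object:
fused = dempster(module_mass(0.9, 0.8), module_mass(0.4, 0.3))
state = max(fused, key=fused.get)                        # optimal object state
print(fused, state)
```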

  15. Model documentation Natural Gas Transmission and Distribution Model of the National Energy Modeling System. Volume 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1996-02-26

    The Natural Gas Transmission and Distribution Model (NGTDM) of the National Energy Modeling System is developed and maintained by the Energy Information Administration (EIA), Office of Integrated Analysis and Forecasting. This report documents the archived version of the NGTDM that was used to produce the natural gas forecasts presented in the Annual Energy Outlook 1996 (DOE/EIA-0383(96)). The purpose of this report is to provide a reference document for model analysts, users, and the public that defines the objectives of the model, describes its basic approach, and provides detail on the methodology employed. Previously this report represented Volume I of a two-volume set. Volume II reported on model performance, detailing convergence criteria and properties, results of sensitivity testing, comparison of model outputs with the literature and/or other model results, and major unresolved issues.

  16. Development of Automated Objective Meteorological Techniques.

    DTIC Science & Technology

    1980-11-30

    differences are due largely to the nature and spatial distribution of the atmospheric data chosen as input for the model. The data for initial values and...technique. This report focuses on results of theoretical investigations and data analyses performed by SASC during the period May 1979 to June 1980...the sampling period, at a given point in space, the various size particles composing the particle distribution exhibit different velocities from each

  17. The response of vegetation distribution, ecosystem productivity, and fire in California to future climate scenarios simulated by the MC1 dynamic vegetation model.

    Treesearch

    James M. Lenihan; Dominique Bachelet; Raymond Drapek; Ronald P. Neilson

    2006-01-01

    The objective of this study was to dynamically simulate the response of vegetation distribution, carbon, and fire to three scenarios of future climate change for California using the MAPSS-CENTURY (MC1) dynamic general vegetation model. Under all three scenarios, Alpine/Subalpine Forest cover declined with increased growing season length and warmth, and increases in...

  18. Influence of climatic conditions and elevation on the spatial distribution and abundance of Trypodendron ambrosia beetles (Coleoptera: Curculionidae: Scolytinae) in Alaska

    Treesearch

    Robin M. Reich; John E. Lundquist; Robert E. Acciavati

    2014-01-01

    The objective of this study was to model the influence of temperature and precipitation on the distribution and abundance of the ambrosia beetles in the genus Trypodendron. Although these beetles do not attack and kill healthy trees, their gallery holes and accompanying black and gray stain associated with symbiotic ambrosial fungi can cause significant economic losses...

  19. Statistical Issues for Uncontrolled Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2008-01-01

    A number of statistical tools have been developed over the years for assessing the risk of reentering objects to human populations. These tools make use of the characteristics (e.g., mass, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. This inevitably involves assumptions that simplify the problem and make it tractable, but it is often difficult to test the accuracy and applicability of these assumptions. This paper looks at a number of these theoretical assumptions, examining the mathematical basis for the hazard calculations, and outlining the conditions under which the simplifying assumptions hold. In addition, this paper will also outline some new tools for assessing ground hazard risk in useful ways. Also, this study is able to make use of a database of known uncontrolled reentry locations measured by the United States Department of Defense. By using data from objects that were in orbit more than 30 days before reentry, sufficient time is allowed for the orbital parameters to be randomized in the way the models are designed to compute. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and change the ground footprints. The measured latitude and longitude distributions of these objects provide data that can be directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
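
    For a circular orbit with randomized node and phase, the sub-satellite latitude density has a standard closed form that piles probability up near the plus/minus inclination bands; this is exactly the kind of prediction the measured reentry locations can test. A small sketch (the inclination value is just an example):

```python
import numpy as np

def reentry_latitude_pdf(lat_deg, inc_deg):
    """Latitude density for a randomized circular Kepler orbit:
    p(phi) = cos(phi) / (pi * sqrt(sin(i)**2 - sin(phi)**2)) for |phi| < i."""
    phi, inc = np.radians(lat_deg), np.radians(inc_deg)
    s2 = np.sin(inc) ** 2 - np.sin(phi) ** 2
    return np.where(s2 > 0,
                    np.cos(phi) / (np.pi * np.sqrt(np.maximum(s2, 1e-12))),
                    0.0)

# The density diverges (integrably) toward the inclination band edges:
lats = np.linspace(-50, 50, 11)
print(np.round(reentry_latitude_pdf(lats, 51.6), 3))  # ISS-like inclination
```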

  20. Interactions of changing climate and shifts in forest composition on stand carbon balance

    Treesearch

    Chiang Jyh-Min; Louis Iverson; Anantha Prasad; Kim Brown

    2006-01-01

    Given that climate influences forest biogeographic distribution, many researchers have created models predicting shifts in tree species range with future climate change scenarios. The objective of this study is to investigate the forest carbon consequences of shifts in stand species composition with current and future climate scenarios using such a model.

  1. Why inputs matter: Selection of climatic variables for species distribution modelling in the Himalayan region

    NASA Astrophysics Data System (ADS)

    Bobrowski, Maria; Schickhoff, Udo

    2017-04-01

    Betula utilis is a major constituent of alpine treeline ecotones in the western and central Himalayan region. The objective of this study is to provide a first analysis of the potential distribution of Betula utilis in the subalpine and alpine belts of the Himalayan region using species distribution modelling. Using Generalized Linear Models (GLM) we aim to examine the climatic factors controlling the species distribution under current climate conditions. Furthermore, we evaluate the prediction ability of climate data derived from different statistical methods. GLMs were created using least correlated bioclimatic variables derived from two different climate models: 1) interpolated climate data (i.e. Worldclim, Hijmans et al., 2005) and 2) quasi-mechanistic statistical downscaling (i.e. Chelsa; Karger et al., 2016). Model accuracy was evaluated by the ability to predict the potential species distribution range. We found that models based on variables of the Chelsa climate data had higher predictive power, whereas models using Worldclim climate data consistently overpredicted the potential suitable habitat for Betula utilis. Although climatic variables of Worldclim are widely used in modelling species distributions, our results suggest treating them with caution when remote regions like the Himalayan mountains are in focus. Unmindful use of climatic variables in species distribution models can produce misleading projections and may lead to wrong implications and recommendations for nature conservation. References: Hijmans, R.J., Cameron, S.E., Parra, J.L., Jones, P.G. & Jarvis, A. (2005) Very high resolution interpolated climate surfaces for global land areas. International Journal of Climatology, 25, 1965-1978. Karger, D.N., Conrad, O., Böhner, J., Kawohl, T., Kreft, H., Soria-Auza, R.W., Zimmermann, N., Linder, H.P. & Kessler, M. (2016) Climatologies at high resolution for the earth land surface areas. arXiv:1607.00217 [physics].
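
    A GLM for presence/absence data of this kind is a binomial regression with a logit link; a minimal sketch with synthetic data (the two predictors and all coefficients are invented stand-ins for the least-correlated bioclimatic variables):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic stand-ins for two least-correlated bioclimatic predictors.
n = 500
X = rng.normal(size=(n, 2))
logit = -0.5 + 1.8 * X[:, 0] - 1.1 * X[:, 1]          # assumed true effects
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))     # presence/absence

# Binomial GLM with a logit link, the standard SDM formulation.
model = sm.GLM(y, sm.add_constant(X), family=sm.families.Binomial()).fit()
print(model.params)

# Predicted habitat suitability for new climate conditions:
suitability = model.predict(sm.add_constant(rng.normal(size=(5, 2))))
print(np.round(suitability, 3))
```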

  2. The embedded young stars in the Taurus-Auriga molecular cloud. I - Models for spectral energy distributions

    NASA Technical Reports Server (NTRS)

    Kenyon, Scott J.; Calvet, Nuria; Hartmann, Lee

    1993-01-01

    We describe radiative transfer calculations of infalling, dusty envelopes surrounding pre-main-sequence stars and use these models to derive physical properties for a sample of 21 heavily reddened young stars in the Taurus-Auriga molecular cloud. The density distributions needed to match the FIR peaks in the spectral energy distributions of these embedded sources suggest mass infall rates similar to those predicted for simple thermally supported clouds with temperatures of about 10 K. Unless the dust opacities are badly in error, our models require substantial departures from spherical symmetry in the envelopes of all sources. These flattened envelopes may be produced by a combination of rotation and cavities excavated by bipolar flows. The rotating infall models of Terebey et al. (1984) indicate a centrifugal radius of about 70 AU for many objects if rotation is the only important physical effect, and this radius is reasonably consistent with typical estimates for the sizes of circumstellar disks around T Tauri stars.

  3. Management of Listeria monocytogenes in fermented sausages using the Food Safety Objective concept underpinned by stochastic modeling and meta-analysis.

    PubMed

    Mataragas, M; Alessandria, V; Rantsiou, K; Cocolin, L

    2015-08-01

    In the present work, we demonstrate how the risk from the presence of Listeria monocytogenes in fermented sausages can be managed using the concept of the Food Safety Objective (FSO), aided by stochastic modeling (Bayesian analysis and Monte Carlo simulation) and meta-analysis. For this purpose, the ICMSF equation was used, which combines the initial level (H0) of the hazard with its subsequent reduction (ΣR) and/or increase (ΣI) along the production chain. Each element of the equation was described by a distribution to investigate the effect not only of the level of the hazard, but also of the accompanying variability. The distribution of each element was determined by Bayesian modeling (H0) and meta-analysis (ΣR and ΣI). The output was a normal distribution N(-5.36, 2.56) (log cfu/g), from which the percentage of non-conforming products, i.e. the fraction above the FSO of 2 log cfu/g, was estimated at 0.202%. Different control measures were examined, such as lowering the initial L. monocytogenes level and including an additional killing step along the process, resulting in a reduction of the non-conforming products from 0.195% to 0.003% based on the mean and/or square-root change of the normal distribution, and to 0.001%, respectively. Copyright © 2015 Elsevier Ltd. All rights reserved.
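
    The ICMSF outcome equation, H0 - ΣR + ΣI ≤ FSO, lends itself directly to Monte Carlo evaluation: sample each element from its distribution and count the fraction of products exceeding the FSO. The distribution parameters below are invented for illustration and are not the paper's fitted values.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000
fso = 2.0                        # FSO: 2 log cfu/g at the moment of consumption

# Illustrative distributions (log cfu/g); the paper derives H0 by Bayesian
# modeling and the R and I terms by meta-analysis -- these are assumptions.
h0 = rng.normal(-2.0, 1.0, n)    # initial contamination level
r = rng.normal(4.0, 1.5, n)      # total reduction along the chain
i = rng.normal(0.6, 0.5, n)      # total increase (growth/recontamination)

final = h0 - r + i               # ICMSF equation: H0 - sum(R) + sum(I)
print(f"non-conforming fraction: {np.mean(final > fso):.3%}")
print(f"final level ~ N({final.mean():.2f}, {final.std():.2f})")
```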

  4. IRAS observations of the Rho Ophiuchi infrared cluster - Spectral energy distributions and luminosity function

    NASA Technical Reports Server (NTRS)

    Wilking, Bruce A.; Lada, Charles J.; Young, Eric T.

    1989-01-01

    High-sensitivity IRAS coadded survey data, coupled with new high-sensitivity near-IR observations, are used to investigate the nature of embedded objects over a 4.3-sq-pc area comprising the central star-forming cloud of the Ophiuchi molecular complex; the area encompasses the central cloud of the Rho Ophiuchi complex and includes the core region. Seventy-eight members of the embedded cluster were identified; spectral energy distributions were constructed for 53 objects and were compared with theoretical models to gain insight into their evolutionary status. Bolometric luminosities could be estimated for nearly all of the association members, leading to a revised luminosity function for this dust-embedded cluster.

  5. Dust temperature distributions in star-forming condensations

    NASA Technical Reports Server (NTRS)

    Xie, Taoling; Goldsmith, Paul F.; Snell, Ronald L.; Zhou, Weimin

    1993-01-01

    The FIR spectra of the central IR condensations in the dense cores of the molecular clouds AFGL 2591, B335, L1551, Mon R2, and Sgr B2 are reanalyzed here in terms of the distribution of dust mass as a function of temperature. The FIR spectra of these objects can be characterized reasonably well by a given functional form. The general shapes of the dust temperature distributions of these objects are similar and closely resemble the theoretical computations of de Muizon and Rouan (1985) for a sample of 'hot centered' clouds with active star formation. Specifically, the model yields a 'cutoff' temperature below which essentially no dust is needed to interpret the dust emission spectra, and most of the dust mass is distributed in a broad temperature range of a few tens of degrees above the cutoff temperature. Mass, luminosity, average temperature, and column density are obtained, and it is found that these physical quantities differ considerably from source to source in a meaningful way.

  6. Chance-Constrained Day-Ahead Hourly Scheduling in Distribution System Operation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Huaiguang; Zhang, Yingchen; Muljadi, Eduard

    This paper proposes a two-step approach for day-ahead hourly scheduling in distribution system operation, which accounts for two operation costs: the operation cost at the substation level and at the feeder level. In the first step, the objective is to minimize the electric power purchased from the day-ahead market using stochastic optimization. Historical data of day-ahead hourly electric power consumption are used to provide forecast results with a forecasting error, which is represented by a chance constraint and converted into a deterministic form using a Gaussian mixture model (GMM). In the second step, the objective is to minimize the system loss. Considering the nonconvexity of the three-phase balanced AC optimal power flow problem in distribution systems, a second-order cone program (SOCP) is used to relax the problem. Then, a distributed optimization approach is built based on the alternating direction method of multipliers (ADMM). The results show the validity and effectiveness of the method.
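
    The chance-constraint conversion in the first step has a simple closed form when the forecast error is Gaussian; with a GMM, the same quantile is read off the mixture CDF numerically. A minimal sketch with assumed load statistics:

```python
from scipy.stats import norm

def deterministic_purchase(mu_load, sigma_load, epsilon=0.05):
    """Convert P(purchase >= load) >= 1 - epsilon into a deterministic bound:
    for a Gaussian forecast error, buy mu + sigma * Phi^{-1}(1 - epsilon).
    With a GMM the quantile is found by inverting the mixture CDF instead."""
    return mu_load + sigma_load * norm.ppf(1 - epsilon)

# Assumed hourly forecast: mean 100 MW, standard deviation 8 MW.
print(deterministic_purchase(100.0, 8.0))   # ~113.2 MW covers 95% of outcomes
```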

  7. Object Segmentation Methods for Online Model Acquisition to Guide Robotic Grasping

    NASA Astrophysics Data System (ADS)

    Ignakov, Dmitri

    A vision system is an integral component of many autonomous robots. It enables the robot to perform essential tasks such as mapping, localization, or path planning. A vision system also assists with guiding the robot's grasping and manipulation tasks. As an increased demand is placed on service robots to operate in uncontrolled environments, advanced vision systems must be created that can function effectively in visually complex and cluttered settings. This thesis presents the development of segmentation algorithms to assist in online model acquisition for guiding robotic manipulation tasks. Specifically, the focus is placed on localizing door handles to assist in robotic door opening, and on acquiring partial object models to guide robotic grasping. First, a method for localizing a door handle of unknown geometry based on a proposed 3D segmentation method is presented. Following segmentation, localization is performed by fitting a simple box model to the segmented handle. The proposed method functions without requiring assumptions about the appearance of the handle or the door, and without a geometric model of the handle. Next, an object segmentation algorithm is developed, which combines multiple appearance (intensity and texture) and geometric (depth and curvature) cues. The algorithm is able to segment objects without utilizing any a priori appearance or geometric information in visually complex and cluttered environments. The segmentation method is based on the Conditional Random Fields (CRF) framework, and the graph cuts energy minimization technique. A simple and efficient method for initializing the proposed algorithm which overcomes graph cuts' reliance on user interaction is also developed. Finally, an improved segmentation algorithm is developed which incorporates a distance metric learning (DML) step as a means of weighing various appearance and geometric segmentation cues, allowing the method to better adapt to the available data. The improved method also models the distribution of 3D points in space as a distribution of algebraic distances from an ellipsoid fitted to the object, improving the method's ability to predict which points are likely to belong to the object or the background. Experimental validation of all methods is performed. Each method is evaluated in a realistic setting, utilizing scenarios of various complexities. Experimental results have demonstrated the effectiveness of the handle localization method, and the object segmentation methods.

  8. On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience.

    PubMed

    Bowers, Jeffrey S

    2009-01-01

    A fundamental claim associated with parallel distributed processing (PDP) theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts (e.g., "dog") each coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed representations learned in connectionist models and neural coding in the brain, and often dismiss localist (grandmother cell) theories as biologically implausible. The author reviews a range of data that strongly challenge this claim and shows that localist models provide a better account of single-cell recording studies. The author also contrasts local and alternative distributed coding schemes (sparse and coarse coding) and argues that the common rejection of grandmother cell theories in neuroscience is due to a misunderstanding about how localist models behave. The author concludes that the localist representations embedded in theories of perception and cognition are consistent with neuroscience; biology only calls into question the distributed representations often learned in PDP models.

  9. New 3D thermal evolution model for icy bodies application to trans-Neptunian objects

    NASA Astrophysics Data System (ADS)

    Guilbert-Lepoutre, A.; Lasue, J.; Federico, C.; Coradini, A.; Orosei, R.; Rosenberg, E. D.

    2011-05-01

    Context. Thermal evolution models have been developed over the years to investigate the evolution of thermal properties based on the transfer of heat fluxes or transport of gas through a porous matrix, among others. Applications of such models to trans-Neptunian objects (TNOs) and Centaurs have shown that these bodies could be strongly differentiated from the point of view of chemistry (i.e., loss of the most volatile ices) as well as of physics (e.g., melting of water ice), resulting in stratified internal structures with differentiated cores and potential pristine material close to the surface. In this context, some observational results, such as the detection of crystalline water ice or volatiles, remain puzzling. Aims: In this paper, we would like to present a new fully three-dimensional thermal evolution model. With this model, we aim to improve determination of the temperature distribution inside icy bodies such as TNOs by accounting for lateral heat fluxes, which have been proven to be important for accurate simulations. We would also like to account for heterogeneous boundary conditions at the surface, for example through varying albedo properties, which might induce different local temperature distributions. Methods: In a departure from published modeling approaches, the heat diffusion problem and its boundary conditions are represented in terms of real spherical harmonics, increasing the numerical efficiency by roughly an order of magnitude. We then compare this new model with another recently published 3D model to illustrate the advantages and limits of the new model. We try to put some constraints on the presence of crystalline water ice at the surface of TNOs. Results: The results obtained with this new model are in excellent agreement with results obtained by different groups with various models. Small TNOs could remain primitive unless they are formed quickly (less than 2 Myr) or are debris from the disruption of larger bodies. We find that, for large objects with a thermal evolution dominated by the decay of long-lived isotopes (objects with a formation period greater than 2 to 3 Myr), the presence of crystalline water ice would require both a large radius (>300 km) and high density (>1500 kg m-3). In particular, objects with intermediate radii and densities would be an interesting transitory population deserving a detailed study of individual fates.

  10. Debiased estimates for NEO orbits, absolute magnitudes, and source regions

    NASA Astrophysics Data System (ADS)

    Granvik, Mikael; Morbidelli, Alessandro; Jedicke, Robert; Bolin, Bryce T.; Bottke, William; Beshore, Edward C.; Vokrouhlicky, David; Nesvorny, David; Michel, Patrick

    2017-10-01

    The debiased absolute-magnitude and orbit distributions as well as source regions for near-Earth objects (NEOs) provide a fundamental frame of reference for studies on individual NEOs as well as on more complex population-level questions. We present a new four-dimensional model of the NEO population that describes debiased steady-state distributions of semimajor axis (a), eccentricity (e), inclination (i), and absolute magnitude (H). We calibrate the model using NEO detections by the 703 and G96 stations of the Catalina Sky Survey (CSS) during 2005-2012 corresponding to objects with 17

  11. Addressing population heterogeneity and distribution in epidemics models using a cellular automata approach

    PubMed Central

    2014-01-01

    Background The spread of an infectious disease is determined by biological and social factors. Models based on cellular automata are adequate for describing such natural systems consisting of a massive collection of simple interacting objects. They characterize the time evolution of the global system as the emergent behaviour resulting from the interaction of the objects, whose behaviour is defined through a set of simple rules that encode the individual behaviour and the transmission dynamics. Methods An epidemic is characterized through an individual-based model built upon cellular automata. In the proposed model, each individual of the population is represented by a cell of the automaton. This way of modeling an epidemic situation makes it possible to define the characteristics of each individual, establish different scenarios, and implement control strategies. Results A cellular automata model was proposed to study the time evolution of heterogeneous populations through the various stages of disease, allowing the inclusion of individual heterogeneity, geographical characteristics, and social factors that determine the dynamics of the disease. Different assumptions made to build the classical model were evaluated, leading to the following results: i) for a low contact rate (as in a quarantine process or low-density population areas) the number of infective individuals is lower than in areas where the contact rate is higher, and ii) different initial spatial distributions of infected individuals yield different epidemic dynamics, owing to their influence on the transition rate and the reproductive ratio of the disease. Conclusions The contact rate and the spatial distributions have a central role in the spread of a disease. For low-density populations the spread is very slow and the number of infected individuals is lower than in highly populated areas. The spatial distribution of the population and of the disease focus, as well as the geographical characteristics of the area, play a central role in the dynamics of the disease. PMID:24725804

  12. Addressing population heterogeneity and distribution in epidemics models using a cellular automata approach.

    PubMed

    López, Leonardo; Burguerner, Germán; Giovanini, Leonardo

    2014-04-12

    The spread of an infectious disease is determined by biological and social factors. Models based on cellular automata are adequate for describing such natural systems consisting of a massive collection of simple interacting objects. They characterize the time evolution of the global system as the emergent behaviour resulting from the interaction of the objects, whose behaviour is defined through a set of simple rules that encode the individual behaviour and the transmission dynamics. An epidemic is characterized through an individual-based model built upon cellular automata. In the proposed model, each individual of the population is represented by a cell of the automaton. This way of modeling an epidemic situation makes it possible to define the characteristics of each individual, establish different scenarios, and implement control strategies. A cellular automata model was proposed to study the time evolution of heterogeneous populations through the various stages of disease, allowing the inclusion of individual heterogeneity, geographical characteristics, and social factors that determine the dynamics of the disease. Different assumptions made to build the classical model were evaluated, leading to the following results: i) for a low contact rate (as in a quarantine process or low-density population areas) the number of infective individuals is lower than in areas where the contact rate is higher, and ii) different initial spatial distributions of infected individuals yield different epidemic dynamics, owing to their influence on the transition rate and the reproductive ratio of the disease. The contact rate and the spatial distributions have a central role in the spread of a disease. For low-density populations the spread is very slow and the number of infected individuals is lower than in highly populated areas. The spatial distribution of the population and of the disease focus, as well as the geographical characteristics of the area, play a central role in the dynamics of the disease.
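
    A minimal SIR-style cellular automaton capturing both findings: infection spreads by local contact (so the infection probability plays the role of the contact rate), and the outcome depends on where the initial focus is seeded. Grid size, probabilities, and the wrap-around 4-neighbourhood are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
S, I, R = 0, 1, 2
grid = np.zeros((100, 100), dtype=int)
grid[50, 50] = I                 # initial focus; moving it changes the dynamics

def step(grid, p_infect=0.3, p_recover=0.1):
    """One CA update: susceptible cells are exposed by their 4-neighbours
    (np.roll wraps at the edges, i.e. a toroidal grid)."""
    infected = grid == I
    exposed = (np.roll(infected, 1, 0) | np.roll(infected, -1, 0) |
               np.roll(infected, 1, 1) | np.roll(infected, -1, 1))
    new = grid.copy()
    catch = (grid == S) & exposed & (rng.random(grid.shape) < p_infect)
    heal = infected & (rng.random(grid.shape) < p_recover)
    new[catch] = I
    new[heal] = R
    return new

for _ in range(200):
    grid = step(grid)
print("fraction ever infected:", np.mean(grid != S))
```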

  13. Estimation of the minimum permeability coefficient in rats for perfusion-limited tissue distribution in whole-body physiologically-based pharmacokinetics.

    PubMed

    Jeong, Yoo-Seong; Yim, Chang-Soon; Ryu, Heon-Min; Noh, Chi-Kyoung; Song, Yoo-Kyung; Chung, Suk-Jae

    2017-06-01

    The objective of the current study was to determine the minimum permeability coefficient, P, needed for perfusion-limited distribution in PBPK. Two expanded kinetic models, containing both permeability and perfusion terms for the rate of tissue distribution, were considered; the resulting equations could be simplified to perfusion-limited distribution depending on tissue permeability. Integration plot analyses were carried out with theophylline in 11 typical tissues to determine their apparent distributional clearances and the model-dependent permeabilities of the tissues. Effective surface areas were calculated for the 11 tissues from the tissue permeabilities of theophylline and its PAMPA P. Tissue permeabilities of other drugs were then estimated from their PAMPA P and the effective surface area of the tissues. The differences between the observed and predicted concentrations, as expressed by the sum of squared log differences with the present models, were at least comparable to or less than the values obtained using the traditional perfusion-limited distribution model for 24 compounds with diverse PAMPA P values. These observations suggest that a combination of the proposed models, PAMPA P, and the effective surface area can reasonably predict the pharmacokinetics of 22 of the 24 model compounds, and is potentially applicable to calculating the kinetics of other drugs. Assuming that a fractional distribution parameter of 80% of the perfusion rate is a reasonable threshold for perfusion-limited distribution in PBPK, our theoretical prediction indicates that the pharmacokinetics of drugs having an apparent PAMPA P of 1×10^-6 cm/s or more will follow the traditional perfusion-limited distribution in PBPK for the major tissues of the body. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. The Generation of the Distant Kuiper Belt by Planet Nine from an Initially Broad Perihelion Distribution

    NASA Astrophysics Data System (ADS)

    Khain, Tali; Batygin, Konstantin; Brown, Michael E.

    2018-06-01

    The observation that the orbits of long-period Kuiper Belt objects (KBOs) are anomalously clustered in physical space has recently prompted the Planet Nine hypothesis—the proposed existence of a distant and eccentric planetary member of our solar system. Within the framework of this model, a Neptune-like perturber sculpts the orbital distribution of distant KBOs through a complex interplay of resonant and secular effects, such that in addition to perihelion-circulating objects, the surviving orbits get organized into apsidally aligned and anti-aligned configurations with respect to Planet Nine’s orbit. In this work, we investigate the role of Kuiper Belt initial conditions on the evolution of the outer solar system using numerical simulations. Intriguingly, we find that the final perihelion distance distribution depends strongly on the primordial state of the system, and we demonstrate that a bimodal structure corresponding to the existence of both aligned and anti-aligned clusters is only reproduced if the initial perihelion distribution is assumed to extend well beyond ∼36 au. The bimodality in the final perihelion distance distribution is due to the existence of permanently stable objects, with the lower perihelion peak corresponding to the anti-aligned orbits and the higher perihelion peak corresponding to the aligned orbits. We identify the mechanisms that enable the persistent stability of these objects and locate the regions of phase space in which they reside. The obtained results contextualize the Planet Nine hypothesis within the broader narrative of solar system formation and offer further insight into the observational search for Planet Nine.

  15. Coding of visual object features and feature conjunctions in the human brain.

    PubMed

    Martinovic, Jasna; Gruber, Thomas; Müller, Matthias M

    2008-01-01

    Object recognition is achieved through neural mechanisms reliant on the activity of distributed coordinated neural assemblies. In the initial steps of this process, an object's features are thought to be coded very rapidly in distinct neural assemblies. These features play different functional roles in the recognition process--while colour facilitates recognition, additional contours and edges delay it. Here, we selectively varied the amount and role of object features in an entry-level categorization paradigm and related them to the electrical activity of the human brain. We found that early synchronizations (approx. 100 ms) increased quantitatively when more image features had to be coded, without reflecting their qualitative contribution to the recognition process. Later activity (approx. 200-400 ms) was modulated by the representational role of object features. These findings demonstrate that although early synchronizations may be sufficient for relatively crude discrimination of objects in visual scenes, they cannot support entry-level categorization. This was subserved by later processes of object model selection, which utilized the representational value of object features such as colour or edges to select the appropriate model and achieve identification.

  16. Modeling Bose-Einstein correlations via elementary emitting cells

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Utyuzh, Oleg; Wilk, Grzegorz; Wlodarczyk, Zbigniew

    2007-04-01

    We propose a method for the numerical modeling of Bose-Einstein correlations using the notion of the elementary emitting cell (EEC). EECs are intermediary objects containing identical bosons and are assumed to be produced independently during the hadronization process. Only bosons within an EEC, which represents a single quantum state here, are subjected to the effects of Bose-Einstein (BE) statistics, which forces them to follow a geometrical distribution. There are no such effects between particles from different EECs. We illustrate our proposition by calculating a representative number of typical distributions and discussing their sensitivity to the EECs and their characteristics.
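
    The core mechanism is easy to simulate: each EEC holds a geometrically distributed number of identical bosons (the single-state Bose-Einstein occupancy), and cells are filled independently until the event's multiplicity is exhausted. The occupancy parameter and the multiplicity below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)
p = 0.6   # assumed BE parameter; cell occupancy P(n) = (1 - p) * p**(n - 1)

def make_event(multiplicity):
    """Split an event's identical bosons into elementary emitting cells.
    Each EEC holds a geometrically distributed number of bosons (one quantum
    state under BE statistics); different EECs are independent."""
    cells = []
    while multiplicity > 0:
        n = min(int(rng.geometric(1 - p)), multiplicity)   # occupancy >= 1
        cells.append(n)
        multiplicity -= n
    return cells

sizes = np.concatenate([make_event(30) for _ in range(10_000)])
print("mean EEC occupancy:", sizes.mean())
```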

  17. Evaluation of performance of distributed delay model for chemotherapy-induced myelosuppression.

    PubMed

    Krzyzanski, Wojciech; Hu, Shuhua; Dunlavey, Michael

    2018-04-01

    A distributed delay model has been introduced that replaces the transit compartments in the classic model of chemotherapy-induced myelosuppression with a convolution integral. The maturation of granulocyte precursors in the bone marrow is described by the gamma probability density function with shape parameter ν. If ν is a positive integer, the distributed delay model coincides with the classic model with ν transit compartments. The purpose of this work was to evaluate the performance of the distributed delay model with particular focus on deterministic model identifiability in the presence of the shape parameter. The classic model served as a reference for comparison. Previously published white blood cell (WBC) count data in rats receiving bolus doses of 5-fluorouracil were fitted by both models. The negative two log-likelihood objective function (-2LL) and running times were used as the major markers of performance. A local sensitivity analysis was done to evaluate the impact of ν on the pharmacodynamic response (WBC). The ν estimate was 1.46 with 16.1 CV%, compared to ν = 3 for the classic model. The difference of 6.78 in -2LL between the classic model and the distributed delay model implied that the latter performed significantly better than the former according to the log-likelihood ratio test (P = 0.009), although the overall performance was only modestly better. The running times were 1 s and 66.2 min, respectively. The long running time of the distributed delay model was attributed to the computationally intensive evaluation of the convolution integral. The sensitivity analysis revealed that ν strongly influences the WBC response by controlling cell proliferation and the elimination of WBCs from the circulation. In conclusion, the distributed delay model was deterministically identifiable from typical cytotoxic data. Its performance was modestly better than that of the classic model, at the cost of a significantly longer running time.
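
    The convolution that replaces the transit chain can be sketched directly: convolve a proliferation-rate signal with a gamma maturation-time density whose shape is ν (1.46, the paper's estimate) and whose mean transit time is assumed here. With integer ν this reduces to the classic compartment chain.

```python
import numpy as np
from scipy.stats import gamma

dt = 0.05                                   # days
t = np.arange(0, 40, dt)
nu, mtt = 1.46, 5.0                         # shape (paper's estimate) and an
                                            # assumed mean maturation time [d]

kernel = gamma.pdf(t, a=nu, scale=mtt / nu) # gamma delay density, mean = mtt

# Toy proliferation input: baseline production suppressed by a drug pulse.
prolif = np.ones_like(t)
prolif[(t > 2.0) & (t < 4.0)] = 0.2

# Distributed delay: the influx of circulating cells is the convolution of
# the proliferation rate with the maturation-time density.
influx = np.convolve(prolif, kernel)[:t.size] * dt
print(np.round(influx[::160], 3))
```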

  18. Comparison of 3 Symptom Classification Methods to Standardize the History Component of the HEART Score.

    PubMed

    Marchick, Michael R; Setteducato, Michael L; Revenis, Jesse J; Robinson, Matthew A; Weeks, Emily C; Payton, Thomas F; Winchester, David E; Allen, Brandon R

    2017-09-01

    The History, Electrocardiography, Age, Risk factors, Troponin (HEART) score enables rapid risk stratification of emergency department patients presenting with chest pain. However, the subjectivity introduced into the scoring by the history component has been criticized by some clinicians. We examined the association of three objective scoring models with the results of noninvasive cardiac testing. Medical records for all patients evaluated in the chest pain center of an academic medical center during a 1-year period were reviewed retrospectively. Each patient's history component score was calculated using three models developed by the authors. Differences in the distribution of HEART scores for each model, the models' agreement with one another, and the results of cardiac testing were analyzed. Seven hundred forty-nine patients were studied, 58 of whom had an abnormal stress test or computed tomography coronary angiography. The mean HEART scores for models 1, 2, and 3 were 2.97 (SD 1.17), 2.57 (SD 1.25), and 3.30 (SD 1.35), respectively, and were significantly different (P < 0.001). However, for each model, the likelihood of an abnormal cardiovascular test did not correlate with higher scores on the symptom component of the HEART score (P = 0.09, 0.41, and 0.86, respectively). While the objective scoring models produced different distributions of HEART scores, no model performed well with regard to identifying patients with abnormal advanced cardiac studies in this relatively low-risk cohort. Further studies in a broader cohort of patients, as well as comparison with the performance of subjective history scoring, are warranted before any of these objective models is adopted.

  19. Colour computer-generated holography for point clouds utilizing the Phong illumination model.

    PubMed

    Symeonidou, Athanasia; Blinder, David; Schelkens, Peter

    2018-04-16

    A technique integrating the bidirectional reflectance distribution function (BRDF) is proposed to generate realistic high-quality colour computer-generated holograms (CGHs). We build on prior work, namely a fast computer-generated holography method for point clouds that handles occlusions. We extend the method by integrating the Phong illumination model so that the properties of the objects' surfaces are taken into account to achieve natural light phenomena such as reflections and shadows. Our experiments show that rendering holograms with the proposed algorithm provides realistic looking objects without any noteworthy increase to the computational cost.
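
    The Phong model combines ambient, diffuse, and specular terms per surface point; in a point-cloud CGH pipeline such per-point intensities can weight each point's contribution before the (separate) hologram propagation step. A minimal sketch; the geometry and material coefficients are arbitrary examples.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong(point, normal, eye, light, ka=0.1, kd=0.7, ks=0.4, shininess=32):
    """Phong illumination: I = ka + kd*(N.L) + ks*(R.V)**shininess."""
    n = normalize(normal)
    l = normalize(light - point)               # direction to the light source
    v = normalize(eye - point)                 # direction to the viewer
    r = normalize(2.0 * np.dot(n, l) * n - l)  # reflection of l about n
    diffuse = kd * max(np.dot(n, l), 0.0)
    specular = ks * max(np.dot(r, v), 0.0) ** shininess
    return ka + diffuse + specular

intensity = phong(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                  eye=np.array([0.0, 0.0, 5.0]),
                  light=np.array([2.0, 2.0, 4.0]))
print(round(intensity, 4))
```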

  20. Optimization based on benefit of regional energy suppliers of distributed generation in active distribution network

    NASA Astrophysics Data System (ADS)

    Huo, Xianxu; Li, Guodong; Jiang, Ling; Wang, Xudong

    2017-08-01

    With the development of the electricity market, distributed generation (DG) technology, and related policies, regional energy suppliers are encouraged to build DG. Against this background, the concept of the active distribution network (ADN) has been put forward. In this paper, a bi-level model of intermittent DG considering the benefit of regional energy suppliers is proposed. The objective of the upper level is the maximization of the benefit of regional energy suppliers. On this basis, the lower level is optimized for each scene. The uncertainties of DG output and user load are considered, as well as four active management measures: demand-side management, curtailing the output power of DG, regulating the reactive power compensation capacity, and regulating the on-load tap changer. The harmony search algorithm and particle swarm optimization are combined as a hybrid strategy to solve the model. The model and strategy are tested on the IEEE-33 node system, and the results of the case study indicate that they successfully increase the capacity of DG and the benefit of regional energy suppliers.

  1. The spectral energy distributions of isolated neutron stars in the resonant cyclotron scattering model

    NASA Astrophysics Data System (ADS)

    Tong, Hao; Xu, Renxin

    2013-03-01

    The X-ray dim isolated neutron stars (XDINSs) are peculiar pulsar-like objects, characterized by their nearly perfect Planck-like spectra. In studying their spectral energy distributions, the optical/UV excess is a long-standing problem. Recently, Kaplan et al. (2011) measured the optical/UV excess for all seven sources, which is understandable in the resonant cyclotron scattering (RCS) model previously addressed. The RCS model calculations show that the RCS process can account for the observed optical/UV excess for most sources. The flat spectrum of RX J2143.0+0654 may be due to a contribution from bremsstrahlung emission of the electron system in addition to the RCS process.

  2. Adaptive particle filter for robust visual tracking

    NASA Astrophysics Data System (ADS)

    Dai, Jianghua; Yu, Shengsheng; Sun, Weiping; Chen, Xiaoping; Xiang, Jinhai

    2009-10-01

    Object tracking plays a key role in the field of computer vision. The particle filter has been widely used for visual tracking under nonlinear and/or non-Gaussian circumstances. In the standard particle filter, the state transition model for predicting the next location of the tracked object assumes the object motion is invariable, which cannot approximate the varying dynamics of motion changes well. In addition, the state estimate calculated as the mean of all the weighted particles is coarse or inaccurate due to various noise disturbances. Both factors may degrade tracking performance greatly. In this work, an adaptive particle filter (APF) with a velocity-updating based transition model (VTM) and an adaptive state estimate approach (ASEA) is proposed to improve object tracking. In the APF, the motion velocity embedded in the state transition model is updated continuously by a recursive equation, and the state estimate is obtained adaptively according to the state posterior distribution. The experimental results show that the APF can increase tracking accuracy and efficiency in complex environments.
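
    A compact sketch of this scheme: particles are propagated with a continuously updated velocity estimate, re-weighted against the current observation, and resampled; the posterior mean serves as the state estimate. The exponential-smoothing velocity update and the Gaussian likelihood are assumed forms, since the paper's exact equations are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

def track_step(obs, particles, velocity, prev_est, alpha=0.7, sigma=1.0):
    # Transition model: propagate with the current velocity estimate + noise.
    particles = particles + velocity + rng.normal(0.0, sigma, particles.shape)
    # Likelihood weighting (stand-in for an appearance model).
    d2 = np.sum((particles - obs) ** 2, axis=1)
    w = np.exp(-0.5 * d2 / sigma**2)
    w /= w.sum()
    estimate = w @ particles                 # posterior-mean state estimate
    # Recursive velocity update (assumed exponential-smoothing form).
    velocity = alpha * velocity + (1 - alpha) * (estimate - prev_est)
    # Resampling to avoid weight degeneracy.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx], velocity, estimate

particles = rng.normal(0.0, 1.0, size=(500, 2))
velocity, est = np.zeros(2), np.zeros(2)
for k in range(20):                          # object moving at constant speed
    obs = np.array([0.5 * k, 0.3 * k]) + rng.normal(0.0, 0.2, 2)
    particles, velocity, est = track_step(obs, particles, velocity, est)
print(np.round(est, 2), np.round(velocity, 2))
```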

  3. Using the SWAT model to improve process descriptions and define hydrologic partitioning in South Korea

    NASA Astrophysics Data System (ADS)

    Shope, C. L.; Maharjan, G. R.; Tenhunen, J.; Seo, B.; Kim, K.; Riley, J.; Arnhold, S.; Koellner, T.; Ok, Y. S.; Peiffer, S.; Kim, B.; Park, J.-H.; Huwe, B.

    2014-02-01

    Watershed-scale modeling can be a valuable tool to aid in the quantification of water quality and yield; however, several challenges remain. In many watersheds it is difficult to adequately quantify hydrologic partitioning. Data scarcity is prevalent, the accuracy of spatially distributed meteorology is difficult to quantify, forest encroachment and land use issues are common, and surface water and groundwater abstractions substantially modify watershed-based processes. Our objective is to assess the capability of the Soil and Water Assessment Tool (SWAT) model to capture event-based and long-term monsoonal rainfall-runoff processes in complex mountainous terrain. To accomplish this, we developed a unique quality-control, gap-filling algorithm for the interpolation of high-frequency meteorological data. We used a novel multi-location, multi-optimization calibration technique to improve estimates of catchment-wide hydrologic partitioning. The interdisciplinary model was calibrated against a unique combination of statistical, hydrologic, and plant growth metrics. Our results indicate scale-dependent sensitivity of hydrologic partitioning and a substantial influence of engineered features. The addition of hydrologic and plant growth objective functions identified the importance of culverts in catchment-wide flow distribution. While this study shows the challenges of applying the SWAT model to complex terrain and extreme environments, it also shows that by incorporating anthropogenic features into modeling scenarios we can enhance our understanding of the hydroecological impact.

  4. A Global, Multi-Waveband Model for the Zodiacal Cloud

    NASA Technical Reports Server (NTRS)

    Grogan, Keith; Dermott, Stanley F.; Kehoe, Thomas J. J.

    2003-01-01

    This recently completed three-year project was undertaken by the PI at the University of Florida, NASA Goddard and JPL, and by the Co-I and Collaborator at the University of Florida. The funding was used to support a continuation of research conducted at the University of Florida over the last decade which focuses on the dynamics of dust particles in the interplanetary environment. The main objectives of this proposal were: to produce improved dynamical models of the zodiacal cloud by performing numerical simulations of the orbital evolution of asteroidal and cometary dust particles; to provide visualizations of the results using our visualization software package, SIMUL, simulating the viewing geometries of IRAS and COBE and comparing the model results with archived data; and to use the results to provide a more accurate model of the brightness distribution of the zodiacal cloud than existing empirical models. In addition, our dynamical approach can provide insight into fundamental properties of the cloud, including but not limited to the total mass and surface area of dust, the size-frequency distribution of dust, and the relative contributions of asteroidal and cometary material. The model can also be used to provide constraints on trace signals from other sources, such as dust associated with the "Plutinos", objects captured in the 2:3 resonance with Neptune.

  5. Web-based modelling of energy, water and matter fluxes to support decision making in mesoscale catchments??the integrative perspective of GLOWA-Danube

    NASA Astrophysics Data System (ADS)

    Ludwig, R.; Mauser, W.; Niemeyer, S.; Colgan, A.; Stolz, R.; Escher-Vetter, H.; Kuhn, M.; Reichstein, M.; Tenhunen, J.; Kraus, A.; Ludwig, M.; Barth, M.; Hennicker, R.

    The GLOWA initiative (Global Change of the Water Cycle), funded by the German Ministry of Research and Education (BMBF), has been established to address the manifold consequences of Global Change on regional water resources in a variety of catchment areas with different natural and cultural characteristics. Within this framework, the GLOWA-Danube project is dealing with the Upper Danube watershed as a representative mesoscale test site (~75,000 km²) for mountain-foreland regions in the temperate mid-latitudes. The principal objective is to identify, examine and develop new techniques of coupled distributed modelling for the integration of natural and socio-economic sciences. The transdisciplinary research in GLOWA-Danube develops an integrated decision support system, called DANUBIA, to investigate the sustainability of future water use. GLOWA-Danube, which is scheduled for a total run-time of eight years to operationally implement and establish DANUBIA, comprises a university-based network of experts with water-related competence in the fields of engineering, natural and social sciences. Co-operation with a network of stakeholders in water resources management of the Upper Danube catchment ensures that practical issues and future problems in the water sector of the region can be addressed. In order to synthesize a common understanding between the project partners, a standardized notation of parameters and functions and a platform-independent structure of computational methods and interfaces have been established, making use of the Unified Modelling Language, an industry standard for the structuring and co-ordination of large projects in software development [Booch et al., The Unified Modelling Language User Guide, Addison-Wesley, Reading, 1999]. DANUBIA is object-oriented, spatially distributed and raster-based at its core. It applies the concept of “proxels” (process pixels) as its basic objects, which have different dimensions depending on the viewing scale and connect to their environment through fluxes. This paper presents the hydrological viewpoint of GLOWA-Danube, its approach to model coupling and network-based communication, and object-oriented techniques to simulate physical processes and interactions at the land surface. The mechanisms and technologies applied to communicate data and model parameters across typical discipline borders are demonstrated from the perspective of the Landsurface object. It comprises the capabilities of interdependent expert models for energy exchange at various surface types, snowmelt, soil water movement, runoff formation and plant growth in a distributed Java-based modelling environment using remote method invocation [Pitt et al., Java.rmi: The Remote Method Invocation Guide, Addison Wesley Professional, Reading, 2001, p. 320]. The text summarizes the GLOWA-Danube concept and shows the state of the implemented DANUBIA prototype after completion of the first project year (2001).

  6. A neural network model of semantic memory linking feature-based object representation and words.

    PubMed

    Cuppini, C; Magosso, E; Ursino, M

    2009-06-01

    Recent theories in cognitive neuroscience suggest that semantic memory is a distributed process, which involves many cortical areas and is based on a multimodal representation of objects. The aim of this work is to extend a previous model of object representation to realize a semantic memory in which sensory-motor representations of objects are linked with words. The model assumes that each object is described as a collection of features, coded in different cortical areas via a topological organization. Features in different objects are segmented via gamma-band synchronization of neural oscillators. The feature areas are further connected with a lexical area devoted to the representation of words. Synapses among the feature areas, and between the lexical area and the feature areas, are trained via a time-dependent Hebbian rule during a period in which individual objects are presented together with the corresponding words. Simulation results demonstrate that, during the retrieval phase, the network can deal with the simultaneous presence of objects (from sensory-motor inputs) and words (from acoustic inputs), can correctly associate objects with words, and can segment objects even in the presence of incomplete information. Moreover, the network can realize some semantic links among words representing objects with shared features. These results support the idea that semantic memory can be described as an integrated process whose content is retrieved by the co-activation of different multimodal regions. In the future, extended versions of this model may be used to test conceptual theories, and to provide a quantitative assessment of existing data (for instance concerning patients with neural deficits).

  7. Modeling the lateral load distribution for multiple concrete crossties and fastening systems.

    DOT National Transportation Integrated Search

    2017-01-31

    The objective of this project was to further investigate the performance of concrete crosstie and fastening system under vertical and lateral wheel load using finite element analysis, and explore possible improvement for current track design stan...

  8. AIR MONITOR SITING BY OBJECTIVE

    EPA Science Inventory

    A method is developed whereby measured pollutant concentrations can be used in conjunction with a mathematical air quality model to estimate the full spatial and temporal concentration distributions of the pollutants over a given region. The method is based on the application of ...

  9. Distribution factors for construction loads and girder capacity equations, final report.

    DOT National Transportation Integrated Search

    2017-03-01

    During the process of constructing a highway bridge, there are several construction stages that warrant consideration from a structural safety and design perspective. The first objective of the present study was to use analytical models of prestr...

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rongle Zhang; Jie Chang; Yuanyuan Xu

    A new kinetic model of the Fischer-Tropsch synthesis (FTS) is proposed to describe the non-Anderson-Schulz-Flory (ASF) product distribution. The model is based on the double-polymerization-monomer hypothesis, in which the surface C2* species acts as the chain-growth monomer in the light-product range, while the C1* species acts as the chain-growth monomer in the heavy-product range. A detailed kinetic model of the Langmuir-Hinshelwood-Hougen-Watson type, based on the elementary reactions, is derived for the FTS and the water-gas-shift reaction. Kinetic model candidates are evaluated by minimization of multiresponse objective functions with a genetic algorithm approach. The model of the hydrocarbon product distribution is consistent with experimental data (
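
    To see how two chain-growth monomers bend the classic Flory line, one can blend two ASF branches with different growth probabilities, one dominating the light range and one the heavy range. The alphas and the mixing weight below are purely illustrative, not the paper's fitted kinetic constants.

```python
import numpy as np

n = np.arange(1, 51)                        # carbon number

def asf(alpha, n):
    """Single-alpha ASF mole fractions: x_n = (1 - alpha) * alpha**(n - 1)."""
    return (1 - alpha) * alpha ** (n - 1)

# Two-monomer picture: a low-alpha branch (C2*-driven light products) blended
# with a high-alpha branch (C1*-driven heavy products) lifts the heavy tail,
# the qualitative signature of a non-ASF distribution.
x = 0.7 * asf(0.55, n) + 0.3 * asf(0.85, n)
x /= x.sum()

print(np.round(x[:10], 4))                  # C1..C10 mole fractions
```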

  11. Portfolio optimization with skewness and kurtosis

    NASA Astrophysics Data System (ADS)

    Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi

    2013-04-01

    Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model becomes inadequate if the returns of assets are not normally distributed. Therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk-averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return is reduced. The objective of this study is to compare the portfolio compositions and performances of the mean-variance model and the mean-variance-skewness-kurtosis model using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis changes the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because it takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for the investors of Malaysia in portfolio optimization.
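
    Polynomial goal programming first optimizes each moment separately to get its ideal value, then minimizes the aggregate deviation from those ideals; a compact sketch with synthetic fat-tailed returns (unit deviation powers are assumed, where investor preferences would normally set them):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(7)
R = rng.standard_t(df=5, size=(1000, 4)) * 0.02   # synthetic fat-tailed returns

def moments(w):
    r = R @ w
    return r.mean(), r.var(), skew(r), kurtosis(r)   # excess kurtosis

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
bounds = [(0, 1)] * R.shape[1]
w0 = np.full(R.shape[1], 1.0 / R.shape[1])

def solo(fun):
    """Ideal value of one moment when optimized on its own."""
    return minimize(fun, w0, bounds=bounds, constraints=cons).fun

ideal_mean = -solo(lambda w: -moments(w)[0])     # maximize mean
ideal_var = solo(lambda w: moments(w)[1])        # minimize variance
ideal_skew = -solo(lambda w: -moments(w)[2])     # maximize skewness
ideal_kurt = solo(lambda w: moments(w)[3])       # minimize kurtosis

def pgp(w):
    """Aggregate deviation from the four ideal moment values."""
    m, v, s, k = moments(w)
    return (abs(ideal_mean - m) + abs(v - ideal_var)
            + abs(ideal_skew - s) + abs(k - ideal_kurt))

w_opt = minimize(pgp, w0, bounds=bounds, constraints=cons).x
print(np.round(w_opt, 3), np.round(moments(w_opt), 4))
```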

  12. Temporal patterns of mental model convergence: implications for distributed teams interacting in electronic collaboration spaces.

    PubMed

    McComb, Sara; Kennedy, Deanna; Perryman, Rebecca; Warner, Norman; Letsky, Michael

    2010-04-01

    Our objective is to capture temporal patterns in mental model convergence processes and differences in these patterns between distributed teams using an electronic collaboration space and face-to-face teams with no interface. Distributed teams, as sociotechnical systems, collaborate via technology to work on their task. The way in which they process information to inform their mental models may be examined via team communication and may unfold differently than it does in face-to-face teams. We conducted our analysis on 32 three-member teams working on a planning task. Half of the teams worked as distributed teams in an electronic collaboration space, and the other half worked face-to-face without an interface. Using event history analysis, we found temporal interdependencies among the initial convergence points of the multiple mental models we examined. Furthermore, the timing of mental model convergence and the onset of task work discussions were related to team performance. Differences existed in the temporal patterns of convergence and task work discussions across conditions. Distributed teams interacting via an electronic interface and face-to-face teams with no interface converged on multiple mental models, but their communication patterns differed. In particular, distributed teams with an electronic interface required less overall communication, converged on all mental models later in their life cycles, and exhibited more linear cognitive processes than did face-to-face teams interacting verbally. Managers need unique strategies for facilitating communication and mental model convergence depending on teams' degrees of collocation and access to an interface, which in turn will enhance team performance.

  13. Soil redistribution model for undisturbed and cultivated sites based on Chernobyl-derived cesium-137 fallout.

    PubMed

    Hrachowitz, Markus; Maringer, Franz-Josef; Steineder, Christian; Gerzabek, Martin H

    2005-01-01

    Measurements of 137Cs fallout have been used in combination with a range of conversion models for the investigation of soil relocation mechanisms and sediment budgets in many countries for more than 20 yr. The objective of this paper is to develop a conversion model for quantifying soil redistribution, based on Chernobyl-derived 137Cs. The model is applicable on uncultivated as well as on cultivated sites, taking into account temporal changes in the 137Cs depth distribution pattern as well as tillage-induced 137Cs dilution effects. The main idea of the new model is the combination of a modified exponential model describing uncultivated soil with a Chapman-distribution-based model describing cultivated soil. The compound model subsequently allows a dynamic description of the Chernobyl-derived 137Cs situation in the soil and its change, specifically migration and soil transport processes, over the course of time. Using the suggested model at the sampling site in Pettenbach, in the Austrian province of Oberösterreich, 137Cs depth distributions were simulated with a correlation coefficient of 0.97 compared with the measured 137Cs depth profile. The simulated rates of soil redistribution at different positions at the sampling site were found to be between 27 and 60 Mg ha(-1) yr(-1). It was shown that the model can be used to describe the temporal changes of 137Cs depth distributions in cultivated as well as uncultivated soils. Additionally, the model allows quantification of soil redistribution in good correspondence with already existing models.
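    The two profile families can be illustrated as follows; the functional forms and parameter values are generic stand-ins (a plain exponential and a Chapman-type saturation curve), not the paper's calibrated modified-exponential model:

```python
import numpy as np

# Generic stand-ins for the two profile families (activity vs. mass depth z):
def profile_uncultivated(z, a0=1.0, h0=5.0):
    # Exponential decline with relaxation depth h0 (undisturbed soil).
    return a0 * np.exp(-z / h0)

def profile_cultivated(z, a0=0.2, b=0.6, c=2.0, z_till=25.0):
    # Chapman-type saturation curve, truncated at the tillage depth,
    # mimicking mixing of 137Cs through the plough layer.
    return np.where(z <= z_till, a0 * (1.0 - np.exp(-b * z)) ** c, 0.0)

z = np.linspace(0.0, 40.0, 401)          # mass depth, e.g. g cm^-2
dz = z[1] - z[0]
# Total inventories (areal activities) by simple numerical integration;
# comparing them against a reference site yields erosion/deposition rates.
print(profile_uncultivated(z).sum() * dz, profile_cultivated(z).sum() * dz)
```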

  14. Southern California Edison Grid Integration Evaluation: Cooperative Research and Development Final Report, CRADA Number CRD-10-376

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mather, Barry

    2015-07-09

    The objective of this project is to use field verification to improve DOE’s ability to model and understand the impacts of, as well as develop solutions for, high penetration PV deployments in electrical utility distribution systems. The Participant will work with NREL to assess the existing distribution system at SCE facilities and assess adding additional PV systems into the electric power system.

  15. Flow-Topography Interactions in the Vicinity of a Deep Ocean Island and a Ridge

    DTIC Science & Technology

    2015-09-30

    DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. Flow-Topography Interactions in the Vicinity of a Deep... flow around abrupt topography in operational Navy models. RELATED PROJECTS: NRL FY17 6.2 New Start proposal (pending), titled "Predictability of Flow Interacting with Abrupt Topography (FIAT)"; lead PI: Ana Rice, NRL-SSC. The objective of FIAT is to use observations to develop Navy...

  16. IMPACT fragmentation model developments

    NASA Astrophysics Data System (ADS)

    Sorge, Marlon E.; Mains, Deanna L.

    2016-09-01

    The IMPACT fragmentation model has been used by The Aerospace Corporation for more than 25 years to analyze orbital altitude explosions and hypervelocity collisions. The model is semi-empirical, combining mass, energy and momentum conservation laws with empirically derived relationships for fragment characteristics such as number, mass, area-to-mass ratio, and spreading velocity as well as event energy distribution. Model results are used for several types of analysis including assessment of short-term risks to satellites from orbital altitude fragmentations, prediction of the long-term evolution of the orbital debris environment and forensic assessments of breakup events. A new version of IMPACT, version 6, has been completed and incorporates a number of advancements enabled by a multi-year long effort to characterize more than 11,000 debris fragments from more than three dozen historical on-orbit breakup events. These events involved a wide range of causes, energies, and fragmenting objects. Special focus was placed on the explosion model, as the majority of events examined were explosions. Revisions were made to the mass distribution used for explosion events, increasing the number of smaller fragments generated. The algorithm for modeling upper stage large fragment generation was updated. A momentum conserving asymmetric spreading velocity distribution algorithm was implemented to better represent sub-catastrophic events. An approach was developed for modeling sub-catastrophic explosions, those where the majority of the parent object remains intact, based on estimated event energy. Finally, significant modifications were made to the area-to-mass ratio distribution to incorporate the tendencies of different materials to fragment into different shapes. This ability enabled better matches between the observed area-to-mass ratios and those generated by the model. It also opened up additional possibilities for post-event analysis of breakups. The paper will discuss a number of the modifications that have been made to improve IMPACT and why these modifications were made. Comparisons between observational data and the IMPACT predictions will be discussed in the context of these model revisions and the overall behavior of model results. A number of future areas of investigation that were uncovered in the process of the analysis efforts will also be reviewed.
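    The abstract does not give IMPACT's revised distributions, so as a stand-in the sketch below uses the widely published NASA standard breakup model power law for explosions, together with inverse-transform sampling of fragment characteristic lengths; the coefficients and the sampling range are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

def n_cumulative_explosion(l_c, s=1.0):
    # NASA standard breakup model, explosions: N(>Lc) = 6 * S * Lc^-1.6,
    # with Lc in metres and S a unitless scaling factor (stand-in only;
    # IMPACT's revised mass distribution differs).
    return 6.0 * s * l_c ** -1.6

def sample_fragment_sizes(n, l_min=0.01, l_max=1.0, exponent=1.6):
    # Inverse-transform sampling from the truncated power law implied by
    # N(>Lc) ~ Lc^-exponent between l_min and l_max.
    u = rng.uniform(size=n)
    a, b = l_min ** -exponent, l_max ** -exponent
    return (a - u * (a - b)) ** (-1.0 / exponent)

sizes = sample_fragment_sizes(5000)
print(f"{n_cumulative_explosion(0.1):.0f} fragments larger than 10 cm (S=1)")
print(f"sampled median size: {np.median(sizes) * 100:.1f} cm")
```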

  17. Integrated three-dimensional shape and reflection properties measurement system.

    PubMed

    Krzesłowski, Jakub; Sitnik, Robert; Maczkowski, Grzegorz

    2011-02-01

    Creating accurate three-dimensional (3D) digitalized models of cultural heritage objects requires that information about surface geometry be integrated with measurements of other material properties like color and reflectance. Up until now, these measurements have been performed in laboratories using manually integrated (subjective) data analyses. We describe an out-of-laboratory bidirectional reflectance distribution function (BRDF) and 3D shape measurement system that implements shape and BRDF measurement in a single setup with BRDF uncertainty evaluation. The setup aligns spatial data with the angular reflectance distribution, yielding a better estimation of the surface's reflective properties by integrating these two modality measurements into one setup using a single detector. This approach provides a better picture of an object's intrinsic material features, which in turn produces a higher-quality digitalized model reconstruction. Furthermore, this system simplifies the data processing by combining structured light projection and photometric stereo. The results of our method of data analysis describe the diffusive and specular attributes corresponding to every measured geometric point and can be used to render intricate 3D models in an arbitrarily illuminated scene.

  18. GOSSIP, a New VO Compliant Tool for SED Fitting

    NASA Astrophysics Data System (ADS)

    Franzetti, P.; Scodeggio, M.; Garilli, B.; Fumana, M.; Paioro, L.

    2008-08-01

    We present GOSSIP (Galaxy Observed-Simulated SED Interactive Program), a new tool developed to perform SED fitting in a simple, user-friendly and efficient way. GOSSIP automatically builds up the observed SED of an object (or a large sample of objects) by combining magnitudes in different bands and, optionally, a spectrum; then it performs a χ^2 minimization fitting procedure against a set of synthetic models. The fitting results are used to estimate a number of physical parameters like the Star Formation History, absolute magnitudes, stellar mass and their Probability Distribution Functions. User-defined models can be used, but GOSSIP is also able to load models produced by the most commonly used population synthesis codes. GOSSIP can be used interactively with other visualization tools using the PLASTIC protocol for communications. Moreover, since it has been developed with large data sets applications in mind, it will be extended to operate within the Virtual Observatory framework. GOSSIP is distributed to the astronomical community from the PANDORA group web site (http://cosmos.iasf-milano.inaf.it/pandora/gossip.html).
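    The core χ² fit with a free template amplitude can be written compactly. The sketch below is an assumption about the generic procedure, not GOSSIP's actual code; the template grid and photometry are synthetic:

```python
import numpy as np

def fit_sed(flux, err, models):
    """Chi-square fit of observed fluxes against template SEDs, with the
    template amplitude optimized analytically for each candidate."""
    best = (np.inf, -1, 0.0)
    for k, m in enumerate(models):
        # Best-fit normalization minimizing chi^2 for this template.
        a = np.sum(flux * m / err**2) / np.sum(m**2 / err**2)
        chi2 = np.sum((flux - a * m) ** 2 / err**2)
        if chi2 < best[0]:
            best = (chi2, k, a)
    return best  # (min chi^2, template index, best amplitude)

rng = np.random.default_rng(8)
models = rng.uniform(0.5, 2.0, size=(50, 6))      # toy template grid
truth = 1.3 * models[17]
flux = truth + rng.normal(0, 0.05, truth.size)    # synthetic photometry
err = np.full(flux.size, 0.05)
print(fit_sed(flux, err, models))                 # likely recovers index 17

# Relative probabilities over templates, p_k ~ exp(-chi2_k / 2) normalized
# over the grid, can then approximate PDFs for the derived parameters.
```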

  19. Orthodontic intrusion of maxillary incisors: a 3D finite element method study

    PubMed Central

    Saga, Armando Yukio; Maruo, Hiroshi; Argenta, Marco André; Maruo, Ivan Toshio; Tanaka, Orlando Motohiro

    2016-01-01

    Objective: In orthodontic treatment, intrusion movement of maxillary incisors is often necessary. Therefore, the objective of this investigation is to evaluate the initial distribution patterns and magnitude of compressive stress in the periodontal ligament (PDL) in a simulation of orthodontic intrusion of maxillary incisors, considering the points of force application. Methods: Anatomic 3D models reconstructed from cone-beam computed tomography scans were used to simulate maxillary incisors intrusion loading. The points of force application selected were: centered between central incisors brackets (LOAD 1); bilaterally between the brackets of central and lateral incisors (LOAD 2); bilaterally distal to the brackets of lateral incisors (LOAD 3); bilaterally 7 mm distal to the center of brackets of lateral incisors (LOAD 4). Results and Conclusions: Stress concentrated at the PDL apex region, irrespective of the point of orthodontic force application. The four load models showed distinct contour plots and compressive stress values over the midsagittal reference line. The contour plots of central and lateral incisors were not similar in the same load model. LOAD 3 resulted in more balanced compressive stress distribution. PMID:27007765

  20. The Modular Modeling System (MMS): A modeling framework for water- and environmental-resources management

    USGS Publications Warehouse

    Leavesley, G.H.; Markstrom, S.L.; Viger, R.J.

    2004-01-01

    The interdisciplinary nature and increasing complexity of water- and environmental-resource problems require the use of modeling approaches that can incorporate knowledge from a broad range of scientific disciplines. The large number of distributed hydrological and ecosystem models currently available are composed of a variety of different conceptualizations of the associated processes they simulate. Assessment of the capabilities of these distributed models requires evaluation of the conceptualizations of the individual processes, and the identification of which conceptualizations are most appropriate for various combinations of criteria, such as problem objectives, data constraints, and spatial and temporal scales of application. With this knowledge, "optimal" models for specific sets of criteria can be created and applied. The U.S. Geological Survey (USGS) Modular Modeling System (MMS) is an integrated system of computer software that has been developed to provide these model development and application capabilities. MMS supports the integration of models and tools at a variety of levels of modular design. These include individual process models, tightly coupled models, loosely coupled models, and fully-integrated decision support systems. A variety of visualization and statistical tools are also provided. MMS has been coupled with the Bureau of Reclamation (BOR) object-oriented reservoir and river-system modeling framework, RiverWare, under a joint USGS-BOR program called the Watershed and River System Management Program. MMS and RiverWare are linked using a shared relational database. The resulting database-centered decision support system provides tools for evaluating and applying optimal resource-allocation and management strategies to complex, operational decisions on multipurpose reservoir systems and watersheds. Management issues being addressed include efficiency of water-resources management, environmental concerns such as meeting flow needs for endangered species, and optimizing operations within the constraints of multiple objectives such as power generation, irrigation, and water conservation. This decision support system approach is being developed, tested, and implemented in the Gunnison, Yakima, San Juan, Rio Grande, and Truckee River basins of the western United States. Copyright ASCE 2004.

  1. Guest Editor's introduction: Selected papers from the 4th USENIX Conference on Object-Oriented Technologies and Systems

    NASA Astrophysics Data System (ADS)

    Sventek, Joe

    1998-12-01

    Hewlett-Packard Laboratories, 1501 Page Mill Road, Palo Alto, CA 94304, USA

    Introduction. The USENIX Conference on Object-Oriented Technologies and Systems (COOTS) is held annually in the late spring. The conference evolved from a set of C++ workshops that were held under the auspices of USENIX, the first of which met in 1989. Given the growing diverse interest in object-oriented technologies, the C++ focus of the workshop eventually became too narrow, with the result that the scope was widened in 1995 to include object-oriented technologies and systems. COOTS is intended to showcase advanced R&D efforts in object-oriented technologies and software systems. The conference emphasizes experimental research and experience gained by using object-oriented techniques and languages to build complex software systems that meet real-world needs. COOTS solicits papers in the following general areas: application of, and experiences with, object-oriented technologies in particular domains (e.g. financial, medical, telecommunication); the architecture and implementation of distributed object systems (e.g. CORBA, DCOM, RMI); object-oriented programming and specification languages; object-oriented design and analysis. The 4th meeting of COOTS was held 27-30 April 1998 at the El Dorado Hotel, Santa Fe, New Mexico, USA. Several tutorials were given. The technical program proper consisted of a single track of six sessions, with three paper presentations per session. A keynote address and a provocative panel session rounded out the technical program. The program committee reviewed 56 papers, selecting the best 18 for presentation in the technical sessions. While we solicit papers across the spectrum of applications of object-oriented technologies, this year there was a predominance of distributed, object-oriented papers. The accepted papers reflected this asymmetry, with 15 papers on distributed objects and 3 papers on object-oriented languages. The papers in this special issue are the six best distributed object papers (in the opinion of the program committee). They represent the diversity of research in this particular area, and should give the reader a good idea of the types of papers presented at COOTS as well as the calibre of the work so presented.

    The papers. The paper by Jain, Widoff and Schmidt explores the suitability of Java for writing performance-sensitive distributed applications. Despite the popularity of Java, there are many concerns about its efficiency; in particular, networking and computation performance are key concerns when considering the use of Java to develop performance-sensitive distributed applications. This paper makes three contributions to the study of Java for these applications: it describes an architecture using Java and the Web to develop MedJava, which is a distributed electronic medical imaging system with stringent networking and computation requirements; it presents benchmarks of MedJava image processing and compares the results to the performance of xv, which is an equivalent image processing application written in C; it presents performance benchmarks using Java as a transport interface to exchange large medical images over high-speed ATM networks. The paper by Little and Shrivastava covers the integration of several important topics: transactions, distributed systems, Java, the Internet and security. The usefulness of this paper lies in the synthesis of an effective solution applying work in different areas of computing to the Java environment. Securing applications constructed from distributed objects is important if these applications are to be used in mission-critical situations. Delegation is one aspect of distributed system security that is necessary for such applications. The paper by Nagaratnam and Lea describes a secure delegation model for Java-based, distributed object environments. The paper by Frølund and Koistinen addresses the topical issue of providing a common way for describing Quality-of-Service (QoS) features in distributed, object-oriented systems. They present a general QoS language, QML, that can be used to capture QoS properties as part of a design. They also show how to extend UML to support QML concepts. The paper by Szymaszek, Uszok and Zielinski discusses the important issue of efficient implementation and usage of fine-grained objects in CORBA-based applications. Fine-grained objects can have serious ramifications on overall application performance and scalability, and the paper suggests that such objects should not be treated as first-class CORBA objects, proposing instead the use of collections and smart proxies for efficient implementation. The paper by Milojicic, LaForge and Chauhan describes a mobile objects and agents infrastructure. Their particular research has focused on communication support across agent migration and extensive resource control. The paper also discusses issues regarding interoperation between agent systems.

    Acknowledgments. The editor wishes to thank all of the authors, reviewers and publishers. Without their excellent work, and the contribution of their valuable time, this special issue would not have been possible.

  2. Sea Level Affecting Marshes Model (SLAMM) ‐ New functionality for predicting changes in distribution of submerged aquatic vegetation in response to sea level rise

    USGS Publications Warehouse

    Lee II, Henry; Reusser, Deborah A.; Frazier, Melanie R; McCoy, Lee M; Clinton, Patrick J.; Clough, Jonathan S.

    2014-01-01

    The “Sea‐Level Affecting Marshes Model” (SLAMM) is a moderate resolution model used to predict the effects of sea level rise on marsh habitats (Craft et al. 2009). SLAMM has been used extensively on both the west coast (e.g., Glick et al., 2007) and east coast (e.g., Geselbracht et al., 2011) of the United States to evaluate potential changes in the distribution and extent of tidal marsh habitats. However, a limitation of the current version of SLAMM (Version 6.2) is that it lacks the ability to model distribution changes in seagrass, a type of submerged aquatic vegetation (SAV), resulting from sea level rise. Because of the ecological importance of SAV habitats, U.S. EPA, USGS, and USDA partnered with Warren Pinnacle Consulting to enhance the SLAMM modeling software to include new functionality to predict changes in Zostera marina distribution within Pacific Northwest estuaries in response to sea level rise. Specifically, the objective was to develop an SAV model that used generally available GIS data and predictive parameters, and that could be customized for other estuaries that have GIS layers of existing SAV distribution. This report describes the procedure used to develop the SAV model for the Yaquina Bay Estuary, Oregon, appends a statistical script based on the open source R software to generate a similar SAV model for other estuaries that have data layers of existing SAV, and describes how to incorporate the model coefficients from the site‐specific SAV model into SLAMM to predict the effects of sea level rise on Zostera marina distributions. To demonstrate the applicability of the R tools, we utilize them to develop model coefficients for Willapa Bay, Washington using site‐specific SAV data.

  3. Using satellite and airborne LiDAR to model woodpecker habitat occupancy at the landscape scale

    Treesearch

    Lee A. Vierling; Kerri T. Vierling; Patrick Adam; Andrew T. Hudak

    2013-01-01

    Incorporating vertical vegetation structure into models of animal distributions can improve understanding of the patterns and processes governing habitat selection. LiDAR can provide such structural information, but these data are typically collected via aircraft and thus are limited in spatial extent. Our objective was to explore the utility of satellite-based LiDAR...

  4. Trading strategies for distribution company with stochastic distributed energy resources

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Chunyu; Wang, Qi; Wang, Jianhui

    2016-09-01

    This paper proposes a methodology to address the trading strategies of a proactive distribution company (PDISCO) engaged in the transmission-level (TL) markets. A one-leader multi-follower bilevel model is presented to formulate the gaming framework between the PDISCO and markets. The lower-level (LL) problems include the TL day-ahead market and scenario-based real-time markets, respectively with the objectives of maximizing social welfare and minimizing operation cost. The upper-level (UL) problem is to maximize the PDISCO’s profit across these markets. The PDISCO’s strategic offers/bids interactively influence the outcomes of each market. Since the LL problems are linear and convex, while the UL problem is non-linear and non-convex, an equivalent primal–dual approach is used to reformulate this bilevel model to a solvable mathematical program with equilibrium constraints (MPEC). The effectiveness of the proposed model is verified by case studies.

  5. Utilizing a scale model solar system project to visualize important planetary science concepts and develop technology and spatial reasoning skills

    NASA Astrophysics Data System (ADS)

    Kortenkamp, Stephen J.; Brock, Laci

    2016-10-01

    Scale model solar systems have been used for centuries to help educate young students and the public about the vastness of space and the relative sizes of objects. We have adapted the classic scale model solar system activity into a student-driven project for an undergraduate general education astronomy course at the University of Arizona. Students are challenged to construct and use their three-dimensional models to demonstrate an understanding of numerous concepts in planetary science, including: 1) planetary obliquities, eccentricities, inclinations; 2) phases and eclipses; 3) planetary transits; 4) asteroid sizes, numbers, and distributions; 5) giant planet satellite and ring systems; 6) the Pluto system and Kuiper belt; 7) the extent of space travel by humans and robotic spacecraft; 8) the diversity of extrasolar planetary systems. Secondary objectives of the project allow students to develop better spatial reasoning skills and gain familiarity with technology such as Excel formulas, smart-phone photography, and audio/video editing. During our presentation we will distribute a formal description of the project and discuss our expectations of the students, as well as present selected highlights from preliminary submissions.
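    The underlying arithmetic is a single scale factor applied to real diameters and orbital distances. A minimal sketch, assuming a hypothetical 1:10^10 scale (the Sun as a 14 cm ball) and rounded planetary data:

```python
# Scale factor mapping the Sun to a 14 cm ball (~1:10^10 scale, an
# illustrative choice, not the course's prescribed scale).
SCALE = 0.14 / 1.39e9            # model metres per real metre

AU = 1.496e11                    # metres
bodies = {                       # real diameter (m), orbital distance (AU)
    "Sun":     (1.39e9, 0.0),
    "Earth":   (1.27e7, 1.0),
    "Jupiter": (1.40e8, 5.2),
    "Neptune": (4.92e7, 30.1),
}
for name, (d, a) in bodies.items():
    print(f"{name:8s} diameter {d * SCALE * 1000:7.2f} mm, "
          f"distance {a * AU * SCALE:7.1f} m")
```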

  6. Predicting bottlenose dolphin distribution along Liguria coast (northwestern Mediterranean Sea) through different modeling techniques and indirect predictors.

    PubMed

    Marini, C; Fossa, F; Paoli, C; Bellingeri, M; Gnone, G; Vassallo, P

    2015-03-01

    Habitat modeling is an important tool to investigate the quality of the habitat for a species within a certain area, to predict species distribution and to understand the ecological processes behind it. Many species have been investigated by means of habitat modeling techniques, mainly to address effective management and protection policies, and cetaceans play an important role in this context. The bottlenose dolphin (Tursiops truncatus) has been investigated with habitat modeling techniques since 1997. The objectives of this work were to predict the distribution of bottlenose dolphin in a coastal area through the use of static morphological features and to compare the prediction performances of three different modeling techniques: Generalized Linear Model (GLM), Generalized Additive Model (GAM) and Random Forest (RF). Four static variables were tested: depth, bottom slope, distance from the 100 m bathymetric contour and distance from the coast. RF proved to be both the most accurate and the most precise modeling technique, with very high distribution probabilities predicted in presence cells (90.4% of mean predicted probabilities) and with 66.7% of presence cells having a predicted probability between 90% and 100%. The bottlenose dolphin distribution obtained with RF allowed the identification of specific areas with particularly high presence probability along the coastal zone; the recognition of these core areas may be the starting point for developing effective management practices to improve T. truncatus protection. Copyright © 2014 Elsevier Ltd. All rights reserved.
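    A minimal Random Forest sketch using the four static predictors named in the abstract; the training grid and the presence rule below are synthetic placeholders, not the study's survey data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Hypothetical training cells. Columns: depth (m), slope (deg),
# distance to 100 m contour (km), distance to coast (km).
X = rng.uniform([0, 0, 0, 0], [200, 30, 20, 15], size=(400, 4))
y = (X[:, 0] < 120) & (X[:, 3] < 8)     # toy presence rule, for demo only

rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)

# Presence probability over new cells; cells above 0.9 would map the
# "core areas" discussed in the abstract.
cells = rng.uniform([0, 0, 0, 0], [200, 30, 20, 15], size=(5, 4))
print(rf.predict_proba(cells)[:, 1])
```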

  7. Determination of material distribution in heading process of small bimetallic bar

    NASA Astrophysics Data System (ADS)

    Presz, Wojciech; Cacko, Robert

    2018-05-01

    Electrical connectors mostly have silver contacts joined by riveting. In order to reduce costs, the core of the contact rivet can be replaced with a cheaper material, e.g. copper. There is a wide range of commercially available bimetallic (silver-copper) rivets on the market for the production of contacts. This creates new conditions in the riveting process, because a bi-metal object is riveted. In the analyzed example it is a small object, which can be placed at the border of microforming. Based on FEM modeling of the loading process of bimetallic rivets with different material distributions, the desired distribution was chosen and the choice was justified. Possible material distributions were parameterized with two parameters referring to desirable distribution characteristics. A parameter, the Coefficient of Mutual Interactions of Plastic Deformations, and the method of its determination are proposed. The parameter is determined from two-parameter stress-strain curves and is a function of these parameters and of the range of equivalent strains occurring in the analyzed process. The proposed method was applied to the upsetting process of the bimetallic head of an electrical contact. A nomogram was established to predict the distribution of materials in the head of the rivet and to support the appropriate selection of a pair of materials to achieve the desired distribution.

  8. Application of ideal pressure distribution in development process of automobile seats.

    PubMed

    Kilincsoy, U; Wagner, A; Vink, P; Bubb, H

    2016-07-19

    In designing a car seat, the ideal pressure distribution is important because the seat is the largest contact surface between the human and the car. Obstacles hinder a more general application of the ideal pressure distribution in seating design, so multidimensional measuring techniques combined with extensive user tests are necessary. The objective of this study is to apply and integrate the knowledge about the ideal pressure distribution in the seat design process of a car manufacturer in an efficient way. Ideal pressure distribution was combined with pressure measurement, in this case pressure mats. In order to integrate this theoretical knowledge of seating comfort into the seat development process of a car manufacturer, a special user interface was defined and developed. Mapping the measured pressure distribution in real time, accurately scaled to actual seats during test setups, led directly to design implications for seat design even during the test situation. Detailed analysis of the subjects' feedback was correlated with objective measurements of the subjects' pressure distribution in real time, and existing seating characteristics were taken into account as well. A user interface can incorporate theoretical and validated 'state of the art' models of comfort. Consequently, this information can reduce extensive testing and lead to more detailed results in a shorter time period.

  9. Impact Cratering Calculations

    NASA Technical Reports Server (NTRS)

    Ahrens, Thomas J.

    2001-01-01

    We examined the von Mises and Mohr-Coulomb strength models with and without damage effects and developed a model for dilatancy. The models and results are given in O'Keefe et al. We found that, by incorporating damage into the models, we could, in a single integrated impact calculation starting with the bolide in the atmosphere, produce final crater profiles having the major features found in field measurements. These features included a central uplift, an inner ring, circular terracing and faulting. This was accomplished with undamaged surface strengths of approximately 0.1 GPa and at-depth strengths of approximately 1.0 GPa. We modeled the damage in geologic materials using a phenomenological approach, which coupled the Johnson-Cook damage model with the CTH code geologic strength model. The objective here was not to determine the distribution of fragment sizes, but rather to determine the effect of brecciated and comminuted material on the crater evolution, fault production, ejecta distribution, and final crater morphology.

  10. COBRA ATD minefield detection model initial performance analysis

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A statistical performance analysis of the USMC Coastal Battlefield Reconnaissance and Analysis (COBRA) Minefield Detection (MFD) Model has been performed in support of the COBRA ATD Program under execution by the Naval Surface Warfare Center/Dahlgren Division/Coastal Systems Station. This analysis uses the Veridian ERIM International MFD model from the COBRA Sensor Performance Evaluation and Computational Tools for Research Analysis modeling toolbox and a collection of multispectral mine detection algorithm response distributions for mines and minelike clutter objects. These mine detection response distributions were generated from actual COBRA ATD test missions over littoral zone minefields. This analysis serves to validate both the utility and effectiveness of the COBRA MFD Model as a predictive MFD performance tool. COBRA ATD minefield detection model algorithm performance results based on a simulated baseline minefield detection scenario are presented, as well as results of an MFD model algorithm parametric sensitivity study.

  11. NACA0012 benchmark model experimental flutter results with unsteady pressure distributions

    NASA Technical Reports Server (NTRS)

    Rivera, Jose A., Jr.; Dansberry, Bryan E.; Bennett, Robert M.; Durham, Michael H.; Silva, Walter A.

    1992-01-01

    The Structural Dynamics Division at NASA Langley Research Center has started a wind tunnel activity referred to as the Benchmark Models Program. The primary objective of this program is to acquire measured dynamic instability and corresponding pressure data that will be useful for developing and evaluating aeroelastic type computational fluid dynamics codes currently in use or under development. The program is a multi-year activity that will involve testing of several different models to investigate various aeroelastic phenomena. This paper describes results obtained from a second wind tunnel test of the first model in the Benchmark Models Program. This first model consisted of a rigid semispan wing having a rectangular planform and a NACA 0012 airfoil shape which was mounted on a flexible two degree of freedom mount system. Experimental flutter boundaries and corresponding unsteady pressure distribution data acquired over two model chords located at the 60 and 95 percent span stations are presented.

  12. Semiparametric Bayesian classification with longitudinal markers

    PubMed Central

    De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter

    2013-01-01

    Summary: We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871

  13. Optimization of pressure gauge locations for water distribution systems using entropy theory.

    PubMed

    Yoo, Do Guen; Chang, Dong Eil; Jun, Hwandon; Kim, Joong Hoon

    2012-12-01

    It is essential to select the optimal pressure gauge location for effective management and maintenance of water distribution systems. This study proposes an objective and quantified standard for selecting the optimal pressure gauge location by defining, using entropy theory, the pressure change at other nodes as a result of a demand change at a specific node. Two cases are considered in terms of demand change: one in which demand at all nodes shows peak load via a peak factor, and one comprising normally distributed demand changes whose mean is the base demand. The actual pressure change pattern is determined by using the emitter function of EPANET to reflect the pressure that changes practically at each node. The optimal pressure gauge location is determined by prioritizing the node that exchanges the largest amount of information with the whole system, both giving (giving entropy) and receiving (receiving entropy), according to the entropy standard. The suggested model is applied to one virtual and one real pipe network, and the optimal pressure gauge location combination is calculated by implementing a sensitivity analysis based on the study results. These analysis results support the following two conclusions. Firstly, the installation priority of pressure gauges in water distribution networks can be determined with a more objective standard through entropy theory. Secondly, the model can be used as an efficient decision-making guide for gauge installation in water distribution systems.
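    The giving/receiving entropy ranking can be sketched from a node-to-node sensitivity matrix; the matrix below is random filler standing in for the EPANET emitter-function results, and the specific weighting of the two entropies is an assumption:

```python
import numpy as np

def shannon(p):
    # Shannon entropy (bits) of a normalized distribution.
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

# S[i, j]: magnitude of pressure change at node j caused by a demand
# change at node i (e.g., from EPANET emitter runs). Synthetic here.
rng = np.random.default_rng(4)
S = rng.uniform(0.0, 1.0, size=(6, 6))

give = np.array([shannon(S[i] / S[i].sum()) for i in range(len(S))])
recv = np.array([shannon(S[:, j] / S[:, j].sum()) for j in range(len(S))])

ranking = np.argsort(-(give + recv))   # candidate gauge nodes, best first
print(ranking)
```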

  14. Improving Estimation of Ground Casualty Risk From Reentering Space Objects

    NASA Technical Reports Server (NTRS)

    Ostrom, Chris L.

    2017-01-01

    A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update to the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
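    The "simple analytical method" for time spent above each latitude band has a standard closed form for a circular orbit with uniform motion in argument of latitude. The sketch below implements only this Kepler-orbit piece and none of the ellipsoidal-Earth corrections the abstract describes:

```python
import numpy as np

def band_fraction(inc_deg, lat1_deg, lat2_deg):
    """Fraction of a circular orbit spent between two northern latitudes,
    assuming uniform motion in argument of latitude (simple Kepler case).
    For latitude = asin(sin i * sin u), the band [lat1, lat2] maps to an
    interval in u, and the dwell fraction is (u2 - u1) / pi per hemisphere."""
    i = np.radians(inc_deg)
    lo, hi = np.radians([lat1_deg, lat2_deg])
    hi = min(hi, i)                      # no time above the inclination
    if lo >= hi:
        return 0.0
    u1 = np.arcsin(np.clip(np.sin(lo) / np.sin(i), -1.0, 1.0))
    u2 = np.arcsin(np.clip(np.sin(hi) / np.sin(i), -1.0, 1.0))
    return (u2 - u1) / np.pi

# Dwell fractions for a 51.6 deg inclination object in 10-degree bands;
# the northern fractions sum to 0.5 by symmetry.
for lat in range(0, 60, 10):
    print(lat, round(band_fraction(51.6, lat, lat + 10), 4))
```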

  15. Propagation of neutron-reaction uncertainties through multi-physics models of novel LWR's

    NASA Astrophysics Data System (ADS)

    Hernandez-Solis, Augusto; Sjöstrand, Henrik; Helgesson, Petter

    2017-09-01

    The novel design of the renewable boiling water reactor (RBWR) allows a breeding ratio greater than unity and thus aims at providing a self-sustained fuel cycle. The neutron reactions that compose the different microscopic cross-sections and angular distributions are uncertain, so when they are employed in the determination of the spatial distribution of the neutron flux in a nuclear reactor, a methodology should be employed to account for these associated uncertainties. In this work, the Total Monte Carlo (TMC) method is used to propagate the different neutron-reaction (as well as angular-distribution) covariances that are part of the TENDL-2014 nuclear data (ND) library. The main objective is to propagate them through coupled neutronic and thermal-hydraulic models in order to assess the uncertainty of important multi-physics safety parameters, such as the peak cladding temperature along the axial direction of an RBWR fuel assembly. The study quantifies the impact that ND covariances of important nuclides such as U-235, U-238, Pu-239 and the thermal scattering of hydrogen in H2O have on the deterministic safety analysis of novel nuclear reactor designs.
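    The TMC pattern, one full coupled-model run per random nuclear-data realization, reduces to a plain Monte Carlo loop. Everything below (the response function, the perturbation model, the output) is a toy stand-in for the actual neutronic/thermal-hydraulic chain:

```python
import numpy as np

rng = np.random.default_rng(5)

def run_coupled_model(nd_sample):
    """Stand-in for a coupled neutronic/thermal-hydraulic calculation:
    maps one random nuclear-data realization to a safety parameter
    (here a fictitious peak cladding temperature, K)."""
    return 600.0 + 40.0 * nd_sample[0] - 15.0 * nd_sample[1]

# Total Monte Carlo: one full model run per random nuclear-data file.
n_files = 300
samples = rng.normal(0.0, 1.0, size=(n_files, 2))   # toy ND perturbations
pct = np.array([run_coupled_model(s) for s in samples])

print(f"PCT mean {pct.mean():.1f} K, ND-driven std {pct.std(ddof=1):.1f} K")
```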

  17. Model development for national assessment of commercial vehicle parking

    DOT National Transportation Integrated Search

    2002-03-01

    The objective of this research was to estimate the extent and geographic distribution of truck rest parking supply and demand along the National Highway System in accordance with Section 4027 of the Transportation Equity Act for the 21st Century. Thi...

  18. Preliminary Assessment of Optimal Longitudinal-Mode Control for Drag Reduction through Distributed Aeroelastic Shaping

    NASA Technical Reports Server (NTRS)

    Ippolito, Corey; Nguyen, Nhan; Lohn, Jason; Dolan, John

    2014-01-01

    The emergence of advanced lightweight materials is resulting in a new generation of lighter, flexible, more-efficient airframes that are enabling concepts for active aeroelastic wing-shape control to achieve greater flight efficiency and increased safety margins. These elastically shaped aircraft concepts require non-traditional methods for large-scale multi-objective flight control that simultaneously seek to gain aerodynamic efficiency in terms of drag reduction while performing traditional command-tracking tasks as part of a complete guidance and navigation solution. This paper presents results from a preliminary study of a notional multi-objective control law for an aeroelastic flexible-wing aircraft controlled through distributed continuous leading and trailing edge control surface actuators. This preliminary study develops and analyzes a multi-objective control law derived from optimal linear quadratic methods on a longitudinal vehicle dynamics model with coupled aeroelastic dynamics. The controller tracks the commanded angle of attack while minimizing drag and controlling wing twist and bend. This paper presents an overview of the elastic aircraft concept, outlines the coupled vehicle model, presents the preliminary control law formulation and implementation, presents results from simulation, provides analysis, and concludes by identifying possible future areas for research.
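    The optimal linear quadratic machinery reduces to a Riccati solve. The sketch below uses an illustrative two-state system, not the paper's coupled aeroelastic model; the multi-objective trade-off enters only through the Q and R weights:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Toy 2-state longitudinal model (values illustrative, not the paper's).
A = np.array([[-0.5, 1.0],
              [-2.0, -0.8]])
B = np.array([[0.0],
              [1.5]])

# Multi-objective weighting: Q trades tracking error against a surrogate
# drag/twist state; R penalizes distributed control-surface effort.
Q = np.diag([10.0, 1.0])
R = np.array([[0.5]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)         # optimal state feedback u = -K x
print(K)
print(np.linalg.eigvals(A - B @ K))     # closed-loop poles (stable by LQR)
```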

  19. Strategic rehabilitation planning of piped water networks using multi-criteria decision analysis.

    PubMed

    Scholten, Lisa; Scheidegger, Andreas; Reichert, Peter; Maurer, Max; Lienert, Judit

    2014-02-01

    To overcome the difficulties of strategic asset management of water distribution networks, a pipe failure and a rehabilitation model are combined to predict the long-term performance of rehabilitation strategies. Bayesian parameter estimation is performed to calibrate the failure and replacement model based on a prior distribution inferred from three large water utilities in Switzerland. Multi-criteria decision analysis (MCDA) and scenario planning build the framework for evaluating 18 strategic rehabilitation alternatives under future uncertainty. Outcomes for three fundamental objectives (low costs, high reliability, and high intergenerational equity) are assessed. Exploitation of stochastic dominance concepts helps to identify twelve non-dominated alternatives and local sensitivity analysis of stakeholder preferences is used to rank them under four scenarios. Strategies with annual replacement of 1.5-2% of the network perform reasonably well under all scenarios. In contrast, the commonly used reactive replacement is not recommendable unless cost is the only relevant objective. Exemplified for a small Swiss water utility, this approach can readily be adapted to support strategic asset management for any utility size and based on objectives and preferences that matter to the respective decision makers. Copyright © 2013 Elsevier Ltd. All rights reserved.

  20. Mid-infrared interferometry of Seyfert galaxies: Challenging the Standard Model

    NASA Astrophysics Data System (ADS)

    López-Gonzaga, N.; Jaffe, W.

    2016-06-01

    Aims: We aim to find torus models that explain the observed high-resolution mid-infrared (MIR) measurements of active galactic nuclei (AGN). Our goal is to determine the general properties of the circumnuclear dusty environments. Methods: We used the MIR interferometric data of a sample of AGNs provided by the instrument MIDI/VLTI and followed a statistical approach to compare the observed distribution of the interferometric measurements with the distributions computed from clumpy torus models. We mainly tested whether the diversity of Seyfert galaxies can be described using the Standard Model idea, where differences are solely due to a line-of-sight (LOS) effect. In addition to the LOS effects, we performed different realizations of the same model to include possible variations that are caused by the stochastic nature of the dusty models. Results: We find that our entire sample of AGNs, which contains both Seyfert types, cannot be explained merely by an inclination effect and by including random variations of the clouds. Instead, we find that each subset of Seyfert type can be explained by different models, where the filling factor at the inner radius seems to be the largest difference. For the type 1 objects we find that about two thirds of our objects could also be described using a dusty torus similar to the type 2 objects. For the remaining third, it was not possible to find a good description using models with high filling factors, while we found good fits with models with low filling factors. Conclusions: Within our model assumptions, we did not find one single set of model parameters that could simultaneously explain the MIR data of all 21 AGN with LOS effects and random variations alone. We conclude that at least two distinct cloud configurations are required to model the differences in Seyfert galaxies, with volume-filling factors differing by a factor of about 5-10. A continuous transition between the two types cannot be excluded.

  1. Impact of Spatial Pumping Patterns on Groundwater Management

    NASA Astrophysics Data System (ADS)

    Yin, J.; Tsai, F. T. C.

    2017-12-01

    Challenges exist to manage groundwater resources while maintaining a balance between groundwater quantity and quality, because of anthropogenic pumping activities as well as the complex subsurface environment. In this study, to address the impact of spatial pumping patterns on groundwater management, a mixed integer nonlinear multi-objective model is formulated by integrating three objectives within a management framework: (i) maximize total groundwater withdrawal from potential wells; (ii) minimize total electricity cost for well pumps; and (iii) attain groundwater levels at selected monitoring locations as close as possible to the target level. Binary variables are used in the groundwater management model to control the operative status of pumping wells. The NSGA-II is linked with MODFLOW to solve the multi-objective problem. The proposed method is applied to a groundwater management problem in the complex Baton Rouge aquifer system, southeastern Louisiana. Results show that (a) non-dominated trade-off solutions under various spatial distributions of active pumping wells can be achieved, each optimal with regard to its corresponding objectives; (b) the operative status, locations and pumping rates of pumping wells significantly influence the distribution of hydraulic head, which in turn influences the optimization results; and (c) a wide range of optimal solutions is obtained, such that decision makers can select the most appropriate solution through negotiation with different stakeholders. This technique is beneficial for finding the optimal extent to which the three objectives of water supply, energy cost, and subsidence concerns can be balanced.
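    Identifying non-dominated trade-off solutions reduces to a Pareto filter over the objective vectors. A minimal sketch with synthetic objective values (withdrawal negated so that all three objectives are minimized); the NSGA-II/MODFLOW coupling itself is not reproduced:

```python
import numpy as np

def non_dominated(F):
    """Return a boolean mask of Pareto-optimal rows of F, where every
    column is an objective to minimize (flip signs to maximize)."""
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # Row i is dominated if some row is no worse in all objectives
        # and strictly better in at least one.
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated_by.any():
            keep[i] = False
    return keep

# Objectives per candidate pumping plan: (-withdrawal, cost, head deviation).
# Values are synthetic placeholders for the management model's outputs.
rng = np.random.default_rng(6)
F = rng.uniform(size=(50, 3))
front = F[non_dominated(F)]
print(len(front), "non-dominated trade-off solutions")
```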

  2. Exploiting range imagery: techniques and applications

    NASA Astrophysics Data System (ADS)

    Armbruster, Walter

    2009-07-01

    Practically no applications exist for which automatic processing of 2D intensity imagery can equal human visual perception. This is not the case for range imagery. The paper gives examples of 3D laser radar applications, for which automatic data processing can exceed human visual cognition capabilities and describes basic processing techniques for attaining these results. The examples are drawn from the fields of helicopter obstacle avoidance, object detection in surveillance applications, object recognition at high range, multi-object-tracking, and object re-identification in range image sequences. Processing times and recognition performances are summarized. The techniques used exploit the bijective continuity of the imaging process as well as its independence of object reflectivity, emissivity and illumination. This allows precise formulations of the probability distributions involved in figure-ground segmentation, feature-based object classification and model based object recognition. The probabilistic approach guarantees optimal solutions for single images and enables Bayesian learning in range image sequences. Finally, due to recent results in 3D-surface completion, no prior model libraries are required for recognizing and re-identifying objects of quite general object categories, opening the way to unsupervised learning and fully autonomous cognitive systems.

  3. Intrinsic Bayesian Active Contours for Extraction of Object Boundaries in Images

    PubMed Central

    Srivastava, Anuj

    2010-01-01

    We present a framework for incorporating prior information about high-probability shapes in the process of contour extraction and object recognition in images. Here one studies shapes as elements of an infinite-dimensional, non-linear quotient space, and statistics of shapes are defined and computed intrinsically using differential geometry of this shape space. Prior models on shapes are constructed using probability distributions on tangent bundles of shape spaces. Similar to the past work on active contours, where curves are driven by vector fields based on image gradients and roughness penalties, we incorporate the prior shape knowledge in the form of vector fields on curves. Through experimental results, we demonstrate the use of prior shape models in the estimation of object boundaries, and their success in handling partial obscuration and missing data. Furthermore, we describe the use of this framework in shape-based object recognition or classification. PMID:21076692

  4. Contribution of explosion and future collision fragments to the orbital debris environment

    NASA Technical Reports Server (NTRS)

    Su, S.-Y.; Kessler, D. J.

    1985-01-01

    The time evolution of the near-earth man-made orbital debris environment modeled by numerical simulation is presented in this paper. The model starts with a data base of orbital debris objects which are tracked by the NORAD ground radar system. The current untrackable small objects are assumed to result from explosions and are predicted from data collected from a ground explosion experiment. Future collisions between earth orbiting objects are handled by the Monte Carlo method to simulate the range of collision possibilities that may occur in the real world. The collision fragmentation process between debris objects is calculated using an empirical formula derived from a laboratory spacecraft impact experiment to obtain the number versus size distribution of the newly generated debris population. The evolution of the future space debris environment is compared with the natural meteoroid background for the relative spacecraft penetration hazard.

  5. Statistical methods for investigating quiescence and other temporal seismicity patterns

    USGS Publications Warehouse

    Matthews, M.V.; Reasenberg, P.A.

    1988-01-01

    We propose a statistical model and a technique for objective recognition of one of the most commonly cited seismicity patterns: microearthquake quiescence. We use a Poisson process model for seismicity and define a process with quiescence as one with a particular type of piece-wise constant intensity function. From this model, we derive a statistic for testing stationarity against a 'quiescence' alternative. The large-sample null distribution of this statistic is approximated from simulated distributions of appropriate functionals applied to Brownian bridge processes. We point out the restrictiveness of the particular model we propose and of the quiescence idea in general. The fact that there are many point processes which have neither constant nor quiescent rate functions underscores the need to test for and describe nonuniformity thoroughly. We advocate the use of the quiescence test in conjunction with various other tests for nonuniformity and with graphical methods such as density estimation. Ideally, these methods may promote accurate description of temporal seismicity distributions and useful characterizations of interesting patterns. © 1988 Birkhäuser Verlag.

  6. A project optimization for small watercourses restoration in the northern part of the Volga-Akhtuba floodplain by the geoinformation and hydrodynamic modeling

    NASA Astrophysics Data System (ADS)

    Voronin, Alexander; Vasilchenko, Ann; Khoperskov, Alexander

    2018-03-01

    The project of restoring small watercourses in the northern part of the Volga-Akhtuba floodplain is considered, with the aim of increasing the watering of the territory during small and medium floods. The topography irregularity, the complex structure of the floodplain valley consisting of a large number of small watercourses, and the presence of urbanized and agricultural areas require careful preliminary analysis of the hydrological safety and efficiency of geographically distributed project activities. Using the digital terrain and watercourse-structure models of the floodplain and the hydrodynamic flood model, the hydrological safety and efficiency of several project implementation strategies have been analyzed. The objective function values have been obtained from hydrodynamic calculations of floodplain flooding for virtual digital terrain models simulating alternatives for the geographically distributed project activities. The comparative efficiency of several empirical strategies for the geographically distributed project activities, as well as a two-stage exact solution method for the optimization problem, has been studied.

  7. Stochastic simulation of human pulmonary blood flow and transit time frequency distribution based on anatomic and elasticity data.

    PubMed

    Huang, Wei; Shi, Jun; Yen, R T

    2012-12-01

    The objective of our study was to develop a program for computing the transit time frequency distributions of red blood cells in the human pulmonary circulation, based on our anatomic and elasticity data of blood vessels in the human lung. A stochastic simulation model was introduced to simulate blood flow in the human pulmonary circulation. In the stochastic simulation model, the connectivity data of pulmonary blood vessels in the human lung were converted into a probability matrix. Based on this model, the transit time of red blood cells in the human pulmonary circulation and the output blood pressure were studied. Additionally, the stochastic simulation model can be used to predict changes of blood flow in the human pulmonary circulation, with the advantages of lower computing cost and higher flexibility. In conclusion, a stochastic simulation approach was introduced to simulate the blood flow in the hierarchical structure of the pulmonary circulation system, and to calculate the transit time distributions and the blood pressure outputs.
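    The probability-matrix random walk can be sketched directly; the vessel classes, transition probabilities, and mean segment times below are illustrative placeholders, not the paper's measured human-lung connectivity data:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy hierarchy: artery -> arteriole -> capillary -> venule -> vein -> out.
# P[s, s'] is the probability a red cell moves from class s to class s';
# tau[s] is a nominal mean transit time (s) per class. Values illustrative.
states = ["artery", "arteriole", "capillary", "venule", "vein", "out"]
P = np.array([
    [0.0, 1.0, 0.00, 0.0, 0.00, 0.0],
    [0.0, 0.0, 0.95, 0.0, 0.05, 0.0],   # occasional shortcut vessel
    [0.0, 0.0, 0.00, 1.0, 0.00, 0.0],
    [0.0, 0.0, 0.00, 0.0, 1.00, 0.0],
    [0.0, 0.0, 0.00, 0.0, 0.00, 1.0],
    [0.0, 0.0, 0.00, 0.0, 0.00, 1.0],
])
tau = np.array([0.8, 0.5, 1.5, 0.6, 0.9, 0.0])

def one_transit():
    # Walk one red cell from the arterial inlet to the venous outlet,
    # drawing an exponential dwell time in each segment class.
    s, t = 0, 0.0
    while states[s] != "out":
        t += rng.exponential(tau[s])
        s = rng.choice(len(states), p=P[s])
    return t

times = np.array([one_transit() for _ in range(5000)])
print(f"mean {times.mean():.2f} s, 5-95% {np.percentile(times, [5, 95])}")
```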

  8. The Deflector Selector: A Machine Learning Framework for Prioritizing Hazardous Object Deflection Technology Development

    NASA Astrophysics Data System (ADS)

    Nesvold, Erika; Greenberg, Adam; Erasmus, Nicolas; Van Heerden, Elmarie; Galache, J. L.; Dahlstrom, Eric; Marchis, Franck

    2018-01-01

    Several technologies have been proposed for deflecting a hazardous Solar System object on a trajectory that would otherwise impact the Earth. The effectiveness of each technology depends on several characteristics of the given object, including its orbit and size. The distribution of these parameters in the likely population of Earth-impacting objects can thus determine which of the technologies are most likely to be useful in preventing a collision with the Earth. None of the proposed deflection technologies has been developed and fully tested in space. Developing every proposed technology is currently prohibitively expensive, so determining now which technologies are most likely to be effective would allow us to prioritize a subset of proposed deflection technologies for funding and development. We will present a new model, the Deflector Selector, that takes as its input the characteristics of a hazardous object or population of such objects and predicts which technology would be able to perform a successful deflection. The model consists of a machine-learning algorithm trained on data produced by N-body integrations simulating the deflections. We will describe the model and present the results of tests of the effectiveness of nuclear explosives, kinetic impactors, and gravity tractors on three simulated populations of hazardous objects.

  9. The Deflector Selector: A machine learning framework for prioritizing hazardous object deflection technology development

    NASA Astrophysics Data System (ADS)

    Nesvold, E. R.; Greenberg, A.; Erasmus, N.; van Heerden, E.; Galache, J. L.; Dahlstrom, E.; Marchis, F.

    2018-05-01

    Several technologies have been proposed for deflecting a hazardous Solar System object on a trajectory that would otherwise impact the Earth. The effectiveness of each technology depends on several characteristics of the given object, including its orbit and size. The distribution of these parameters in the likely population of Earth-impacting objects can thus determine which of the technologies are most likely to be useful in preventing a collision with the Earth. None of the proposed deflection technologies has been developed and fully tested in space. Developing every proposed technology is currently prohibitively expensive, so determining now which technologies are most likely to be effective would allow us to prioritize a subset of proposed deflection technologies for funding and development. We present a new model, the Deflector Selector, that takes as its input the characteristics of a hazardous object or population of such objects and predicts which technology would be able to perform a successful deflection. The model consists of a machine-learning algorithm trained on data produced by N-body integrations simulating the deflections. We describe the model and present the results of tests of the effectiveness of nuclear explosives, kinetic impactors, and gravity tractors on three simulated populations of hazardous objects.
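
    A minimal sketch of the training setup such a framework implies: a classifier is fit to object characteristics, with labels indicating which technology achieved the deflection. The feature choices, label encoding, and random placeholder data below are assumptions for illustration; in the actual work the labels come from the N-body deflection integrations.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 5000

# Hypothetical object characteristics: semi-major axis (AU),
# eccentricity, diameter (m), warning time (yr).
X = np.column_stack([
    rng.uniform(0.8, 3.0, n),
    rng.uniform(0.0, 0.6, n),
    rng.lognormal(5.0, 1.0, n),
    rng.uniform(1.0, 30.0, n),
])
# Placeholder labels: which technology succeeded in simulation
# (0 = kinetic impactor, 1 = gravity tractor, 2 = nuclear explosive).
y = rng.integers(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # ~chance on random labels
```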

  10. Statistical Issues for Uncontrolled Reentry Hazards Empirical Tests of the Predicted Footprint for Uncontrolled Satellite Reentry Hazards

    NASA Technical Reports Server (NTRS)

    Matney, Mark

    2011-01-01

    A number of statistical tools have been developed over the years for assessing the risk that reentering objects pose to human populations. These tools make use of the characteristics (e.g., mass, material, shape, size) of debris that are predicted by aerothermal models to survive reentry. The statistical tools use this information to compute the probability that one or more of the surviving debris might hit a person on the ground and cause one or more casualties. The statistical portion of the analysis relies on a number of assumptions about how the debris footprint and the human population are distributed in latitude and longitude, and how to use that information to arrive at realistic risk numbers. Because this information is used in making policy and engineering decisions, it is important that these assumptions be tested using empirical data. This study uses the latest database of known uncontrolled reentry locations measured by the United States Department of Defense. The predicted ground footprint distributions of these objects are based on the theory that their orbits behave basically like simple Kepler orbits. However, there are a number of factors in the final stages of reentry - including the effects of gravitational harmonics, the effects of the Earth's equatorial bulge on the atmosphere, and the rotation of the Earth and atmosphere - that could cause them to diverge from simple Kepler orbit behavior and possibly change the probability of reentering over a given location. In this paper, the measured latitude and longitude distributions of these objects are directly compared with the predicted distributions, providing a fundamental empirical test of the model assumptions.
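
    For a circular orbit, the fraction of time spent near each latitude has a closed form that peaks toward the orbit's inclination, which is the basis of the predicted footprint distributions. The sketch below evaluates that law and runs a Kolmogorov-Smirnov comparison against a synthetic "measured" sample; the inclination and sample are placeholders, not the DoD reentry data.

```python
import numpy as np
from scipy import stats

INC = np.radians(51.6)   # assumed inclination (ISS-like); illustrative only

# For a circular Kepler orbit, the sub-satellite latitude phi has density
#   p(phi) = cos(phi) / (pi * sqrt(sin(i)^2 - sin(phi)^2)),  |phi| < i,
# which integrates to the CDF below (peaked toward +/- i).
def latitude_cdf(lat_deg):
    s = np.clip(np.sin(np.radians(lat_deg)) / np.sin(INC), -1.0, 1.0)
    return 0.5 + np.arcsin(s) / np.pi

# Synthetic "measured" reentry latitudes: uniform argument of latitude u
# mapped through sin(phi) = sin(i) * sin(u).
rng = np.random.default_rng(2)
u = rng.uniform(0.0, 2.0 * np.pi, 300)
measured = np.degrees(np.arcsin(np.sin(INC) * np.sin(u)))

stat, p = stats.kstest(measured, latitude_cdf)
print(f"KS statistic {stat:.3f}, p-value {p:.3f}")
```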

  11. [Potential distribution of Panax ginseng and its predicted responses to climate change].

    PubMed

    Zhao, Ze Fang; Wei, Hai Yan; Guo, Yan Long; Gu, Wei

    2016-11-18

    This study utilized Panax ginseng as the research object. Based on the BioMod2 platform, with species presence data and 22 climatic variables, the potential geographic distribution of P. ginseng under current conditions in northeast China was simulated with ten species distribution models. An ensemble model integrating the results of the ten models was then built, with each model weighted by its score under the receiver-operating characteristic curve (ROC). Using the ensemble model, the future distributions of P. ginseng were projected for the 2050s and 2070s under the RCP 8.5, RCP 6, RCP 4.5 and RCP 2.6 emission scenarios of the IPCC (Intergovernmental Panel on Climate Change). The results showed that, under present climatic conditions, 10.4% of the study area was identified as suitable habitat, mainly located in the northeastern Changbai Mountains and the southeastern Xiaoxing'an Mountains. The simulations indicated that suitable habitat would change considerably under the different climate change scenarios, with the range of suitable habitat generally decreasing to some degree. Meanwhile, the goodness-of-fit, predicted ranges, and weights of explanatory variables varied among models: Maxent had the highest model performance, followed by GAM, RF and ANN, while SRE had the lowest prediction accuracy. The ensemble model established in this study can improve on the accuracy of individual species distribution models and optimize species distribution predictions.
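
    The score-weighted committee averaging described above reduces to a few lines: each model's habitat-suitability map is averaged with a weight proportional to its evaluation score. The suitability values and scores below are invented placeholders, not the study's outputs.

```python
import numpy as np

# Hypothetical habitat-suitability predictions (flattened grid cells)
# from several SDMs, with an assumed evaluation score per model.
preds = {
    "Maxent": np.array([0.82, 0.10, 0.55, 0.71]),
    "GAM":    np.array([0.76, 0.20, 0.47, 0.66]),
    "RF":     np.array([0.90, 0.05, 0.60, 0.69]),
    "SRE":    np.array([1.00, 0.00, 1.00, 0.00]),
}
score = {"Maxent": 0.93, "GAM": 0.89, "RF": 0.88, "SRE": 0.71}

# Weighted-average ensemble: each model contributes in proportion
# to its score (committee averaging in the BIOMOD style).
w = np.array([score[m] for m in preds])
w = w / w.sum()
ensemble = sum(wi * p for wi, p in zip(w, preds.values()))
print(np.round(ensemble, 3))
```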

  12. Research of Ancient Architectures in Jin-Fen Area Based on GIS&BIM Technology

    NASA Astrophysics Data System (ADS)

    Jia, Jing; Zheng, Qiuhong; Gao, Huiying; Sun, Hai

    2017-05-01

    Shanxi Province holds the largest share of ancient architecture in China, with about 18,418 well-preserved ancient buildings, of which 9,053 are wood-frame structures. The value of applying BIM (Building Information Modeling) and GIS (Geographic Information System) is gradually being demonstrated in the fields of spatial distribution information management for ancient architecture, routine maintenance, special conservation and restoration, and the evaluation and simulation of related disasters such as earthquakes. The research objects are ancient architectures in the Jin-Fen area, first investigated by Sicheng LIANG and recorded in his "Chinese ancient architectures survey report"; the set includes those in LIANG's investigation, with further adjustments made through the authors' on-site investigation and literature collection. During this research, a spatial distribution geodatabase of the research objects was established using GIS. A BIM component library for ancient buildings was built by combining on-site investigation data with classic precedents, such as "Yingzao Fashi", a treatise on architectural methods of the Song Dynasty, the "Yongle Encyclopedia", and "Gongcheng Zuofa Zeli", case collections of engineering practice issued by the Ministry of Construction of the Qing Dynasty. A building of Guangsheng Temple in Hongtong County is selected as an example to elaborate the BIM model construction process based on this component library. Based on the foregoing spatial distribution data, feature attribute data, 3D graphic information and parametric building information models, an information management system for ancient architectures in the Jin-Fen area, combining GIS and BIM technology, can be constructed to support further research on seismic disaster analysis and seismic performance simulation.

  13. Particle Size Distributions Obtained Through Unfolding 2D Sections: Towards Accurate Distributions of Nebular Solids in the Allende Meteorite

    NASA Technical Reports Server (NTRS)

    Christoffersen, P. A.; Simon, Justin I.; Ross, D. K.; Friedrich, J. M.; Cuzzi, J. N.

    2012-01-01

    Size distributions of nebular solids in chondrites suggest an efficient sorting of these early-forming objects within the protoplanetary disk. The effect of this sorting has been documented by investigations of modal abundances of CAIs (e.g., [1-4]) and chondrules (e.g., [5-8]). Evidence for aerodynamic sorting in the disk is largely qualitative and needs to be carefully assessed. It may be a way of concentrating these materials into planetesimal-mass clumps, perhaps 100s of ka after they formed. A key parameter is the size/density distribution of particles (i.e., chondrules, CAIs, and metal grains), and in particular whether the radius-density product (r×ρ) is a better metric for defining the distribution than r alone [9]. There is no consensus between r-based and r×ρ-based models. Here we report our initial tests and preliminary results, which when expanded will be used to test the accuracy of current dynamical disk models.
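
    The unfolding problem starts from a forward model: a random plane intersects a sphere of radius R with probability proportional to R, and given a hit at cut depth d (uniform on [0, R)), the apparent section radius is sqrt(R² - d²). The Monte Carlo sketch below implements that forward model with an arbitrary lognormal radius distribution; it is a generic stand-in for testing an unfolding procedure, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(3)

# Assumed "true" 3D radii of spherical particles (arbitrary lognormal).
R = rng.lognormal(mean=0.0, sigma=0.4, size=20_000)

# Size-biased sampling: a random plane hits a sphere with probability
# proportional to R (accept-reject against the largest radius).
hit = rng.random(R.size) < R / R.max()

# Given a hit, the cut depth d is uniform on [0, R); the apparent
# 2D section radius is sqrt(R^2 - d^2).
d = rng.uniform(0.0, R[hit])
r2d = np.sqrt(R[hit] ** 2 - d ** 2)

print(f"mean 3D radius {R.mean():.3f}, mean 2D section radius {r2d.mean():.3f}")
```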

  14. Lutzomyia longipalpis Presence and Abundance Distribution at Different Micro-spatial Scales in an Urban Scenario

    PubMed Central

    Santini, María Soledad; Utgés, María Eugenia; Berrozpe, Pablo; Manteca Acosta, Mariana; Casas, Natalia; Heuer, Paola; Salomón, O. Daniel

    2015-01-01

    The principal objective of this study was to assess a modeling approach to Lu. longipalpis distribution in an urban scenario, discriminating micro-scale landscape variables at microhabitat and macrohabitat scales and distinguishing the presence from the abundance of the vector. To this end, we studied vectors and domestic reservoirs and evaluated different environmental variables simultaneously, constructing a set of 13 models to account for micro-habitats, macro-habitats and mixed habitats. We captured a total of 853 sandflies, of which 98.35% were Lu. longipalpis. We sampled a total of 197 dogs, 177 of which were associated with households where insects were sampled. Positive rK39 dogs represented 16.75% of the total, of which 47% were asymptomatic. Distance to the border of the city and high-to-medium density vegetation cover turned out to be the explanatory variables for the presence of sandflies in the city, all with positive coefficients. All variables in the abundance model were explanatory: trees around the trap, distance to the stream, and its quadratic term, the last being the only one with a negative coefficient, indicating that maximum abundance occurred at intermediate distances to the stream. The spatial distribution of dogs infected with L. infantum showed a heterogeneous pattern throughout the city; however, we could not confirm an association of this distribution with the variables assessed. Regarding Lu. longipalpis distribution, the strategy of discriminating the micro-spatial scales at which the environmental variables were recorded allowed us to associate presence with macrohabitat variables, and abundance with both microhabitat and macrohabitat variables. Based on the variables associated with Lu. longipalpis, the model will be validated in other cities, and environmental surveillance and control interventions will be proposed and evaluated at the microscale level, integrated with socio-cultural approaches and with programmatic and village-level (mesoscale) strategies. PMID:26274318

  15. Lutzomyia longipalpis Presence and Abundance Distribution at Different Micro-spatial Scales in an Urban Scenario.

    PubMed

    Santini, María Soledad; Utgés, María Eugenia; Berrozpe, Pablo; Manteca Acosta, Mariana; Casas, Natalia; Heuer, Paola; Salomón, O Daniel

    2015-01-01

    The principal objective of this study was to assess a modeling approach to Lu. longipalpis distribution in an urban scenario, discriminating micro-scale landscape variables at microhabitat and macrohabitat scales and distinguishing the presence from the abundance of the vector. To this end, we studied vectors and domestic reservoirs and evaluated different environmental variables simultaneously, constructing a set of 13 models to account for micro-habitats, macro-habitats and mixed habitats. We captured a total of 853 sandflies, of which 98.35% were Lu. longipalpis. We sampled a total of 197 dogs, 177 of which were associated with households where insects were sampled. Positive rK39 dogs represented 16.75% of the total, of which 47% were asymptomatic. Distance to the border of the city and high-to-medium density vegetation cover turned out to be the explanatory variables for the presence of sandflies in the city, all with positive coefficients. All variables in the abundance model were explanatory: trees around the trap, distance to the stream, and its quadratic term, the last being the only one with a negative coefficient, indicating that maximum abundance occurred at intermediate distances to the stream. The spatial distribution of dogs infected with L. infantum showed a heterogeneous pattern throughout the city; however, we could not confirm an association of this distribution with the variables assessed. Regarding Lu. longipalpis distribution, the strategy of discriminating the micro-spatial scales at which the environmental variables were recorded allowed us to associate presence with macrohabitat variables, and abundance with both microhabitat and macrohabitat variables. Based on the variables associated with Lu. longipalpis, the model will be validated in other cities, and environmental surveillance and control interventions will be proposed and evaluated at the microscale level, integrated with socio-cultural approaches and with programmatic and village-level (mesoscale) strategies.
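
    One common way to model presence separately from abundance, as the study's design calls for, is a two-part analysis: a logistic model for presence and a count model for abundance where the vector occurs. The sketch below uses invented covariates and responses loosely named after the study's variables; it is illustrative, not the authors' 13-model set.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 200

# Hypothetical trap-level covariates standing in for the study's
# variables: distance to the city border, vegetation cover, trees
# around the trap, and distance to the stream (plus its square).
dist_border = rng.uniform(0, 5, n)
veg = rng.uniform(0, 1, n)
trees = rng.integers(0, 10, n)
dist_stream = rng.uniform(0, 2, n)
X_pres = sm.add_constant(np.column_stack([dist_border, veg]))
X_abun = sm.add_constant(np.column_stack([trees, dist_stream, dist_stream**2]))

# Placeholder responses; real data would be sandfly captures per trap.
presence = rng.integers(0, 2, n)
counts = rng.poisson(3, n)

# Two-part ("hurdle"-style) analysis: logistic model for presence,
# count model for abundance restricted to traps with the vector.
pres_fit = sm.Logit(presence, X_pres).fit(disp=False)
abun_fit = sm.Poisson(counts[presence == 1], X_abun[presence == 1]).fit(disp=False)
print(pres_fit.params)
print(abun_fit.params)
```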

  16. Development of a 3D GIS and its application to karst areas

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Xu, Hua; Zhou, Wanfang

    2008-05-01

    There is a growing interest in modeling and analyzing karst phenomena in three dimensions. This paper integrates geology, groundwater hydrology, geographic information systems (GIS), database management systems (DBMS), visualization and data mining to study karst features in Huaibei, China. The 3D geo-objects retrieved from the karst area are analyzed and mapped into different abstract levels, and the spatial relationships among the objects are constructed by a dual-linker. The shapes of the 3D objects and the topological models with attributes are stored and maintained in the DBMS. Spatial analysis was then used to integrate the data in the DBMS and the 3D model into a virtual reality (VR) environment providing analytical functions such as distribution analysis, correlation query, and probability assessment. The research successfully implements 3D modeling and analysis in the karst area, and provides an efficient tool for government policy-makers to set out restrictions on water resource development in the area.

  17. Pragmatic open space box utilization: asteroid survey model using distributed objects management based articulation (DOMBA)

    NASA Astrophysics Data System (ADS)

    Mohammad, Atif Farid; Straub, Jeremy

    2015-05-01

    A multi-craft asteroid survey has significant data synchronization needs, and limited communication speeds drive exacting performance requirements. Relational databases store data in structured tables; DOMBA (Distributed Objects Management Based Articulation) instead deals with data in terms of collections, so no read/write roadblocks to the data exist. A master/slave architecture is created by utilizing the Gossip protocol, which facilitates expanding a mission that makes an important discovery via the launch of another spacecraft. The Open Space Box Framework facilitates the foregoing while also providing a virtual caching layer to ensure that continuously accessed data is available in memory and that, upon closing the data file, recharging is applied to the data.
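
    The DOMBA internals are not spelled out in the abstract, but gossip-based synchronization generally reduces to replicas periodically exchanging and merging keyed state. A minimal last-write-wins merge, with invented keys and payloads, might look like the following; this is a generic sketch, not the framework's actual protocol.

```python
import time

def merge(local: dict, remote: dict) -> dict:
    """Merge two replica states; each value is a (timestamp, payload) pair.

    Last-write-wins: the entry with the newer timestamp survives, so
    replicas converge after repeated pairwise gossip exchanges.
    """
    out = dict(local)
    for key, (ts, payload) in remote.items():
        if key not in out or ts > out[key][0]:
            out[key] = (ts, payload)
    return out

# Two replicas with hypothetical observation records.
a = {"obs:001": (time.time(), "spectrum-v1")}
b = {"obs:001": (time.time() + 1, "spectrum-v2"),
     "obs:002": (time.time(), "image")}
print(merge(a, b))   # obs:001 resolves to the newer "spectrum-v2"
```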

  18. Biologically Inspired Model for Inference of 3D Shape from Texture

    PubMed Central

    Gomez, Olman; Neumann, Heiko

    2016-01-01

    A biologically inspired model architecture for inferring 3D shape from texture is proposed. The model is hierarchically organized into modules roughly corresponding to visual cortical areas in the ventral stream. Initial orientation selective filtering decomposes the input into low-level orientation and spatial frequency representations. Grouping of spatially anisotropic orientation responses builds sketch-like representations of surface shape. Gradients in orientation fields and subsequent integration infers local surface geometry and globally consistent 3D depth. From the distributions in orientation responses summed in frequency, an estimate of the tilt and slant of the local surface can be obtained. The model suggests how 3D shape can be inferred from texture patterns and their image appearance in a hierarchically organized processing cascade along the cortical ventral stream. The proposed model integrates oriented texture gradient information that is encoded in distributed maps of orientation-frequency representations. The texture energy gradient information is defined by changes in the grouped summed normalized orientation-frequency response activity extracted from the textured object image. This activity is integrated by directed fields to generate a 3D shape representation of a complex object with depth ordering proportional to the fields output, with higher activity denoting larger distance in relative depth away from the viewer. PMID:27649387

  19. A 3D radiative transfer model based on lidar data and its application on hydrological and ecosystem modeling

    NASA Astrophysics Data System (ADS)

    Li, W.; Su, Y.; Harmon, T. C.; Guo, Q.

    2013-12-01

    Light Detection and Ranging (lidar) is an optical remote sensing technology that measures properties of scattered light to find the range and/or other information about a distant object. Due to its ability to generate 3-dimensional data with high spatial resolution and accuracy, lidar technology is being increasingly used in ecology, geography, geology, geomorphology, seismology, remote sensing, and atmospheric physics. In this study we construct a 3-dimensional (3D) radiative transfer model (RTM) using lidar data to simulate the spatial distribution of solar radiation (direct and diffuse) on the surface of water and mountain forests. The model includes three sub-models: a light model simulating the light source, a sensor model simulating the camera, and a scene model simulating the landscape. We use ground-based and airborne lidar data to characterize the 3D structure of the study area and generate a detailed 3D scene model. The interactions between light and objects are simulated using the Monte Carlo Ray Tracing (MCRT) method: a large number of rays are generated from the light source, and for each individual ray the full traveling path is traced until it is absorbed or escapes from the scene boundary. By locating the sensor at different positions and directions, we can simulate the spatial distribution of solar energy at the ground, vegetation and water surfaces. These outputs can then be incorporated into meteorological drivers for hydrologic and energy balance models to improve our understanding of hydrologic processes and ecosystem functions.
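
    The MCRT loop described above, tracing each ray until absorption or escape, can be reduced to a few lines for a homogeneous slab. The optical depth and single-scattering albedo below are arbitrary stand-ins for the lidar-derived canopy scene, so this is a sketch of the method, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(5)

tau_total = 2.0   # assumed optical depth of the slab
omega = 0.8       # assumed single-scattering albedo (scattered fraction)

def trace_photon():
    """Trace one photon until it is absorbed or leaves the slab."""
    z, mu = 0.0, 1.0                     # optical depth reached, direction cosine
    while True:
        z += mu * rng.exponential(1.0)   # free path to the next interaction
        if z < 0.0:
            return "reflected"
        if z > tau_total:
            return "transmitted"
        if rng.random() > omega:
            return "absorbed"
        mu = rng.uniform(-1.0, 1.0)      # isotropic scattering

n = 50_000
fates = [trace_photon() for _ in range(n)]
for f in ("transmitted", "reflected", "absorbed"):
    print(f, fates.count(f) / n)
```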

  20. View-invariant object category learning, recognition, and search: how spatial and object attention are coordinated using surface-based attentional shrouds.

    PubMed

    Fazl, Arash; Grossberg, Stephen; Mingolla, Ennio

    2009-02-01

    How does the brain learn to recognize an object from multiple viewpoints while scanning a scene with eye movements? How does the brain avoid the problem of erroneously classifying parts of different objects together? How are attention and eye movements intelligently coordinated to facilitate object learning? A neural model provides a unified mechanistic explanation of how spatial and object attention work together to search a scene and learn what is in it. The ARTSCAN model predicts how an object's surface representation generates a form-fitting distribution of spatial attention, or "attentional shroud". All surface representations dynamically compete for spatial attention to form a shroud. The winning shroud persists during active scanning of the object. The shroud maintains sustained activity of an emerging view-invariant category representation while multiple view-specific category representations are learned and are linked through associative learning to the view-invariant object category. The shroud also helps to restrict scanning eye movements to salient features on the attended object. Object attention plays a role in controlling and stabilizing the learning of view-specific object categories. Spatial attention hereby coordinates the deployment of object attention during object category learning. Shroud collapse releases a reset signal that inhibits the active view-invariant category in the What cortical processing stream. Then a new shroud, corresponding to a different object, forms in the Where cortical processing stream, and search using attention shifts and eye movements continues to learn new objects throughout a scene. The model mechanistically clarifies basic properties of attention shifts (engage, move, disengage) and inhibition of return. It simulates human reaction time data about object-based spatial attention shifts, and learns with 98.1% accuracy and a compression of 430 on a letter database whose letters vary in size, position, and orientation. The model provides a powerful framework for unifying many data about spatial and object attention, and their interactions during perception, cognition, and action.

  1. Long-term Science Data Curation Using a Digital Object Model and Open-Source Frameworks

    NASA Astrophysics Data System (ADS)

    Pan, J.; Lenhardt, W.; Wilson, B. E.; Palanisamy, G.; Cook, R. B.

    2010-12-01

    Scientific digital content, including Earth Science observations and model output, has become more heterogeneous in format and more distributed across the Internet. In addition, data and metadata are becoming necessarily linked internally and externally on the Web. As a result, such content has become more difficult for providers to manage and preserve and for users to locate, understand, and consume. Specifically, it is increasingly harder to deliver relevant metadata and data processing lineage information along with the actual content consistently. Readme files, data quality information, production provenance, and other descriptive metadata are often separated at the storage level as well as in the data search and retrieval interfaces available to a user. Critical archival metadata, such as auditing trails and integrity checks, are often even more difficult for users to access, if they exist at all. We investigate the use of several open-source software frameworks to address these challenges. We use the Fedora Commons framework and its digital object abstraction as the repository, the Drupal CMS as the user interface, and the Islandora module as the connector from Drupal to the Fedora repository. With the digital object model, metadata describing the data and its provenance can be formally associated with the data content, as can external references and other auxiliary information. Changes to an object are formally audited, and digital contents are versioned and have checksums automatically computed. Further, relationships among objects are formally expressed with RDF triples. Data replication, recovery, and metadata export are supported with standard protocols, such as OAI-PMH. We provide a tentative comparative analysis of the chosen software stack with the Open Archival Information System (OAIS) reference model, along with our initial results with the existing terrestrial ecology data collections at NASA's ORNL Distributed Active Archive Center for Biogeochemical Dynamics (ORNL DAAC).
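
    Expressing relationships among digital objects as RDF triples, in the spirit of Fedora's external-relations datastream, can be sketched with rdflib; the object identifiers and relationship names below are illustrative, not the archive's actual records.

```python
from rdflib import Graph, Literal, Namespace, URIRef

# Illustrative namespace modeled on Fedora's relations-external ontology.
REL = Namespace("info:fedora/fedora-system:def/relations-external#")

g = Graph()
dataset = URIRef("info:fedora/daac:dataset-42")   # hypothetical identifiers
readme = URIRef("info:fedora/daac:readme-42")

# Formal relationships between objects, stored as RDF triples.
g.add((readme, REL.isMetadataFor, dataset))
g.add((dataset, REL.isMemberOfCollection, URIRef("info:fedora/daac:ecology")))
g.add((dataset, URIRef("http://purl.org/dc/terms/title"),
       Literal("Terrestrial ecology observations")))

print(g.serialize(format="turtle"))
```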

  2. Multi-metric calibration of hydrological model to capture overall flow regimes

    NASA Astrophysics Data System (ADS)

    Zhang, Yongyong; Shao, Quanxi; Zhang, Shifeng; Zhai, Xiaoyan; She, Dunxian

    2016-08-01

    Flow regimes (e.g., magnitude, frequency, variation, duration, timing and rating of change) play a critical role in water supply and flood control, environmental processes, and biodiversity and life history patterns in the aquatic ecosystem. The traditional flow-magnitude-oriented calibration of hydrological models is usually inadequate to capture all the characteristics of observed flow regimes. In this study, we simulated multiple flow regime metrics simultaneously by coupling a distributed hydrological model with an equally weighted multi-objective optimization algorithm. Two headwater watersheds in the arid Hexi Corridor were selected for the case study. Sixteen metrics representing the major characteristics of flow regimes were selected as optimization objectives. Model performance was compared with that of the single-objective calibration. Results showed that most metrics were better simulated by the multi-objective approach than by single-objective calibration, especially the low and high flow magnitudes, frequency and variation, duration, maximum flow timing and rating. However, the model performance for middle flow magnitude was not significantly improved because this metric was usually well captured by single-objective calibration. The timing of minimum flow was poorly predicted by both the multi-metric and single-objective calibrations due to uncertainties in model structure and input data. The sensitive parameter values of the hydrological model changed remarkably, and the hydrological processes simulated by the multi-metric calibration became more reliable because more flow characteristics were considered. The study is expected to provide more detailed flow information from hydrological simulation for integrated water resources management, and to improve the simulation of overall flow regimes.
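
    An equally weighted multi-metric objective of this kind reduces each flow-regime metric to a normalized error and averages them. The sketch below uses a handful of illustrative metrics (the study used sixteen) and synthetic flow series; metric choices and normalization are assumptions for the example.

```python
import numpy as np

def metrics(flow: np.ndarray) -> np.ndarray:
    """A few illustrative flow-regime metrics from a daily series."""
    return np.array([
        flow.mean(),                 # overall magnitude
        np.percentile(flow, 95),     # high-flow magnitude
        np.percentile(flow, 5),      # low-flow magnitude
        flow.std() / flow.mean(),    # variation
        float(np.argmax(flow)),      # timing of maximum flow (day index)
    ])

def objective(sim: np.ndarray, obs: np.ndarray) -> float:
    """Equally weighted mean of normalized metric errors (minimize)."""
    m_sim, m_obs = metrics(sim), metrics(obs)
    rel_err = np.abs(m_sim - m_obs) / (np.abs(m_obs) + 1e-12)
    return rel_err.mean()

rng = np.random.default_rng(6)
obs = rng.gamma(2.0, 5.0, 365)              # synthetic observed flows
sim = obs * rng.normal(1.0, 0.1, 365)       # synthetic simulated flows
print(f"composite objective: {objective(sim, obs):.4f}")
```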

  3. Boosting Bayesian parameter inference of nonlinear stochastic differential equation models by Hamiltonian scale separation.

    PubMed

    Albert, Carlo; Ulzega, Simone; Stoop, Ruedi

    2016-04-01

    Parameter inference is a fundamental problem in data-driven modeling. Given observed data that is believed to be a realization of some parameterized model, the aim is to find parameter values that are able to explain the observed data. In many situations, the dominant sources of uncertainty must be included into the model for making reliable predictions. This naturally leads to stochastic models. Stochastic models render parameter inference much harder, as the aim then is to find a distribution of likely parameter values. In Bayesian statistics, which is a consistent framework for data-driven learning, this so-called posterior distribution can be used to make probabilistic predictions. We propose a novel, exact, and very efficient approach for generating posterior parameter distributions for stochastic differential equation models calibrated to measured time series. The algorithm is inspired by reinterpreting the posterior distribution as a statistical mechanics partition function of an object akin to a polymer, where the measurements are mapped on heavier beads compared to those of the simulated data. To arrive at distribution samples, we employ a Hamiltonian Monte Carlo approach combined with a multiple time-scale integration. A separation of time scales naturally arises if either the number of measurement points or the number of simulation points becomes large. Furthermore, at least for one-dimensional problems, we can decouple the harmonic modes between measurement points and solve the fastest part of their dynamics analytically. Our approach is applicable to a wide range of inference problems and is highly parallelizable.
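
    The paper's multiple-time-scale integrator and polymer mapping go beyond a short example, but the Hamiltonian Monte Carlo core, leapfrog integration of an auxiliary momentum followed by a Metropolis accept/reject step, can be sketched on a toy one-dimensional posterior (a standard normal here, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(7)

def grad_neg_log_post(q):
    return q   # for a standard normal posterior, -d/dq log p(q) = q

def hmc_step(q, eps=0.1, n_leapfrog=20):
    p = rng.normal()                                   # sample momentum
    q_new = q
    p_new = p - 0.5 * eps * grad_neg_log_post(q_new)   # initial half-step
    for _ in range(n_leapfrog):
        q_new = q_new + eps * p_new
        p_new = p_new - eps * grad_neg_log_post(q_new)
    p_new = p_new + 0.5 * eps * grad_neg_log_post(q_new)  # last update -> half-step
    h_old = 0.5 * p ** 2 + 0.5 * q ** 2                # kinetic + potential
    h_new = 0.5 * p_new ** 2 + 0.5 * q_new ** 2
    return q_new if rng.random() < np.exp(h_old - h_new) else q

q, samples = 0.0, []
for _ in range(5000):
    q = hmc_step(q)
    samples.append(q)
print(f"mean {np.mean(samples):.3f}, var {np.var(samples):.3f}")  # ~0, ~1
```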

  4. Distribution of the near-earth objects

    NASA Astrophysics Data System (ADS)

    Emel'Yanenko, V. V.; Naroenkov, S. A.; Shustov, B. M.

    2011-12-01

    This paper analyzes the distribution of the orbits of near-Earth minor bodies from the data on more than 7500 objects. The distribution of large near-Earth objects (NEOs) with absolute magnitudes of H < 18 is generally consistent with the earlier predictions (Bottke et al., 2002; Stuart, 2003), although we have revealed a previously undetected maximum in the distribution of perihelion distances q near q = 0.5 AU. The study of the orbital distribution for the entire sample of all detected objects has found new significant features. In particular, the distribution of perihelion longitudes seriously deviates from a homogeneous pattern; its variations are roughly 40% of its mean value. These deviations cannot be stochastic, which is confirmed by the Kolmogorov-Smirnov test with a more than 0.9999 probability. These features can be explained by the dynamic behavior of the minor bodies related to secular resonances with Jupiter. For the objects with H < 18, the variations in the perihelion longitude distribution are not so apparent. By extrapolating the orbital characteristics of the NEOs with H < 18, we have obtained longitudinal, latitudinal, and radial distributions of potentially hazardous objects in a heliocentric ecliptic coordinate frame. The differences in the orbital distributions of objects of different size appear not to be a consequence of observational selection, but could indicate different sources of the NEOs.
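
    The homogeneity test mentioned above amounts to comparing the empirical perihelion-longitude distribution against a uniform law. A sketch with rejection-sampled synthetic longitudes carrying a sinusoidal modulation (the real test used the catalogued NEO orbits, and the modulation amplitude and phase here are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(8)

# Rejection-sample longitudes from density proportional to
# 1 + 0.4*cos(theta - 100 deg); max density ratio is 1.4.
theta = rng.uniform(0.0, 360.0, 20_000)
keep = rng.random(theta.size) < (1.0 + 0.4 * np.cos(np.radians(theta - 100.0))) / 1.4
lon = theta[keep][:7500]

# Kolmogorov-Smirnov test against a homogeneous longitude distribution.
stat, p = stats.kstest(lon, stats.uniform(loc=0, scale=360).cdf)
print(f"KS statistic {stat:.4f}, p-value {p:.3g}")  # tiny p rejects uniformity
```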

  5. Simple Learned Weighted Sums of Inferior Temporal Neuronal Firing Rates Accurately Predict Human Core Object Recognition Performance

    PubMed Central

    Hong, Ha; Solomon, Ethan A.; DiCarlo, James J.

    2015-01-01

    To go beyond qualitative models of the biological substrate of object recognition, we ask: can a single ventral stream neuronal linking hypothesis quantitatively account for core object recognition performance over a broad range of tasks? We measured human performance in 64 object recognition tests using thousands of challenging images that explore shape similarity and identity preserving object variation. We then used multielectrode arrays to measure neuronal population responses to those same images in visual areas V4 and inferior temporal (IT) cortex of monkeys and simulated V1 population responses. We tested leading candidate linking hypotheses and control hypotheses, each postulating how ventral stream neuronal responses underlie object recognition behavior. Specifically, for each hypothesis, we computed the predicted performance on the 64 tests and compared it with the measured pattern of human performance. All tested hypotheses based on low- and mid-level visually evoked activity (pixels, V1, and V4) were very poor predictors of the human behavioral pattern. However, simple learned weighted sums of distributed average IT firing rates exactly predicted the behavioral pattern. More elaborate linking hypotheses relying on IT trial-by-trial correlational structure, finer IT temporal codes, or ones that strictly respect the known spatial substructures of IT (“face patches”) did not improve predictive power. Although these results do not reject those more elaborate hypotheses, they suggest a simple, sufficient quantitative model: each object recognition task is learned from the spatially distributed mean firing rates (100 ms) of ∼60,000 IT neurons and is executed as a simple weighted sum of those firing rates. SIGNIFICANCE STATEMENT We sought to go beyond qualitative models of visual object recognition and determine whether a single neuronal linking hypothesis can quantitatively account for core object recognition behavior. To achieve this, we designed a database of images for evaluating object recognition performance. We used multielectrode arrays to characterize hundreds of neurons in the visual ventral stream of nonhuman primates and measured the object recognition performance of >100 human observers. Remarkably, we found that simple learned weighted sums of firing rates of neurons in monkey inferior temporal (IT) cortex accurately predicted human performance. Although previous work led us to expect that IT would outperform V4, we were surprised by the quantitative precision with which simple IT-based linking hypotheses accounted for human behavior. PMID:26424887
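
    The winning linking hypothesis is computationally simple: behavior is predicted by a learned linear readout of mean firing rates. A sketch with synthetic rates, where the array shapes and noise levels are chosen purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic mean firing rates: 200 images x 60 recording sites, plus a
# behavioral score per image generated from a hidden linear readout.
rates = rng.gamma(2.0, 1.0, size=(200, 60))
behavior = rates @ rng.normal(0.0, 0.1, 60) + rng.normal(0.0, 0.2, 200)

# Learn the weighted sum on half the images, test on the held-out half.
X = np.column_stack([rates, np.ones(200)])          # add an intercept column
w, *_ = np.linalg.lstsq(X[:100], behavior[:100], rcond=None)
pred = X[100:] @ w
r = np.corrcoef(pred, behavior[100:])[0, 1]
print(f"held-out correlation: {r:.2f}")
```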

  6. CEBS object model for systems biology data, SysBio-OM.

    PubMed

    Xirasagar, Sandhya; Gustafson, Scott; Merrick, B Alex; Tomer, Kenneth B; Stasiewicz, Stanley; Chan, Denny D; Yost, Kenneth J; Yates, John R; Sumner, Susan; Xiao, Nianqing; Waters, Michael D

    2004-09-01

    To promote a systems biology approach to understanding the biological effects of environmental stressors, the Chemical Effects in Biological Systems (CEBS) knowledge base is being developed to house data from multiple complex data streams in a systems-friendly manner that will accommodate extensive querying from users. Unified data representation via a single object model will greatly aid in integrating data storage and management, and facilitate reuse of software to analyze and display data resulting from diverse differential expression or differential profile technologies. Data streams include, but are not limited to, gene expression analysis (transcriptomics), protein expression and protein-protein interaction analysis (proteomics) and changes in low molecular weight metabolite levels (metabolomics). To enable the integration of microarray gene expression, proteomics and metabolomics data in the CEBS system, we designed an object model, the Systems Biology Object Model (SysBio-OM). The model is comprehensive and leverages other open source efforts, namely the MicroArray Gene Expression Object Model (MAGE-OM) and the Proteomics Experiment Data Repository (PEDRo) object model. SysBio-OM is designed by extending MAGE-OM to represent protein expression data elements (including those from PEDRo), protein-protein interaction and metabolomics data. SysBio-OM promotes the standardization of data representation and data quality by facilitating the capture of the minimum annotation required for an experiment. Such standardization refines the accuracy of data mining and interpretation. The open source SysBio-OM model, which can be implemented on varied computing platforms, is presented here. A Unified Modeling Language (UML) depiction of the entire SysBio-OM is available at http://cebs.niehs.nih.gov/SysBioOM/. The Rational Rose object model package is distributed under an open source license that permits unrestricted academic and commercial use and is available at http://cebs.niehs.nih.gov/cebsdownloads. The database and interface are being built to implement the model and will be available for public use at http://cebs.niehs.nih.gov.

  7. So Wide a Web, So Little Time.

    ERIC Educational Resources Information Center

    McConville, David; And Others

    1996-01-01

    Discusses new trends in the World Wide Web. Highlights include multimedia; digitized audio-visual files; compression technology; telephony; virtual reality modeling language (VRML); open architecture; and advantages of Java, an object-oriented programming language, including platform independence, distributed development, and pay-per-use software.…

  8. Integration of Heterogeneous Bibliographic Information through Data Abstractions.

    ERIC Educational Resources Information Center

    Breazeal, Juliette Ow

    This study examines the integration of heterogeneous bibliographic information resources from geographically distributed locations in an automated, unified, and controlled way using abstract data types called "classes" through the Message-Object Model defined in Smalltalk-80 software. The concept of achieving data consistency by…

  9. Evidence for a distributed hierarchy of action representation in the brain

    PubMed Central

    Grafton, Scott T.; de C. Hamilton, Antonia F.

    2007-01-01

    Complex human behavior is organized around temporally distal outcomes. Behavioral studies based on tasks such as normal prehension, multi-step object use and imitation establish the existence of relative hierarchies of motor control. The retrieval errors in apraxia also support the notion of a hierarchical model for representing action in the brain. In this review, three functional brain imaging studies of action observation using the method of repetition suppression are used to identify a putative neural architecture that supports action understanding at the level of kinematics, object centered goals and ultimately, motor outcomes. These results, based on observation, may match a similar functional anatomic hierarchy for action planning and execution. If this is true, then the findings support a functional anatomic model that is distributed across a set of interconnected brain areas that are differentially recruited for different aspects of goal oriented behavior, rather than a homogeneous mirror neuron system for organizing and understanding all behavior. PMID:17706312

  10. JPRS report: Science and technology. Central Eurasia

    NASA Astrophysics Data System (ADS)

    1994-08-01

    Translated articles cover the following topics: boronizing laser treatment of titanium alloys; argon-arc welding-on titanium dowels to inserts for aircraft structures made of composite materials; method of reducing level of thermally stressed state of gas turbine engine blades by selecting optimum thickness distribution of ceramic heat shield coating; certifying modern ceramics for mechanical properties; superplastic ceramic: possibilities for application in modeling pressworking manufacturing processes; monitoring strength of ceramics by acoustic emission; physical and mechanical properties of Al2O3 + ZrO2:Y2O3 composite produced by directional crystallization from melt; influence that microalloying with rare earth elements has on resistance of steels to deformation and fracture under alternating elastic-plastic loading; conceptions of constructing information management networks for distributed objects; concept of a document information system based on an object-oriented subject-area model; underground future of rocket technologies; geoinformation approach to organizing automated information systems for regional-local monitoring of atmospheric pollutants; and possibility of using lidar wind sounding in climatic-ecologic monitoring of limited areas.

  11. A Recursive Partitioning Method for the Prediction of Preference Rankings Based Upon Kemeny Distances.

    PubMed

    D'Ambrosio, Antonio; Heiser, Willem J

    2016-09-01

    Preference rankings usually depend on the characteristics of both the individuals judging a set of objects and the objects being judged. This topic has been handled in the literature with log-linear representations of the generalized Bradley-Terry model and, recently, with distance-based tree models for rankings. A limitation of these approaches is that they only work with full rankings or with a pre-specified pattern governing the presence of ties, and/or they are based on quite strict distributional assumptions. To overcome these limitations, we propose a new prediction tree method for ranking data that is totally distribution-free. It combines Kemeny's axiomatic approach to define a unique distance between rankings with the CART approach to find a stable prediction tree. Furthermore, our method is not limited by any particular design of the pattern of ties. The method is evaluated in an extensive full-factorial Monte Carlo study with a new simulation design.
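
    Kemeny's distance counts the object pairs on which two rankings disagree, with a conventional half penalty when one ranking ties a pair that the other orders strictly. A minimal implementation (the tie convention is a common one, not necessarily the paper's exact variant):

```python
from itertools import combinations

def kemeny_distance(r1: dict, r2: dict) -> float:
    """Pairwise-disagreement distance between two rankings.

    Rankings map object -> rank (smaller = preferred); tied objects
    share a rank. A reversed pair counts 1, and a pair tied in exactly
    one ranking counts 1/2. Both rankings must cover the same objects.
    """
    d = 0.0
    for a, b in combinations(r1.keys(), 2):
        s1 = (r1[a] > r1[b]) - (r1[a] < r1[b])   # pair order in ranking 1
        s2 = (r2[a] > r2[b]) - (r2[a] < r2[b])   # pair order in ranking 2
        if s1 != s2:
            d += 1.0 if s1 * s2 == -1 else 0.5
    return d

r1 = {"A": 1, "B": 2, "C": 3}
r2 = {"A": 2, "B": 1, "C": 3}     # swaps A and B
print(kemeny_distance(r1, r2))    # 1.0
```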

  12. Orbital debris and meteoroids: Results from retrieved spacecraft surfaces

    NASA Astrophysics Data System (ADS)

    Mandeville, J. C.

    1993-08-01

    Near-Earth space contains natural and man-made particles whose size distribution ranges from submicron-sized particles to cm-sized objects. This environment poses a serious threat to space missions, particularly future manned or long-duration missions. Several experiments devoted to the study of this environment have recently been retrieved from space; among them, several were located on the NASA Long Duration Exposure Facility (LDEF) and on the Russian MIR space station. Evaluation of hypervelocity impact features gives valuable information on the size distribution of small dust particles present in low Earth orbit. Chemical identification of projectile remnants is possible in many instances, allowing discrimination between extraterrestrial particles and man-made orbital debris. A preliminary comparison of flight data with current models of meteoroids and space debris shows fair agreement. However, impacts of particles identified as space debris on the trailing side of LDEF, not predicted by the models, could be the result of space debris in highly eccentric orbits, probably associated with GTO objects.

  13. Computational Control of Flexible Aerospace Systems

    NASA Technical Reports Server (NTRS)

    Sharpe, Lonnie, Jr.; Shen, Ji Yao

    1994-01-01

    The main objective of this project is to establish a distributed parameter modeling technique for structural analysis, parameter estimation, vibration suppression and control synthesis of large flexible aerospace structures. This report concentrates on the research outputs produced in the last two years of the project. The main accomplishments can be summarized as follows. A new version of the PDEMOD code has been completed. A theoretical investigation of the NASA MSFC two-dimensional ground-based manipulator facility using the distributed parameter modeling technique has been conducted. A new mathematical treatment for dynamic analysis and control of large flexible manipulator systems has been conceived, which may provide an embryonic form of a more sophisticated mathematical model for future modified versions of the PDEMOD code.

  14. Universal relations with fermionic dark matter

    NASA Astrophysics Data System (ADS)

    Krut, A.; Argüelles, C. R.; Rueda, J. A.; Ruffini, R.

    2018-01-01

    We have recently introduced a new model for the distribution of dark matter (DM) in galaxies, the Ruffini-Argüelles-Rueda (RAR) model, based on a self-gravitating system of massive fermions at finite temperature. The RAR model, for fermion masses above the keV scale, successfully describes the DM halos in galaxies and predicts the existence of a denser quantum core towards the center of each configuration. We demonstrate here, for the first time, that the introduction of a cutoff in the fermion phase-space distribution, necessary to account for the finite size and mass of galaxies, defines a new solution with a compact quantum core which represents an alternative to the central black hole (BH) scenario for SgrA*. For a fermion mass in the range 48 keV ≤ mc² ≤ 345 keV, the DM halo distribution fulfills the most recent data of the Milky Way rotation curves while harboring a dense quantum core of 4×10⁶ M⊙ within the S2 star pericenter. In particular, for a fermion mass of mc² ~ 50 keV the model is able to explain the DM halos from typical dwarf spheroidal to normal elliptical galaxies, while harboring dark and massive compact objects from ~10³ M⊙ up to ~10⁸ M⊙ at their respective centers. The model is shown to be in good agreement with different observationally inferred universal relations, such as the ones connecting DM halos with supermassive dark central objects. Finally, the model provides a natural mechanism for the formation of supermassive BHs as heavy as a few 10⁸ M⊙. We argue that larger BH masses (a few 10⁹-10¹⁰ M⊙) may be achieved by assuming subsequent accretion processes onto the above heavy seeds, depending on accretion efficiency and environment.

  15. An Object Oriented Extensible Architecture for Affordable Aerospace Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Follen, Gregory J.; Lytle, John K. (Technical Monitor)

    2002-01-01

    Driven by a need to explore and develop propulsion systems that exceeded current computing capabilities, NASA Glenn embarked on a novel strategy leading to the development of an architecture that enables propulsion simulations never thought possible before. Full-engine 3-dimensional computational fluid dynamics propulsion system simulations were deemed impossible due to the impracticality of the hardware and software computing systems required. However, with a software paradigm shift and an embracing of parallel and distributed processing, an architecture was designed to meet the needs of future propulsion system modeling. The author suggests that the architecture designed at the NASA Glenn Research Center for propulsion system modeling has potential for impacting the direction of development of affordable weapons systems currently under consideration by the Applied Vehicle Technology Panel (AVT). This paper discusses the salient features of the NPSS architecture, including its interface layer, object layer, implementation for accessing legacy codes, numerical zooming infrastructure and its computing layer. The computing layer focuses on the use and deployment of these propulsion simulations on parallel and distributed computing platforms, which has been the focus of NASA Ames. Additional features of the object-oriented architecture that support MultiDisciplinary (MD) coupling, computer-aided design (CAD) access and MD coupling objects will be discussed. Included will be a discussion of the successes, challenges and benefits of implementing this architecture.

  16. Impact and Cratering History of the Pluto System

    NASA Astrophysics Data System (ADS)

    Greenstreet, Sarah; Gladman, Brett; McKinnon, William B.

    2014-11-01

    The observational opportunity of the New Horizons spacecraft fly-through of the Pluto system in July 2015 requires a current understanding of the Kuiper belt dynamical sub-populations to accurately interpret the cratering history of the surfaces of Pluto and its satellites. We use an Opik-style collision probability code to compute impact rates and impact velocity distributions onto Pluto and its binary companion Charon from the Canada-France Ecliptic Plane Survey (CFEPS) model of classical and resonant Kuiper belt populations (Petit et al., 2011; Gladman et al., 2012) and the scattering model of Kaib et al. (2011) calibrated to Shankman et al. (2013). Due to the uncertainty in how the well-characterized size distribution for Kuiper belt objects (with diameter d>100 km) connects to smaller objects, we compute cratering rates using three simple impactor size distribution extrapolations (a single power-law, a power-law with a knee, and a power-law with a divot) as well as the "curvy" impactor size distributions from Minton et al. (2012) and Schlichting et al. (2013). Current size distribution uncertainties cause absolute ages computed for Pluto surfaces to be entirely dependent on the extrapolation to small sizes and thus uncertain to a factor of approximately 6. We illustrate the relative importance of each Kuiper belt sub-population to Pluto's cratering rate, both now and integrated into the past, and provide crater retention ages for several cases. We find there is only a small chance a crater with diameter D>200 km has been created on Pluto in the past 4 Gyr. The 2015 New Horizons fly-through coupled with telescope surveys that cover objects with diameters d=10-100 km should eventually drop current crater retention age uncertainties on Pluto to <30%. In addition, we compute the "disruption timescale" (to a factor of three accuracy) for Pluto's smaller satellites: Styx, Nix, Kerberos, and Hydra.

  17. Statistics of high-level scene context

    PubMed Central

    Greene, Michelle R.

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed “things” in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics rather than intuition. PMID:24194723

  18. A Distributed Simulation Software System for Multi-Spacecraft Missions

    NASA Technical Reports Server (NTRS)

    Burns, Richard; Davis, George; Cary, Everett

    2003-01-01

    The paper will provide an overview of the web-based distributed simulation software system developed for end-to-end, multi-spacecraft mission design, analysis, and test at the NASA Goddard Space Flight Center (GSFC). This software system was developed for an internal research and development (IR&D) activity at GSFC called the Distributed Space Systems (DSS) Distributed Synthesis Environment (DSE). The long-term goal of the DSS-DSE is to integrate existing GSFC stand-alone test beds, models, and simulation systems to create a "hands on", end-to-end simulation environment for mission design, trade studies and simulations. The short-term goal of the DSE was therefore to develop the system architecture, and then to prototype the core software simulation capability based on a distributed computing approach, with demonstrations of some key capabilities by the end of Fiscal Year 2002 (FY02). To achieve the DSS-DSE IR&D objective, the team adopted a reference model and mission upon which FY02 capabilities were developed. The software was prototyped according to the reference model, and demonstrations were conducted for the reference mission to validate interfaces, concepts, etc. The reference model, illustrated in Fig. 1, included both space and ground elements, with functional capabilities such as spacecraft dynamics and control, science data collection, space-to-space and space-to-ground communications, mission operations, science operations, and data processing, archival and distribution addressed.

  19. High-mass X-ray binary populations. 1: Galactic modeling

    NASA Technical Reports Server (NTRS)

    Dalton, William W.; Sarazin, Craig L.

    1995-01-01

    Modern stellar evolutionary tracks are used to calculate the evolution of a very large number of massive binary star systems (Mtot ≥ 15 M⊙) which cover a wide range of total masses, mass ratios, and starting separations. Each binary is evolved accounting for mass and angular momentum loss through the supernova of the primary to the X-ray binary phase. Using the observed rate of star formation in our Galaxy and the properties of massive binaries, we calculate the expected high-mass X-ray binary (HMXRB) population in the Galaxy. We test various massive binary evolutionary scenarios by comparing the resulting HMXRB predictions with the X-ray observations. A major goal of this study is the determination of the fraction of matter lost from the system during the Roche lobe overflow phase. Curiously, we find that the total numbers of observable HMXRBs are nearly independent of this assumed mass-loss fraction, with any of the values tested here giving acceptable agreement between predicted and observed numbers. However, comparison of the period distribution of our HMXRB models with the observed period distribution does reveal a distinction among the various models. As a result of this comparison, we conclude that approximately 70% of the overflow matter is lost from a massive binary system during mass transfer in the Roche lobe overflow phase. We compare models constructed assuming that all X-ray emission is due to accretion onto the compact object from the donor star's wind with models that incorporate a simplified disk accretion scheme. By comparing the results of these models with observations, we conclude that the formation of disks in HMXRBs must be relatively common. We also calculate the rate of formation of double degenerate binaries, high-velocity detached compact objects, and Thorne-Zytkow objects.

  20. Optimization of European call options considering physical delivery network and reservoir operation rules

    NASA Astrophysics Data System (ADS)

    Cheng, Wei-Chen; Hsu, Nien-Sheng; Cheng, Wen-Ming; Yeh, William W.-G.

    2011-10-01

    This paper develops alternative strategies for European call options for water purchase under hydrological uncertainty that can be used by water resources managers for decision making. Each alternative strategy maximizes its own objective over a selected sequence of future hydrology that is characterized by exceedance probability. Water trade provides flexibility and enhances water distribution system reliability. However, water trade between two parties in a regional water distribution system involves many issues, such as the delivery network, reservoir operation rules, storage space, demand, water availability, uncertainty, and any existing contracts. An option is a security giving the right to buy or sell an asset; in our case, the asset is water. We extend a flow-path-based water distribution model to include reservoir operation rules. The model simultaneously considers both the physical distribution network and the relationships between water sellers and buyers. We first test the model extension, then apply the proposed optimization model for European call options to the Tainan water distribution system in southern Taiwan. The formulation lends itself to a mixed integer linear programming model, and we use the weighting method to formulate a composite function for the multiobjective problem. The proposed methodology provides water resources managers with an overall picture of water trade strategies and the consequences of each strategy. The results from the case study indicate that the strategy associated with a streamflow exceedance probability of 50% or smaller should be adopted as the reference strategy for the Tainan water distribution system.

  1. A multi-objective framework to predict flows of ungauged rivers within regions of sparse hydrometeorologic observation

    NASA Astrophysics Data System (ADS)

    Alipour, M.; Kibler, K. M.

    2017-12-01

    Despite advances in flow prediction, managers of ungauged rivers located within broad regions of sparse hydrometeorologic observation still lack prescriptive methods robust to the data challenges of such regions. We propose a multi-objective streamflow prediction framework for regions of minimum observation to select models that balance runoff efficiency with choice of accurate parameter values. We supplement sparse observed data with uncertain or low-resolution information incorporated as `soft' a priori parameter estimates. The performance of the proposed framework is tested against traditional single-objective and constrained single-objective calibrations in two catchments in a remote area of southwestern China. We find that the multi-objective approach performs well with respect to runoff efficiency in both catchments (NSE = 0.74 and 0.72), within the range of efficiencies returned by other models (NSE = 0.67 - 0.78). However, soil moisture capacity estimated by the multi-objective model resonates with a priori estimates (parameter residuals of 61 cm versus 289 and 518 cm for maximum soil moisture capacity in one catchment, and 20 cm versus 246 and 475 cm in the other; parameter residuals of 0.48 versus 0.65 and 0.7 for soil moisture distribution shape factor in one catchment, and 0.91 versus 0.79 and 1.24 in the other). Thus, optimization to a multi-criteria objective function led to very different representations of soil moisture capacity as compared to models selected by single-objective calibration, without compromising runoff efficiency. These different soil moisture representations may translate into considerably different hydrological behaviors. The proposed approach thus offers a preliminary step towards greater process understanding in regions of severe data limitations. For instance, the multi-objective framework may be an adept tool to discern between models of similar efficiency to select models that provide the "right answers for the right reasons". Managers may feel more confident to utilize such models to predict flows in fully ungauged areas.
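
    The balance described above can be written as a composite objective: one term rewards runoff efficiency (NSE) and another penalizes departures from the soft a priori parameter estimates. The weights, parameter names, and numbers below are illustrative assumptions, not the paper's calibration setup.

```python
import numpy as np

def nse(sim: np.ndarray, obs: np.ndarray) -> float:
    """Nash-Sutcliffe efficiency of simulated vs observed flows."""
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def composite_objective(params, sim, obs, prior, prior_scale, w=0.5):
    """Minimize (1 - NSE) plus weighted, normalized prior residuals."""
    resid = np.abs(np.asarray(params) - prior) / prior_scale
    return (1.0 - nse(sim, obs)) + w * resid.mean()

obs = np.array([5.0, 7.0, 12.0, 30.0, 18.0, 9.0])   # toy observed flows
sim = np.array([5.5, 6.5, 13.0, 27.0, 19.0, 8.0])   # toy simulated flows
params = [95.0, 0.55]                # e.g. soil moisture capacity, shape factor
prior = np.array([80.0, 0.50])       # soft a priori estimates
scale = np.array([40.0, 0.25])       # assumed uncertainty of the priors
print(f"{composite_objective(params, sim, obs, prior, scale):.3f}")
```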

  2. Modeling the climatic and subsurface stratigraphy controls on the hydrology of a Carolina bay wetland in South Carolina, USA

    Treesearch

    Ge Sun; Timothy J. Callahan; Jennifer E. Pyzoha; Carl C. Trettin

    2006-01-01

    Restoring depressional wetlands or geographically isolated wetlands such as cypress swamps and Carolina bays on the Atlantic Coastal Plains requires a clear understanding of the hydrologic processes and water balances. The objectives of this paper are to (1) test a distributed forest hydrology model, FLATWOODS, for a Carolina bay wetland system using seven years of...

  3. Modeling the climatic and subsurface stratigraphy controls on the hydrology of a Carolina Bay wetland in South Carolina, USA

    Treesearch

    Ge Sun; Timothy J. Callahan; Jennifer E. Pyzoha; Carl C. Trettin

    2006-01-01

    Restoring depressional wetlands or geographically isolated wetlands such as cypress swamps and Carolina bays on the Atlantic Coastal Plains requires a clear understanding of the hydrologic processes and water balances. The objectives of this paper are to (1) test a distributed forest hydrology model, FLATWOODS, for a Carolina bay wetland system using seven years of...

  4. Percentiles of the null distribution of 2 maximum lod score tests.

    PubMed

    Ulgen, Ayse; Yoo, Yun Joo; Gordon, Derek; Finch, Stephen J; Mendell, Nancy R

    2004-01-01

    We here consider the null distribution of the maximum lod score (LOD-M) obtained upon maximizing over transmission model parameters (penetrance values, dominance, and allele frequency) as well as the recombination fraction. Also considered is the lod score maximized over a fixed choice of genetic model parameters and recombination-fraction values set prior to the analysis (MMLS) as proposed by Hodge et al. The objective is to fit parametric distributions to MMLS and LOD-M. Our results are based on 3,600 simulations of samples of n = 100 nuclear families ascertained for having one affected member and at least one other sibling available for linkage analysis. Each null distribution is approximately a mixture p·χ²(0) + (1 − p)·χ²(ν). The values of MMLS appear to fit the mixture 0.20χ²(0) + 0.80χ²(1.6), and the mixture 0.13χ²(0) + 0.87χ²(2.8) appears to describe the null distribution of LOD-M. From these results we derive a simple method for obtaining critical values of LOD-M and MMLS. Copyright 2004 S. Karger AG, Basel
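
    The reported mixtures make critical values straightforward to compute: the χ²(0) component is a point mass at zero, so only the continuous part contributes above any positive cutoff. A SciPy sketch follows, using the standard χ² = 2 ln(10) × LOD conversion; it illustrates the method rather than reproducing the paper's tables.

    ```python
    import numpy as np
    from scipy.stats import chi2

    def mixture_critical_value(p0, df, alpha=0.05):
        """Critical value c with p0*P(chi2_0 > c) + (1-p0)*P(chi2_df > c) = alpha.

        The chi2(0) component is a point mass at zero, so for c > 0 only
        the second term contributes. SciPy's chi2 accepts non-integer df."""
        return chi2.isf(alpha / (1.0 - p0), df)

    # MMLS null ~ 0.20*chi2(0) + 0.80*chi2(1.6); convert the chi-square
    # cutoff back to a lod score via chi2 = 2*ln(10)*LOD.
    c = mixture_critical_value(0.20, 1.6)
    lod_cutoff = c / (2 * np.log(10))
    print(f"chi2 cutoff = {c:.3f}, lod cutoff = {lod_cutoff:.3f}")
    ```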

  5. Time Series Remote Sensing in Monitoring the Spatio-Temporal Dynamics of Plant Invasions: A Study of Invasive Saltcedar (Tamarix Spp.)

    NASA Astrophysics Data System (ADS)

    Diao, Chunyuan

    In today's big data era, the increasing availability of satellite and airborne platforms at various spatial and temporal scales creates unprecedented opportunities to understand complex and dynamic systems such as plant invasion. Time series remote sensing is becoming increasingly important for monitoring earth system dynamics and interactions. To date, most time series remote sensing studies have been conducted with images acquired at coarse spatial scales, due to their relatively high temporal resolution. The construction of time series at fine spatial scale, however, has been limited to a few discrete images acquired within or across years. The objective of this research is to advance time series remote sensing at fine spatial scale, particularly to shift from discrete to continuous time series remote sensing. The objective will be achieved through the following aims: 1) Advance intra-annual time series remote sensing under the pure-pixel assumption; 2) Advance intra-annual time series remote sensing under the mixed-pixel assumption; 3) Advance inter-annual time series remote sensing in monitoring the land surface dynamics; and 4) Advance the species distribution model with time series remote sensing. Taking invasive saltcedar as an example, four methods (i.e., phenological time series remote sensing model, temporal partial unmixing method, multiyear spectral angle clustering model, and time series remote sensing-based spatially explicit species distribution model) were developed to achieve the objectives. Results indicated that the phenological time series remote sensing model could effectively map saltcedar distributions through characterizing the seasonal phenological dynamics of plant species throughout the year. The proposed temporal partial unmixing method, compared to conventional unmixing methods, could more accurately estimate saltcedar abundance within a pixel by exploiting the adequate temporal signatures of saltcedar. The multiyear spectral angle clustering model could guide the selection of the most representative remotely sensed image for repetitive saltcedar mapping over space and time. Through incorporating spatial autocorrelation, the species distribution model developed in the study could identify the suitable habitats of saltcedar at a fine spatial scale and locate appropriate areas at high risk of saltcedar infestation. Among 10 environmental variables, the distance to the river and the phenological attributes summarized by the time series remote sensing were regarded as the most important. The methods developed in this study provide new perspectives on how continuous time series can be leveraged under various conditions to investigate plant invasion dynamics.

  6. Gas-Grain Chemical Models: Inclusion of a Grain Size Distribution and a Study Of Young Stellar Objects in the Magellanic Clouds

    NASA Astrophysics Data System (ADS)

    Pauly, Tyler Andrew

    2017-06-01

    Computational models of interstellar gas-grain chemistry have aided in our understanding of star-forming regions. Chemical kinetics models rely on a network of chemical reactions and a set of physical conditions in which atomic and molecular species are allowed to form and react. We replace the canonical single grain-size in our chemical model MAGICKAL with a grain size distribution and analyze the effects on the chemical composition of the gas and grain surface in quiescent and collapsing dark cloud models. We find that a grain size distribution coupled with a temperature distribution across grain sizes can significantly affect the bulk ice composition when dust temperatures fall near critical values related to the surface binding energies of common interstellar chemical species. We then apply the updated model to a study of ice formation in the cold envelopes surrounding massive young stellar objects in the Magellanic Clouds. The Magellanic Clouds are local satellite galaxies of the Milky Way, and they provide nearby environments to study star formation at low metallicity. We expand the model calculation of dust temperature to include a treatment for increased interstellar radiation field intensity; we vary the radiation field to model the elevated dust temperatures observed in the Magellanic Clouds. We also adjust the initial elemental abundances used in the model, guided by observations of Magellanic Cloud HII regions. We are able to reproduce the relative ice fractions observed, indicating that metal depletion and elevated grain temperature are important drivers of the envelope ice composition. The observed shortfall in CO in Small Magellanic Cloud sources can be explained by a combination of reduced carbon abundance and increased grain temperatures. The models indicate that a large variation in radiation field strength is required to match the range of observed LMC abundances. CH3OH abundance is found to be enhanced (relative to total carbon abundance) in low-metallicity models, providing seed material for complex organic molecule formation. We conclude with a preliminary study of the recently discovered hot core in the Large Magellanic Cloud; we create a grid of models to simulate hot core formation in Magellanic Cloud environments, comparing them to models and observations of well-characterized galactic counterparts.
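
    A minimal sketch of how a single grain size can be replaced by a binned size distribution follows, assuming an MRN-like power law n(a) ∝ a^-3.5; the bin count, size limits, and normalization are placeholders, and the paper's actual distribution and temperature treatment may differ.

    ```python
    import numpy as np

    # Discretize an assumed MRN-like power law n(a) da ∝ a^-3.5 da into
    # logarithmic bins; each bin can then carry its own dust temperature
    # and contribute surface area for grain-surface chemistry.
    a_min, a_max, n_bins = 5e-3, 0.25, 5        # grain radii in microns (assumed)
    edges = np.logspace(np.log10(a_min), np.log10(a_max), n_bins + 1)
    a = np.sqrt(edges[:-1] * edges[1:])         # geometric bin centers

    # Relative number per bin: integral of a^-3.5 da over each bin
    number = (edges[:-1] ** -2.5 - edges[1:] ** -2.5) / 2.5
    number /= number.sum()
    surface = number * a ** 2                   # relative surface area per bin
    surface /= surface.sum()
    print(dict(zip(np.round(a, 3), np.round(surface, 3))))
    ```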

  7. Fourier optics of constant-thickness three-dimensional objects on the basis of diffraction models

    NASA Astrophysics Data System (ADS)

    Chugui, Yu. V.

    2017-09-01

    Results of investigations of diffraction phenomena on constant-thickness three-dimensional objects with flat inner surfaces (thick plates) are summarized on the basis of our constructive theory of their calculation as applied to dimensional inspection. It is based on diffraction models of 3D objects with the use of equivalent diaphragms (distributions), which allow the Kirchhoff-Fresnel approximation to be effectively used. In contrast to available rigorous and approximate methods, the present approach does not require cumbersome calculations; it is a clearly arranged method, which ensures sufficient accuracy for engineering applications. It is found that the fundamental diffraction parameter for 3D objects of constant thickness d is the critical diffraction angle θ_cr = √(λ/d), at which the effect of three-dimensionality on the spectrum of the 3D object becomes appreciable. Calculated Fraunhofer diffraction patterns (spectra) and images of constant-thickness 3D objects with absolutely absorbing, absolutely reflecting, and gray internal faces are presented. It is demonstrated that selection of 3D object fragments can be performed by choosing an appropriate configuration of the wave illuminating the object (plane normal or inclined waves, spherical waves).

  8. WISE/NEOWISE OBSERVATIONS OF THE JOVIAN TROJAN POPULATION: TAXONOMY

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grav, T.; Mainzer, A. K.; Bauer, J. M.

    2012-11-01

    We present updated/new thermal model fits for 478 Jovian Trojan asteroids observed with the Wide-field Infrared Survey Explorer (WISE). Using the fact that the two shortest bands used by WISE, centered on 3.4 and 4.6 μm, are dominated by reflected light, we derive albedos of a significant fraction of these objects in these bands. While the visible albedos of the C-, P-, and D-type asteroids are strikingly similar, the WISE data reveal that the albedo at 3.4 μm differs between C-/P- and D-types. The albedo at 3.4 μm can thus be used to classify the objects, with C-/P-types having values less than 10% and D-types having values larger than 10%. Classifying all objects larger than 50 km shows that the D-type objects dominate both the leading cloud (L4), with a fraction of 84%, and the trailing cloud (L5), with a fraction of 71%-80%. The two clouds thus have very similar taxonomic distributions for these large objects, but the leading cloud has a larger number of these large objects, L4/L5 = 1.34. The taxonomic distribution of the Jovian Trojans is found to be different from that of the large Hildas, which are dominated by C- and P-type objects. At smaller sizes, the fraction of D-type Hildas starts increasing, showing more similarities with the Jovian Trojans. If this similarity is confirmed through deeper surveys, it could hold important clues to the formation and evolution of the two populations. The Jovian Trojans do have a taxonomic distribution similar to that of the Jovian irregular satellites, but lack the ultra-red surfaces found among the Saturnian irregular satellites and the Centaur population.

  9. Determination of hyporheic travel time distributions and other parameters from concurrent conservative and reactive tracer tests by local-in-global optimization

    NASA Astrophysics Data System (ADS)

    Knapp, Julia L. A.; Cirpka, Olaf A.

    2017-06-01

    The complexity of hyporheic flow paths requires reach-scale models of solute transport in streams that are flexible in their representation of the hyporheic passage. We use a model that couples advective-dispersive in-stream transport to hyporheic exchange with a shape-free distribution of hyporheic travel times. The model also accounts for two-site sorption and transformation of reactive solutes. The coefficients of the model are determined by fitting concurrent stream-tracer tests of conservative (fluorescein) and reactive (resazurin/resorufin) compounds. The flexibility of the shape-free models gives rise to multiple local minima of the objective function in parameter estimation, thus requiring global-search algorithms, a requirement hindered by the large number of parameter values to be estimated. We present a local-in-global optimization approach, in which we use a Markov-Chain Monte Carlo method as the global-search method to estimate a set of in-stream and hyporheic parameters. Nested therein, we infer the shape-free distribution of hyporheic travel times by a local Gauss-Newton method. The overall approach is independent of the initial guess and provides the joint posterior distribution of all parameters. We apply the described local-in-global optimization method to recorded tracer breakthrough curves of three consecutive stream sections, and infer section-wise hydraulic parameter distributions to analyze how hyporheic exchange processes differ between the stream sections.
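
    The nested structure can be sketched as an outer stochastic search over a global parameter with an inner linear solve for the shape-free travel-time weights. The toy below uses non-negative least squares as the inner step (standing in for the paper's Gauss-Newton) and a random-walk Metropolis outer loop; the in-stream response, lag spacing, and error variance are all assumptions.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(0)

    def design_matrix(t, base, n_bins):
        """Columns = in-stream response shifted by candidate hyporheic lags,
        so observed ≈ design @ weights recovers a shape-free travel-time pdf."""
        cols = []
        for k in range(n_bins):
            shift = 2 * k                       # lag spacing (assumption)
            col = np.zeros_like(base)
            col[shift:] = base[: len(base) - shift]
            cols.append(col)
        return np.column_stack(cols)

    def local_in_global(t, observed, n_bins=15, n_iter=2000, sigma=0.01):
        def inner(v):
            base = np.exp(-0.5 * ((t - 1.0 / v) / 0.1) ** 2)  # toy in-stream BTC
            w, r = nnls(design_matrix(t, base, n_bins), observed)
            return w, r                          # weights and residual norm

        v = 1.0                                  # initial velocity guess
        w, best = inner(v)
        for _ in range(n_iter):                  # outer global MCMC search
            v_new = abs(v + 0.05 * rng.standard_normal())
            w_new, r = inner(v_new)
            # Metropolis acceptance, Gaussian errors and flat prior assumed
            if rng.random() < np.exp((best**2 - r**2) / (2 * sigma**2)):
                v, w, best = v_new, w_new, r
        return v, w / max(w.sum(), 1e-12)        # normalized travel-time weights
    ```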

  10. Mathematical optimization of high dose-rate brachytherapy—derivation of a linear penalty model from a dose-volume model

    NASA Astrophysics Data System (ADS)

    Morén, B.; Larsson, T.; Carlsson Tedgren, Å.

    2018-03-01

    High dose-rate brachytherapy is a method for cancer treatment where the radiation source is placed within the body, inside or close to a tumour. For dose planning, mathematical optimization techniques are being used in practice and the most common approach is to use a linear model which penalizes deviations from specified dose limits for the tumour and for nearby organs. This linear penalty model is easy to solve, but its weakness lies in the poor correlation of its objective value and the dose-volume objectives that are used clinically to evaluate dose distributions. Furthermore, the model contains parameters that have no clear clinical interpretation. Another approach for dose planning is to solve mixed-integer optimization models with explicit dose-volume constraints which include parameters that directly correspond to dose-volume objectives, and which are therefore tangible. The two mentioned models take the overall goals for dose planning into account in fundamentally different ways. We show that there is, however, a mathematical relationship between them by deriving a linear penalty model from a dose-volume model. This relationship has not been established before and improves the understanding of the linear penalty model. In particular, the parameters of the linear penalty model can be interpreted as dual variables in the dose-volume model.
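
    In generic form (our notation, not necessarily the authors'), the linear penalty model and the dose-volume constraint it is related to can be written as follows, with x the dwell times, d = Ax the voxel doses, and [z]_+ = max(0, z):

    ```latex
    % Linear penalty model: (L, U) are dose limits for tumour set T and
    % organ set O, and (w^-, w^+) are the penalty weights.
    \min_{x \ge 0} \; \sum_{i \in \mathcal{T}} w^- \,[\,L - d_i\,]_+
                 \;+\; \sum_{j \in \mathcal{O}} w^+ \,[\,d_j - U\,]_+ ,
    \qquad d = Ax .

    % Dose-volume model: binary y_i = 1 if tumour voxel i is covered, and at
    % least a fraction v of the |T| tumour voxels must receive dose L.
    d_i \ge L - M (1 - y_i), \qquad
    \sum_{i \in \mathcal{T}} y_i \ge v \, |\mathcal{T}|, \qquad
    y_i \in \{0, 1\} .
    ```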

  11. A scale-invariant cellular-automata model for distributed seismicity

    NASA Technical Reports Server (NTRS)

    Barriere, Benoit; Turcotte, Donald L.

    1991-01-01

    In the standard cellular-automata model for a fault, an element of stress is randomly added to a grid of boxes until a box has four elements; these are then redistributed to the adjacent boxes on the grid. The redistribution can result in one or more of these boxes having four or more elements, in which case further redistributions are required. On average, added elements are lost from the edges of the grid. The model is modified so that the boxes have a scale-invariant distribution of sizes. The objective is to model a scale-invariant distribution of fault sizes. When a redistribution from a box occurs, it is equivalent to a characteristic earthquake on the fault. A redistribution from a small box (a foreshock) can trigger an instability in a large box (the main shock). A redistribution from a large box always triggers many instabilities in the smaller boxes (aftershocks). The frequency-size statistics for both main shocks and aftershocks satisfy the Gutenberg-Richter relation with b = 0.835 for main shocks and b = 0.635 for aftershocks. Model foreshocks occur 28 percent of the time.
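
    The redistribution rule described is the classic sandpile automaton. A minimal uniform-box version is sketched below, collecting avalanche sizes for frequency-size statistics; the paper's scale-invariant box-size distribution is exactly the part this sketch omits.

    ```python
    import numpy as np

    def sandpile_avalanches(n=50, steps=20000, seed=1):
        """Uniform-box stress-redistribution automaton (threshold 4)."""
        rng = np.random.default_rng(seed)
        grid = np.zeros((n, n), dtype=int)
        sizes = []                              # avalanche sizes for G-R stats
        for _ in range(steps):
            i, j = rng.integers(0, n, size=2)
            grid[i, j] += 1                     # add one element of stress
            size, stack = 0, [(i, j)]
            while stack:
                a, b = stack.pop()
                if grid[a, b] < 4:
                    continue
                grid[a, b] -= 4                 # redistribute four elements
                size += 1
                for da, db in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    x, y = a + da, b + db
                    if 0 <= x < n and 0 <= y < n:   # edge cells lose elements
                        grid[x, y] += 1
                        stack.append((x, y))
            if size:
                sizes.append(size)
        return sizes
    ```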

  12. A simultaneous beta and coincidence-gamma imaging system for plant leaves

    NASA Astrophysics Data System (ADS)

    Ranjbar, Homayoon; Wen, Jie; Mathews, Aswin J.; Komarov, Sergey; Wang, Qiang; Li, Ke; O'Sullivan, Joseph A.; Tai, Yuan-Chuan

    2016-05-01

    Positron emitting isotopes, such as 11C, 13N, and 18F, can be used to label molecules. The tracers, such as 11CO2, are delivered to plants to study their biological processes, particularly metabolism and photosynthesis, which may contribute to the development of plants that have a higher yield of crops and biomass. Measurements and resulting images from PET scanners are not quantitative in young plant structures or in plant leaves due to poor positron annihilation in thin objects. To address this problem we have designed, assembled, modeled, and tested a nuclear imaging system (simultaneous beta-gamma imager). The imager can simultaneously detect positrons (β+) and coincidence-gamma rays (γ). The imaging system employs two planar detectors; one is a regular gamma detector which has a LYSO crystal array, and the other is a phoswich detector which has an additional BC-404 plastic scintillator for beta detection. A forward model for positrons is proposed along with a joint image reconstruction formulation to utilize the beta and coincidence-gamma measurements for estimating radioactivity distribution in plant leaves. The joint reconstruction algorithm first reconstructs beta and gamma images independently to estimate the thickness component of the beta forward model and afterward jointly estimates the radioactivity distribution in the object. We have validated the physics model and reconstruction framework through a phantom imaging study and imaging a tomato leaf that has absorbed 11CO2. The results demonstrate that the simultaneously acquired beta and coincidence-gamma data, combined with our proposed joint reconstruction algorithm, improved the quantitative accuracy of estimating radioactivity distribution in thin objects such as leaves. We used the structural similarity (SSIM) index for comparing the leaf images from the simultaneous beta-gamma imager with the ground truth image. The jointly reconstructed images yield SSIM indices of 0.69 and 0.63, whereas the separately reconstructed beta alone and gamma alone images had indices of 0.33 and 0.52, respectively.
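
    The SSIM comparison used here is straightforward to reproduce with scikit-image; the arrays below are random stand-ins for the reconstructed and ground-truth leaf images.

    ```python
    import numpy as np
    from skimage.metrics import structural_similarity

    rng = np.random.default_rng(1)
    truth = rng.random((64, 64))                       # ground-truth activity map
    recon = truth + 0.1 * rng.standard_normal((64, 64))  # a noisy reconstruction

    score = structural_similarity(truth, recon,
                                  data_range=truth.max() - truth.min())
    print(f"SSIM = {score:.2f}")
    ```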

  13. A simultaneous beta and coincidence-gamma imaging system for plant leaves.

    PubMed

    Ranjbar, Homayoon; Wen, Jie; Mathews, Aswin J; Komarov, Sergey; Wang, Qiang; Li, Ke; O'Sullivan, Joseph A; Tai, Yuan-Chuan

    2016-05-07

    Positron emitting isotopes, such as (11)C, (13)N, and (18)F, can be used to label molecules. The tracers, such as (11)CO2, are delivered to plants to study their biological processes, particularly metabolism and photosynthesis, which may contribute to the development of plants that have a higher yield of crops and biomass. Measurements and resulting images from PET scanners are not quantitative in young plant structures or in plant leaves due to poor positron annihilation in thin objects. To address this problem we have designed, assembled, modeled, and tested a nuclear imaging system (simultaneous beta-gamma imager). The imager can simultaneously detect positrons (β+) and coincidence-gamma rays (γ). The imaging system employs two planar detectors; one is a regular gamma detector which has a LYSO crystal array, and the other is a phoswich detector which has an additional BC-404 plastic scintillator for beta detection. A forward model for positrons is proposed along with a joint image reconstruction formulation to utilize the beta and coincidence-gamma measurements for estimating radioactivity distribution in plant leaves. The joint reconstruction algorithm first reconstructs beta and gamma images independently to estimate the thickness component of the beta forward model and afterward jointly estimates the radioactivity distribution in the object. We have validated the physics model and reconstruction framework through a phantom imaging study and imaging a tomato leaf that has absorbed (11)CO2. The results demonstrate that the simultaneously acquired beta and coincidence-gamma data, combined with our proposed joint reconstruction algorithm, improved the quantitative accuracy of estimating radioactivity distribution in thin objects such as leaves. We used the structural similarity (SSIM) index for comparing the leaf images from the simultaneous beta-gamma imager with the ground truth image. The jointly reconstructed images yield SSIM indices of 0.69 and 0.63, whereas the separately reconstructed beta alone and gamma alone images had indices of 0.33 and 0.52, respectively.

  14. Full State Feedback Control for Virtual Power Plants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Johnson, Jay Tillay

    This report presents an object-oriented implementation of full state feedback control for virtual power plants (VPP). The components of the VPP full state feedback control are (1) object-oriented high-fidelity modeling for all devices in the VPP; (2) Distribution System Distributed Quasi-Dynamic State Estimation (DS-DQSE), which enables full observability of the VPP by augmenting actual measurements with virtual, derived, and pseudo measurements and performing the Quasi-Dynamic State Estimation (QSE) in a distributed manner; and (3) automated formulation of the Optimal Power Flow (OPF) in real time using the output of the DS-DQSE, and solving the distributed OPF to provide the optimal control commands to the DERs of the VPP.
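
    As a sketch of the estimation idea, augmenting real telemetry with pseudo-measurements reduces, in the linear case, to weighted least squares; the matrices and variances below are placeholders, not the report's DS-DQSE formulation.

    ```python
    import numpy as np

    def wls_state_estimate(H, z, sigma):
        """Linear WLS: argmin (z - Hx)' W (z - Hx), W = diag(1/sigma^2).

        Pseudo and virtual measurements get large sigmas so they provide
        observability without dominating the real telemetry."""
        W = np.diag(1.0 / np.asarray(sigma, float) ** 2)
        G = H.T @ W @ H                         # gain matrix
        return np.linalg.solve(G, H.T @ W @ z)

    # Two real measurements plus one loose pseudo-measurement (illustrative)
    H = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    z = np.array([1.02, 2.01, 0.0])
    sigma = np.array([0.01, 0.01, 1.0])          # pseudo-measurement weighted weakly
    print(wls_state_estimate(H, z, sigma))
    ```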

  15. Fast Reliability Assessing Method for Distribution Network with Distributed Renewable Energy Generation

    NASA Astrophysics Data System (ADS)

    Chen, Fan; Huang, Shaoxiong; Ding, Jinjin; Ding, Jinjin; Gao, Bo; Xie, Yuguang; Wang, Xiaoming

    2018-01-01

    This paper proposes a fast reliability assessment method for distribution grids with distributed renewable energy generation. First, the Weibull distribution and the Beta distribution are used to describe the probability distribution characteristics of wind speed and solar irradiance, respectively, and models of the wind farm, solar park, and local load are built for reliability assessment. Then, based on power system production cost simulation, probability discretization, and linearized power flow, an optimal power flow with the objective of minimizing the cost of conventional power generation is solved. Thus, a reliability assessment for the distribution grid is implemented quickly and accurately. The Loss Of Load Probability (LOLP) and Expected Energy Not Supplied (EENS) are selected as the reliability indices. A simulation of the IEEE RBTS BUS6 system in MATLAB indicates that the fast reliability assessment method calculates the reliability indices much faster than the Monte Carlo method while maintaining accuracy.
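
    A brute-force Monte Carlo benchmark of the kind the paper compares against can be sketched directly from the named distributions; the Weibull/Beta parameters, power-conversion curves, and load level below are assumptions, not the paper's data.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000                                   # Monte Carlo samples

    # Wind speed ~ Weibull(shape, scale); irradiance ~ Beta(a, b) (per paper)
    wind = 8.0 * rng.weibull(2.0, n)              # m/s; scale and shape assumed
    irr = rng.beta(2.0, 2.0, n)                   # normalized irradiance, assumed

    # Simple assumed conversion curves and a constant load
    p_wind = np.clip((wind - 3.0) / (12.0 - 3.0), 0, 1) * 2.0  # MW, cut-in 3 m/s
    p_pv = 1.5 * irr                                           # MW
    load = 3.0                                                 # MW

    deficit = load - (p_wind + p_pv)
    lolp = np.mean(deficit > 0)                         # Loss Of Load Probability
    eens = np.mean(np.clip(deficit, 0, None)) * 8760    # MWh/yr, EENS
    print(f"LOLP = {lolp:.3f}, EENS = {eens:.0f} MWh/yr")
    ```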

  16. Probability distribution of extreme share returns in Malaysia

    NASA Astrophysics Data System (ADS)

    Zin, Wan Zawiah Wan; Safari, Muhammad Aslam Mohd; Jaaman, Saiful Hafizah; Yie, Wendy Ling Shin

    2014-09-01

    The objective of this study is to investigate suitable probability distributions to model extreme share returns in Malaysia. To achieve this, weekly and monthly maximum daily share returns are derived from share price data obtained from Bursa Malaysia over the period 2000 to 2012. The study starts with summary statistics of the data, which provide a clue to the likely candidates for the best-fitting distribution. Next, the suitability of six extreme value distributions, namely the Gumbel, Generalized Extreme Value (GEV), Generalized Logistic (GLO), Generalized Pareto (GPA), Lognormal (GNO), and Pearson type III (PE3) distributions, is evaluated. The method of L-moments is used for parameter estimation. Based on several goodness-of-fit tests and the L-moment diagram test, the Generalized Pareto distribution and the Pearson distribution are found to be the best-fitting distributions for the weekly and monthly maximum share returns, respectively, in the Malaysian stock market during the studied period.
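
    The first step of the method of L-moments is computing sample L-moments from probability-weighted moments; a short sketch of the standard unbiased estimators follows (the fitting of each candidate distribution from these moments is omitted).

    ```python
    import numpy as np

    def l_moments(x):
        """First three sample L-moments via probability-weighted moments."""
        x = np.sort(np.asarray(x, dtype=float))
        n = len(x)
        i = np.arange(1, n + 1)
        b0 = x.mean()
        b1 = np.sum((i - 1) / (n - 1) * x) / n
        b2 = np.sum((i - 1) * (i - 2) / ((n - 1) * (n - 2)) * x) / n
        l1 = b0                        # location
        l2 = 2 * b1 - b0               # scale
        l3 = 6 * b2 - 6 * b1 + b0
        return l1, l2, l3 / l2         # l3/l2 = L-skewness, used in L-moment diagrams

    returns = np.random.default_rng(0).gumbel(0.02, 0.01, 600)  # synthetic maxima
    print(l_moments(returns))
    ```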

  17. A model for distribution centers location-routing problem on a multimodal transportation network with a meta-heuristic solving approach

    NASA Astrophysics Data System (ADS)

    Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai

    2017-07-01

    Nowadays, organizations have to compete with different competitors at the regional, national, and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system that can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two different parts for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases with different sizes were generated and solved. Also, different cost scenarios were designed to better analyze model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS failed to reach an optimal solution even within much longer times.
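
    A sketch of a two-part chromosome of the kind described is given below: one segment for mode choices on the multimodal legs and one permutation segment for the delivery tours, with a crossover that keeps each part valid. The encoding details are assumptions, not the authors' exact operators.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def random_chromosome(n_arcs, n_modes, n_retailers):
        """Part 1: transport mode per network arc; part 2: retailer visit order."""
        return (rng.integers(0, n_modes, n_arcs), rng.permutation(n_retailers))

    def crossover(parent1, parent2):
        modes1, tour1 = parent1
        modes2, tour2 = parent2
        # One-point crossover on the multimodal (mode-choice) part
        cut = rng.integers(1, len(modes1))
        child_modes = np.concatenate([modes1[:cut], modes2[cut:]])
        # Order crossover on the routing part keeps it a valid permutation
        cut = rng.integers(1, len(tour1))
        head = tour1[:cut]
        head_set = set(head.tolist())
        tail = np.array([r for r in tour2 if r not in head_set])
        return child_modes, np.concatenate([head, tail])

    p1 = random_chromosome(n_arcs=6, n_modes=3, n_retailers=8)
    p2 = random_chromosome(n_arcs=6, n_modes=3, n_retailers=8)
    print(crossover(p1, p2))
    ```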

  18. A model for distribution centers location-routing problem on a multimodal transportation network with a meta-heuristic solving approach

    NASA Astrophysics Data System (ADS)

    Fazayeli, Saeed; Eydi, Alireza; Kamalabadi, Isa Nakhai

    2018-07-01

    Nowadays, organizations have to compete with different competitors at the regional, national, and international levels, so they have to improve their competitive capabilities to survive. Undertaking activities on a global scale requires a proper distribution system that can take advantage of different transportation modes. Accordingly, the present paper addresses a location-routing problem on a multimodal transportation network. The introduced problem pursues four objectives simultaneously, which form the main contribution of the paper: determining multimodal routes between the supplier and distribution centers, locating mode-changing facilities, locating distribution centers, and determining product delivery tours from the distribution centers to retailers. An integer linear programming model is presented for the problem, and a genetic algorithm with a new chromosome structure is proposed to solve it. The proposed chromosome structure consists of two different parts for the multimodal transportation and location-routing parts of the model. Based on published data in the literature, two numerical cases with different sizes were generated and solved. Also, different cost scenarios were designed to better analyze model and algorithm performance. Results show that the algorithm can effectively solve large-size problems within a reasonable time, whereas GAMS failed to reach an optimal solution even within much longer times.

  19. An approach for access differentiation design in medical distributed applications built on databases.

    PubMed

    Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K

    1999-01-01

    A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application poses new essential problems for software. In particular, protection tools that are sufficient separately become deficient during integration due to specific additional links and relationships not considered formerly. For example, it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools if the object is stored as a record in DB tables. The solution of the problem should be sought only within the more general application framework. Appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications which use databases. The appropriate formal model, as well as tools for its mapping to a DBMS, are suggested. Remote users connected via global networks are considered too.

  20. Contact-force distribution optimization and control for quadruped robots using both gradient and adaptive neural networks.

    PubMed

    Li, Zhijun; Ge, Shuzhi Sam; Liu, Sibang

    2014-08-01

    This paper investigates optimal feet-force distribution and control of quadruped robots under external disturbance forces. First, we formulate the constrained dynamics of quadruped robots and derive a reduced-order dynamical model of motion/force. Considering an external wrench on the quadruped robot, the distribution of required forces and moments on the supporting legs is handled as a tip-point force distribution used to equilibrate the external wrench. Then, a gradient neural network is adopted to solve the resulting optimization, formulated as minimizing a quadratic objective function subject to linear equality and inequality constraints. For the obtained optimized tip-point forces and the motion of the legs, we propose hybrid motion/force control based on an adaptive neural network to compensate for perturbations in the environment and to approximate the feedforward force and impedance of the leg joints. The proposed control can handle uncertainties, including approximation error and external perturbation. The proposed control is verified in simulation.
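
    A least-squares stand-in for the equality-constrained core of this problem is sketched below: the minimum-norm contact forces equilibrating an external wrench via the pseudo-inverse of the grasp matrix. Friction-cone inequality constraints and the gradient-network solver of the paper are omitted, and the contact geometry and wrench are assumed values.

    ```python
    import numpy as np

    def grasp_matrix(contacts):
        """Map stacked 3D contact forces to the body wrench (force, moment)."""
        cols = []
        for p in contacts:                       # p = contact position (3,)
            G = np.zeros((6, 3))
            G[:3, :] = np.eye(3)                 # force contribution
            px, py, pz = p                       # moment = p x f = skew(p) @ f
            G[3:, :] = np.array([[0, -pz, py], [pz, 0, -px], [-py, px, 0]])
            cols.append(G)
        return np.hstack(cols)

    # min ||f||^2  s.t.  A f = -w_ext   (minimum-norm force distribution)
    contacts = [np.array([0.3, 0.2, 0.0]), np.array([0.3, -0.2, 0.0]),
                np.array([-0.3, 0.2, 0.0]), np.array([-0.3, -0.2, 0.0])]
    A = grasp_matrix(contacts)
    w_ext = np.array([0.0, 0.0, -200.0, 0.0, 0.0, 0.0])   # gravity wrench (assumed)
    f = np.linalg.pinv(A) @ (-w_ext)             # 12-vector of leg forces
    print(f.reshape(4, 3))
    ```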

  1. To bind or not to bind, that's the wrong question: Features and objects coexist in visual short-term memory.

    PubMed

    Geigerman, Shriradha; Verhaeghen, Paul; Cerella, John

    2016-06-01

    In three experiments, we investigated whether features and whole-objects can be represented simultaneously in visual short-term memory (VSTM). Participants were presented with a memory set of colored shapes; we probed either for the constituent features or for the whole object, and analyzed retrieval dynamics (cumulative response time distributions). In our first experiment, we used whole-object probes that recombined features from the memory display; we found that subjects' data conformed to a kitchen-line model, showing that they used whole-object representations for the matching process. In the second experiment, we encouraged independent-feature representations by using probes that used features not present in the memory display; subjects' data conformed to the race-model inequality, showing that they used independent-feature representations for the matching process. In a final experiment, we used both types of probes; subjects now used both types of representations, depending on the nature of the probe. Combined, our three experiments suggest that both feature and whole-object representations can coexist in VSTM. Copyright © 2016 Elsevier B.V. All rights reserved.

  2. Multi-object segmentation using coupled nonparametric shape and relative pose priors

    NASA Astrophysics Data System (ADS)

    Uzunbas, Mustafa Gökhan; Soldea, Octavian; Çetin, Müjdat; Ünal, Gözde; Erçil, Aytül; Unay, Devrim; Ekin, Ahmet; Firat, Zeynep

    2009-02-01

    We present a new method for multi-object segmentation in a maximum a posteriori estimation framework. Our method is motivated by the observation that neighboring or coupling objects in images generate configurations and co-dependencies which could potentially aid in segmentation if properly exploited. Our approach employs coupled shape and inter-shape pose priors that are computed using training images in a nonparametric multi-variate kernel density estimation framework. The coupled shape prior is obtained by estimating the joint shape distribution of multiple objects and the inter-shape pose priors are modeled via standard moments. Based on such statistical models, we formulate an optimization problem for segmentation, which we solve by an algorithm based on active contours. Our technique provides significant improvements in the segmentation of weakly contrasted objects in a number of applications. In particular for medical image analysis, we use our method to extract brain Basal Ganglia structures, which are members of a complex multi-object system posing a challenging segmentation problem. We also apply our technique to the problem of handwritten character segmentation. Finally, we use our method to segment cars in urban scenes.

  3. The interplay of various sources of noise on reliability of species distribution models hinges on ecological specialisation.

    PubMed

    Soultan, Alaaeldin; Safi, Kamran

    2017-01-01

    Digitized species occurrence data provide an unprecedented source of information for ecologists and conservationists. Species distribution model (SDM) has become a popular method to utilise these data for understanding the spatial and temporal distribution of species, and for modelling biodiversity patterns. Our objective is to study the impact of noise in species occurrence data (namely sample size and positional accuracy) on the performance and reliability of SDM, considering the multiplicative impact of SDM algorithms, species specialisation, and grid resolution. We created a set of four 'virtual' species characterized by different specialisation levels. For each of these species, we built the suitable habitat models using five algorithms at two grid resolutions, with varying sample sizes and different levels of positional accuracy. We assessed the performance and reliability of the SDM according to classic model evaluation metrics (Area Under the Curve and True Skill Statistic) and model agreement metrics (Overall Concordance Correlation Coefficient and geographic niche overlap) respectively. Our study revealed that species specialisation had by far the most dominant impact on the SDM. In contrast to previous studies, we found that for widespread species, low sample size and low positional accuracy were acceptable, and useful distribution ranges could be predicted with as few as 10 species occurrences. Range predictions for narrow-ranged species, however, were sensitive to sample size and positional accuracy, such that useful distribution ranges required at least 20 species occurrences. Against expectations, the MAXENT algorithm poorly predicted the distribution of specialist species at low sample size.
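
    The two evaluation metrics named here are quick to compute from presence/absence labels and model scores; the synthetic data below are placeholders.

    ```python
    import numpy as np
    from sklearn.metrics import roc_auc_score, confusion_matrix

    def evaluate_sdm(y_true, y_score, threshold=0.5):
        """AUC plus True Skill Statistic = sensitivity + specificity - 1."""
        auc = roc_auc_score(y_true, y_score)
        y_pred = (y_score >= threshold).astype(int)
        tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
        tss = tp / (tp + fn) + tn / (tn + fp) - 1.0
        return auc, tss

    rng = np.random.default_rng(0)
    y_true = rng.integers(0, 2, 200)                       # presence/absence
    y_score = np.clip(0.6 * y_true + 0.5 * rng.random(200), 0, 1)
    print(evaluate_sdm(y_true, y_score))
    ```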

  4. Financial Effect of a Drug Distribution Model Change on a Health System.

    PubMed

    Turingan, Erin M; Mekoba, Bijan C; Eberwein, Samuel M; Roberts, Patricia A; Pappas, Ashley L; Cruz, Jennifer L; Amerine, Lindsey B

    2017-06-01

    Background: Drug manufacturers change distribution models based on patient safety and product integrity needs. These model changes can limit health-system access to medications, and the financial impact on health systems can be significant. Objective: The primary aim of this study was to determine the health-system financial impact of a manufacturer's change from open to limited distribution for bevacizumab (Avastin), rituximab (Rituxan), and trastuzumab (Herceptin). The secondary aim was to identify opportunities to shift administration to outpatient settings to support formulary change. Methods: To assess the financial impact on the health system, the cost minus discount was applied to total drug expenditure during a 1-year period after the distribution model change. The opportunity analysis was conducted for three institutions within the health system through chart review of each inpatient administration. Opportunity cost was the sum of the inpatient administration cost and outpatient administration margin. Results: The total drug expenditure for the study period was $26 427 263. By applying the cost minus discount, the financial effect of the distribution model change was $1 393 606. A total of 387 administrations were determined to be opportunities to be shifted to the outpatient setting. During the study period, the total opportunity cost was $1 766 049. Conclusion: Drug expenditure increased for the health system due to the drug distribution model change and loss of cost minus discount. The opportunity cost of shifting inpatient administrations could offset the increase in expenditure. It is recommended to restrict bevacizumab, rituximab, and trastuzumab through Pharmacy & Therapeutics Committees to outpatient use where clinically appropriate.

  5. EUROPLANET-RI modelling service for the planetary science community: European Modelling and Data Analysis Facility (EMDAF)

    NASA Astrophysics Data System (ADS)

    Khodachenko, Maxim; Miller, Steven; Stoeckler, Robert; Topf, Florian

    2010-05-01

    Computational modeling and observational data analysis are two major aspects of modern scientific research. Both are nowadays under extensive development and application. Many of the scientific goals of planetary space missions require robust models of planetary objects and environments as well as efficient data analysis algorithms, to predict conditions for mission planning and to interpret the experimental data. Europe has great strength in these areas, but it is insufficiently coordinated; individual groups, models, techniques and algorithms need to be coupled and integrated. The existing level of scientific cooperation and the technical capabilities for operative communication allow considerable progress in the development of a distributed international Research Infrastructure (RI) based on the computational modelling and data analysis centers existing in Europe, providing the scientific community with dedicated services in the fields of their computational and data analysis expertise. These services will appear as a product of the collaborative communication and joint research efforts of the numerical and data analysis experts together with planetary scientists. The major goal of the EUROPLANET-RI / EMDAF is to make computational models and data analysis algorithms associated with particular national RIs and teams, as well as their outputs, more readily available to their potential user community and more tailored to scientific user requirements, without compromising front-line specialized research on model and data analysis algorithm development and software implementation. This objective will be met through four key subdivisions/tasks of EMDAF: 1) an Interactive Catalogue of Planetary Models; 2) a Distributed Planetary Modelling Laboratory; 3) a Distributed Data Analysis Laboratory, and 4) enabling Models and Routines for High Performance Computing Grids. Using the advantages of the coordinated operation and efficient communication between the involved computational modelling, research and data analysis expert teams and their related research infrastructures, EMDAF will provide a 1) flexible, 2) scientific-user-oriented, 3) continuously developing and fast-upgrading computational and data analysis service to support and intensify European planetary scientific research. At the beginning, EMDAF will create a set of demonstrators and operational tests of this service in key areas of European planetary science. This work will aim at the following objectives: (a) Development and implementation of tools for distant interactive communication between the planetary scientists and computing experts (including related RIs); (b) Development of standard routine packages, and user-friendly interfaces for operation of the existing numerical codes and data analysis algorithms by the specialized planetary scientists; (c) Development of a prototype of numerical modelling services "on demand" for space missions and planetary researchers; (d) Development of a prototype of data analysis services "on demand" for space missions and planetary researchers; (e) Development of a prototype of coordinated interconnected simulations of planetary phenomena and objects (global multi-model simulators); (f) Providing the demonstrators of a coordinated use of high performance computing facilities (super-computer networks), done in cooperation with the European HPC Grid DEISA.

  6. Automatic anatomy recognition via multiobject oriented active shape models.

    PubMed

    Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A

    2010-12-01

    This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) multiobject generalization of OASM and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm, wherein the first level, at the pixel level, aims to find optimal oriented boundary segments between successive landmarks; the second level, at the landmark level, aims to find optimal locations for the landmarks; and the third level, at the object level, aims to find the optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost for all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of the pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number of objects and the objects' distribution and size in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%. Typically, a recognition accuracy of ≥90% yielded a TPVF ≥95% and FPVF ≤0.5%. Over the three data sets and over all tested objects, in 97% of the cases the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread-out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.

  7. Autonomous physics-based color learning under daylight

    NASA Astrophysics Data System (ADS)

    Berube Lauziere, Yves; Gingras, Denis J.; Ferrie, Frank P.

    1999-09-01

    An autonomous approach for learning the colors of specific objects assumed to have known body spectral reflectances is developed for daylight illumination conditions. The main issue is to be able to find these objects autonomously in a set of training images captured under a wide variety of daylight illumination conditions, and to extract their colors to determine color space regions that are representative of the objects' colors and their variations. The work begins by modeling color formation under daylight using the color formation equations and the semi-empirical model of Judd, MacAdam and Wyszecki (CIE daylight model) for representing the typical spectral distributions of daylight. This results in color space regions that serve as prior information in the initial phase of learning, which consists in detecting small reliable clusters of pixels having the appropriate colors. These clusters are then expanded by a region growing technique using broader color space regions than those predicted by the model, in order to detect objects in a way that accounts for color variations that the model cannot capture due to its limitations. Validation on the detected objects is performed to filter out those that are not of interest and to eliminate unreliable pixel color values extracted from the remaining ones. Detection results using the color space regions determined from color values obtained by this procedure are discussed.

  8. Design and Analysis of a Static Aeroelastic Experiment

    NASA Astrophysics Data System (ADS)

    Hou, Ying-Yu; Yuan, Kai-Hua; Lv, Ji-Nan; Liu, Zi-Qiang

    2016-06-01

    Static aeroelastic experiments are very common in the United States and Russia. The objective of a static aeroelastic experiment is to investigate the deformation and loads of an elastic structure in a flow field. Generally speaking, the prerequisite of such an experiment is that the stiffness distribution of the structure is known. This paper describes a method for designing experimental models in the case where the stiffness distribution and boundary conditions of a real aircraft are both uncertain. The stiffness distribution of the structure can be calculated via finite element modeling and simulation, and F141 steel and rigid foam are used to make the elastic model. In this paper, the design and manufacturing process of static aeroelastic models is presented: a set of experimental models was designed to simulate the stiffness of the designed wings, and a set of experiments was designed to check the results. The test results show that the experimental method can effectively complete the design work of the elastic model. This paper introduces the whole process of the static aeroelastic experiment, and the experimental results are analyzed. This work developed a static aeroelasticity experiment technique and established an experiment model targeting the swept wing of a large-aspect-ratio aircraft.

  9. Detecting failure of climate predictions

    USGS Publications Warehouse

    Runge, Michael C.; Stroeve, Julienne C.; Barrett, Andrew P.; McDonald-Madden, Eve

    2016-01-01

    The practical consequences of climate change challenge society to formulate responses that are more suited to achieving long-term objectives, even if those responses have to be made in the face of uncertainty [1, 2]. Such a decision-analytic focus uses the products of climate science as probabilistic predictions about the effects of management policies [3]. Here we present methods to detect when climate predictions are failing to capture the system dynamics. For a single model, we measure goodness of fit based on the empirical distribution function, and define failure when the distribution of observed values significantly diverges from the modelled distribution. For a set of models, the same statistic can be used to provide relative weights for the individual models, and we define failure when there is no linear weighting of the ensemble models that produces a satisfactory match to the observations. Early detection of failure of a set of predictions is important for improving model predictions and the decisions based on them. We show that these methods would have detected a range shift in northern pintail 20 years before it was actually discovered, and are increasingly giving more weight to those climate models that forecast a September ice-free Arctic by 2055.
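
    The single-model test described here, divergence of the empirical distribution function of observations from the model's predictive distribution, is essentially a one-sample goodness-of-fit test; a SciPy sketch follows, with a Gaussian predictive distribution as a stand-in.

    ```python
    import numpy as np
    from scipy.stats import kstest, norm

    def prediction_fails(observations, model_cdf, alpha=0.05):
        """Declare failure when the empirical distribution of observations
        diverges significantly from the model's predictive CDF."""
        stat, pvalue = kstest(observations, model_cdf)
        return pvalue < alpha, stat

    # Hypothetical example: observed anomalies drifting away from the forecast
    obs = np.random.default_rng(0).normal(0.6, 1.0, size=40)
    failed, d = prediction_fails(obs, norm(loc=0.0, scale=1.0).cdf)
    print(failed, round(d, 3))
    ```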

  10. Two Ideals of Educational Justice

    ERIC Educational Resources Information Center

    Stillwaggon, James

    2016-01-01

    Background/Context: This essay takes up McClintock's (2004) critique of educational discourses as overly dependent upon a distributive model of justice and largely ignorant of the formative assumptions that ground educational policy and practice. Purpose/Objective/Research Question/Focus of Study: The question that McClintock's analysis begs is…

  11. Marketing Distributive Education. Secondary Curriculum Guide.

    ERIC Educational Resources Information Center

    Holmes, Wally S.; And Others

    This curriculum guide is intended to provide vocational teachers, supervisors, administrators, and counselors with a suggested model for organizing a course in general marketing. Discussed first are the philosophy, purpose, and objectives of the course. Second, course admissions and recruitment procedures are outlined. Included in the next three…

  12. Modeling of thermal storage systems in MILP distributed energy resource models

    DOE PAGES

    Steen, David; Stadler, Michael; Cardoso, Gonçalo; ...

    2014-08-04

    Thermal energy storage (TES) and distributed generation technologies, such as combined heat and power (CHP) or photovoltaics (PV), can be used to reduce energy costs and decrease CO2 emissions from buildings by shifting energy consumption to times with less emissions and/or lower energy prices. To determine the feasibility of investing in TES in combination with other distributed energy resources (DER), mixed integer linear programming (MILP) can be used. Such a MILP model is the well-established Distributed Energy Resources Customer Adoption Model (DER-CAM); however, it currently uses only a simplified TES model to guarantee linearity and short run-times, with loss calculations based only on the energy contained in the storage. This paper presents a new DER-CAM TES model that allows improved tracking of losses based on ambient and storage temperatures, and compares results with the previous version. A multi-layer TES model is introduced that retains linearity and avoids creating an endogenous optimization problem. The improved model increases the accuracy of the estimated storage losses and enables use of heat pumps for low temperature storage charging. Ultimately, results indicate that the previous model overestimates the attractiveness of TES investments for cases without the possibility to invest in heat pumps and underestimates it for some locations when heat pumps are allowed. Despite a variation in optimal technology selection between the two models, the objective function value stays quite stable, illustrating the complexity of optimal DER sizing problems in buildings and microgrids.
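
    A minimal storage-balance sketch with a temperature-dependent standing loss that stays linear in the decision variables is given below in PuLP; the prices, temperatures, loss coefficient, and demand profile are placeholders, not DER-CAM values.

    ```python
    import pulp

    T = range(24)
    price = [0.12] * 8 + [0.25] * 12 + [0.12] * 4         # $/kWh-th, assumed
    ambient = [10 + 8 * (t in range(10, 18)) for t in T]  # deg C, assumed
    demand = [20.0] * 24                                  # kWh-th per hour, assumed
    k, t_storage, soc0 = 0.05, 60.0, 50.0                 # loss coeff, tank temp, init

    prob = pulp.LpProblem("tes_dispatch", pulp.LpMinimize)
    soc = pulp.LpVariable.dicts("soc", T, lowBound=0, upBound=200)
    charge = pulp.LpVariable.dicts("charge", T, lowBound=0, upBound=40)
    discharge = pulp.LpVariable.dicts("discharge", T, lowBound=0, upBound=40)

    for t in T:
        prev = soc[t - 1] if t > 0 else soc0
        loss = k * (t_storage - ambient[t])    # ambient-dependent standing loss
        prob += soc[t] == prev + charge[t] - discharge[t] - loss
        prob += discharge[t] <= demand[t]
    # Buy heat for charging and for unmet demand at the hourly price
    prob += pulp.lpSum(price[t] * (charge[t] + demand[t] - discharge[t]) for t in T)
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    ```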

  13. Robust Hydrological Forecasting for High-resolution Distributed Models Using a Unified Data Assimilation Approach

    NASA Astrophysics Data System (ADS)

    Hernandez, F.; Liang, X.

    2017-12-01

    Reliable real-time hydrological forecasting, to predict important phenomena such as floods, is invaluable to society. However, modern high-resolution distributed models have faced challenges when dealing with uncertainties that are caused by the large number of parameters and initial state estimations involved. Therefore, to rely on these high-resolution models for critical real-time forecast applications, considerable improvements in the parameter and initial state estimation techniques must be made. In this work we present a unified data assimilation algorithm called Optimized PareTo Inverse Modeling through Inverse STochastic Search (OPTIMISTS) to address the challenge of robust flood forecasting with high-resolution distributed models. This new algorithm combines the advantages of particle filters and variational methods in a unique way to overcome their individual weaknesses. The analysis of candidate particles compares model results with observations in a flexible time frame, and a multi-objective approach is proposed which attempts to simultaneously minimize differences with the observations and departures from the background states by using both Bayesian sampling and non-convex evolutionary optimization. Moreover, the resulting Pareto front is given a probabilistic interpretation through kernel density estimation to create a non-Gaussian distribution of the states. OPTIMISTS was tested on a low-resolution distributed land surface model using VIC (Variable Infiltration Capacity) and on a high-resolution distributed hydrological model using the DHSVM (Distributed Hydrology Soil Vegetation Model). In the tests, streamflow observations are assimilated. OPTIMISTS was also compared with a traditional particle filter and a variational method. Results show that our method can reliably produce adequate forecasts and that it outperforms those resulting from assimilating the observations with either a particle filter or an evolutionary 4D variational method alone. In addition, our method is shown to be efficient in tackling high-resolution applications with robust results.
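
    The probabilistic reading of the Pareto front via kernel density estimation can be illustrated in a few lines; `front` below is a synthetic stand-in for the non-dominated model states.

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    # `front` stands in for Pareto-optimal model states (rows = state
    # variables, columns = front members); values here are synthetic.
    front = np.random.default_rng(3).normal(size=(2, 40))

    density = gaussian_kde(front)        # non-Gaussian state distribution
    ensemble = density.resample(500)     # sampled states for the next forecast
    print(ensemble.shape)                # (2, 500)
    ```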

  14. Blind source separation based on time-frequency morphological characteristics for rigid acoustic scattering by underwater objects

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Li, Xiukun

    2016-06-01

    Separation of the components of rigid acoustic scattering by underwater objects is essential in obtaining the structural characteristics of such objects. To overcome the problem that rigid structures appear to have the same spectral structure in the time domain, time-frequency Blind Source Separation (BSS) can be used in combination with image morphology to separate the rigid scattering components of different objects. Based on a highlight model, the separation of the rigid scattering structure of objects with time-frequency distribution is deduced. Using a morphological filter, the different characteristics observed in a Wigner-Ville Distribution (WVD) for the single auto term and the cross terms can be used to remove cross-term interference. By selecting the time and frequency points of the auto-term signal, the accuracy of BSS can be improved. An experimental simulation has been used, with changes in the pulse width of the transmitted signal, the relative amplitude, and the time delay parameter, in order to analyze the feasibility of this new method. Simulation results show that the new method is not only able to separate rigid scattering components, but can also separate the components when elastic scattering and rigid scattering exist at the same time. Experimental results confirm that the new method can be used in separating the rigid scattering structure of underwater objects.
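
    The WVD-plus-morphology idea can be sketched as follows: compute a discrete Wigner-Ville distribution of the analytic signal and apply a grey-scale morphological opening, which preserves the compact auto-term blobs while thinning the oscillatory cross terms. The two-tone test signal and the structuring-element size are assumptions.

    ```python
    import numpy as np
    from scipy.signal import hilbert
    from scipy.ndimage import grey_opening

    def wigner_ville(x):
        """Discrete Wigner-Ville distribution of an analytic signal x."""
        n = len(x)
        W = np.zeros((n, n))
        for t in range(n):
            taumax = min(t, n - 1 - t)
            tau = np.arange(-taumax, taumax + 1)
            acf = np.zeros(n, dtype=complex)
            acf[tau % n] = x[t + tau] * np.conj(x[t - tau])
            W[:, t] = np.fft.fft(acf).real
        return W

    # Two components (a toy "highlight" pair); the cross term oscillates
    # between the auto terms, so a morphological opening suppresses it.
    t = np.linspace(0, 1, 256)
    sig = hilbert(np.sin(2 * np.pi * 40 * t) + np.sin(2 * np.pi * 90 * t))
    W = wigner_ville(sig)
    W_clean = grey_opening(W, size=(5, 5))       # structuring element assumed
    ```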

  15. Moving vehicles segmentation based on Gaussian motion model

    NASA Astrophysics Data System (ADS)

    Zhang, Wei; Fang, Xiang Z.; Lin, Wei Y.

    2005-07-01

    Moving object segmentation is a challenge in computer vision. This paper focuses on the segmentation of moving vehicles in dynamic scenes. We analyze the psychology of human vision and present a framework for segmenting moving vehicles on the highway. The proposed framework consists of two parts. Firstly, we propose an adaptive background update method in which the background is updated according to the change of illumination conditions and can thus adapt to illumination changes sensitively. Secondly, we construct a Gaussian motion model to segment moving vehicles, in which the motion vectors of the moving pixels are modeled as a Gaussian distribution and an on-line EM algorithm is used to update the model. The Gaussian distribution of the adaptive model is evaluated to determine which motion vectors result from moving vehicles and which from other moving objects such as waving trees. Finally, the pixels whose motion vectors result from moving vehicles are segmented. Experimental results on several typical scenes show that the proposed model can detect moving vehicles correctly and is immune to the influence of moving objects caused by waving trees and camera vibration.
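
    The on-line update of a single Gaussian over motion vectors is typically an exponential (recursive EM-style) update; a sketch follows, with the learning rate and the 3-sigma gate as assumptions rather than the paper's settings.

    ```python
    import numpy as np

    class OnlineGaussian:
        """Single Gaussian over 2D motion vectors with exponential updates."""
        def __init__(self, mu, var, rho=0.05):       # rho = learning rate (assumed)
            self.mu, self.var, self.rho = np.asarray(mu, float), float(var), rho

        def mahalanobis2(self, v):
            d = np.asarray(v, float) - self.mu
            return float(d @ d) / self.var           # isotropic variance assumed

        def update(self, v):
            v = np.asarray(v, float)
            self.mu = (1 - self.rho) * self.mu + self.rho * v
            d = v - self.mu
            self.var = (1 - self.rho) * self.var + self.rho * float(d @ d)

        def is_vehicle(self, v, gate=9.0):           # ~3-sigma gate (assumption)
            return self.mahalanobis2(v) < gate

    model = OnlineGaussian(mu=[2.0, 0.0], var=1.0)
    model.update([2.2, 0.1])
    print(model.is_vehicle([2.1, -0.1]), model.is_vehicle([0.0, 3.0]))
    ```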

  16. A unified framework of unsupervised subjective optimized bit allocation for multiple video object coding

    NASA Astrophysics Data System (ADS)

    Chen, Zhenzhong; Han, Junwei; Ngan, King Ngi

    2005-10-01

    MPEG-4 treats a scene as a composition of several objects, or so-called video object planes (VOPs), that are separately encoded and decoded. Such a flexible video coding framework makes it possible to code different video objects at different distortion scales. It is necessary to analyze the priority of the video objects according to their semantic importance, intrinsic properties, and psycho-visual characteristics, so that the bit budget can be distributed properly among video objects to improve the perceptual quality of the compressed video. This paper aims to provide an automatic video object priority definition method based on an object-level visual attention model and further proposes an optimization framework for video object bit allocation. One significant contribution of this work is that human visual system characteristics are incorporated into the video coding optimization process. Another advantage is that the priority of a video object can be obtained automatically instead of fixing weighting factors before encoding or relying on user interactivity. To evaluate the performance of the proposed approach, we compare it with the traditional verification model bit allocation and optimal multiple video object bit allocation algorithms. Compared with traditional bit allocation algorithms, the objective quality of objects with higher priority is significantly improved under this framework. These results demonstrate the usefulness of this unsupervised subjective quality lifting framework.
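
    One classical way to realize priority-weighted allocation (our illustration, not necessarily the authors' optimizer) is the closed-form solution of minimizing a weighted exponential rate-distortion sum, sum over i of w_i·a_i·2^(-2·r_i), under a total bit budget R:

    ```python
    import numpy as np

    def allocate_bits(weights, complexities, total_bits):
        """Closed form for min sum w_i a_i 2^(-2 r_i) s.t. sum r_i = R:
        r_i = R/N + 0.5*log2(w_i a_i / geometric_mean(w a)); negative rates
        are clipped and the budget is re-split among the remaining objects."""
        wa = np.asarray(weights, float) * np.asarray(complexities, float)
        active = np.ones(len(wa), bool)
        r = np.zeros(len(wa))
        while True:
            n = active.sum()
            gm = np.exp(np.mean(np.log(wa[active])))
            r[active] = total_bits / n + 0.5 * np.log2(wa[active] / gm)
            if (r[active] >= 0).all():
                return r
            active &= r >= 0
            r[~active] = 0.0

    # Attention-derived priorities (assumed) and equal complexities
    print(allocate_bits([0.6, 0.3, 0.1], [1.0, 1.0, 1.0], total_bits=12.0))
    ```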

  17. Port-O-Sim Object Simulation Application

    NASA Technical Reports Server (NTRS)

    Lanzi, Raymond J.

    2009-01-01

    Port-O-Sim is a software application that supports engineering modeling and simulation of launch-range systems and subsystems, as well as the vehicles that operate on them. It is flexible, distributed, object-oriented, and real-time. A scripting language is used to configure an array of simulation objects and link them together. The script is contained in a text file, but is executed and controlled using a graphical user interface. A set of modules is defined, each with input variables, output variables, and settings. These engineering models can be either linked to each other or run standalone. The settings can be modified during execution. Since 2001, this application has been used for pre-mission failure-mode training for many range safety scenarios. It contains range asset link analysis, develops look-angle data, supports sky-screen site selection, drives GPS (Global Positioning System) and IMU (Inertial Measurement Unit) simulators, and can support conceptual design efforts for multiple flight programs with its capacity for rapid six-degrees-of-freedom model development. Because various object types are assembled into one application, it is applicable across a wide variety of launch-range problem domains.

  18. Applications of New Surrogate Global Optimization Algorithms including Efficient Synchronous and Asynchronous Parallelism for Calibration of Expensive Nonlinear Geophysical Simulation Models.

    NASA Astrophysics Data System (ADS)

    Shoemaker, C. A.; Pang, M.; Akhtar, T.; Bindel, D.

    2016-12-01

    New parallel surrogate global optimization algorithms are developed and applied to objective functions that are expensive simulations (possibly with multiple local minima). The algorithms can be applied to most geophysical simulations, including those with nonlinear partial differential equations, and do not require that the simulations themselves be parallelized. Asynchronous (and synchronous) parallel execution is available in the optimization toolbox "pySOT". The parallel algorithms are modified from their serial versions to avoid fine-grained parallelism. The optimization is computed with the open-source software pySOT, a Surrogate Global Optimization Toolbox that allows the user to pick the type of surrogate (or ensembles), the search procedure on the surrogate, and the type of parallelism (synchronous or asynchronous). pySOT also allows the user to develop new algorithms by modifying parts of the code. In the applications here, the objective function takes up to 30 minutes for one simulation, and serial optimization can take over 200 hours. Results from the Yellowstone (NSF) and NCSS (Singapore) supercomputers are given for groundwater contaminant hydrology simulations, with applications to model parameter estimation and decontamination management. All results are compared with alternatives. The first results are for optimization of pumping at many wells to reduce the cost of decontaminating groundwater at a superfund site. The optimization runs with up to 128 processors. Superlinear speedup is obtained for up to 16 processors, and efficiency with 64 processors is over 80%. Each evaluation of the objective function requires the solution of nonlinear partial differential equations describing the impact of spatially distributed pumping and model parameters on model predictions for the spatial and temporal distribution of groundwater contaminants. The second application uses asynchronous parallel global optimization for groundwater quality model calibration; the time for a single objective function evaluation varies unpredictably, so efficiency is improved with asynchronous parallel calculations that improve load balancing. The third application (done at NCSS) incorporates new global surrogate multi-objective parallel search algorithms into pySOT and applies them to a large watershed calibration problem.
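
    The overall loop behind such methods fits in a few lines; the sketch below is a generic serial surrogate-optimization loop in the spirit of the abstract, not the pySOT API: scipy's RBFInterpolator stands in for the toolbox's surrogates, and the candidate-generation rule is our own simplification.

    ```python
    import numpy as np
    from scipy.stats import qmc
    from scipy.interpolate import RBFInterpolator

    def surrogate_minimize(f, lb, ub, n_init=10, n_iter=30, n_cand=500, seed=0):
        """Fit an RBF surrogate to the evaluated points, then spend each
        expensive evaluation of f on the candidate the surrogate likes best."""
        lb, ub = np.asarray(lb, float), np.asarray(ub, float)
        rng = np.random.default_rng(seed)
        dim = len(lb)
        # space-filling initial design
        X = qmc.scale(qmc.LatinHypercube(d=dim, seed=seed).random(n_init), lb, ub)
        y = np.array([f(x) for x in X])
        for _ in range(n_iter):
            surrogate = RBFInterpolator(X, y)
            # candidates: perturbations of the incumbent plus uniform samples
            best = X[np.argmin(y)]
            local = best + 0.1 * (ub - lb) * rng.standard_normal((n_cand // 2, dim))
            uniform = rng.uniform(lb, ub, size=(n_cand // 2, dim))
            cand = np.clip(np.vstack([local, uniform]), lb, ub)
            x_new = cand[np.argmin(surrogate(cand))]
            X = np.vstack([X, x_new])
            y = np.append(y, f(x_new))
        return X[np.argmin(y)], y.min()
    ```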

  19. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    NASA Astrophysics Data System (ADS)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial-flow hydraulic turbine runner blade geometry. In order to obtain a better design with good performance, a new comprehensive performance optimization procedure is presented, combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. Following careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives, and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with a specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through the optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.
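
    In symbols, the comprehensive objective function described above takes the weighted-sum form (our illustrative notation, not the paper's):

    ```latex
    \min_{\mathbf{x}}\ F(\mathbf{x}) = w_1\,\Delta h(\mathbf{x}) + w_2\,\sigma_c(\mathbf{x}),
    \qquad w_1 + w_2 = 1, \qquad g_j(\mathbf{x}) \le 0,
    ```

    where \Delta h is the total hydraulic loss, \sigma_c the cavitation coefficient, \mathbf{x} collects the circulation-distribution and edge-position parameters, and the g_j are the design constraints; adjusting w_1 and w_2 trades one objective against the other.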

  20. Self-powered information measuring wireless networks using the distribution of tasks within multicore processors

    NASA Astrophysics Data System (ADS)

    Zhuravska, Iryna M.; Koretska, Oleksandra O.; Musiyenko, Maksym P.; Surtel, Wojciech; Assembay, Azat; Kovalev, Vladimir; Tleshova, Akmaral

    2017-08-01

    The article presents basic approaches to developing self-powered information measuring wireless networks (SPIM-WN) for critical applications, using the distribution of tasks within multicore processors and based on the interaction of movable components, both for data transmission and for the wireless transfer of energy coming from polymetric sensors. A basic mathematical model of task scheduling in multiprocessor systems was modernized to schedule and allocate tasks between the cores of a system-on-chip (SoC) in order to increase the energy efficiency of SPIM-WN objects.

  1. Compensation of distributed delays in integrated communication and control systems

    NASA Technical Reports Server (NTRS)

    Ray, Asok; Luck, Rogelio

    1991-01-01

    The concept, analysis, implementation, and verification of a method for compensating delays that are distributed between the sensors, controller, and actuators within a control loop are discussed. With the objective of mitigating the detrimental effects of these network induced delays, a predictor-controller algorithm was formulated and analyzed. Robustness of the delay compensation algorithm was investigated relative to parametric uncertainties in plant modeling. The delay compensator was experimentally verified on an IEEE 802.4 network testbed for velocity control of a DC servomotor.
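
    The core of a predictor-controller of this kind is to propagate the plant model over the known loop delay so the control law acts on a predicted rather than a stale state. The sketch below shows that generic idea for a discrete linear model; it is our simplification, not the paper's exact algorithm.

    ```python
    import numpy as np

    def predict_state(A, B, x_k, u_in_transit):
        """Propagate x[k+1] = A x[k] + B u[k] over the d delayed steps,
        using the d control inputs already sent but not yet applied."""
        x = np.asarray(x_k, dtype=float)
        for u in u_in_transit:
            x = A @ x + B @ u
        return x

    # The controller then computes u[k+d] from the predicted state
    # instead of the network-delayed measurement x[k].
    ```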

  2. Real Time Text Analysis

    NASA Astrophysics Data System (ADS)

    Senthilkumar, K.; Ruchika Mehra Vijayan, E.

    2017-11-01

    This paper aims to illustrate real-time analysis of large-scale data. For a practical implementation, we perform sentiment analysis on live Twitter feeds for each individual tweet. To analyze sentiments, we train our data model on SentiWordNet, a polarity-assigned WordNet sample by Princeton University. Our main objective is to efficiently analyze large-scale data on the fly using distributed computation. The Apache Spark and Apache Hadoop ecosystem is used as the distributed computation platform, with Java as the development language.
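
    At its simplest, lexicon-based scoring of a tweet reduces to summing per-token polarities; a toy sketch (in Python rather than the paper's Java/Spark stack, with an assumed token-to-score lexicon):

    ```python
    def tweet_polarity(tweet, lexicon):
        """Sum SentiWordNet-style polarity scores over the tweet's
        tokens; a positive total indicates positive sentiment."""
        return sum(lexicon.get(tok, 0.0) for tok in tweet.lower().split())

    # tweet_polarity("good film great cast", {"good": 0.6, "great": 0.8})  # 1.4
    # In the distributed setting this function is mapped over the tweet
    # stream, e.g. one task per micro-batch partition.
    ```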

  3. The Spiral Arm Segments of the Galaxy within 3 kpc from the Sun: A Statistical Approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Griv, Evgeny; Jiang, Ing-Guey; Hou, Li-Gang, E-mail: griv@bgu.ac.il

    As can be reasonably expected, upcoming large-scale APOGEE, GAIA, GALAH, LAMOST, and WEAVE stellar spectroscopic surveys will yield rather noisy Galactic distributions of stars. In view of the possibility of employing these surveys, our aim is to present a statistical method to extract information about the spiral structure of the Galaxy from currently available data, and to demonstrate the effectiveness of this method. The model differs from previous works studying how objects are distributed in space in its calculation of the statistical significance of the hypothesis that some of the objects are actually concentrated in a spiral. A statistical analysis of the distribution of cold dust clumps within molecular clouds, H ii regions, Cepheid stars, and open clusters in the nearby Galactic disk within 3 kpc from the Sun is carried out. As an application of the method, we obtain distances between the Sun and the centers of the neighboring Sagittarius arm segment, the Orion arm segment in which the Sun is located, and the Perseus arm segment. Pitch angles of the logarithmic spiral segments and their widths are also estimated. The hypothesis that the collected objects accidentally form spirals is refuted with almost 100% statistical confidence. We show that these four independent distributions of young objects lead to essentially the same results. We also demonstrate that our newly deduced values of the mean distances and pitch angles for the segments are not too far from those found recently by Reid et al. using VLBI-based trigonometric parallaxes of massive star-forming regions.

  4. Routing and Scheduling Optimization Model of Sea Transportation

    NASA Astrophysics Data System (ADS)

    Barus, Mika Debora Br; Asyrafy, Habib; Nababan, Esther; Mawengkang, Herman

    2018-01-01

    This paper examines a routing and scheduling optimization model for sea transportation. One of the issues discussed is the transportation of ships carrying crude oil (tankers) that is distributed to many islands. The consideration is the cost of transportation, which consists of travel costs and the cost of layover at ports. The crude oil to be distributed consists of several types. This paper develops a routing and scheduling model taking into consideration several objective functions and constraints. The mathematical model is formulated to minimize costs based on the total distance visited by the tanker and to minimize port costs. To make the model more realistic and the computed costs more appropriate, a parameter is added that represents the multiplier factor by which costs increase as the tanker is loaded with crude oil.

  5. Mass-balance modelling of Ak-Shyirak massif Glaciers, Inner Tian Shan

    NASA Astrophysics Data System (ADS)

    Rets, Ekaterina; Barandun, Martina; Belozerov, Egor; Petrakov, Dmitry; Shpuntova, Alena

    2017-04-01

    Tian Shan is the water tower of Central Asia. Rapid and accelerating glacier downwasting is typical for this region. The study sites, Sary-Tor Glacier and Glacier No. 354, are located in the Ak-Shyirak massif, in the Naryn headwaters. Sary-Tor was chosen as representative for Ak-Shyirak (Ushnurtsev, 1991; Oledeneniye Tian Shanya, 1995) for direct mass-balance measurements in 1985-1991. Glacier No. 354 was an object of direct mass-balance measurements for 2011-2016. The energy-balance distributed A-Melt model (Rets et al, 2010) was used to reconstruct the mass balance of the glaciers for 2003-2015. Verification of the modeling results showed good agreement with direct melt measurements at ablation stakes and with mass loss according to the geodetic method. Modeling results for Glacier No. 354 were compared to a different modeling approach: distributed accumulation and temperature-index melt (Kronenberg et al, 2016).

  6. High-resolution imaging of the Pluto-Charon system with the Faint Object Camera of the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Albrecht, R.; Barbieri, C.; Adorf, H.-M.; Corrain, G.; Gemmo, A.; Greenfield, P.; Hainaut, O.; Hook, R. N.; Tholen, D. J.; Blades, J. C.

    1994-01-01

    Images of the Pluto-Charon system were obtained with the Faint Object Camera (FOC) of the Hubble Space Telescope (HST) after the refurbishment of the telescope. The images are of superb quality, allowing the determination of radii, fluxes, and albedos. Attempts were made to improve the resolution of the already diffraction-limited images by image restoration. These yielded indications of surface albedo distributions qualitatively consistent with models derived from observations of Pluto-Charon mutual eclipses.

  7. Real-Time Network Management

    DTIC Science & Technology

    1998-07-01

    Report No. WH97JR00-A002: Real-Time Network Management, Final Technical Report, Synectics Corporation, July 1998. Sponsored by the Defense Advanced Research Projects Agency. Approved for public release; distribution unlimited. Contents include WAN-class networks, IEEE 802.3-class networks, and object modeling for architecture (managed objects).

  8. Non-linear multi-objective model for planning water-energy modes of Novosibirsk Hydro Power Plant

    NASA Astrophysics Data System (ADS)

    Alsova, O. K.; Artamonova, A. V.

    2018-05-01

    This paper presents a non-linear multi-objective model for planning and optimizing the water-energy modes of the Novosibirsk Hydro Power Plant (HPP). An important problem is to develop a strategy that improves the scheduling of water-power modes and ensures the effective operation of hydropower plants. It is necessary to determine the methods and criteria for the optimal distribution of water resources, to develop a set of models, and to apply them in the software implementation of a DSS (decision-support system) for managing Novosibirsk HPP modes. One possible version of the model is presented and investigated in this paper. An experimental study of the model was carried out with 2017 data, solving the planning task for ten-day periods from April to July (12 ten-day periods in total).

  9. A development framework for distributed artificial intelligence

    NASA Technical Reports Server (NTRS)

    Adler, Richard M.; Cottman, Bruce H.

    1989-01-01

    The authors describe distributed artificial intelligence (DAI) applications in which multiple organizations of agents solve multiple domain problems. They then describe work in progress on a DAI system development environment, called SOCIAL, which consists of three primary language-based components. The Knowledge Object Language defines models of knowledge representation and reasoning. The metaCourier language supplies the underlying functionality for interprocess communication and control access across heterogeneous computing environments. The metaAgents language defines models for agent organization, coordination, control, and resource management. Application agents and agent organizations will be constructed by combining metaAgents and metaCourier building blocks with task-specific functionality such as diagnostic or planning reasoning. This architecture hides implementation details of communications, control, and integration in distributed processing environments, enabling application developers to concentrate on the design and functionality of the intelligent agents and agent networks themselves.

  10. Modeling and evaluating the performance of Brillouin distributed optical fiber sensors.

    PubMed

    Soto, Marcelo A; Thévenaz, Luc

    2013-12-16

    A thorough analysis of the key factors impacting on the performance of Brillouin distributed optical fiber sensors is presented. An analytical expression is derived to estimate the error on the determination of the Brillouin peak gain frequency, based for the first time on real experimental conditions. This expression is experimentally validated, and describes how this frequency uncertainty depends on measurement parameters, such as Brillouin gain linewidth, frequency scanning step and signal-to-noise ratio. Based on the model leading to this expression and considering the limitations imposed by nonlinear effects and pump depletion, a figure-of-merit is proposed to fairly compare the performance of Brillouin distributed sensing systems. This figure-of-merit offers to the research community and to potential users the possibility to evaluate with an objective metric the real performance gain resulting from any proposed configuration.
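
    The dependence of the peak-frequency uncertainty on SNR and scanning step can be reproduced numerically; the sketch below is a Monte Carlo toy (Lorentzian gain plus noise, three-point parabolic peak fit), not the paper's analytical expression, and all parameter values are illustrative.

    ```python
    import numpy as np

    def peak_frequency_error(snr, linewidth=30e6, step=1e6, trials=2000, seed=0):
        """Scan a noisy Lorentzian gain curve and locate the peak by a
        local quadratic fit; the spread of the fitted peak positions
        estimates the Brillouin frequency uncertainty."""
        rng = np.random.default_rng(seed)
        nu = np.arange(-5 * linewidth, 5 * linewidth, step)
        gain = 1.0 / (1.0 + (2 * nu / linewidth) ** 2)  # unit-amplitude Lorentzian
        errors = []
        for _ in range(trials):
            noisy = gain + rng.standard_normal(nu.size) / snr
            i = np.argmax(noisy)
            if 1 <= i <= nu.size - 2:
                # parabola through the 3 samples around the maximum
                c = np.polyfit(nu[i - 1:i + 2], noisy[i - 1:i + 2], 2)
                errors.append(-c[1] / (2 * c[0]))
        return np.std(errors)
    ```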

  11. Distributed model predictive control for constrained nonlinear systems with decoupled local dynamics.

    PubMed

    Zhao, Meng; Ding, Baocang

    2015-03-01

    This paper considers the distributed model predictive control (MPC) of nonlinear large-scale systems with dynamically decoupled subsystems. Based on the coupled states in the overall cost function of centralized MPC, the neighbors of each subsystem are identified and fixed, and the overall objective function is decomposed into local optimizations. In order to guarantee the closed-loop stability of the distributed MPC algorithm, the overall compatibility constraint of the centralized MPC algorithm is decomposed into each local controller. The communication between each subsystem and its neighbors is kept low: only the current states before optimization and the optimized input variables after optimization are transferred. For each local controller, the quasi-infinite-horizon MPC algorithm is adopted, and the global closed-loop system is proven to be exponentially stable.

  12. A Model-Based Expert System for Space Power Distribution Diagnostics

    NASA Technical Reports Server (NTRS)

    Quinn, Todd M.; Schlegelmilch, Richard F.

    1994-01-01

    When engineers diagnose system failures, they often use models to confirm system operation. This concept has produced a class of advanced expert systems that perform model-based diagnosis. A model-based diagnostic expert system for the Space Station Freedom electrical power distribution test bed is currently being developed at the NASA Lewis Research Center. The objective of this expert system is to autonomously detect and isolate electrical fault conditions. Marple, a software package developed at TRW, provides a model-based environment utilizing constraint suspension. Originally, constraint suspension techniques were developed for digital systems. However, Marple provides the mechanisms for applying this approach to analog systems such as the test bed, as well. The expert system was developed using Marple and Lucid Common Lisp running on a Sun Sparc-2 workstation. The Marple modeling environment has proved to be a useful tool for investigating the various aspects of model-based diagnostics. This report describes work completed to date and lessons learned while employing model-based diagnostics using constraint suspension within an analog system.

  13. Numerical Simulation of Roller Levelling using SIMULIA Abaqus

    NASA Astrophysics Data System (ADS)

    Trusov, K. A.; Mishnev, P. A.; Kopaev, O. V.; Nushtaev, D. V.

    2017-12-01

    A finite element (FE) 2D model of the roller levelling process is developed in SIMULIA Abaqus. The objective of this paper is the development of the FE model and the investigation of the adjustable parameters of the roller leveller together with elastic-plastic material behaviour. The properties of the material were determined experimentally. After levelling, the strip had a residual stress distribution. The longbow after cutting is also predicted. Recommendations for practical use are proposed.

  14. Origin of orbital debris impacts on LDEF's trailing surfaces

    NASA Technical Reports Server (NTRS)

    Kessler, Donald J.

    1993-01-01

    A model was developed to determine the origin of orbital debris impacts measured on the trailing surfaces of LDEF. The model calculates the expected debris impact crater distribution around LDEF as a function of debris orbital parameters. The results show that only highly elliptical, low-inclination orbits could be responsible for these impacts. The most common objects left in this type of orbit are orbital transfer stages used by the U.S. and ESA to place payloads into geosynchronous orbit. Objects in this type of orbit are difficult to catalog by the U.S. Space Command; consequently, there are independent reasons to believe that the catalog does not adequately represent this population. This analysis concludes that the relative number of cataloged objects with highly elliptical, low-inclination orbits must be increased by a factor of 20 to be consistent with the LDEF data.

  15. Radiative heat transfer in low-dimensional systems -- microscopic model

    NASA Astrophysics Data System (ADS)

    Woods, Lilia; Phan, Anh; Drosdoff, David

    2013-03-01

    Radiative heat transfer between objects can increase dramatically at sub-wavelength scales. Exploring ways to modulate such transport between nano-systems is a key issue from both fundamental and applied points of view. We advance the theoretical understanding of radiative heat transfer between nano-objects by introducing a microscopic model that takes into account the individual atoms and their atomic polarizabilities. This approach is especially useful for investigating nano-objects with various geometries and gives a detailed description of the heat transfer distribution. We employ this model to study the heat exchange in graphene nanoribbon/substrate systems. Our results as a function of separation distance, substrate, and the presence of extended or localized defects enable predictions for tailoring the radiative heat transfer at the nanoscale. Financial support from the Department of Energy under Contract No. DE-FG02-06ER46297 is acknowledged.

  16. 78 FR 18322 - Marine Mammals; File No. 17751

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-26

    ... reduction of sea ice in the Arctic with the goal of developing predictive ecosystem models. Research methods... applied in due form for a permit to conduct research on gray (Eschrichtius robustus) and killer (Orcinus..., Chukchi Sea, and Arctic Ocean. The objectives of the research are to examine the distribution and movement...

  17. Professionalism Deficits among Medical Students: Models of Identification and Intervention

    ERIC Educational Resources Information Center

    Bennett, Aurora J.; Roman, Brenda; Arnold, Lesley M.; Kay, Jerald; Goldenhar, Linda M.

    2005-01-01

    Objective: This study compares the instruments and interventions utilized to identify and remediate unprofessional behaviors in medical students across U.S. psychiatry clerkships. Methods: A 20-item questionnaire was distributed to 120 psychiatry clerkship directors and directors of medical student education, in the U.S., inquiring into the…

  18. Estimating density of a territorial species in a dynamic landscape

    Treesearch

    Elizabeth M. Glenn; Damon B. Lesmeister; Raymond J. Davis; Bruce Hollen; Anne Poopatanapong

    2017-01-01

    Context Conservation planning for at-risk species requires understanding of where species are likely to occur, how many individuals are likely to be supported on a given landscape, and the ability to monitor those changes through time. Objectives We developed a distribution model for northern spotted owls that...

  19. Effectiveness of Breakfast in the Classroom in Five Exemplary Districts

    ERIC Educational Resources Information Center

    Rainville, Alice Jo; King, Amber D.; Nettles, Mary Frances

    2013-01-01

    Purpose/Objectives: A national trend to improve school breakfast participation is the integration of breakfast within the school day. Breakfast in the classroom programs increase student access to school breakfast. Service models include "grab and go," distribution of breakfasts to each classroom, and mobile breakfast carts in hallways.…

  20. Creating an Organic Knowledge-Building Environment within an Asynchronous Distributed Learning Context.

    ERIC Educational Resources Information Center

    Moller, Leslie; Prestera, Gustavo E.; Harvey, Douglas; Downs-Keller, Margaret; McCausland, Jo-Ann

    2002-01-01

    Discusses organic architecture and suggests that learning environments should be designed and constructed using an organic approach, so that learning is not viewed as a distinct human activity but incorporated into everyday performance. Highlights include an organic knowledge-building model; information objects; scaffolding; discourse action…

  1. Theoretical calculation of the cratering on Ida, Mathilde, Eros and Gaspra

    NASA Astrophysics Data System (ADS)

    Jeffers, S. V.; Asher, D. J.

    2003-07-01

    The main influences on crater size distributions are investigated by deriving results for the four example target objects, (951) Gaspra, (243) Ida, (253) Mathilde and (433) Eros. The dynamical history of each of these asteroids is modelled using the MERCURY numerical integrator. An efficient, Öpik-type, collision code enables the distribution of impact velocities and the overall impact probability to be found. When combined with a crater scaling law and an impactor size distribution, using a Monte Carlo method, this yields a crater size distribution. The cratering time-scale is longer for Ida than either Gaspra or Mathilde, though it is harder to constrain for Eros due to the chaotic variation of its orbital elements. The slopes of the crater size distribution are in accord with observations.
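
    The Monte Carlo chain the abstract describes, from an impactor size distribution through a crater scaling law to a crater size distribution, can be caricatured as follows; the power-law index, velocity distribution, scaling exponents and constants here are illustrative placeholders, not the paper's calibrated values.

    ```python
    import numpy as np

    def crater_size_distribution(n=100000, d_min=0.01, q=3.5, seed=0):
        """Draw impactor diameters from a power law N(>d) ~ d**(1-q),
        draw impact speeds, and map each impact to a crater via a
        power-law scaling D_crater ~ d**0.78 * v**0.44 (toy values)."""
        rng = np.random.default_rng(seed)
        # inverse-transform sampling of a Pareto-like size distribution
        d = d_min * (1 - rng.random(n)) ** (-1.0 / (q - 1))   # km
        v = rng.normal(5.0, 1.5, n).clip(1.0, None)           # km/s
        crater = 10.0 * d ** 0.78 * v ** 0.44                 # km, toy constant
        return np.sort(crater)

    craters = crater_size_distribution()
    # cumulative counts N(>D) over a log grid give the crater size distribution
    ```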

  2. Modeling a hierarchical structure of factors influencing exploitation policy for water distribution systems using ISM approach

    NASA Astrophysics Data System (ADS)

    Jasiulewicz-Kaczmarek, Małgorzata; Wyczółkowski, Ryszard; Gładysiak, Violetta

    2017-12-01

    Water distribution systems are one of the basic elements of the contemporary technical infrastructure of urban and rural areas. They are complex engineering systems composed of transmission networks and auxiliary equipment (e.g. controllers, checkouts etc.), scattered territorially over a large area. From the operational point of view, the basic features of a water distribution system are functional variability, resulting from the need to adjust the system to temporary fluctuations in water demand, and territorial dispersion. The main research questions are: What external factors should be taken into account when developing an effective water distribution policy? Do the size and nature of the water distribution system significantly affect the exploitation policy implemented? These questions have shaped the objectives of the research and the method of its implementation.

  3. Studies on muon tomography for archaeological internal structures scanning

    NASA Astrophysics Data System (ADS)

    Gómez, H.; Carloganu, C.; Gibert, D.; Jacquemier, J.; Karyotakis, Y.; Marteau, J.; Niess, V.; Katsanevas, S.; Tonazzo, A.

    2016-05-01

    Muon tomography is a potential non-invasive technique for internal structure scanning. It already has interesting applications in geophysics and can be used for archaeological purposes. Muon tomography is based on the measurement of the muon flux after crossing the structure under study. Differences in the mean density of these structures imply differences in the detected muon rate for a given direction. Based on this principle, Monte Carlo simulations represent a useful tool to model the expected muon rate and angular distribution depending on the composition of the studied object, making it possible to estimate the expected detected muons and to better understand the experimental results. These simulations depend mainly on the geometry and composition of the studied object and on the modelling of the initial muon flux at the surface. In this work, the potential of muon tomography in archaeology is presented and evaluated with Monte Carlo simulations by estimating the differences in the muon rate due to the presence of internal structures and their composition. The influence of the chosen surface muon model, in terms of energy and angular distributions, on the final result has also been studied.
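
    The underlying principle, fewer muons survive a denser or longer path, can be captured by a toy transmission estimate; real studies propagate a full surface-flux model through the geometry instead, and the effective attenuation depth used here is an assumed round number, not a measured constant.

    ```python
    import numpy as np

    def relative_muon_rate(density, path_length, lambda_eff=2500.0):
        """Toy estimate: detected muon rate along a line of sight falls
        off with opacity (density x path length in g/cm^2); lambda_eff
        is an assumed effective attenuation depth in g/cm^2."""
        opacity = density * path_length * 100.0  # g/cm^3 * m -> g/cm^2
        return np.exp(-opacity / lambda_eff)

    # A 10 m rock wall transmits fewer muons than the same wall with a
    # 2 m cavity: relative_muon_rate(2.65, 10.0) < relative_muon_rate(2.65, 8.0)
    ```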

  4. Database technology and the management of multimedia data in the Mirror project

    NASA Astrophysics Data System (ADS)

    de Vries, Arjen P.; Blanken, H. M.

    1998-10-01

    Multimedia digital libraries require an open distributed architecture instead of a monolithic database system. In the Mirror project, we use the Monet extensible database kernel to manage different representations of multimedia objects. To maintain independence between content, meta-data, and the creation of meta-data, we allow distribution of data and operations using CORBA. This open architecture introduces new problems for data access. From an end user's perspective, the problem is how to search the available representations to fulfill an actual information need; the conceptual gap between human perceptual processes and the meta-data is too large. From a system's perspective, several representations of the data may semantically overlap or be irrelevant. We address these problems with an iterative query process and active user participation through relevance feedback. A retrieval model based on inference networks assists the user with query formulation. The integration of this model into the database design has two advantages. First, the user can query both the logical and the content structure of multimedia objects. Second, the use of different data models in the logical and the physical database design provides data independence and allows algebraic query optimization. We illustrate query processing with a music retrieval application.

  5. Constraining the redshifts of TeV BL Lac objects

    NASA Astrophysics Data System (ADS)

    Qin, Longhua; Wang, Jiancheng; Yan, Dahai; Yang, Chuyuan; Yuan, Zunli; Zhou, Ming

    2018-01-01

    We present a model-dependent method to estimate the redshifts of three TeV BL Lac objects (BL Lacs) by fitting their (quasi-)simultaneous multi-waveband spectral energy distributions (SEDs) with a one-zone leptonic synchrotron self-Compton model. Considering the impact of electron energy distributions (EEDs) on the results, we use three types of EEDs to fit the SEDs: a power-law EED with exponential cut-off (PLC), a log-parabola (PLLP) EED and a broken power-law (BPL) EED. We also use a parameter α to describe the uncertainties of the extragalactic background light models, as in Abdo et al. We then use a Markov chain Monte Carlo method to explore the multi-dimensional parameter space and obtain the uncertainties of the model parameters based on the observational data. We apply our method to obtain the redshifts of the three TeV BL Lac objects at the marginalized 68 per cent confidence level, and find that the PLC EED does not fit the SEDs. For 3C66A, the redshift is 0.14-0.31 and 0.16-0.32 in the BPL and PLLP EEDs. For PKS1424+240, the redshift is 0.55-0.68 and 0.55-0.67 in the BPL and PLLP EEDs. For PG1553+113, the redshift is 0.22-0.48 and 0.22-0.39 in the BPL and PLLP EEDs. We also estimate the redshift of PKS1424+240 in the high state to be 0.46-0.67 in the PLLP EED, roughly consistent with that in the low state.

  6. Chasing passive galaxies in the early Universe: a critical analysis in CANDELS GOODS-South

    NASA Astrophysics Data System (ADS)

    Merlin, E.; Fontana, A.; Castellano, M.; Santini, P.; Torelli, M.; Boutsia, K.; Wang, T.; Grazian, A.; Pentericci, L.; Schreiber, C.; Ciesla, L.; McLure, R.; Derriere, S.; Dunlop, J. S.; Elbaz, D.

    2018-01-01

    We search for passive galaxies at z > 3 in the GOODS-South field, using different techniques based on photometric data, and paying attention to developing methods that are sensitive to objects that have become passive shortly before the epoch of observation. We use CANDELS HST catalogues, ultra-deep Ks data and new IRAC photometry, performing spectral energy distribution (SED) fitting using models with abruptly quenched star formation histories. We then single out galaxies which are best fitted by a passively evolving model and have only low-probability (<5 per cent) star-forming solutions. We verify the effects of including nebular line emission, and we consider possible solutions at different redshifts. The number of selected sources dramatically depends on the models used in the SED fitting. Without including emission lines and with photometric redshifts fixed at the CANDELS estimate, we single out 30 candidates; the inclusion of nebular line emission reduces the sample to 10 objects; allowing for solutions at different redshifts, only two galaxies survive as robust candidates. Most of the candidates are not far-infrared emitters, corroborating their association with passive galaxies. Our results translate into an upper limit on the number density of ∼0.173 arcmin⁻² above the detection limit. However, we conclude that the selection of passive galaxies at z > 3 is still subject to significant uncertainties, being sensitive to the assumptions of the adopted SED modelling and to the relatively low S/N of the objects. By means of dedicated simulations, we show that JWST will greatly enhance the accuracy, allowing for a much more robust classification.

  7. The Dynamical Imprint of Lost Protoplanets on the Trans-Neptunian Populations, and Limits on the Primordial Size Distribution of Trans-Neptunian Objects at Pluto and Larger Sizes.

    NASA Astrophysics Data System (ADS)

    Shannon, Andrew Brian; Dawson, Rebekah

    2018-04-01

    Planet formation remains a poorly understood process, in part because of our limited access to the intermediate phases of planetesimal and protoplanet growth. Today, the vast majority of the accessible remaining planetesimals and protoplanets reside within the Hot Trans-Neptunian Object population. This population has been depleted by 99%-99.9% over the course of the Solar system's history, and as such the present-day size-number distribution may be incomplete at the large size end. We show that such lost protoplanets would have left signatures in the dynamics of the present-day Trans-Neptunian populations, and their primordial number can thus be statistically limited by considering the survival of ultra-wide binary TNOs, the Cold Classical Kuiper belt, and the resonant populations. We compare those limits to the predicted size-number distribution of various planetesimal and protoplanet growth models.

  8. Debiased orbit and absolute-magnitude distributions for near-Earth objects

    NASA Astrophysics Data System (ADS)

    Granvik, Mikael; Morbidelli, Alessandro; Jedicke, Robert; Bolin, Bryce; Bottke, William F.; Beshore, Edward; Vokrouhlický, David; Nesvorný, David; Michel, Patrick

    2018-09-01

    The debiased absolute-magnitude and orbit distributions as well as source regions for near-Earth objects (NEOs) provide a fundamental frame of reference for studies of individual NEOs and more complex population-level questions. We present a new four-dimensional model of the NEO population that describes debiased steady-state distributions of semimajor axis, eccentricity, inclination, and absolute magnitude H in the range 17 < H < 25. The modeling approach improves upon the methodology originally developed by Bottke et al. (2000, Science 288, 2190-2194) in that it is, for example, based on more realistic orbit distributions and uses source-specific absolute-magnitude distributions that allow for a power-law slope that varies with H. We divide the main asteroid belt into six different entrance routes or regions (ERs) to the NEO region: the ν6, 3:1J, 5:2J and 2:1J resonance complexes as well as the Hungarias and Phocaeas. In addition we include the Jupiter-family comets as the primary cometary source of NEOs. We calibrate the model against NEO detections by Catalina Sky Survey's stations 703 and G96 during 2005-2012, and utilize the complementary nature of these two systems to quantify the systematic uncertainties associated with the resulting model. We find that the (fitted) H distributions have significant differences, although most of them show a minimum power-law slope at H ∼ 20. As a consequence of the differences between the ER-specific H distributions we find significant variations in, for example, the NEO orbit distribution, average lifetime, and the relative contribution of different ERs as a function of H. The most important ERs are the ν6 and 3:1J resonance complexes, with JFCs contributing a few percent of NEOs on average. A significant contribution from the Hungaria group leads to notable changes compared to the predictions by Bottke et al. in, for example, the orbit distribution and average lifetime of NEOs. We predict that there are 962 (+52/−56) NEOs with H < 17.75 and 802 (+48/−42) × 10³ NEOs with H < 25, and these numbers are in agreement with the most recent estimates found in the literature (the uncertainty estimates only account for the random component). Based on our model we find that the relative shares of the different NEO groups (Amor, Apollo, Aten, Atira, Vatira) are (39.4, 54.4, 3.5, 1.2, 0.3)%, respectively, for the considered H range, and that these ratios have a negligible dependence on H. Finally, we find agreement between our estimate of the rate of Earth impacts by NEOs and recent estimates in the literature, but there remains a potentially significant discrepancy in the frequency of Tunguska-sized and Chelyabinsk-sized impacts.
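
    The fitted absolute-magnitude distributions referred to above are of the standard power-law form, with the slope allowed to vary with H for each entrance route (our notation):

    ```latex
    n(H)\,\mathrm{d}H \;\propto\; 10^{\,\alpha(H)\,H}\,\mathrm{d}H ,
    ```

    so a minimum of \alpha(H) near H \sim 20 corresponds to the dip in slope that most entrance routes exhibit.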

  9. A network-based distributed, media-rich computing and information environment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, R.L.

    1995-12-31

    Sunrise is a Los Alamos National Laboratory (LANL) project started in October 1993. It is intended to be a prototype National Information Infrastructure development project. A main focus of Sunrise is to tie together enabling technologies (networking, object-oriented distributed computing, graphical interfaces, security, multi-media technologies, and data-mining technologies) with several specific applications. A diverse set of application areas was chosen to ensure that the solutions developed in the project are as generic as possible. Some of the application areas are materials modeling, medical records and image analysis, transportation simulations, and K-12 education. This paper provides a description of Sunrise and a view of the architecture and objectives of this evolving project. The primary objectives of Sunrise are three-fold: (1) To develop common information-enabling tools for advanced scientific research and its applications to industry; (2) To enhance the capabilities of important research programs at the Laboratory; (3) To define a new way of collaboration between computer science and industrially-relevant research.

  10. An architecture for object-oriented intelligent control of power systems in space

    NASA Technical Reports Server (NTRS)

    Holmquist, Sven G.; Jayaram, Prakash; Jansen, Ben H.

    1993-01-01

    A control system for autonomous distribution and control of electrical power during space missions is being developed. This system should free the astronauts from localizing faults and reconfiguring loads if problems with the power distribution and generation components occur. The control system uses an object-oriented simulation model of the power system and first-principle knowledge to detect, identify, and isolate faults. Each power system component is represented as a separate object with knowledge of its normal behavior. The reasoning process takes place at three different levels of abstraction: the Physical Component Model (PCM) level, the Electrical Equivalent Model (EEM) level, and the Functional System Model (FSM) level, with the PCM the lowest level of abstraction and the FSM the highest. At the EEM level the power system components are reasoned about as their electrical equivalents, e.g., a resistive load is thought of as a resistor. However, at the PCM level detailed knowledge about the component's specific characteristics is taken into account. The FSM level models the system at the subsystem level, a level appropriate for reconfiguration and scheduling. The control system operates in two modes, a reactive and a proactive mode, simultaneously. In the reactive mode the control system receives measurement data from the power system and compares these values with values determined through simulation to detect the existence of a fault. The nature of the fault is then identified through a model-based reasoning process using mainly the EEM. Compound component models are constructed at the EEM level and used in the fault identification process. In the proactive mode the reasoning takes place at the PCM level. Individual components determine their future health status using a physical model and measured historical data. If changes in the health status seem imminent, the component warns the control system about its impending failure. The fault isolation process uses the FSM level for its reasoning base.

  11. Virtual Research Environments for Natural Hazard Modelling

    NASA Astrophysics Data System (ADS)

    Napier, Hazel; Aldridge, Tim

    2017-04-01

    The Natural Hazards Partnership (NHP) is a group of 17 collaborating public sector organisations providing a mechanism for co-ordinated advice to government and agencies responsible for civil contingency and emergency response during natural hazard events. The NHP has set up a Hazard Impact Model (HIM) group tasked with modelling the impact of a range of UK hazards with the aim of delivery of consistent hazard and impact information. The HIM group consists of 7 partners initially concentrating on modelling the socio-economic impact of 3 key hazards - surface water flooding, land instability and high winds. HIM group partners share scientific expertise and data within their specific areas of interest including hydrological modelling, meteorology, engineering geology, GIS, data delivery, and modelling of socio-economic impacts. Activity within the NHP relies on effective collaboration between partners distributed across the UK. The NHP are acting as a use case study for a new Virtual Research Environment (VRE) being developed by the EVER-EST project (European Virtual Environment for Research - Earth Science Themes: a solution). The VRE is allowing the NHP to explore novel ways of cooperation including improved capabilities for e-collaboration, e-research, automation of processes and e-learning. Collaboration tools are complemented by the adoption of Research Objects, semantically rich aggregations of resources enabling the creation of uniquely identified digital artefacts resulting in reusable science and research. Application of the Research Object concept to HIM development facilitates collaboration, by encapsulating scientific knowledge in a shareable format that can be easily shared and used by partners working on the same model but within their areas of expertise. This paper describes the application of the VRE to the NHP use case study. It outlines the challenges associated with distributed partnership working and how they are being addressed in the VRE. A case study is included focussing on the application of Research Objects to development work for the surface water flooding hazard impact model, a key achievement for the HIM group.

  12. Active Contours Driven by Multi-Feature Gaussian Distribution Fitting Energy with Application to Vessel Segmentation.

    PubMed

    Wang, Lei; Zhang, Huimao; He, Kan; Chang, Yan; Yang, Xiaodong

    2015-01-01

    Active contour models are of great importance for image segmentation and can extract smooth, closed boundary contours of the desired objects with promising results. However, they do not work well in the presence of intensity inhomogeneity. Hence, a novel region-based active contour model is proposed that takes image intensities and 'vesselness values' from local phase-based vesselness enhancement into account simultaneously to define a novel multi-feature Gaussian distribution fitting energy. This energy is then incorporated into a level set formulation with a regularization term for accurate segmentation. Experimental results based on the publicly available STructured Analysis of the Retina (STARE) data set demonstrate that our model is more accurate than some existing typical methods and can successfully segment most small vessels with varying widths.

  13. Numerical modeling tools for chemical vapor deposition

    NASA Technical Reports Server (NTRS)

    Jasinski, Thomas J.; Childs, Edward P.

    1992-01-01

    The development of general numerical simulation tools for chemical vapor deposition (CVD) was the objective of this study. Physical models of important CVD phenomena were developed and implemented into the commercial computational fluid dynamics software FLUENT. The resulting software can address general geometries as well as the most important phenomena occurring within CVD reactors: fluid flow patterns, temperature and chemical species distributions, and gas-phase and surface deposition. The available physical models are documented, and examples of CVD simulation capabilities are provided.

  14. Evolutionary Agent-based Models to design distributed water management strategies

    NASA Astrophysics Data System (ADS)

    Giuliani, M.; Castelletti, A.; Reed, P. M.

    2012-12-01

    There is growing awareness in the scientific community that the traditional centralized approach to water resources management, as described in much of the water resources literature, provides an ideal optimal solution, which is certainly useful to quantify the best physically achievable performance, but is generally inapplicable. Most real world water resources management problems are indeed characterized by the presence of multiple, distributed and institutionally-independent decision-makers. Multi-Agent Systems provide a potentially more realistic alternative framework to model multiple and self-interested decision-makers in a credible context. Each decision-maker can be represented by an agent who, being self-interested, acts according to local objective functions and produces negative externalities on system level objectives. Different levels of coordination can potentially be included in the framework by designing coordination mechanisms to drive the current decision-making structure toward the global system efficiency. Yet, the identification of effective coordination strategies can be particularly complex in modern institutional contexts and current practice is dependent on largely ad-hoc coordination strategies. In this work we propose a novel Evolutionary Agent-based Modeling (EAM) framework that enables a mapping of fully uncoordinated and centrally coordinated solutions into their relative "many-objective" tradeoffs using multiobjective evolutionary algorithms. Then, by analysing the conflicts between local individual agent and global system level objectives it is possible to more fully understand the causes, consequences, and potential solution strategies for coordination failures. Game-theoretic criteria have value for identifying the most interesting alternatives from a policy making point of view as well as the coordination mechanisms that can be applied to obtain these interesting solutions. The proposed approach is numerically tested on a synthetic case study, representing a Y-shaped system composed of two regulated lakes, whose releases merge just upstream of a city. Each reservoir is operated by an agent in order to prevent floods along the lake shores (local objective). However, the optimal operation of the reservoirs with respect to the local objectives is conflicting with the minimization of floods in the city (global objective). The evolution of the Agent-based Model from individualistic management strategies of the reservoirs toward a global compromise that reduces the costs for the city is analysed.

  15. Multi-objective shape optimization of plate structure under stress criteria based on sub-structured mixed FEM and genetic algorithms

    NASA Astrophysics Data System (ADS)

    Garambois, Pierre; Besset, Sebastien; Jézéquel, Louis

    2015-07-01

    This paper presents a methodology for the multi-objective (MO) shape optimization of plate structures under stress criteria, based on a mixed Finite Element Model (FEM) enhanced with a sub-structuring method. The optimization is performed with a classical Genetic Algorithm (GA) method based on Pareto-optimal solutions and considers thickness distribution parameters and antagonistic objectives, among them stress criteria. We implement a displacement-stress Dynamic Mixed FEM (DM-FEM) for plate structure vibration analysis. Such a model gives privileged access to the stress within the plate structure compared to the primal classical FEM, and features a linear dependence on the thickness parameters. A sub-structuring reduction method is also computed in order to reduce the size of the mixed FEM and split the given structure into smaller ones with their own thickness parameters. Combined, these methods enable a fast and stress-wise efficient structural analysis and improve the performance of the repetitive GA. Several cases of minimizing the mass and the maximum von Mises stress within a plate structure under a dynamic load demonstrate the relevance of our method, with promising results: it is able to satisfy multiple damage criteria with different thickness distributions while using a smaller FEM.

  16. Microwave Driven Actuators Power Allocation and Distribution

    NASA Technical Reports Server (NTRS)

    Forbes, Timothy; Song, Kyo D.

    2000-01-01

    The design, fabrication and test of a power allocation and distribution (PAD) network for microwave-driven actuators are presented in this paper. A circuit that collects power from a rectenna array and amplifies and distributes the power to actuators was designed and fabricated for space application in an actuator array driven by microwaves. A P-SPICE model was constructed initially for data reduction purposes and was followed by a working real-world model. A voltage up-converter (VUC) is used to amplify the voltage from the individual rectenna. Testing yielded a 26:1 voltage amplification ratio, with an input voltage of 9 volts and a measured output voltage of 230 VDC. Future work includes the miniaturization of the circuitry, the use of microwave remote control, and voltage amplification technology for each voltage source. The objective of this work is to develop a model system that will collect DC voltage from an array of rectennas and propagate the voltage to an array of actuators.

  17. Optimization of Location-Routing Problem for Cold Chain Logistics Considering Carbon Footprint.

    PubMed

    Wang, Songyi; Tao, Fengming; Shi, Yuhe

    2018-01-06

    In order to solve the optimization problem of logistics distribution systems for fresh food, this paper takes a low-carbon, environmental-protection point of view. Based on the characteristics of perishable products and the overall optimization of the cold chain logistics distribution network, a green, low-carbon location-routing problem (LRP) model for cold chain logistics is developed, with minimum total costs, including carbon emission costs, as the objective function. A hybrid genetic algorithm with heuristic rules is designed to solve the model, and an example is used to verify the effectiveness of the algorithm. Furthermore, the simulation results of a practical numerical example show the applicability of the model while providing green, environmentally friendly location-distribution schemes for the cold chain logistics enterprise. Finally, carbon tax policies are introduced to analyze the impact of a carbon tax on total costs and carbon emissions, which shows that a carbon tax policy can effectively reduce carbon dioxide emissions in a cold chain logistics network.
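
    A permutation GA of the general kind used to solve such models can be sketched compactly; this toy optimizes a single closed tour with a distance-proportional carbon cost, whereas the paper's hybrid GA handles the full location-routing model, and all coefficients here are illustrative.

    ```python
    import random

    def route_cost(route, dist, carbon_price=0.5, fuel_per_km=0.3):
        """Travel cost plus a carbon-emission cost proportional to the
        distance driven (a simplification of the paper's cost terms)."""
        km = sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))
        return km + carbon_price * fuel_per_km * km

    def ga_route(dist, pop_size=60, gens=300, mut=0.2, seed=0):
        """Minimal permutation GA: tournament selection, an order-crossover
        variant, and swap mutation."""
        rng = random.Random(seed)
        n = len(dist)
        pop = [rng.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(gens):
            nxt = []
            for _ in range(pop_size):
                p1, p2 = (min(rng.sample(pop, 3), key=lambda r: route_cost(r, dist))
                          for _ in range(2))
                i, j = sorted(rng.sample(range(n), 2))
                seg = p1[i:j]
                child = seg + [c for c in p2 if c not in seg]
                if rng.random() < mut:
                    a, b = rng.sample(range(n), 2)
                    child[a], child[b] = child[b], child[a]
                nxt.append(child)
            pop = nxt
        return min(pop, key=lambda r: route_cost(r, dist))
    ```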

  18. Network inference using informative priors

    PubMed Central

    Mukherjee, Sach; Speed, Terence P.

    2008-01-01

    Recent years have seen much interest in the study of systems characterized by multiple interacting components. A class of statistical models called graphical models, in which graphs are used to represent probabilistic relationships between variables, provides a framework for formal inference regarding such systems. In many settings, the object of inference is the network structure itself. This problem of “network inference” is well known to be a challenging one. However, in scientific settings there is very often existing information regarding network connectivity. A natural idea then is to take account of such information during inference. This article addresses the question of incorporating prior information into network inference. We focus on directed models called Bayesian networks, and use Markov chain Monte Carlo to draw samples from posterior distributions over network structures. We introduce prior distributions on graphs capable of capturing information regarding network features including edges, classes of edges, degree distributions, and sparsity. We illustrate our approach in the context of systems biology, applying our methods to network inference in cancer signaling. PMID:18799736
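
    One way to encode such informative priors is as an energy over graphs, penalizing disagreement with prior edges and overall density, and then to sample with Metropolis moves. The sketch below samples DAGs from the prior alone; this is our toy construction, and real network inference adds the data log-likelihood to the score.

    ```python
    import numpy as np

    def prior_log_score(adj, prior_adj, lam=1.0, kappa=0.5):
        """Informative graph prior, up to a constant: penalize edges that
        disagree with the prior network and penalize density (sparsity)."""
        return -lam * np.sum(adj != prior_adj) - kappa * adj.sum()

    def is_dag(adj):
        """Kahn-style check: a digraph is acyclic iff nodes can be removed
        one layer at a time once they have no remaining incoming edges."""
        active = set(range(adj.shape[0]))
        while active:
            roots = {v for v in active if not any(adj[u, v] for u in active)}
            if not roots:
                return False
            active -= roots
        return True

    def sample_graphs(prior_adj, n_steps=5000, seed=0):
        """Metropolis sampler over DAGs under the prior alone."""
        rng = np.random.default_rng(seed)
        n = prior_adj.shape[0]
        adj = np.zeros((n, n), dtype=int)
        for _ in range(n_steps):
            i, j = rng.integers(n, size=2)
            if i == j:
                continue
            proposal = adj.copy()
            proposal[i, j] ^= 1  # toggle a single directed edge
            if not is_dag(proposal):
                continue
            delta = (prior_log_score(proposal, prior_adj)
                     - prior_log_score(adj, prior_adj))
            if np.log(rng.random()) < delta:
                adj = proposal
            yield adj
    ```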

  19. Molecular clouds and galactic spiral structure

    NASA Technical Reports Server (NTRS)

    Dame, T. M.

    1984-01-01

    Galactic CO line emission at 115 GHz was surveyed in order to study the distribution of molecular clouds in the inner galaxy. Comparison of this survey with similar H I data reveals a detailed correlation with the most intense 21 cm features. To each of the classical 21 cm H I spiral arms of the inner galaxy there corresponds a CO molecular arm which is generally more clearly defined and of higher contrast. A simple model is devised for the galactic distribution of molecular clouds. The modeling results suggest that molecular clouds are essentially transient objects, existing for 15 to 40 million years after their formation in a spiral arm, and are largely confined to spiral features about 300 pc wide.

  1. MAGIC: Model and Graphic Information Converter

    NASA Technical Reports Server (NTRS)

    Herbert, W. C.

    2009-01-01

    MAGIC is a software tool capable of converting highly detailed 3D models from an open, standard format, VRML 2.0/97, into the proprietary DTS file format used by the Torque Game Engine from GarageGames. MAGIC is used to convert 3D simulations from authoritative sources into the data needed to run the simulations in NASA's Distributed Observer Network. The Distributed Observer Network (DON) is a simulation presentation tool built by NASA to facilitate the simulation sharing requirements of the Data Presentation and Visualization effort within the Constellation Program. DON is built on top of the Torque Game Engine (TGE) and has chosen TGE's Dynamix Three Space (DTS) file format to represent 3D objects within simulations.

  2. Mueller-matrix mapping of biological tissues in differential diagnosis of optical anisotropy mechanisms of protein networks

    NASA Astrophysics Data System (ADS)

    Ushenko, V. A.; Sidor, M. I.; Marchuk, Yu F.; Pashkovskaya, N. V.; Andreichuk, D. R.

    2015-03-01

We report a Mueller-matrix model describing the optical anisotropy of protein networks in biological tissues, with allowance for linear birefringence and dichroism. The model is used to construct reconstruction algorithms for the coordinate distributions of phase shifts and the linear dichroism coefficient. Statistical analysis of such distributions yields objective criteria for differentiating benign and malignant tissues of the female reproductive system. From the standpoint of evidence-based medicine, we determine the operating characteristics (sensitivity, specificity and accuracy) of the Mueller-matrix reconstruction of optical anisotropy parameters and demonstrate its effectiveness in the differentiation of benign and malignant tumours.
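
    For readers unfamiliar with the formalism, the elementary building block of models of this kind is the Mueller matrix of a linear retarder (a birefringent element). A minimal sketch in its standard textbook form follows; this is not the paper's full protein-network model, and the example angles are arbitrary.

    ```python
    import numpy as np

    def retarder(theta, delta):
        # Mueller matrix of a linear retarder with fast-axis angle theta
        # and phase shift delta (standard textbook form)
        c, s = np.cos(2 * theta), np.sin(2 * theta)
        cd, sd = np.cos(delta), np.sin(delta)
        return np.array([
            [1, 0,                  0,                  0],
            [0, c**2 + s**2 * cd,   c * s * (1 - cd),  -s * sd],
            [0, c * s * (1 - cd),   s**2 + c**2 * cd,   c * sd],
            [0, s * sd,            -c * sd,             cd],
        ])

    print(retarder(np.pi / 6, np.pi / 2).round(3))
    ```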

  3. Azimuth-invariant mueller-matrix differentiation of the optical anisotropy of biological tissues

    NASA Astrophysics Data System (ADS)

    Ushenko, V. A.; Sidor, M. I.; Marchuk, Yu. F.; Pashkovskaya, N. V.; Andreichuk, D. R.

    2014-07-01

    A Mueller-matrix model is proposed for analysis of the optical anisotropy of protein networks of optically thin nondepolarizing layers of biological tissues with allowance for birefringence and dichroism. The model is used to construct algorithms for reconstruction of coordinate distributions of phase shifts and coefficient of linear dichroism. Objective criteria for differentiation of benign and malignant tissues of female genitals are formulated in the framework of the statistical analysis of such distributions. Approaches of evidence-based medicine are used to determine the working characteristics (sensitivity, specificity, and accuracy) of the Mueller-matrix method for the reconstruction of the parameters of optical anisotropy and show its efficiency in the differentiation of benign and malignant tumors.

  4. The Lack of Chemical Equilibrium does not Preclude the Use of the Classical Nucleation Theory in Circumstellar Outflows

    NASA Technical Reports Server (NTRS)

    Paquette, John A.; Nuth, Joseph A., III

    2011-01-01

Classical nucleation theory has been used in models of dust nucleation in circumstellar outflows around oxygen-rich asymptotic giant branch stars. One objection to the application of classical nucleation theory (CNT) to astrophysical systems of this sort is that an equilibrium distribution of clusters (assumed by CNT) is unlikely to exist in such conditions due to the low collision rate of condensable species. A model of silicate grain nucleation and growth was modified to evaluate the effect of a nucleation flux orders of magnitude below the equilibrium value. The results show that a lack of chemical equilibrium has only a small effect on the ultimate grain distribution.

  5. Estimation of the incubation period of invasive aspergillosis by survival models in acute myeloid leukemia patients.

    PubMed

    Bénet, Thomas; Voirin, Nicolas; Nicolle, Marie-Christine; Picot, Stephane; Michallet, Mauricette; Vanhems, Philippe

    2013-02-01

    The duration of the incubation of invasive aspergillosis (IA) remains unknown. The objective of this investigation was to estimate the time interval between aplasia onset and that of IA symptoms in acute myeloid leukemia (AML) patients. A single-centre prospective survey (2004-2009) included all patients with AML and probable/proven IA. Parametric survival models were fitted to the distribution of the time intervals between aplasia onset and IA. Overall, 53 patients had IA after aplasia, with the median observed time interval between the two being 15 days. Based on log-normal distribution, the median estimated IA incubation period was 14.6 days (95% CI; 12.8-16.5 days).
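
    The distributional fit is straightforward to reproduce in outline. Below is a minimal sketch using simulated intervals (the study's patient-level data are not public) and a maximum-likelihood log-normal fit; for a log-normal, the fitted scale parameter equals the median exp(μ).

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)
    # Hypothetical intervals (days) between aplasia onset and IA symptoms,
    # simulated for illustration only
    intervals = rng.lognormal(mean=np.log(14.6), sigma=0.4, size=53)

    # Maximum-likelihood log-normal fit with the location fixed at zero
    shape, loc, scale = stats.lognorm.fit(intervals, floc=0)
    print(f"estimated median incubation: {scale:.1f} days")  # median = exp(mu)
    ```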

  6. A weakly-constrained data assimilation approach to address rainfall-runoff model structural inadequacy in streamflow prediction

    NASA Astrophysics Data System (ADS)

    Lee, Haksu; Seo, Dong-Jun; Noh, Seong Jin

    2016-11-01

This paper presents a simple yet effective weakly-constrained (WC) data assimilation (DA) approach for hydrologic models which accounts for model structural inadequacies associated with rainfall-runoff transformation processes. Compared to strongly-constrained (SC) DA, WC DA adjusts the control variables less while producing a similarly or more accurate analysis; hence the adjusted model states are dynamically more consistent with those of the base model. The inadequacy of a rainfall-runoff model was modeled as an additive error to runoff components prior to routing and penalized in the objective function. Two example modeling applications, distributed and lumped, were carried out to investigate the effects of the WC DA approach on DA results. For distributed modeling, the distributed Sacramento Soil Moisture Accounting (SAC-SMA) model was applied to the TIFM7 Basin in Missouri, USA. For lumped modeling, the lumped SAC-SMA model was applied to nineteen basins in Texas. In both cases, the variational DA (VAR) technique was used to assimilate discharge data at the basin outlet. For distributed SAC-SMA, spatially homogeneous error modeling yielded updated states that are spatially much more similar to the a priori states, as quantified by the Earth Mover's Distance (EMD), than spatially heterogeneous error modeling, by up to ∼10 times. DA experiments with both lumped and distributed SAC-SMA modeling indicated that assimilating outlet flow with the WC approach generally produces a smaller mean absolute difference and a higher correlation between the a priori and the updated states than the SC approach, while yielding similar or smaller root mean square errors in streamflow analysis and prediction. In both the lumped and distributed cases, large differences were found between the updated and the a priori lower zone tension and primary free water contents for both the WC and SC approaches, indicating possible model structural deficiency in describing low flows or evapotranspiration processes for the catchments studied. Findings from this study and key issues relevant to WC DA approaches with hydrologic models are also presented.
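
    The structure of a weakly-constrained variational cost can be sketched in a few lines. This is an illustrative quadratic form only; the matrix names B, R, Q and the linear toy model are assumptions for the sketch, not the paper's SAC-SMA formulation.

    ```python
    import numpy as np

    def wc_cost(x, w, x_b, y, h, B_inv, R_inv, Q_inv):
        # J = background term + observation term + penalty on the additive
        # runoff error w; in the strongly-constrained case w is fixed at zero
        dx = x - x_b
        dy = y - h(x, w)
        return 0.5 * (dx @ B_inv @ dx + dy @ R_inv @ dy + w @ Q_inv @ w)

    # Toy usage: a linear "model" mapping states plus error to outlet flow
    h = lambda x, w: np.array([x.sum() + w.sum()])
    x_b, y = np.zeros(3), np.array([2.0])
    J = wc_cost(np.ones(3), np.zeros(2), x_b, y, h,
                np.eye(3), np.eye(1), np.eye(2))
    print(J)   # 2.0 for this toy input
    ```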

  7. On the accuracy of models for predicting sound propagation in fitted rooms.

    PubMed

    Hodgson, M

    1990-08-01

The objective of this article is to contribute to the evaluation of the accuracy and applicability of models for predicting sound propagation in fitted rooms such as factories, classrooms, and offices. The models studied are 1:50 scale models; the method-of-images models of Jovicic, Lindqvist, Hodgson, Kurze, and of Lemire and Nicolas; the empirical formula of Friberg; and Ondet and Barbry's ray-tracing model. Sound propagation predictions by the analytic models are compared with the results of sound propagation measurements in a 1:50 scale model and in a warehouse, both containing various densities of approximately isotropically distributed, rectangular-parallelepipedic fittings. The results indicate that the models of Friberg and of Lemire and Nicolas are fundamentally incorrect. While more generally applicable versions exist, the versions of the models of Jovicic and Kurze studied here are found to be of limited applicability since they ignore vertical-wall reflections. The Hodgson and Lindqvist models appear to be accurate in certain limited cases. This preliminary study found the ray-tracing model of Ondet and Barbry to be the most accurate in all the cases studied. Furthermore, it has the necessary flexibility with respect to room geometry, surface-absorption distribution, and fitting distribution. It appears to be the model with the greatest applicability to fitted-room sound propagation prediction.

  8. Numerical simulation of gas distribution in goaf under Y ventilation mode

    NASA Astrophysics Data System (ADS)

    Li, Shengzhou; Liu, Jun

    2018-04-01

Taking a working face with Y-type ventilation as the research object, a physical model of gas flow in the coal mining face was established: a diffusion equation describes the diffusion of gas, while the Navier-Stokes and Brinkman equations describe the gas flow in the working face and the goaf, respectively. Using the numerical simulation software COMSOL Multiphysics, the gas distribution in the goaf under Y-type ventilation was simulated, and the gas distributions at the working face, at the upper corner, and in the goaf were analyzed. The results show that the Y-type ventilation system can effectively mitigate gas accumulation and concentration overruns at the upper corner.
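
    As a much-simplified stand-in for the coupled flow-diffusion problem, a one-dimensional advection-diffusion toy can convey the basic balance that shapes the goaf concentration profile. The grid, coefficients, and boundary conditions below are invented for illustration; the paper solves the full Navier-Stokes/Brinkman system in COMSOL, which this sketch does not reproduce.

    ```python
    import numpy as np

    n, L = 101, 100.0               # grid points, goaf depth (m)
    dx = L / (n - 1)
    u, D = 0.01, 0.5                # leakage airflow speed (m/s), diffusivity (m^2/s)
    c = np.zeros(n)
    c[-1] = 1.0                     # normalized gas source at the deep edge
    dt = 0.2 * dx**2 / D            # stable explicit time step

    for _ in range(50000):          # pseudo-time stepping toward steady state
        adv = -u * (c[1:-1] - c[:-2]) / dx                    # upwind advection
        dif = D * (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2      # diffusion
        c[1:-1] += dt * (adv + dif)

    print(c[::20].round(3))         # sampled concentration profile
    ```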

  9. Joint Inversion of 1-Hz GPS Data and Strong Motion Records for the Rupture Process of the 2008 Iwate-Miyagi Nairiku Earthquake: Objectively Determining Relative Weighting

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Kato, T.; Wang, Y.

    2015-12-01

The spatiotemporal fault slip history of the 2008 Iwate-Miyagi Nairiku earthquake, Japan, is obtained by the joint inversion of 1-Hz GPS waveforms and near-field strong motion records. The 1-Hz GPS data from GEONET are processed with GAMIT/GLOBK and then low-pass filtered at 0.05 Hz. The ground surface strong motion records from K-NET and KiK-net stations are band-pass filtered to the range 0.05-0.3 Hz and integrated once to obtain velocity. The joint inversion exploits a broader frequency band of near-field ground motions, which provides excellent constraints on both the detailed slip history and the slip distribution. A fully Bayesian inversion method is performed to simultaneously and objectively determine the rupture model, the unknown relative weighting of the multiple data sets, and the unknown smoothing hyperparameters. The preferred rupture model is stable under different choices of velocity structure model and station distribution, with a maximum slip of ~8.0 m and a seismic moment of 2.9 × 10^19 Nm (Mw 6.9). Compared with the single inversion of strong motion records, the cumulative slip distribution of the joint inversion is sparser, with two slip asperities. One asperity, common to both inversions, extends southeastward from the hypocenter to the surface rupture; the other, which is unique to the joint inversion and is contributed by the 1-Hz GPS waveforms, appears in the deep part of the fault where very few aftershocks occur. The difference between the moment rate functions of the joint and single inversions indicates that abundant high-frequency energy, but little low-frequency energy, is radiated in the first three seconds.
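
    The role of the relative-weighting hyperparameters can be sketched with a toy log-posterior: giving each data set its own noise scale, with a -n·log(σ) normalization term, lets the inversion balance the data sets objectively rather than by a hand-tuned weight. The linear forward operators G, the smoothing operator L, and all names below are assumptions for this sketch, not the paper's parameterization.

    ```python
    import numpy as np

    def log_posterior(slip, sigma_gps, sigma_sm, G_gps, d_gps, G_sm, d_sm, alpha, L):
        def gauss(r, s, n):
            # Gaussian log-likelihood up to a constant; the -n*log(s) term is
            # what lets the data determine its own noise scale (relative weight)
            return -0.5 * (r @ r) / s**2 - n * np.log(s)
        lp  = gauss(d_gps - G_gps @ slip, sigma_gps, len(d_gps))
        lp += gauss(d_sm  - G_sm  @ slip, sigma_sm,  len(d_sm))
        lp += -0.5 * alpha * np.sum((L @ slip) ** 2)     # smoothing prior
        return lp

    rng = np.random.default_rng(0)
    m, n1, n2 = 5, 8, 12
    G1, G2 = rng.normal(size=(n1, m)), rng.normal(size=(n2, m))
    true_slip = rng.random(m)
    d1, d2 = G1 @ true_slip, G2 @ true_slip
    print(log_posterior(true_slip, 1.0, 0.5, G1, d1, G2, d2, alpha=1.0, L=np.eye(m)))
    ```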

  10. Dynamic electrical impedance imaging with the interacting multiple model scheme.

    PubMed

    Kim, Kyung Youn; Kim, Bong Seok; Kim, Min Chan; Kim, Sin; Isaacson, David; Newell, Jonathan C

    2005-04-01

    In this paper, an effective dynamical EIT imaging scheme is presented for on-line monitoring of the abruptly changing resistivity distribution inside the object, based on the interacting multiple model (IMM) algorithm. The inverse problem is treated as a stochastic nonlinear state estimation problem with the time-varying resistivity (state) being estimated on-line with the aid of the IMM algorithm. In the design of the IMM algorithm multiple models with different process noise covariance are incorporated to reduce the modeling uncertainty. Simulations and phantom experiments are provided to illustrate the proposed algorithm.
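
    The IMM mechanics can be conveyed with a minimal scalar example: two Kalman filters that share the same dynamics but assume different process-noise levels, so the "fast" model takes over when the tracked value jumps abruptly. This is illustrative of the algorithm only; EIT itself requires a nonlinear forward model and an extended Kalman filter per mode, and all numbers below are invented.

    ```python
    import numpy as np

    Q = np.array([1e-4, 1e-1])            # "slow" and "fast" process-noise models
    R = 1e-2                              # measurement-noise variance
    T = np.array([[0.95, 0.05],           # Markov model-switching probabilities
                  [0.05, 0.95]])

    def imm_step(z, x, P, mu):
        c = T.T @ mu                                   # predicted model probs
        w = T * mu[:, None] / c[None, :]               # mixing weights w[i, j]
        x0 = w.T @ x                                   # mixed initial estimates
        P0 = (w * (P[:, None] + (x[:, None] - x0[None, :])**2)).sum(axis=0)
        Pp = P0 + Q                                    # per-model prediction
        S = Pp + R                                     # innovation variance
        K = Pp / S                                     # Kalman gains
        lik = np.exp(-0.5 * (z - x0)**2 / S) / np.sqrt(2 * np.pi * S)
        x_new = x0 + K * (z - x0)
        P_new = (1 - K) * Pp
        mu_new = c * lik
        return x_new, P_new, mu_new / mu_new.sum()

    x, P, mu = np.zeros(2), np.ones(2), np.array([0.5, 0.5])
    for z in [0.0, 0.02, 1.0, 1.01, 0.99]:             # abrupt jump at step 3
        x, P, mu = imm_step(z, x, P, mu)
        print(f"fused estimate {mu @ x:.3f}, model probs {mu.round(2)}")
    ```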

11. An EOQ model for Weibull distribution deterioration with time-dependent cubic demand and backlogging

    NASA Astrophysics Data System (ADS)

    Santhi, G.; Karthikeyan, K.

    2017-11-01

In this article we introduce an economic order quantity model with Weibull deterioration and a time-dependent cubic demand rate, where the holding cost is a linear function of time. Shortages are allowed in the inventory system and are partially or fully backlogged. The objective of this model is to minimize the total inventory cost by choosing the optimal order quantity and cycle length. The proposed model is illustrated with numerical examples, and a sensitivity analysis is performed to study the effect of parameter changes on the optimum solution.
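
    The optimization can be treated numerically in outline: with a Weibull deterioration rate θ(t) = a·b·t^(b-1) and demand rate D(t), the stock level satisfies I'(t) + θ(t)I(t) = -D(t) with I(T) = 0, and one minimizes the average cost per unit time over the cycle length T. The parameter values and the simplified no-shortage cost structure below are assumptions for this sketch, not the paper's formulation (which includes backlogging).

    ```python
    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    a, b = 0.01, 2.0                       # Weibull scale and shape
    A, h0, h1, c_d = 100.0, 0.5, 0.1, 2.0  # order, holding (linear in t), deterioration costs
    demand = lambda t: 10 + 2*t + 0.5*t**2 + 0.1*t**3   # cubic demand rate

    def inventory(t, T):
        # closed-form solution of I'(t) + a*b*t**(b-1)*I(t) = -D(t), I(T) = 0
        val, _ = quad(lambda u: demand(u) * np.exp(a * u**b), t, T)
        return np.exp(-a * t**b) * val

    def avg_cost(T):
        hold, _ = quad(lambda t: (h0 + h1*t) * inventory(t, T), 0, T)
        dem, _ = quad(demand, 0, T)
        deteriorated = inventory(0.0, T) - dem          # units lost to deterioration
        return (A + hold + c_d * deteriorated) / T      # cost per unit time

    res = minimize_scalar(avg_cost, bounds=(0.1, 10.0), method="bounded")
    print(f"optimal cycle length {res.x:.2f}, average cost {res.fun:.2f}")
    ```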

  12. A mixed model framework for teratology studies.

    PubMed

    Braeken, Johan; Tuerlinckx, Francis

    2009-10-01

    A mixed model framework is presented to model the characteristic multivariate binary anomaly data as provided in some teratology studies. The key features of the model are the incorporation of covariate effects, a flexible random effects distribution by means of a finite mixture, and the application of copula functions to better account for the relation structure of the anomalies. The framework is motivated by data of the Boston Anticonvulsant Teratogenesis study and offers an integrated approach to investigate substantive questions, concerning general and anomaly-specific exposure effects of covariates, interrelations between anomalies, and objective diagnostic measurement.

  13. Bidirectional Reflectance Modeling of Non-homogeneous Plant Canopies

    NASA Technical Reports Server (NTRS)

    Norman, J. M. (Principal Investigator)

    1985-01-01

The objective of this research is to develop a 3-dimensional radiative transfer model for predicting the bidirectional reflectance distribution function (BRDF) of heterogeneous vegetation canopies. The model (named BIGAR) considers the angular distribution of leaves, leaf area index, the location and size of individual subcanopies such as widely spaced rows or trees, spectral and directional properties of leaves, multiple scattering, solar position and sky condition, and characteristics of the soil. The model relates canopy biophysical attributes to down-looking radiation measurements for nadir and off-nadir viewing angles. Therefore, inversion of this model, which is difficult but practical, should provide surface biophysical patterns, a fundamental goal of remote sensing. Such a model will also help to evaluate atmospheric limitations to satellite remote sensing by providing a good surface boundary condition for many different kinds of canopies. Furthermore, this model can relate estimates of nadir reflectance, which is what most satellites approximate, to hemispherical reflectance, which is needed for the energy budget of vegetated surfaces.

  14. Model Calibration in Watershed Hydrology

    NASA Technical Reports Server (NTRS)

    Yilmaz, Koray K.; Vrugt, Jasper A.; Gupta, Hoshin V.; Sorooshian, Soroosh

    2009-01-01

    Hydrologic models use relatively simple mathematical equations to conceptualize and aggregate the complex, spatially distributed, and highly interrelated water, energy, and vegetation processes in a watershed. A consequence of process aggregation is that the model parameters often do not represent directly measurable entities and must, therefore, be estimated using measurements of the system inputs and outputs. During this process, known as model calibration, the parameters are adjusted so that the behavior of the model approximates, as closely and consistently as possible, the observed response of the hydrologic system over some historical period of time. This Chapter reviews the current state-of-the-art of model calibration in watershed hydrology with special emphasis on our own contributions in the last few decades. We discuss the historical background that has led to current perspectives, and review different approaches for manual and automatic single- and multi-objective parameter estimation. In particular, we highlight the recent developments in the calibration of distributed hydrologic models using parameter dimensionality reduction sampling, parameter regularization and parallel computing.

  15. Building a laboratory foundation for interpreting spectral emission from x-ray binary and black hole accretion disks

    NASA Astrophysics Data System (ADS)

    Loisel, Guillaume

    2016-10-01

Emission from accretion-powered objects accounts for a large fraction of all photons in the universe and is a powerful diagnostic of their behavior and structure. Quantitative interpretation of the emission spectra of these objects requires a spectral synthesis model for photoionized plasma, since the ionizing luminosity is so large that photon-driven atomic processes dominate over collisions. This is a quandary because laboratory experiments capable of testing the spectral emission models have been non-existent. The models must predict the photoionized charge state distribution, the photon emission processes, and the influence of radiation transport on the observed emission. We have used a decade of research at the Z facility to achieve the first simultaneous measurements of emission and absorption from photoionized plasmas. The extraordinary spectra are reproducible to within +/-2%, and the E/ΔE ~ 500 spectral resolution has enabled unprecedented tests of atomic structure calculations. The absorption spectra enable determination of the plasma density, temperature, and charge state distribution. The emission spectra then enable tests of spectral emission models. The emission has been measured from plasmas of varying size to elucidate radiation transport effects. This combination of measurements will provide strong constraints on models used in astrophysics. Sandia is a multi-program laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy under contract DE-AC04-94AL85000.

  16. Simulating pad-electrodes with high-definition arrays in transcranial electric stimulation

    NASA Astrophysics Data System (ADS)

    Kempe, René; Huang, Yu; Parra, Lucas C.

    2014-04-01

    Objective. Research studies on transcranial electric stimulation, including direct current, often use a computational model to provide guidance on the placing of sponge-electrode pads. However, the expertise and computational resources needed for finite element modeling (FEM) make modeling impractical in a clinical setting. Our objective is to make the exploration of different electrode configurations accessible to practitioners. We provide an efficient tool to estimate current distributions for arbitrary pad configurations while obviating the need for complex simulation software. Approach. To efficiently estimate current distributions for arbitrary pad configurations we propose to simulate pads with an array of high-definition (HD) electrodes and use an efficient linear superposition to then quickly evaluate different electrode configurations. Main results. Numerical results on ten different pad configurations on a normal individual show that electric field intensity simulated with the sampled array deviates from the solutions with pads by only 5% and the locations of peak magnitude fields have a 94% overlap when using a dense array of 336 electrodes. Significance. Computationally intensive FEM modeling of the HD array needs to be performed only once, perhaps on a set of standard heads that can be made available to multiple users. The present results confirm that by using these models one can now quickly and accurately explore and select pad-electrode montages to match a particular clinical need.
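
    The superposition idea is simple to express: run the FEM once per HD electrode (each against a common reference) to obtain a per-electrode field map, then approximate any pad montage as a weighted sum of those precomputed maps. In the sketch below, the array shapes, the random placeholder fields, and the uniform current split across the electrodes under a pad are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    n_electrodes, n_voxels = 336, 5000
    lead_fields = rng.random((n_electrodes, n_voxels, 3))  # placeholder FEM output

    def pad_field(electrode_idx, total_current_mA=2.0):
        I = total_current_mA / len(electrode_idx)          # uniform current split
        return I * lead_fields[electrode_idx].sum(axis=0)

    anode_pad   = [10, 11, 12, 45, 46, 47]   # hypothetical electrodes under pad
    cathode_pad = [200, 201, 202]
    E = pad_field(anode_pad) - pad_field(cathode_pad)      # linear superposition
    print(np.linalg.norm(E, axis=1).max())                 # peak field magnitude
    ```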

  17. Development of a Web Service for Analysis in a Distributed Network

    PubMed Central

    Jiang, Xiaoqian; Wu, Yuan; Marsolo, Keith; Ohno-Machado, Lucila

    2014-01-01

    Objective: We describe functional specifications and practicalities in the software development process for a web service that allows the construction of the multivariate logistic regression model, Grid Logistic Regression (GLORE), by aggregating partial estimates from distributed sites, with no exchange of patient-level data. Background: We recently developed and published a web service for model construction and data analysis in a distributed environment. This recent paper provided an overview of the system that is useful for users, but included very few details that are relevant for biomedical informatics developers or network security personnel who may be interested in implementing this or similar systems. We focus here on how the system was conceived and implemented. Methods: We followed a two-stage development approach by first implementing the backbone system and incrementally improving the user experience through interactions with potential users during the development. Our system went through various stages such as concept proof, algorithm validation, user interface development, and system testing. We used the Zoho Project management system to track tasks and milestones. We leveraged Google Code and Apache Subversion to share code among team members, and developed an applet-servlet architecture to support the cross platform deployment. Discussion: During the development process, we encountered challenges such as Information Technology (IT) infrastructure gaps and limited team experience in user-interface design. We figured out solutions as well as enabling factors to support the translation of an innovative privacy-preserving, distributed modeling technology into a working prototype. Conclusion: Using GLORE (a distributed model that we developed earlier) as a pilot example, we demonstrated the feasibility of building and integrating distributed modeling technology into a usable framework that can support privacy-preserving, distributed data analysis among researchers at geographically dispersed institutes. PMID:25848586
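
    The aggregation idea behind GLORE can be sketched in a few lines of distributed Newton-Raphson: each site shares only its gradient and Hessian contributions (never patient-level rows), and a coordinator sums them to take a global Newton step. The simulated data and function names below are ours, for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    beta_true = np.array([0.5, -1.0, 0.25])

    def make_site(n=200):
        X = rng.normal(size=(n, 3))
        y = rng.random(n) < 1 / (1 + np.exp(-X @ beta_true))
        return X, y

    sites = [make_site() for _ in range(3)]        # three institutions' private data

    def local_terms(X, y, beta):
        p = 1 / (1 + np.exp(-X @ beta))
        grad = X.T @ (y - p)                       # local score vector
        hess = (X * (p * (1 - p))[:, None]).T @ X  # local information matrix
        return grad, hess

    beta = np.zeros(3)
    for _ in range(10):                            # global Newton iterations
        g = sum(local_terms(X, y, beta)[0] for X, y in sites)
        H = sum(local_terms(X, y, beta)[1] for X, y in sites)
        beta = beta + np.linalg.solve(H, g)
    print(beta.round(2))                           # approaches beta_true
    ```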

  18. Development of Semi-distributed ecohydrological model in the Rio Grande De Manati River Basin, Puerto Rico

    NASA Astrophysics Data System (ADS)

    Setegn, S. G.; Ortiz, J.; Melendez, J.; Barreto, M.; Torres-Perez, J. L.; Guild, L. S.

    2015-12-01

Few studies in Puerto Rico address water resource availability and variability under changing climate and land use. The main goal of the NASA-funded project Human Impacts to Coastal Ecosystems in Puerto Rico (HICE-PR): the Río Loco Watershed (southwest coast PR) is to evaluate the impacts of land use/land cover changes on the quality and extent of coastal and marine ecosystems (CMEs) in two priority watersheds in Puerto Rico (Manatí and Guánica). The main objective of this study is to set up a physically based, spatially distributed hydrological model, the Soil and Water Assessment Tool (SWAT), for the analysis of hydrological processes in the Rio Grande de Manati river basin. SWAT is a spatially distributed watershed model developed to predict the impact of land management practices on water, sediment, and agricultural chemical yields in large, complex watersheds. For efficient use of distributed models in hydrological and scenario analysis, it is important that they pass through careful calibration and uncertainty analysis. The model was calibrated and validated using the Sequential Uncertainty Fitting (SUFI-2) calibration and uncertainty analysis algorithm. The model evaluation statistics for streamflow prediction show good agreement between measured and simulated flows, verified by coefficients of determination and Nash-Sutcliffe efficiencies greater than 0.5. Keywords: Hydrological Modeling; SWAT; SUFI-2; Rio Grande De Manati; Puerto Rico
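
    The Nash-Sutcliffe efficiency used as the evaluation threshold here is a one-line statistic; a minimal sketch follows, with invented flow values for illustration.

    ```python
    import numpy as np

    def nse(sim, obs):
        # Nash-Sutcliffe efficiency: 1 minus the ratio of model error variance
        # to the variance of the observations; 1 is perfect, > 0.5 is the
        # acceptability threshold cited in the abstract
        sim, obs = np.asarray(sim, float), np.asarray(obs, float)
        return 1 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

    obs = [10.0, 12.0, 30.0, 22.0, 15.0]   # invented observed flows
    sim = [11.0, 13.5, 27.0, 20.0, 16.0]   # invented simulated flows
    print(f"NSE = {nse(sim, obs):.2f}")    # 0.94 for these toy values
    ```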

  19. Objective Video Quality Assessment Based on Machine Learning for Underwater Scientific Applications

    PubMed Central

    Moreno-Roldán, José-Miguel; Luque-Nieto, Miguel-Ángel; Poncela, Javier; Otero, Pablo

    2017-01-01

Video services are meant to be a fundamental tool in the development of oceanic research. Current technology for underwater networks (UWNs) imposes strong constraints on transmission capacity, since only a severely limited bitrate is available. However, previous studies have shown that the quality of experience (QoE) is sufficient for ocean scientists to consider the service useful, although the perceived quality can change significantly over small ranges of variation of video parameters. In this context, objective video quality assessment (VQA) methods become essential for network planning and real-time quality adaptation. This paper presents two specialized models for objective VQA, designed to match the special requirements of UWNs. The models are built upon machine learning techniques and trained with actual user data gathered from subjective tests. Our performance analysis shows how both of them can successfully estimate quality as a mean opinion score (MOS) value and, for the second model, even compute a distribution function for user scores. PMID:28333123
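
    The general shape of such a model is a regressor from encoding parameters to a MOS prediction. The toy below uses invented features and simulated subjective scores purely to show the pattern; the paper's actual models, features, and training data differ.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(3)
    # invented features: bitrate (kb/s), frame rate (fps), frame height (px)
    X = rng.uniform([50, 5, 64], [500, 25, 512], size=(200, 3))
    # simulated subjective scores on the 1-5 MOS scale
    mos = np.clip(1 + 4 * (X[:, 0] / 500) ** 0.5 + rng.normal(0, 0.3, 200), 1, 5)

    model = RandomForestRegressor(n_estimators=100).fit(X, mos)
    print(model.predict([[200, 10, 256]]))   # predicted MOS for one configuration
    ```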

  20. This is SPIRAL-TAP: Sparse Poisson Intensity Reconstruction ALgorithms--theory and practice.

    PubMed

    Harmany, Zachary T; Marcia, Roummel F; Willett, Rebecca M

    2012-03-01

    Observations in many applications consist of counts of discrete events, such as photons hitting a detector, which cannot be effectively modeled using an additive bounded or Gaussian noise model, and instead require a Poisson noise model. As a result, accurate reconstruction of a spatially or temporally distributed phenomenon (f*) from Poisson data (y) cannot be effectively accomplished by minimizing a conventional penalized least-squares objective function. The problem addressed in this paper is the estimation of f* from y in an inverse problem setting, where the number of unknowns may potentially be larger than the number of observations and f* admits sparse approximation. The optimization formulation considered in this paper uses a penalized negative Poisson log-likelihood objective function with nonnegativity constraints (since Poisson intensities are naturally nonnegative). In particular, the proposed approach incorporates key ideas of using separable quadratic approximations to the objective function at each iteration and penalization terms related to l1 norms of coefficient vectors, total variation seminorms, and partition-based multiscale estimation methods.
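
    The objective function is easy to state and to minimize crudely: F(f) = 1ᵀAf − yᵀlog(Af) + τ‖f‖₁ subject to f ≥ 0. The sketch below substitutes a plain projected proximal-gradient loop for SPIRAL's separable quadratic approximation machinery, so it illustrates the objective rather than the paper's algorithm; the problem sizes and τ are arbitrary.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 50, 30
    A = rng.random((m, n))                       # nonnegative sensing matrix
    f_true = np.zeros(n)
    f_true[rng.choice(n, 5, replace=False)] = 10 * rng.random(5)
    y = rng.poisson(A @ f_true)                  # Poisson count data

    tau, step = 0.1, 1e-3                        # l1 weight, gradient step
    f = np.ones(n)
    for _ in range(5000):
        Af = np.maximum(A @ f, 1e-10)
        grad = A.sum(axis=0) - A.T @ (y / Af)    # gradient of 1'Af - y'log(Af)
        f = np.maximum(f - step * grad - step * tau, 0.0)  # prox of tau*||.||_1 on f >= 0

    Af = A @ f
    obj = Af.sum() - y @ np.log(np.maximum(Af, 1e-10)) + tau * f.sum()
    print(f"objective {obj:.1f}, nonzeros {np.count_nonzero(f > 1e-3)}")
    ```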
