Promoting Model-based Definition to Establish a Complete Product Definition
Ruemler, Shawn P.; Zimmerman, Kyle E.; Hartman, Nathan W.; Hedberg, Thomas; Feeny, Allison Barnard
2016-01-01
The manufacturing industry is evolving and starting to use 3D models as the central knowledge artifact for product data and product definition, or what is known as Model-based Definition (MBD). The Model-based Enterprise (MBE) uses MBD as a way to transition away from traditional paper-based drawings and documentation. As MBD grows in popularity, it is imperative to understand what information is needed in the transition from drawings to models so that models represent all the relevant information needed for processes to continue efficiently. Finding this information can help define what data is common amongst different models in different stages of the lifecycle, which could help establish a Common Information Model. The Common Information Model is a source that contains common information from domain-specific elements amongst different aspects of the lifecycle. To help establish this Common Information Model, information about how models are used in industry within different workflows needs to be understood. To retrieve this information, a survey was administered to industry professionals from various sectors. Based on the results of the survey, a Common Information Model could not be established. However, the results gave insight that will help in further investigation of the Common Information Model. PMID:28070155
ERIC Educational Resources Information Center
Abuhamdieh, Ayman H.; Harder, Joseph T.
2015-01-01
This paper proposes a meta-cognitive, systems-based, information structuring model (McSIS) to systematize online information search behavior based on literature review of information-seeking models. The General Systems Theory's (GST) prepositions serve as its framework. Factors influencing information-seekers, such as the individual learning…
A new region-edge based level set model with applications to image segmentation
NASA Astrophysics Data System (ADS)
Zhi, Xuhao; Shen, Hong-Bin
2018-04-01
The level set model has advantages in handling complex shapes and topological changes and is widely used in image processing tasks. Image segmentation oriented level set models can be grouped into region-based and edge-based models, both of which have merits and drawbacks. Region-based level set models rely on fitting the color intensity of separated regions but are not sensitive to edge information. Edge-based level set models evolve by fitting local gradient information but are easily affected by noise. We propose a region-edge based level set model, which incorporates saliency information into the energy function and fuses color intensity with local gradient information. The evolution of the proposed model is implemented by a hierarchical two-stage protocol, and the experimental results show flexible initialization, robust evolution and precise segmentation.
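The abstract does not reproduce the energy function. A plausible form for such a combined region-edge energy, assuming Chan-Vese-style region fitting weighted by a saliency map w_s(x) and a geodesic edge term (the weights and the saliency factor are illustrative assumptions, not the authors' exact formulation):

\[
E(\phi) = \lambda_r \int_\Omega w_s(x)\,|I(x)-c_1|^2\,H(\phi)\,dx + \lambda_r \int_\Omega w_s(x)\,|I(x)-c_2|^2\,\bigl(1-H(\phi)\bigr)\,dx + \lambda_e \int_\Omega g(|\nabla I|)\,\delta(\phi)\,|\nabla\phi|\,dx
\]

where c_1 and c_2 are the mean intensities inside and outside the zero level set of \phi, H is the Heaviside function, and g(s) = 1/(1+s^2) is a decreasing edge indicator, so the contour is driven by region fit where gradients are weak and snaps to edges where they are strong.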
Model-based learning and the contribution of the orbitofrontal cortex to the model-free world
McDannald, Michael A.; Takahashi, Yuji K.; Lopatina, Nina; Pietras, Brad W.; Jones, Josh L.; Schoenbaum, Geoffrey
2012-01-01
Learning is proposed to occur when there is a discrepancy between reward prediction and reward receipt. At least two separate systems are thought to exist: one in which predictions are proposed to be based on model-free or cached values; and another in which predictions are model-based. A basic neural circuit for model-free reinforcement learning has already been described. In the model-free circuit the ventral striatum (VS) is thought to supply a common-currency reward prediction to midbrain dopamine neurons that compute prediction errors and drive learning. In a model-based system, predictions can include more information about an expected reward, such as its sensory attributes or current, unique value. This detailed prediction allows for both behavioral flexibility and learning driven by changes in sensory features of rewards alone. Recent evidence from animal learning and human imaging suggests that, in addition to model-free information, the VS also signals model-based information. Further, there is evidence that the orbitofrontal cortex (OFC) signals model-based information. Here we review these data and suggest that the OFC provides model-based information to this traditional model-free circuitry and offer possibilities as to how this interaction might occur. PMID:22487030
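To make the model-free half of this account concrete, here is a minimal temporal-difference sketch in Python; the states, rewards and parameters are invented for illustration (this is textbook TD(0), not code from the paper):

```python
# Minimal cached-value (model-free) TD(0) sketch. The scalar prediction
# error `delta` plays the role the review ascribes to midbrain dopamine.
alpha, gamma = 0.1, 0.9
V = {"cue": 0.0, "outcome": 0.0}           # common-currency cached values

def td_step(s, s_next, r, terminal=False):
    v_next = 0.0 if terminal else gamma * V[s_next]
    delta = r + v_next - V[s]              # reward prediction error
    V[s] += alpha * delta
    return delta

for _ in range(200):                       # repeated cue -> outcome pairings
    td_step("cue", "outcome", r=0.0)
    td_step("outcome", None, r=1.0, terminal=True)

# V["cue"] approaches gamma * 1.0: the cue acquires a scalar cached value
# that carries no information about the reward's sensory identity, which
# is exactly the detail a model-based system would add.
print(V)
```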
NED-IIS: An Intelligent Information System for Forest Ecosystem Management
W.D. Potter; S. Somasekar; R. Kommineni; H.M. Rauscher
1999-01-01
We view an Intelligent Information System (IIS) as composed of a unified knowledge base, database, and model base. The model base includes decision support models, forecasting models, and visualization models, for example. In addition, we feel that the model base should include domain-specific problem-solving modules as well as decision support models. This, then,...
Formal Specification of Information Systems Requirements.
ERIC Educational Resources Information Center
Kampfner, Roberto R.
1985-01-01
Presents a formal model for specification of logical requirements of computer-based information systems that incorporates structural and dynamic aspects based on two separate models: the Logical Information Processing Structure and the Logical Information Processing Network. The model's role in systems development is discussed. (MBR)
A model-driven approach to information security compliance
NASA Astrophysics Data System (ADS)
Correia, Anacleto; Gonçalves, António; Teodoro, M. Filomena
2017-06-01
The availability, integrity and confidentiality of information are fundamental to the long-term survival of any organization. Information security is a complex issue that must be approached holistically, combining assets that support corporate systems in an extended network of business partners, vendors, customers and other stakeholders. This paper addresses the conception and implementation of information security systems conforming to the ISO/IEC 27000 set of standards, using the model-driven approach. The process begins with the conception of a domain-level model (computation independent model) based on the information security vocabulary present in the ISO/IEC 27001 standard. From this model, after embedding the mandatory rules for attaining ISO/IEC 27001 conformance, a platform independent model is derived. Finally, a platform specific model serves as the basis for testing the compliance of information security systems with the ISO/IEC 27000 set of standards.
NASA Astrophysics Data System (ADS)
Dong, S.; Yan, Q.; Xu, Y.; Bai, J.
2018-04-01
To promote the construction of a digital geo-spatial framework in China and accelerate the construction of an informatized mapping system, the three-dimensional geographic information model has emerged. A three-dimensional geographic information model based on oblique photogrammetry has higher accuracy, a shorter production period and lower cost than traditional methods, and can more directly reflect the elevation, position and appearance of features. The technology for producing such models is developing rapidly; market demand and model deliverables have grown substantially, and the associated quality inspection needs are growing as well. A review of the relevant literature shows a great deal of research on the basic principles and technical characteristics of this technology but relatively little on quality inspection and analysis. After summarizing the basic principles and technical characteristics of oblique photogrammetry, this paper introduces the inspection contents and inspection methods for three-dimensional geographic information models based on oblique photogrammetry. Drawing on actual inspection work, it summarizes the quality problems of such models, analyzes the causes of those problems and puts forward quality control measures. The paper provides technical guidance for the quality inspection of these data products in China and technical support for the further development of oblique-photogrammetry-based three-dimensional geographic information models.
QSAR modeling based on structure-information for properties of interest in human health.
Hall, L H; Hall, L M
2005-01-01
The development of QSAR models based on topological structure description is presented for problems in human health. These models are based on the structure-information approach to quantitative biological modeling and prediction, in contrast to the mechanism-based approach. The structure-information approach is outlined, starting with basic structure information developed from the chemical graph (connection table). Information explicit in the connection table (element identity and skeletal connections) leads to significant (implicit) structure information that is useful for establishing sound models of a wide range of properties of interest in drug design. Valence state definition leads to relationships for valence state electronegativity and atom/group molar volume. Based on these important aspects of molecules, together with skeletal branching patterns, both the electrotopological state (E-state) and molecular connectivity (chi indices) structure descriptors are developed and described. A summary of four QSAR models indicates the wide range of applicability of these structure descriptors and the predictive quality of QSAR models based on them: aqueous solubility (5535 chemically diverse compounds, 938 in external validation), percent oral absorption (%OA, 417 therapeutic drugs, 195 drugs in external validation testing), AMES mutagenicity (2963 compounds including 290 therapeutic drugs, 400 in external validation), fish toxicity (92 substituted phenols, anilines and substituted aromatics). These models are established independent of explicit three-dimensional (3-D) structure information and are directly interpretable in terms of the implicit structure information useful to the drug design process.
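To make the connectivity (chi) descriptors concrete, here is a small Python sketch computing the first-order simple molecular connectivity index from a hydrogen-suppressed connection table. The molecule is illustrative, and the E-state indices mentioned above would additionally require valence-electronegativity terms not shown here:

```python
import math

# First-order (simple) molecular connectivity index:
# chi-1 = sum over skeletal bonds of (delta_i * delta_j)^(-1/2),
# where delta is the count of non-hydrogen connections of each atom.
# The hydrogen-suppressed graph below is 2-methylbutane (illustrative).
bonds = [(1, 2), (2, 3), (3, 4), (2, 5)]    # skeletal connection table

delta = {}
for i, j in bonds:                           # vertex degrees from the graph
    delta[i] = delta.get(i, 0) + 1
    delta[j] = delta.get(j, 0) + 1

chi1 = sum(1.0 / math.sqrt(delta[i] * delta[j]) for i, j in bonds)
print(f"first-order chi = {chi1:.3f}")       # 2.270 for 2-methylbutane
```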
Li, Qianqian; Yang, Tao; Zhao, Erbo; Xia, Xing’ang; Han, Zhangang
2013-01-01
There has been an increasing interest in the geographic aspects of economic development, exemplified by P. Krugman’s logical analysis. We show in this paper that the geographic aspects of economic development can be modeled using multi-agent systems that incorporate multiple underlying factors. The extent of information sharing is assumed to be a driving force that leads to economic geographic heterogeneity across locations without geographic advantages or disadvantages. We propose an agent-based market model that considers a spectrum of different information-sharing mechanisms: no information sharing, information sharing among friends and pheromone-like information sharing. Finally, we build a unified model that accommodates all three of these information-sharing mechanisms based on the number of friends who can share information. We find that the no information-sharing model does not yield large economic zones, and more information sharing can give rise to a power-law distribution of market size that corresponds to the stylized fact of city size and firm size distributions. The simulations show that this model is robust. This paper provides an alternative approach to studying economic geographic development, and this model could be used as a test bed to validate the detailed assumptions that regulate real economic agglomeration. PMID:23484007
An Inter-Personal Information Sharing Model Based on Personalized Recommendations
NASA Astrophysics Data System (ADS)
Kamei, Koji; Funakoshi, Kaname; Akahani, Jun-Ichi; Satoh, Tetsuji
In this paper, we propose an inter-personal information sharing model based on personalized recommendations. In the proposed model, we define an information resource as shared between people when both of them consider it important, not merely when they both possess it. In other words, the model defines the importance of information resources based on personalized recommendations from identifiable acquaintances. The proposed method is based on a collaborative filtering system that focuses on evaluations from identifiable acquaintances and utilizes both user evaluations of documents and their contents. Each user profile is represented as a matrix of credibility assigned to other users' evaluations in each domain of interest. We extended the content-based collaborative filtering method to distinguish the other users to whom documents should be recommended. We also applied a concept-based vector space model to represent the domains of interest, instead of the previous method, which represented them by a term-based vector space model. We introduce a personalized concept-base, compiled from each user's information repository, to improve information retrieval in the user's environment. Furthermore, the concept-spaces differ from user to user since they reflect the personalities of the users. Because of these different concept-spaces, the similarity between a document and a user's interest varies for each user. As a result, a user receives recommendations from other users who have different viewpoints, achieving inter-personal information sharing based on personalized recommendations. This paper also describes an experimental simulation of our information sharing model. In our laboratory, five participants accumulated personal repositories of e-mails and web pages from which they built their own concept-bases. We then estimated the user profiles according to the personalized concept-bases and the sets of documents that others evaluated. We simulated inter-personal recommendation based on the user profiles and evaluated the performance of the recommendation method by comparing the recommended documents to the results of content-based collaborative filtering.
MIQSTURE: An Experimental Online Language for Army Tactical Intelligence Information Processing
1978-07-01
algorithms. The most critical component of an active information processing model for Army tactical intelligence is the user interface, which must be based on... (1976) defined some preliminary notions of an active information model centered around a data base that can introspect about its contents and... "An Introspective Data Base for an Active Information Model," OSI Technical Note N76-017, 17 November 1976.
Information visualisation based on graph models
NASA Astrophysics Data System (ADS)
Kasyanov, V. N.; Kasyanova, E. V.
2013-05-01
Information visualisation is a key component of support tools for many applications in science and engineering. A graph is an abstract structure that is widely used to model information for its visualisation. In this paper, we consider a practical and general graph formalism called hierarchical graphs and present the Higres and Visual Graph systems aimed at supporting information visualisation on the basis of hierarchical graph models.
An efficient temporal database design method based on EER
NASA Astrophysics Data System (ADS)
Liu, Zhi; Huang, Jiping; Miao, Hua
2007-12-01
Many existing methods of modeling temporal information are based on the logical model, which makes relational schema optimization more difficult and more complicated. In this paper, based on the conventional EER model, the authors attempt to analyse and abstract temporal information in the conceptual modelling phase according to the concrete requirements for historical information. A temporal data model named BTEER is then presented. BTEER not only retains all the designing ideas and methods of EER, giving it good upward compatibility, but also effectively supports the modelling of valid time and transaction time. In addition, BTEER can be transformed to EER easily and automatically. Practice shows that this method models temporal information well.
Information of Complex Systems and Applications in Agent Based Modeling.
Bao, Lei; Fritchman, Joseph C
2018-04-18
Information about a system's internal interactions is important to modeling the system's dynamics. This study examines the finer categories of the information definition and explores the features of a type of local information that describes the internal interactions of a system. Based on the results, a dual-space agent and information modeling framework (AIM) is developed by explicitly distinguishing an information space from the material space. The two spaces can evolve both independently and interactively. The dual-space framework can provide new analytic methods for agent-based models (ABMs). Three examples are presented, including money distribution, individuals' economic evolution, and an artificial stock market. The results are analyzed in the dual-space, which more clearly shows the interactions and evolutions within and between the information and material spaces. The outcomes demonstrate the wide-ranging applicability of using dual-space AIMs to model and analyze a broad range of interactive and intelligent systems.
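As a concrete illustration of the first example, a minimal money-distribution agent-based model in Python; the pairing rule, parameters and the "information space" event log are assumptions for illustration, not the authors' implementation:

```python
import random

# Material space: each agent's money. The information space is mimicked
# here by a log of transaction records that evolves alongside it.
N, STEPS, M0 = 500, 200_000, 100.0
money = [M0] * N
info_log = []                             # "information space" record

random.seed(1)
for _ in range(STEPS):
    a, b = random.randrange(N), random.randrange(N)
    if a == b or money[a] <= 0:
        continue
    amount = random.uniform(0, money[a])  # agent a pays agent b
    money[a] -= amount
    money[b] += amount
    info_log.append((a, b, round(amount, 2)))

# Random pairwise exchange relaxes toward an exponential (Boltzmann-Gibbs)
# wealth distribution; a crude histogram of 50-unit bins shows the decay.
buckets = [0] * 10
for m in money:
    buckets[min(int(m // 50), 9)] += 1
print(buckets)
```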
NASA Astrophysics Data System (ADS)
Qiu, Feng; Dai, Guang; Zhang, Ying
Based on the acoustic emission information and the appearance inspection information from online testing of tank bottoms, the external factors associated with tank bottom corrosion status are identified. Applying an artificial neural network intelligent evaluation method, three tank bottom corrosion status evaluation models are established, based respectively on appearance inspection information, acoustic emission information, and online testing information. Compared with the results of acoustic emission online testing on a test sample, the accuracy of the evaluation model based on online testing information is 94%. The evaluation model can evaluate tank bottom corrosion accurately and realizes intelligent evaluation for acoustic emission online testing of tank bottoms.
User interest modeling based on scenario information and browsed content
NASA Astrophysics Data System (ADS)
Zhao, Yang
2017-08-01
User interest modeling is the core of personalized service. Taking into account the impact of situational information on user preferences and the user's day-to-day browsing behavior, this paper proposes a method of user interest modeling based on scenario information. An approximate scenario set for the user's current scene is obtained by calculating situational similarity, and the "user - interest item - scenario" three-dimensional model is reduced in dimension using a situation pre-filtering method. The content of the pages the user has viewed is analyzed to obtain keywords for each topic of interest, and a hierarchical user interest model is built on a vector space model. The experimental results show that the user interest model based on scenario information predicts user interest to within 9% error, demonstrating that it is effective.
Design of a Model-Based Online Management Information System for Interlibrary Loan Networks.
ERIC Educational Resources Information Center
Rouse, Sandra H.; Rouse, William B.
1979-01-01
Discusses the design of a model-based management information system in terms of mathematical/statistical, information processing, and human factors issues and presents a prototype system for interlibrary loan networks. (Author/CWM)
Information spreading dynamics in hypernetworks
NASA Astrophysics Data System (ADS)
Suo, Qi; Guo, Jin-Li; Shen, Ai-Zhong
2018-04-01
Contact patterns and spreading strategies fundamentally influence the spread of information. Current mathematical methods largely assume that contacts between individuals are fixed by networks. In fact, individuals are affected by all of their neighbors across different social relationships. Here, we develop a mathematical approach to depict the information spreading process in hypernetworks. Each individual is viewed as a node, and each social relationship containing the individual is viewed as a hyperedge. Based on the SIS epidemic model, we construct two spreading models. One model is based on global transmission, corresponding to the RP strategy. The other is based on local transmission, corresponding to the CP strategy. These models degenerate into complex-network models for a special parameter value; hypernetwork models thus extend the traditional models and are more realistic. Further, we discuss the impact on the models of parameters including the structure parameters of the hypernetwork, the spreading rate, the recovery rate and the information seed. Propagation time and the density of informed nodes reveal the overall trend of information dissemination. Comparing the two models, we find that there is no spreading threshold in RP, while there exists a spreading threshold in CP.
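A toy simulation of the two strategies, with all parameters and the random hypernetwork construction invented for illustration (the paper develops analytic models; this sketch only mimics the contact rules):

```python
import random

# Toy SIS spread on a hypernetwork. Nodes are individuals; each hyperedge
# is one social relationship (family, workplace, ...) containing several
# nodes. RP contacts all neighbors; CP contacts one hyperedge per step.
random.seed(0)
N, BETA, GAMMA, STEPS = 200, 0.05, 0.1, 100
hyperedges = [random.sample(range(N), random.randint(3, 8)) for _ in range(120)]
member = {v: [] for v in range(N)}
for e in hyperedges:
    for v in e:
        member[v].append(e)

def step(infected, strategy):
    new = set(infected)
    for v in list(infected):
        if strategy == "RP":             # global: contact every neighbor
            contacts = {u for e in member[v] for u in e if u != v}
        else:                            # CP: one hyperedge chosen per step
            e = random.choice(member[v]) if member[v] else []
            contacts = {u for u in e if u != v}
        for u in contacts:
            if u not in new and random.random() < BETA:
                new.add(u)
        if random.random() < GAMMA:      # recover back to susceptible
            new.discard(v)
    return new

for strategy in ("RP", "CP"):
    infected = {0, 1, 2}
    for _ in range(STEPS):
        infected = step(infected, strategy)
    print(strategy, "final informed density:", len(infected) / N)
```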
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can serve as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series: streamflow was less random and more complex than precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion from precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as model complexity increased, but in many cases several models had efficiency values that were not statistically distinguishable from each other. In such cases, ranking models by the closeness of the information theory-based parameters of simulated and measured streamflow time series can provide an additional criterion for the evaluation of hydrologic model performance.
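A sketch of the symbolization step and the first of the three metrics in Python (the quantile boundaries, alphabet size and sample series are invented; effective measure complexity and fluctuation complexity would be computed from longer symbol blocks in the same way):

```python
import math
from collections import Counter

# Map a streamflow series to symbols by quantile, then compute the mean
# information gain as the block-entropy difference H(L+1) - H(L) with
# L = 1, i.e. the conditional entropy of the next symbol.
def symbolize(series, n_symbols=4):
    ranked = sorted(series)
    bounds = [ranked[int(len(series) * k / n_symbols)] for k in range(1, n_symbols)]
    return [sum(x > b for b in bounds) for x in series]

def entropy(blocks):
    counts = Counter(blocks)
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def mean_information_gain(symbols):
    h1 = entropy(symbols)
    h2 = entropy(list(zip(symbols, symbols[1:])))
    return h2 - h1            # low value = more predictable signal

flow = [10, 12, 30, 80, 60, 20, 15, 11, 9, 8, 25, 70, 90, 40, 18, 12]
print("MIG =", round(mean_information_gain(symbolize(flow)), 3))
```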
Information Interaction Study for DER and DMS Interoperability
NASA Astrophysics Data System (ADS)
Liu, Haitao; Lu, Yiming; Lv, Guangxian; Liu, Peng; Chen, Yu; Zhang, Xinhui
The Common Information Model (CIM) is an abstract data model that can be used to represent the major objects in Distribution Management System (DMS) applications. Because the CIM does not model Distributed Energy Resources (DERs), it cannot meet the requirements of DER operation and management for DMS advanced applications. DER modeling was studied from a system point of view, and the article initially proposes a CIM-extended information model. By analyzing the basic structure of message interaction between the DMS and DERs, a bidirectional message-mapping method based on data exchange is proposed.
Supervised guiding long-short term memory for image caption generation based on object classes
NASA Astrophysics Data System (ADS)
Wang, Jian; Cao, Zhiguo; Xiao, Yang; Qi, Xinyuan
2018-03-01
Present models of image caption generation suffer from attenuation of the image's visual semantic information and from errors in guidance information. To solve these problems, we propose a supervised guiding Long Short Term Memory model based on object classes, named S-gLSTM for short. It uses object detection results from R-FCN as supervisory information with high confidence, and updates the guidance word set by judging whether the last output matches the supervisory information. S-gLSTM learns how to extract the currently relevant information from the image's visual semantic information based on the guidance word set. The extracted information is fed into the S-gLSTM at each iteration as guidance information to steer caption generation. To acquire text-related visual semantic information, the S-gLSTM fine-tunes the weights of the network through back-propagation of the guiding loss. Supplying guidance information at each iteration solves the problem of visual semantic information attenuation in the traditional LSTM model. In addition, the supervised guidance information in our model reduces the impact of mismatched words on caption generation. We test our model on the MSCOCO2014 dataset and obtain better performance than the state-of-the-art models.
Security of statistical data bases: invasion of privacy through attribute correlational modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Palley, M.A.
This study develops, defines, and applies a statistical technique for the compromise of confidential information in a statistical data base. Attribute Correlational Modeling (ACM) recognizes that the information contained in a statistical data base represents real world statistical phenomena. As such, ACM assumes correlational behavior among the database attributes. ACM proceeds to compromise confidential information through creation of a regression model, where the confidential attribute is treated as the dependent variable. The typical statistical data base may preclude the direct application of regression. In this scenario, the research introduces the notion of a synthetic data base, created through legitimate queries of the actual data base, and through proportional random variation of responses to these queries. The synthetic data base is constructed to resemble the actual data base as closely as possible in a statistical sense. ACM then applies regression analysis to the synthetic data base, and utilizes the derived model to estimate confidential information in the actual database.
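A compact sketch of the ACM attack in Python with an invented schema; the synthetic rows below stand in for query-derived records with proportional random variation:

```python
import numpy as np

# Regress a confidential attribute on released ones using a synthetic
# data base, then estimate an individual's confidential value.
rng = np.random.default_rng(0)

# Stand-in synthetic data base (columns: age, tenure, salary).
age = rng.uniform(25, 60, 200)
tenure = rng.uniform(0, 30, 200)
salary = 800 * age + 1200 * tenure + rng.normal(0, 5000, 200)  # confidential

X = np.column_stack([np.ones_like(age), age, tenure])
beta, *_ = np.linalg.lstsq(X, salary, rcond=None)

# The attacker estimates a target individual's confidential salary from
# attributes that the statistical interface does release.
target = np.array([1.0, 47.0, 12.0])
print("estimated confidential value:", round(float(target @ beta)))
```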
Research on BIM-based building information value chain reengineering
NASA Astrophysics Data System (ADS)
Hui, Zhao; Weishuang, Xie
2017-04-01
Value and value-added factors accrue to building engineering information through a chain flow, that is, the building information value chain. Based on a deconstruction of the information chain for construction information in the traditional information mode, this paper clarifies the value characteristics and requirements of each stage of a construction project. To achieve building information value-added, the paper deconstructs the traditional building information value chain, reengineers the information value chain model on the basis of the theory and techniques of BIM, builds a value-added management model and analyses the value of the model.
The Future of Computer-Based Toxicity Prediction: Mechanism-Based Models vs. Information Mining Approaches
When we speak of computer-based toxicity prediction, we are generally referring to a broad array of approaches which rely primarily upon chemical structure ...
NASA Astrophysics Data System (ADS)
Anderson, Thomas S.
2016-05-01
The Global Information Network Architecture is an information technology based on Vector Relational Data Modeling, a unique computational paradigm, DoD network certified by the US Army as the Dragon Pulse Information Management System. This network-available environment is for modeling models: models are configured using domain-relevant semantics, use network-available systems, sensors, databases and services as loosely coupled component objects, and are executable applications. Solutions are based on mission tactics, techniques, and procedures and on subject matter input. Three recent Army use cases are discussed: a) an ISR system of systems (SoS); b) modeling and simulation behavior validation; c) a networked digital library with behaviors.
Clinic expert information extraction based on domain model and block importance model.
Zhang, Yuanpeng; Wang, Li; Qian, Danmin; Geng, Xingyun; Yao, Dengfu; Dong, Jiancheng
2015-11-01
To extract expert clinic information from the Deep Web, two challenges must be faced. The first is to make a judgment on forms. A novel method is proposed based on a domain model, a tree structure constructed from the attributes of query interfaces. With this model, query interfaces can be classified to a domain and filled in with domain keywords. The other challenge is to extract information from the response Web pages indexed by query interfaces. To filter the noisy information on a Web page, a block importance model is proposed, in which both content and spatial features are taken into account. The experimental results indicate that the domain model yields a precision 4.89% higher than that of the rule-based method, whereas the block importance model yields an F1 measure 10.5% higher than that of the XPath method. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cao, Yuansheng; Gong, Zongping; Quan, H. T.
2015-06-01
Motivated by the recently proposed models of the information engine [Proc. Natl. Acad. Sci. USA 109, 11641 (2012), 10.1073/pnas.1204263109] and the information refrigerator [Phys. Rev. Lett. 111, 030602 (2013), 10.1103/PhysRevLett.111.030602], we propose a minimal model of the information pump and the information eraser based on enzyme kinetics. This device can either pump molecules against the chemical potential gradient by consuming the information to be encoded in the bit stream, or (partially) erase the information initially encoded in the bit stream by consuming Gibbs free energy. The dynamics of this model is solved exactly, and the "phase diagram" of the operation regimes is determined. The efficiency and the power of the information machine are analyzed. The validity of the second law of thermodynamics within our model is clarified. Our model offers a simple paradigm for investigating the thermodynamics of information processing involving the chemical potential in small systems.
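The abstract states that the validity of the second law is clarified but does not give the bound. In hedged summary form, devices of this Mandal-Jarzynski type are generally constrained by an information-thermodynamic second law of the shape

\[
\Delta\mu\,\langle n \rangle \;\le\; k_B T \ln 2\;\Delta H,
\]

where \langle n \rangle is the mean number of molecules pumped against the chemical potential difference \Delta\mu per interaction interval and \Delta H is the change (in bits) of the Shannon entropy of an outgoing bit relative to an incoming one. Pumping (\Delta\mu\langle n\rangle > 0) must be paid for by randomizing the bit stream, while erasure (\Delta H < 0) must be paid for with Gibbs free energy; the exact expression in the paper may differ.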
Modeling method of time sequence model based grey system theory and application proceedings
NASA Astrophysics Data System (ADS)
Wei, Xuexia; Luo, Yaling; Zhang, Shiqiang
2015-12-01
This article gives a modeling method for the grey system GM(1,1) model based on information reuse and grey system theory. The method not only greatly enhances the fitting and predicting accuracy of the GM(1,1) model, but also retains the conventional approach's merit of simple computation. On this basis, we give a syphilis trend forecasting method based on information reuse and the grey system GM(1,1) model.
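For reference, a sketch of the standard GM(1,1) procedure that the paper refines (textbook algorithm in Python; the sample series is invented and the information-reuse step is not shown):

```python
import numpy as np

# GM(1,1): accumulate the series, fit the grey differential equation
# dx1/dt + a*x1 = b by least squares, then invert the accumulation.
def gm11(x0, horizon=3):
    x0 = np.asarray(x0, dtype=float)
    x1 = np.cumsum(x0)                            # accumulated series
    z1 = 0.5 * (x1[1:] + x1[:-1])                 # background values
    B = np.column_stack([-z1, np.ones_like(z1)])
    a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
    k = np.arange(len(x0) + horizon)
    x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
    x0_hat = np.concatenate([[x0[0]], np.diff(x1_hat)])  # inverse accumulation
    return x0_hat[:len(x0)], x0_hat[len(x0):]

fit, forecast = gm11([52, 57, 63, 66, 71, 78])    # e.g. annual case counts
print("fit:", np.round(fit, 1))
print("forecast:", np.round(forecast, 1))
```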
Barnes, Marcia A.; Raghubar, Kimberly P.; Faulkner, Heather; Denton, Carolyn A.
2014-01-01
Readers construct mental models of situations described by text to comprehend what they read, updating these situation models based on explicitly described and inferred information about causal, temporal, and spatial relations. Fluent adult readers update their situation models while reading narrative text based in part on spatial location information that is consistent with the perspective of the protagonist. The current study investigates whether children update spatial situation models in a similar way, whether there are age-related changes in children's formation of spatial situation models during reading, and whether measures of the ability to construct and update spatial situation models are predictive of reading comprehension. Typically-developing children from ages 9 through 16 years (n=81) were familiarized with a physical model of a marketplace. Then the model was covered, and children read stories that described the movement of a protagonist through the marketplace and were administered items requiring memory for both explicitly stated and inferred information about the character's movements. Accuracy of responses and response times were evaluated. Results indicated that: (a) location and object information during reading appeared to be activated and updated not simply from explicit text-based information but from a mental model of the real world situation described by the text; (b) this pattern showed no age-related differences; and (c) the ability to update the situation model of the text based on inferred information, but not explicitly stated information, was uniquely predictive of reading comprehension after accounting for word decoding. PMID:24315376
Lamers, L M
1999-01-01
OBJECTIVE: To evaluate the predictive accuracy of the Diagnostic Cost Group (DCG) model using health survey information. DATA SOURCES/STUDY SETTING: Longitudinal data collected for a sample of members of a Dutch sickness fund. In the Netherlands the sickness funds provide compulsory health insurance coverage for the 60 percent of the population in the lowest income brackets. STUDY DESIGN: A demographic model and DCG capitation models are estimated by means of ordinary least squares, with an individual's annual healthcare expenditures in 1994 as the dependent variable. For subgroups based on health survey information, costs predicted by the models are compared with actual costs. Using stepwise regression procedures a subset of relevant survey variables that could improve the predictive accuracy of the three-year DCG model was identified. Capitation models were extended with these variables. DATA COLLECTION/EXTRACTION METHODS: For the empirical analysis, panel data of sickness fund members were used that contained demographic information, annual healthcare expenditures, and diagnostic information from hospitalizations for each member. In 1993, a mailed health survey was conducted among a random sample of 15,000 persons in the panel data set, with a 70 percent response rate. PRINCIPAL FINDINGS: The predictive accuracy of the demographic model improves when it is extended with diagnostic information from prior hospitalizations (DCGs). A subset of survey variables further improves the predictive accuracy of the DCG capitation models. The predictable profits and losses based on survey information for the DCG models are smaller than for the demographic model. Most persons with predictable losses based on health survey information were not hospitalized in the preceding year. CONCLUSIONS: The use of diagnostic information from prior hospitalizations is a promising option for improving the demographic capitation payment formula. This study suggests that diagnostic information from outpatient utilization is complementary to DCGs in predicting future costs. PMID:10029506
Variable cycle control model for intersection based on multi-source information
NASA Astrophysics Data System (ADS)
Sun, Zhi-Yuan; Li, Yue; Qu, Wen-Cong; Chen, Yan-Yan
2018-05-01
In order to improve the efficiency of traffic control systems in the era of big data, a new variable cycle control model based on multi-source information is presented for intersections in this paper. Firstly, with consideration of multi-source information, a unified framework based on cyber-physical systems is proposed. Secondly, taking into account the variable length of cells, the hysteresis phenomenon of traffic flow and the characteristics of lane groups, a Lane-group-based Cell Transmission Model is established to describe the physical properties of traffic flow under different traffic signal control schemes. Thirdly, the variable cycle control problem is abstracted into a bi-level programming model. The upper-level model optimizes cycle length considering traffic capacity and delay. The lower-level model is a dynamic signal control decision model based on fairness analysis. Then, a Hybrid Intelligent Optimization Algorithm is proposed to solve the model. Finally, a case study shows the efficiency and applicability of the proposed model and algorithm.
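A minimal cell transmission model update in Python illustrating the flow rule such models are built on (the parameters and single-link layout are invented; the paper's lane-group variant, hysteresis handling and bi-level optimization are not reproduced):

```python
# Each cell i holds n[i] vehicles; per step, the flow into cell i is
# y[i] = min(sending flow of cell i-1, flow capacity, receiving room of i).
V_RATIO = 1.0          # free-flow speed * timestep / cell length
W_RATIO = 0.5          # backward wave speed ratio w/v
Q_MAX = 6              # saturation flow per step (vehicles)
N_MAX = 20             # jam storage per cell (vehicles)

def ctm_step(n, demand, green):
    y = [min(demand, Q_MAX, W_RATIO * (N_MAX - n[0]))]        # entry flow
    for i in range(1, len(n)):
        y.append(min(V_RATIO * n[i - 1], Q_MAX, W_RATIO * (N_MAX - n[i])))
    y.append(min(V_RATIO * n[-1], Q_MAX) if green else 0)     # stop line
    return [n[i] + y[i] - y[i + 1] for i in range(len(n))]

cells = [0, 0, 0, 0]
for t in range(40):
    cells = ctm_step(cells, demand=4, green=(t % 10 < 5))     # 50% green split
print(cells)   # residual queue: demand exceeds the signal's discharge rate
```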
A Statistical Framework for Protein Quantitation in Bottom-Up MS-Based Proteomics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Karpievitch, Yuliya; Stanley, Jeffrey R.; Taverner, Thomas
2009-08-15
Motivation: Quantitative mass spectrometry-based proteomics requires protein-level estimates and associated confidence measures. Challenges include the presence of low quality or incorrectly identified peptides and informative missingness. Furthermore, models are required for rolling peptide-level information up to the protein level. Results: We present a statistical model that carefully accounts for informative missingness in peak intensities and allows unbiased, model-based, protein-level estimation and inference. The model is applicable to both label-based and label-free quantitation experiments. We also provide automated, model-based algorithms for filtering of proteins and peptides as well as imputation of missing values. Two LC/MS datasets are used to illustrate the methods. In simulation studies, our methods are shown to achieve substantially more discoveries than standard alternatives. Availability: The software has been made available in the open-source proteomics platform DAnTE (http://omics.pnl.gov/software/). Contact: adabney@stat.tamu.edu. Supplementary information: Supplementary data are available at Bioinformatics online.
Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil
2016-03-15
Biological systems are complex and challenging to model, and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology, in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. We present an annotation framework and guidelines for annotating rule-based models encoded in the commonly used Kappa and BioNetGen languages, adapting widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof-of-concept tool for extracting annotations from a model so that they can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although examples are given using specific implementations, the proposed techniques can be applied to rule-based models in general. The annotation ontology for rule-based models can be found at http://purl.org/rbm/rbmo. The krdf tool and associated executable examples are available at http://purl.org/rbm/rbmo/krdf. Contact: anil.wipat@newcastle.ac.uk or vdanos@inf.ed.ac.uk. © The Author 2015. Published by Oxford University Press.
Research on manufacturing service behavior modeling based on block chain theory
NASA Astrophysics Data System (ADS)
Zhao, Gang; Zhang, Guangli; Liu, Ming; Yu, Shuqin; Liu, Yali; Zhang, Xu
2018-04-01
According to the attribute characteristics of the machining process, manufacturing service behavior is divided into service attributes, basic attributes, process attributes and resource attributes, and an attribute information model of manufacturing services is established. The manufacturing service behavior information is divided into public and private domains. Additionally, block chain technology is introduced, and an information model of manufacturing services based on block chain principles is established, which solves the problem of sharing and secreting processing-behavior information and ensures that data are not tampered with. Based on key-pair verification relationships, a selective publishing mechanism for manufacturing information is established, achieving the traceability of product data and guaranteeing processing quality.
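A toy sketch of the chaining idea in Python, with invented fields (the paper's key-pair verification and selective publishing are only hinted at here by the private-domain hash commitment):

```python
import hashlib, json, time

# Chain manufacturing-service records so processing data cannot be
# tampered with unnoticed. Public-domain fields travel in the clear;
# private-domain fields appear only as a hash commitment.
def sha(obj):
    return hashlib.sha256(json.dumps(obj, sort_keys=True).encode()).hexdigest()

def make_block(prev_hash, public, private):
    record = {
        "prev": prev_hash,
        "public": public,                    # shared, e.g. process step
        "private_commitment": sha(private),  # secret process parameters
        "ts": time.time(),
    }
    record["hash"] = sha(record)
    return record

def verify(chain):
    # Recompute every hash and check every back-link; an edit to an
    # earlier record breaks all later links, which gives traceability.
    if any(b["prev"] != p["hash"] for p, b in zip(chain, chain[1:])):
        return False
    return all(sha({k: v for k, v in b.items() if k != "hash"}) == b["hash"]
               for b in chain)

chain = [make_block("0" * 64, {"step": "milling", "machine": "M-07"},
                    {"feed_rate": 0.2, "operator": "op-113"})]
chain.append(make_block(chain[-1]["hash"],
                        {"step": "inspection", "result": "pass"},
                        {"gauge_calibration": "2018-03-01"}))
print("chain intact:", verify(chain))
```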
Information driving force and its application in agent-based modeling
NASA Astrophysics Data System (ADS)
Chen, Ting-Ting; Zheng, Bo; Li, Yan; Jiang, Xiong-Fei
2018-04-01
Exploring the scientific impact of online big data has attracted much attention from researchers in different fields in recent years. Complex financial systems are typical open systems profoundly influenced by external information. Based on large-scale data from the public media and stock markets, we first define an information driving force and analyze how it affects the complex financial system. The information driving force is observed to be asymmetric in the bull and bear market states. As an application, we then propose an agent-based model driven by the information driving force. Notably, all the key parameters are determined from the empirical analysis rather than from statistical fitting of the simulation results. With our model, both the stationary properties and non-stationary dynamic behaviors are simulated. Considering the mean-field effect of the external information, we also propose a few-body model to simulate the financial market in the laboratory.
A Petri Net-Based Software Process Model for Developing Process-Oriented Information Systems
NASA Astrophysics Data System (ADS)
Li, Yu; Oberweis, Andreas
Aiming at increasing the flexibility, efficiency, effectiveness, and transparency of information processing and resource deployment in organizations, to ensure customer satisfaction and high quality of products and services, process-oriented information systems (POIS) represent a promising realization form of computerized business information systems. Due to the complexity of POIS, explicit and specialized software process models are required to guide POIS development. In this chapter we characterize POIS with an architecture framework and present a Petri net-based software process model tailored for POIS development with consideration of organizational roles. As integrated parts of the software process model, we also introduce XML nets, a variant of high-level Petri nets, as a basic methodology for business process modeling, and an XML net-based software toolset providing comprehensive functionalities for POIS development.
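To make the underlying formalism concrete, a minimal place/transition Petri net in Python (a plain P/T net with an invented order-handling example, not the higher-level XML nets of the chapter):

```python
# A transition is enabled when every input place holds enough tokens;
# firing moves tokens from input places to output places.
marking = {"order_received": 1, "stock_checked": 0, "order_shipped": 0}
transitions = {
    "check_stock": {"in": {"order_received": 1}, "out": {"stock_checked": 1}},
    "ship":        {"in": {"stock_checked": 1},  "out": {"order_shipped": 1}},
}

def enabled(t):
    return all(marking[p] >= w for p, w in transitions[t]["in"].items())

def fire(t):
    assert enabled(t), f"{t} not enabled"
    for p, w in transitions[t]["in"].items():
        marking[p] -= w
    for p, w in transitions[t]["out"].items():
        marking[p] += w

fire("check_stock")
fire("ship")
print(marking)  # {'order_received': 0, 'stock_checked': 0, 'order_shipped': 1}
```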
Task-Based Information Searching.
ERIC Educational Resources Information Center
Vakkari, Pertti
2003-01-01
Reviews studies on the relationship between task performance and information searching by end-users, focusing on information searching in electronic environments and information retrieval systems. Topics include task analysis; task characteristics; search goals; modeling information searching; modeling search goals; information seeking behavior;…
ERIC Educational Resources Information Center
Liu, Chien-Jen; Yang, Shu Ching
2012-01-01
The goal of this study is to better understand how the study participants' cognitive discourse is displayed in their learning transaction in an asynchronous, text-based conferencing environment based on Garrison's Practical Inquiry Model (2001). The authors designed an online information ethics course based on Bloom's taxonomy of educational…
Introduction to Information Visualization (InfoVis) Techniques for Model-Based Systems Engineering
NASA Technical Reports Server (NTRS)
Sindiy, Oleg; Litomisky, Krystof; Davidoff, Scott; Dekens, Frank
2013-01-01
This paper presents insights that conform to numerous system modeling languages/representation standards. The insights are drawn from best practices of Information Visualization as applied to aerospace-based applications.
Architectural approaches for HL7-based health information systems implementation.
López, D M; Blobel, B
2010-01-01
Information systems integration is hard, especially when semantic and business process interoperability requirements need to be met. To succeed, a unified methodology approaching the different aspects of systems architecture, such as the business, information, computational, engineering and technology viewpoints, has to be considered. The paper contributes an analysis and demonstration of how the HL7 standard set can support health information systems integration. Based on the Health Information Systems Development Framework (HIS-DF), common architectural models for HIS integration are analyzed. The framework is a standard-based, consistent, comprehensive, customizable, scalable methodology that supports the design of semantically interoperable health information systems and components. Three main architectural models for system integration are analyzed: the point-to-point interface, the message server and the mediator models. Point-to-point interface and message server models are completely supported by traditional HL7 version 2 and version 3 messaging. The HL7 v3 standard specification, combined with the service-oriented, model-driven approaches provided by HIS-DF, makes the mediator model possible. The different integration scenarios are illustrated by describing a proof-of-concept implementation of an integrated public health surveillance system based on Enterprise JavaBeans technology. Selecting the appropriate integration architecture is a fundamental issue in any software development project. HIS-DF provides a unique methodological approach guiding the development of healthcare integration projects. The mediator model, offered by HIS-DF and supported by HL7 v3 artifacts, is the most promising one, promoting the development of open, reusable, flexible, semantically interoperable, platform-independent, service-oriented and standard-based health information systems.
Evaluating hydrological model performance using information theory-based metrics
USDA-ARS?s Scientific Manuscript database
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic m...
Modeling and visualizing borehole information on virtual globes using KML
NASA Astrophysics Data System (ADS)
Zhu, Liang-feng; Wang, Xi-feng; Zhang, Bing
2014-01-01
Advances in virtual globes and the Keyhole Markup Language (KML) are providing Earth scientists with universal platforms to manage, visualize, integrate and disseminate geospatial information. In order to use KML to represent and disseminate subsurface geological information on virtual globes, we present an automatic method for modeling and visualizing a large volume of borehole information. Based on a standard form of borehole database, the method first creates a variety of borehole models with different levels of detail (LODs), including point placemarks representing drilling locations, scatter dots representing contacts and tube models representing strata. Subsequently, a level-of-detail based (LOD-based) multi-scale representation is constructed to enhance the efficiency of visualizing large numbers of boreholes. Finally, the modeling result can be loaded into a virtual globe application for 3D visualization. An implementation program, termed Borehole2KML, is developed to automatically convert borehole data into KML documents. A case study of using Borehole2KML to create borehole models in Shanghai shows that the modeling method is suitable for visualizing, integrating and disseminating borehole information on the Internet. The method we have developed has potential use in delivering geological information as a public service.
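A minimal sketch of the conversion step in Python, emitting the lowest-LOD representation (point placemarks); the field names and values are invented, and the scatter-dot and tube models are not shown:

```python
# Emit KML placemarks for drilling locations from a simple borehole table.
boreholes = [
    {"id": "BH-001", "lon": 121.47, "lat": 31.23, "depth_m": 45.0},
    {"id": "BH-002", "lon": 121.49, "lat": 31.24, "depth_m": 62.5},
]

placemarks = "\n".join(
    f"  <Placemark>\n"
    f"    <name>{b['id']}</name>\n"
    f"    <description>depth: {b['depth_m']} m</description>\n"
    f"    <Point><coordinates>{b['lon']},{b['lat']},0</coordinates></Point>\n"
    f"  </Placemark>"
    for b in boreholes
)
kml = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<kml xmlns="http://www.opengis.net/kml/2.2">\n'
    f"<Document>\n{placemarks}\n</Document>\n</kml>"
)
with open("boreholes.kml", "w") as f:   # open this file in a virtual globe
    f.write(kml)
```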
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
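As a sketch of the central identity (notation assumed for illustration, following the quantities named in the abstract): the empirical single-spike information over stimuli \mathbf{x} is

\[
\hat I_{\mathrm{ss}} \;=\; \frac{1}{n_{\mathrm{sp}}}\sum_{i=1}^{n_{\mathrm{sp}}} \log \frac{\hat p(\mathbf{x}_i \mid \mathrm{spike})}{\hat p(\mathbf{x}_i)},
\]

the average log ratio of the spike-triggered to the raw stimulus distribution, while the log-likelihood of an LNP model with rate \lambda(\mathbf{x}) in time bins of width \Delta is

\[
\mathcal{L} \;=\; \sum_{i=1}^{n_{\mathrm{sp}}} \log \lambda(\mathbf{x}_i) \;-\; \Delta \sum_t \lambda(\mathbf{x}_t) \;+\; \mathrm{const}.
\]

The equivalence described above is that, suitably normalized, maximizing \hat I_{\mathrm{ss}} over a stimulus subspace coincides with maximum-likelihood fitting of \mathcal{L}, which is how the Poisson assumption enters MID.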
Multiple neural states of representation in short-term memory? It's a matter of attention.
Larocque, Joshua J; Lewis-Peacock, Jarrod A; Postle, Bradley R
2014-01-01
Short-term memory (STM) refers to the capacity-limited retention of information over a brief period of time, and working memory (WM) refers to the manipulation and use of that information to guide behavior. In recent years it has become apparent that STM and WM interact and overlap with other cognitive processes, including attention (the selection of a subset of information for further processing) and long-term memory (LTM; the encoding and retention of an effectively unlimited amount of information for a much longer period of time). Broadly speaking, there have been two classes of memory models: systems models, which posit distinct stores for STM and LTM (Atkinson and Shiffrin, 1968; Baddeley and Hitch, 1974); and state-based models, which posit a common store with different activation states corresponding to STM and LTM (Cowan, 1995; McElree, 1996; Oberauer, 2002). In this paper, we will focus on state-based accounts of STM. First, we will consider several theoretical models that postulate, based on considerable behavioral evidence, that information in STM can exist in multiple representational states. We will then consider how neural data from recent studies of STM can inform and constrain these theoretical models. In the process we will highlight the inferential advantage of multivariate, information-based analyses of neuroimaging data (fMRI and electroencephalography (EEG)) over conventional activation-based analysis approaches (Postle, in press). We will conclude by addressing lingering questions regarding the fractionation of STM, highlighting differences between the attention to information vs. the retention of information during brief memory delays.
Knowledge Acquisition of Generic Queries for Information Retrieval
Seol, Yoon-Ho; Johnson, Stephen B.; Cimino, James J.
2002-01-01
Several studies have identified clinical questions posed by health care professionals to understand the nature of information needs during clinical practice. To support access to digital information sources, it is necessary to integrate the information needs with a computer system. We have developed a conceptual guidance approach in information retrieval, based on a knowledge base that contains the patterns of information needs. The knowledge base uses a formal representation of clinical questions based on the UMLS knowledge sources, called the Generic Query model. To improve the coverage of the knowledge base, we investigated a method for extracting plausible clinical questions from the medical literature. This poster presents the Generic Query model, shows how it is used to represent the patterns of clinical questions, and describes the framework used to extract knowledge from the medical literature.
Translating building information modeling to building energy modeling using model view definition.
Jeong, WoonSeong; Kim, Jong Bum; Clayton, Mark J; Haberl, Jeff S; Yan, Wei
2014-01-01
This paper presents a new approach to translate between Building Information Modeling (BIM) and Building Energy Modeling (BEM) that uses Modelica, an object-oriented declarative, equation-based simulation environment. The approach (BIM2BEM) has been developed using a data modeling method to enable seamless model translations of building geometry, materials, and topology. Using data modeling, we created a Model View Definition (MVD) consisting of a process model and a class diagram. The process model demonstrates object-mapping between BIM and Modelica-based BEM (ModelicaBEM) and facilitates the definition of required information during model translations. The class diagram represents the information and object relationships to produce a class package intermediate between the BIM and BEM. The implementation of the intermediate class package enables system interface (Revit2Modelica) development for automatic BIM data translation into ModelicaBEM. In order to demonstrate and validate our approach, simulation result comparisons have been conducted via three test cases using (1) the BIM-based Modelica models generated from Revit2Modelica and (2) BEM models manually created using LBNL Modelica Buildings library. Our implementation shows that BIM2BEM (1) enables BIM models to be translated into ModelicaBEM models, (2) enables system interface development based on the MVD for thermal simulation, and (3) facilitates the reuse of original BIM data into building energy simulation without an import/export process.
Information on human behavior and consumer product use is important for characterizing exposures to chemicals in consumer products and in indoor environments. Traditionally, exposure assessors have relied on time-use surveys to obtain information on exposure-related behavior. In ...
A Model of Knowledge Based Information Retrieval with Hierarchical Concept Graph.
ERIC Educational Resources Information Center
Kim, Young Whan; Kim, Jin H.
1990-01-01
Proposes a model of knowledge-based information retrieval (KBIR) that is based on a hierarchical concept graph (HCG) which shows relationships between index terms and constitutes a hierarchical thesaurus as a knowledge base. Conceptual distance between a query and an object is discussed and the use of Boolean operators is described. (25…
Enriching step-based product information models to support product life-cycle activities
NASA Astrophysics Data System (ADS)
Sarigecili, Mehmet Ilteris
The representation and management of product information across its life-cycle requires standardized data exchange protocols. The Standard for Exchange of Product Model Data (STEP) is such a standard and has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is not easy because the models are too large and disorganized. Data exchange specifications (DEXs) and templates provide re-organized information models required in data exchange for specific activities in various businesses. DEXs show that it is possible to organize STEP-based product models to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires fast, unambiguous and complete product information exchange between the members of a supply chain. Tolerance analysis, on the other hand, is used to verify the functional requirements of an assembly considering the worst-case (i.e., maximum and minimum) conditions for part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data across product life-cycle phases and application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. In this thesis, we present how to interpret GD&T specifications in STEP for tolerance analysis by utilizing OntoSTEP.
Mobile-Based Dictionary of Information and Communication Technology
NASA Astrophysics Data System (ADS)
Liando, O. E. S.; Mewengkang, A.; Kaseger, D.; Sangkop, F. I.; Rantung, V. P.; Rorimpandey, G. C.
2018-02-01
This study aims to design and build a mobile-based dictionary of information and communication technology to provide access to glossaries of terms in the context of information and communication technologies. The application was built on the Android platform with an SQLite database. The research uses the prototype development method, covering the stages of communication, quick plan, quick design modeling, construction of the prototype, deployment with delivery and feedback, and full system transformation. The application is designed to help users learn and understand the new terms and vocabulary they encounter in the world of information and communication technology. The mobile-based dictionary that has been built can serve as an alternative learning resource; in its simplest form, it meets the need for a comprehensive and accurate dictionary of information and communication technology terms.
Illustrative visualization of 3D city models
NASA Astrophysics Data System (ADS)
Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian
2005-03-01
This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
Jeong, Chan-Seok; Kim, Dongsup
2016-02-24
Elucidating the cooperative mechanism of interconnected residues is an important component toward understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), which have led to significant improvements for coevolution analysis; however, thus far, the performance of these models has mainly been assessed by focusing on the aspect of protein structure. In this study, we built an MRF model whose graphical topology is determined by the residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weight of the MRF model. This structure-based MRF method was evaluated for three data sets, each of which annotates catalytic site, allosteric site, and comprehensively determined functional site information. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weight can more accurately represent positional coevolution information compared to the edge weight. Lastly, we demonstrate that the structure-based MRF model can be reliably built with only a few aligned sequences in linear time. The results show that adoption of a structure-based architecture could be an acceptable approximation for coevolution modeling with efficient computational complexity.
An Ontology-Based Archive Information Model for the Planetary Science Community
NASA Technical Reports Server (NTRS)
Hughes, J. Steven; Crichton, Daniel J.; Mattmann, Chris
2008-01-01
The Planetary Data System (PDS) information model is a mature but complex model that has been used to capture over 30 years of planetary science data for the PDS archive. As the de facto information model for the planetary science data archive, it is being adopted by the International Planetary Data Alliance (IPDA) as their archive data standard. However, after seventeen years of evolutionary change the model needs refinement. First, a formal specification is needed to explicitly capture the model in a commonly accepted data engineering notation. Second, the core and essential elements of the model need to be identified to help simplify the overall archive process. A team of PDS technical staff members has captured the PDS information model in an ontology modeling tool. Using the resulting knowledge base, work continues to identify the core elements, identify problems and issues, and then test proposed modifications to the model. The final deliverables of this work will include specifications for the next-generation PDS information model and the initial set of IPDA archive data standards. Having the information model captured in an ontology modeling tool also makes the model suitable for use by Semantic Web applications.
Eppinger, Ben; Walter, Maik; Li, Shu-Chen
2017-04-01
In this study, we investigated the interplay of habitual (model-free) and goal-directed (model-based) decision processes by using a two-stage Markov decision task in combination with event-related potentials (ERPs) and computational modeling. To manipulate the demands on model-based decision making, we applied two experimental conditions with different probabilities of transitioning from the first to the second stage of the task. As we expected, when the stage transitions were more predictable, participants showed greater model-based (planning) behavior. Consistent with this result, we found that stimulus-evoked parietal (P300) activity at the second stage of the task increased with the predictability of the state transitions. However, the parietal activity also reflected model-free information about the expected values of the stimuli, indicating that at this stage of the task both types of information are integrated to guide decision making. Outcome-related ERP components only reflected reward-related processes: Specifically, a medial prefrontal ERP component (the feedback-related negativity) was sensitive to negative outcomes, whereas a component that is elicited by reward (the feedback-related positivity) increased as a function of positive prediction errors. Taken together, our data indicate that stimulus-locked parietal activity reflects the integration of model-based and model-free information during decision making, whereas feedback-related medial prefrontal signals primarily reflect reward-related decision processes.
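The interplay described above can be made concrete with a small simulation. The sketch below combines model-free and model-based action values in a two-stage task with probabilistic stage transitions, in the spirit of the hybrid reinforcement-learning accounts such studies draw on; all transition probabilities, reward contingencies, and parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two first-stage actions, two second-stage states, two actions per state.
# Transition probabilities (common vs. rare) are illustrative.
T = np.array([[0.7, 0.3],   # action 0 -> state 0 with p = 0.7
              [0.3, 0.7]])  # action 1 -> state 1 with p = 0.7

q_mf = np.zeros(2)           # model-free values of first-stage actions
q_stage2 = np.zeros((2, 2))  # values of (state, action) at stage 2
alpha, beta, w = 0.2, 3.0, 0.5  # learning rate, inverse temperature, MB weight

def softmax(q):
    e = np.exp(beta * (q - q.max()))
    return e / e.sum()

for trial in range(1000):
    # Model-based values: expected best second-stage value under the model.
    q_mb = T @ q_stage2.max(axis=1)
    q_net = w * q_mb + (1 - w) * q_mf          # hybrid valuation
    a1 = rng.choice(2, p=softmax(q_net))       # first-stage choice
    s2 = rng.choice(2, p=T[a1])                # stochastic transition
    a2 = rng.choice(2, p=softmax(q_stage2[s2]))
    r = rng.random() < 0.6 if s2 == a2 else rng.random() < 0.3  # toy rewards
    # Model-free updates (eligibility traces omitted for brevity).
    q_stage2[s2, a2] += alpha * (r - q_stage2[s2, a2])
    q_mf[a1] += alpha * (q_stage2[s2, a2] - q_mf[a1])
```

Raising w shifts choices toward planning, mirroring the greater model-based behavior observed when stage transitions were more predictable.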
Identifying Seizure Onset Zone From the Causal Connectivity Inferred Using Directed Information
NASA Astrophysics Data System (ADS)
Malladi, Rakesh; Kalamangalam, Giridhar; Tandon, Nitin; Aazhang, Behnaam
2016-10-01
In this paper, we developed a model-based and a data-driven estimator for directed information (DI) to infer the causal connectivity graph between electrocorticographic (ECoG) signals recorded from the brain and to identify the seizure onset zone (SOZ) in epileptic patients. Directed information, an information-theoretic quantity, is a general metric for inferring causal connectivity between time series and is not restricted to a particular class of models, unlike the popular metrics based on Granger causality or transfer entropy. The proposed estimators are shown to be almost surely convergent. Causal connectivity between ECoG electrodes in five epileptic patients is inferred using the proposed DI estimators, after validating their performance on simulated data. We then propose a model-based and a data-driven SOZ identification algorithm to identify the SOZ from the causal connectivity inferred using the model-based and data-driven DI estimators, respectively. The data-driven SOZ identification outperforms the model-based SOZ identification algorithm when benchmarked against visual analysis by neurologists, the current clinical gold standard. The causal connectivity analysis presented here is a first step towards developing novel non-surgical treatments for epilepsy.
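For intuition about the directed-information metric itself, the toy plug-in estimator below computes a DI rate between two binary time series under a first-order Markov assumption. It is far simpler than the almost surely convergent model-based and data-driven estimators developed in the paper, and the signals are synthetic; it only illustrates why DI is asymmetric for causal relationships.

```python
import numpy as np
from collections import Counter

def directed_information_rate(x, y):
    """Toy plug-in estimate of the DI rate I(X -> Y) for binary signals,
    assuming first-order Markov memory: I(X_t ; Y_{t+1} | Y_t)."""
    n = len(x)
    joint = Counter(zip(x[:-1], y[:-1], y[1:]))   # counts of (x_t, y_t, y_{t+1})
    pj = {k: v / (n - 1) for k, v in joint.items()}
    p_xy, p_yy, p_y = Counter(), Counter(), Counter()
    for (xt, yt, y1), v in pj.items():
        p_xy[(xt, yt)] += v
        p_yy[(yt, y1)] += v
        p_y[yt] += v
    di = 0.0
    for (xt, yt, y1), v in pj.items():  # conditional mutual information sum
        di += v * np.log2(v * p_y[yt] / (p_xy[(xt, yt)] * p_yy[(yt, y1)]))
    return di

x = np.random.default_rng(1).integers(0, 2, 5000)
y = np.roll(x, 1)                            # y is a delayed copy of x
print(directed_information_rate(x, y))       # close to 1 bit: x drives y
print(directed_information_rate(y, x))       # close to 0: no reverse causation
```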
Arranging ISO 13606 archetypes into a knowledge base.
Kopanitsa, Georgy
2014-01-01
To enable the efficient reuse of standard-based medical data we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analyzed for their ability to be applied in the implementation of a higher-level model that will establish relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be advanced in the future.
Using a logical information model-driven design process in healthcare.
Cheong, Yu Chye; Bird, Linda; Tun, Nwe Ni; Brooks, Colleen
2011-01-01
A hybrid standards-based approach has been adopted in Singapore to develop a Logical Information Model (LIM) for healthcare information exchange. The Singapore LIM uses a combination of international standards, including ISO13606-1 (a reference model for electronic health record communication), ISO21090 (healthcare datatypes), SNOMED CT (healthcare terminology) and HL7 v2 (healthcare messaging). This logic-based design approach also incorporates mechanisms for achieving bi-directional semantic interoperability.
Dynamic and Contextual Information in HMM Modeling for Handwritten Word Recognition.
Bianne-Bernard, Anne-Laure; Menasri, Farès; Al-Hajj Mohamad, Rami; Mokbel, Chafic; Kermorvant, Christopher; Likforman-Sulem, Laurence
2011-10-01
This study aims at building an efficient word recognition system resulting from the combination of three handwriting recognizers. The main component of this combined system is an HMM-based recognizer which considers dynamic and contextual information for a better modeling of writing units. For modeling the contextual units, a state-tying process based on decision tree clustering is introduced. Decision trees are built according to a set of expert-based questions on how characters are written. Questions are divided into global questions, yielding larger clusters, and precise questions, yielding smaller ones. Such clustering enables us to reduce the total number of models and Gaussian densities by 10. We then apply this modeling to the recognition of handwritten words. Experiments are conducted on three publicly available databases based on Latin or Arabic languages: Rimes, IAM, and OpenHart. The results obtained show that contextual information embedded with dynamic modeling significantly improves recognition.
Woodward-Kron, Robyn; Connor, Melanie; Schulz, Peter J; Elliott, Kristine
2014-02-01
Communication skills teaching in medical education has yet to acknowledge the impact of the Internet on physician-patient communication. The authors present a conceptual model showing the variables influencing how and to what extent physicians and patients discuss Internet-sourced health information as part of the consultation, with the purpose of educating the patient. A study exploring the role physicians play in patient education mediated through health information available on the Internet provided the foundation for the conceptual model. Twenty-one physicians participated in semistructured interviews between 2011 and 2013. Participants were from Australia and Switzerland, whose citizens demonstrate different degrees of Internet usage and who differ culturally and ethnically. The authors analyzed the interviews thematically and iteratively. The themes, as well as their interrelationships, informed the components of the conceptual model. The intrinsic elements of the conceptual model are the physician, the patient, and Internet-based health information. The extrinsic variables of setting, time, and communication activities, as well as the quality, availability, and usability of the Internet-based health information, influenced the degree to which physicians engaged with, and were engaged by, their patients about Internet-based health information. The empirically informed model provides a means of understanding the environment, enablers, and constraints of discussing Internet-based health information, as well as the benefits for patients' understanding of their health. It also provides medical educators with a conceptual tool to engage and support physicians in their activities of communicating health information to patients.
An assembly process model based on object-oriented hierarchical time Petri Nets
NASA Astrophysics Data System (ADS)
Wang, Jiapeng; Liu, Shaoli; Liu, Jianhua; Du, Zenghui
2017-04-01
In order to improve the versatility, accuracy and integrity of assembly process models of complex products, an assembly process model based on object-oriented hierarchical time Petri Nets is presented. A complete assembly process information model including assembly resources, assembly inspection, time, structure and flexible parts is established, and this model describes the static and dynamic data involved in the assembly process. Through the analysis of three-dimensional assembly process information, the assembly information is hierarchically divided from the whole, through the local, to the details, and subnet models of different levels of object-oriented Petri Nets are established. The communication problem between Petri subnets is solved by using a message database, which effectively reduces the complexity of system modeling. Finally, the modeling process is presented, and a five-layer Petri Nets model is established based on the hoisting process of the engine compartment of a wheeled armored vehicle.
Theories of learning: models of good practice for evidence-based information skills teaching.
Spring, Hannah
2010-12-01
This feature considers models of teaching and learning and how these can be used to support evidence-based practice.
A Descriptive Model of Information Problem Solving while Using Internet
ERIC Educational Resources Information Center
Brand-Gruwel, Saskia; Wopereis, Iwan; Walraven, Amber
2009-01-01
This paper presents the IPS-I-model: a model that describes the process of information problem solving (IPS) in which the Internet (I) is used to search information. The IPS-I-model is based on three studies, in which students in secondary and (post) higher education were asked to solve information problems, while thinking aloud. In-depth analyses…
NASA Astrophysics Data System (ADS)
Ouyang, Qin; Liu, Yan; Chen, Quansheng; Zhang, Zhengzhu; Zhao, Jiewen; Guo, Zhiming; Gu, Hang
2017-06-01
Instrumental testing of black tea samples, in place of human panel tests, has been attracting considerable attention recently. This study investigated the feasibility of estimating the color sensory quality of black tea samples using the VIS-NIR spectroscopy technique, comparing the performance of models based on spectral and color information. In model calibration, the variables were first selected by genetic algorithm (GA); then nonlinear back propagation-artificial neural network (BPANN) models were established based on the optimal variables. In comparison with the other models, GA-BPANN models based on spectral information showed the best performance, with a correlation coefficient of 0.8935 and a root mean square error of 0.392 in the prediction set. In addition, models based on spectral information outperformed those based on the color parameters. Therefore, the VIS-NIR spectroscopy technique is a promising tool for rapid and accurate evaluation of the sensory quality of black tea samples.
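A rough sketch of the GA-plus-neural-network calibration pipeline is shown below on synthetic data. scikit-learn's MLPRegressor stands in for the paper's back-propagation ANN, and the GA operators (one-point crossover, bit-flip mutation) and all population sizes and rates are illustrative assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for spectra (200 wavelengths) and sensory scores.
X = rng.normal(size=(120, 200))
y = X[:, 10] - 0.5 * X[:, 50] + 0.1 * rng.normal(size=120)
X_tr, X_va, y_tr, y_va = train_test_split(X, y, random_state=0)

def fitness(mask):
    """Negative validation RMSE of a small network on the selected variables."""
    if mask.sum() == 0:
        return -np.inf
    net = MLPRegressor(hidden_layer_sizes=(5,), max_iter=500, random_state=0)
    net.fit(X_tr[:, mask], y_tr)
    return -np.sqrt(np.mean((net.predict(X_va[:, mask]) - y_va) ** 2))

pop = rng.random((20, X.shape[1])) < 0.1       # initial population of masks
for generation in range(10):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]    # keep the fittest half
    children = []
    for _ in range(10):
        a, b = parents[rng.choice(10, 2, replace=False)]
        cut = rng.integers(1, X.shape[1])      # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        child ^= rng.random(X.shape[1]) < 0.01  # bit-flip mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected variables:", np.flatnonzero(best))
```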
Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis; Nyulas, Csongor; Tudorache, Tania; Noy, Natalya F.; Musen, Mark A.
2015-01-01
The need to examine the behavior of different user groups is a fundamental requirement when building information systems. In this paper, we present Ontology-based Decentralized Search (OBDS), a novel method to model the navigation behavior of users equipped with different types of background knowledge. Ontology-based Decentralized Search combines decentralized search, an established method for navigation in social networks, and ontologies to model navigation behavior in information networks. The method uses ontologies as an explicit representation of background knowledge to inform the navigation process and guide it towards navigation targets. By using different ontologies, users equipped with different types of background knowledge can be represented. We demonstrate our method using four biomedical ontologies and their associated Wikipedia articles. We compare our simulation results with baseline approaches and with results obtained from a user study. We find that our method produces click paths that have properties similar to those originating from human navigators. The results suggest that our method can be used to model human navigation behavior in systems that are based on information networks, such as Wikipedia. This paper makes the following contributions: (i) To the best of our knowledge, this is the first work to demonstrate the utility of ontologies in modeling human navigation and (ii) it yields new insights and understanding about the mechanisms of human navigation in information networks. PMID:26568745
Determining informative priors for cognitive models.
Lee, Michael D; Vanpaemel, Wolf
2018-02-01
The development of cognitive models involves the creative scientific formalization of assumptions, based on theory, observation, and other relevant information. In the Bayesian approach to implementing, testing, and using cognitive models, assumptions can influence both the likelihood function of the model, usually corresponding to assumptions about psychological processes, and the prior distribution over model parameters, usually corresponding to assumptions about the psychological variables that influence those processes. The specification of the prior is unique to the Bayesian context, but often raises concerns that lead to the use of vague or non-informative priors in cognitive modeling. Sometimes the concerns stem from philosophical objections, but more often practical difficulties with how priors should be determined are the stumbling block. We survey several sources of information that can help to specify priors for cognitive models, discuss some of the methods by which this information can be formalized in a prior distribution, and identify a number of benefits of including informative priors in cognitive modeling. Our discussion is based on three illustrative cognitive models, involving memory retention, categorization, and decision making.
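As a concrete illustration of an informative prior in a cognitive model, the sketch below fits an exponential memory-retention model on a grid, placing a Gamma prior on the decay rate. The data, the Gamma hyperparameters, and the retention function are assumptions chosen for illustration, not the specific models or priors discussed in the paper.

```python
import numpy as np
from scipy import stats

# Retention data: recall successes out of 20 trials at increasing delays.
delays = np.array([1, 2, 5, 10])
successes = np.array([18, 14, 9, 4])
trials = 20

# Exponential retention model: P(recall | delay t) = exp(-lam * t).
def log_likelihood(lam):
    p = np.exp(-lam * delays)
    return stats.binom.logpmf(successes, trials, p).sum()

# Informative prior on the decay rate, e.g. motivated by earlier retention
# studies: Gamma(shape=4, scale=0.05) concentrates mass near lam ~ 0.2
# (an assumption for this sketch).
lam_grid = np.linspace(1e-3, 1.0, 2000)
log_prior = stats.gamma.logpdf(lam_grid, a=4, scale=0.05)
log_post = log_prior + np.array([log_likelihood(l) for l in lam_grid])
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, lam_grid)               # normalize on the grid
print("posterior mean decay rate:", np.trapz(lam_grid * post, lam_grid))
```

Replacing the Gamma prior with a flat one on the grid shows how much the informative prior stabilizes the estimate when data are sparse.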
Multiple point statistical simulation using uncertain (soft) conditional data
NASA Astrophysics Data System (ADS)
Hansen, Thomas Mejer; Vu, Le Thanh; Mosegaard, Klaus; Cordua, Knud Skou
2018-05-01
Geostatistical simulation methods have been used to quantify the spatial variability of reservoir models since the 1980s. In the last two decades, state-of-the-art simulation methods have moved from covariance-based two-point statistics to multiple-point statistics (MPS), which allow simulation of more realistic Earth structures. In addition, increasing amounts of geo-information (geophysical, geological, etc.) from multiple sources are being collected. This poses the problem of integrating these different sources of information, such that decisions related to reservoir models can be taken on as informed a basis as possible. In principle, though difficult in practice, this can be achieved using computationally expensive Monte Carlo methods. Here we investigate the use of sequential-simulation-based MPS methods conditioned on uncertain (soft) data as a computationally efficient alternative. First, it is demonstrated that current implementations of sequential simulation based on MPS (e.g. SNESIM, ENESIM and Direct Sampling) do not properly account for uncertain conditional information, due to a combination of using only co-located information and a random simulation path. Then, we suggest two approaches that better account for the available uncertain information. The first makes use of a preferential simulation path, where more informed model parameters are visited before less informed ones. The second approach involves using non-co-located uncertain information. For different types of available data, these approaches are demonstrated to produce simulation results similar to those obtained by the general Monte Carlo based approach. These methods allow MPS simulation to condition properly on uncertain (soft) data, and hence provide a computationally attractive approach for integrating information about a reservoir model.
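The preferential-path idea can be sketched in a few lines: cells with the most informative soft data (lowest entropy) are simulated first, in contrast to the random path of standard implementations. The grid, the soft probabilities, and drawing from the soft distribution alone (ignoring the training image and previously simulated neighbours) are simplifying assumptions of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Soft (uncertain) conditional data: P(facies = 1) at each grid cell.
p_soft = rng.uniform(0.0, 1.0, size=(50, 50))

def entropy(p):
    q = np.clip(p, 1e-12, 1 - 1e-12)
    return -(q * np.log2(q) + (1 - q) * np.log2(1 - q))

# Preferential path: the most informed cells (lowest entropy) are visited
# first, unlike the random path used by standard sequential simulation.
order = np.argsort(entropy(p_soft).ravel())
sim = np.full(p_soft.size, -1)
for cell in order:
    # A full MPS code would condition this probability on the training image
    # and on previously simulated neighbours; here we draw from the soft
    # probability alone to keep the sketch self-contained.
    sim[cell] = rng.random() < p_soft.ravel()[cell]
sim = sim.reshape(p_soft.shape)
```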
Gibson, Amelia N.
2016-01-01
This grounded theory study used in-depth, semi-structured interviews to examine the information-seeking behaviors of 35 parents of children with Down syndrome. Emergent themes include a progressive pattern of behavior comprising information overload and avoidance, passive attention, and active information seeking; varying preferences between tacit and explicit information at different stages; and selection of information channels and sources that varied based on personal and situational constraints. Based on the findings, the author proposes a progressive model of health information seeking and a framework for using this model to collect data in practice. The author also discusses the practical and theoretical implications of a responsive, progressive approach to understanding parents' health information-seeking behavior. PMID:28462351
A Mixture Rasch Model-Based Computerized Adaptive Test for Latent Class Identification
ERIC Educational Resources Information Center
Jiao, Hong; Macready, George; Liu, Junhui; Cho, Youngmi
2012-01-01
This study explored a computerized adaptive test delivery algorithm for latent class identification based on the mixture Rasch model. Four item selection methods based on the Kullback-Leibler (KL) information were proposed and compared with the reversed and the adaptive KL information under simulated testing conditions. When item separation was…
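A minimal sketch of KL-based item selection for latent class identification under a mixture Rasch model follows; the item pool, class-specific difficulties, and provisional ability estimate are illustrative assumptions. Each candidate item is scored by the KL divergence between the response distributions implied by the two latent classes, and the most discriminating item is administered next.

```python
import numpy as np

def p_correct(theta, b):
    """Rasch model probability of a correct response."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def item_kl(theta, b_class1, b_class2):
    """KL divergence between the two classes' response distributions for one
    item, evaluated at the examinee's provisional ability estimate."""
    p1, p2 = p_correct(theta, b_class1), p_correct(theta, b_class2)
    return p1 * np.log(p1 / p2) + (1 - p1) * np.log((1 - p1) / (1 - p2))

# Class-specific difficulties for a toy 10-item pool (illustrative values).
b1 = np.linspace(-2, 2, 10)
b2 = b1[::-1]            # the two classes order the items differently
theta_hat = 0.3          # provisional ability estimate
administered = {2, 7}

candidates = [i for i in range(10) if i not in administered]
next_item = max(candidates, key=lambda i: item_kl(theta_hat, b1[i], b2[i]))
print("next item:", next_item)
```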
Design of a component-based integrated environmental modeling framework
Integrated environmental modeling (IEM) includes interdependent science-based components (e.g., models, databases, viewers, assessment protocols) that comprise an appropriate software modeling system. The science-based components are responsible for consuming and producing inform...
Hofman, Abe D.; Visser, Ingmar; Jansen, Brenda R. J.; van der Maas, Han L. J.
2015-01-01
We propose and test three statistical models for the analysis of children’s responses to the balance scale task, a seminal task to study proportional reasoning. We use a latent class modelling approach to formulate a rule-based latent class model (RB LCM) following from a rule-based perspective on proportional reasoning and a new statistical model, the Weighted Sum Model, following from an information-integration approach. Moreover, a hybrid LCM using item covariates is proposed, combining aspects of both a rule-based and information-integration perspective. These models are applied to two different datasets, a standard paper-and-pencil test dataset (N = 779), and a dataset collected within an online learning environment that included direct feedback, time-pressure, and a reward system (N = 808). For the paper-and-pencil dataset the RB LCM resulted in the best fit, whereas for the online dataset the hybrid LCM provided the best fit. The standard paper-and-pencil dataset yielded more evidence for distinct solution rules than the online data set in which quantitative item characteristics are more prominent in determining responses. These results shed new light on the discussion on sequential rule-based and information-integration perspectives of cognitive development. PMID:26505905
The secure authorization model for healthcare information system.
Hsu, Wen-Shin; Pan, Jiann-I
2013-10-01
Exploring healthcare systems that assist medical services or transmit patients' personal health information in web applications has been widely investigated. Information and communication technologies have been applied to the medical services and healthcare area for a number of years to resolve problems in medical management. In a healthcare system, not all users are allowed to access all information. Several authorization models have been proposed to restrict users to accessing specific information with specific permissions. However, as the number of users and the amount of information grow, the difficulty of administering user authorization increases. This critical problem limits the widespread use of healthcare systems. This paper proposes a role-based approach and extends it to deal with authorization of information in the healthcare system. We propose a role-based authorization model that supports authorizations for different kinds of objects, and a new authorization domain. Based on this model, we discuss the issues and requirements of security in healthcare systems. The security issues for services shared between different healthcare industries are also discussed.
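A minimal sketch of per-object, per-permission role-based authorization with an added authorization-domain check is given below; the role names, permissions, and object types are hypothetical and do not come from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Role:
    name: str
    # permission -> set of object types this role may access with it
    grants: dict = field(default_factory=dict)

# Hypothetical roles illustrating object-specific authorizations.
physician = Role("physician", {"read": {"record", "lab"}, "write": {"record"}})
clerk = Role("clerk", {"read": {"demographics"}})

def authorized(role: Role, permission: str, obj_type: str, domain_ok: bool) -> bool:
    """A request succeeds only if the role grants the permission on that
    object type AND the request falls inside the role's authorization domain
    (e.g. the patients of the physician's own ward)."""
    return domain_ok and obj_type in role.grants.get(permission, set())

print(authorized(physician, "write", "record", domain_ok=True))   # True
print(authorized(clerk, "read", "lab", domain_ok=True))           # False
```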
Wiggins, Paul A
2015-07-21
This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data.
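For the Gaussian-noise case, single change-point detection by maximum likelihood reduces to comparing residual sums of squares with and without a split, accepting the split only if the improvement exceeds an information-based penalty. The sketch below uses a fixed numeric penalty as a stand-in for the frequentist information criterion; the data and threshold are illustrative.

```python
import numpy as np

def best_change_point(x, penalty):
    """Maximum-likelihood single change point in the mean of Gaussian data.
    A change is accepted only if it improves the penalized fit; the paper's
    frequentist information criterion supplies a principled penalty, here it
    is just a parameter."""
    n = len(x)
    def rss(seg):
        # Cost of a segment under the ML Gaussian-mean model is proportional
        # to its residual sum of squares.
        return np.sum((seg - seg.mean()) ** 2)
    full = rss(x)
    costs = [rss(x[:k]) + rss(x[k:]) for k in range(2, n - 1)]
    k = int(np.argmin(costs)) + 2
    return k if full - costs[k - 2] > penalty else None

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 300), rng.normal(1.5, 1, 300)])
print(best_change_point(x, penalty=10.0))  # close to 300
```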
Khrennikov, Andrei
2011-09-01
We propose a model of quantum-like (QL) processing of mental information. This model is based on quantum information theory. However, in contrast to models of "quantum physical brain" reducing mental activity (at least at the highest level) to quantum physical phenomena in the brain, our model matches well with the basic neuronal paradigm of the cognitive science. QL information processing is based (surprisingly) on classical electromagnetic signals induced by joint activity of neurons. This novel approach to quantum information is based on representation of quantum mechanics as a version of classical signal theory which was recently elaborated by the author. The brain uses the QL representation (QLR) for working with abstract concepts; concrete images are described by classical information theory. Two processes, classical and QL, are performed parallely. Moreover, information is actively transmitted from one representation to another. A QL concept given in our model by a density operator can generate a variety of concrete images given by temporal realizations of the corresponding (Gaussian) random signal. This signal has the covariance operator coinciding with the density operator encoding the abstract concept under consideration. The presence of various temporal scales in the brain plays the crucial role in creation of QLR in the brain. Moreover, in our model electromagnetic noise produced by neurons is a source of superstrong QL correlations between processes in different spatial domains in the brain; the binding problem is solved on the QL level, but with the aid of the classical background fluctuations. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Myers, B.; Beard, T. D.; Weiskopf, S. R.; Jackson, S. T.; Tittensor, D.; Harfoot, M.; Senay, G. B.; Casey, K.; Lenton, T. M.; Leidner, A. K.; Ruane, A. C.; Ferrier, S.; Serbin, S.; Matsuda, H.; Shiklomanov, A. N.; Rosa, I.
2017-12-01
Biodiversity and ecosystem services underpin political targets for the conservation of biodiversity; however, previous incarnations of these biodiversity-related targets have not relied on integrated, model-based projections of possible outcomes under climate and land use change. Although a few global biodiversity models are available, most biodiversity models lie along a continuum of geography and components of biodiversity. Model-based projections of the future of global biodiversity are critical to support policymakers in the development of informed global conservation targets, but the scientific community lacks a clear strategy for integrating diverse data streams in developing, and evaluating the performance of, such biodiversity models. Therefore, in this paper, we propose a framework for ongoing testing and refinement of model-based projections of biodiversity trends and change, by linking a broad variety of biodiversity models with data streams generated by advances in remote sensing, coupled with new and emerging in-situ observation technologies, to inform the development of essential biodiversity variables, future global biodiversity targets, and indicators. Our two main objectives are to (1) develop a framework for testing models and refining projections across a broad range of biodiversity models, focusing on global models, through the integration of diverse data streams and (2) identify the realistic outputs that can be developed and determine coupled approaches using remote sensing and new and emerging in-situ observations (e.g., metagenomics) to better inform the next generation of global biodiversity targets.
Gradient-based reliability maps for ACM-based segmentation of hippocampus.
Zarpalas, Dimitrios; Gkontra, Polyxeni; Daras, Petros; Maglaveras, Nicos
2014-04-01
Automatic segmentation of deep brain structures, such as the hippocampus (HC), in MR images has attracted considerable scientific attention due to the widespread use of MRI and to the principal role of some structures in various mental disorders. In the literature, there exists a substantial amount of work relying on deformable models that incorporate prior knowledge about structures' anatomy and shape information. However, shape priors capture global shape characteristics and thus fail to model boundaries of varying properties; HC boundaries present rich, poor, and missing gradient regions. Moreover, shape prior knowledge is blended with image information in the evolution process through global weighting of the two terms, again neglecting the spatially varying boundary properties and causing segmentation errors. An innovative method is hereby presented that aims to achieve highly accurate HC segmentation in MR images, based on the modeling of boundary properties at each anatomical location and the inclusion of appropriate image information for each of those, within an active contour model framework. Hence, the blending of image information and prior knowledge is based on a local weighting map, which mixes gradient information and regional and whole-brain statistical information with a multi-atlas-based spatial distribution map of the structure's labels. Experimental results on three different datasets demonstrate the efficacy and accuracy of the proposed method.
NASA Astrophysics Data System (ADS)
Or, D.; von Ruette, J.; Lehmann, P.
2017-12-01
Landslides and subsequent debris flows initiated by rainfall represent a common natural hazard in mountainous regions. We integrated a landslide hydro-mechanical triggering model with a simple model for debris flow runout pathways and developed a graphical user interface (GUI) to represent these natural hazards at catchment scale at any location. The STEP-TRAMM GUI provides process-based estimates of the initiation locations and sizes of landslide patterns based on digital elevation models (SRTM) linked with high-resolution global soil maps (SoilGrids, 250 m resolution) and satellite-based information on rainfall statistics for the selected region. In the preprocessing phase the STEP-TRAMM model estimates soil depth distribution to supplement other soil information for delineating key hydrological and mechanical properties relevant to representing local soil failure. We will illustrate this publicly available GUI and modeling platform by simulating effects of deforestation on landslide hazards in several regions and compare model outcomes with satellite-based information.
An ontology model for nursing narratives with natural language generation technology.
Min, Yul Ha; Park, Hyeoun-Ae; Jeon, Eunjoo; Lee, Joo Yun; Jo, Soo Jung
2013-01-01
The purpose of this study was to develop an ontology model to generate nursing narratives that are as natural as human language from the entity-attribute-value triplets of a detailed clinical model, using natural language generation technology. The model was based on the types of information and the documentation time of the information along the nursing process. The types of information are data characterizing the patient status, inferences made by the nurse from the patient data, and nursing actions selected by the nurse to change the patient status. This information was linked to the nursing process based on the time of documentation. We describe a case study illustrating the application of this model in an acute-care setting. The proposed model provides a strategy for designing an electronic nursing record system.
The Research on Informal Learning Model of College Students Based on SNS and Case Study
NASA Astrophysics Data System (ADS)
Lu, Peng; Cong, Xiao; Bi, Fangyan; Zhou, Dongdai
2017-03-01
With the rapid development of network technology, online informal learning has become a primary way for college students to acquire knowledge across a variety of subjects. Students' fondness for SNS communities and the characteristics of SNS itself provide a good opportunity for the informal learning of college students. This research first analyzes related work on informal learning and SNS, then discusses the characteristics and theoretical basis of informal learning. It then proposes an informal learning model for college students based on SNS, according to the support SNS provides for students' informal learning. Finally, following the theoretical model and the principles proposed in this study, the informal learning community is implemented using Elgg, an open-source SNS program, and related tools. This research attempts to overcome issues such as the lack of social realism, interactivity, and resource-transfer modes in current online informal learning communities, so as to provide a new way of informal learning for college students.
Arranging ISO 13606 archetypes into a knowledge base using UML connectors.
Kopanitsa, Georgy
2014-01-01
To enable the efficient reuse of standard-based medical data we propose to develop a higher-level information model that will complement the archetype model of ISO 13606. This model will make use of the relationships that are specified in UML to connect medical archetypes into a knowledge base within a repository. UML connectors were analysed for their ability to be applied in the implementation of a higher-level model that will establish relationships between archetypes. An information model was developed using XML Schema notation. The model allows linking different archetypes of one repository into a knowledge base. Presently it supports several relationships and will be advanced in the future.
Coffey, Sara; Vanderlip, Erik; Sarvet, Barry
2017-01-01
There is a consistent need for more child and adolescent psychiatrists. Despite increased recruitment of child and adolescent psychiatry trainees, traditional models of care will likely not be able to meet the needs of youth with mental illness. Integrated care models focusing on population-based, team-based, measurement-based, and evidence-based care have been effective in addressing accessibility and quality of care. These integrated models have specific needs regarding health information technology (HIT). HIT has been used in a variety of ways in several integrated care models. HIT can aid in the implementation of these models but is not without its challenges.
Modified social force model based on information transmission toward crowd evacuation simulation
NASA Astrophysics Data System (ADS)
Han, Yanbin; Liu, Hong
2017-03-01
In this paper, an information transmission mechanism is introduced into the social force model to simulate pedestrian behavior in an emergency, especially when most pedestrians are unfamiliar with the evacuation environment. The modified model includes a collision avoidance strategy and an information transmission model that accounts for information loss. The former is used to avoid collisions among pedestrians in a simulation, whereas the latter mainly describes how pedestrians obtain and choose directions appropriate to them. Simulation results show that pedestrians can obtain the correct moving direction through the information transmission mechanism and that the modified model can reproduce actual pedestrian behavior during an emergency evacuation. Moreover, we draw four conclusions from the simulation results for improving evacuation; these conclusions can help in optimizing efficient emergency evacuation schemes for large public places.
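The information-transmission component can be sketched independently of the force terms: informed pedestrians propagate the exit direction to nearby uninformed ones, with a chance of information loss. The transmission range, loss rate, and initial fraction of informed pedestrians are illustrative assumptions, and the contact and repulsion forces of the full social force model are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
pos = rng.uniform(0, 20, size=(n, 2))
# Most pedestrians start with a random guess of the exit direction; a few
# "informed" pedestrians know the true direction.
direction = rng.normal(size=(n, 2))
direction /= np.linalg.norm(direction, axis=1, keepdims=True)
informed = np.zeros(n, dtype=bool)
informed[:5] = True
exit_dir = np.array([1.0, 0.0])
direction[informed] = exit_dir

radius, loss = 3.0, 0.1   # transmission range and information-loss rate

for step in range(100):
    # Information transmission: an uninformed pedestrian near an informed
    # one adopts the exit direction, unless the message is "lost".
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    near_informed = (d < radius) & informed[None, :]
    receives = near_informed.any(axis=1) & (rng.random(n) > loss)
    direction[receives & ~informed] = exit_dir
    informed |= receives
    pos += 0.5 * direction   # driving term only; contact forces omitted

print("informed pedestrians:", informed.sum(), "of", n)
```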
A UML-based ontology for describing hospital information system architectures.
Winter, A; Brigl, B; Wendt, T
2001-01-01
To control the heterogeneity inherent in hospital information systems, information management needs appropriate methods or techniques for modeling hospital information systems. This paper shows that, for several reasons, available modeling approaches are not able to answer relevant questions of information management. To overcome this major deficiency we offer a UML-based ontology for describing hospital information system architectures. This ontology comprises three layers: the domain layer, the logical tool layer, and the physical tool layer, and defines the relevant components. The relations between these components, especially between components of different layers, make it possible to answer our information management questions.
[Study on Information Extraction of Clinic Expert Information from Hospital Portals].
Zhang, Yuanpeng; Dong, Jiancheng; Qian, Danmin; Geng, Xingyun; Wu, Huiqun; Wang, Li
2015-12-01
Clinic expert information provides important references for residents in need of hospital care. Usually, such information is hidden in the deep web and cannot be directly indexed by search engines. To extract clinic expert information from the deep web, the first challenge is to make judgments about search forms. This paper proposes a novel method based on a domain model, which is a tree structure constructed from the attributes of search interfaces. With this model, search interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the returned web pages indexed by search interfaces. To filter the noise information on a web page, a block importance model is proposed. The experimental results indicated that the domain model yielded a precision 10.83% higher than that of the rule-based method, whereas the block importance model yielded an F₁ measure 10.5% higher than that of the XPath method.
Effect of Heterogeneous Interest Similarity on the Spread of Information in Mobile Social Networks
NASA Astrophysics Data System (ADS)
Zhao, Narisa; Sui, Guoqin; Yang, Fan
2018-06-01
Mobile social networks (MSNs) are important platforms for spreading news. The fact that individuals usually forward information aligned with their own interests inevitably changes the dynamics of information spread. To this end, we first present a theoretical model based on the discrete Markov chain and mean field theory to evaluate the effect of interest similarity on information spread in MSNs. Meanwhile, individuals' interests are heterogeneous and vary with time. These two features result in interest-shift behavior, and both are considered in our model. Simulations demonstrate the accuracy of our model. Moreover, the basic reproduction number R0 is determined. Further extensive numerical analyses based on the model indicate that interest similarity has a critical impact on information spread at the early spreading stage. Specifically, information always spreads more quickly and widely when the interest similarity between an individual and the information is higher. Finally, five actual data sets from Sina Weibo illustrate the validity of the model.
NASA Astrophysics Data System (ADS)
Li, Da; Cheung, Chifai; Zhao, Xing; Ren, Mingjun; Zhang, Juan; Zhou, Liqiu
2016-10-01
Autostereoscopy-based three-dimensional (3D) digital reconstruction has been widely applied in the fields of medical science, entertainment, design, industrial manufacture, precision measurement and many other areas. The 3D digital model of a target can be reconstructed from the series of two-dimensional (2D) information acquired by an autostereoscopic system, which consists of multiple lenses and can provide information on the target from multiple angles. This paper presents a generalized and precise autostereoscopic 3D digital reconstruction method based on Direct Extraction of Disparity Information (DEDI), which can be applied to any autostereoscopic system and provides accurate 3D reconstruction results through an error-elimination process based on statistical analysis. The feasibility of the DEDI method has been verified through a series of optical 3D digital reconstruction experiments on different autostereoscopic systems; the method efficiently performs direct full 3D digital model construction through a tomography-like operation on every depth plane, excluding defocused information. With the focused information processed by the DEDI method, the 3D digital model of the target can be directly and precisely formed along the axial direction with the depth information.
1986-09-01
differentiation between the systems. This study will investigate an appropriate Order Processing and Management Information System (OP&MIS) to link base-level...methodology: 1. Reviewed the current order processing and information model of the TUAF Logistics System. (centralized-manual model) 2. Described the...RDS program's order processing and information system. (centralized-computerized model) 3. Described the order processing and information system of
NASA Astrophysics Data System (ADS)
Fan, Hong; Li, Huan
2015-12-01
Location-related data are playing an increasingly irreplaceable role in business, government and scientific research, while the amount and variety of such data are rapidly increasing. It is a challenge to quickly find required information in this rapidly growing volume of data and to efficiently provide different levels of geospatial data to users. This paper puts forward a data-oriented access model for geographic information science data. First, we analyze the features of GIS data, including traditional types such as vector and raster data and new types such as Volunteered Geographic Information (VGI). Based on these analyses, a classification scheme for geographic data is proposed, and TRAFIE is introduced to describe the establishment of a multi-level model for geographic data. Based on this model, a multi-level, scalable access system for geospatial information is put forward. Users can select different levels of data according to their concrete application needs. Pull-based and push-based data access mechanisms based on this model are presented. A Service Oriented Architecture (SOA) was chosen for the data processing. The model is demonstrated with a simulation of fire disaster data collection supporting the decision-making processes of government departments. The use case shows that the data model and the data provision system are flexible and adaptable.
NASA Astrophysics Data System (ADS)
Lv, Zheng; Sui, Haigang; Zhang, Xilin; Huang, Xianfeng
2007-11-01
As one of the most important geospatial objects and military installations, airports are key targets in the fields of transportation and military affairs. Therefore, automatic recognition and extraction of airports from remote sensing images is important and urgent for civil aviation updating and military applications. In this paper, a new multi-source data fusion approach to automatic airport information extraction, updating and 3D modeling is presented. Corresponding key technologies, including feature extraction of airport information based on a modified Otsu algorithm, automatic change detection based on a new parallel-lines-based buffer detection algorithm, 3D modeling based on a gradual elimination of non-building points algorithm, 3D change detection between the old airport model and LIDAR data, import of typical CAD models, and so on, are discussed in detail. Finally, based on these technologies, we developed a prototype system, and the results show our method can achieve good effects.
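As background for the feature-extraction step, the baseline Otsu algorithm (which the paper modifies) picks the grey-level threshold that maximizes the between-class variance of the histogram. The sketch below, on synthetic intensities, shows only the unmodified baseline.

```python
import numpy as np

def otsu_threshold(img):
    """Classic Otsu threshold: maximize between-class variance over the
    grey-level histogram."""
    hist, _ = np.histogram(img.ravel(), bins=256, range=(0, 256))
    p = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()   # class weights
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (levels[:t] * p[:t]).sum() / w0
        mu1 = (levels[t:] * p[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

# Synthetic bimodal intensities standing in for an image.
img = np.concatenate([np.random.normal(60, 10, 5000),
                      np.random.normal(180, 15, 5000)]).clip(0, 255)
print(otsu_threshold(img))  # around 120
```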
Quantitative biologically-based models describing key events in the continuum from arsenic exposure to the development of adverse health effects provide a framework to integrate information obtained across diverse research areas. For example, genetic polymorphisms in arsenic met...
Rational analyses of information foraging on the web.
Pirolli, Peter
2005-05-06
This article describes rational analyses and cognitive models of Web users developed within information foraging theory. This is done by following the rational analysis methodology of (a) characterizing the problems posed by the environment, (b) developing rational analyses of behavioral solutions to those problems, and (c) developing cognitive models that approach the realization of those solutions. Navigation choice is modeled with a random utility model that uses spreading activation mechanisms to link proximal cues (information scent) that occur in Web browsers to internal user goals. Web-site leaving is modeled as an ongoing assessment by the Web user of the expected benefits of continuing at a Web site as opposed to going elsewhere. These cost-benefit assessments are also based on spreading activation models of information scent. Evaluations include a computational model of Web user behavior called Scent-Based Navigation and Information Foraging in the ACT Architecture, and the Law of Surfing, which characterizes the empirical distribution of the length of paths of visitors at a Web site.
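The navigation-choice component can be illustrated with a softmax (random utility) rule over precomputed information-scent scores; the link labels and scores below are invented, and the leaving rule is reduced to a simple threshold rather than the full cost-benefit assessment of the theory's computational models.

```python
import numpy as np

# Precomputed spreading-activation (information scent) scores linking the
# user's goal to each link's proximal cues; labels and values are invented.
links = {
    "discount airfare": 2.1,
    "hotel deals": 0.4,
    "airline tickets": 1.7,
}

def choice_probabilities(scent_scores, temperature=1.0):
    """Random utility (softmax) choice over links driven by information scent."""
    s = np.array(list(scent_scores.values())) / temperature
    p = np.exp(s - s.max())
    return dict(zip(scent_scores, p / p.sum()))

print(choice_probabilities(links))
# Leaving rule reduced to a threshold: abandon the site when even the best
# available scent falls below the expected benefit of going elsewhere.
print("leave site:", max(links.values()) < 1.0)
```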
The DICOM-based radiation therapy information system
NASA Astrophysics Data System (ADS)
Law, Maria Y. Y.; Chan, Lawrence W. C.; Zhang, Xiaoyan; Zhang, Jianguo
2004-04-01
Similar to DICOM for PACS (Picture Archiving and Communication System), standards for radiotherapy (RT) information have been ratified with seven DICOM-RT objects and their IODs (Information Object Definitions), which encompass more than just images. This presentation describes how a DICOM-based RT Information System Server can be built on PACS technology and its data model for web-based distribution. Methods: The RT Information System consists of a Modality Simulator, a data format translator, an RT Gateway, the DICOM RT Server, and the Web-based Application Server. The DICOM RT Server was designed based on a PACS data model and was connected to a Web Application Server for distribution of RT information including therapeutic plans, structures, dose distributions, images and records. The various DICOM RT objects of a patient transmitted to the RT Server were routed to the Web Application Server, where the contents of the DICOM RT objects were decoded and mapped to the corresponding locations of the RT data model for display in the specially designed graphical user interface. Non-DICOM objects were first rendered into DICOM RT objects in the translator before they were sent to the RT Server. Results: Ten clinical cases were collected from different hospitals for evaluation of the DICOM-based RT Information System. They were successfully routed through the data flow and displayed in the client workstation of the RT Information System. Conclusion: Using the DICOM-RT standards, integration of RT data from different vendors is possible.
2002-08-01
the measurement noise, as well as the physical model of the forward-scattered electric field. The Bayesian algorithms for the Uncertain Permittivity ... received at multiple sensors. In this research project a tissue-model-based signal-detection theory approach for the detection of mammary tumors in the ... oriented information processors.
Guiding Conformation Space Search with an All-Atom Energy Potential
Brunette, TJ; Brock, Oliver
2009-01-01
The most significant impediment for protein structure prediction is the inadequacy of conformation space search. Conformation space is too large and the energy landscape too rugged for existing search methods to consistently find near-optimal minima. To alleviate this problem, we present model-based search, a novel conformation space search method. Model-based search uses highly accurate information obtained during search to build an approximate, partial model of the energy landscape. Model-based search aggregates information in the model as it progresses, and in turn uses this information to guide exploration towards regions most likely to contain a near-optimal minimum. We validate our method by predicting the structure of 32 proteins, ranging in length from 49 to 213 amino acids. Our results demonstrate that model-based search is more effective at finding low-energy conformations in high-dimensional conformation spaces than existing search methods. The reduction in energy translates into structure predictions of increased accuracy. PMID:18536015
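As a rough illustration of the model-based search idea (not the authors' implementation), the sketch below records the best energy seen in each region of a one-dimensional toy landscape and biases further sampling toward the region whose modeled minimum is lowest; the landscape, region grid and exploration rate are all invented.

```python
import random

def energy(x):
    # Toy rugged landscape standing in for an all-atom energy potential.
    return (x - 3.2) ** 2 + 0.8 * random.random()

def model_based_search(n_regions=10, iters=200, explore=0.2):
    model = {r: float("inf") for r in range(n_regions)}  # partial landscape model
    best_e, best_x = float("inf"), None
    for _ in range(iters):
        if random.random() < explore:
            r = random.randrange(n_regions)          # occasional exploration
        else:
            r = min(model, key=model.get)            # exploit most promising region
        x = r + random.random()                      # sample within region [r, r+1)
        e = energy(x)
        model[r] = min(model[r], e)                  # aggregate information into model
        if e < best_e:
            best_e, best_x = e, x
    return best_e, best_x

print(model_based_search())
```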
Stemflow estimation in a redwood forest using model-based stratified random sampling
Jack Lewis
2003-01-01
Model-based stratified sampling is illustrated by a case study of stemflow volume in a redwood forest. The approach is actually a model-assisted sampling design in which auxiliary information (tree diameter) is utilized in the design of stratum boundaries to optimize the efficiency of a regression or ratio estimator. The auxiliary information is utilized in both the...
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan W.
2014-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
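A hedged sketch of the residual-monitoring idea follows: sensed outputs are compared against the outputs of a nominal engine model (reduced here to a single linear trim point), and an anomaly is flagged when the residual norm exceeds a threshold. The matrices, threshold and data are placeholders, not values from the paper.

```python
import numpy as np

def predict_output(u):
    # One trim point of a piecewise-linear model: y = C x + D u (toy values).
    C = np.array([[1.0, 0.2]])
    D = np.array([[0.1]])
    x = np.array([0.5, 1.0])          # steady-state trim state
    return C @ x + D @ np.atleast_1d(u)

def detect_anomalies(inputs, sensed_outputs, threshold=0.5):
    flags = []
    for u, y in zip(inputs, sensed_outputs):
        residual = y - predict_output(u)
        flags.append(float(np.linalg.norm(residual)) > threshold)
    return flags

inputs = [0.0, 0.1, 0.2]
sensed = [np.array([0.75]), np.array([0.72]), np.array([1.90])]  # last is anomalous
print(detect_anomalies(inputs, sensed))   # [False, False, True]
```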
A Model-Based Anomaly Detection Approach for Analyzing Streaming Aircraft Engine Measurement Data
NASA Technical Reports Server (NTRS)
Simon, Donald L.; Rinehart, Aidan Walker
2015-01-01
This paper presents a model-based anomaly detection architecture designed for analyzing streaming transient aircraft engine measurement data. The technique calculates and monitors residuals between sensed engine outputs and model predicted outputs for anomaly detection purposes. Pivotal to the performance of this technique is the ability to construct a model that accurately reflects the nominal operating performance of the engine. The dynamic model applied in the architecture is a piecewise linear design comprising steady-state trim points and dynamic state space matrices. A simple curve-fitting technique for updating the model trim point information based on steady-state information extracted from available nominal engine measurement data is presented. Results from the application of the model-based approach for processing actual engine test data are shown. These include both nominal fault-free test case data and seeded fault test case data. The results indicate that the updates applied to improve the model trim point information also improve anomaly detection performance. Recommendations for follow-on enhancements to the technique are also presented and discussed.
Geo3DML: A standard-based exchange format for 3D geological models
NASA Astrophysics Data System (ADS)
Wang, Zhangang; Qu, Honggang; Wu, Zixing; Wang, Xianghong
2018-01-01
A geological model (geomodel) in three-dimensional (3D) space is a digital representation of the Earth's subsurface, recognized by geologists and stored in resultant geological data (geodata). The increasing demand for data management and interoperable applications of geomodels can be addressed by developing standard-based exchange formats for the representation of not only a single geological object, but also holistic geomodels. However, current standards such as GeoSciML cannot incorporate all the geomodel-related information. This paper presents Geo3DML for the exchange of 3D geomodels based on the existing Open Geospatial Consortium (OGC) standards. Geo3DML is based on a unified and formal representation of structural models, attribute models and hierarchical structures of interpreted resultant geodata in different dimensional views, including drills, cross-sections/geomaps and 3D models, which is compatible with the conceptual model of GeoSciML. Geo3DML aims to encode all geomodel-related information integrally in one framework, including the semantic and geometric information of geoobjects and their relationships, as well as visual information. At present, Geo3DML and some supporting tools have been released as a data-exchange standard by the China Geological Survey (CGS).
Data Discretization for Novel Relationship Discovery in Information Retrieval.
ERIC Educational Resources Information Center
Benoit, G.
2002-01-01
Describes an information retrieval, visualization, and manipulation model which offers the user multiple ways to exploit the retrieval set, based on weighted query terms, via an interactive interface. Outlines the mathematical model and describes an information retrieval application built on the model to search structured and full-text files.…
Enhanced semantic interoperability by profiling health informatics standards.
López, Diego M; Blobel, Bernd
2009-01-01
Several standards applied to the healthcare domain support semantic interoperability. These standards are far from being completely adopted in health information system development, however. The objective of this paper is to provide a method, and suggest the necessary tooling, for reusing standard health information models, thereby supporting the development of semantically interoperable systems and components. The approach is based on the definition of UML Profiles. UML profiling is a formal modeling mechanism for specializing reference meta-models in such a way that it is possible to adapt those meta-models to specific platforms or domains; a health information model can be considered such a meta-model. The first step of the introduced method identifies the standard health information models and the tasks in the software development process in which healthcare information models can be reused. Then, the selected information model is formalized as a UML Profile. That Profile is finally applied to system models, annotating them with the semantics of the information model. The approach is supported by Eclipse-based UML modeling tools. The method is integrated into a comprehensive framework for health information systems development, and the feasibility of the approach is demonstrated in the analysis, design, and implementation of a public health surveillance system, reusing HL7 RIM and DIM specifications. The paper describes a method and the necessary tooling for reusing standard healthcare information models. UML offers several advantages, such as tooling support, graphical notation, exchangeability, extensibility, and semi-automatic code generation. The approach presented is also applicable for harmonizing different standard specifications.
Quantitative methods to direct exploration based on hydrogeologic information
Graettinger, A.J.; Lee, J.; Reeves, H.W.; Dethan, D.
2006-01-01
Quantitatively Directed Exploration (QDE) approaches based on information such as model sensitivity, input data covariance and model output covariance are presented. Seven approaches for directing exploration are developed, applied, and evaluated on a synthetic hydrogeologic site. The QDE approaches evaluate input information uncertainty, subsurface model sensitivity and, most importantly, output covariance to identify the next location to sample. Spatial input parameter values and covariances are calculated with the multivariate conditional probability calculation from a limited number of samples. A variogram structure is used during data extrapolation to describe the spatial continuity, or correlation, of subsurface information. Model sensitivity can be determined by perturbing input data and evaluating output response or, as in this work, sensitivities can be programmed directly into an analysis model. Output covariance is calculated by the First-Order Second Moment (FOSM) method, which combines the covariance of input information with model sensitivity. A groundwater flow example, modeled in MODFLOW-2000, is chosen to demonstrate the seven QDE approaches. MODFLOW-2000 is used to obtain the piezometric head and the model sensitivity simultaneously. The seven QDE approaches are evaluated based on the accuracy of the modeled piezometric head after information from a QDE sample is added. For the synthetic site used in this study, the QDE approach that identifies the location of hydraulic conductivity that contributes the most to the overall piezometric head variance proved to be the best method to quantitatively direct exploration. © IWA Publishing 2006.
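The FOSM step mentioned above has a compact closed form: the output covariance is the input covariance propagated through the model sensitivity (Jacobian), Cov_y ≈ J Cov_x J^T. The sketch below uses invented numbers purely to show the computation.

```python
import numpy as np

# First-Order Second Moment (FOSM): Cov_y ≈ J @ Cov_x @ J.T, where J holds
# model sensitivities of outputs to inputs. All values here are illustrative.

J = np.array([[0.8, 0.1],        # sensitivity of two head observations
              [0.3, 0.6]])       # to two hydraulic-conductivity parameters
Cov_x = np.array([[0.04, 0.01],  # input (parameter) covariance
                  [0.01, 0.09]])

Cov_y = J @ Cov_x @ J.T
print(np.diag(Cov_y))  # output variances; sample next where variance is largest
```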
Exploiting salient semantic analysis for information retrieval
NASA Astrophysics Data System (ADS)
Luo, Jing; Meng, Bo; Quan, Changqin; Tu, Xinhui
2016-11-01
Recently, many Wikipedia-based methods have been proposed to improve the performance of different natural language processing (NLP) tasks, such as semantic relatedness computation, text classification and information retrieval. Among these methods, salient semantic analysis (SSA) has been proven to be an effective way to generate conceptual representations for words or documents. However, its feasibility and effectiveness in information retrieval are mostly unknown. In this paper, we study how to use SSA efficiently to improve information retrieval performance, and propose an SSA-based retrieval method under the language model framework. First, the SSA model is adopted to build conceptual representations for documents and queries. Then, these conceptual representations and the bag-of-words (BOW) representations are used in combination to estimate the language models of queries and documents. The proposed method is evaluated on several standard Text REtrieval Conference (TREC) collections; experimental results show that the proposed models consistently outperform existing Wikipedia-based retrieval methods.
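One simple way to realize the described combination, assuming a Jelinek-Mercer-style linear interpolation between the two representations (the weights and probabilities below are toy values; SSA itself, which maps text to salient concepts, is not reimplemented here):

```python
# Linear interpolation of a bag-of-words document language model with a
# concept-based (SSA-style) one: P(t|d) = lam * P_bow(t|d) + (1-lam) * P_con(t|d).

def interpolated_query_likelihood(query_terms, p_bow, p_con, lam=0.7, eps=1e-6):
    score = 1.0
    for t in query_terms:
        score *= lam * p_bow.get(t, eps) + (1 - lam) * p_con.get(t, eps)
    return score

p_bow = {"jaguar": 0.002, "speed": 0.004}   # term statistics of one document
p_con = {"jaguar": 0.010, "speed": 0.003}   # probabilities via salient concepts
print(interpolated_query_likelihood(["jaguar", "speed"], p_bow, p_con))
```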
Yu, Rongjie; Abdel-Aty, Mohamed
2013-07-01
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of the Bayesian inference is that prior information for the independent variables can be included in the inference procedures. However, there are few studies that discuss how to formulate informative priors for the independent variables and evaluate the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches for developing informative priors for the independent variables based on historical data and expert experience. Merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). Deviance information criterion (DIC), R-square values, and coefficients of variance for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior, with a better model fit, and is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracies. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested. The effects of the different types of informative priors on the model estimations and goodness-of-fit have been compared, and conclusions drawn. Finally, based on the results, recommendations for future research topics and study applications have been made. Copyright © 2013 Elsevier Ltd. All rights reserved.
Vivek-Ananth, R P; Samal, Areejit
2016-09-01
A major goal of systems biology is to build predictive computational models of cellular metabolism. Availability of complete genome sequences and wealth of legacy biochemical information has led to the reconstruction of genome-scale metabolic networks in the last 15 years for several organisms across the three domains of life. Due to paucity of information on kinetic parameters associated with metabolic reactions, the constraint-based modelling approach, flux balance analysis (FBA), has proved to be a vital alternative to investigate the capabilities of reconstructed metabolic networks. In parallel, advent of high-throughput technologies has led to the generation of massive amounts of omics data on transcriptional regulation comprising mRNA transcript levels and genome-wide binding profile of transcriptional regulators. A frontier area in metabolic systems biology has been the development of methods to integrate the available transcriptional regulatory information into constraint-based models of reconstructed metabolic networks in order to increase the predictive capabilities of computational models and understand the regulation of cellular metabolism. Here, we review the existing methods to integrate transcriptional regulatory information into constraint-based models of metabolic networks. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
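Flux balance analysis itself reduces to a linear program: maximize a cellular objective c^T v subject to the steady-state constraint S v = 0 and flux bounds. A minimal sketch on an invented three-reaction toy network follows (real reconstructions have thousands of reactions; integrating regulatory information would further constrain the bounds).

```python
import numpy as np
from scipy.optimize import linprog

# Toy FBA: maximize v3 subject to S v = 0 and 0 <= v <= 10.
S = np.array([[1, -1,  0],    # metabolite A: made by r1, consumed by r2
              [0,  1, -1]])   # metabolite B: made by r2, consumed by r3
c = np.array([0.0, 0.0, -1.0])  # linprog minimizes, so negate the objective
bounds = [(0, 10)] * 3

res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux distribution: [10. 10. 10.]

# Regulatory-integration methods (e.g. switching a reaction off when its
# regulator is inactive) would typically tighten individual bounds to (0, 0).
```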
On domain modelling of the service system with its application to enterprise information systems
NASA Astrophysics Data System (ADS)
Wang, J. W.; Wang, H. F.; Ding, J. L.; Furuta, K.; Kanno, T.; Ip, W. H.; Zhang, W. J.
2016-01-01
Information systems are a kind of service system, and they run throughout every element of a modern industrial and business system, much like blood in the body. Types of information systems are heterogeneous because of the extreme uncertainty of change in modern industrial and business systems. To effectively manage information systems, modelling of the work domain (or domain) of information systems is necessary. In this paper, a domain modelling framework for the service system is proposed and its application to the enterprise information system is outlined. The framework is defined based on the application of a general domain modelling tool called function-context-behaviour-principle-state-structure (FCBPSS). The FCBPSS is based on a set of core concepts, namely function, context, behaviour, principle, state and structure, and on system decomposition. Unlike many other applications of FCBPSS in systems engineering, here the FCBPSS is applied to both infrastructure and substance systems, which is novel and effective for the modelling of service systems, including enterprise information systems. It is to be noted that domain modelling of systems (e.g. enterprise information systems) is key to the integration of heterogeneous systems and to coping with unanticipated situations that such systems face.
NASA Astrophysics Data System (ADS)
Telipenko, E.; Chernysheva, T.; Zakharova, A.; Dumchev, A.
2015-10-01
The article presents research results on the development of a knowledge base for an intellectual information system for enterprise bankruptcy risk assessment. The analysis of the knowledge base development process is described; the main stages of the process, some problems and their solutions are given. The article introduces a connectionist model for bankruptcy risk assessment based on the analysis of industrial enterprises' financial accounts. The basis of this connectionist model is a three-layer perceptron trained with the error back-propagation algorithm. The knowledge base for the intellectual information system consists of processed information and the processing method, represented as the connectionist model. The article presents the structure of the intellectual information system, the knowledge base, and the information processing algorithm for neural network training. The paper shows mean values of 10 indexes for industrial enterprises, with whose help it is possible to carry out a financial analysis of industrial enterprises and correctly identify the current situation for well-timed managerial decisions. Results are given of neural network testing on data from both bankrupt and financially strong enterprises that were not included in the training and test sets.
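A minimal sketch of such a classifier, assuming a three-layer perceptron over 10 financial indexes; the data here are random stand-ins for real financial accounts, and the network size and learning rate are illustrative:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))                # 10 financial indexes per enterprise
y = (X[:, :3].sum(axis=1) < 0).astype(int)    # 1 = "bankruptcy risk" (synthetic rule)

# One hidden layer gives three layers in total (input, hidden, output), trained
# by back-propagation of error, as in the described connectionist model.
clf = MLPClassifier(hidden_layer_sizes=(8,), solver="sgd",
                    learning_rate_init=0.05, max_iter=2000, random_state=0)
clf.fit(X, y)
print(clf.score(X, y))                        # training accuracy on the toy data
```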
Model-Based Assurance Case+ (MBAC+): Tutorial on Modeling Radiation Hardness Assurance Activities
NASA Technical Reports Server (NTRS)
Austin, Rebekah; Label, Ken A.; Sampson, Mike J.; Evans, John; Witulski, Art; Sierawski, Brian; Karsai, Gabor; Mahadevan, Nag; Schrimpf, Ron; Reed, Robert A.
2017-01-01
This presentation will cover why modeling is useful for radiation hardness assurance cases, and will also provide information on Model-Based Assurance Case+ (MBAC+), NASA's Reliability Maintainability Template, and Fault Propagation Modeling.
Using Web-Based Knowledge Extraction Techniques to Support Cultural Modeling
NASA Astrophysics Data System (ADS)
Smart, Paul R.; Sieck, Winston R.; Shadbolt, Nigel R.
The World Wide Web is a potentially valuable source of information about the cognitive characteristics of cultural groups. However, attempts to use the Web in the context of cultural modeling activities are hampered by the large-scale nature of the Web and the current dominance of natural language formats. In this paper, we outline an approach to support the exploitation of the Web for cultural modeling activities. The approach begins with the development of qualitative cultural models (which describe the beliefs, concepts and values of cultural groups), and these models are subsequently used to develop an ontology-based information extraction capability. Our approach represents an attempt to combine conventional approaches to information extraction with epidemiological perspectives of culture and network-based approaches to cultural analysis. The approach can be used, we suggest, to support the development of models providing a better understanding of the cognitive characteristics of particular cultural groups.
Digital modulation and achievable information rates of thru-body haptic communications
NASA Astrophysics Data System (ADS)
Hanisch, Natalie; Pierobon, Massimiliano
2017-05-01
The ever-increasing biocompatibility and pervasiveness of wearable and implantable devices demand novel, sustainable solutions for their connectivity, with impact on broad application scenarios in the defense, biomedicine, and entertainment fields. While wireless electromagnetic communications face challenges such as device miniaturization, energy scarcity, limited range, and the possibility of interception, solutions not only inspired by but also based on natural communication means may prove valid alternatives. In this paper, a communication paradigm in which digital information is propagated through the nervous system is proposed and analyzed on the basis of achievable information rates. In particular, this paradigm rests on an analytical framework in which the response of a system based on haptic (tactile) information transmission and ElectroEncephaloGraphy (EEG)-based reception is modeled and characterized. Computational neuroscience models of the somatosensory signal representation in the brain, coupled with models of the generation and propagation of somatosensory stimulation from skin mechanoreceptors, are employed to provide a proof-of-concept evaluation of the achievable performance in encoding information bits into tactile stimulation and decoding them from the recorded brain activity. Based on these models, the system is simulated and the resulting data are used to train a Support Vector Machine (SVM) classifier, which finally provides a proof-of-concept validation of the system performance in terms of information rates against bit error probability at the receiver.
ERIC Educational Resources Information Center
Olatokun, Wole Michael; Ajagbe, Enitan
2010-01-01
This survey-based study examined the information-seeking behaviour of traditional medical practitioners using Taylor's information use model. Respondents comprised all 160 traditional medical practitioners that treat sickle cell anaemia. Data were collected using an interviewer-administered, structured questionnaire. Frequency and percentage…
PROCRU: A model for analyzing crew procedures in approach to landing
NASA Technical Reports Server (NTRS)
Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.
1980-01-01
A model for analyzing crew procedures in approach to landing is developed. The model employs the information processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multi-task environment. Decisions are based on probability assessments and potential mission impact (or gain). Sub-models for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer to allow for storage in memory until that information can be processed.
An Ontology Based Approach to Information Security
NASA Astrophysics Data System (ADS)
Pereira, Teresa; Santos, Henrique
The semantic structuring of knowledge based on ontology approaches has been increasingly adopted by experts from diverse domains. Recently, ontologies have moved from the philosophical and metaphysical disciplines to being used in the construction of models that describe a specific theory of a domain. The development and use of ontologies promote the creation of a unique standard to represent concepts within a specific knowledge domain. In the scope of information security systems, the use of an ontology to formalize and represent the concepts of security information challenges the mechanisms and techniques currently used. This paper presents a conceptual implementation model of an ontology defined in the security domain. The model presented contains the semantic concepts based on the information security standard.
Stratigraphy of the crater Copernicus
NASA Technical Reports Server (NTRS)
Paquette, R.
1984-01-01
The stratigraphy of Copernicus based on its olivine absorption bands is presented. Earth-based spectral data are used to develop models that also employ cratering mechanics to devise theories for Copernican geomorphology. General geologic information, spectral information, upper and lower stratigraphic units and a chart for model comparison are included in the stratigraphic analysis.
Information Retrieval Using UMLS-based Structured Queries
Fagan, Lawrence M.; Berrios, Daniel C.; Chan, Albert; Cucina, Russell; Datta, Anupam; Shah, Maulik; Surendran, Sujith
2001-01-01
During the last three years, we have developed and described components of ELBook, a semantically based information-retrieval system [1-4]. Using these components, domain experts can specify a query model, indexers can use the query model to index documents, and end-users can search these documents for instances of indexed queries.
A Model for Web-based Information Systems in E-Retailing.
ERIC Educational Resources Information Center
Wang, Fang; Head, Milena M.
2001-01-01
Discusses the use of Web-based information systems (WIS) by electronic retailers to attract and retain consumers and deliver business functions and strategy. Presents an abstract model for WIS design in electronic retailing; discusses customers, business determinants, and business interface; and suggests future research. (Author/LRW)
Cammarota, M; Huppes, V; Gaia, S; Degoulet, P
1998-01-01
The development of health information systems is largely determined by the establishment of the underlying information models. An Object-Oriented Matrix Model (OOMM) is described whose aim is to facilitate the integration of the overall health system. The model is based on information modules named micro-databases that are structured in a three-dimensional network: planning, health structures and information systems. The modelling tool has been developed as a layer on top of a relational database system. A visual browser facilitates the development and maintenance of the information model. The modelling approach has been applied at the Brasilia University Hospital since 1991. The extension of the modelling approach to the Brasilia regional health system is considered.
Linking Earth Observations and Models to Societal Information Needs: The Case of Coastal Flooding
NASA Astrophysics Data System (ADS)
Buzzanga, B. A.; Plag, H. P.
2016-12-01
Coastal flooding is expected to increase in many areas due to sea level rise (SLR). Many societal applications such as emergency planning and designing public services depend on information on how the flooding spectrum may change as a result of SLR. To identify the societal information needs, a conceptual model is needed that identifies the key stakeholders, applications, and information and observation needs. In the context of the development of the Global Earth Observation System of Systems (GEOSS), which is implemented by the Group on Earth Observations (GEO), the Socio-Economic and Environmental Information Needs Knowledge Base (SEE-IN KB) is developed as part of the GEOSS Knowledge Base. A core function of the SEE-IN KB is to facilitate the linkage of societal information needs to observations, models, information and knowledge. To achieve this, the SEE-IN KB collects information on objects such as user types, observational requirements, societal goals, models, and datasets. Comprehensive information concerning the interconnections between instances of these objects is used to capture the connectivity and to establish a conceptual model as a network of networks. The captured connectivity can be used in searches to allow users to discover products and services for their information needs, and providers to search for users and applications benefiting from their products. It also allows users to answer "What if?" questions and supports knowledge creation. We have used the SEE-IN KB to develop a conceptual model capturing the stakeholders in coastal flooding and their information needs, and to link these elements to objects. We show how the knowledge base enables the transition of scientific data to useable information by connecting individuals such as city managers to flood maps. Within the knowledge base, these same users can request information that improves their ability to make specific planning decisions. These needs are linked to entities within research institutions that have the capabilities to meet them. Further, current research such as that investigating precipitation-induced flooding under different SLR scenarios is linked to the users who benefit from the knowledge, effectively creating a bi-directional channel between science and society that increases knowledge and improves foresight.
Su, Gui-yang; Li, Jian-hua; Ma, Ying-hua; Li, Sheng-hong
2004-09-01
With the flood of pornographic information on the Internet, keeping people away from such offensive content has become one of the most important research areas in network information security. Applications that block or filter such information are in use; their approaches can be roughly classified into two kinds: metadata based and content based. With the development of distributed technologies, content-based filtering will play an increasingly important role in filtering systems. Keyword matching is a content-based method widely used in harmful text filtering. Experiments to evaluate its recall and precision showed that although the recall is rather high, the precision is not satisfactory. Based on these results, a new pornographic text filtering model based on reconfirming is put forward. Experiments showed that the model is practical, loses less recall than single keyword matching, and achieves higher precision.
Kandhasamy, Chandrasekaran; Ghosh, Kaushik
2017-02-01
Indian states are currently classified into HIV-risk categories based on the observed prevalence counts, percentage of infected attendees in antenatal clinics, and percentage of infected high-risk individuals. This method, however, does not account for the spatial dependence among the states nor does it provide any measure of statistical uncertainty. We provide an alternative model-based approach to address these issues. Our method uses Poisson log-normal models having various conditional autoregressive structures with neighborhood-based and distance-based weight matrices and incorporates all available covariate information. We use R and WinBugs software to fit these models to the 2011 HIV data. Based on the Deviance Information Criterion, the convolution model using distance-based weight matrix and covariate information on female sex workers, literacy rate and intravenous drug users is found to have the best fit. The relative risk of HIV for the various states is estimated using the best model and the states are then classified into the risk categories based on these estimated values. An HIV risk map of India is constructed based on these results. The choice of the final model suggests that an HIV control strategy which focuses on the female sex workers, intravenous drug users and literacy rate would be most effective. Copyright © 2017 Elsevier Ltd. All rights reserved.
The Design and Implement of Tourism Information System Based on GIS
NASA Astrophysics Data System (ADS)
Chunchang, Fu; Nan, Zhang
Starting from the concept of the geographic information system (GIS), this paper discusses the main contents of geographic information systems and the key technical measures of current GIS-based tourism information systems, presents the specific requirements and goals for applying a tourism information system, and analyzes methods for realizing a tourism information system based on a relational database model within GIS.
Ogawa, K
1992-01-01
This paper proposes a new evaluation and prediction method for computer usability. The method is based on two previously proposed information transmission measures derived from a human-to-computer information transmission model. The model has three information transmission levels: the device, software, and task content levels. Two measures, the device-independent information measure (DI) and the computer-independent information measure (CI), defined on the software and task content levels respectively, are given as amounts of transmitted information. Two information transmission rates are then defined, where T is the task completion time: the device-independent information transmission rate RDI = DI/T and the computer-independent information transmission rate RCI = CI/T. The method uses the RDI and RCI rates to evaluate the relative usability of software and device operations on different computer systems. Experiments on a graphical information input task using three different systems confirm that the method offers an efficient way of determining computer usability.
Model Documentation of Base Case Data | Regional Energy Deployment System Model | Energy Analysis | NREL
Documentation of the base case data of the Regional Energy Deployment System model. The base case was developed simply as a point of departure for other analyses. The base case derives many of its inputs from the Energy Information Administration's (EIA's) Annual Energy Outlook.
Operator Performance Measures for Assessing Voice Communication Effectiveness
1989-07-01
performance and work- load assessment techniques have been based.I Broadbent (1958) described a limited capacity filter model of human information...INFORMATION PROCESSING 20 3.1.1. Auditory Attention 20 3.1.2. Auditory Memory 24 3.2. MODELS OF INFORMATION PROCESSING 24 3.2.1. Capacity Theories 25...Learning 0 Attention * Language Specialization • Decision Making• Problem Solving Auditory Information Processing Models of Processing Ooemtor
BIM and IoT: A Synopsis from GIS Perspective
NASA Astrophysics Data System (ADS)
Isikdag, U.
2015-10-01
The Internet of Things (IoT) focuses on enabling communication among all devices and things, whether they exist in real life or virtually. Building Information Models (BIMs) and Building Information Modelling have been buzzwords of the construction industry for the last 15 years. BIMs emerged as a result of a push by software companies to tackle the problems of inefficient information exchange between different software packages and to enable true interoperability. In the BIM approach, the most up-to-date and accurate models of a building are stored in shared central databases during the design and construction of a project and at post-construction stages. GIS-based city monitoring and city management applications require the fusion of information acquired from multiple resources: BIMs, city models and sensors. This paper focuses on providing a method for facilitating the GIS-based fusion of information residing in digital building "Models" and information acquired from city objects, i.e. "Things". Once this information fusion is accomplished, many fields, ranging from emergency response, urban surveillance and urban monitoring to smart buildings, stand to benefit.
Efficient Information Access for Location-Based Services in Mobile Environments
ERIC Educational Resources Information Center
Lee, Chi Keung
2009-01-01
The demand for pervasive access of location-related information (e.g., local traffic, restaurant locations, navigation maps, weather conditions, pollution index, etc.) fosters a tremendous application base of "Location Based Services (LBSs)". Without loss of generality, we model location-related information as "spatial objects" and the accesses…
Hippocampus segmentation using locally weighted prior based level set
NASA Astrophysics Data System (ADS)
Achuthan, Anusha; Rajeswari, Mandava
2015-12-01
Segmentation of the hippocampus in the brain is one of the major challenges in medical image segmentation due to its imaging characteristics: its intensity is almost identical to that of adjacent gray matter structures such as the amygdala. This intensity similarity causes the hippocampus to have weak or fuzzy boundaries. Given this challenge, a segmentation method that relies on image information alone may not produce accurate segmentation results. Therefore, the assimilation of prior information, such as shape and spatial information, into an existing segmentation method is needed to produce the expected segmentation. Previous studies have widely integrated prior information into segmentation methods. However, the prior information has been integrated in a global manner, which does not reflect the real scenario during clinical delineation. Therefore, in this paper, prior information locally integrated into a level set model is presented. This work utilizes a mean shape model to provide automatic initialization for level set evolution, integrated as prior information into the level set model. The local integration of edge-based information and prior information has been implemented through an edge weighting map that decides, at the voxel level, which information should be observed during level set evolution. The edge weighting map shows which corresponding voxels have sufficient edge information. Experiments show that the proposed local integration of prior information into a conventional edge-based level set model, known as the geodesic active contour, yields an improvement of 9% in averaged Dice coefficient.
Terminology model discovery using natural language processing and visualization techniques.
Zhou, Li; Tao, Ying; Cimino, James J; Chen, Elizabeth S; Liu, Hongfang; Lussier, Yves A; Hripcsak, George; Friedman, Carol
2006-12-01
Medical terminologies are important for unambiguous encoding and exchange of clinical information. The traditional manual method of developing terminology models is time-consuming and limited in the number of phrases that a human developer can examine. In this paper, we present an automated method for developing medical terminology models based on natural language processing (NLP) and information visualization techniques. Surgical pathology reports were selected as the testing corpus for developing a pathology procedure terminology model. The use of a general NLP processor for the medical domain, MedLEE, provides an automated method for acquiring semantic structures from a free text corpus and sheds light on a new high-throughput method of medical terminology model development. The use of an information visualization technique supports the summarization and visualization of the large quantity of semantic structures generated from medical documents. We believe that a general method based on NLP and information visualization will facilitate the modeling of medical terminologies.
FlyBase portals to human disease research using Drosophila models
Millburn, Gillian H.; Crosby, Madeline A.; Gramates, L. Sian; Tweedie, Susan
2016-01-01
The use of Drosophila melanogaster as a model for studying human disease is well established, reflected by the steady increase in both the number and proportion of fly papers describing human disease models in recent years. In this article, we highlight recent efforts to improve the availability and accessibility of the disease model information in FlyBase (http://flybase.org), the model organism database for Drosophila. FlyBase has recently introduced Human Disease Model Reports, each of which presents background information on a specific disease, a tabulation of related disease subtypes, and summaries of experimental data and results using fruit flies. Integrated presentations of relevant data and reagents described in other sections of FlyBase are incorporated into these reports, which are specifically designed to be accessible to non-fly researchers in order to promote collaboration across model organism communities working in translational science. Another key component of disease model information in FlyBase is that data are collected in a consistent format – using the evolving Disease Ontology (an open-source standardized ontology for human-disease-associated biomedical data) – to allow robust and intuitive searches. To facilitate this, FlyBase has developed a dedicated tool for querying and navigating relevant data, which include mutations that model a disease and any associated interacting modifiers. In this article, we describe how data related to fly models of human disease are presented in individual Gene Reports and in the Human Disease Model Reports. Finally, we discuss search strategies and new query tools that are available to access the disease model data in FlyBase. PMID:26935103
An Integrative Model of "Information Visibility" and "Information Seeking" on the Web
ERIC Educational Resources Information Center
Mansourian, Yazdan; Ford, Nigel; Webber, Sheila; Madden, Andrew
2008-01-01
Purpose: This paper aims to encapsulate the main procedure and key findings of a qualitative research on end-users' interactions with web-based search tools in order to demonstrate how the concept of "information visibility" emerged and how an integrative model of information visibility and information seeking on the web was constructed.…
Meta-Modeling: A Knowledge-Based Approach to Facilitating Model Construction and Reuse
NASA Technical Reports Server (NTRS)
Keller, Richard M.; Dungan, Jennifer L.
1997-01-01
In this paper, we introduce a new modeling approach called meta-modeling and illustrate its practical applicability to the construction of physically-based ecosystem process models. As a critical adjunct to modeling codes, meta-modeling requires explicit specification of certain background information related to the construction and conceptual underpinnings of a model. This information formalizes the heretofore tacit relationship between the mathematical modeling code and the underlying real-world phenomena being investigated, and gives insight into the process by which the model was constructed. We show how the explicit availability of such information can make models more understandable and reusable and less subject to misinterpretation. In particular, background information enables potential users to better interpret an implemented ecosystem model without direct assistance from the model author. Additionally, we show how the discipline involved in specifying background information leads to improved management of model complexity and fewer implementation errors. We illustrate the meta-modeling approach in the context of the Scientists' Intelligent Graphical Modeling Assistant (SIGMA), a new model construction environment. As the user constructs a model using SIGMA, the system adds appropriate background information that ties the executable model to the underlying physical phenomena under investigation. Not only does this information improve the understandability of the final model, it also serves to reduce the overall time and programming expertise necessary to initially build and subsequently modify models. Furthermore, SIGMA's use of background knowledge helps eliminate coding errors resulting from scientific and dimensional inconsistencies that are otherwise difficult to avoid when building complex models. As a demonstration of SIGMA's utility, the system was used to reimplement and extend a well-known forest ecosystem dynamics model: Forest-BGC.
Influenza forecasting with Google Flu Trends.
Dugas, Andrea Freyer; Jalalpour, Mehdi; Gel, Yulia; Levin, Scott; Torcaso, Fred; Igusa, Takeru; Rothman, Richard E
2013-01-01
We developed a practical influenza forecast model based on real-time, geographically focused, and easy to access data, designed to provide individual medical centers with advanced warning of the expected number of influenza cases, thus allowing for sufficient time to implement interventions. Secondly, we evaluated the effects of incorporating a real-time influenza surveillance system, Google Flu Trends, and meteorological and temporal information on forecast accuracy. Forecast models designed to predict one week in advance were developed from weekly counts of confirmed influenza cases over seven seasons (2004-2011) divided into seven training and out-of-sample verification sets. Forecasting procedures using classical Box-Jenkins, generalized linear models (GLM), and generalized linear autoregressive moving average (GARMA) methods were employed to develop the final model and assess the relative contribution of external variables such as Google Flu Trends, meteorological data, and temporal information. A GARMA(3,0) forecast model with Negative Binomial distribution integrating Google Flu Trends information provided the most accurate influenza case predictions. The model, on the average, predicts weekly influenza cases during 7 out-of-sample outbreaks within 7 cases for 83% of estimates. Google Flu Trends data was the only source of external information to provide statistically significant forecast improvements over the base model in four of the seven out-of-sample verification sets. Overall, the p-value of adding this external information to the model is 0.0005. The other exogenous variables did not yield a statistically significant improvement in any of the verification sets. Integer-valued autoregression of influenza cases provides a strong base forecast model, which is enhanced by the addition of Google Flu Trends, confirming the predictive capabilities of search-query-based syndromic surveillance. This accessible and flexible forecast model can be used by individual medical centers to provide advanced warning of future influenza cases.
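As a simplified stand-in for the GARMA machinery described above, the sketch below fits a negative-binomial GLM that predicts this week's cases from three lagged case counts plus a Google-Flu-Trends-style covariate; all data are synthetic and the lag structure is illustrative.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
gft = rng.gamma(2.0, 5.0, size=120)                    # surrogate search-query signal
cases = np.rint(gft + rng.normal(0, 2, 120)).clip(min=0)

y = cases[3:]                                          # current week's counts
X = sm.add_constant(np.column_stack(
    [cases[2:-1], cases[1:-2], cases[:-3], gft[3:]]))  # three lags + GFT covariate

fit = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
print(fit.params)       # forecast next week with fit.predict on the newest lags
```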
Artificial retina model for the retinally blind based on wavelet transform
NASA Astrophysics Data System (ADS)
Zeng, Yan-an; Song, Xin-qiang; Jiang, Fa-gang; Chang, Da-ding
2007-01-01
The artificial retina is aimed at the stimulation of remaining retinal neurons in patients with degenerated photoreceptors. Microelectrode arrays have been developed for this as part of the stimulator. Designing such microelectrode arrays first requires a suitable mathematical method for human retinal information processing. In this paper, a flexible and adjustable human visual information extraction model is presented, based on the wavelet transform. Given the flexibility of the wavelet transform for image information processing and its consistency with human visual information extraction, wavelet transform theory is applied to the artificial retina model for the retinally blind. The response of the model to a synthetic image is shown. The simulation experiment demonstrates that the model behaves in a manner qualitatively similar to biological retinas and thus may serve as a basis for the development of an artificial retina.
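To make the wavelet step concrete, here is a minimal sketch using the PyWavelets package (assumed available): one 2-D DWT level splits an image into a coarse approximation and three detail bands, and thresholding the details gives a crude edge code, loosely analogous to the retinal information extraction described.

```python
import numpy as np
import pywt

image = np.random.rand(64, 64)                  # stand-in for an input scene
approx, (horiz, vert, diag) = pywt.dwt2(image, "haar")

# Keep only strong detail coefficients (a crude edge/saliency code) and rebuild.
thr = 0.1
details = tuple(np.where(np.abs(d) > thr, d, 0.0) for d in (horiz, vert, diag))
reconstructed = pywt.idwt2((approx, details), "haar")
print(reconstructed.shape)                      # (64, 64)
```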
Li, Shasha; Nie, Hongchao; Lu, Xudong; Duan, Huilong
2015-02-01
Integration of heterogeneous systems is the key to hospital information construction due to the complexity of the healthcare environment. Currently, during healthcare information system integration, the people participating in an integration project usually communicate through free-format documents, which impairs the efficiency and adaptability of integration. This paper proposes a method utilizing Business Process Model and Notation (BPMN) to model integration requirements and automatically transform them into executable integration configurations. Based on the method, a tool was developed to model integration requirements and transform them into integration configurations. In addition, an integration case in a radiology scenario was used to verify the method.
Characterizing super-spreading in microblog: An epidemic-based information propagation model
NASA Astrophysics Data System (ADS)
Liu, Yu; Wang, Bai; Wu, Bin; Shang, Suiming; Zhang, Yunlei; Shi, Chuan
2016-12-01
As microblogging services become ever more prosperous in the everyday life of users of Online Social Networks (OSNs), it is easier than ever for hot topics and breaking news to gain wide attention very quickly; these are the so-called "super-spreading events". In the information diffusion process of these super-spreading events, messages are passed on from one user to another and numerous individuals are influenced by a relatively small portion of users, a.k.a. super-spreaders. Awareness of super-spreading phenomena and an understanding of the patterns of wide-ranging information propagation benefit several social media data mining tasks, such as hot topic detection, prediction of information propagation, and harmful information monitoring and intervention. Given that super-spreading in information diffusion and in the spread of a contagious disease are analogous, in this study we build a parameterized model, the SAIR model, based on well-known epidemic models, to characterize the super-spreading phenomenon in tweet information propagation in the presence of super-spreaders. For the purpose of modeling information diffusion, empirical observations on a real-world Weibo dataset are statistically carried out. Both a steady-state analysis of the equilibrium and a validation on the real-world Weibo dataset of the proposed model are conducted. The case study that validates the proposed model shows that the SAIR model is much more promising than the conventional SIR model in characterizing a super-spreading event of information propagation. In addition, numerical simulations are carried out and discussed to discover how sensitively the parameters affect the information propagation process.
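The abstract does not spell out the SAIR compartments, so the sketch below is a generic four-compartment epidemic-style simulation under assumed semantics: S (susceptible), A (super-spreaders), I (ordinary spreaders), R (inactive), with super-spreaders transmitting much faster; all rates are invented.

```python
import numpy as np
from scipy.integrate import odeint

def sair(y, t, beta_a=1.5, beta_i=0.3, p=0.05, gamma_a=0.4, gamma_i=0.2):
    S, A, I, R = y
    new_inf = (beta_a * A + beta_i * I) * S   # super-spreaders transmit faster
    dA = p * new_inf - gamma_a * A            # a small fraction become super-spreaders
    dI = (1 - p) * new_inf - gamma_i * I
    dR = gamma_a * A + gamma_i * I
    return [-new_inf, dA, dI, dR]

t = np.linspace(0, 30, 301)
sol = odeint(sair, [0.99, 0.005, 0.005, 0.0], t)
print(sol[-1])    # final S, A, I, R fractions
```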
Information-Flow-Based Access Control for Web Browsers
NASA Astrophysics Data System (ADS)
Yoshihama, Sachiko; Tateishi, Takaaki; Tabuchi, Naoshi; Matsumoto, Tsutomu
The emergence of Web 2.0 technologies such as Ajax and Mashup has revealed the weakness of the same-origin policy[1], the current de facto standard for the Web browser security model. We propose a new browser security model to allow fine-grained access control in the client-side Web applications for secure mashup and user-generated contents. We propose a browser security model that is based on information-flow-based access control (IBAC) to overcome the dynamic nature of the client-side Web applications and to accurately determine the privilege of scripts in the event-driven programming model.
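The core bookkeeping behind information-flow-based access control can be sketched generically (this is not the authors' browser model): each value carries a label recording the origins that influenced it, labels join when values are combined, and a sink accepts data only if its policy covers the label.

```python
# Minimal IBAC-style sketch: labels are sets of origins; combining values joins
# their labels; sending to a sink is allowed only if the sink's policy covers
# every origin that influenced the data. All origins here are invented.

def join(label_a, label_b):
    return label_a | label_b

def can_send(label, sink_policy):
    return label <= sink_policy   # all influencing origins must be permitted

secret = frozenset({"bank.example"})        # value derived from bank.example
public = frozenset({"mashup.example"})
mixed = join(secret, public)                # a script combined both values

print(can_send(mixed, {"mashup.example"}))                  # False: would leak
print(can_send(mixed, {"mashup.example", "bank.example"}))  # True
```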
Trust-based information system architecture for personal wellness.
Ruotsalainen, Pekka; Nykänen, Pirkko; Seppälä, Antto; Blobel, Bernd
2014-01-01
Modern eHealth, ubiquitous health and personal wellness systems operate in an unsecured and ubiquitous information space where no predefined trust exists. This paper presents a novel information model and an architecture for trust-based privacy management of personal health and wellness information in a ubiquitous environment. The architecture enables a person to calculate a dynamic and context-aware trust value for each service provider, and to use it to design personal privacy policies for trustworthy use of health and wellness services. For trust calculation, a novel set of measurable, context-aware and health-information-sensitive attributes is developed. The architecture enables a person to manage his or her privacy in a ubiquitous environment by formulating context-aware and service-provider-specific policies. Focus groups and information modelling were used to develop a wellness information model. A system analysis method based on sequential steps, which makes it possible to combine the results of the analysis of privacy and trust concerns with the selection of trust and privacy services, was used for the development of the information system architecture. Its services (e.g. trust calculation, decision support, policy management and policy binding services) and the developed attributes enable a person to define situation-aware policies that regulate the way his or her wellness and health information is processed.
Multilingual Medical Data Models in ODM Format
Breil, B.; Kenneweg, J.; Fritz, F.; Bruland, P.; Doods, D.; Trinczek, B.; Dugas, M.
2012-01-01
Background Semantic interoperability between routine healthcare and clinical research is an unsolved issue, as information systems in the healthcare domain still use proprietary and site-specific data models. However, information exchange and data harmonization are essential for physicians and scientists if they want to collect and analyze data from different hospitals in order to build up registries and perform multicenter clinical trials. Consequently, there is a need for a standardized metadata exchange based on common data models. Currently this is mainly done by informatics experts instead of medical experts. Objectives We propose to enable physicians to exchange, rate, comment and discuss their own medical data models in a collaborative web-based repository of medical forms in a standardized format. Methods Based on a comprehensive requirement analysis, a web-based portal for medical data models was specified. In this context, a data model is the technical specification (attributes, data types, value lists) of a medical form without any layout information. The CDISC Operational Data Model (ODM) was chosen as the appropriate format for the standardized representation of data models. The system was implemented with Ruby on Rails and applies web 2.0 technologies to provide a community based solution. Forms from different source systems – both routine care and clinical research – were converted into ODM format and uploaded into the portal. Results A portal for medical data models based on ODM-files was implemented (http://www.medical-data-models.org). Physicians are able to upload, comment, rate and download medical data models. More than 250 forms with approximately 8000 items are provided in different views (overview and detailed presentation) and in multiple languages. For instance, the portal contains forms from clinical and research information systems. Conclusion The portal provides a system-independent repository for multilingual data models in ODM format which can be used by physicians. It serves as a platform for discussion and enables the exchange of multilingual medical data models in a standardized way. PMID:23620720
Study of Collaborative Management for Transportation Construction Project Based on BIM Technology
NASA Astrophysics Data System (ADS)
Jianhua, Liu; Genchuan, Luo; Daiquan, Liu; Wenlei, Li; Bowen, Feng
2018-03-01
Building Information Modeling (BIM) is a building modeling technology based on the relevant information and data of a construction project. It is an advanced technology and management concept widely used across the whole life cycle of planning, design, construction and operation. Based on BIM technology, collaborative management of transportation construction projects enables better communication through realistic simulation and architectural visualization, and yields basic, real-time information such as project schedule, engineering quality, cost and environmental impact. The main services of highway construction management are integrated on a unified BIM platform for collaborative management, realizing information intercommunication and exchange, ending the isolation of information that prevailed in the past, and improving the level of information management. The final BIM model is integrated not only for project information management and the integration of preliminary documents and design drawings, but also for the automatic generation of completion data and final accounts; it covers the whole life cycle of traffic construction projects and lays a good foundation for smart highway construction.
DDDAMS-based Urban Surveillance and Crowd Control via UAVs and UGVs
2015-12-04
for crowd dynamics modeling by incorporating multi-resolution data, where a grid-based method is used to model crowd motion with UAVs' low-resolution ... information and more computationally intensive (and time-consuming). Given that the deployment of fidelity selection results in simulation faces computational ... [Truncated report fragment; a table of UAV and UGV detection parameters (fields of view and detection ranges for low- and high-fidelity information) has been omitted.]
Bouaud, Jacques; Guézennec, Gilles; Séroussi, Brigitte
2018-01-01
The integration of clinical information models and termino-ontological models into a unique ontological framework is highly desirable for it facilitates data integration and management using the same formal mechanisms for both data concepts and information model components. This is particularly true for knowledge-based decision support tools that aim to take advantage of all facets of semantic web technologies in merging ontological reasoning, concept classification, and rule-based inferences. We present an ontology template that combines generic data model components with (parts of) existing termino-ontological resources. The approach is developed for the guideline-based decision support module on breast cancer management within the DESIREE European project. The approach is based on the entity attribute value model and could be extended to other domains.
Model-assisted estimation of forest resources with generalized additive models
Jean D. Opsomer; F. Jay Breidt; Gretchen G. Moisen; Goran Kauermann
2007-01-01
Multiphase surveys are often conducted in forest inventories, with the goal of estimating forested area and tree characteristics over large regions. This article describes how design-based estimation of such quantities, based on information gathered during ground visits of sampled plots, can be made more precise by incorporating auxiliary information available from...
ERIC Educational Resources Information Center
Champagne, Tiffany
2013-01-01
The purpose of this dissertation research was to critically examine the development of community-based health information exchanges (HIEs) and to comparatively analyze the various models of exchanges in operation today nationally. Specifically this research sought to better understand several aspects of HIE: policy influences, organizational…
Munteanu, Cristian R; Gonzalez-Diaz, Humberto; Garcia, Rafael; Loza, Mabel; Pazos, Alejandro
2015-01-01
Encoding molecular information into molecular descriptors is the first step in in silico chemoinformatics methods in drug design. Machine learning methods are a complex solution for finding prediction models for specific biological properties of molecules. These models connect molecular structure information, such as atom connectivity (molecular graphs) or physical-chemical properties of an atom/group of atoms, to the molecular activity (Quantitative Structure-Activity Relationship, QSAR). Due to the complexity of proteins, the prediction of their activity is a complicated task and the interpretation of the models is more difficult. The current review presents a series of 11 prediction models for proteins, implemented as free Web tools on an Artificial Intelligence Model Server in Biosciences, Bio-AIMS (http://bio-aims.udc.es/TargetPred.php). Six tools predict protein activity, two models evaluate drug-protein target interactions and the other three calculate protein-protein interactions. The input information is based on the protein 3D structure for nine models, 1D peptide amino acid sequence for three tools and drug SMILES formulas for two servers. The molecular graph descriptor-based machine learning models could be useful tools for in silico screening of new peptides/proteins as future drug targets for specific treatments.
NASA Technical Reports Server (NTRS)
Maluf, David A. (Inventor); Bell, David G. (Inventor); Gurram, Mohana M. (Inventor); Gawdiak, Yuri O. (Inventor)
2009-01-01
A system for managing a project that includes multiple tasks and a plurality of workers. Input information includes characterizations based upon a human model, a team model and a product model. Periodic reports, such as a monthly report, a task plan report, a budget report and a risk management report, are generated and made available for display or further analysis. An extensible database allows searching for information based upon context and upon content.
Popularity Modeling for Mobile Apps: A Sequential Approach.
Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong
2015-07-01
The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enable better mobile App services. While the importance of popularity information is well recognized in the literature, its use for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on a hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity-based HMM (PHMM) to model the sequences of heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite-based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and is applicable to various mobile App services, such as trend-based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
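The PHMM itself is not distributed with the paper; the sketch below shows only the core machinery such a model rests on: a discrete hidden Markov model scored with the forward algorithm over binned popularity observations. All states, bins, and probability values are illustrative assumptions.

```python
import numpy as np

# Discrete 2-state HMM over binned popularity observations; the forward
# algorithm scores an observation sequence. All parameters are illustrative
# stand-ins for a learned PHMM.
A = np.array([[0.8, 0.2],       # transitions between latent states,
              [0.3, 0.7]])      # e.g. "rising" vs. "fading" popularity
B = np.array([[0.6, 0.3, 0.1],  # emissions over 3 popularity bins,
              [0.1, 0.3, 0.6]]) # e.g. top-10 / top-100 / unranked
pi = np.array([0.5, 0.5])       # initial state distribution

def log_likelihood(obs):
    """Log-probability of an observation sequence (scaled forward pass)."""
    alpha = pi * B[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
        s = alpha.sum()          # rescale to avoid numerical underflow
        log_p += np.log(s)
        alpha /= s
    return log_p

print(log_likelihood([0, 0, 1, 2, 2]))  # a rise-then-fade trajectory
```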
Queues with Choice via Delay Differential Equations
NASA Astrophysics Data System (ADS)
Pender, Jamol; Rand, Richard H.; Wesson, Elizabeth
Delay or queue length information has the potential to influence the decision of a customer to join a queue. Thus, it is imperative for managers of queueing systems to understand how the information that they provide will affect the performance of the system. To this end, we construct and analyze two two-dimensional deterministic fluid models that incorporate customer choice behavior based on delayed queue length information. In the first fluid model, customers join each queue according to a Multinomial Logit Model, however, the queue length information the customer receives is delayed by a constant Δ. We show that the delay can cause oscillations or asynchronous behavior in the model based on the value of Δ. In the second model, customers receive information about the queue length through a moving average of the queue length. Although it has been shown empirically that giving patients moving average information causes oscillations and asynchronous behavior to occur in U.S. hospitals, we analytically and mathematically show for the first time that the moving average fluid model can exhibit oscillations and determine their dependence on the moving average window. Thus, our analysis provides new insight on how operators of service systems should report queue length information to customers and how delayed information can produce unwanted system dynamics.
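The authors' exact fluid equations are not reproduced in the abstract; the following is a minimal Euler-method sketch in the spirit of the first model: two fluid queues whose arrivals split by a Multinomial Logit rule applied to queue lengths delayed by a constant Δ. The rates, sensitivity, and delay values are assumptions.

```python
import numpy as np

# Euler simulation of a two-queue fluid model with Multinomial Logit choice
# on Delta-delayed queue lengths. Equations and parameters are assumptions
# in the spirit of the abstract, not the authors' exact system.
lam, mu, theta = 10.0, 1.0, 1.0    # arrival rate, service rate, choice sensitivity
Delta, dt, T = 2.0, 0.01, 100.0
d = int(Delta / dt)                # delay expressed in time steps
n = int(T / dt)

q = np.zeros((n + d, 2))
q[:d] = [5.0, 4.0]                 # constant history on [-Delta, 0)
for t in range(d, n + d - 1):
    w = np.exp(-theta * q[t - d])  # MNL weights on the delayed lengths
    p = w / w.sum()
    q[t + 1] = q[t] + dt * (lam * p - mu * q[t])

# A large enough Delta sustains oscillations between the two queues;
# a small Delta lets the system damp to the balanced equilibrium.
print(q[-5:])
```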
ERIC Educational Resources Information Center
Wiio, Osmo A.
A more unified approach to communication theory can evolve through systems modeling of information theory, communication modes, and mass media operations. Such systematic analysis proposes, as is the case here, that information models be based upon combinations of energy changes and exchanges and changes in receiver systems. The mass media is…
Extended Graph-Based Models for Enhanced Similarity Search in Cavbase.
Krotzky, Timo; Fober, Thomas; Hüllermeier, Eyke; Klebe, Gerhard
2014-01-01
To calculate similarities between molecular structures, measures based on the maximum common subgraph are frequently applied. For the comparison of protein binding sites, these measures are not fully appropriate since graphs representing binding sites on a detailed atomic level tend to get very large. In combination with an NP-hard problem, a large graph leads to a computationally demanding task. Therefore, for the comparison of binding sites, a less detailed coarse graph model is used, building upon so-called pseudocenters. Consequently, structural data are lost since many atoms are discarded and no information about the shape of the binding site is considered. This is usually resolved by performing subsequent calculations based on additional information. These steps are usually quite expensive, making the whole approach very slow. The main drawback of a graph-based model solely based on pseudocenters, however, is the loss of information about the shape of the protein surface. In this study, we propose a novel and efficient modeling formalism that does not increase the size of the graph model compared to the original approach, but leads to graphs containing considerably more information assigned to the nodes. More specifically, additional descriptors considering surface characteristics are extracted from the local surface and attributed to the pseudocenters stored in Cavbase. These properties are evaluated as additional node labels, which lead to a gain of information and allow for much faster but still very accurate comparisons between different structures.
NASA Technical Reports Server (NTRS)
Tischer, A. E.
1987-01-01
The failure information propagation model (FIPM) data base was developed to store and manipulate the large amount of information anticipated for the various Space Shuttle Main Engine (SSME) FIPMs. The organization and structure of the FIPM data base is described, including a summary of the data fields and key attributes associated with each FIPM data file. The menu-driven software developed to facilitate and control the entry, modification, and listing of data base records is also discussed. The transfer of the FIPM data base and software to the NASA Marshall Space Flight Center is described. Complete listings of all of the data base definition commands and software procedures are included in the appendixes.
Pezzulo, Giovanni; Rigoli, Francesco; Chersi, Fabian
2013-01-01
Instrumental behavior depends on both goal-directed and habitual mechanisms of choice. Normative views cast these mechanisms in terms of model-free and model-based methods of reinforcement learning, respectively. An influential proposal hypothesizes that model-free and model-based mechanisms coexist and compete in the brain according to their relative uncertainty. In this paper we propose a novel view in which a single Mixed Instrumental Controller produces both goal-directed and habitual behavior by flexibly balancing and combining model-based and model-free computations. The Mixed Instrumental Controller performs a cost-benefits analysis to decide whether to chose an action immediately based on the available "cached" value of actions (linked to model-free mechanisms) or to improve value estimation by mentally simulating the expected outcome values (linked to model-based mechanisms). Since mental simulation entails cognitive effort and increases the reward delay, it is activated only when the associated "Value of Information" exceeds its costs. The model proposes a method to compute the Value of Information, based on the uncertainty of action values and on the distance of alternative cached action values. Overall, the model by default chooses on the basis of lighter model-free estimates, and integrates them with costly model-based predictions only when useful. Mental simulation uses a sampling method to produce reward expectancies, which are used to update the cached value of one or more actions; in turn, this updated value is used for the choice. The key predictions of the model are tested in different settings of a double T-maze scenario. Results are discussed in relation with neurobiological evidence on the hippocampus - ventral striatum circuit in rodents, which has been linked to goal-directed spatial navigation.
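As a rough illustration of the controller's cost-benefit gate, the sketch below chooses between a cheap cached-value decision and a costly simulation-based one. The Value of Information formula here is a simplified stand-in (mean uncertainty scaled by how close the cached values are), not the authors' exact computation, and all numbers are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Cached (model-free) action values and their uncertainties (std devs).
cached_mean = np.array([0.50, 0.55])
cached_std  = np.array([0.30, 0.25])
SIM_COST = 0.05   # effort/delay cost of mental simulation (assumed)

def value_of_information(mean, std):
    # Simplified stand-in: VoI grows with value uncertainty and with how
    # close the cached values are (near-ties are worth simulating).
    return std.mean() / (1.0 + abs(mean[0] - mean[1]))

def choose(world_model):
    if value_of_information(cached_mean, cached_std) > SIM_COST:
        # Model-based branch: sample simulated outcomes to refine values.
        sims = np.array([[world_model(a) for _ in range(50)] for a in (0, 1)])
        refined = sims.mean(axis=1)
        return int(refined.argmax()), "model-based"
    return int(cached_mean.argmax()), "model-free"

# Toy generative model of outcomes for each action (an assumption).
print(choose(lambda a: rng.normal(0.4 + 0.3 * a, 0.1)))
```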
Psychopathy-related traits and the use of reward and social information: a computational approach
Brazil, Inti A.; Hunt, Laurence T.; Bulten, Berend H.; Kessels, Roy P. C.; de Bruijn, Ellen R. A.; Mars, Rogier B.
2013-01-01
Psychopathy is often linked to disturbed reinforcement-guided adaptation of behavior in both clinical and non-clinical populations. Recent work suggests that these disturbances might be due to a deficit in actively using information to guide changes in behavior. However, how much information is actually used to guide behavior is difficult to observe directly. Therefore, we used a computational model to estimate the use of information during learning. Thirty-six female subjects were recruited based on their total scores on the Psychopathic Personality Inventory (PPI), a self-report psychopathy list, and performed a task involving simultaneous learning of reward-based and social information. A Bayesian reinforcement-learning model was used to parameterize the use of each source of information during learning. Subsequently, we used the subscales of the PPI to assess psychopathy-related traits, and the traits that were strongly related to the model's parameters were isolated through a formal variable selection procedure. Finally, we assessed how these covaried with model parameters. We succeeded in isolating key personality traits believed to be relevant for psychopathy that can be related to model-based descriptions of subject behavior. Use of reward-history information was negatively related to levels of trait anxiety and fearlessness, whereas use of social advice decreased as the perceived ability to manipulate others and lack of anxiety increased. These results corroborate previous findings suggesting that sub-optimal use of different types of information might be implicated in psychopathy. They also further highlight the importance of considering the potential of computational modeling to understand the role of latent variables, such as the weight people give to various sources of information during goal-directed behavior, when conducting research on psychopathy-related traits and in the field of forensic psychiatry. PMID:24391615
Intelligent Context-Aware and Adaptive Interface for Mobile LBS
Liu, Yanhong
2015-01-01
Context-aware user interfaces play an important role in many human-computer interaction tasks of location based services. Although spatial models for context-aware systems have been studied extensively, how to locate specific spatial information for users is still not well resolved, which matters in the mobile environment, where users of location based services are impeded by device limitations. Better context-aware human-computer interaction models of mobile location based services are needed not just to predict performance outcomes, such as whether people will be able to find the information needed to complete a human-computer interaction task, but also to understand the human processes involved in spatial queries, which will in turn inform the detailed design of better user interfaces in mobile location based services. In this study, a context-aware adaptive model for mobile location based services interfaces is proposed, which contains three major sections: purpose, adjustment, and adaptation. Based on this model we describe the process of user operation and interface adaptation through the dynamic interaction between users and the interface. We then show how the model handles users' demands in a complicated environment and suggest its feasibility through experimental results. PMID:26457077
Liver segmentation from CT images using a sparse priori statistical shape model (SP-SSM).
Wang, Xuehu; Zheng, Yongchang; Gan, Lan; Wang, Xuan; Sang, Xinting; Kong, Xiangfeng; Zhao, Jie
2017-01-01
This study proposes a new liver segmentation method based on a sparse a priori statistical shape model (SP-SSM). First, mark points are selected in the liver a priori model and the original image. Then, the a priori shape and its mark points are used to obtain a dictionary for the liver boundary information. Second, the sparse coefficient is calculated based on the correspondence between mark points in the original image and those in the a priori model, and then the sparse statistical model is established by combining the sparse coefficients and the dictionary. Finally, the intensity energy and boundary energy models are built based on the intensity information and the specific boundary information of the original image. Then, the sparse matching constraint model is established based on the sparse coding theory. These models jointly drive the iterative deformation of the sparse statistical model to approximate and accurately extract the liver boundaries. This method can solve the problems of deformation model initialization and a priori method accuracy using the sparse dictionary. The SP-SSM can achieve a mean overlap error of 4.8% and a mean volume difference of 1.8%, whereas the average symmetric surface distance and the root mean square symmetric surface distance can reach 0.8 mm and 1.4 mm, respectively.
[Modeling in value-based medicine].
Neubauer, A S; Hirneiss, C; Kampik, A
2010-03-01
Modeling plays an important role in value-based medicine (VBM). It allows decision support by predicting potential clinical and economic consequences, frequently combining different sources of evidence. Based on relevant publications and examples focusing on ophthalmology, the key economic modeling methods are explained and definitions are given. The most frequently applied model types are decision trees, Markov models, and discrete event simulation (DES) models. Model validation includes, besides verifying internal validity, comparison with other models (external validity) and ideally validation of a model's predictive properties. The uncertainty inherent in any modeling should be clearly stated. This is true for economic modeling in VBM as well as when using disease risk models to support clinical decisions. In economic modeling, uni- and multivariate sensitivity analyses are usually applied; the key concepts here are tornado plots and cost-effectiveness acceptability curves. Given the existing uncertainty, modeling helps to make better informed decisions than would be possible without this additional information.
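For readers unfamiliar with the model types named above, here is a minimal Markov cohort model of the kind used in VBM: a cohort moves between health states each annual cycle, accumulating discounted costs and quality-adjusted life years. All states, transition probabilities, costs, and utilities are purely illustrative.

```python
import numpy as np

# Three-state Markov cohort model (well -> sick -> dead), annual cycles.
# Transition probabilities, costs and utilities are illustrative only.
P = np.array([[0.90, 0.08, 0.02],
              [0.00, 0.85, 0.15],
              [0.00, 0.00, 1.00]])
cost    = np.array([100.0, 2000.0, 0.0])   # cost per year in each state
utility = np.array([0.95, 0.60, 0.0])      # QALY weight per state
discount = 0.03

cohort = np.array([1.0, 0.0, 0.0])         # everyone starts in "well"
total_cost = total_qaly = 0.0
for year in range(30):
    df = 1.0 / (1.0 + discount) ** year    # discount factor for this cycle
    total_cost += df * cohort @ cost
    total_qaly += df * cohort @ utility
    cohort = cohort @ P                    # advance the cohort one cycle

print(f"discounted cost {total_cost:.0f}, QALYs {total_qaly:.2f}")
```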
NASA Astrophysics Data System (ADS)
Juszczyk, Michał
2018-04-01
This paper reports some results of studies on the use of artificial intelligence tools for cost estimation based on building information models. The problem of macro-level cost estimation based on building information models, supported by ensembles of artificial neural networks, is concisely discussed. In the course of the research, a regression model was built for the cost estimation of buildings' floor structural frames, as higher-level elements. Building information models are intended to serve as a repository of data used for cost estimation. The core of the model is an ensemble of neural networks. The developed model allows the prediction of cost estimates with satisfactory accuracy.
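Neither the BIM data nor the trained networks are public, so the sketch below only illustrates the ensemble idea under assumed toy features (floor area, span count, and storey height standing in for BIM-derived quantities): several small neural regressors are trained from different random initializations and their predictions averaged.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Toy stand-ins for BIM-derived quantities of a floor structural frame:
# [floor area m^2, number of spans, storey height m]; target = frame cost.
X = rng.uniform([200, 2, 2.8], [2000, 10, 4.5], size=(300, 3))
y = 150 * X[:, 0] + 8000 * X[:, 1] + rng.normal(0, 5000, 300)

# Ensemble of small networks trained from different random starts;
# the committee prediction is the average of the members' outputs.
nets = [make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=s)).fit(X, y)
        for s in range(5)]

def predict(x):
    return np.mean([net.predict(x) for net in nets], axis=0)

print(predict(np.array([[1000.0, 6, 3.5]])))
```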
A User-Centric Knowledge Creation Model in a Web of Object-Enabled Internet of Things Environment
Kibria, Muhammad Golam; Fattah, Sheik Mohammad Mostakim; Jeong, Kwanghyeon; Chong, Ilyoung; Jeong, Youn-Kwae
2015-01-01
User-centric service features in a Web of Object-enabled Internet of Things environment can be provided by using a semantic ontology that classifies and integrates objects on the World Wide Web as well as shares and merges context-aware information and accumulated knowledge. The semantic ontology is applied on a Web of Object platform to virtualize the real world physical devices and information to form virtual objects that represent the features and capabilities of devices in the virtual world. Detailed information and functionalities of multiple virtual objects are combined with service rules to form composite virtual objects that offer context-aware knowledge-based services, where context awareness plays an important role in enabling automatic modification of the system to reconfigure the services based on the context. Converting the raw data into meaningful information and connecting the information to form the knowledge and storing and reusing the objects in the knowledge base can both be expressed by semantic ontology. In this paper, a knowledge creation model that synchronizes a service logistic model and a virtual world knowledge model on a Web of Object platform has been proposed. To realize the context-aware knowledge-based service creation and execution, a conceptual semantic ontology model has been developed and a prototype has been implemented for a use case scenario of emergency service. PMID:26393609
Model-theoretic framework for sensor data fusion
NASA Astrophysics Data System (ADS)
Zavoleas, Kyriakos P.; Kokar, Mieczyslaw M.
1993-09-01
The main goal of our research in sensory data fusion (SDF) is the development of a systematic approach (a methodology) to designing systems for interpreting sensory information and for reasoning about the situation based upon this information and upon available data bases and knowledge bases. To achieve such a goal, two kinds of subgoals have been set: (1) develop a theoretical framework in which rational design/implementation decisions can be made, and (2) design a prototype SDF system along the lines of the framework. Our initial design of the framework has been described in our previous papers. In this paper we concentrate on the model-theoretic aspects of this framework. We postulate that data are embedded in data models, and information processing mechanisms are embedded in model operators. The paper is devoted to analyzing the classes of model operators and their significance in SDF. We investigate transformation, abstraction, and fusion operators. A prototype SDF system, fusing data from range and intensity sensors, is presented, exemplifying the structures introduced. Our framework is justified by the fact that it provides modularity, traceability of information flow, and a basis for a specification language for SDF.
What are the Starting Points? Evaluating Base-Year Assumptions in the Asian Modeling Exercise
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chaturvedi, Vaibhav; Waldhoff, Stephanie; Clarke, Leon E.
2012-12-01
A common feature of model inter-comparison efforts is that the base year numbers for important parameters such as population and GDP can differ substantially across models. This paper explores the sources and implications of this variation in Asian countries across the models participating in the Asian Modeling Exercise (AME). Because the models do not all have a common base year, each team was required to provide data for 2005 for comparison purposes. This paper compares the year 2005 information for different models, noting the degree of variation in important parameters, including population, GDP, primary energy, electricity, and CO2 emissions. It then explores the difference in these key parameters across different sources of base-year information. The analysis confirms that the sources provide different values for many key parameters. This variation across data sources and additional reasons why models might provide different base-year numbers, including differences in regional definitions, differences in model base year, and differences in GDP transformation methodologies, are then discussed in the context of the AME scenarios. Finally, the paper explores the implications of base-year variation on long-term model results.
PDS4 - Some Principles for Agile Data Curation
NASA Astrophysics Data System (ADS)
Hughes, J. S.; Crichton, D. J.; Hardman, S. H.; Joyner, R.; Algermissen, S.; Padams, J.
2015-12-01
PDS4, a research data management and curation system for NASA's Planetary Science Archive, was developed using principles that promote the characteristics of agile development. The result is an efficient system that produces better research data products while using fewer resources (time, effort, and money) and maximizes their usefulness for current and future scientists. The key principle is architectural. The PDS4 information architecture is developed and maintained independent of the infrastructure's process, application and technology architectures. The information architecture is based on an ontology-based information model developed to leverage best practices from standard reference models for digital archives, digital object registries, and metadata registries, and to capture domain knowledge from a panel of planetary science domain experts. The information model provides a sharable, stable, and formal set of information requirements for the system and is the primary source for information to configure most system components, including the product registry, search engine, validation and display tools, and production pipelines. Multi-level governance is also provided for the effective management of the informational elements at the common, discipline, and project levels. This presentation will describe the development principles, components, and uses of the information model and how an information model-driven architecture exhibits characteristics of agile curation including early delivery, evolutionary development, adaptive planning, continuous improvement, and rapid and flexible response to change.
FlyBase portals to human disease research using Drosophila models.
Millburn, Gillian H; Crosby, Madeline A; Gramates, L Sian; Tweedie, Susan
2016-03-01
The use of Drosophila melanogaster as a model for studying human disease is well established, reflected by the steady increase in both the number and proportion of fly papers describing human disease models in recent years. In this article, we highlight recent efforts to improve the availability and accessibility of the disease model information in FlyBase (http://flybase.org), the model organism database for Drosophila. FlyBase has recently introduced Human Disease Model Reports, each of which presents background information on a specific disease, a tabulation of related disease subtypes, and summaries of experimental data and results using fruit flies. Integrated presentations of relevant data and reagents described in other sections of FlyBase are incorporated into these reports, which are specifically designed to be accessible to non-fly researchers in order to promote collaboration across model organism communities working in translational science. Another key component of disease model information in FlyBase is that data are collected in a consistent format, using the evolving Disease Ontology (an open-source standardized ontology for human-disease-associated biomedical data), to allow robust and intuitive searches. To facilitate this, FlyBase has developed a dedicated tool for querying and navigating relevant data, which include mutations that model a disease and any associated interacting modifiers. In this article, we describe how data related to fly models of human disease are presented in individual Gene Reports and in the Human Disease Model Reports. Finally, we discuss search strategies and new query tools that are available to access the disease model data in FlyBase.
Azadeh, Fereydoon; Ghasemi, Shahrzad
2016-01-01
The present research aims to study the information-seeking behavior of faculty members of Payame Noor University (PNU) in Mazandaran province of Iran by using Wilson's model of information seeking behavior. This is a survey study. Participants were 97 PNU faculty members in Mazandaran province. An information-seeking behavior inventory was employed to gather research data; it had 24 items based on a 5-point Likert scale. Collected data were analyzed in SPSS software. Results showed that the most important goal of faculty members was publishing a scientific paper, and their least important goal was updating technical information. We also found that they mostly use internet-based resources to meet their information needs. Accordingly, 57.7% of them find information resources via online search engines (e.g. Google, Yahoo). We also concluded that there was a significant relationship between faculty members' English language proficiency, academic rank, and work experience and their information-seeking behavior. PMID:27157151
Web information retrieval based on ontology
NASA Astrophysics Data System (ADS)
Zhang, Jian
2013-03-01
The purpose of Information Retrieval (IR) is to find a set of documents that are relevant to a specific information need of a user. The traditional information retrieval model commonly used in commercial search engines is based on keyword indexing and Boolean logic queries. One big drawback of traditional information retrieval is that it typically retrieves information without an explicitly defined domain of interest to the user, so that a lot of irrelevant information is returned, burdening the user with picking useful answers out of irrelevant results. In order to tackle this issue, many semantic web information retrieval models have been proposed recently. The main advantage of the Semantic Web is to enhance search mechanisms with the use of ontology mechanisms. In this paper, we present our approach to personalizing a web search engine based on ontology. In addition, key techniques are also discussed in our paper. Compared to previous research, our work concentrates on semantic similarity and the whole process, including query submission and information annotation.
Knowledge-Based Information Retrieval.
ERIC Educational Resources Information Center
Ford, Nigel
1991-01-01
Discussion of information retrieval focuses on theoretical and empirical advances in knowledge-based information retrieval. Topics discussed include the use of natural language for queries; the use of expert systems; intelligent tutoring systems; user modeling; the need for evaluation of system effectiveness; and examples of systems, including…
Modeling the Information Age Combat Model: An Agent-Based Simulation of Network Centric Operations
NASA Technical Reports Server (NTRS)
Deller, Sean; Rabadi, Ghaith A.; Bell, Michael I.; Bowling, Shannon R.; Tolk, Andreas
2010-01-01
The Information Age Combat Model (IACM) was introduced by Cares in 2005 to contribute to the development of an understanding of the influence of connectivity on force effectiveness that can eventually lead to quantitative prediction and guidelines for design and employment. The structure of the IACM makes it clear that the Perron-Frobenius Eigenvalue is a quantifiable metric with which to measure the organization of a networked force. The results of recent experiments presented in Deller et al. (2009) indicate that the value of the Perron-Frobenius Eigenvalue is a significant measurement of the performance of an Information Age combat force. This was accomplished through the innovative use of an agent-based simulation to model the IACM and represents an initial contribution towards a new generation of combat models that are net-centric instead of using the current platform-centric approach. This paper describes the intent, challenges, design, and initial results of this agent-based simulation model.
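The metric itself is straightforward to compute: the Perron-Frobenius eigenvalue is the spectral radius of the combat network's adjacency matrix. A minimal sketch with an illustrative four-node sensor/decider/influencer/target cycle (the network structure is an assumption, not taken from the paper):

```python
import numpy as np

# Adjacency matrix of a toy IACM-style combat cycle:
# sensor -> decider -> influencer -> target -> sensor.
A = np.array([[0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [1, 0, 0, 0]], dtype=float)

def pfe(adj):
    """Perron-Frobenius eigenvalue = spectral radius of the adjacency matrix."""
    return max(abs(np.linalg.eigvals(adj)))

print(pfe(A))      # a single closed combat cycle scores 1.0
A2 = A.copy()
A2[1, 3] = 1.0     # an additional link creates a second, shorter cycle
print(pfe(A2))     # denser connectivity -> larger eigenvalue
```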
Probabilistic neural networks modeling of the 48-h LC50 acute toxicity endpoint to Daphnia magna.
Niculescu, S P; Lewis, M A; Tigner, J
2008-01-01
Two modeling experiments based on the maximum likelihood estimation paradigm and targeting prediction of the Daphnia magna 48-h LC50 acute toxicity endpoint for both organic and inorganic compounds are reported. The resulting models' computational algorithms are implemented as basic probabilistic neural networks with a Gaussian kernel (statistical corrections included). The first experiment uses strictly D. magna information for 971 structures as training/learning data, and the resulting model targets practical applications. The second experiment uses the same training/learning information plus additional data on another 29 compounds whose endpoint information originates from D. pulex and Ceriodaphnia dubia. It only targets investigation of the effect of mixing strictly D. magna 48-h LC50 modeling information with small amounts of similar information estimated from related species, and this is done as part of the validation process. A complementary 81-compound dataset (involving only strictly D. magna information) is used to perform external testing. On this external test set, the Gaussian character of the distribution of the residuals is confirmed for both models. This allows the use of traditional statistical methodology to compute confidence intervals for the unknown measured values based on the models' predictions. Examples are provided for the model targeting practical applications. For the same model, a comparison with other existing models targeting the same endpoint is performed.
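For a continuous endpoint, a basic Gaussian-kernel network of this kind reduces to kernel-weighted averaging of the training endpoints (the paper's statistical corrections are omitted here). A minimal sketch, with random stand-ins for the molecular descriptors and LC50 values:

```python
import numpy as np

# Gaussian-kernel network for a continuous endpoint: the prediction is a
# kernel-weighted average of training endpoints (GRNN-style). Descriptors
# and endpoints below are random stand-ins for real molecular data.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(971, 5))     # descriptor vectors
y_train = rng.normal(size=971)          # log LC50 endpoints
sigma = 0.8                             # kernel bandwidth (tunable)

def predict(x):
    d2 = ((X_train - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * sigma ** 2))  # Gaussian kernel weights
    return (w @ y_train) / w.sum()

print(predict(rng.normal(size=5)))
```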
ERIC Educational Resources Information Center
King, D.; And Others
1994-01-01
Discusses the computational problems of automating paper-based spatial information. A new relational structure for soil science information based on the main conceptual concepts used during conventional cartographic work is proposed. This model is a computerized framework for coherent description of the geographical variability of soils, combined…
NASA Technical Reports Server (NTRS)
1982-01-01
Currently based on ground and aerial surveys, the land cover data base of the Pennsylvania Power and Light Company is routinely used for modelling the effects of alternative generating plant and transmission line sites on the local and regional environment. The development of a satellite-based geographic information system would facilitate both the preparation of environmental impact statements by power companies and assessment of the data by the Nuclear Regulatory Commission. A cooperative project is planned to demonstrate the methodology for integrating satellite data into an existing geographic information system, and to further evaluate the ability of satellite data to model environmental conditions that would be applied in the preparation and assessment of environmental impact statements.
ERIC Educational Resources Information Center
Stirling, Keith
2000-01-01
Describes a session on information retrieval systems that planned to discuss relevance measures with Web-based information retrieval; retrieval system performance and evaluation; probabilistic independence of index terms; vector-based models; metalanguages and digital objects; how users assess the reliability, timeliness and bias of information;…
Using the Weighted Keyword Model to Improve Information Retrieval for Answering Biomedical Questions
Yu, Hong; Cao, Yong-gang
2009-01-01
Physicians ask many complex questions during the patient encounter. Information retrieval systems that can provide immediate and relevant answers to these questions can be invaluable aids to the practice of evidence-based medicine. In this study, we first automatically identify topic keywords from ad hoc clinical questions with a Conditional Random Field model that is trained over thousands of manually annotated clinical questions. We then report on a linear model that assigns query weights based on their automatically identified semantic roles: topic keywords, domain specific terms, and their synonyms. Our evaluation shows that this weighted keyword model improves information retrieval from the Text Retrieval Conference Genomics track data. PMID:21347188
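The trained CRF and the learned weights are not published with the abstract; the sketch below illustrates only the linear weighting step, with assumed role weights and hypothetical query terms.

```python
# Linear query-weighting step: terms receive weights by semantic role
# (topic keyword > domain-specific term > synonym). The CRF that labels
# topic keywords and the learned weights are not public; the values and
# terms below are assumptions for illustration.
ROLE_WEIGHTS = {"topic": 3.0, "domain": 2.0, "synonym": 1.0}

def score(document_terms, weighted_query):
    """Accumulate role weights of query terms found in the document."""
    return sum(w for term, w in weighted_query if term in document_terms)

query = [("anticoagulation", ROLE_WEIGHTS["topic"]),
         ("atrial fibrillation", ROLE_WEIGHTS["domain"]),
         ("warfarin", ROLE_WEIGHTS["synonym"])]

doc = {"anticoagulation", "warfarin", "stroke"}
print(score(doc, query))  # 4.0: one topic hit plus one synonym hit
```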
NASA Astrophysics Data System (ADS)
Peng, Xiang; Zhang, Peng; Cai, Lilong
In this paper, we present a virtual-optical based information security system model with the aid of public-key-infrastructure (PKI) techniques. The proposed model employs a hybrid architecture in which our previously published encryption algorithm based on the virtual-optics imaging methodology (VOIM) can be used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied for enciphering and deciphering the session key(s). For an asymmetric system, given an encryption key, it is computationally infeasible to determine the decryption key and vice versa. The whole information security model runs under the framework of PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOIM security approach provides additional features like confidentiality, authentication, and integrity for data encryption in a networked environment.
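The VOIM cipher is the authors' own algorithm and is not publicly available, so the sketch below substitutes a standard symmetric cipher (Fernet) for the session-key encryption while keeping the hybrid architecture the abstract describes: bulk data under a session key, session key wrapped with RSA.

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Hybrid scheme mirroring the paper's architecture: bulk data is enciphered
# with a session key (Fernet stands in for the virtual-optics cipher), and
# the session key itself is wrapped with RSA-OAEP.
recipient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"sensitive payload")
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
wrapped_key = recipient_key.public_key().encrypt(session_key, oaep)

# Receiver side: unwrap the session key with the private key, then decrypt.
plaintext = Fernet(recipient_key.decrypt(wrapped_key, oaep)).decrypt(ciphertext)
assert plaintext == b"sensitive payload"
```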
Emerging In Vitro Liver Technologies for Drug Metabolism and Inter-Organ Interactions
Bale, Shyam Sundhar; Moore, Laura
2016-01-01
In vitro liver models provide essential information for evaluating drug metabolism, metabolite formation, and hepatotoxicity. Interfacing liver models with other organ models could provide insights into the desirable as well as unintended systemic side effects of therapeutic agents and their metabolites. Such information is invaluable for drug screening processes particularly in the context of secondary organ toxicity. While interfacing of liver models with other organ models has been achieved, platforms that effectively provide human-relevant precise information are needed. In this concise review, we discuss the current state-of-the-art of liver-based multiorgan cell culture platforms primarily from a drug and metabolite perspective, and highlight the importance of media-to-cell ratio in interfacing liver models with other organ models. In addition, we briefly discuss issues related to development of optimal liver models that include recent advances in hepatic cell lines, stem cells, and challenges associated with primary hepatocyte-based liver models. Liver-based multiorgan models that achieve physiologically relevant coupling of different organ models can have a broad impact in evaluating drug efficacy and toxicity, as well as mechanistic investigation of human-relevant disease conditions. PMID:27049038
A new fractional order derivative based active contour model for colon wall segmentation
NASA Astrophysics Data System (ADS)
Chen, Bo; Li, Lihong C.; Wang, Huafeng; Wei, Xinzhou; Huang, Shan; Chen, Wensheng; Liang, Zhengrong
2018-02-01
Segmentation of the colon wall plays an important role in advancing computed tomographic colonography (CTC) toward a screening modality. Due to the low contrast of CT attenuation around the colon wall, accurate segmentation of the boundary of both the inner and outer wall is very challenging. In this paper, based on the geodesic active contour model, we develop a new model for colon wall segmentation. First, tagged materials in CTC images were automatically removed via a partial volume (PV) based electronic colon cleansing (ECC) strategy. We then present a new fractional order derivative based active contour model to segment the volumetric colon wall from the cleansed CTC images. In this model, the region-based Chan-Vese model is incorporated as an energy term into the whole model so that not only edge/gradient information but also region/volume information is taken into account in the segmentation process. Furthermore, a fractional-order derivative energy term is also developed in the new model to preserve the low frequency information and improve the noise immunity of the new segmentation model. The proposed colon wall segmentation approach was validated on 16 patient CTC scans. Experimental results indicate that the present scheme is very promising toward automatic segmentation of the colon wall, thus facilitating computer aided detection of initial colonic polyp candidates via CTC.
Applications of agent-based modeling to nutrient movement in Lake Michigan
As part of an ongoing project aiming to provide useful information for nearshore management (harmful algal blooms, nutrient loading), we explore the value of agent-based models in Lake Michigan. Agent-based models follow many individual “agents” moving through a simul...
ERIC Educational Resources Information Center
Mun, Eun Young; von Eye, Alexander; Bates, Marsha E.; Vaschillo, Evgeny G.
2008-01-01
Model-based cluster analysis is a new clustering procedure to investigate population heterogeneity utilizing finite mixture multivariate normal densities. It is an inferentially based, statistically principled procedure that allows comparison of nonnested models using the Bayesian information criterion to compare multiple models and identify the…
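A minimal sketch of this procedure with scikit-learn, assuming synthetic data: fit finite mixtures of multivariate normals with increasing numbers of components and pick the one the Bayesian information criterion favors.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic heterogeneous population: two multivariate normal subgroups.
X = np.vstack([rng.normal([0, 0], 1.0, size=(150, 2)),
               rng.normal([4, 4], 0.8, size=(150, 2))])

# Fit finite normal mixtures with 1..5 components; the Bayesian information
# criterion (lower is better) identifies the number of latent subgroups.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(X).bic(X)
       for k in range(1, 6)}
print(bic, "-> chosen k =", min(bic, key=bic.get))
```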
NASA Astrophysics Data System (ADS)
Skersys, Tomas; Butleris, Rimantas; Kapocius, Kestutis
2013-10-01
Approaches for the analysis and specification of business vocabularies and rules are very relevant topics in both Business Process Management and Information Systems Development disciplines. However, in common practice of Information Systems Development, the Business modeling activities still are of mostly empiric nature. In this paper, basic aspects of the approach for business vocabularies' semi-automated extraction from business process models are presented. The approach is based on novel business modeling-level OMG standards "Business Process Model and Notation" (BPMN) and "Semantics for Business Vocabularies and Business Rules" (SBVR), thus contributing to OMG's vision about Model-Driven Architecture (MDA) and to model-driven development in general.
Approach to spatial information security based on digital certificate
NASA Astrophysics Data System (ADS)
Cong, Shengri; Zhang, Kai; Chen, Baowen
2005-11-01
With the development of online applications of geographic information systems (GIS) and spatial information services, spatial information security becomes more important. This work introduces digital certificates and authorization schemes into GIS to protect crucial spatial information, combining the techniques of role-based access control (RBAC), the public key infrastructure (PKI) and the privilege management infrastructure (PMI). We investigated the spatial information granularity suited for sensitivity marking and a digital certificate model that fits the needs of GIS security, based on a semantic analysis of spatial information. The approach implements secure, flexible, fine-grained access to spatial data in GIS based on public-key technologies.
Braga, Renata Dutra
2016-06-01
To develop a multiprofessional information model to be used in the decision-making process in primary care in Brazil. This was an observational study with a descriptive and exploratory approach, using action research associated with the Delphi method. A group of 13 health professionals made up a panel of experts that, through individual and group meetings, drew up a preliminary health information records model. The questionnaire used to validate this model included four questions based on a Likert scale. These questions evaluated the completeness and relevance of information on each of the four pillars that composed the model. The changes suggested in each round of evaluation were included when accepted by the majority (≥ 50%). This process was repeated as many times as necessary to obtain the desirable and recommended consensus level (> 50%), and the final version became the consensus model. Multidisciplinary health training of the panel of experts allowed a consensus model to be obtained based on four categories of health information, called pillars: Data Collection, Diagnosis, Care Plan and Evaluation. The obtained consensus model was considered valid by the experts and can contribute to the collection and recording of multidisciplinary information in primary care, as well as the identification of relevant concepts for defining electronic health records at this level of complexity in health care.
Liaw, Siaw-Teng; Deveny, Elizabeth; Morrison, Iain; Lewis, Bryn
2006-09-01
Using a factorial vignette survey and modeling methodology, we developed clinical and information models - incorporating evidence base, key concepts, relevant terms, decision-making and workflow needed to practice safely and effectively - to guide the development of an integrated rule-based knowledge module to support prescribing decisions in asthma. We identified workflows, decision-making factors, factor use, and clinician information requirements. The Unified Modeling Language (UML) and public domain software and knowledge engineering tools (e.g. Protégé) were used, with the Australian GP Data Model as the starting point for expressing information needs. A Web Services service-oriented architecture approach was adopted within which to express functional needs, and clinical processes and workflows were expressed in the Business Process Execution Language (BPEL). This formal analysis and modeling methodology to define and capture the process and logic of prescribing best practice in a reference implementation is fundamental to tackling deficiencies in prescribing decision support software.
Emergence of Opinion Leaders Based on Agent Model and Its Impact to Stock Prices
NASA Astrophysics Data System (ADS)
Misawa, Tadanobu; Suzuki, Kyoko; Okano, Yoshitaka; Shimokawa, Tetsuya
Recently, it has become easy to obtain large amounts of information because of the development of information technology. It is therefore thought that the impact on society of information transmission, such as word of mouth, has been growing. In this paper, we propose a model of the emergence of opinion leaders based on word of mouth in an artificial stock market. Moreover, the process by which opinion leaders emerge and their impact on stock prices are verified by simulation.
Intercepting a moving target: On-line or model-based control?
Zhao, Huaiyong; Warren, William H
2017-05-01
When walking to intercept a moving target, people take an interception path that appears to anticipate the target's trajectory. According to the constant bearing strategy, the observer holds the bearing direction of the target constant based on current visual information, consistent with on-line control. Alternatively, the interception path might be based on an internal model of the target's motion, known as model-based control. To investigate these two accounts, participants walked to intercept a moving target in a virtual environment. We degraded the target's visibility by blurring the target to varying degrees in the midst of a trial, in order to influence its perceived speed and position. Reduced levels of visibility progressively impaired interception accuracy and precision; total occlusion impaired performance most and yielded nonadaptive heading adjustments. Thus, performance strongly depended on current visual information and deteriorated qualitatively when it was withdrawn. The results imply that locomotor interception is normally guided by current information rather than an internal model of target motion, consistent with on-line control.
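A minimal simulation of the constant bearing strategy, assuming a proportional control law (heading rate proportional to bearing rate) and illustrative speeds: the walker turns to hold the target's bearing direction constant, using only current information about the target.

```python
import numpy as np

# Constant bearing strategy: turn so the target's bearing direction stays
# constant, which yields an interception path from current visual
# information only. Gain, speeds, and time step are illustrative.
dt, N = 0.05, 4.0                        # time step (s), turning gain
walker, v_w = np.array([0.0, 0.0]), 1.2  # walker position and speed
target, v_t = np.array([5.0, 3.0]), np.array([-0.6, 0.0])

bearing = np.arctan2(*(target - walker)[::-1])   # arctan2(dy, dx)
heading = bearing                                # start out facing the target
for step in range(400):
    target = target + v_t * dt
    new_bearing = np.arctan2(*(target - walker)[::-1])
    heading += N * (new_bearing - bearing)       # null the bearing rate
    bearing = new_bearing
    walker = walker + v_w * dt * np.array([np.cos(heading), np.sin(heading)])
    if np.linalg.norm(target - walker) < 0.15:
        break

print(f"distance to target after {step + 1} steps: "
      f"{np.linalg.norm(target - walker):.2f}")
```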
Sediment-Hosted Zinc-Lead Deposits of the World - Database and Grade and Tonnage Models
Singer, Donald A.; Berger, Vladimir I.; Moring, Barry C.
2009-01-01
This report provides information on sediment-hosted zinc-lead mineral deposits based on the geologic settings that are observed on regional geologic maps. The foundation of mineral-deposit models is information about known deposits. The purpose of this publication is to make this kind of information available in digital form for sediment-hosted zinc-lead deposits. Mineral-deposit models are important in exploration planning and quantitative resource assessments: Grades and tonnages among deposit types are significantly different, and many types occur in different geologic settings that can be identified from geologic maps. Mineral-deposit models are the keystone in combining the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Too few thoroughly explored mineral deposits are available in most local areas for reliable identification of the important geoscience variables, or for robust estimation of undiscovered deposits - thus, we need mineral-deposit models. Globally based deposit models allow recognition of important features because the global models demonstrate how common different features are. Well-designed and -constructed deposit models allow geologists to know from observed geologic environments the possible mineral-deposit types that might exist, and allow economists to determine the possible economic viability of these resources in the region. Thus, mineral-deposit models play the central role in transforming geoscience information to a form useful to policy makers. This publication contains a computer file of information on sediment-hosted zinc-lead deposits from around the world. It also presents new grade and tonnage models for nine types of these deposits and a file allowing locations of all deposits to be plotted in Google Earth. The data are presented in FileMaker Pro, Excel and text files to make the information available to as many as possible. The value of this information and any derived analyses depends critically on the consistent manner of data gathering. For this reason, we first discuss the rules applied in this compilation. Next, the fields of the data file are considered. Finally, we provide new grade and tonnage models that are, for the most part, based on a classification of deposits using observable geologic units from regional-scaled maps.
Framework model and principles for trusted information sharing in pervasive health.
Ruotsalainen, Pekka; Blobel, Bernd; Nykänen, Pirkko; Seppälä, Antto; Sorvari, Hannu
2011-01-01
Trustfulness (i.e. health and wellness information is processed ethically, and privacy is guaranteed) is one of the cornerstones for future Personal Health Systems, ubiquitous healthcare and pervasive health. Trust in today's healthcare is organizational, static and predefined. Pervasive health takes place in an open and untrusted information space where a person's lifelong health and wellness information together with contextual data are dynamically collected and used by many stakeholders. This generates new threats that do not exist in today's eHealth systems. Our analysis shows that the way security and trust are implemented in today's healthcare cannot guarantee information autonomy and trustfulness in pervasive health. Based on a framework model of pervasive health and a risk analysis of the ubiquitous information space, we have formulated principles which enable trusted information sharing in pervasive health. The principles imply that the data subject should have the right to dynamically verify trust and to control the use of her health information, as well as the right to set situation-based, context-aware personal policies. Data collectors and processors have responsibilities including transparency of information processing, and openness of interests, policies and environmental features. Our principles create a base for successful management of privacy and information autonomy in pervasive health. They also imply that it is necessary to create new data models for personal health information and new architectures which support situation-dependent trust and privacy management.
Role Modelling in MOOC Discussion Forums
ERIC Educational Resources Information Center
Hecking, Tobias; Chounta, Irene-Angelica; Hoppe, H. Ulrich
2017-01-01
To further develop rich and expressive ways of modelling roles of contributors in discussion forums of online courses, particularly in MOOCs, networks of forum users are analyzed based on the relations of information-giving and information-seeking. Specific connection patterns that appear in the information exchange networks of forum users are…
An Object-Based Requirements Modeling Method.
ERIC Educational Resources Information Center
Cordes, David W.; Carver, Doris L.
1992-01-01
Discusses system modeling and specification as it relates to object-based information systems development and software development. An automated system model based on the objects in the initial requirements document is described, the requirements document translator is explained, and a sample application of the technique is provided. (12…
Information Filtering Based on Users' Negative Opinions
NASA Astrophysics Data System (ADS)
Guo, Qiang; Li, Yang; Liu, Jian-Guo
2013-05-01
The process of heat conduction (HC) has recently found application in information filtering [Zhang et al., Phys. Rev. Lett. 99, 154301 (2007)]; it yields high diversity but low accuracy. The classical HC model predicts objects of potential interest to users based on the objects they like, disregarding negative opinions. In terms of users' rating scores, we present an improved user-based HC (UHC) information model that takes into account users' positive and negative opinions. Firstly, the objects rated by users are divided into positive and negative categories, then the predicted interesting and disliked object lists are generated by the UHC model. Finally, the recommendation lists are constructed by filtering out the disliked objects from the interesting lists. By implementing the new model with nine similarity measures, the experimental results for the MovieLens and Netflix datasets show that the new model, by considering negative opinions, greatly enhances accuracy, measured by the average ranking score, from 0.049 to 0.036 for Netflix and from 0.1025 to 0.0570 for MovieLens, reductions of 26.53% and 44.39%, respectively. Since users prefer to give positive ratings rather than negative ones, negative opinions contain much more information than positive ones; negative opinions are therefore very important for understanding users' online collective behaviors and improving the performance of the HC model.
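The sketch below is one plausible reading of the UHC recipe, not the authors' exact operator: liked and disliked item sets seed separate degree-normalized diffusion passes over the user-item graph, and items predicted from the disliked seed are filtered out of the interesting list. The ratings and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(20, 12))        # ratings 0 (unrated) .. 5
LIKE = 3                                     # ratings > LIKE count as "liked"

def diffuse(seed):
    """Two-step degree-normalized diffusion of a seed item vector
    through the user-item graph (heat-conduction style)."""
    A = (R > LIKE).astype(float)             # user-likes-item matrix
    ku = np.maximum(A.sum(axis=1), 1.0)      # user degrees
    ko = np.maximum(A.sum(axis=0), 1.0)      # object degrees
    return (A / ku[:, None]).T @ (A @ (seed / ko))

u = 0
liked    = (R[u] > LIKE).astype(float)
disliked = ((R[u] > 0) & (R[u] <= LIKE)).astype(float)

interest = diffuse(liked)
interest[R[u] > 0] = -np.inf                 # drop already-rated items
candidates = np.argsort(interest)[::-1]
predicted_dislikes = set(np.argsort(diffuse(disliked))[::-1][:3])

# Final list: interesting items with the predicted dislikes filtered out.
print([o for o in candidates if o not in predicted_dislikes][:3])
```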
A new pattern associative memory model for image recognition based on Hebb rules and dot product
NASA Astrophysics Data System (ADS)
Gao, Mingyue; Deng, Limiao; Wang, Yanjiang
2018-04-01
A great number of associative memory models have been proposed in recent years to realize information storage and retrieval inspired by the human brain. However, there is still much room for improving those models. In this paper, we extend a binary pattern associative memory model to accomplish real-world image recognition. The learning process is based on the fundamental Hebb rules and retrieval is implemented by a normalized dot product operation. Our proposed model can not only fulfill rapid memory storage and retrieval of visual information but can also learn incrementally without destroying previously learned information. Experimental results demonstrate that our model outperforms the existing Self-Organizing Incremental Neural Network (SOINN) and Back Propagation Neural Network (BPNN) in recognition accuracy and time efficiency.
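A minimal sketch of the two ingredients named in the abstract, Hebb-rule storage and normalized dot-product retrieval, on random binary patterns; the paper's extension to real-world images is not reproduced. Because storage is a sum of outer products, adding a new pattern adds one term without overwriting earlier memories, which is the incremental-learning property the abstract emphasizes.

```python
import numpy as np

# Hebbian associative memory: patterns are stored as a sum of outer
# products (Hebb rule) and retrieved by a normalized dot product.
rng = np.random.default_rng(0)
patterns = rng.choice([-1.0, 1.0], size=(5, 64))   # 5 binary "images"

W = sum(np.outer(p, p) for p in patterns)          # Hebbian weight matrix

def retrieve(cue, steps=5):
    x = cue.copy()
    for _ in range(steps):
        x = np.sign(W @ x)                         # Hopfield-style cleanup
    sims = patterns @ x / (np.linalg.norm(patterns, axis=1)
                           * np.linalg.norm(x))    # normalized dot products
    return int(sims.argmax()), float(sims.max())   # best-matching pattern

noisy = patterns[2] * rng.choice([1, 1, 1, 1, -1], size=64)  # ~20% flipped
print(retrieve(noisy))   # expected: pattern 2 with similarity near 1.0
```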
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
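For concreteness, the criteria under study take the usual least-squares forms; here is a sketch, assuming Gaussian errors, that also shows how MRM's inflated sample size (one observation per pair of samples) enters the formulas. The data are synthetic and the spurious predictor is included by construction.

```python
import numpy as np

def information_criteria(rss, n, k):
    """AIC, AICc and BIC (up to additive constants) for a least-squares
    model with k estimated coefficients fit to n observations."""
    aic = n * np.log(rss / n) + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, aicc, bic

# Toy MRM-style setting: 45 samples yield 45*44/2 = 990 pairwise distances,
# so the criteria "see" n = 990, weakening the complexity penalties relative
# to fit improvements. Compare a true 2-predictor model against the same
# model plus a spurious random predictor (column 2 is pure noise).
rng = np.random.default_rng(0)
n = 45 * 44 // 2
X = rng.normal(size=(n, 3))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)

for cols in ([0, 1], [0, 1, 2]):
    A = np.column_stack([np.ones(n), X[:, cols]])
    coef, rss, *_ = np.linalg.lstsq(A, y, rcond=None)
    crit = information_criteria(float(rss[0]), n, A.shape[1])
    print(cols, [round(c, 1) for c in crit])
```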
Image/video understanding systems based on network-symbolic models
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2004-03-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. Computer simulation models are built on the basis of graphs/networks, and the human brain has been found to emulate similar graph/network models. Symbols, predicates and grammars naturally emerge in such networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type relational structure created via multilevel hierarchical compression of visual information. Primary areas provide active fusion of image features on a spatial grid-like structure whose nodes are cortical columns; spatial logic and topology are naturally present in such structures. Mid-level vision processes, such as perceptual grouping and separation of figure from ground, are special kinds of network transformations. They convert the primary image structure into a set of more abstract structures that represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures; higher-level vision phenomena are the results of such analysis. The composition of network-symbolic models combines learning, classification, and analogy with higher-level model-based reasoning in a single framework, and it works similarly to frames and agents. Computational intelligence methods transform images into model-based knowledge representations. Based on such principles, an image/video understanding system can convert images into knowledge models and resolve uncertainty and ambiguity, allowing the creation of intelligent computer vision systems for design and manufacturing.
Word of Mouth: An Agent-based Approach to Predictability of Stock Prices
NASA Astrophysics Data System (ADS)
Shimokawa, Tetsuya; Misawa, Tadanobu; Watanabe, Kyoko
This paper addresses how communication processes among investors affect stock price formation, especially the emerging predictability of stock prices, in financial markets. An agent-based model, called the word-of-mouth model, is introduced to analyze the problem. This model provides a simple, but sufficiently versatile, description of the informational diffusion process and succeeds in lucidly explaining the predictability of small-sized stocks, a stylized fact in financial markets that is difficult to resolve with traditional models. Our model also provides a rigorous examination of the underreaction hypothesis to informational shocks.
Yoon, Miyoung; Clewell, Harvey J.
2016-01-01
Physiologically based pharmacokinetic (PBPK) modeling can provide an effective way to utilize in vitro and in silico based information in modern risk assessment for children and other potentially sensitive populations. In this review, we describe the process of in vitro to in vivo extrapolation (IVIVE) to develop PBPK models for a chemical at different ages in order to predict the target tissue exposure at the age of concern in humans. We present our ongoing studies on pyrethroids as a proof of concept to guide the readers through the IVIVE steps, using metabolism data collected either from age-specific liver donors or from expressed enzymes, in conjunction with enzyme ontogeny information, to provide age-appropriate metabolism parameters in the rat and human PBPK models, respectively. The approach we present here is readily applicable not just to other pyrethroids, but also to other environmental chemicals and drugs. Establishment of an in vitro and in silico based evaluation strategy, in conjunction with relevant human exposure information, is of great importance in risk assessment for potentially vulnerable populations such as early life stages, for which the necessary information for decision making is limited. PMID:26977255
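As a concrete illustration of the IVIVE scaling step, the following sketch applies the standard well-stirred liver model to turn an in vitro intrinsic clearance into an in vivo hepatic clearance; all physiological values are hypothetical placeholders, not the parameters used in the review:

```python
def scale_clearance(clint_ul_min_mg, mppgl_mg_g, liver_g, fu, q_h_l_h):
    """Well-stirred-model IVIVE sketch.
    clint_ul_min_mg: in vitro intrinsic clearance, uL/min/mg microsomal protein;
    mppgl: mg microsomal protein per g liver; fu: unbound fraction;
    q_h: hepatic blood flow, L/h. Returns hepatic clearance in L/h."""
    clint_l_h = clint_ul_min_mg * mppgl_mg_g * liver_g * 60 / 1e6  # scale to L/h
    return q_h_l_h * fu * clint_l_h / (q_h_l_h + fu * clint_l_h)

# Hypothetical adult vs. child physiology (illustrative numbers only)
print(scale_clearance(25.0, mppgl_mg_g=32, liver_g=1800, fu=0.1, q_h_l_h=90))
print(scale_clearance(25.0, mppgl_mg_g=26, liver_g=500,  fu=0.1, q_h_l_h=30))
```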
ERIC Educational Resources Information Center
Wolf, Sara Elizabeth; Brush, Thomas
The purpose of this research study was to determine whether a specific information problem-solving skills model was an effective metacognitive scaffold for students solving information-based problems. Specifically, 35 eighth grade students in two intact classes were asked to write newspaper articles that summarized the events surrounding the Selma…
Informal Learning through Science Media Usage
ERIC Educational Resources Information Center
Maier, Michaela; Rothmund, Tobias; Retzbach, Andrea; Otto, Lukas; Besley, John C.
2014-01-01
This article reviews current research on informal science learning through news media. Based on a descriptive model of media-based science communication we distinguish between (a) the professional routines by which journalists select and depict scientific information in traditional media and (b) the psychological processes that account for how…
NASA Technical Reports Server (NTRS)
Fuller, H. V.
1974-01-01
A display system was developed to provide flight information to the ground-based pilots of radio-controlled models used in flight research programs. The display system utilizes data received by telemetry from the model and presents the information numerically in the field of view of the binoculars used by the pilots.
Wang, Xibin; Luo, Fengji; Qian, Ying; Ranzi, Gianluca
2016-01-01
With the rapid development of ICT and Web technologies, a large amount of information is becoming available, and this is producing, in some instances, a condition of information overload. Under these conditions, it is difficult for a person to locate and access useful information for making decisions. To address this problem, there are information filtering systems, such as the personalized recommendation system (PRS) considered in this paper, that assist a person in identifying possible products or services of interest based on his/her preferences. Among available approaches, collaborative filtering (CF) is one of the most widely used recommendation techniques. However, CF has some limitations, e.g., a relatively simple similarity calculation and the cold-start problem. In this context, this paper presents a new regression model based on support vector machine (SVM) classification and an improved PSO (IPSO) for the development of an electronic movie PRS. In its implementation, an SVM classification model is first established to obtain a preliminary movie recommendation list, based on which an SVM regression model is applied to predict movies' ratings. The proposed PRS not only considers the movie's content information but also integrates the users' demographic and behavioral information to better capture the users' interests and preferences. The efficiency of the proposed method is verified by a series of experiments based on the MovieLens benchmark data set. PMID:27898691
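A minimal two-stage sketch of the classification-then-regression design described above, using scikit-learn's SVC and SVR on synthetic features; the IPSO parameter tuning and the real demographic/content features are omitted:

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(0)
X = rng.random((500, 8))             # stand-in user/movie features
liked = (X[:, 0] + X[:, 1] > 1).astype(int)                  # toy "liked" label
rating = 1 + 4 * X[:, :2].mean(axis=1) + 0.1 * rng.standard_normal(500)

clf = SVC(C=1.0, kernel="rbf").fit(X, liked)                 # stage 1: shortlist
reg = SVR(C=1.0, kernel="rbf").fit(X[liked == 1], rating[liked == 1])  # stage 2

X_new = rng.random((20, 8))
mask = clf.predict(X_new) == 1       # preliminary recommendation list
if mask.any():
    print(reg.predict(X_new[mask]))  # predicted ratings used to rank the shortlist
```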
Anderson, Weston; Guikema, Seth; Zaitchik, Ben; Pan, William
2014-01-01
Obtaining accurate small area estimates of population is essential for policy and health planning but is often difficult in countries with limited data. In lieu of available population data, small area estimate models draw information from previous time periods or from similar areas. This study focuses on model-based methods for estimating population when no direct samples are available in the area of interest. To explore the efficacy of tree-based models for estimating population density, we compare six different model structures including Random Forest and Bayesian Additive Regression Trees. Results demonstrate that without information from prior time periods, non-parametric tree-based models produced more accurate predictions than did conventional regression methods. Improving estimates of population density in non-sampled areas is important for regions with incomplete census data and has implications for economic, health and development policies. PMID:24992657
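The comparison reported above can be sketched with scikit-learn as follows, assuming synthetic covariates that stand in for the study's predictors; the data and scores are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.random((400, 5))             # e.g. land cover, night lights, slope
y = np.exp(3 * X[:, 0]) + 10 * X[:, 1] * X[:, 2] + rng.normal(0, 0.5, 400)

for name, model in [("linear regression", LinearRegression()),
                    ("random forest", RandomForestRegressor(n_estimators=200))]:
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")   # trees capture the nonlinearity
```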
Marginal and Random Intercepts Models for Longitudinal Binary Data With Examples From Criminology.
Long, Jeffrey D; Loeber, Rolf; Farrington, David P
2009-01-01
Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides individual-level information including information about heterogeneity of growth. It is shown how a type of numerical averaging can be used with the random intercepts model to obtain group-level information, thus approximating individual and marginal aspects of the LMM. The types of inferences associated with each model are illustrated with longitudinal criminal offending data based on N = 506 males followed over a 22-year period. Violent offending indexed by official records and self-report were analyzed, with the marginal model estimated using generalized estimating equations and the random intercepts model estimated using maximum likelihood. The results show that the numerical averaging based on the random intercepts can produce prediction curves almost identical to those obtained directly from the marginal model parameter estimates. The results provide a basis for contrasting the models and the estimation procedures and key features are discussed to aid in selecting a method for empirical analysis.
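The numerical averaging the authors describe amounts to integrating subject-specific probabilities over the random-intercept distribution. A sketch using Gauss-Hermite quadrature, with hypothetical coefficient values:

```python
import numpy as np
from scipy.special import expit, roots_hermite

def marginal_prob(xb, sigma_u, n_nodes=30):
    """Average subject-specific probabilities over a N(0, sigma_u^2)
    random intercept using Gauss-Hermite quadrature."""
    z, w = roots_hermite(n_nodes)          # nodes/weights for weight exp(-z^2)
    u = np.sqrt(2.0) * sigma_u * z         # substitution u = sqrt(2)*sigma*z
    return float(w @ expit(xb + u)) / np.sqrt(np.pi)

beta0, beta1, sigma_u = -4.0, 0.15, 1.5    # hypothetical estimates
for age in (10, 20, 30):
    xb = beta0 + beta1 * age
    # conditional (typical-subject) vs. marginal (population-averaged) probability
    print(age, round(expit(xb), 3), round(marginal_prob(xb, sigma_u), 3))
```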
NASA Astrophysics Data System (ADS)
Kuppel, S.; Soulsby, C.; Maneta, M. P.; Tetzlaff, D.
2017-12-01
The utility of field measurements to help constrain the model solution space and identify feasible model configurations has become an increasingly central issue in hydrological model calibration. Sufficiently informative observations are necessary to ensure that the goodness of model-data fit attained effectively translates into more physically sound information about the internal model parameters, as a basis for model structure evaluation. Here we assess to what extent the diversity of information content can inform on the suitability of a complex, process-based ecohydrological model to simulate key water flux and storage dynamics at a long-term research catchment in the Scottish Highlands. We use the fully distributed ecohydrological model EcH2O, calibrated against long-term datasets that encompass hydrologic and energy exchanges and ecological measurements: stream discharge, soil moisture, net radiation above canopy, and pine stand transpiration. Diverse combinations of these constraints were applied using a multi-objective cost function specifically designed to avoid compensatory effects between model-data metrics. Results revealed that calibration against virtually all datasets enabled the model to reproduce streamflow reasonably well. However, parameterizing the model to adequately capture local flux and storage dynamics, such as soil moisture or transpiration, required calibration with the corresponding specific observations. This indicates that the footprint of the information contained in observations varies for each type of dataset, and that a diverse database informing about the different compartments of the domain is critical for testing hypotheses of catchment function and identifying a consistent model parameterization. The results foster confidence in using EcH2O to help understand current and future ecohydrological couplings in Northern catchments.
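One simple way to build a cost function that avoids compensatory effects, as the abstract describes, is to let the worst-fitting dataset dominate; this is only one possible construction, and the metric values below are invented:

```python
def non_compensatory_cost(metrics):
    """Combine per-dataset error metrics (all scaled to [0, 1], 0 = perfect)
    by their maximum, so a good fit on one dataset cannot offset a poor
    fit on another -- one sketch of a non-compensatory objective."""
    return max(metrics.values())

errs = {"discharge": 0.12, "soil moisture": 0.35,
        "net radiation": 0.20, "transpiration": 0.41}
print(non_compensatory_cost(errs))   # the calibration minimizes the worst fit
```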
Wallace, C.S.A.; Marsh, S.E.
2005-01-01
Our study used geostatistics to extract measures that characterize the spatial structure of vegetated landscapes from satellite imagery for mapping endangered Sonoran pronghorn habitat. Fine spatial resolution IKONOS data provided information at the scale of individual trees or shrubs that permitted analysis of vegetation structure and pattern. We derived images of landscape structure by calculating local estimates of the nugget, sill, and range variogram parameters within 25 × 25-m image windows. These variogram parameters, which describe the spatial autocorrelation of the 1-m image pixels, are shown in previous studies to discriminate between different species-specific vegetation associations. We constructed two independent models of pronghorn landscape preference by coupling the derived measures with Sonoran pronghorn sighting data: a distribution-based model and a cluster-based model. The distribution-based model used the descriptive statistics for variogram measures at pronghorn sightings, whereas the cluster-based model used the distribution of pronghorn sightings within clusters of an unsupervised classification of derived images. Both models define similar landscapes, and validation results confirm they effectively predict the locations of an independent set of pronghorn sightings. Such information, although not a substitute for field-based knowledge of the landscape and associated ecological processes, can provide valuable reconnaissance information to guide natural resource management efforts. © 2005 Taylor & Francis Group Ltd.
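A sketch of estimating the three variogram parameters within an image window, using a 1-D empirical semivariogram and a spherical model fit; the window data are random stand-ins for IKONOS pixels:

```python
import numpy as np
from scipy.optimize import curve_fit

def empirical_variogram(z, max_lag):
    """1-D semivariance by pixel lag along a transect (sketch)."""
    lags = np.arange(1, max_lag + 1)
    gamma = [0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in lags]
    return lags, np.array(gamma)

def spherical(h, nugget, sill, a_range):
    """Spherical variogram model: rises to the sill at the range."""
    hn = np.minimum(h / a_range, 1.0)
    return nugget + (sill - nugget) * (1.5 * hn - 0.5 * hn ** 3)

z = np.random.default_rng(2).random(625)       # stand-in for a 25 x 25 window
lags, gam = empirical_variogram(z, max_lag=12)
(nugget, sill, a_range), _ = curve_fit(spherical, lags, gam,
                                       p0=(0.01, gam.max(), 6.0),
                                       bounds=(0, np.inf))
print(nugget, sill, a_range)
```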
An ontology-based semantic configuration approach to constructing Data as a Service for enterprises
NASA Astrophysics Data System (ADS)
Cai, Hongming; Xie, Cheng; Jiang, Lihong; Fang, Lu; Huang, Chenxi
2016-03-01
To align business strategies with IT systems, enterprises should rapidly implement new applications based on existing information with complex associations to adapt to the continually changing external business environment. Thus, Data as a Service (DaaS) has become an enabling technology for enterprises through information integration and the configuration of existing distributed enterprise systems and heterogeneous data sources. However, business modelling, system configuration and model alignment face challenges at the design and execution stages. To provide a comprehensive solution that facilitates data-centric application design in a highly complex and large-scale situation, a configurable ontology-based service integrated platform (COSIP) is proposed to support business modelling, system configuration and execution management. First, a meta-resource model is constructed and used to describe and encapsulate information resources by way of multi-view business modelling. Then, based on ontologies, three semantic configuration patterns, namely composite resource configuration, business scene configuration and runtime environment configuration, are designed to systematically connect business goals with executable applications. Finally, a software architecture based on model-view-controller (MVC) is provided and used to assemble components for software implementation. The result of the case study demonstrates that the proposed approach provides a flexible method of implementing data-centric applications.
A User-Centered Approach to Adaptive Hypertext Based on an Information Relevance Model
NASA Technical Reports Server (NTRS)
Mathe, Nathalie; Chen, James
1994-01-01
Rapid and effective access to information in large electronic documentation systems can be facilitated if information relevant in an individual user's context can be automatically supplied to that user. However, most of this knowledge on contextual relevance is not found within the contents of documents; rather, it is established incrementally by users during information access. We propose a new model for interactively learning contextual relevance during information retrieval and incrementally adapting retrieved information to individual user profiles. The model, called a relevance network, records the relevance of references based on user feedback for specific queries and user profiles. It also generalizes such knowledge to later derive relevant references for similar queries and profiles. The relevance network lets users filter information by context of relevance. Compared to other approaches, it does not require any prior knowledge or training. More importantly, our approach to adaptivity is user-centered. It facilitates acceptance and understanding by users by giving them shared control over the adaptation without disturbing their primary task. Users easily control when to adapt and when to use the adapted system. Lastly, the model is independent of the particular application used to access information, and supports sharing of adaptations among users.
Model-Informed Drug Development for Ixazomib, an Oral Proteasome Inhibitor.
Gupta, Neeraj; Hanley, Michael J; Diderichsen, Paul M; Yang, Huyuan; Ke, Alice; Teng, Zhaoyang; Labotka, Richard; Berg, Deborah; Patel, Chirag; Liu, Guohui; van de Velde, Helgi; Venkatakrishnan, Karthik
2018-02-15
Model-informed drug development (MIDD) was central to the development of the oral proteasome inhibitor ixazomib, facilitating internal decisions (switch from body surface area (BSA)-based to fixed dosing, inclusive phase III trials, portfolio prioritization of ixazomib-based combinations, phase III dose for maintenance treatment), regulatory review (model-informed QT analysis, benefit-risk of 4 mg dose), and product labeling (absolute bioavailability and intrinsic/extrinsic factors). This review discusses the impact of MIDD in enabling patient-centric therapeutic optimization during the development of ixazomib. © 2017 The Authors. Clinical Pharmacology & Therapeutics published by Wiley Periodicals, Inc. on behalf of American Society for Clinical Pharmacology and Therapeutics.
An Extension of SIC Predictions to the Wiener Coactive Model
Houpt, Joseph W.; Townsend, James T.
2011-01-01
The survivor interaction contrast (SIC) is a powerful measure for distinguishing among candidate models of human information processing. One class of models to which SIC analysis can apply is the coactive, or channel summation, class of models of human information processing. In general, parametric forms of coactive models assume that responses are made based on the first passage time of a sum of stochastic processes across a fixed threshold. Previous work has shown that the SIC for a coactive model based on the sum of Poisson processes has a distinctive down-up-down form, with an early negative region that is smaller than the later positive region. In this note, we demonstrate that a coactive process based on the sum of two Wiener processes has the same SIC form. PMID:21822333
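The SIC itself is straightforward to compute from the empirical survivor functions of the four factorial conditions. A sketch, with toy response times standing in for real Wiener first-passage data:

```python
import numpy as np

def survivor(rts, grid):
    """Empirical survivor function S(t) = P(T > t) on a common time grid."""
    return 1.0 - np.searchsorted(np.sort(rts), grid, side="right") / len(rts)

def sic(rt_ll, rt_lh, rt_hl, rt_hh, grid):
    # SIC(t) = S_LL - S_LH - S_HL + S_HH (L/H = low/high salience per channel)
    return (survivor(rt_ll, grid) - survivor(rt_lh, grid)
            - survivor(rt_hl, grid) + survivor(rt_hh, grid))

rng = np.random.default_rng(3)
grid = np.linspace(0, 3, 300)
# Toy factorial data: responses speed up with each high-salience input
rts = {c: rng.weibull(2.0, 2000) / (0.8 + 0.4 * c.count("h"))
       for c in ("ll", "lh", "hl", "hh")}
curve = sic(rts["ll"], rts["lh"], rts["hl"], rts["hh"], grid)
print(curve.min(), curve.max())   # coactive data predict a small dip, larger bump
```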
Information security system quality assessment through the intelligent tools
NASA Astrophysics Data System (ADS)
Trapeznikov, E. V.
2018-04-01
The development of technology has demonstrated the need for comprehensive analysis of automated system information security, and an analysis of the subject area confirms the relevance of this study. The research objective is to develop a methodology for assessing the quality of information security systems based on intelligent tools. The basis of the methodology is a model that assesses the information security of an information system through a neural network. The paper presents the security assessment model and its algorithm, and the results of the methodology's practical implementation are represented in the form of a software flow diagram. The conclusions note the practical significance of the model being developed.
High-Resolution Remote Sensing Image Building Extraction Based on Markov Model
NASA Astrophysics Data System (ADS)
Zhao, W.; Yan, L.; Chang, Y.; Gong, L.
2018-04-01
With the increase of resolution, remote sensing images are characterized by a larger information load, more noise, and more complex feature geometry and texture information, which makes the extraction of building information more difficult. To solve this problem, this paper designs a building extraction method for high-resolution remote sensing images based on a Markov model. The method introduces Contourlet-domain map clustering and a Markov model, captures and enhances the contour and texture information of high-resolution remote sensing image features in multiple directions, and further designs a spectral feature index that can characterize "pseudo-buildings" in the building area. Through multi-scale segmentation and extraction of image features, fine extraction from the building area down to individual buildings is realized. Experiments show that this method can suppress the noise of high-resolution remote sensing images, reduce the interference of non-target ground texture information, and remove shadow, vegetation and other pseudo-building information; compared with traditional pixel-level image information extraction, it performs better in building extraction precision, accuracy and completeness.
Flood extent and water level estimation from SAR using data-model integration
NASA Astrophysics Data System (ADS)
Ajadi, O. A.; Meyer, F. J.
2017-12-01
Synthetic Aperture Radar (SAR) images have long been recognized as a valuable data source for flood mapping. Compared to other sources, SAR's weather and illumination independence and large-area coverage at high spatial resolution support reliable, frequent, and detailed observations of developing flood events. Accordingly, SAR has the potential to greatly aid in the near real-time monitoring of natural hazards such as floods, if combined with automated image processing. This research works towards increasing the reliability and temporal sampling of SAR-derived flood hazard information by integrating information from multiple SAR sensors and SAR modalities (images and Interferometric SAR (InSAR) coherence) and by combining SAR-derived change detection information with hydrologic and hydraulic flood forecast models. First, the combination of multi-temporal SAR intensity images and coherence information for generating flood extent maps is introduced. The application of least-squares estimation integrates flood information from multiple SAR sensors, thus increasing the temporal sampling. SAR-based flood extent information is combined with a Digital Elevation Model (DEM) to reduce false alarms and to estimate water depth and flood volume. The SAR-based flood extent map is assimilated into the Hydrologic Engineering Center River Analysis System (HEC-RAS) model to aid in hydraulic model calibration. The developed technology improves the accuracy of flood information by exploiting information from both data and models, and provides enhanced flood information to decision-makers, supporting the response to flooding and improving emergency relief efforts.
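The water-depth step can be illustrated with a crude single-level approximation: take a water-surface elevation from the DEM values inside the SAR flood mask and subtract the ground elevation. A sketch with synthetic data (a real workflow would estimate the surface along the flood boundary more carefully):

```python
import numpy as np

def water_depth(flood_mask, dem, cell_area=100.0):
    """Depth and volume from a flood mask and a DEM (sketch).
    Water surface is approximated by one level over the whole extent."""
    surface = dem[flood_mask].max()         # crude single-level approximation
    depth = np.where(flood_mask, np.clip(surface - dem, 0, None), 0.0)
    volume = depth.sum() * cell_area        # m^3 for 10 m x 10 m cells
    return depth, volume

dem = np.random.default_rng(4).random((50, 50)) * 5 + 100   # synthetic terrain, m
flood_mask = dem < 101.0                    # toy SAR-derived flood extent
depth, vol = water_depth(flood_mask, dem)
print(depth.max(), vol)
```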
NASA Astrophysics Data System (ADS)
Takeuchi, Susumu; Teranishi, Yuuichi; Harumoto, Kaname; Shimojo, Shinji
Almost all companies are now utilizing computer networks to support speedier and more effective in-house information-sharing and communication. However, existing systems are designed to support communications only within the same department. Therefore, in our research, we propose an in-house communication support system which is based on the “Information Propagation Model (IPM).” The IPM is proposed to realize word-of-mouth communication in a social network, and to support information-sharing on the network. By applying the system in a real company, we found that information could be exchanged between different and unrelated departments, and such exchanges of information could help to build new relationships between the users who are apart on the social network.
User Interface Models for Multidisciplinary Bibliographic Information Dissemination Centers.
ERIC Educational Resources Information Center
Zipperer, W. C.
Two information dissemination centers at University of California at Los Angeles and University of Georgia studied the interactions between computer based search facilities and their users. The study, largely descriptive in nature, investigated the interaction processes between data base users and profile analysis or information specialists in…
Constructing RBAC Based Security Model in u-Healthcare Service Platform
Shin, Moon Sun; Jeon, Heung Seok; Ju, Yong Wan; Lee, Bum Ju; Jeong, Seon-Phil
2015-01-01
In today's era of aging society, people want to handle personal health care by themselves in everyday life. In particular, the evolution of medical and IT convergence technology and mobile smart devices has made it possible for people to gather information on their health status anytime and anywhere easily using biometric information acquisition devices. Healthcare information systems can contribute to the improvement of the nation's healthcare quality and the reduction of related cost. However, there are no perfect security models or mechanisms for healthcare service applications, and privacy information can therefore be leaked. In this paper, we examine security requirements related to privacy protection in u-healthcare service and propose an extended RBAC based security model. We propose and design u-healthcare service integration platform (u-HCSIP) applying RBAC security model. The proposed u-HCSIP performs four main functions: storing and exchanging personal health records (PHR), recommending meals and exercise, buying/selling private health information or experience, and managing personal health data using smart devices. PMID:25695104
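The core RBAC check underlying such a model is compact; the roles, permissions, and users below are invented examples, and the paper's contextual extensions are not shown:

```python
# Minimal RBAC sketch (role -> permissions, user -> roles); a real
# u-healthcare model would add constraints and role hierarchies.
ROLE_PERMS = {
    "patient":   {"read_own_phr", "share_phr", "log_health_data"},
    "physician": {"read_shared_phr", "write_prescription"},
    "dietician": {"read_shared_phr", "recommend_meals"},
}
USER_ROLES = {"alice": {"patient"}, "dr_kim": {"physician"}}

def check_access(user, permission):
    """Grant access iff any of the user's roles carries the permission."""
    return any(permission in ROLE_PERMS[r] for r in USER_ROLES.get(user, ()))

print(check_access("alice", "read_own_phr"))      # True
print(check_access("dr_kim", "recommend_meals"))  # False
```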
Development of a GIS-based spill management information system.
Martin, Paul H; LeBoeuf, Eugene J; Daniel, Edsel B; Dobbins, James P; Abkowitz, Mark D
2004-08-30
Spill Management Information System (SMIS) is a geographic information system (GIS)-based decision support system designed to effectively manage the risks associated with accidental or intentional releases of a hazardous material into an inland waterway. SMIS provides critical planning and impact information to emergency responders in anticipation of, or following such an incident. SMIS couples GIS and database management systems (DBMS) with the 2-D surface water model CE-QUAL-W2 Version 3.1 and the air contaminant model Computer-Aided Management of Emergency Operations (CAMEO) while retaining full GIS risk analysis and interpretive capabilities. Live 'real-time' data links are established within the spill management software to utilize current meteorological information and flowrates within the waterway. Capabilities include rapid modification of modeling conditions to allow for immediate scenario analysis and evaluation of 'what-if' scenarios. The functionality of the model is illustrated through a case study of the Cheatham Reach of the Cumberland River near Nashville, TN.
López, Diego M; Blobel, Bernd; Gonzalez, Carolina
2010-01-01
Requirement analysis, design, implementation, evaluation, use, and maintenance of semantically interoperable Health Information Systems (HIS) have to be based on eHealth standards. HIS-DF is a comprehensive approach for HIS architectural development based on standard information models and vocabulary. The empirical validity of HIS-DF has not been demonstrated so far. Through an empirical experiment, this paper demonstrates that, using HIS-DF and HL7 information models, the semantic quality of a HIS architecture can be improved compared to architectures developed using the traditional RUP process. Semantic quality of the architecture has been measured in terms of the model's completeness and validity metrics. The experimental results demonstrated an increased completeness of 14.38% and an increased validity of 16.63% when using the HIS-DF and HL7 information models in a sample HIS development project. Quality assurance of the system architecture in earlier stages of HIS development suggests an increased quality of the final HIS, which in turn implies an indirect impact on patient care.
Paini, Alicia; Sala Benito, Jose Vicente; Bessems, Jos; Worth, Andrew P
2017-12-01
Physiologically based kinetic (PBK) models and the virtual cell based assay can be linked to form so-called physiologically based dynamic (PBD) models. This study illustrates the development and application of a PBK model for the prediction of estragole-induced DNA adduct formation and hepatotoxicity in humans. To address the hepatotoxicity, HepaRG cells were used as a surrogate for liver cells, with cell viability being used as the in vitro toxicological endpoint. Information on DNA adduct formation was taken from the literature. Since estragole-induced cell damage is not directly caused by the parent compound, but by a reactive metabolite, information on the metabolic pathway was incorporated into the model. In addition, a user-friendly tool was developed by implementing the PBK/D model into a KNIME workflow. This workflow can be used to perform in vitro to in vivo extrapolation and forward as well as backward dosimetry in support of chemical risk assessment. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.
Wong, Florence L.; Phillips, Eleyne L.; Johnson, Samuel Y.; Sliter, Ray W.
2012-01-01
Models of the depth to the base of Last Glacial Maximum and sediment thickness over the base of Last Glacial Maximum for the eastern Santa Barbara Channel are a key part of the maps of shallow subsurface geology and structure for offshore Refugio to Hueneme Canyon, California, in the California State Waters Map Series. A satisfactory interpolation of the two datasets that accounted for regional geologic structure was developed using geographic information systems modeling and graphics software tools. Regional sediment volumes were determined from the model. Source data files suitable for geographic information systems mapping applications are provided.
AR Based App for Tourist Attraction in ESKİ ÇARŞI (Safranbolu)
NASA Astrophysics Data System (ADS)
Polat, Merve; Rakıp Karaş, İsmail; Kahraman, İdris; Alizadehashrafi, Behnam
2016-10-01
This research deals with 3D modeling of historical and heritage landmarks of Safranbolu that are registered by UNESCO. It is an Augmented Reality (AR) based project that triggers virtual three-dimensional (3D) models, cultural music, historical photos, artistic features and animated text information. The aim is to propose a GIS-based approach with these features and to add them to the system as attribute data in a relational database. The database will be available in an AR-based application to provide information for tourists.
A Multilayer Naïve Bayes Model for Analyzing User's Retweeting Sentiment Tendency.
Wang, Mengmeng; Zuo, Wanli; Wang, Ying
2015-01-01
Today microblogging has increasingly become a means of information diffusion via users' retweeting behavior. Since retweeting content, as the context information of a microblog, reflects an understanding of that microblog, users' retweeting sentiment tendency analysis has gradually become a hot research topic. Targeting online microblogging, a dynamic social network, we investigate how to exploit dynamic retweeting sentiment features in retweeting sentiment tendency analysis. On the basis of time series of users' network structure information and published text information, we first model dynamic retweeting sentiment features. Then we build Naïve Bayes models from the profile-, relationship-, and emotion-based dimensions, respectively. Finally, we build a multilayer Naïve Bayes model on top of the dimension-specific Naïve Bayes models to analyze a user's retweeting sentiment tendency towards a microblog. Experiments on a real-world dataset demonstrate the effectiveness of the proposed framework. Further experiments are conducted to understand the importance of dynamic retweeting sentiment features and temporal information in retweeting sentiment tendency analysis. Moreover, we provide a new train of thought for retweeting sentiment tendency analysis in dynamic social networks.
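A sketch of the two-layer idea, assuming three synthetic feature blocks for the profile, relationship, and emotion dimensions; training the top layer on in-sample posteriors is a simplification a real system would avoid (e.g., via cross-validated stacking):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.default_rng(5)
n = 600
blocks = {"profile": rng.random((n, 4)),
          "relationship": rng.random((n, 3)),
          "emotion": rng.random((n, 5))}
y = (blocks["emotion"][:, 0] + blocks["profile"][:, 0] > 1).astype(int)

# Layer 1: one Naive Bayes model per feature dimension
layer1 = {k: GaussianNB().fit(X, y) for k, X in blocks.items()}
meta = np.column_stack([m.predict_proba(blocks[k])[:, 1]
                        for k, m in layer1.items()])

# Layer 2: a Naive Bayes model over the dimension-level posteriors
top = GaussianNB().fit(meta, y)
print(top.predict_proba(meta[:3]))
```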
Documentation of the Retail Price Model
The Retail Price Model (RPM) provides a first-order estimate of average retail electricity prices using information from the EPA Base Case v.5.13 or other scenarios for each of the 64 Integrated Planning Model (IPM) regions.
Upper atmosphere research: Reaction rate and optical measurements
NASA Technical Reports Server (NTRS)
Stief, L. J.; Allen, J. E., Jr.; Nava, D. F.; Payne, W. A., Jr.
1990-01-01
The objective is to provide photochemical, kinetic, and spectroscopic information necessary for photochemical models of the Earth's upper atmosphere and to examine reactions or reactants not presently in the models to either confirm the correctness of their exclusion or provide evidence to justify future inclusion in the models. New initiatives are being taken in technique development (many of them laser based) and in the application of established techniques to address gaps in the photochemical/kinetic data base, as well as to provide increasingly reliable information.
The LUE data model for representation of agents and fields
NASA Astrophysics Data System (ADS)
de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek
2017-04-01
Traditionally, agent-based and field-based modelling environments use different data models to represent the state of the information they manipulate. In agent-based modelling, involving the representation of phenomena as objects bounded in space and time, agents are often represented by classes, each of which represents a particular kind of agent and all its properties. Such classes can be used to represent entities like people, birds, cars and countries. In field-based modelling, involving the representation of the environment as continuous fields, fields are often represented by a discretization of space, using multidimensional arrays, each storing mostly a single attribute. Such arrays can be used to represent the elevation of the land surface, the pH of the soil, or the population density in an area, for example. Representing a population of agents by class instances grouped in collections is an intuitive way of organizing information. A drawback, though, is that models storing property-grouping class instances in collections are less efficient (execute more slowly) than models in which the properties themselves are grouped into collections. The field representation, on the other hand, is convenient for the efficient execution of models. Another drawback is that, because the data models used are so different, integrating agent-based and field-based models becomes difficult, since the model builder has to deal with multiple concepts, and often multiple modelling environments. With the development of the LUE data model [1] we aim at representing agents and fields within a single paradigm, by combining the advantages of the data models used in agent-based and field-based modelling. This removes the barrier to writing integrated agent-based and field-based models. The resulting data model is intuitive to use and allows for efficient execution of models. LUE is both a high-level conceptual data model and a low-level physical data model. The LUE conceptual data model is a generalization of the data models used in agent-based and field-based modelling. The LUE physical data model [2] is an implementation of the LUE conceptual data model in HDF5. In our presentation we will provide details of our approach to organizing information about agents and fields, and show examples of agent and field data represented by the conceptual and physical data models. References: [1] de Bakker, M.P., de Jong, K., Schmitz, O., Karssenberg, D., 2016. Design and demonstration of a data model to integrate agent-based and field-based modelling. Environmental Modelling and Software. http://dx.doi.org/10.1016/j.envsoft.2016.11.016 [2] de Jong, K., 2017. LUE source code. https://github.com/pcraster/lue
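The efficiency argument can be made concrete by contrasting the two layouts in Python; the agent type and property names are invented:

```python
import numpy as np

# Array-of-structs: intuitive, one object per agent, slow bulk processing
class Bird:
    def __init__(self, x, y, energy):
        self.x, self.y, self.energy = x, y, energy

birds_aos = [Bird(i, 2 * i, 1.0) for i in range(100_000)]
total_aos = sum(b.energy for b in birds_aos)           # Python-level loop

# Struct-of-arrays: one array per property, field-like and vectorizable
birds_soa = {"x": np.arange(100_000, dtype=float),
             "y": 2.0 * np.arange(100_000, dtype=float),
             "energy": np.ones(100_000)}
total_soa = birds_soa["energy"].sum()                  # single vector op
print(total_aos == total_soa)
```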
Object-Oriented Technology-Based Software Library for Operations of Water Reclamation Centers
NASA Astrophysics Data System (ADS)
Otani, Tetsuo; Shimada, Takehiro; Yoshida, Norio; Abe, Wataru
SCADA systems in water reclamation centers have been constructed based on hardware and software that each manufacturer produced according to its own design. Even though this approach used to be effective in realizing real-time, reliable execution, it is an obstacle to reducing the costs of system construction and maintenance. A promising solution to this problem is to set specifications that can be used in common. In terms of software, the information model approach has been adopted in SCADA systems in other fields, such as telecommunications and power systems. An information model is a piece of software specification that describes a physical or logical object to be monitored. In this paper, we propose information models for the operations of water reclamation centers, which have not existed before. In addition, we show the feasibility of the information models in terms of common use and processing performance.
Data Model Management for Space Information Systems
NASA Technical Reports Server (NTRS)
Hughes, J. Steven; Crichton, Daniel J.; Ramirez, Paul; Mattmann, chris
2006-01-01
The Reference Architecture for Space Information Management (RASIM) suggests the separation of the data model from software components to promote the development of flexible information management systems. RASIM allows the data model to evolve independently from the software components and results in a robust implementation that remains viable as the domain changes. However, the development and management of data models within RASIM are difficult and time consuming tasks involving the choice of a notation, the capture of the model, its validation for consistency, and the export of the model for implementation. Current limitations to this approach include the lack of ability to capture comprehensive domain knowledge, the loss of significant modeling information during implementation, the lack of model visualization and documentation capabilities, and exports being limited to one or two schema types. The advent of the Semantic Web and its demand for sophisticated data models has addressed this situation by providing a new level of data model management in the form of ontology tools. In this paper we describe the use of a representative ontology tool to capture and manage a data model for a space information system. The resulting ontology is implementation independent. Novel on-line visualization and documentation capabilities are available automatically, and the ability to export to various schemas can be added through tool plug-ins. In addition, the ingestion of data instances into the ontology allows validation of the ontology and results in a domain knowledge base. Semantic browsers are easily configured for the knowledge base. For example the export of the knowledge base to RDF/XML and RDFS/XML and the use of open source metadata browsers provide ready-made user interfaces that support both text- and facet-based search. This paper will present the Planetary Data System (PDS) data model as a use case and describe the import of the data model into an ontology tool. We will also describe the current effort to provide interoperability with the European Space Agency (ESA)/Planetary Science Archive (PSA) which is critically dependent on a common data model.
Kavlock, R J
1997-01-01
During the last several years, significant changes in the risk assessment process for developmental toxicity of environmental contaminants have begun to emerge. The first of these changes is the development and beginning use of statistically based dose-response models [the benchmark dose (BMD) approach] that better utilize data derived from existing testing approaches. Accompanying this change is the greater emphasis placed on understanding and using mechanistic information to yield more accurate, reliable, and less uncertain risk assessments. The next stage in the evolution of risk assessment will be the use of biologically based dose-response (BBDR) models that begin to build into the statistically based models factors related to the underlying kinetic, biochemical, and/or physiologic processes perturbed by a toxicant. Such models are now emerging from several research laboratories. The introduction of quantitative models and the incorporation of biologic information into them has pointed to the need for even more sophisticated modifications for which we offer the term embryologically based dose-response (EBDR) models. Because these models would be based upon the understanding of normal morphogenesis, they represent a quantum leap in our thinking, but their complexity presents daunting challenges both to the developmental biologist and the developmental toxicologist. Implementation of these models will require extensive communication between developmental toxicologists, molecular embryologists, and biomathematicians. The remarkable progress in the understanding of mammalian embryonic development at the molecular level that has occurred over the last decade combined with advances in computing power and computational models should eventually enable these as yet hypothetical models to be brought into use.
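A sketch of the statistically based BMD idea: fit a dose-response curve to quantal data and invert it at a benchmark response. The logistic form, least-squares fit, and study numbers below are simplifications; production BMD software uses maximum likelihood and reports confidence limits:

```python
import numpy as np
from scipy.optimize import curve_fit, brentq
from scipy.special import expit

def pdose(d, b0, b1):
    """Simple logistic dose-response for a quantal endpoint (sketch)."""
    return expit(b0 + b1 * d)

doses = np.array([0.0, 5.0, 25.0, 100.0])         # hypothetical study design
affected = np.array([2, 4, 12, 28]) / 30.0         # fraction affected of 30
(b0, b1), _ = curve_fit(pdose, doses, affected, p0=[-2.0, 0.02])

bmr = 0.10                                         # 10% extra risk benchmark
p0 = pdose(0.0, b0, b1)
extra = lambda d: (pdose(d, b0, b1) - p0) / (1 - p0) - bmr
bmd = brentq(extra, 1e-6, doses.max())             # dose where extra risk = BMR
print(f"BMD10 ~ {bmd:.1f} (same units as dose)")
```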
ICT use for information management in healthcare system for chronic disease patient
NASA Astrophysics Data System (ADS)
Wawrzyniak, Zbigniew M.; Lisiecka-Biełanowicz, Mira
2013-10-01
Modern healthcare systems are designed to fulfill the needs of the patient, his system environment and other determinants of treatment, with proper support from technical aids. The whole system of care is matched to the technical solutions and to an organizational framework based on legal rules. The purpose of this study is to present how systemic Information and Communication Technology (ICT) tools can be used in a new model of patient-oriented care, improving the effectiveness of healthcare for patients with chronic diseases. The study material is the long-term process of healthcare for patients with chronic illness. Knowledge of the whole of the patient's ecosystem and his needs allows us to build a new ICT model of long-term care. The method used is the construction, modeling and continuous improvement of an efficient ICT layer for the patient-centered healthcare model. We present a new constructive, systemic approach to using ICT for information management in a healthcare system for chronic disease patients. The use of ICT tools in the model for chronic disease can improve all aspects of data management and communication, as well as the effectiveness of long-term, complex healthcare. In conclusion, an ICT-based model of healthcare can be constructed based on the interactions of the ecosystem's functional parts through information feedback and the provision of services and models, as well as knowledge of the patient. A systematic approach to the model of long-term healthcare, functionally assisted by ICT tools and data-management methods, will increase the effectiveness of patient care and organizational efficiency.
Chang, Tian-Ying; Zhang, Yi-Lin; Shan, Yan; Liu, Sai-Sai; Song, Xiao-Yue; Li, Zheng-Yan; Du, Li-Ping; Li, Yan-Yan; Gao, Douqing
2018-05-01
To examine whether the information-motivation-behavioural skills model could predict self-care behaviour among Chinese peritoneal dialysis patients. Peritoneal dialysis is a treatment performed by patients or their caregivers in their own home. It is important to implement theory-based projects to increase the self-care of patients with peritoneal dialysis. The information-motivation-behavioural model has been verified in diverse populations as a comprehensive, effective model to guide the design, implementation and evaluation of self-care programmes. A cross-sectional, observational study. A total of 201 adults with peritoneal dialysis were recruited at a 3A grade hospital in China. Participant data were collected on demographics, self-care information (knowledge), social support (social motivation), self-care attitude (personal motivation), self-efficacy (behaviour skills) and self-care behaviour. We also collected data on whether the recruited patients had peritoneal dialysis-associated peritonitis from electronic medical records. Measured variable path analysis was performed using Mplus 7.4 to identify the information-motivation-behavioural model. Self-efficacy, information and social motivation predict peritoneal dialysis self-care behaviour directly. Information and personal support affect self-care behaviour through self-efficacy, whereas peritoneal dialysis self-care behaviour has a direct effect on the prevention of peritoneal dialysis-associated peritonitis. The information-motivation-behavioural model is an appropriate and applicable model to explain and predict the self-care behaviour of Chinese peritoneal dialysis patients. Poor self-care behaviour among peritoneal dialysis patients results in peritoneal dialysis-associated peritonitis. The findings suggest that self-care education programmes for peritoneal dialysis patients should include strategies based on the information-motivation-behavioural model to enhance knowledge, motivation and behaviour skills to change or maintain self-care behaviour. © 2018 John Wiley & Sons Ltd.
An Object-Based Approach to Evaluation of Climate Variability Projections and Predictions
NASA Astrophysics Data System (ADS)
Ammann, C. M.; Brown, B.; Kalb, C. P.; Bullock, R.
2017-12-01
Evaluations of the performance of earth system model predictions and projections are of critical importance to enhance usefulness of these products. Such evaluations need to address specific concerns depending on the system and decisions of interest; hence, evaluation tools must be tailored to inform about specific issues. Traditional approaches that summarize grid-based comparisons of analyses and models, or between current and future climate, often do not reveal important information about the models' performance (e.g., spatial or temporal displacements; the reason behind a poor score) and are unable to accommodate these specific information needs. For example, summary statistics such as the correlation coefficient or the mean-squared error provide minimal information to developers, users, and decision makers regarding what is "right" and "wrong" with a model. New spatial and temporal-spatial object-based tools from the field of weather forecast verification (where comparisons typically focus on much finer temporal and spatial scales) have been adapted to more completely answer some of the important earth system model evaluation questions. In particular, the Method for Object-based Diagnostic Evaluation (MODE) tool and its temporal (three-dimensional) extension (MODE-TD) have been adapted for these evaluations. More specifically, these tools can be used to address spatial and temporal displacements in projections of El Nino-related precipitation and/or temperature anomalies, ITCZ-associated precipitation areas, atmospheric rivers, seasonal sea-ice extent, and other features of interest. Examples of several applications of these tools in a climate context will be presented, using output of the CESM large ensemble. In general, these tools provide diagnostic information about model performance - accounting for spatial, temporal, and intensity differences - that cannot be achieved using traditional (scalar) model comparison approaches. Thus, they can provide more meaningful information that can be used in decision-making and planning. Future extensions and applications of these tools in a climate context will be considered.
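The object-based idea can be sketched with thresholding and connected-component labeling, which recovers a spatial displacement that a grid-point score would only report as a large error; the fields below are synthetic, and matching objects by index is a simplification of MODE's attribute-based merging:

```python
import numpy as np
from scipy import ndimage

def objects(field, thresh):
    """Threshold a gridded field and label contiguous objects (MODE-like)."""
    mask = field > thresh
    labels, n = ndimage.label(mask)
    return labels, ndimage.center_of_mass(mask, labels, range(1, n + 1))

rng = np.random.default_rng(6)
model = ndimage.gaussian_filter(rng.random((80, 80)), 4)   # smooth anomaly field
obs = np.roll(model, shift=(5, -3), axis=(0, 1))           # displaced "truth"

_, c_model = objects(model, model.mean() + 0.5 * model.std())
_, c_obs = objects(obs, obs.mean() + 0.5 * obs.std())
# Centroid offsets diagnose the displacement directly
print(np.array(c_model[0]) - np.array(c_obs[0]))
```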
Cross-Service Investigation of Geographical Information Systems
2004-03-01
Figure 8 illustrates the combined layers. Information for the layers is stored in a database format. The two types of storage are vector and raster models. In a vector model, the image and information are stored as geometric objects such as points, lines, or polygons. In a raster model, they are stored as a regular grid of cells. DNCs are a vector-based digital database with selected maritime-significant physical features from hydrographic charts. Layers within the DNC are data
A New Model for the Organizational Structure of Medical Record Departments in Hospitals in Iran
Moghaddasi, Hamid; Hosseini, Azamossadat; Sheikhtaheri, Abbas
2006-01-01
The organizational structure of medical record departments in Iran is not appropriate for the efficient management of healthcare information. In addition, there is no strong information management division to provide comprehensive information management services in hospitals in Iran. Therefore, a suggested model was designed based on four main axes: 1) specifications of a Health Information Management Division, 2) specifications of a Healthcare Information Management Department, 3) the functions of the Healthcare Information Management Department, and 4) the units of the Healthcare Information Management Department. The validity of the model was determined through use of the Delphi technique. The results of the validation process show that the majority of experts agree with the model and consider it to be appropriate and applicable for hospitals in Iran. The model is therefore recommended for hospitals in Iran. PMID:18066362
Research on Zheng Classification Fusing Pulse Parameters in Coronary Heart Disease
Guo, Rui; Wang, Yi-Qin; Xu, Jin; Yan, Hai-Xia; Yan, Jian-Jun; Li, Fu-Feng; Xu, Zhao-Xia; Xu, Wen-Jie
2013-01-01
This study was conducted to illustrate that nonlinear dynamic variables of the Traditional Chinese Medicine (TCM) pulse can improve the performance of TCM Zheng classification models. Pulse recordings of 334 coronary heart disease (CHD) patients and 117 normal subjects were collected in this study. Recurrence quantification analysis (RQA) was employed to acquire nonlinear dynamic variables of the pulse. TCM Zheng models in CHD were constructed, and predictions were carried out using a novel multilabel learning algorithm on different datasets. The datasets were designed as follows: dataset1, TCM inquiry information including inspection information; dataset2, time-domain variables of the pulse plus dataset1; dataset3, RQA variables of the pulse plus dataset1; and dataset4, major principal components of the RQA variables plus dataset1. The performances of the different models for Zheng differentiation were compared. The model for Zheng differentiation based on RQA variables integrated with inquiry information had the best performance, whereas the one based only on inquiry had the worst; the model based on time-domain variables of the pulse integrated with inquiry fell between the two. These results show that RQA variables of the pulse can be used to construct models of TCM Zheng and improve the performance of Zheng differentiation models. PMID:23737839
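Two of the most common RQA variables, recurrence rate and determinism, can be sketched as follows; this simplified version skips the time-delay embedding and the line-of-identity exclusion that a full analysis would include:

```python
import numpy as np
from scipy.spatial.distance import cdist

def rqa_metrics(x, eps_frac=0.1, lmin=2):
    """Recurrence rate and determinism from a 1-D signal (simplified)."""
    R = cdist(x[:, None], x[:, None]) < eps_frac * np.ptp(x)  # recurrence matrix
    rr = R.mean()                        # recurrence rate (LOI kept, for brevity)
    diag_pts = 0
    for k in range(1, len(x)):           # scan upper diagonals for long lines
        d = np.r_[0, np.diagonal(R, k), 0].astype(int)
        runs = np.diff(np.flatnonzero(np.diff(d)))[::2]       # diagonal run lengths
        diag_pts += runs[runs >= lmin].sum()
    det = 2 * diag_pts / R.sum() if R.sum() else 0.0          # determinism
    return rr, det

t = np.linspace(0, 10, 400)
pulse = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(7).standard_normal(400)
print(rqa_metrics(pulse))
```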
NASA Astrophysics Data System (ADS)
Wang, Guanghui; Wang, Yufei; Liu, Yijun; Chi, Yuxue
2018-05-01
As the transmission of public opinion on the Internet in the “We the Media” era tends to be supraterritorial, concealed and complex, the traditional “point-to-surface” transmission of information has been transformed into “point-to-point” reciprocal transmission. A foundation for studies of the evolution of public opinion and its transmission on the Internet in the “We the Media” era can be laid by converting the massive amounts of fragmented information on public opinion that exists on “We the Media” platforms into structurally complex networks of information. This paper describes studies of structurally complex network-based modeling of public opinion on the Internet in the “We the Media” era from the perspective of the development and evolution of complex networks. The progress that has been made in research projects relevant to the structural modeling of public opinion on the Internet is comprehensively summarized. The review considers aspects such as regular grid-based modeling of the rules that describe the propagation of public opinion on the Internet in the “We the Media” era, social network modeling, dynamic network modeling, and supernetwork modeling. Moreover, an outlook for future studies that address complex network-based modeling of public opinion on the Internet is put forward as a summary from the perspective of modeling conducted using the techniques mentioned above.
On Utilizing Optimal and Information Theoretic Syntactic Modeling for Peptide Classification
NASA Astrophysics Data System (ADS)
Aygün, Eser; Oommen, B. John; Cataltepe, Zehra
Syntactic methods in pattern recognition have been used extensively in bioinformatics, and in particular, in the analysis of gene and protein expressions, and in the recognition and classification of bio-sequences. These methods are almost universally distance-based. This paper concerns the use of an Optimal and Information Theoretic (OIT) probabilistic model [11] to achieve peptide classification using the information residing in their syntactic representations. The latter has traditionally been achieved using the edit distances required in the respective peptide comparisons. We advocate that one can model the differences between compared strings as a mutation model consisting of random Substitutions, Insertions and Deletions (SID) obeying the OIT model. Thus, in this paper, we show that the probability measure obtained from the OIT model can be perceived as a sequence similarity metric, using which a Support Vector Machine (SVM)-based peptide classifier, referred to as OIT_SVM, can be devised.
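As a rough illustration of the distance-based pipeline that the OIT model replaces, the sketch below turns a plain Levenshtein edit distance (not the OIT probability measure itself, which is defined in the cited work) into a similarity matrix and feeds it to an SVM with a precomputed kernel. Sequences and labels are toy values.

```python
import numpy as np
from sklearn.svm import SVC

def edit_distance(a: str, b: str) -> int:
    # Classic dynamic-programming edit distance over SID operations.
    d = np.zeros((len(a) + 1, len(b) + 1), dtype=int)
    d[:, 0] = np.arange(len(a) + 1)
    d[0, :] = np.arange(len(b) + 1)
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1,
                          d[i - 1, j - 1] + (a[i - 1] != b[j - 1]))
    return d[len(a), len(b)]

peptides = ["ACDEFG", "ACDEYG", "WLKMNP", "WLKMNQ"]   # toy sequences
labels = [0, 0, 1, 1]
D = np.array([[edit_distance(p, q) for q in peptides] for p in peptides])
K = np.exp(-D / D.max())                 # distance -> similarity kernel
clf = SVC(kernel="precomputed").fit(K, labels)
print(clf.predict(K))
```

In the paper's approach, the OIT probability measure would replace the `edit_distance`-based kernel while the SVM stage stays essentially the same.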
Pregger, Thomas; Friedrich, Rainer
2009-02-01
Emission data needed as input for the operation of atmospheric models should not only be spatially and temporally resolved. Another important feature is the effective emission height, which significantly influences modelled concentration values. Unfortunately this information, which is especially relevant for large point sources, is usually not available, and simple assumptions are often used in atmospheric models. As a contribution to improving knowledge on emission heights, this paper provides typical default values for the driving parameters stack height and flue gas temperature, velocity and flow rate for different industrial sources. The results were derived from an analysis of probably the most comprehensive database of real-world stack information existing in Europe, based on German industrial data. A bottom-up calculation of effective emission heights, applying equations used for Gaussian dispersion models, shows significant differences depending on source and air pollutant, and compared to approaches currently used for atmospheric transport modelling.
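A minimal sketch of the kind of bottom-up calculation described: effective height as stack height plus plume rise, here using a Briggs-type buoyant-rise formula of the sort applied in Gaussian dispersion models. The stack parameters below are illustrative defaults, not values from the German database, and the exact equations used in the paper may differ.

```python
import math

def effective_height(h_stack, d, v_exit, T_gas, T_air=288.0, u_wind=5.0, x=1000.0):
    g = 9.81
    # Buoyancy flux (m^4/s^3) from exit velocity, stack diameter and temperatures
    F = g * v_exit * d**2 / 4.0 * (T_gas - T_air) / T_gas
    # Briggs-type transitional rise at downwind distance x (neutral/unstable)
    dh = 1.6 * F**(1.0 / 3.0) * x**(2.0 / 3.0) / u_wind
    return h_stack + dh

# Illustrative power-plant stack: 150 m stack, 6 m diameter, 20 m/s, 400 K flue gas
print(f"effective height ~ {effective_height(150, 6, 20, 400):.0f} m")
```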
NASA Technical Reports Server (NTRS)
Huning, J. R.; Logan, T. L.; Smith, J. H.
1982-01-01
The potential of using digital satellite data to establish a cloud cover data base for the United States, one that would provide detailed information on the temporal and spatial variability of cloud development, is studied. Key elements include: (1) interfacing GOES data from the University of Wisconsin Meteorological Data Facility with the Jet Propulsion Laboratory's VICAR image processing system and IBIS geographic information system; (2) creation of a registered multitemporal GOES data base; (3) development of a simple normalization model to compensate for sun angle; (4) creation of a variable-size georeference grid that provides detailed cloud information in selected areas and summarized information in other areas; and (5) development of a cloud/shadow model which details the percentage of each grid cell that is cloud and shadow covered, and the percentage of cloud or shadow opacity. In addition, model calculations of insolation were compared with measured values at selected test sites, and preliminary requirements for a large-scale data base of cloud cover statistics were developed.
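The abstract does not give the form of the "simple normalization model"; one plausible reading is a Lambertian cosine correction for solar zenith angle, sketched below with toy image values.

```python
import numpy as np

counts = np.array([[120, 135], [90, 110]], dtype=float)    # toy GOES visible counts
solar_zenith_deg = np.array([[35.0, 36.0], [60.0, 62.0]])  # per-pixel sun geometry
normalized = counts / np.cos(np.radians(solar_zenith_deg)) # assumed cosine correction
print(normalized.round(1))
```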
Information architecture for a federated health record server.
Kalra, D; Lloyd, D; Austin, T; O'Connor, A; Patterson, D; Ingram, D
2002-01-01
This paper describes the information models that have been used to implement a federated health record server and to deploy it in a live clinical setting. The authors, working at the Centre for Health Informatics and Multiprofessional Education (University College London), have built up over a decade of experience within Europe on the requirements and information models that are needed to underpin comprehensive multi-professional electronic health records. This work has involved collaboration with a wide range of health care and informatics organisations and partners in the healthcare computing industry across Europe though the EU Health Telematics projects GEHR, Synapses, EHCR-SupA, SynEx and Medicate. The resulting architecture models have fed into recent European standardisation work in this area, such as CEN TC/251 ENV 13606. UCL has implemented a federated health record server based on these models which is now running in the Department of Cardiovascular Medicine at the Whittington Hospital in North London. The information models described in this paper reflect a refinement based on this implementation experience.
3D-Lab: a collaborative web-based platform for molecular modeling.
Grebner, Christoph; Norrby, Magnus; Enström, Jonatan; Nilsson, Ingemar; Hogner, Anders; Henriksson, Jonas; Westin, Johan; Faramarzi, Farzad; Werner, Philip; Boström, Jonas
2016-09-01
The use of 3D information has shown impact in numerous applications in drug design. However, it is often under-utilized and traditionally limited to specialists. We want to change that, and present an approach making 3D information and molecular modeling accessible and easy-to-use 'for the people'. A user-friendly and collaborative web-based platform (3D-Lab) for 3D modeling, including a blazingly fast virtual screening capability, was developed. 3D-Lab provides an interface to automatic molecular modeling, like conformer generation, ligand alignments, molecular dockings and simple quantum chemistry protocols. 3D-Lab is designed to be modular, and to facilitate sharing of 3D-information to promote interactions between drug designers. Recent enhancements to our open-source virtual reality tool Molecular Rift are described. The integrated drug-design platform allows drug designers to instantaneously access 3D information and readily apply advanced and automated 3D molecular modeling tasks, with the aim to improve decision-making in drug design projects.
Health level 7 development framework for medication administration.
Kim, Hwa Sun; Cho, Hune
2009-01-01
We propose the creation of a standard data model for medication administration activities through the development of a clinical document architecture using the Health Level 7 Development Framework process based on an object-oriented analysis and the development method of Health Level 7 Version 3. Medication administration is the most common activity performed by clinical professionals in healthcare settings. A standardized information model and structured hospital information system are necessary to achieve evidence-based clinical activities. A virtual scenario is used to demonstrate the proposed method of administering medication. We used the Health Level 7 Development Framework and other tools to create the clinical document architecture, which allowed us to illustrate each step of the Health Level 7 Development Framework in the administration of medication. We generated an information model of the medication administration process as one clinical activity. It should become a fundamental conceptual model for understanding international-standard methodology by healthcare professionals and nursing practitioners with the objective of modeling healthcare information systems.
An industrial information integration approach to in-orbit spacecraft
NASA Astrophysics Data System (ADS)
Du, Xiaoning; Wang, Hong; Du, Yuhao; Xu, Li Da; Chaudhry, Sohail; Bi, Zhuming; Guo, Rong; Huang, Yongxuan; Li, Jisheng
2017-01-01
To operate an in-orbit spacecraft, the spacecraft status has to be monitored autonomously by collecting and analysing real-time data and then detecting abnormalities and malfunctions of system components. To develop an information system for spacecraft state detection, we investigate the feasibility of using ontology-based artificial intelligence in the system development. We propose a new modelling technique based on the semantic web and an agent, scenario and ontology model. In the modelling, the subjects of the astronautics field are classified, corresponding agents and scenarios are defined, and they are connected by the semantic web to analyse data and detect failures. We introduce the modelling methodology and the resulting framework of the status detection information system in this paper, and discuss the system components as well as their interactions in detail. The system has been prototyped and tested to illustrate its feasibility and effectiveness. The proposed modelling technique is generic and can be extended and applied to the development of other large-scale and complex information systems.
The Use of a Context-Based Information Retrieval Technique
2009-07-01
Only fragments of this report's text survive in the record. The recoverable content: information retrieval can benefit when information is provided in context; Latent Semantic Analysis (LSA), also known as latent semantic indexing (LSI), is a statistical technique for inferring contextual and structural information; in contrast, natural language models apply algorithms that combine statistical information with semantic information.
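A minimal sketch of LSA/LSI as described in the surviving fragments: a TF-IDF term-document matrix factorized by truncated SVD. The documents are toy text, and scikit-learn is used for brevity.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

docs = ["information retrieval in context",
        "semantic analysis of text",
        "statistical language models for retrieval"]
X = TfidfVectorizer().fit_transform(docs)       # term-document weights
lsa = TruncatedSVD(n_components=2, random_state=0)
print(lsa.fit_transform(X))                     # documents in the latent space
```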
Development of an information data base for watershed monitoring
NASA Technical Reports Server (NTRS)
Smith, A. Y.; Blackwell, R. J.
1980-01-01
Landsat multispectral scanner data, Defense Mapping Agency digital terrain data, conventional maps, and ground data were integrated to create a comprehensive information data base (the Image Based Information System), to monitor the water quality of the Lake Tahoe Basin. Landsat imagery was used as the planimetric base to which all other data were registered. A georeference image plane, which provided an interface between all data planes for the Lake Tahoe Basin data base, was created from the drainage basin map. The data base was used to extract each drainage basin for separate display. The Defense Mapping Agency-created elevation image was processed with VICAR software to produce a component representing slope magnitude, which was cross-tabulated with the drainage basin georeference table. Future applications of the data base include the development of precipitation modeling, surface runoff models, and classification of drainage basin cover types.
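A small sketch of deriving a slope-magnitude component from a gridded elevation image, analogous to the VICAR processing described above. The toy DEM and the cell size are assumptions.

```python
import numpy as np

dem = np.array([[100., 101., 103.],
                [ 99., 100., 102.],
                [ 98.,  99., 101.]])            # toy elevation grid (m)
cell = 30.0                                     # metres per cell (assumed)
dzdy, dzdx = np.gradient(dem, cell)             # finite-difference gradients
slope_deg = np.degrees(np.arctan(np.hypot(dzdx, dzdy)))
print(slope_deg.round(2))
```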
NASA Astrophysics Data System (ADS)
Setiyono, T. D.
2014-12-01
Accurate and timely information on rice crop growth and yield helps governments and other stakeholders adapt their economic policies and enables relief organizations to better anticipate and coordinate relief efforts in the wake of a natural catastrophe. Such delivery of rice growth and yield information is made possible by regular earth observation using space-borne Synthetic Aperture Radar (SAR) technology combined with a crop modeling approach to estimate yield. Radar-based remote sensing is capable of observing rice vegetation growth irrespective of cloud coverage, an important feature given that in incidences of flooding the sky is often cloud-covered. The system allows rapid damage assessment over the area of interest. Rice yield monitoring is based on a crop growth simulation and SAR-derived key information, particularly start of season and leaf growth rate. Results from pilot study sites in South and South East Asian countries suggest that incorporation of SAR data into the crop model improves the estimation of actual yields. Remote-sensing data assimilation into the crop model effectively captures the responses of rice crops to environmental conditions over large spatial coverage, which otherwise is practically impossible to achieve. Such improvement of actual yield estimates offers practical applications, such as crop insurance programs. A process-based crop simulation model is used in the system to ensure that climate information is adequately captured and to enable mid-season yield forecasts.
Research of Manufacture Time Management System Based on PLM
NASA Astrophysics Data System (ADS)
Jing, Ni; Juan, Zhu; Liangwei, Zhong
This system targets enterprise machine shops. It analyzes their business needs and builds a plant management information system for manufacture-time data and manufacture-time information management of the manufacturing process. Combining web technology with an Excel VBA based development method, it constructs a hybrid, PLM-based framework for a workshop manufacture-time management information system, and discusses the functionality of the system architecture and the database structure.
Application of Artificial Intelligence for Bridge Deterioration Model.
Chen, Zhang; Wu, Yangyang; Li, Li; Sun, Lijun
2015-01-01
The deterministic bridge deterioration model updating problem is well established in bridge management, but the traditional methods and approaches for this problem require manual intervention. This paper presents an artificial-intelligence-based approach that self-updates the parameters of the bridge deterioration model. When new information and data are collected, a posterior distribution is constructed according to Bayes' theorem to describe the integrated result of the historical information and the newly gained information, and this distribution is used to update the model parameters. The AI-based approach is applied to updating the parameters of a bridge deterioration model using data collected from bridges in 12 districts of Shanghai from 2004 to 2013, and the results show that it is an accurate, effective, and satisfactory way to handle parameter updating without manual intervention. PMID:26601121
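A minimal sketch of the Bayesian update at the core of this approach, shown for a single deterioration-rate parameter with a conjugate normal model and known observation variance. The prior, noise variance and observations are invented; the paper's actual model structure and Shanghai data are not reproduced.

```python
import numpy as np

mu0, var0 = -0.50, 0.04        # prior mean/variance of annual condition loss (assumed)
obs_var = 0.09                 # assumed inspection noise variance
new_obs = np.array([-0.62, -0.55, -0.70])   # hypothetical newly collected rates

n = len(new_obs)
post_var = 1.0 / (1.0 / var0 + n / obs_var)            # conjugate normal update
post_mu = post_var * (mu0 / var0 + new_obs.sum() / obs_var)
print(f"posterior rate: {post_mu:.3f} +/- {post_var**0.5:.3f}")
```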
Safety Case Development as an Information Modelling Problem
NASA Astrophysics Data System (ADS)
Lewis, Robert
This paper considers the benefits from applying information modelling as the basis for creating an electronically-based safety case. It highlights the current difficulties of developing and managing large document-based safety cases for complex systems such as those found in Air Traffic Control systems. After a review of current tools and related literature on this subject, the paper proceeds to examine the many relationships between entities that can exist within a large safety case. The paper considers the benefits to both safety case writers and readers from the future development of an ideal safety case tool that is able to exploit these information models. The paper also introduces the idea that the safety case has formal relationships between entities that directly support the safety case argument using a methodology such as GSN, and informal relationships that provide links to direct and backing evidence and to supporting information.
NASA Astrophysics Data System (ADS)
Purwoko, Saad, Noor Shah; Tajudin, Nor'ain Mohd
2017-05-01
This study aims to: i) develop problem-solving questions on Linear Equations Systems of Two Variables (LESTV) based on the levels of the IPT Model; ii) describe the level of students' information-processing skill in solving LESTV problems; iii) explain students' skill in information processing when solving LESTV problems; and iv) explain students' cognitive processes in solving LESTV problems. The study involves three phases: i) development of LESTV problem questions based on the Tessmer Model; ii) a quantitative survey method for analyzing students' level of information-processing skill; and iii) a qualitative case-study method for analyzing students' cognitive processes. The population of the study was 545 eighth-grade students, represented by a sample of 170 students from five junior high schools in the Hilir Barat Zone, Palembang (Indonesia), chosen using cluster sampling. Fifteen students among them were drawn as a sample for the interview sessions, which continued until the information obtained was saturated. The data were collected using the LESTV problem-solving test and the interview protocol. The quantitative data were analyzed using descriptive statistics, while the qualitative data were analyzed using content analysis. The findings indicated that students' cognitive processing reached only the steps of identifying external sources and fluently executing algorithms in short-term memory. Only 15.29% of the students could retrieve type A information and 5.88% could retrieve type B information from long-term memory. The implication is that the developed LESTV problems validated the IPT Model for modelling students' assessment at different levels of the hierarchy.
Geographic information system/watershed model interface
Fisher, Gary T.
1989-01-01
Geographic information systems allow for the interactive analysis of spatial data related to water-resources investigations. A conceptual design for an interface between a geographic information system and a watershed model includes functions for the estimation of model parameter values. Design criteria include ease of use, minimal equipment requirements, a generic data-base management system, and use of a macro language. An application is demonstrated for a 90.1-square-kilometer subbasin of the Patuxent River near Unity, Maryland, that performs automated derivation of watershed parameters for hydrologic modeling.
A global parallel model based design of experiments method to minimize model output uncertainty.
Bazil, Jason N; Buzzard, Gregory T; Rundell, Ann E
2012-03-01
Model-based experiment design specifies the data to be collected that will most effectively characterize the biological system under study. Existing model-based design of experiment algorithms have primarily relied on Fisher Information Matrix-based methods to choose the best experiment in a sequential manner. However, these are largely local methods that require an initial estimate of the parameter values, which are often highly uncertain, particularly when data is limited. In this paper, we provide an approach to specify an informative sequence of multiple design points (parallel design) that will constrain the dynamical uncertainty of the biological system responses to within experimentally detectable limits as specified by the estimated experimental noise. The method is based upon computationally efficient sparse grids and requires only a bounded uncertain parameter space; it does not rely upon initial parameter estimates. The design sequence emerges through the use of scenario trees with experimental design points chosen to minimize the uncertainty in the predicted dynamics of the measurable responses of the system. The algorithm was illustrated herein using a T cell activation model for three problems that ranged in dimension from 2D to 19D. The results demonstrate that it is possible to extract useful information from a mathematical model where traditional model-based design of experiments approaches most certainly fail. The experiments designed via this method fully constrain the model output dynamics to within experimentally resolvable limits. The method is effective for highly uncertain biological systems characterized by deterministic mathematical models with limited data sets. Also, it is highly modular and can be modified to include a variety of methodologies such as input design and model discrimination.
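The sketch below conveys the flavour of this approach rather than the sparse-grid algorithm itself: parameters are sampled from a bounded space with no initial point estimate, a toy model is simulated over the ensemble, and design points are placed where the predicted response is most uncertain. The model, bounds and number of design points are all invented.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 10, 101)
theta = rng.uniform([0.1, 0.5], [1.0, 2.0], size=(200, 2))   # bounded parameter space
# Toy dynamics standing in for the T cell model: y(t) = a * t * exp(-b * t)
Y = theta[:, :1] * t * np.exp(-theta[:, 1:] * t)
uncertainty = Y.std(axis=0)                  # ensemble spread at each time
design_times = t[np.argsort(uncertainty)[-3:]]   # 3 parallel design points
print("measure at t =", np.sort(design_times))
```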
Effects of rewiring strategies on information spreading in complex dynamic networks
NASA Astrophysics Data System (ADS)
Ally, Abdulla F.; Zhang, Ning
2018-04-01
Recent advances in networks and communication services have attracted much interest in understanding information spreading in social networks. Consequently, numerous studies have been devoted to providing effective and accurate models for mimicking information spreading. However, knowledge of how to spread information faster and more widely remains a contentious issue, and most existing works are based on static networks, which limits the reality of the dynamism of entities that participate in information spreading. Using the SIR epidemic model, this study explores and compares the effects of two rewiring models (Fermi-Dirac and linear functions) on information spreading in scale-free and small-world networks. Our results show that for all the rewiring strategies, the spreading influence grows with time but settles into a steady state at later time steps. This means that information spreading takes off during the initial spreading steps, after which the spreading prevalence settles toward its equilibrium, with the majority of the population having recovered and thus no longer affecting the spreading. Meanwhile, the rewiring strategy based on the Fermi-Dirac distribution function tends to impede the spreading process, although the structure of the networks still supports the spreading even at a low spreading rate; the worst case occurs when the spreading rate is extremely small. These results emphasize that although such networks play a big role in shaping the spreading, the role of the model parameters cannot simply be ignored. The probability of high-degree neighbors being informed grows much faster under the linear-function rewiring strategy than under the Fermi-Dirac strategy. Clearly, the rewiring model based on the linear function generates the fastest spreading across the networks; therefore, if we are interested in speeding up the spreading process in stochastic modeling, the linear function may play a pivotal role.
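A minimal sketch of discrete-time SIR spreading on a scale-free network with a simple rewiring step. The network size, rates and the rewiring rule (a uniform stand-in, not the paper's Fermi-Dirac or linear functions) are illustrative assumptions.

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(500, 3)        # scale-free substrate
beta, gamma, rewire_p = 0.1, 0.05, 0.02     # infect, recover, rewire (assumed)
state = {v: "S" for v in G}
state[0] = "I"                              # seed spreader

for step in range(100):
    infected = [v for v in G if state[v] == "I"]
    for v in infected:
        for w in list(G.neighbors(v)):
            if state[w] == "S" and random.random() < beta:
                state[w] = "I"
        if random.random() < gamma:
            state[v] = "R"
    # Dynamic-network step: occasionally rewire one endpoint of a random edge
    if random.random() < rewire_p and G.number_of_edges() > 0:
        u, w = random.choice(list(G.edges()))
        new = random.choice(list(G.nodes()))
        if new not in (u, w) and not G.has_edge(u, new):
            G.remove_edge(u, w)
            G.add_edge(u, new)

print(sum(s != "S" for s in state.values()), "nodes ever informed")
```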
Binder, Harald; Sauerbrei, Willi; Royston, Patrick
2013-06-15
In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model intended only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, the variance explained and the complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R² = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models.
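A minimal sketch of the FP1 selection step that MFP builds on: for one continuous covariate, the best first-degree fractional polynomial power is chosen from the standard power set by least squares. The full MFP procedure adds backfitting across variables and significance-based complexity choices, which are omitted here; the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0.5, 5.0, 200)
y = np.log(x) + rng.normal(scale=0.3, size=200)       # true functional form: log

powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]              # standard FP1 power set
def fp(x, p):
    return np.log(x) if p == 0 else x**p              # p = 0 denotes log by convention

def rss(p):
    z = fp(x, p)
    coef = np.polyfit(z, y, 1)                        # fit y = a*z + b
    return np.sum((y - np.polyval(coef, z))**2)

best = min(powers, key=rss)
print("selected FP1 power:", best)                    # expect 0 (log)
```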
Information Sharing Modalities for Mobile Ad-Hoc Networks
NASA Astrophysics Data System (ADS)
de Spindler, Alexandre; Grossniklaus, Michael; Lins, Christoph; Norrie, Moira C.
Current mobile phone technologies have fostered the emergence of a new generation of mobile applications. Such applications allow users to interact and share information opportunistically when their mobile devices are in physical proximity or close to fixed installations. It has been shown how mobile applications such as collaborative filtering and location-based services can take advantage of ad-hoc connectivity to use physical proximity as a filter mechanism inherent to the application logic. We discuss the different modes of information sharing that arise in such settings based on the models of persistence and synchronisation. We present a platform that supports the development of applications that can exploit these modes of ad-hoc information sharing and, by means of an example, show how such an application can be realised based on the supported event model.
NASA Astrophysics Data System (ADS)
Shafii, M.; Tolson, B.; Matott, L. S.
2012-04-01
Hydrologic modeling has benefited from significant developments over the past two decades. This has resulted in building of higher levels of complexity into hydrologic models, which eventually makes the model evaluation process (parameter estimation via calibration and uncertainty analysis) more challenging. In order to avoid unreasonable parameter estimates, many researchers have suggested implementation of multi-criteria calibration schemes. Furthermore, for predictive hydrologic models to be useful, proper consideration of uncertainty is essential. Consequently, recent research has emphasized comprehensive model assessment procedures in which multi-criteria parameter estimation is combined with statistically-based uncertainty analysis routines such as Bayesian inference using Markov Chain Monte Carlo (MCMC) sampling. Such a procedure relies on the use of formal likelihood functions based on statistical assumptions, and moreover, the Bayesian inference structured on MCMC samplers requires a considerably large number of simulations. Due to these issues, especially in complex non-linear hydrological models, a variety of alternative informal approaches have been proposed for uncertainty analysis in the multi-criteria context. This study aims at exploring a number of such informal uncertainty analysis techniques in multi-criteria calibration of hydrological models. The informal methods addressed in this study are (i) Pareto optimality which quantifies the parameter uncertainty using the Pareto solutions, (ii) DDS-AU which uses the weighted sum of objective functions to derive the prediction limits, and (iii) GLUE which describes the total uncertainty through identification of behavioral solutions. The main objective is to compare such methods with MCMC-based Bayesian inference with respect to factors such as computational burden, and predictive capacity, which are evaluated based on multiple comparative measures. The measures for comparison are calculated both for calibration and evaluation periods. The uncertainty analysis methodologies are applied to a simple 5-parameter rainfall-runoff model, called HYMOD.
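A minimal sketch of the GLUE idea mentioned above: sample parameters, keep "behavioural" sets that exceed a likelihood threshold, and characterize uncertainty from the retained ensemble. HYMOD is not reproduced; a one-parameter linear-reservoir stand-in and a Nash-Sutcliffe threshold are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
rain = rng.gamma(2.0, 2.0, 100)

def simulate(k):                        # toy linear reservoir, recession k
    q, s = np.zeros_like(rain), 0.0
    for i, p in enumerate(rain):
        s += p
        q[i] = k * s
        s -= q[i]
    return q

q_obs = simulate(0.3) + rng.normal(scale=0.2, size=100)   # synthetic "observations"
samples = rng.uniform(0.05, 0.95, 2000)                   # prior parameter samples
nse = np.array([1 - np.sum((simulate(k) - q_obs)**2)
                / np.sum((q_obs - q_obs.mean())**2) for k in samples])
behavioural = samples[nse > 0.7]                          # GLUE threshold (assumed)
print(len(behavioural), "behavioural samples, k in",
      f"[{behavioural.min():.2f}, {behavioural.max():.2f}]")
```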
Keith, Jeff; Westbury, Chris; Goldman, James
2015-09-01
Corpus-based semantic space models, which primarily rely on lexical co-occurrence statistics, have proven effective in modeling and predicting human behavior in a number of experimental paradigms that explore semantic memory representation. The most widely studied extant models, however, are strongly influenced by orthographic word frequency (e.g., Shaoul & Westbury, Behavior Research Methods, 38, 190-195, 2006). This has the implication that high-frequency closed-class words can potentially bias co-occurrence statistics. Because these closed-class words are purported to carry primarily syntactic, rather than semantic, information, the performance of corpus-based semantic space models may be improved by excluding closed-class words (using stop lists) from co-occurrence statistics, while retaining their syntactic information through other means (e.g., part-of-speech tagging and/or affixes from inflected word forms). Additionally, very little work has been done to explore the effect of employing morphological decomposition on the inflected forms of words in corpora prior to compiling co-occurrence statistics, despite (controversial) evidence that humans perform early morphological decomposition in semantic processing. In this study, we explored the impact of these factors on corpus-based semantic space models. From this study, morphological decomposition appears to significantly improve performance in word-word co-occurrence semantic space models, providing some support for the claim that sublexical information-specifically, word morphology-plays a role in lexical semantic processing. An overall decrease in performance was observed in models employing stop lists (e.g., excluding closed-class words). Furthermore, we found some evidence that weakens the claim that closed-class words supply primarily syntactic information in word-word co-occurrence semantic space models.
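A minimal sketch of a word-word co-occurrence semantic space of the kind studied here, with a stop list and a crude suffix-stripping stand-in for morphological decomposition. PPMI weighting and cosine similarity are one common choice; the text, window size and "stemmer" are toy assumptions.

```python
import numpy as np

text = "the cats chased the dogs while the dog watched the cat".split()
stop = {"the", "while"}                         # toy closed-class stop list
stem = lambda w: w.rstrip("s")                  # toy morphological decomposition
tokens = [stem(w) for w in text if w not in stop]

vocab = sorted(set(tokens))
idx = {w: i for i, w in enumerate(vocab)}
C = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(tokens):                  # +/-2 word co-occurrence window
    for j in range(max(0, i - 2), min(len(tokens), i + 3)):
        if i != j:
            C[idx[w], idx[tokens[j]]] += 1

P = C / C.sum()
expected = P.sum(1, keepdims=True) @ P.sum(0, keepdims=True)
ppmi = np.maximum(0, np.log((P + 1e-12) / (expected + 1e-12)))
a, b = ppmi[idx["cat"]], ppmi[idx["dog"]]
cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
print(f"cosine(cat, dog) = {cos:.3f}")
```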
Computational Model for Ethnographically Informed Systems Design
NASA Astrophysics Data System (ADS)
Iqbal, Rahat; James, Anne; Shah, Nazaraf; Terken, Jacques
This paper presents a computational model for ethnographically informed systems design that can support complex and distributed cooperative activities. The model is based on an ethnographic framework consisting of three important dimensions (distributed coordination, awareness of work, and plans and procedures) and the BDI (Belief, Desire and Intention) model of intelligent agents. The ethnographic framework is used to conduct ethnographic analysis and to organise ethnographically driven information into the three dimensions, whereas the BDI model allows such information to be mapped onto the underlying concepts of multi-agent systems. The advantage of this model is that it is built upon an adaptation of existing mature and well-understood techniques. By the use of this model, we also address the cognitive aspects of systems design.
NASA Astrophysics Data System (ADS)
Quesada-Montano, Beatriz; Westerberg, Ida K.; Fuentes-Andino, Diana; Hidalgo-Leon, Hugo; Halldin, Sven
2017-04-01
Long-term hydrological data are key to understanding catchment behaviour and for decision making within water management and planning. Given the lack of observed data in many regions worldwide, hydrological models are an alternative for reproducing historical streamflow series. Additional types of information, beyond locally observed discharge, can be used to constrain model parameter uncertainty for ungauged catchments. Climate variability exerts a strong influence on streamflow variability on long and short time scales, in particular in the Central-American region. We therefore explored the use of climate variability knowledge to constrain the simulated discharge uncertainty of a conceptual hydrological model applied to a Costa Rican catchment, assumed to be ungauged. To reduce model uncertainty we first rejected parameter relationships that disagreed with our understanding of the system. We then assessed how well climate-based constraints applied at long-term, inter-annual and intra-annual time scales could constrain model uncertainty. Finally, we compared the climate-based constraints to a constraint on low-flow statistics based on information obtained from global maps. We evaluated our method in terms of the ability of the model to reproduce the observed hydrograph and the active catchment processes, using two efficiency measures, a statistical consistency measure, a spread measure and 17 hydrological signatures. We found that climate variability knowledge was useful for reducing model uncertainty, in particular for rejecting unrealistic representations of deep groundwater processes. The constraints based on global maps of low-flow statistics provided more constraining information than those based on climate variability, but the latter rejected slow rainfall-runoff representations that the low-flow statistics did not reject. The use of such knowledge, together with information on low-flow statistics and constraints on parameter relationships, proved useful for constraining model uncertainty for a basin treated as ungauged. This shows that our method is promising for reconstructing long-term flow data for ungauged catchments on the Pacific side of Central America, and that similar methods can be developed for ungauged basins in other regions where climate variability exerts a strong control on streamflow variability.
Zsuga, Judit; Biro, Klara; Papp, Csaba; Tajti, Gabor; Gesztelyi, Rudolf
2016-02-01
Reinforcement learning (RL) is a powerful concept underlying forms of associative learning governed by the use of a scalar reward signal, with learning taking place if expectations are violated. RL may be assessed using model-based and model-free approaches. Model-based reinforcement learning involves the amygdala, the hippocampus, and the orbitofrontal cortex (OFC). The model-free system involves the pedunculopontine-tegmental nucleus (PPTgN), the ventral tegmental area (VTA) and the ventral striatum (VS). Based on the functional connectivity of the VS, both model-free and model-based RL systems center on the VS, which computes value by integrating model-free signals (received as reward prediction errors) and model-based reward-related input. Using the concept of a reinforcement learning agent, we propose that the VS serves as the value function component of the RL agent. Regarding the model utilized for model-based computations, we turn to the proactive brain concept, which offers a ubiquitous function for the default network based on its great functional overlap with contextual associative areas. Hence, by means of the default network the brain continuously organizes its environment into context frames, enabling the formulation of analogy-based associations that are turned into predictions of what to expect. The OFC integrates reward-related information into context frames upon computing reward expectation, by compiling the stimulus-reward and context-reward information offered by the amygdala and hippocampus, respectively. Furthermore, we suggest that the integration of model-based reward expectations into the value signal is further supported by the efferents of the OFC that reach structures canonical for model-free learning (e.g., the PPTgN, VTA, and VS).
Integration of remote sensing based surface information into a three-dimensional microclimate model
NASA Astrophysics Data System (ADS)
Heldens, Wieke; Heiden, Uta; Esch, Thomas; Mueller, Andreas; Dech, Stefan
2017-03-01
Climate change urges cities to consider the urban climate as part of sustainable planning. Urban microclimate models can provide knowledge of the climate at building-block level, but very detailed information on the area of interest is required, and most microclimate studies therefore make use of assumptions and generalizations to describe the model area. Remote sensing data with area-wide coverage provide a means to derive many parameters at the detailed spatial and thematic scale required by urban climate models. This study shows how microclimate simulations for a series of real-world urban areas can be supported by remote sensing data. In an automated process, surface materials, albedo, LAI/LAD and object height have been derived and integrated into the urban microclimate model ENVI-met. Multiple microclimate simulations have been carried out, both with the dynamic remote sensing based input data and with manual, static input data, to analyze the impact of the remote sensing based surface information and the suitability of the applied data and techniques. The integration of the remote sensing based input data into ENVI-met is supported by an automated processing chain, which saves tedious manual editing and allows for fast and area-wide generation of simulation areas. The analysis of the different model runs shows the importance of high-quality height data, detailed surface material information and albedo.
An evidence-based patient-centered method makes the biopsychosocial model scientific.
Smith, Robert C; Fortin, Auguste H; Dwamena, Francesca; Frankel, Richard M
2013-06-01
To review the scientific status of the biopsychosocial (BPS) model and to propose a way to improve it. Engel's BPS model added patients' psychological and social health concerns to the highly successful biomedical model. He proposed that the BPS model could make medicine more scientific, but its use in education, clinical care, and, especially, research remains minimal. Many aver correctly that the present model cannot be defined in a consistent way for the individual patient, making it untestable and non-scientific. This stems from not obtaining relevant BPS data systematically, where one interviewer obtains the same information another would. Recent research by two of the authors has produced similar patient-centered interviewing methods that are repeatable and elicit just the relevant patient information needed to define the model at each visit. We propose that the field adopt these evidence-based methods as the standard for identifying the BPS model. Identifying a scientific BPS model in each patient with an agreed-upon, evidence-based patient-centered interviewing method can produce a quantum leap ahead in both research and teaching. A scientific BPS model can give us more confidence in being humanistic. In research, we can conduct more rigorous studies to inform better practices.
Geographic Video 3d Data Model And Retrieval
NASA Astrophysics Data System (ADS)
Han, Z.; Cui, C.; Kong, Y.; Wu, H.
2014-04-01
Geographic video includes both spatial and temporal geographic features acquired through ground-based or non-ground-based cameras. With the popularity of video capture devices such as smartphones, the volume of user-generated geographic video clips has grown significantly, and this growth is quickly accelerating. Such a massive and increasing volume poses a major challenge to efficient video management and query. Most of today's video management and query techniques are based on signal-level content extraction and are not able to fully utilize the geographic information of the videos. This paper aims to introduce a geographic video 3D data model based on spatial information. The main idea of the model is to utilize the location, trajectory and azimuth information acquired by sensors such as GPS receivers and 3D electronic compasses in conjunction with video contents. The raw spatial information is synthesized into points, lines, polygons and solids according to camcorder parameters such as focal length and angle of view. For video segments and video frames, we defined three categories of geometry objects using the geometry model of the OGC Simple Features Specification for SQL. Video can then be queried by computing the spatial relations between query objects and these geometry objects, such as VFLocation, VSTrajectory, VSFOView and VFFovCone. We designed the query methods in detail using the structured query language (SQL). The experiments indicate that the model is a multi-objective, integrated, loosely coupled, flexible and extensible data model for the management of geographic stereo video.
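A minimal sketch of the kind of spatial-relation query the model supports, computed here with shapely in place of the SQL layer. The geometry values are invented; `vs_trajectory` and `vf_fov_cone` loosely mirror the VSTrajectory and VFFovCone objects named above.

```python
from shapely.geometry import LineString, Polygon

vs_trajectory = LineString([(0, 0), (2, 1), (4, 3)])     # camcorder path
vf_fov_cone = Polygon([(4, 3), (6, 3.5), (6, 2.5)])      # toy field-of-view cone
query_region = Polygon([(3, 2), (7, 2), (7, 5), (3, 5)]) # region of interest

print("trajectory crosses region:", vs_trajectory.intersects(query_region))
print("view cone inside region:", query_region.contains(vf_fov_cone))
```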
Wu, Jun-Jun; Gao, Zhi-Hai; Li, Zeng-Yuan; Wang, Hong-Yan; Pang, Yong; Sun, Bin; Li, Chang-Long; Li, Xu-Zhi; Zhang, Jiu-Xing
2014-03-01
In order to estimate sparse vegetation information accurately in a desertification region, taking the southeast of Sunite Right Banner, Inner Mongolia, as the test site and Tiangong-1 hyperspectral imagery as the main data, sparse vegetation coverage and biomass were retrieved based on the normalized difference vegetation index (NDVI) and the soil adjusted vegetation index (SAVI), combined with field investigation data, and the advantages and disadvantages of the two indices were compared. Firstly, the correlation between the vegetation indices and vegetation coverage under different band combinations was analyzed, as well as that with biomass. Secondly, the best band combination was determined as the one giving the maximum correlation coefficient between the vegetation index (VI) and the vegetation parameters. The maximum correlation coefficient between the vegetation parameters and NDVI reached 0.7, while that for SAVI nearly reached 0.8. The center wavelength of the red band in the best combination for NDVI was 630 nm, and that of the near-infrared (NIR) band was 910 nm, whereas center wavelengths of 620 and 920 nm, respectively, formed the best combination for SAVI. Finally, linear regression models were established to retrieve vegetation coverage and biomass from the Tiangong-1 VIs. The R² of all models was above 0.5, and that of the SAVI-based models was higher than that of the NDVI-based ones; in particular, the R² of the SAVI-based vegetation coverage retrieval model was as high as 0.59. Under cross-validation, the standard errors (RMSE) of the SAVI-based models were lower than those of the NDVI-based models. The results showed that the abundant spectral information of Tiangong-1 hyperspectral imagery can reflect the actual vegetation condition effectively, and SAVI can estimate sparse vegetation information more accurately than NDVI in desertification regions.
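The two indices at the core of the retrieval, computed here from toy reflectance values for the band centers reported above (red near 630/620 nm, NIR near 910/920 nm). The soil factor L = 0.5 is the usual SAVI default, and the linear retrieval coefficients are hypothetical placeholders, not the paper's fitted values.

```python
import numpy as np

red = np.array([0.08, 0.12, 0.20])     # toy red reflectance
nir = np.array([0.30, 0.28, 0.25])     # toy NIR reflectance
L = 0.5                                # SAVI soil adjustment factor (common default)

ndvi = (nir - red) / (nir + red)
savi = (nir - red) / (nir + red + L) * (1 + L)
coverage = 0.9 * savi + 0.05           # hypothetical linear retrieval coefficients
print(ndvi.round(3), savi.round(3), coverage.round(3))
```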
Modeling of BN Lifetime Prediction of a System Based on Integrated Multi-Level Information.
Wang, Jingbin; Wang, Xiaohong; Wang, Lizhi
2017-09-15
Predicting system lifetime is important to ensure safe and reliable operation of products, which requires integrated modeling based on multi-level, multi-sensor information. However, lifetime characteristics of equipment in a system are different and failure mechanisms are inter-coupled, which leads to complex logical correlations and the lack of a uniform lifetime measure. Based on a Bayesian network (BN), a lifetime prediction method for systems that combine multi-level sensor information is proposed. The method considers the correlation between accidental failures and degradation failure mechanisms, and achieves system modeling and lifetime prediction under complex logic correlations. This method is applied in the lifetime prediction of a multi-level solar-powered unmanned system, and the predicted results can provide guidance for the improvement of system reliability and for the maintenance and protection of the system. PMID:28926930
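A miniature Bayesian network of the kind described, with two component nodes feeding one system node, solved by direct enumeration. The structure and all conditional-probability values are invented for illustration; the paper's multi-level BN is far richer.

```python
from itertools import product

p_comp_ok = {"power": 0.95, "control": 0.90}     # component reliabilities (assumed)

def p_sys_ok(power_ok, control_ok):              # CPT of the system node (assumed)
    return 0.99 if (power_ok and control_ok) else 0.10

# P(system OK) by enumerating all component states
p = sum(
    (p_comp_ok["power"] if a else 1 - p_comp_ok["power"])
    * (p_comp_ok["control"] if b else 1 - p_comp_ok["control"])
    * p_sys_ok(a, b)
    for a, b in product([True, False], repeat=2))
print(f"P(system OK) = {p:.3f}")
```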
Zhang, Pei-feng; Hu, Yuan-man; He, Hong-shi
2010-05-01
The demand for accurate and up-to-date spatial information on urban buildings is becoming more and more important for urban planning, environmental protection, and other applications. Today's commercial high-resolution satellite imagery offers the potential to extract three-dimensional information on urban buildings. This paper extracted the three-dimensional information of urban buildings from QuickBird imagery and validated the precision of the extraction using Barista software. It was shown that extracting three-dimensional building information from high-resolution satellite imagery with Barista software requires little specialist expertise and offers broad applicability, simple operation, and high precision. Point positioning and height determination accuracy at the one-pixel level could be achieved if the digital elevation model (DEM) and the sensor orientation model were sufficiently precise and the off-nadir view angle was favorable.
A multicriteria decision making model for assessment and selection of an ERP in a logistics context
NASA Astrophysics Data System (ADS)
Pereira, Teresa; Ferreira, Fernanda A.
2017-07-01
The aim of this work is to apply a decision-support methodology based on a multicriteria decision analysis (MCDA) model that allows a group of decision makers (GDM) to assess and select an Enterprise Resource Planning (ERP) system in a Portuguese logistics company. A Decision Support System (DSS) implementing the Multicriteria Methodology for the Assessment and Selection of Information Systems / Information Technologies (MMASSI/IT) is used, chosen for its features and the ease with which the model can be changed and adapted to a given scope. Using this DSS, the information system best suited to the decisional context was obtained; this result was then evaluated through a sensitivity and robustness analysis.
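A generic weighted-sum scoring of candidate ERPs, standing in for the MMASSI/IT aggregation, whose exact criteria and scales are not given in the abstract. All criteria, weights and scores below are invented.

```python
import numpy as np

criteria = ["functionality", "cost", "vendor support", "logistics fit"]
weights = np.array([0.35, 0.20, 0.15, 0.30])     # must sum to 1 (assumed weights)
scores = np.array([[4, 3, 4, 5],                 # ERP A, rated 1-5 per criterion
                   [5, 2, 3, 4],                 # ERP B
                   [3, 5, 4, 3]])                # ERP C
totals = scores @ weights
print("ranking:", sorted(zip(totals, ["A", "B", "C"]), reverse=True))
```

A sensitivity analysis of the kind mentioned above would then perturb `weights` and check whether the top-ranked alternative changes.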
Prediction on sunspot activity based on fuzzy information granulation and support vector machine
NASA Astrophysics Data System (ADS)
Peng, Lingling; Yan, Haisheng; Yang, Zhigang
2018-04-01
In order to analyze the fluctuation range of sunspots, a combined prediction method based on fuzzy information granulation (FIG) and support vector machine (SVM) was put forward. First, FIG is employed to granulate the sample data and extract the valid information of each window, namely the minimum value, the general average value and the maximum value of each window. Second, a forecasting model is built with SVM for each of these components, and cross-validation is used to optimize the model parameters. Finally, the fluctuation range of sunspots is forecasted with the optimized SVM models. A case study demonstrates that the model has high accuracy and can effectively predict the fluctuation of sunspots.
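A minimal sketch of this pipeline: each window of a synthetic series is reduced to a (min, mean, max) granule, and one support vector regressor per component forecasts the next window, yielding a predicted fluctuation range. The series, window length and SVR settings are assumptions.

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
series = np.sin(np.linspace(0, 20, 240)) * 80 + 100 + rng.normal(0, 5, 240)
win = 12
# FIG-style granulation: (min, mean, max) per window -> 20 granules
granules = np.array([(w.min(), w.mean(), w.max())
                     for w in series.reshape(-1, win)])

X, Y = granules[:-1], granules[1:]          # predict next granule from current one
preds = [SVR(C=10.0).fit(X, Y[:, k]).predict(granules[-1:])[0] for k in range(3)]
print("next window (low, mean, high) ~", np.round(preds, 1))
```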
The methodology of database design in organization management systems
NASA Astrophysics Data System (ADS)
Chudinov, I. L.; Osipova, V. V.; Bobrova, Y. V.
2017-01-01
The paper describes a unified methodology of database design for management information systems. Designing the conceptual information model for the domain area is the most important and labor-intensive stage in database design. Based on the proposed integrated approach to design, the conceptual information model and the main principles of developing relational databases are provided, and users' information needs are considered. According to the methodology, the process of designing the conceptual information model includes three basic stages, which are defined in detail. Finally, the article describes the process of implementing the results of the analysis of users' information needs and the rationale for the use of classifiers.
NASA Astrophysics Data System (ADS)
Schmaltz, Elmar; Steger, Stefan; Bogaard, Thom; Van Beek, Rens; Glade, Thomas
2017-04-01
Hydromechanic slope stability models are often used to assess the landslide susceptibility of hillslopes, and some of these models are able to account for vegetation-related effects when assessing slope stability. However, spatial information on the required vegetation parameters (especially for woodland), which are defined by land cover type, tree species and stand density, is mostly underrepresented compared to hydropedological and geomechanical parameters. The aim of this study is to assess how LiDAR-derived biomass information can help to distinguish distinct stand-immanent properties (e.g. stand density and diversity) and further improve the performance of hydromechanic slope stability models. We used spatial vegetation data produced by sophisticated algorithms that are able to separate single trees within a stand based on LiDAR point clouds and thus allow an extraordinarily detailed determination of the aboveground biomass. Further, this information is used to estimate the species- and stand-related distribution of the subsurface biomass, using an innovative approach to approximate root system architecture and development. The hydrological tree-soil interactions and their impact on the geotechnical stability of the soil mantle are then reproduced in the dynamic and spatially distributed slope stability model STARWARS/PROBSTAB. This study highlights first advances in the approximation of the biomechanical reinforcement potential of tree root systems in tree stands. Based on our findings, we address the advantages and limitations of highly detailed biomass information in hydromechanic modelling and physically based slope failure prediction.
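One common simplification of what coupled models such as STARWARS/PROBSTAB compute is an infinite-slope factor of safety in which root reinforcement enters as an additional cohesion term. The sketch below uses that simplification; all numeric values are invented, and the root-cohesion contrast merely illustrates how stand-level biomass information could feed the stability term.

```python
import math

def factor_of_safety(slope_deg, z, m, c_soil, c_root, phi_deg,
                     gamma=18.0, gamma_w=9.81):
    """Infinite-slope FS with root cohesion c_root (kPa); z is soil depth (m),
    m the saturated fraction of that depth, gamma unit weights (kN/m^3)."""
    b, phi = math.radians(slope_deg), math.radians(phi_deg)
    shear = gamma * z * math.sin(b) * math.cos(b)                 # driving stress
    normal_eff = (gamma * z - gamma_w * m * z) * math.cos(b) ** 2 # effective normal
    return (c_soil + c_root + normal_eff * math.tan(phi)) / shear

print(f"bare soil  : FS = {factor_of_safety(35, 1.5, 0.8, 2.0, 0.0, 30):.2f}")
print(f"dense stand: FS = {factor_of_safety(35, 1.5, 0.8, 2.0, 5.0, 30):.2f}")
```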
A logical model of cooperating rule-based systems
NASA Technical Reports Server (NTRS)
Bailin, Sidney C.; Moore, John M.; Hilberg, Robert H.; Murphy, Elizabeth D.; Bahder, Shari A.
1989-01-01
A model is developed to assist in the planning, specification, development, and verification of space information systems involving distributed rule-based systems. The model is based on an analysis of possible uses of rule-based systems in control centers. This analysis is summarized as a data-flow model for a hypothetical intelligent control center. From this data-flow model, the logical model of cooperating rule-based systems is extracted. This model consists of four layers of increasing capability: (1) communicating agents, (2) belief-sharing knowledge sources, (3) goal-sharing interest areas, and (4) task-sharing job roles.
The Sanctuary Model of Trauma-Informed Organizational Change
ERIC Educational Resources Information Center
Bloom, Sandra L.; Sreedhar, Sarah Yanosy
2008-01-01
This article features the Sanctuary Model[R], a trauma-informed method for creating or changing an organizational culture. Although the model is based on trauma theory, its tenets have application in working with children and adults across a wide diagnostic spectrum. Originally developed in a short-term, acute inpatient psychiatric setting for…
A Modeling Approach to the Development of Students' Informal Inferential Reasoning
ERIC Educational Resources Information Center
Doerr, Helen M.; Delmas, Robert; Makar, Katie
2017-01-01
Teaching from an informal statistical inference perspective can address the challenge of teaching statistics in a coherent way. We argue that activities that promote model-based reasoning address two additional challenges: providing a coherent sequence of topics and promoting the application of knowledge to novel situations. We take a models and…
Gaze-Based Assistive Technology - Usefulness in Clinical Assessments.
Wandin, Helena
2017-01-01
Gaze-based assistive technology was used in informal clinical assessments. Excerpts of medical journals were analyzed by directed content analysis using a model of communicative competence. The results of this pilot study indicate that gaze-based assistive technology is a useful tool in communication assessments that can generate clinically relevant information.
de Carvalho, Elias César Araujo; Batilana, Adelia Portero; Simkins, Julie; Martins, Henrique; Shah, Jatin; Rajgor, Dimple; Shah, Anand; Rockart, Scott; Pietrobon, Ricardo
2010-02-19
Sharing of epidemiological and clinical data sets among researchers is poor at best, to the detriment of science and the community at large. The purpose of this paper is therefore to (1) describe a novel Web application designed to share information on study data sets, focusing on epidemiological clinical research in a collaborative environment, and (2) create a policy model placing this collaborative environment into the current scientific social context. The Database of Databases application was developed based on feedback from epidemiologists and clinical researchers requiring a Web-based platform that would allow for sharing of information about epidemiological and clinical study data sets in a collaborative environment. This platform should ensure that researchers can modify the information. Model-based predictions of the number of publications and of the funding resulting from combinations of different policy implementation strategies (for metadata and data sharing) were generated using System Dynamics modeling. The application allows researchers to easily upload information about clinical study data sets, which is searchable and modifiable by other users in a wiki environment. All modifications are filtered by the database principal investigator in order to maintain quality control. The application has been extensively tested and currently contains 130 clinical study data sets from the United States, Australia, China and Singapore. Model results indicated that any policy implementation would be better than the current strategy, that metadata sharing is better than data sharing, and that combined policies achieve the best results in terms of publications. Based on our empirical observations and the resulting model, the social network environment surrounding the application can assist epidemiologists and clinical researchers in contributing and searching for metadata in a collaborative environment, thus potentially facilitating collaboration efforts among research communities distributed around the globe.
Atmospheric correction for remote sensing image based on multi-spectral information
NASA Astrophysics Data System (ADS)
Wang, Yu; He, Hongyan; Tan, Wei; Qi, Wenwen
2018-03-01
Light collected by spaceborne remote sensors must pass through the Earth's atmosphere. All satellite images are affected at some level by lightwave scattering and absorption from aerosols, water vapor and particulates in the atmosphere. For generating high-quality scientific data, atmospheric correction is required to remove atmospheric effects and to convert digital number (DN) values to surface reflectance (SR). Every optical satellite in orbit observes the earth through the same atmosphere, but each satellite image is impacted differently because atmospheric conditions are constantly changing. A detailed physics-based radiative transfer model such as 6SV requires a lot of key ancillary information about the atmospheric conditions at the acquisition time. This paper investigates the simultaneous acquisition of atmospheric radiation parameters from the multi-spectral information itself, in order to improve the estimates of surface reflectance through physics-based atmospheric correction. Ancillary information on the aerosol optical depth (AOD) and total water vapor (TWV), derived from the multi-spectral information based on specific spectral properties, was used for the 6SV model. The experimentation was carried out on images from Sentinel-2, which carries a Multispectral Instrument (MSI) recording in 13 spectral bands and covering a wide range of wavelengths from 440 up to 2200 nm. The results suggest that per-pixel atmospheric correction through the 6SV model, integrating AOD and TWV derived from multi-spectral information, is better suited for accurate analysis of satellite images and quantitative remote sensing applications.
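As background to the DN-to-SR conversion discussed above, the sketch below shows the standard first step: converting digital numbers to top-of-atmosphere reflectance via a linear radiance calibration. The gain, offset and solar terms are invented; the subsequent 6SV-type correction with retrieved AOD/TWV, which produces surface reflectance, is not reproduced here.

```python
import numpy as np

dn = np.array([820., 910., 760.])              # toy digital numbers
gain, offset = 0.012, -1.2                     # sensor calibration (assumed)
radiance = gain * dn + offset                  # W m-2 sr-1 um-1
esun, d, sz = 1536.0, 1.0, np.radians(30.0)    # band solar irradiance, Earth-Sun
                                               # distance (AU), solar zenith (assumed)
toa_reflectance = np.pi * radiance * d**2 / (esun * np.cos(sz))
print(toa_reflectance.round(4))
```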
2009-09-01
Only fragments of this report's text survive in the record. The recoverable content: it describes a Web-based portal for sharing knowledge about software process-related methodologies, such as the SEI's Capability Maturity Model Integration (CMMI), the SEI's IDEAL model, and Lean Six Sigma; the portal features content areas such as software acquisition management.
Medium- and long-term electric power demand forecasting based on the big data of smart city
NASA Astrophysics Data System (ADS)
Wei, Zhanmeng; Li, Xiyuan; Li, Xizhong; Hu, Qinghe; Zhang, Haiyang; Cui, Pengjie
2017-08-01
Based on the smart city, this paper proposes a new electric power demand forecasting model, which integrates external data such as meteorological, geographic, population, enterprise and economic information into a big-data database and uses an improved algorithm to analyse electric power demand and provide decision support for decision makers. Data mining technology is used to synthesize the various kinds of information, and the information on electric power customers is analysed optimally. Scientific forecasting is performed based on the trend of electricity demand, and a smart city in north-eastern China is taken as a sample.
ERIC Educational Resources Information Center
Zheng, Qian; Liang, Chang-Yong
2017-01-01
New information technology (new IT) plays an increasingly important role in the field of education, which greatly enriches the teaching means and promotes the sharing of education resources. However, because of the existing New Digital Divide, the impact of new IT on educational equality has yet to be discussed. Based on Information System Success…
ERIC Educational Resources Information Center
Kurt, Adile Askim; Emiroglu, Bülent Gürsel
2018-01-01
The objective of the present study was to examine students' online information searching strategies, their cognitive absorption levels and the information pollution levels on the Internet based on different variables, and to determine the correlation between these variables. The study was designed with the survey model; the study group included 198…
NASA Astrophysics Data System (ADS)
Javorcik, Tomas
2017-11-01
The paper is aimed at describing a PLE (Personal Learning Environment)-based teaching model suitable for implementation in the instruction of upper primary school students. It describes the individual stages of the model and its use of ICT (Information and Communication Technologies) tools. The Personal Learning Environment is a form of instruction which allows for the meaningful use of information and communication technologies, including mobile technologies, in their entirety.
McMurray, Bob; Jongman, Allard
2012-01-01
Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model: the type of information subserving this mapping. This is crucial in speech perception, where the signal is variable and context-dependent. This study assessed the informational assumptions of several models of speech categorization, in particular the number of cues that are the basis of categorization and whether these cues represent the input veridically or have undergone compensation. We collected a corpus of 2880 fricative productions (Jongman, Wayland & Wong, 2000) spanning many talker- and vowel-contexts and measured 24 cues for each. A subset was also presented to listeners in an 8AFC phoneme categorization task. We then trained a common classification model based on logistic regression to categorize the fricative from the cue values, and manipulated the information in the training set to contrast 1) models based on a small number of invariant cues; 2) models using all cues without compensation; and 3) models in which cues underwent compensation for contextual factors. Compensation was modeled by Computing Cues Relative to Expectations (C-CuRE), a new approach to compensation that preserves fine-grained detail in the signal. Only the compensation model achieved an accuracy similar to listeners, and showed the same effects of context. Thus, even simple categorization metrics can overcome the variability in speech when sufficient information is available and compensation schemes like C-CuRE are employed. PMID:21417542
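A minimal sketch of the contrast the abstract describes, assuming C-CuRE-style compensation can be approximated by regressing each cue on context factors and keeping the residuals (cues relative to expectations); the data, context factors and effect sizes below are simulated placeholders, not the Jongman et al. corpus.

    import numpy as np
    from sklearn.linear_model import LinearRegression, LogisticRegression
    from sklearn.preprocessing import OneHotEncoder

    rng = np.random.default_rng(0)
    n = 400
    talker = rng.integers(0, 8, n)      # context factor 1 (hypothetical)
    vowel = rng.integers(0, 4, n)       # context factor 2 (hypothetical)
    fricative = rng.integers(0, 4, n)   # category label (stand-in for 8AFC)

    # One acoustic cue whose value mixes category and context effects.
    cue = fricative * 1.0 + talker * 0.5 + vowel * 0.3 + rng.normal(0, 0.4, n)

    # C-CuRE-style compensation: regress the cue on the context factors and
    # keep the residual, i.e. the cue value relative to contextual expectations.
    ctx = OneHotEncoder().fit_transform(np.c_[talker, vowel]).toarray()
    residual = cue - LinearRegression().fit(ctx, cue).predict(ctx)

    for name, X in [("raw cue", cue.reshape(-1, 1)),
                    ("compensated cue", residual.reshape(-1, 1))]:
        acc = LogisticRegression(max_iter=1000).fit(X, fricative).score(X, fricative)
        print(f"{name}: training accuracy = {acc:.2f}")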
From chart tracking to workflow management.
Srinivasan, P.; Vignes, G.; Venable, C.; Hazelwood, A.; Cade, T.
1994-01-01
The current interest in system-wide integration appears to be based on the assumption that an organization, by digitizing information and accepting a common standard for the exchange of such information, will improve the accessibility of this information and automatically experience benefits resulting from its more productive use. We do not dispute this reasoning, but assert that an organization's capacity for effective change is proportional to the understanding of the current structure among its personnel. Our workflow manager is based on the use of a Parameterized Petri Net (PPN) model which can be configured to represent an arbitrarily detailed picture of an organization. The PPN model can be animated to observe the model organization in action, and the results of the animation analyzed. This simulation is a dynamic ongoing process which changes with the system and allows members of the organization to pose "what if" questions as a means of exploring opportunities for change. We present the "workflow management system" as the natural successor to the tracking program, incorporating modeling, scheduling, reactive planning, performance evaluation, and simulation. This workflow management system is more than adequate for meeting the needs of a paper chart tracking system and, as the patient record is computerized, will serve as a planning and evaluation tool in converting the paper-based health information system into a computer-based system. PMID:7950051
Information Models, Data Requirements, and Agile Data Curation
NASA Astrophysics Data System (ADS)
Hughes, John S.; Crichton, Dan; Ritschel, Bernd; Hardman, Sean; Joyner, Ron
2015-04-01
The Planetary Data System's next generation system, PDS4, is an example of the successful use of an ontology-based Information Model (IM) to drive the development and operations of a data system. In traditional systems engineering, requirements or statements about what is necessary for the system are collected and analyzed for input into the design stage of systems development. With the advent of big data, the requirements associated with data have begun to dominate, and an ontology-based information model can be used to provide a formalized and rigorous set of data requirements. These requirements address not only the usual issues of data quantity, quality, and disposition but also data representation, integrity, provenance, context, and semantics. In addition, the use of these data requirements during system development has many characteristics of Agile Curation as proposed by Young et al. [Taking Another Look at the Data Management Life Cycle: Deconstruction, Agile, and Community, AGU 2014], namely adaptive planning, evolutionary development, early delivery, continuous improvement, and rapid and flexible response to change. For example, customers can be satisfied through early and continuous delivery of system software and services that are configured directly from the information model. This presentation will describe the PDS4 architecture and its three principal parts: the ontology-based Information Model (IM), the federated registries and repositories, and the REST-based service layer for search, retrieval, and distribution. The development of the IM will be highlighted with special emphasis on knowledge acquisition, the impact of the IM on development and operations, and the use of shared ontologies at multiple governance levels to promote system interoperability and data correlation.
Volcanogenic Massive Sulfide Deposits of the World - Database and Grade and Tonnage Models
Mosier, Dan L.; Berger, Vladimir I.; Singer, Donald A.
2009-01-01
Grade and tonnage models are useful in quantitative mineral-resource assessments. The models and database presented in this report are an update of earlier publications about volcanogenic massive sulfide (VMS) deposits. These VMS deposits include what were formerly classified as kuroko, Cyprus, and Besshi deposits. The update was necessary because of new information about some deposits, changes in information about some deposits, such as grades, tonnages, or ages, revised locations of some deposits, and reclassification of subtypes. In this report we have added new VMS deposits and removed a few incorrectly classified deposits. This global compilation of VMS deposits contains 1,090 deposits; however, it was not our intent to include every known deposit in the world. The data were recently used for mineral-deposit density models (Mosier and others, 2007; Singer, 2008). In this paper, 867 deposits were used to construct revised grade and tonnage models. Our new models are based on a reclassification of deposits by host lithologies: Felsic, Bimodal-Mafic, and Mafic volcanogenic massive sulfide deposits. Mineral-deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types vary significantly, and (2) deposits of different types occur in distinct geologic settings that can be identified from geologic maps. Mineral-deposit models combine the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Globally based deposit models allow recognition of important features and demonstrate how common different features are. Well-designed deposit models allow geologists to deduce possible mineral-deposit types in a given geologic environment and economists to determine the possible economic viability of these resources. Thus, mineral-deposit models play a central role in presenting geoscience information in a useful form to policy makers. The foundation of mineral-deposit models is information about known deposits. The purpose of this publication is to present the latest geologic information and newly developed grade and tonnage models for VMS deposits in digital form. This publication contains computer files with information on VMS deposits from around the world. It also presents new grade and tonnage models for three subtypes of VMS deposits and a text file allowing locations of all deposits to be plotted in geographic information system (GIS) programs. The data are presented in FileMaker Pro and text files to make the information available to a wider audience. The value of this information and any derived analyses depends critically on the consistent manner of data gathering. For this reason, we first discuss the rules used in this compilation. Next, we provide new grade and tonnage models and analysis of the information in the file. Finally, the fields of the data file are explained. Appendix A gives the summary statistics for the new grade-tonnage models and Appendix B displays the country codes used in the database.
Martínez-Costa, Catalina; Cornet, Ronald; Karlsson, Daniel; Schulz, Stefan; Kalra, Dipak
2015-05-01
To improve semantic interoperability of electronic health records (EHRs) by ontology-based mediation across syntactically heterogeneous representations of the same or similar clinical information. Our approach is based on a semantic layer that consists of: (1) a set of ontologies supported by (2) a set of semantic patterns. The first aspect of the semantic layer helps standardize the clinical information modeling task and the second shields modelers from the complexity of ontology modeling. We applied this approach to heterogeneous representations of an excerpt of a heart failure summary. Using a set of finite top-level patterns to derive semantic patterns, we demonstrate that those patterns, or compositions thereof, can be used to represent information from clinical models. Homogeneous querying of the same or similar information, when represented according to heterogeneous clinical models, is feasible. Our approach focuses on the meaning embedded in EHRs, regardless of their structure. This complex task requires a clear ontological commitment (ie, agreement to consistently use the shared vocabulary within some context), together with formalization rules. These requirements are supported by semantic patterns. Other potential uses of this approach, such as clinical models validation, require further investigation. We show how an ontology-based representation of a clinical summary, guided by semantic patterns, allows homogeneous querying of heterogeneous information structures. Whether there are a finite number of top-level patterns is an open question. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
A computational framework for modeling targets as complex adaptive systems
NASA Astrophysics Data System (ADS)
Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh
2017-05-01
Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and, more importantly, the various emergent behaviors displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources while dealing with the inherent uncertainty, incompleteness and time criticality of real-world information. To overcome these challenges, we present a probabilistic reasoning network-based framework called the complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well-defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, they provide unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we describe how our framework can be used for modeling targets, with a focus on methodologies for quantifying NCO performance metrics.
Informational and Normative Influences in Conformity from a Neurocomputational Perspective.
Toelch, Ulf; Dolan, Raymond J
2015-10-01
We consider two distinct influences that drive conformity behaviour. Whereas informational influences facilitate adaptive and accurate responses, normative influences bias decisions to enhance social acceptance. We explore these influences from a perspective of perceptual and value-based decision-making models and apply these models to classical works on conformity. We argue that an informational account predicts a surprising tendency to conform. Moreover, we detail how normative influences fit into this framework and interact with social influences. Finally, we explore potential neuronal substrates for informational and normative influences based on a consideration of the neurobiological literature, highlighting conceptual shortcomings particularly with regard to a failure to segregate informational and normative influences. Copyright © 2015 Elsevier Ltd. All rights reserved.
[Construction of information management-based virtual forest landscape and its application].
Chen, Chongcheng; Tang, Liyu; Quan, Bing; Li, Jianwei; Shi, Song
2005-11-01
Based on an analysis of the contents and technical characteristics of forest visualization modeling at different scales, this paper puts forward the principles and technical system for constructing an information management-based virtual forest landscape. Combining process modeling with tree geometric structure description, a software method for interactive, parameterized tree modeling was developed, and the corresponding rendering and geometric-element simplification algorithms were described to speed up rendering at run time. As a pilot study, geometric model bases for the typical tree categories in Zhangpu County of Fujian Province, southeast China were established as template files. A Virtual Forest Management System prototype was developed with a GIS component (ArcObjects), the OpenGL graphics environment, and the Visual C++ language, based on forest inventory and remote sensing data. The prototype could be used for roaming between 2D and 3D, information query and analysis, and virtual and interactive forest growth simulation, and its realism and accuracy could meet the needs of forest resource management. Some typical interfaces of the system and illustrative scene cross-sections of simulated masson pine growth under conditions of competition and thinning are presented.
Information cascade on networks
NASA Astrophysics Data System (ADS)
Hisakado, Masato; Mori, Shintaro
2016-05-01
In this paper, we discuss a voting model by considering three different kinds of networks: a random graph, the Barabási-Albert (BA) model, and a fitness model. A voting model represents the way in which public perceptions are conveyed to voters. Our voting model is constructed using two types of voters, herders and independents, and two candidates. Independents vote based on their fundamental values; herders, on the other hand, base their voting on the number of previous votes. Hence, herders vote for the majority candidates and obtain information relating to previous votes from their networks. We discuss how the phases differ depending on the network. Two kinds of phase transitions, an information cascade transition and a super-normal transition, were identified. The first of these is a transition between a state in which most voters make the correct choices and a state in which most of them are wrong. The second is a transition of convergence speed. The information cascade transition prevails over the super-normal transition when herder effects are stronger. In the BA and fitness models, the critical point of the information cascade transition is the same as that of the random network model. However, the critical point of the super-normal transition disappears when these two models are used. In conclusion, the influence of networks is shown to affect only the convergence speed and not the information cascade transition. We are therefore able to conclude that the influence of hubs on voters' perceptions is limited.
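A stripped-down simulation of these voting dynamics, assuming herders simply follow the running majority of all previous votes (the paper's herders observe votes through a network, so this is a simplification), shows the cascade: with many herders, runs split into "mostly correct" and "mostly wrong" outcomes.

    import numpy as np

    rng = np.random.default_rng(1)

    def election(n_voters=10000, herder_share=0.7, q=0.6):
        """Sequential voting: independents pick the 'correct' candidate with
        probability q; herders copy the current majority of earlier votes."""
        votes = np.zeros(n_voters, dtype=int)   # 1 = correct candidate
        tally = 0                               # running surplus of candidate 1
        for t in range(n_voters):
            if t > 0 and rng.random() < herder_share:
                votes[t] = 1 if tally > 0 else (0 if tally < 0 else rng.integers(0, 2))
            else:
                votes[t] = int(rng.random() < q)
            tally += 1 if votes[t] == 1 else -1
        return votes.mean()

    # Repeated runs illustrate the two states separated by the cascade transition.
    print([round(election(), 2) for _ in range(5)])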
Precursor Analysis for Flight- and Ground-Based Anomaly Risk Significance Determination
NASA Technical Reports Server (NTRS)
Groen, Frank
2010-01-01
This slide presentation reviews the precursor analysis for flight and ground based anomaly risk significance. It includes information on accident precursor analysis, real models vs. models, and probabilistic analysis.
Jarnevich, Catherine S.; Young, Nicholas E.; Talbert, Marian; Talbert, Colin
2018-01-01
Understanding invasive species distributions and potential invasions often requires broad‐scale information on the environmental tolerances of the species. Further, resource managers are often faced with knowing these broad‐scale relationships as well as nuanced environmental factors related to their landscape that influence where an invasive species occurs and potentially could occur. Using invasive buffelgrass (Cenchrus ciliaris), we developed global models and local models for Saguaro National Park, Arizona, USA, based on location records and literature on physiological tolerances to environmental factors to investigate whether environmental relationships of a species at a global scale are also important at local scales. In addition to correlative models with five commonly used algorithms, we also developed a model using a priori user‐defined relationships between occurrence and environmental characteristics based on a literature review. All correlative models at both scales performed well based on statistical evaluations. The user‐defined curves closely matched those produced by the correlative models, indicating that the correlative models may be capturing mechanisms driving the distribution of buffelgrass. Given climate projections for the region, both global and local models indicate that conditions at Saguaro National Park may become more suitable for buffelgrass. Combining global and local data with correlative models and physiological information provided a holistic approach to forecasting invasive species distributions.
Compound Structure-Independent Activity Prediction in High-Dimensional Target Space.
Balfer, Jenny; Hu, Ye; Bajorath, Jürgen
2014-08-01
Profiling of compound libraries against arrays of targets has become an important approach in pharmaceutical research. The prediction of multi-target compound activities also represents an attractive task for machine learning with potential for drug discovery applications. Herein, we have explored activity prediction in high-dimensional target space. Different types of models were derived to predict multi-target activities. The models included naïve Bayesian (NB) and support vector machine (SVM) classifiers based upon compound structure information and NB models derived on the basis of activity profiles, without considering compound structure. Because the latter approach can be applied to incomplete training data and principally depends on the feature independence assumption, SVM modeling was not applicable in this case. Furthermore, iterative hybrid NB models making use of both activity profiles and compound structure information were built. In high-dimensional target space, NB models utilizing activity profile data were found to yield more accurate activity predictions than structure-based NB and SVM models or hybrid models. An in-depth analysis of activity profile-based models revealed the presence of correlation effects across different targets and rationalized prediction accuracy. Taken together, the results indicate that activity profile information can be effectively used to predict the activity of test compounds against novel targets. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
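A minimal sketch of the structure-free, activity-profile-based prediction idea, with a synthetic binary compound-by-target matrix standing in for real profiling data; correlation across targets is what gives the naïve Bayesian model its signal.

    import numpy as np
    from sklearn.naive_bayes import BernoulliNB

    rng = np.random.default_rng(2)
    n_compounds, n_targets = 300, 50

    # Hypothetical binary activity matrix: rows = compounds, columns = targets.
    profiles = (rng.random((n_compounds, n_targets)) < 0.15).astype(int)
    # Make the last target correlate with two others so prediction is possible.
    profiles[:, -1] = profiles[:, 0] | profiles[:, 1]

    # Predict activity against the final target from the remaining profile
    # alone, without any compound structure information.
    X, y = profiles[:, :-1], profiles[:, -1]
    clf = BernoulliNB().fit(X[:200], y[:200])
    print("held-out accuracy:", clf.score(X[200:], y[200:]))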
Mann, G; Birkmann, C; Schmidt, T; Schaeffler, V
1999-01-01
Introduction Present solutions for the representation and retrieval of medical information from online sources are not very satisfying. Either the retrieval process lacks of precision and completeness the representation does not support the update and maintenance of the represented information. Most efforts are currently put into improving the combination of search engines and HTML based documents. However, due to the current shortcomings of methods for natural language understanding there are clear limitations to this approach. Furthermore, this approach does not solve the maintenance problem. At least medical information exceeding a certain complexity seems to afford approaches that rely on structured knowledge representation and corresponding retrieval mechanisms. Methods Knowledge-based information systems are based on the following fundamental ideas. The representation of information is based on ontologies that define the structure of the domain's concepts and their relations. Views on domain models are defined and represented as retrieval schemata. Retrieval schemata can be interpreted as canonical query types focussing on specific aspects of the provided information (e.g. diagnosis or therapy centred views). Based on these retrieval schemata it can be decided which parts of the information in the domain model must be represented explicitly and formalised to support the retrieval process. As representation language propositional logic is used. All other information can be represented in a structured but informal way using text, images etc. Layout schemata are used to assign layout information to retrieved domain concepts. Depending on the target environment HTML or XML can be used. Results Based on this approach two knowledge-based information systems have been developed. The 'Ophthalmologic Knowledge-based Information System for Diabetic Retinopathy' (OKIS-DR) provides information on diagnoses, findings, examinations, guidelines, and reference images related to diabetic retinopathy. OKIS-DR uses combinations of findings to specify the information that must be retrieved. The second system focuses on nutrition related allergies and intolerances. Information on allergies and intolerances of a patient are used to retrieve general information on the specified combination of allergies and intolerances. As a special feature the system generates tables showing food types and products that are tolerated or not tolerated by patients. Evaluation by external experts and user groups showed that the described approach of knowledge-based information systems increases the precision and completeness of knowledge retrieval. Due to the structured and non-redundant representation of information the maintenance and update of the information can be simplified. Both systems are available as WWW based online knowledge bases and CD-ROMs (cf. http://mta.gsf.de topic: products).
Mapping interictal epileptic discharges using mutual information between concurrent EEG and fMRI.
Caballero-Gaudes, César; Van de Ville, Dimitri; Grouiller, Frédéric; Thornton, Rachel; Lemieux, Louis; Seeck, Margitta; Lazeyras, François; Vulliemoz, Serge
2013-03-01
The mapping of haemodynamic changes related to interictal epileptic discharges (IED) in simultaneous electroencephalography (EEG) and functional MRI (fMRI) studies is usually carried out by means of EEG-correlated fMRI analyses where the EEG information specifies the model to test on the fMRI signal. The sensitivity and specificity critically depend on the accuracy of EEG detection and the validity of the haemodynamic model. In this study we investigated whether an information theoretic analysis based on the mutual information (MI) between the presence of epileptic activity on EEG and the fMRI data can provide further insights into the haemodynamic changes related to interictal epileptic activity. The important features of MI are that: 1) both recording modalities are treated symmetrically; 2) no a priori model of the haemodynamic response function, or assumption of a linear relationship between the spiking activity and BOLD responses, is required; and 3) no parametric model of the type of noise or its probability distribution is necessary for the computation of MI. Fourteen patients with pharmaco-resistant focal epilepsy underwent EEG-fMRI, and intracranial EEG and/or surgical resection with positive postoperative outcome (seizure freedom or considerable reduction in seizure frequency) was available in 7/14 patients. We used nonparametric statistical assessment of the MI maps based on a four-dimensional wavelet packet resampling method. The results of MI were compared to the statistical parametric maps obtained with two conventional General Linear Model (GLM) analyses based on the informed basis set (canonical HRF and its temporal and dispersion derivatives) and the Finite Impulse Response (FIR) models. The MI results were concordant with the electro-clinically or surgically defined epileptogenic area in 8/14 patients and showed the same degree of concordance as the results obtained with the GLM-based methods in 12 patients (7 concordant and 5 discordant). In one patient, the information theoretic analysis improved the delineation of the irritative zone compared with the GLM-based methods. Our findings suggest that an information theoretic analysis can provide clinically relevant information about the BOLD signal changes associated with the generation and propagation of interictal epileptic discharges. The concordance between the MI, GLM and FIR maps supports the validity of the assumptions adopted in GLM-based analyses of interictal epileptic activity with EEG-fMRI, in that they do not significantly constrain the localization of the epileptogenic zone. Copyright © 2012 Elsevier Inc. All rights reserved.
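A minimal illustration of the model-free MI computation the abstract describes, using a plain histogram estimator on synthetic data; the study's four-dimensional wavelet-based statistical assessment is not reproduced here.

    import numpy as np

    def mutual_information(x_binary, y_continuous, n_bins=8):
        """Histogram estimate of MI (in bits) between a binary event train and
        a continuous signal; no haemodynamic model or linearity is assumed."""
        edges = np.histogram_bin_edges(y_continuous, n_bins)
        y_bins = np.digitize(y_continuous, edges)
        joint = np.zeros((2, n_bins + 2))
        for xi, yi in zip(x_binary, y_bins):
            joint[xi, yi] += 1
        joint /= joint.sum()
        px = joint.sum(1, keepdims=True)
        py = joint.sum(0, keepdims=True)
        nz = joint > 0
        return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

    rng = np.random.default_rng(3)
    ied = (rng.random(500) < 0.1).astype(int)        # interictal spikes on EEG
    bold = 0.8 * ied + rng.normal(0, 1, 500)         # responsive voxel (toy)
    print("responsive voxel:", mutual_information(ied, bold))
    print("null voxel      :", mutual_information(ied, rng.normal(0, 1, 500)))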
The Betting Odds Rating System: Using soccer forecasts to forecast soccer.
Wunderlich, Fabian; Memmert, Daniel
2018-01-01
Betting odds are frequently found to outperform mathematical models in sports-related forecasting tasks; however, the factors contributing to betting odds are not fully traceable, and in contrast to rating-based forecasts no straightforward measure of team-specific quality is deducible from the betting odds. The present study investigates the approach of combining the methods of mathematical models with the information included in betting odds. A soccer forecasting model based on the well-known ELO rating system and taking advantage of betting odds as a source of information is presented. Data from almost 15,000 soccer matches (seasons 2007/2008 to 2016/2017) are used, including both domestic matches (English Premier League, German Bundesliga, Spanish Primera Division and Italian Serie A) and international matches (UEFA Champions League, UEFA Europa League). The novel betting odds-based ELO model is shown to outperform classic ELO models, thus demonstrating that betting odds prior to a match contain more relevant information than the result of the match itself. It is shown how the novel model can help to gain valuable insights into the quality of soccer teams and its development over time, thus having a practical benefit in performance analysis. Moreover, it is argued that network-based approaches might help in further improving rating and forecasting methods.
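A hedged sketch of the combination described above: a standard ELO expectation, with the rating update driven by the bookmaker-implied (margin-stripped) match probability rather than the binary result. The odds values and K-factor are placeholders, and the paper's exact update rule may differ.

    def expected(r_home, r_away):
        # Classic ELO expectation for the home side.
        return 1.0 / (1.0 + 10 ** ((r_away - r_home) / 400.0))

    def implied_prob(odds_home, odds_draw, odds_away):
        """Normalised implied home-win probability from decimal betting odds;
        the normalisation strips the bookmaker margin."""
        inv = [1.0 / odds_home, 1.0 / odds_draw, 1.0 / odds_away]
        return inv[0] / sum(inv)

    def update(r_home, r_away, odds, k=20.0):
        # Odds-based step: move ratings toward the odds-implied probability
        # instead of the observed match result.
        delta = k * (implied_prob(*odds) - expected(r_home, r_away))
        return r_home + delta, r_away - delta

    home, away = 1500.0, 1500.0
    for odds in [(1.5, 4.2, 6.0), (1.7, 3.9, 5.0)]:  # hypothetical pre-match odds
        home, away = update(home, away, odds)
    print(round(home, 1), round(away, 1))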
A simplified computational memory model from information processing.
Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang
2016-11-23
This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices based on biology and graph theory, and an intra-modular network is developed with the modeling algorithm by mapping nodes and edges; the bi-modular network is then delineated with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening by information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
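A simplified illustration of likelihood-based model selection on photon arrival data: inter-arrival times and a small EM fit of a two-component exponential mixture stand in for a full Markov modulated Poisson likelihood, and the BIC correctly prefers the two-state description of the simulated trajectory.

    import numpy as np

    rng = np.random.default_rng(4)
    # Simulated inter-arrival times from a two-state emitter (rates 1 and 8).
    dt = np.concatenate([rng.exponential(1.0, 1000), rng.exponential(1 / 8, 1000)])

    def loglik_one(dt):
        lam = 1.0 / dt.mean()                        # MLE of a single rate
        return np.sum(np.log(lam) - lam * dt), 1     # (log-likelihood, n_params)

    def loglik_two(dt, iters=200):
        # Small EM fit of a two-component exponential mixture.
        lam = np.array([0.5 / dt.mean(), 2.0 / dt.mean()])
        w = np.array([0.5, 0.5])
        for _ in range(iters):
            resp = w * lam * np.exp(-np.outer(dt, lam))   # responsibilities
            resp /= resp.sum(1, keepdims=True)
            w = resp.mean(0)
            lam = resp.sum(0) / (resp * dt[:, None]).sum(0)
        ll = np.log((w * lam * np.exp(-np.outer(dt, lam))).sum(1)).sum()
        return ll, 3                                  # two rates + one weight

    for name, (ll, k) in [("1-state", loglik_one(dt)), ("2-state", loglik_two(dt))]:
        print(name, "BIC =", round(-2 * ll + k * np.log(len(dt)), 1))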
Frisch, Simon; Dshemuchadse, Maja; Görner, Max; Goschke, Thomas; Scherbaum, Stefan
2015-11-01
Selective attention biases information processing toward stimuli that are relevant for achieving our goals. However, the nature of this bias is under debate: Does it solely rely on the amplification of goal-relevant information or is there a need for additional inhibitory processes that selectively suppress currently distracting information? Here, we explored the processes underlying selective attention with a dynamic, modeling-based approach that focuses on the continuous evolution of behavior over time. We present two dynamic neural field models incorporating the diverging theoretical assumptions. Simulations with both models showed that they make similar predictions with regard to response times but differ markedly with regard to their continuous behavior. Human data observed via mouse tracking as a continuous measure of performance revealed evidence for the model solely based on amplification but no indication of persisting selective distracter inhibition.
Timing crisis information release via television.
Wei, Jiuchang; Zhao, Dingtao; Yang, Feng; Du, Shaofu; Marinova, Dora
2010-10-01
When and how often to release information on television are important issues in crisis and emergency risk communication. There is a lot of crisis information, including warnings and news, to which people should have access, but most of it is not urgent enough to interrupt the broadcasting of television programmes. Hence, the right timing for the release of crisis information should be selected based on the importance of the crisis and any associated communication requirements. Using recursive methods, this paper builds an audience coverage model of crisis information release. Based on 2007 Household Using TV (HUT) data for Hefei City, China, the optimal combination of broadcasting sequence (with frequencies between one and eight times) is obtained using the implicit enumeration method. The developed model is applicable to effective transmission of crisis information, with the aim of reducing interference with the normal television transmission process and decreasing the psychological effect on audiences. The same model can be employed for other purposes, such as news coverage and weather and road information. © 2010 The Author(s). Journal compilation © Overseas Development Institute, 2010.
Connectionist Interaction Information Retrieval.
ERIC Educational Resources Information Center
Dominich, Sandor
2003-01-01
Discussion of connectionist views for adaptive clustering in information retrieval focuses on a connectionist clustering technique and activation spreading-based information retrieval model using the interaction information retrieval method. Presents theoretical as well as simulation results as regards computational complexity and includes…
Klijn, Sven L; Weijenberg, Matty P; Lemmens, Paul; van den Brandt, Piet A; Lima Passos, Valéria
2017-10-01
Background and objective: Group-based trajectory modelling is a model-based clustering technique applied for the identification of latent patterns of temporal changes. Despite its manifold applications in clinical and health sciences, potential problems of the model selection procedure are often overlooked. The choice of the number of latent trajectories (class enumeration), for instance, is to a large degree based on statistical criteria that are not fail-safe. Moreover, the process as a whole is not transparent. To facilitate class enumeration, we introduce a graphical summary display of several fit and model adequacy criteria, the fit-criteria assessment plot. Methods: R code that accepts universal data input is presented. The programme condenses relevant group-based trajectory modelling output information on model fit indices into automated graphical displays. Examples based on real and simulated data are provided to illustrate, assess and validate the fit-criteria assessment plot's utility. Results: The fit-criteria assessment plot provides an overview of fit criteria on a single page, placing users in an informed position to make a decision. It does not automatically select the most appropriate model but eases the model assessment procedure. Conclusions: The fit-criteria assessment plot is an exploratory visualisation tool that can be employed to assist decisions in the initial and decisive phases of group-based trajectory modelling analysis. Considering group-based trajectory modelling's widespread resonance in medical and epidemiological sciences, a more comprehensive, easily interpretable and transparent display of the iterative process of class enumeration may foster its adequate use.
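The original fit-criteria assessment plot is an R tool; a rough Python analogue of the idea, using Gaussian mixtures as a stand-in for group-based trajectory models, plots several information criteria over candidate class numbers on a single page.

    import numpy as np
    import matplotlib.pyplot as plt
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(5)
    # Hypothetical trajectories flattened to feature vectors (3 latent groups).
    data = np.vstack([rng.normal(m, 1.0, (100, 5)) for m in (0.0, 3.0, 6.0)])

    ks = range(1, 7)
    fits = [GaussianMixture(k, n_init=3, random_state=0).fit(data) for k in ks]

    # One page, several criteria: the reader decides, the plot just informs.
    plt.plot(ks, [m.bic(data) for m in fits], "o-", label="BIC")
    plt.plot(ks, [m.aic(data) for m in fits], "s-", label="AIC")
    plt.xlabel("number of latent classes")
    plt.ylabel("criterion (lower is better)")
    plt.legend()
    plt.savefig("fit_criteria_plot.png")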
A cloud-based information repository for bridge monitoring applications
NASA Astrophysics Data System (ADS)
Jeong, Seongwoon; Zhang, Yilan; Hou, Rui; Lynch, Jerome P.; Sohn, Hoon; Law, Kincho H.
2016-04-01
This paper describes an information repository to support bridge monitoring applications on a cloud computing platform. Bridge monitoring, particularly with instrumented sensors, collects a significant amount of data. In addition to sensor data, a wide variety of information such as bridge geometry, analysis models and sensor descriptions needs to be stored. Data management plays an important role in facilitating data utilization and data sharing. While bridge information modeling (BrIM) technologies and standards have been proposed to enable integration and facilitate interoperability, current BrIM standards mostly support information about bridge geometry. In this study, we extend the BrIM schema to include analysis models and sensor information. Specifically, using the OpenBrIM standards as the base, we draw on CSI Bridge, a commercial software package widely used for bridge analysis and design, and SensorML, a standard schema for sensor definition, to define the data entities necessary for bridge monitoring applications. NoSQL database systems are employed for the data repository. A cloud service infrastructure is deployed to enhance the scalability, flexibility and accessibility of the data management system. The data model and systems are tested using the bridge model and the sensor data collected at the Telegraph Road Bridge, Monroe, Michigan.
NASA Astrophysics Data System (ADS)
von Ruette, Jonas; Lehmann, Peter; Fan, Linfeng; Bickel, Samuel; Or, Dani
2017-04-01
Landslides and subsequent debris flows initiated by rainfall represent a ubiquitous natural hazard in steep mountainous regions. We integrated a landslide hydro-mechanical triggering model and associated debris flow runout pathways with a graphical user interface (GUI) to represent these natural hazards in a wide range of catchments over the globe. The STEP-TRAMM GUI provides process-based locations and sizes of landslide patterns using digital elevation models (DEM) from the SRTM database (30 m resolution) linked with soil maps from the global database SoilGrids (250 m resolution) and satellite-based information on rainfall statistics for the selected region. In a preprocessing step, STEP-TRAMM models the soil depth distribution and complements soil information to jointly capture key hydrological and mechanical properties relevant to representing local soil failure. In the presentation we will discuss features of this publicly available platform and compare landslide and debris flow patterns for different regions considering representative intense rainfall events. Model outcomes will be compared for different spatial and temporal resolutions to test the applicability of web-based information on elevation and rainfall for hazard assessment.
Image segmentation using local shape and gray-level appearance models
NASA Astrophysics Data System (ADS)
Seghers, Dieter; Loeckx, Dirk; Maes, Frederik; Suetens, Paul
2006-03-01
A new generic model-based segmentation scheme is presented, which can be trained from examples akin to the Active Shape Model (ASM) approach in order to acquire knowledge about the shape to be segmented and about the gray-level appearance of the object in the image. In the ASM approach, the intensity and shape models are typically applied alternately during optimization: first an optimal target location is selected for each landmark separately, based on local gray-level appearance information only, and the shape model is subsequently fitted to these locations. The ASM may therefore be misled by wrongly selected landmark locations. Instead, the proposed approach optimizes shape and intensity characteristics simultaneously. Local gray-level appearance information at the landmark points, extracted from feature images, is used to automatically detect a number of plausible candidate locations for each landmark. The shape information is described by multiple landmark-specific statistical models that capture local dependencies between adjacent landmarks on the shape. The shape and intensity models are combined in a single cost function that is optimized non-iteratively using dynamic programming, which allows finding the optimal landmark positions using combined shape and intensity information, without the need for initialization.
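A skeletal version of that non-iterative optimization step, assuming precomputed appearance costs and candidate locations per landmark: Viterbi-style dynamic programming over an open landmark chain combines appearance and pairwise shape costs exactly once. All costs and positions below are toy values.

    import numpy as np

    rng = np.random.default_rng(6)
    n_landmarks, n_candidates = 6, 4
    appearance = rng.random((n_landmarks, n_candidates))    # local intensity cost
    candidates = rng.random((n_landmarks, n_candidates, 2)) # (x, y) locations

    def shape_cost(p, q):
        # Penalise implausible distances between adjacent landmarks; a stand-in
        # for the learned landmark-specific shape statistics.
        return (np.linalg.norm(p - q) - 0.5) ** 2

    best = appearance[0].copy()
    back = np.zeros((n_landmarks, n_candidates), dtype=int)
    for i in range(1, n_landmarks):
        new_best = np.empty(n_candidates)
        for j in range(n_candidates):
            costs = [best[k] + shape_cost(candidates[i, j], candidates[i - 1, k])
                     for k in range(n_candidates)]
            back[i, j] = int(np.argmin(costs))
            new_best[j] = appearance[i, j] + min(costs)
        best = new_best

    # Backtrack the jointly optimal configuration.
    path = [int(np.argmin(best))]
    for i in range(n_landmarks - 1, 0, -1):
        path.append(int(back[i, path[-1]]))
    path.reverse()
    print("selected candidate per landmark:", path)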
Benedict, Matthew N.; Mundy, Michael B.; Henry, Christopher S.; Chia, Nicholas; Price, Nathan D.
2014-01-01
Genome-scale metabolic models provide a powerful means to harness information from genomes to deepen biological insights. With exponentially increasing sequencing capacity, there is an enormous need for automated reconstruction techniques that can provide more accurate models in a short time frame. Current methods for automated metabolic network reconstruction rely on gene and reaction annotations to build draft metabolic networks and algorithms to fill gaps in these networks. However, automated reconstruction is hampered by database inconsistencies, incorrect annotations, and gap filling largely without considering genomic information. Here we develop an approach for applying genomic information to predict alternative functions for genes and estimate their likelihoods from sequence homology. We show that computed likelihood values were significantly higher for annotations found in manually curated metabolic networks than those that were not. We then apply these alternative functional predictions to estimate reaction likelihoods, which are used in a new gap filling approach called likelihood-based gap filling to predict more genomically consistent solutions. To validate the likelihood-based gap filling approach, we applied it to models where essential pathways were removed, finding that likelihood-based gap filling identified more biologically relevant solutions than parsimony-based gap filling approaches. We also demonstrate that models gap filled using likelihood-based gap filling provide greater coverage and genomic consistency with metabolic gene functions compared to parsimony-based approaches. Interestingly, despite these findings, we found that likelihoods did not significantly affect consistency of gap filled models with Biolog and knockout lethality data. This indicates that the phenotype data alone cannot necessarily be used to discriminate between alternative solutions for gap filling and therefore, that the use of other information is necessary to obtain a more accurate network. All described workflows are implemented as part of the DOE Systems Biology Knowledgebase (KBase) and are publicly available via API or command-line web interface. PMID:25329157
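A toy contrast between parsimony-based and likelihood-based gap filling, with hypothetical reaction names and likelihoods; the real KBase workflow operates on full metabolic networks rather than pre-enumerated candidate sets. Parsimony picks the smallest set even if unsupported, while the likelihood-based cost prefers genomically supported reactions.

    import math

    # Candidate gap-filling solutions: sets of reactions that each restore
    # growth in the draft model, with per-reaction likelihoods from sequence
    # homology (all values hypothetical).
    likelihood = {"rxnA": 0.9, "rxnB": 0.85, "rxnC": 0.05}
    solutions = [{"rxnC"},              # fewest reactions, weak evidence
                 {"rxnA", "rxnB"}]      # two reactions, both well supported

    def parsimony_cost(sol):
        return len(sol)

    def likelihood_cost(sol, eps=1e-6):
        # Each added reaction costs -log(likelihood): cheap if supported.
        return sum(-math.log(max(likelihood[r], eps)) for r in sol)

    for name, cost in [("parsimony", parsimony_cost),
                       ("likelihood", likelihood_cost)]:
        best = min(solutions, key=cost)
        print(f"{name}-based gap filling picks: {sorted(best)}")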
Tree biomass estimation of Chinese fir (Cunninghamia lanceolata) based on Bayesian method.
Zhang, Xiongqing; Duan, Aiguo; Zhang, Jianguo
2013-01-01
Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) is the most important conifer species for timber production with huge distribution area in southern China. Accurate estimation of biomass is required for accounting and monitoring Chinese forest carbon stocking. In the study, the allometric equation W = a(D^2H)^b was used to analyze tree biomass of Chinese fir. The common methods for estimating allometric model have taken the classical approach based on the frequency interpretation of probability. However, many different biotic and abiotic factors introduce variability in Chinese fir biomass model, suggesting that parameters of biomass model are better represented by probability distributions rather than fixed values as classical method. To deal with the problem, Bayesian method was used for estimating Chinese fir biomass model. In the Bayesian framework, two priors were introduced: non-informative priors and informative priors. For informative priors, 32 biomass equations of Chinese fir were collected from published literature in the paper. The parameter distributions from published literature were regarded as prior distributions in Bayesian model for estimating Chinese fir biomass. Therefore, the Bayesian method with informative priors was better than non-informative priors and classical method, which provides a reasonable method for estimating Chinese fir biomass.
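A compact sketch of Bayesian estimation with informative priors, fitting ln W = ln a + b ln(D^2H) by random-walk Metropolis; the prior means and variances, noise level and simulated data are placeholders, not values from the 32 collected equations.

    import numpy as np

    rng = np.random.default_rng(7)
    # Simulated data: ln W = ln a + b * ln(D^2 H) + noise (toy stand-in).
    x = rng.uniform(5.0, 9.0, 60)                  # ln(D^2 H)
    y = np.log(0.05) + 0.95 * x + rng.normal(0, 0.2, 60)

    def log_post(lna, b, sigma=0.2):
        # Gaussian likelihood (residual SD fixed for brevity) plus informative
        # priors, e.g. summarised from published equations (values hypothetical):
        # ln a ~ N(-3.0, 0.5^2), b ~ N(0.9, 0.1^2).
        ll = -0.5 * np.sum(((y - lna - b * x) / sigma) ** 2)
        lp = -0.5 * ((lna + 3.0) / 0.5) ** 2 - 0.5 * ((b - 0.9) / 0.1) ** 2
        return ll + lp

    theta = np.array([-3.0, 0.9])
    lp = log_post(*theta)
    samples = []
    for _ in range(20000):
        prop = theta + rng.normal(0, 0.02, 2)      # random-walk proposal
        lp_prop = log_post(*prop)
        if np.log(rng.random()) < lp_prop - lp:    # Metropolis acceptance
            theta, lp = prop, lp_prop
        samples.append(theta.copy())
    post = np.array(samples[5000:])                # discard burn-in
    print("posterior mean (ln a, b):", post.mean(0))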
Simultaneous Semi-Distributed Model Calibration Guided by ...
Modelling approaches to transfer hydrologically relevant information from locations with streamflow measurements to locations without such measurements continue to be an active field of research for hydrologists. The Pacific Northwest Hydrologic Landscapes (PNW HL) provide a solid conceptual classification framework based on our understanding of dominant processes. A Hydrologic Landscape code (a 5-letter descriptor based on physical and climatic properties) describes each assessment unit area, and these units average 60 km^2 in area. The core function of these HL codes is to relate and transfer hydrologically meaningful information between watersheds without the need for streamflow time series. We present a novel approach based on the HL framework to answer the question "How can we calibrate models across separate watersheds simultaneously, guided by our understanding of dominant processes?". We should be able to apply the same parameterizations to assessment units of common HL codes if 1) the Hydrologic Landscapes contain hydrologic information transferable between watersheds at a sub-watershed scale and 2) we use a conceptual hydrologic model and parameters that reflect the hydrologic behavior of a watershed. This work specifically tests the ability or inability to use HL codes to inform and share model parameters across watersheds in the Pacific Northwest. EPA's Western Ecology Division has published and is refining a framework for defining la
Maturity of hospital information systems: Most important influencing factors.
Vidal Carvalho, João; Rocha, Álvaro; Abreu, António
2017-07-01
Maturity models facilitate organizational management, including information systems management, and hospital organizations are no exception. This article puts forth a study carried out with a group of experts in the field of hospital information systems management with a view to identifying the main influencing factors to be included in an encompassing maturity model for hospital information systems management. This study is based on the results of a literature review, which identified maturity models in the health field and relevant influencing factors. The development of this model is justified to the extent that the available maturity models for the hospital information systems management field reveal multiple limitations, including lack of detail, absence of tools to determine their maturity and lack of characterization for stages of maturity structured by different influencing factors.
NASA Technical Reports Server (NTRS)
McAdaragh, Raymon M.
2002-01-01
The capacity of the National Airspace System is being stressed due to the limits of current technologies. Because of this, the FAA and NASA are working to develop new technologies that increase the system's capacity while enhancing safety. Adverse weather has been determined to be a major factor in aircraft accidents and fatalities, and the FAA and NASA have developed programs to improve aviation weather information technologies and communications for system users. The Aviation Weather Information Element of the Weather Accident Prevention Project of NASA's Aviation Safety Program is currently working to develop these technologies in coordination with the FAA and industry. This paper sets forth a theoretical approach to implementing these new technologies while addressing the National Airspace System (NAS) as an evolving system with Weather Information as one of its subsystems. With this approach in place, system users will be able to acquire the type of weather information that is needed based upon the type of decision-making situation and condition that is encountered. The theoretical approach addressed in this paper takes the form of a model for weather information implementation. This model addresses the use of weather information in three decision-making situations, based upon the system user's operational perspective. The model also addresses two decision-making conditions, which are based upon the need for collaboration due to the level of support offered by the weather information provided by each new product or technology. The model is proposed for use in weather information implementation in order to provide a systems approach to the NAS. Enhancements to the NAS collaborative decision-making capabilities are also suggested.
Semantic Information Processing of Physical Simulation Based on Scientific Concept Vocabulary Model
NASA Astrophysics Data System (ADS)
Kino, Chiaki; Suzuki, Yoshio; Takemiya, Hiroshi
Scientific Concept Vocabulary (SCV) has been developed to actualize the Cognitive methodology based Data Analysis System (CDAS), which supports researchers in analyzing large-scale data efficiently and comprehensively. SCV is an information model for processing semantic information for physics and engineering. In the SCV model, all semantic information is related to substantial data and algorithms. Consequently, SCV enables a data analysis system to recognize the meaning of execution results output from a numerical simulation. This method has allowed a data analysis system to extract important information from a scientific viewpoint. Previous research has shown that SCV is able to describe simple scientific indices and scientific perceptions. However, it is difficult to describe complex scientific perceptions with the currently proposed SCV. In this paper, a new data structure for SCV is proposed in order to describe scientific perceptions in more detail. Additionally, a prototype of the new model has been constructed and applied to actual numerical simulation data. The results show that the new SCV is able to describe more complex scientific perceptions.
Structuring Legacy Pathology Reports by openEHR Archetypes to Enable Semantic Querying.
Kropf, Stefan; Krücken, Peter; Mueller, Wolf; Denecke, Kerstin
2017-05-18
Clinical information is often stored as free text, e.g. in discharge summaries or pathology reports. These documents are semi-structured using section headers, numbered lists, items and classification strings. However, it is still challenging to retrieve relevant documents, since keyword searches applied to complete unstructured documents yield many false positives. We concentrate on the processing of pathology reports as an example of unstructured clinical documents. The objective is to transform reports semi-automatically into an information structure that enables improved access and retrieval of relevant data. The data is expected to be stored in a standardized, structured way to make it accessible for queries that are applied to specific sections of a document (section-sensitive queries) and for information reuse. Our processing pipeline comprises information modelling, section boundary detection and section-sensitive queries. For enabling a focused search in unstructured data, documents are automatically structured and transformed into a patient information model specified through openEHR archetypes. The resulting XML-based pathology electronic health records (PEHRs) are queried by XQuery and visualized by XSLT in HTML. Pathology reports (PRs) can be reliably structured into sections by a keyword-based approach. The information modelling using openEHR saves time in the modelling process since many archetypes can be reused. The resulting standardized, structured PEHRs allow accessing relevant data by retrieving data matching user queries. Mapping unstructured reports into a standardized information model is a practical solution for better access to data. Archetype-based XML enables section-sensitive retrieval and visualisation by well-established XML techniques. Focussing the retrieval on particular sections has the potential to save retrieval time and improve the accuracy of the retrieval.
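A minimal illustration of a section-sensitive query on a structured report, using Python's standard-library XML tools; the element names are invented for the example and are not the actual openEHR archetype identifiers (the paper itself uses XQuery and XSLT).

    import xml.etree.ElementTree as ET

    # A stand-in for an archetype-based XML pathology record (toy markup).
    pehr = ET.fromstring("""
    <pathology_report>
      <section name="macroscopy">Specimen 2.1 cm, well demarcated.</section>
      <section name="microscopy">Invasive carcinoma, grade 2.</section>
      <section name="diagnosis">Invasive ductal carcinoma.</section>
    </pathology_report>""")

    # Section-sensitive query: search only inside the diagnosis section
    # instead of the whole unstructured document.
    for sec in pehr.findall(".//section[@name='diagnosis']"):
        if "carcinoma" in sec.text:
            print("hit in diagnosis section:", sec.text.strip())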
Herzog, Sereina A; Blaizot, Stéphanie; Hens, Niel
2017-12-18
Mathematical models offer the possibility to investigate infectious disease dynamics over time and may help in informing the design of studies. A systematic review was performed in order to determine to what extent mathematical models have been incorporated into the process of planning studies, and hence informed study design, for infectious diseases transmitted between humans and/or animals. We searched Ovid Medline and two trial registry platforms (Cochrane, WHO) using search terms related to infection, mathematical model, and study design, from the earliest dates to October 2016. Eligible publications and registered trials included mathematical models (compartmental, individual-based, or Markov) which were described and used to inform the design of infectious disease studies. We extracted information about the investigated infection, population, model characteristics, and study design. We identified 28 unique publications but no registered trials. Focusing on compartmental and individual-based models, we found 12 observational/surveillance studies and 11 clinical trials. The infections studied were split evenly between animal and human infectious diseases in the observational/surveillance studies, whereas all but one of the clinical trials concerned infections transmitted between humans. The mathematical models were used to inform, amongst other things, the required sample size (n = 16), the statistical power (n = 9), the frequency at which samples should be taken (n = 6), and from whom (n = 6). Despite the fact that mathematical models have been advocated for use at the planning stage of studies or surveillance systems, they are scarcely used. With only one exception, the publications described theoretical studies, hence not being utilised in real studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ridolfi, E.; Napolitano, F., E-mail: francesco.napolitano@uniroma1.it; Alfonso, L.
2016-06-08
The description of river topography has a crucial role in accurate one-dimensional (1D) hydraulic modelling. Specifically, cross-sectional data define the riverbed elevation, the flood-prone area, and thus, the hydraulic behavior of the river. Here, the problem of the optimal cross-sectional spacing is solved through an information theory-based concept. The optimal subset of locations is the one with the maximum information content and the minimum amount of redundancy. The original contribution is the introduction of a methodology to sample river cross sections in the presence of bridges. The approach is tested on the Grosseto River (IT) and is compared to existing guidelines. The results show that the information theory-based approach can support traditional methods to estimate rivers' cross-sectional spacing.
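The "maximum information content, minimum redundancy" criterion can be sketched as a greedy entropy-based selection over candidate cross-section locations; the scoring rule and data below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of greedy cross-section selection by maximum information content
# and minimum redundancy, in the spirit of the information-theoretic
# criterion above. Histogram binning and the scoring rule are assumptions.
import numpy as np

def entropy(x, bins=10):
    p, _ = np.histogram(x, bins=bins)
    p = p[p > 0] / p.sum()
    return -(p * np.log2(p)).sum()

def mutual_info(x, y, bins=10):
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px, py = pxy.sum(1), pxy.sum(0)
    nz = pxy > 0
    return (pxy[nz] * np.log2(pxy[nz] / np.outer(px, py)[nz])).sum()

def select_sections(series, k):
    """series: dict mapping location -> simulated water-level samples."""
    chosen = [max(series, key=lambda s: entropy(series[s]))]  # most informative first
    while len(chosen) < k:
        # next site: high own entropy, low redundancy with already-chosen sites
        score = lambda s: entropy(series[s]) - max(
            mutual_info(series[s], series[c]) for c in chosen)
        chosen.append(max((s for s in series if s not in chosen), key=score))
    return chosen
```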
Design of an Information Technology Undergraduate Program to Produce IT Versatilists
ERIC Educational Resources Information Center
Koohang, Alex; Riley, Liz; Smith, Terry; Floyd, Kevin
2010-01-01
This paper attempts to present a model for designing an IT undergraduate program that is based on the recommendations of the Association for Computing Machinery/Institute of Electrical and Electronics Engineers--Information Technology (ACM/IEEE--IT) Curriculum Model. The main intent is to use the ACM/IEEE--IT Curriculum Model's recommendations as a…
IS Success Model in E-Learning Context Based on Students' Perceptions
ERIC Educational Resources Information Center
Freeze, Ronald D.; Alshare, Khaled A.; Lane, Peggy L.; Wen, H. Joseph
2010-01-01
This study utilized the Information Systems Success (ISS) model in examining e-learning systems success. The study was built on the premise that system quality (SQ) and information quality (IQ) influence system use and user satisfaction, which in turn impact system success. A structural equation model (SEM), using LISREL, was used to test the…
ERIC Educational Resources Information Center
Wu, Wei
2010-01-01
Building information modeling (BIM) and green building are currently two major trends in the architecture, engineering and construction (AEC) industry. This research recognizes the market demand for better solutions to achieve green building certification such as LEED in the United States. It proposes a new strategy based on the integration of BIM…
Critical social theory as a model for the informatics curriculum for nursing.
Wainwright, P; Jones, P G
2000-01-01
It is widely acknowledged that the education and training of nurses in information management and technology is problematic. Drawing from recent research, this paper presents a theoretical framework within which the nature of the problems faced by nurses in the use of information may be analyzed. This framework, based on the critical social theory of Habermas, also provides a model for the informatics curriculum. The advantages of problem-based learning and multimedia web-based technologies for the delivery of learning materials within this area are also discussed.
Modelling and Simulation of Search Engine
NASA Astrophysics Data System (ADS)
Nasution, Mahyuddin K. M.
2017-01-01
The best tool currently available for accessing information is a search engine. Meanwhile, the information space has its own behaviour. Systematically, an information space needs to be characterized mathematically so that the features associated with it can be identified easily. This paper reveals some characteristics of search engines based on a model of a document collection, and then estimates their impact on the feasibility of the retrieved information. We derive characteristics of search engines from lemmas and theorems about singletons and doubletons of terms, and then compute these characteristics statistically to simulate the behaviour of search engines, in this case Google and Yahoo. The two search engines behave differently, although in theory both are based on the same concept of a document collection.
Modeling of information flows in natural gas storage facility
NASA Astrophysics Data System (ADS)
Ranjbari, Leyla; Bahar, Arifah; Aziz, Zainal Abdul
2013-09-01
The paper considers natural-gas storage valuation based on the information-based pricing framework of Brody-Hughston-Macrina (BHM). As opposed to many studies in which the associated filtration is considered pre-specified, this work constructs the filtration in terms of the information available to the market. The value of the storage facility is given by the sum of the discounted expectations of the cash flows under the risk-neutral measure, conditional on the constructed filtration with a Brownian bridge noise term. In order to model the flow of information about the cash flows, we assume the existence of a fixed pricing kernel within a liquid, homogeneous and incomplete market without arbitrage.
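In the BHM framework, the market observes an information process xi_t = sigma*t*X + beta_t, where X is the cash flow and beta_t a Brownian bridge vanishing at the payment date T. The sketch below simulates this process for a single cash flow and approximates the discounted conditional expectation by a coarse regression; all parameters and the three-scenario cash flow are invented for illustration, and the regression stands in for the exact BHM conditional-expectation formula:

```python
# Monte Carlo sketch of a BHM-style information process for one cash
# flow X paid at time T: xi_t = sigma*t*X + beta_t. Parameters invented.
import numpy as np

rng = np.random.default_rng(0)
T, sigma, r, n_paths, n_steps = 1.0, 1.2, 0.05, 20000, 100
t_grid = np.linspace(0.0, T, n_steps + 1)

X = rng.choice([0.8, 1.0, 1.25], p=[0.3, 0.4, 0.3], size=n_paths)  # cash-flow scenarios
# Brownian bridge on [0, T]: B_t - (t/T) * B_T from a standard Brownian motion
dW = rng.normal(0.0, np.sqrt(T / n_steps), (n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), dW.cumsum(1)], axis=1)
beta = B - (t_grid / T) * B[:, -1:]

xi = sigma * t_grid * X[:, None] + beta  # market information process
# Price at time t: discounted E[X | xi_t], here approximated by a linear
# regression of X on the observed signal rather than the exact BHM formula.
t_idx = n_steps // 2
coef = np.polyfit(xi[:, t_idx], X, deg=1)
price_t = np.exp(-r * (T - t_grid[t_idx])) * np.polyval(coef, xi[:, t_idx])
print(price_t.mean())
```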
Modelling Situation Awareness Information for Naval Decision Support Design
2003-10-01
Doering, Bernhard; Doerfel, Gert
… knowledge-based user interfaces. For developing such interfaces, information of the three different SA levels which operators need in performing their … large scale on situation awareness of operators, which is defined as the state of operator knowledge about the external environment resulting from …
ERIC Educational Resources Information Center
Frazier, Thomas W.; Youngstrom, Eric A.
2006-01-01
In this article, the authors illustrate a step-by-step process of acquiring and integrating information according to the recommendations of evidence-based practices. A case example models the process, leading to specific recommendations regarding instruments and strategies for evidence-based assessment (EBA) of attention-deficit/hyperactivity…
Glaser, Robert; Venus, Joachim
2017-04-01
The data presented in this article are related to the research article entitled "Model-based characterization of growth performance and l-lactic acid production with high optical purity by thermophilic Bacillus coagulans in a lignin-supplemented mixed substrate medium (R. Glaser and J. Venus, 2016) [1]". This data survey provides information on the characterization of three Bacillus coagulans strains. Information on cofermentation of lignocellulose-related sugars in lignin-containing media is given. Basic characterization data are supported by optical-density high-throughput screening and parameter adjustment to logistic growth models. Lab-scale fermentation procedures are examined by fitting a Monod kinetics-based growth model. Lignin consumption is analyzed using the data on decolorization of a lignin-supplemented minimal medium.
Comparing models of the combined-stimulation advantage for speech recognition.
Micheyl, Christophe; Oxenham, Andrew J
2012-05-01
The "combined-stimulation advantage" refers to an improvement in speech recognition when cochlear-implant or vocoded stimulation is supplemented by low-frequency acoustic information. Previous studies have been interpreted as evidence for "super-additive" or "synergistic" effects in the combination of low-frequency and electric or vocoded speech information by human listeners. However, this conclusion was based on predictions of performance obtained using a suboptimal high-threshold model of information combination. The present study shows that a different model, based on Gaussian signal detection theory, can predict surprisingly large combined-stimulation advantages, even when performance with either information source alone is close to chance, without involving any synergistic interaction. A reanalysis of published data using this model reveals that previous results, which have been interpreted as evidence for super-additive effects in perception of combined speech stimuli, are actually consistent with a more parsimonious explanation, according to which the combined-stimulation advantage reflects an optimal combination of two independent sources of information. The present results do not rule out the possible existence of synergistic effects in combined stimulation; however, they emphasize the possibility that the combined-stimulation advantages observed in some studies can be explained simply by non-interactive combination of two information sources.
Forecasting runout of rock and debris avalanches
Iverson, Richard M.; Evans, S.G.; Mugnozza, G.S.; Strom, A.; Hermanns, R.L.
2006-01-01
Physically based mathematical models and statistically based empirical equations each may provide useful means of forecasting runout of rock and debris avalanches. This paper compares the foundations, strengths, and limitations of a physically based model and a statistically based forecasting method, both of which were developed to predict runout across three-dimensional topography. The chief advantage of the physically based model results from its ties to physical conservation laws and well-tested axioms of soil and rock mechanics, such as the Coulomb friction rule and effective-stress principle. The output of this model provides detailed information about the dynamics of avalanche runout, at the expense of high demands for accurate input data, numerical computation, and experimental testing. In comparison, the statistical method requires relatively modest computation and no input data except identification of prospective avalanche source areas and a range of postulated avalanche volumes. Like the physically based model, the statistical method yields maps of predicted runout, but it provides no information on runout dynamics. Although the two methods differ significantly in their structure and objectives, insights gained from one method can aid refinement of the other.
Time series sightability modeling of animal populations.
ArchMiller, Althea A; Dorazio, Robert M; St Clair, Katherine; Fieberg, John R
2018-01-01
Logistic regression models, or "sightability models", fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it resulted in similar estimates as the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
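For reference, the modified Horvitz-Thompson idea weights each animal group seen in a detection-only survey by the inverse of its predicted detection probability from the logistic sightability model. A minimal sketch, with invented coefficients and data, that omits the plot-sampling correction of the full mHT estimator:

```python
# Sketch of a logistic sightability model feeding a (simplified)
# Horvitz-Thompson-style abundance estimate. Coefficients and data are
# illustrative; the real mHT estimator also corrects for plot sampling.
import numpy as np

beta0, beta1 = -0.5, -0.08  # assumed fit from marked-animal detection data

def p_detect(visual_obstruction):
    """Detection probability from the sightability model."""
    eta = beta0 + beta1 * visual_obstruction
    return 1.0 / (1.0 + np.exp(-eta))

# detection-only survey: group sizes and their visual-obstruction covariate
group_size = np.array([3, 1, 5, 2, 4])
obstruction = np.array([10.0, 60.0, 25.0, 80.0, 40.0])

weights = 1.0 / p_detect(obstruction)   # inverse-probability weights
abundance = float((group_size * weights).sum())
print(round(abundance, 1))
```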
Smarter than others? Conjectures in lowest unique bid auctions.
Zhou, Cancan; Dong, Hongguang; Hu, Rui; Chen, Qinghua
2015-01-01
Research concerning various types of auctions, such as English auctions, Dutch auctions, highest-price sealed-bid auctions, and second-price sealed-bid auctions, is always a topic of considerable interest in interdisciplinary fields. The type of auction, known as a lowest unique bid auction (LUBA), has also attracted significant attention. Various models have been proposed, but they often fail to explain satisfactorily the real bid-distribution characteristics. This paper discusses LUBA bid-distribution characteristics, including the inverted-J shape and the exponential decrease in the upper region. The authors note that this type of distribution, which initially increases and later decreases, cannot be derived from the symmetric Nash equilibrium framework based on perfect information that has previously been used. A novel optimization model based on non-perfect information is presented. The kernel of this model is the premise that agents make decisions to achieve maximum profit based on imaginary information or assumptions regarding the behavior of others.
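For intuition, the winner-determination rule of a LUBA is easy to simulate; the random bidding behaviour below is a toy assumption, not the non-perfect-information optimization model proposed in the paper:

```python
# Toy simulation of a lowest unique bid auction (LUBA): the winning bid is
# the smallest amount offered by exactly one bidder. Uniform random bids
# are a simplifying assumption, not the paper's behavioural model.
import random
from collections import Counter

def luba_winner(bids):
    counts = Counter(bids)
    unique = [b for b, c in counts.items() if c == 1]
    return min(unique) if unique else None

random.seed(1)
wins = Counter()
for _ in range(10000):
    bids = [random.randint(1, 50) for _ in range(200)]  # 200 agents, bids 1..50
    w = luba_winner(bids)
    if w is not None:
        wins[w] += 1
print(wins.most_common(5))  # empirical distribution of winning bids
```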
NASA Astrophysics Data System (ADS)
Hamid, H.
2018-01-01
The purpose of this study is to analyze the improvement of students' mathematical critical thinking (CT) ability in a Real Analysis course using the Rigorous Teaching and Learning (RTL) model with informal argument. In addition, this research also examined students' CT with respect to their initial mathematical ability (IMA). This study was conducted at a private university in the academic year 2015/2016. The study employed the quasi-experimental method with a pretest-posttest control group design. The participants were 83 students: 43 in the experimental group and 40 in the control group. The findings showed that students in the experimental group outperformed students in the control group in mathematical CT ability at every IMA level (high, medium, low) in learning Real Analysis. In addition, for students of medium IMA, the improvement in mathematical CT ability of those exposed to the RTL model with informal argument was greater than that of students exposed to conventional instruction (CI). There was no interaction effect between instructional model (RTL vs. CI) and IMA level (high, medium, low) on the improvement of mathematical CT ability. Finally, at each IMA level there was a significant improvement in the achievement of all indicators of mathematical CT ability for students exposed to the RTL model with informal argument compared with students exposed to CI.
A New Perspective on Modeling Groundwater-Driven Health Risk With Subjective Information
NASA Astrophysics Data System (ADS)
Ozbek, M. M.
2003-12-01
Fuzzy rule-based systems provide an efficient environment for modeling expert information in the context of risk management for groundwater contamination problems. In general, their use in the form of conditional pieces of knowledge has been either as a tool for synthesizing control laws from data (i.e., conjunction-based models), or from a knowledge representation and reasoning perspective in Artificial Intelligence (i.e., implication-based models), where only the latter may lead to coherence problems (e.g., input data that lead to logical inconsistency when added to the knowledge base). We implement a two-fold extension to an implication-based groundwater risk model (Ozbek and Pinder, 2002): 1) the implementation of sufficient conditions for a coherent knowledge base, and 2) the interpolation of expert statements to supplement gaps in knowledge. The original model assumes statements of public health professionals for the characterization of the exposed individual and the relation of dose and pattern of exposure to its carcinogenic effects. We demonstrate the utility of the extended model in that it: 1) identifies inconsistent statements and establishes coherence in the knowledge base, and 2) minimizes the burden of knowledge elicitation from the experts by utilizing existing knowledge in an optimal fashion.
A Conceptual Model of the Cognitive Processing of Environmental Distance Information
NASA Astrophysics Data System (ADS)
Montello, Daniel R.
I review theories and research on the cognitive processing of environmental distance information by humans, particularly that acquired via direct experience in the environment. The cognitive processes I consider for acquiring and thinking about environmental distance information include working-memory, nonmediated, hybrid, and simple-retrieval processes. Based on my review of the research literature, and additional considerations about the sources of distance information and the situations in which it is used, I propose an integrative conceptual model to explain the cognitive processing of distance information that takes account of the plurality of possible processes and information sources, and describes conditions under which particular processes and sources are likely to operate. The mechanism of summing vista distances is identified as widely important in situations with good visual access to the environment. Heuristics based on time, effort, or other information are likely to play their most important role when sensory access is restricted.
Agent-based Modeling with MATSim for Hazards Evacuation Planning
NASA Astrophysics Data System (ADS)
Jones, J. M.; Ng, P.; Henry, K.; Peters, J.; Wood, N. J.
2015-12-01
Hazard evacuation planning requires robust modeling tools and techniques, such as least cost distance or agent-based modeling, to gain an understanding of a community's potential to reach safety before event (e.g. tsunami) arrival. Least cost distance modeling provides a static view of the evacuation landscape with an estimate of travel times to safety from each location in the hazard space. With this information, practitioners can assess a community's overall ability for timely evacuation. More information may be needed if evacuee congestion creates bottlenecks in the flow patterns. Dynamic movement patterns are best explored with agent-based models that simulate movement of and interaction between individual agents as evacuees through the hazard space, reacting to potential congestion areas along the evacuation route. The multi-agent transport simulation model MATSim is an agent-based modeling framework that can be applied to hazard evacuation planning. Developed jointly by universities in Switzerland and Germany, MATSim is open-source software written in Java and freely available for modification or enhancement. We successfully used MATSim to illustrate tsunami evacuation challenges in two island communities in California, USA, that are impacted by limited escape routes. However, working with MATSim's data preparation, simulation, and visualization modules in an integrated development environment requires a significant investment of time to develop the software expertise to link the modules and run a simulation. To facilitate our evacuation research, we packaged the MATSim modules into a single application tailored to the needs of the hazards community. By exposing the modeling parameters of interest to researchers in an intuitive user interface and hiding the software complexities, we bring agent-based modeling closer to practitioners and provide access to the powerful visual and analytic information that this modeling can provide.
Modeling web-based information seeking by users who are blind.
Brunsman-Johnson, Carissa; Narayanan, Sundaram; Shebilske, Wayne; Alakke, Ganesh; Narakesari, Shruti
2011-01-01
This article describes website information seeking strategies used by users who are blind and compares those with sighted users. It outlines how assistive technologies and website design can aid users who are blind while information seeking. People who are blind and sighted are tested using an assessment tool and performing several tasks on websites. The times and keystrokes are recorded for all tasks as well as commands used and spatial questioning. Participants who are blind used keyword-based search strategies as their primary tool to seek information. Sighted users also used keyword search techniques if they were unable to find the information using a visual scan of the home page of a website. A proposed model based on the present study for information seeking is described. Keywords are important in the strategies used by both groups of participants and providing these common and consistent keywords in locations that are accessible to the users may be useful for efficient information searching. The observations suggest that there may be a difference in how users search a website that is familiar compared to one that is unfamiliar. © 2011 Informa UK, Ltd.
Preliminary description of the area navigation software for a microcomputer-based Loran-C receiver
NASA Technical Reports Server (NTRS)
Oguri, F.
1983-01-01
The development of new navigation software and its implementation on a microcomputer (MOS 6502) to provide high-quality navigation information is described. The software derives Area/Route Navigation (RNAV) information from Time Differences (TDs) in raw form, using both an elliptical Earth model and a spherical model, and is intended for a microcomputer-based Loran-C receiver. To compute navigation information, a MOS 6502 microcomputer and an arithmetic processor chip (AM 9511A) were combined with the Loran-C receiver. Final data reveal that the software does indeed provide accurate information with reasonable execution times.
Rain/No-Rain Identification from Bispectral Satellite Information using Deep Neural Networks
NASA Astrophysics Data System (ADS)
Tao, Y.
2016-12-01
Satellite-based precipitation estimation products have the advantages of high resolution and global coverage. However, they still suffer from insufficient accuracy. To estimate precipitation accurately from satellite data, two aspects are most important: sufficient precipitation information in the satellite observations and proper methodologies to extract that information effectively. This study applies state-of-the-art machine learning methodologies to bispectral satellite information for rain/no-rain detection. Specifically, we use deep neural networks to extract features from the infrared and water vapor channels and connect them to precipitation identification. To evaluate the effectiveness of the methodology, we first apply it to the infrared data only (Model DL-IR only), the most commonly used input for satellite-based precipitation estimation. We then incorporate water vapor data (Model DL-IR + WV) to further improve prediction performance. The radar Stage IV dataset is used as the ground measurement for parameter calibration. The operational product Precipitation Estimation from Remotely Sensed Information Using Artificial Neural Networks Cloud Classification System (PERSIANN-CCS) is used as a reference to compare the performance of both models in the winter and summer seasons. The experiments show significant improvement for both models in precipitation identification. The overall performance gains in the Critical Success Index (CSI) are 21.60% and 43.66% over the verification periods for the DL-IR only and DL-IR + WV models compared to PERSIANN-CCS, respectively. Moreover, specific case studies show that the water vapor channel information and the deep neural networks effectively help recover a large number of missing precipitation pixels under warm clouds while reducing false alarms under cold clouds.
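For reference, the CSI used above scores binary rain/no-rain detection as hits divided by hits plus misses plus false alarms. A minimal sketch with invented arrays:

```python
# Critical Success Index (CSI) for binary rain/no-rain detection:
# CSI = hits / (hits + misses + false_alarms). Example arrays are invented.
import numpy as np

def csi(predicted, observed):
    predicted = np.asarray(predicted, bool)
    observed = np.asarray(observed, bool)
    hits = np.sum(predicted & observed)
    misses = np.sum(~predicted & observed)
    false_alarms = np.sum(predicted & ~observed)
    return hits / (hits + misses + false_alarms)

pred = [1, 1, 0, 0, 1, 0, 1, 0]
obs  = [1, 0, 0, 1, 1, 0, 1, 0]
print(f"CSI = {csi(pred, obs):.2f}")
```

Unlike plain accuracy, the CSI ignores correct no-rain pixels, which dominate most scenes, so it is a stricter score for rare-event detection.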
Weighted functional linear regression models for gene-based association analysis.
Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I
2018-01-01
Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P < 0.1 in at least one analysis had lower P values with weighted models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10⁻⁶), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.
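A common realization of beta-distribution allele-frequency weights evaluates the beta density at each variant's minor allele frequency, so that rare variants receive larger weights. The (1, 25) parameters below are the convention popularized by kernel-based tests and are an assumption here, not necessarily the paper's choice:

```python
# Variant weights from the beta density evaluated at minor allele
# frequencies (MAFs), up-weighting rare variants. The (1, 25) parameters
# are a common convention, assumed rather than taken from the paper.
from scipy.stats import beta

def beta_weights(mafs, a=1.0, b=25.0):
    return [beta.pdf(m, a, b) for m in mafs]

mafs = [0.001, 0.01, 0.05, 0.2]
for m, w in zip(mafs, beta_weights(mafs)):
    print(f"MAF={m:<6} weight={w:.2f}")
```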
Information model construction of MES oriented to mechanical blanking workshop
NASA Astrophysics Data System (ADS)
Wang, Jin-bo; Wang, Jin-ye; Yue, Yan-fang; Yao, Xue-min
2016-11-01
The Manufacturing Execution System (MES) is one of the crucial technologies for implementing informatization management in manufacturing enterprises, and the construction of its information model is the basis of MES database development. Based on an analysis of the manufacturing process information in a mechanical blanking workshop and the information requirements of each MES function module, the IDEF1X method was adopted to construct the information model of an MES oriented to the mechanical blanking workshop. A detailed description of the data structures included in each MES function module and their logical relationships is given from the point of view of information relationships, which lays the foundation for the design of the MES database.
The Smoothed Dirichlet Distribution: Understanding Cross-Entropy Ranking in Information Retrieval
2006-07-01
Unigram language modeling is a successful probabilistic framework for Information Retrieval (IR) that uses … the Relevance Model (RM), a state-of-the-art model for IR in the language modeling framework that uses the same cross-entropy as its ranking function … In addition, the SD-based classifier provides more flexibility than RM in modeling documents owing to a consistent generative framework. …
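The cross-entropy ranking function referred to here scores a document by -Σ_w P(w|query model)·log P(w|document model), with the document model smoothed to avoid zero probabilities. A minimal sketch with a toy corpus; the additive smoothing used here is a simplifying assumption:

```python
# Cross-entropy ranking in the unigram language-modeling framework:
# score(d, q) = -sum_w P(w|q) * log P(w|d); lower is better.
# Toy corpus and smoothing constant are illustrative.
import math
from collections import Counter

def lm(text, vocab, mu=1.0):
    """Unigram model with additive smoothing over a fixed vocabulary."""
    tf = Counter(text.split())
    total = sum(tf.values())
    return {w: (tf[w] + mu) / (total + mu * len(vocab)) for w in vocab}

docs = {"d1": "rain model satellite rain", "d2": "auction bid model"}
query = "satellite rain"

vocab = set(query.split()).union(*(d.split() for d in docs.values()))
q_model = lm(query, vocab)

def cross_entropy(q_model, d_model):
    return -sum(pq * math.log(d_model[w]) for w, pq in q_model.items())

for name, text in docs.items():
    print(name, round(cross_entropy(q_model, lm(text, vocab)), 3))
```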
USDA-ARS?s Scientific Manuscript database
Land surface temperature (LST) provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition as well as providing useful information for constraining prognostic land surface models. This presentation describes a robust but relatively simple LS...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brost, Randolph C.; McLendon, William Clarence,
2013-01-01
Modeling geospatial information with semantic graphs enables search for sites of interest based on relationships between features, without requiring strong a priori models of feature shape or other intrinsic properties. Geospatial semantic graphs can be constructed from raw sensor data with suitable preprocessing to obtain a discretized representation. This report describes initial work toward extending geospatial semantic graphs to include temporal information, and initial results applying semantic graph techniques to SAR image data. We describe an efficient graph structure that includes geospatial and temporal information, which is designed to support simultaneous spatial and temporal search queries. We also report a preliminary implementation of feature recognition, semantic graph modeling, and graph search based on input SAR data. The report concludes with lessons learned and suggestions for future improvements.
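The sketch below illustrates the relationship-based query idea on a tiny graph; the feature names, attributes, and query are invented and do not reflect the report's actual data structures:

```python
# Sketch of a geospatial-temporal semantic graph queried by relationships
# between features rather than feature shape. All names are illustrative.
import networkx as nx

g = nx.MultiDiGraph()
g.add_node("building_17", kind="building", centroid=(34.05, -106.9))
g.add_node("road_3", kind="road")
g.add_node("vehicle_a", kind="vehicle")
g.add_edge("building_17", "road_3", relation="adjacent_to")
g.add_edge("vehicle_a", "road_3", relation="on",
           t_start="2012-06-01T10:02", t_end="2012-06-01T10:07")  # temporal extent

def find(g, kind, relation, target_kind):
    """Nodes of `kind` linked by `relation` to any node of `target_kind`."""
    return [u for u, v, d in g.edges(data=True)
            if d.get("relation") == relation
            and g.nodes[u]["kind"] == kind
            and g.nodes[v]["kind"] == target_kind]

print(find(g, "vehicle", "on", "road"))  # -> ['vehicle_a']
```

A temporal query would additionally filter edges whose [t_start, t_end] interval overlaps the time window of interest.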
Enhanced project management tool
NASA Technical Reports Server (NTRS)
Hsu, Chen-Jung (Inventor); Patel, Hemil N. (Inventor); Maluf, David A. (Inventor); Moh Hashim, Jairon C. (Inventor); Tran, Khai Peter B. (Inventor)
2012-01-01
A system for managing a project that includes multiple tasks and a plurality of workers. Input information includes characterizations based upon a human model, a team model and a product model. Periodic reports, such as one or more of a monthly report, a task plan report, a schedule report, a budget report and a risk management report, are generated and made available for display or further analysis or collection into a customized report template. An extensible database allows searching for information based upon context and upon content. Seven different types of project risks are addressed, including non-availability of required skill mix of workers. The system can be configured to exchange data and results with corresponding portions of similar project analyses, and to provide user-specific access to specified information.
NASA Technical Reports Server (NTRS)
Shiau, Jyh-Jen; Wahba, Grace; Johnson, Donald R.
1986-01-01
A new method, based on partial spline models, is developed for including specified discontinuities in otherwise smooth two- and three-dimensional objective analyses. The method is appropriate for including tropopause height information in two- and three-dimensional temperature analyses, using the O'Sullivan-Wahba physical variational method for analysis of satellite radiance data, and may in principle be used in a combined variational analysis of observed, forecast, and climate information. A numerical method for its implementation is described and a prototype two-dimensional analysis based on simulated radiosonde and tropopause height data is shown. The method may also be appropriate for other geophysical problems, such as modeling the ocean thermocline, fronts, discontinuities, etc.
NASA Technical Reports Server (NTRS)
Joshi, Anjali; Heimdahl, Mats P. E.; Miller, Steven P.; Whalen, Mike W.
2006-01-01
System safety analysis techniques are well established and are used extensively during the design of safety-critical systems. Despite this, most of the techniques are highly subjective and dependent on the skill of the practitioner. Since these analyses are usually based on an informal system model, it is unlikely that they will be complete, consistent, and error free. In fact, the lack of precise models of the system architecture and its failure modes often forces the safety analysts to devote much of their effort to gathering architectural details about the system behavior from several sources and embedding this information in the safety artifacts such as the fault trees. This report describes Model-Based Safety Analysis, an approach in which the system and safety engineers share a common system model created using a model-based development process. By extending the system model with a fault model as well as relevant portions of the physical system to be controlled, automated support can be provided for much of the safety analysis. We believe that by using a common model for both system and safety engineering and automating parts of the safety analysis, we can both reduce the cost and improve the quality of the safety analysis. Here we present our vision of model-based safety analysis and discuss the advantages and challenges in making this approach practical.
Efficient Authorization of Rich Presence Using Secure and Composed Web Services
NASA Astrophysics Data System (ADS)
Li, Li; Chou, Wu
This paper presents an extended Role-Based Access Control (RBAC) model for efficient authorization of rich presence using secure web services composed with an abstract presence data model. Following the information symmetry principle, the standard RBAC model is extended to support context sensitive social relations and cascaded authority. In conjunction with the extended RBAC model, we introduce an extensible presence architecture prototype using WS-Security and WS-Eventing to secure rich presence information exchanges based on PKI certificates. Applications and performance measurements of our presence system are presented to show that the proposed RBAC framework for presence and collaboration is well suited for real-time communication and collaboration.
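To make the access-control idea concrete, the sketch below shows a plain RBAC check extended with a context-sensitive social relation, in the spirit of the extension described; the entities and rules are invented and this is not the paper's schema:

```python
# Minimal sketch of an RBAC check extended with context-sensitive social
# relations: permission depends on the role AND the watcher's relation to
# the presentity in the current context. All names and rules are invented.
ROLE_PERMS = {
    "colleague": {"see_availability"},
    "teammate": {"see_availability", "see_location"},
    "manager": {"see_availability", "see_location", "see_activity"},
}
# (watcher, presentity, context) -> role granted in that context
RELATIONS = {
    ("alice", "bob", "work_hours"): "teammate",
    ("alice", "bob", "off_hours"): "colleague",
}

def authorized(watcher, presentity, context, attribute):
    role = RELATIONS.get((watcher, presentity, context))
    return role is not None and attribute in ROLE_PERMS.get(role, set())

print(authorized("alice", "bob", "work_hours", "see_location"))  # True
print(authorized("alice", "bob", "off_hours", "see_location"))   # False
```

In the paper's setting such checks would be enforced at the web-service layer, with WS-Security protecting the presence-information exchanges themselves.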
Bestelmeyer, Brandon T.; Williamson, Jeb C.; Talbot, Curtis J.; Cates, Greg W.; Duniway, Michael C.; Brown, Joel R.
2016-01-01
State-and-transition models (STMs) are useful tools for management, but they can be difficult to use and have limited content. STMs created for groups of related ecological sites could simplify and improve their utility. The amount of information linked to models can be increased using tables that communicate management interpretations and important within-group variability. We created a new web-based information system (the Ecosystem Dynamics Interpretive Tool) to house STMs, associated tabular information, and other ecological site data and descriptors. Fewer, more informative, better organized, and easily accessible STMs should increase the accessibility of science information.
Comparing a Japanese and a German hospital information system.
Jahn, F; Issler, L; Winter, A; Takabayashi, K
2009-01-01
To examine the architectural differences and similarities of a Japanese and a German hospital information system (HIS) in a case study. This cross-cultural comparison, which focuses on structural quality characteristics, offers the chance to gain new insights into different HIS architectures that possibly cannot be obtained by inner-country comparisons. A reference model for the domain layer of hospital information systems, containing the typical enterprise functions of a hospital, provides the basis of comparison for the two hospital information systems. 3LGM² models, which describe the two HISs and are based on that reference model, are used to assess several structural quality criteria; four of these criteria are introduced in detail. The two HISs differ in terms of the four structural quality criteria examined. Whereas the centralized architecture of the hospital information system at Chiba University Hospital causes only few functional redundancies and leads to a low implementation of communication standards, the hospital information system at the University Hospital of Leipzig, having a decentralized architecture, exhibits more functional redundancies and a higher use of communication standards. Using a model-based comparison, it was possible to detect remarkable differences between the observed hospital information systems of completely different cultural areas. However, the usability of 3LGM² models for comparisons has to be improved in order to apply key figures and to assess or benchmark the structural quality of health information system architectures more thoroughly.
von Krogh, Gunn; Nåden, Dagfinn; Aasland, Olaf Gjerløw
2012-10-01
To present the results from the test site application of the documentation model KPO (quality assurance, problem solving and caring) designed to impact the quality of nursing information in electronic patient record (EPR). The KPO model was developed by means of consensus group and clinical testing. Four documentation arenas and eight content categories, nursing terminologies and a decision-support system were designed to impact the completeness, comprehensiveness and consistency of nursing information. The testing was performed in a pre-test/post-test time series design, three times at a one-year interval. Content analysis of nursing documentation was accomplished through the identification, interpretation and coding of information units. Data from the pre-test and post-test 2 were subjected to statistical analyses. To estimate the differences, paired t-tests were used. At post-test 2, the information is found to be more complete, comprehensive and consistent than at pre-test. The findings indicate that documentation arenas combining work flow and content categories deduced from theories on nursing practice can influence the quality of nursing information. The KPO model can be used as guide when shifting from paper-based to electronic-based nursing documentation with the aim of obtaining complete, comprehensive and consistent nursing information. © 2012 Blackwell Publishing Ltd.
Modeling Dynamic Food Choice Processes to Understand Dietary Intervention Effects.
Marcum, Christopher Steven; Goldring, Megan R; McBride, Colleen M; Persky, Susan
2018-02-17
Meal construction is largely governed by nonconscious and habit-based processes that can be represented as a collection of individual, micro-level food choices that eventually give rise to a final plate. Despite this, dietary behavior intervention research rarely captures these micro-level food choice processes, instead measuring outcomes at aggregated levels. This is due in part to a dearth of analytic techniques for modeling these dynamic time-series events. The current article addresses this limitation by applying a generalization of the relational event framework to model micro-level food choice behavior following an educational intervention. Relational event modeling was used to model the food choices that 221 mothers made for their child following receipt of an information-based intervention. Participants were randomized to receive either (a) control information; (b) childhood obesity risk information; or (c) childhood obesity risk information plus a personalized family history-based risk estimate for their child. Participants then made food choices for their child in a virtual reality-based food buffet simulation. Micro-level aspects of the built environment, such as the ordering of each food in the buffet, were influential. Other dynamic processes such as choice inertia also influenced food selection. Among participants receiving the strongest intervention condition, choice inertia decreased and the overall rate of food selection increased. Modeling food selection processes can elucidate the points at which interventions exert their influence. Researchers can leverage these findings to gain insight into nonconscious and uncontrollable aspects of food selection that influence dietary outcomes, which can ultimately improve the design of dietary interventions.
Transactions in domain-specific information systems
NASA Astrophysics Data System (ADS)
Zacek, Jaroslav
2017-07-01
A substantial number of current information system (IS) implementations are based on a transaction approach. In addition, most implementations are domain-specific (e.g., accounting IS, resource-planning IS). Therefore, a generic transaction model is needed to build and verify domain-specific IS. The paper proposes a new transaction model for domain-specific ontologies. This model is based on a value-oriented business process modelling technique and is formalized using Petri net theory. The first part of the paper presents common business processes and analyses related to business process modelling. The second part defines the transaction model delimited by the REA enterprise ontology paradigm and introduces the states of the generic transaction model. The generic model proposal is defined and visualized with a Petri net modelling tool. The third part shows an application of the generic transaction model. The last part concludes the results and discusses the practical usability of the generic transaction model.
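The Petri net formalization can be sketched as markings over places and transitions that fire when their input places hold tokens; the two-step transaction net below is an invented illustration, not the paper's REA-based model:

```python
# Minimal Petri net executor for expressing transaction states: a
# transition fires when all its input places hold enough tokens. The
# "promise -> execute" net below is illustrative only.
def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

transitions = {
    "promise": ({"requested": 1}, {"promised": 1}),
    "execute": ({"promised": 1}, {"executed": 1}),
}
marking = {"requested": 1}
for name, (pre, post) in transitions.items():
    if enabled(marking, pre):
        marking = fire(marking, pre, post)
        print(f"fired {name}: {marking}")
```

Verification then amounts to checking properties of the reachable markings, e.g. that every transaction eventually reaches the executed state.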
NASA Astrophysics Data System (ADS)
Hoffman, Kenneth J.
1995-10-01
Few information systems create a standardized clinical patient record in which there are discrete and concise observations of patient problems and their resolution. Clinical notes usually are narratives which don't support an aggregate and systematic outcome analysis. Many programs collect information on diagnosis and coded procedures but are not focused on patient problems. Integrated definition (IDEF) methodology has been accepted by the Department of Defense as part of the Corporate Information Management Initiative and serves as the foundation that establishes a need for automation. We used IDEF modeling to describe present and idealized patient care activities. A logical IDEF data model was created to support those activities. The modeling process allows for accurate cost estimates based upon performed activities, efficient collection of relevant information, and outputs which allow real-time assessments of process and outcomes. This model forms the foundation for a prototype automated clinical information system (ACIS).
A simplified computational memory model from information processing
Zhang, Lanhua; Zhang, Dongsheng; Deng, Yuqin; Ding, Xiaoqian; Wang, Yan; Tang, Yiyuan; Sun, Baoliang
2016-01-01
This paper proposes a computational model of memory from the viewpoint of information processing. The model, called the simplified memory information retrieval network (SMIRN), is a bi-modular hierarchical functional memory network obtained by abstracting memory function and simulating memory information processing. First, meta-memory is defined to represent neurons or brain cortices based on biological and graph theories; we develop an intra-modular network with a modelling algorithm by mapping nodes and edges, and then delineate the bi-modular network with intra-modular and inter-modular connections. Finally, a polynomial retrieval algorithm is introduced. In this paper we simulate the memory phenomena and the functions of memorization and strengthening using information processing algorithms. The theoretical analysis and the simulation results show that the model is in accordance with memory phenomena from an information processing view. PMID:27876847
Intelligence-aided multitarget tracking for urban operations - a case study: counter terrorism
NASA Astrophysics Data System (ADS)
Sathyan, T.; Bharadwaj, K.; Sinha, A.; Kirubarajan, T.
2006-05-01
In this paper, we present a framework for tracking multiple mobile targets in an urban environment based on data from multiple sources of information, and for evaluating the threat these targets pose to assets of interest (AOI). The motivating scenario is one where we have to track many targets, each with different (unknown) destinations and/or intents. The tracking algorithm is aided by information about the urban environment (e.g., road maps, buildings, hideouts), and strategic and intelligence data. The tracking algorithm needs to be dynamic in that it has to handle a time-varying number of targets and the ever-changing urban environment depending on the locations of the moving objects and AOI. Our solution uses the variable structure interacting multiple model (VS-IMM) estimator, which has been shown to be effective in tracking targets based on road map information. Intelligence information is represented as target class information and incorporated through a combined likelihood calculation within the VS-IMM estimator. In addition, we develop a model to calculate the probability that a particular target can attack a given AOI. This model for the calculation of the probability of attack is based on the target kinematic and class information. Simulation results are presented to demonstrate the operation of the proposed framework on a representative scenario.
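At the heart of the (VS-)IMM estimator is the update of model probabilities from measurement likelihoods and Markov switching probabilities; class information enters by scaling those likelihoods, in the spirit of the combined likelihood described above. A minimal sketch of that update step with invented numbers:

```python
# IMM model-probability update: mixing by the Markov transition matrix,
# then a Bayes update with per-model measurement likelihoods. Folding in
# target-class information amounts to multiplying the likelihoods by a
# class-conditional factor. All numbers are illustrative.
import numpy as np

mu = np.array([0.7, 0.3])              # prior model probabilities (e.g. on-road, off-road)
P = np.array([[0.95, 0.05],            # Markov model-switching probabilities
              [0.10, 0.90]])
lam = np.array([0.8, 0.2])             # measurement likelihoods per model
class_factor = np.array([1.5, 0.5])    # intelligence/class-information weighting

mu_pred = P.T @ mu                     # mixed (predicted) model probabilities
mu_post = lam * class_factor * mu_pred # combined likelihood update
mu_post /= mu_post.sum()               # normalize
print(mu_post)
```

The "variable structure" part of VS-IMM then adds or removes motion models from this set as the target moves between road segments, intersections, and open terrain.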
NASA Astrophysics Data System (ADS)
Al-garni, Abdullah M.
Urban information systems are economic resources that can benefit decision makers in the planning, development, and management of urban projects and resources. In this research, a conceptual model-based prototype Urban Geographic Information System (UGIS) is developed. The base maps used in developing the system and acquiring visual attributes are obtained from aerial photographs. The system is a multi-purpose parcel-based one that can serve many urban applications such as public utilities, health centres, schools, population estimation, road engineering and maintenance, and many others. A modern region in the capital city of Saudi Arabia is used for the study. The developed model is operational for one urban application (population estimation) and is tested for that particular application. The results showed that the system has a satisfactory accuracy and that it may well be promising for other similar urban applications in countries with similar demographic and social characteristics.
Gupta, Nidhi; Heiden, Marina; Mathiassen, Svend Erik; Holtermann, Andreas
2016-05-01
We aimed at developing and evaluating statistical models predicting objectively measured occupational time spent sedentary or in physical activity from self-reported information available in large epidemiological studies and surveys. Two-hundred-and-fourteen blue-collar workers responded to a questionnaire containing information about personal and work related variables, available in most large epidemiological studies and surveys. Workers also wore accelerometers for 1-4 days measuring time spent sedentary and in physical activity, defined as non-sedentary time. Least-squares linear regression models were developed, predicting objectively measured exposures from selected predictors in the questionnaire. A full prediction model based on age, gender, body mass index, job group, self-reported occupational physical activity (OPA), and self-reported occupational sedentary time (OST) explained 63% (R (2)adjusted) of the variance of both objectively measured time spent sedentary and in physical activity since these two exposures were complementary. Single-predictor models based only on self-reported information about either OPA or OST explained 21% and 38%, respectively, of the variance of the objectively measured exposures. Internal validation using bootstrapping suggested that the full and single-predictor models would show almost the same performance in new datasets as in that used for modelling. Both full and single-predictor models based on self-reported information typically available in most large epidemiological studies and surveys were able to predict objectively measured occupational time spent sedentary or in physical activity, with explained variances ranging from 21-63%.
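A minimal sketch of the modelling approach described, ordinary least-squares prediction of accelerometer-measured sedentary time from questionnaire variables with a bootstrap-style internal check; the variable names and data are simulated, not the study's:

```python
# Sketch of predicting objectively measured sedentary time from
# self-reported questionnaire variables with least-squares regression,
# plus a simple bootstrap check of optimism. Data are simulated.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 214
X = np.column_stack([
    rng.integers(20, 65, n),        # age
    rng.integers(0, 2, n),          # gender
    rng.normal(26, 4, n),           # body mass index
    rng.uniform(0, 8, n),           # self-reported occupational sitting (h/day)
])
y = 1.0 + 0.6 * X[:, 3] + rng.normal(0, 1.0, n)   # accelerometer sedentary h/day

model = LinearRegression().fit(X, y)
print(f"apparent R^2: {model.score(X, y):.2f}")

# bootstrap: refit on resamples, evaluate each refit on the original data
r2 = []
for _ in range(200):
    idx = rng.integers(0, n, n)
    r2.append(LinearRegression().fit(X[idx], y[idx]).score(X, y))
print(f"bootstrap R^2: {np.mean(r2):.2f}")
```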
Measurement-based quantum communication with resource states generated by entanglement purification
NASA Astrophysics Data System (ADS)
Wallnöfer, J.; Dür, W.
2017-01-01
We investigate measurement-based quantum communication with noisy resource states that are generated by entanglement purification. We consider the transmission of encoded information via noisy quantum channels using a measurement-based implementation of encoding, error correction, and decoding. We show that such an approach offers advantages over direct transmission, gate-based error correction, and measurement-based schemes with direct generation of resource states. We analyze the noise structure of resource states generated by entanglement purification and show that a local error model, i.e., noise acting independently on all qubits of the resource state, is a good approximation in general, and provides an exact description for Greenberger-Horne-Zeilinger states. The latter are resources for a measurement-based implementation of error-correction codes for bit-flip or phase-flip errors. This provides an approach to link the recently found very high thresholds for fault-tolerant measurement-based quantum information processing based on local error models for resource states with error thresholds for gate-based computational models.
Bain, Christopher A; Standing, Craig
2009-01-01
Hospital managers have a wide range of information needs, including quality metrics, financial reports, access information, and educational, resourcing and decision-support needs. Currently these needs involve interactions by managers with numerous disparate systems, both electronic, such as SAP, Oracle Financials, patient administration systems (PAS) like HOMER, and relevant websites, and paper-based systems. Hospital management information systems (HMIS) can be thought of as sitting within a Technology Ecosystem (TE). HMIS could benefit from a broader and deeper TE model, and the HMIS environment may in fact represent its own TE (the HMTE). This research examines lessons from the health literature in relation to some of these issues and proposes an extension to the base model of a TE.
Rényi entropy measure of noise-aided information transmission in a binary channel.
Chapeau-Blondeau, François; Rousseau, David; Delahaies, Agnès
2010-05-01
This paper analyzes a binary channel by means of information measures based on the Rényi entropy. The analysis extends, and contains as a special case, the classic reference model of binary information transmission based on the Shannon entropy measure. The extended model is used to investigate further possibilities and properties of stochastic resonance or noise-aided information transmission. The results demonstrate that stochastic resonance occurs in the information channel and is registered by the Rényi entropy measures at any finite order, including the Shannon order. Furthermore, in definite conditions, when seeking the Rényi information measures that best exploit stochastic resonance, then nontrivial orders differing from the Shannon case usually emerge. In this way, through binary information transmission, stochastic resonance identifies optimal Rényi measures of information differing from the classic Shannon measure. A confrontation of the quantitative information measures with visual perception is also proposed in an experiment of noise-aided binary image transmission.
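For reference, the order-alpha Rényi entropy that generalizes the Shannon measure used in the classic model is H_alpha(p) = (1/(1-alpha)) * log2(sum_i p_i^alpha), which recovers the Shannon entropy as alpha -> 1. A minimal sketch:

```python
# Rényi entropy of order alpha: H_a = log2(sum(p**a)) / (1 - a),
# tending to the Shannon entropy as alpha -> 1.
import numpy as np

def renyi_entropy(p, alpha):
    p = np.asarray(p, float)
    p = p[p > 0]
    if np.isclose(alpha, 1.0):                 # Shannon limit
        return float(-(p * np.log2(p)).sum())
    return float(np.log2((p ** alpha).sum()) / (1.0 - alpha))

p = [0.7, 0.3]   # output distribution of a binary channel (illustrative)
for a in (0.5, 1.0, 2.0):
    print(f"alpha={a}: H={renyi_entropy(p, a):.3f} bits")
```

Sweeping the channel's noise level and recomputing entropy-based information measures at several orders is how the nontrivial optimal orders described above can be observed.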
The Fusion Model of Intelligent Transportation Systems Based on the Urban Traffic Ontology
NASA Astrophysics Data System (ADS)
Yang, Wang-Dong; Wang, Tao
To address these issues, urban transport information is represented uniformly using an urban transport ontology; rules and algebraic operations for semantic fusion are defined at the ontology level in order to achieve fusion of urban traffic information with semantic completeness and consistency. This paper thus takes advantage of the semantic completeness of ontologies to build an urban traffic ontology model, with which we resolve problems such as ontology merging and equivalence verification in the semantic fusion of integrated traffic information. Semantic fusion enhances urban traffic information integration: it reduces the amount of data that must be integrated and improves the efficiency and completeness of traffic information queries. Through its practical application in the intelligent traffic information integration platform of Changde city, the paper demonstrates that ontology-based semantic fusion increases the effectiveness and efficiency of urban traffic information integration, reduces storage requirements, and improves query efficiency and information completeness.
Virtual Construction of Space Habitats: Connecting Building Information Models (BIM) and SysML
NASA Technical Reports Server (NTRS)
Polit-Casillas, Raul; Howe, A. Scott
2013-01-01
Current trends in the design, construction and management of complex projects make use of Building Information Models (BIM), which connect different types of data to geometrical models. This information model allows types of analysis beyond pure graphical representation. Space habitats, regardless of their size, are also complex systems that require the synchronization of many types of information and disciplines beyond mass, volume, power or other basic volumetric parameters. For this, state-of-the-art model-based systems engineering languages and processes, for instance SysML, represent a solid way to tackle the problem from a programmatic point of view. Nevertheless, integrating this with a powerful geometrical architectural design tool with BIM capabilities could change the workflow and paradigm of space habitat design, applicable to other complex aerospace systems. This paper presents general findings and overall conclusions based on ongoing research to create a design protocol and method that practically connects a systems engineering approach with BIM-based architectural and engineering design as a complete model-based engineering approach. One hypothetical example is created and followed through the design process. To make this possible, the research also tackles the application of IFC categories and parameters in the aerospace field, starting with space habitat design, as a way to understand the information flow between disciplines and tools. By building virtual space habitats we can potentially improve, in the near future, the way more complex designs are developed from very little detail at the concept stage through to manufacturing.
Qualitative model-based diagnosis using possibility theory
NASA Technical Reports Server (NTRS)
Joslyn, Cliff
1994-01-01
The potential for the use of possibility theory in the qualitative model-based diagnosis of spacecraft systems is described. The first sections of the paper briefly introduce the Model-Based Diagnostic (MBD) approach to spacecraft fault diagnosis, Qualitative Modeling (QM) methodologies, and the concepts of possibilistic modeling in the context of Generalized Information Theory (GIT). The necessary conditions for the applicability of possibilistic methods to qualitative MBD, and a number of potential directions for such an application, are then described.
Evaluating diagnosis-based risk-adjustment methods in a population with spinal cord dysfunction.
Warner, Grace; Hoenig, Helen; Montez, Maria; Wang, Fei; Rosen, Amy
2004-02-01
To examine the performance of models in predicting health care utilization for individuals with spinal cord dysfunction. Regression models compared two diagnosis-based risk-adjustment methods: adjusted clinical groups (ACGs) and diagnostic cost groups (DCGs). To improve prediction, we added to our model: (1) spinal cord dysfunction-specific diagnostic information, (2) limitations in self-care function, and (3) both 1 and 2. Models were replicated in three populations. Samples from three populations: (1) 40% of veterans using Veterans Health Administration services in fiscal year 1997 (FY97) (N=1,046,803), (2) a veteran sample with spinal cord dysfunction identified by codes from the International Statistical Classification of Diseases, 9th Revision, Clinical Modification (N=7,666), and (3) a veteran sample identified in the Veterans Affairs Spinal Cord Dysfunction Registry (N=5,888). Not applicable. Inpatient, outpatient, and total days of care in FY97. The DCG models (R² range, .22-.38) performed better than ACG models (R² range, .04-.34) for all outcomes. Spinal cord dysfunction-specific diagnostic information improved prediction more in the ACG model than in the DCG model (R² range for ACG, .14-.34; R² range for DCG, .24-.38). Information on self-care function slightly improved performance (R² increases ranged from 0 to .04). The DCG risk-adjustment models predicted health care utilization better than ACG models. ACG model prediction was improved by adding information.
Cognon Neural Model Software Verification and Hardware Implementation Design
NASA Astrophysics Data System (ADS)
Haro Negre, Pau
Little is known yet about how the brain can recognize arbitrary sensory patterns within milliseconds using neural spikes to communicate information between neurons. In a typical brain there are several layers of neurons, with each neuron axon connecting to ~10^4 synapses of neurons in an adjacent layer. The information necessary for cognition is contained in these synapses, which strengthen during the learning phase in response to newly presented spike patterns. Continuing from the model proposed in "Models for Neural Spike Computation and Cognition" by David H. Staelin and Carl H. Staelin, this study seeks to understand cognition from an information-theoretic perspective and develop potential models for artificial implementation of cognition based on neuronal models. To do so we focus on the mathematical properties and limitations of spike-based cognition consistent with existing neurological observations. We validate the cognon model through software simulation and develop concepts for an optical hardware implementation of a network of artificial neural cognons.
An Ontology-Based Approach to Incorporate User-Generated Geo-Content Into Sdi
NASA Astrophysics Data System (ADS)
Deng, D.-P.; Lemmens, R.
2011-08-01
The Web is changing the way people share and communicate information because of the emergence of various Web technologies, which enable people to contribute information on the Web. User-Generated Geo-Content (UGGC) is a potential resource of geographic information. Due to different production methods, UGGC often cannot fit into formal geographic information models; there is a semantic gap between UGGC and formal geographic information. To integrate UGGC into geographic information, this study conducts an ontology-based process to bridge this semantic gap. The process includes five steps: Collection, Extraction, Formalization, Mapping, and Deployment. In addition, this study implements the process on Twitter messages relevant to the Japan earthquake disaster. Using this process, we extract disaster relief information from Twitter messages and develop a knowledge base for GeoSPARQL queries on disaster relief information.
The use of multiple models in case-based diagnosis
NASA Technical Reports Server (NTRS)
Karamouzis, Stamos T.; Feyock, Stefan
1993-01-01
The work described in this paper has as its goal the integration of a number of reasoning techniques into a unified intelligent information system that will aid flight crews with malfunction diagnosis and prognostication. One of these approaches involves using the extensive archive of information contained in aircraft accident reports, along with various models of the aircraft, as the basis for case-based reasoning about malfunctions. Case-based reasoning draws conclusions on the basis of similarities between the present situation and prior experience. We maintain that the ability of a CBR program to reason about physical systems is significantly enhanced by the addition of various models to the CBR program. This paper describes the diagnostic concepts implemented in a prototypical case-based reasoner that operates in the domain of in-flight fault diagnosis, the various models used in conjunction with the reasoner's CBR component, and results from a preliminary evaluation.
Virtual-optical information security system based on public key infrastructure
NASA Astrophysics Data System (ADS)
Peng, Xiang; Zhang, Peng; Cai, Lilong; Niu, Hanben
2005-01-01
A virtual-optics based encryption model with the aid of public key infrastructure (PKI) is presented in this paper. The proposed model employs a hybrid architecture in which our previously published encryption method based on the virtual-optics scheme (VOS) is used to encipher and decipher data, while an asymmetric algorithm, for example RSA, is applied to encipher and decipher the session key(s). The whole information security model runs under the framework of the international standard ITU-T X.509 PKI, which is based on public-key cryptography and digital signatures. This PKI-based VOS security approach provides additional features such as confidentiality, authentication, and integrity for data encryption in a networked environment. Numerical experiments prove the effectiveness of the method. The security of the proposed model is briefly analyzed by examining possible attacks from the viewpoint of cryptanalysis.
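The virtual-optics cipher itself is not publicly available, but the hybrid pattern the abstract describes - a symmetric cipher protecting the bulk data while RSA protects the session key - can be sketched with the Python `cryptography` package. Here Fernet (AES-based) stands in for VOS, and all names and the payload are illustrative:

```python
# Hybrid encryption sketch: a symmetric cipher (Fernet, standing in for VOS)
# protects the data, while RSA-OAEP protects the session key.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Receiver's RSA key pair (in a real PKI, the public key would come from an
# X.509 certificate issued by a certification authority)
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# Sender: encrypt data with a fresh session key, then wrap the key with RSA
session_key = Fernet.generate_key()
ciphertext = Fernet(session_key).encrypt(b"payload to protect")
wrapped_key = public_key.encrypt(
    session_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

# Receiver: unwrap the session key with the private key, then decrypt the data
recovered_key = private_key.decrypt(
    wrapped_key,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
assert Fernet(recovered_key).decrypt(ciphertext) == b"payload to protect"
```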
A multi-method review of home-based chemotherapy.
Evans, J M; Qiu, M; MacKinnon, M; Green, E; Peterson, K; Kaizer, L
2016-09-01
This study summarises research- and practice-based evidence on home-based chemotherapy, and explores existing delivery models. A three-pronged investigation was conducted consisting of a literature review and synthesis of 54 papers, a review of seven home-based chemotherapy programmes spanning four countries, and two case studies within the Canadian province of Ontario. The results support the provision of home-based chemotherapy as a safe and patient-centred alternative to hospital- and outpatient-based service. This paper consolidates information on home-based chemotherapy programmes including services and drugs offered, patient eligibility criteria, patient views and experiences, delivery structures and processes, and common challenges. Fourteen recommendations are also provided for improving the delivery of chemotherapy in patients' homes by prioritising patient-centredness, provider training and teamwork, safety and quality of care, and programme management. The results of this study can be used to inform the development of an evidence-informed model for the delivery of chemotherapy and related care, such as symptom management, in patients' homes. © 2015 John Wiley & Sons Ltd.
Error rate information in attention allocation pilot models
NASA Technical Reports Server (NTRS)
Faulkner, W. H.; Onstott, E. D.
1977-01-01
The Northrop urgency decision pilot model was used in a command tracking task to compare the optimized performance of multiaxis attention allocation pilot models whose urgency functions were (1) based on tracking error alone, and (2) based on both tracking error and error rate. A matrix of system dynamics and command inputs was employed to create both symmetric and asymmetric two-axis compensatory tracking tasks. All tasks were single loop on each axis. Analysis showed that a model that allocates control attention through nonlinear urgency functions using only error information could not achieve the performance of the full model, whose attention shifting algorithm included both error and error rate terms. Subsequent to this analysis, tracking performance predictions for the full model were verified by piloted flight simulation. Complete model and simulation data are presented.
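The urgency-function comparison lends itself to a toy simulation. The sketch below, with entirely hypothetical dynamics, gains, and noise levels, allocates attention each step to the axis with the highest urgency and contrasts an error-only urgency with an error-plus-error-rate urgency:

```python
import numpy as np

def simulate(use_rate, T=2000, dt=0.01, k=0.5, seed=0):
    """Two-axis compensatory tracking with attention allocated by an urgency
    rule. Unattended axes drift under disturbance noise; the attended axis is
    corrected. All dynamics and gains are hypothetical, for illustration."""
    rng = np.random.default_rng(seed)
    e = np.zeros(2)          # tracking error per axis
    e_dot = np.zeros(2)      # error rate per axis
    total = 0.0
    for _ in range(T):
        drift = rng.normal(0.0, 0.2, size=2)            # disturbance input
        urgency = np.abs(e) + (k * np.abs(e_dot) if use_rate else 0.0)
        attended = np.argmax(urgency)                   # shift attention
        e_dot = drift.copy()
        e_dot[attended] -= 3.0 * e[attended]            # correct attended axis
        e += e_dot * dt
        total += np.sum(e**2) * dt                      # integrated squared error
    return total

print("error-only urgency :", simulate(use_rate=False))
print("error+rate urgency :", simulate(use_rate=True))
```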
Dean, Marleah; Scherr, Courtney L; Clements, Meredith; Koruo, Rachel; Martinez, Jennifer; Ross, Amy
2017-09-01
To investigate BRCA-positive, unaffected patients' - referred to as previvors - information needs after testing positive for a deleterious BRCA genetic mutation. 25 qualitative interviews were conducted with previvors. Data were analyzed using the constant comparison method of grounded theory. Analysis revealed a theoretical model of previvors' information needs related to the stage of their health journey. Specifically, a four-stage model was developed based on the data: (1) pre-testing information needs, (2) post-testing information needs, (3) pre-management information needs, and (4) post-management information needs. Two recurring dimensions of desired knowledge also emerged within the stages: personal/social knowledge and medical knowledge. While previvors may be genetically predisposed to develop cancer, they have not been diagnosed with cancer, and therefore have different information needs than cancer patients and cancer survivors. This model can serve as a framework for assisting healthcare providers in meeting the specific information needs of cancer previvors. Copyright © 2017 Elsevier B.V. All rights reserved.
Sustainability-based decision making is a challenging process that requires balancing trade-offs among social, economic, and environmental components. System Dynamic (SD) models can be useful tools to inform sustainability-based decision making because they provide a holistic co...
USDA-ARS?s Scientific Manuscript database
Process-based modeling provides detailed spatial and temporal information of the soil environment in the shallow seedling recruitment zone across field topography where measurements of soil temperature and water may not sufficiently describe the zone. Hourly temperature and water profiles within the...
Exploiting the functional and taxonomic structure of genomic data by probabilistic topic modeling.
Chen, Xin; Hu, Xiaohua; Lim, Tze Y; Shen, Xiajiong; Park, E K; Rosen, Gail L
2012-01-01
In this paper, we present a method that enables both the homology-based approach and the composition-based approach to further study the functional core (i.e., the microbial core and the gene core, correspondingly). In the proposed method, the identification of major functionality groups is achieved by generative topic modeling, which is able to extract useful information from unlabeled data. We first show that a generative topic model can be used to model the taxon abundance information obtained by the homology-based approach and study the microbial core. The model considers each sample as a “document,” which has a mixture of functional groups, while each functional group (also known as a “latent topic”) is a weighted mixture of species. Therefore, estimating the generative topic model for taxon abundance data will uncover the distribution over latent functions (latent topics) in each sample. Second, we show that a generative topic model can also be used to study the genome-level composition of “N-mer” features (DNA subreads obtained by composition-based approaches). The model considers each genome as a mixture of latent genetic patterns (latent topics), while each pattern is a weighted mixture of the “N-mer” features; thus the existence of core genomes can be indicated by a set of common N-mer features. After studying the mutual information between latent topics and gene regions, we provide an explanation of the functional roles of the uncovered latent genetic patterns. The experimental results demonstrate the effectiveness of the proposed method.
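The per-sample mixture over latent functional groups and the per-group species weights described above map directly onto an off-the-shelf topic model. A minimal sketch with scikit-learn's LatentDirichletAllocation on a synthetic taxon-abundance matrix (not the paper's data or its exact inference scheme):

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

# Rows are samples ("documents"), columns are species ("words"); entries are
# taxon abundance counts. This 5x6 matrix is synthetic, for illustration only.
abundance = np.array([[12, 0, 3, 0, 7, 1],
                      [10, 1, 4, 0, 6, 0],
                      [0, 9, 0, 8, 1, 5],
                      [1, 11, 0, 7, 0, 6],
                      [5, 5, 2, 4, 3, 3]])

lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(abundance)   # per-sample mixture over latent groups
phi = lda.components_ / lda.components_.sum(axis=1, keepdims=True)

print("sample -> functional-group mixture:\n", theta.round(2))
print("functional group -> species distribution:\n", phi.round(2))
```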
Petrinco, Michele; Pagano, Eva; Desideri, Alessandro; Bigi, Riccardo; Ghidina, Marco; Ferrando, Alberto; Cortigiani, Lauro; Merletti, Franco; Gregori, Dario
2009-01-01
Several methodological problems arise when health outcomes and resource utilization are collected at different sites. To avoid misleading conclusions in multi-center economic evaluations, the center effect needs to be taken into adequate consideration. The aim of this article is to compare several models which make use of different amounts of information about the enrolling center. To model the association of total medical costs with the levels of two sets of covariates, one at the patient and one at the center level, we considered four statistical models based on the Gamma model in the class of Generalized Linear Models with a log link, which use different amounts of information on the enrolling centers. The models were applied to Cost of Strategies after Myocardial Infarction data, an international randomized trial on the costs of uncomplicated acute myocardial infarction (AMI). The simple center-effect adjustment based on a single random effect results in a more conservative estimation of the parameters compared with approaches that make use of deeper information on center characteristics. This study shows, with reference to a real multicenter trial, that center information cannot be neglected and should be collected and included in the analysis, preferably in combination with one or more random effects, thereby also accounting for heterogeneity among centers due to unobserved center characteristics.
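A fixed-effects version of the Gamma/log-link cost model is straightforward with statsmodels; the sketch below uses synthetic cost data and dummy variables for the enrolling center (a random-effect formulation, as the article recommends, would need a dedicated mixed-model routine). Covariates and parameter values are assumptions:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
age = rng.uniform(40, 80, n)                    # patient-level covariate (hypothetical)
center = rng.integers(0, 5, n)                  # enrolling center index
center_effect = np.array([0.0, 0.2, -0.1, 0.3, -0.2])[center]
mu = np.exp(5.0 + 0.02 * age + center_effect)   # mean cost on the log-link scale
cost = rng.gamma(shape=2.0, scale=mu / 2.0)     # Gamma-distributed total cost

# Fixed-effect adjustment for center via dummy variables (first center is the
# reference); a random-effect Gamma model would replace these dummies.
X = sm.add_constant(np.column_stack([age, np.eye(5)[center][:, 1:]]))
model = sm.GLM(cost, X, family=sm.families.Gamma(link=sm.families.links.Log()))
print(model.fit().summary())
```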
Time series sightability modeling of animal populations
ArchMiller, Althea A.; Dorazio, Robert; St. Clair, Katherine; Fieberg, John R.
2018-01-01
Logistic regression models—or “sightability models”—fit to detection/non-detection data from marked individuals are often used to adjust for visibility bias in later detection-only surveys, with population abundance estimated using a modified Horvitz-Thompson (mHT) estimator. More recently, a model-based alternative for analyzing combined detection/non-detection and detection-only data was developed. This approach seemed promising, since it produced estimates similar to those from the mHT when applied to data from moose (Alces alces) surveys in Minnesota. More importantly, it provided a framework for developing flexible models for analyzing multiyear detection-only survey data in combination with detection/non-detection data. During initial attempts to extend the model-based approach to multiple years of detection-only data, we found that estimates of detection probabilities and population abundance were sensitive to the amount of detection-only data included in the combined (detection/non-detection and detection-only) analysis. Subsequently, we developed a robust hierarchical modeling approach where sightability model parameters are informed only by the detection/non-detection data, and we used this approach to fit a fixed-effects model (FE model) with year-specific parameters and a temporally-smoothed model (TS model) that shares information across years via random effects and a temporal spline. The abundance estimates from the TS model were more precise, with decreased interannual variability relative to the FE model and mHT abundance estimates, illustrating the potential benefits from model-based approaches that allow information to be shared across years.
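The two-stage logic - fit a sightability model to detection/non-detection data, then inflate each later detection by its estimated detection probability - can be illustrated with a minimal modified Horvitz-Thompson sketch on synthetic data (group sizes and the hierarchical FE/TS machinery are omitted):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Stage 1: detection/non-detection data from marked animals (synthetic).
# Covariate: visual obstruction; detection probability falls as cover grows.
cover = rng.uniform(0, 1, 300)
detected = rng.binomial(1, 1 / (1 + np.exp(-(2.0 - 4.0 * cover))))
sight_model = LogisticRegression().fit(cover.reshape(-1, 1), detected)

# Stage 2: detection-only survey; each observed group is inflated by 1/p_hat
# (a modified Horvitz-Thompson estimator, ignoring group size for simplicity).
survey_cover = rng.uniform(0, 1, 40)        # cover at each detected group
p_hat = sight_model.predict_proba(survey_cover.reshape(-1, 1))[:, 1]
abundance_estimate = np.sum(1.0 / p_hat)
print(f"mHT abundance estimate: {abundance_estimate:.1f} (from 40 detections)")
```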
Cheng, Zhenbo; Deng, Zhidong; Hu, Xiaolin; Zhang, Bo; Yang, Tianming
2015-12-01
The brain often has to make decisions based on information stored in working memory, but the neural circuitry underlying working memory is not fully understood. Many theoretical efforts have been focused on modeling the persistent delay period activity in the prefrontal areas that is believed to represent working memory. Recent experiments reveal that the delay period activity in the prefrontal cortex is neither static nor homogeneous as previously assumed. Models based on reservoir networks have been proposed to model such a dynamical activity pattern. The connections between neurons within a reservoir are random and do not require explicit tuning. Information storage does not depend on the stable states of the network. However, it is not clear how the encoded information can be retrieved for decision making with a biologically realistic algorithm. We therefore built a reservoir-based neural network to model the neuronal responses of the prefrontal cortex in a somatosensory delayed discrimination task. We first illustrate that the neurons in the reservoir exhibit a heterogeneous and dynamical delay period activity observed in previous experiments. Then we show that a cluster population circuit decodes the information from the reservoir with a winner-take-all mechanism and contributes to the decision making. Finally, we show that the model achieves a good performance rapidly by shaping only the readout with reinforcement learning. Our model reproduces important features of previous behavior and neurophysiology data. We illustrate for the first time how task-specific information stored in a reservoir network can be retrieved with a biologically plausible reinforcement learning training scheme. Copyright © 2015 the American Physiological Society.
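A stripped-down echo-state-style reservoir illustrates the core idea: a fixed random recurrent network produces heterogeneous, dynamical delay-period activity, and only a linear readout would be trained. The task setup and all parameters below are illustrative, and the readout here is left as a fixed random stand-in rather than shaped by reinforcement learning as in the paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n_res, n_in = 200, 1

# Fixed random reservoir: connections are not tuned, per the reservoir paradigm.
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.normal(0, 1, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius below 1

def run_reservoir(inputs):
    """Collect the heterogeneous, dynamical reservoir states for an input sequence."""
    x = np.zeros(n_res)
    states = []
    for u in inputs:
        x = np.tanh(W_in @ np.atleast_1d(u) + W @ x)
        states.append(x.copy())
    return np.array(states)

# Delayed discrimination toy task: compare two stimuli separated by a delay.
f1, f2 = 0.8, 0.3
states = run_reservoir([f1] + [0.0] * 20 + [f2])   # f1, delay, f2
# Only a linear readout of the final state would be trained (e.g., by
# reinforcement learning); a fixed random readout stands in here.
readout = rng.normal(0, 0.1, n_res)
print("decision variable:", readout @ states[-1])
```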
Spectral Entropies as Information-Theoretic Tools for Complex Network Comparison
NASA Astrophysics Data System (ADS)
De Domenico, Manlio; Biamonte, Jacob
2016-10-01
Any physical system can be viewed from the perspective that information is implicitly represented in its state. However, the quantification of this information when it comes to complex networks has remained largely elusive. In this work, we use techniques inspired by quantum statistical mechanics to define an entropy measure for complex networks and to develop a set of information-theoretic tools, based on network spectral properties, such as Rényi q entropy, generalized Kullback-Leibler and Jensen-Shannon divergences, the latter allowing us to define a natural distance measure between complex networks. First, we show that by minimizing the Kullback-Leibler divergence between an observed network and a parametric network model, inference of model parameter(s) by means of maximum-likelihood estimation can be achieved and model selection can be performed with appropriate information criteria. Second, we show that the information-theoretic metric quantifies the distance between pairs of networks and we can use it, for instance, to cluster the layers of a multilayer system. By applying this framework to networks corresponding to sites of the human microbiome, we perform hierarchical cluster analysis and recover with high accuracy existing community-based associations. Our results imply that spectral-based statistical inference in complex networks results in demonstrably superior performance as well as a conceptual backbone, filling a gap towards a network information theory.
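The entropy underlying these tools can be reproduced in a few lines: form the density-like matrix rho = exp(-beta*L)/Tr[exp(-beta*L)] from the graph Laplacian and take its von Neumann entropy (the q -> 1 limit of the Rényi family used in the paper). A sketch with networkx and scipy on two small synthetic graphs:

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

def density_matrix(G, beta=1.0):
    """rho = exp(-beta * L) / Tr[exp(-beta * L)], built from the Laplacian."""
    L = nx.laplacian_matrix(G).toarray().astype(float)
    rho = expm(-beta * L)
    return rho / np.trace(rho)

def von_neumann_entropy(rho):
    """S = -Tr[rho log rho], computed from the eigenvalues of rho."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]                 # drop numerically-zero eigenvalues
    return float(-np.sum(w * np.log(w)))

G1 = nx.erdos_renyi_graph(50, 0.1, seed=0)
G2 = nx.barabasi_albert_graph(50, 3, seed=0)
for name, G in [("ER", G1), ("BA", G2)]:
    print(name, "spectral entropy:", round(von_neumann_entropy(density_matrix(G)), 3))
```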
Translational Modeling in Schizophrenia: Predicting Human Dopamine D2 Receptor Occupancy.
Johnson, Martin; Kozielska, Magdalena; Pilla Reddy, Venkatesh; Vermeulen, An; Barton, Hugh A; Grimwood, Sarah; de Greef, Rik; Groothuis, Geny M M; Danhof, Meindert; Proost, Johannes H
2016-04-01
To assess the ability of a previously developed hybrid physiology-based pharmacokinetic-pharmacodynamic (PBPKPD) model in rats to predict the dopamine D2 receptor occupancy (D2RO) in human striatum following administration of antipsychotic drugs. A hybrid PBPKPD model, previously developed using information on plasma concentrations, brain exposure and D2RO in rats, was used as the basis for the prediction of D2RO in human. The rat pharmacokinetic and brain physiology parameters were substituted with human population pharmacokinetic parameters and human physiological information. To predict the passive transport across the human blood-brain barrier, apparent permeability values were scaled based on rat and human brain endothelial surface area. Active efflux clearance in brain was scaled from rat to human using both human brain endothelial surface area and MDR1 expression. Binding constants at the D2 receptor were scaled based on the differences between in vitro and in vivo systems of the same species. The predictive power of this physiology-based approach was determined by comparing the D2RO predictions with the observed human D2RO of six antipsychotics at clinically relevant doses. Predicted human D2RO was in good agreement with clinically observed D2RO for five antipsychotics. Models using in vitro information predicted human D2RO well for most of the compounds evaluated in this analysis. However, human D2RO was under-predicted for haloperidol. The rat hybrid PBPKPD model structure, integrated with in vitro information and human pharmacokinetic and physiological information, constitutes a scientific basis to predict the time course of D2RO in man.
EC FP6 Enviro-RISKS project outcomes in area of Earth and Space Science Informatics applications
NASA Astrophysics Data System (ADS)
Gordov, E. P.; Zakarin, E. A.
2009-04-01
Nowadays the community acknowledges that to properly understand the dynamics of regional environments and assess them on the basis of monitoring and modeling, stronger involvement of information-computational technologies (ICT) is required, which should lead to the development of an information-computational infrastructure as an inherent part of such investigations. This paper is based on the Report & Recommendations (www.dmi.dk/dmi/sr08-05-4.pdf) of the Enviro-RISKS (Man-induced Environmental Risks: Monitoring, Management and Remediation of Man-made Changes in Siberia) Project Thematic expert group for Information Systems, Integration and Synthesis, and presents the results of the Project Partners' activities in the development and usage of information technologies for the environmental sciences. Approaches using web-based information technologies and GIS-based information technologies are described, and a way to integrate them is outlined. In particular, the Enviro-RISKS web portal and its Climate site (http://climate.risks.scert.ru/), developed in the course of the Project, provide access to an interactive web system for regional climate assessment based on standard meteorological data archives; this is a key element of the information-computational infrastructure of the Siberia Integrated Regional Study (SIRS) and is described in detail, as is a GIS-technology-based system for monitoring and modeling the transport and transformation of air and water pollution. The latter is quite useful for the practical realization of geoinformation modeling, in which relevant mathematical models are embedded into a GIS and all modeling and analysis phases are accomplished in the informational sphere, based on real data including data coming from satellites. Major efforts are currently being undertaken to integrate GIS-based environmental applications with web accessibility, computing power and data interoperability, and thus to fully exploit the potential of web-based technologies. In particular, development of a regional web portal using approaches suggested by the Open Geospatial Consortium has recently started. The state of the art of the information-computational infrastructure in the targeted region is quite a step in the development of a distributed collaborative information-computational environment to support multidisciplinary investigations of the regional environment, especially those requiring meteorology, atmospheric pollution transport and climate modeling. The cooperative links established in the course of the Project, new Partner initiatives, and the expertise gained allow us to hope that this infrastructure will rather soon make a significant input into understanding regional environmental processes in their relationships with Global Change. In particular, this infrastructure will play the role of the 'underlying mechanics' of the research work, leaving the earth scientists to concentrate on their investigations, as well as providing the environment to make research results available and understandable to everyone. In addition to the core FP6 Enviro-RISKS project (INCO-CT-2004-013427) support, this activity was partially supported by SB RAS Integration Project 34, SB RAS Basic Program Project 4.5.2.2 and APN Project CBA2007-08NSY. The valuable input into the expert group work and its outcomes by Profs. V. Lykosov and A. Starchenko, and Drs. D. Belikov, M. Korets, S. Kostrykin, B. Mirkarimova, I. Okladnikov, A. Titov and A. Tridvornov is acknowledged.
A comprehensive evaluation of input data-induced uncertainty in nonpoint source pollution modeling
NASA Astrophysics Data System (ADS)
Chen, L.; Gong, Y.; Shen, Z.
2015-11-01
Watershed models have been used extensively for quantifying nonpoint source (NPS) pollution, but few studies have examined error transitivity from different input data sets to NPS modeling. In this paper, the effects of four types of input data, including rainfall, digital elevation models (DEMs), land use maps, and fertilizer amount, on NPS simulation were quantified and compared. The systematic input-induced uncertainty was investigated using a watershed model for phosphorus load prediction. Based on the results, rain gauge density resulted in the largest model uncertainty, followed by DEMs, whereas land use and fertilizer amount exhibited limited impacts. The mean coefficient of variation for errors from single rain gauge, multiple gauge, ASTER GDEM, NFGIS DEM, land use, and fertilizer amount information was 0.390, 0.274, 0.186, 0.073, 0.033 and 0.005, respectively. The use of specific input information, such as key gauges, is also highlighted as a way to achieve the required model accuracy. In this sense, these results provide valuable information to other model-based studies for the control of prediction uncertainty.
Sun, Wei; Zhang, Xiaorui; Peeta, Srinivas; He, Xiaozheng; Li, Yongfu; Zhu, Senlai
2015-01-01
To improve the effectiveness and robustness of fatigue driving recognition, a self-adaptive dynamic recognition model is proposed that incorporates information from multiple sources and involves two sequential levels of fusion, constructed at the feature level and the decision level. Compared with existing models, the proposed model introduces a dynamic basic probability assignment (BPA) to the decision-level fusion such that the weight of each feature source can change dynamically with the real-time fatigue feature measurements. Further, the proposed model can combine the fatigue state at the previous time step in the decision-level fusion to improve the robustness of the fatigue driving recognition. An improved correction strategy of the BPA is also proposed to accommodate the decision conflict caused by external disturbances. Results from field experiments demonstrate that the effectiveness and robustness of the proposed model are better than those of models based on a single fatigue feature and/or single-source information fusion, especially when the most effective fatigue features are used in the proposed model. PMID:26393615
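The decision-level fusion rests on combining basic probability assignments; Dempster's classical rule of combination, sketched below over a two-state frame, shows the mechanics (the paper's dynamic weighting and conflict-correction strategy are not reproduced, and the feature-source BPAs are hypothetical):

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal elements
    are frozensets. Conflicting mass is renormalized away, as in classical
    evidence theory."""
    combined, conflict = {}, 0.0
    for (a, w1), (b, w2) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + w1 * w2
        else:
            conflict += w1 * w2
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

F, A = frozenset({"fatigued"}), frozenset({"alert"})
FA = F | A  # ignorance: mass assigned to the whole frame of discernment
# BPAs from two hypothetical feature sources (e.g., eyelid closure, steering)
m_eye = {F: 0.6, A: 0.1, FA: 0.3}
m_steering = {F: 0.5, A: 0.2, FA: 0.3}
print(dempster_combine(m_eye, m_steering))
```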
Hao, Chen; LiJun, Chen; Albright, Thomas P.
2007-01-01
Invasive exotic species pose a growing threat to the economy, public health, and ecological integrity of nations worldwide. Explaining and predicting the spatial distribution of invasive exotic species is of great importance to prevention and early warning efforts. We are investigating the potential distribution of invasive exotic species, the environmental factors that influence these distributions, and the ability to predict them using statistical and information-theoretic approaches. For some species, detailed presence/absence occurrence data are available, allowing the use of a variety of standard statistical techniques. However, for most species, absence data are not available. Presented with the challenge of developing a model based on presence-only information, we developed an improved logistic regression approach using information theory and frequency statistics to produce a relative suitability map. This study generated a variety of distributions of ragweed (Ambrosia artemisiifolia L.) from logistic regression models applied to herbarium specimen location data and a suite of GIS layers including climatic, topographic, and land cover information. Our logistic regression model was based on Akaike's Information Criterion (AIC) applied to a suite of ecologically reasonable predictor variables. Based on the results, we provide a new frequency-statistical method to compartmentalize habitat suitability in the native range. Finally, we used the model and the compartmentalized criterion developed in the native range to “project” a potential distribution onto the exotic ranges to build habitat-suitability maps. © Science in China Press 2007.
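AIC-based screening of candidate logistic models is easy to demonstrate with statsmodels; the sketch below fits nested presence models on synthetic climatic, topographic, and land-cover predictors and reports each model's AIC (predictors and effect sizes are assumptions, not the ragweed analysis itself):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 500
temp = rng.normal(0, 1, n)        # climatic predictor (standardized, synthetic)
elev = rng.normal(0, 1, n)        # topographic predictor
urban = rng.binomial(1, 0.3, n)   # land-cover predictor
# True presence process depends on temperature and elevation only
presence = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * temp - 0.8 * elev))))

candidates = {
    "temp": np.column_stack([temp]),
    "temp+elev": np.column_stack([temp, elev]),
    "temp+elev+urban": np.column_stack([temp, elev, urban]),
}
for name, X in candidates.items():
    fit = sm.Logit(presence, sm.add_constant(X)).fit(disp=0)
    print(f"{name:18s} AIC = {fit.aic:.1f}")  # lowest AIC wins the selection
```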
Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong
2012-10-01
In this paper, a novel lesion segmentation method for breast ultrasound (BUS) images based on the cellular automata principle is proposed. Its energy transition function is formulated based on global image information difference and local image information difference, using different energy transfer strategies. First, an energy decrease strategy is used for modeling the spatial relation information of pixels. For modeling global image information difference, a seed information comparison function is developed using an energy preserve strategy. Then, a texture information comparison function is proposed for considering local image difference in different regions, which is helpful for handling blurry boundaries. Moreover, two neighborhood systems (the von Neumann and Moore neighborhood systems) are integrated as the evolution environment, and a similarity-based criterion is used for suppressing noise and reducing computational complexity. The proposed method was applied to 205 clinical BUS images to study its characteristics and functionality, and several overlapping-area error metrics and statistical evaluation methods were utilized to evaluate its performance. The experimental results demonstrate that the proposed method can handle BUS images with blurry boundaries and low contrast well, and can segment breast lesions accurately and effectively.
USDA-ARS?s Scientific Manuscript database
Thermal-infrared (TIR) remote sensing of land surface temperature (LST) provides valuable information for quantifying root-zone water availability, evapotranspiration (ET) and crop condition as well as providing useful information for constraining prognostic land surface models. This presentation d...
Kim, Kwang-Yon; Shin, Seong Eun; No, Kyoung Tai
2015-01-01
Objectives For the successful adoption of legislation controlling the registration and assessment of chemical substances, it is important to obtain sufficient toxicological experimental evidence and other related information. It is also essential to obtain a sufficient number of predicted risk and toxicity results. In particular, methods used to predict the toxicities of chemical substances during the acquisition of required data ultimately become an economical way of dealing with new substances in the future. Although the need for such methods is gradually increasing, the required information about their reliability and applicability range has not been systematically provided. Methods There are various representative environmental and human toxicity models based on quantitative structure-activity relationships (QSAR). Here, we secured 10 representative QSAR-based prediction models, and the information they provide, that can make predictions about substances expected to be regulated. We used models that predict and confirm the usability of the information expected to be collected and submitted according to the legislation. After collecting and evaluating each predictive model and relevant data, we prepared methods for quantifying the scientific validity and reliability, which are essential conditions for using predictive models. Results We calculated predicted values for the models. Furthermore, we deduced and compared the adequacies of the models using the Alternative non-testing method assessed for Registration, Evaluation, Authorization, and Restriction of Chemical Substances scoring system, and deduced the applicability domains for each model. Additionally, we calculated and compared inclusion rates of substances expected to be regulated, to confirm applicability. Conclusions We evaluated and compared the data, adequacy, and applicability of our selected QSAR-based toxicity prediction models, and included them in a database. Based on these data, we aimed to construct a system that can be used with predicted toxicity results. Furthermore, by presenting the suitability of individual predicted results, we aimed to provide a foundation that could be used in actual assessments and regulations. PMID:26206368
Multiagent intelligent systems
NASA Astrophysics Data System (ADS)
Krause, Lee S.; Dean, Christopher; Lehman, Lynn A.
2003-09-01
This paper discusses a simulation approach based upon a family of agent-based models. As the demands placed upon simulation technology by such applications as Effects Based Operations (EBO), evaluation of indicators and warnings surrounding homeland defense, and commercial demands such as financial risk management grow, current single-thread simulations will continue to show serious deficiencies. The types of "what if" analysis required to support these applications demand rapidly re-configurable approaches capable of aggregating large models incorporating multiple viewpoints. The use of agent technology promises to provide a broad spectrum of models incorporating differing viewpoints through the synthesis of a collection of models. Each model provides estimates for the overall scenario based upon its particular measure or aspect. An agent framework, denoted as the "family," provides a common ontology in support of differing aspects of the scenario. This approach permits the future of modeling to change from viewing the problem as a single-thread simulation to taking into account multiple viewpoints from different models. Even as models are updated or replaced, the agent approach permits their rapid inclusion in new or modified simulations. In this approach, the synthesis of a variety of low- and high-resolution information requires a family of models. Each agent "publishes" its support for a given measure, and each model provides its own estimates on the scenario based upon its particular measure or aspect. If more than one agent provides the same measure (e.g., cognitive), the results from these agents are combined to form an aggregate measure response. The objective is to inform and help calibrate a qualitative model, rather than merely to present highly aggregated statistical information. As each result is processed, the next action can be determined. This is done by a top-level decision system that communicates with the family at the ontology level, without any specific understanding of the processes (or models) behind each agent. The increasingly complex demands upon simulation, and the necessity to incorporate the breadth and depth of influencing factors, make a family of agent-based models a promising solution. This paper discusses that solution, with the syntax and semantics necessary to support the approach.
Bouvet, J-M; Makouanzi, G; Cros, D; Vigneron, Ph
2016-01-01
Hybrids are broadly used in plant breeding, and accurate estimation of variance components is crucial for optimizing genetic gain. Genome-wide information may be used to explore models designed to assess the extent of additive and non-additive variance and to test their prediction accuracy for genomic selection. Ten linear mixed models, involving pedigree- and marker-based relationship matrices among parents, were developed to estimate additive (A), dominance (D) and epistatic (AA, AD and DD) effects. Five complementary models, involving the gametic phase to estimate marker-based relationships among hybrid progenies, were developed to assess the same effects. The models were compared using tree height and 3303 single-nucleotide polymorphism markers from 1130 cloned individuals obtained via controlled crosses of 13 Eucalyptus urophylla females with 9 Eucalyptus grandis males. Akaike information criterion (AIC), variance ratios, asymptotic correlation matrices of estimates, goodness-of-fit, prediction accuracy and mean square error (MSE) were used for the comparisons. The variance components and variance ratios differed according to the model. Models with a parent marker-based relationship matrix performed better than those that were pedigree-based, that is, an absence of singularities, lower AIC, higher goodness-of-fit and accuracy, and smaller MSE. However, AD and DD variances were estimated with high standard errors. Using the same criteria, progeny gametic-phase-based models performed better in fitting the observations and predicting genetic values. However, DD variance could not be separated from the dominance variance, and null estimates were obtained for AA and AD effects. This study highlighted the advantages of progeny models using genome-wide information. PMID:26328760
An evaluation of the uncertainties in biomass burning emissions
NASA Astrophysics Data System (ADS)
Yano, A.; Garcia Menendez, F.; Hu, Y.; Odman, M.
2012-12-01
The contribution of biomass burning emissions to the atmospheric loads of gases and aerosols can lead to major air quality problems and have significant climate impacts. Whether from wildfires, natural or human-induced, or from controlled burns, biomass burning emissions are an important source of air pollutants regionally in certain parts of the world as well as globally. There are two common ways of estimating biomass burning emissions: using either ground-based information or satellite observations. When there is sufficient local information about the burn area, the types of fuels and their consumption amounts, and the progression of the fire, ground-based estimation is preferred. For controlled burns (also known as prescribed burns), and for wildfires in places where land management is practiced to a certain extent, there is typically sufficient ground-based information for emissions estimation. However, for remote regions where no ground-based information is available on the size, intensity, or spread of the fire, estimates based on satellite observations are preferred. For example, burn location, size and timing information can be obtained from satellite retrievals of thermal anomalies, and fuel loading information can be obtained from satellite products of vegetation cover. In both cases, reasonable emission estimates for a variety of pollutants can be obtained by using emission factors (mass of pollutant released per unit mass of fuel consumed) derived from field or laboratory studies. Here, emissions from a controlled burn and a wildfire are estimated using both ground-based information and satellite observations. The controlled burn was conducted on 17 November 2009 near Santa Barbara, California over 80 ha of land covered with chaparral. An aircraft tracked the smoke plume and measured CO2, light scattering, and meteorological parameters during the burn (Akagi et al., 2011). The wildfire is from the summer of 2008, when tens of thousands of hectares of wild land burned in Northern California, causing unprecedented damage. NASA aircraft commissioned for the ARCTAS campaign at the time flew over the fires and collected data detailing the composition of gases and aerosols in the fire plumes (Singh et al., 2012). We model the fires using a newly developed system consisting of a plume rise and dispersion model specifically designed for wild-land fire plumes (Daysmoke; Achtemeier et al., 2011) coupled with a regional-scale chemistry-transport model (CMAQ). Wind fields generated by a weather prediction model (WRF) are adjusted locally to match aircraft measurements of wind speed and direction. The fires are simulated using both ground-based and satellite-based estimates of emissions. Predicted concentrations of gases and aerosols are compared to corresponding aircraft measurements. Satellite retrievals of aerosol optical depth are also used in evaluating model predictions. The new modeling system, along with the wind adjustments, reduces several of the uncertainties inherent to regional-scale modeling of plume transport. This allows for a more reliable analysis of the uncertainties related to emissions. Uncertainties in the magnitudes and timings of emissions, and in plume injection heights with respect to boundary layer heights, are investigated. Uncertainties associated with ground-based and satellite-based emissions estimation methods are compared to each other.
Model averaging techniques for quantifying conceptual model uncertainty.
Singh, Abhishek; Mishra, Srikanta; Ruskauff, Greg
2010-01-01
In recent years a growing understanding has emerged regarding the need to expand the modeling paradigm to include conceptual model uncertainty for groundwater models. Conceptual model uncertainty is typically addressed by formulating alternative model conceptualizations and assessing their relative likelihoods using statistical model averaging approaches. Several model averaging techniques and likelihood measures have been proposed in the recent literature for this purpose with two broad categories--Monte Carlo-based techniques such as Generalized Likelihood Uncertainty Estimation or GLUE (Beven and Binley 1992) and criterion-based techniques that use metrics such as the Bayesian and Kashyap Information Criteria (e.g., the Maximum Likelihood Bayesian Model Averaging or MLBMA approach proposed by Neuman 2003) and Akaike Information Criterion-based model averaging (AICMA) (Poeter and Anderson 2005). These different techniques can often lead to significantly different relative model weights and ranks because of differences in the underlying statistical assumptions about the nature of model uncertainty. This paper provides a comparative assessment of the four model averaging techniques (GLUE, MLBMA with KIC, MLBMA with BIC, and AIC-based model averaging) mentioned above for the purpose of quantifying the impacts of model uncertainty on groundwater model predictions. Pros and cons of each model averaging technique are examined from a practitioner's perspective using two groundwater modeling case studies. Recommendations are provided regarding the use of these techniques in groundwater modeling practice.
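For the criterion-based branch, the step from information criteria to model weights is a one-liner: Akaike weights renormalize exp(-ΔAIC/2) across the candidate set. A minimal sketch with hypothetical AIC values and predictions for four alternative groundwater conceptualizations:

```python
import numpy as np

def akaike_weights(aic_values):
    """Akaike weights: w_i = exp(-0.5 * dAIC_i) / sum_j exp(-0.5 * dAIC_j),
    where dAIC_i is each model's AIC minus the minimum AIC."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AIC values for four alternative conceptual models
aic = [1302.4, 1298.1, 1310.9, 1299.6]
weights = akaike_weights(aic)
print({f"model_{i}": round(w, 3) for i, w in enumerate(weights)})

# Model-averaged prediction: weighted sum of each model's prediction
preds = np.array([4.2, 3.8, 5.1, 4.0])   # e.g., predicted drawdown (m), synthetic
print("averaged prediction:", round(float(weights @ preds), 2))
```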
[Dental education for college students based on WeChat public platform].
Chen, Chuan-Jun; Sun, Tan
2016-06-01
The authors propose a model for dental education based on the WeChat public platform. In this model, teachers send various kinds of digital teaching information, such as PPT, Word and video files, to the WeChat public platform; students share the information for preview before class and identify the key-point knowledge from it for in-depth learning in class. Teachers also send reference materials for expansive learning after class. Questionnaires delivered through the WeChat public platform are used to evaluate the teaching effect, and improvements can be made based on the feedback. A discussion and interaction between students and teacher, based on WeChat, can be conducted on a specific topic to reach a proper solution. With the development of mobile terminal technology, mobile classes may become a reality in the near future.
HOSPITAL MANAGERS' NEED FOR INFORMATION ON HEALTH TECHNOLOGY INVESTMENTS.
Ølholm, Anne Mette; Kidholm, Kristian; Birk-Olsen, Mette; Christensen, Janne Buck
2015-01-01
There is growing interest in implementing hospital-based health technology assessment (HB-HTA) as a tool to facilitate decision making based on a systematic and multidisciplinary assessment of evidence. However, the decision-making process, including the informational needs of hospital decision makers, is not well described. The objective was to review empirical studies analysing the information that hospital decision makers need when deciding about health technology (HT) investments. A systematic review of empirical studies published in English or Danish from 2000 to 2012 was carried out. The literature was assessed by two reviewers working independently. The identified informational needs were assessed with regard to their agreement with the nine domains of EUnetHTA's Core Model. A total of 2,689 articles were identified and assessed. The review process resulted in 14 relevant studies containing 74 types of information that hospital decision makers found relevant. In addition to information covered by the Core Model, other types of information dealing with political and strategic aspects were identified. The most frequently mentioned types of information in the literature related to clinical, economic and political/strategic aspects. Legal, social, and ethical aspects were seldom considered most important. Hospital decision makers are able to describe their information needs when deciding on HT investments. The different types of information were not of equal importance to hospital decision makers, however, and full agreement between EUnetHTA's Core Model and the hospital decision-makers' informational needs was not observed. They also need information on political and strategic aspects not covered by the Core Model.
Design and realization of high quality prime farmland planning and management information system
NASA Astrophysics Data System (ADS)
Li, Manchun; Liu, Guohong; Liu, Yongxue; Jiang, Zhixin
2007-06-01
The article discusses the design and realization of a high quality prime farmland planning and management information system based on SDSS. Models for concept integration and management planning are used in high quality prime farmland planning in order to refine the current model system, and the management information system is designed with a triangular structure. Finally, an example of the Tonglu county high quality prime farmland planning and management information system is introduced.
CNN-based ranking for biomedical entity normalization.
Li, Haodi; Chen, Qingcai; Tang, Buzhou; Wang, Xiaolong; Xu, Hua; Wang, Baohua; Huang, Dong
2017-10-03
Most state-of-the-art biomedical entity normalization systems, such as rule-based systems, merely rely on morphological information of entity mentions, but rarely consider their semantic information. In this paper, we introduce a novel convolutional neural network (CNN) architecture that regards biomedical entity normalization as a ranking problem and benefits from semantic information of biomedical entities. The CNN-based ranking method first generates candidates using handcrafted rules, and then ranks the candidates according to their semantic information modeled by CNN as well as their morphological information. Experiments on two benchmark datasets for biomedical entity normalization show that our proposed CNN-based ranking method outperforms traditional rule-based method with state-of-the-art performance. We propose a CNN architecture that regards biomedical entity normalization as a ranking problem. Comparison results show that semantic information is beneficial to biomedical entity normalization and can be well combined with morphological information in our CNN architecture for further improvement.
An actual load forecasting methodology by interval grey modeling based on the fractional calculus.
Yang, Yang; Xue, Dingyü
2017-07-17
The operation processes of a thermal power plant are measured as real-time data, and a large number of historical interval data can be obtained from the dataset. Within defined periods of time, this interval information can provide important input for decision making and equipment maintenance. Actual load is one of the most important parameters, and the trends hidden in the historical data reflect the overall operation status of the equipment. However, with interval grey parameter numbers, the modeling and prediction process is more complicated than with real numbers. In order not to lose any information, the geometric coordinate features are used via the coordinates of the area and middle-point lines in this paper, which are proven to carry the same information as the original interval data. A grey prediction model for interval grey numbers based on fractional-order accumulation calculus is proposed. Compared with the integer-order model, the proposed method has more degrees of freedom and better performance for modeling and prediction, and it can be widely used in modeling and prediction for small samples of historical interval industry sequences. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
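The key operator is the fractional-order accumulated generating operation (r-AGO). Below is a sketch of it applied to a midpoint series, one of the two coordinate series (midpoint and area) the paper proves carry the interval information; the values and the order r are synthetic:

```python
import numpy as np
from scipy.special import binom

def fractional_accumulation(x, r):
    """r-order accumulated generating operation (r-AGO):
    x_r[k] = sum_{i<=k} C(k - i + r - 1, k - i) * x[i].
    Setting r = 1 recovers the classical first-order accumulation of grey models."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.array([sum(binom(k - i + r - 1, k - i) * x[i] for i in range(k + 1))
                     for k in range(n)])

# Interval data reduced to a real-valued midpoint series (synthetic values);
# the area (width) series would be treated the same way.
midpoint = [10.2, 10.8, 11.1, 11.9, 12.4]
series = fractional_accumulation(midpoint, r=0.6)
print(series.round(3))
# A GM(1,1)-type model would then be fit to the accumulated series, and the
# forecast recovered by the inverse fractional accumulation.
```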
Prediction and Informative Risk Factor Selection of Bone Diseases.
Li, Hui; Li, Xiaoyi; Ramanathan, Murali; Zhang, Aidong
2015-01-01
With the booming healthcare industry and the overwhelming amount of electronic health records (EHRs) shared by healthcare institutions and practitioners, we take advantage of EHR data to develop an effective disease risk management model that not only models the progression of the disease, but also predicts the risk of the disease for early disease control or prevention. Existing models for answering these questions usually fall into two categories: expert-knowledge-based models or handcrafted-feature-set-based models. To fully utilize the whole EHR data, we build a framework to construct an integrated representation of features from all available risk factors in the EHR data and use these integrated features to effectively predict osteoporosis and bone fractures. We also develop a framework for informative risk factor selection of bone diseases. A pair of models for two contrasting cohorts (e.g., diseased patients versus non-diseased patients) is established to discriminate their characteristics and find the most informative risk factors. Several empirical results on a real bone disease data set show that the proposed framework can successfully predict bone diseases and select informative risk factors that are beneficial and useful for guiding clinical decisions.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-15
... that is based on rigorous scientifically based research methods to assess the effectiveness of a...) Relies on measurements or observational methods that provide reliable and valid data across evaluators... of innovative, cohesive models that are based on research and have demonstrated that they effectively...
Rainbow trout-based assays for estrogenicity are currently being used for development of predictive models based upon quantitative structure activity relationships. A predictive model based on a single species raises the question of whether this information is valid for other spe...
Enabling interoperability in planetary sciences and heliophysics: The case for an information model
NASA Astrophysics Data System (ADS)
Hughes, J. Steven; Crichton, Daniel J.; Raugh, Anne C.; Cecconi, Baptiste; Guinness, Edward A.; Isbell, Christopher E.; Mafi, Joseph N.; Gordon, Mitchell K.; Hardman, Sean H.; Joyner, Ronald S.
2018-01-01
The Planetary Data System has developed the PDS4 Information Model to enable interoperability across diverse science disciplines. The Information Model is based on an integration of International Organization for Standardization (ISO) level standards for trusted digital archives, information model development, and metadata registries. Where controlled vocabularies provide a basic level of interoperability by supplying a common set of terms for communication between both machines and humans, the Information Model improves interoperability by means of an ontology that provides semantic information, or additional related context, for the terms. The Information Model was defined by a team of computer scientists and science experts from each of the diverse disciplines in the Planetary Science community, including Atmospheres, Geosciences, Cartography and Imaging Sciences, Navigational and Ancillary Information, Planetary Plasma Interactions, Ring-Moon Systems, and Small Bodies. The model was designed to be extensible beyond the Planetary Science community; for example, there are overlaps between certain PDS disciplines and the Heliophysics and Astrophysics disciplines. "Interoperability" can apply to many aspects of both the developer and the end-user experience, for example agency-to-agency, semantic-level, and application-level interoperability. We define these types of interoperability and focus on semantic-level interoperability, the type of interoperability most directly enabled by an information model.
A new service-oriented grid-based method for AIoT application and implementation
NASA Astrophysics Data System (ADS)
Zou, Yiqin; Quan, Li
2017-07-01
The traditional three-layer Internet of Things (IoT) model, which includes a physical perception layer, an information transferring layer, and a service application layer, cannot fully express the complexity and diversity of the agricultural engineering area. It is hard to categorize, organize and manage agricultural things with these three layers. Based on the above requirements, we propose a new service-oriented grid-based method to set up and build the agricultural IoT. Considering the heterogeneity, limitation, transparency and leveling attributes of agricultural things, we propose an abstract model for all agricultural resources. This model is service-oriented and expressed with the Open Grid Services Architecture (OGSA). Information and data on agricultural things are described and encapsulated using XML in this model. Every agricultural engineering application provides service by enabling one application node in this service-oriented grid. The description of the Web Service Resource Framework (WSRF)-based Agricultural Internet of Things (AIoT) and the encapsulation method are also discussed in this paper for resource management in this model.
Trainer, Asa; Hedberg, Thomas; Feeney, Allison Barnard; Fischer, Kevin; Rosche, Phil
2016-01-01
Advances in information technology triggered a digital revolution that holds promise of reduced costs, improved productivity, and higher quality. To ride this wave of innovation, manufacturing enterprises are changing how product definitions are communicated - from paper to models. To achieve industry's vision of the Model-Based Enterprise (MBE), the MBE strategy must include model-based data interoperability from design to manufacturing and quality in the supply chain. The Model-Based Definition (MBD) is created by the original equipment manufacturer (OEM) using Computer-Aided Design (CAD) tools. This information is then shared with the supplier so that they can manufacture and inspect the physical parts. Today, suppliers predominantly use Computer-Aided Manufacturing (CAM) and Coordinate Measuring Machine (CMM) models for these tasks. Traditionally, the OEM has provided design data to the supplier in the form of two-dimensional (2D) drawings, but may also include a three-dimensional (3D)-shape-geometry model, often in a standards-based format such as ISO 10303-203:2011 (STEP AP203). The supplier then creates the respective CAM and CMM models and machine programs to produce and inspect the parts. In the MBE vision for model-based data exchange, the CAD model must include product-and-manufacturing information (PMI) in addition to the shape geometry. Today's CAD tools can generate models with embedded PMI. And, with the emergence of STEP AP242, a standards-based model with embedded PMI can now be shared downstream. The on-going research detailed in this paper seeks to investigate three concepts. First, that the ability to utilize a STEP AP242 model with embedded PMI for CAD-to-CAM and CAD-to-CMM data exchange is possible and valuable to the overall goal of a more efficient process. Second, the research identifies gaps in tools, standards, and processes that inhibit industry's ability to cost-effectively achieve model-based-data interoperability in the pursuit of the MBE vision. Finally, it also seeks to explore the interaction between CAD and CMM processes and determine if the concept of feedback from CAM and CMM back to CAD is feasible. The main goal of our study is to test the hypothesis that model-based-data interoperability from CAD-to-CAM and CAD-to-CMM is feasible through standards-based integration. This paper presents several barriers to model-based-data interoperability. Overall, the project team demonstrated the exchange of product definition data between CAD, CAM, and CMM systems using standards-based methods. While gaps in standards coverage were identified, the gaps should not stop industry's progress toward MBE. The results of our study provide evidence in support of an open-standards method to model-based-data interoperability, which would provide maximum value and impact to industry.
Forestry-based biomass economic and financial information and tools: An annotated bibliography
Dan Loeffler; Jason Brandt; Todd Morgan; Greg Jones
2010-01-01
This annotated bibliography is a synthesis of information products available to land managers in the western United States regarding economic and financial aspects of forestry-based woody biomass removal, a component of fire hazard and/or fuel reduction treatments. This publication contains over 200 forestry-based biomass papers, financial models, sources of biomass...
Accurate position estimation methods based on electrical impedance tomography measurements
NASA Astrophysics Data System (ADS)
Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.
2017-08-01
Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and radiation-free operation. Estimation of the conductivity field leads to low-resolution images compared with other technologies, and to high computational cost. However, in many applications the target information lies in a low intrinsic dimensionality of the conductivity field. The estimation of this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for estimating it. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches compared with the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object's position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations, it is possible to use them in real-time applications without requiring high-performance computers.
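The optimization-based branch reduces to minimizing a weighted measurement-misfit cost with a derivative-free search. The sketch below uses a deliberately crude stand-in forward model (a real EIT solver would replace it) and Nelder-Mead, mirroring the weighted-cost/derivative-free combination the study favored; all geometry and noise values are assumptions:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical forward model: boundary signals as a smooth function of the
# anomaly position on a unit disk with 16 electrodes (illustration only).
angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)
electrodes = np.column_stack([np.cos(angles), np.sin(angles)])

def forward(pos):
    d = np.linalg.norm(electrodes - pos, axis=1)
    return 1.0 / (0.1 + d)        # signal decays with electrode-anomaly distance

true_pos = np.array([0.3, -0.2])
rng = np.random.default_rng(5)
measured = forward(true_pos) + rng.normal(0, 0.01, 16)   # noisy measurements

def cost(pos, weights=None):
    """Weighted squared-error cost between measured and modeled signals."""
    r = forward(pos) - measured
    w = np.ones_like(r) if weights is None else weights
    return float(np.sum(w * r**2))

# Derivative-free search (Nelder-Mead) from the center of the domain
result = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
print("estimated position:", result.x.round(3), "true:", true_pos)
```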
NASA Astrophysics Data System (ADS)
Li, Hanshan
2016-04-01
To enhance the stability and reliability of multi-screens testing systems, this paper studies the transmission link properties and performance of multi-screens target optical information over long distances. It sets up a discrete multi-tone modulation transmission model based on the geometric model of a laser multi-screens testing system and the principles of visible-light communication; analyzes the electro-optic and photoelectric conversion functions of the sender and receiver in the target optical information communication system; investigates the target information transmission performance and the transfer function of the generalized visible-light communication channel; establishes a light intensity spatial distribution model and distribution function for the optical information transmission link; and derives the SNR model of the communication system. Calculation and experimental analysis show that the transmission error rate increases with the transmission rate at a given channel modulation depth; when an appropriate transmission rate is selected, the bit error rate reaches 0.01.
An information spreading model based on online social networks
NASA Astrophysics Data System (ADS)
Wang, Tao; He, Juanjuan; Wang, Xiaoxia
2018-01-01
Online social platforms have become very popular in recent years. In addition to spreading information, users can review or collect information on these platforms. According to the information spreading rules of online social networks, a new information spreading model, namely the IRCSS model, is proposed in this paper. It includes a sharing mechanism, a reviewing mechanism, a collecting mechanism and a stifling mechanism. Mean-field equations are derived to describe the dynamics of the IRCSS model. Moreover, the steady states of reviewers, collectors and stiflers, and the effects of parameters on the peak values of reviewers, collectors and sharers, are analyzed. Finally, numerical simulations are performed on different networks. Results show that the collecting and reviewing mechanisms, as well as the connectivity of the network, make information travel wider and faster, and that, compared to the WS and ER networks, the speed of reviewing, sharing and collecting information is fastest on the BA network.
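Mean-field descriptions of this kind reduce to a small system of coupled ODEs. The sketch below integrates a system in the spirit of the IRCSS model, where ignorants become reviewers, collectors or sharers on contact with sharers and sharers eventually stifle; the specific rate structure and parameter values are assumptions for illustration, not the paper's exact equations.

```python
# Minimal mean-field sketch: ignorants (I) -> reviewers (R), collectors (C)
# or sharers (S) on contact with sharers; sharers decay into stiflers (F).
import numpy as np
from scipy.integrate import solve_ivp

lam_r, lam_c, lam_s, delta = 0.3, 0.2, 0.4, 0.1   # hypothetical rates

def rhs(t, y):
    I, R, C, S, F = y
    contact = I * S                        # homogeneous-mixing approximation
    dI = -(lam_r + lam_c + lam_s) * contact
    dR = lam_r * contact
    dC = lam_c * contact
    dS = lam_s * contact - delta * S       # sharers stifle at rate delta
    dF = delta * S
    return [dI, dR, dC, dS, dF]

sol = solve_ivp(rhs, (0, 50), [0.99, 0, 0, 0.01, 0])
print("final fractions (I, R, C, S, F):", sol.y[:, -1].round(3))
```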
Xu, Yiming; Smith, Scot E; Grunwald, Sabine; Abd-Elrahman, Amr; Wani, Suhas P; Nair, Vimala D
2017-09-11
Digital soil mapping (DSM) is gaining momentum as a technique to help smallholder farmers secure soil security and food security in developing regions. However, communication of digital soil mapping information between diverse audiences becomes problematic due to the inconsistent scale of DSM information. Spatial downscaling can make use of accessible soil information at relatively coarse spatial resolution to provide valuable soil information at relatively fine spatial resolution. The objective of this research was to disaggregate the coarse spatial resolution base maps of soil exchangeable potassium (Kex) and soil total nitrogen (TN) into fine spatial resolution downscaled soil maps using weighted generalized additive models (GAMs) in two smallholder villages in South India. By incorporating fine spatial resolution spectral indices in the downscaling process, the downscaled soil maps not only conserve the spatial information of the coarse spatial resolution soil maps but also depict the spatial details of soil properties at fine spatial resolution. The results of this study demonstrated that the difference between the fine spatial resolution downscaled maps and the fine spatial resolution base maps is smaller than the difference between the coarse spatial resolution base maps and the fine spatial resolution base maps. An appropriate and economical strategy to promote the DSM technique on smallholder farms is to develop relatively coarse spatial resolution soil prediction maps, or to utilize available coarse spatial resolution soil maps at the regional scale, and to disaggregate these maps into fine spatial resolution downscaled soil maps at the farm scale.
Effects of urban microcellular environments on ray-tracing-based coverage predictions.
Liu, Zhongyu; Guo, Lixin; Guan, Xiaowei; Sun, Jiejing
2016-09-01
The ray-tracing (RT) algorithm, which is based on geometrical optics and the uniform theory of diffraction, has become a typical deterministic approach of studying wave-propagation characteristics. Under urban microcellular environments, the RT method highly depends on detailed environmental information. The aim of this paper is to provide help in selecting the appropriate level of accuracy required in building databases to achieve good tradeoffs between database costs and prediction accuracy. After familiarization with the operating procedures of the RT-based prediction model, this study focuses on the effect of errors in environmental information on prediction results. The environmental information consists of two parts, namely, geometric and electrical parameters. The geometric information can be obtained from a digital map of a city. To study the effects of inaccuracies in geometry information (building layout) on RT-based coverage prediction, two different artificial erroneous maps are generated based on the original digital map, and systematic analysis is performed by comparing the predictions with the erroneous maps and measurements or the predictions with the original digital map. To make the conclusion more persuasive, the influence of random errors on RMS delay spread results is investigated. Furthermore, given the electrical parameters' effect on the accuracy of the predicted results of the RT model, the dielectric constant and conductivity of building materials are set with different values. The path loss and RMS delay spread under the same circumstances are simulated by the RT prediction model.
NASA Astrophysics Data System (ADS)
Kuvich, Gary
2003-08-01
Vision is part of a larger information system that converts visual information into knowledge structures. These structures drive the vision process, resolve ambiguity and uncertainty via feedback projections, and provide image understanding, which is an interpretation of visual information in terms of such knowledge models. The human brain is found to emulate knowledge structures in the form of network-symbolic models, which marks an important shift of paradigm in our knowledge about the brain: from neural networks to "cortical software". Symbols, predicates and grammars naturally emerge in such active multilevel hierarchical networks, and logic is simply a way of restructuring such models. The brain analyzes an image as a graph-type decision structure created via multilevel hierarchical compression of visual information. Mid-level vision processes such as clustering, perceptual grouping and separation of figure from ground are special kinds of graph/network transformations. They convert low-level image structure into a set of more abstract structures that represent objects and the visual scene, making them easy to analyze with higher-level knowledge structures; higher-level vision phenomena are the results of such analysis. Composition of network-symbolic models works similarly to frames and agents, combining learning, classification and analogy with higher-level model-based reasoning in a single framework. Such models do not require supercomputers. Based on these principles, and using methods of computational intelligence, an image understanding system can convert images into network-symbolic knowledge models and effectively resolve uncertainty and ambiguity, providing a unifying representation for perception and cognition. This allows the creation of new intelligent computer vision systems for the robotic and defense industries.
Product Recommendation System Based on Personal Preference Model Using CAM
NASA Astrophysics Data System (ADS)
Murakami, Tomoko; Yoshioka, Nobukazu; Orihara, Ryohei; Furukawa, Koichi
A product recommendation system is realized by applying business rules acquired by data mining techniques. Business rules, such as demographic patterns of purchase, can cover the groups of users that have a tendency to purchase products, but it is difficult to recommend products adapted to various personal preferences using such rules alone. In addition, it is very costly to gather the large volume of high-quality survey data necessary for good recommendation based on a personal preference model, so a method for collecting kansei information automatically, without questionnaire surveys, is required. Constructing a personal preference model from limited preference data is also necessary, since it is costly for users to input preference data. In this paper, we propose a product recommendation system based on kansei information extracted by text mining and a user preference model constructed by Category-guided Adaptive Modeling (CAM). CAM is a feature construction method that can generate new features, constructing a space in which examples with the same label are close together and examples with different labels are far apart. CAM makes it possible to construct a personal preference model despite limited information on liked and disliked categories. In the system, a retrieval agent gathers the products' specifications and a user agent manages the preference model and the user's likes and dislikes. Kansei information about the products is obtained by applying text mining to the reputation documents about the products on web sites. We carry out experimental studies to confirm that the preference model obtained by our method performs effectively.
Xu, Yingying; Lin, Lanfen; Hu, Hongjie; Wang, Dan; Zhu, Wenchao; Wang, Jian; Han, Xian-Hua; Chen, Yen-Wei
2018-01-01
The bag of visual words (BoVW) model is a powerful tool for feature representation that can integrate various handcrafted features like intensity, texture, and spatial information. In this paper, we propose a novel BoVW-based method that incorporates texture and spatial information for the content-based image retrieval to assist radiologists in clinical diagnosis. This paper presents a texture-specific BoVW method to represent focal liver lesions (FLLs). Pixels in the region of interest (ROI) are classified into nine texture categories using the rotation-invariant uniform local binary pattern method. The BoVW-based features are calculated for each texture category. In addition, a spatial cone matching (SCM)-based representation strategy is proposed to describe the spatial information of the visual words in the ROI. In a pilot study, eight radiologists with different clinical experience performed diagnoses for 20 cases with and without the top six retrieved results. A total of 132 multiphase computed tomography volumes including five pathological types were collected. The texture-specific BoVW was compared to other BoVW-based methods using the constructed dataset of FLLs. The results show that our proposed model outperforms the other three BoVW methods in discriminating different lesions. The SCM method, which adds spatial information to the orderless BoVW model, impacted the retrieval performance. In the pilot trial, the average diagnosis accuracy of the radiologists was improved from 66 to 80% using the retrieval system. The preliminary results indicate that the texture-specific features and the SCM-based BoVW features can effectively characterize various liver lesions. The retrieval system has the potential to improve the diagnostic accuracy and the confidence of the radiologists.
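The texture-specific step above can be made concrete with a reduced sketch: pixels are first binned into texture categories via rotation-invariant uniform LBP, then a per-category histogram is built and the histograms are concatenated into one descriptor. The real method clusters richer local features per category and adds spatial cone matching; this illustration, with made-up data and a plain intensity histogram per category, only shows the category-wise structure.

```python
# Simplified texture-specific descriptor: LBP categories + per-category
# intensity histograms, concatenated into a single feature vector.
import numpy as np
from skimage.feature import local_binary_pattern

def texture_bovw(roi, n_gray_bins=16, P=8, R=1.0):
    codes = local_binary_pattern(roi, P, R, method="uniform")  # P+2 categories
    feats = []
    for c in range(P + 2):
        pix = roi[codes == c]                       # pixels of this texture
        hist, _ = np.histogram(pix, bins=n_gray_bins, range=(0.0, 1.0))
        feats.append(hist / max(hist.sum(), 1))     # normalize per category
    return np.concatenate(feats)

roi = np.random.default_rng(1).random((64, 64))     # stand-in for a liver ROI
print(texture_bovw(roi).shape)                      # (10 * 16,) = (160,)
```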
Dynamic Bus Travel Time Prediction Models on Road with Multiple Bus Routes
Bai, Cong; Peng, Zhong-Ren; Lu, Qing-Chang; Sun, Jian
2015-01-01
Accurate and real-time travel time information for buses can help passengers better plan their trips and minimize waiting times. A dynamic travel time prediction model for buses addressing the cases on road with multiple bus routes is proposed in this paper, based on support vector machines (SVMs) and a Kalman filtering-based algorithm. In the proposed model, the well-trained SVM model predicts the baseline bus travel times from the historical bus trip data; the Kalman filtering-based dynamic algorithm can adjust bus travel times with the latest bus operation information and the estimated baseline travel times. The performance of the proposed dynamic model is validated with the real-world data on road with multiple bus routes in Shenzhen, China. The results show that the proposed dynamic model is feasible and applicable for bus travel time prediction and has the best prediction performance among all the five models proposed in the study in terms of prediction accuracy on road with multiple bus routes. PMID:26294903
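The two-stage idea can be sketched compactly: a support-vector regressor (SVR, the regression variant of the SVM) learned offline predicts a baseline travel time from historical trip features, and a scalar Kalman filter blends that baseline with the latest observed travel times online. All data, features and noise settings below are synthetic placeholders, not the Shenzhen setup.

```python
# Sketch: SVR baseline prediction + scalar Kalman-filter adjustment.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X_hist = rng.random((200, 3))                  # e.g. time-of-day, segment, route
y_hist = 300 + 120 * X_hist[:, 0] + rng.normal(0, 10, 200)   # seconds
baseline_model = SVR(kernel="rbf").fit(X_hist, y_hist)

def kalman_update(x, P, z, Q=25.0, R=100.0):
    """One predict/update step for a scalar state (travel time)."""
    x_pred, P_pred = x, P + Q          # random-walk process model
    K = P_pred / (P_pred + R)          # Kalman gain
    return x_pred + K * (z - x_pred), (1 - K) * P_pred

x = baseline_model.predict(X_hist[:1])[0]      # initialize from SVR baseline
P = 100.0
for z in [310.0, 335.0, 320.0]:                # latest observed travel times
    x, P = kalman_update(x, P, z)
print("adjusted travel time estimate:", round(x, 1))
```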
Ko, Linda K; Turner-McGrievy, Gabrielle M; Campbell, Marci K
2014-04-01
Podcasting is an emerging technology, and previous interventions have shown promising results using theory-based podcast for weight loss among overweight and obese individuals. This study investigated whether constructs of social cognitive theory and information processing theories (IPTs) mediate the effect of a podcast intervention on weight loss among overweight individuals. Data are from Pounds off Digitally, a study testing the efficacy of two weight loss podcast interventions (control podcast and theory-based podcast). Path models were constructed (n = 66). The IPTs, elaboration likelihood model, information control theory, and cognitive load theory mediated the effect of a theory-based podcast on weight loss. The intervention was significantly associated with all IPTs. Information control theory and cognitive load theory were related to elaboration, and elaboration was associated with weight loss. Social cognitive theory constructs did not mediate weight loss. Future podcast interventions grounded in theory may be effective in promoting weight loss.
Modeling reliability measurement of interface on information system: Towards the forensic of rules
NASA Astrophysics Data System (ADS)
Nasution, M. K. M.; Sitompul, Darwin; Harahap, Marwan
2018-02-01
Today almost all machines depend on software, and a software and hardware system likewise depends on rules - the procedures for its use. If a procedure or program can be reliably characterized by involving the concepts of graphs, logic and probability, then regulatory strength can also be measured accordingly. Therefore, this paper initiates an enumeration model to measure the reliability of interfaces, based on the case of information systems supported by rules of use issued by the relevant agencies. An enumeration model is obtained based on software reliability calculation.
Gravity Effects on Information Filtering and Network Evolving
Liu, Jin-Hu; Zhang, Zi-Ke; Chen, Lingjiao; Liu, Chuang; Yang, Chengcheng; Wang, Xueqi
2014-01-01
In this paper, based on the gravity principle of classical physics, we propose a tunable gravity-based model, which considers tag usage pattern to weigh both the mass and distance of network nodes. We then apply this model in solving the problems of information filtering and network evolving. Experimental results on two real-world data sets, Del.icio.us and MovieLens, show that it can not only enhance the algorithmic performance, but can also better characterize the properties of real networks. This work may shed some light on the in-depth understanding of the effect of gravity model. PMID:24622162
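A minimal numeric sketch of the gravity idea follows: each node gets a "mass" and pairs of nodes a "distance", and the score falls off with distance raised to a tunable exponent. The specific choices here (tag-usage counts as mass, one minus cosine similarity of tag vectors as distance) are illustrative assumptions; the paper's exact weighting may differ.

```python
# Tunable gravity-style score for information filtering.
import numpy as np

def gravity_score(mass_u, mass_v, tags_u, tags_v, gamma=2.0, eps=1e-9):
    cos = np.dot(tags_u, tags_v) / (
        np.linalg.norm(tags_u) * np.linalg.norm(tags_v) + eps)
    distance = 1.0 - cos + eps          # similar nodes are "close"
    return mass_u * mass_v / distance**gamma

tags_a = np.array([3.0, 0.0, 1.0])      # made-up tag-usage vectors
tags_b = np.array([2.0, 1.0, 0.0])
print(gravity_score(tags_a.sum(), tags_b.sum(), tags_a, tags_b))
```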
Development of Health Information Search Engine Based on Metadata and Ontology
Song, Tae-Min; Jin, Dal-Lae
2014-01-01
Objectives The aim of the study was to develop a metadata and ontology-based health information search engine ensuring semantic interoperability to collect and provide health information using different application programs. Methods Health information metadata ontology was developed using a distributed semantic Web content publishing model based on vocabularies used to index the contents generated by the information producers as well as those used to search the contents by the users. Vocabulary for health information ontology was mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT), and a list of about 1,500 terms was proposed. The metadata schema used in this study was developed by adding an element describing the target audience to the Dublin Core Metadata Element Set. Results A metadata schema and an ontology ensuring interoperability of health information available on the internet were developed. The metadata and ontology-based health information search engine developed in this study produced a better search result compared to existing search engines. Conclusions Health information search engine based on metadata and ontology will provide reliable health information to both information producer and information consumers. PMID:24872907
Use of generalized linear models and digital data in a forest inventory of Northern Utah
Moisen, Gretchen G.; Edwards, Thomas C.
1999-01-01
Forest inventories, like those conducted by the Forest Service's Forest Inventory and Analysis Program (FIA) in the Rocky Mountain Region, are under increased pressure to produce better information at reduced costs. Here we describe our efforts in Utah to merge satellite-based information with forest inventory data for the purposes of reducing the costs of estimates of forest population totals and providing spatial depiction of forest resources. We illustrate how generalized linear models can be used to construct approximately unbiased and efficient estimates of population totals while providing a mechanism for prediction in space for mapping of forest structure. We model forest type and timber volume of five tree species groups as functions of a variety of predictor variables in the northern Utah mountains. Predictor variables include elevation, aspect, slope, geographic coordinates, as well as vegetation cover types based on satellite data from both the Advanced Very High Resolution Radiometer (AVHRR) and Thematic Mapper (TM) platforms. We examine the relative precision of estimates of area by forest type and mean cubic-foot volumes under six different models, including the traditional double sampling for stratification strategy. Only very small gains in precision were realized through the use of expensive photointerpreted or TM-based data for stratification, while models based on topography and spatial coordinates alone were competitive. We also compare the predictive capability of the models through various map accuracy measures. The models including the TM-based vegetation performed best overall, while topography and spatial coordinates alone provided substantial information at very low cost.
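The modeling core of the approach is a generalized linear model linking inventory response variables to terrain predictors. The sketch below fits a binomial GLM for forest presence from elevation, aspect and slope using statsmodels; the data are synthetic placeholders for FIA plot records, and the coefficients are invented for illustration.

```python
# Sketch: binomial GLM of forest type presence on terrain predictors.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 500
X = np.c_[rng.uniform(1500, 3000, n),           # elevation (m)
          rng.uniform(0, 360, n),               # aspect (deg)
          rng.uniform(0, 45, n)]                # slope (deg)
logit = -8 + 0.004 * X[:, 0] - 0.02 * X[:, 2]   # assumed true relationship
y = rng.random(n) < 1 / (1 + np.exp(-logit))    # forest / non-forest

model = sm.GLM(y.astype(float), sm.add_constant(X),
               family=sm.families.Binomial()).fit()
print(model.params)    # coefficients usable for wall-to-wall prediction
```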
Two Maintenance Mechanisms of Verbal Information in Working Memory
ERIC Educational Resources Information Center
Camos, V.; Lagner, P.; Barrouillet, P.
2009-01-01
The present study evaluated the interplay between two mechanisms of maintenance of verbal information in working memory, namely articulatory rehearsal as described in Baddeley's model, and attentional refreshing as postulated in Barrouillet and Camos's Time-Based Resource-Sharing (TBRS) model. In four experiments using complex span paradigm, we…
Incorporating Non-Relevance Information in the Estimation of Query Models
2008-11-01
experiments in relevance feedback. In Salton, G., editor, The SMART Retrieval System – Experiments in Automatic Document Processing, pages 337–354. ...W. (2001). Relevance based language models. In SIGIR '01. Rocchio, J. (1971). Relevance feedback in information retrieval. In Salton, G., editor
Term Dependence: A Basis for Luhn and Zipf Models.
ERIC Educational Resources Information Center
Losee, Robert M.
2001-01-01
Discusses relationships between the frequency-based characteristics of neighboring terms in natural language and the rank or frequency of the terms. Topics include information theory measures, including expected mutual information measure (EMIM); entropy and rank; Luhn's model of term aboutness; Zipf's law; and implications for indexing and…
Leading the Teacher Team--Balancing between Formal and Informal Power in Program Leadership
ERIC Educational Resources Information Center
Högfeldt, Anna-Karin; Malmi, Lauri; Kinnunen, Päivi; Jerbrant, Anna; Strömberg, Emma; Berglund, Anders; Villadsen, Jørgen
2018-01-01
This continuous research within Nordic engineering institutions targets the contexts and possibilities for leadership among engineering education program directors. The IFP-model, developed based on analysis of interviews with program leaders in these institutions, visualizes the program director's informal and formal power. The model is presented…
NASA Astrophysics Data System (ADS)
Shi, Jing; Shi, Yunli; Tan, Jian; Zhu, Lei; Li, Hu
2018-02-01
Traditional power forecasting models cannot efficiently take various factors into account, nor can they identify the relevant factors. In this paper, mutual information from information theory and the artificial-intelligence random forests algorithm are introduced into medium- and long-term electricity demand prediction. Mutual information can identify the highly related factors based on the average mutual information between a variety of variables and electricity demand; different industries may be highly associated with different variables. The random forests algorithm was used to build a forecasting model for each industry according to its correlated factors. Electricity consumption data from Jiangsu Province are taken as a practical example, and the above methods are compared with methods that disregard mutual information and industry segmentation. The simulation results show that the method is scientific, effective, and can provide higher prediction accuracy.
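The two-step scheme can be sketched directly with scikit-learn: rank candidate drivers by estimated mutual information with demand, then fit a random forest on the top-ranked factors. The synthetic series below stands in for the Jiangsu consumption data, and the factor count and cutoff are illustrative assumptions.

```python
# Sketch: mutual-information factor selection + random forest regression.
import numpy as np
from sklearn.feature_selection import mutual_info_regression
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)
X = rng.random((120, 6))                         # candidate factors (GDP, ...)
y = 3 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(0, 0.1, 120)   # demand

mi = mutual_info_regression(X, y, random_state=0)
top = np.argsort(mi)[::-1][:3]                   # keep high-relation factors
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:, top], y)
print("selected factors:", top, "R^2:", round(rf.score(X[:, top], y), 3))
```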
Modeling and mining term association for improving biomedical information retrieval performance.
Hu, Qinmin; Huang, Jimmy Xiangji; Hu, Xiaohua
2012-06-11
The growth of biomedical information requires most information retrieval systems to provide short and specific answers in response to complex user queries. Semantic information in the form of free text is structured in a way that makes it straightforward for humans to read but more difficult for computers to interpret automatically and search efficiently. One of the reasons is that most traditional information retrieval models assume terms are conditionally independent given a document/passage. Therefore, we are motivated to consider term associations within different contexts to help the models understand semantic information and use it to improve biomedical information retrieval performance. We propose a term association approach to discover associations among the keywords of a query. The experiments are conducted on the TREC 2004-2007 Genomics data sets and the TREC 2004 HARD data set. The proposed approach is promising and achieves superiority over the baselines and the GSP results. Investigation of the parameter settings and different indices shows that the sentence-based index produces the best results at the document level, the word-based index the best results at the passage level, and the paragraph-based index the best results at the passage2 level. Furthermore, the best term association results always come from the best baseline. The tuning number k in the proposed recursive re-ranking algorithm is discussed and locally optimized to be 10. First, modelling term association for improving biomedical information retrieval using factor analysis is one of the major contributions of our work. Second, the experiments confirm that term association considering co-occurrence and dependency among the keywords can produce better results than the baselines treating the keywords independently. Third, the baselines are re-ranked according to the importance and reliance of latent factors behind term associations. These latent factors are decided by the proposed model and their term appearances in the first-round retrieved passages.
Deployment and Evaluation of an Observations Data Model
NASA Astrophysics Data System (ADS)
Horsburgh, J. S.; Tarboton, D. G.; Zaslavsky, I.; Maidment, D. R.; Valentine, D.
2007-12-01
Environmental observations are fundamental to hydrology and water resources, and the way these data are organized and manipulated either enables or inhibits the analyses that can be performed. The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. This includes an Observations Data Model (ODM) that provides a new and consistent format for the storage and retrieval of environmental observations in a relational database designed to facilitate integrated analysis of large datasets collected by multiple investigators. Within this data model, observations are stored with sufficient ancillary information (metadata) about the observations to allow them to be unambiguously interpreted and used, and to provide traceable heritage from raw measurements to usable information. The design is based upon a relational database model that exposes each single observation as a record, taking advantage of the capability in relational database systems for querying based upon data values and enabling cross-dimension data retrieval and analysis. This data model has been deployed, as part of the HIS Server, at the WATERS Network test bed observatories across the U.S., where it serves as a repository for real-time data in the observatory information system. The ODM holds the data that is then made available to investigators and the public through web services and the Data Access System for Hydrology (DASH) map-based interface. In the WATERS Network test bed settings the ODM has been used to ingest, analyze and publish data from a variety of sources and disciplines. This paper will present an evaluation of the effectiveness of this initial deployment and the revisions that are being instituted to address shortcomings. The ODM represents a new, systematic way for hydrologists, scientists, and engineers to organize and share their data and thereby facilitate a fuller integrated understanding of water resources based on more extensive and fully specified information.
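The "one observation per record" design can be illustrated with a tiny relational sketch: each value row carries enough metadata (site, variable, units, method, time) to be interpreted on its own and queried by value. The column names below are illustrative, not the official ODM schema.

```python
# Sketch of a row-per-observation table enabling queries on data values.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
CREATE TABLE data_values (
    value_id    INTEGER PRIMARY KEY,
    data_value  REAL NOT NULL,
    local_time  TEXT NOT NULL,
    site_code   TEXT NOT NULL,
    variable    TEXT NOT NULL,   -- e.g. 'discharge'
    units       TEXT NOT NULL,
    method      TEXT
)""")
con.execute("INSERT INTO data_values VALUES (1, 2.7, '2007-10-01T00:00', "
            "'LR_A', 'discharge', 'm^3/s', 'acoustic Doppler')")
# Cross-dimension retrieval by value, enabled by the record-based design:
for row in con.execute("SELECT site_code, data_value FROM data_values "
                       "WHERE variable = 'discharge' AND data_value > 1.0"):
    print(row)
```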
3D Modelling and Interactive Web-Based Visualization of Cultural Heritage Objects
NASA Astrophysics Data System (ADS)
Koeva, M. N.
2016-06-01
Nowadays, there are rapid developments in the fields of photogrammetry, laser scanning, computer vision and robotics, together aiming to provide highly accurate 3D data that is useful for various applications. In recent years, various LiDAR and image-based techniques have been investigated for 3D modelling because of their opportunities for fast and accurate model generation. For cultural heritage preservation and the representation of objects that are important for tourism and their interactive visualization, 3D models are highly effective and intuitive for present-day users who have stringent requirements and high expectations. Depending on the complexity of the objects for the specific case, various technological methods can be applied. The selected objects in this particular research are located in Bulgaria - a country with thousands of years of history and cultural heritage dating back to ancient civilizations. This motivates the preservation, visualisation and recreation of undoubtedly valuable historical and architectural objects and places, which has always been a serious challenge for specialists in the field of cultural heritage. In the present research, comparative analyses regarding principles and technological processes needed for 3D modelling and visualization are presented. The recent problems, efforts and developments in interactive representation of precious objects and places in Bulgaria are presented. Three technologies based on real projects are described: (1) image-based modelling using a non-metric hand-held camera; (2) 3D visualization based on spherical panoramic images; (3) and 3D geometric and photorealistic modelling based on architectural CAD drawings. Their suitability for web-based visualization is demonstrated and compared. Moreover, the possibilities for integration with additional information such as interactive maps, satellite imagery, sound, video and specific information for the objects are described. This comparative study discusses the advantages and disadvantages of these three approaches and their integration in multiple domains, such as web-based 3D city modelling, tourism and architectural 3D visualization. It was concluded that image-based modelling and panoramic visualisation are simple, fast and effective techniques suitable for simultaneous virtual representation of many objects. However, additional measurements or CAD information will be beneficial for obtaining higher accuracy.
Rajoli, Rajith KR; Back, David J; Rannard, Steve; Meyers, Caren Freel; Flexner, Charles; Owen, Andrew; Siccardi, Marco
2014-01-01
Background and Objectives Antiretrovirals (ARVs) are currently used for the treatment and prevention of HIV infection. Poor adherence and low tolerability of some existing oral formulations can hinder their efficacy. Long-acting (LA) injectable nanoformulations could help address these complications by simplifying ARV administration. The aim of this study is to inform the optimisation of intramuscular LA formulations for eight ARVs through physiologically-based pharmacokinetic (PBPK) modelling. Methods A whole-body PBPK model was constructed using mathematical descriptions of molecular, physiological and anatomical processes defining pharmacokinetics. These models were validated against available clinical data and subsequently used to predict the pharmacokinetics of injectable LA formulations. Results The predictions suggest that monthly intramuscular injections are possible for dolutegravir, efavirenz, emtricitabine, raltegravir, rilpivirine and tenofovir provided that technological challenges to control release rate can be addressed. Conclusions These data may help inform the target product profiles for LA ARV reformulation strategies. PMID:25523214
Information on where and how individuals spend their time is important for characterizing exposures to chemicals in consumer products and in indoor environments. Traditionally, exposure assessors have relied on time-use surveys in order to obtain information on exposure-related b...
The role of physician characteristics in clinical trial acceptance: testing pathways of influence.
Curbow, Barbara; Fogarty, Linda A; McDonnell, Karen A; Chill, Julia; Scott, Lisa Benz
2006-03-01
Eight videotaped vignettes were developed that assessed the effects of three physician-related experimental variables (in a 2 x 2 x 2 factorial design) on clinical trial (CT) knowledge, video knowledge, information processing, CT beliefs, affective evaluations (attitudes), and CT acceptance. It was hypothesized that the physician variables (community versus academic-based affiliation, enthusiastic versus neutral presentation of the trial, and new versus previous relationship with the patient) would serve as communication cues that would interrupt message processing, leading to lower knowledge gain but more positive beliefs, attitudes, and CT acceptance. A total of 262 women (161 survivors and 101 controls) participated in the study. The manipulated variables primarily influenced the intermediary variables of post-test CT beliefs and satisfaction with information rather than knowledge or information processing. Multiple regression results indicated that CT acceptance was associated with positive post-CT beliefs, a lower level of information processing, satisfaction with information, and control status. Based on these results, CT acceptance does not appear to be based on a rational decision-making model; this has implications for both the ethics of informed consent and research conceptual models.
Fault Detection and Diagnosis for Gas Turbines Based on a Kernelized Information Entropy Model
Wang, Weiying; Xu, Zhiqiang; Tang, Rui; Li, Shuying; Wu, Wei
2014-01-01
Gas turbines are considered one of the most important devices in power engineering and have been widely used in power generation, airplanes, and naval ships, and also on oil drilling platforms. However, in most cases they are monitored without personnel on duty. It is highly desirable to develop techniques and systems to remotely monitor their conditions and analyze their faults. In this work, we introduce a remote system for online condition monitoring and fault diagnosis of gas turbines on offshore oil well drilling platforms based on a kernelized information entropy model. Shannon information entropy is generalized for measuring the uniformity of exhaust temperatures, which reflect the overall state of the gas paths of the gas turbine. In addition, we extend the entropy to compute the information quantity of features in kernel spaces, which helps to select the informative features for a certain recognition task. Finally, we introduce an information-entropy-based decision tree algorithm to extract rules from fault samples. Experiments on real-world data show the effectiveness of the proposed algorithms. PMID:25258726
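The entropy-based uniformity measure at the heart of the method can be sketched in a few lines: normalize the exhaust temperature readings into a distribution and compute its Shannon entropy, which drops when one gas path runs hot or cold relative to the others. The readings below are made up for illustration.

```python
# Sketch: Shannon entropy as a uniformity measure of exhaust temperatures.
import numpy as np

def temperature_entropy(temps):
    p = np.asarray(temps, dtype=float)
    p = p / p.sum()                     # treat readings as a distribution
    return -np.sum(p * np.log(p))       # maximal when perfectly uniform

healthy = [650, 652, 648, 651, 649, 650]   # uniform exhaust temperatures
faulty  = [650, 652, 590, 651, 649, 650]   # one gas path anomalous
print(temperature_entropy(healthy) > temperature_entropy(faulty))  # True
```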
Tanabe, Akifumi S
2011-09-01
Proportional and separate models able to apply different combinations of substitution rate matrix (SRM) and among-site rate variation model (ASRVM) to each locus are frequently used in phylogenetic studies of multilocus data. A proportional model assumes that branch lengths are proportional among partitions and a separate model assumes that each partition has an independent set of branch lengths. However, the selection from among nonpartitioned (i.e., a common combination of models is applied to all-loci concatenated sequences), proportional and separate models is usually based on the researcher's preference rather than on any information criteria. This study describes two programs, 'Kakusan4' (for DNA sequences) and 'Aminosan' (for amino-acid sequences), which allow the selection of evolutionary models based on several types of information criteria. The programs can handle both multilocus and single-locus data, in addition to providing an easy-to-use wizard interface and a noninteractive command line interface. In the case of multilocus data, SRMs and ASRVMs are compared at each locus and at all-loci concatenated sequences, after which nonpartitioned, proportional and separate models are compared based on information criteria. The programs also provide model configuration files for mrbayes, paup*, phyml, raxml and Treefinder to support further phylogenetic analysis using a selected model. When likelihoods are optimized by Treefinder, the best-fit models were found to differ depending on the data set. Furthermore, differences in the information criteria among nonpartitioned, proportional and separate models were much larger than those among the nonpartitioned models. These findings suggest that selecting from nonpartitioned, proportional and separate models results in a better phylogenetic tree. Kakusan4 and Aminosan are available at http://www.fifthdimension.jp/. They are licensed under the GNU GPL ver. 2, and are able to run on Windows, MacOS X and Linux.
Cornett, Alex; Kuziemsky, Craig
2015-01-01
Implementing team based workflows can be complex because of the scope of providers involved and the extent of information exchange and communication that needs to occur. While a workflow may represent the ideal structure of communication that needs to occur, information issues and contextual factors may impact how the workflow is implemented in practice. Understanding these issues will help us better design systems to support team based workflows. In this paper we use a case study of palliative sedation therapy (PST) to model a PST workflow and then use it to identify purposes of communication, information issues and contextual factors that impact them. We then suggest how our findings could inform health information technology (HIT) design to support team based communication workflows.
Vivaldi: visualization and validation of biomacromolecular NMR structures from the PDB.
Hendrickx, Pieter M S; Gutmanas, Aleksandras; Kleywegt, Gerard J
2013-04-01
We describe Vivaldi (VIsualization and VALidation DIsplay; http://pdbe.org/vivaldi), a web-based service for the analysis, visualization, and validation of NMR structures in the Protein Data Bank (PDB). Vivaldi provides access to model coordinates and several types of experimental NMR data using interactive visualization tools, augmented with structural annotations and model-validation information. The service presents information about the modeled NMR ensemble, validation of experimental chemical shifts, residual dipolar couplings, distance and dihedral angle constraints, as well as validation scores based on empirical knowledge and databases. Vivaldi was designed for both expert NMR spectroscopists and casual non-expert users who wish to obtain a better grasp of the information content and quality of NMR structures in the public archive.
Maximum likelihood-based analysis of single-molecule photon arrival trajectories.
Hajdziona, Marta; Molski, Andrzej
2011-02-07
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10^3 photons. When the intensity levels are well-separated and 10^4 photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
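The BIC-based selection step is simple to state: given the maximized log-likelihood L and parameter count p of each competing k-state model fitted to n photons, pick the model minimizing BIC = -2 ln L + p ln n. The sketch below applies this rule; the log-likelihood values and parameter counts are invented purely for illustration.

```python
# Sketch: BIC model selection among competing k-state kinetic models.
import numpy as np

def bic(loglik, n_params, n_obs):
    return -2.0 * loglik + n_params * np.log(n_obs)

n_photons = 2000
# Hypothetical maximized log-likelihoods for 2-, 3-, 4-state models:
candidates = {"2-state": (-10450.0, 4), "3-state": (-10380.0, 9),
              "4-state": (-10377.0, 16)}
scores = {k: bic(ll, p, n_photons) for k, (ll, p) in candidates.items()}
print(min(scores, key=scores.get))   # the 3-state model wins here: the
# 4-state fit barely improves the likelihood but pays a larger penalty.
```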
A novel information cascade model in online social networks
NASA Astrophysics Data System (ADS)
Tong, Chao; He, Wenbo; Niu, Jianwei; Xie, Zhongyu
2016-02-01
The spread and diffusion of information has become one of the hot issues in today's social network analysis. To analyze the spread of online social network information and the attribute of cascade, in this paper, we discuss the spread of two kinds of users' decisions for city-wide activities, namely the "want to take part in the activity" and "be interested in the activity", based on the users' attention in "DouBan" and the data of the city-wide activities. We analyze the characteristics of the activity-decision's spread in these aspects: the scale and scope of the cascade subgraph, the structure characteristic of the cascade subgraph, the topological attribute of spread tree, and the occurrence frequency of cascade subgraph. On this basis, we propose a new information spread model. Based on the classical independent diffusion model, we introduce three mechanisms, equal probability, similarity of nodes, and popularity of nodes, which can generate and affect the spread of information. Besides, by conducting the experiments in six different kinds of network data set, we compare the effects of three mechanisms above mentioned, totally six specific factors, on the spread of information, and put forward that the node's popularity plays an important role in the information spread.
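An independent-cascade sketch can make the three mechanisms concrete: each activation attempt mixes a base (equal) probability with terms for node similarity and target-node popularity (degree). The mixing weights, toy graph and similarity function below are assumptions for illustration, not the paper's calibrated model.

```python
# Sketch: independent cascade with equal-probability, similarity and
# popularity contributions to the activation probability.
import random

def cascade(adj, seeds, similarity, p0=0.05, a=0.5, b=0.5, seed=7):
    """adj: {node: set(neighbors)}; similarity: f(u, v) in [0, 1]."""
    rng = random.Random(seed)
    max_deg = max(len(v) for v in adj.values())
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v in adj[u] - active:
            popularity = len(adj[v]) / max_deg          # degree-based term
            p = p0 + a * similarity(u, v) * p0 + b * popularity * p0
            if rng.random() < p:
                active.add(v)
                frontier.append(v)
    return active

adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1}, 3: {1}}      # toy network
print(len(cascade(adj, seeds={0}, similarity=lambda u, v: 0.5)))
```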
A spread willingness computing-based information dissemination model.
Huang, Haojing; Cui, Zhiming; Zhang, Shukui
2014-01-01
This paper constructs a spread-willingness-computing-based information dissemination model for social networks. The model takes into account the impact of node degree and the dissemination mechanism, combined with complex network theory and the dynamics of infectious diseases, and further establishes dynamical evolution equations. The equations characterize the evolutionary relationship between different types of nodes over time. The spread willingness computation contains three factors that affect a user's spread behavior: the strength of the relationship between nodes, identity of views, and frequency of contact. Simulation results show that nodes of different degrees show the same trend in the network, and even if the degree of a node is very small, a large area of information dissemination is still likely. The weaker the relationship between nodes, the higher the probability of view selection and the higher the frequency of contact with information, so that information spreads rapidly and leads to a wide range of dissemination. As the dissemination probability and immune probability change, the speed of information dissemination changes accordingly. The study reflects social networking features and can help to master the behavior of users and to understand and analyze the characteristics of information dissemination in social networks.
Validating EHR clinical models using ontology patterns.
Martínez-Costa, Catalina; Schulz, Stefan
2017-12-01
Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation.
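A small SHACL validation can illustrate the mechanism. The paper used a Java-based SHACL implementation; the sketch below instead uses the Python pySHACL library, and the ex:Observation class and ex:code property are hypothetical stand-ins for a real clinical-model pattern. The shape constrains the cardinality of ex:code to exactly one, so an instance missing it fails validation.

```python
# Sketch: validating an RDF instance against a SHACL shape with pySHACL.
from rdflib import Graph
from pyshacl import validate

shapes = Graph().parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .
ex:ObservationShape a sh:NodeShape ;
    sh:targetClass ex:Observation ;
    sh:property [ sh:path ex:code ; sh:minCount 1 ; sh:maxCount 1 ] .
""", format="turtle")

data = Graph().parse(data="""
@prefix ex: <http://example.org/> .
ex:obs1 a ex:Observation .        # missing ex:code -> violation
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)      # False: the cardinality constraint flags the missing code
print(report)
```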
NASA Astrophysics Data System (ADS)
Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret
2003-12-01
A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.
Carbonatites of the World, Explored Deposits of Nb and REE - Database and Grade and Tonnage Models
Berger, Vladimir I.; Singer, Donald A.; Orris, Greta J.
2009-01-01
This report is based on published tonnage and grade data on 58 Nb- and rare-earth-element (REE)-bearing carbonatite deposits that are mostly well explored and are partially mined or contain resources of these elements. The deposits represent only a part of the known 527 carbonatites around the world, but they are characterized by reliable quantitative data on ore tonnages and grades of niobium and REE. Grade and tonnage models are an important component of mineral resource assessments. Carbonatites present one of the main natural sources of niobium and rare-earth elements, the economic importance of which grows consistently. A purpose of this report is to update earlier publications. New information about known deposits, as well as data on new deposits published during the last decade, are incorporated in the present paper. The compiled database (appendix 1) contains 60 explored Nb- and REE-bearing carbonatite deposits - resources of 55 of these deposits are taken from publications. In the present updated grade-tonnage model we have added 24 deposits compared with the previous model of Singer (1998). Resources of most deposits are residuum ores in the upper part of carbonatite bodies. Mineral-deposit models are important in exploration planning and quantitative resource assessments for two reasons: (1) grades and tonnages among deposit types vary significantly, and (2) deposits of different types are present in distinct geologic settings that can be identified from geologic maps. Mineral-deposit models combine the diverse geoscience information on geology, mineral occurrences, geophysics, and geochemistry used in resource assessments and mineral exploration. Globally based deposit models allow recognition of important features and demonstrate how common different features are. Well-designed deposit models allow geologists to deduce possible mineral-deposit types in a given geologic environment, and the grade and tonnage models allow economists to estimate the possible economic viability of these resources. Thus, mineral-deposit models play a central role in presenting geoscience information in a useful form to policy makers. The foundation of mineral-deposit models is information about known deposits. This publication presents the latest geologic information and newly developed grade and tonnage models for Nb- and REE-carbonatite deposits in digital form. The publication contains computer files with information on deposits from around the world. It also contains a text file allowing locations of all deposits to be plotted in geographic information system (GIS) programs. The data are presented in FileMaker Pro as well as in .xls and text files to make the information available to a broadly based audience. The value of this information and any derived analyses depends critically on the consistent manner of data gathering. For this reason, we first discuss the rules used in this compilation. Next, the fields of the database are explained. Finally, we provide new grade and tonnage models and analysis of the information in the file.
Knowledge representation to support reasoning based on multiple models
NASA Technical Reports Server (NTRS)
Gillam, April; Seidel, Jorge P.; Parker, Alice C.
1990-01-01
Model Based Reasoning is a powerful tool used to design and analyze systems, which are often composed of numerous interactive, interrelated subsystems. Models of the subsystems are written independently and may be used together while they are still under development. Thus the models are not static. They evolve as information becomes obsolete, as improved artifact descriptions are developed, and as system capabilities change. Researchers are using three methods to support knowledge/data base growth, to track the model evolution, and to handle knowledge from diverse domains. First, the representation methodology is based on having pools, or types, of knowledge from which each model is constructed. In addition, information is explicit. This includes the interactions between components, the description of the artifact structure, and the constraints and limitations of the models. The third principle we have followed is the separation of the data and knowledge from the inferencing and equation-solving mechanisms. This methodology is used in two distinct knowledge-based systems: one for the design of space systems and another for the synthesis of VLSI circuits. It has facilitated the growth and evolution of our models, made accountability of results explicit, and provided credibility for the user community. These capabilities have been implemented and are being used in actual design projects.
A Probabilistic Model of Social Working Memory for Information Retrieval in Social Interactions.
Li, Liyuan; Xu, Qianli; Gan, Tian; Tan, Cheston; Lim, Joo-Hwee
2018-05-01
Social working memory (SWM) plays an important role in navigating social interactions. Inspired by studies in psychology, neuroscience, cognitive science, and machine learning, we propose a probabilistic model of SWM to mimic human social intelligence for personal information retrieval (IR) in social interactions. First, we establish a semantic hierarchy as social long-term memory to encode personal information. Next, we propose a semantic Bayesian network as the SWM, which integrates the cognitive functions of accessibility and self-regulation. One subgraphical model implements the accessibility function to learn the social consensus about IR based on social information concepts, clustering, social context, and similarity between persons. Beyond accessibility, one more layer is added to simulate the function of self-regulation, which adapts the consensus to the individual based on human personality. Two learning algorithms are proposed to train the probabilistic SWM model on a raw dataset of high uncertainty and incompleteness. One is an efficient learning algorithm based on Newton's method, and the other is a genetic algorithm. Systematic evaluations show that the proposed SWM model is able to learn human social intelligence effectively and outperforms the baseline Bayesian cognitive model. Toward real-world applications, we implement our model on Google Glass as a wearable assistant for social interaction.
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Banfi, F.; Brumana, R.; Oreni, D.; Previtali, M.; Roncoroni, F.
2015-08-01
This paper describes a procedure for the generation of a detailed HBIM which is then turned into a model for mobile apps based on augmented and virtual reality. Starting from laser point clouds, photogrammetric data and additional information, a geometric reconstruction with a high level of detail can be carried out by considering the basic requirements of BIM projects (parametric modelling, object relations, attributes). The work aims at demonstrating that a complex HBIM can be managed in portable devices to extract useful information not only for expert operators, but also towards a wider user community interested in cultural tourism.
NASA Technical Reports Server (NTRS)
Callender, E. D.; Farny, A. M.
1983-01-01
Problem Statement Language/Problem Statement Analyzer (PSL/PSA) applications, which were once a one-step process in which product system information was immediately translated into PSL statements, have in light of experience been shown to result in inconsistent representations. These shortcomings have prompted the development of an intermediate step, designated the Product System Information Model (PSIM), which provides a basis for the mutual understanding of customer terminology and the formal, conceptual representation of that product system in a PSA data base. The PSIM is initially captured as a paper diagram, followed by formal capture in the PSL/PSA data base.
Linking 1D coastal ocean modelling to environmental management: an ensemble approach
NASA Astrophysics Data System (ADS)
Mussap, Giulia; Zavatarelli, Marco; Pinardi, Nadia
2017-12-01
The use of a one-dimensional interdisciplinary numerical model of the coastal ocean as a tool contributing to the formulation of ecosystem-based management (EBM) is explored. The focus is on the definition of an experimental design based on ensemble simulations, integrating variability linked to scenarios (characterised by changes in the system forcing) and to the concurrent variation of selected, and poorly constrained, model parameters. The modelling system used was previously designed specifically for use in "data-rich" areas, so that horizontal dynamics can be resolved by a diagnostic approach and external inputs can be parameterised by properly calibrated nudging schemes. Ensembles determined by changes in the simulated environmental (physical and biogeochemical) dynamics, under joint forcing and parameterisation variations, highlight the uncertainties associated with the application of specific scenarios that are relevant to EBM, providing an assessment of the reliability of the predicted changes. The work has been carried out by implementing the coupled modelling system BFM-POM1D in an area of the Gulf of Trieste (northern Adriatic Sea) considered homogeneous from the point of view of hydrological properties, and forcing it by changing climatic (warming) and anthropogenic (reduction of the land-based nutrient input) pressure. Model parameters affected by considerable uncertainties (due to the lack of relevant observations) were varied jointly with the scenarios of change. The resulting large set of ensemble simulations provided a general estimation of the model uncertainties related to the joint variation of pressures and model parameters. The variability of the model results is intended to convey, efficiently and comprehensibly, information on the uncertainty/reliability of the model results to non-technical EBM planners and stakeholders, so that model-based information can contribute effectively to EBM.
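A minimal sketch of the ensemble design described above: scenarios of changed forcing are crossed with joint perturbations of poorly constrained parameters, and the spread of model output across members estimates the uncertainty. The toy response function below merely stands in for the BFM-POM1D coupled model; all names and ranges are illustrative assumptions.

```python
import itertools
import numpy as np

rng = np.random.default_rng(42)

# Scenario axis: warming (deg C) x land-based nutrient-load reduction (fraction).
scenarios = list(itertools.product([0.0, 1.0, 2.0], [0.0, 0.25, 0.5]))

def toy_ecosystem_model(warming, load_cut, growth_rate, mortality):
    """Placeholder for BFM-POM1D: returns a scalar 'mean plankton biomass'."""
    return (growth_rate * (1 + 0.1 * warming) * (1 - load_cut)) / mortality

n_members = 50
for warming, load_cut in scenarios:
    # Jointly perturb two poorly constrained parameters per ensemble member.
    growth = rng.uniform(0.8, 1.2, n_members)
    mort = rng.uniform(0.05, 0.15, n_members)
    out = toy_ecosystem_model(warming, load_cut, growth, mort)
    print(f"warming={warming}, load cut={load_cut}: "
          f"{out.mean():.1f} +/- {out.std():.1f}")
```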
Research on Crowdsourcing Emergency Information Extraction of Based on Events' Frame
NASA Astrophysics Data System (ADS)
Yang, Bo; Wang, Jizhou; Ma, Weijun; Mao, Xi
2018-01-01
At present, common information extraction methods cannot accurately extract structured emergency event information, general information retrieval tools cannot completely identify emergency geographic information, and neither approach provides an accurate assessment of its extraction results. This paper therefore proposes an emergency information extraction technology based on an event frame, intended to solve the problem of emergency information extraction. It mainly includes an emergency information extraction model (EIEM), a complete address recognition method (CARM) and an accuracy evaluation model of emergency information (AEMEI). EIEM extracts emergency information in a structured way and compensates for the lack of network data acquisition in emergency mapping. CARM uses a hierarchical model and a shortest-path algorithm, allowing toponym pieces to be joined into a full address. AEMEI analyzes the results for an emergency event and summarizes the advantages and disadvantages of the event frame. Experiments show that event frame technology can solve the problem of emergency information extraction and provides reference cases for other applications. When an emergency disaster is about to occur, the relevant departments can query data on emergencies that occurred in the past and make arrangements in advance for disaster defense and mitigation. The technology can decrease casualties and property damage, which is of great significance to the state and society.
The Integrated Compliance Information System (ICIS) is a web-based system that provides information for the federal enforcement and compliance (FE&C) and the National Pollutant Discharge Elimination System (NPDES) programs.
Microcomputer pollution model for civilian airports and Air Force bases. Model description
DOE Office of Scientific and Technical Information (OSTI.GOV)
Segal, H.M.; Hamilton, P.L.
1988-08-01
This is one of three reports describing the Emissions and Dispersion Modeling System (EDMS). EDMS is a complex source emissions/dispersion model for use at civilian airports and Air Force bases. It operates in both a refined and a screening mode and is programmed for an IBM-XT (or compatible) computer. This report--MODEL DESCRIPTION--provides the technical description of the model. It first identifies the key design features of both the emissions (EMISSMOD) and dispersion (GIMM) portions of EDMS. It then describes the type of meteorological information the dispersion model can accept and identifies the manner in which it preprocesses National Climatic Center (NCC) data prior to a refined-model run. The report presents the results of running EDMS on a number of different microcomputers and compares EDMS results with those of comparable models. The appendices elaborate on the information noted above and list the source code.
More than Anecdotes: Fishers' Ecological Knowledge Can Fill Gaps for Ecosystem Modeling.
Bevilacqua, Ana Helena V; Carvalho, Adriana R; Angelini, Ronaldo; Christensen, Villy
2016-01-01
Ecosystem modeling applied to fisheries remains hampered by a lack of local information. Fishers' knowledge could fill this gap, improving participation in and the management of fisheries. The same fishing area was modeled using two approaches: one based on fishers' knowledge and one based on scientific information. For the former, the data were collected by interviews using the Delphi methodology; for the latter, the data were gathered from the literature. Agreement between the attributes generated by the fishers' knowledge model and the scientific model is discussed and explored, aiming to improve data availability, the ecosystem model, and fisheries management. The ecosystem attributes produced by the fishers' knowledge model were consistent with those produced by the scientific model elaborated using only scientific data from the literature. This study provides evidence that fishers' knowledge may suitably complement scientific data, and may improve the modeling tools for the research and management of fisheries.
ERIC Educational Resources Information Center
Marco, Francisco Javier Garcia; Pinto, Maria
2010-01-01
Introduction: A model to explore the relations among local and global relevance-based information behaviour is proposed that is based on objective and subjective measures of the relevance of the Website contents. Method: Global interest for the Website was researched using data on visits, while local use was explored with two surveys on the…
NASA Astrophysics Data System (ADS)
Anderson, O. Roger
The rate of information processing during science learning and the efficiency of the learner in mobilizing relevant information in long-term memory as an aid in transmitting newly acquired information to stable storage in long-term memory are fundamental aspects of science content acquisition. These cognitive processes, moreover, may be substantially related in tempo and quality of organization to the efficiency of higher thought processes such as divergent thinking and problem-solving ability that characterize scientific thought. As a contribution to our quantitative understanding of these fundamental information processes, a mathematical model of information acquisition is presented and empirically evaluated in comparison to evidence obtained from experimental studies of science content acquisition. Computer-based models are used to simulate variations in learning parameters and to generate the theoretical predictions to be empirically tested. The initial tests of the predictive accuracy of the model show close agreement between predicted and actual mean recall scores in short-term learning tasks. Implications of the model for human information acquisition and possible future research are discussed in the context of the unique theoretical framework of the model.
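The abstract does not reproduce the model's equations. A common minimal form for such acquisition models is a saturating exponential whose rate constant reflects processing efficiency; the sketch below fits that hypothetical form to synthetic recall scores and is not the author's actual model.

```python
import numpy as np
from scipy.optimize import curve_fit

def acquisition(t, a_max, k):
    """Saturating-exponential acquisition: recall approaches a_max at rate k."""
    return a_max * (1 - np.exp(-k * t))

# Synthetic mean recall scores after t minutes of study (placeholder data).
t = np.array([1, 2, 4, 8, 16], dtype=float)
recall = np.array([3.1, 5.8, 9.5, 13.0, 14.6])

(a_max, k), _ = curve_fit(acquisition, t, recall, p0=(15.0, 0.2))
print(f"asymptotic recall = {a_max:.1f}, acquisition rate = {k:.2f}/min")
print("predicted recall:", np.round(acquisition(t, a_max, k), 1))
```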
Shen, Chung-Wei; Chen, Yi-Hau
2018-03-13
We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called the Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
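A rough sketch of the within-cluster resampling idea behind RCIC (not the authors' exact criterion): repeatedly draw one observation per cluster so the subsample is free of cluster-size effects, fit the candidate model to each subsample, and penalize models whose fit or estimates are unstable across subsamples. The details below are simplifying assumptions, with ordinary least squares standing in for GEE.

```python
import numpy as np

rng = np.random.default_rng(0)

def rcic_like_score(clusters, design_cols, n_resamples=200):
    """clusters: list of (X, y) per subject; design_cols: candidate covariates."""
    sses, betas = [], []
    for _ in range(n_resamples):
        # Within-cluster resampling: one observation per cluster per draw,
        # which removes the dependence of the subsample on cluster size.
        rows = [(X[i], y[i]) for X, y in clusters
                for i in [rng.integers(len(y))]]
        X = np.array([r[0] for r in rows])[:, design_cols]
        y = np.array([r[1] for r in rows])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        betas.append(beta)
        sses.append(((y - X @ beta) ** 2).mean())
    # Penalize models whose estimates vary strongly over subsamples.
    return np.mean(sses) + np.trace(np.cov(np.array(betas).T, ddof=1))

# Example: 30 clusters of varying size, intercept + 2 covariates.
clusters = []
for _ in range(30):
    n_i = rng.integers(2, 6)
    X = np.column_stack([np.ones(n_i), rng.normal(size=(n_i, 2))])
    y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=n_i)
    clusters.append((X, y))

for cols in ([0, 1], [0, 1, 2]):
    print(cols, round(rcic_like_score(clusters, cols), 3))
```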
Applying Model Analysis to a Resource-Based Analysis of the Force and Motion Conceptual Evaluation
ERIC Educational Resources Information Center
Smith, Trevor I.; Wittmann, Michael C.; Carter, Tom
2014-01-01
Previously, we analyzed the Force and Motion Conceptual Evaluation in terms of a resources-based model that allows for clustering of questions so as to provide useful information on how students correctly or incorrectly reason about physics. In this paper, we apply model analysis to show that the associated model plots provide more information…
Integrated Modeling for Watershed Ecosystem Services Assessment and Forecasting
Regional scale watershed management decisions must be informed by the science-based relationship between anthropogenic activities on the landscape and the change in ecosystem structure, function, and services that occur as a result. We applied process-based models that represent...
Literature-Based Scientific Learning: A Collaboration Model
ERIC Educational Resources Information Center
Elrod, Susan L.; Somerville, Mary M.
2007-01-01
Amidst exponential growth of knowledge, student insights into the knowledge creation practices of the scientific community can be furthered by science faculty collaborations with university librarians. The Literature-Based Scientific Learning model advances undergraduates' disciplinary mastery and information literacy through experience with…
ACCLAIM: A Model for Leading the Community.
ERIC Educational Resources Information Center
Vaughan, George B.; Gillett-Karam, Rosemary
1993-01-01
Advocates an approach to community college leadership based on community-based programming. Describes North Carolina State University's Academy for Community College Leadership Advancement, Innovation, and Modeling (ACCLAIM) and its components (i.e., continuing education, fellows program, information development/dissemination, and university…
The AgESGUI geospatial simulation system for environmental model application and evaluation
USDA-ARS?s Scientific Manuscript database
Practical decision making in spatially-distributed environmental assessment and management is increasingly being based on environmental process-based models linked to geographical information systems (GIS). Furthermore, powerful computers and Internet-accessible assessment tools are providing much g...
Model-based vision for space applications
NASA Technical Reports Server (NTRS)
Chaconas, Karen; Nashman, Marilyn; Lumia, Ronald
1992-01-01
This paper describes a method for tracking moving image features by combining spatial and temporal edge information with model-based feature information. The algorithm updates the two-dimensional position of object features by correlating predicted model features with current image data. The results of the correlation process are used to compute an updated model. The algorithm makes use of a high temporal sampling rate with respect to spatial changes of the image features and operates in a real-time multiprocessing environment. Preliminary results demonstrate successful tracking for image feature velocities between 1.1 and 4.5 pixels per image frame. This work has applications for docking, assembly, retrieval of floating objects, and a host of other space-related tasks.
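A minimal sketch of the predict-correlate-update loop described above: a constant-velocity prediction for each feature, correlation reduced here to a nearest-measurement match, and a blended update of position and velocity. The actual edge-correlation step is beyond an abstract-level sketch, and the gain is an assumption.

```python
import numpy as np

def track(measurements, x0, v0, gain=0.5):
    """measurements: per-frame arrays of candidate 2D feature positions."""
    x, v = np.asarray(x0, float), np.asarray(v0, float)
    for frame in measurements:
        pred = x + v                      # predict from the model (constant velocity)
        # 'Correlate': pick the candidate closest to the predicted position.
        best = frame[np.argmin(np.linalg.norm(frame - pred, axis=1))]
        v = v + gain * (best - pred)      # update the model from the residual
        x = pred + gain * (best - pred)
    return x, v

frames = [np.array([[10.2, 5.1], [40.0, 2.0]]),
          np.array([[11.9, 6.0], [41.0, 2.2]]),
          np.array([[14.1, 7.2]])]
print(track(frames, x0=[9.0, 4.0], v0=[2.0, 1.0]))
```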
Apostol, Izydor; Kelner, Drew; Jiang, Xinzhao Grace; Huang, Gang; Wypych, Jette; Zhang, Xin; Gastwirt, Jessica; Chen, Kenneth; Fodor, Szilan; Hapuarachchi, Suminda; Meriage, Dave; Ye, Frank; Poppe, Leszek; Szpankowski, Wojciech
2012-12-01
Our aim was to predict precision and other performance characteristics of chromatographic purity methods, which represent the most widely used form of analysis in the biopharmaceutical industry. We have conducted a comprehensive survey of purity methods, and show that all performance characteristics fall within narrow measurement ranges. This observation was used to develop a model called Uncertainty Based on Current Information (UBCI), which expresses these performance characteristics as a function of the signal and noise levels, hardware specifications, and software settings. We applied the UBCI model to assess the uncertainty of purity measurements, and compared the results to those from conventional qualification. We demonstrated that the UBCI model is suitable to dynamically assess method performance characteristics, based on information extracted from individual chromatograms. The model provides an opportunity for streamlining qualification and validation studies by implementing a "live validation" of test results utilizing UBCI as a concurrent assessment of measurement uncertainty. Therefore, UBCI can potentially mitigate the challenges associated with laborious conventional method validation and facilitates the introduction of more advanced analytical technologies during the method lifecycle.
Information Quality Evaluation of C2 Systems at Architecture Level
2014-06-01
Capability evaluation of C2 systems at the architecture level is necessary and important for improving system capability at the architecture design stage. This paper proposes a method for information quality evaluation of C2 systems at the architecture level, based on architecture models of C2 systems, which can help to identify key factors impacting information quality and improve system capability at the architecture design stage. First, the information quality model is...
A Hybrid 3D Indoor Space Model
NASA Astrophysics Data System (ADS)
Jamali, Ali; Rahman, Alias Abdul; Boguslawski, Pawel
2016-10-01
GIS integrates spatial information and spatial analysis. An important example of such integration is emergency response, which requires route planning inside and outside of a building. Route planning requires detailed information related to the indoor and outdoor environment. Indoor navigation network models, including the Geometric Network Model (GNM), the Navigable Space Model, the sub-division model and the regular-grid model, lack indoor data sources and abstraction methods. In this paper, a hybrid indoor space model is proposed. In the proposed method, 3D modeling of the indoor navigation network is based on surveying control points and is less dependent on the 3D geometrical building model. This research proposes a method of indoor space modeling for buildings that do not have proper 2D/3D geometrical models or that lack semantic or topological information. The proposed hybrid model consists of topological, geometrical and semantic spaces.
Web-based multimedia information retrieval for clinical application research
NASA Astrophysics Data System (ADS)
Cao, Xinhua; Hoo, Kent S., Jr.; Zhang, Hong; Ching, Wan; Zhang, Ming; Wong, Stephen T. C.
2001-08-01
We described a web-based data warehousing method for retrieving and analyzing neurological multimedia information. The web-based method supports convenient access, effective search and retrieval of clinical textual and image data, and on-line analysis. To improve the flexibility and efficiency of multimedia information query and analysis, a three-tier, multimedia data warehouse for epilepsy research has been built. The data warehouse integrates clinical multimedia data related to epilepsy from disparate sources and archives them into a well-defined data model.
A test of an expert-based bird-habitat relationship model in South Carolina
John C. Kilgo; David L. Gartner; Brian R. Chapman; John B. Dunnin; Kathleen E. Franzreb; Sidney A. Gauthreaux; Cathryn H. Greenberg; Douglas J. Levey; Karl V. Miller; Scott F. Pearson
2002-01-01
Wildlife-habitat relationships models are used widely by land managers to provide information on which species are likely to occur in an area of interest and may be impacted by a proposed management activity. Few such models have been tested. We used recent avian census data from the Savannah River Site, South Carolina to validate BIRDHAB, a geographic information...
Computational Model of Primary Visual Cortex Combining Visual Attention for Action Recognition
Shu, Na; Gao, Zhiyong; Chen, Xiangan; Liu, Haihua
2015-01-01
Humans can easily understand other people’s actions through visual systems, while computers cannot. Therefore, a new bio-inspired computational model is proposed in this paper aiming for automatic action recognition. The model focuses on dynamic properties of neurons and neural networks in the primary visual cortex (V1), and simulates the procedure of information processing in V1, which consists of visual perception, visual attention and representation of human action. In our model, a family of the three-dimensional spatial-temporal correlative Gabor filters is used to model the dynamic properties of the classical receptive field of V1 simple cell tuned to different speeds and orientations in time for detection of spatiotemporal information from video sequences. Based on the inhibitory effect of stimuli outside the classical receptive field caused by lateral connections of spiking neuron networks in V1, we propose surround suppressive operator to further process spatiotemporal information. Visual attention model based on perceptual grouping is integrated into our model to filter and group different regions. Moreover, in order to represent the human action, we consider the characteristic of the neural code: mean motion map based on analysis of spike trains generated by spiking neurons. The experimental evaluation on some publicly available action datasets and comparison with the state-of-the-art approaches demonstrate the superior performance of the proposed model. PMID:26132270
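As a rough illustration of the three-dimensional spatial-temporal Gabor filters used to model V1 simple-cell receptive fields tuned to different speeds and orientations, the sketch below builds a small filter bank; all parameter choices are illustrative, not the paper's.

```python
import numpy as np

def gabor_3d(size=15, frames=7, orientation=0.0, speed=1.0,
             wavelength=6.0, sigma=3.0, tau=2.0):
    """Spatiotemporal Gabor: a drifting oriented grating under a Gaussian."""
    r = np.arange(size) - size // 2
    t = np.arange(frames) - frames // 2
    T, Y, X = np.meshgrid(t, r, r, indexing="ij")
    # Rotate spatial coordinates to the preferred orientation.
    Xr = X * np.cos(orientation) + Y * np.sin(orientation)
    # The carrier drifts along Xr at the preferred speed.
    carrier = np.cos(2 * np.pi * (Xr - speed * T) / wavelength)
    envelope = np.exp(-(X**2 + Y**2) / (2 * sigma**2) - T**2 / (2 * tau**2))
    return carrier * envelope

bank = [gabor_3d(orientation=o, speed=s)
        for o in np.linspace(0, np.pi, 4, endpoint=False)
        for s in (0.5, 1.0, 2.0)]
print(len(bank), bank[0].shape)   # 12 filters, each (frames, size, size)
```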
Reasoning over genetic variance information in cause-and-effect models of neurodegenerative diseases
Naz, Mufassra; Kodamullil, Alpha Tom
2016-01-01
The work we present here is based on the recent extension of the syntax of the Biological Expression Language (BEL), which now allows for the representation of genetic variation information in cause-and-effect models. In our article, we describe, how genetic variation information can be used to identify candidate disease mechanisms in diseases with complex aetiology such as Alzheimer’s disease and Parkinson’s disease. In those diseases, we have to assume that many genetic variants contribute moderately to the overall dysregulation that in the case of neurodegenerative diseases has such a long incubation time until the first clinical symptoms are detectable. Owing to the multilevel nature of dysregulation events, systems biomedicine modelling approaches need to combine mechanistic information from various levels, including gene expression, microRNA (miRNA) expression, protein–protein interaction, genetic variation and pathway. OpenBEL, the open source version of BEL, has recently been extended to match this requirement, and we demonstrate in our article, how candidate mechanisms for early dysregulation events in Alzheimer’s disease can be identified based on an integrative mining approach that identifies ‘chains of causation’ that include single nucleotide polymorphism information in BEL models. PMID:26249223
An Information Perception-Based Emotion Contagion Model for Fire Evacuation
NASA Astrophysics Data System (ADS)
Liu, Ting Ting; Liu, Zhen; Ma, Minhua; Xuan, Rongrong; Chen, Tian; Lu, Tao; Yu, Lipeng
2017-03-01
In fires, people easily lose their presence of mind; panic leads to irrational behavior and irreparable tragedy. Making contingency plans for crowd evacuation in fires therefore has great practical significance. However, existing studies of crowd simulation have paid much attention to crowd density but little to the emotional contagion that may cause a panic. Based on assumptions about information space and information sharing, this paper proposes an emotional contagion model for crowds in panic situations. With the proposed model, a behavior mechanism is constructed for agents in the crowd and a prototype system is developed for crowd simulation. Experiments are carried out to verify the proposed model. The results showed that the spread of panic is related not only to the crowd density and the individual comfort level, but also to people's prior knowledge of fire evacuation. The model provides a new way for safety education and evacuation management. It makes it possible to avoid and reduce unsafe factors in the crowd at the lowest cost.
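A minimal agent-based sketch of an emotion contagion mechanism of the kind described: an agent's panic level is pulled toward the mean panic of neighbours inside its information space, attenuated by prior evacuation knowledge. The functional forms and constants are illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
pos = rng.uniform(0, 50, size=(n, 2))        # agent positions (m)
panic = rng.uniform(0, 0.2, size=n)          # initial panic in [0, 1]
knowledge = rng.uniform(0, 1, size=n)        # prior evacuation knowledge

RADIUS, COUPLING, DECAY = 5.0, 0.3, 0.05

for step in range(100):
    d = np.linalg.norm(pos[:, None] - pos[None, :], axis=2)
    neighbours = (d < RADIUS) & (d > 0)      # shared information space
    deg = np.maximum(neighbours.sum(1), 1)   # avoid division by zero
    local = np.where(neighbours.any(1),
                     (neighbours * panic).sum(1) / deg, panic)
    # Contagion toward local mean panic, damped by knowledge, plus decay.
    panic += COUPLING * (1 - knowledge) * (local - panic) - DECAY * panic
    panic = np.clip(panic, 0, 1)

print(f"mean panic after contagion: {panic.mean():.2f}")
```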
Critical success factors for achieving superior m-health success.
Dwivedi, A; Wickramasinghe, N; Bali, R K; Naguib, R N G
2007-01-01
Recent healthcare trends clearly show significant investment by healthcare institutions into various types of wired and wireless technologies to facilitate and support superior healthcare delivery. This trend has been spurred by the shift in the concept and growing importance of the role of health information and the influence of fields such as bio-informatics, biomedical and genetic engineering. The demand is currently for integrated healthcare information systems; however for such initiatives to be successful it is necessary to adopt a macro model and appropriate methodology with respect to wireless initiatives. The key contribution of this paper is the presentation of one such integrative model for mobile health (m-health) known as the Wi-INET Business Model, along with a detailed Adaptive Mapping to Realisation (AMR) methodology. The AMR methodology details how the Wi-INET Business Model can be implemented. Further validation on the concepts detailed in the Wi-INET Business Model and the AMR methodology is offered via a short vignette on a toolkit based on a leading UK-based healthcare information technology solution.
A study on spatial decision support systems for HIV/AIDS prevention based on COM GIS technology
NASA Astrophysics Data System (ADS)
Yang, Kun; Luo, Huasong; Peng, Shungyun; Xu, Quanli
2007-06-01
Based on an in-depth analysis of the current status and existing problems of GIS technology applications in epidemiology, this paper proposes a method and process for establishing a spatial decision support system for AIDS epidemic prevention by integrating COM GIS, spatial database, GPS, remote sensing, and communication technologies, as well as ASP and ActiveX software development technologies. One of the most important issues in constructing such a system is how to integrate AIDS spreading models with GIS. The paper first describes the capabilities of GIS applications in AIDS epidemic prevention, and then discusses some mature epidemic spreading models from which computation parameters can be extracted. Furthermore, a technical schema is proposed for integrating the AIDS spreading models with GIS and relevant geospatial technologies, in which the GIS and model running platforms share a common spatial database and the computing results can be spatially visualized on desktop or Web GIS clients. Finally, a complete solution for establishing the decision support system for AIDS epidemic prevention is offered, based on the model integration methods and ESRI COM GIS software packages. The overall decision support system is composed of data acquisition sub-systems, network communication sub-systems, model integration sub-systems, AIDS epidemic information spatial database sub-systems, AIDS epidemic information querying and statistical analysis sub-systems, AIDS epidemic dynamic surveillance sub-systems, AIDS epidemic information spatial analysis and decision support sub-systems, as well as AIDS epidemic information publishing sub-systems based on Web GIS.
Ahmadi, Maryam; Ghazisaeidi, Marjan; Bashiri, Azadeh
2015-03-18
For better design of the electronic health record system in Iran, health information systems must be integrated on the basis of a common language so that information can be interpreted and exchanged with that system. This study provides a conceptual model of a radiology reporting system using the Unified Modeling Language. The proposed model can solve the problem of integrating this information system with the electronic health record system. By using this model and designing the system as service-based, a radiology reporting system can connect easily to the electronic health record in Iran and facilitate the transfer of radiology report data. This is a cross-sectional study that was conducted in 2013. The study population was 22 experts working at the imaging center of Imam Khomeini Hospital in Tehran, and the sample coincided with the population. The research tool was a questionnaire prepared by the researcher to determine the information requirements. Content validity and the test-retest method were used to measure the validity and reliability of the questionnaire, respectively. Data were analyzed with an average index using SPSS, and Visual Paradigm software was used to design the conceptual model. Based on the requirements assessment of the experts and related texts, administrative, demographic and clinical data, radiological examination results and, if an anesthesia procedure was performed, anesthesia data were suggested as the minimum data set for the radiology report, and a class diagram was designed on this basis. A use-case diagram was also drawn by identifying the radiology reporting process. Given the role of radiology reports in the electronic health record system for diagnosing and managing patients' clinical problems, providing a conceptual model for the radiology reporting system, and thereby designing it systematically, would eliminate the problem of data sharing between these systems and the electronic health record system.
Liang, Zhaohui; Liu, Jun; Huang, Jimmy X; Zeng, Xing
2018-01-01
The genetic polymorphism of Cytochrome P450 (CYP450) is considered one of the main causes of adverse drug reactions (ADRs). In order to explore the latent correlations between ADRs and potentially corresponding single-nucleotide polymorphisms (SNPs) in CYP450, three algorithms based on information theory are used as the main method to predict the possible relations. The study uses a retrospective case-control design to explore the potential relation of ADRs to specific genomic locations and SNPs. Genomic data collected from 53 healthy volunteers are used for the analysis; another group of genomic data, collected from 30 healthy volunteers excluded from the study, is used as the control group. The SNPs at five loci of CYP2D6*2, *10, *14 and CYP1A2*1C, *1F are detected by the Applied Biosystems 3130xl. The raw data are processed by ChromasPro to detect the specific alleles at the above loci in each sample. The secondary data are reorganized and processed in R, combined with ADR information from clinical reports. Three information theory based algorithms are implemented for the screening task: JMI, CMIM, and mRMR. If a SNP is selected by at least two of the algorithms, we are confident in concluding that it is related to the corresponding ADR. The selection results are compared with a control decision tree + LASSO regression model. In the study group where ADRs occur, 10 SNPs are considered relevant to the occurrence of a specific ADR by the combined information theory model. In comparison, only 5 SNPs are considered relevant to a specific ADR by the decision tree + LASSO regression model. In addition, the new method detects more relevant SNP-ADR pairs that are affected by both SNP and dosage. This implies that the new information theory based model is effective in discovering correlations between ADRs and CYP450 SNPs and is helpful in predicting the genotypes potentially vulnerable to certain ADRs. The newly proposed information theory based model performs better at detecting relations between SNPs and ADRs than the decision tree + LASSO regression model. The new model is more sensitive in detecting ADRs, while the old method is more reliable; the choice of algorithm should therefore depend on pragmatic needs. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
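A rough sketch of the screening-and-voting scheme: score each SNP by three information-theoretic criteria and keep SNPs ranked highly by at least two of them. The JMI and CMIM scores below are computed from joint mutual information by encoding genotype pairs as single discrete variables, and the mRMR score uses mean pairwise redundancy; the data are synthetic and the top-k cutoff is an assumption.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(7)
n, p = 83, 5                              # 53 cases + 30 controls, 5 CYP450 loci
snps = rng.integers(0, 3, size=(n, p))    # genotypes coded 0/1/2
adr = ((snps[:, 1] > 0) ^ (rng.random(n) < 0.2)).astype(int)  # toy ADR label

relevance = np.array([mutual_info_score(adr, snps[:, j]) for j in range(p)])

def joint_mi(i, j):
    # I(X_i, X_j; Y): encode the genotype pair as a single discrete variable.
    return mutual_info_score(adr, snps[:, i] * 3 + snps[:, j])

others = lambda i: [j for j in range(p) if j != i]
jmi = np.array([sum(joint_mi(i, j) for j in others(i)) for i in range(p)])
# CMIM-style score: worst-case conditional MI, I(X_i; Y | X_j) = I(X_i,X_j;Y) - I(X_j;Y).
cmim = np.array([min(joint_mi(i, j) - relevance[j] for j in others(i))
                 for i in range(p)])
# mRMR-style score: relevance minus mean redundancy with the other SNPs.
mrmr = relevance - np.array([np.mean([mutual_info_score(snps[:, i], snps[:, j])
                                      for j in others(i)]) for i in range(p)])

votes = np.zeros(p, int)
for score in (jmi, cmim, mrmr):
    votes[np.argsort(score)[-2:]] += 1    # top-2 SNPs per criterion
print("SNPs selected by at least two criteria:", np.where(votes >= 2)[0])
```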
NASA Astrophysics Data System (ADS)
Sadegh, M.; Vrugt, J. A.
2011-12-01
In the past few years, several contributions have appeared in the hydrologic literature that introduce and analyze the benefits of using a signature-based approach to watershed analysis. This signature-based approach abandons the standard single-criterion model-data fitting paradigm in favor of a diagnostic approach that better extracts the available information from the available data. Despite the prospects of this new viewpoint, rather ad hoc criteria have hitherto been proposed to improve watershed modeling. Here, we aim to provide a proper mathematical foundation for signature-based analysis. We analyze the information content of different data transformations by analyzing their convergence speed with Markov Chain Monte Carlo (MCMC) simulation using the generalized likelihood function of Schoups and Vrugt (2010). We compare the information content of the original discharge data against simple square-root and Box-Cox transformations of the streamflow data. We benchmark these results against wavelet and flow duration curve transformations that temporally disaggregate the discharge data. Our results conclusively demonstrate that wavelet transformations and flow duration curves significantly reduce the information content of the streamflow data and consequently unnecessarily increase the uncertainty of the HYMOD model parameters. Hydrologic signatures thus need to be found in the original data, without temporal disaggregation.
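For reference, the transformations compared can be sketched as follows: square-root and Box-Cox transformations preserve the time ordering of the discharge series, whereas the flow duration curve sorts the flows and thereby discards it. scipy's boxcox estimates the optimal lambda by maximum likelihood; the discharge series below is synthetic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
discharge = np.exp(rng.normal(2.0, 1.0, 365))   # synthetic daily flows (m3/s)

sqrt_q = np.sqrt(discharge)                     # simple variance-stabilizing transform
boxcox_q, lam = stats.boxcox(discharge)         # ML estimate of lambda
print(f"Box-Cox lambda = {lam:.2f}")

# Flow duration curve: sorted flows vs exceedance probability.
# Sorting destroys the temporal signal that calibration could exploit.
fdc_q = np.sort(discharge)[::-1]
exceedance = np.arange(1, len(fdc_q) + 1) / (len(fdc_q) + 1)
print("Q exceeded 10% of the time:",
      round(float(np.interp(0.1, exceedance, fdc_q)), 1))
```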
Staccini, Pascal; Joubert, Michel; Quaranta, Jean-François; Fieschi, Marius
2005-03-01
Today, the economic and regulatory environment, involving activity-based and prospective payment systems, healthcare quality and risk analysis, traceability of the acts performed and evaluation of care practices, accounts for the current interest in clinical and hospital information systems. The structured gathering of information relative to users' needs and system requirements is fundamental when installing such systems. This stage takes time and is generally misconstrued by caregivers and is of limited efficacy to analysts. We used a modelling technique designed for manufacturing processes (IDEF0/SADT). We enhanced the basic model of an activity with descriptors extracted from the Ishikawa cause-and-effect diagram (methods, men, materials, machines, and environment). We proposed an object data model of a process and its components, and programmed a web-based tool in an object-oriented environment. This tool makes it possible to extract the data dictionary of a given process from the description of its elements and to locate documents (procedures, recommendations, instructions) according to each activity or role. Aimed at structuring needs and storing information provided by directly involved teams regarding the workings of an institution (or at least part of it), the process-mapping approach has an important contribution to make in the analysis of clinical information systems.
ERIC Educational Resources Information Center
Chen, Jinshi
2017-01-01
Legal case brief writing is pedagogically important yet insufficiently discussed for Chinese EFL learners majoring in law. Based on process genre approach and discourse information theory (DIT), the present study designs a corpus-based analytical model for Chinese EFL learners' autonomy in legal case brief writing and explores the process of case…
Strategic Help in User Interfaces for Information Retrieval.
ERIC Educational Resources Information Center
Brajnik, Giorgio; Mizzaro, Stefano; Tasso, Carlo; Venuti, Fabio
2002-01-01
Discussion of search strategy in information retrieval by end users focuses on the role played by strategic reasoning and design principles for user interfaces. Highlights include strategic help based on collaborative coaching; a conceptual model for strategic help; and a prototype knowledge-based system named FIRE. (Author/LRW)
2013-02-04
...Produced with the Helsinki University of Technology (TKK) and the Helsinki Institute of Information Technology (HIIT), the report introduced the concept and the state-of-the-art in the market...
How Cognitive Processes Aid Program Understanding.
1985-06-01
Cognitive processes are used in conjunction with a programmer's knowledge base, and categories of information critical to program understanding are identified. Further, the study contends that the effectiveness of these processes is dependent upon the extent of the programmer's knowledge base.
LISPA (Library and Information Center Staff Planning Advisor): A Microcomputer-Based System.
ERIC Educational Resources Information Center
Devadason, F. J.; Vespry, H. A.
1996-01-01
Describes LISPA (Library and Information Center Staff Planning Advisor), a set of programs based on Ranganathan's staff plan model. LISPA particularly aids in planning for library staff requirements, both professional and paraprofessional, in developing countries where automated systems for other library operations are not yet available.…
A geographic information system-based 3D city estate modeling and simulation system
NASA Astrophysics Data System (ADS)
Chong, Xiaoli; Li, Sha
2015-12-01
This paper introduces a 3D city simulation system based on a geographic information system (GIS), covering all commercial housing in the city. A regional-scale, GIS-based approach is used to capture, describe, and track the geographical attributes of each house in the city. A sorting algorithm of "Benchmark + Parity Rate" is developed to cluster houses with similar spatial and construction attributes. The system is applicable to digital city modeling, city planning, housing evaluation, housing monitoring, and visualizing housing transactions. Finally, taking the Jingtian area of Shenzhen as an example, each unit of the 35,997 houses in the area can be displayed, tagged, and easily tracked by the GIS-based city modeling and simulation system. The results match real market conditions well and can be provided to house buyers as a reference.
The Influence of Information Acquisition on the Complex Dynamics of Market Competition
NASA Astrophysics Data System (ADS)
Guo, Zhanbing; Ma, Junhai
In this paper, we build a dynamical game model with three boundedly rational players (firms) to study the influence of information on the complex dynamics of market competition, where the useful information concerns a rival's real decision. In this dynamical game model, two firms form an information-sharing team: they acquire and share information about their common competitor, but make their own decisions separately. The amount of information acquired by the information-sharing team determines the accuracy of its estimate of the rival's real decision. Based on this dynamical game model and some creative 3D diagrams, the influence of the amount of information on the complex dynamics of market competition, such as local dynamics, global dynamics and profits, is studied. These results have significant theoretical and practical value for understanding the influence of information.
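A minimal sketch of a bounded-rationality adjustment dynamic of the kind studied: each firm updates output with a gradient rule, and the two team members replace their common rival's unobserved decision with an estimate whose noise shrinks as the amount of acquired information grows. The linear demand, cost, and accuracy mapping are illustrative assumptions, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(5)
a, c = 10.0, 2.0        # linear inverse demand P = a - Q, constant marginal cost
alpha = 0.1             # speed of adjustment (bounded rationality)
info = 0.8              # in [0, 1]: information level of the sharing team

q = np.array([1.0, 1.2, 0.9])    # outputs of firms 1, 2 (the team) and 3
for t in range(500):
    # Team members estimate firm 3's output; noise shrinks with information.
    q3_est = q[2] + (1 - info) * rng.normal(0, 0.5)
    grads = np.empty(3)
    grads[0] = a - c - 2 * q[0] - q[1] - q3_est   # d(profit_1)/dq_1 under the estimate
    grads[1] = a - c - 2 * q[1] - q[0] - q3_est
    grads[2] = a - c - 2 * q[2] - q[0] - q[1]     # firm 3 observes actual outputs
    q = np.maximum(q + alpha * q * grads, 0)      # gradient-based output adjustment

print("long-run outputs:", np.round(q, 2), "| Cournot benchmark:", (a - c) / 4)
```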
Dankers, Frank; Wijsman, Robin; Troost, Esther G C; Monshouwer, René; Bussink, Johan; Hoffmann, Aswin L
2017-05-07
In our previous work, a multivariable normal-tissue complication probability (NTCP) model for acute esophageal toxicity (AET) Grade ⩾2 after highly conformal (chemo-)radiotherapy for non-small cell lung cancer (NSCLC) was developed using multivariable logistic regression analysis incorporating clinical parameters and mean esophageal dose (MED). Since the esophagus is a tubular organ, spatial information of the esophageal wall dose distribution may be important in predicting AET. We investigated whether the incorporation of esophageal wall dose-surface data with spatial information improves the predictive power of our established NTCP model. For 149 NSCLC patients treated with highly conformal radiation therapy esophageal wall dose-surface histograms (DSHs) and polar dose-surface maps (DSMs) were generated. DSMs were used to generate new DSHs and dose-length-histograms that incorporate spatial information of the dose-surface distribution. From these histograms dose parameters were derived and univariate logistic regression analysis showed that they correlated significantly with AET. Following our previous work, new multivariable NTCP models were developed using the most significant dose histogram parameters based on univariate analysis (19 in total). However, the 19 new models incorporating esophageal wall dose-surface data with spatial information did not show improved predictive performance (area under the curve, AUC range 0.79-0.84) over the established multivariable NTCP model based on conventional dose-volume data (AUC = 0.84). For prediction of AET, based on the proposed multivariable statistical approach, spatial information of the esophageal wall dose distribution is of no added value and it is sufficient to only consider MED as a predictive dosimetric parameter.
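For reference, the multivariable logistic NTCP form referred to above is NTCP = 1/(1 + exp(-(beta_0 + sum_i beta_i x_i))), with the mean esophageal dose among the covariates. A minimal sketch with placeholder coefficients follows; the published fitted values are not reproduced here.

```python
import math

def ntcp_logistic(covariates, coefficients, intercept):
    """Multivariable logistic NTCP; covariates and coefficients are dicts."""
    s = intercept + sum(coefficients[k] * v for k, v in covariates.items())
    return 1.0 / (1.0 + math.exp(-s))

# Placeholder coefficients; not the values fitted in the paper.
beta = {"mean_esophageal_dose_gy": 0.10, "concurrent_chemo": 1.3}
patient = {"mean_esophageal_dose_gy": 28.0, "concurrent_chemo": 1}
print(f"P(AET grade >= 2) = {ntcp_logistic(patient, beta, intercept=-3.0):.2f}")
```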
Comparison of three GIS-based models for predicting rockfall runout zones at a regional scale
NASA Astrophysics Data System (ADS)
Dorren, Luuk K. A.; Seijmonsbergen, Arie C.
2003-11-01
Site-specific information about the level of protection that mountain forests provide is often not available for large regions. Information regarding rockfalls is especially scarce. The most efficient way to obtain information about rockfall activity and the efficacy of protection forests at a regional scale is to use a simulation model. At present, it is still unknown which forest parameters could best be incorporated in such models. Therefore, the purpose of this study was to test and evaluate a model for rockfall assessment at a regional scale in which simple forest stand parameters, such as the number of trees per hectare and the diameter at breast height, are incorporated. To this end, a newly developed Geographical Information System (GIS)-based distributed model is compared with two existing rockfall models. The developed model is the only model that calculates the rockfall velocity on the basis of energy loss due to collisions with trees and on the soil surface. The two existing models calculate energy loss over the distance between two cell centres, while the newly developed model is able to calculate multiple bounces within a pixel. The patterns of rockfall runout zones produced by the three models are compared with patterns of rockfall deposits derived from geomorphological field maps. Furthermore, the rockfall velocities modelled by the three models are compared. It is found that the models produce rockfall runout zone maps with rather similar accuracies. However, the developed model performs best on forested hillslopes, and it also produces velocities that match best with field estimates on both forested and nonforested hillslopes, irrespective of the slope gradient.
NASA Astrophysics Data System (ADS)
Koshkina, S.; Ostrinskaya, L.
2018-04-01
An information model for "key" quality indicators of goods has been developed. The model is based on an assessment of the existing state of standardization and of product labeling quality. In the authors' opinion, the proposed "key" indicators are the most significant for purchasing decisions. Customers will be able to use this model through their mobile devices. The developed model allows existing processes to be decomposed into data flows and reveals the levels of possible architectural solutions. In further research, an in-depth analysis of the decomposition levels of the presented information model will make it possible to determine the stages of its improvement and to reveal additional indicators of goods quality that are of interest to customers. Examining architectural solutions for the functioning of the customer's information environment when integrating existing databases will allow the boundaries of the model's flexibility and customizability to be determined.
Lee, Jaehoon; Hulse, Nathan C; Wood, Grant M; Oniki, Thomas A; Huff, Stanley M
2016-01-01
In this study we developed a Fast Healthcare Interoperability Resources (FHIR) profile to support exchanging full pedigree-based family health history (FHH) information across multiple systems and applications used by clinicians, patients, and researchers. We used previously developed clinical element models (CEMs) that are capable of representing the FHH information, and derived essential data elements including attributes, constraints, and value sets. We analyzed gaps between the FHH CEM elements and existing FHIR resources. Based on the analysis, we developed a profile that consists of 1) FHIR resources for essential FHH data elements, 2) extensions for additional elements that were not covered by the resources, and 3) a structured definition to integrate patient and family member information in a FHIR message. We implemented the profile using an open-source FHIR framework and validated it using patient-entered FHH data that was captured through a locally developed FHH tool.
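A sketch of what such an exchange might look like: a FHIR FamilyMemberHistory resource (a standard FHIR resource type) carrying one condition, plus a hypothetical extension for linking pedigree members. The extension URL and the specific codes below are illustrative placeholders; the profile's actual canonical URLs and constraints are not given in the abstract.

```python
import json

# Hypothetical instance; the extension URL is an illustrative placeholder,
# not the profile's actual canonical URL.
family_member_history = {
    "resourceType": "FamilyMemberHistory",
    "status": "completed",
    "patient": {"reference": "Patient/example"},
    "relationship": {
        "coding": [{"system": "http://terminology.hl7.org/CodeSystem/v3-RoleCode",
                    "code": "MTH", "display": "mother"}]
    },
    "condition": [{
        "code": {"coding": [{"system": "http://snomed.info/sct",
                             "code": "254837009",
                             "display": "Malignant neoplasm of breast"}]},
        "onsetAge": {"value": 52, "unit": "yr"}
    }],
    "extension": [{
        "url": "http://example.org/fhir/StructureDefinition/pedigree-link",
        "valueReference": {"reference": "FamilyMemberHistory/maternal-grandmother"}
    }]
}
print(json.dumps(family_member_history, indent=2))
```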
Information Geometry for Landmark Shape Analysis: Unifying Shape Representation and Deformation
Peter, Adrian M.; Rangarajan, Anand
2010-01-01
Shape matching plays a prominent role in the comparison of similar structures. We present a unifying framework for shape matching that uses mixture models to couple both the shape representation and deformation. The theoretical foundation is drawn from information geometry wherein information matrices are used to establish intrinsic distances between parametric densities. When a parameterized probability density function is used to represent a landmark-based shape, the modes of deformation are automatically established through the information matrix of the density. We first show that given two shapes parameterized by Gaussian mixture models (GMMs), the well-known Fisher information matrix of the mixture model is also a Riemannian metric (actually, the Fisher-Rao Riemannian metric) and can therefore be used for computing shape geodesics. The Fisher-Rao metric has the advantage of being an intrinsic metric and invariant to reparameterization. The geodesic—computed using this metric—establishes an intrinsic deformation between the shapes, thus unifying both shape representation and deformation. A fundamental drawback of the Fisher-Rao metric is that it is not available in closed form for the GMM. Consequently, shape comparisons are computationally very expensive. To address this, we develop a new Riemannian metric based on generalized ϕ-entropy measures. In sharp contrast to the Fisher-Rao metric, the new metric is available in closed form. Geodesic computations using the new metric are considerably more efficient. We validate the performance and discriminative capabilities of these new information geometry-based metrics by pairwise matching of corpus callosum shapes. We also study the deformations of fish shapes that have various topological properties. A comprehensive comparative analysis is also provided using other landmark-based distances, including the Hausdorff distance, the Procrustes metric, landmark-based diffeomorphisms, and the bending energies of the thin-plate (TPS) and Wendland splines. PMID:19110497
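For reference, the Fisher-Rao metric underlying the geodesic computation can be written as follows; the ϕ-entropy construction in the paper generalizes the logarithm in this definition.

```latex
% Fisher-Rao (Fisher information) metric on a parametric family p(x | theta):
g_{ij}(\theta) = \int p(x \mid \theta)\,
    \frac{\partial \log p(x \mid \theta)}{\partial \theta_i}\,
    \frac{\partial \log p(x \mid \theta)}{\partial \theta_j}\, dx,
\qquad
ds^{2} = \sum_{i,j} g_{ij}(\theta)\, d\theta_i\, d\theta_j .
```

The geodesic distance between two shapes parameterized as mixture densities is then the length of the shortest path under this metric.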
Development and evaluation of a biomedical search engine using a predicate-based vector space model.
Kwak, Myungjae; Leroy, Gondy; Martinez, Jesse D; Harwell, Jeffrey
2013-10-01
Although biomedical information available in articles and patents is increasing exponentially, we continue to rely on the same information retrieval methods and use very few keywords to search millions of documents. We are developing a fundamentally different approach for finding much more precise and complete information with a single query, using predicates instead of keywords for both query and document representation. Predicates are triples that are more complex data structures than keywords and contain more structured information. To make optimal use of them, we developed a new predicate-based vector space model and query-document similarity function with adjusted tf-idf and boost function. Using a test bed of 107,367 PubMed abstracts, we evaluated the first essential function: retrieving information. Cancer researchers provided 20 realistic queries, for which the top 15 abstracts were retrieved using a predicate-based (new) and keyword-based (baseline) approach. Each abstract was evaluated, double-blind, by cancer researchers on a 0-5 point scale to calculate precision (0 versus higher) and relevance (0-5 score). Precision was significantly higher (p<.001) for the predicate-based (80%) than for the keyword-based (71%) approach. Relevance was almost doubled with the predicate-based approach: 2.1 versus 1.6 without rank order adjustment (p<.001) and 1.34 versus 0.98 with rank order adjustment (p<.001) for the predicate- versus keyword-based approach, respectively. Predicates can support more precise searching than keywords, laying the foundation for rich and sophisticated information search. Copyright © 2013 Elsevier Inc. All rights reserved.
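A minimal sketch of a predicate-based vector space: documents become bags of subject-relation-object triples, weighted tf-idf style, and ranked by cosine similarity against a triple query. The paper's adjusted tf-idf and boost function are not specified in the abstract, so plain tf-idf is used here and the triples are invented examples.

```python
import math
from collections import Counter

# Each document is a bag of predicate triples instead of keywords.
docs = {
    "d1": [("p53", "activates", "apoptosis"), ("p53", "binds", "mdm2"),
           ("p53", "activates", "apoptosis")],
    "d2": [("mdm2", "inhibits", "p53"), ("p53", "binds", "mdm2")],
}
query = [("p53", "activates", "apoptosis")]

def tfidf_vector(triples, df, n_docs):
    """Log tf times idf, with triples as the indexing units."""
    tf = Counter(triples)
    return {t: (1 + math.log(c)) * math.log(n_docs / df[t]) for t, c in tf.items()}

df = Counter(t for triples in docs.values() for t in set(triples))
vecs = {name: tfidf_vector(tr, df, len(docs)) for name, tr in docs.items()}
qvec = {t: 1.0 for t in query if t in df}      # query triples weighted uniformly

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

print(sorted(((cosine(qvec, v), name) for name, v in vecs.items()), reverse=True))
```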
Grid Enabled Geospatial Catalogue Web Service
NASA Technical Reports Server (NTRS)
Chen, Ai-Jun; Di, Li-Ping; Wei, Ya-Xing; Liu, Yang; Bui, Yu-Qi; Hu, Chau-Min; Mehrotra, Piyush
2004-01-01
Geospatial Catalogue Web Service is a vital service for sharing and interoperating volumes of distributed heterogeneous geospatial resources, such as data, services, applications, and their replicas over the web. Based on Grid technology and the Open Geospatial Consortium (OGC) Catalogue Service - Web information model, this paper proposes a new information model for a Geospatial Catalogue Web Service, named GCWS, which can securely provide Grid-based publishing, managing and querying of geospatial data and services, and transparent access to replica data and related services under the Grid environment. This information model integrates the information model of the Grid Replica Location Service (RLS)/Monitoring & Discovery Service (MDS) with the information model of the OGC Catalogue Service (CSW), and refers to the geospatial data metadata standards from ISO 19115, FGDC and the NASA EOS Core System, and service metadata standards from ISO 19119, to extend itself for expressing geospatial resources. Using GCWS, any valid geospatial user who belongs to an authorized Virtual Organization (VO) can securely publish and manage geospatial resources, and in particular query on-demand data in the virtual community and retrieve it through data-related services that provide functions such as subsetting, reformatting, reprojection, etc. This work facilitates the sharing and interoperation of geospatial resources under the Grid environment, making geospatial resources Grid-enabled and Grid technologies geospatially enabled. It also allows researchers to focus on science, and not on issues with computing capacity, data location, processing and management. GCWS is also a key component for workflow-based virtual geospatial data production.
Visual Persons Behavior Diary Generation Model based on Trajectories and Pose Estimation
NASA Astrophysics Data System (ADS)
Gang, Chen; Bin, Chen; Yuming, Liu; Hui, Li
2018-03-01
The behavior patterns of persons are an important output of surveillance analysis. This paper focuses on a generation model for a visual person behavior diary. The pipeline includes person detection, tracking, and behavior classification. The deep convolutional neural network YOLO (You Only Look Once) v2 is adopted for the person detection module, and multi-person tracking is built on this detection framework. The Hungarian assignment algorithm is used for matching. The person appearance model integrates an HSV color model and a hash code model, and person motion is estimated by a Kalman filter. Objects are matched with existing tracklets through appearance and motion-location distances using the Hungarian assignment method. A long continuous trajectory for one person is obtained by a spatial-temporal continual linking algorithm, and face recognition information is used to identify the trajectory. The identified trajectories can then be used to generate the visual diary of person behavior based on scene context information and person action estimation. The relevant modules are tested on public data sets and on our own captured video sets. The test results show that the method can be used to generate a visual person behavior diary with reasonable accuracy.
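A minimal sketch of the tracklet-detection association step: a cost matrix mixing appearance and motion distances, solved with the Hungarian algorithm via scipy's linear_sum_assignment. Random histograms stand in for the paper's combined HSV + hash appearance model, and the weights and gate are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(2)

# Per-tracklet state: Kalman-predicted position and appearance histogram.
track_pos = np.array([[100.0, 200.0], [300.0, 120.0]])
track_app = rng.dirichlet(np.ones(16), size=2)      # normalized histograms

# Detections in the current frame (position + appearance).
det_pos = np.array([[305.0, 118.0], [103.0, 204.0], [500.0, 50.0]])
det_app = rng.dirichlet(np.ones(16), size=3)

W_APP, W_MOT, GATE = 0.5, 0.5, 60.0
motion = np.linalg.norm(track_pos[:, None] - det_pos[None, :], axis=2)
# Appearance cost: 1 - histogram intersection, in [0, 1].
appear = 1.0 - np.minimum(track_app[:, None], det_app[None, :]).sum(axis=2)
cost = W_APP * appear + W_MOT * (motion / GATE)

rows, cols = linear_sum_assignment(cost)
for r, c in zip(rows, cols):
    status = "matched" if motion[r, c] < GATE else "rejected (gate)"
    print(f"tracklet {r} -> detection {c}: {status}")
```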
LaPelle, Nancy R; Luckmann, Roger; Simpson, E Hatheway; Martin, Elaine R
2006-01-01
Background Movement towards evidence-based practices in many fields suggests that public health (PH) challenges may be better addressed if credible information about health risks and effective PH practices is readily available. However, research has shown that many PH information needs are unmet. In addition to reviewing relevant literature, this study performed a comprehensive review of existing information resources and collected data from two representative PH groups, focusing on identifying current practices, expressed information needs, and ideal systems for information access. Methods Nineteen individual interviews were conducted among employees of two domains in a state health department – communicable disease control and community health promotion. Subsequent focus groups gathered additional data on preferences for methods of information access and delivery as well as information format and content. Qualitative methods were used to identify themes in the interview and focus group transcripts. Results Informants expressed similar needs for improved information access including single portal access with a good search engine; automatic notification regarding newly available information; access to best practice information in many areas of interest that extend beyond biomedical subject matter; improved access to grey literature as well as to more systematic reviews, summaries, and full-text articles; better methods for indexing, filtering, and searching for information; and effective ways to archive information accessed. Informants expressed a preference for improving systems with which they were already familiar such as PubMed and listservs rather than introducing new systems of information organization and delivery. A hypothetical ideal model for information organization and delivery was developed based on informants' stated information needs and preferred means of delivery. Features of the model were endorsed by the subjects who reviewed it. Conclusion Many critical information needs of PH practitioners are not being met efficiently or at all. We propose a dual strategy of: 1) promoting incremental improvements in existing information delivery systems based on the expressed preferences of the PH users of the systems and 2) the concurrent development and rigorous evaluation of new models of information organization and delivery that draw on successful resources already operating to deliver information to clinical medical practitioners. PMID:16597331
A Model-Driven Development Method for Management Information Systems
NASA Astrophysics Data System (ADS)
Mizuno, Tomoki; Matsumoto, Keinosuke; Mori, Naoki
Traditionally, a Management Information System (MIS) has been developed without using formal methods. With such informal methods, the MIS is developed over its lifecycle without any models, which causes many problems, such as unreliable system design specifications. In order to overcome these problems, a model theory approach was proposed. The approach is based on the idea that a system can be modeled by automata and set theory. However, it is very difficult to generate automata of the system to be developed right from the start. On the other hand, there is a model-driven development method that can flexibly accommodate changes of business logic or implementation technologies. In model-driven development, a system is modeled using a modeling language such as UML. This paper proposes a new development method for management information systems that applies the model-driven development method to a component of the model theory approach. The experiment showed that the effort saved amounts to more than 30% of the total effort.
Subject-based discriminative sparse representation model for detection of concealed information.
Akhavan, Amir; Moradi, Mohammad Hassan; Vand, Safa Rafiei
2017-05-01
The use of machine learning approaches in the concealed information test (CIT) plays a key role in the progress of this neurophysiological field. In this paper, we present a new machine learning method for CIT in which each subject is considered independently of the others. The main goal of this study is to adapt discriminative sparse models to be applicable to the subject-based concealed information test. In order to provide sufficient discriminability between guilty and innocent subjects, we introduce a novel discriminative sparse representation model and appropriate learning methods for it. To evaluate the method, forty-four subjects participated in a mock crime scenario and their EEG data were recorded. As the model input, recurrence plot features were extracted from single-trial data for the different stimuli. The extracted feature vectors were then reduced using a statistical dependency method. The reduced feature vector went through the proposed subject-based sparse model, in which the discrimination power of the sparse code and the reconstruction error are applied simultaneously. Experimental results showed that the proposed approach achieved better performance than other competing discriminative sparse models. The classification accuracy, sensitivity and specificity of the presented sparsity-based method were about 93%, 91% and 95%, respectively. Using the EEG data of a single subject in response to different stimulus types, and with the aid of the proposed discriminative sparse representation model, one can distinguish guilty subjects from innocent ones. Indeed, this property eliminates the need for EEG data from several subjects in model learning and decision making for a specific subject. Copyright © 2017 Elsevier B.V. All rights reserved.
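A rough sketch of the core mechanism of sparse-representation classifiers, deciding by class-wise reconstruction error; plain least squares stands in here for the paper's discriminative sparse coding, and the dictionaries are random stand-ins for learned ones.

```python
import numpy as np

rng = np.random.default_rng(9)

def make_dictionary(n_atoms, dim, shift):
    """Toy class dictionary: random atoms normalized to unit norm."""
    D = rng.normal(loc=shift, scale=1.0, size=(dim, n_atoms))
    return D / np.linalg.norm(D, axis=0)

dim = 40                                     # length of a reduced feature vector
D_guilty = make_dictionary(15, dim, 0.5)     # would be learned from guilty trials
D_innocent = make_dictionary(15, dim, -0.5)  # would be learned from innocent trials

def classify(x):
    """Assign x to the class whose dictionary reconstructs it best."""
    errs = {}
    for label, D in (("guilty", D_guilty), ("innocent", D_innocent)):
        code, *_ = np.linalg.lstsq(D, x, rcond=None)  # stand-in for sparse coding
        errs[label] = np.linalg.norm(x - D @ code)
    return min(errs, key=errs.get), errs

# A test vector generated from the guilty dictionary plus noise.
x = D_guilty @ rng.normal(size=15) * 0.2 + rng.normal(scale=0.05, size=dim)
label, errs = classify(x)
print(label, {k: round(v, 3) for k, v in errs.items()})
```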