Custom Search Engines: Tools & Tips
ERIC Educational Resources Information Center
Notess, Greg R.
2008-01-01
Few have the resources to build a Google or Yahoo! from scratch. Yet anyone can build a search engine based on a subset of the large search engines' databases. Use Google Custom Search Engine, Yahoo! Search Builder, or any of several similar programs to create a vertical search engine targeting sites of interest to users. The basic steps to…
Effective Cyber Situation Awareness (CSA) Assessment and Training
2013-11-01
Snippet excerpts: "…y. Save Wireshark captures. z. Save SNORT logs. aa. Save MySQL databases. 4. After the completion of the scenario, the reversion…"; "…line or from custom Java code. • Cisco ASA Parser: builds normalized, vendor-neutral firewall rule specifications from Cisco ASA and PIX firewall…"; "The Service tool lets analysts build Cauldron models from either the command line or from custom Java code. Functionally, it corresponds to the…"
Becoming customer-driven: one health system's story.
Bagnell, A
1998-01-01
Market research was done by Crozer-Keystone Health System to better understand the new health care consumer. The information will assist in developing, promoting, and delivering products and services of maximum value to current and prospective consumers. The system is responding by bundling and delivering products and services around consumer-based dimensions and developing new and better ways to improve customer convenience, access, and service. Operationalizing these initiatives for change involves building an information infrastructure of extensive content and customer databases, and using new technologies to customize communications and, ultimately, service components.
ThermoBuild: Online Method Made Available for Accessing NASA Glenn Thermodynamic Data
NASA Technical Reports Server (NTRS)
McBride, Bonnie; Zehe, Michael J.
2004-01-01
The new Web site program "ThermoBuild" allows users to easily access and use the NASA Glenn Thermodynamic Database of over 2000 solid, liquid, and gaseous species. A convenient periodic table allows users to "build" the molecules of interest and designate the temperature range over which thermodynamic functions are to be displayed. ThermoBuild also allows users to build custom databases for use with NASA's Chemical Equilibrium with Applications (CEA) program or other programs that require the NASA format for thermodynamic properties. The NASA Glenn Research Center has long been a leader in the compilation and dissemination of up-to-date thermodynamic data, primarily for use with the NASA CEA program, but increasingly for use with other computer programs.
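The "NASA format" the abstract mentions fits thermodynamic functions such as Cp/R with polynomial coefficients over designated temperature ranges. A minimal sketch of evaluating such a fit is below; the functional form shown is the NASA Glenn seven-coefficient Cp/R polynomial as the author understands it, and the coefficients are invented for illustration, not taken from the database.

```python
# Sketch: evaluate Cp/R from seven polynomial coefficients (a1..a7) in the
# NASA Glenn style. The coefficient values below are illustrative only.
def cp_over_R(T, a):
    """Cp/R = a1/T^2 + a2/T + a3 + a4*T + a5*T^2 + a6*T^3 + a7*T^4."""
    a1, a2, a3, a4, a5, a6, a7 = a
    return (a1 / T**2 + a2 / T + a3 + a4 * T
            + a5 * T**2 + a6 * T**3 + a7 * T**4)

# Made-up coefficients giving a constant Cp/R of 3.5 (ideal monatomic-like):
coeffs = (0.0, 0.0, 3.5, 0.0, 0.0, 0.0, 0.0)
print(cp_over_R(1000.0, coeffs))  # 3.5
```

A real ThermoBuild record would supply separate coefficient sets per temperature interval; this sketch evaluates only a single interval.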
Wen, Can-Hong; Ou, Shao-Min; Guo, Xiao-Bo; Liu, Chen-Feng; Shen, Yan-Bo; You, Na; Cai, Wei-Hong; Shen, Wen-Jun; Wang, Xue-Qin; Tan, Hai-Zhu
2017-12-12
Breast cancer is a high-risk, heterogeneous disease with myriad subtypes and complicated biological features. The Cancer Genome Atlas (TCGA) breast cancer database provides researchers with large-scale genome and clinical data via web portals and FTP services. Researchers can gain new insights into their fields and evaluate experimental discoveries with TCGA. However, because of TCGA's complex data formats and diverse files, the database is difficult to access and operate on for researchers with little database or bioinformatics experience. For ease of use, we built the breast cancer (B-CAN) platform, which enables data customization, data visualization, and a private data center. The B-CAN platform runs on an Apache server and interacts with a back-end MySQL database via PHP. Users can customize data to their needs by combining tables from the original TCGA database and selecting variables from each table. The private data center accommodates private data and two types of customized data. A key feature of B-CAN is that it provides both single-table and multiple-table displays; customized data with one barcode corresponding to many records, as well as processed customized data, are shown in the multiple-table display. B-CAN is an intuitive and highly efficient data-sharing platform.
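The table-combining step the abstract describes (join tables, select variables, filter by barcode) can be sketched in miniature. This is not the B-CAN implementation: SQLite stands in for the platform's MySQL back end, and all table and column names below are invented.

```python
import sqlite3

# Hypothetical stand-ins for TCGA-derived tables (names are invented).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE clinical (barcode TEXT PRIMARY KEY, age INTEGER, stage TEXT);
CREATE TABLE expression (barcode TEXT, gene TEXT, value REAL);
INSERT INTO clinical VALUES ('TCGA-01', 54, 'II'), ('TCGA-02', 61, 'III');
INSERT INTO expression VALUES ('TCGA-01', 'BRCA1', 7.2),
                              ('TCGA-01', 'TP53', 5.1),
                              ('TCGA-02', 'BRCA1', 6.8);
""")

# "Customization": the user picks tables to join and variables to keep.
rows = conn.execute("""
    SELECT c.barcode, c.stage, e.gene, e.value
    FROM clinical c JOIN expression e ON c.barcode = e.barcode
    WHERE e.gene = 'BRCA1'
""").fetchall()
print(rows)
```

The one-barcode-to-many-records case the abstract mentions corresponds to the one-to-many join between `clinical` and `expression` here.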
Lewis, M Jane; Ling, Pamela M
2015-01-01
Background: As limitations on traditional marketing tactics and scrutiny by tobacco control have increased, the tobacco industry has benefited from direct mail marketing, which transmits marketing messages directly to carefully targeted consumers utilising extensive custom consumer databases. However, research in these areas has been limited. This is the first study to examine the development, purposes and extent of direct mail and customer databases. Methods: We examined direct mail and database marketing by RJ Reynolds and Philip Morris utilising internal tobacco industry documents from the Legacy Tobacco Document Library, employing standard document research techniques. Results: Direct mail marketing utilising industry databases began in the 1970s and grew from the need for a promotional strategy to deal with declining smoking rates, growing numbers of products and a cluttered media landscape. Both RJ Reynolds and Philip Morris started with existing commercial consumer mailing lists, but subsequently decided to build their own databases of smokers' names, addresses, brand preferences, purchase patterns, interests and activities. By the mid-1990s the RJ Reynolds and Philip Morris databases each contained at least 30 million smokers' names. These companies valued direct mail/database marketing's flexibility, efficiency and unique ability to deliver specific messages to particular groups, as well as direct mail's limited visibility to tobacco control, public health and regulators. Conclusions: Database marketing is an important and increasingly sophisticated tobacco marketing strategy. Additional research is needed on the prevalence of receipt and exposure to direct mail items and their influence on receivers' perceptions and smoking behaviours. PMID:26243810
Li, Ya-Pin; Gao, Hong-Wei; Fan, Hao-Jun; Wei, Wei; Xu, Bo; Dong, Wen-Long; Li, Qing-Feng; Song, Wen-Jing; Hou, Shi-Ke
2017-12-01
The objective of this study was to build a database to collect infectious disease information at the scene of a disaster through the use of 128 epidemiological questionnaires and 47 types of options, with rapid acquisition of information regarding infectious disease and rapid questionnaire customization at the scene of disaster relief by use of a personal digital assistant (PDA). SQL Server 2005 (Microsoft Corp, Redmond, WA) was used to create the option database for the infectious disease investigation, to develop a client application for the PDA, and to deploy the application on the server side. The users accessed the server for data collection and questionnaire customization with the PDA. A database with a set of comprehensive options was created and an application system was developed for the Android operating system (Google Inc, Mountain View, CA). On this basis, an infectious disease information collection system was built for use at the scene of disaster relief. The creation of an infectious disease information collection system and rapid questionnaire customization through the use of a PDA was achieved. This system integrated computer technology and mobile communication technology to develop an infectious disease information collection system and to allow for rapid questionnaire customization at the scene of disaster relief. (Disaster Med Public Health Preparedness. 2017;11:668-673).
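The questionnaire/option store the study describes can be sketched as a small relational schema. This is not the authors' SQL Server 2005 design: SQLite stands in for SQL Server, and the table and column names below are invented for illustration.

```python
import sqlite3

# Hypothetical schema: reusable option types plus questions assembled into
# questionnaires, supporting rapid customization in the field.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE questionnaire (qid INTEGER PRIMARY KEY, title TEXT);
CREATE TABLE option_type (tid INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE question (qid INTEGER, tid INTEGER, text TEXT);
""")
conn.execute("INSERT INTO questionnaire VALUES (1, 'Post-disaster diarrheal disease')")
conn.execute("INSERT INTO option_type VALUES (1, 'yes/no'), (2, 'multiple choice')")
conn.execute("INSERT INTO question VALUES (1, 1, 'Fever in last 48 h?')")

# A PDA client would pull a questionnaire's questions by qid:
n, = conn.execute("SELECT COUNT(*) FROM question WHERE qid = 1").fetchone()
print(n)  # 1
```

The point of the design is that the 47 option types are stored once and shared, so new questionnaires can be assembled on-site without schema changes.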
NASA Astrophysics Data System (ADS)
Kaskhedikar, Apoorva Prakash
According to the U.S. Energy Information Administration, commercial buildings account for about 40% of the United States' energy consumption, of which office buildings consume a major portion. Gauging the extent to which an individual building consumes energy in excess of its peers is the first step in initiating energy efficiency improvements. Energy benchmarking offers an initial assessment of building energy performance without rigorous evaluation. This thesis investigates energy benchmarking tools based on the Commercial Buildings Energy Consumption Survey (CBECS) database. The study proposes a new benchmarking methodology based on decision trees, in which a relationship between energy use intensities (EUIs) and building parameters (continuous and categorical) is developed for different building types. The methodology was applied to the medium office and school building types in the CBECS database. The Random Forest technique was used to find the parameters that most influence building energy use intensities, and significant correlations between EUIs and CBECS variables were then identified. Other than floor area, some of the important variables were number of workers, location, number of PCs, and main cooling equipment. The coefficient of variation was used to evaluate the effectiveness of the new model. The customization technique proposed in this thesis was compared with a benchmarking model widely used by building owners and designers, ENERGY STAR's Portfolio Manager. That tool relies on standard linear regression methods, which can handle only continuous variables. The proposed model uses data-mining techniques and was found to perform slightly better than Portfolio Manager.
The broader impact of the proposed benchmarking methodology is that it allows important categorical variables to be identified and then incorporated in a local, rather than global, model framework for EUI pertinent to the building type. The ability to identify and rank the important variables is of great importance in the practical implementation of benchmarking tools that rely on query-based building and HVAC variable filters specified by the user.
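The core idea above (split buildings on an influential categorical variable, as a single decision-tree node would, then score each group's EUI spread with the coefficient of variation) can be sketched with stdlib tools. The data and the split variable below are invented; the thesis's actual models used the CBECS database and Random Forests.

```python
from statistics import mean, pstdev

# Toy records: EUI (e.g., kBtu/sqft-yr) plus one categorical attribute.
buildings = [
    {"type": "office", "eui": 80.0}, {"type": "office", "eui": 95.0},
    {"type": "office", "eui": 88.0}, {"type": "school", "eui": 60.0},
    {"type": "school", "eui": 55.0}, {"type": "school", "eui": 65.0},
]

def cv_by_group(records, key):
    """Partition on one categorical variable; return each group's
    coefficient of variation (std / mean) of EUI."""
    groups = {}
    for r in records:
        groups.setdefault(r[key], []).append(r["eui"])
    return {g: pstdev(v) / mean(v) for g, v in groups.items()}

cvs = cv_by_group(buildings, "type")
print(cvs)  # lower CV => the split explains more of the EUI variation
```

A peer group with a low coefficient of variation gives a tighter benchmark: a building well above its group mean is more credibly an underperformer.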
Chen, Zhijun; Zhu, Jing; Zhou, Mingjian
2015-03-01
Building on a social identity framework, our cross-level process model explains how a manager's servant leadership affects frontline employees' service performance, measured as service quality, customer-focused citizenship behavior, and customer-oriented prosocial behavior. Among a sample of 238 hairstylists in 30 salons and 470 of their customers, we found that hair stylists' self-identity embedded in the group, namely, self-efficacy and group identification, partially mediated the positive effect of salon managers' servant leadership on stylists' service performance as rated by the customers, after taking into account the positive influence of transformational leadership. Moreover, group competition climate strengthened the positive relationship between self-efficacy and service performance. PsycINFO Database Record (c) 2015 APA, all rights reserved.
Illinois hospital using Web to build database for relationship marketing.
Rees, T
2000-01-01
Silver Cross Hospital and Medical Centers, Joliet, Ill., is promoting its Web site as a tool for gathering health information about patients and prospective patients in order to build a relationship marketing database. The database will enable the hospital to identify health care needs of consumers in Joliet, Will County and many southwestern suburbs of Chicago. The Web site is promoted in a multimedia advertising campaign that invites residents to participate in a Healthy Living Quiz that rewards respondents with free health screenings. The effort is part of a growing planning and marketing strategy in the health care industry called customer relationship management (CRM). Not only does a total CRM plan offer health care organizations the chance to discover the potential for meeting consumers' needs; it also helps find any marketplace gaps that may exist.
NASA Astrophysics Data System (ADS)
Butell, Bart
1996-02-01
Microsoft's Visual Basic (VB) and Borland's Delphi provide an extremely robust programming environment for delivering multimedia solutions for interactive kiosks, games, and titles. Their object-oriented use of standard and custom controls enables a user to build extremely powerful applications. A multipurpose, database-enabled programming environment that provides an event-driven interface functions as a multimedia kernel. This kernel can support a variety of authoring solutions (e.g., a timeline-based model similar to Macromedia Director or a node-authoring model similar to IconAuthor). At the heart of the kernel is a set of low-level multimedia components providing object-oriented interfaces for graphics, audio, video, and imaging. Data preparation tools (e.g., layout, palette, and sprite editors) can be built to manage the media database. VB's flexible interface allows the construction of an infinite number of user models. The proliferation of these models within a popular, easy-to-use environment will allow the vast developer segment of 'producer' types to bring their ideas to market. This is the key to building exciting, content-rich multimedia solutions. Microsoft's VB and Borland's Delphi environments, combined with multimedia components, enable these possibilities.
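The "event-driven multimedia kernel" idea reduces to a dispatcher that routes named events to registered handlers, with authoring models (timeline, node) layered on top. The sketch below illustrates only that pattern in Python; the class and event names are invented and bear no relation to any VB or Delphi API.

```python
# Minimal event-dispatch sketch of a multimedia kernel: components register
# handlers for named events; an authoring layer would emit the events.
class MultimediaKernel:
    def __init__(self):
        self.handlers = {}

    def on(self, event, handler):
        self.handlers.setdefault(event, []).append(handler)

    def dispatch(self, event, payload=None):
        for h in self.handlers.get(event, []):
            h(payload)

kernel = MultimediaKernel()
log = []
kernel.on("frame", lambda t: log.append(("draw", t)))       # timeline model
kernel.on("click", lambda pos: log.append(("hotspot", pos)))  # node model
kernel.dispatch("frame", 1)
kernel.dispatch("click", (120, 40))
print(log)  # [('draw', 1), ('hotspot', (120, 40))]
```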
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aziz, Azizan; Lasternas, Bertrand; Alschuler, Elena
The American Recovery and Reinvestment Act stimulus funding of 2009 for smart grid projects resulted in a tripling of smart meter deployment. In 2012, the Green Button initiative provided utility customers with access to their real-time energy usage. The availability of finely granular data provides enormous potential for energy data analytics and energy benchmarking. The sheer volume of time-series utility data from a large number of buildings also poses challenges in data collection, quality control, and database management for rigorous and meaningful analyses. In this paper, we describe a building portfolio-level data analytics tool for operational optimization, business investment, and policy assessment using utility data at intervals from 15 minutes to monthly. The analytics tool is developed on top of the U.S. Department of Energy's Standard Energy Efficiency Data (SEED) platform, an open source software application that manages energy performance data for large groups of buildings. To support the significantly large volume of granular interval data, we integrated a parallel time-series database with the existing relational database. The time-series database improves on the current utility data input, focusing on real-time data collection, storage, analytics, and data quality control. The fully integrated data platform supports APIs for utility app development by third-party software developers. These apps will provide actionable intelligence for building owners and facilities managers. Unlike a commercial system, this platform is an open source platform funded by the U.S. Government and accessible to the public, researchers, and other developers, to support initiatives in reducing building energy consumption.
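The interval-data workflow described here (ingest granular meter readings, apply quality control, roll up for analysis) can be sketched in a few lines. The readings and the quality rule below are invented; the actual SEED platform uses a dedicated time-series database rather than in-memory aggregation.

```python
from collections import defaultdict
from datetime import datetime

# Invented 15-minute meter readings: (ISO timestamp, kWh in interval).
readings = [
    ("2014-01-01T00:00", 1.2), ("2014-01-01T00:15", 1.1),
    ("2014-02-01T00:00", 0.9), ("2014-02-01T00:15", -5.0),  # implausible value
]

monthly = defaultdict(float)
for ts, kwh in readings:
    if kwh < 0:          # quality control: reject implausible readings
        continue
    month = datetime.fromisoformat(ts).strftime("%Y-%m")
    monthly[month] += kwh

print({m: round(v, 3) for m, v in monthly.items()})  # {'2014-01': 2.3, '2014-02': 0.9}
```

The same roll-up run in the database layer is what lets portfolio-scale analytics work from monthly summaries while keeping the raw intervals for drill-down.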
Carter, Tony
2007-01-01
To build this process it is necessary to consult customers about their preferences, develop familiarity and knowledge to build a relationship, and conduct business in a customized fashion. The process takes every opportunity to build customer satisfaction with each customer contact. It is an important process to have, since customers today are more demanding, sophisticated, educated, and comfortable speaking to the company as an equal (Belk, 2003). Customers have more customized expectations, so they want to be reached as individuals (Raymond and Tanner, 1994). Also, a disproportionate search for new business is costly: cultivating new customers costs more than maintaining existing ones (Cathcart, 1990). Customer retention also matters because many unhappy customers will never buy again from a company that dissatisfied them, and they will communicate their displeasure to other people. Dissatisfied customers may not even convey their displeasure but simply stop doing business with the company, which may leave it unaware for some time that there is any problem (Cathcart, 1990).
Advanced Structures: 2000-2004
NASA Technical Reports Server (NTRS)
2004-01-01
This custom bibliography from the NASA Scientific and Technical Information Program lists a sampling of records found in the NASA Aeronautics and Space Database. The scope of this topic includes technologies for extremely lightweight, multi-function structures with modular interfaces - the building-block technology for advanced spacecraft. This area of focus is one of the enabling technologies defined in NASA's Report of the President's Commission on Implementation of United States Space Exploration Policy, published in June 2004.
NASA Astrophysics Data System (ADS)
Hicks, S. D.; Aufdenkampe, A. K.; Montgomery, D. S.; Damiano, S. G.; Brooks, H. P.
2015-12-01
Scientists and educators around the world have been building their own dataloggers and devices using a variety of boards based on the Arduino open source electronics platform. While several useful boards have been on the market in the past few years, they still required significant modification or additional components before they could be used with various sensors or deployed in remote areas. Here we introduce our new custom datalogger board, which makes it very easy to build a rugged environmental monitoring system. These custom boards contain all of the essential features of a solar-powered datalogger with radio telemetry, plus a very convenient, modular method for attaching a wide variety of sensors and devices. Various deployment options and installations are shown, as well as the online database used for capturing live streaming data from the loggers and displaying graphs on custom web pages. Since its introduction last year, the EnviroDIY online community (http://enviroDIY.org) has continued to gain new members and share new ideas about open-source hardware and software solutions for observing our environment. EnviroDIY members can showcase their gadgets or describe their projects, ask questions, or follow along with helpful tutorials. Our new datalogger board, together with the EnviroDIY website, will make it easy for anyone to build and deploy their own environmental monitoring stations.
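A datalogger streaming to an online database typically serializes each observation as a timestamped record. The sketch below shows one plausible shape for such a record; the station ID and field names are invented, and this is not the EnviroDIY wire format.

```python
import json
from datetime import datetime, timezone

# Hypothetical record a field station might post to the online database.
def make_record(station_id, sensor_values):
    return json.dumps({
        "station": station_id,
        "utc": datetime.now(timezone.utc).isoformat(),
        "readings": sensor_values,
    })

payload = make_record("EnviroDIY-demo-01", {"water_temp_C": 18.4, "depth_mm": 312})
print(payload)
```

Keeping the readings in one self-describing payload per timestamp makes it easy for the server side to chart any sensor without schema changes as stations add devices.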
Standard Energy Efficiency Data Platform
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheifetz, D. Magnus
2014-07-15
The SEED platform is expected to be a building energy performance data management tool that provides federal, state, and local governments, building owners, and operators with an easy, flexible, and cost-effective method to collect information about groups of buildings, oversee compliance with energy disclosure laws, and demonstrate the economic and environmental benefits of energy efficiency. It will allow users to leverage a local application to manage data disclosure and large data sets without the IT investment of developing custom applications. The first users of SEED will be agencies that need to collect, store, and report or share large data sets generated by benchmarking, energy auditing, retro-commissioning, or retrofitting of many buildings. Similarly, building owners and operators will use SEED to manage their own energy data in a common format and centralized location. SEED users will also control the disclosure of their information for compliance requirements, recognition programs such as ENERGY STAR, or data sharing with the Buildings Performance Database and/or other third parties at their discretion.
Kosseim, Patricia; Pullman, Daryl; Perrot-Daley, Astrid; Hodgkinson, Kathy; Street, Catherine; Rahman, Proton
2013-01-01
Objective: To provide a legal and ethical analysis of some of the implementation challenges faced by the Population Therapeutics Research Group (PTRG) at Memorial University (Canada), in using genealogical information offered by individuals for its genetics research database. Materials and methods: This paper describes the unique historical and genetic characteristics of the Newfoundland and Labrador founder population, which gave rise to the opportunity for PTRG to build the Newfoundland Genealogy Database containing digitized records of all pre-confederation (1949) census records of the Newfoundland founder population. In addition to building the database, PTRG has developed the Heritability Analytics Infrastructure, a data management structure that stores genotype, phenotype, and pedigree information in a single database, and custom linkage software (KINNECT) to perform pedigree linkages on the genealogy database. Discussion: A newly adopted legal regimen in Newfoundland and Labrador is discussed. It incorporates health privacy legislation with a unique research ethics statute governing the composition and activities of research ethics boards and, for the first time in Canada, elevating the status of national research ethics guidelines into law. The discussion looks at this integration of legal and ethical principles, which provides a flexible and seamless framework for balancing the privacy rights and welfare interests of individuals, families, and larger societies in the creation and use of research data infrastructures as public goods. Conclusion: The complementary legal and ethical frameworks that now coexist in Newfoundland and Labrador provide the legislative authority, ethical legitimacy, and practical flexibility needed to find a workable balance between privacy interests and public goods. Such an approach may also be instructive for other jurisdictions as they seek to construct and use biobanks and related research platforms for genetic research.
PMID:22859644
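Pedigree linkage of the kind KINNECT performs ultimately rests on walking parent links in a genealogy to relate individuals. The sketch below shows only that core idea (shared-ancestor search); the pedigree data are invented, and this is in no way the KINNECT algorithm.

```python
# Toy pedigree: child -> (mother, father); names are invented.
parents = {
    "A": ("M1", "F1"), "B": ("M2", "F1"),
    "M1": (None, None), "M2": (None, None), "F1": (None, None),
}

def ancestors(person):
    """All ancestors of `person`, found by walking parent links upward."""
    seen, stack = set(), [person]
    while stack:
        p = stack.pop()
        for parent in parents.get(p, (None, None)):
            if parent and parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return seen

shared = ancestors("A") & ancestors("B")
print(shared)  # {'F1'}: A and B are half-siblings through F1
```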
Zablah, Alex R; Carlson, Brad D; Donavan, D Todd; Maxham, James G; Brown, Tom J
2016-05-01
Due to its practical importance, the relationship between customer satisfaction and frontline employee (FLE) job satisfaction has received significant attention in the literature. Numerous studies to date confirm that the constructs are related and rely on this empirical finding to infer support for the "inside-out" effect of FLE job satisfaction on customer satisfaction. In doing so, prior studies ignore the possibility that-as suggested by the Service Profit Chain's satisfaction mirror-a portion of the observed empirical effect may be due to the "outside-in" impact of customer satisfaction on FLE job satisfaction. Consequently, both the magnitude and direction of the causal relationship between the constructs remain unclear. To address this oversight, this study builds on multisource data, including longitudinal satisfaction data provided by 49,242 customers and 1,470 FLEs from across 209 retail stores, to examine the association between FLE job satisfaction and customer satisfaction in a context where service relationships are the norm. Consistent with predictions rooted in social exchange theory, the results reveal that (a) customer satisfaction and FLE job satisfaction are reciprocally related; (b) the outside-in effect of customer satisfaction on FLE job satisfaction is predominant (i.e., larger in magnitude than the inside-out effect); and (c) customer engagement determines the extent of this outside-in predominance. Contrary to common wisdom, the study's findings suggest that, in relational contexts, incentivizing FLEs to satisfy customers may prove to be more effective for enhancing FLE and customer outcomes than direct investments in FLE job satisfaction. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Petaminer: Using ROOT for efficient data storage in MySQL database
NASA Astrophysics Data System (ADS)
Cranshaw, J.; Malon, D.; Vaniachine, A.; Fine, V.; Lauret, J.; Hamill, P.
2010-04-01
High Energy and Nuclear Physics (HENP) experiments store petabytes of event data and terabytes of calibration data in ROOT files. The Petaminer project is developing a custom MySQL storage engine that enables the MySQL query processor to directly access experimental data stored in ROOT files. Our project addresses the problem of efficient navigation to petabytes of HENP experimental data described with event-level TAG metadata, which is required by data-intensive physics communities such as the LHC and RHIC experiments. Physicists need to be able to compose a metadata query and rapidly retrieve the set of matching events; improved efficiency will facilitate the discovery process by permitting rapid iterations of data evaluation and retrieval. Our custom MySQL storage engine enables the MySQL query processor to directly access TAG data stored in ROOT TTrees. Because ROOT TTrees are column-oriented, reading them directly provides improved performance over traditional row-oriented TAG databases. By leveraging the flexible and powerful SQL query language to access data stored in ROOT TTrees, the Petaminer approach enables rich MySQL index-building capabilities for further performance optimization.
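The column-oriented advantage the abstract cites can be illustrated without ROOT: a TAG cut touches only the columns the predicate names, never whole event records. The TAG attributes and values below are invented for illustration.

```python
# Column-oriented TAG store: one list per attribute (invented attributes).
tags = {
    "run":    [100, 100, 101, 101],
    "n_muon": [0, 2, 1, 3],
    "pt_max": [12.5, 48.0, 30.1, 77.9],
}

# A cut like "n_muon >= 2 AND pt_max > 40" reads just two columns;
# a row store would have to scan every full event record instead.
matches = [i for i, (n, pt) in enumerate(zip(tags["n_muon"], tags["pt_max"]))
           if n >= 2 and pt > 40]
print(matches)  # [1, 3] -> event indices passing the metadata cut
```

In Petaminer the same selectivity comes for free because TTrees already store each TAG attribute as its own branch.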
A Measurement of Civil Engineering Customer Satisfaction.
1987-09-01
to best represent civil engineering customers: military building managers, civilian building managers, and field grade officers. Building managers …not know how well they are meeting the expectations of their customers. In their book on service management, Albrecht and Zemke fault American …Austin provide the simplest definition of a customer -- one who pays the bills (2:45). In his book on service management, Richard Normann labels the…
Hinton, Elizabeth G; Oelschlegel, Sandra; Vaughn, Cynthia J; Lindsay, J Michael; Hurst, Sachiko M; Earl, Martha
2013-01-01
This study utilizes an informatics tool to analyze a robust literature search service in an academic medical center library. Structured interviews with librarians were conducted focusing on the benefits of such a tool, expectations for performance, and visual layout preferences. The resulting application utilizes Microsoft SQL Server and .Net Framework 3.5 technologies, allowing for the use of a web interface. Customer tables and MeSH terms are included. The National Library of Medicine MeSH database and entry terms for each heading are incorporated, resulting in functionality similar to searching the MeSH database through PubMed. Data reports will facilitate analysis of the search service.
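The MeSH entry-term functionality the abstract describes amounts to resolving a synonym to its preferred heading before matching, as PubMed's MeSH database does. The sketch below uses two genuine MeSH entry-term pairs, but the lookup table and function are illustrative, not the application's .NET implementation.

```python
# Entry term -> MeSH heading (two real MeSH examples; table is illustrative).
entry_terms = {
    "heart attack": "Myocardial Infarction",
    "cancer": "Neoplasms",
}

def to_heading(term):
    """Resolve a search term to its MeSH heading, if a mapping exists."""
    t = term.lower().strip()
    return entry_terms.get(t, term)  # fall through when no mapping exists

print(to_heading("Heart Attack"))  # Myocardial Infarction
```

Normalizing customer search requests this way is what lets the reporting layer aggregate searches by heading rather than by the literal phrasing each customer used.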
Comparative study on the customization of natural language interfaces to databases.
Pazos R, Rodolfo A; Aguirre L, Marco A; González B, Juan J; Martínez F, José A; Pérez O, Joaquín; Verástegui O, Andrés A
2016-01-01
In recent decades the popularity of natural language interfaces to databases (NLIDBs) has increased, because in many cases the information obtained from them is used for making important business decisions. Unfortunately, the complexity of their customization by database administrators makes them difficult to use. For a NLIDB to obtain a high percentage of correctly translated queries, it must be correctly customized for the database to be queried. In most cases the performance reported in the NLIDB literature is the highest possible, i.e., the performance obtained when the interfaces were customized by the implementers. For end users, however, the performance that the interface yields when customized by someone other than the implementers matters more. Unfortunately, very few articles report NLIDB performance when the NLIDBs are not customized by the implementers. This article presents a semantically enriched data dictionary (which permits solving many of the problems that occur when translating from natural language to SQL) and an experiment in which two groups of undergraduate students customized our NLIDB and English Language Frontend (ELF), considered one of the best available commercial NLIDBs. The experimental results show that, when customized by the first group, our NLIDB correctly answered 44.69% of queries versus 11.83% for ELF on the ATIS database; when customized by the second group, our NLIDB attained 77.05% versus 13.48% for ELF. The performance attained by our NLIDB when customized by ourselves was 90%.
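Customizing a NLIDB largely means populating a data dictionary that maps domain vocabulary onto schema elements. The toy below shows the flavor of that mapping for an ATIS-like domain; the vocabulary, schema, and keyword-spotting "translator" are all invented, and a real NLIDB (including the one in this article) requires genuine parsing and semantic analysis.

```python
# Invented data dictionary: word -> (table, column) for an ATIS-like schema.
dictionary = {
    "flights": ("flight", None),          # noun maps to a table
    "from":    ("flight", "origin"),      # preposition maps to a column
    "to":      ("flight", "destination"),
}

def translate(question):
    """Naive keyword-spotting NL-to-SQL sketch driven by the dictionary."""
    words = question.lower().replace("?", "").split()
    table, where = None, []
    for i, w in enumerate(words):
        if w in dictionary:
            t, col = dictionary[w]
            table = table or t
            if col and i + 1 < len(words):
                where.append(f"{col} = '{words[i + 1].upper()}'")
    return f"SELECT * FROM {table} WHERE " + " AND ".join(where)

print(translate("flights from Dallas to Boston?"))
# SELECT * FROM flight WHERE origin = 'DALLAS' AND destination = 'BOSTON'
```

The experiment's point follows directly: translation quality depends on how completely a non-implementer fills in exactly this kind of dictionary.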
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, Ian M.; Goldman, Charles A.; Murphy, Sean
The average cost to utilities to save a kilowatt-hour (kWh) in the United States is 2.5 cents, according to the most comprehensive assessment to date of the cost performance of energy efficiency programs funded by electricity customers. These costs are similar to those documented earlier. Cost-effective efficiency programs help ensure electricity system reliability at the most affordable cost as part of utility planning and implementation activities for resource adequacy. Building on prior studies, Berkeley Lab analyzed the cost performance of 8,790 electricity efficiency programs between 2009 and 2015 for 116 investor-owned utilities and other program administrators in 41 states. The Berkeley Lab database includes programs representing about three-quarters of total spending on electricity efficiency programs in the United States.
Catlin, Ann Christine; Fernando, Sumudinie; Gamage, Ruwan; Renner, Lorna; Antwi, Sampson; Tettey, Jonas Kusah; Amisah, Kofi Aikins; Kyriakides, Tassos; Cong, Xiangyu; Reynolds, Nancy R; Paintsil, Elijah
2015-01-01
Prevalence of pediatric HIV disclosure is low in resource-limited settings. Innovative, culturally sensitive, and patient-centered disclosure approaches are needed. Conducting such studies in resource-limited settings is not trivial considering the challenges of capturing, cleaning, and storing clinical research data. To overcome some of these challenges, the Sankofa pediatric disclosure intervention adopted an interactive cyber infrastructure for data capture and analysis. The Sankofa Project database system is built on the HUBzero cyber infrastructure ( https://hubzero.org ), an open source software platform. The hub database components support: (1) data management - the "databases" component creates, configures, and manages database access, backup, repositories, applications, and access control; (2) data collection - the "forms" component is used to build customized web case report forms that incorporate common data elements and include tailored form submit processing to handle error checking, data validation, and data linkage as the data are stored to the database; and (3) data exploration - the "dataviewer" component provides powerful methods for users to view, search, sort, navigate, explore, map, graph, visualize, aggregate, drill-down, compute, and export data from the database. The Sankofa cyber data management tool supports a user-friendly, secure, and systematic collection of all data. We have screened more than 400 child-caregiver dyads and enrolled nearly 300 dyads, with tens of thousands of data elements. The dataviews have successfully supported all data exploration and analysis needs of the Sankofa Project. Moreover, the ability of the sites to query and view data summaries has proven to be an incentive for collecting complete and accurate data. The data system has all the desirable attributes of an electronic data capture tool. 
It also provides an added advantage of building data management capacity in resource-limited settings due to its innovative data query and summary views and availability of real-time support by the data management team.
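The "forms" component's submit-time processing (error checking, data validation) can be sketched as a plain validation function. The field names and rules below are hypothetical illustrations, not the Sankofa Project's actual case report form schema.

```python
# Sketch of submit-time validation for a web case report form: each rule
# appends a human-readable error, and an empty list means the record may be
# stored. Field names (dyad_id, child_age, disclosed) are invented.
def validate_case_report(record):
    """Return a list of validation errors; empty list means the record is clean."""
    errors = []
    if not record.get("dyad_id"):
        errors.append("dyad_id is required")
    age = record.get("child_age")
    if age is not None and not (0 <= age <= 17):
        errors.append("child_age must be between 0 and 17")
    if record.get("disclosed") not in (True, False, None):
        errors.append("disclosed must be a boolean if present")
    return errors
```

Rejecting bad records at submit time, rather than cleaning them later, is what makes the site-level data summaries trustworthy enough to serve as an incentive for complete data entry.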
NASA Astrophysics Data System (ADS)
Bhanumurthy, V.; Venugopala Rao, K.; Srinivasa Rao, S.; Ram Mohan Rao, K.; Chandra, P. Satya; Vidhyasagar, J.; Diwakar, P. G.; Dadhwal, V. K.
2014-11-01
Geographical Information Science (GIS) has now graduated from traditional desktop systems to Internet systems. Internet GIS is emerging as one of the most promising technologies for addressing Emergency Management. Web services with different privileges play an important role in disseminating emergency services to decision makers. The spatial database is one of the most important components in the successful implementation of Emergency Management. It contains spatial data in the form of raster and vector layers linked with non-spatial information. Comprehensive data are required to handle emergency situations in their different phases. These database elements comprise core data, hazard-specific data, corresponding attribute data, and live data coming from remote locations. Core data sets are the minimum required data, including base, thematic, and infrastructure layers, to handle disasters. Disaster-specific information is required to handle a particular disaster situation such as flood, cyclone, forest fire, earthquake, landslide, or drought. In addition, Emergency Management requires many types of data with spatial and temporal attributes that should be made available to the key players in the right format at the right time. The vector database needs to be complemented with satellite imagery of the required resolution for visualisation and analysis in disaster management. Therefore, the database must be interconnected and comprehensive to meet the requirements of Emergency Management. This kind of integrated, comprehensive, and structured database is required to provide the right information at the right time to the right people. However, building a spatial database for Emergency Management is a challenging task because of key issues such as availability of data, sharing policies, compatible geospatial standards, and data interoperability.
Therefore, to facilitate using, sharing, and integrating spatial data, there is a need to define standards for building emergency database systems. These include aspects such as i) data integration procedures, namely a standard coding scheme, schema, metadata format, and spatial format; ii) database organisation mechanisms covering data management, catalogues, and data models; and iii) database dissemination through a suitable environment as a standard service for effective service dissemination. The National Database for Emergency Management (NDEM) is such a comprehensive database for addressing disasters in India at the national level. This paper explains standards for integrating and organising multi-scale, multi-source data and for enabling effective emergency response through customized user interfaces for NDEM. It presents a standard procedure for building comprehensive emergency information systems that enable emergency-specific functions through geospatial technologies.
Recognition of human activities using depth images of Kinect for biofied building
NASA Astrophysics Data System (ADS)
Ogawa, Ami; Mita, Akira
2015-03-01
These days, various functions are needed in living spaces because of an aging society, the promotion of energy conservation, and the diversification of lifestyles. To meet this requirement, we propose the "Biofied Building," a system that learns from living beings. As a key function of this system, various kinds of information are accumulated in a database using small sensor agent robots in order to control the living spaces. Among the various kinds of information about living spaces, human activities in particular can serve as triggers for lighting or air-conditioning control, making customized spaces possible. Human activities are divided into two groups: activities consisting of a single behavior and activities consisting of multiple behaviors. For example, "standing up" or "sitting down" consists of a single behavior; these activities are accompanied by large motions. On the other hand, "eating" consists of several behaviors: holding the chopsticks, picking up the food, putting it in the mouth, and so on. These are continuous motions. Considering the characteristics of these two types of human activities, we use two methods individually: R transformation and variance. In this paper, we focus on the two types of human activities and propose two human activity recognition methods for constructing the database of living spaces for the "Biofied Building." Finally, we compare the results of both methods.
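The variance method for single-behavior activities can be sketched simply: a large motion such as standing up shows up as high variance of a tracked position over a sliding window. The window size and threshold below are invented for illustration and are not the paper's actual parameters.

```python
# Sliding-window variance over a 1-D position signal (e.g. a Kinect joint
# height over time). High variance indicates a large motion.
def sliding_variance(signal, window):
    out = []
    for i in range(len(signal) - window + 1):
        chunk = signal[i:i + window]
        mean = sum(chunk) / window
        out.append(sum((x - mean) ** 2 for x in chunk) / window)
    return out

def detect_large_motion(signal, window=3, threshold=0.01):
    """Flag windows whose variance exceeds the threshold (hypothetical values)."""
    return [v > threshold for v in sliding_variance(signal, window)]
```

A stationary signal yields no flags, while a rising signal (as when a person stands up) flags every window it spans.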
C&RE-SLC: Database for conservation and renewable energy activities
NASA Astrophysics Data System (ADS)
Cavallo, J. D.; Tompkins, M. M.; Fisher, A. G.
1992-08-01
The Western Area Power Administration (Western) requires all its long-term power customers to implement programs that promote the conservation of electric energy or facilitate the use of renewable energy resources. The hope is that these measures could significantly reduce the amount of environmental damage associated with electricity production. As part of preparing the environmental impact statement for Western's Electric Power Marketing Program, Argonne National Laboratory constructed a database of the conservation and renewable energy activities in which Western's Salt Lake City customers are involved. The database provides information on types of conservation and renewable energy activities and allows for comparisons of activities being conducted at different utilities in the Salt Lake City region. Sorting the database allows Western's Salt Lake City customers to be classified so the various activities offered by different classes of utilities can be identified; for example, comparisons can be made between municipal utilities and cooperatives or between large and small customers. The information included in the database was collected from customer planning documents in the files of Western's Salt Lake City office.
Guhlin, Joseph; Silverstein, Kevin A T; Zhou, Peng; Tiffin, Peter; Young, Nevin D
2017-08-10
Rapid generation of omics data in recent years has resulted in vast amounts of disconnected datasets without systemic integration and knowledge building, while individual groups have made customized, annotated datasets available on the web with few ways to link them to in-lab datasets. With so many research groups generating their own data, the ability to relate it to the larger genomic and comparative genomic context is becoming increasingly crucial to make full use of the data. The Omics Database Generator (ODG) allows users to create customized databases that integrate published genomics data with experimental data and can be queried using a flexible graph database. When provided with omics and experimental data, ODG will create a comparative, multi-dimensional graph database. ODG can import definitions and annotations from other sources such as InterProScan, the Gene Ontology, ENZYME, UniPathway, and others. This annotation data can be especially useful for studying new or understudied species for which transcripts have only been predicted, rapidly giving additional layers of annotation to predicted genes. In better-studied species, ODG can perform syntenic annotation translations or rapidly identify characteristics of a set of genes or nucleotide locations, such as hits from an association study. ODG provides a web-based user interface for configuring the data import and for querying the database. Queries can also be run from the command line, and the database can be queried directly through programming-language hooks available for most languages. ODG supports most common genomic formats as well as a generic, easy-to-use tab-separated-value format for user-provided annotations. ODG is a user-friendly database generation and query tool that adapts to the supplied data to produce a comparative genomic database or multi-layered annotation database.
ODG provides rapid comparative genomic annotation and is therefore particularly useful for non-model or understudied species. For species for which more data are available, ODG can be used to conduct complex multi-omics, pattern-matching queries.
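The graph-style query idea behind annotation translation can be sketched with a plain adjacency map instead of a real graph database. The node identifiers and edge labels below are invented, and this is only a pattern illustration, not ODG's actual data model or query language.

```python
# A minimal edge map: (node, edge_label) -> list of neighbor nodes.
GRAPH = {
    ("gene:G1", "has_annotation"): ["GO:0008150"],
    ("gene:G1", "ortholog_of"): ["gene:G2"],
    ("gene:G2", "has_annotation"): ["GO:0003674"],
}

def neighbors(node, edge):
    return GRAPH.get((node, edge), [])

def annotations_via_orthologs(gene):
    """Collect a gene's own annotations plus those inherited from orthologs,
    mimicking an orthologous annotation translation for an understudied gene."""
    result = set(neighbors(gene, "has_annotation"))
    for ortholog in neighbors(gene, "ortholog_of"):
        result |= set(neighbors(ortholog, "has_annotation"))
    return sorted(result)
```

In a real graph database this two-hop traversal is a single query, which is why the graph model suits multi-omics pattern matching.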
Developing customer databases.
Rao, S K; Shenbaga, S
2000-01-01
There is a growing consensus among pharmaceutical companies that more product and customer-specific approaches to marketing and selling a new drug can result in substantial increases in sales. Marketers and researchers taking a proactive micro-marketing approach to identifying, profiling, and communicating with target customers are likely to facilitate such approaches and outcomes. This article provides a working framework for creating customer databases that can be effectively mined to achieve a variety of such marketing and sales force objectives.
Building Energy Consumption Pattern Analysis of Detached Housing for the Policy Decision Simulator
NASA Astrophysics Data System (ADS)
Lim, Jiyoun; Lee, Seung-Eon
2018-03-01
In July 2015, the Korean government announced its plan to raise the previous greenhouse gas emission reduction goal for buildings by 26.9% by 2020. Policies regarding building energy efficiency are therefore being implemented quickly, but the understanding of building owners and the market is generally low, and a government service system that supports decision making for implementing low-energy buildings has not yet been provided. The purpose of this study is to present a design direction for establishing a user-customized building energy database that can support an autonomous ecosystem of low-energy buildings. To reduce energy consumption in buildings, it is necessary to carry out energy performance analysis based on the characteristics of the target building. By analysing about 20 thousand cases of housing energy consumption in Korea, this study identifies real energy consumption patterns by building type. The energy performance of a building can be determined from its energy consumption, but previous building energy consumption analysis programs required expert knowledge and experience, making them difficult for ordinary building users. Therefore, this study also suggests a way to provide proper defaults based on data that general users without expert knowledge of building energy can enter easily.
Freund, Ophir; Reychav, Iris; McHaney, Roger; Goland, Ella; Azuri, Joseph
2017-06-01
Patient compliance with medical advice and recommended treatment depends on perception of health condition, medical knowledge, attitude, and self-efficacy. This study investigated how use of customized online medical databases, intended to improve knowledge in a variety of relevant medical topics, influenced senior adults' perceptions. Seventy-nine older adults in residence homes completed a computerized, tablet-based questionnaire with medical scenarios and related questions. Following an intervention, control group participants answered questions without online help, while an experimental group received internet links that directed them to customized online medical databases. Medical knowledge and test scores among the experimental group improved significantly from pre- to post-intervention (p<0.0001) and were higher than in the control group (p<0.0001). No significant change occurred in the control group. Older adults improved their knowledge in desired medical topic areas using customized online medical databases. The study demonstrated how such databases help solve health-related questions among older adult population members, and that older patients appear willing to consider technology usage in information acquisition. Copyright © 2017 Elsevier B.V. All rights reserved.
Egbring, Marco; Kullak-Ublick, Gerd A; Russmann, Stefan
2010-01-01
To develop a software solution that supports management and clinical review of patient data from electronic medical records databases or claims databases for pharmacoepidemiological drug safety studies. We used open source software to build a data management system and an internet application with a Flex client on a Java application server with a MySQL database backend. The application is hosted on Amazon Elastic Compute Cloud. This solution, named Phynx, supports data management, web-based display of electronic patient information, and interactive review of patient-level information in the individual clinical context. The system was applied to a dataset from the UK General Practice Research Database (GPRD). Our solution can be set up and customized with limited programming resources, and there is almost no extra cost for software. Access times are short, the displayed information is structured in chronological order and visually attractive, and selected information such as drug exposure can be blinded. External experts can review patient profiles and save evaluations and comments via a common web browser. Phynx provides a flexible and economical solution for patient-level review of electronic medical information from databases considering the individual clinical context. It can therefore make an important contribution to efficient validation of outcome assessment in drug safety database studies.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-03
... DEPARTMENT OF COMMERCE Minority Business Development Agency Proposed Information Collection; Comment Request; Online Customer Relationship Management (CRM)/Performance Databases, the Online Phoenix... of program goals via the Online CRM/Performance Databases. The data collected through the Online CRM...
Web application for detailed real-time database transaction monitoring for CMS condition data
NASA Astrophysics Data System (ADS)
de Gruttola, Michele; Di Guida, Salvatore; Innocente, Vincenzo; Pierro, Antonio
2012-12-01
In the upcoming LHC era, databases have become an essential part of the experiments collecting data from the LHC, in order to safely store, and consistently retrieve, the wide range of data produced by different sources. In the CMS experiment at CERN, all this information is stored in ORACLE databases allocated on several servers, both inside and outside the CERN network. In this scenario, the task of monitoring the different databases is a crucial database administration issue, since different information may be required depending on different users' tasks such as data transfer, inspection, planning, and security. We present here a web application based on a Python web framework and Python modules for data-mining purposes. To customize the GUI, we record traces of user interactions that are used to build use-case models. In addition, the application detects errors in database transactions (for example, it identifies mistakes made by users, application failures, unexpected network shutdowns, or Structured Query Language (SQL) statement errors) and provides warning messages from the different users' perspectives. Finally, in order to fulfill the requirements of the CMS experiment community, and to keep pace with developments in web client tools, our application was further developed and new features were deployed.
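The transaction error-detection idea can be sketched in a few lines: run a statement, trap database errors, and record a warning for the monitoring view. This sketch uses Python's built-in sqlite3 purely for illustration; the CMS system monitors ORACLE servers and classifies many more failure modes, and the table names here are invented.

```python
import sqlite3

def run_monitored(conn, sql, warnings):
    """Execute a statement; on failure, log a warning entry instead of raising."""
    try:
        cur = conn.execute(sql)
        return cur.fetchall()
    except sqlite3.Error as exc:
        warnings.append({"statement": sql, "error": str(exc)})
        return None

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conditions (run INTEGER, payload TEXT)")
warnings = []
run_monitored(conn, "INSERT INTO conditions VALUES (1, 'align')", warnings)
rows = run_monitored(conn, "SELECT run FROM conditions", warnings)
bad = run_monitored(conn, "SELECT * FROM no_such_table", warnings)
```

The `warnings` list is the kind of structure a web front end could render per user perspective (data transfer, inspection, security), flagging the failed statement alongside successful transactions.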
Lee, Howard; Chapiro, Julius; Schernthaner, Rüdiger; Duran, Rafael; Wang, Zhijun; Gorodetski, Boris; Geschwind, Jean-François; Lin, MingDe
2015-04-01
The objective of this study was to demonstrate that an intra-arterial liver therapy clinical research database system is a more workflow-efficient and robust tool for clinical research than a spreadsheet storage system. The database system could be used to generate clinical research study populations easily with custom search and retrieval criteria. A questionnaire was designed and distributed to 21 board-certified radiologists to assess current data storage problems and clinician reception to a database management system. Based on the questionnaire findings, a customized database and user interface system were created to perform automatic calculations of clinical scores, including staging systems such as Child-Pugh and Barcelona Clinic Liver Cancer, and to facilitate data input and output. Questionnaire participants were favorable to a database system. The interface retrieved study-relevant data accurately and effectively. The database effectively produced easy-to-read, study-specific patient populations with custom-defined inclusion/exclusion criteria. The database management system is workflow efficient and robust in retrieving, storing, and analyzing data. Copyright © 2015 AUR. Published by Elsevier Inc. All rights reserved.
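The automatic clinical-score calculation can be illustrated with the standard Child-Pugh criteria (the published cutoffs: bilirubin in mg/dL, albumin in g/dL, INR, plus ascites and encephalopathy grades). The function signature and category labels are hypothetical, not the paper's actual interface.

```python
# Child-Pugh scoring: each of five parameters contributes 1-3 points;
# total 5-6 is class A, 7-9 class B, 10-15 class C.
def child_pugh(bilirubin, albumin, inr, ascites, encephalopathy):
    """Return (total points, Child-Pugh class) from standard cutoffs."""
    points = 0
    points += 1 if bilirubin < 2 else 2 if bilirubin <= 3 else 3
    points += 1 if albumin > 3.5 else 2 if albumin >= 2.8 else 3
    points += 1 if inr < 1.7 else 2 if inr <= 2.3 else 3
    points += {"none": 1, "mild": 2, "severe": 3}[ascites]
    points += {"none": 1, "grade1-2": 2, "grade3-4": 3}[encephalopathy]
    grade = "A" if points <= 6 else "B" if points <= 9 else "C"
    return points, grade
```

Computing such scores in the database layer, rather than by hand in a spreadsheet, is what lets the system filter study populations by stage with custom inclusion/exclusion criteria.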
2013-06-01
U.S. Army Corps of Engineers: Building Overhead Costs into Projects and Customers' Views on Information Provided. Why GAO Did This Study: The Corps spends billions of dollars annually on projects in its Civil Works
An Automated Ab Initio Framework for Identifying New Ferroelectrics
NASA Astrophysics Data System (ADS)
Smidt, Tess; Reyes-Lillo, Sebastian E.; Jain, Anubhav; Neaton, Jeffrey B.
Ferroelectric materials have a wide range of technological applications, including non-volatile RAM and optoelectronics. In this work, we present an automated first-principles search for ferroelectrics. We integrate density functional theory, crystal structure databases, symmetry tools, workflow software, and a custom analysis toolkit to build a library of known and proposed ferroelectrics. We screen thousands of candidates using symmetry relations between nonpolar and polar structure pairs. We use two search strategies: 1) polar-nonpolar pairs with the same composition, and 2) polar-nonpolar structure-type pairs. Results are automatically parsed, stored in a database, and made accessible via a web interface showing distortion animations and plots of polarization and total energy as a function of distortion. We benchmark our results against experimental data, present new ferroelectric candidates found through our search, and discuss future work on expanding this search methodology to other material classes such as antiferroelectrics and multiferroics.
Use of 3D Printing for Custom Wind Tunnel Fabrication
NASA Astrophysics Data System (ADS)
Gagorik, Paul; Bates, Zachary; Issakhanian, Emin
2016-11-01
Small-scale wind tunnels for the most part are fairly simple to produce with standard building equipment. However, the intricate bell housing and inlet shape of an Eiffel type wind tunnel, as well as the transition from diffuser to fan in a rectangular tunnel can present design and construction obstacles. With the help of 3D printing, these shapes can be custom designed in CAD models and printed in the lab at very low cost. The undergraduate team at Loyola Marymount University has built a custom benchtop tunnel for gas turbine film cooling experiments. 3D printing is combined with conventional construction methods to build the tunnel. 3D printing is also used to build the custom tunnel floor and interchangeable experimental pieces for various experimental shapes. This simple and low-cost tunnel is a custom solution for specific engineering experiments for gas turbine technology research.
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1998-04-01
Commercial chillers are used in space and industrial process cooling. Approximately 3% of commercial buildings, representing 19% of all commercial floor space, are cooled by chillers. Consequently, every chiller represents significant electric (or gas) consumption. Chillers can comprise as much as 30% of a large office building's electrical load. The selection decisions (electric versus gas, standard versus high efficiency, thermal storage or no thermal storage, etc.) for a new or replacement chiller will affect the customer's energy consumption for twenty to thirty years. Consequently, this decision can play a major role in the customer's relationship with the energy provider. However, even though these chiller decisions have a significant impact on the utility, today the utility has limited influence over them. EPRI commissioned this study to develop understanding that will help utilities increase their influence over chiller decisions. To achieve this objective, this study looks at the customer's behavior: how customers make chiller decisions, how their behavior and decisions are influenced today, and how these decisions might change in the future due to the impact of deregulation and changes in customer goals. The output of this project includes a list of product and service offerings that utilities and EPRI could offer to increase their influence over chiller decisions.
Do Librarians Really Do That? Or Providing Custom, Fee-Based Services.
ERIC Educational Resources Information Center
Whitmore, Susan; Heekin, Janet
This paper describes some of the fee-based, custom services provided by National Institutes of Health (NIH) Library to NIH staff, including knowledge management, clinical liaisons, specialized database searching, bibliographic database development, Web resource guide development, and journal management. The first section discusses selecting the…
The LSST metrics analysis framework (MAF)
NASA Astrophysics Data System (ADS)
Jones, R. L.; Yoachim, Peter; Chandrasekharan, Srinivasan; Connolly, Andrew J.; Cook, Kem H.; Ivezic, Željko; Krughoff, K. S.; Petry, Catherine; Ridgway, Stephen T.
2014-07-01
We describe the Metrics Analysis Framework (MAF), an open-source Python framework developed to provide a user-friendly, customizable, easily extensible set of tools for analyzing data sets. MAF is part of the Large Synoptic Survey Telescope (LSST) Simulations effort. Its initial goal is to provide a tool to evaluate LSST Operations Simulation (OpSim) simulated surveys to help understand the effects of telescope scheduling on survey performance; however, MAF can be applied to a much wider range of datasets. The building blocks of the framework are Metrics (algorithms to analyze a given quantity of data), Slicers (subdividing the overall data set into smaller data slices as relevant for each Metric), and Database classes (to access the dataset and read data into memory). We describe how these building blocks work together and provide an example of using MAF to evaluate different dithering strategies. We also outline how users can write their own custom Metrics and use these within the framework.
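The Metric/Slicer pattern can be shown with a self-contained mock. This is not the real MAF API: class names, method signatures, and the visit dictionaries below are invented to illustrate the design, in which a Slicer subdivides the data set and each Metric computes one value per slice.

```python
# Mock of the Metric/Slicer building blocks (illustrative, not MAF itself).
class BaseMetric:
    def run(self, data_slice):
        raise NotImplementedError

class CountMetric(BaseMetric):
    """Count the number of visits in a slice."""
    def run(self, data_slice):
        return len(data_slice)

class MeanSeeingMetric(BaseMetric):
    """Average the 'seeing' value over a slice."""
    def run(self, data_slice):
        return sum(v["seeing"] for v in data_slice) / len(data_slice)

def slice_by_field(visits):
    """A trivial Slicer: group visits by sky-field id."""
    slices = {}
    for v in visits:
        slices.setdefault(v["field"], []).append(v)
    return slices

visits = [
    {"field": 1, "seeing": 0.8},
    {"field": 1, "seeing": 1.0},
    {"field": 2, "seeing": 1.2},
]
results = {f: CountMetric().run(s) for f, s in slice_by_field(visits).items()}
```

Because Metrics and Slicers are independent, any Metric runs over any slicing of the data, which is what makes the framework easily extensible to custom user Metrics.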
MetNetAPI: A flexible method to access and manipulate biological network data from MetNet
2010-01-01
Background Convenient programmatic access to different biological databases allows automated integration of scientific knowledge. Many databases support a function to download files or data snapshots, or a webservice that offers "live" data. However, the functionality that a database offers cannot be represented in a static data download file, and webservices may consume considerable computational resources from the host server. Results MetNetAPI is a versatile Application Programming Interface (API) to the MetNetDB database. It abstracts, captures and retains operations away from a biological network repository and website. A range of database functions, previously only available online, can be immediately (and independently from the website) applied to a dataset of interest. Data is available in four layers: molecular entities, localized entities (linked to a specific organelle), interactions, and pathways. Navigation between these layers is intuitive (e.g. one can request the molecular entities in a pathway, as well as request in what pathways a specific entity participates). Data retrieval can be customized: Network objects allow the construction of new and integration of existing pathways and interactions, which can be uploaded back to our server. In contrast to webservices, the computational demand on the host server is limited to processing data-related queries only. Conclusions An API provides several advantages to a systems biology software platform. MetNetAPI illustrates an interface with a central repository of data that represents the complex interrelationships of a metabolic and regulatory network. As an alternative to data-dumps and webservices, it allows access to a current and "live" database and exposes analytical functions to application developers. Yet it only requires limited resources on the server-side (thin server/fat client setup). 
The API is available for Java, Microsoft .NET, and R programming environments and offers flexible query and broad data-retrieval methods. Data retrieval can be customized to client needs, and the API offers a framework to construct and manipulate user-defined networks. The design principles can be used as a template to build programmable interfaces for other biological databases. The API software and tutorials are available at http://www.metnetonline.org/api. PMID:21083943
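The layered navigation idea (pathways contain interactions, interactions contain entities, with intuitive movement in both directions) can be sketched with plain dictionaries. All identifiers below are invented for illustration; this is a pattern sketch, not the real MetNetAPI classes.

```python
# Two layers of a mock network: pathways -> interactions -> entities.
PATHWAYS = {
    "glycolysis": ["ixn1", "ixn2"],
}
INTERACTIONS = {
    "ixn1": ["glucose", "hexokinase"],
    "ixn2": ["g6p", "pgi"],
}

def entities_in_pathway(pathway):
    """Downward navigation: pathway layer -> interaction layer -> entity layer."""
    out = []
    for ixn in PATHWAYS.get(pathway, []):
        out.extend(INTERACTIONS.get(ixn, []))
    return out

def pathways_of_entity(entity):
    """Upward navigation: in which pathways does an entity participate?"""
    return sorted(p for p, ixns in PATHWAYS.items()
                  if any(entity in INTERACTIONS.get(i, []) for i in ixns))
```

Exposing both directions as first-class queries is what makes it easy for client code to request "the molecular entities in a pathway" as well as "the pathways a specific entity participates in".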
3D Modeling of Interior Building Environments and Objects from Noisy Sensor Suites
2015-05-14
The interior environment of a building is scanned by a custom hardware system, which provides raw laser and camera sensor readings used to develop these models.
Benis, Arriel; Harel, Nissim; Barkan, Refael; Sela, Tomer; Feldman, Becca
2017-01-01
HMOs record medical data and their interactions with patients. Using this data we strive to identify sub-populations of healthcare customers based on their communication patterns and characterize these sub-populations by their socio-demographic, medical, treatment effectiveness, and treatment adherence profiles. This work will be used to develop tools and interventions aimed at improving patient care. The process included: (1) Extracting socio-demographic, clinical, laboratory, and communication data of 309,460 patients with diabetes in 2015, aged 32+ years, having 7+ years of the disease treated by Clalit Healthcare Services; (2) Reducing dimensions of continuous variables; (3) Finding the K communication-patterns clusters; (4) Building a hierarchical clustering and its associated heatmap to summarize the discovered clusters; (5) Analyzing the clusters found; (6) Validating results epidemiologically. Such a process supports understanding different communication-channel usage and the implementation of personalized services focusing on patients' needs and preferences.
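Step (3) of the process, finding K clusters of communication patterns, can be sketched with a plain k-means on a single usage feature. The feature (fraction of contacts made through an online portal), the fixed initial centers, and the two-cluster choice are all invented for illustration; the study's real features and clustering are much richer.

```python
# Toy 1-D k-means with fixed initial centers (deterministic for illustration).
def kmeans_1d(values, centers, iterations=10):
    groups = []
    for _ in range(iterations):
        groups = [[] for _ in centers]
        for v in values:
            idx = min(range(len(centers)), key=lambda i: abs(v - centers[i]))
            groups[idx].append(v)
        centers = [sum(g) / len(g) if g else c
                   for g, c in zip(groups, centers)]
    return centers, groups

# Hypothetical per-patient portal-usage rates: a low-usage and a high-usage
# sub-population should emerge as two clusters.
usage = [0.05, 0.1, 0.12, 0.8, 0.85, 0.9]
centers, groups = kmeans_1d(usage, centers=[0.0, 1.0])
```

Each resulting cluster would then be characterized by its socio-demographic, medical, and adherence profile, as in steps (4)-(6).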
DOT National Transportation Integrated Search
1999-10-29
Customer satisfaction is at the heart of the Pennsylvania Quality Initiative (PQI), which was created in 1994 to build a more effective partnership among all the stakeholders involved in the process of designing, building, operating, and maintaining ...
Integrating In Silico Resources to Map a Signaling Network
Liu, Hanqing; Beck, Tim N.; Golemis, Erica A.; Serebriiskii, Ilya G.
2013-01-01
The abundance of publicly available life science databases offer a wealth of information that can support interpretation of experimentally derived data and greatly enhance hypothesis generation. Protein interaction and functional networks are not simply new renditions of existing data: they provide the opportunity to gain insights into the specific physical and functional role a protein plays as part of the biological system. In this chapter, we describe different in silico tools that can quickly and conveniently retrieve data from existing data repositories and discuss how the available tools are best utilized for different purposes. While emphasizing protein-protein interaction databases (e.g., BioGrid and IntAct), we also introduce metasearch platforms such as STRING and GeneMANIA, pathway databases (e.g., BioCarta and Pathway Commons), text mining approaches (e.g., PubMed and Chilibot), and resources for drug-protein interactions, genetic information for model organisms and gene expression information based on microarray data mining. Furthermore, we provide a simple step-by-step protocol to building customized protein-protein interaction networks in Cytoscape, a powerful network assembly and visualization program, integrating data retrieved from these various databases. As we illustrate, generation of composite interaction networks enables investigators to extract significantly more information about a given biological system than utilization of a single database or sole reliance on primary literature. PMID:24233784
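The payoff of combining databases can be sketched in a few lines: merging edge lists from two sources yields a composite network holding more interactions than either source alone. The gene pairs below are illustrative stand-ins for BioGrid/IntAct exports, not real query results.

```python
# hypothetical edge lists, standing in for exports from two interaction databases
biogrid_edges = {("TP53", "MDM2"), ("TP53", "EP300"), ("EGFR", "GRB2")}
intact_edges = {("TP53", "MDM2"), ("EGFR", "SHC1"), ("GRB2", "SOS1")}

def merge_networks(*edge_sets):
    """Union edge lists into one composite network. Interactions are
    undirected, so each pair is normalized to a sorted tuple before merging
    to avoid counting (A, B) and (B, A) as distinct edges."""
    merged = set()
    for edges in edge_sets:
        merged.update(tuple(sorted(e)) for e in edges)
    return merged

composite = merge_networks(biogrid_edges, intact_edges)
# the composite holds more interactions than either source alone
```

In practice a tool such as Cytoscape performs this merge during import, but the principle is the same.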
Performance Evaluation of NoSQL Databases: A Case Study
2015-02-01
a centralized relational database. The customer decided to consider NoSQL technologies for two specific uses, namely: the primary data store for... The choice of a particular NoSQL database imposes a specific distributed software architecture and data model, and is a major determinant of the...
Commercial Building Tenant Energy Usage Aggregation and Privacy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livingston, Olga V.; Pulsipher, Trenton C.; Anderson, David M.
A growing number of building owners are benchmarking their building energy use. This requires the building owner to acquire monthly whole-building energy usage information, which can be challenging for buildings in which individual tenants have their own utility meters and accounts with the utility. Some utilities and utility regulators have turned to aggregation of customer energy use data (CEUD) as a way to give building owners whole-building energy usage data while protecting customer privacy. Meter profile aggregation adds a layer of protection that decreases the risk of revealing CEUD as the number of meters aggregated increases. The report statistically characterizes the similarity between individual energy usage patterns and whole-building totals at various levels of meter aggregation.
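The privacy effect of aggregation can be illustrated with a small simulation: the correlation between one tenant's monthly usage profile and the whole-building total falls as more meters are added. The synthetic profiles below are assumptions for illustration, not data from the report.

```python
import random
from statistics import mean, pstdev

def pearson(x, y):
    """Population Pearson correlation between two equal-length series."""
    mx, my = mean(x), mean(y)
    cov = mean((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (pstdev(x) * pstdev(y))

rng = random.Random(42)
months = 12
# 30 synthetic tenant meters, one year of monthly readings each
meters = [[rng.uniform(50, 150) for _ in range(months)] for _ in range(30)]

def similarity_at(n):
    """Correlation of meter 0's profile with an n-meter aggregate that includes it."""
    agg = [sum(m[t] for m in meters[:n]) for t in range(months)]
    return pearson(meters[0], agg)

# a single-meter "aggregate" reveals the tenant's profile exactly;
# a 30-meter aggregate resembles it far less
```

This mirrors the report's point that risk of revealing CEUD decreases as the number of aggregated meters grows.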
Fell, D
1998-01-01
Now, more than ever, health care organizations are desperately trying to reach out to customers and establish stronger relationships that will generate increased loyalty and repeat business. As technology, like the Internet and related mediums, allow us to do a better job of managing information and communication, health care executives must invest the time and resources necessary to bring these new advances into the day-to-day operations of their businesses. Those that do will have a head start in building their brand and their customer loyalty.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Monsabert, S. de; Lemmer, H.; Dinwiddie, D.
1995-10-01
In the past, most buildings, structures, and ship visits were not metered, and flat estimates were calculated based on various estimating techniques. The decomposition process was further complicated by the fact that many of the meters monitor consumption values only and do not provide demand or time-of-use data. This method of billing provides no incentives for PWC customers to implement energy conservation programs, including load shedding, Energy Monitoring and Control Systems (EMCS), building shell improvements, low-flow toilets and shower heads, efficient lighting systems, or other energy savings alternatives. Similarly, the method had no means of adjustment for seasonal or climatic variations outside of the norm. As an alternative to flat estimates, the Customized Utility Billing Integrated Control (CUBIC) system and the Graphical Data Input System (GDIS) were developed to better manage billing to the major claimant area users based on utilities usage factors, building size, weather data, and hours of operation. GDIS is a graphical database that assists PWC engineers in the development and maintenance of single-line utility diagrams of the facilities and meters. It functions as a drawing associate system and is written in AutoLISP for AutoCAD version 12. GDIS interprets the drawings and provides the facility-to-meter and meter-to-meter hierarchy data that are used by CUBIC to allocate the billings. This paper reviews the design, development and implementation aspects of CUBIC/GDIS and discusses the benefits of this improved utilities management system.
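The allocation idea behind CUBIC can be sketched as a proportional split of a metered total by a usage weight. The weight formula and field names below are illustrative assumptions, not CUBIC's actual algorithm.

```python
def allocate_bill(total_cost, buildings):
    """Split one metered cost across buildings in proportion to a usage
    weight built from floor area, operating hours and a weather factor.
    Field names and the weight formula are illustrative only."""
    weights = {
        name: b["area_sqft"] * b["hours_per_week"] * b["weather_factor"]
        for name, b in buildings.items()
    }
    scale = sum(weights.values())
    return {name: total_cost * w / scale for name, w in weights.items()}

bldgs = {
    "warehouse": {"area_sqft": 20000, "hours_per_week": 40, "weather_factor": 1.0},
    "office":    {"area_sqft": 10000, "hours_per_week": 60, "weather_factor": 1.2},
}
bills = allocate_bill(10000.0, bldgs)
# the allocated shares always sum back to the metered total
```

Unlike flat estimates, such a scheme at least responds to building size, hours of operation and weather, which is the incentive problem the paper describes.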
A New Distributed Optimization for Community Microgrids Scheduling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Starke, Michael R; Tomsovic, Kevin
This paper proposes a distributed optimization model for community microgrids considering the building thermal dynamics and customer comfort preference. The microgrid central controller (MCC) minimizes the total cost of operating the community microgrid, including fuel cost, purchasing cost, battery degradation cost and voluntary load shedding cost based on the customers' consumption, while the building energy management systems (BEMS) minimize their electricity bills as well as the cost associated with customer discomfort due to room temperature deviation from the set point. The BEMSs and the MCC exchange information on energy consumption and prices. When the optimization converges, the distributed generation scheduling, energy storage charging/discharging and customers' consumption as well as the energy prices are determined. In particular, we integrate the detailed thermal dynamic characteristics of buildings into the proposed model. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model.
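The price-exchange loop between the MCC and the BEMSs can be sketched as a toy dual-decomposition iteration: the controller raises or lowers a price signal until aggregate consumption meets available supply, while each building responds by trading its bill against discomfort. Everything below is illustrative: the quadratic discomfort model, the step size and the numbers are assumptions, not the paper's actual formulation.

```python
def bems_response(price, target, weight):
    """Each building minimizes price*x + weight*(x - target)**2,
    which gives the closed-form response x = target - price / (2*weight)."""
    return max(0.0, target - price / (2 * weight))

def coordinate(supply, homes, step=0.05, iters=500):
    """Central controller (MCC) side: a subgradient price update that
    raises the price when demand exceeds supply and lowers it otherwise."""
    price = 0.0
    for _ in range(iters):
        demand = sum(bems_response(price, t, w) for t, w in homes)
        price = max(0.0, price + step * (demand - supply))
    demand = sum(bems_response(price, t, w) for t, w in homes)
    return price, demand

# (comfort target in kWh, discomfort weight) per building
homes = [(10.0, 1.0), (8.0, 2.0), (12.0, 0.5)]
price, demand = coordinate(supply=24.0, homes=homes)
```

At convergence the buildings' combined consumption matches the 24 kWh of supply, and the price is the coordination signal the paper describes being exchanged.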
15 CFR 760.3 - Exceptions to prohibitions.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., a U.S. contractor building an industrial facility in boycotting country Y is asked by B, a resident... agency of boycotting country Y to build a pipeline. Y requests A to suggest qualified engineering firms... conducts its operations, to identify qualified engineering firms to its customers so that its customers may...
15 CFR 760.3 - Exceptions to prohibitions.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., a U.S. contractor building an industrial facility in boycotting country Y is asked by B, a resident... agency of boycotting country Y to build a pipeline. Y requests A to suggest qualified engineering firms... conducts its operations, to identify qualified engineering firms to its customers so that its customers may...
19 CFR 10.602 - Packing materials and containers for shipment.
Code of Federal Regulations, 2010 CFR
2010-04-01
...-Central America-United States Free Trade Agreement Rules of Origin § 10.602 Packing materials and....602 Section 10.602 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... into the United States. Accordingly, in applying the build-down, build-up, or net cost method for...
16 CFR 240.13 - Customer's and third party liability.
Code of Federal Regulations, 2011 CFR
2011-01-01
... brokers, perform in-store services for their grocery retailer customers, such as stocking of shelves, building of displays and checking or rotating inventory, etc. A customer operating a retail grocery... new store opening when the customer knows or should know that such allowances, or suitable...
16 CFR 240.13 - Customer's and third party liability.
Code of Federal Regulations, 2012 CFR
2012-01-01
... brokers, perform in-store services for their grocery retailer customers, such as stocking of shelves, building of displays and checking or rotating inventory, etc. A customer operating a retail grocery... new store opening when the customer knows or should know that such allowances, or suitable...
16 CFR 240.13 - Customer's and third party liability.
Code of Federal Regulations, 2014 CFR
2014-01-01
... brokers, perform in-store services for their grocery retailer customers, such as stocking of shelves, building of displays and checking or rotating inventory, etc. A customer operating a retail grocery... new store opening when the customer knows or should know that such allowances, or suitable...
16 CFR 240.13 - Customer's and third party liability.
Code of Federal Regulations, 2013 CFR
2013-01-01
... brokers, perform in-store services for their grocery retailer customers, such as stocking of shelves, building of displays and checking or rotating inventory, etc. A customer operating a retail grocery... new store opening when the customer knows or should know that such allowances, or suitable...
2014-09-17
what you can about requirements for a next-generation laptop for the Home User that • attracts new customers • leverages existing customer loyalty. Training © 2014 Carnegie Mellon University. Traditional requirements elicitation approaches: interviews of customers/users to elicit problems and usage needs; inventory of problem-reporting systems harboring customer complaints; solicitation of specifications from customers/users to build a system.
How Smart Schools Get and Keep Community Support.
ERIC Educational Resources Information Center
Carroll, Susan Rovezzi; Carroll, David
The purpose of this book is to provide public school systems with a fresh and unique approach to building community support. The book suggests that schools must build community support, discusses who the customers segments are, and outlines the diverse ways that public schools can begin to develop strong bonds with each customer segment. It…
Client-oriented Building Mass Customization (CoBMC)
NASA Astrophysics Data System (ADS)
Lee, Xia Sheng; Faris Khamidi, Mohd; Kuppusamy, Sivaraman; Tuck Heng, Chin
2017-12-01
Although much later than other industries, including aerospace, automobile, and oil and gas, digital technology development in the construction industry has been cresting towards an exponential curve. Technological diversity and abundance change the game from "what you can" to "what you want". Society is changing at an unprecedented rate; consequently, adaptability will be crucial. This research paper explores the integration of digital adaptive technologies that transform the construction industry from mass production to a possible client-oriented mass customization. The design, construction and performance stages of a building project are currently undergoing a major overhaul and face a global paradigm shift that will compel attention for the next three decades, with viable solutions such as Building Information Modelling (BIM) to manage massive volumes of data and information. Customization maximizes clients' participation during the design process, thereby achieving greater effective value and higher satisfaction. A study comparing customized and standardized examples investigates how adaptive customization will shift the design paradigm from cost-centric to value-centric. This action research explores different aspects of emerging innovative systems already in place, pushing the edge of frontiers and transforming the building industry landscape, whether micro or giga, to complement new technologies and create an unprecedented freshness over the mundane, routine and mediocre. Three fundamental aspects identified as instrumental to Client-oriented Building Mass Customization (CoBMC) are design option visualization, parametric product information and n-dimensional modelling. The study concluded that a paradigm shift is therefore inevitable for every stakeholder, including clients, who will need to re-examine their roles, capabilities and competencies in preparation for a challenging future.
Managing hybrid marketing systems.
Moriarty, R T; Moran, U
1990-01-01
As competition increases and costs become critical, companies that once went to market only one way are adding new channels and using new methods - creating hybrid marketing systems. These hybrid marketing systems hold the promise of greater coverage and reduced costs. But they are also hard to manage; they inevitably raise questions of conflict and control: conflict because marketing units compete for customers; control because new indirect channels are less subject to management authority. Hard as they are to manage, however, hybrid marketing systems promise to become the dominant design, replacing the "purebred" channel strategy in all kinds of businesses. The trick to managing the hybrid is to analyze tasks and channels within and across a marketing system. A map - the hybrid grid - can help managers make sense of their hybrid system. What the chart reveals is that channels are not the basic building blocks of a marketing system; marketing tasks are. The hybrid grid forces managers to consider various combinations of channels and tasks that will optimize both cost and coverage. Managing conflict is also an important element of a successful hybrid system. Managers should first acknowledge the inevitability of conflict. Then they should move to bound it by creating guidelines that spell out which customers to serve through which methods. Finally, a marketing and sales productivity (MSP) system, consisting of a central marketing database, can act as the central nervous system of a hybrid marketing system, helping managers create customized channels and service for specific customer segments.
A customer-insight led approach to building operational resilience.
Passey, Fi
2018-01-01
High-profile failures over the past few years have led to the disruption of banking services in the UK, with some banks' customers left unable to make or receive payments, check balances or access cash for days or weeks. Technological advances and a push towards remote channels have increased customer expectations of 'always on' - any time, any place, anywhere - and with disruptions lasting anything from a few minutes to nearly a month, the regulator is also taking an interest. Nationwide Building Society has responded positively to this challenge by defining its operational resilience strategy, a long-term plan aimed at minimising the likelihood and impact of future disruptions. Customer research was used in order to understand customer expectations, as well as define and prioritise its end-to-end customer journeys, known as business service lines. A comprehensive mapping exercise facilitated the development of strategies and investment projects to address identified vulnerabilities and increase resilience.
Grid Data Management and Customer Demands at MeteoSwiss
NASA Astrophysics Data System (ADS)
Rigo, G.; Lukasczyk, Ch.
2010-09-01
Data grids constitute the required input form for a variety of applications. Therefore, customers increasingly expect climate services to provide not only measured data but also grids of these data with the required configurations on an operational basis. Currently, MeteoSwiss is establishing a production chain for delivering data grids by subscription directly from the data warehouse in order to meet the demand for precipitation data grids by governmental, business and science customers. The MeteoSwiss data warehouse runs on an Oracle database linked with an ArcGIS Standard edition geodatabase. The grids are produced by Unix-based software written in R, called GRIDMCH, which extracts the station data from the data warehouse and stores the files in the file system. By scripts, the netcdf-v4 files are imported via an FME interface into the database. Currently, daily and monthly deliveries of daily precipitation grids are available from MeteoSwiss with a spatial resolution of 2.2 km x 2.2 km. These daily delivered grids are a preliminary product based on 100 measuring sites, whilst the grid of the monthly delivery of daily sums is calculated from about 430 stations. Crucial for uptake by customers is their understanding of, and trust in, the new grid product. Even when customers clearly state needs which can be covered by grid products, they require a certain lead time to develop applications making use of the particular grid. Therefore, early contacts and continuous attendance, as well as flexibility in adjusting the production process to fulfil emerging customer needs, are important during the introduction period. Gridding over complex terrain can lead to temporarily elevated uncertainties in certain areas, depending on the weather situation and the coverage of measurements.
Therefore, careful instructions on quality and use, and the ability to communicate the uncertainties of gridded data, proved to be essential, especially for the business and science customers who require near-real-time datasets, in order to build up trust in the product across different applications. The implementation of a new method called RSOI for the daily production made it possible to bring the daily precipitation field up to the expectations of customers. The main uses of the grids were near-real-time and past event analysis in areas scarcely covered with stations, and as inputs for forecast tools and models. Critical success factors for the product were speed of delivery combined with accuracy, temporal and spatial resolution, and configuration (coordinate system, projection). To date, grids of archived precipitation data since 1961 and daily/monthly precipitation grid sets with a 4 h delivery lag for Switzerland or subareas are available.
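A generic spatial-interpolation sketch shows the kind of station-to-grid step such a production chain performs. Note this uses simple inverse-distance weighting purely for illustration; the MeteoSwiss production method is RSOI, which is not reproduced here, and the station values are invented.

```python
def idw(stations, gx, gy, power=2):
    """Inverse-distance-weighted estimate at grid point (gx, gy).
    stations: list of (x, y, value) tuples in arbitrary planar units."""
    num = den = 0.0
    for x, y, v in stations:
        d2 = (x - gx) ** 2 + (y - gy) ** 2
        if d2 == 0:
            return v  # grid point coincides with a station
        w = 1.0 / d2 ** (power / 2)
        num += w * v
        den += w
    return num / den

# two synthetic precipitation stations 10 units apart
stations = [(0.0, 0.0, 10.0), (10.0, 0.0, 30.0)]
# one grid row sampled at x = 0, 5, 10
grid = [[idw(stations, gx, gy) for gx in range(0, 11, 5)] for gy in [0.0]]
```

The elevated uncertainty in sparsely covered terrain mentioned above corresponds to grid points far from all stations, where any such interpolation leans on few, distant observations.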
Chen, Ming; Henry, Nathan; Almsaeed, Abdullah; Zhou, Xiao; Wegrzyn, Jill; Ficklin, Stephen
2017-01-01
Abstract Tripal is an open source software package for developing biological databases with a focus on genetic and genomic data. It consists of a set of core modules that deliver essential functions for loading and displaying data records and associated attributes including organisms, sequence features and genetic markers. Beyond the core modules, community members are encouraged to contribute extension modules to build on the Tripal core and to customize Tripal for individual community needs. To expand the utility of the Tripal software system, particularly for RNASeq data, we developed two new extension modules. Tripal Elasticsearch enables fast, scalable searching of the entire content of a Tripal site as well as the construction of customized advanced searches of specific data types. We demonstrate the use of this module for searching assembled transcripts by functional annotation. A second module, Tripal Analysis Expression, houses and displays records from gene expression assays such as RNA sequencing. This includes biological source materials (biomaterials), gene expression values and protocols used to generate the data. In the case of an RNASeq experiment, this would reflect the individual organisms and tissues used to produce sequencing libraries, the normalized gene expression values derived from the RNASeq data analysis and a description of the software or code used to generate the expression values. The module will load data from common flat file formats including standard NCBI Biosample XML. Data loading, display options and other configurations can be controlled by authorized users in the Drupal administrative backend. Both modules are open source, include usage documentation, and can be found in the Tripal organization’s GitHub repository. Database URL: Tripal Elasticsearch module: https://github.com/tripal/tripal_elasticsearch Tripal Analysis Expression module: https://github.com/tripal/tripal_analysis_expression PMID:29220446
An object-oriented approach to the management of meteorological and hydrological data
NASA Technical Reports Server (NTRS)
Graves, S. J.; Williams, S. F.; Criswell, E. A.
1990-01-01
An interface to several meteorological and hydrological databases has been developed that enables researchers to efficiently access and interrelate data through a customized menu system. By extending a relational database system with object-oriented concepts, each user or group of users may have different 'views' of the data, allowing access to data in customized ways without altering the organization of the database. An application to COHMEX and WetNet, two earth science projects within NASA Marshall Space Flight Center's Earth Science and Applications Division, is described.
Review of Methods for Buildings Energy Performance Modelling
NASA Astrophysics Data System (ADS)
Krstić, Hrvoje; Teni, Mihaela
2017-10-01
Research presented in this paper gives a brief review of methods used for modelling the energy performance of buildings. It also gives a comprehensive review of the advantages and disadvantages of the available methods, as well as of the input parameters used for modelling building energy performance. The European EPBD Directive obliges the implementation of an energy certification procedure which gives an insight into building energy performance via existing energy certificate databases. Some of the methods for building energy performance modelling mentioned in this paper are developed by employing data sets of buildings which have already undergone an energy certification procedure. Such a database is used in this paper, where the majority of buildings in the database have already undergone some form of partial retrofitting - replacement of windows or installation of thermal insulation - but still have poor energy performance. The case study presented in this paper utilizes an energy certificate database obtained from residential units in Croatia (over 400 buildings) in order to determine the dependence between building energy performance and variables from the database by using statistical dependence tests. Building energy performance in the database is expressed as a building energy efficiency rate (from A+ to G), which is based on the specific annual energy needed for heating under referential climatic data [kWh/(m2a)]. Independent variables in the database are the surfaces and volume of the conditioned part of the building, the building shape factor, energy used for heating, CO2 emission, building age and year of reconstruction. The research results presented in this paper give an insight into the possibilities of the methods used for building energy performance modelling. Further on, they give an analysis of the dependencies between building energy performance, as the dependent variable, and the independent variables from the database.
The presented results could be used for the development of a new predictive model of building energy performance.
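The statistical dependence tests described above can be sketched with a rank correlation between, say, building age and annual heating need. The implementation below is a generic Spearman correlation (no tie handling, for brevity), and the records are invented for illustration, not drawn from the Croatian database.

```python
def rank(xs):
    """Simple 1-based ranking (assumes no tied values, for brevity)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1
    return r

def spearman(x, y):
    """Spearman rank correlation via the classic 1 - 6*sum(d^2)/(n(n^2-1))
    formula, valid when there are no ties."""
    n = len(x)
    rx, ry = rank(x), rank(y)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# illustrative records: (building age in years, annual heating need in kWh/(m2a))
ages =    [5, 12, 25, 40, 60, 80]
heating = [40, 55, 90, 120, 180, 210]
rho = spearman(ages, heating)
```

A rank correlation is a natural fit here because the dependent variable is an ordinal efficiency rate (A+ to G) rather than a continuous measurement.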
Cataloging the biomedical world of pain through semi-automated curation of molecular interactions
Jamieson, Daniel G.; Roberts, Phoebe M.; Robertson, David L.; Sidders, Ben; Nenadic, Goran
2013-01-01
The vast collection of biomedical literature and its continued expansion has presented a number of challenges to researchers who require structured findings to stay abreast of and analyze molecular mechanisms relevant to their domain of interest. By structuring literature content into topic-specific machine-readable databases, the aggregate data from multiple articles can be used to infer trends that can be compared and contrasted with similar findings from topic-independent resources. Our study presents a generalized procedure for semi-automatically creating a custom topic-specific molecular interaction database through the use of text mining to assist manual curation. We apply the procedure to capture molecular events that underlie ‘pain’, a complex phenomenon with a large societal burden and unmet medical need. We describe how existing text mining solutions are used to build a pain-specific corpus, extract molecular events from it, add context to the extracted events and assess their relevance. The pain-specific corpus contains 765 692 documents from Medline and PubMed Central, from which we extracted 356 499 unique normalized molecular events, with 261 438 single protein events and 93 271 molecular interactions supplied by BioContext. Event chains are annotated with negation, speculation, anatomy, Gene Ontology terms, mutations, pain and disease relevance, which collectively provide detailed insight into how that event chain is associated with pain. The extracted relations are visualized in a wiki platform (wiki-pain.org) that enables efficient manual curation and exploration of the molecular mechanisms that underlie pain. Curation of 1500 grouped event chains ranked by pain relevance revealed 613 accurately extracted unique molecular interactions that in the future can be used to study the underlying mechanisms involved in pain. 
Our approach demonstrates that combining existing text mining tools with domain-specific terms and wiki-based visualization can facilitate rapid curation of molecular interactions to create a custom database. Database URL: ••• PMID:23707966
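The relevance-ranking step, ordering extracted event chains so curators see the most promising ones first, can be sketched as scoring annotated chains. The score and the annotation fields below are illustrative assumptions; the study's actual ranking criteria are not specified here.

```python
# hypothetical extracted event chains with annotation flags, standing in
# for the annotated output described above
chains = [
    {"id": 1, "pain_terms": 3, "negated": False, "speculative": False},
    {"id": 2, "pain_terms": 1, "negated": True,  "speculative": False},
    {"id": 3, "pain_terms": 2, "negated": False, "speculative": True},
]

def relevance(chain):
    """Toy ranking score: reward pain-term mentions, penalize chains whose
    events were flagged as negated or speculative by the annotator."""
    score = chain["pain_terms"]
    if chain["negated"]:
        score -= 2
    if chain["speculative"]:
        score -= 1
    return score

ranked = sorted(chains, key=relevance, reverse=True)
```

Presenting curators with the top of such a ranking, rather than the full set of 356 499 events, is what makes manual curation of the extracted interactions tractable.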
Cataloging the biomedical world of pain through semi-automated curation of molecular interactions.
Jamieson, Daniel G; Roberts, Phoebe M; Robertson, David L; Sidders, Ben; Nenadic, Goran
2013-01-01
The vast collection of biomedical literature and its continued expansion has presented a number of challenges to researchers who require structured findings to stay abreast of and analyze molecular mechanisms relevant to their domain of interest. By structuring literature content into topic-specific machine-readable databases, the aggregate data from multiple articles can be used to infer trends that can be compared and contrasted with similar findings from topic-independent resources. Our study presents a generalized procedure for semi-automatically creating a custom topic-specific molecular interaction database through the use of text mining to assist manual curation. We apply the procedure to capture molecular events that underlie 'pain', a complex phenomenon with a large societal burden and unmet medical need. We describe how existing text mining solutions are used to build a pain-specific corpus, extract molecular events from it, add context to the extracted events and assess their relevance. The pain-specific corpus contains 765 692 documents from Medline and PubMed Central, from which we extracted 356 499 unique normalized molecular events, with 261 438 single protein events and 93 271 molecular interactions supplied by BioContext. Event chains are annotated with negation, speculation, anatomy, Gene Ontology terms, mutations, pain and disease relevance, which collectively provide detailed insight into how that event chain is associated with pain. The extracted relations are visualized in a wiki platform (wiki-pain.org) that enables efficient manual curation and exploration of the molecular mechanisms that underlie pain. Curation of 1500 grouped event chains ranked by pain relevance revealed 613 accurately extracted unique molecular interactions that in the future can be used to study the underlying mechanisms involved in pain. 
Our approach demonstrates that combining existing text mining tools with domain-specific terms and wiki-based visualization can facilitate rapid curation of molecular interactions to create a custom database. Database URL: •••
Building customer capital through knowledge management processes in the health care context.
Liu, Sandra S; Lin, Carol Yuh-Yun
2007-01-01
Customer capital is a value generated and an asset developed from customer relationships. Successfully managing these relationships is enhanced by a knowledge management (KM) infrastructure that captures and transfers customer-related knowledge. The execution of such a system relies on the vision and determination of the top management team (TMT). The health care industry in today's knowledge economy encounters consumerism challenges similar to those of the business sector. Developing customer capital is critical for hospitals to remain competitive in the market. This study aims to provide a taxonomy for cultivating market-based organizational learning that leads to the building of customer capital and the attainment of desirable financial performance in health care. With the advancement of technology, the KM system plays an important moderating role in the entire process. The customer capital issue has not been fully explored in either the business or the health care industry. The exploratory nature of such a pursuit calls for a qualitative approach. This study examines the proposed taxonomy with the case hospital. The lessons learned also are reflected with three US-based health networks. The TMT incorporated the knowledge process of conceptualization and transformation in their organizational mission. The market-oriented learning approach promoted by the TMT helps with the accumulation and sharing of knowledge that prepares the hospital for the dynamics of the marketplace. Their key knowledge advancement relies on both the professional arena and the feedback of customers. The institutionalization of the KM system and organizational culture expands the hospital's customer capital. 
The implication is twofold: (1) the TMT is imperative for the success of building customer capital through the KM process; and (2) the team effort should be enhanced with a learning culture and sharing spirit, in particular, active nurse participation in decision making and frontline staff's role in providing a delightfully surprising patient experience.
Internal Branding: Using Performance Technology To Create an Organization Focused on Customer Value.
ERIC Educational Resources Information Center
Tosti, Donald T.; Stotz, Rodger
2000-01-01
Presents a performance technology approach to revenue enhancement, with the goal of improving customer retention through building customer value. Topics include internal branding, a way to make sure that what the company delivers matches what's promised in the advertising; product versus service brands; and customer satisfaction, including…
Rice, Michael; Gladstone, William; Weir, Michael
2004-01-01
We discuss how relational databases constitute an ideal framework for representing and analyzing large-scale genomic data sets in biology. As a case study, we describe a Drosophila splice-site database that we recently developed at Wesleyan University for use in research and teaching. The database stores data about splice sites computed by a custom algorithm using Drosophila cDNA transcripts and genomic DNA and supports a set of procedures for analyzing splice-site sequence space. A generic Web interface permits the execution of the procedures with a variety of parameter settings and also supports custom structured query language queries. Moreover, new analytical procedures can be added by updating special metatables in the database without altering the Web interface. The database provides a powerful setting for students to develop informatic thinking skills.
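The metatable idea, where new analytical procedures are registered in the database itself rather than coded into the interface, can be sketched with SQLite. Table and column names below are invented for illustration and do not reflect the Wesleyan schema.

```python
import sqlite3

# in-memory stand-in for a splice-site database; schema is illustrative
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE splice_site (id INTEGER PRIMARY KEY, kind TEXT, seq TEXT);
    CREATE TABLE meta_procedure (name TEXT PRIMARY KEY, sql TEXT);
""")
db.executemany("INSERT INTO splice_site (kind, seq) VALUES (?, ?)",
               [("donor", "GTAAGT"), ("donor", "GTGAGT"), ("acceptor", "TTTCAG")])

# registering a procedure in the metatable adds it to the interface
# without touching any front-end code
db.execute("INSERT INTO meta_procedure VALUES (?, ?)",
           ("count_by_kind",
            "SELECT kind, COUNT(*) FROM splice_site GROUP BY kind ORDER BY kind"))

def run_procedure(name):
    """Generic runner: look up a registered query by name and execute it."""
    (sql,) = db.execute("SELECT sql FROM meta_procedure WHERE name = ?",
                        (name,)).fetchone()
    return db.execute(sql).fetchall()

result = run_procedure("count_by_kind")
```

A web front end built on `run_procedure` never needs to change when a new analysis is registered, which is the extensibility property the abstract highlights.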
Building brand equity and customer loyalty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pokorny, G.
Customer satisfaction and customer loyalty are two different concepts, not merely two different phrases measuring a single consumer attitude. Utilities having identical customer satisfaction ratings based on performance in areas like power reliability, pricing, and quality of service differ dramatically in their levels of customer loyalty. As competitive markets establish themselves, discrepancies in customer loyalty will have profound impacts on each utility's prospects for market retention, profitability, and ultimately, shareholder value. Meeting pre-existing consumer needs, wants and preferences is the foundation of any utility strategy for building customer loyalty and market retention. Utilities meet their underlying customer expectations by performing well in three discrete areas: product, customer service programs, and customer service transactions. Brand equity is an intervening variable standing between performance and the loyalty a utility desires. It is the totality of customer perceptions about the unique extra value the utility provides above and beyond its basic product, customer service programs and customer service transactions; it is the tangible, palpable reality of a branded utility that exists in the minds of consumers. By learning to manage their brand equity as well as they manage their brand performance, utilities gain control over all the major elements in the value-creation process that creates customer loyalty. By integrating brand performance and brand equity, electric utility companies can truly become in their customers' eyes a brand - a unique, very special, value-added energy services provider that can ask for and deserve a premium price in the marketplace.
Veterans Administration Databases
The Veterans Administration Information Resource Center provides database and informatics experts, customer service, expert advice, information products, and web technology to VA researchers and others.
A neotropical Miocene pollen database employing image-based search and semantic modeling.
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W; Jaramillo, Carlos; Shyu, Chi-Ren
2014-08-01
Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Mathematical methods were used to assign trait semantics (abstract morphological representations) to images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and for image retrieval by trait semantics and visual content. • Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery.
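The customized trait emphasis described above, weighting color, shape, and texture in a visual-content search, might look like the following minimal sketch. The feature names, values, and weights are invented for illustration and are not taken from the paper.

```python
import math

# Sketch of content-based retrieval with user-customized trait emphasis.
def weighted_distance(a, b, weights):
    # Euclidean distance over named feature groups, scaled per trait so a
    # user can emphasize e.g. shape over color or texture.
    return math.sqrt(sum(weights[k] * (a[k] - b[k]) ** 2 for k in weights))

def rank_images(query, database, weights):
    # Return image ids ordered from most to least similar to the query.
    return sorted(database, key=lambda img_id: weighted_distance(
        query, database[img_id], weights))

db = {
    "grain_a": {"color": 0.2, "shape": 0.9, "texture": 0.4},
    "grain_b": {"color": 0.8, "shape": 0.1, "texture": 0.5},
}
query = {"color": 0.25, "shape": 0.85, "texture": 0.45}
# Emphasize shape three times more strongly than color or texture.
ranking = rank_images(query, db, {"color": 1.0, "shape": 3.0, "texture": 1.0})
```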
Map-Based Querying for Multimedia Database
2014-09-01
existing assets in a custom multimedia database based on an area of interest. It also describes the augmentation of an Android Tactical Assault Kit (ATAK)…
Metu, Somiya (Computational and Information Sciences Directorate, ARL)
Operation and cost of a small dehumidification dry kiln
Richard D. Bergman
2008-01-01
Obtaining small quantities of custom kiln-dried lumber can be an expensive process for an individual woodworker. Building and operating a small kiln capable of drying custom cuts of lumber (such as slabs, bowl blanks) gives woodworkers another option. Our approach was to build and operate a small dehumidification dry kiln. The four charges of lumber ranged from 600 to...
Brettin, Thomas; Davis, James J.; Disz, Terry; ...
2015-02-10
The RAST (Rapid Annotation using Subsystem Technology) annotation engine was built in 2008 to annotate bacterial and archaeal genomes. It works by offering a standard software pipeline for identifying genomic features (i.e., protein-encoding genes and RNA) and annotating their functions. Recently, in order to make RAST a more useful research tool and to keep pace with advancements in bioinformatics, it has become desirable to build a version of RAST that is both customizable and extensible. In this paper, we describe the RAST tool kit (RASTtk), a modular version of RAST that enables researchers to build custom annotation pipelines. RASTtk offers a choice of software for identifying and annotating genomic features as well as the ability to add custom features to an annotation job. RASTtk also accommodates the batch submission of genomes and the ability to customize annotation protocols for batch submissions. This is the first major software restructuring of RAST since its inception.
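A modular, customizable pipeline in the spirit of RASTtk can be sketched as composable stages applied to a genome record. The stage names and record layout below are illustrative assumptions, not the actual RASTtk interface.

```python
# Each stage is an interchangeable function over a genome record; a custom
# annotation pipeline is just an ordered list of stages.
def call_genes(genome):
    genome.setdefault("features", []).append({"type": "CDS"})
    return genome

def call_rnas(genome):
    genome.setdefault("features", []).append({"type": "rRNA"})
    return genome

def run_pipeline(genome, stages):
    # Apply the chosen stages in order; swapping or adding a stage
    # customizes the protocol without touching the other stages.
    for stage in stages:
        genome = stage(genome)
    return genome

def run_batch(genomes, stages):
    # Batch submission: apply one customized protocol to many genomes.
    return [run_pipeline(g, stages) for g in genomes]

annotated = run_batch([{"id": "g1"}, {"id": "g2"}], [call_genes, call_rnas])
```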
Enabling Automated Dynamic Demand Response: From Theory to Practice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frincu, Marc; Chelmis, Charalampos; Aman, Saima
2015-07-14
Demand response (DR) is a technique used in smart grids to shape customer load during peak hours. Automated DR offers utilities a fine grained control and a high degree of confidence in the outcome. However the impact on the customer's comfort means this technique is more suited for industrial and commercial settings than for residential homes. In this paper we propose a system for achieving automated controlled DR in a heterogeneous environment. We present some of the main issues arising in building such a system, including privacy, customer satisfiability, reliability, and fast decision turnaround, with emphasis on the solutions we proposed. Based on the lessons we learned from empirical results we describe an integrated automated system for controlled DR on the USC microgrid. Results show that while on a per building per event basis the accuracy of our prediction and customer selection techniques varies, it performs well on average when considering several events and buildings.
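One ingredient of such a system, selecting customers for a DR event until a target curtailment is met, could be sketched as a simple greedy ranking. The building names and predicted values below are invented for illustration; the actual selection logic on the USC microgrid is more sophisticated.

```python
# Greedy per-event customer selection: rank candidate buildings by
# predicted curtailment and pick until the target reduction is met.
def select_buildings(predicted_kw, target_kw):
    # predicted_kw: building -> predicted curtailment if signalled (kW).
    chosen, total = [], 0.0
    for building, kw in sorted(predicted_kw.items(), key=lambda kv: -kv[1]):
        if total >= target_kw:
            break
        chosen.append(building)
        total += kw
    return chosen, total

selection, achieved = select_buildings(
    {"library": 40.0, "gym": 25.0, "lab": 60.0}, target_kw=80.0)
```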
Towards Introducing a Geocoding Information System for Greenland
NASA Astrophysics Data System (ADS)
Siksnans, J.; Pirupshvarre, Hans R.; Lind, M.; Mioc, D.; Anton, F.
2011-08-01
Currently, addressing practices in Greenland do not support geocoding. Addressing points on a map by geographic coordinates is vital for emergency services such as police and ambulance for avoiding ambiguities in finding incident locations (Government of Greenland, 2010). Therefore, it is necessary to investigate the current addressing practices in Greenland. Asiaq (Asiaq, 2011) is a public enterprise of the Government of Greenland which holds three separate databases regarding addressing and place references: - a list of locality names (towns, villages, farms), - technical base maps (including road center lines not connected with names, and buildings), - the NIN registry (The Land Use Register of Greenland, which holds information on the land allotments and buildings in Greenland). The main problem is that these data sets are not interconnected, thus making it impossible to address a point on a map with geographic coordinates in a standardized way. The possible solutions suffer from the fact that Greenland has a scattered habitation pattern and the generalization of the address assignment schema is a difficult task. A schema would be developed according to the characteristics of the settlement pattern, e.g. cities, remote locations and place names. The aim is to propose an ontology for a common postal address system for Greenland. The main part of the research is dedicated to analyzing the current system and to user requirements engineering. This allowed us to design a conceptual database model which corresponds to the user requirements, and to implement a small-scale prototype. Furthermore, our research includes findings on resemblances between Danish and Greenlandic addressing practices, a data dictionary for establishing the Greenland addressing system's logical model, and an enhanced entity relationship diagram. This initial prototype of the Greenland addressing system could be used to evaluate and build the full architecture of the addressing information system for Greenland. Using software engineering methods, the implementation can be done according to the developed data model and initial database prototype. Development of the Greenland addressing system using modern GIS and database technology would ease the work and improve the quality of public services such as: postal delivery, emergency response, customer/business relationship management, administration of land, utility planning and maintenance, and public statistical data analysis.
Customer relations data aids marketing efforts.
Werronen, H J
1988-08-01
A customer relations information system can help improve a hospital's marketing performance. With such a system, the author writes, a medical center can easily redirect its information systems away from the traditional transaction-oriented approach toward building long-lasting relationships with customers.
Test of ATLAS RPCs Front-End electronics
NASA Astrophysics Data System (ADS)
Aielli, G.; Camarri, P.; Cardarelli, R.; Di Ciaccio, A.; Di Stante, L.; Liberti, B.; Paoloni, A.; Pastori, E.; Santonico, R.
2003-08-01
The Front-End Electronics performing the ATLAS RPC readout is a full-custom 8-channel GaAs circuit, which integrates both the analog and digital signal processing in a single die. The die is bonded on the Front-End board, which is completely enclosed inside the detector Faraday cage. About 50 000 FE boards are foreseen for the experiment. The complete functionality of the FE boards will be certified before detector assembly. We describe here the systematic test devoted to checking the dynamic functionality of each single channel, and the selection criteria applied. It measures and registers all relevant electronic parameters to build up a complete database for the experiment. The statistical results from more than 1100 channels are presented.
Simultaneous Co-Clustering and Classification in Customers Insight
NASA Astrophysics Data System (ADS)
Anggistia, M.; Saefuddin, A.; Sartono, B.
2017-04-01
Building a predictive model on a heterogeneous dataset may yield many problems, such as imprecise parameter estimates and poor prediction accuracy. Such problems can be solved by segmenting the data into relatively homogeneous groups and then building a predictive model for each cluster. The advantage of this strategy is that it usually yields simpler, more interpretable, and more actionable models without any loss in accuracy or reliability. This work concerns a marketing data set that records customer behaviour across products, with variables describing customer and product attributes. The basic idea of this approach is to combine co-clustering and classification simultaneously. The objective of this research is to analyse customer characteristics across products so that the marketing strategy can be targeted precisely.
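The segment-then-model strategy can be illustrated with a deliberately simple stand-in: a one-dimensional two-means clustering followed by a per-cluster mean predictor. The paper's actual method combines co-clustering with classification; this sketch only shows why homogeneous groups yield simpler models.

```python
# Split a heterogeneous dataset into two homogeneous groups, then fit one
# trivial predictor (the mean response) per group.
def two_means(xs, iters=20):
    # Toy 1-D k-means with k=2, seeded from the min and max values.
    lo, hi = min(xs), max(xs)
    for _ in range(iters):
        groups = {0: [], 1: []}
        for x in xs:
            groups[0 if abs(x - lo) <= abs(x - hi) else 1].append(x)
        lo = sum(groups[0]) / len(groups[0]) if groups[0] else lo
        hi = sum(groups[1]) / len(groups[1]) if groups[1] else hi
    return lo, hi

def fit_per_cluster(pairs):
    # pairs: (attribute, response). Assign each point to the nearer center,
    # then predict with that cluster's mean response.
    centers = two_means([x for x, _ in pairs])
    models = {}
    for c in centers:
        ys = [y for x, y in pairs
              if min(centers, key=lambda m: abs(x - m)) == c]
        models[c] = sum(ys) / len(ys)
    def predict(x):
        return models[min(centers, key=lambda m: abs(x - m))]
    return predict

data = [(1, 10), (2, 12), (10, 50), (11, 54)]
predict = fit_per_cluster(data)
```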
From Industry to Higher Education and Libraries: Building the Fast Response Library (FRL).
ERIC Educational Resources Information Center
Apostolou, A. S.; Skiadas, C. H.
In order to be effective in the coming millennium, libraries will need to measure their performance rigorously against the expectations and real needs of their customers. The library of the future will need to be a customer sensitive, knowledge creating, agile enterprise. It must provide value to every customer, where value is the customer's…
Code of Federal Regulations, 2010 CFR
2010-10-01
... commitment to customer satisfaction; the contractor's reporting into databases (see subparts 4.14 and 4.15...-like concern for the interest of the customer. [60 FR 16719, Mar. 31, 1995, as amended at 73 FR 67091...
NASA Technical Reports Server (NTRS)
Snell, William H.; Turner, Anne M.; Gifford, Luther; Stites, William
2010-01-01
A quality system database (QSD), and software to administer the database, were developed to support recording of administrative nonconformance activities that involve requirements for documentation of corrective and/or preventive actions, which can include ISO 9000 internal quality audits and customer complaints.
The Ties that Bind: Creating Great Customer Service.
ERIC Educational Resources Information Center
Lisker, Peter
2000-01-01
Offers suggestions for libraries on how to develop a customer service plan to provide excellent service, create a positive environment for staff members, foster new and continued positive relationships with patrons, and evaluate customer service goals and objectives. Also discusses policies and building appearance. (Author/LRW)
19 CFR 351.304 - Establishing business proprietary treatment of information.
Code of Federal Regulations, 2012 CFR
2012-04-01
... information. 351.304 Section 351.304 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE...) Electronic databases. In accordance with § 351.303(c)(3), an electronic database need not contain brackets... in the database. The public version of the database must be publicly summarized and ranged in...
19 CFR 351.304 - Establishing business proprietary treatment of information.
Code of Federal Regulations, 2013 CFR
2013-04-01
... information. 351.304 Section 351.304 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE...) Electronic databases. In accordance with § 351.303(c)(3), an electronic database need not contain brackets... in the database. The public version of the database must be publicly summarized and ranged in...
19 CFR 351.304 - Establishing business proprietary treatment of information.
Code of Federal Regulations, 2014 CFR
2014-04-01
... information. 351.304 Section 351.304 Customs Duties INTERNATIONAL TRADE ADMINISTRATION, DEPARTMENT OF COMMERCE...) Electronic databases. In accordance with § 351.303(c)(3), an electronic database need not contain brackets... in the database. The public version of the database must be publicly summarized and ranged in...
A Hybrid Data Mining Approach for Credit Card Usage Behavior Analysis
NASA Astrophysics Data System (ADS)
Tsai, Chieh-Yuan
The credit card is one of the most popular e-payment approaches in current online e-commerce. To retain valuable customers, card issuers invest a lot of money in maintaining good relationships with their customers. Although several efforts have studied card usage motivation, few studies emphasize credit card usage behavior analysis when the time period changes from t to t+1. To address this issue, an integrated data mining approach is proposed in this paper. First, the customer profile and transaction data at time period t are retrieved from databases. Second, a LabelSOM neural network groups customers into segments and identifies critical characteristics for each group. Third, a fuzzy decision tree algorithm is used to construct usage behavior rules for interesting customer groups. Finally, these rules are used to analyze the behavior changes between time periods t and t+1. An implementation case using a practical credit card database provided by a commercial bank in Taiwan is illustrated to show the benefits of the proposed framework.
Research on Customer Value Based on Extension Data Mining
NASA Astrophysics Data System (ADS)
Chun-Yan, Yang; Wei-Hua, Li
Extenics is a new discipline for dealing with contradiction problems through a formalized model. Extension data mining (EDM) is a product of combining Extenics with data mining. It explores the acquisition of knowledge based on extension transformations, called extension knowledge (EK), taking advantage of extension methods and data mining technology. EK includes extensible classification knowledge, conductive knowledge, and so on. Extension data mining technology (EDMT) is a new data mining technology that mines EK in databases or data warehouses. Customer value (CV) weighs the importance of a customer relationship for an enterprise, with the enterprise as the subject assessing value and customers as the objects whose value is assessed. CV varies continually. Mining the changing knowledge of CV in databases using EDMT, including quantitative change knowledge and qualitative change knowledge, can provide a foundation for an enterprise deciding its customer relationship management (CRM) strategy. It can also provide a new idea for studying CV.
Business marketing: understand what customers value.
Anderson, J C; Narus, J A
1998-01-01
How do you define the value of your market offering? Can you measure it? Few suppliers in business markets are able to answer those questions, and yet the ability to pinpoint the value of a product or service for one's customers has never been more important. By creating and using what the authors call customer value models, suppliers are able to figure out exactly what their offerings are worth to customers. Field value assessments--the most commonly used method for building customer value models--call for suppliers to gather data about their customers firsthand whenever possible. Through these assessments, a supplier can build a value model for an individual customer or for a market segment, drawing on data gathered from several customers in that segment. Suppliers can use customer value models to create competitive advantage in several ways. First, they can capitalize on the inevitable variation in customers' requirements by providing flexible market offerings. Second, they can use value models to demonstrate how a new product or service they are offering will provide greater value. Third, they can use their knowledge of how their market offerings specifically deliver value to craft persuasive value propositions. And fourth, they can use value models to provide evidence to customers of their accomplishments. Doing business based on value delivered gives companies the means to get an equitable return for their efforts. Once suppliers truly understand value, they will be able to realize the benefits of measuring and monitoring it for their customers.
Azzato, Elizabeth M.; Morrissette, Jennifer J. D.; Halbiger, Regina D.; Bagg, Adam; Daber, Robert D.
2014-01-01
Background: At some institutions, including ours, bone marrow aspirate specimen triage is complex, with hematopathology triage decisions that need to be communicated to downstream ancillary testing laboratories and many specimen aliquot transfers that are handled outside of the laboratory information system (LIS). We developed a custom integrated database with dashboards to facilitate and streamline this workflow. Methods: We developed user-specific dashboards that allow entry of specimen information by technologists in the hematology laboratory, have custom scripting to present relevant information for the hematopathology service and ancillary laboratories and allow communication of triage decisions from the hematopathology service to other laboratories. These dashboards are web-accessible on the local intranet and accessible from behind the hospital firewall on a computer or tablet. Secure user access and group rights ensure that relevant users can edit or access appropriate records. Results: After database and dashboard design, two-stage beta-testing and user education was performed, with the first focusing on technologist specimen entry and the second on downstream users. Commonly encountered issues and user functionality requests were resolved with database and dashboard redesign. Final implementation occurred within 6 months of initial design; users report improved triage efficiency and reduced need for interlaboratory communications. Conclusions: We successfully developed and implemented a custom database with dashboards that facilitates and streamlines our hematopathology bone marrow aspirate triage. This provides an example of a possible solution to specimen communications and traffic that are outside the purview of a standard LIS. PMID:25250187
Swertz, Morris A; De Brock, E O; Van Hijum, Sacha A F T; De Jong, Anne; Buist, Girbe; Baerends, Richard J S; Kok, Jan; Kuipers, Oscar P; Jansen, Ritsert C
2004-09-01
Genomic research laboratories need adequate infrastructure to support management of their data production and research workflow. But what makes infrastructure adequate? A lack of appropriate criteria makes any decision on buying or developing a system difficult. Here, we report on the decision process for the case of a molecular genetics group establishing a microarray laboratory. Five typical requirements for experimental genomics database systems were identified: (i) the ability to evolve with the fast-developing genomics field; (ii) a suitable data model to deal with local diversity; (iii) suitable storage of data files in the system; (iv) easy exchange with other software; and (v) low maintenance costs. The computer scientists and the researchers of the local microarray laboratory considered alternative solutions for these five requirements and chose the following options: (i) use of automatic code generation; (ii) a customized data model based on standards; (iii) storage of datasets as black boxes instead of decomposing them into database tables; (iv) loose linking to other programs for improved flexibility; and (v) a low-maintenance web-based user interface. Our team evaluated existing microarray databases and then decided to build a new system, the Molecular Genetics Information System (MOLGENIS), implemented using code generation in a period of three months. This case can provide valuable insights and lessons to both software developers and user communities embarking on large-scale genomic projects. http://www.molgenis.nl
NASA Technical Reports Server (NTRS)
Baumback, J. I.; Davies, A. N.; Vonirmer, A.; Lampen, P. H.
1995-01-01
To assist peak assignment in ion mobility spectrometry, it is important to have quality reference data. The reference collection should be stored in a database system that can be searched using spectral or substance information. We propose to build such a database customized for ion mobility spectra. To start off, it is important to quickly reach a critical mass of data in the collection. We wish to obtain as many spectra, combined with their IMS parameters, as possible. Spectra suppliers will be rewarded for their participation with access to the database. To make data exchange between users and system administration possible, it is important to define a file format made specially for the requirements of ion mobility spectra. The format should be computer readable and flexible enough for extensive comments to be included. In this document we propose a data exchange format, and we invite comments on it. For international data exchange, it is important to have a standard data exchange format. We propose to base the definition of this format on the JCAMP-DX protocol, which was developed for the exchange of infrared spectra. This standard, made by the Joint Committee on Atomic and Molecular Physical Data, is of a flexible design. The aim of this paper is to adapt JCAMP-DX to the special requirements of ion mobility spectra.
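A labelled-data-record file in the JCAMP-DX style, and a minimal parser for it, might look as follows. The IMS-specific labels (e.g. `TEMPERATURE`) are illustrative assumptions for this sketch, not part of any ratified standard.

```python
# A toy JCAMP-DX-style record: ##LABEL=value header lines, then x/y data
# pairs, terminated by ##END=. Values are invented for illustration.
RECORD = """\
##TITLE=Example IMS spectrum
##JCAMP-DX=4.24
##DATA TYPE=ION MOBILITY SPECTRUM
##TEMPERATURE=298
##XYDATA=(X++(Y..Y))
1.0 12
2.0 440
##END=
"""

def parse_record(text):
    # Split the record into a header dict (labelled lines) and a list of
    # (drift time, intensity) points (unlabelled data lines).
    header, points = {}, []
    for line in text.splitlines():
        if line.startswith("##"):
            label, _, value = line[2:].partition("=")
            header[label] = value
        elif line.strip():
            x, y = line.split()
            points.append((float(x), float(y)))
    return header, points

header, points = parse_record(RECORD)
```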
Müller, H-M; Van Auken, K M; Li, Y; Sternberg, P W
2018-03-09
The biomedical literature continues to grow at a rapid pace, making the challenge of knowledge retrieval and extraction ever greater. Tools that provide a means to search and mine the full text of literature thus represent an important way by which the efficiency of these processes can be improved. We describe the next generation of the Textpresso information retrieval system, Textpresso Central (TPC). TPC builds on the strengths of the original system by expanding the full text corpus to include the PubMed Central Open Access Subset (PMC OA), as well as the WormBase C. elegans bibliography. In addition, TPC allows users to create a customized corpus by uploading and processing documents of their choosing. TPC is UIMA compliant, to facilitate compatibility with external processing modules, and takes advantage of Lucene indexing and search technology for efficient handling of millions of full text documents. Like Textpresso, TPC searches can be performed using keywords and/or categories (semantically related groups of terms), but to provide better context for interpreting and validating queries, search results may now be viewed as highlighted passages in the context of full text. To facilitate biocuration efforts, TPC also allows users to select text spans from the full text and annotate them, create customized curation forms for any data type, and send resulting annotations to external curation databases. As an example of such a curation form, we describe integration of TPC with the Noctua curation tool developed by the Gene Ontology (GO) Consortium. Textpresso Central is an online literature search and curation platform that enables biocurators and biomedical researchers to search and mine the full text of literature by integrating keyword and category searches with viewing search results in the context of the full text. 
It also allows users to create customized curation interfaces, use those interfaces to make annotations linked to supporting evidence statements, and then send those annotations to any database in the world. Textpresso Central URL: http://www.textpresso.org/tpc.
Organization and dissemination of multimedia medical databases on the WWW.
Todorovski, L; Ribaric, S; Dimec, J; Hudomalj, E; Lunder, T
1999-01-01
In the paper, we focus on the problem of building and disseminating multimedia medical databases on the World Wide Web (WWW). The current results of the ongoing project of building a prototype dermatology image database and its WWW presentation are presented. The dermatology database is part of an ambitious plan concerning the organization of a network of medical institutions building distributed and federated multimedia databases on a much wider scale.
ERIC Educational Resources Information Center
Griffiths, Jose-Marie; And Others
This document contains validated activities and competencies needed by librarians working in a database distributor/service organization. The activities of professionals working in database distributor/service organizations are listed by function: Database Processing; Customer Support; System Administration; and Planning. The competencies are…
A Framework for Mapping User-Designed Forms to Relational Databases
ERIC Educational Resources Information Center
Khare, Ritu
2011-01-01
In the quest for database usability, several applications enable users to design custom forms using a graphical interface, and forward engineer the forms into new databases. The path-breaking aspect of such applications is that users are completely shielded from the technicalities of database creation. Despite this innovation, the process of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wokoma, S; Yoon, J; Jung, J
2014-06-01
Purpose: To investigate the impact of custom-made build-up caps for a diode detector in robotic radiosurgery radiation fields with a variable collimator (IRIS) for collimator scatter factor (Sc) calculation. Methods: An acrylic cap was custom-made to fit our SFD (IBA Dosimetry, Germany) diode detector. The cap has a thickness of 5 cm, corresponding to a depth beyond electron contamination. IAEA phase space data were used for beam modeling and the DOSRZnrc code was used to model the detector. The detector was positioned at 80 cm source-to-detector distance. Calculations were performed with the SFD, with and without the build-up cap, for clinical IRIS settings ranging from 7.5 to 60 mm. Results: The collimator scatter factors were calculated with and without the 5 cm build-up cap. They agreed within 3%, except for the 15 mm cone: the Sc factor for the 15 mm cone without buildup was 13.2% lower than that with buildup. Conclusion: Sc data is a critical component in advanced treatment-planning algorithms for accurate dose calculation. After incorporating the build-up cap, we found differences of up to 13.2% in Sc factors for the SFD detector when compared against in-air measurements without build-up caps.
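The underlying arithmetic, Sc as the ratio of an in-air reading at a given collimator setting to the reading at a reference setting, can be sketched with made-up readings chosen only to reproduce the magnitude of the reported discrepancy.

```python
# Sc bookkeeping: each setting's reading is normalized to the reference
# setting (here 60 mm). The readings are invented illustrative numbers.
def sc_factors(readings, reference):
    ref = readings[reference]
    return {setting: r / ref for setting, r in readings.items()}

with_cap = {15: 0.920, 60: 1.000}      # readings with 5 cm build-up cap
without_cap = {15: 0.798, 60: 1.000}   # bare-detector in-air readings

sc_cap = sc_factors(with_cap, reference=60)
sc_bare = sc_factors(without_cap, reference=60)
# Percent difference of the bare measurement relative to the capped one;
# these inputs were picked to land near the -13.2% reported for 15 mm.
diff_15 = 100 * (sc_bare[15] - sc_cap[15]) / sc_cap[15]
```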
Efficiently Distributing Component-based Applications Across Wide-Area Environments
2002-01-01
a variety of sophisticated network-accessible services such as e-mail, banking, on-line shopping, entertainment, and serving as a data exchange...product database; Customer, which serves as a façade to Order and Account (stateful session beans); ShoppingCart, which maintains the list of items to be bought by a customer...Pet Store tests; and JBoss 3.0.3 with Jetty 4.1.0, for the RUBiS tests) and a single database server (Oracle 8.1.7 Enterprise Edition), each running
49 CFR 192.353 - Customer meters and regulators: Location.
Code of Federal Regulations, 2010 CFR
2010-10-01
... TRANSPORTATION OF NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Customer Meters, Service... protected from corrosion and other damage, including, if installed outside a building, vehicular damage that...
Commercial and Multifamily Building Tenant Energy Usage Aggregation and Privacy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Livingston, Olga V.; Pulsipher, Trenton C.; Wang, Na
2014-11-17
In a number of cities and states, building owners are required to disclose and/or benchmark their building energy use. This requires the building owner to possess monthly whole-building energy usage information, which can be challenging for buildings in which individual tenants have their own utility meters and accounts with the utility. Some utilities and utility regulators have turned to aggregation of customer data as a way to give building owners whole-building energy usage data while protecting customer privacy. However, no utilities or regulators appear to have conducted a concerted statistical, cybersecurity, and privacy analysis to justify the level of aggregation selected. Therefore, the Tenant Data Aggregation Task was established to help utilities address these issues and provide recommendations as well as a theoretical justification of the aggregation threshold. This study is focused on the use case of submitting data for ENERGY STAR Portfolio Manager (ESPM), but it also looks at other potential use cases for monthly energy consumption data.
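A threshold-based aggregation rule of the kind discussed can be sketched as follows. The four-meter minimum is an illustrative assumption; actual thresholds are set by individual utilities and regulators.

```python
# Whole-building monthly usage is released to the owner only when enough
# tenant meters contribute, so no single customer's consumption can be
# trivially inferred from the aggregate.
MIN_METERS = 4  # illustrative threshold, not a regulatory value

def aggregate_monthly(tenant_kwh):
    # tenant_kwh: list of per-meter monthly consumption values (kWh).
    if len(tenant_kwh) < MIN_METERS:
        return None  # withhold: too few meters to protect privacy
    return sum(tenant_kwh)

small_building = aggregate_monthly([820.0, 455.5])
large_building = aggregate_monthly([820.0, 455.5, 610.2, 930.8, 702.0])
```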
A spatial database for landslides in northern Bavaria: A methodological approach
NASA Astrophysics Data System (ADS)
Jäger, Daniel; Kreuzer, Thomas; Wilde, Martina; Bemm, Stefan; Terhorst, Birgit
2018-04-01
Landslide databases provide essential information for hazard modeling, damage to buildings and infrastructure, mitigation, and research needs. This study presents the development of a landslide database system named WISL (Würzburg Information System on Landslides), currently storing detailed landslide data for northern Bavaria, Germany, in order to enable scientific queries as well as comparisons with other regional landslide inventories. WISL is based on free open source software (PostgreSQL, PostGIS), ensuring good interoperability between the components and enabling further extensions with specific adaptations of self-developed software. Beyond that, WISL was designed for easy communication with other databases. As a central prerequisite for standardized, homogeneous data acquisition in the field, a customized data sheet for landslide description was compiled. This sheet also serves as an input mask for all data registration procedures in WISL. A variety of "in-database" solutions for landslide analysis provides the necessary scalability for the database, enabling operations at the local server. In its current state, WISL already enables extensive analyses and queries. This paper presents an example analysis of landslides in Oxfordian Limestones in the northeastern Franconian Alb, northern Bavaria. The results reveal widely differing landslides in terms of geometry and size. Further queries related to landslide activity classify the majority of the landslides as currently inactive; however, they clearly possess a certain potential for remobilization. Along with some active mass movements, a significant percentage of landslides potentially endangers residential areas or infrastructure. The main aspect of future enhancements of the WISL database is the extension of its data holdings in order to increase research possibilities, as well as the transfer of the system to other regions and countries.
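The "in-database" queries described above can be imitated in miniature. The record fields and values below are invented to mirror the kind of field-sheet attributes an inventory like WISL stores; they are not actual WISL data:

```python
# Hypothetical landslide records with field-sheet-style attributes
# (identifiers, lithology, activity state, area) invented for illustration.
LANDSLIDES = [
    {"id": 1, "lithology": "Oxfordian Limestone", "active": False, "area_m2": 1200},
    {"id": 2, "lithology": "Oxfordian Limestone", "active": True,  "area_m2": 5400},
    {"id": 3, "lithology": "Keuper Claystone",    "active": False, "area_m2": 800},
]

def query(lithology=None, active=None):
    """Filter the inventory the way an in-database view or query would."""
    hits = LANDSLIDES
    if lithology is not None:
        hits = [r for r in hits if r["lithology"] == lithology]
    if active is not None:
        hits = [r for r in hits if r["active"] == active]
    return hits
```

In the real system such filters would run as SQL inside PostgreSQL/PostGIS, with PostGIS additionally supporting spatial predicates (e.g. intersection with residential areas).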
Can there be mutual support between hospital marketing and continuous quality improvement?
Weiland, D E
1992-01-01
Marketing the results of continuous quality improvement in hospitals builds a growing bank of loyal customers in an increasingly competitive and quality-oriented environment: If healthcare institutions want to survive and flourish, they must develop a lasting relationship with their customers. The long-term goal of CQI is to provide quality products and services. If marketing managers can sell these improved services, hospitals will build a solid client foundation.
CSI Index Of Customer's Satisfaction Applied In The Area Of Public Transport
NASA Astrophysics Data System (ADS)
Poliaková, Adela
2015-06-01
In Western countries, new visions are being applied to quality control for integrated public transport systems. Public transport puts the customer at the centre of decision making, with the aim of achieving customer satisfaction with the provided service. Surveys are conducted among customers on a sustained basis. Many companies collect huge databases containing over 30,000 voices of customers, which document current satisfaction levels across the public transport service. Measuring customer satisfaction with a provided service is a difficult task: the quality criteria are not clearly defined, and customer satisfaction is therefore difficult to define. The paper introduces a possible application of the CSI index in the transport area of the Slovak Republic.
A neotropical Miocene pollen database employing image-based search and semantic modeling
Han, Jing Ginger; Cao, Hongfei; Barb, Adrian; Punyasena, Surangi W.; Jaramillo, Carlos; Shyu, Chi-Ren
2014-01-01
• Premise of the study: Digital microscopic pollen images are being generated with increasing speed and volume, producing opportunities to develop new computational methods that increase the consistency and efficiency of pollen analysis and provide the palynological community a computational framework for information sharing and knowledge transfer. • Methods: Mathematical methods were used to assign trait semantics (abstract morphological representations) of the images of neotropical Miocene pollen and spores. Advanced database-indexing structures were built to compare and retrieve similar images based on their visual content. A Web-based system was developed to provide novel tools for automatic trait semantic annotation and image retrieval by trait semantics and visual content. • Results: Mathematical models that map visual features to trait semantics can be used to annotate images with morphology semantics and to search image databases with improved reliability and productivity. Images can also be searched by visual content, providing users with customized emphases on traits such as color, shape, and texture. • Discussion: Content- and semantic-based image searches provide a powerful computational platform for pollen and spore identification. The infrastructure outlined provides a framework for building a community-wide palynological resource, streamlining the process of manual identification, analysis, and species discovery. PMID:25202648
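The customized trait emphasis described under Results can be sketched as a weighted nearest-neighbor search. The feature vectors, specimen names, and weights below are invented for illustration; the actual system uses far richer visual features and indexing structures:

```python
import math

def weighted_distance(a, b, weights):
    """Euclidean distance with per-trait weights, letting a user
    emphasize e.g. shape over color or texture."""
    return math.sqrt(sum(w * (x - y) ** 2 for x, y, w in zip(a, b, weights)))

# Hypothetical 3-component feature vectors: (color, shape, texture).
db = {"grain_A": (0.2, 0.9, 0.4), "grain_B": (0.8, 0.3, 0.5)}
query_vec = (0.25, 0.85, 0.45)
weights = (0.1, 0.8, 0.1)  # the user emphasizes shape

# Retrieve the database image closest to the query under the chosen weights.
best = min(db, key=lambda name: weighted_distance(db[name], query_vec, weights))
```

Changing the weight vector changes the ranking, which is the essence of a "customized emphasis" on particular traits.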
Geospatial database for heritage building conservation
NASA Astrophysics Data System (ADS)
Basir, W. N. F. W. A.; Setan, H.; Majid, Z.; Chong, A.
2014-02-01
Heritage buildings are icons from the past that exist in the present. Through heritage architecture, we can learn about the economic issues and social activities of the past. Nowadays, heritage buildings are under threat from natural disasters, uncertain weather, pollution and other factors. In order to preserve this heritage for future generations, recording and documentation of heritage buildings are required. With the development of information systems and data collection techniques, it is possible to create a 3D digital model. This 3D information plays an important role in recording and documenting heritage buildings. 3D modeling and virtual reality techniques have demonstrated the ability to visualize the real world in 3D, and they can provide a better platform for communication and understanding of heritage buildings. Combining 3D modelling with Geographic Information System (GIS) technology will create a database that supports various analyses of spatial data in the form of a 3D model. The objectives of this research are to determine the reliability of the Terrestrial Laser Scanning (TLS) technique for data acquisition of heritage buildings and to develop a geospatial database for heritage building conservation purposes. The results of the data acquisition will serve as a guideline for 3D model development. This 3D model will be exported to GIS format in order to develop a database for heritage building conservation. The database includes the requirements of the heritage building conservation process. Through this research, a proper database for storing and documenting heritage building conservation data will be developed.
DIMA quick start, database for inventory, monitoring and assessment
USDA-ARS?s Scientific Manuscript database
The Database for Inventory, Monitoring and Assessment (DIMA) is a highly-customized Microsoft Access database for collecting data electronically in the field and for organizing, storing and reporting those data for monitoring and assessment. While DIMA can be used for any number of different monito...
Cameron, M; Perry, J; Middleton, J R; Chaffer, M; Lewis, J; Keefe, G P
2018-01-01
This study evaluated MALDI-TOF mass spectrometry and a custom reference spectra expanded database for the identification of bovine-associated coagulase-negative staphylococci (CNS). A total of 861 CNS isolates were used in the study, covering 21 different CNS species. The majority of the isolates were previously identified by rpoB gene sequencing (n = 804) and the remainder were identified by sequencing of hsp60 (n = 56) and tuf (n = 1). The genotypic identification was considered the gold standard identification. Using a direct transfer protocol and the existing commercial database, MALDI-TOF mass spectrometry showed a typeability of 96.5% (831/861) and an accuracy of 99.2% (824/831). Using a custom reference spectra expanded database, which included an additional 13 in-house created reference spectra, isolates were identified by MALDI-TOF mass spectrometry with 99.2% (854/861) typeability and 99.4% (849/854) accuracy. Overall, MALDI-TOF mass spectrometry using the direct transfer method was shown to be a highly reliable tool for the identification of bovine-associated CNS. Copyright © 2018 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
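The typeability and accuracy percentages reported above follow directly from the stated counts, as a quick check confirms (typeability = identified/total isolates, accuracy = correctly identified/identified):

```python
def pct(numerator, denominator):
    """Percentage rounded to one decimal place, matching the abstract."""
    return round(100 * numerator / denominator, 1)

# Commercial database, direct transfer protocol:
direct_typeability = pct(831, 861)    # isolates identified / total
direct_accuracy = pct(824, 831)       # correct / identified
# Custom reference spectra expanded database:
expanded_typeability = pct(854, 861)
expanded_accuracy = pct(849, 854)
```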
Using Basic Quality Management Concepts to Produce Total Quality School Buildings.
ERIC Educational Resources Information Center
Herman, Jerry J.
1994-01-01
Quality control in designing and building school buildings depends on customer feedback. Outlines and graphically demonstrates the interrelationships among the input sources; the information acquired; and the three phases of predesign, construction, and completion. (MLF)
A guide for building biological pathways along with two case studies: hair and breast development.
Trindade, Daniel; Orsine, Lissur A; Barbosa-Silva, Adriano; Donnard, Elisa R; Ortega, J Miguel
2015-03-01
Genomic information is increasingly being organized in the format of biological pathways. Building these biological pathways is an ongoing demand and benefits from methods for extracting information from biomedical literature with the aid of text-mining tools. Here we guide you in the attempt of building a customized pathway or chart representation of a system. Our manual is based on a group of software tools designed to look at biointeractions in a set of abstracts retrieved from PubMed. The tools aim to support the work of someone with a biological background, who does not need to be an expert on the subject and who plays the role of manual curator while designing the representation of the system, the pathway. We illustrate with two challenging case studies, hair and breast development, chosen because they focus on recent acquisitions of human evolution. We produced sub-pathways for each study, representing different phases of development. Unlike most charts in current databases, we present detailed descriptions, which will additionally guide PESCADOR users along the process. The implementation as a web interface makes PESCADOR a unique tool for guiding the user along the biointeractions that will constitute a novel pathway. Copyright © 2014 Elsevier Inc. All rights reserved.
47 CFR 64.5105 - Use of customer proprietary network information without customer approval.
Code of Federal Regulations, 2014 CFR
2014-10-01
... calls; (ii) Access, either directly or via a third party, a commercially available database that will... permit access to CPNI upon request by the administrator of the TRS Fund, as that term is defined in § 64...
47 CFR 64.5105 - Use of customer proprietary network information without customer approval.
Code of Federal Regulations, 2013 CFR
2013-10-01
... calls; (ii) Access, either directly or via a third party, a commercially available database that will... permit access to CPNI upon request by the administrator of the TRS Fund, as that term is defined in § 64...
Soysal, Ergin; Wang, Jingqi; Jiang, Min; Wu, Yonghui; Pakhomov, Serguei; Liu, Hongfang; Xu, Hua
2017-11-24
Existing general clinical natural language processing (NLP) systems such as MetaMap and Clinical Text Analysis and Knowledge Extraction System have been successfully applied to information extraction from clinical text. However, end users often have to customize existing systems for their individual tasks, which can require substantial NLP skills. Here we present CLAMP (Clinical Language Annotation, Modeling, and Processing), a newly developed clinical NLP toolkit that provides not only state-of-the-art NLP components, but also a user-friendly graphic user interface that can help users quickly build customized NLP pipelines for their individual applications. Our evaluation shows that the CLAMP default pipeline achieved good performance on named entity recognition and concept encoding. We also demonstrate the efficiency of the CLAMP graphic user interface in building customized, high-performance NLP pipelines with 2 use cases, extracting smoking status and lab test values. CLAMP is publicly available for research use, and we believe it is a unique asset for the clinical NLP community. © The Author 2017. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brackney, Larry J.
North East utility National Grid (NGrid) is developing a portfolio-scale application of OpenStudio designed to optimize incentive and marketing expenditures for their energy efficiency (EE) programs. NGrid wishes to leverage a combination of geographic information systems (GIS), public records, customer data, and content from the Building Component Library (BCL) to form a JavaScript Object Notation (JSON) input file that is consumed by an OpenStudio-based expert system for automated model generation. A baseline model for each customer building will be automatically tuned using electricity and gas consumption data, and a set of energy conservation measures (ECMs) associated with each NGrid incentive program will be applied to the model. The simulated energy performance and return on investment (ROI) will be compared with customer hurdle rates and available incentives to A) optimize the incentive required to overcome the customer hurdle rate and B) determine if marketing activity associated with the specific ECM is warranted for that particular customer. Repeated across their portfolio, this process will enable NGrid to substantially optimize their marketing and incentive expenditures, targeting those customers that will likely adopt and benefit from specific EE programs.
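The incentive-optimization step described above can be sketched with a simple ROI rule. The function, the simple-ROI definition, and all figures below are assumptions for illustration, not NGrid's actual method:

```python
def minimal_incentive(annual_savings, measure_cost, hurdle_rate, max_incentive):
    """Smallest incentive for which simple ROI,
    annual_savings / (measure_cost - incentive), meets the hurdle rate.
    Returns None when even the maximum available incentive is insufficient."""
    # ROI >= hurdle_rate  <=>  incentive >= measure_cost - annual_savings / hurdle_rate
    needed = measure_cost - annual_savings / hurdle_rate
    if needed <= 0:
        return 0.0          # measure already clears the hurdle unaided
    if needed > max_incentive:
        return None         # not worth marketing this ECM to this customer
    return needed
```

A `None` result corresponds to case B in the abstract: marketing that ECM to that customer is not warranted; a numeric result is the optimized incentive of case A.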
47 CFR 69.118 - Traffic sensitive switched services.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Docket 86-10, FCC 93-53 (1993). Moreover, all customers that use basic 800 database service shall be... account revenues from the relevant Basic Service Element or Elements and 800 Database Service Elements in...
47 CFR 69.118 - Traffic sensitive switched services.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Docket 86-10, FCC 93-53 (1993). Moreover, all customers that use basic 800 database service shall be... account revenues from the relevant Basic Service Element or Elements and 800 Database Service Elements in...
47 CFR 69.118 - Traffic sensitive switched services.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Docket 86-10, FCC 93-53 (1993). Moreover, all customers that use basic 800 database service shall be... account revenues from the relevant Basic Service Element or Elements and 800 Database Service Elements in...
47 CFR 69.118 - Traffic sensitive switched services.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Docket 86-10, FCC 93-53 (1993). Moreover, all customers that use basic 800 database service shall be... account revenues from the relevant Basic Service Element or Elements and 800 Database Service Elements in...
47 CFR 69.118 - Traffic sensitive switched services.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Docket 86-10, FCC 93-53 (1993). Moreover, all customers that use basic 800 database service shall be... account revenues from the relevant Basic Service Element or Elements and 800 Database Service Elements in...
An indoor positioning technology in the BLE mobile payment system
NASA Astrophysics Data System (ADS)
Han, Tiantian; Ding, Lei
2017-05-01
In a mobile payment system for large supermarkets, the core function is to settle payments through BLE (Bluetooth Low Energy) technology; in addition, the system can offer value-added services through an indoor positioning technology. This technology collects Bluetooth RSSI to establish a fingerprint database of the corresponding sampling points. The RSSI of the Bluetooth module is obtained via the AP, and the k-Nearest Neighbor algorithm is then used to match measurements against the fingerprint database. This helps businesses locate customers within the mall and, combined with the settlement amounts of the customers' purchases, analyze customer behavior. When the system collects signal strength, the distribution of RSSI at the sampling points is analyzed and the values are filtered. A system used in the laboratory was designed to demonstrate feasibility.
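The RSSI fingerprint matching described above can be sketched with a small k-Nearest Neighbor routine. The beacon count, RSSI values, and grid positions below are invented for illustration:

```python
import math

# Hypothetical fingerprint database: each sampling point maps a location
# (x, y) to the RSSI values observed from three BLE beacons.
FINGERPRINTS = {
    (0, 0): [-45, -70, -80],
    (0, 5): [-55, -60, -78],
    (5, 0): [-60, -72, -65],
    (5, 5): [-70, -62, -58],
}

def knn_locate(rssi, k=3):
    """Estimate a position by averaging the k sampling points whose
    stored fingerprints are nearest to the measured RSSI vector."""
    ranked = sorted(
        FINGERPRINTS.items(),
        key=lambda item: math.dist(item[1], rssi),  # distance in RSSI space
    )[:k]
    xs = [pos[0] for pos, _ in ranked]
    ys = [pos[1] for pos, _ in ranked]
    return (sum(xs) / k, sum(ys) / k)
```

With k=1 the estimate snaps to the single closest sampling point; larger k smooths the estimate across neighboring points.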
Ulrich, Dave; Smallwood, Norm
2007-01-01
How do some firms produce a pipeline of consistently excellent managers? Instead of concentrating merely on strengthening the skills of individuals, these companies focus on building a broad organizational leadership capability. It's what Ulrich and Smallwood--cofounders of the RBL Group, a leadership development consultancy--call a leadership brand. Organizations with leadership brands take an "outside-in" approach to executive development. They begin with a clear statement of what they want to be known for by customers and then link it with a required set of management skills. The Lexus division of Toyota, for instance, translates its tagline--"The pursuit of perfection"--into an expectation that its leaders excel at managing quality processes. The slogan of Bon Secours Health System is "Good help to those in need." It demands that its managers balance business skills with compassion and caring. The outside-in approach helps firms build a reputation for high-quality leaders whom customers trust to deliver on the company's promises. In examining 150 companies with strong leadership capabilities, the authors found that the organizations follow five strategies. First, make sure managers master the basics of leadership--for example, setting strategy and grooming talent. Second, ensure that leaders internalize customers' high expectations. Third, incorporate customer feedback into evaluations of executives. Fourth, invest in programs that help managers hone the right skills, by tapping customers to participate in such programs. Finally, track the success of efforts to build leadership bench strength over the long-term. The result is outstanding management that persists even when individual executives leave. In fact, companies with the strongest leadership brands often become "leader feeders"--firms that regularly graduate leaders who go on to head other companies.
Viability of in-house datamarting approaches for population genetics analysis of SNP genotypes
Amigo, Jorge; Phillips, Christopher; Salas, Antonio; Carracedo, Ángel
2009-01-01
Background Databases containing very large amounts of SNP (Single Nucleotide Polymorphism) data are now freely available for researchers interested in medical and/or population genetics applications. While many of these SNP repositories have implemented data retrieval tools for general-purpose mining, these alone cannot cover the broad spectrum of needs of most medical and population genetics studies. Results To address this limitation, we have built in-house customized data marts from the raw data provided by the largest public databases. In particular, for population genetics analysis based on genotypes we have built a set of data processing scripts that deal with raw data coming from the major SNP variation databases (e.g. HapMap, Perlegen), stripping them into single genotypes and then grouping them into populations, then merged with additional complementary descriptive information extracted from dbSNP. This allows not only in-house standardization and normalization of the genotyping data retrieved from different repositories, but also the calculation of statistical indices from simple allele frequency estimates to more elaborate genetic differentiation tests within populations, together with the ability to combine population samples from different databases. Conclusion The present study demonstrates the viability of implementing scripts for handling extensive datasets of SNP genotypes with low computational costs, dealing with certain complex issues that arise from the divergent nature and configuration of the most popular SNP repositories. The information contained in these databases can also be enriched with additional information obtained from other complementary databases, in order to build a dedicated data mart. Updating the data structure is straightforward, as well as permitting easy implementation of new external data and the computation of supplementary statistical indices of interest. PMID:19344481
Viability of in-house datamarting approaches for population genetics analysis of SNP genotypes.
Amigo, Jorge; Phillips, Christopher; Salas, Antonio; Carracedo, Angel
2009-03-19
Databases containing very large amounts of SNP (Single Nucleotide Polymorphism) data are now freely available for researchers interested in medical and/or population genetics applications. While many of these SNP repositories have implemented data retrieval tools for general-purpose mining, these alone cannot cover the broad spectrum of needs of most medical and population genetics studies. To address this limitation, we have built in-house customized data marts from the raw data provided by the largest public databases. In particular, for population genetics analysis based on genotypes we have built a set of data processing scripts that deal with raw data coming from the major SNP variation databases (e.g. HapMap, Perlegen), stripping them into single genotypes and then grouping them into populations, then merged with additional complementary descriptive information extracted from dbSNP. This allows not only in-house standardization and normalization of the genotyping data retrieved from different repositories, but also the calculation of statistical indices from simple allele frequency estimates to more elaborate genetic differentiation tests within populations, together with the ability to combine population samples from different databases. The present study demonstrates the viability of implementing scripts for handling extensive datasets of SNP genotypes with low computational costs, dealing with certain complex issues that arise from the divergent nature and configuration of the most popular SNP repositories. The information contained in these databases can also be enriched with additional information obtained from other complementary databases, in order to build a dedicated data mart. Updating the data structure is straightforward, as well as permitting easy implementation of new external data and the computation of supplementary statistical indices of interest.
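The genotype-to-allele-frequency step described in both records above can be sketched briefly. The population names and genotypes below are invented, not HapMap or Perlegen data:

```python
from collections import Counter

# Toy genotype table for one SNP across two hypothetical population samples.
GENOTYPES = {
    "POP1": ["AA", "AG", "GG", "AG", "AA"],
    "POP2": ["GG", "GG", "AG", "GG", "AG"],
}

def allele_frequencies(genotypes):
    """Split each genotype string into its two alleles, count them,
    and return relative allele frequencies."""
    counts = Counter(allele for g in genotypes for allele in g)
    total = sum(counts.values())
    return {allele: n / total for allele, n in counts.items()}
```

Frequencies like these are the starting point for the more elaborate genetic differentiation statistics the study mentions.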
19 CFR 147.13 - Transfer to fair building.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 2 2010-04-01 2010-04-01 false Transfer to fair building. 147.13 Section 147.13... TREASURY (CONTINUED) TRADE FAIRS Procedure for Importation § 147.13 Transfer to fair building. (a... port director for the transfer of the articles covered thereby to the buildings in which they are to be...
34 CFR 361.23 - Requirements related to the statewide workforce investment system.
Code of Federal Regulations, 2010 CFR
2010-07-01
... technology for individuals with disabilities; (ii) The use of information and financial management systems... statistics, job vacancies, career planning, and workforce investment activities; (iii) The use of customer service features such as common intake and referral procedures, customer databases, resource information...
Building Inventory Database on the Urban Scale Using GIS for Earthquake Risk Assessment
NASA Astrophysics Data System (ADS)
Kaplan, O.; Avdan, U.; Guney, Y.; Helvaci, C.
2016-12-01
The majority of existing buildings in most developing countries are not safe against earthquakes. Before a devastating earthquake, existing buildings need to be assessed and the vulnerable ones must be identified. Determining the seismic performance of existing buildings, which usually involves collecting the attributes of the buildings, performing the analyses and the necessary queries, and producing the result maps, is a hard and complicated procedure that can be simplified with a Geographic Information System (GIS). The aim of this study is to produce a building inventory database using GIS for assessing the earthquake risk of existing buildings. In this paper, a building inventory database for 310 buildings located in Eskisehir, Turkey, was produced in order to assess the earthquake risk of the buildings. The results of this study show that 26% of the buildings have high earthquake risk, 33% have medium earthquake risk and 41% have low earthquake risk. The produced building inventory database can be very useful, especially for governments, in dealing with the problem of identifying seismically vulnerable buildings in large existing building stocks. With the help of such methods, identifying the buildings which may collapse and cause loss of life and property during a possible future earthquake will be very quick, cheap and reliable.
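The risk-class percentages reported above are simple proportions over the inventory, which a short helper makes explicit. The label list in the usage note merely reproduces the reported 26/33/41 split for illustration; it is not the study's data:

```python
def risk_distribution(risk_labels):
    """Share of buildings in each risk class, as whole percentages."""
    total = len(risk_labels)
    return {
        level: round(100 * risk_labels.count(level) / total)
        for level in ("high", "medium", "low")
    }
```

For a hypothetical 100-building inventory labeled to match the abstract, `risk_distribution(["high"] * 26 + ["medium"] * 33 + ["low"] * 41)` reproduces the reported 26/33/41 percentages.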
A Public-Use, Full-Screen Interface for SPIRES Databases.
ERIC Educational Resources Information Center
Kriz, Harry M.
This paper describes the techniques for implementing a full-screen, custom SPIRES interface for a public-use library database. The database-independent protocol that controls the system is described in detail. Source code for an entire working application using this interface is included. The protocol, with less than 170 lines of procedural code,…
Component, Context and Manufacturing Model Library (C2M2L)
2013-03-01
Penn State team were stored in a relational database for easy access, storage and maintainability. The relational database consisted of a PostGres ...file into a format that can be imported into the PostGres database. This same custom application was used to generate Microsoft Excel templates...Press Break Forming Equipment 4.14 Manufacturing Model Library Database Structure The data storage mechanism for the ARL PSU MML was a PostGres database
Optimal Sizing of Energy Storage for Community Microgrids Considering Building Thermal Dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Li, Zhi; Starke, Michael R.
This paper proposes an optimization model for the optimal sizing of energy storage in community microgrids considering the building thermal dynamics and customer comfort preference. The proposed model minimizes the annualized cost of the community microgrid, including energy storage investment, purchased energy cost, demand charge, energy storage degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation. The decision variables are the power and energy capacity of invested energy storage. In particular, we assume the heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently by the microgrid central controller while maintaining the indoor temperature in the comfort range set by customers. For this purpose, the detailed thermal dynamic characteristics of buildings have been integrated into the optimization model. Numerical simulation shows significant cost reduction by the proposed model. The impacts of various costs on the optimal solution are investigated by sensitivity analysis.
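The sizing trade-off above (storage investment versus demand charge) can be illustrated with a minimal brute-force sweep over candidate power/energy capacities. All prices, the load figure, and the peak-shaving rule are invented, and the sketch omits the thermal-dynamics, degradation, and comfort terms of the actual model:

```python
# Invented figures: a 100 kW peak load, annualized storage costs, and a
# demand charge; the battery shaves the peak for PEAK_HOURS hours.
PEAK_LOAD_KW = 100
DEMAND_CHARGE = 180   # $/kW-year
COST_POWER = 60       # $/kW-year of inverter capacity
COST_ENERGY = 30      # $/kWh-year of battery capacity
PEAK_HOURS = 2

def annual_cost(p_kw, e_kwh):
    """Annualized cost: storage investment plus residual demand charge."""
    # Peak shaving is limited by power rating, by stored energy, and by the peak itself.
    shave = min(p_kw, e_kwh / PEAK_HOURS, PEAK_LOAD_KW)
    return COST_POWER * p_kw + COST_ENERGY * e_kwh + DEMAND_CHARGE * (PEAK_LOAD_KW - shave)

# Brute-force sweep over candidate (power kW, energy kWh) sizes.
candidates = [(p, e) for p in range(0, 101, 10) for e in range(0, 201, 20)]
best = min(candidates, key=lambda pe: annual_cost(*pe))
```

With these numbers each kW of shaving saves more in demand charge than the matching storage costs, so the sweep selects the largest candidate; in the paper the same decision is made by a full optimization over dispatch, comfort, and degradation.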
Building Your Campus Portal: Advice from the Field.
ERIC Educational Resources Information Center
Krebs, Arlene
2001-01-01
Discusses portal technology in higher education, including planning, design, technical, and financial issues. Highlights include determining the customers; marketing possibilities for the university; ownership issues; data design; effective cost structuring; security issues; adaptability; content; and customer input and feedback. (LRW)
Netemeyer, Richard G; Maxham, James G; Lichtenstein, Donald R
2010-05-01
Based on emotional contagion theory and the value-profit chain literatures, the present study posits a number of hypotheses that show how managers in the small store, small number of employees retail context may affect store employees, customers, and potentially store performance. With data from 306 store managers, 1,615 store customer-contact employees, and 57,656 customers of a single retail chain, the authors examined relationships among store manager job satisfaction and job performance, store customer-contact employee job satisfaction and job performance, customer satisfaction with the retailer, and a customer-spending-based store performance metric (customer spending growth over a 2-year period). Via path analysis, several hypothesized direct and interaction relations among these constructs are supported. The results suggest implications for academic researchers and retail managers. PsycINFO Database Record (c) 2010 APA, all rights reserved.
Dineen, Brian R; Noe, Raymond A
2009-01-01
The authors examined 2 forms of customization in a Web-based recruitment context. Hypotheses were tested in a controlled study in which participants viewed multiple Web-based job postings that each included information about multiple fit categories. Results indicated that customization of information regarding person-organization (PO), needs-supplies, and demands-abilities (DA) fit (fit information customization) and customization of the order in which these fit categories were presented (configural customization) had differential effects on outcomes. Specifically, (a) applicant pool PO and DA fit were greater when fit information customization was provided, (b) applicant pool fit in high- versus low-relevance fit categories was better differentiated when configural customization was provided, and (c) overall application rates were lower when either or both forms of customization were provided. (PsycINFO Database Record (c) 2009 APA, all rights reserved).
ERIC Educational Resources Information Center
Asamoah, Daniel A.; Sharda, Ramesh; Kalgotra, Pankush; Ott, Mark
2016-01-01
Within the context of the telecom industry, this teaching case is an active learning analytics exercise to help students build hands-on expertise on how to utilize Big Data to solve a business problem. Particularly, the case utilizes an analytics method to help develop a customer retention strategy to mitigate against an increasing customer churn…
5. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
5. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
11. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
11. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
7. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
7. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
12. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
12. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
10. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
10. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
6. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
6. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
14. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
14. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
8. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
8. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
13. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
13. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
9. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
9. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
4. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from ...
4. Historic American Buildings Survey ARCHITECT'S DRAWING Copy negative from book published by Secretary of Treasury, 1857, consisting of 21 plates. In office of the Building Manager, General Services Administration, New Orleans - U.S. Custom House, 423 Canal Street, New Orleans, Orleans Parish, LA
Customer and household matching: resolving entity identity in data warehouses
NASA Astrophysics Data System (ADS)
Berndt, Donald J.; Satterfield, Ronald K.
2000-04-01
The data preparation and cleansing tasks necessary to ensure high quality data are among the most difficult challenges faced in data warehousing and data mining projects. The extraction of source data, transformation into new forms, and loading into a data warehouse environment are all time consuming tasks that can be supported by methodologies and tools. This paper focuses on the problem of record linkage or entity matching, tasks that can be very important in providing high quality data. Merging two or more large databases into a single integrated system is a difficult problem in many industries, especially in the wake of acquisitions. For example, managing customer lists can be challenging when duplicate entries, data entry problems, and changing information conspire to make data quality an elusive target. Common tasks with regard to customer lists include customer matching to reduce duplicate entries and household matching to group customers. These often O(n²) problems can consume significant resources, both in computing infrastructure and human oversight, and the goal of high accuracy in the final integrated database can be difficult to assure. This paper distinguishes between attribute corruption and entity corruption, discussing the various impacts on quality. A metajoin operator is proposed and used to organize past and current entity matching techniques. Finally, a logistic regression approach to implementing the metajoin operator is discussed and illustrated with an example. The metajoin can be used to determine whether two records match, don't match, or require further evaluation by human experts. Properly implemented, the metajoin operator could allow the integration of individual databases with greater accuracy and lower cost.
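The abstract above describes a logistic-regression implementation of the metajoin operator without giving code. The Python sketch below illustrates the three-way decision it describes: a logistic model scores a candidate record pair, and two thresholds route the pair to match, non-match, or human review. The features (token-set similarity on name and address), the coefficients, and the thresholds are all invented for illustration, not taken from the paper.

```python
import math

def jaccard(a, b):
    """Token-set similarity between two strings (a simple stand-in for
    the string-comparison metrics used in record linkage)."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def match_probability(rec1, rec2, weights=(-4.0, 5.0, 3.0)):
    """Logistic score from name and address similarity features.
    The coefficients are assumed values for the example."""
    b0, b_name, b_addr = weights
    x = (b0
         + b_name * jaccard(rec1["name"], rec2["name"])
         + b_addr * jaccard(rec1["address"], rec2["address"]))
    return 1.0 / (1.0 + math.exp(-x))

def metajoin(rec1, rec2, lo=0.3, hi=0.8):
    """Three-way decision: below lo -> non-match, above hi -> match,
    otherwise defer the pair to a human expert."""
    p = match_probability(rec1, rec2)
    if p >= hi:
        return "match"
    if p <= lo:
        return "non-match"
    return "review"

a = {"name": "John Q. Smith", "address": "12 Oak St Tampa FL"}
b = {"name": "John Smith", "address": "12 Oak Street Tampa FL"}
print(metajoin(a, b))  # a near-duplicate pair lands in the review band
```

In a real deployment the coefficients would be fit on labeled pairs and the thresholds tuned to the cost of human review versus the cost of a wrong merge.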
"Where Is My Answer?": A Customer Service Status Report.
ERIC Educational Resources Information Center
Marcinko, Randy
1997-01-01
Describes the results of a study that tested the customer service responses from 11 companies selling online information including online hosts, database producers, and World Wide Web search engine companies. Highlights include content-oriented issues, costs, training, human interaction, and the use of technology to save time and increase…
1. Historic American Buildings Survey San Francisco Chronicle Library Rephoto ...
1. Historic American Buildings Survey San Francisco Chronicle Library Re-photo May 1940 TOTALLY DESTROYED - Old U. S. Custom House, Historic View, Battery & Washington Streets, San Francisco, San Francisco County, CA
SASD: the Synthetic Alternative Splicing Database for identifying novel isoform from proteomics
2013-01-01
Background Alternative splicing is an important and widespread mechanism for generating protein diversity and regulating protein expression. High-throughput identification and analysis of alternative splicing at the protein level has more advantages than at the mRNA level. The combination of an alternative splicing database and tandem mass spectrometry provides a powerful technique for the identification, analysis, and characterization of potential novel alternative splicing protein isoforms from proteomics. Therefore, building on the peptidomic database of human protein isoforms for proteomics experiments, our objective is to design a new alternative splicing database that 1) provides more coverage of genes, transcripts, and alternative splicing events, 2) focuses exclusively on alternative splicing, and 3) supports context-specific alternative splicing analysis. Results We used a three-step pipeline to create a synthetic alternative splicing database (SASD) to identify novel alternative splicing isoforms and interpret them in the context of pathway, disease, drug, and organ specificity or custom gene sets, with maximum coverage and an exclusive focus on alternative splicing. First, we extracted information on the gene structures of all genes in the Ensembl Genes 71 database and incorporated the Integrated Pathway Analysis Database. Then, we compiled artificial splicing transcripts. Lastly, we translated the artificial transcripts into alternative splicing peptides. The SASD is a comprehensive database containing 56,630 genes (Ensembl gene IDs), 95,260 transcripts (Ensembl transcript IDs), and 11,919,779 alternative splicing peptides, covering about 1,956 pathways, 6,704 diseases, 5,615 drugs, and 52 organs. The database has a web-based user interface that allows users to search, display, and download alternative splicing related to a single gene/transcript/protein, a custom gene set, pathway, disease, drug, or organ.
Moreover, the quality of the database was validated by comparison with other known databases and through two case studies: 1) in liver cancer and 2) in breast cancer. Conclusions The SASD provides the scientific community with an efficient means to identify, analyze, and characterize novel exon-skipping and intron-retention protein isoforms from mass spectrometry and to interpret them in the context of pathway, disease, drug, and organ specificity or custom gene sets, with maximum coverage and an exclusive focus on alternative splicing. PMID:24267658
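As a rough illustration of the pipeline's final step (translating artificial splicing transcripts into alternative splicing peptides), the Python sketch below translates an invented exon-skipping junction transcript. The codon table is deliberately truncated to the codons used in the example, and the sequences are assumptions for illustration, not SASD data.

```python
# Partial codon table -- only the codons appearing in the example below.
CODON_TABLE = {
    "ATG": "M", "GCT": "A", "TCT": "S", "GAT": "D",
    "AAA": "K", "CCG": "P", "TAA": "*",  # '*' = stop
}

def translate(transcript):
    """Translate an in-frame nucleotide sequence into a peptide,
    stopping at the first stop codon."""
    peptide = []
    for i in range(0, len(transcript) - 2, 3):
        aa = CODON_TABLE[transcript[i:i + 3]]
        if aa == "*":
            break
        peptide.append(aa)
    return "".join(peptide)

# Artificial exon-skipping transcript: exon 1 joined directly to exon 3
# (exon 2 skipped), producing a junction peptide absent from the
# canonical protein.
exon1, exon3 = "ATGGCTTCT", "GATAAACCGTAA"
junction_transcript = exon1 + exon3
print(translate(junction_transcript))  # MASDKP
```

A production pipeline would additionally handle reading frames, digest the translated sequence into tryptic peptides, and record which splice junction each peptide spans.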
Preventing the premature death of relationship marketing.
Fournier, S; Dobscha, S; Mick, D G
1998-01-01
Relationship marketing is in vogue. And why not? The new, increasingly efficient ways that companies have of understanding and responding to customers' needs and preferences seemingly allow them to build more meaningful connections with consumers than ever before. These connections promise to benefit the bottom line by reducing costs and increasing revenue. Unfortunately, a close look suggests that the relationships between companies and customers are troubled ones, at best. Companies may delight in learning more about their customers and in being able to provide features and services to please every possible palate. But customers delight in neither. In fact, customer satisfaction rates in the United States are at an all-time low, while complaints, boycotts, and other expressions of consumer discontent are on the rise. This mounting wave of unhappiness has yet to reach the bottom line. Sooner or later, however, corporate performance will suffer unless relationship marketing becomes what it is supposed to be--the epitome of customer orientation. Ironically, the very things that marketers are doing to build relationships with customers are often the things that are destroying those relationships. Relationship marketing is powerful in theory but troubled in practice. To prevent its premature death, marketers need to take the time to figure out how and why they are undermining their own best efforts, as well as how they can get things back on track.
ABrowse--a customizable next-generation genome browser framework.
Kong, Lei; Wang, Jun; Zhao, Shuqi; Gu, Xiaocheng; Luo, Jingchu; Gao, Ge
2012-01-05
With the rapid growth of genome sequencing projects, the genome browser is becoming indispensable, not only as a visualization system but also as an interactive platform to support open data access and collaborative work. Thus a customizable genome browser framework with rich functions and flexible configuration is needed to facilitate various genome research projects. Based on next-generation web technologies, we have developed a general-purpose genome browser framework, ABrowse, which provides an interactive browsing experience, open data access, and collaborative work support. By supporting Google-map-like smooth navigation, ABrowse offers end users a highly interactive browsing experience. To facilitate further data analysis, multiple data access approaches are supported for external platforms to retrieve data from ABrowse. To promote collaborative work, an online user-space is provided for end users to create, store, and share comments, annotations, and landmarks. For data providers, ABrowse is highly customizable and configurable. The framework provides a set of utilities to import annotation data conveniently. To build ABrowse on existing annotation databases, data providers can specify SQL statements according to the database schema, and customized pages for detailed display of annotation entries can easily be plugged in. For developers, new drawing strategies can be integrated into ABrowse for new types of annotation data. In addition, a standard web service is provided for remote data retrieval, giving an underlying machine-oriented programming interface for open data access. The ABrowse framework is valuable for end users, data providers, and developers, providing rich user functions and flexible customization approaches. The source code is published under the GNU Lesser General Public License v3.0 and is accessible at http://www.abrowse.org/.
To demonstrate all the features of ABrowse, a live demo for Arabidopsis thaliana genome has been built at http://arabidopsis.cbi.edu.cn/.
WebDB Component Builder - Lessons Learned
DOE Office of Scientific and Technical Information (OSTI.GOV)
Macedo, C.
2000-02-15
Oracle WebDB is the easiest way to produce web-enabled, lightweight, enterprise-centric applications. This concept from Oracle has tantalized our taste for simple web development by using a purely web-based tool that lives nowhere else but in the database. The online wizards, templates, and query builders, which produce PL/SQL behind the curtains, can be used straight "out of the box" by both novice and seasoned developers. This presentation introduces lessons learned by developing and deploying applications built using the WebDB Component Builder in conjunction with custom PL/SQL code to empower a hybrid application. There are two kinds of WebDB components: those that display data to end users via reporting, and those that let end users update data in the database via entry forms. The presentation will also discuss various methods within the Component Builder to enhance the applications pushed to the desktop. The demonstrated example is an application entitled HOME (Helping Others More Effectively) that was built to manage a yearly United Way Campaign effort. Our task was to build an end-to-end application that could manage approximately 900 non-profit agencies, an average of 4,100 individual contributions, and $1.2 million. Using WebDB, the shell of the application was put together in a matter of a few weeks. However, we encountered some hurdles that WebDB, in its infancy (v2.0), could not solve for us directly. Together with custom PL/SQL, WebDB's Component Builder became a powerful tool that enabled us to produce a very flexible hybrid application.
Judicious use of custom development in an open source component architecture
NASA Astrophysics Data System (ADS)
Bristol, S.; Latysh, N.; Long, D.; Tekell, S.; Allen, J.
2014-12-01
Modern software engineering is not as much programming from scratch as innovative assembly of existing components. Seamlessly integrating disparate components into scalable, performant architecture requires sound engineering craftsmanship and can often result in increased cost efficiency and accelerated capabilities if software teams focus their creativity on the edges of the problem space. ScienceBase is part of the U.S. Geological Survey scientific cyberinfrastructure, providing data and information management, distribution services, and analysis capabilities in a way that strives to follow this pattern. ScienceBase leverages open source NoSQL and relational databases, search indexing technology, spatial service engines, numerous libraries, and one proprietary but necessary software component in its architecture. The primary engineering focus is cohesive component interaction, including construction of a seamless Application Programming Interface (API) across all elements. The API allows researchers and software developers alike to leverage the infrastructure in unique, creative ways. Scaling the ScienceBase architecture and core API with increasing data volume (more databases) and complexity (integrated science problems) is a primary challenge addressed by judicious use of custom development in the component architecture. Other data management and informatics activities in the earth sciences have independently resolved to a similar design of reusing and building upon established technology and are working through similar issues for managing and developing information (e.g., U.S. Geoscience Information Network; NASA's Earth Observing System Clearing House; GSToRE at the University of New Mexico). Recent discussions facilitated through the Earth Science Information Partners are exploring potential avenues to exploit the implicit relationships between similar projects for explicit gains in our ability to more rapidly advance global scientific cyberinfrastructure.
Gulati, Ranjay; Oldroyd, James B
2005-04-01
Companies have poured enormous amounts of money into customer relationship management, but in many cases the investment hasn't really paid off. That's because getting closer to customers isn't about building an information technology system. It's a learning journey, one that unfolds over four stages, requiring people and business units to coordinate in progressively more sophisticated ways. The journey begins with the creation of a companywide repository containing each interaction a customer has with the company, organized not by product, purchase, or location, but by customer. Communal coordination is what's called for at this stage, as each group contributes its information to the data pool separately from the others and then taps into it as needed. In the second stage, one-way serial coordination from centralized IT through analytical units and out to the operating units allows companies to go beyond just assembling data to drawing inferences. In stage three, companies shift their focus from past relationships to future behavior. Through symbiotic coordination, information flows back and forth between central analytic units and various organizational units like marketing, sales, and operations, as together they seek answers to questions like "How can we prevent customers from switching to a competitor?" and "Who would be most likely to buy a new product in the future?" In stage four, firms begin to move past discrete, formal initiatives and, through integral coordination, bring an increasingly sophisticated understanding of their customers to bear in all day-to-day operations. Skipping stages denies organizations the sure foundation they need to build a lasting customer-focused mind-set. Those that recognize this will invest their customer relationship dollars much more wisely and will see their customer-focusing efforts pay off on the bottom line.
Improving customer service. It's not just what's in the box.
Redling, Robert
2003-08-01
Patient satisfaction scores can plummet when medical emergencies throw schedules into disarray or a receptionist ignores a patient at the front desk. Patients' expectations of good customer service have been shaped by technological conveniences and the concerted efforts of retailers, restaurants and other service providers. Physician leaders and administrators can improve customer service by paying more attention to organizational culture, physician behavior, staff incentives, hiring practices and team-building.
NASA Astrophysics Data System (ADS)
Vileikis, O.; Escalante Carrillo, E.; Allayarov, S.; Feyzulayev, A.
2017-08-01
The historic cities of Uzbekistan are an irreplaceable legacy of the Silk Roads. Uzbekistan currently has four UNESCO World Heritage properties, with hundreds of historic monuments and traditional historic houses. However, the lack of documentation, systematic monitoring, and a digital database of the historic buildings and dwellings within the historic centers is threatening the World Heritage properties and delaying the development of a proper management mechanism for the preservation of the heritage and an interwoven city urban development. Unlike the monuments, the traditional historic houses are being demolished without any enforced legal protection, leaving no documentation from which to understand the city's history and urban fabric, as well as the way of life, traditions, and customs of the past centuries. To fill this gap, from 2008 to 2015, the Principal Department for Preservation and Utilization of Cultural Objects of the Ministry of Culture and Sports of Uzbekistan, with support from the UNESCO Office in Tashkent and in collaboration with several international and local universities and institutions, carried out a survey of the Historic Centre of Bukhara, Itchan Kala, and Samarkand - Crossroad of Cultures. The collaborative work over these years has helped to consolidate a methodology and to integrate a GIS database that is currently contributing to the understanding of the outstanding heritage values of these cities, as well as to the development of preservation and management strategies with a solid base of heritage documentation.
Q&A with Sheila Hayter: Laying the Foundation for an Interconnected Energy
should be: this is how our No. 1 customer [the buildings sector] should be responding to us." I believe it should be the other way around: the customer should be the one saying, "These are our needs
Chi, Nai-Wen; Yang, Jixia; Lin, Chia-Ying
2018-01-01
Drawing on the stressor-emotion model, we examine how customer mistreatment can evoke service workers' passive forms of deviant behaviors (i.e., work withdrawal behavior [WWB]) and negative impacts on their home life (i.e., work-family conflict [WFC]), and whether individuals' core self-evaluations and customer service training can buffer the negative effects of customer mistreatment. Using the experience sampling method, we collect daily data from 77 customer service employees for 10 consecutive working days, yielding 546 valid daily responses. The results show that daily customer mistreatment increases service workers' daily WWB and WFC through negative emotions. Furthermore, employees with high core self-evaluations and employees who received customer service training are less likely to experience negative emotions when faced with customer mistreatment, and thus are less likely to engage in WWB or provoke WFC. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
19 CFR 123.81 - Merchandise found in building on the boundary.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 19 Customs Duties 1 2010-04-01 2010-04-01 false Merchandise found in building on the boundary. 123... Merchandise found in building on the boundary. When any merchandise on which the duty has not been paid or which was imported contrary to law is found in any building upon or within 10 feet of the boundary line...
Research on the construction of three level customer service knowledge graph
NASA Astrophysics Data System (ADS)
Cheng, Shi; Shen, Jiajie; Shi, Quan; Cheng, Xianyi
2017-09-01
With the explosion of enterprise knowledge and information, and the growing demand for intelligent knowledge management that can improve business performance, knowledge representation and processing in the enterprise have become a hot topic. Addressing problems of theory and method in building an electric-marketing customer service knowledge map, this paper discusses a three-level customer service knowledge map for electric marketing and implements knowledge reasoning based on Neo4j, achieving good results in practical application.
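To make the three-level structure concrete, the Python sketch below models a customer-service knowledge map as subject-predicate-object triples with one transitive inference rule. The paper's system is built on Neo4j; this plain-Python stand-in, and all of its entity names, are invented for illustration only.

```python
triples = {
    # level 1: business domain
    ("billing", "part_of", "customer_service"),
    # level 2: concepts within a domain
    ("late_fee", "part_of", "billing"),
    # level 3: concrete knowledge items an agent retrieves
    ("late_fee_waiver_policy", "part_of", "late_fee"),
}

def ancestors(entity):
    """Transitive closure of 'part_of' -- a minimal stand-in for the
    kind of graph reasoning a Neo4j query would perform."""
    found = set()
    frontier = {entity}
    while frontier:
        frontier = {o for (s, p, o) in triples
                    if p == "part_of" and s in frontier and o not in found}
        found |= frontier
    return found

# A concrete knowledge item is reachable from all three levels.
print(ancestors("late_fee_waiver_policy"))
```

In Neo4j the same inference would be a variable-length path query (e.g. matching `part_of` relationships of any depth), with the three levels expressed as node labels.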
77 FR 38581 - Submission for OMB Review; Comment Request
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-28
...: Minority Business Development Agency. Title: Online Customer Relationship Management (CRM)/Performance... client information, service activities and progress on attainment of program goals via the Online CRM/Performance Databases. The data collected through the Online CRM/Performance Databases is used to regularly...
ERIC Educational Resources Information Center
Klein, Regina; And Others
1988-01-01
The first of three articles describes the results of a survey that examined characteristics and responsibilities of help-desk personnel at major database and online services. The second provides guidelines to using such customer services, and the third lists help-desk numbers for online databases and systems. (CLB)
Mission Operations Planning and Scheduling System (MOPSS)
NASA Technical Reports Server (NTRS)
Wood, Terri; Hempel, Paul
2011-01-01
MOPSS is a generic framework that can be configured on the fly to support a wide range of planning and scheduling applications. It is currently used to support seven missions at Goddard Space Flight Center (GSFC) in roles that include science planning, mission planning, and real-time control. Prior to MOPSS, each spacecraft project built its own planning and scheduling capability to plan satellite activities and communications and to create the commands to be uplinked to the spacecraft. This approach required creating a data repository for storing planning and scheduling information, building user interfaces to display data, generating needed scheduling algorithms, and implementing customized external interfaces. Complex scheduling problems that involved reacting to multiple variable situations were analyzed manually. Operators then used the results to add commands to the schedule. Each architecture was unique to specific satellite requirements. MOPSS is an expert system that automates mission operations and frees the flight operations team to concentrate on critical activities. It is easily reconfigured by the flight operations team as the mission evolves. The heart of the system is a custom object-oriented data layer mapped onto an Oracle relational database. The combination of these two technologies allows a user or system engineer to capture any type of scheduling or planning data in the system's generic data storage via a GUI.
DOT National Transportation Integrated Search
2016-01-01
This study by the University of Maryland explored the potential of an improved freight rail line to attract new customers. The analysis was based on the 2014 InfoGroup U.S. Business Database and other input data that the National Transportation Cente...
ERIC Educational Resources Information Center
Schaller, James; Yang, Nancy K.
2005-01-01
Differences in rates of case closure, case service cost, hours worked per week, and weekly wage between customers with autism closed successfully in competitive employment and supported employment were found using the Rehabilitation Service Administration national database of 2001. Using logistic regression, customer demographic variables related…
41 CFR 102-85.160 - How does a customer agency know how much to budget for Rent?
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROPERTY 85-PRICING POLICY FOR OCCUPANCY IN GSA SPACE Rent Charges § 102-85.160 How does a customer agency... pricing due to changes in scope of proposed projects or repair and/or replacement of building components ...
Using Customer Reviews to Build Critical Reading Skills
ERIC Educational Resources Information Center
Rice, Mary
2007-01-01
Junior high school teacher Mary Rice designs a consumer research unit that cultivates students' critical reading and thinking skills. As students learn how to develop and revise criteria for evaluating the reliability of online information, they read customer reviews, research products, and present their findings orally.
Exploiting genomic data to identify proteins involved in abalone reproduction.
Mendoza-Porras, Omar; Botwright, Natasha A; McWilliam, Sean M; Cook, Mathew T; Harris, James O; Wijffels, Gene; Colgrave, Michelle L
2014-08-28
Aside from their critical role in reproduction, abalone gonads serve as an indicator of sexual maturity and energy balance, two key considerations for effective abalone culture. Temperate abalone farmers face issues with tank restocking with highly marketable abalone owing to inefficient spawning induction methods. The identification of key proteins in sexually mature abalone will serve as the foundation for a greater understanding of reproductive biology. Addressing this knowledge gap is the first step towards improving abalone aquaculture methods. Proteomic profiling of female and male gonads of greenlip abalone, Haliotis laevigata, was undertaken using liquid chromatography-mass spectrometry. Owing to the incomplete nature of abalone protein databases, in addition to searching against two publicly available databases, a custom database comprising genomic data was used. Overall, 162 and 110 proteins were identified in females and males respectively with 40 proteins common to both sexes. For proteins involved in sexual maturation, sperm and egg structure, motility, acrosomal reaction and fertilization, 23 were identified only in females, 18 only in males and 6 were common. Gene ontology analysis revealed clear differences between the female and male protein profiles reflecting a higher rate of protein synthesis in the ovary and higher metabolic activity in the testis. A comprehensive mass spectrometry-based analysis was performed to profile the abalone gonad proteome providing the foundation for future studies of reproduction in abalone. Key proteins involved in both reproduction and energy balance were identified. Genomic resources were utilised to build a database of molluscan proteins yielding >60% more protein identifications than in a standard workflow employing public protein databases. Copyright © 2014 Elsevier B.V. All rights reserved.
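The workflow above combines public protein databases with a custom database built from genomic data before searching the mass spectra. A minimal Python sketch of that merge step, de-duplicating identical sequences so the search space stays compact, might look as follows; the accessions and sequences are invented for the example, not taken from the abalone study.

```python
# Invented FASTA-style records: accession -> amino acid sequence.
public_db = {
    "sp|P001|EX1_HALLA": "MKWVTFISLLFLFSSAYS",
    "sp|P002|EX2_HALLA": "MALWMRLLPLLALLALWGPD",
}
custom_db = {
    "custom|orf_0001": "MKWVTFISLLFLFSSAYS",    # duplicate of P001
    "custom|orf_0002": "MTEYKLVVVGAGGVGKSALT",  # novel genomic prediction
}

def merge_databases(*dbs):
    """Combine FASTA-style dicts, keeping the first accession seen for
    each unique sequence.  Earlier arguments take precedence, so public
    accessions win over custom ones for identical sequences."""
    seen, merged = set(), {}
    for db in dbs:
        for accession, seq in db.items():
            if seq not in seen:
                seen.add(seq)
                merged[accession] = seq
    return merged

search_db = merge_databases(public_db, custom_db)
print(len(search_db))  # 3 unique sequences: the duplicate is dropped
```

The novel prediction survives the merge under its custom accession, which is what allows a search against this database to identify proteins missing from the public resources.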
DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
Ann M. Richard
US Environmental Protection Agency, Research Triangle Park, NC, USA
Distributed: Decentralized set of standardized, field-delimited databases,...
The UConn Co-op Building and Ray Verrey.
ERIC Educational Resources Information Center
White, Ken
1982-01-01
The planning and merchandising strategies behind a new cooperative college store building and customer circulation concept are outlined. The facility allows for both current marketing needs and future flexibility as the store expands its role. (MSE)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sikora, J. L.
2001-06-01
As concern over the environment grows, builders have the potential to fulfill a market niche by building homes that use fewer resources and have lower environmental impact than conventional construction. Builders can increase their marketability and customer satisfaction and, at the same time, reduce the environmental impact of their homes. However, it takes dedication to build environmentally sound homes, along with a solid marketing approach to ensure that customers recognize the added value of energy and resource efficiency. This guide is intended for builders seeking suggestions on how to improve energy and resource efficiency in their new homes. It is a compilation of ideas and concepts for designing, building, and marketing energy- and resource-efficient homes based on the experience of recipients of the national Energy Value Housing Award (EVHA).
Nationwide Analysis of U.S. Commercial Building Solar Photovoltaic (PV) Breakeven Conditions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, Carolyn; Gagnon, Pieter; Denholm, Paul
2015-10-01
The commercial sector offers strong potential for solar photovoltaics (PV) owing to abundant available roof space suitable for PV and the opportunity to offset the sector's substantial retail electricity purchases. This report evaluated the breakeven price of PV for 15 different building types and various financing options by calculating electricity savings based on detailed rate structures for most U.S. utility territories (representing approximately two thirds of U.S. commercial customers). We find that at current capital costs, an estimated 1/3 of U.S. commercial customers break even in the cash scenario and approximately 2/3 break even in the loan scenario. Variation in retail rates is a stronger driver of breakeven prices than is variation in building load or solar generation profiles. At the building level, variation in the average breakeven price is largely driven by the ability for a PV system to reduce demand charges.
NASA Technical Reports Server (NTRS)
Reil, Robin
2011-01-01
The success of JPL's Next Generation Imaging Spectrometer (NGIS) in Earth remote sensing has inspired a follow-on instrument project, the MaRSPlus Sensor System (MSS). One of JPL's responsibilities in the MSS project involves updating the documentation from the previous JPL airborne imagers to provide all the information necessary for an outside customer to operate the instrument independently. As part of this documentation update, I created detailed electrical cabling diagrams to provide JPL technicians with clear and concise build instructions and a database to track the status of cables from order to build to delivery. Simultaneously, a distributed motor control system is being developed for potential use on the proposed 2018 Mars rover mission. This system would significantly reduce the mass necessary for rover motor control, making more mass space available to other important spacecraft systems. The current stage of the project consists of a desktop computer talking to a single "cold box" unit containing the electronics to drive a motor. In order to test the electronics, I developed a graphical user interface (GUI) using MATLAB to allow a user to send simple commands to the cold box and display the responses received in a user-friendly format.
Solar + Storage Synergies for Managing Commercial-Customer Demand Charges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagnon, P.; Govindarajan, A.; Bird, L.
Demand charges, which are based on a customer’s maximum demand in kilowatts (kW), are a common element of electricity rate structures for commercial customers. Customer-sited solar photovoltaic (PV) systems can potentially reduce demand charges, but the level of savings is difficult to predict, given variations in demand charge designs, customer loads, and PV generation profiles. Lawrence Berkeley National Laboratory (Berkeley Lab) and the National Renewable Energy Laboratory (NREL) are collaborating on a series of studies to understand how solar PV can impact demand charges. Prior studies in the series examined demand charge reductions from solar on a stand-alone basis for residential and commercial customers. Those earlier analyses found that solar alone has limited ability to reduce demand charges, depending on the specific design of the demand charge and on the shape of the customer’s load profile. This latest analysis estimates demand charge savings from solar in commercial buildings when co-deployed with behind-the-meter storage, highlighting the complementary roles of the two technologies. The analysis is based on simulated loads, solar generation, and storage dispatch across a wide variety of building types, locations, system configurations, and demand charge designs.
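The complementary roles described above come down to a simple mechanism: the demand charge is set by the single highest net-load interval in the billing period, which may fall after sunset where PV cannot help. A toy sketch (not the labs' simulation code; the load, PV, and rate numbers are invented, and the battery is idealized with no energy-capacity limit):

```python
# Illustrative sketch of why storage complements PV for demand charges:
# the charge is rate x max(net load), and PV alone cannot clip peaks
# that occur off-sun. The battery here is idealized: it can shave up to
# battery_kw off any single interval, ignoring energy capacity.

def monthly_demand_charge(load_kw, pv_kw, rate_per_kw, battery_kw=0.0):
    """Demand charge given interval load and PV profiles in kW."""
    net = [max(l - p - battery_kw, 0.0) for l, p in zip(load_kw, pv_kw)]
    return rate_per_kw * max(net)

load = [80, 90, 100, 95, 85, 110]   # peak falls in the last (evening) interval
pv   = [10, 30, 40, 35, 15, 0]      # no PV output at the evening peak

no_tech = monthly_demand_charge(load, [0] * 6, 15.0)
pv_only = monthly_demand_charge(load, pv, 15.0)
pv_batt = monthly_demand_charge(load, pv, 15.0, battery_kw=20.0)
```

With an evening peak, PV alone leaves the demand charge unchanged, while even a modest battery reduces it; this is the interaction the study quantifies with far more realistic dispatch.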
Global building inventory for earthquake loss estimation and risk management
Jaiswal, Kishor; Wald, David; Porter, Keith
2010-01-01
We develop a global database of building inventories using a taxonomy of global building types for use in near-real-time post-earthquake loss estimation and pre-earthquake risk analysis, for the U.S. Geological Survey’s Prompt Assessment of Global Earthquakes for Response (PAGER) program. The database is available for public use, subject to peer review, scrutiny, and open enhancement. On a country-by-country level, it contains estimates of the distribution of building types categorized by material, lateral force resisting system, and occupancy type (residential or nonresidential, urban or rural). The database draws on and harmonizes numerous sources: (1) UN statistics, (2) UN Habitat’s demographic and health survey (DHS) database, (3) national housing censuses, (4) the World Housing Encyclopedia and (5) other literature.
Souvignet, Julien; Declerck, Gunnar; Asfari, Hadyl; Jaulent, Marie-Christine; Bousquet, Cédric
2016-10-01
Efficient searching and coding in databases that use terminological resources requires that they support efficient data retrieval. The Medical Dictionary for Regulatory Activities (MedDRA) is a reference terminology for several countries and organizations to code adverse drug reactions (ADRs) for pharmacovigilance. Ontologies that are available in the medical domain provide several advantages such as reasoning to improve data retrieval. The field of pharmacovigilance does not yet benefit from a fully operational ontology to formally represent the MedDRA terms. Our objective was to build a semantic resource based on formal description logic to improve MedDRA term retrieval and aid the generation of on-demand custom groupings by appropriately and efficiently selecting terms: OntoADR. The method consists of the following steps: (1) mapping between MedDRA terms and SNOMED-CT, (2) generation of semantic definitions using semi-automatic methods, (3) storage of the resource and (4) manual curation by pharmacovigilance experts. We built a semantic resource for ADRs enabling a new type of semantics-based term search. OntoADR adds new search capabilities relative to previous approaches, overcoming the usual limitations of computation using lightweight description logic, such as the intractability of unions or negation queries, bringing it closer to user needs. Our automated approach for defining MedDRA terms enabled the association of at least one defining relationship with 67% of preferred terms. The curation work performed on our sample showed an error level of 14% for this automated approach. We tested OntoADR in practice, which allowed us to build custom groupings for several medical topics of interest. The methods we describe in this article could be adapted and extended to other terminologies which do not benefit from a formal semantic representation, thus enabling better data retrieval performance. 
Our custom groupings of MedDRA terms were used while performing signal detection, which suggests that the graphical user interface we are currently implementing to process OntoADR could be usefully integrated into specialized pharmacovigilance software that relies on MedDRA. Copyright © 2016 Elsevier Inc. All rights reserved.
Building Essential Skills for the Ohio Building and Construction Industry. Final Report.
ERIC Educational Resources Information Center
Pritz, Sandra G.; And Others
The Center on Education and Training for Employment (CETE) at the Ohio State University worked in partnership with the Ohio State Building and Construction Trades Council (OSB&CT) to develop and deliver customized workplace literacy services for local union members in six major Ohio cities (Columbus, Cleveland, Cincinnati, Toledo, Dayton, and…
Architecture Knowledge for Evaluating Scalable Databases
2015-01-16
problems, arising from the proliferation of new data models and distributed technologies for building scalable, available data stores. Architects must... longer are relational databases the de facto standard for building data repositories. Highly distributed, scalable “NoSQL” databases [11] have emerged... This is especially challenging at the data storage layer. The multitude of competing NoSQL database technologies creates a complex and rapidly
In-Memory Graph Databases for Web-Scale Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Castellana, Vito G.; Morari, Alessandro; Weaver, Jesse R.
RDF databases have emerged as one of the most relevant ways of organizing, integrating, and managing exponentially growing, often heterogeneous, and not rigidly structured data for a variety of scientific and commercial fields. In this paper we discuss the solutions integrated in GEMS (Graph database Engine for Multithreaded Systems), a software framework for implementing RDF databases on commodity, distributed-memory high-performance clusters. Unlike the majority of current RDF databases, GEMS has been designed from the ground up to primarily employ graph-based methods. This is reflected in all the layers of its stack. The GEMS framework is composed of: a SPARQL-to-C++ compiler, a library of data structures and related methods to access and modify them, and a custom runtime providing lightweight software multithreading, network message aggregation, and a partitioned global address space. We provide an overview of the framework, detailing its components and how they have been closely designed and customized to address issues of graph methods applied to large-scale datasets on clusters. We discuss in detail the principles that enable automatic translation of queries (expressed in SPARQL, the query language of choice for RDF databases) to graph methods, and identify differences with respect to other RDF databases.
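The graph-method approach GEMS takes can be illustrated in miniature: store RDF triples as adjacency lists and answer a query pattern by walking edges rather than joining relational tables. A hypothetical sketch, not GEMS code; the triples and names are invented:

```python
# Minimal illustration of answering an RDF-style query with graph
# traversal: a two-hop walk over subject -> predicate -> object edges,
# analogous to a simple SPARQL basic graph pattern.
from collections import defaultdict

triples = [
    ("alice", "worksAt", "lab1"),
    ("bob",   "worksAt", "lab1"),
    ("lab1",  "locatedIn", "richland"),
]

# Adjacency structure: subject -> predicate -> list of objects.
graph = defaultdict(lambda: defaultdict(list))
for s, p, o in triples:
    graph[s][p].append(o)

def people_in(city):
    """Who works at any place located in `city`? (a two-hop traversal)"""
    return sorted(person for person, edges in list(graph.items())
                  for place in edges.get("worksAt", [])
                  if city in graph.get(place, {}).get("locatedIn", []))

found = people_in("richland")
```

In SPARQL the same pattern would read `?person :worksAt ?place . ?place :locatedIn :richland`; the traversal above is the kind of graph method a compiler like GEMS's could emit for it.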
Hoeck, W G
1994-06-01
InfoTrac TFD provides a graphical user interface (GUI) for viewing and manipulating datasets in the Transcription Factor Database, TFD. The interface was developed in FileMaker Pro 2.0 by Claris Corporation, which provides cross-platform compatibility between Apple Macintosh computers running System 7.0 and higher and IBM-compatibles running Microsoft Windows 3.0 and higher. TFD ASCII tables were formatted to fit data into several custom data tables using Add/Strip, a shareware utility, and FileMaker Pro's lookup feature. The lookup feature was also used to link the TFD data tables within a flat-file database management system. The 'Navigator', consisting of several pop-up menus listing transcription factor abbreviations, facilitates the search for transcription factor entries. Data are presented onscreen in several layouts that can be further customized by the user. InfoTrac TFD makes the transcription factor database accessible to a much wider community of scientists by making it available on two popular microcomputer platforms.
GenomeHubs: simple containerized setup of a custom Ensembl database and web server for any species
Kumar, Sujai; Stevens, Lewis; Blaxter, Mark
2017-01-01
As the generation and use of genomic datasets is becoming increasingly common in all areas of biology, the need for resources to collate, analyse and present data from one or more genome projects is becoming more pressing. The Ensembl platform is a powerful tool to make genome data and cross-species analyses easily accessible through a web interface and a comprehensive application programming interface. Here we introduce GenomeHubs, which provide a containerized environment to facilitate the setup and hosting of custom Ensembl genome browsers. This simplifies mirroring of existing content and import of new genomic data into the Ensembl database schema. GenomeHubs also provide a set of analysis containers to decorate imported genomes with results of standard analyses and functional annotations, and support export to flat files, including EMBL format for submission of assemblies and annotations to the International Nucleotide Sequence Database Collaboration. Database URL: http://GenomeHubs.org PMID:28605774
DTS: Building custom, intelligent schedulers
NASA Technical Reports Server (NTRS)
Hansson, Othar; Mayer, Andrew
1994-01-01
DTS is a decision-theoretic scheduler, built on top of a flexible toolkit -- this paper focuses on how the toolkit might be reused in future NASA mission schedulers. The toolkit includes a user-customizable scheduling interface, and a 'Just-For-You' optimization engine. The customizable interface is built on two metaphors: objects and dynamic graphs. Objects help to structure problem specifications and related data, while dynamic graphs simplify the specification of graphical schedule editors (such as Gantt charts). The interface can be used with any 'back-end' scheduler, through dynamically-loaded code, interprocess communication, or a shared database. The 'Just-For-You' optimization engine includes user-specific utility functions, automatically compiled heuristic evaluations, and a postprocessing facility for enforcing scheduling policies. The optimization engine is based on BPS, the Bayesian Problem-Solver (1,2), which introduced a similar approach to solving single-agent and adversarial graph search problems.
Bakshi, Sonal R; Shukla, Shilin N; Shah, Pankaj M
2009-01-01
We developed a Microsoft Access-based laboratory management system to facilitate database management of leukemia patients referred for cytogenetic tests, namely karyotyping and fluorescence in situ hybridization (FISH). The database is custom-made for entry of patient data, clinical details, sample details, and cytogenetics test results, and for data mining in various ongoing research areas. A number of clinical research laboratory-related tasks are carried out faster using specific "queries." The tasks include tracking the clinical progression of a particular patient across multiple visits, treatment response, morphological and cytogenetics response, survival time, automatic grouping of patients by inclusion criteria in a research project, tracking various sample-processing steps, turn-around time, and revenue generated. Since 2005 we have collected over 5,000 samples. The database is easily updated and is being adapted for various data maintenance and mining needs.
Customized Assessment Group Initiative: A Complementary Approach to Students' Learning
ERIC Educational Resources Information Center
Akindayomi, Akinloye
2015-01-01
This study, conducted in a US setting, examines the importance of group dynamics that emphasize cooperative team building through the proposed grouping strategy called Customized Assessment Group Initiative (CAGI). CAGI is a student grouping strategy designed to operationalize the mutual accountability concept central to the definition of teams by…
DEEP: Database of Energy Efficiency Performance
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, Tianzhen; Piette, Mary; Lee, Sang Hoon
A database of energy efficiency performance (DEEP) is a pre-simulated database that enables quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 10 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models were developed for a comprehensive assessment of building energy performance based on DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six vintages of construction and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air conditioning, plug loads, and domestic hot water. DEEP contains the energy simulation results for individual retrofit measures as well as packages of measures, to account for interactive effects between multiple measures. The large-scale EnergyPlus simulations are being conducted on the supercomputers at the National Energy Research Scientific Computing Center (NERSC) of Lawrence Berkeley National Laboratory. The pre-simulated database is part of the CEC PIER project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP for recommended measures, estimated energy savings, and financial payback period based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment.
The pre-simulated database and associated comprehensive measure analysis enhance the ability to perform assessments of retrofits that reduce energy use for small and medium buildings whose owners typically do not have the resources to conduct a costly building energy audit.
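The real-time feedback loop described above, a toolkit querying a pre-simulated database and ranking measures by a user's criterion, can be sketched as a simple lookup and sort. The records, costs, and savings below are invented for illustration; DEEP's actual schema and measure set are not shown here.

```python
# Hypothetical sketch of querying a pre-simulated measure database:
# look up savings for a building type / climate zone and rank the
# measures by simple payback. All records are invented.

DEEP = {
    ("small_office", "CZ3", "LED lighting"):    {"kwh_saved": 12_000, "cost": 8_000},
    ("small_office", "CZ3", "roof insulation"): {"kwh_saved": 5_000,  "cost": 9_000},
}

def rank_by_payback(building, climate, electricity_price):
    """Return (measure, payback_years) pairs sorted by shortest payback."""
    results = []
    for (b, cz, measure), r in DEEP.items():
        if b == building and cz == climate:
            payback = r["cost"] / (r["kwh_saved"] * electricity_price)
            results.append((measure, round(payback, 2)))
    return sorted(results, key=lambda x: x[1])

ranking = rank_by_payback("small_office", "CZ3", 0.20)
```

Because every measure was simulated in advance, a web toolkit answering such a query returns instantly instead of launching a new EnergyPlus run, which is the point of pre-simulation.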
Customer perceived service quality, satisfaction and loyalty in Indian private healthcare.
Kondasani, Rama Koteswara Rao; Panda, Rajeev Kumar
2015-01-01
The purpose of this paper is to analyse how perceived service quality and customer satisfaction lead to loyalty towards healthcare service providers. In total, 475 hospital patients participated in a questionnaire survey in five Indian private hospitals. Descriptive statistics, factor analysis, regression and correlation statistics were employed to analyse customer perceived service quality and how it leads to loyalty towards service providers. Results indicate that the service seeker-service provider relationship, quality of facilities and the interaction with supporting staff have a positive effect on customer perception. Findings help healthcare managers to formulate effective strategies to ensure a better quality of services to the customers. This study helps healthcare managers to build customer loyalty towards healthcare services, thereby attracting and gaining more customers. This paper will help healthcare managers and service providers to analyse customer perceptions and their loyalty towards Indian private healthcare services.
Application GIS on university planning: building a spatial database aided spatial decision
NASA Astrophysics Data System (ADS)
Miao, Lei; Wu, Xiaofang; Wang, Kun; Nong, Yu
2007-06-01
As universities grow in size and scope, many kinds of resources urgently need effective management. A spatial database is the right tool to assist administrators' spatial decision-making, and it supports the digital campus by integrating with the existing OMS. The paper first studies campus planning in detail. Then, using South China Agricultural University as a case, it demonstrates how to build a geographic database of campus buildings and housing to support university administrators' spatial decisions.
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
A number of topics related to building a generalized distributed system model are discussed. The effects of distributed database modeling on evaluation of transaction rollbacks, the measurement of effects of distributed database models on transaction availability measures, and a performance analysis of static locking in replicated distributed database systems are covered.
Estimation of vulnerability functions based on a global earthquake damage database
NASA Astrophysics Data System (ADS)
Spence, R. J. S.; Coburn, A. W.; Ruffle, S. J.
2009-04-01
Developing a better approach to the estimation of future earthquake losses, and in particular to the understanding of the inherent uncertainties in loss models, is vital to confidence in modelling potential losses in insurance or for mitigation. For most areas of the world there is currently insufficient knowledge of the current building stock for vulnerability estimates to be based on calculations of structural performance. In such areas, the most reliable basis for estimating vulnerability is the performance of the building stock in past earthquakes, using damage databases and comparison with consistent estimates of ground motion. This paper will present a new approach to the estimation of vulnerabilities using the recently launched Cambridge University Earthquake Damage Database (CUEDD). CUEDD is based on data assembled by the Martin Centre at Cambridge University since 1980, complemented by other more recently published and some unpublished data. It assembles, in a single, organised, expandable and web-accessible database, summary information on worldwide post-earthquake building damage surveys which have been carried out since the 1960s. Currently it contains data on the performance of more than 750,000 individual buildings, in 200 surveys following 40 separate earthquakes. The database includes building typologies, damage levels and the location of each survey. It is mounted on a GIS mapping system and links to the USGS ShakeMaps of each earthquake, which enables the macroseismic intensity and other ground motion parameters to be defined for each survey and location. Fields of data for each building damage survey include:
· Basic earthquake data and its sources
· Details of the survey location and intensity and other ground motion observations or assignments at that location
· Building and damage level classification, and tabulated damage survey results
· Photos showing typical examples of damage.
In future planned extensions of the database, information on human casualties will also be assembled. The database also contains analytical tools enabling data from similar locations, building classes or ground motion levels to be assembled, and thus vulnerability relationships to be derived for any chosen ground motion parameter, for a given class of building, and for particular countries or regions. The paper presents examples of vulnerability relationships for particular classes of buildings and regions of the world, together with the estimated uncertainty ranges, and discusses the applicability of such vulnerability functions in earthquake loss assessment for insurance purposes or for earthquake risk mitigation.
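The derivation of empirical vulnerability relationships from such a database can be sketched as grouping survey records by intensity for one building class and averaging the observed damage ratios. The data below are invented; CUEDD's actual fields and statistical methods are considerably richer.

```python
# Illustrative sketch (invented data) of deriving an empirical
# vulnerability curve from post-earthquake damage surveys: for one
# building class, average the observed mean damage ratio at each
# macroseismic intensity level.
from collections import defaultdict

surveys = [  # (building_class, intensity, mean_damage_ratio in 0..1)
    ("unreinforced_masonry", 7, 0.15),
    ("unreinforced_masonry", 7, 0.25),
    ("unreinforced_masonry", 8, 0.45),
    ("unreinforced_masonry", 9, 0.70),
    ("rc_frame",             8, 0.10),
]

def vulnerability_curve(building_class):
    """Mean damage ratio vs. intensity for one class of building."""
    by_intensity = defaultdict(list)
    for cls, mmi, ratio in surveys:
        if cls == building_class:
            by_intensity[mmi].append(ratio)
    return {mmi: sum(v) / len(v) for mmi, v in sorted(by_intensity.items())}

curve = vulnerability_curve("unreinforced_masonry")
```

Scatter across surveys at the same intensity is exactly what feeds the uncertainty ranges the paper discusses; here it would be the spread of the 0.15 and 0.25 observations at intensity 7.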
Competitive edge: the art and science of branding.
Longeteig, Kim
2010-01-01
Branding is the equivalent of building a reputation, and of managing the brand and brand perceptions with actions. Create and craft a desirable brand by associating the brand with a personality. This approach is important because it relies on the collective experiences a customer has with the brand, and it is one of the most straightforward ways to craft a brand. Building and maintaining brand strategy is an ongoing process that must be managed. Effort must be continually made to increase the brand's perceived value to referrers and patients, to differentiate the brand from the competition, to make and keep brand promises, and to create customer loyalty.
DOT National Transportation Integrated Search
2004-05-01
For estimating the system total unlinked passenger trips and passenger miles of a fixed-route bus system for the National Transit Database (NTD), the FTA-approved sampling plans may either over-sample or fail to yield the FTA's required confidence and p...
From slogans to strategy: a workable approach to customer satisfaction and retention.
Timm, P R
1997-01-01
Too many organizations confuse slogans, good intentions, or mechanical phrases with customer service. Most recognize that the most powerful way to prosper in today's economy is to enhance customer satisfaction and loyalty. But customer service has little to do with mottos, slogans, or mechanical phrases. The real management challenge lies in translating the slogans into employee actions that create customer satisfaction and loyalty--in creating a strategy for ensuring good service intentions and exceptional service results. This article shows a logical, theoretically sound approach to building and implementing what I call an E-Plus Customer Satisfaction strategy. Incidentally, I use the term "customer" throughout this article, but I recognize that we have different terms in various organizations. So feel free to substitute "patient", "guest", "client", or any other synonym. The principles are the same.
NASA Astrophysics Data System (ADS)
Bangia, Tarun; Yadava, Shobhit; Kumar, Brijesh; Ghanti, A. S.; Hardikar, P. M.
2016-07-01
India's largest optical telescope facility, with a 3.6 m aperture, has recently been established at the Devasthal site by the Aryabhatta Research Institute of Observational Sciences (ARIES), an autonomous institute under the Department of Science and Technology, Government of India. The telescope is equipped with active optics and is designed for seeing-limited observations at visible and near-infrared wavelengths. A steel building with a rotating cylindrical steel dome was erected to house the 3.6 m telescope and its accessories at the hilltop of the Devasthal site. Customized cranes were essential inside the building: space constraints around the telescope building ruled out operating large external heavy-duty cranes from outside, while the observatory's altitude, sharp bends, and other transportation constraints on the route made bringing heavy cranes to the site impractical. To meet the challenge of installing the telescope from inside the building by lifting components through its hatch, two single-girder cranes and two under-slung cranes of 10 MT capacity each were specifically designed and developed. All four overhead cranes were custom built to handle the telescope mirror and its various components during installation and assembly. The overhead cranes were installed in the limited available space inside the building and tested as per IS 3177. The cranes are equipped with many features for telescope integration at the site, such as VVVF drive compatibility, provision for tandem operation, digital load display, an anti-collision mechanism, electrical interlocks, radio remote control, low hook height, and a compact carriage.
Deadpool: A how-to-build guide
USDA-ARS?s Scientific Manuscript database
An easy-to-customize, low-cost, low-disturbance proximal sensing cart for field-based high-throughput phenotyping is described. General dimensions and build guidelines are provided. The cart, named Deadpool, supports mounting multiple proximal sensors and cameras for characterizing plant traits grow...
3. Historic American Buildings Survey Justh Querot & Company 'Wasp-Newsletter' ...
3. Historic American Buildings Survey Justh Querot & Company 'Wasp-Newsletter' (December 26, 1931) San Francisco Examiner Library Hanging of John Jenkins, 1851 DESTROYED - 1850's - Mexican Custom House, Historic View, Portsmouth Square, San Francisco, San Francisco County, CA
Autotune Calibrates Models to Building Use Data
None
2018-01-16
Models of existing buildings are currently unreliable unless calibrated manually by a skilled professional. Autotune, as the name implies, automates this process by calibrating the model of an existing building to measured data, and is now available as open source software. This enables private businesses to incorporate Autotune into their products so that their customers can more effectively estimate cost savings of reduced energy consumption measures in existing buildings.
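The calibration idea behind a tool like Autotune can be sketched as a search over model parameters that minimizes the mismatch between simulated and measured energy use. The toy "model" below stands in for an EnergyPlus simulation; all names and numbers are invented.

```python
# Hypothetical sketch of model calibration: pick the parameter value
# whose simulated output best fits measured data. The simulate()
# function is a toy stand-in for a real building energy model.

def simulate(insulation_r, weather):
    """Toy building model: monthly energy falls as insulation rises."""
    return [w / insulation_r for w in weather]

def calibrate(measured_energy, weather, candidates):
    """Pick the candidate parameter minimizing RMS error vs. measurements."""
    def rmse(a, b):
        return (sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)) ** 0.5
    return min(candidates,
               key=lambda r: rmse(simulate(r, weather), measured_energy))

weather = [100.0, 120.0, 90.0]
measured = [20.0, 24.0, 18.0]          # consistent with an R-value of 5
best_r = calibrate(measured, weather, candidates=[2, 5, 10])
```

A real calibration explores thousands of interacting parameters with smarter search than this grid scan, which is why automating the process matters; the structure of the loop (simulate, score against measurements, keep the best) is the same.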
MaizeGDB: enabling access to basic, translational, and applied research information
USDA-ARS?s Scientific Manuscript database
MaizeGDB is the Maize Genetics and Genomics Database (available online at http://www.maizegdb.org). The MaizeGDB project is not simply an online database and website but rather an information service to maize researchers that supports customized data access and analysis needs to individual research...
Using data mining to build a customer-focused organization.
Okasha, A
1999-08-01
Data mining is a new buzz word in managed care. More than simply a method of unlocking a vault of useful information in MCO data banks and warehouses, the author believes that it can help steer an organization through the reengineering process, leading to the health system's transformation toward a customer-focused organization.
ERIC Educational Resources Information Center
Circle, Alison; Bierman, Kerry
2009-01-01
The days when marketing was thought to be posters and fliers are over. In today's world, marketing is at the core of every transaction, from checkout and customer interaction to story times and buildings themselves. A brand is a promise one makes to his or her customer, and a promise that is unified, consistent, and believable can help ensure that…
6. Photocopy from portfolio of measured drawings (Plans of Public ...
6. Photocopy from portfolio of measured drawings (Plans of Public Buildings in Course of Construction Under the Direction of the Secretary of the Treasury, Including the Specifications Thereof: Custom House, Post Office and Court Room (Wheeling, VA), 1855.) 1855 TITLE PAGE - U. S. Custom House, Market & Sixteenth Streets, Wheeling, Ohio County, WV
19 CFR 10.601 - Retail packaging materials and containers.
Code of Federal Regulations, 2010 CFR
2010-04-01
...-Central America-United States Free Trade Agreement Rules of Origin § 10.601 Retail packaging materials and... Section 10.601 Customs Duties U.S. CUSTOMS AND BORDER PROTECTION, DEPARTMENT OF HOMELAND SECURITY... requirement. The United States importer of good C decides to use the build-down method, RVC = ((AV-VNM)/AV...
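The regulation snippet above truncates the formula. Under the CAFTA-DR rules of origin, the build-down method computes regional value content as RVC = ((AV − VNM)/AV) × 100, where AV is the adjusted value of the good and VNM is the value of non-originating materials used in its production. A minimal sketch of that arithmetic, with invented values:

```python
# Regional value content (%) by the build-down method, as defined in
# the CAFTA-DR rules of origin: RVC = ((AV - VNM) / AV) x 100.
# AV = adjusted value of the good; VNM = value of non-originating
# materials. The dollar figures below are illustrative only.

def rvc_build_down(adjusted_value, non_originating_value):
    """Regional value content (%) under the build-down method."""
    return (adjusted_value - non_originating_value) / adjusted_value * 100

# e.g. a good valued at $1,000 incorporating $350 of non-originating
# materials has a regional value content of 65%:
rvc = rvc_build_down(1_000, 350)
```

Whether 65% satisfies the rule depends on the threshold the specific tariff provision sets for that good, which the snippet does not show.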
ERIC Educational Resources Information Center
Bi, Youyi
2017-01-01
Human-centered design requires thorough understanding of people (e.g. customers, designers, engineers) in order to better satisfy the needs and expectations of all stakeholders in the design process. Designers are able to create better products by incorporating customers' subjective evaluations on products. Engineers can also build better tools…
Winning relationships through customer service: initial contact.
Levin, R P
1994-08-01
First impressions last, and can have an impact on all future contact with a new patient. By using each initial contact with a new patient to begin building a strong relationship, a practice can be positioned for success. This article explores relationship building techniques.
NASA Technical Reports Server (NTRS)
Fisher, Marcus S.; Northey, Jeffrey; Stanton, William
2014-01-01
The purpose of this presentation is to outline how the NASA Independent Verification and Validation (IV&V) Program helps to build reliability into the Space Mission Software Systems (SMSSs) that its customers develop.
ERIC Educational Resources Information Center
Lawlor, John
1998-01-01
Instead of differentiating themselves by building "brand identities," colleges and universities often focus on competing with price. As a result, fewer and fewer institutions base their identities on value, the combination of quality and price. Methods of building two concepts to influence customers' brand image and brand loyalty are…
NASA Astrophysics Data System (ADS)
Wu, Jiangning; Wang, Xiaohuan
The rapidly increasing number of mobile phone users and service types has led to a great accumulation of complaint information. How to use this information to enhance the quality of customer service is a pressing issue. To address this problem, the paper presents an approach to constructing a domain knowledge map for navigating explicit and tacit knowledge in two ways: building a Topic Map-based explicit knowledge navigation model, which includes domain TM construction, a semantic topic expansion algorithm, and VSM-based similarity calculation; and building a Social Network Analysis-based tacit knowledge navigation model, which includes a multi-relational expert navigation algorithm and criteria to evaluate the performance of expert networks. In this way, both customer managers and operators in call centers can find the appropriate knowledge and experts quickly and exactly. The experimental results show that the above method is very effective for knowledge navigation.
NASA Astrophysics Data System (ADS)
Cretcher, C. K.
1980-11-01
The impact of stringent energy conserving building standards on electric utility service areas and their customers was analyzed. The demands on the seven broadly representative electric utilities were aggregated to represent the total new construction electric heating demands in the years 1990 and 2000 to be compared to the aggregate obtained similarly for a nominal, less stringent standard, viz., ASHRAE 90-75. Results presented include the percentage of energy savings achieved in both heating and cooling seasons and typical demand profile changes. A utility economic impact analysis was performed for the cases investigated to determine changes in operating costs and potential capacity sales. A third cost component considered is the incremental cost of superinsulation (over ASHRAE 90-75) to the customer. The aggregate net cost to the utility/customer entity is utilized as a measure of overall economic benefit.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoon Lee, Sang; Hong, Tianzhen; Sawaya, Geof
The paper presents a method and process to establish a database of energy efficiency performance (DEEP) to enable quick and accurate assessment of energy retrofits of commercial buildings. DEEP was compiled from the results of about 35 million EnergyPlus simulations. DEEP provides energy savings for screening and evaluation of retrofit measures targeting small and medium-sized office and retail buildings in California. The prototype building models are developed for a comprehensive assessment of building energy performance based on the DOE commercial reference buildings and the California DEER prototype buildings. The prototype buildings represent seven building types across six construction vintages and 16 California climate zones. DEEP uses these prototypes to evaluate the energy performance of about 100 energy conservation measures covering envelope, lighting, heating, ventilation, air-conditioning, plug loads, and domestic hot water. DEEP consists of the energy simulation results for individual retrofit measures as well as packages of measures, to consider interactive effects between multiple measures. The large-scale EnergyPlus simulations are being conducted on the supercomputers at the National Energy Research Scientific Computing Center of Lawrence Berkeley National Laboratory. The pre-simulation database is part of an ongoing project to develop a web-based retrofit toolkit for small and medium-sized commercial buildings in California, which provides real-time energy retrofit feedback by querying DEEP with recommended measures, estimated energy savings, and financial payback period based on users' decision criteria of maximizing energy savings, energy cost savings, carbon reduction, or payback of investment.
The pre-simulated database and associated comprehensive measure analysis enhance the ability to perform assessments of retrofits to reduce energy use for small and medium buildings and business owners who typically do not have the resources to conduct costly building energy audits. DEEP will be migrated into DEnCity, DOE's Energy City, which integrates large-scale energy data into a multi-purpose, open, and dynamic database leveraging diverse sources of existing simulation data.
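As a sketch of how a retrofit toolkit might query such a pre-simulated savings database, the following uses an invented table layout; the `savings` table, its columns, and the sample rows are illustrative assumptions, not the actual DEEP schema:

```python
import sqlite3

# Toy stand-in for a DEEP-style pre-simulated savings database.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE savings (
    building_type TEXT, vintage TEXT, climate_zone TEXT,
    measure TEXT, kwh_saved REAL, cost_saved REAL)""")
conn.executemany(
    "INSERT INTO savings VALUES (?, ?, ?, ?, ?, ?)",
    [("small_office", "pre-1978", "CZ03", "LED lighting", 12500.0, 2100.0),
     ("small_office", "pre-1978", "CZ03", "roof insulation", 4300.0, 720.0),
     ("retail", "1978-1992", "CZ12", "LED lighting", 9800.0, 1650.0)])

def rank_measures(building_type, vintage, climate_zone):
    """Return pre-simulated measures for one prototype, best savings first."""
    return conn.execute(
        """SELECT measure, kwh_saved, cost_saved FROM savings
           WHERE building_type=? AND vintage=? AND climate_zone=?
           ORDER BY kwh_saved DESC""",
        (building_type, vintage, climate_zone)).fetchall()

print(rank_measures("small_office", "pre-1978", "CZ03"))
```

Because the savings are pre-computed, a lookup like this can answer a user's query in real time instead of launching a new EnergyPlus run.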
Data Preparation Process for the Buildings Performance Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Walter, Travis; Dunn, Laurel; Mercado, Andrea
2014-06-30
The Buildings Performance Database (BPD) includes empirically measured data from a variety of data sources with varying degrees of data quality and data availability. The purpose of the data preparation process is to maintain data quality within the database and to ensure that all database entries have sufficient data for meaningful analysis and for the database API. Data preparation is a systematic process of mapping data into the Building Energy Data Exchange Specification (BEDES), cleansing data using a set of criteria and rules of thumb, and deriving values such as energy totals and dominant asset types. The data preparation process requires the most effort and time, so most of the cleansing process has been automated. The process also needs to adapt as more data is contributed to the BPD and as building technologies evolve over time. The data preparation process is an essential step between data contributed by providers and data published to the public in the BPD.
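The map-cleanse-derive pipeline described above can be sketched as follows; the field names, mapping, and validity rules are illustrative assumptions, not the actual BPD/BEDES specification:

```python
# Hypothetical provider-field -> spec-field mapping.
FIELD_MAP = {"sqft": "gross_floor_area", "elec_kwh": "electricity_use"}

def map_to_spec(record):
    """Rename provider fields to the common specification's names."""
    return {FIELD_MAP.get(k, k): v for k, v in record.items()}

def cleanse(record):
    """Drop records lacking the minimum fields for meaningful analysis."""
    required = ("gross_floor_area", "electricity_use")
    if any(record.get(f) in (None, "") for f in required):
        return None
    if record["gross_floor_area"] <= 0:   # rule of thumb: no zero-area buildings
        return None
    # derived value: site energy intensity (kWh per square foot)
    record["site_eui"] = record["electricity_use"] / record["gross_floor_area"]
    return record

raw = {"sqft": 25000, "elec_kwh": 400000}
clean = cleanse(map_to_spec(raw))
print(clean["site_eui"])  # 16.0
```

Automating these three steps is what lets the cleansing keep pace as new data providers contribute to the database.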
Detecting errors and anomalies in computerized materials control and accountability databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Whiteson, R.; Hench, K.; Yarbro, T.
The Automated MC and A Database Assessment project is aimed at improving anomaly and error detection in materials control and accountability (MC and A) databases and increasing confidence in the data that they contain. Anomalous data resulting in poor categorization of nuclear material inventories greatly reduces the value of the database information to users. Therefore it is essential that MC and A data be assessed periodically for anomalies or errors. Anomaly detection can identify errors in databases and thus provide assurance of the integrity of data. An expert system has been developed at Los Alamos National Laboratory that examines these large databases for anomalous or erroneous data. For several years, MC and A subject matter experts at Los Alamos have been using this automated system to examine the large amounts of accountability data that the Los Alamos Plutonium Facility generates. These data are collected and managed by the Material Accountability and Safeguards System, a near-real-time computerized nuclear material accountability and safeguards system. This year they have expanded the user base, customizing the anomaly detector for the varying requirements of different groups of users. This paper describes the progress in customizing the expert systems to the needs of the users of the data and reports on their results.
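In spirit, such a detector applies domain rules to each accountability record and flags violations for expert review. A minimal sketch (the record layout and the two rules are hypothetical, not those of the Los Alamos expert system):

```python
def find_anomalies(records):
    """Flag records that violate simple, illustrative consistency rules."""
    flags = []
    for r in records:
        if r["mass_g"] < 0:
            flags.append((r["item_id"], "negative mass"))
        if r["category"] not in {"A", "B", "C", "D"}:
            flags.append((r["item_id"], "unknown category"))
    return flags

inventory = [
    {"item_id": "PU-001", "mass_g": 152.3, "category": "A"},
    {"item_id": "PU-002", "mass_g": -4.0, "category": "Z"},
]
print(find_anomalies(inventory))
```

A production expert system would encode many such rules, but the pattern is the same: periodic sweeps that surface suspect entries rather than silently correcting them.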
2004-06-01
of Case B identified the importance of a Customer Relationship Management (CRM) strategy in their e-business for effective telework to occur as...telework that acknowledge and take account of the heterogeneity of teleworkers. Keywords: telework, e-business, customer relationship management...to build rapport on-line with the customers makes it easier to work from outside the office. Fourthly, the employees (T3 and T4) and manager (M2
Integration of Evidence Base into a Probabilistic Risk Assessment
NASA Technical Reports Server (NTRS)
Saile, Lyn; Lopez, Vilma; Bickham, Grandin; Kerstman, Eric; FreiredeCarvalho, Mary; Byrne, Vicky; Butler, Douglas; Myers, Jerry; Walton, Marlei
2011-01-01
INTRODUCTION: A probabilistic decision support model such as the Integrated Medical Model (IMM) utilizes an immense amount of input data that necessitates a systematic, integrated approach to data collection and management. As a result of this approach, IMM is able to forecast medical events, resource utilization, and crew health during space flight. METHODS: Inflight data is the most desirable input for the Integrated Medical Model. Non-attributable inflight data is collected from the Lifetime Surveillance of Astronaut Health study as well as from the engineers, flight surgeons, and astronauts themselves. When inflight data is unavailable, cohort studies, other models, and Bayesian analyses are used, in addition to subject matter experts' input on occasion. To determine the quality of evidence for a medical condition, the data source is categorized and assigned a level of evidence from 1-5; the highest level is one. The collected data reside and are managed in a relational SQL database with a web-based interface for data entry and review. The database is also capable of interfacing with outside applications, which expands capabilities within the database itself. Via the public interface, customers can access a formatted Clinical Findings Form (CLiFF) that outlines the model input and evidence base for each medical condition. Changes to the database are tracked using a documented Configuration Management process. DISCUSSION: This strategic approach provides a comprehensive data management plan for IMM. The IMM Database's structure and architecture have proven to support additional usages, as seen in the analysis of resource utilization across medical conditions. In addition, the IMM Database's web-based interface provides a user-friendly format for customers to browse and download the clinical information for medical conditions. It is this type of functionality that will provide Exploratory Medicine Capabilities the evidence base for their medical condition list.
CONCLUSION: The IMM Database, in conjunction with the IMM, is helping the NASA aerospace program improve health care and reduce risk for astronaut crews. Both the database and model will continue to expand to meet customer needs through their multi-disciplinary, evidence-based approach to managing data. Future expansion could serve as a platform for a Space Medicine Wiki of medical conditions.
Genome Writing: Current Progress and Related Applications.
Wang, Yueqiang; Shen, Yue; Gu, Ying; Zhu, Shida; Yin, Ye
2018-02-01
The ultimate goal of synthetic biology is to build customized cells or organisms to meet specific industrial or medical needs. The most important part of the customized cell is a synthetic genome. Advanced genomic writing technologies are required to build such an artificial genome. Recently, the partially-completed synthetic yeast genome project represents a milestone in this field. In this mini review, we briefly introduce the techniques for de novo genome synthesis and genome editing. Furthermore, we summarize recent research progresses and highlight several applications in the synthetic genome field. Finally, we discuss current challenges and future prospects. Copyright © 2018. Production and hosting by Elsevier B.V.
MCST Research Operations | NREL
Readiness. Laboratory Utilization. Custom Research Equipment Design-Build Capabilities. Concept and Design. Design Requirements Assessment. Controls and Automation. Design-Build Services. Semiconductor Equipment and Materials International (SEMI) S2 standard assessment. Computer-Aided Design (CAD)/Piping and Instrumentation Diagram (P
Safe to Fly: Certifying COTS Hardware for Spaceflight
NASA Technical Reports Server (NTRS)
Fichuk, Jessica L.
2011-01-01
Providing hardware for the astronauts to use on board the Space Shuttle or International Space Station (ISS) involves a certification process that entails evaluating hardware safety, weighing risks, providing mitigation, and verifying requirements. Upon completion of this certification process, the hardware is deemed safe to fly. This process from start to finish can be completed in as little as 1 week or can take several years, depending on the complexity of the hardware and whether the item is a unique custom design. One area of cost and schedule savings that NASA implements is buying Commercial Off the Shelf (COTS) hardware and certifying it for human spaceflight as safe to fly. By utilizing commercial hardware, NASA saves time by not having to develop, design, and build the hardware from scratch, and also gains time savings in the certification process. By utilizing COTS hardware, the current detailed certification process can be simplified, which results in schedule savings. Cost savings is another important benefit of flying COTS hardware. Procuring COTS hardware for space use can be more economical than custom building the hardware. This paper will investigate the cost savings associated with certifying COTS hardware to NASA's standards rather than performing a custom build.
Yeh, Chun-Ting; Brunette, T J; Baker, David; McIntosh-Smith, Simon; Parmeggiani, Fabio
2018-02-01
Computational protein design methods have enabled the design of novel protein structures, but they are often still limited to small proteins and symmetric systems. To expand the size of designable proteins while controlling the overall structure, we developed Elfin, a genetic algorithm for the design of novel proteins with custom shapes using structural building blocks derived from experimentally verified repeat proteins. By combining building blocks with compatible interfaces, it is possible to rapidly build non-symmetric large structures (>1000 amino acids) that match three-dimensional geometric descriptions provided by the user. A run time of about 20 min on a laptop computer for a 3000 amino acid structure makes Elfin accessible to users with limited computational resources. Protein structures with controlled geometry will allow the systematic study of the effect of spatial arrangement of enzymes and signaling molecules, and provide new scaffolds for functional nanomaterials. Copyright © 2017 Elsevier Inc. All rights reserved.
Toward Generalization of Iterative Small Molecule Synthesis
Lehmann, Jonathan W.; Blair, Daniel J.; Burke, Martin D.
2018-01-01
Small molecules have extensive untapped potential to benefit society, but access to this potential is too often restricted by limitations inherent to the customized approach currently used to synthesize this class of chemical matter. In contrast, the “building block approach”, i.e., generalized iterative assembly of interchangeable parts, has now proven to be a highly efficient and flexible way to construct things ranging all the way from skyscrapers to macromolecules to artificial intelligence algorithms. The structural redundancy found in many small molecules suggests that they possess a similar capacity for generalized building block-based construction. It is also encouraging that many customized iterative synthesis methods have been developed that improve access to specific classes of small molecules. There has also been substantial recent progress toward the iterative assembly of many different types of small molecules, including complex natural products, pharmaceuticals, biological probes, and materials, using common building blocks and coupling chemistry. Collectively, these advances suggest that a generalized building block approach for small molecule synthesis may be within reach. PMID:29696152
NASA Astrophysics Data System (ADS)
Lakshmi, S.; Muthumani, S., Dr.
2017-05-01
Brand power is established through brand awareness: making consumers familiar with a company's products and services. Marketing strategies should lead customers to extend a positive approach toward the brand and continue it through repeated purchases. This research takes a triple-perspective approach to investigating brand awareness. Brand awareness and brand equity are studied, and the relationship between them is analyzed. The study also drills down into brand performance and knowledge together with awareness, seeking to establish the brand's value and utility among the public. Continuous improvement in package design, quality, and buying experience will lead to customer loyalty and preference. Branding should happen through creative ads, eye-catchers, and special campaigns. Brand awareness is the extent to which consumers are familiar with a product or service. The power of a brand resides in the minds of customers. To build a strong brand, one of the great challenges for marketers is to ensure that customers have the right experiences with products, services, and the various marketing programs, so that feelings, beliefs, perspectives, perceptions, and so on become linked to the brand. If a brand is presented with no enthusiasm or spunk, people are going to forget about it; though that may seem harsh, it is the naked truth in today's marketing world. A brand must reach out to the community through special events and campaigns that keep it relevant, and offer customers a unique experience. Here we study brand consciousness, identify the cohesion between brand awareness, knowledge, and performance, and assess the effect of brand awareness on consumer purchases. In this study, statistical tools such as the chi-square test and t-test have been used to analyse the collected data.
To increase brand awareness, marketers are constantly required to build awareness both economically and efficiently in the minds of customers in a competitive environment. Generating brand power begins with building healthy brands, so that consumers are able to identify a brand through brand recognition or recall performance. This article contains the following subheadings: 1. Introduction, 2. Objectives, 3. Research questions, 4. Research methodology, 5. Data Analysis, 6. Conclusion.
A new image for long-term care.
Wager, Richard; Creelman, William
2004-04-01
To counter widely held negative images of long-term care, managers in the industry should implement quality-improvement initiatives that include six key strategies: Manage the expectations of residents and their families. Address customers' concerns early. Build long-term customer satisfaction. Allocate resources to achieve exceptional outcomes in key areas. Respond to adverse events with compassion. Reinforce the facility's credibility.
Tourism guide cloud service quality: What actually delights customers?
Lin, Shu-Ping; Yang, Chen-Lung; Pi, Han-Chung; Ho, Thao-Minh
2016-01-01
The emergence of advanced IT and cloud services has beneficially supported the information-intensive tourism industry, while simultaneously causing extreme competition in attracting customers through building efficient service platforms. In response, numerous nations have implemented cloud platforms to provide value-added sightseeing information and personal intelligent service experiences. Despite these efforts, customers' actual perspectives have not yet been sufficiently understood. To bridge the gap, this study investigates what aspects of tourism cloud services actually delight customers' satisfaction and loyalty. 336 valid survey questionnaire answers were analyzed using the structural equation modeling method. The results prove positive impacts of function quality, enjoyment, multiple visual aids, and information quality on customers' satisfaction, as well as of enjoyment and satisfaction on use loyalty. The findings are intended to provide helpful references on customer use behaviors for enhancing cloud service quality in order to achieve better organizational competitiveness.
The Cocoa Shop: A Database Management Case
ERIC Educational Resources Information Center
Pratt, Renée M. E.; Smatt, Cindi T.
2015-01-01
This is an example of a real-world applicable case study, which includes background information on a small local business (i.e., TCS), description of functional business requirements, and sample data. Students are asked to design and develop a database to improve the management of the company's customers, products, and purchases by emphasizing…
OpinionSeer: interactive visualization of hotel customer feedback.
Wu, Yingcai; Wei, Furu; Liu, Shixia; Au, Norman; Cui, Weiwei; Zhou, Hong; Qu, Huamin
2010-01-01
The rapid development of Web technology has resulted in an increasing number of hotel customers sharing their opinions on the hotel services. Effective visual analysis of online customer opinions is needed, as it has a significant impact on building a successful business. In this paper, we present OpinionSeer, an interactive visualization system that could visually analyze a large collection of online hotel customer reviews. The system is built on a new visualization-centric opinion mining technique that considers uncertainty for faithfully modeling and analyzing customer opinions. A new visual representation is developed to convey customer opinions by augmenting well-established scatterplots and radial visualization. To provide multiple-level exploration, we introduce subjective logic to handle and organize subjective opinions with degrees of uncertainty. Several case studies illustrate the effectiveness and usefulness of OpinionSeer on analyzing relationships among multiple data dimensions and comparing opinions of different groups. Aside from data on hotel customer feedback, OpinionSeer could also be applied to visually analyze customer opinions on other products or services.
OpenStudio: A Platform for Ex Ante Incentive Programs
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roth, Amir; Brackney, Larry; Parker, Andrew
Many utilities operate programs that provide ex ante (up front) incentives for building energy conservation measures (ECMs). A typical incentive program covers two kinds of ECMs. ECMs that deliver similar savings in different contexts are associated with pre-calculated 'deemed' savings values. ECMs that deliver different savings in different contexts are evaluated on a 'custom' per-project basis. Incentive programs often operate at less than peak efficiency because both deemed ECMs and custom projects have lengthy and effort-intensive review processes: deemed ECMs to gain confidence that they are sufficiently context insensitive, custom projects to ensure that savings are claimed appropriately. DOE's OpenStudio platform can be used to automate ex ante processes and help utilities operate programs more efficiently, consistently, and transparently, resulting in greater project throughput and energy savings. A key concept of the platform is the OpenStudio Measure, a script that queries and transforms building energy models. Measures can be simple or surgical, e.g., applying different transformations based on space type, orientation, etc. Measures represent ECMs explicitly and are easier to review than ECMs that are represented implicitly as the difference between with-ECM and without-ECM models. Measures can be automatically applied to large numbers of prototype models, and instantiated from uncertainty distributions, facilitating the large-scale analysis required to develop deemed savings values. For custom projects, Measures can also be used to calibrate existing building models, to automatically create code baseline models, and to perform quality assurance screening.
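A Measure in this sense is a script that queries a building model and transforms it. OpenStudio Measures are actually written in Ruby against the OpenStudio model API; the sketch below is a simplified Python analogue with an invented dictionary model, shown only to illustrate the query-and-transform idea:

```python
def lighting_power_reduction(model, fraction=0.25, space_type=None):
    """Reduce lighting power density, optionally for one space type only."""
    for space in model["spaces"]:
        # "surgical" application: transform only matching spaces
        if space_type is None or space["type"] == space_type:
            space["lpd_w_per_m2"] *= (1.0 - fraction)
    return model

model = {"spaces": [{"type": "office", "lpd_w_per_m2": 10.0},
                    {"type": "corridor", "lpd_w_per_m2": 6.0}]}
lighting_power_reduction(model, fraction=0.25, space_type="office")
print(model["spaces"][0]["lpd_w_per_m2"])  # 7.5
```

Because the ECM is the script itself, a reviewer can read exactly what it changes, rather than diffing two whole building models.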
A Communication Framework for Collaborative Defense
2009-02-28
been able to provide sufficient automation to be able to build up the most extensive application signature database in the world with a fraction of...perceived. ...that are well understood in the context of databases. These techniques allow users to quickly scan for the existence of a key in a database. To be
Bundela, Saurabh; Sharma, Anjana; Bisen, Prakash S.
2015-01-01
Oral cancer is one of the main causes of cancer-related deaths in South Asian countries. There are very limited treatment options available for oral cancer. Research endeavors focused on the discovery and development of novel therapies for oral cancer are necessary to control the ever-rising oral cancer related mortalities. We mined the large pool of compounds from the publicly available compound databases to identify potential therapeutic compounds for oral cancer. Over 84 million compounds were screened for possible anti-cancer activity by a custom-built SVM classifier. The molecular targets of the predicted anti-cancer compounds were mined from reliable sources like experimental bioassay studies associated with the compound, and from protein-compound interaction databases. Therapeutic compounds from DrugBank, and a list of natural anti-cancer compounds derived from literature mining of published studies, were used for building a partial least squares regression model. The regression model thus built was used for the estimation of oral cancer specific weights based on the molecular targets. These weights were used to compute scores for screening the predicted anti-cancer compounds for their potential to treat oral cancer. The list of potential compounds was annotated with corresponding physicochemical properties, cancer-specific bioactivity evidence, and literature evidence. In all, 288 compounds with the potential to treat oral cancer were identified in the current study. The majority of the compounds in this list are natural products, which are well tolerated and have minimal side effects compared to their synthetic counterparts. Some of the potential therapeutic compounds identified in the current study are resveratrol, nimbolide, lovastatin, bortezomib, vorinostat, berberine, pterostilbene, deguelin, andrographolide, and colchicine. PMID:26536350
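Descriptor-based SVM screening of a compound library can be sketched as below, using scikit-learn as a stand-in classifier; the descriptor vectors, activity labels, and probability threshold are toy values, not the study's 84-million-compound pipeline:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 8))   # toy molecular descriptor vectors
y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)  # 1 = "active"

# train the screening classifier on labeled compounds
clf = SVC(kernel="rbf", probability=True).fit(X_train, y_train)

# score a pool of candidate compounds and keep confident actives
candidates = rng.normal(size=(1000, 8))
scores = clf.predict_proba(candidates)[:, 1]
hits = candidates[scores > 0.9]
print(hits.shape[0], "candidates pass the 0.9 threshold")
```

At real scale the same pattern runs in batches over a compound database, with the hits passed on to target mining and rescoring as the abstract describes.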
TriBITS (Tribal Build, Integrate, and Test System)
DOE Office of Scientific and Technical Information (OSTI.GOV)
2013-05-16
TriBITS is a configuration, build, test, and reporting system that uses the Kitware open-source CMake/CTest/CDash system. TriBITS contains a number of custom CMake/CTest scripts and python scripts that extend the functionality of the out-of-the-box CMake/CTest/CDash system.
Trust and Relationship Building in Electronic Commerce.
ERIC Educational Resources Information Center
Papadopoulou, Panagiota; Andreou, Andreas; Kanellis, Panagiotis; Martakos, Drakoulis
2001-01-01
Discussion of the need for trust in electronic commerce to build customer relationships focuses on a model drawn from established theoretical work on trust and relationship marketing that highlights differences between traditional and electronic commerce. Considers how trust can be built into virtual environments. (Contains 50 references.)…
Lee, Taein; Cheng, Chun-Huai; Ficklin, Stephen; Yu, Jing; Humann, Jodi; Main, Dorrie
2017-01-01
Abstract Tripal is an open-source database platform primarily used for development of genomic, genetic and breeding databases. We report here on the release of the Chado Loader, Chado Data Display and Chado Search modules to extend the functionality of the core Tripal modules. These new extension modules provide additional tools for (1) data loading, (2) customized visualization and (3) advanced search functions for supported data types such as organism, marker, QTL/Mendelian Trait Loci, germplasm, map, project, phenotype, genotype and their respective metadata. The Chado Loader module provides data collection templates in Excel with defined metadata and data loaders with front end forms. The Chado Data Display module contains tools to visualize each data type and the metadata which can be used as is or customized as desired. The Chado Search module provides search and download functionality for the supported data types. Also included are the tools to visualize map and species summary. The use of materialized views in the Chado Search module enables better performance as well as flexibility of data modeling in Chado, allowing existing Tripal databases with different metadata types to utilize the module. These Tripal Extension modules are implemented in the Genome Database for Rosaceae (rosaceae.org), CottonGen (cottongen.org), Citrus Genome Database (citrusgenomedb.org), Genome Database for Vaccinium (vaccinium.org) and the Cool Season Food Legume Database (coolseasonfoodlegume.org). Database URL: https://www.citrusgenomedb.org/, https://www.coolseasonfoodlegume.org/, https://www.cottongen.org/, https://www.rosaceae.org/, https://www.vaccinium.org/
A Flexible Online Metadata Editing and Management System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aguilar, Raul; Pan, Jerry Yun; Gries, Corinna
2010-01-01
A metadata editing and management system is being developed employing state-of-the-art XML technologies. A modular and distributed design was chosen for scalability, flexibility, options for customization, and the possibility to add more functionality at a later stage. The system consists of a desktop design tool or schema walker used to generate code for the actual online editor, a native XML database, and an online user access management application. The design tool is a Java Swing application that reads an XML schema, provides the designer with options to combine input fields into online forms, and gives the fields user-friendly tags. Based on design decisions, the tool generates code for the online metadata editor. The code generated is an implementation of the XForms standard using the Orbeon Framework. The design tool fulfills two requirements: first, data entry forms based on one schema may be customized at design time, and second, data entry applications may be generated for any valid XML schema without relying on custom information in the schema. However, the customized information generated at design time is saved in a configuration file which may be re-used and changed again in the design tool. Future developments will add functionality to the design tool to integrate help text, tool tips, project-specific keyword lists, and thesaurus services. Additional styling of the finished editor is accomplished via cascading style sheets, which may be further customized, and different look-and-feels may be accumulated through the community process. The customized editor produces XML files in compliance with the original schema; however, data from the current page is saved into a native XML database whenever the user moves to the next screen or pushes the save button, independently of validity. Currently the system uses the open source XML database eXist for storage and management, which comes with third-party online and desktop management tools.
However, access to metadata files in the application introduced here is managed in a custom online module, using a MySQL backend accessed by a simple Java Server Faces front end. A flexible system with three grouping options, organization, group and single editing access, is provided. Three levels were chosen to distribute administrative responsibilities and handle the common situation of an information manager entering the bulk of the metadata but leaving specifics to the actual data provider.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, Peter; Goldman, Charles A.; Satchwell, Andrew
2012-05-08
The U.S. energy service company (ESCO) industry is an example of a private sector business model in which energy savings are delivered to customers primarily through the use of performance-based contracts. This study was conceived as a snapshot of the ESCO industry prior to the economic slowdown and the introduction of federal stimulus funding mandated by enactment of the American Recovery and Reinvestment Act of 2009 (ARRA). This study utilizes two parallel analytic approaches to characterize ESCO industry and market trends in the U.S.: (1) a “top-down” approach involving a survey of individual ESCOs to estimate aggregate industry activity and (2) a “bottom-up” analysis of a database of ~3,265 projects (representing over $8B in project investment) that reports market trends including installed EE retrofit strategies, project installation costs and savings, project payback times, and benefit-cost ratios over time. Despite the onset of an economic recession, the U.S. ESCO industry managed to grow at about 7% per year between 2006 and 2008. ESCO industry revenues are relatively small compared to total U.S. energy expenditures (about $4.1 billion in 2008), but ESCOs anticipated accelerated growth through 2011 (25% per year). We found that 2,484 ESCO projects in our database generated ~$4.0 billion ($2009) in net, direct economic benefits to their customers. We estimate that the ESCO project database includes about 20% of all U.S. ESCO market activity from 1990-2008. Assuming the net benefits per project are comparable for ESCO projects that are not included in the LBNL database, this would suggest that the ESCO industry has generated ~$23 billion in net direct economic benefits for customers at projects installed between 1990 and 2008. We found that nearly 85% of all public and institutional projects met or exceeded the guaranteed level of savings.
We estimated that a typical ESCO project generated $1.50 of direct benefits for every dollar of customer investment. There is empirical evidence confirming that the industry is responding to customer demand by installing more comprehensive and complex measures, including onsite generation and measures to address deferred maintenance, but this evolution has significant implications for customer project economics, especially at K-12 schools. We found that the median simple payback time has increased from 1.9 to 3.2 years in private sector projects since the early-to-mid 1990s and from 5.2 to 10.5 years in public sector projects over the same time period.
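The payback and benefit-cost figures reported above follow from straightforward arithmetic. A minimal sketch, where the dollar amounts are illustrative values chosen only to reproduce a 3.2-year payback and a 1.5 benefit-cost ratio, not figures from the LBNL project database:

```python
def simple_payback(install_cost, annual_savings):
    """Years for cumulative annual savings to repay the installed cost."""
    return install_cost / annual_savings

def benefit_cost_ratio(lifetime_benefits, install_cost):
    """Direct benefits generated per dollar of customer investment."""
    return lifetime_benefits / install_cost

# illustrative project: $100k installed cost, $31,250/yr savings,
# $150k in lifetime direct benefits
print(simple_payback(100_000, 31_250))       # 3.2
print(benefit_cost_ratio(150_000, 100_000))  # 1.5
```

The trend the study reports, i.e. longer paybacks as projects grow more comprehensive, corresponds to installed cost rising faster than annual savings in the first formula.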
NASA Astrophysics Data System (ADS)
Mendela-Anzlik, Małgorzata; Borkowski, Andrzej
2017-06-01
Airborne laser scanning (ALS) data are used mainly for the creation of precise digital elevation models. However, the informative potential stored in ALS data can also be used for updating spatial databases, including the Database of Topographic Objects (BDOT10k). Typically, geometric representations of buildings in the BDOT10k are equal to their entities in the Land and Property Register (EGiB). In this study ALS is considered as a supporting data source. The thresholding method of original ALS data with the use of the alpha shape algorithm, proposed in this paper, allows for the extraction of points that represent the horizontal cross section of building walls, leading to the creation of vector geometric models of buildings that can then be used for updating the BDOT10k. This method also enables easy verification of the up-to-dateness of both the BDOT10k and the district EGiB databases with respect to geometric information about buildings. For verification of the proposed methodology, classified ALS data acquired with a density of 4 points/m² were used. The accuracy assessment of the identified building outlines has been carried out by comparing them to the corresponding EGiB objects. The RMSE values for 78 buildings range from a few to tens of centimeters, with an average value of about 0.5 m. At the same time, huge geometric discrepancies have been revealed for several objects. Further analyses have shown that these discrepancies could result from incorrect representations of buildings in the EGiB database.
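The alpha shape step can be sketched with a standard formulation: keep the Delaunay triangles whose circumradius is below alpha, then collect edges belonging to exactly one kept triangle, which trace the outline. This is an illustrative reimplementation, not the authors' code, and the alpha value and sample points are invented:

```python
import numpy as np
from scipy.spatial import Delaunay
from collections import Counter

def alpha_shape_edges(points, alpha):
    """Boundary edges of the 2-D alpha shape of a point set."""
    tri = Delaunay(points)
    edge_count = Counter()
    for ia, ib, ic in tri.simplices:
        a, b, c = points[ia], points[ib], points[ic]
        la = np.linalg.norm(b - c)
        lb = np.linalg.norm(a - c)
        lc = np.linalg.norm(a - b)
        # triangle area via the 2-D cross product (determinant)
        area = 0.5 * abs((b[0] - a[0]) * (c[1] - a[1])
                         - (b[1] - a[1]) * (c[0] - a[0]))
        if area == 0:
            continue  # skip degenerate (collinear) triangles
        circumradius = la * lb * lc / (4.0 * area)
        if circumradius < alpha:  # keep only "small" triangles
            for e in ((ia, ib), (ib, ic), (ia, ic)):
                edge_count[tuple(sorted(e))] += 1
    # edges used by exactly one kept triangle lie on the outline
    return [e for e, n in edge_count.items() if n == 1]

# 3x3 grid of points sampling a 2 m x 2 m square wall footprint
pts = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2],
                [1, 2], [0, 2], [0, 1], [1, 1]], dtype=float)
outline = alpha_shape_edges(pts, alpha=1.0)
print(len(outline))  # 8 perimeter edges
```

Unlike the convex hull, the alpha shape can follow concave wall segments, which is what makes it suitable for tracing real building cross sections from thresholded ALS points.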
A General Water Resources Regulation Software System in China
NASA Astrophysics Data System (ADS)
LEI, X.
2017-12-01
To avoid iterative development of core modules in water resource normal regulation and emergency regulation and improve the capability of maintenance and optimization upgrading of regulation models and business logics, a general water resources regulation software framework was developed based on the collection and analysis of common demands for water resources regulation and emergency management. It can provide a customizable, secondary developed and extensible software framework for the three-level platform "MWR-Basin-Province". Meanwhile, this general software system can realize business collaboration and information sharing of water resources regulation schemes among the three-level platforms, so as to improve the decision-making ability of national water resources regulation. There are four main modules involved in the general software system: 1) A complete set of general water resources regulation modules allows secondary developer to custom-develop water resources regulation decision-making systems; 2) A complete set of model base and model computing software released in the form of Cloud services; 3) A complete set of tools to build the concept map and model system of basin water resources regulation, as well as a model management system to calibrate and configure model parameters; 4) A database which satisfies business functions and functional requirements of general water resources regulation software can finally provide technical support for building basin or regional water resources regulation models.
Developing Online Courses: A Human-Centered Approach.
ERIC Educational Resources Information Center
Branon, Rovy; Beatty, Brian; Wilson, Jack
Companies and universities are increasingly moving to online delivery for much of their training and education needs, and designing and building quality distance education is a challenge facing many organizations. Option Six is an independent company that is building customized e-learning solutions. Over the last 2 years, the instructional…
Airport electrotechnology resource guide. Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Geba, V.; Nesbit, M.
1998-06-01
Electrotechnologies offer utilities a cutting edge marketing tool to work with airport customers to increase passenger comfort, and achieve environmental and economic goals. At the same time, utility objectives such as customer retention, and revenue and sales goals can be enhanced. This guide provides electric utility marketing staff with the necessary information to market electrotechnologies in airport applications. The airport industry is profiled and an overview of airport building, infrastructure technologies and electric vehicles is provided. In addition, the guide offers market strategies for customer targeting, market research, market plan development and development of trade ally partnerships.
12 CFR 517.1 - Purpose and scope.
Code of Federal Regulations, 2014 CFR
2014-01-01
..., customized training, relocation services, information systems technology (computer systems, database... Businesses Outreach Program (Outreach Program) is to ensure that firms owned and operated by minorities...
12 CFR 517.1 - Purpose and scope.
Code of Federal Regulations, 2013 CFR
2013-01-01
..., customized training, relocation services, information systems technology (computer systems, database... Businesses Outreach Program (Outreach Program) is to ensure that firms owned and operated by minorities...
Decision - making of Direct Customers Based on Available Transfer Capability
NASA Astrophysics Data System (ADS)
Quan, Tang; Zhaohang, Lin; Huaqiang, Li
2017-05-01
Direct power purchasing by large customers is a hot spot in electricity market reform. In this paper, the authors establish an Available Transfer Capability (ATC) model that takes uncertain factors into account, apply the model to large-customer direct-power-purchasing transactions, and improve the reliability of power supply during direct purchasing by introducing insurance theory. Customers' losses from power interruptions are also considered when building the ATC model. A large-customer decision model is established that takes the quantities of power purchased from different power plants and the reserved-capacity insurance as decision variables, sets minimum power-interruption loss as the optimization goal, and is solved with a particle swarm algorithm to produce the optimal power purchasing decision for large consumers. Finally, a simulation on the IEEE 57-bus system showed that the method is effective.
ERIC Educational Resources Information Center
Ojo, Michael A.
2017-01-01
The roadmap towards the commercialization of goods and services has been continually enhanced and modified to accommodate a more digital landscape. Businesses are building more robust websites and point-of-service opportunities that do not require human intervention. In turn, consumer shopping patterns and behaviors have shifted in response to…
NASA Astrophysics Data System (ADS)
Chen, Zhu-an; Zhang, Li-ting; Liu, Lu
2009-10-01
ESRI's GIS components, MapObjects, are applied in many cadastral information systems because of their compactness and flexibility. In such systems, some cadastral information is saved directly in the cadastral database in MapObjects' shapefile format. However, MapObjects does not provide a function for building attribute fields in a map layer's attribute data file in the cadastral database, so users cannot save analysis results. This paper designs and implements a function for building attribute fields in MapObjects, based on the Jackson system development method.
Diffraction analysis of customized illumination technique
NASA Astrophysics Data System (ADS)
Lim, Chang-Moon; Kim, Seo-Min; Eom, Tae-Seung; Moon, Seung Chan; Shin, Ki S.
2004-05-01
Various enhancement techniques such as alternating PSM, chrome-less phase lithography, double exposure, etc. have been considered as driving forces to push the production k1 factor below 0.35. Among them, layer-specific optimization of the illumination mode, the so-called customized illumination technique, has recently received close attention from lithographers. A new approach to illumination customization based on diffraction spectrum analysis is suggested in this paper. The illumination pupil is divided into diffraction domains by comparing the similarity of the confined diffraction spectra. The singular imaging property of each diffraction domain makes it easier to build and understand the customized illumination shape. By comparing the goodness of the image in each domain, it was possible to achieve a customized illumination shape. With the help of this technique, it was found that a change in layout does not necessarily change the shape of the customized illumination mode.
Redefining NHS complaint handling--the real challenge.
Seelos, L; Adamson, C
1994-01-01
More and more organizations find that a constructive and open dialogue with their customers can be an effective strategy for building long-term customer relations. In this context, it has been recognized that effective complaint-contact handling can make a significant contribution to organizations' attempts to maximize customer satisfaction and loyalty. Within the NHS, an intellectual awareness exists that effective complaint/contact handling can contribute to making services more efficient and cost-effective by developing customer-oriented improvement initiatives. Recent efforts have focused on redefining NHS complaint-handling procedures to make them more user-friendly and effective for both NHS employees and customers. Discusses the challenges associated with opening up the NHS to customer feedback. Highlights potential weaknesses in the current approach and argues that the real challenge is for NHS managers to facilitate a culture change that moves the NHS away from a long-established defensive complaint handling practice.
Customer-Provider Strategic Alignment: A Maturity Model
NASA Astrophysics Data System (ADS)
Luftman, Jerry; Brown, Carol V.; Balaji, S.
This chapter presents a new model for assessing the maturity of a customer-provider relationship from a collaborative service delivery perspective: the Customer-Provider Strategic Alignment Maturity (CPSAM) Model. This model builds on recent research for effectively managing the customer-provider relationship in IT service outsourcing contexts and a validated model for assessing alignment across internal IT service units and their business customers within the same organization. After reviewing relevant literature by service science and information systems researchers, the six overarching components of the maturity model are presented: value measurements, governance, partnership, communications, human resources and skills, and scope and architecture. A key assumption of the model is that all of the components need to be addressed to assess and improve customer-provider alignment. Examples of specific metrics for measuring the maturity level of each component over the five levels of maturity are also presented.
ERIC Educational Resources Information Center
Dahlen, Sarah P. C.; Hanson, Kathlene
2017-01-01
Discovery layers provide a simplified interface for searching library resources. Libraries with limited finances make decisions about retaining indexing and abstracting databases when similar information is available in discovery layers. These decisions should be informed by student success at finding quality information as well as satisfaction…
Database Application for a Youth Market Livestock Production Education Program
ERIC Educational Resources Information Center
Horney, Marc R.
2013-01-01
This article offers an example of a database designed to support teaching animal production and husbandry skills in county youth livestock programs. The system was used to manage production goals, animal growth and carcass data, photos and other imagery, and participant records. These were used to produce a variety of customized reports to help…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-20
..., demographic, and other information that allow their customers to market to teachers, administrators, schools... turning to the other company. By contrast, MCH lacked a K-12 database comparable to MDR or QED's..., including the time and cost to develop a database with market coverage and accuracy comparable to MDR or QED...
An Object-Relational Ifc Storage Model Based on Oracle Database
NASA Astrophysics Data System (ADS)
Li, Hang; Liu, Hua; Liu, Yong; Wang, Yuan
2016-06-01
As building models become increasingly complicated, the level of collaboration across professionals attracts more attention in the architecture, engineering and construction (AEC) industry. To adapt to this change, buildingSMART developed the Industry Foundation Classes (IFC) to facilitate interoperability between software platforms. However, IFC data are currently shared as text files, which has clear drawbacks. In this paper, considering the object-based inheritance hierarchy of IFC and the storage features of different database management systems (DBMS), we propose a novel object-relational storage model that uses an Oracle database to store IFC data. First, we establish the mapping rules between the data types in the IFC specification and the Oracle database. Second, we design the IFC database according to the relationships among IFC entities. Third, we parse the IFC file and extract the IFC data. Lastly, we store the IFC data in the corresponding tables of the IFC database. In our experiments, three different building models are used to demonstrate the effectiveness of the storage model. A comparison of the experimental statistics proves that IFC data are lossless during the data exchange.
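The parse-then-store pipeline described in this abstract can be illustrated with a minimal sketch. The regex, table schema, and sample entities below are illustrative assumptions, not the authors' implementation, and sqlite3 stands in for the Oracle database so the sketch stays self-contained:

```python
import re
import sqlite3  # stand-in for Oracle in this sketch

# IFC files use the STEP physical file format: each instance line looks like
# "#<id>=<ENTITY>(<attributes>);". This simplified regex captures the three parts.
STEP_LINE = re.compile(r"^#(\d+)=\s*(IFC\w+)\((.*)\);\s*$")

def parse_ifc_lines(lines):
    """Yield (instance_id, entity_type, raw_attributes) triples."""
    for line in lines:
        m = STEP_LINE.match(line.strip())
        if m:
            yield int(m.group(1)), m.group(2), m.group(3)

def store(db, instances):
    """Store parsed instances; a real object-relational model would map each
    entity type to its own table, but one generic table suffices here."""
    cur = db.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS ifc_instance "
                "(id INTEGER PRIMARY KEY, entity TEXT, attrs TEXT)")
    cur.executemany("INSERT INTO ifc_instance VALUES (?, ?, ?)", instances)
    db.commit()

sample = [
    "#1=IFCWALL('2O2Fr$t4X7Zf8NOew3FLOH',$,'Wall-001',$,$,$,$,$,$);",
    "#2=IFCDOOR('1hqIFTRjfV6AWq_bMtnZwI',$,'Door-001',$,$,$,$,$,$,$);",
]
db = sqlite3.connect(":memory:")
store(db, list(parse_ifc_lines(sample)))
print(db.execute("SELECT entity FROM ifc_instance ORDER BY id").fetchall())
# -> [('IFCWALL',), ('IFCDOOR',)]
```

Round-tripping the stored rows back to STEP text is what makes the "lossless exchange" claim testable: every id, entity type, and attribute list must survive unchanged.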
Walker, David D; van Jaarsveld, Danielle D; Skarlicki, Daniel P
2014-01-01
Incivility between customers and employees is common in many service organizations. These encounters can have negative outcomes for employees, customers, and the organization. To date, researchers have tended to study incivility as an aggregated and accumulated phenomenon (entity perspective). In the present study, we examined incivility as it occurs during a specific service encounter (event perspective) alongside the entity perspective. Using a mixed-method multilevel field study of customer service interactions, we show that individual customer incivility encounters (i.e., events) trigger employee incivility as a function of the employee's overall accumulated impression of the (in)civility in his or her customer interactions, such that the effects are more pronounced among employees who generally perceive their customer interactions to be more versus less civil. We also find that these interactive effects occur only among employees who are lower (vs. higher) in negative affectivity. Our results show that, in order to expand the understanding of customer incivility, it is important to study the incivility encounter, the social context in which negative customer interactions occur, and individual differences. PsycINFO Database Record (c) 2014 APA, all rights reserved
NASA Astrophysics Data System (ADS)
Szajnfarber, Zoe; Stringfellow, Margaret V.; Weigel, Annalisa L.
2010-11-01
This paper captures a first detailed attempt to quantitatively analyze the innovation history of the space sector. Building on a communication satellite innovation metric and a spacecraft innovation framework developed as part of an ongoing project, this paper presents a preliminary model of global communication satellite innovation. In addition to innovation being a function of the rate of performance normalized by price, innovation was found to be strongly influenced by characteristics of the customer-contractor contractual relationship. Specifically, Department of Defense contracts tend to result in a lower level of innovation on average as compared to other customers. Also, particular customer-contractor pairs perform differently and exhibit a second order relationship in time.
Code of Federal Regulations, 2011 CFR
2011-10-01
... CONTRACT ADMINISTRATION AND AUDIT SERVICES Contractor Performance Information 42.1501 General. Past... commitment to customer satisfaction; the contractor's reporting into databases (see subparts 4.14 and 4.15...
Code of Federal Regulations, 2012 CFR
2012-10-01
... CONTRACT ADMINISTRATION AND AUDIT SERVICES Contractor Performance Information 42.1501 General. Past... commitment to customer satisfaction; the contractor's reporting into databases (see subparts 4.14 and 4.15...
Walker, David D; van Jaarsveld, Danielle D; Skarlicki, Daniel P
2017-02-01
Customer service employees tend to react negatively to customer incivility by demonstrating incivility in return, thereby likely reducing customer service quality. Research, however, has yet to uncover precisely what customers do that results in employee incivility. Through transcript and computerized text analysis in a multilevel, multisource, mixed-method field study of customer service events (N = 434 events), we found that employee incivility can occur as a function of customer (a) aggressive words, (b) second-person pronoun use (e.g., you, your), (c) interruptions, and (d) positive emotion words. First, the positive association between customer aggressive words and employee incivility was more pronounced when the verbal aggression included second-person pronouns, which we label targeted aggression. Second, we observed a 2-way interaction between targeted aggression and customer interruptions such that employees demonstrated more incivility when targeted customer verbal aggression was accompanied by more (vs. fewer) interruptions. Third, this 2-way interaction predicting employee incivility was attenuated when customers used positive emotion words. Our results support a resource-based explanation, suggesting that customer verbal aggression consumes employee resources potentially leading to self-regulation failure, whereas positive emotion words from customers can help replenish employee resources that support self-regulation. The present study highlights the advantages of examining what occurs within customer-employee interactions to gain insight into employee reactions to customer incivility. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Open source database of images DEIMOS: extension for large-scale subjective image quality assessment
NASA Astrophysics Data System (ADS)
Vítek, Stanislav
2014-09-01
DEIMOS (Database of Images: Open Source) is an open-source database of images and video sequences for testing, verification and comparison of various image and/or video processing techniques such as compression, reconstruction and enhancement. This paper deals with an extension of the database that allows large-scale web-based subjective image quality assessment. The extension implements both an administrative and a client interface. The proposed system is aimed mainly at mobile communication devices and takes advantage of HTML5 technology; participants do not need to install any application, and assessment can be performed using a web browser. The assessment campaign administrator can select images from the large database and then apply rules defined by various test procedure recommendations. The standard test procedures may be fully customized and saved as templates. Alternatively, the administrator can define a custom test using images from the pool and other components, such as evaluation forms and ongoing questionnaires. The image sequence is delivered to the online client, e.g. a smartphone or tablet, as a fully automated assessment sequence, or the viewer can decide on the timing of the assessment if required. Environmental data and viewing conditions (e.g. illumination, vibrations, GPS coordinates, etc.) may be collected and subsequently analyzed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
When done well, modular home production can provide lower costs and excellent quality control (QC) compared to conventional home building methods, while still allowing a great deal of customization. The Consortium for Advanced Residential Buildings (CARB) is a U.S. Department of Energy Building America team that worked with three Maine companies to compare standard code-compliant modular homes with a modular zero energy home. Those companies were BrightBuilt Home (BBH), Black Bros. Builders, and Keiser Homes.
7. Photocopy of original plan and elevation of the building ...
7. Photocopy of original plan and elevation of the building by Ammi B. Young, architect. Taken from Plans of Public Buildings in Course of Construction for the United States of America under the Direction of the Secretary of Treasury. 5 vol., Washington, 1855-56. It is available at the National Archives, Washington, D.C. - U. S. Custom House, Twentieth & Post Office Streets, Galveston, Galveston County, TX
The development of a qualitative dynamic attribute value model for healthcare institutes.
Lee, Wan-I
2010-01-01
Understanding customers has become an urgent topic for increasing competitiveness. The purpose of the study was to develop a qualitative dynamic attribute value model that provides healthcare institute managers with insight into customer value. An initial open-ended questionnaire survey was conducted to select participants purposefully: a total of 427 questionnaires were distributed in two hospitals in Taiwan (one district hospital with 635 beds and one academic hospital with 2,495 beds), and 419 were returned within nine weeks. Qualitative in-depth interviews were then applied to explore customers' perspectives on value and to build a model of partial differential equations. The study identifies nine categories of value, including cost, equipment, physician background, physician care, environment, timing arrangement, relationship, brand image, and additional value, and uses them to construct an objective network for customer value and a qualitative dynamic attribute value model, in which the network shows the process by which loyalty develops via its effect on customer satisfaction, customer relationship, customer loyalty, and healthcare service. One set of equations predicts the customer relationship based on commitment, including service quality, communication, and empathy; at the same time, customer loyalty, based on trust, involves buzz marketing, brand, and image. The resulting customer value model is useful for tracing original customer attributes and identifying customers across different service shares.
The Gene Set Builder: collation, curation, and distribution of sets of genes
Yusuf, Dimas; Lim, Jonathan S; Wasserman, Wyeth W
2005-01-01
Background In bioinformatics and genomics, there are many applications designed to investigate the common properties for a set of genes. Often, these multi-gene analysis tools attempt to reveal sequential, functional, and expressional ties. However, while tremendous effort has been invested in developing tools that can analyze a set of genes, minimal effort has been invested in developing tools that can help researchers compile, store, and annotate gene sets in the first place. As a result, the process of making or accessing a set often involves tedious and time consuming steps such as finding identifiers for each individual gene. These steps are often repeated extensively to shift from one identifier type to another; or to recreate a published set. In this paper, we present a simple online tool which – with the help of the gene catalogs Ensembl and GeneLynx – can help researchers build and annotate sets of genes quickly and easily. Description The Gene Set Builder is a database-driven, web-based tool designed to help researchers compile, store, export, and share sets of genes. This application supports the 17 eukaryotic genomes found in version 32 of the Ensembl database, which includes species from yeast to human. User-created information such as sets and customized annotations are stored to facilitate easy access. Gene sets stored in the system can be "exported" in a variety of output formats – as lists of identifiers, in tables, or as sequences. In addition, gene sets can be "shared" with specific users to facilitate collaborations or fully released to provide access to published results. The application also features a Perl API (Application Programming Interface) for direct connectivity to custom analysis tools. A downloadable Quick Reference guide and an online tutorial are available to help new users learn its functionalities. 
Conclusion The Gene Set Builder is an Ensembl-facilitated online tool designed to help researchers compile and manage sets of genes in a user-friendly environment. The application can be accessed via . PMID:16371163
Integrating GIS, Archeology, and the Internet.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sera White; Brenda Ringe Pace; Randy Lee
2004-08-01
At the Idaho National Engineering and Environmental Laboratory's (INEEL) Cultural Resource Management Office, a newly developed Data Management Tool (DMT) is improving management and long-term stewardship of cultural resources. The fully integrated system links an archaeological database, a historical database, and a research database to spatial data through a customized user interface using ArcIMS and Active Server Pages. Components of the new DMT are tailored specifically to the INEEL and include automated data entry forms for historic and prehistoric archaeological sites, specialized queries and reports that address both yearly and project-specific documentation requirements, and unique field recording forms. The predictive modeling component increases the DMT's value for land use planning and long-term stewardship. The DMT enhances the efficiency of archive searches, improving customer service, oversight, and management of the large INEEL cultural resource inventory. In the future, the DMT will facilitate data sharing with regulatory agencies, tribal organizations, and the general public.
The customization of APACHE II for patients receiving orthotopic liver transplants
Moreno, Rui
2002-01-01
General outcome prediction models developed for use with large, multicenter databases of critically ill patients may not correctly estimate mortality if applied to a particular group of patients that was under-represented in the original database. The development of new diagnostic weights has been proposed as a method of adapting the general model – the Acute Physiology and Chronic Health Evaluation (APACHE) II in this case – to a new group of patients. Such customization must be empirically tested, because the original model cannot contain an appropriate set of predictive variables for the particular group. In this issue of Critical Care, Arabi and co-workers present the results of the validation of a modified model of the APACHE II system for patients receiving orthotopic liver transplants. The use of a highly heterogeneous database for which not all important variables were taken into account and of a sample too small to use the Hosmer–Lemeshow goodness-of-fit test appropriately makes their conclusions uncertain. PMID:12133174
Sukhinin, Dmitrii I.; Engel, Andreas K.; Manger, Paul; Hilgetag, Claus C.
2016-01-01
Databases of structural connections of the mammalian brain, such as CoCoMac (cocomac.g-node.org) or BAMS (https://bams1.org), are valuable resources for the analysis of brain connectivity and the modeling of brain dynamics in species such as the non-human primate or the rodent, and have also contributed to the computational modeling of the human brain. Another animal model that is widely used in electrophysiological or developmental studies is the ferret; however, no systematic compilation of brain connectivity is currently available for this species. Thus, we have started developing a database of anatomical connections and architectonic features of the ferret brain, the Ferret(connect)ome, www.Ferretome.org. The Ferretome database has adapted essential features of the CoCoMac methodology and legacy, such as the CoCoMac data model. This data model was simplified and extended in order to accommodate new data modalities that were not represented previously, such as the cytoarchitecture of brain areas. The Ferretome uses a semantic parcellation of brain regions as well as a logical brain map transformation algorithm (objective relational transformation, ORT). The ORT algorithm was also adopted for the transformation of architecture data. The database is being developed in MySQL and has been populated with literature reports on tract-tracing observations in the ferret brain using a custom-designed web interface that allows efficient and validated simultaneous input and proofreading by multiple curators. The database is equipped with a non-specialist web interface. This interface can be extended to produce connectivity matrices in several formats, including a graphical representation superimposed on established ferret brain maps. An important feature of the Ferretome database is the possibility to trace back entries in connectivity matrices to the original studies archived in the system. 
Currently, the Ferretome contains 50 reports on connections comprising 20 injection reports with more than 150 labeled source and target areas, the majority reflecting connectivity of subcortical nuclei and 15 descriptions of regional brain architecture. We hope that the Ferretome database will become a useful resource for neuroinformatics and neural modeling, and will support studies of the ferret brain as well as facilitate advances in comparative studies of mesoscopic brain connectivity. PMID:27242503
NASA Astrophysics Data System (ADS)
Shimoda, Atsushi; Kosugi, Hidenori; Karino, Takafumi; Komoda, Norihisa
This study focuses on a stock reduction method for build-to-order (BTO) products that flows surplus parts out to the market through sale by recommendation. Sale by recommendation is repeated in each business negotiation, using a recommended configuration selected from the parts inventory so as to minimize the stock deficiency or excess at the end of a certain period of the production plan. The method is based on the potential for a customer specification to be replaced by an alternative one, provided the alternative is close to the initial customer specification. A recommendation method is proposed that decides the recommended product configuration by balancing part consumption, so that the alternative specification of the configuration remains close enough to the initial customer specification to be substitutable. The method was evaluated in a simulation using real BTO manufacturing data, and the results demonstrate that the imbalance in parts-inventory consumption is reduced.
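The balancing idea in this abstract can be sketched as a small selection problem: among candidate configurations that are all acceptable substitutes for the customer's initial specification, pick the one whose part consumption leaves inventory closest to the planned end-of-period level. The scoring rule and all names below are illustrative assumptions, not the authors' algorithm:

```python
# Hypothetical sketch: score each candidate configuration by how far the
# remaining inventory would sit from the planned end-of-period stock, then
# recommend the configuration with the smallest imbalance.

def imbalance(stock, planned_end, config):
    """Sum of absolute deviations from the plan after consuming config's parts."""
    remaining = {p: stock[p] - config.get(p, 0) for p in stock}
    return sum(abs(remaining[p] - planned_end[p]) for p in stock)

def recommend(stock, planned_end, candidates):
    """Return the candidate configuration that minimizes post-sale imbalance.
    All candidates are assumed to already be acceptable substitutes."""
    return min(candidates, key=lambda c: imbalance(stock, planned_end, c))

stock = {"cpu_a": 10, "cpu_b": 2, "ssd": 8}
planned_end = {"cpu_a": 5, "cpu_b": 2, "ssd": 5}
candidates = [
    {"cpu_a": 1, "ssd": 1},   # consumes cpu_a, which is in surplus
    {"cpu_b": 1, "ssd": 1},   # consumes cpu_b, which is already scarce
]
best = recommend(stock, planned_end, candidates)
print(best)  # -> {'cpu_a': 1, 'ssd': 1}
```

In a real BTO setting the candidate set would be filtered first by a substitutability threshold on the distance between the alternative and the initial customer specification, which the sketch takes as given.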
Accelerating the Delivery of Home Performance Upgrades through a Synergistic Business Model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schirber, Tom; Ojczyk, Cindy
Achieving Building America energy savings goals (40% by 2030) will require many existing homes to install energy upgrades. Engaging large numbers of homeowners in building-science-guided upgrades during a single remodeling event has been difficult for a number of reasons. Performance upgrades in existing homes tend to occur over multiple years and usually result from component failures (furnace failure) and weather damage (ice dams, roofing, siding). This research attempted to: A) understand homeowners' motivations regarding investing in building-science-based performance upgrades; B) determine a rapidly scalable approach to engage large numbers of homeowners directly through existing customer networks; and C) assess a business model that will manage all aspects of the contractor-homeowner-performance professional interface to ensure good upgrade decisions over time. The solution results from a synergistic approach in which networks of suppliers merge with networks of homeowner customers. Companies in the $400 to $800 billion home services industry have proven direct marketing and sales proficiencies that have led to the development of vast customer networks. Companies such as pest control, lawn care, and security firms have nurtured these networks by successfully addressing the ongoing needs of homes. This long-term access to customers, and the trust established through consistent delivery, has also provided opportunities for home service providers to grow by successfully introducing new products and services like attic insulation and air sealing. The most important component for success is a business model that will facilitate and manage the process. The team analyzes a group that developed a working model.
Research on high availability architecture of SQL and NoSQL
NASA Astrophysics Data System (ADS)
Wang, Zhiguo; Wei, Zhiqiang; Liu, Hao
2017-03-01
With the advent of the era of big data, the amount and importance of data have increased dramatically. SQL databases continue to develop in performance and scalability, but more and more companies tend to choose NoSQL databases, because NoSQL databases have a simpler data model and stronger extension capacity than SQL databases. Almost all database designers, for both SQL and NoSQL systems, aim to improve performance and ensure availability through reasonable architectures that can reduce the effects of software and hardware failures, so that they can provide a better experience for their customers. In this paper, I mainly discuss the architectures of MySQL, MongoDB, and Redis, which are highly available and have been deployed in practical application environments, and design a hybrid architecture.
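A common form such a hybrid SQL/NoSQL architecture takes is cache-aside: a key-value cache (such as Redis) absorbs reads in front of the relational store (such as MySQL). The sketch below is a minimal illustration of that pattern only, not the paper's design; a plain dict stands in for Redis and sqlite3 stands in for MySQL so it runs self-contained, and MongoDB's document-store role is omitted:

```python
import sqlite3

class HybridStore:
    """Cache-aside sketch: read through a key-value cache, write to the
    relational store and invalidate the cached entry."""

    def __init__(self):
        self.cache = {}                        # Redis stand-in
        self.db = sqlite3.connect(":memory:")  # MySQL stand-in
        self.db.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v TEXT)")

    def put(self, k, v):
        # Write to the database first, then invalidate the cache entry so
        # the next read repopulates it with fresh data.
        self.db.execute("INSERT OR REPLACE INTO kv VALUES (?, ?)", (k, v))
        self.cache.pop(k, None)

    def get(self, k):
        if k in self.cache:                    # cache hit: no SQL round trip
            return self.cache[k]
        row = self.db.execute("SELECT v FROM kv WHERE k = ?", (k,)).fetchone()
        if row:
            self.cache[k] = row[0]             # populate cache on miss
        return row[0] if row else None

store = HybridStore()
store.put("user:1", "alice")
print(store.get("user:1"))  # miss: reads the SQL stand-in, fills the cache
print(store.get("user:1"))  # hit: served from the cache
```

The availability benefit comes from the two tiers failing independently: if the cache is lost, reads fall back to the database; the trade-off is the brief staleness window inherent to invalidation-based schemes.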
Shedlock, James; Frisque, Michelle; Hunt, Steve; Walton, Linda; Handler, Jonathan; Gillam, Michael
2010-04-01
How can the user's access to health information, especially full-text articles, be improved? The solution is building and evaluating the Health SmartLibrary (HSL). The setting is the Galter Health Sciences Library, Feinberg School of Medicine, Northwestern University. The HSL was built on web-based personalization and customization tools: My E-Resources, Stay Current, Quick Search, and File Cabinet. Personalization and customization data were tracked to show user activity with these value-added, online services. Registration data indicated that users were receptive to personalized resource selection and that the automated application of specialty-based, personalized HSLs was more frequently adopted than manual customization by users. Those who did customize used My E-Resources and Stay Current more often than Quick Search and File Cabinet, and most of them customized only once. Users did not always take advantage of the services designed to aid their library research experiences. When personalization was offered at registration, users readily accepted it. Customization tools were used less frequently; however, more research is needed to determine why this was the case.
Warehouses information system design and development
NASA Astrophysics Data System (ADS)
Darajatun, R. A.; Sukanta
2017-12-01
The materials/goods handling function is fundamental for companies to ensure the smooth running of their warehouses. Efficiency and organization within every aspect of the business are essential in order to gain a competitive advantage. The purpose of this research is the design and development of a Kanban-based inventory storage and delivery system. The application aims to make inventory stock checks more efficient and effective. Users can easily record finished goods coming from the production department, warehouse, customers, and suppliers. The master data are designed to be as complete as possible so that the application can be used across a variety of warehouse logistics processes. The author uses the Java programming language to develop the application, which is used for building Java Web applications, while the database used is MySQL. The system was developed with the Waterfall methodology, which comprises the stages of analysis, system design, implementation, integration, and operation and maintenance. Data were collected through observation, interviews, and a literature review.
Build an Assistive Technology Toolkit
ERIC Educational Resources Information Center
Ahrens, Kelly
2011-01-01
Assistive technology (AT) by its very nature consists of a variety of personal and customized tools for multiple learning styles and physical challenges. The author not only encourages students, parents, and educators to advocate for AT services, she also wants them to go a step further and build their own AT toolkits that can instill independence…
Building a "Motivation Toolkit" for Teaching Information Literacy.
ERIC Educational Resources Information Center
Moyer, Susan L.; Small, Ruth V.
2001-01-01
Discusses the need to motivate students to make information literacy programs successful and demonstrates how a middle school library media specialist used Small and Arnone's Motivation Overlay for Information Skills Instruction to build a set of customized toolkits to improve student research that includes the Big6[TM] approach to library and…
Building Customized University-to-Business (U2B) Partnerships
ERIC Educational Resources Information Center
Irvine, George; Verma, Lisa
2013-01-01
Continuing education (CE) units throughout the United States have successfully built University-to-Business (U2B) partnerships to provide greater value to their community partners and to increase revenue for the university. Our experience in building U2B partnerships and feedback from our partners--businesses, corporations, state agencies, and…
37. NORTH TO BINS ALONG NORTH WALL OF FACTORY BUILDING WHICH REMAIN FILLED WITH NEW OLD STOCK AND USED PARTS FOR ELI WINDMILLS. THE ROPE AT THE LOWER FOREGROUND WAS USED IN ERECTING WINDMILLS AND TOWERS FOR CUSTOMERS. - Kregel Windmill Company Factory, 1416 Central Avenue, Nebraska City, Otoe County, NE
Toy Modification Note: Build It Yourself Battery Interrupter. Revised.
ERIC Educational Resources Information Center
Vanderheiden, Gregg C.; Brandenburg, S.
This toy modification note presents illustrated instructions on how to build a battery interrupter that permits on/off control of battery-operated toys without modification of the toys themselves. The device allows for a separate control switch which can be custom designed to fit a handicapped user's needs. Information on the construction and use…
Curating the Web: Building a Google Custom Search Engine for the Arts
ERIC Educational Resources Information Center
Hennesy, Cody; Bowman, John
2008-01-01
Google's first foray onto the web made search simple and results relevant. With its Co-op platform, Google has taken another step toward dramatically increasing the relevancy of search results, further adapting the World Wide Web to local needs. Google Custom Search Engine, a tool on the Co-op platform, puts one in control of his or her own search…
2006-02-07
consumer confidence will boost and “just-in-time” logistics will be the lighter and leaner combat logistics support the Marine Corps has desperately...stove-piped logistics systems have done little to build customer or “warrior confidence”. Customers requesting assets from the supply system...have become used to waiting thirty plus days to receive the asset. Consumers outside of the Marine Corps would refuse to accept such delays, but
Raub, Steffen; Liao, Hui
2012-05-01
We developed and tested a cross-level model of the antecedents and outcomes of proactive customer service performance. Results from a field study of 900 frontline service employees and their supervisors in 74 establishments of a multinational hotel chain located in Europe, the Middle East, Africa, and Asia demonstrated measurement equivalence and suggested that, after controlling for service climate, initiative climate at the establishment level and general self-efficacy at the individual level predicted employee proactive customer service performance and interacted in a synergistic way. Results also showed that at the establishment level, controlling for service climate and collective general service performance, initiative climate was positively and indirectly associated with customer service satisfaction through the mediation of aggregated proactive customer service performance. We discuss important theoretical and practical implications of these findings. (PsycINFO Database Record (c) 2012 APA, all rights reserved).
Development of new data acquisition system for COMPASS experiment
NASA Astrophysics Data System (ADS)
Bodlak, M.; Frolov, V.; Jary, V.; Huber, S.; Konorov, I.; Levit, D.; Novy, J.; Salac, R.; Virius, M.
2016-04-01
This paper presents the development and recent status of the new data acquisition system of the COMPASS experiment at CERN, with up to 50 kHz trigger rate and 36 kB average event size during a 10 second period with beam followed by approximately a 40 second period without beam. In the original DAQ, the event building is performed by software deployed on a switched computer network; moreover, the data readout is based on the deprecated PCI technology. The new system replaces the event building network with custom FPGA-based hardware. The custom cards are introduced and the advantages of FPGA technology for DAQ related tasks are discussed. In this paper, we focus on the software part that is mainly responsible for control and monitoring. Most of the system can run as slow control; only the readout process has realtime requirements. The design of the software is built on state machines that are implemented using the Qt framework; communication between the remote nodes that form the software architecture is based on the DIM library and IPBus technology. Furthermore, PHP and JS languages are used to maintain the system configuration; the MySQL database was selected as storage for both the configuration of the system and system messages. The system has been designed with a maximum throughput of 1500 MB/s and a large buffering ability used to spread load on readout computers over a longer period of time. Great emphasis is put on data latency, data consistency, and even timing checks, which are done at each stage of event assembly. The system collects the results of these checks, which, together with a special data format, allows the software to localize the origin of problems in the data transmission process. A prototype version of the system has already been developed and tested; the new system fulfills all given requirements. It is expected that the full-scale version of the system will be finalized in June 2014 and deployed in September, provided that tests with a cosmic run succeed.
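The control software's state-machine design can be illustrated with a minimal sketch in plain Python. The actual COMPASS system uses Qt state machines over DIM/IPBus; the states and events below are invented for the example, not taken from the COMPASS design:

```python
# Transition table: (current state, event) -> next state.
# Illegal events in a given state are rejected rather than ignored,
# which is the property that makes a DAQ run-control FSM robust.
TRANSITIONS = {
    ("idle", "configure"): "configured",
    ("configured", "start"): "running",
    ("running", "stop"): "configured",
    ("configured", "reset"): "idle",
}

class DaqNode:
    """One remote node of the control architecture (illustrative)."""

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = TRANSITIONS[key]
        return self.state

node = DaqNode()
node.handle("configure")
node.handle("start")
print(node.state)  # running
```

Encoding the allowed transitions as data rather than nested conditionals keeps the run-control logic auditable, which matters when many remote nodes must stay in lockstep.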
Fisk, Glenda M; Neville, Lukas B
2011-10-01
This exploratory study examines the nature of customer entitlement and its impact on front-line service employees. In an open-ended qualitative inquiry, 56 individuals with waitstaff experience described the types of behaviors entitled customers engage in and the kinds of service-related "perks" these individuals feel deserving of. Participants explained how they responded to entitled customers, how and when managers became involved, and how their dealings with these patrons influenced their subjective physical and psychological well-being. We found that the behaviors of entitled customers negatively impacted waitstaff employees. Participants reported physiological arousal, negative affect, burnout, and feelings of dehumanization as a result of dealing with these patrons. While respondents drew on a variety of strategies to manage their encounters with entitled customers, they indicated workplace support was often informal and described feeling abandoned by management in dealing with this workplace stressor. Approaching customer entitlement as a form of microaggression, we offer recommendations for practice and suggest new directions for future research. (PsycINFO Database Record (c) 2011 APA, all rights reserved).
An approach in building a chemical compound search engine in oracle database.
Wang, H; Volarath, P; Harrison, R
2005-01-01
Searching for or identifying chemical compounds is an important process in drug design and in chemistry research. An efficient search engine involves a close coupling of the search algorithm and the database implementation. The database must process chemical structures, which demands approaches to represent, store, and retrieve structures in a database system. In this paper, a general database framework for serving as a chemical compound search engine in an Oracle database is described. The framework is devoted to eliminating data type constraints for potential search algorithms, which is a crucial step toward building a domain-specific query language on top of SQL. A search engine implementation based on the database framework is also demonstrated. The convenience of the implementation emphasizes the efficiency and simplicity of the framework.
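The paper's Oracle framework is not reproduced here, but the idea of layering a chemistry-aware query on top of SQL can be sketched with a toy compound table. In this sketch sqlite3 stands in for Oracle, and a case-sensitive text pre-filter over SMILES strings stands in for real substructure (graph) matching:

```python
import sqlite3

# Toy compound store; names and SMILES are illustrative data only.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE compound (id INTEGER PRIMARY KEY, name TEXT, smiles TEXT)")
db.executemany(
    "INSERT INTO compound (name, smiles) VALUES (?, ?)",
    [("ethanol", "CCO"), ("acetic acid", "CC(=O)O"), ("benzene", "c1ccccc1")],
)

def find_with_fragment(fragment):
    """Naive substructure pre-filter: case-sensitive substring match via
    SQLite's instr(). A real chemical engine would match molecular graphs,
    not text, but the query still runs entirely inside SQL."""
    rows = db.execute(
        "SELECT name FROM compound WHERE instr(smiles, ?) > 0 ORDER BY id",
        (fragment,),
    )
    return [name for (name,) in rows]

print(find_with_fragment("CC"))  # ['ethanol', 'acetic acid']
```

Pushing the domain-specific predicate into the SQL layer, as the paper advocates, lets the database's own indexing and planning machinery participate in chemical queries.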
Graphical user interfaces for symbol-oriented database visualization and interaction
NASA Astrophysics Data System (ADS)
Brinkschulte, Uwe; Siormanolakis, Marios; Vogelsang, Holger
1997-04-01
In this approach, two basic services designed for the engineering of computer-based systems are combined: a symbol-oriented man-machine service and a high-speed database service. The man-machine service is used to build graphical user interfaces (GUIs) for the database service; these interfaces are stored using the database service. The idea is to create a GUI-builder and a GUI-manager for the database service, based upon the man-machine service and the concept of symbols. With user-definable and predefined symbols, database contents can be visualized and manipulated in a very flexible and intuitive way. Using the GUI-builder and GUI-manager, users can build and operate their own graphical user interfaces for a given database according to their needs without writing a single line of code.
Renaissance architecture for Ground Data Systems
NASA Technical Reports Server (NTRS)
Perkins, Dorothy C.; Zeigenfuss, Lawrence B.
1994-01-01
The Mission Operations and Data Systems Directorate (MO&DSD) has embarked on a new approach for developing and operating Ground Data Systems (GDS) for flight mission support. This approach is driven by the goals of minimizing cost and maximizing customer satisfaction. Achievement of these goals is realized through the use of a standard set of capabilities which can be modified to meet specific user needs. This approach, which is called the Renaissance architecture, stresses the engineering of integrated systems, based upon workstation/local area network (LAN)/fileserver technology and reusable hardware and software components called 'building blocks.' These building blocks are integrated with mission specific capabilities to build the GDS for each individual mission. The building block approach is key to the reduction of development costs and schedules. Also, the Renaissance approach allows the integration of GDS functions that were previously provided via separate multi-mission facilities. With the Renaissance architecture, the GDS can be developed by the MO&DSD or all, or part, of the GDS can be operated by the user at their facility. Flexibility in operation configuration allows both selection of a cost-effective operations approach and the capability for customizing operations to user needs. Thus the focus of the MO&DSD is shifted from operating systems that we have built to building systems and, optionally, operations as separate services. Renaissance is actually a continuous process. Both the building blocks and the system architecture will evolve as user needs and technology change. Providing GDS on a per user basis enables this continuous refinement of the development process and product and allows the MO&DSD to remain a customer-focused organization. 
This paper will present the activities and results of the MO&DSD initial efforts toward the establishment of the Renaissance approach for the development of GDS, with a particular focus on both the technical and process implications posed by Renaissance to the MO&DSD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shah, Monisha; Burr, Andrew; Schulte, Andrew
2016-08-26
The Better Buildings Energy Data Accelerator (BBEDA) is a unique effort that has supported 22 pairs of local governments and their utility companies to help building owners gain access to their whole-building energy data. Municipal and Utility BBEDA Partners committed to develop streamlined and easy-to-use solutions to provide whole-building energy data, especially for multitenant commercial buildings, by the end of 2015. As a result, building owners would be able to make data-driven decisions about their buildings by utilizing readily available energy consumption data for entire buildings. Traditionally, data access was difficult to implement due to technical barriers and the lack of clear value propositions for the utilities. During the past two years, BBEDA has taken a hands-on approach to overcome these barriers by offering a platform for the partners to discuss their challenges and solutions. Customized support was also provided to Partners building their local strategies. Based on the lessons learned from the partners, BBEDA developed a final toolkit with guiding documents that addressed key barriers and served as a resource for the other cities and utilities attempting to establish whole-building data access, including an exploration of opportunities to apply the whole-building data to various aspects of utility demand-side management (DSM) programs. BBEDA has been a catalyst for market transformation by addressing the upstream (to efficiency implementation) barrier of data access, demonstrated through the success of the BBEDA partners to address policy, engagement, and technical hurdles and arrive at replicable solutions to make data access a standard practice nationwide. As a result of best practices identified by the BBEDA, 18 utilities serving more than 2.6 million commercial customers nationwide will provide whole-building energy data access to building owners by 2017.
This historic expansion of data accessibility will increase building energy benchmarking, the first step many building owners take to improve the energy efficiency of their buildings.
CRM System Implementation in a Multinational Enterprise
NASA Astrophysics Data System (ADS)
Mishra, Alok; Mishra, Deepti
The concept of customer relationship management (CRM) resonates with managers in today's competitive economy. As more and more organizations realize the significance of becoming customer-centric in today's competitive era, they embrace CRM as a core business strategy. CRM, an integration of information technology and relationship marketing, provides the infrastructure that facilitates long-term relationship building with customers at an enterprise-wide level. Successful CRM implementations are complex, expensive, and rarely purely technical projects. This paper presents the successful implementation of CRM in a multinational organization. This study facilitates an understanding of the transition, constraints, and implementation of CRM in multinational enterprises.
Searching Databases without Query-Building Aids: Implications for Dyslexic Users
ERIC Educational Resources Information Center
Berget, Gerd; Sandnes, Frode Eika
2015-01-01
Introduction: Few studies document the information searching behaviour of users with cognitive impairments. This paper therefore addresses the effect of dyslexia on information searching in a database with no tolerance for spelling errors and no query-building aids. The purpose was to identify effective search interface design guidelines that…
... Safety and Health. Emergency Response Safety and Health Database. Methanol: systemic agent. Updated May 28, 2015.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Matuszak, M; Anderson, C; Lee, C
Purpose: With electronic medical records, patient information for the treatment planning process has become disseminated across multiple applications with limited quality control and many associated failure modes. We present the development of a single application with a centralized database to manage the planning process. Methods: The system was designed to replace current functionalities of (i) static directives representing the physician intent for the prescription and planning goals, localization information for delivery, and other information, (ii) planning objective reports, (iii) localization and image guidance documents and (iv) the official radiation therapy prescription in the medical record. Using the Eclipse Scripting Application Programming Interface, a plug-in script with an associated domain-specific SQL Server database was created to manage the information in (i)–(iv). The system’s user interface and database were designed by a team of physicians, clinical physicists, database experts, and software engineers to ensure usability and robustness for clinical use. Results: The resulting system has been fully integrated within the TPS via a custom script and database. Planning scenario templates, version control, approvals, and logic-based quality control allow this system to fully track and document the planning process as well as physician approval of tradeoffs while improving the consistency of the data. Multiple plans and prescriptions are supported along with non-traditional dose objectives and evaluation such as biologically corrected models, composite dose limits, and management of localization goals. User-specific custom views were developed for the attending physician review, physicist plan checks, treating therapists, and peer review in chart rounds.
Conclusion: A method was developed to maintain cohesive information throughout the planning process within one integrated system by using a custom treatment planning management application that interfaces directly with the TPS. Future work includes quantifying the improvements in quality, safety and efficiency that are possible with the routine clinical use of this system. Supported in part by NIH-P01-CA-059827.
Data model and relational database design for the New England Water-Use Data System (NEWUDS)
Tessler, Steven
2001-01-01
The New England Water-Use Data System (NEWUDS) is a database for the storage and retrieval of water-use data. NEWUDS can handle data covering many facets of water use, including (1) tracking various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the description, classification and location of places and organizations involved in water-use activities; (3) details about measured or estimated volumes of water associated with water-use activities; and (4) information about data sources and water resources associated with water use. In NEWUDS, each water transaction occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NEWUDS model are site, conveyance, transaction/rate, location, and owner. Other important entities include water resources (used for withdrawals and returns), data sources, and aliases. Multiple water-exchange estimates can be stored for individual transactions based on different methods or data sources. Storage of user-defined details is accommodated for several of the main entities. Numerous tables containing classification terms facilitate detailed descriptions of data items and can be used for routine or custom data summarization. NEWUDS handles single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database structure. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
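The core entities named above (site, conveyance, transaction) map naturally onto relational tables. A minimal sketch using sqlite3 follows; the column names and sample data are illustrative, not the published NEWUDS schema:

```python
import sqlite3

# Sketch of the site -> conveyance -> transaction pattern: each transaction
# records a unidirectional water exchange along one conveyance between sites.
db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE site (
    site_id  INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,
    location TEXT,
    owner    TEXT
);
CREATE TABLE conveyance (
    conveyance_id INTEGER PRIMARY KEY,
    from_site     INTEGER NOT NULL REFERENCES site(site_id),
    to_site       INTEGER NOT NULL REFERENCES site(site_id)
);
-- Several estimates per transaction allow different methods or data sources.
CREATE TABLE water_transaction (
    transaction_id INTEGER PRIMARY KEY,
    conveyance_id  INTEGER NOT NULL REFERENCES conveyance(conveyance_id),
    volume_mgd     REAL,
    method         TEXT,
    data_source    TEXT
);
""")
db.execute("INSERT INTO site (site_id, name) VALUES (1, 'Well field'), (2, 'Treatment plant')")
db.execute("INSERT INTO conveyance VALUES (1, 1, 2)")
db.execute("INSERT INTO water_transaction VALUES (1, 1, 2.5, 'metered', 'state report')")

total = db.execute("SELECT SUM(volume_mgd) FROM water_transaction").fetchone()[0]
print(total)  # 2.5
```

Because every exchange is an edge between exactly two sites, network-wide summaries (total withdrawals, flows into one site) reduce to ordinary SQL aggregation over the transaction table.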
Building Change Detection from LIDAR Point Cloud Data Based on Connected Component Analysis
NASA Astrophysics Data System (ADS)
Awrangjeb, M.; Fraser, C. S.; Lu, G.
2015-08-01
Building data are one of the important data types in a topographic database. Building change detection after a period of time is necessary for many applications, such as identification of informal settlements. Based on the detected changes, the database has to be updated to ensure its usefulness. This paper proposes an improved building detection technique, which is a prerequisite for many building change detection techniques. The improved technique examines the gap between neighbouring buildings in the building mask in order to avoid undersegmentation errors. Then, a new building change detection technique from LIDAR point cloud data is proposed. Buildings which are totally new or demolished are directly added to the change detection output. However, for demolished or extended building parts, a connected component analysis algorithm is applied and for each connected component its area, width and height are estimated in order to ascertain if it can be considered as a demolished or new building part. Finally, a graphical user interface (GUI) has been developed to update detected changes to the existing building map. Experimental results show that the improved building detection technique can offer not only higher performance in terms of completeness and correctness, but also a lower number of undersegmentation errors as compared to its original counterpart. The proposed change detection technique produces no omission errors and thus it can be exploited for enhanced automated building information updating within a topographic database. Using the developed GUI, the user can quickly examine each suggested change and indicate his/her decision with a minimum number of mouse clicks.
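The connected component analysis step described above can be sketched in plain Python: label 4-connected regions in a binary change mask, then keep only components whose area and extent exceed thresholds. The thresholds here are invented for illustration, not the paper's values:

```python
from collections import deque

def connected_components(mask):
    """Label 4-connected components of a binary grid via BFS."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    components = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                queue, cells = deque([(r, c)]), []
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    cells.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < rows and 0 <= nx < cols and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                components.append(cells)
    return components

def significant_changes(mask, min_area=3, min_extent=2):
    """Keep components large enough to count as a changed building part."""
    keep = []
    for cells in connected_components(mask):
        ys = [y for y, _ in cells]
        xs = [x for _, x in cells]
        width = max(xs) - min(xs) + 1
        height = max(ys) - min(ys) + 1
        if len(cells) >= min_area and min(width, height) >= min_extent:
            keep.append(cells)
    return keep

mask = [
    [1, 1, 0, 0],
    [1, 1, 0, 1],   # the isolated pixel is noise, not a building part
    [0, 0, 0, 0],
]
print(len(significant_changes(mask)))  # 1
```

Filtering each component on area, width, and height is what lets the method discard sensor noise while retaining genuinely demolished or extended building parts.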
NASA Astrophysics Data System (ADS)
Mecray, E. L.; Dissen, J.
2016-12-01
Federal agencies across multiple sectors from transportation to health, emergency management and agriculture, are now requiring their key stakeholders to identify and plan for climate-related impacts. Responding to the drumbeat for climate services at the regional and local scale, the National Oceanic and Atmospheric Administration (NOAA) formed its Regional Climate Services (RCS) program to include Regional Climate Services Directors (RCSD), Regional Climate Centers, and state climatologists in a partnership. Since 2010, the RCS program has engaged customers across the country and amongst many of the nation's key economic sectors to compile information requirements, deliver climate-related products and services, and build partnerships among federal agencies and their regional climate entities. The talk will include a sketch from the Eastern Region that may shed light on the interaction of the multiple entities working at the regional scale. Additionally, we will show examples of our interagency work with the Department of Interior, the Department of Agriculture, and others in NOAA to deliver usable and trusted climate information and resources. These include webinars, print material, and face-to-face customer engagements to gather and respond to information requirements. NOAA/National Centers for Environmental Information's RCSDs work on-the-ground to learn from customers about their information needs and their use of existing tools and resources. As regional leads, the RCSDs work within NOAA and with our regional partners to ensure the customer receives a broad picture of the tools and information from across the nation.
The Development of a Qualitative Dynamic Attribute Value Model for Healthcare Institutes
Lee, Wan-I
2010-01-01
Background: Understanding customers has become an urgent topic for increasing competitiveness. The purpose of the study was to develop a qualitative dynamic attribute value model which provides insight into customers' values for healthcare institute managers, beginning with an open-ended questionnaire survey to select participants purposefully. Methods: A total of 427 questionnaires were distributed in two hospitals in Taiwan (one district hospital with 635 beds and one academic hospital with 2495 beds) and 419 questionnaires were received in nine weeks. Qualitative in-depth interviews were then applied to explore customers' perspectives on value for building a model of partial differential equations. Results: This study concludes nine categories of value, including cost, equipment, physician background, physician care, environment, timing arrangement, relationship, brand image and additional value, to construct an objective network for customer value and a qualitative dynamic attribute value model, where the network shows the value process of loyalty development via its effect on customer satisfaction, customer relationship, customer loyalty and healthcare service. Conclusion: One set predicts the customer relationship based on commitment, including service quality, communication and empathy. At the same time, customer loyalty, based on trust, involves buzz marketing, brand and image. Customer value of the current instance is useful for traversing original customer attributes and identifying customers on different service shares. PMID:23113034
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stadler, Michael; Marnay, Chris; Donadee, Jon
2011-02-06
Together with OSIsoft LLC as its private sector partner and matching sponsor, the Lawrence Berkeley National Laboratory (Berkeley Lab) won an FY09 Technology Commercialization Fund (TCF) grant from the U.S. Department of Energy. The goal of the project is to commercialize Berkeley Lab's optimizing program, the Distributed Energy Resources Customer Adoption Model (DER-CAM), using a software as a service (SaaS) model with OSIsoft as its first non-scientific user. OSIsoft could in turn provide optimization capability to its software clients. In this way, energy efficiency and/or carbon minimizing strategies could be made readily available to commercial and industrial facilities. Specialized versions of DER-CAM dedicated to solving OSIsoft's customer problems have been set up on a server at Berkeley Lab. The objective of DER-CAM is to minimize the cost of technology adoption and operation or carbon emissions, or combinations thereof. DER-CAM determines which technologies should be installed and operated based on specific site load, price information, and performance data for available equipment options. An established user of OSIsoft's PI software suite, the University of California, Davis (UCD), was selected as a demonstration site for this project. UCD's participation in the project is driven by its motivation to reduce its carbon emissions. The campus currently buys electricity economically through the Western Area Power Administration (WAPA). The campus does not therefore face compelling cost incentives to improve the efficiency of its operations, but is nonetheless motivated to lower the carbon footprint of its buildings. Berkeley Lab attempted to demonstrate a scenario wherein UCD is forced to purchase electricity on a standard time-of-use tariff from Pacific Gas and Electric (PG&E), which is a concern to Facilities staff. Additionally, DER-CAM has been set up to consider the variability of carbon emissions throughout the day and seasons.
Two distinct analyses of value to UCD are possible using this approach. First, optimal investment choices for buildings under the two alternative objectives can be derived. Second, a week-ahead building operations forecaster has been written that executes DER-CAM to find an optimal operating schedule for buildings given their expected building energy services requirements, electricity prices, and local weather. As part of its matching contribution, OSIsoft provided a full implementation of PI and a server to install it on at Berkeley Lab. Using the PItoPI protocol, this gives Berkeley Lab researchers direct access to UCD's PI database. However, this arrangement is in itself inadequate for performing optimizations. Additional data not included in UCD's PI database would be needed, and the campus was not able to provide this information. This report details the process, results, and lessons learned of this commercialization project.
The Human Communication Research Centre dialogue database.
Anderson, A H; Garrod, S C; Clark, A; Boyle, E; Mullin, J
1992-10-01
The HCRC dialogue database consists of over 700 transcribed and coded dialogues from pairs of speakers aged from seven to fourteen. The speakers are recorded while tackling co-operative problem-solving tasks and the same pairs of speakers are recorded over two years tackling 10 different versions of our two tasks. In addition there are over 200 dialogues recorded between pairs of undergraduate speakers engaged on versions of the same tasks. Access to the database, and to its accompanying custom-built search software, is available electronically over the JANET system by contacting liz@psy.glasgow.ac.uk, from whom further information about the database and a user's guide to the database can be obtained.
Equipped for the Future. A Reform Agenda for Adult Literacy and Lifelong Learning.
ERIC Educational Resources Information Center
Stein, Sondra Gayle
The National Institute for Literacy's Equipped for the Future initiative was undertaken to achieve customer-driven, standards-based reform of adult literacy and lifelong learning through a broad, national consensus-building process. The initiative's six stages are as follows: (1) build consensus on the knowledge and skills adults need to fulfill…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Achieving Building America energy savings goals (40 percent by 2030) will require many existing homes to install energy upgrades. Engaging large numbers of homeowners in building science-guided upgrades during a single remodeling event has been difficult for a number of reasons. Performance upgrades in existing homes tend to occur over multiple years and usually result from component failures (furnace failure) and weather damage (ice dams, roofing, siding). This research attempted to: A) Understand the homeowner's motivations regarding investing in building science-based performance upgrades; B) Determine a rapidly scalable approach to engage large numbers of homeowners directly through existing customer networks; and C) Assess a business model that will manage all aspects of the contractor-homeowner-performance professional interface to ensure good upgrade decisions over time. The solution results from a synergistic approach utilizing networks of suppliers merging with networks of homeowner customers. Companies in the $400 to $800 billion home services industry have proven direct marketing and sales proficiencies that have led to the development of vast customer networks. Companies such as pest control, lawn care, and security have nurtured these networks by successfully addressing the ongoing needs of homes. This long-term access to customers and trust established with consistent delivery has also provided opportunities for home service providers to grow by successfully introducing new products and services like attic insulation and air sealing. The most important component for success is a business model that will facilitate and manage the process. The team analyzes a group that developed a working model.
NASA Astrophysics Data System (ADS)
Cornaglia, Bruno; Young, Gavin; Marchetta, Antonio
2015-12-01
Fixed broadband network deployments are moving inexorably to the use of Next Generation Access (NGA) technologies and architectures. These NGA deployments involve building fiber infrastructure increasingly closer to the customer in order to increase the proportion of fiber on the customer's access connection (Fibre-To-The-Home/Building/Door/Cabinet… i.e. FTTx). This increases the speed of services that can be sold and will be increasingly required to meet the demands of new generations of video services as we evolve from HDTV to "Ultra-HD TV" with 4K and 8K video resolution. However, building fiber access networks is a costly endeavor, requiring significant capital to achieve broad geographic coverage. Hence many companies are forming partnerships and joint ventures in order to share the NGA network construction costs. One form of such a partnership involves two companies agreeing to each build out a certain geographic area and then "cross-selling" NGA products to each other in order to access customers within their partner's footprint (NGA coverage area). This is tantamount to a bilateral wholesale partnership. The concept of Fixed Access Network Sharing (FANS) is to address the possibility of sharing infrastructure with a high degree of flexibility for all network operators involved. By providing greater configuration control over the NGA network infrastructure, the service provider has a greater ability to define the network and hence to define their product capabilities at the active layer. This gives the service provider partners greater product development autonomy plus the ability to differentiate from each other at the active network layer.
Integration of Oracle and Hadoop: Hybrid Databases Affordable at Scale
NASA Astrophysics Data System (ADS)
Canali, L.; Baranowski, Z.; Kothuri, P.
2017-10-01
This work reports on the activities aimed at integrating Oracle and Hadoop technologies for the use cases of CERN database services and in particular on the development of solutions for offloading data and queries from Oracle databases into Hadoop-based systems. The goal and interest of this investigation is to increase the scalability and optimize the cost/performance footprint for some of our largest Oracle databases. These concepts have been applied, among others, to build offline copies of CERN accelerator controls and logging databases. The tested solution allows running reports on the controls data offloaded in Hadoop without affecting the critical production database, providing both performance benefits and cost reduction for the underlying infrastructure. Other use cases discussed include building hybrid database solutions with Oracle and Hadoop, offering the combined advantages of a mature relational database system with a scalable analytics engine.
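To illustrate the offloading pattern this abstract describes, here is a minimal sketch in Python, with sqlite3 standing in for the production Oracle database and partitioned CSV text standing in for files on Hadoop storage. The table name, columns, and data are hypothetical; the point is that reports run against the offloaded copy only, leaving the production database untouched.

```python
import csv
import io
import sqlite3

def offload_table(conn, table, partition_col):
    """Export each partition of a production table to its own CSV buffer,
    mimicking offloading rows into partitioned files on Hadoop."""
    partitions = {}
    cur = conn.execute(f"SELECT * FROM {table} ORDER BY {partition_col}")
    cols = [d[0] for d in cur.description]
    for row in cur:
        key = row[cols.index(partition_col)]
        buf = partitions.setdefault(key, io.StringIO())
        if buf.tell() == 0:
            csv.writer(buf).writerow(cols)  # header once per partition
        csv.writer(buf).writerow(row)
    return {k: v.getvalue() for k, v in partitions.items()}

def report_from_offload(partitions, partition_key):
    """Run a simple report (row count) against the offloaded copy only."""
    reader = csv.reader(io.StringIO(partitions[partition_key]))
    next(reader)  # skip header
    return sum(1 for _ in reader)

# Toy "production" database standing in for Oracle.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (day TEXT, device TEXT, value REAL)")
conn.executemany("INSERT INTO logs VALUES (?, ?, ?)",
                 [("2017-01-01", "magnet", 1.0),
                  ("2017-01-01", "rf", 2.0),
                  ("2017-01-02", "magnet", 3.0)])
parts = offload_table(conn, "logs", "day")
print(report_from_offload(parts, "2017-01-01"))  # → 2
```

In the real system the offline copy would be queried by a Hadoop-side engine; this sketch only shows the separation between the production store and the report target.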
NASA Astrophysics Data System (ADS)
Heeager, Lise Tordrup; Tjørnehøj, Gitte
Quality assurance technology is a formal control mechanism aiming at increasing the quality of the product exchanged between vendors and customers. Studies of the adoption of this technology in the field of system development rarely focus on the role of the relationship between the customer and vendor in the process. We have studied how the process of adopting quality assurance technology by a small Danish IT vendor developing pharmacy software for a customer in the public sector was influenced by the relationship with the customer. The case study showed that the adoption process was shaped to a high degree by the relationship and vice versa. The prior high level of trust and mutual knowledge helped the parties negotiate mutually feasible solutions throughout the adoption process. We thus advise enhancing trust-building processes to strengthen the relationships and to balance formal control and social control to increase the likelihood of a successful outcome of the adoption of quality assurance technology in a customer-vendor relationship.
Establishment of Low Energy Building materials and Equipment Database Based on Property Information
NASA Astrophysics Data System (ADS)
Kim, Yumin; Shin, Hyery; Lee, Seungeon
2018-03-01
The purpose of this study is to provide a reliable materials information portal through the establishment of public big data, collecting and integrating scattered low energy building materials and equipment data. Few existing low energy building materials databases in Korea have provided material properties as factors influencing material pricing. The framework of the database was defined with reference to the Korea On-line E-Procurement System. More than 45,000 records were gathered according to the entity specifications, and price prediction models for chillers were developed from the gathered data. To improve the usability of the prediction model, detailed properties should be analysed for each item.
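A property-based price prediction model of the kind the abstract mentions can be sketched as a simple least-squares fit of price against one property. The records below (cooling capacity versus price) are hypothetical, not data from the Korean database; the study's actual models may use more properties and a different method.

```python
def fit_line(xs, ys):
    """Ordinary least squares for a single property: price ≈ a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    a = num / den
    return a, my - a * mx

# Hypothetical chiller records: (cooling capacity in kW, price in million KRW).
capacity = [100.0, 200.0, 300.0, 400.0]
price = [20.0, 35.0, 50.0, 65.0]
a, b = fit_line(capacity, price)
print(round(a * 250.0 + b, 1))  # predicted price for a 250 kW chiller → 42.5
```

With more properties this generalizes to multiple regression; the sketch only shows how a property becomes a pricing factor.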
CERN alerter—RSS based system for information broadcast to all CERN offices
NASA Astrophysics Data System (ADS)
Otto, R.
2008-07-01
Nearly every large organization uses a tool to broadcast messages and information across the internal campus (messages like alerts announcing interruption in services or just information about upcoming events). These tools typically allow administrators (operators) to send 'targeted' messages which are sent only to specific groups of users or computers, e.g. only those located in a specified building or connected to a particular computing service. CERN has a long history of such tools: the VMS SPM "MESSAGE" command, Zephyr [2] and, most recently, the NICE Alerter based on the NNTP protocol. The NICE Alerter, used on all Windows-based computers, had to be phased out as a consequence of phasing out NNTP at CERN. The new solution to broadcast information messages on the CERN campus continues to provide the service based on cross-platform technologies, hence minimizing custom developments and relying on commercial software as much as possible. The new system, called CERN Alerter, is based on RSS (Really Simple Syndication) [9] for the transport protocol and uses Microsoft SharePoint as the backend for database and posting interface. The Windows-based client relies on Internet Explorer 7.0 with custom code to trigger the window pop-ups and the notifications for new events. Linux and Mac OS X clients can also rely on any RSS reader to subscribe to targeted notifications. The paper covers the architecture and implementation aspects of the new system.
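The targeting idea at the heart of the system can be shown in miniature: parse an RSS feed and keep only the items whose category matches the user's subscriptions. The feed content and category naming below are invented for illustration; CERN Alerter's actual feed schema is not specified in the abstract.

```python
import xml.etree.ElementTree as ET

# Hypothetical feed: each <item> carries a <category> naming its target group.
RSS = """<rss version="2.0"><channel>
  <title>Alerts</title>
  <item><title>Power cut</title><category>building-513</category></item>
  <item><title>Mail upgrade</title><category>all</category></item>
  <item><title>Cooling work</title><category>building-31</category></item>
</channel></rss>"""

def targeted_items(rss_text, subscriptions):
    """Return titles of items whose category matches one of the user's
    subscriptions -- the 'targeted notification' idea in miniature."""
    root = ET.fromstring(rss_text)
    return [item.findtext("title")
            for item in root.iter("item")
            if item.findtext("category") in subscriptions]

print(targeted_items(RSS, {"building-513", "all"}))  # → ['Power cut', 'Mail upgrade']
```

Any RSS reader can apply the same filter, which is why the cross-platform clients mentioned in the abstract need no custom code.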
application architecture, energy informatics, scalable acquisition of sensor data, and software tools for engaging occupants in building energy performance. Prior to joining NREL, Anya developed custom business
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu
Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model, and significant saving in electricity cost could be achieved with network operational constraints satisfied.
Liu, Guodong; Ollis, Thomas B.; Xiao, Bailu; ...
2017-10-10
Here, this paper proposes a Mixed Integer Conic Programming (MICP) model for community microgrids considering the network operational constraints and building thermal dynamics. The proposed optimization model optimizes not only the operating cost, including fuel cost, purchasing cost, battery degradation cost, voluntary load shedding cost and the cost associated with customer discomfort due to room temperature deviation from the set point, but also several performance indices, including voltage deviation, network power loss and power factor at the Point of Common Coupling (PCC). In particular, the detailed thermal dynamic model of buildings is integrated into the distribution optimal power flow (D-OPF) model for the optimal operation of community microgrids. The heating, ventilation and air-conditioning (HVAC) systems can be scheduled intelligently to reduce the electricity cost while maintaining the indoor temperature in the comfort range set by customers. Numerical simulation results show the effectiveness of the proposed model, and significant saving in electricity cost could be achieved with network operational constraints satisfied.
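The building thermal dynamics that the MICP model embeds can be sketched with a first-order RC model and a simple on/off HVAC rule. All parameter values below (thermal resistance r, capacitance c, heater power, time step) are invented for illustration; the paper's actual model is a detailed constraint set inside an optimization, not a simulation.

```python
def simulate_room(t_in, t_out, setpoint, deadband, steps,
                  r=2.0, c=10.0, q_hvac=15.0, dt=0.1):
    """First-order RC building thermal model with a thermostat:
    dT/dt = (t_out - T)/(r*c) + q/c, heating toggled at setpoint ± deadband.
    Returns the indoor temperature trace."""
    trace = [t_in]
    heating_on = False
    for _ in range(steps):
        t = trace[-1]
        if t < setpoint - deadband:
            heating_on = True
        elif t > setpoint + deadband:
            heating_on = False
        q = q_hvac if heating_on else 0.0
        trace.append(t + dt * ((t_out - t) / (r * c) + q / c))
    return trace

# Cold day: indoor temperature settles into the comfort band around 21 °C.
trace = simulate_room(t_in=18.0, t_out=0.0, setpoint=21.0,
                      deadband=0.5, steps=2000)
print(f"final indoor temperature: {trace[-1]:.1f} °C")
```

An optimizer like the paper's D-OPF would choose the HVAC schedule (the `q` values) to minimize cost subject to this dynamic staying in the comfort band, rather than applying a fixed thermostat rule.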
Building a Faculty Publications Database: A Case Study
ERIC Educational Resources Information Center
Tabaei, Sara; Schaffer, Yitzchak; McMurray, Gregory; Simon, Bashe
2013-01-01
This case study shares the experience of building an in-house faculty publications database that was spearheaded by the Touro College and University System library in 2010. The project began with the intention of contributing to the college by collecting the research accomplishments of our faculty and staff, thereby also increasing library…
Information support of monitoring of technical condition of buildings in construction risk area
NASA Astrophysics Data System (ADS)
Skachkova, M. E.; Lepihina, O. Y.; Ignatova, V. V.
2018-05-01
The paper presents the results of research devoted to the development of a model of information support for monitoring the technical condition of buildings located in a construction risk area. As a result of the visual and instrumental survey, as well as the analysis of existing approaches and techniques, attributive and cartographic databases have been created. These databases allow monitoring of defects and damage to buildings located within a 30-meter risk area around the object under construction. The classification of structures and defects of the buildings under survey is presented. The functional capabilities of the developed model and the field of its practical application are determined.
Shedlock, James; Frisque, Michelle; Hunt, Steve; Walton, Linda; Handler, Jonathan; Gillam, Michael
2010-01-01
Question: How can the user's access to health information, especially full-text articles, be improved? The solution is building and evaluating the Health SmartLibrary (HSL). Setting: The setting is the Galter Health Sciences Library, Feinberg School of Medicine, Northwestern University. Method: The HSL was built on web-based personalization and customization tools: My E-Resources, Stay Current, Quick Search, and File Cabinet. Personalization and customization data were tracked to show user activity with these value-added, online services. Main Results: Registration data indicated that users were receptive to personalized resource selection and that the automated application of specialty-based, personalized HSLs was more frequently adopted than manual customization by users. Users who did customize turned to My E-Resources and Stay Current more often than to Quick Search and File Cabinet, and most customized only once. Conclusion: Users did not always take advantage of the services designed to aid their library research experiences. When personalization was offered at registration, users readily accepted it. Customization tools were used less frequently; more research is needed to determine why this was the case. PMID:20428276
IT Solution to Improve the Permitting Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hammer, Mary
2013-02-14
Over the past decade Houston has taken significant strides to implement and promote sustainability. Currently the City of Houston's Green Building Resource Center stands testament to the determination of city officials to make Houston truly green. Houston was named a Solar America City by the Department of Energy (DOE) in 2008 and is part of the Texas Solar Collaboration as part of the DOE Rooftop Challenge Grant. In that time, Houston has made significant progress in addressing the challenges associated with installing solar in the City. One of the challenges related to the soft costs of solar is the time and associated costs of the permitting process. From 2000 to 2010, the Houston area witnessed unprecedented growth, with the population increasing by nearly 700,000. The City of Houston is working to address the needs of this growing population, including building the new One-Stop Code and Permitting building. The Houston Permitting Center opened in June 2011. It combines the majority of the City of Houston's permitting and licensing into one place with a mission to help customers achieve their goals while complying with the City's regulations. The stated mission "requires a continuous pursuit of improving the customer experience. Providing excellent service, streamlining business processes, implementing innovative technologies, and proactively engaging customers are all cornerstones of this philosophy."
Pacific Custom Materials, Petition to Object to Title V Operating Permit
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Petition Database available at www2.epa.gov/title-v-operating-permits/title-v-petition-database. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
The mismanagement of customer loyalty.
Reinartz, Werner; Kumar, V
2002-07-01
Who wouldn't want loyal customers? Surely they should cost less to serve, they'd be willing to pay more than other customers, and they'd actively market your company by word of mouth, right? Maybe not. Careful study of the relationship between customer loyalty and profits plumbed from 16,000 customers in four companies' databases tells a different story. The authors found no evidence to support any of these claims. What they did find was that the link between customers and profitability was more complicated because customers fall into four groups, not two. Simply put: Not all loyal customers are profitable, and not all profitable customers are loyal. Traditional tools for segmenting customers do a poor job of identifying that latter group, causing companies to chase expensively after initially profitable customers who hold little promise of future profits. The authors suggest an alternative approach, based on well-established "event-history modeling" techniques, that more accurately predicts future buying probabilities. Armed with such a tool, marketers can correctly identify which customers belong in which category and market accordingly. The challenge in managing customers who are profitable but disloyal--the "butterflies"--is to milk them for as much as you can while they're buying from you. A softly-softly approach is more appropriate for the profitable customers who are likely to stay loyal--your "true friends." As for highly loyal but not very profitable customers--the "barnacles"--you need to find out if they have the potential to spend more than they currently do. And, of course, for the "strangers"--those who generate no loyalty and no profits--the answer is simple: Identify early and don't invest anything.
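The article's four-group grid can be expressed as a small classification function. The customer names and the boolean scores below are hypothetical; in practice the loyalty flag would come from an event-history model of future buying probability and the profitability flag from a profit model, as the article suggests.

```python
def loyalty_segment(is_profitable, is_loyal):
    """Map a customer onto the profitability/loyalty grid from the article."""
    if is_profitable and is_loyal:
        return "true friend"   # likely to stay loyal: nurture gently
    if is_profitable:
        return "butterfly"     # profitable but disloyal: capture value now
    if is_loyal:
        return "barnacle"      # loyal but unprofitable: probe for potential
    return "stranger"          # no loyalty, no profit: identify early

# Hypothetical customers scored by (profitability, predicted loyalty).
customers = {"Ann": (True, True), "Bo": (True, False),
             "Cy": (False, True), "Di": (False, False)}
segments = {name: loyalty_segment(p, l) for name, (p, l) in customers.items()}
print(segments["Bo"])  # → butterfly
```

The article's point is precisely that two-way (loyal/not loyal) segmentation collapses the butterfly and true-friend columns, which this four-way mapping keeps apart.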
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrmann, W.; von Laven, G.M.; Parker, T.
1993-09-01
The Bibliographic Retrieval System (BARS) is a database management system specially designed to retrieve bibliographic references. Two databases are available: (i) the Sandia Shock Compression (SSC) database, which contains over 5700 references to the literature related to stress waves in solids and their applications, and (ii) the Shock Physics Index (SPHINX), which includes over 8000 further references to stress waves in solids, material properties at intermediate and low rates, ballistic and hypervelocity impact, and explosive or shock fabrication methods. There is some overlap in the information in the two databases.
ERIC Educational Resources Information Center
Barbian, Jeff
2001-01-01
Discusses the benefits of employee volunteerism such as enhanced brand image, increased customer loyalty, increased competitiveness, and skill building for employees. Looks at how several major corporations volunteer in their communities. (JOW)
... postoperative pulmonary complications in upper abdominal surgery. Cochrane Database Syst Rev. 2014;(2):CD006058. PMID: 24510642
78 FR 42775 - CGI Federal, Inc., and Custom Applications Management; Transfer of Data
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-17
... develop applications, Web sites, Web pages, web-based applications and databases, in accordance with EPA policies and related Federal standards and procedures. The Contractor will provide
Building Databases for Education. ERIC Digest.
ERIC Educational Resources Information Center
Klausmeier, Jane A.
This digest provides a brief explanation of what a database is; explains how a database can be used; identifies important factors that should be considered when choosing database management system software; and provides citations to sources for finding reviews and evaluations of database management software. The digest is concerned primarily with…
When fellow customers behave badly: Witness reactions to employee mistreatment by customers.
Hershcovis, M Sandy; Bhatnagar, Namita
2017-11-01
In 3 experiments, we examined how customers react after witnessing a fellow customer mistreat an employee. Drawing on the deontic model of justice, we argue that customer mistreatment of employees leads witnesses (i.e., other customers) to leave larger tips, engage in supportive employee-directed behaviors, and evaluate employees more positively (Studies 1 and 2). We also theorize that witnesses develop less positive treatment intentions and more negative retaliatory intentions toward perpetrators, with anger and empathy acting as parallel mediators of our perpetrator- and target-directed outcomes, respectively. In Study 1, we conducted a field experiment that examined real customers' target-directed reactions to witnessed mistreatment in the context of a fast-food restaurant. In Study 2, we replicated Study 1 findings in an online vignette experiment, and extended it by examining more severe mistreatment and perpetrator-directed responses. In Study 3, we demonstrated that employees who respond to mistreatment uncivilly are significantly less likely to receive the positive outcomes found in Studies 1 and 2 than those who respond neutrally. We discuss the implications of our findings for theory and practice. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Processing biological literature with customizable Web services supporting interoperable formats.
Rak, Rafal; Batista-Navarro, Riza Theresa; Carter, Jacob; Rowley, Andrew; Ananiadou, Sophia
2014-01-01
Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk. © The Author(s) 2014. Published by Oxford University Press.
Processing biological literature with customizable Web services supporting interoperable formats
Rak, Rafal; Batista-Navarro, Riza Theresa; Carter, Jacob; Rowley, Andrew; Ananiadou, Sophia
2014-01-01
Web services have become a popular means of interconnecting solutions for processing a body of scientific literature. This has fuelled research on high-level data exchange formats suitable for a given domain and ensuring the interoperability of Web services. In this article, we focus on the biological domain and consider four interoperability formats, BioC, BioNLP, XMI and RDF, that represent domain-specific and generic representations and include well-established as well as emerging specifications. We use the formats in the context of customizable Web services created in our Web-based, text-mining workbench Argo that features an ever-growing library of elementary analytics and capabilities to build and deploy Web services straight from a convenient graphical user interface. We demonstrate a 2-fold customization of Web services: by building task-specific processing pipelines from a repository of available analytics, and by configuring services to accept and produce a combination of input and output data interchange formats. We provide qualitative evaluation of the formats as well as quantitative evaluation of automatic analytics. The latter was carried out as part of our participation in the fourth edition of the BioCreative challenge. Our analytics built into Web services for recognizing biochemical concepts in BioC collections achieved the highest combined scores out of 10 participating teams. Database URL: http://argo.nactem.ac.uk. PMID:25006225
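The two-fold customization the Argo abstracts describe (pluggable analytics, pluggable input/output formats) can be sketched as a pipeline of reader, analytic, and writer functions. The "formats" and the dictionary-lookup analytic below are simplified stand-ins invented for illustration, not BioC or Argo's actual components.

```python
import json

# Simplified stand-ins: a JSON-based reader and a tab-separated writer,
# composed around one analytic step.
def read_jsonish(text):
    return json.loads(text)["passages"]

def write_tsv(annotations):
    return "\n".join(f"{a['text']}\t{a['type']}" for a in annotations)

def recognize_chemicals(passages, lexicon=frozenset({"aspirin", "glucose"})):
    """Toy analytic: dictionary lookup standing in for concept recognition."""
    return [{"text": w, "type": "chemical"}
            for p in passages for w in p.split() if w in lexicon]

def run_pipeline(text, reader, analytic, writer):
    """A Web-service-style pipeline: input format -> analytics -> output format."""
    return writer(analytic(reader(text)))

doc = json.dumps({"passages": ["aspirin reduces fever", "glucose levels rose"]})
print(run_pipeline(doc, read_jsonish, recognize_chemicals, write_tsv))
```

Swapping `read_jsonish` or `write_tsv` for another reader/writer pair changes the interchange format without touching the analytic, which is the interoperability point of the article.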
Building heating and cooling applications thermal energy storage program overview
NASA Technical Reports Server (NTRS)
Eissenberg, D. M.
1980-01-01
Thermal energy storage technology and its development for building heating and cooling applications in the residential and commercial sectors are outlined. Three program elements are identified: applications assessment, technology development, and demonstration. Emphasis is given to the utility load management thermal energy storage application, where the stress is on the 'customer side of the meter'. Thermal storage subsystems for space conditioning, and conservation by means of increased thermal mass within the building envelope and of low-grade waste heat recovery, are covered.
Comparative analysis of data mining techniques for business data
NASA Astrophysics Data System (ADS)
Jamil, Jastini Mohd; Shaharanee, Izwan Nizal Mohd
2014-12-01
Data mining is the process of employing one or more computer learning techniques to automatically analyze and extract knowledge from data contained within a database. Companies are using this tool to further understand their customers, to design targeted sales and marketing campaigns, to predict what products customers will buy and the frequency of purchase, and to spot trends in customer preferences that can lead to new product development. In this paper, we take a systematic approach to exploring several data mining techniques in business applications. The experimental results reveal that all of the data mining techniques accomplish their goals, but each technique has its own characteristics and specifications that determine its accuracy, proficiency and suitability.
Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers.
Sochat, Vanessa V; Prybol, Cameron J; Kurtzer, Gregory M
2017-01-01
Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we review Singularity Hub's primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building and deploying scientific containers.
Enhancing reproducibility in scientific computing: Metrics and registry for Singularity containers
Prybol, Cameron J.; Kurtzer, Gregory M.
2017-01-01
Here we present Singularity Hub, a framework to build and deploy Singularity containers for mobility of compute, and the singularity-python software with novel metrics for assessing reproducibility of such containers. Singularity containers make it possible for scientists and developers to package reproducible software, and Singularity Hub adds automation to this workflow by building, capturing metadata for, visualizing, and serving containers programmatically. Our novel metrics, based on custom filters of content hashes of container contents, allow for comparison of an entire container, including operating system, custom software, and metadata. First we review Singularity Hub’s primary use cases and how the infrastructure has been designed to support modern, common workflows. Next, we conduct three analyses to demonstrate build consistency, reproducibility metric performance and interpretability, and potential for discovery. This is the first effort to demonstrate a rigorous assessment of measurable similarity between containers and operating systems. We provide these capabilities within Singularity Hub, as well as the source software singularity-python that provides the underlying functionality. Singularity Hub is available at https://singularity-hub.org, and we are excited to provide it as an openly available platform for building and deploying scientific containers. PMID:29186161
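A comparison metric over content hashes, in the spirit of (but not identical to) the singularity-python metrics, can be sketched as Jaccard similarity over the set of file content hashes. The container contents below are toy byte strings; real containers would be hashed file by file from the image filesystem.

```python
import hashlib

def content_hashes(files):
    """Summarize a container (path -> bytes) by the set of its content hashes."""
    return {hashlib.sha256(data).hexdigest() for data in files.values()}

def similarity(container_a, container_b):
    """Jaccard similarity over content hashes -- one simple way (an assumption,
    not Singularity Hub's exact metric) to compare two containers' contents."""
    a, b = content_hashes(container_a), content_hashes(container_b)
    return len(a & b) / len(a | b)

# Toy containers: a derived image adds one file to the base image.
base = {"/bin/sh": b"shell", "/etc/os-release": b"ubuntu"}
derived = {"/bin/sh": b"shell", "/etc/os-release": b"ubuntu", "/app": b"custom"}
print(similarity(base, derived))
```

"Custom filters" in the abstract suggests restricting which paths are hashed (e.g. only the operating system files) before computing such a score, which this sketch would support by filtering the dict first.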
Applications of Optimal Building Energy System Selection and Operation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marnay, Chris; Stadler, Michael; Siddiqui, Afzal
2011-04-01
Berkeley Lab has been developing the Distributed Energy Resources Customer Adoption Model (DER-CAM) for several years. Given load curves for energy services requirements in a building microgrid (µgrid), fuel costs and other economic inputs, and a menu of available technologies, DER-CAM finds the optimum equipment fleet and its optimum operating schedule using a mixed integer linear programming approach. This capability is being applied using a software as a service (SaaS) model. Optimisation problems are set up on a Berkeley Lab server and clients can execute their jobs as needed, typically daily. The evolution of this approach is demonstrated by description of three ongoing projects. The first is a public access web site focused on solar photovoltaic generation and battery viability at large commercial and industrial customer sites. The second is a building CO2 emissions reduction operations problem for a University of California, Davis student dining hall, for which potential investments are also considered. And the third is both a battery selection problem and a rolling operating schedule problem for a large county jail. Together these examples show that optimization of building µgrid design and operation can be effectively achieved using SaaS.
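The fleet-selection problem DER-CAM solves with mixed integer linear programming can be shown in miniature by brute-force enumeration over a tiny technology menu. All numbers below (capacities, costs, the crude single-unit dispatch rule) are invented for illustration and bear no relation to DER-CAM's actual formulation.

```python
from itertools import combinations

# Hypothetical technology menu: (name, capacity in kW, annualized capital
# cost in $, operating cost in $/kWh).
TECH = [("PV", 60, 4000, 0.00),
        ("battery", 40, 2500, 0.02),
        ("gas engine", 100, 6000, 0.09)]

def best_fleet(peak_kw, annual_kwh):
    """Enumerate fleets; keep the cheapest one that can meet the peak load."""
    best_cost, best_names = float("inf"), None
    for r in range(1, len(TECH) + 1):
        for fleet in combinations(TECH, r):
            if sum(t[1] for t in fleet) < peak_kw:
                continue  # fleet cannot meet the peak demand
            # Crude dispatch: serve all energy with the cheapest unit chosen.
            cost = (sum(t[2] for t in fleet)
                    + annual_kwh * min(t[3] for t in fleet))
            if cost < best_cost:
                best_cost, best_names = cost, [t[0] for t in fleet]
    return best_names, best_cost

names, cost = best_fleet(peak_kw=90, annual_kwh=50000)
print(names, cost)
```

A real MILP handles time-resolved load curves and operating schedules jointly; enumeration only works because this menu has three entries.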
Reichheld, F F
1993-01-01
Despite a flurry of activities aimed at serving customers better, few companies have systematically revamped their operations with customer loyalty in mind. Instead, most have adopted improvement programs ad hoc, and paybacks haven't materialized. Building a highly loyal customer base must be integral to a company's basic business strategy. Loyalty leaders like MBNA credit cards are successful because they have designed their entire business systems around customer loyalty--a self-reinforcing system in which the company delivers superior value consistently and reinvests cash flows to find and keep high-quality customers and employees. The economic benefits of high customer loyalty are measurable. When a company consistently delivers superior value and wins customer loyalty, market share and revenues go up, and the cost of acquiring new customers goes down. The better economics mean the company can pay workers better, which sets off a whole chain of events. Increased pay boosts employee morale and commitment; as employees stay longer, their productivity goes up and training costs fall; employees' overall job satisfaction, combined with their experience, helps them serve customers better; and customers are then more inclined to stay loyal to the company. Finally, as the best customers and employees become part of the loyalty-based system, competitors are left to survive with less desirable customers and less talented employees. To compete on loyalty, a company must understand the relationships between customer retention and the other parts of the business--and be able to quantify the linkages between loyalty and profits. It involves rethinking and aligning four important aspects of the business: customers, product/service offering, employees, and measurement systems.
1986-03-01
SRdb ... 35; Appendix A: Abbreviations and Acronyms ... 37; Appendix B: User's Manual ... 38; Appendix C: Database... percentage of situations. The purpose of this paper is to examine and propose a software-oriented alternative to the current manual, instruction-driven... Department Customer Service Manual [Ref. 1] and the applicable NPS Comptroller instruction [Ref. 2]. Several modifications to these written guidelines
Considerations of Online Numeric Databases for Social Science Research,
1983-09-01
online user groups profit from them has greatly increased the size of the online market. International Resource Development says the revenues of... information services. Carlos Cuadra, however, feels that the customizers have been beneficial to the online market by educating users at a local level... calculations. Online data can sometimes assume a spurious authority due to the medium itself. "The market for numeric databases and systems is still
Hernandez-Valladares, Maria; Vaudel, Marc; Selheim, Frode; Berven, Frode; Bruserud, Øystein
2017-08-01
Mass spectrometry (MS)-based proteomics has become an indispensable tool for the characterization of the proteome and its post-translational modifications (PTM). In addition to standard protein sequence databases, proteogenomics strategies search the spectral data against the theoretical spectra obtained from customized protein sequence databases. To date, there are no published proteogenomics studies on acute myeloid leukemia (AML) samples. Areas covered: Proteogenomics involves the understanding of genomic and proteomic data. The intersection of both datatypes requires advanced bioinformatics skills. A standard proteogenomics workflow that could be used for the study of AML samples is described. The generation of customized protein sequence databases as well as bioinformatics tools and pipelines commonly used in proteogenomics are discussed in detail. Expert commentary: Drawing on evidence from recent cancer proteogenomics studies and taking into account the public availability of AML genomic data, the interpretation of present and future MS-based AML proteomic data using AML-specific protein sequence databases could discover new biological mechanisms and targets in AML. However, proteogenomics workflows including bioinformatics guidelines can be challenging for the wide AML research community. It is expected that further automation and simplification of the bioinformatics procedures might attract AML investigators to adopt the proteogenomics strategy.
Implementation of age and gender recognition system for intelligent digital signage
NASA Astrophysics Data System (ADS)
Lee, Sang-Heon; Sohn, Myoung-Kyu; Kim, Hyunduk
2015-12-01
Intelligent digital signage systems transmit customized advertising and information by analyzing users and customers, unlike existing systems that present advertising as a broadcast without regard to the type of customer. Development of intelligent digital signage systems is currently being pushed forward vigorously. In this study, among the many possible methods for analyzing customers, we designed a system capable of analyzing the gender and age of customers based on images obtained from a camera. We conducted age and gender recognition experiments using a public database. The experiments were performed through histogram matching after extracting local binary pattern (LBP) features from the normalized facial area of the input image. The results showed that the gender recognition rate averaged approximately 97%. Age recognition was based on categorization into five age classes; age recognition rates for women and men were about 67% and 68%, respectively, when conducted separately for each gender.
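The LBP-plus-histogram-matching step described in the abstract can be sketched as follows. The tiny patches, the bit ordering, and the chi-square distance are illustrative assumptions, not the authors' implementation:

```python
# Compute 8-neighbor local binary pattern (LBP) codes over a grayscale patch,
# build a normalized 256-bin histogram, and compare histograms with a
# chi-square distance -- the matching step used in LBP-based recognition.
def lbp_codes(img):
    h, w = len(img), len(img[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit
            codes.append(code)
    return codes

def histogram(codes, bins=256):
    hist = [0] * bins
    for c in codes:
        hist[c] += 1
    total = float(len(codes))
    return [v / total for v in hist]

def chi_square(h1, h2):
    # Chi-square distance between two normalized histograms.
    return sum((a - b) ** 2 / (a + b) for a, b in zip(h1, h2) if a + b > 0)

patch_a = [[10, 20, 30], [40, 50, 60], [70, 80, 90]]
patch_b = [[90, 80, 70], [60, 50, 40], [30, 20, 10]]
d = chi_square(histogram(lbp_codes(patch_a)), histogram(lbp_codes(patch_b)))
```

In a real pipeline the face is first detected and normalized, the image is split into cells, and per-cell histograms are concatenated before matching against class templates.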
Sliter, Michael T; Pui, Shuang Yueh; Sliter, Katherine A; Jex, Steve M
2011-10-01
Interpersonal conflict (IC) at work is a frequently experienced type of workplace mistreatment that has been linked to a host of negative workplace outcomes. Previous research has shown that IC can have differential effects based on source, but this has not yet been investigated in terms of customer IC versus coworker IC. To remedy this oversight in the literature, we used a multimethod, multitime point design to compare IC from customers and coworkers experienced by 75 call center employees. Primarily, we investigated burnout, physical health symptoms, and task performance. Results indicated that customer IC was more strongly related to both personal and organizational outcomes. Additionally, trait anger was investigated as a moderator of these relationships, and the results indicated that people who are easy to anger may be more likely to experience negative effects as a result of customer IC. Implications of these findings, limitations, and areas for future research are discussed. (PsycINFO Database Record (c) 2011 APA, all rights reserved).
NASA Astrophysics Data System (ADS)
Deng, Shuang; Xiang, Wenting; Tian, Yangge
2009-10-01
Map coloring is a hard task even for experienced map experts. In GIS projects, maps usually must be colored according to customer requirements, which makes the work more complex. With the development of GIS, more and more programmers who lack cartographic training join project teams, and their color maps are less likely to meet customer requirements. Experience shows that customers with similar backgrounds usually have similar tastes in map coloring. We therefore developed a GIS color-scheme decision-making system that selects the color schemes of similar customers from a case base for customers to review and adjust. The system mixes browser/server and client/server architectures; the client side uses JSP, which lets system developers remotely query the color-scheme cases in the database server and communicate with customers. Unlike general case-based reasoning, even very similar customers may make different selections, so it is hard to offer a single "best" option. We therefore use a simulated annealing algorithm (SAA) to arrange the order in which different color schemes are presented. Customers can also dynamically adjust the colors of particular features based on an existing case. The results show that the system facilitates communication between designers and customers and improves the quality and efficiency of map coloring.
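A minimal sketch of using simulated annealing to order candidate color schemes, as the abstract describes; the scheme names, similarity scores, and energy function are invented for illustration (for this toy energy a simple sort would suffice, but the annealing loop shows the mechanism):

```python
import math
import random

random.seed(42)

# Hypothetical similarity of each stored color-scheme case to the current
# customer (higher = more similar).
similarity = {"scheme_a": 0.9, "scheme_b": 0.2, "scheme_c": 0.7, "scheme_d": 0.4}

def energy(order):
    # Low energy when highly similar schemes appear early in the order.
    return sum(i * similarity[s] for i, s in enumerate(order))

def anneal(order, temp=1.0, cooling=0.995, steps=5000):
    order = list(order)
    best = list(order)
    for _ in range(steps):
        # Propose swapping two random positions.
        i, j = random.sample(range(len(order)), 2)
        cand = list(order)
        cand[i], cand[j] = cand[j], cand[i]
        delta = energy(cand) - energy(order)
        # Accept improvements always; accept worsening moves with a
        # probability that shrinks as the temperature cools.
        if delta < 0 or random.random() < math.exp(-delta / temp):
            order = cand
            if energy(order) < energy(best):
                best = list(order)
        temp *= cooling
    return best

order = anneal(list(similarity))
print(order[0])  # the most similar scheme is presented first
```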
77 FR 12641 - Shipping Coordinating Committee; Notice of Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-01
... including World Customs Organization and International Organization for Standardization standards. 2. Based... capacity of the room. To facilitate the building security process, and to request reasonable accommodation...
77 FR 64576 - Shipping Coordinating Committee; Notice of Committee Meeting
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-22
... Customs Organization and International Organization for Standardization standards. 2. Based on analysis... facilitate the building security process, and to request reasonable accommodation, those who plan to attend...
47 CFR 32.2426 - Intrabuilding network cable.
Code of Federal Regulations, 2013 CFR
2013-10-01
...) Nonmetallic cable. This subsidiary record category shall include the original cost of optical fiber cable and... between buildings on one customer's same premises. Intrabuilding network cables are used to distribute...
47 CFR 32.2426 - Intrabuilding network cable.
Code of Federal Regulations, 2014 CFR
2014-10-01
...) Nonmetallic cable. This subsidiary record category shall include the original cost of optical fiber cable and... between buildings on one customer's same premises. Intrabuilding network cables are used to distribute...
47 CFR 32.2426 - Intrabuilding network cable.
Code of Federal Regulations, 2012 CFR
2012-10-01
...) Nonmetallic cable. This subsidiary record category shall include the original cost of optical fiber cable and... between buildings on one customer's same premises. Intrabuilding network cables are used to distribute...
NASA Astrophysics Data System (ADS)
Thessen, Anne E.; McGinnis, Sean; North, Elizabeth W.
2016-02-01
Process studies and coupled-model validation efforts in geosciences often require integration of multiple data types across time and space. For example, improved prediction of hydrocarbon fate and transport is an important societal need which fundamentally relies upon synthesis of oceanography and hydrocarbon chemistry. Yet, there are no publicly accessible databases which integrate these diverse data types in a georeferenced format, nor are there guidelines for developing such a database. The objective of this research was to analyze the process of building one such database to provide baseline information on data sources and data sharing and to document the challenges and solutions that arose during this major undertaking. The resulting Deepwater Horizon Database was approximately 2.4 GB in size and contained over 8 million georeferenced data points collected from industry, government databases, volunteer networks, and individual researchers. The major technical challenges that were overcome were reconciliation of terms, units, and quality flags which were necessary to effectively integrate the disparate data sets. Assembling this database required the development of relationships with individual researchers and data managers which often involved extensive e-mail contacts. The average number of e-mails exchanged per data set was 7.8. Of the 95 relevant data sets that were discovered, 38 (40%) were obtained, either in whole or in part. Over one third (36%) of the requests for data went unanswered. The majority of responses were received after the first request (64%) and within the first week of the first request (67%). Although fewer than half of the potentially relevant data sets were incorporated into the database, the level of sharing (40%) was high compared to some other disciplines where sharing can be as low as 10%.
Our suggestions for building integrated databases include budgeting significant time for e-mail exchanges, being cognizant of the cost versus benefits of pursuing reticent data providers, and building trust through clear, respectful communication and with flexible and appropriate attributions.
Attributes of the Federal Energy Management Program's Federal Site Building Characteristics Database
DOE Office of Scientific and Technical Information (OSTI.GOV)
Loper, Susan A.; Sandusky, William F.
2010-12-31
Typically, the Federal building stock is referred to as a group of about one-half million buildings throughout the United States. Additional information beyond this level is generally limited to distribution of that total by agency and perhaps distribution of the total by state. However, additional characterization of the Federal building stock is required as the Federal sector seeks ways to implement efficiency projects to reduce energy and water use intensity as mandated by legislation and Executive Order. Using a Federal facility database that was assembled for use in a geographic information system tool, additional characterization of the Federal building stock is provided including information regarding the geographical distribution of sites, building counts and percentage of total by agency, distribution of sites and building totals by agency, distribution of building count and floor space by Federal building type classification by agency, and rank ordering of sites, buildings, and floor space by state. A case study is provided regarding how the building stock has changed for the Department of Energy from 2000 through 2008.
... and your provider communicate openly and build a relationship of trust. Alternative Names: Patient-centered care.
49 CFR 192.365 - Service lines: Location of valves.
Code of Federal Regulations, 2010 CFR
2010-10-01
... NATURAL AND OTHER GAS BY PIPELINE: MINIMUM FEDERAL SAFETY STANDARDS Customer Meters, Service Regulators... accessible location that, if feasible, is outside of the building. (c) Underground valves. Each underground...
The value proposition of patient feedback.
Gingold, Scott R
2011-01-01
Medical practices need to listen to patients and value their opinions in order to provide the best possible service. But too often practitioners don't make the effort to satisfy customers and build loyalty, something of value to every business. The road to failure is littered with companies that did not listen to customers. Research from Powerfeedback shows that soliciting feedback and acting on that information is critical to the success of a medical practice, as it is with any business.
WAN Optimization: A Business Process Reengineering and Knowledge Value Added Approach
2011-03-01
processing is not affected. Reliability: the Customer or Order systems are unavailable. If either fails, order processing halts and alerts are... online immediately, and sends a fax to the customer who orders the container. The whole order-processing process can be completed in one day. IT plays... Messages build up in the OrderQ until the email server restarts. Messages are then sent by the SendEmail component to remove the backlog. Order
Compensation, Culture and Contracts: The Realities of the DoD’s Blended Workforce
2010-05-01
structure, based on customer satisfaction, business development, collateral duties, and participation in morale-building activities. On the government... and customer satisfaction. "One time, an employee bargained for a 30% pay increase. He was working on a cost-plus contract. There's no way I... and manageable workloads." Certainly, job satisfaction can be found outside of the DoD as well. One industry manager reminisced, "I've held jobs
2015-04-30
procedures to retain control over strategic decisions (Hatch & Cunliffe, 2013). Large U.S. corporations such as McDonald's, with its 33,000 restaurants and... 1.7 million workers across 119 nations, understand that their 68 million customers per day are demanding changes that their famously consistent and... providing business units greater flexibility and autonomy to meet local customer demands (Hatch & Cunliffe, 2013). So why not build a layered system of
Mostaguir, Khaled; Hoogland, Christine; Binz, Pierre-Alain; Appel, Ron D
2003-08-01
The Make 2D-DB tool has been previously developed to help build federated two-dimensional gel electrophoresis (2-DE) databases on one's own web site. The purpose of our work is to extend the strength of the first package and to build a more efficient environment. Such an environment should be able to fulfill the different needs and requirements arising from both the growing use of 2-DE techniques and the increasing amount of distributed experimental data.
NASA Technical Reports Server (NTRS)
McGlynn, T.; Santisteban, M.
2007-01-01
This chapter provides a very brief introduction to the Structured Query Language (SQL) for getting information from relational databases. We make no pretense that this is a complete or comprehensive discussion of SQL. There are many aspects of the language that will be completely ignored in the presentation. The goal here is to provide enough background so that users understand the basic concepts involved in building and using relational databases. We also go through the steps involved in building a particular astronomical database used in some of the other presentations in this volume.
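The kind of query such an introduction builds up to can be sketched with an in-memory SQLite database; the star catalog below is invented for illustration, not the astronomical database used in the volume:

```python
import sqlite3

# Build a tiny relational table and query it -- the basic SELECT/WHERE/ORDER BY
# pattern an introductory SQL chapter covers.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE stars (name TEXT, ra REAL, dec REAL, mag REAL)")
con.executemany("INSERT INTO stars VALUES (?, ?, ?, ?)", [
    ("Sirius", 101.287, -16.716, -1.46),
    ("Vega", 279.234, 38.784, 0.03),
    ("Polaris", 37.954, 89.264, 1.98),
])
# Select bright stars (magnitude below 1), brightest first.
rows = con.execute(
    "SELECT name, mag FROM stars WHERE mag < 1 ORDER BY mag"
).fetchall()
print(rows)  # [('Sirius', -1.46), ('Vega', 0.03)]
```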
The GOLM database standard: a framework for time-series data management based on free software
NASA Astrophysics Data System (ADS)
Eichler, M.; Francke, T.; Kneis, D.; Reusser, D.
2009-04-01
Monitoring and modelling projects usually involve time series data originating from different sources. File formats, temporal resolution and meta-data documentation often fail to adhere to a common standard. As a result, much effort is spent on converting, harmonizing, merging, checking, resampling and reformatting these data. Moreover, in work groups or during the course of time, these tasks tend to be carried out redundantly and repeatedly, especially when new data becomes available. The resulting duplication of data in various formats consumes additional resources. We propose a database structure and complementary scripts for facilitating these tasks. The GOLM (General Observation and Location Management) framework allows for import and storage of time series data of different types while assisting in meta-data documentation, plausibility checking and harmonization. The imported data can be visually inspected and its coverage among locations and variables may be visualized. Supplementing scripts provide options for exporting data for selected stations and variables and for resampling the data to the desired temporal resolution. These tools can, for example, be used for generating model input files or reports. Since GOLM fully supports network access, the system can be used efficiently by distributed working groups accessing the same data over the internet. GOLM's database structure and the complementary scripts can easily be customized to specific needs. All involved software, such as MySQL, R, PHP and OpenOffice, as well as the scripts for building and using the database, including documentation, is free for download. GOLM was developed out of the practical requirements of the OPAQUE project. It has been tested and further refined in the ERANET-CRUE and SESAM projects, all of which used GOLM to manage meteorological, hydrological and/or water quality data.
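One of the harmonization tasks such a framework automates, resampling a series to a common temporal resolution, can be sketched in plain Python; the readings and timestamp format below are invented examples, not GOLM's actual schema:

```python
from collections import defaultdict
from datetime import datetime

# Invented 15-minute readings to be harmonized to hourly means.
readings = [
    ("2008-06-01 00:00", 1.0), ("2008-06-01 00:15", 2.0),
    ("2008-06-01 00:30", 3.0), ("2008-06-01 00:45", 4.0),
    ("2008-06-01 01:00", 5.0), ("2008-06-01 01:15", 7.0),
]

def hourly_means(series):
    # Group each reading into its containing hour, then average per hour.
    buckets = defaultdict(list)
    for stamp, value in series:
        t = datetime.strptime(stamp, "%Y-%m-%d %H:%M")
        buckets[t.replace(minute=0)].append(value)
    return {k.strftime("%Y-%m-%d %H:%M"): sum(v) / len(v)
            for k, v in sorted(buckets.items())}

means = hourly_means(readings)
print(means)  # {'2008-06-01 00:00': 2.5, '2008-06-01 01:00': 6.0}
```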
The Effect of Non-technical Factors in B2C E-Commerce
NASA Astrophysics Data System (ADS)
Sanayei, Ali; Shafe'Ei, Reza
As e-commerce grows across industries worldwide, businesses are building web sites both for presence and for online business. This is more than transferring current business operations to a new medium: it requires explaining the main models, changing infrastructures, and attending to customer needs as their vital rights. While increasing numbers of firms have launched themselves on the Internet, many are only beginning to consider the strategic implications of developing, implementing, or running a web site. Global competition, laws, and customer preferences are among the issues affected by e-commerce. This study considers factors that affect e-commerce and are non-technical in nature: company-related factors, customers' knowledge, customers' trust, and customers' behavior are the main factors in the development of B2C e-commerce. We surveyed these aspects by administering a questionnaire to e-commerce experts at companies. The results show a meaningful relationship between the perception, knowledge, trust, and attitude of customers, together with a company's capabilities, on one side and B2C e-commerce development on the other.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
The Building America team NorthernSTAR investigated opportunities to use the massive customer networks of the home service industry as a means to connect homeowners to home-performance solutions. Home service companies could provide a pathway to advance building-science-guided upgrades by being in close proximity to homeowners when a decision-making moment is at hand. Established trust provides an opportunity for the company to deliver sound information and influence during a remodeling decision.
Surveying the Numeric Databanks.
ERIC Educational Resources Information Center
O'Leary, Mick
1987-01-01
Describes six leading numeric databank services and compares them with bibliographic databases in terms of customers' needs, search software, pricing arrangements, and the role of the search specialist. A listing of the locations of the numeric databanks discussed is provided. (CLB)
Soil Carbon Data: long tail recovery
DOE Office of Scientific and Technical Information (OSTI.GOV)
2017-07-25
The software is intended to be part of an open source effort regarding soils data. The software provides customized data ingestion scripts for soil carbon related data sets and scripts for output databases that conform to common templates.
Infopreneurs: Turn Data into Dollars.
ERIC Educational Resources Information Center
Weitzen, H. Skip
1989-01-01
Describes seven activities that offer opportunities for entrepreneurs working with information: leveraging database information; customizing information; facilitating access to information; speeding up the flow of information; repackaging information; providing around the clock delivery; and integrating computer, telephone, and electronic funds…
Heterogeneous distributed databases: A case study
NASA Technical Reports Server (NTRS)
Stewart, Tracy R.; Mukkamala, Ravi
1991-01-01
Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.
NASA Astrophysics Data System (ADS)
Gross, M. B.; Mayernik, M. S.; Rowan, L. R.; Khan, H.; Boler, F. M.; Maull, K. E.; Stott, D.; Williams, S.; Corson-Rikert, J.; Johns, E. M.; Daniels, M. D.; Krafft, D. B.
2015-12-01
UNAVCO, UCAR, and Cornell University are working together to leverage semantic web technologies to enable discovery of people, datasets, publications and other research products, as well as the connections between them. The EarthCollab project, an EarthCube Building Block, is enhancing an existing open-source semantic web application, VIVO, to address connectivity gaps across distributed networks of researchers and resources related to the following two geoscience-based communities: (1) the Bering Sea Project, an interdisciplinary field program whose data archive is hosted by NCAR's Earth Observing Laboratory (EOL), and (2) UNAVCO, a geodetic facility and consortium that supports diverse research projects informed by geodesy. People, publications, datasets and grant information have been mapped to an extended version of the VIVO-ISF ontology and ingested into VIVO's database. Data is ingested using a custom set of scripts that include the ability to perform basic automated and curated disambiguation. VIVO can display a page for every object ingested, including connections to other objects in the VIVO database. A dataset page, for example, includes the dataset type, time interval, DOI, related publications, and authors. The dataset type field provides a connection to all other datasets of the same type. The author's page will show, among other information, related datasets and co-authors. Information previously spread across several unconnected databases is now stored in a single location. In addition to VIVO's default display, the new database can also be queried using SPARQL, a query language for semantic data. EarthCollab will also extend the VIVO web application. One such extension is the ability to cross-link separate VIVO instances across institutions, allowing local display of externally curated information. 
For example, Cornell's VIVO faculty pages will display UNAVCO's dataset information and UNAVCO's VIVO will display Cornell faculty member contact and position information. Additional extensions, including enhanced geospatial capabilities, will be developed following task-centered usability testing.
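A query against such a VIVO instance might look like the sketch below. The prefixes and the `vivo:Dataset` class follow the public VIVO ontology, but the exact shape of the query and any endpoint details are assumptions, not EarthCollab's code:

```python
# Build a SPARQL query of the kind one could submit to a VIVO triple store
# to list datasets and their labels. Submitting it requires an endpoint
# (not shown); here we only construct the query string.
query = """
PREFIX vivo: <http://vivoweb.org/ontology/core#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?dataset ?label WHERE {
  ?dataset a vivo:Dataset ;
           rdfs:label ?label .
}
LIMIT 10
"""
print(query.strip().splitlines()[0])  # first line: the vivo PREFIX declaration
```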
Visualising biological data: a semantic approach to tool and database integration.
Pettifer, Steve; Thorne, David; McDermott, Philip; Marsh, James; Villéger, Alice; Kell, Douglas B; Attwood, Teresa K
2009-06-16
In the biological sciences, the need to analyse vast amounts of information has become commonplace. Such large-scale analyses often involve drawing together data from a variety of different databases, held remotely on the internet or locally on in-house servers. Supporting these tasks are ad hoc collections of data-manipulation tools, scripting languages and visualisation software, which are often combined in arcane ways to create cumbersome systems that have been customized for a particular purpose, and are consequently not readily adaptable to other uses. For many day-to-day bioinformatics tasks, the sizes of current databases, and the scale of the analyses necessary, now demand increasing levels of automation; nevertheless, the unique experience and intuition of human researchers is still required to interpret the end results in any meaningful biological way. Putting humans in the loop requires tools to support real-time interaction with these vast and complex data-sets. Numerous tools do exist for this purpose, but many do not have optimal interfaces, most are effectively isolated from other tools and databases owing to incompatible data formats, and many have limited real-time performance when applied to realistically large data-sets: much of the user's cognitive capacity is therefore focused on controlling the software and manipulating esoteric file formats rather than on performing the research. To confront these issues, harnessing expertise in human-computer interaction (HCI), high-performance rendering and distributed systems, and guided by bioinformaticians and end-user biologists, we are building reusable software components that, together, create a toolkit that is both architecturally sound from a computing point of view, and addresses both user and developer requirements. 
Key to the system's usability is its direct exploitation of semantics, which, crucially, gives individual components knowledge of their own functionality and allows them to interoperate seamlessly, removing many of the existing barriers and bottlenecks from standard bioinformatics tasks. The toolkit, named Utopia, is freely available from http://utopia.cs.man.ac.uk/.
Generic Entity Resolution in Relational Databases
NASA Astrophysics Data System (ADS)
Sidló, Csaba István
Entity Resolution (ER) covers the problem of identifying distinct representations of real-world entities in heterogeneous databases. We consider the generic formulation of ER problems (GER) with exact outcome. In practice, input data usually resides in relational databases and can grow to huge volumes. Yet, typical solutions described in the literature employ standalone memory resident algorithms. In this paper we utilize facilities of standard, unmodified relational database management systems (RDBMS) to enhance the efficiency of GER algorithms. We study and revise the problem formulation, and propose practical and efficient algorithms optimized for RDBMS external memory processing. We outline a real-world scenario and demonstrate the advantage of algorithms by performing experiments on insurance customer data.
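The idea of pushing candidate-pair generation into the RDBMS can be sketched with SQLite and a blocking join; the customer records and the zip-code blocking key are invented examples, not the paper's algorithms:

```python
import sqlite3

# Blocking inside the database: only records sharing a blocking key (here,
# zip code) become candidate pairs for entity resolution, so the expensive
# pairwise comparison never touches most of the cross product.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (id INTEGER, name TEXT, zip TEXT)")
con.executemany("INSERT INTO customers VALUES (?, ?, ?)", [
    (1, "J. Smith", "10001"),
    (2, "John Smith", "10001"),
    (3, "A. Jones", "94107"),
])
# a.id < b.id avoids self-pairs and mirrored duplicates.
pairs = con.execute("""
    SELECT a.id, b.id
    FROM customers a JOIN customers b
      ON a.zip = b.zip AND a.id < b.id
""").fetchall()
print(pairs)  # [(1, 2)]
```

In a full pipeline a similarity function would then score each candidate pair and a transitive-closure step would merge matched records into entities.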
ERIC Educational Resources Information Center
American Society for Information Science, Washington, DC.
This document contains abstracts of papers on database design and management which were presented at the 1986 mid-year meeting of the American Society for Information Science (ASIS). Topics considered include: knowledge representation in a bilingual art history database; proprietary database design; relational database design; in-house databases;…
Charting a Path to Location Intelligence for STD Control.
Gerber, Todd M; Du, Ping; Armstrong-Brown, Janelle; McNutt, Louise-Anne; Coles, F Bruce
2009-01-01
This article describes the New York State Department of Health's GeoDatabase project, which developed new methods and techniques for designing and building a geocoding and mapping data repository for sexually transmitted disease (STD) control. The GeoDatabase development was supported through the Centers for Disease Control and Prevention's Outcome Assessment through Systems of Integrated Surveillance workgroup. The design and operation of the GeoDatabase relied upon commercial-off-the-shelf tools that other public health programs may also use for disease-control systems. This article provides a blueprint of the structure and software used to build the GeoDatabase and integrate location data from multiple data sources into the everyday activities of STD control programs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Femec, D.A.
This report describes two code-generating tools used to speed design and implementation of relational databases and user interfaces: CREATE-SCHEMA and BUILD-SCREEN. CREATE-SCHEMA produces the SQL commands that actually create and define the database. BUILD-SCREEN takes templates for data entry screens and generates the screen management system routine calls to display the desired screen. Both tools also generate the related FORTRAN declaration statements and precompiled SQL calls. Included with this report is the source code for a number of FORTRAN routines and functions used by the user interface. This code is broadly applicable to a number of different databases.
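In the spirit of CREATE-SCHEMA, a generator that turns a schema description into executable DDL might look like the sketch below. The dictionary format is an assumption for illustration; the original tool emitted SQL plus the related FORTRAN declarations and precompiled SQL calls, which are not reproduced here.

```python
import sqlite3

# Invented schema description: table name -> {column name: SQL type}.
schema = {
    "samples": {"id": "INTEGER PRIMARY KEY", "label": "TEXT", "mass_g": "REAL"},
}

def create_schema_sql(schema):
    # Emit one CREATE TABLE statement per table in the description.
    stmts = []
    for table, cols in schema.items():
        col_defs = ", ".join(f"{name} {sqltype}" for name, sqltype in cols.items())
        stmts.append(f"CREATE TABLE {table} ({col_defs})")
    return stmts

con = sqlite3.connect(":memory:")
for stmt in create_schema_sql(schema):
    con.execute(stmt)  # the generated DDL actually defines the database
tables = [r[0] for r in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
print(tables)  # ['samples']
```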
Grizzle, Jerry W; Zablah, Alex R; Brown, Tom J; Mowen, John C; Lee, James M
2009-09-01
This empirical study evaluated the moderating effects of unit customer orientation (CO) climate and climate strength on the relationship between service workers' level of CO and their performance of customer-oriented behaviors (COBs). In addition, the study examined whether aggregate COB performance influences unit profitability. Building on multisource, multilevel data, the study's results suggest that the influence of employee CO on employee COB performance is positive when the unit's CO climate is relatively high and that the constructs are unrelated when unit CO climate is relatively low. In addition, the data reveal that unit COB performance influences unit profitability by enhancing revenues without a concomitant increase in costs. The study's results underscore the theoretical importance of considering cross-level influencers of employee-level relationships and suggest that managers should focus on creating a climate that is supportive of COBs if their units are to profit from the recruitment, hiring, and retention of customer-oriented employees.
ScanImage: flexible software for operating laser scanning microscopes.
Pologruto, Thomas A; Sabatini, Bernardo L; Svoboda, Karel
2003-05-17
Laser scanning microscopy is a powerful tool for analyzing the structure and function of biological specimens. Although numerous commercial laser scanning microscopes exist, some of the more interesting and challenging applications demand custom design. A major impediment to custom design is the difficulty of building custom data acquisition hardware and writing the complex software required to run the laser scanning microscope. We describe a simple, software-based approach to operating a laser scanning microscope without the need for custom data acquisition hardware. Data acquisition and control of laser scanning are achieved through standard data acquisition boards. The entire burden of signal integration and image processing is placed on the CPU of the computer. We quantitate the effectiveness of our data acquisition and signal conditioning algorithm under a variety of conditions. We implement our approach in an open source software package (ScanImage) and describe its functionality. We present ScanImage, software to run a flexible laser scanning microscope that allows easy custom design.
Build loyalty in business markets.
Narayandas, Das
2005-09-01
Companies often apply consumer marketing solutions in business markets without realizing that such strategies only hamper the acquisition and retention of profitable customers. Unlike consumers, business customers inevitably need customized products, quantities, or prices. A company in a business market must therefore manage customers individually, showing how its products or services can help solve each buyer's problems. And it must learn to reap the enormous benefits of loyalty by developing individual relationships with customers. To achieve these ends, the firm's marketers must become aware of the different types of benefits the company offers and convey their value to the appropriate executives in the customer company. It's especially important to inform customers about what the author calls nontangible nonfinancial benefits: above-and-beyond efforts, such as delivering supplies on holidays to keep customers' production lines going. The author has developed a simple set of devices, the benefit stack and the decision-maker stack, to help marketers communicate their firm's myriad benefits. The vendor lists the benefits it offers, then lists the customer's decision makers, specifying their concerns, motivations, and power bases. By linking the two stacks, the vendor can systematically communicate how it will meet each decision-maker's needs. The author has also developed a tool called a loyalty ladder, which helps a company determine how much time and money to spend on relationships with various customers. As customers become increasingly loyal, they display behaviors in a predictable sequence, from growing the relationship and providing word-of-mouth endorsements to investing in the vendor company. The author has found that customers follow the same sequence of loyalty behaviors in all business markets.
Californium-252 Program Equipment Evaluation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chattin, Fred Rhea; Wilson, Kenton; Ezold, Julie G.
2017-12-01
To successfully continue the 252Cf production and meet the needs of the customers, a comprehensive evaluation of the Building 7920 processing equipment was requested to identify equipment critical to the operational continuity of the program.
MSFC Respiratory Protection Services
NASA Technical Reports Server (NTRS)
CoVan, James P.
1999-01-01
An overview of the Marshall Space Flight Center Respiratory Protection program is provided in this poster display. Respiratory protection personnel, building, facilities, equipment, customers, maintenance and operational activities, and Dynatech fit testing details are described and illustrated.
Key ingredients needed when building large data processing systems for scientists
NASA Technical Reports Server (NTRS)
Miller, K. C.
2002-01-01
Why is building a large science software system so painful? Weren't teams of software engineers supposed to make life easier for scientists? Does it sometimes feel as if it would be easier to write the million lines of code in Fortran 77 yourself? The cause of this dissatisfaction is that many of the needs of the science customer remain hidden in discussions with software engineers until after a system has already been built. In fact, many of the hidden needs of the science customer conflict with stated needs and are therefore very difficult to meet unless they are addressed from the outset in a system's architectural requirements. What's missing is the consideration of a small set of key software properties in initial agreements about the requirements, the design and the cost of the system.
A global organism detection and monitoring system for non-native species
Graham, J.; Newman, G.; Jarnevich, C.; Shory, R.; Stohlgren, T.J.
2007-01-01
Harmful invasive non-native species are a significant threat to native species and ecosystems, and the costs associated with non-native species in the United States are estimated at over $120 billion per year. While some local or regional databases exist for some taxonomic groups, there are no effective geographic databases designed to detect and monitor all species of non-native plants, animals, and pathogens. We developed a web-based solution called the Global Organism Detection and Monitoring (GODM) system to provide real-time data from a broad spectrum of users on the distribution and abundance of non-native species, including attributes of their habitats for predictive spatial modeling of current and potential distributions. The four major subsystems of GODM provide dynamic links between the organism data, web pages, spatial data, and modeling capabilities. The core survey database tables for recording invasive species survey data are organized into three categories: "Where, Who & When, and What." Organisms are identified with Taxonomic Serial Numbers from the Integrated Taxonomic Information System. To allow users to immediately see a map of their data combined with other users' data, a custom geographic information system (GIS) Internet solution was required. The GIS solution provides an unprecedented level of flexibility in database access, allowing users to display maps of invasive species distributions or abundances based on various criteria including taxonomic classification (i.e., phylum or division, order, class, family, genus, species, subspecies, and variety), a specific project, a range of dates, and a range of attributes (percent cover, age, height, sex, weight). This is a significant paradigm shift from "map servers" to true Internet-based GIS solutions. The remainder of the system was created with a mix of commercial products, open source software, and custom software.
Custom GIS libraries were created where required for processing large datasets, accessing the operating system, and using existing libraries in C++, R, and other languages to develop the tools to track harmful species in space and time. The GODM database and system are crucial for early detection and rapid containment of invasive species. © 2007 Elsevier B.V. All rights reserved.
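The "Where, Who & When, What" organization of the survey tables can be sketched as follows. Table names, columns, and values here are illustrative assumptions, not the actual GODM schema, and SQLite stands in for the production database.

```python
import sqlite3

# Simplified toy version of the three survey categories described above.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE location (loc_id INTEGER PRIMARY KEY, lat REAL, lon REAL);  -- Where
CREATE TABLE visit    (visit_id INTEGER PRIMARY KEY, loc_id INTEGER,
                       observer TEXT, obs_date TEXT);                    -- Who & When
CREATE TABLE observation (obs_id INTEGER PRIMARY KEY, visit_id INTEGER,
                          tsn INTEGER, percent_cover REAL);              -- What
""")
con.execute("INSERT INTO location VALUES (1, 40.1, -105.3)")
con.execute("INSERT INTO visit VALUES (1, 1, 'A. Observer', '2007-06-01')")
con.execute("INSERT INTO observation VALUES (1, 1, 36935, 40.0)")  # TSN is illustrative

# The kind of taxon- and date-filtered query a GIS front end might issue:
rows = con.execute("""
SELECT l.lat, l.lon, o.percent_cover
FROM observation o
JOIN visit v    ON v.visit_id = o.visit_id
JOIN location l ON l.loc_id   = v.loc_id
WHERE o.tsn = ? AND v.obs_date BETWEEN ? AND ?
""", (36935, '2007-01-01', '2007-12-31')).fetchall()
```

Keying organisms by Taxonomic Serial Number, as the abstract notes, is what lets such queries roll up observations at any taxonomic rank.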
Rimmer, James H; Vanderbom, Kerri A; Graham, Ian D
2016-04-01
Supporting the transition of people with newly acquired and existing disability from rehabilitation into community-based health/wellness programs, services, and venues requires rehabilitation professionals to build evidence by capturing successful strategies at the local level, finding innovative ways to translate successful practices to other communities, and ultimately to upgrade and maintain their applicability and currency for future scale-up. This article describes a knowledge-to-practice framework housed in a national resource and practice center that will support therapists and other rehabilitation professionals in building and maintaining a database of successful health/wellness guidelines, recommendations, and adaptations to promote community health inclusion for people with disabilities. A framework was developed in the National Center on Health, Physical Activity and Disability (NCHPAD) to systematically build and advance the evidence base of health/wellness programs, practices, and services applicable to people with disabilities. N-KATS (NCHPAD Knowledge Adaptation, Translation, and Scale-up) has four sequential strategies. In strategy 1, new evidence- and practice-based knowledge is collected and adapted for the local context (ie, community); in strategy 2, customized resources are effectively disseminated to key stakeholders, including rehabilitation professionals, with appropriate training tools; in strategy 3, NCHPAD staff serve as facilitators assisting key stakeholders in implementing recommendations; in strategy 4, successful elements of practice (eg, guideline, recommendation, adaptation) are archived and scaled to other rehabilitation providers.
The N-KATS framework supports the role of rehabilitation professionals as knowledge brokers, facilitators, and users in a collaborative, dynamic structure that will grow and be sustained over time through the NCHPAD. Video abstract available for additional insights from the authors (see Video, Supplemental Digital Content 1, http://links.lww.com/JNPT/A130).
Development of Analytical Plug-ins for ENSITE: Version 1.0
2017-11-01
ENSITE’s core-software platform builds upon leading geospatial platforms already in use by the Army and is designed to offer an easy-to-use, customized set of workflows for CB planners. Within this platform are added software compo...
Open Energy Information System version 2.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
OpenEIS was created to provide standard methods for authoring, sharing, testing, using, and improving algorithms for operational building energy efficiency with building managers and building owners. OpenEIS is designed as a no-cost/low-cost solution that will propagate fault detection and diagnostic (FDD) solutions into the marketplace by providing state-of-the-art analytical and diagnostic algorithms. As OpenEIS penetrates the market, demand by control system manufacturers and integrators serving small and medium commercial customers will help push these types of commercial software tool offerings into the broader marketplace.
ILLiad: Customer-Focused Interlibrary Loan Automation.
ERIC Educational Resources Information Center
Kriz, Harry M.; Glover, M. Jason; Ford, Kevin C.
1998-01-01
Describes ILLiad (Interlibrary Loan Internet Accessible Database), software that examines the current state of interlibrary loan borrowing requests at Virginia Polytechnic Institute and State University. Topics include system requirements, user procedures, staff procedures, copyright clearance, OCLC, and future developments. (LRW)
The Marriage of Fax and Online.
ERIC Educational Resources Information Center
Basch, Reva
1995-01-01
Discusses the use of fax transmissions. Highlights include searching by fax, including online service, print and electronic publishing, and database producers; customer service, including documentation updates, new product announcements, and marketing materials; document delivery; problems; and fax messaging. (four references) (LRW)
12 CFR 517.1 - Purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-01-01
..., expert witnesses, customized training, relocation services, information systems technology (computer systems, database management, software and office automation), or micrographic services; or in support of...-Owned Businesses Outreach Program (Outreach Program) is to ensure that firms owned and operated by...
12 CFR 517.1 - Purpose and scope.
Code of Federal Regulations, 2012 CFR
2012-01-01
..., expert witnesses, customized training, relocation services, information systems technology (computer systems, database management, software and office automation), or micrographic services; or in support of...-Owned Businesses Outreach Program (Outreach Program) is to ensure that firms owned and operated by...
12 CFR 517.1 - Purpose and scope.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., expert witnesses, customized training, relocation services, information systems technology (computer systems, database management, software and office automation), or micrographic services; or in support of...-Owned Businesses Outreach Program (Outreach Program) is to ensure that firms owned and operated by...
U.S. Energy Service Company Industry: Market Size and Project Performance from 1990-2008
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larsen, Peter; Goldman, Charles; Satchwell, Andrew
2012-08-21
The U.S. energy service company (ESCO) industry is an example of a private sector business model where energy savings are delivered to customers primarily through the use of performance-based contracts. This study was conceived as a snapshot of the ESCO industry prior to the economic slowdown and the introduction of federal stimulus funding mandated by enactment of the American Recovery and Reinvestment Act of 2009 (ARRA). This study utilizes two parallel analytic approaches to characterize ESCO industry and market trends in the U.S.: (1) a "top-down" approach involving a survey of individual ESCOs to estimate aggregate industry activity and (2) a "bottom-up" analysis of a database of ~3,250 projects (representing over $8B in project investment) that reports market trends including installed EE retrofit strategies, project installation costs and savings, project payback times, and benefit-cost ratios over time. Despite the onset of a severe economic recession, the U.S. ESCO industry managed to grow at about 7 percent per year between 2006 and 2008. ESCO industry revenues were about $4.1 billion in 2008 and ESCOs anticipate accelerated growth through 2011 (25 percent per year). We found that 2,484 ESCO projects in our database generated ~$4.0 billion ($2009) in net, direct economic benefits to their customers. We estimate that the ESCO project database includes about 20 percent of all U.S. ESCO market activity from 1990-2008. Assuming the net benefits per project are comparable for ESCO projects that are not included in the LBNL database, this would suggest that the ESCO industry has generated ~$23 billion in net direct economic benefits for customers at projects installed between 1990 and 2008.
There is empirical evidence confirming that the industry is evolving by installing more comprehensive and complex measures, including onsite generation and measures to address deferred maintenance, but this evolution has significant implications for customer project economics, especially at K-12 schools. We found that the median simple payback time has increased from 1.9 to 3.2 years in private sector projects since the early-to-mid 1990s and from 5.2 to 10.5 years in public sector projects for the same time period.
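The simple payback metric cited in these findings is just installed cost divided by annual savings; a worked example with invented numbers, not figures drawn from the LBNL database:

```python
def simple_payback_years(installed_cost, annual_savings):
    # Simple payback time = installed project cost / annual cost savings.
    return installed_cost / annual_savings

# Hypothetical project: $480,000 installed cost and $150,000/year savings
# yield the 3.2-year median reported for recent private sector projects.
payback = simple_payback_years(480_000, 150_000)
```

Rising paybacks of this kind are what one expects as projects bundle deeper, slower-return measures such as onsite generation.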
Reducing Information Overload in Large Seismic Data Sets
DOE Office of Scientific and Technical Information (OSTI.GOV)
HAMPTON,JEFFERY W.; YOUNG,CHRISTOPHER J.; MERCHANT,BION J.
2000-08-02
Event catalogs for seismic data can become very large. Furthermore, as researchers collect multiple catalogs and reconcile them into a single catalog that is stored in a relational database, the reconciled set becomes even larger. The sheer number of these events makes searching for relevant events to compare with events of interest problematic. Information overload in this form can lead to the data sets being under-utilized and/or used incorrectly or inconsistently. Thus, efforts have been initiated to research techniques and strategies for helping researchers to make better use of large data sets. In this paper, the authors present their efforts to do so in two ways: (1) the Event Search Engine, which is a waveform correlation tool and (2) some content analysis tools, which are a combination of custom-built and commercial off-the-shelf tools for accessing, managing, and querying seismic data stored in a relational database. The current Event Search Engine is based on a hierarchical clustering tool known as the dendrogram tool, which is written as a MatSeis graphical user interface. The dendrogram tool allows the user to build dendrogram diagrams for a set of waveforms by controlling phase windowing, down-sampling, filtering, enveloping, and the clustering method (e.g. single linkage, complete linkage, flexible method). It also allows the clustering to be based on two or more stations simultaneously, which is important to bridge gaps in the sparsely recorded event sets anticipated in such a large reconciled event set. Current efforts are focusing on tools to help the researcher winnow the clusters defined using the dendrogram tool down to the minimum optimal identification set. This will become critical as the number of reference events in the reconciled event set continually grows. The dendrogram tool is part of the MatSeis analysis package, which is available on the Nuclear Explosion Monitoring Research and Engineering Program Web Site.
As part of the research into how to winnow the reference events in these large reconciled event sets, additional database query approaches have been developed to provide windows into these datasets. These custom-built content analysis tools help identify dataset characteristics that can potentially aid in providing a basis for comparing similar reference events in these large reconciled event sets. Once these characteristics can be identified, algorithms can be developed to create and add to the reduced set of events used by the Event Search Engine. These content analysis tools have already been useful in providing information on station coverage of the referenced events and basic statistical information on events in the research datasets. The tools can also provide researchers with a quick way to find interesting and useful events within the research datasets. The tools could also be used as a means to review reference event datasets as part of a dataset delivery verification process. There has also been an effort to explore the usefulness of commercially available web-based software to help with this problem. The advantages of using off-the-shelf software applications, such as Oracle's WebDB, to manipulate, customize and manage research data are being investigated. These types of applications are being examined to provide access to large integrated data sets for regional seismic research in Asia. All of these software tools would provide the researcher with unprecedented power without having to learn the intricacies and complexities of relational database systems.
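The clustering step behind a dendrogram tool of this kind can be illustrated with SciPy. The correlation values below are invented, and complete linkage is just one of the linkage methods the abstract mentions; this is a sketch of the technique, not the MatSeis implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy waveform correlation matrix for 4 events (1.0 = identical waveforms).
corr = np.array([
    [1.0, 0.9, 0.2, 0.1],
    [0.9, 1.0, 0.3, 0.2],
    [0.2, 0.3, 1.0, 0.8],
    [0.1, 0.2, 0.8, 1.0],
])
dist = 1.0 - corr                                 # similarity -> distance

# Build the hierarchy (the dendrogram) with complete linkage, then cut it
# at a distance threshold to obtain clusters of similar events.
Z = linkage(squareform(dist), method="complete")
clusters = fcluster(Z, t=0.5, criterion="distance")
```

Cutting the tree at different thresholds is one way a researcher could winnow a large reconciled catalog down to representative reference events.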
EPA Office of Water (OW): Nonpoint Source Projects NHDPlus Indexed Dataset
GRTS locational data for nonpoint source projects. GRTS locations are coded onto NHDPlus v2.1 flowline features to create point and line events or coded onto NHDPlus v2.1 waterbody features to create area events. In addition to NHDPlus reach indexed data there may also be custom events (point, line or area) that are not associated with NHD and are in an EPA standard format that is compatible with EPA's Reach Address Database. Custom events are used to represent GRTS locations that are not represented well in NHDPlus.
New Compressor Added to Glenn's 450- psig Combustion Air System
NASA Technical Reports Server (NTRS)
Swan, Jeffrey A.
2000-01-01
In September 1999, the Central Process Systems Engineering Branch and the Maintenance and Central Process Systems Operations Branch released for service a new high-pressure compressor to supplement the 450-psig Combustion Air System at the NASA Glenn Research Center at Lewis Field. The new compressor, designated C-18, is located in Glenn's Central Air Equipment Building and is remotely operated from the Central Control Building. C-18 can provide 40 pounds per second (pps) of airflow at pressure to our research customers. This capability augments our existing system capacity (compressors C-4 at 38 pps and C-5 at 32 pps), which is generated from Glenn's Engine Research Building. The C-18 compressor was originally part of Glenn's 21-Inch Hypersonic Tunnel, which was transferred from the Jet Propulsion Laboratory to Glenn in the mid-1980s. With the investment of construction of facilities funding, the compressor was modified, new mechanical and electrical support equipment were purchased, and the unit was installed in the basement of the Central Air Equipment Building. After several weeks of checkout and troubleshooting, the new compressor was ready for long-term, reliable operations. With a total of 110 pps in airflow now available, Glenn is well positioned to support the high-pressure air test requirements of our research customers.
Accelerating semantic graph databases on commodity clusters
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morari, Alessandro; Castellana, Vito G.; Haglin, David J.
We are developing a full software system for accelerating semantic graph databases on commodity clusters that scales to hundreds of nodes while maintaining constant query throughput. Our framework comprises a SPARQL-to-C++ compiler, a library of parallel graph methods, and a custom multithreaded runtime layer, which provides a Partitioned Global Address Space (PGAS) programming model with fork/join parallelism and automatic load balancing over a commodity cluster. We present preliminary results for the compiler and for the runtime.
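At its core, a compiler like the one described must turn SPARQL basic graph patterns into searches over (subject, predicate, object) triples. A minimal, purely illustrative Python sketch of that matching step (not the authors' C++/PGAS code, and with invented data):

```python
# A tiny in-memory triple store; names are illustrative.
triples = {
    ("alice", "knows", "bob"),
    ("bob", "knows", "carol"),
    ("alice", "worksAt", "lab"),
}

def match(pattern, triples):
    """Yield variable bindings for one triple pattern.

    Pattern positions starting with '?' are variables; anything else
    must match the triple's value exactly.
    """
    for triple in triples:
        binding = {}
        ok = True
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value
            elif term != value:
                ok = False
                break
        if ok:
            yield binding

# Conceptually: SELECT ?x WHERE { ?x knows bob }
results = [b["?x"] for b in match(("?x", "knows", "bob"), triples)]
```

A real engine joins many such patterns and, in the system described, distributes the traversal across the PGAS address space with automatic load balancing.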
ERIC Educational Resources Information Center
Sabo, Sandra R.
1994-01-01
The school design process depends on open communication and the participation of various constituencies or customers. Today's architects use a variety of strategies to make sure they build the schools communities want. Describes approaches used by three architects. (MLF)
Building structural similarity database for metric learning
NASA Astrophysics Data System (ADS)
Jin, Guoxin; Pappas, Thrasyvoulos N.
2015-03-01
We propose a new approach for constructing databases for training and testing similarity metrics for structurally lossless image compression. Our focus is on structural texture similarity (STSIM) metrics and the matched-texture compression (MTC) approach. We first discuss the metric requirements for structurally lossless compression, which differ from those of other applications such as image retrieval, classification, and understanding. We identify "interchangeability" as the key requirement for metric performance, and partition the domain of "identical" textures into three regions of "highest," "high," and "good" similarity. We design two subjective tests for data collection: the first relies on ViSiProG to build a database of "identical" clusters, and the second builds a database of image pairs with the "highest," "high," "good," and "bad" similarity labels. The data for the subjective tests are generated during the MTC encoding process and consist of pairs of candidate and target image blocks. The context of the surrounding image is critical for training the metrics to detect lighting discontinuities, spatial misalignments, and other border artifacts that have a noticeable effect on perceptual quality. The identical texture clusters are then used for training and testing two STSIM metrics. The labelled image pair database will be used in future research.
Data model and relational database design for the New Jersey Water-Transfer Data System (NJWaTr)
Tessler, Steven
2003-01-01
The New Jersey Water-Transfer Data System (NJWaTr) is a database design for the storage and retrieval of water-use data. NJWaTr can manage data encompassing many facets of water use, including (1) the tracking of various types of water-use activities (withdrawals, returns, transfers, distributions, consumptive-use, wastewater collection, and treatment); (2) the storage of descriptions, classifications and locations of places and organizations involved in water-use activities; (3) the storage of details about measured or estimated volumes of water associated with water-use activities; and (4) the storage of information about data sources and water resources associated with water use. In NJWaTr, each water transfer occurs unidirectionally between two site objects, and the sites and conveyances form a water network. The core entities in the NJWaTr model are site, conveyance, transfer/volume, location, and owner. Other important entities include water resource (used for withdrawals and returns), data source, permit, and alias. Multiple water-exchange estimates based on different methods or data sources can be stored for individual transfers. Storage of user-defined details is accommodated for several of the main entities. Many tables contain classification terms to facilitate the detailed description of data items and can be used for routine or custom data summarization. NJWaTr accommodates single-user and aggregate-user water-use data, can be used for large or small water-network projects, and is available as a stand-alone Microsoft Access database. Data stored in the NJWaTr structure can be retrieved in user-defined combinations to serve visualization and analytical applications. Users can customize and extend the database, link it to other databases, or implement the design in other relational database applications.
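A heavily reduced sketch of the core entities (site, conveyance, transfer/volume) might look like this in SQL. Table names and columns are assumptions for illustration, not the published USGS design, and SQLite stands in for Access.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE site (site_id INTEGER PRIMARY KEY, name TEXT);
-- Each conveyance carries water unidirectionally between two sites.
CREATE TABLE conveyance (conv_id INTEGER PRIMARY KEY,
                         from_site INTEGER REFERENCES site,
                         to_site   INTEGER REFERENCES site);
CREATE TABLE transfer (transfer_id INTEGER PRIMARY KEY,
                       conv_id INTEGER REFERENCES conveyance,
                       year INTEGER, mgal REAL, method TEXT);
""")
con.executemany("INSERT INTO site VALUES (?, ?)",
                [(1, "Well field"), (2, "Treatment plant")])
con.execute("INSERT INTO conveyance VALUES (1, 1, 2)")
# Two volume estimates for the same transfer, from different methods,
# mirroring NJWaTr's support for multiple water-exchange estimates:
con.executemany("INSERT INTO transfer VALUES (?, ?, ?, ?, ?)",
                [(1, 1, 2001, 350.0, "metered"),
                 (2, 1, 2001, 342.5, "estimated")])
n = con.execute("SELECT COUNT(*) FROM transfer WHERE conv_id = 1").fetchone()[0]
```

Because sites and conveyances form a directed network, aggregate flows through the system fall out of ordinary joins over these three tables.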
BiKEGG: a COBRA toolbox extension for bridging the BiGG and KEGG databases.
Jamialahmadi, Oveis; Motamedian, Ehsan; Hashemi-Najafabadi, Sameereh
2016-10-18
Development of an interface tool between the Biochemical, Genetic and Genomic (BiGG) and KEGG databases is necessary for simultaneous access to the features of both databases. For this purpose, we present the BiKEGG toolbox, an open source COBRA toolbox extension providing a set of functions to infer the reaction correspondences between the KEGG reaction identifiers and those in the BiGG knowledgebase using a combination of manual verification and computational methods. Inferred reaction correspondences using this approach are supported by evidence from the literature, which provides a higher number of reconciled reactions between these two databases compared to the MetaNetX and MetRxn databases. This set of equivalent reactions is then used to automatically superimpose the predicted fluxes using COBRA methods on classical KEGG pathway maps or to create a customized metabolic map based on the KEGG global metabolic pathway, and to find the corresponding reactions in BiGG based on the genome annotation of an organism in the KEGG database. Customized metabolic maps can be created for a set of pathways of interest, for the whole KEGG global map or exclusively for all pathways for which there exists at least one flux carrying reaction. This flexibility in visualization enables BiKEGG to indicate reaction directionality as well as to visualize the reaction fluxes for different static or dynamic conditions in an animated manner. BiKEGG allows the user to export (1) the output visualized metabolic maps to various standard image formats or save them as a video or animated GIF file, and (2) the equivalent reactions for an organism as an Excel spreadsheet.
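The reaction-correspondence idea can be illustrated with a toy lookup. The mapping below is a hypothetical fragment for illustration, not BiKEGG's curated correspondence table, and the real toolbox is implemented for MATLAB/COBRA rather than Python.

```python
# Hypothetical KEGG -> BiGG reaction correspondences; one KEGG reaction
# may map to several BiGG identifiers.
kegg_to_bigg = {
    "R00200": ["PYK"],          # illustrative entry
    "R01786": ["HEX1", "GLK"],  # illustrative one-to-many entry
}

def bigg_reactions(kegg_ids, mapping):
    """Collect BiGG equivalents for a list of KEGG reaction IDs."""
    found, missing = [], []
    for rid in kegg_ids:
        if rid in mapping:
            found.extend(mapping[rid])
        else:
            missing.append(rid)
    return found, missing

found, missing = bigg_reactions(["R00200", "R01786", "R99999"], kegg_to_bigg)
```

With such a table in hand, flux values computed on BiGG-based models can be re-keyed to KEGG identifiers and painted onto KEGG pathway maps, which is the visualization step the abstract describes.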
LeishCyc: a guide to building a metabolic pathway database and visualization of metabolomic data.
Saunders, Eleanor C; MacRae, James I; Naderer, Thomas; Ng, Milica; McConville, Malcolm J; Likić, Vladimir A
2012-01-01
The complexity of the metabolic networks in even the simplest organisms has raised new challenges in organizing metabolic information. To address this, specialized computer frameworks have been developed to capture, manage, and visualize metabolic knowledge. The leading databases of metabolic information are those organized under the umbrella of the BioCyc project, which consists of the reference database MetaCyc, and a number of pathway/genome databases (PGDBs) each focussed on a specific organism. A number of PGDBs have been developed for bacterial, fungal, and protozoan pathogens, greatly facilitating dissection of the metabolic potential of these organisms and the identification of new drug targets. Leishmania are protozoan parasites belonging to the family Trypanosomatidae that cause a broad spectrum of diseases in humans. In this work we use the LeishCyc database, the BioCyc database for Leishmania major, to describe how to build a BioCyc database from genomic sequences and associated annotations. By using metabolomic data generated in our group, we show how such databases can be utilized to elucidate specific changes in parasite metabolism.
ERIC Educational Resources Information Center
May, Abigail
1998-01-01
Offers some key business principles with the hope of helping educational facilities managers improve their operations. Looks at customer service, disparate databases, technological concerns, the mission of facility management, how to improve the bottom line, staffing ideas, future planning, and management suggestions. Lists seven habits of…
ERIC Educational Resources Information Center
Van Horn, Royal
2001-01-01
According to Thomas Stewart's book, intellectual capital comprises three broad categories: human, structural, and customer. Structural, or organizational capital, is knowledge that does not leave at night (with workers, or human capital). Developing a "best practices" database using Lotus Notes software would preserve and access schools'…
NASA Astrophysics Data System (ADS)
Skowron, Łukasz; Gąsior, Marcin; Sak-Skowron, Monika
2014-12-01
Over the past few years, both scientists and business practitioners in marketing and management have concentrated their efforts primarily on the concept of the customer, seeking to know more about customers and to understand their behaviour so that market activities can more easily be influenced and shaped. In today's market, the customer bases the purchase decision on choosing a good or service that will give him or her the greatest satisfaction: a subjective, positive experience that is an emotional reaction to perceived value. Its level results from a comparison between expectations, arising from past experience, obtained information, and promises, and the perception of the experienced situation. In the empirical part of the manuscript, the authors present the main differences in the process of building customer satisfaction and loyalty for two groups of patients: those using prepaid medical services and those who pay for their services each time. The reported results refer to research carried out by the authors between August and October 2012 in the city of Warsaw (Poland) using Structural Equation Modeling. The study was conducted via paper surveys on a sample of 1590 respondents who were patients of selected medical organizations. Using two separate models, the study demonstrated that the evaluation of health services proceeds quite differently in these two groups of patients. This has significant marketing and management implications for communication and for building long-term patient-organization relationships. Medical establishments wanting to manage their relationships with current and potential customers effectively need to understand the nature of the different groups of patients and be able to adjust the scope and form of marketing activities to their different expectations and preferences.
Projecting 2D gene expression data into 3D and 4D space.
Gerth, Victor E; Katsuyama, Kaori; Snyder, Kevin A; Bowes, Jeff B; Kitayama, Atsushi; Ueno, Naoto; Vize, Peter D
2007-04-01
Video games typically generate virtual 3D objects by texture mapping an image onto a 3D polygonal frame. The feeling of movement is then achieved by mathematically simulating camera movement relative to the polygonal frame. We have built customized scripts that adapt video game authoring software to texture mapping images of gene expression data onto b-spline based embryo models. This approach, known as UV mapping, associates two-dimensional (U and V) coordinates within images to the three dimensions (X, Y, and Z) of a b-spline model. B-spline model frameworks were built either from confocal data or de novo extracted from 2D images, once again using video game authoring approaches. This system was then used to build 3D models of 182 genes expressed in developing Xenopus embryos and to implement these in a web-accessible database. Models can be viewed via simple Internet browsers and utilize openGL hardware acceleration via a Shockwave plugin. Not only does this database display static data in a dynamic and scalable manner, the UV mapping system also serves as a method to align different images to a common framework, an approach that may make high-throughput automated comparisons of gene expression patterns possible. Finally, video game systems also have elegant methods for handling movement, allowing biomechanical algorithms to drive the animation of models. With further development, these biomechanical techniques offer practical methods for generating virtual embryos that recapitulate morphogenesis.
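The UV-mapping step, associating 2D image coordinates with points on a 3D model, can be sketched as follows. The texture, the UV assignment, and the nearest-neighbour lookup are illustrative assumptions, far simpler than the b-spline models and authoring tools described above.

```python
import numpy as np

def sample_texture(texture, u, v):
    """Nearest-neighbour lookup of a 2D texture at (u, v) in [0, 1]."""
    h, w = texture.shape
    col = min(int(u * w), w - 1)
    row = min(int(v * h), h - 1)
    return texture[row, col]

# Toy 2x2 "expression image": each value is a gene-expression intensity.
texture = np.array([[0.0, 1.0],
                    [0.2, 0.8]])

# A UV map ties each 3D model vertex to a (u, v) coordinate in the image.
uv_map = {(0.0, 0.0, 1.0): (0.9, 0.1)}   # vertex (x, y, z) -> (u, v)
u, v = uv_map[(0.0, 0.0, 1.0)]
intensity = sample_texture(texture, u, v)
```

Because every model shares the same UV parameterization, expression images from different embryos land on a common framework, which is what makes the automated comparisons mentioned above conceivable.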
ERIC Educational Resources Information Center
Schlenker, Richard M.
This manual is a "how to" training device for building database files using the AppleWorks program with an Apple IIe or Apple IIGS Computer with Duodisk or two disk drives and an 80-column card. The manual provides step-by-step directions, and includes 25 figures depicting the computer screen at the various stages of the database file…
Leveraging Cognitive Context for Object Recognition
2014-06-01
Context is most often viewed as a static concept, learned from large image databases. We build upon this concept by exploring cognitive context, demonstrating how rich dynamic context provided by...context that people rely upon as they perceive the world. Context in ACT-R/E takes the form of associations between related concepts that are learned ...and accuracy of object recognition.
Mobile Food Ordering Application using Android OS Platform
NASA Astrophysics Data System (ADS)
Yosep Ricky, Michael
2014-03-01
The purpose of this research is to build a food-ordering application based on Android with New Order, Order History, Restaurant Profile, Order Status, Tracking Order, and Setting Profile features. The research method is the waterfall model of the System Development Life Cycle (SDLC), with the following phases: requirement definition, analyzing and determining the features needed in the application and producing a detailed definition of each feature; system and software design, designing the flow of the application using storyboard design, user experience design, Unified Modeling Language (UML) design, and database structure design; implementation and unit testing, building the database, translating the designs into programming language code, and performing unit testing; integration and system testing, integrating the unit programs into one system and performing system testing; and operation and maintenance, operating the tested system, with a return to previous phases whenever changes or repairs are needed. The result of this research is a food-ordering application based on Android for customer and courier users, and a website for restaurant and admin users. The conclusion of this research is that the application helps customers place orders easily, gives customers the detailed information they need, helps restaurants receive orders, and helps couriers make deliveries.
Effect of alcohol references in music on alcohol consumption in public drinking places.
Engels, Rutger C M E; Slettenhaar, Gert; ter Bogt, Tom; Scholte, Ron H J
2011-01-01
People are exposed to many references to alcohol, which might influence their consumption of alcohol directly. In a field experiment, we tested whether textual references to alcohol in music played in bars lead to higher revenues of alcoholic beverages. We created two databases: one contained songs referring to alcohol, the parallel database contained songs with matching artists, tempo, and energetic content, but no references to alcohol. Customers of three bars were exposed to either music textually referring to alcohol or to the control condition, resulting in 23 evenings in both conditions. Bartenders were instructed to play songs with references to alcohol (or not) during a period of 2 hours each of the evenings of interest. They were not blind to the experimental condition. The results showed that customers who were exposed to music with textual references to alcohol spent significantly more on alcoholic drinks compared to customers in the control condition. This pilot study provides preliminary evidence that alcohol-related lyrics directly affect alcohol consumption in public drinking places. Since our study is one of the first testing direct effects of music lyrics on consumption, our small-scale, preliminary study needs replication before firm conclusions can be drawn. Copyright © American Academy of Addiction Psychiatry.
Bendapudi, Neeli; Bendapudi, Venkat
2005-05-01
It's easy to conclude from the literature and the lore that top-notch customer service is the province of a few luxury companies and that any retailer outside that rarefied atmosphere is condemned to offer mediocre service at best. But even companies that position themselves for the mass market can provide outstanding customer-employee interactions and profit from them, if they train employees to reflect the brand's core values. The authors studied the convenience store industry in depth and focused on two that have developed a devoted following: QuikTrip (QT) and Wawa. Turnover rates at QT and Wawa are 14% and 22%, respectively, much lower than the typical rate in retail. The authors found six principles that both firms embrace to create a strong culture of customer service. Know what you're looking for: A focus on candidates' intrinsic traits allows the companies to hire people who will naturally bring the right qualities to the job. Make the most of talent: In mass-market retail, talent is generally viewed as a commodity, but that outlook becomes a self-fulfilling prophecy. Create pride in the brand: Service quality depends directly on employees' attachment to the brand. Build community: Wawa and QT have made concerted efforts to build customer loyalty through a sense of community. Share the business context: Employees need a clear understanding of how their company operates and how it defines success. Satisfy the soul: To win an employee's passionate engagement, a company must meet his or her needs for security, esteem, and justice.
You catch more flies with sugar...marketing RIM
DOE Office of Scientific and Technical Information (OSTI.GOV)
KEENEN,MARTHA JANE
There is a difference between marketing and selling. Marketing is finding out what the customer wants and/or needs and showing that customer how a product meets those needs. Modifying or repackaging the product may be required to make its utility clear to the customer. When it is, they'll buy because they, on their own, want it. Selling is pushing a product on the customer for reasons of profit, compliance, the way things have always been done here, or any others. When one markets, a relationship is built. This isn't about a one-time sale; it's about getting those records into safekeeping and customers trusting us to give them back, to retrieve them, the way that customer needs them, when and how that customer needs them. This is a trust-building exercise that has long-term as well as short-term actions and reactions, all aligned toward that interdependent relationship between customers and us, the recorded information managers. Marketing works better than selling because human beings don't like to be pushed: think of door-to-door salespeople and evaluate the emotions they evoke. Are they positive? Go a step further: no one likes being told to do what's good for them. Which brings us to the fundamental marketing, as opposed to sales, principle: What's In It For Me? Commonly called the WIIFM, or "wiff-em," principle in marketing and entrepreneurship texts and classes.
Notre Dame Nuclear Database: A New Chart of Nuclides
NASA Astrophysics Data System (ADS)
Lee, Kevin; Khouw, Timothy; Fasano, Patrick; Mumpower, Matthew; Aprahamian, Ani
2014-09-01
Nuclear data is critical to research fields from medicine to astrophysics. We are creating a database, the Notre Dame Nuclear Database, which can store theoretical and experimental datasets. We place emphasis on storing metadata and on user interaction with the database. In addition to the specific nuclear datum, users are able to search for the author(s), the facility where the measurements were made, the institution of the facility, and the device or method/technique used. We also allow users to interact with the database through online search, an interactive nuclide chart, and a command line interface. The nuclide chart is a more descriptive version of the periodic table that can be used to visualize nuclear properties such as half-lives and masses. We achieve this by using D3 (Data Driven Documents), HTML, and CSS3 to plot the nuclides and color them accordingly. Search capabilities can be applied dynamically to the chart by using Python to communicate with MySQL, allowing for customization. Users can save the customized chart they create to any image format. These features provide a unique approach for researchers to interface with nuclear data, making it more accessible and fully detailed than before. We report on the current progress of this project and will present a working demo that highlights each of the aforementioned features. We will make the database available as open-source software.
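The half-life coloring the abstract describes for its nuclide chart can be sketched as a simple log-scale binning. This is an illustrative assumption, not the project's actual scheme: the bin edges, colors, and the tiny nuclide table below are invented stand-ins for the database.

```python
import math

# Hypothetical sketch of half-life colour-coding for a nuclide chart:
# bin the half-life on a log10 scale and assign a colour per bin.

HALF_LIVES_S = {            # nuclide -> half-life in seconds (approximate)
    "H-3": 3.89e8,          # ~12.3 years
    "C-14": 1.80e11,        # ~5730 years
    "Tc-99m": 2.16e4,       # ~6 hours
    "U-238": 1.41e17,       # ~4.5e9 years
}

def half_life_colour(seconds):
    """Map a half-life in seconds onto a coarse colour bin (log scale)."""
    log_s = math.log10(seconds)
    if log_s < 3:
        return "red"        # under ~17 minutes
    if log_s < 9:
        return "orange"     # up to ~30 years
    if log_s < 15:
        return "yellow"     # up to ~30 million years
    return "green"          # effectively stable

for nuclide, t_half in HALF_LIVES_S.items():
    print(nuclide, half_life_colour(t_half))
```

In the database itself this function would correspond to a D3 color scale applied client-side, with the half-life values fetched from MySQL.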
The U.S. Geological Survey Strategic Plan 1999-2009
1999-01-01
This new version of the USGS Strategic Plan builds on our first strategic plan, which was developed in 1996, and focuses specifically on strategic goals in four areas: customers, programs, people, and operations of the USGS.
78 FR 8101 - Codex Alimentarius Commission: Meeting of the Codex Committee on Food Additives
Federal Register 2010, 2011, 2012, 2013, 2014
2013-02-05
... the building and its parking area. If you require parking, please include the vehicle make and tag... offers an electronic mail subscription service which provides automatic and customized access to selected...
Freely Accessible Chemical Database Resources of Compounds for in Silico Drug Discovery.
Yang, JingFang; Wang, Di; Jia, Chenyang; Wang, Mengyao; Hao, GeFei; Yang, GuangFu
2018-05-07
In silico drug discovery has proved to be a solidly established key component in early drug discovery. However, this task is hampered by limitations in the quantity and quality of compound databases for screening. In order to overcome these obstacles, freely accessible database resources of compounds have bloomed in recent years. Nevertheless, choosing appropriate tools to treat these freely accessible databases is crucial. To the best of our knowledge, this is the first systematic review on this issue. In this review, the advantages and drawbacks of chemical databases were analyzed and summarized based on six categories of freely accessible chemical databases collected from the literature. Suggestions on how, and under which conditions, the use of these databases is reasonable were provided. Tools and procedures for building 3D-structure chemical libraries were also introduced. In this review, we described the freely accessible chemical database resources for in silico drug discovery. In particular, the chemical information available for building chemical databases appears to be an attractive resource for drug design to alleviate experimental pressure. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
Situational Awareness Geospatial Application (iSAGA)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sher, Benjamin
Situational Awareness Geospatial Application (iSAGA) is a geospatial situational awareness software tool that uses an algorithm to extract location data from nearly any internet-based or custom data source and display it geospatially; allows user-friendly spatial analysis using custom-developed tools; and searches complex Geographic Information System (GIS) databases and accesses high resolution imagery. iSAGA has application at the federal, state and local levels for emergency response, consequence management, law enforcement, emergency operations and other decision makers as a tool to provide complete, visual situational awareness using data feeds and tools selected by the individual agency or organization. Feeds may be layered and custom tools developed to uniquely suit each subscribing agency or organization. iSAGA may similarly be applied to international agencies and organizations.
Ogawa, Yoshiko; Tanabe, Naohito; Honda, Akiko; Azuma, Tomoko; Seki, Nao; Suzuki, Tsubasa; Suzuki, Hiroshi
2011-07-01
Point-of-purchase (POP) information at food stores could help promote healthy dietary habits. However, it has been difficult to evaluate the effects of such intervention on customers' behavior. We objectively evaluated the usefulness of POP health information for vegetables in the modification of customers' purchasing behavior by using the database of a point-of-sales (POS) system. Two supermarket stores belonging to the same chain were assigned as the intervention store (store I) and control store (store C). POP health information for vegetables was presented in store I for 60 days. The percent increase in daily sales of vegetables over the sales on the same date of the previous year was compared between the stores by using the database of the POS system, adjusting for the change in monthly visitors from the previous year (adjusted ∆sales). The adjusted ∆sales significantly increased during the intervention period (Spearman's ρ = 0.258, P for trend = 0.006) at store I but did not increase at store C (ρ = -0.037, P for trend = 0.728). The growth of the mean adjusted ∆sales of total vegetables from 30 days before the intervention period through the latter half of the intervention period was estimated to be greater at store I than at store C by 18.7 percentage points (95% confidence interval 1.6-35.9). Health-related POP information for vegetables in supermarkets can encourage customers to purchase and, probably, consume vegetables.
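The year-over-year comparison at the heart of this study, the percent increase of each day's vegetable sales over the same calendar date of the previous year, can be sketched as below. The sales figures are invented for illustration, and the real analysis further adjusted for the change in monthly visitors, which this sketch omits.

```python
# Minimal sketch of the study's outcome measure: percent increase of
# daily sales over the same date of the previous year (unadjusted).

def pct_increase(sales_this_year, sales_last_year):
    """Percent increase of daily sales relative to the same date last year."""
    return 100.0 * (sales_this_year - sales_last_year) / sales_last_year

daily_sales = {                       # date -> (this year, last year) sales
    "2012-06-01": (126000, 120000),   # illustrative values
    "2012-06-02": (99000, 110000),
}

for date, (now, before) in daily_sales.items():
    print(date, pct_increase(now, before))  # 5.0, then -10.0
```

The study's trend test (Spearman's rho over the intervention period) would then be run on this daily series, separately for the intervention and control stores.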
Database Performance Monitoring for the Photovoltaic Systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klise, Katherine A.
The Database Performance Monitoring (DPM) software (copyright in process) is being developed at Sandia National Laboratories to perform quality control analysis on time series data. The software loads time-indexed databases (currently csv format), performs a series of quality control tests defined by the user, and creates reports which include summary statistics, tables, and graphics. DPM can be set up to run on an automated schedule defined by the user. For example, the software can be run once per day to analyze data collected on the previous day. HTML formatted reports can be sent via email or hosted on a website. To compare performance of several databases, summary statistics and graphics can be gathered in a dashboard view which links to detailed reporting information for each database. The software can be customized for specific applications.
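The kind of user-defined quality control test described above can be sketched as a range check over time-indexed records followed by summary statistics. This is a hedged illustration of the general pattern, not DPM's actual API; the records and the valid range are invented.

```python
import statistics

# Hypothetical sketch of a quality-control pass over time-indexed data:
# apply a user-defined range test, separate failures, summarize the rest.

records = [                 # (timestamp, measured AC power in kW)
    ("2017-06-01 12:00", 4.8),
    ("2017-06-01 12:05", 5.1),
    ("2017-06-01 12:10", -0.3),   # sensor glitch: negative power
    ("2017-06-01 12:15", 4.9),
]

def range_test(rows, lo, hi):
    """Split rows into values passing and rows failing a [lo, hi] check."""
    passed = [v for _, v in rows if lo <= v <= hi]
    failed = [(t, v) for t, v in rows if not lo <= v <= hi]
    return passed, failed

good, bad = range_test(records, 0.0, 10.0)
print("failed checks:", bad)                    # flags the 12:10 glitch
print("mean of valid data:", round(statistics.mean(good), 2))
```

A daily scheduled run would load the previous day's csv into `records` and render the pass/fail counts and statistics into an HTML report.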
NREL: U.S. Life Cycle Inventory Database Home Page
NREL and its partners created the U.S. Life Cycle Inventory (LCI) Database to help life cycle assessment (LCA) practitioners answer questions about environmental
The radiopurity.org material database
NASA Astrophysics Data System (ADS)
Cooley, J.; Loach, J. C.; Poon, A. W. P.
2018-01-01
The database at http://www.radiopurity.org is the world's largest public database of material radiopurity measurements. These measurements are used by members of the low-background physics community to build experiments that search for neutrinos, neutrinoless double-beta decay, WIMP dark matter, and other exciting physics. This paper summarizes the current status and the future plan of this database.
Database Software for Social Studies. A MicroSIFT Quarterly Report.
ERIC Educational Resources Information Center
Weaver, Dave
The report describes and evaluates the use of a set of learning tools called database managers and their creation of databases to help teach problem solving skills in social studies. Details include the design, building, and use of databases in a social studies setting, along with advantages and disadvantages of using them. The three types of…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Price, Phillip N.
2014-11-01
Snohomish County Public Utilities District (the District or Snohomish PUD) provides electricity to about 325,000 customers in Snohomish County, Washington. The District has an incentive program to encourage commercial customers to improve energy efficiency: the District partially reimburses the cost of approved retrofits if they provide a level of energy performance improvement that is specified by contract. In 2013 the District contracted with Lawrence Berkeley National Laboratory to provide a third-party review of the Monitoring and Verification (M&V) practices the District uses to evaluate whether companies are meeting their contractual obligations. This work helps LBNL understand the challenges faced by real-world practitioners of M&V of energy savings, and builds on a body of related work such as Price et al. (2013). The District selected a typical project for which they had already performed an evaluation. The present report includes the District's original evaluation as well as LBNL's review of their approach. The review is based on the document itself; on investigation of the load data and outdoor air temperature data from the building evaluated in the document; and on phone discussions with Bill Harris of the Snohomish County Public Utilities District. We will call the building studied in the document the subject building, the original Snohomish PUD report will be referred to as the Evaluation, and this discussion by LBNL is called the Review.
Hybrid Model-Based and Data-Driven Fault Detection and Diagnostics for Commercial Buildings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Frank, Stephen; Heaney, Michael; Jin, Xin
Commercial buildings often experience faults that produce undesirable behavior in building systems. Building faults waste energy, decrease occupants' comfort, and increase operating costs. Automated fault detection and diagnosis (FDD) tools for buildings help building owners discover and identify the root causes of faults in building systems, equipment, and controls. Proper implementation of FDD has the potential to simultaneously improve comfort, reduce energy use, and narrow the gap between actual and optimal building performance. However, conventional rule-based FDD requires expensive instrumentation and valuable engineering labor, which limit deployment opportunities. This paper presents a hybrid, automated FDD approach that combines building energy models and statistical learning tools to detect and diagnose faults noninvasively, using minimal sensors, with little customization. We compare and contrast the performance of several hybrid FDD algorithms for a small security building. Our results indicate that the algorithms can detect and diagnose several common faults, but more work is required to reduce false positive rates and improve diagnosis accuracy.
General engineering specifications for 6000 tpd SRC-I Demonstration Plant
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
This volume contains specifications for architectural features of buildings for the SRC-1 Demonstration Plant: skylights, ventilators, sealants, doors, mirrors, furring and lathing, gypsum plaster, lightweight plaster, wallboard, ceramic tile, acoustic ceiling systems, resilient flooring, carpeting, brick flooring, architectural painting, vinyl wall covering, chalkboards, tackboards, toilets, access flooring, lockers, partitions, washroom accessories, unit kitchens, dock levels, seals, shelters, custom casework, auditorium seats, drapery tacks, prefabricated buildings, stairs, elevators, shelves, etc. (LTN).
Applications of Technology to CAS Data-Base Production.
ERIC Educational Resources Information Center
Weisgerber, David W.
1984-01-01
Reviews the economic importance of applying computer technology to Chemical Abstracts Service database production from 1973 to 1983. Database building, technological applications for editorial processing (online editing, Author Index Manufacturing System), and benefits (increased staff productivity, reduced rate of increase of cost of services,…
SAMSON Technology Demonstrator
2014-06-01
requested. The SAMSON TD was tested with two different policy engines: 1. A custom XACML-based element matching engine using a MySQL database for...performed during the course of the event. Full information protection across the sphere of access management, information protection and auditing was in...
Bibliographies without Tears: Bibliography-Managers Round-Up.
ERIC Educational Resources Information Center
Science Software Quarterly, 1984
1984-01-01
Reviews and compares "Sci-Mate,""Reference Manager," and "BIBLIOPHILE" software packages used for storage and retrieval tasks involving bibliographic data. Each program handles search tasks well; major differences are in the amount of flexibility in customizing the database structure, their import and export…
Streamlining the Process of Acquiring Secure Open Architecture Software Systems
2013-10-08
Microsoft.NET, Enterprise Java Beans, GNU Lesser General Public License (LGPL) libraries, and data communication protocols like the Hypertext Transfer...NetBeans development environments), customer relationship management (SugarCRM), database management systems (PostgreSQL, MySQL ), operating
Yaghoubi, Maryam; Asgari, Hamed; Javadi, Marzieh
2017-01-01
One of the challenges in the fiercely competitive space of health organizations is responding to customers and building trust and satisfaction in them in the shortest time, with the best quality and highest productivity. Hence the aim of this study is to survey the impact of customer relationship management (CRM) on organizational productivity, customer loyalty, satisfaction, and trust in selected hospitals of Isfahan (in Iran). This study is a descriptive correlational research. The study population was the nurses in selected hospitals of Isfahan, and sampling was conducted using the stratified random method. The data collection tool was a researcher-made questionnaire on CRM and its effects (organizational productivity, customer loyalty, satisfaction, and trust), whose validity and reliability were confirmed by the researchers. Structural equation modeling was used to determine the impact of variables; the software used was SPSS version 16 and AMOS version 18 (IBM SPSS). Among the dimensions of CRM, diversification had the highest impact (0.83) and customer acquisition had the lowest (0.57). CRM had the lowest impact on productivity (0.59) and the highest effect on customer satisfaction (0.83). For the implementation of CRM, it is necessary that the studied hospitals improve their strategies for acquiring information about new customers, attracting and keeping new customers, and communicating with patients outside the hospital, and improve their systems for measuring patient satisfaction and loyalty.
Chen, Gengbo; Walmsley, Scott; Cheung, Gemmy C M; Chen, Liyan; Cheng, Ching-Yu; Beuerman, Roger W; Wong, Tien Yin; Zhou, Lei; Choi, Hyungwon
2017-05-02
Data independent acquisition-mass spectrometry (DIA-MS) coupled with liquid chromatography is a promising approach for rapid, automatic sampling of MS/MS data in untargeted metabolomics. However, wide isolation windows in DIA-MS generate MS/MS spectra containing a mixed population of fragment ions together with their precursor ions. This precursor-fragment ion map in a comprehensive MS/MS spectral library is crucial for relative quantification of fragment ions uniquely representative of each precursor ion. However, existing reference libraries are not sufficient for this purpose since the fragmentation patterns of small molecules can vary in different instrument setups. Here we developed a bioinformatics workflow called MetaboDIA to build customized MS/MS spectral libraries using a user's own data dependent acquisition (DDA) data and to perform MS/MS-based quantification with DIA data, thus complementing conventional MS1-based quantification. MetaboDIA also allows users to build a spectral library directly from DIA data in studies of a large sample size. Using a marine algae data set, we show that quantification of fragment ions extracted with a customized MS/MS library can provide as reliable quantitative data as the direct quantification of precursor ions based on MS1 data. To test its applicability in complex samples, we applied MetaboDIA to a clinical serum metabolomics data set, where we built a DDA-based spectral library containing consensus spectra for 1829 compounds. We performed fragment ion quantification using DIA data using this library, yielding sensitive differential expression analysis.
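The precursor-fragment mapping the abstract describes, using fragments unique to one precursor to quantify it in a DIA scan, can be sketched as below. This is a hedged illustration of the general idea, not MetaboDIA's implementation; the library entries, m/z values, and intensities are all invented.

```python
# Hypothetical sketch of fragment-based DIA quantification: a spectral
# library maps each precursor to its fragments; only fragments unique
# to one precursor are summed when quantifying it in a DIA scan.

library = {                 # precursor m/z -> fragment m/z values
    180.06: [163.0, 145.0, 127.0],
    181.07: [163.0, 119.0],
}

def unique_fragments(lib, precursor):
    """Fragments observed for this precursor and no other library entry."""
    others = {f for p, frags in lib.items() if p != precursor for f in frags}
    return [f for f in lib[precursor] if f not in others]

def quantify(lib, precursor, dia_scan):
    """Sum DIA intensities of the precursor's unique fragments."""
    return sum(dia_scan.get(f, 0.0) for f in unique_fragments(lib, precursor))

# Fragment 163.0 is shared between the two precursors, so it is excluded
# from quantification of either one.
dia_scan = {163.0: 900.0, 145.0: 400.0, 127.0: 250.0, 119.0: 300.0}
print(quantify(library, 180.06, dia_scan))  # 650.0
```

In practice the library would be the consensus DDA-derived spectra, and fragment matching would use an m/z tolerance rather than exact keys.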
The role of complaint management in the service recovery process.
Bendall-Lyon, D; Powers, T L
2001-05-01
Patient satisfaction and retention can be influenced by the development of an effective service recovery program that can identify complaints and remedy failure points in the service system. Patient complaints provide organizations with an opportunity to resolve unsatisfactory situations and to track complaint data for quality improvement purposes. Service recovery is an important and effective customer retention tool. One way an organization can ensure repeat business is by developing a strong customer service program that includes service recovery as an essential component. The concept of service recovery involves the service provider taking responsive action to "recover" lost or dissatisfied customers and convert them into satisfied customers. Service recovery has proven to be cost-effective in other service industries. The complaint management process involves six steps that organizations can use to influence effective service recovery: (1) encourage complaints as a quality improvement tool; (2) establish a team of representatives to handle complaints; (3) resolve customer problems quickly and effectively; (4) develop a complaint database; (5) commit to identifying failure points in the service system; and (6) track trends and use information to improve service processes. Customer retention is enhanced when an organization can reclaim disgruntled patients through the development of effective service recovery programs. Health care organizations can become more customer oriented by taking advantage of the information provided by patient complaints, increasing patient satisfaction and retention in the process.
The development and validation of the Incivility from Customers Scale.
Wilson, Nicole L; Holmvall, Camilla M
2013-07-01
Scant research has examined customers as sources of workplace incivility, despite evidence suggesting that mistreatment is more common from organizational outsiders, including customers, than from organizational members (Grandey, Kern, & Frone, 2007; Schat & Kelloway, 2005). As an important step in extending the literature on customer incivility, we conducted two studies to develop and validate a measure of this construct. Study 1 used focus groups of retail and restaurant employees (n = 30) to elicit a list of uncivil customer behaviors, based on which we wrote initial scale items. Study 2 used a correlational survey design (n = 439) to pare down the number of scale items to 10 and to garner reliability and validity evidence for the scale. Exploratory and confirmatory factor analyses show that the scale is unidimensional and distinguishable from measures of the related, but distinct, constructs of interpersonal justice and psychological aggression from customers. Reliability analyses show that the scale is internally consistent. Significant correlations between the scale and individuals' job satisfaction, turnover intentions, and general and job-specific psychological strain provide evidence of criterion-related validity. Hierarchical regression analyses show that the scale significantly predicts three of four organizational and personal strain outcomes over and above a workplace incivility measure adapted for customer incivility, providing some evidence of incremental validity. Limitations and future research directions are discussed. PsycINFO Database Record (c) 2013 APA, all rights reserved.
Gandy, Jessica R; Fossett, Lela; Wong, Brian J F
2016-05-01
This study aims to: 1) determine the current consumer trends of over-the-counter (OTC) and custom-made face mask usage among National Collegiate Athletic Association (NCAA) Division I athletic programs; and 2) provide a literature review of OTC face guards and a classified database. Literature review and survey. Consumer trends were obtained by contacting all 352 NCAA Division I programs. Athletic trainers present in the office when called answered the following questions: 1) "When an athlete breaks his or her nose, is a custom or generic face guard used?" and 2) "What brand is the generic face guard that is used?" Data was analyzed to determine trends among athletic programs. Also, a database of OTC devices available was generated using PubMed, Google, and manufacturer Web sites. Among the 352 NCAA Division I athletic programs, 254 programs participated in the survey (72% response rate). The majority preferred custom-made guards (46%). Disadvantages included high cost and slow manufacture turnaround time. Only 20% of the programs strictly used generic brands. For the face mask database, 10 OTC products were identified and classified into four categories based on design, with pricing ranging between $35.99 and $69.95. Only a handful of face masks exist for U.S. consumers, but none of them have been reviewed or classified by product design, sport application, price, and collegiate consumer use. This project details usage trends among NCAA Division I athletic programs and provides a list of available devices that can be purchased to protect the nose and face during sports. NA. Laryngoscope, 126:1054-1060, 2016. © 2015 The American Laryngological, Rhinological and Otological Society, Inc.
Ehrhart, Karen Holcombe; Witt, L A; Schneider, Benjamin; Perry, Sara Jansen
2011-03-01
We lend theoretical insight to the service climate literature by exploring the joint effects of branch service climate and the internal service provided to the branch (the service received from corporate units to support external service delivery) on customer-rated service quality. We hypothesized that service climate is related to service quality most strongly when the internal service quality received is high, providing front-line employees with the capability to deliver what the service climate motivates them to do. We studied 619 employees and 1,973 customers in 36 retail branches of a bank. We aggregated employee perceptions of the internal service quality received from corporate units and the local service climate and external customer perceptions of service quality to the branch level of analysis. Findings were consistent with the hypothesis that high-quality internal service is necessary for branch service climate to yield superior external customer service quality. PsycINFO Database Record (c) 2011 APA, all rights reserved.
Schmidt, Joseph A; Pohler, Dionne M
2018-05-17
We develop competing hypotheses about the relationship between high performance work systems (HPWS) and employee and customer satisfaction. Drawing on 8 years of employee and customer survey data from a financial services firm, we used a recently developed empirical technique-covariate balanced propensity score (CBPS) weighting-to examine if the proposed relationships between HPWS and satisfaction outcomes can be explained by reverse causality, selection effects, or commonly omitted variables such as leadership behavior. The results provide support for leader behaviors as a primary driver of customer satisfaction, rather than HPWS, and also suggest that the problem of reverse causality requires additional attention in future human resource (HR) systems research. Model comparisons suggest that the estimates and conclusions vary across CBPS, meta-analytic, cross-sectional, and time-lagged models (with and without a lagged dependent variable as a control). We highlight the theoretical and methodological implications of the findings for HR systems research. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Residential Indoor Temperature Study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Booten, Chuck; Robertson, Joseph; Christensen, Dane
2017-04-07
In this study, we add to the body of knowledge around answering the question: What are good assumptions for HVAC set points in U.S. homes? We collected and analyzed indoor temperature data from U.S. homes using funding from the U.S. Department of Energy's Building America (BA) program, which relies on accurate energy simulation of homes. Simulations are used to set Building America goals, predict the impact of new building techniques and technologies, inform research objectives, evaluate home performance, optimize efficiency packages to meet savings goals, customize savings approaches to specific climate zones, and myriad other uses.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Singh, Reshma; Ravache, Baptiste; Sartor, Dale
India launched the Energy Conservation Building Code (ECBC) in 2007, and a revised version in 2017 as ambitious first steps towards promoting energy efficiency in the building sector. Pioneering early adopters—building owners, A&E firms, and energy consultants—have taken the lead to design customized solutions for their energy-efficient buildings. This Guide offers a synthesizing framework, critical lessons, and guidance to meet and exceed ECBC. Its whole-building lifecycle assurance framework provides a user-friendly methodology to achieve high performance in terms of energy, environmental, and societal impact. Class A offices are selected as a target typology, being a high-growth sector with significant opportunities for energy savings. The practices may be extrapolated to other commercial building sectors, as well as extended to other regions with similar cultural, climatic, construction, and developmental contexts.
Information resources at the National Center for Biotechnology Information.
Woodsmall, R M; Benson, D A
1993-01-01
The National Center for Biotechnology Information (NCBI), part of the National Library of Medicine, was established in 1988 to perform basic research in the field of computational molecular biology as well as build and distribute molecular biology databases. The basic research has led to new algorithms and analysis tools for interpreting genomic data and has been instrumental in the discovery of human disease genes for neurofibromatosis and Kallmann syndrome. The principal database responsibility is the National Institutes of Health (NIH) genetic sequence database, GenBank. NCBI, in collaboration with international partners, builds, distributes, and provides online and CD-ROM access to over 112,000 DNA sequences. Another major program is the integration of multiple sequence databases and related bibliographic information and the development of network-based retrieval systems for Internet access. PMID:8374583
ERIC Educational Resources Information Center
Riso, Ovid
1977-01-01
Advertising should be viewed as a sales-building investment rather than simply a business outlay, since it is a completely controllable expense. Suggestions deal with the sales budget, profiling the store and its customers, advertising media, promotional ideas, and consumer protection. (LBH)
ERIC Educational Resources Information Center
Carroll, David J.
2001-01-01
Administrators can create word-of-mouth communication that dispels negative attitudes and builds good school reputations by discovering what parents and students are saying, targeting employee satisfaction and retention, providing excellent customer service, actively seeking and handling complaints, nurturing champions, and integrating "grapevine"…
Building Customer Relationships: A Model for Vocational Education and Training Delivery.
ERIC Educational Resources Information Center
Jarratt, Denise G.; Murphy, Tom; Lowry, Diannah
1997-01-01
Review of the theory of relational marketing and interviews with training providers identified a training delivery model that includes elements of trust and commitment, investment by relationship partners, and knowledge exchange, supporting relationship longevity. (SK)
Appleby, C
2001-01-01
Drug firms are integrating technology into the continuum of care. They're enlisting physicians to use their technology in prescribing medications, reporting clinical data, and learning about new drugs. They're also building a loyal customer base, and they're doing it smartly.
76 FR 3600 - Codex Alimentarius Commission: Meeting of the Codex Committee on Food Additives
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-20
... because it will expedite entry into the building and its parking area. If you require parking, please... provides automatic and customized access to selected food safety news and information. This service is...
77 FR 5483 - Codex Alimentarius Commission: Meeting of the Codex Committee on Food Additives
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-03
... building and its parking area. If you require parking, please include the vehicle make and tag number when..., FSIS offers an electronic mail subscription service which provides automatic and customized access to...
Identification of site frequencies from building records
Celebi, M.
2003-01-01
A simple procedure to identify site frequencies using earthquake response records from roofs and basements of buildings is presented. For this purpose, data from five different buildings are analyzed using only spectral analysis techniques. Additional data, such as free-field records in close proximity to the buildings and site characterization data, are also used to estimate site frequencies and thereby to provide convincing evidence and confirmation of the site frequencies inferred from the building records. Furthermore, a simple code formula is used to calculate site frequencies and compare them with the site frequencies identified from records. Results show that the simple procedure is effective in identifying site frequencies and provides relatively reliable estimates when compared with other methods. Therefore, the simple procedure for estimating site frequencies using earthquake records can be useful in adding to the database of site frequencies. Such databases can be used to better estimate site frequencies of sites with similar geological structures.
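The spectral step described above can be sketched in a few lines: compute the Fourier amplitude spectrum of a recorded motion and take the frequency of the largest peak as the dominant (site or building) frequency. The sampling rate and synthetic signal below are illustrative assumptions, not data from the study.

```python
import math

def amplitude_spectrum(signal, fs):
    """Naive DFT amplitude spectrum: (frequency, amplitude) pairs for the
    positive-frequency bins of a real-valued record."""
    n = len(signal)
    spectrum = []
    for k in range(1, n // 2):  # skip the DC bin
        re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spectrum.append((k * fs / n, 2 * math.hypot(re, im) / n))
    return spectrum

def peak_frequency(signal, fs):
    """Frequency of the largest spectral peak, a proxy for the dominant
    frequency in a recorded motion."""
    return max(amplitude_spectrum(signal, fs), key=lambda fa: fa[1])[0]

# Synthetic basement record dominated by an assumed 2.5 Hz site response,
# with a weaker 7.0 Hz component mixed in.
fs = 50.0   # samples per second (assumed)
n = 500
basement = [math.sin(2 * math.pi * 2.5 * t / fs)
            + 0.2 * math.sin(2 * math.pi * 7.0 * t / fs) for t in range(n)]
print(peak_frequency(basement, fs))  # dominant frequency near 2.5 Hz
```

In practice one would use an FFT and compare roof, basement, and free-field spectra (or their ratios) as the paper does; the naive DFT here only keeps the sketch dependency-free.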
The NASA Computational Fluid Dynamics (CFD) program - Building technology to solve future challenges
NASA Technical Reports Server (NTRS)
Richardson, Pamela F.; Dwoyer, Douglas L.; Kutler, Paul; Povinelli, Louis A.
1993-01-01
This paper presents the NASA Computational Fluid Dynamics program in terms of a strategic vision and goals as well as NASA's financial commitment and personnel levels. The paper also identifies the CFD program customers and the support to those customers. In addition, the paper discusses technical emphasis and direction of the program and some recent achievements. NASA's Ames, Langley, and Lewis Research Centers are the research hubs of the CFD program while the NASA Headquarters Office of Aeronautics represents and advocates the program.
Educational Brokering and Adult Basic Education.
ERIC Educational Resources Information Center
Roberts, David J.
1978-01-01
Describes how an educational broker accomplishes the task of successfully matching educational resources with the needs of his adult education customer: the role of the educational broker, establishment of his database, accessing the data, publicizing the center, delivery of service, and the library's role/responsibility. (Author/JD)
Web-Enabled Systems for Student Access.
ERIC Educational Resources Information Center
Harris, Chad S.; Herring, Tom
1999-01-01
California State University, Fullerton is developing a suite of server-based, Web-enabled applications that distribute the functionality of its student information system software to external customers without modifying the mainframe applications or databases. The cost-effective, secure, and rapidly deployable business solution involves using the…
Databases Don't Measure Motivation
ERIC Educational Resources Information Center
Yeager, Joseph
2005-01-01
Automated persuasion is the Holy Grail of quantitatively biased database designers. However, database histories are, at best, probabilistic estimates of customer behavior and do not make use of more sophisticated qualitative motivational profiling tools. While usually absent from web designer thinking, qualitative motivational profiling can be…
ToxMiner Software Interface for Visualizing and Analyzing ToxCast Data
The ToxCast dataset represents a collection of assays and endpoints that will require both standard statistical approaches as well as customized data analysis workflows. To analyze this unique dataset, we have developed an integrated database with a Java-based interface called ToxMi...
Bar-Code System for a Microbiological Laboratory
NASA Technical Reports Server (NTRS)
Law, Jennifer; Kirschner, Larry
2007-01-01
A bar-code system has been assembled for a microbiological laboratory that must examine a large number of samples. The system includes a commercial bar-code reader, computer hardware and software components, plus custom-designed database software. The software generates a user-friendly, menu-driven interface.
ERIC Educational Resources Information Center
Fox, Megan K.
2003-01-01
Examines how librarians are customizing their services and collections for handheld computing. Discusses the widest adoption of PDAs (personal digital assistants) in libraries that serve health and medical communities; PDA-friendly information pages; the reference focus; journals and databases; lending materials; publicity; use of PDAs by library…
SAM International Case Studies: DPV Analysis in Mexico
DOE Office of Scientific and Technical Information (OSTI.GOV)
McCall, James D
Presentation demonstrates the use of the System Advisor Model (SAM) in international analyses, specifically Mexico. Two analyses are discussed with relation to SAM modelling efforts: 1) Customer impacts from changes to net metering and billing agreements and 2) Potential benefits of PV for Mexican solar customers, the Mexican Treasury, and the environment. Along with the SAM analyses, integration of the International Utility Rate Database (I-URDB) with SAM and future international SAM work are discussed. Presentation was created for the International Solar Energy Society's (ISES) webinar titled 'International use of the NREL System Advisor Model (SAM) with case studies'.
Research on architecture of intelligent transportation cloud platform for Guangxi expressway
NASA Astrophysics Data System (ADS)
Hua, Pan; Huang, Zhongxiang; He, Zengzhen
2017-04-01
In view of the practical needs of intelligent transportation business collaboration, a model of intelligent traffic business collaboration is established. An architecture of an intelligent traffic cloud platform for expressways is proposed, which realizes the loose coupling of each intelligent traffic business module. Based on customization technology in the database design, it realizes the dynamic customization of business functions, meaning that different roles can dynamically add business functions as needed. Through its application in the development and implementation of an actual business system, the architecture is proved to be effective and feasible.
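The dynamic customization idea above amounts to keeping role-to-function grants in a configuration table rather than in code. A minimal sketch, with all module names and grants invented for illustration:

```python
# Registry of business modules (illustrative names, not from the paper).
BUSINESS_FUNCTIONS = {
    "toll_audit": lambda: "toll audit report",
    "traffic_monitor": lambda: "live traffic view",
    "incident_dispatch": lambda: "dispatch ticket",
}

# Rows as they might be persisted in a customization table: (role, function).
ROLE_GRANTS = [
    ("operator", "traffic_monitor"),
    ("operator", "incident_dispatch"),
    ("auditor", "toll_audit"),
]

def functions_for(role):
    """Resolve the functions a role may invoke from the grant table,
    so new functions can be granted at runtime without code changes."""
    return sorted(f for r, f in ROLE_GRANTS if r == role)

def invoke(role, name):
    """Run a business function only if the role has been granted it."""
    if name not in functions_for(role):
        raise PermissionError(f"{role} has no grant for {name}")
    return BUSINESS_FUNCTIONS[name]()

print(functions_for("operator"))  # ['incident_dispatch', 'traffic_monitor']
print(invoke("auditor", "toll_audit"))
```

Adding a row to `ROLE_GRANTS` (in the real system, a database table) extends a role's capabilities without redeploying any module, which is the loose coupling the architecture aims for.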
Developing a Large Lexical Database for Information Retrieval, Parsing, and Text Generation Systems.
ERIC Educational Resources Information Center
Conlon, Sumali Pin-Ngern; And Others
1993-01-01
Important characteristics of lexical databases and their applications in information retrieval and natural language processing are explained. An ongoing project using various machine-readable sources to build a lexical database is described, and detailed designs of individual entries with examples are included. (Contains 66 references.) (EAM)
Enhancing Knowledge Integration: An Information System Capstone Project
ERIC Educational Resources Information Center
Steiger, David M.
2009-01-01
This database project focuses on learning through knowledge integration; i.e., sharing and applying specialized (database) knowledge within a group, and combining it with other business knowledge to create new knowledge. Specifically, the Tiny Tots, Inc. project described below requires students to design, build, and instantiate a database system…
Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System
Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen
2015-01-01
The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with a pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, yet it requires considerable time and effort, and the labor grows with the size of the indoor environment. To provide better indoor positioning services while reducing the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, an advantage of the RSS approach is that signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. This function, capturing the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experimental results, with 72.2% probability, the error of the RSS database extended with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared with the system without Kriging. PMID:26343673
Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System.
Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen
2015-08-28
The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with a pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, yet it requires considerable time and effort, and the labor grows with the size of the indoor environment. To provide better indoor positioning services while reducing the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, an advantage of the RSS approach is that signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. This function, capturing the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experimental results, with 72.2% probability, the error of the RSS database extended with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared with the system without Kriging.
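The interpolation step can be sketched with ordinary Kriging in pure Python. The paper fits a propagation model to choose the variogram; here a simple linear variogram and a made-up corridor of reference points stand in for that fit, so the numbers are illustrative only.

```python
def solve(a, b):
    """Solve the linear system a.w = b by Gaussian elimination with
    partial pivoting (the Kriging systems here are small)."""
    n = len(b)
    m = [row[:] + [b[i]] for i, row in enumerate(a)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, n):
            f = m[r][col] / m[col][col]
            for c in range(col, n + 1):
                m[r][c] -= f * m[col][c]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (m[r][n] - sum(m[r][c] * w[c] for c in range(r + 1, n))) / m[r][r]
    return w

def kriging_estimate(points, x0, y0, slope=1.0):
    """Ordinary Kriging with a linear variogram gamma(h) = slope * h.
    points: list of (x, y, rss_dBm) reference-point measurements."""
    dist = lambda ax, ay, bx, by: ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5
    gamma = lambda h: slope * h
    n = len(points)
    # (n+1) x (n+1) Kriging system; the extra row/column enforces
    # that the weights sum to 1 (unbiasedness, via a Lagrange multiplier).
    a = [[gamma(dist(points[i][0], points[i][1], points[j][0], points[j][1]))
          for j in range(n)] + [1.0] for i in range(n)]
    a.append([1.0] * n + [0.0])
    b = [gamma(dist(px, py, x0, y0)) for px, py, _ in points] + [1.0]
    w = solve(a, b)
    return sum(w[i] * points[i][2] for i in range(n))

# Assumed RSS survey along a corridor: signal decays with distance from the AP.
rps = [(0, 0, -40.0), (10, 0, -45.0), (20, 0, -50.0), (30, 0, -55.0)]
print(kriging_estimate(rps, 15, 0))  # interpolated RSS between the RPs
```

Because ordinary Kriging is an exact interpolator, the estimate at a surveyed reference point reproduces the measured value, while points between RPs get a weighted combination honoring the spatial correlation encoded in the variogram.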
1981-05-01
factors that cause damage are discussed below. a. Architectural elements. Damage to architectural elements can result in both significant dollar losses...hazard priority-ranking procedure are: 1. To produce meaningful results which are as simple as possible, considering the existing databases. 2. To...minimize the amount of data required for meaningful results, i.e., the database should contain only the most fundamental building characteristics. 3. To
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-09
... 151.13, Intertek USA, Inc., 1000 Port Carteret Drive Building C, Carteret, NJ 07008, has been approved to gauge and accredited to test petroleum and petroleum products for customs purposes, in accordance...
Online History Textbooks: Breaking the Mold.
ERIC Educational Resources Information Center
Schick, James B. M.
2001-01-01
Outlines recommended conditions and features of online history textbooks: link control, coverage of methodology, maps, breadth and depth of information, layered storytelling approach, tools, tutorials, customization, team teaching, short movies, interviews, reading activities and skill building activities, overcharging, and password protection.…
Improved Air Combat Awareness; with AESA and Next-Generation Signal Processing
2002-09-01
competence network; building techniques; software development environment; communication; computer architecture; modeling; real-time programming; radar...memory access, skewed load and store, 3.2 GB/s BW; performance: 400 MFLOPS; runtime environment: custom runtime routines, driver routines, hardware
SRNL Tagging and Tracking Video
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
SRNL has developed a next-generation satellite-based tracking system. The tagging and tracking system can work in remote wilderness areas, inside buildings, underground, and in other areas not well served by traditional GPS. It is a perfect response to customer needs and market demand.
Organisational Learning: A New Perspective.
ERIC Educational Resources Information Center
O'Keefe, Ted
2002-01-01
A study of Irish multinational companies identified antecedents to organizational learning: nature of global business, anthropomorphism, dissatisfaction with traditional paradigms, customer-responsive culture, and intellectual capital. The path to the learning organization builds on these antecedents in an environment of innovation focused on…
Multilayer DNA Origami Packed on a Square Lattice
Ke, Yonggang; Douglas, Shawn M.; Liu, Minghui; Sharma, Jaswinder; Cheng, Anchi; Leung, Albert; Liu, Yan; Shih, William M.; Yan, Hao
2009-01-01
Molecular self-assembly using DNA as a structural building block has proven to be an efficient route to the construction of nanoscale objects and arrays of increasing complexity. Using the remarkable “scaffolded DNA origami” strategy, Rothemund demonstrated that a long single-stranded DNA from a viral genome (M13) can be folded into a variety of custom two-dimensional (2D) shapes using hundreds of short synthetic DNA molecules as staple strands. More recently, we generalized a strategy to build custom-shaped, three-dimensional (3D) objects formed as pleated layers of helices constrained to a honeycomb lattice, with precisely controlled dimensions ranging from 10 to 100 nm. Here we describe a more compact design for 3D origami, with layers of helices packed on a square lattice, that can be folded successfully into structures of designed dimensions in a one-step annealing process, despite the increased density of DNA helices. A square lattice provides a more natural framework for designing rectangular structures, the option for a more densely packed architecture, and the ability to create surfaces that are more flat than is possible with the honeycomb lattice. Thus enabling the design and construction of custom 3D shapes from helices packed on a square lattice provides a general foundational advance for increasing the versatility and scope of DNA nanotechnology. PMID:19807088
Aguilera-Mendoza, Longendri; Marrero-Ponce, Yovani; Tellez-Ibarra, Roberto; Llorente-Quesada, Monica T; Salgado, Jesús; Barigye, Stephen J; Liu, Jun
2015-08-01
The large variety of antimicrobial peptide (AMP) databases developed to date are characterized by a substantial overlap of data and similarity of sequences. Our goals are to analyze the levels of redundancy for all available AMP databases and use this information to build a new non-redundant sequence database. For this purpose, a new software tool is introduced. A comparative study of 25 AMP databases reveals the overlap and diversity among them and the internal diversity within each database. The overlap analysis shows that only one database (Peptaibol) contains exclusive data, not present in any other, whereas all sequences in the LAMP_Patent database are included in CAMP_Patent. However, the majority of databases have their own set of unique sequences, as well as some overlap with other databases. The complete set of non-duplicate sequences comprises 16 990 cases, which is almost half of the total number of reported peptides. On the other hand, the diversity analysis identifies the most and least diverse databases and proves that all databases exhibit some level of redundancy. Finally, we present a new parallel-free software, named Dover Analyzer, developed to compute the overlap and diversity between any number of databases and compile a set of non-redundant sequences. These results are useful for selecting or building a suitable representative set of AMPs, according to specific needs. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
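The overlap and non-redundancy analysis above reduces to set operations over sequence collections. A toy sketch (database names and peptide sequences are invented, not the 25 databases in the study):

```python
# Toy stand-ins for AMP databases; sequences are invented for illustration.
databases = {
    "DB_A": {"GIGKFLHSAK", "KWKLFKKIEK", "FLPIIAKLLS"},
    "DB_B": {"KWKLFKKIEK", "ILPWKWPWWP"},
    "DB_C": {"GLFDIVKKVV"},  # entirely exclusive, like Peptaibol in the study
}

def overlap_matrix(dbs):
    """Pairwise counts of sequences shared between databases."""
    names = sorted(dbs)
    return {(a, b): len(dbs[a] & dbs[b]) for a in names for b in names if a < b}

def exclusive(dbs):
    """Databases whose sequences appear in no other database."""
    return sorted(n for n in dbs
                  if not any(dbs[n] & dbs[m] for m in dbs if m != n))

def non_redundant(dbs):
    """The union of all databases: one copy of every distinct sequence."""
    return set().union(*dbs.values())

print(len(non_redundant(databases)))  # 5 distinct sequences in this toy set
print(exclusive(databases))           # ['DB_C']
```

The real analysis also measures within-database redundancy and sequence similarity (not just exact duplicates), which is what the Dover Analyzer tool automates.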
Orsolini, Laura; Francesconi, Giulia; Papanti, Duccio; Giorgetti, Arianna; Schifano, Fabrizio
2015-07-01
Internet and social networking sites play a significant role in the marketing and distribution of recreational/prescription drugs without restrictions. We aimed here at reviewing data relating to the profile of the online drug customer and at describing drug vending websites. The PubMed, Google Scholar, and Scopus databases were searched here in order to elicit data on the socio-demographic characteristics of the recreational marketplaces/online pharmacies' customers and the determinants relating to online drug purchasing activities. Typical online recreational drugs' customers seem to be Caucasian, men, in their 20s, highly educated, and using the web to impact as minimally as possible on their existing work/professional status. Conversely, people without any health insurance seemed to look at the web as a source of more affordable prescription medicines. Drug vending websites are typically presented here with a "no prescription required" approach, together with aggressive marketing strategies. The online availability of recreational/prescriptions drugs remains a public health concern. A more precise understanding of online vending sites' customers may well facilitate the drafting and implementation of proper prevention campaigns aimed at counteracting the increasing levels of online drug acquisition and hence intake activities. Copyright © 2015 John Wiley & Sons, Ltd.
Image-Based Airborne LiDAR Point Cloud Encoding for 3d Building Model Retrieval
NASA Astrophysics Data System (ADS)
Chen, Yi-Chen; Lin, Chao-Hung
2016-06-01
With the development of Web 2.0 and cyber city modeling, an increasing number of 3D models have become available on web-based model-sharing platforms, with many applications such as navigation, urban planning, and virtual reality. Based on the concept of data reuse, a 3D model retrieval system is proposed to retrieve building models similar to a user-specified query. The basic idea behind this system is to reuse existing 3D building models instead of reconstructing them from point clouds. To retrieve models efficiently, the models in databases are generally encoded compactly using a shape descriptor. However, most of the geometric descriptors in related works are applied to polygonal models. In this study, the input query of the model retrieval system is a point cloud acquired by Light Detection and Ranging (LiDAR) systems, because of their efficient scene scanning and spatial information collection. Using point clouds with sparse, noisy, and incomplete sampling as input queries is more difficult than using 3D models. Because the building roof is more informative than other parts of an airborne LiDAR point cloud, an image-based approach is proposed to encode both point clouds from input queries and 3D models in databases. The main goal of the data encoding is that the models in the database and the input point clouds be consistently encoded. Firstly, top-view depth images of buildings are generated to represent the geometric surface of a building roof. Secondly, geometric features are extracted from depth images based on the height, edges, and planes of the building. Finally, descriptors are extracted via spatial histograms and used in the 3D model retrieval system. For data retrieval, the models are retrieved by matching the encoding coefficients of point clouds and building models. In experiments, a database including about 900,000 3D models collected from the Internet is used for evaluation of data retrieval.
The results of the proposed method show a clear superiority over related methods.
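The encoding pipeline above (rasterize the roof points into a top-view depth image, then summarize it as a histogram descriptor matched by distance) can be sketched as follows. The grid size, height bins, and sample points are illustrative choices, not the paper's parameters:

```python
def depth_image(points, cell=1.0, size=4):
    """Top-view depth image: per-cell maximum z of (x, y, z) points."""
    img = [[0.0] * size for _ in range(size)]
    for x, y, z in points:
        i, j = int(y // cell), int(x // cell)
        if 0 <= i < size and 0 <= j < size:
            img[i][j] = max(img[i][j], z)
    return img

def height_histogram(img, edges=(0.0, 2.0, 4.0, 6.0, 8.0)):
    """Spatial-histogram descriptor: count cells falling in each height bin."""
    flat = [v for row in img for v in row]
    return [sum(lo <= v < hi for v in flat)
            for lo, hi in zip(edges, edges[1:])]

def euclidean(d1, d2):
    """Descriptor distance used to rank candidate models."""
    return sum((a - b) ** 2 for a, b in zip(d1, d2)) ** 0.5

# A sparse query point cloud and a database model sampled to points;
# coordinates and heights are invented for illustration.
query = depth_image([(0.5, 0.5, 5.0), (1.5, 0.5, 5.2), (2.5, 2.5, 3.1)])
model = depth_image([(0.5, 0.5, 5.1), (1.5, 0.5, 5.0), (2.5, 2.5, 3.0)])
print(euclidean(height_histogram(query), height_histogram(model)))
```

Encoding both the LiDAR query and the database models through the same rasterization makes the two representations directly comparable, which is the consistency requirement stated in the abstract.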
NASA Astrophysics Data System (ADS)
Tsai, Tsung-Ying; Chang, Kai-Wei; Chen, Calvin Yu-Chian
2011-06-01
The rapidly advancing research on traditional Chinese medicine (TCM) has greatly intrigued pharmaceutical industries worldwide. To take the initiative in the next generation of drug development, we constructed a cloud-computing system for TCM intelligent screening (iScreen) based on TCM Database@Taiwan. iScreen is a compact web server for TCM docking followed by customized de novo drug design. We further implemented a protein preparation tool that both extracts the protein of interest from a raw input file and estimates the size of the ligand binding site. In addition, iScreen is designed with a user-friendly graphical interface for users who have less experience with command-line systems. For customized docking, multiple docking services, including standard, in-water, pH-environment, and flexible docking modes, are implemented. Users can download the first 200 TCM compounds from the best docking results. For TCM de novo drug design, iScreen provides multiple molecular descriptors of a user's interest. iScreen is the world's first web server that employs the world's largest TCM database for virtual screening and de novo drug design. We believe our web server can lead TCM research into a new era of drug development. The TCM docking and screening server is available at http://iScreen.cmu.edu.tw/.
Implementation of a data management software system for SSME test history data
NASA Technical Reports Server (NTRS)
Abernethy, Kenneth
1986-01-01
The implementation of a software system for managing Space Shuttle Main Engine (SSME) test/flight historical data is presented. The software system uses the database management system RIM7 for primary data storage and routine data management, but includes several FORTRAN programs, described here, which provide customized access to the RIM7 database. The consolidation, modification, and transfer of data from the database THIST to the RIM7 database THISRM are discussed. The RIM7 utility modules for generating some standard reports from THISRM and performing some routine updating and maintenance are briefly described. The FORTRAN accessing programs described include programs for initial loading of large data sets into the database, capturing data from files for database inclusion, and producing specialized statistical reports which cannot be provided by the RIM7 report generator utility. An expert system tutorial, constructed using the expert system shell product INSIGHT2, is described. Finally, a potential expert system, which would analyze data in the database, is outlined. This system could use INSIGHT2 as well and would take advantage of RIM7's compatibility with the microcomputer database system RBase 5000.
1999 Customer Satisfaction Survey Report: How Do We Measure Up?
ERIC Educational Resources Information Center
Salvucci, Sameena; Parker, Albert C. E.; Cash, R. William; Thurgood, Lori
2001-01-01
Summarizes results of a 1999 survey regarding the satisfaction of various groups with publications, databases, and services of the National Center for Education Statistics. Groups studied were federal, state, and local policymakers; academic researchers; and journalists. Compared 1999 results with 1997 results. (Author/SLD)
Front End Software for Online Database Searching. Part 2: The Marketplace.
ERIC Educational Resources Information Center
Levy, Louise R.; Hawkins, Donald T.
1986-01-01
This article analyzes the front end software marketplace and discusses some of the complex forces influencing it. Discussion covers intermediary market; end users (library customers, scientific and technical professionals, corporate business specialists, consumers); marketing strategies; a British front end development firm; competitive pressures;…
Searching Lexis and Westlaw: Part III.
ERIC Educational Resources Information Center
Franklin, Carl
1986-01-01
This last installment in a three-part series covers several important areas in the searching of legal information: online (group) training and customer service, documentation (search manuals and other aids), account representatives, microcomputer software, and pricing. Advantages and drawbacks of both the LEXIS and WESTLAW databases are noted.…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-27
... submit order execution reports to the Exchange's Front End Systemic Capture (``FESC'') database linking... that would apply across their respective marketplaces, including a harmonized approach to riskless... approach to customer order protection rules, including how riskless principal transactions should be...
CICS Region Virtualization for Cost Effective Application Development
ERIC Educational Resources Information Center
Khan, Kamal Waris
2012-01-01
Mainframe is used for hosting large commercial databases, transaction servers and applications that require a greater degree of reliability, scalability and security. Customer Information Control System (CICS) is a mainframe software framework for implementing transaction services. It is designed for rapid, high-volume online processing. In order…
Configurable product design considering the transition of multi-hierarchical models
NASA Astrophysics Data System (ADS)
Ren, Bin; Qiu, Lemiao; Zhang, Shuyou; Tan, Jianrong; Cheng, Jin
2013-03-01
Current research on configurable product design mainly focuses on how to convert a predefined set of components into a valid set of product structures. As the scale and complexity of configurable products increase, the interdependencies between customer demands and product structures grow as well. As a result, existing product structures fail to satisfy individual customer requirements, and product variants are needed. This paper aims to build a bridge between customer demands and product structures in order to make demand-driven, fast-response design feasible. First, multi-hierarchical models of configurable product design are established: a customer demand model, a technical requirement model and a product structure model. Then, the transition among these three models is solved with the fuzzy analytic hierarchy process (FAHP) and a multi-level matching algorithm. Finally, the optimal structure for the given customer demands is obtained by calculating the Euclidean distance and similarity of candidate cases. In practice, the configuration design of a clamping unit of an injection molding machine successfully performs an optimal search for product variants with reasonable satisfaction of individual customer demands. The proposed method can automatically generate a configuration design with better alternatives for each product structure, and shortens the time needed to find a product configuration.
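The final matching step described in the abstract above (scoring candidate product structures against a customer-demand vector via Euclidean distance) can be sketched as follows. This is a minimal illustration, not the paper's method: the variant names and three-criterion vectors are invented, and the distance-to-similarity mapping 1/(1+d) is one common convention among several.

```python
import math

def euclidean_similarity(demand, structure):
    """Map the Euclidean distance between a demand vector and a candidate
    product-structure vector to a similarity score in (0, 1]."""
    d = math.dist(demand, structure)
    return 1.0 / (1.0 + d)

def best_match(demand, candidates):
    """Pick the product structure whose feature vector is most similar
    to the customer-demand vector."""
    return max(candidates, key=lambda n: euclidean_similarity(demand, candidates[n]))

# Hypothetical 3-criterion vectors (e.g. FAHP-weighted scores for
# clamping force, cost, energy use) -- invented for illustration.
structures = {"variant_A": [0.8, 0.3, 0.5], "variant_B": [0.2, 0.9, 0.4]}
choice = best_match([0.7, 0.4, 0.5], structures)
```

A real configurator would first derive the demand vector from the FAHP weighting of customer requirements; this sketch only covers the nearest-structure selection.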
Building automatic customer complaints filtering application based on Twitter in Bahasa Indonesia
NASA Astrophysics Data System (ADS)
Gunawan, D.; Siregar, R. P.; Rahmat, R. F.; Amalia, A.
2018-03-01
Twitter has become a medium of communication between a company and its customers, with an average of 330 million monthly active users. Many companies realize the potential of Twitter to establish good relationships with their customers, so they usually maintain an official Twitter account that acts as a customer care division. In Indonesia, one of the companies that utilizes the potential of Twitter to reach its customers is PT Telkom, which has an official customer service account (called @TelkomCare) to receive customers' problems. However, because this account is open to the public, Twitter users may post all kinds of messages (not only complaints) to it. As a result, the Telkom Care account contains not only customer complaints but also compliments and ordinary messages. Furthermore, the complaints should be routed to the relevant division, such as "Indihome", "Telkomsel", "UseeTV", or "Telepon", based on the content of the message. This research built an application that automatically filters Twitter messages into several pre-defined categories (based on the existing divisions) using the Naïve Bayes algorithm. The work comprises collecting Twitter messages, data cleaning, data pre-processing, training and testing, and evaluating the classification results. The classifier achieves 97% accuracy in assigning Twitter messages to the categories mentioned earlier.
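The core of the filtering application above is Naïve Bayes text classification. A minimal multinomial Naïve Bayes with Laplace (add-one) smoothing can be sketched in pure Python; the toy training tweets below are invented (only the division names come from the abstract), and the real system would add the cleaning and pre-processing steps the paper describes.

```python
import math
from collections import Counter, defaultdict

class ComplaintClassifier:
    """Multinomial Naive Bayes with Laplace smoothing for short texts."""

    def fit(self, texts, labels):
        self.priors = Counter(labels)              # label -> document count
        self.word_counts = defaultdict(Counter)    # label -> word -> count
        self.vocab = set()
        for text, label in zip(texts, labels):
            for word in text.lower().split():
                self.word_counts[label][word] += 1
                self.vocab.add(word)
        return self

    def predict(self, text):
        total = sum(self.priors.values())

        def log_posterior(label):
            lp = math.log(self.priors[label] / total)   # class prior
            denom = sum(self.word_counts[label].values()) + len(self.vocab)
            for w in text.lower().split():
                # add-one smoothing keeps unseen words from zeroing the score
                lp += math.log((self.word_counts[label][w] + 1) / denom)
            return lp

        return max(self.priors, key=log_posterior)

# Invented toy tweets; category names taken from the abstract.
clf = ComplaintClassifier().fit(
    ["internet lambat sekali", "internet indihome mati",
     "sinyal hilang terus", "pulsa hilang"],
    ["Indihome", "Indihome", "Telkomsel", "Telkomsel"],
)
```

With real data, `fit` would be run on labeled historical complaints and `predict` on each incoming tweet.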
Yaghoubi, Maryam; Asgari, Hamed; Javadi, Marzieh
2017-01-01
Context: One of the challenges in the fiercely competitive space of health organizations is responding to customers and building trust and satisfaction in them in the shortest time, with the best quality and highest productivity. Hence the aim of this study is to survey the impact of customer relationship management (CRM) on organizational productivity, customer loyalty, satisfaction and trust in selected hospitals of Isfahan (in Iran). Materials and Methods: This study is a descriptive correlational research. The study population was the nurses in selected hospitals of Isfahan, and sampling was conducted using a stratified random method. The data collection tool was a researcher-made questionnaire on CRM and its effects (organizational productivity, customer loyalty, satisfaction and trust), whose validity and reliability were confirmed by the researchers. Structural equation modeling was used to determine the impact of the variables; data were analyzed using SPSS version 16 and AMOS version 18. Results: Among the dimensions of CRM, diversification had the highest impact (0.83) and customer acquisition the lowest (0.57). CRM had the lowest impact on productivity (0.59) and the highest effect on customer satisfaction (0.83). Conclusions: For the implementation of CRM, it is necessary that the studied hospitals improve their strategies for acquiring information about new customers, attracting and retaining them, and communicating with patients outside the hospital, and improve their systems for measuring patient satisfaction and loyalty. PMID:28546971
Tripal v1.1: a standards-based toolkit for construction of online genetic and genomic databases.
Sanderson, Lacey-Anne; Ficklin, Stephen P; Cheng, Chun-Huai; Jung, Sook; Feltus, Frank A; Bett, Kirstin E; Main, Dorrie
2013-01-01
Tripal is an open-source freely available toolkit for construction of online genomic and genetic databases. It aims to facilitate development of community-driven biological websites by integrating the GMOD Chado database schema with Drupal, a popular website creation and content management software. Tripal provides a suite of tools for interaction with a Chado database and display of content therein. The tools are designed to be generic to support the various ways in which data may be stored in Chado. Previous releases of Tripal have supported organisms, genomic libraries, biological stocks, stock collections and genomic features, their alignments and annotations. Also, Tripal and its extension modules provided loaders for commonly used file formats such as FASTA, GFF, OBO, GAF, BLAST XML, KEGG heir files and InterProScan XML. Default generic templates were provided for common views of biological data, which could be customized using an open Application Programming Interface to change the way data are displayed. Here, we report additional tools and functionality that are part of release v1.1 of Tripal. These include (i) a new bulk loader that allows a site curator to import data stored in a custom tab delimited format; (ii) full support of every Chado table for Drupal Views (a powerful tool allowing site developers to construct novel displays and search pages); (iii) new modules including 'Feature Map', 'Genetic', 'Publication', 'Project', 'Contact' and the 'Natural Diversity' modules. Tutorials, mailing lists, download and set-up instructions, extension modules and other documentation can be found at the Tripal website located at http://tripal.info. DATABASE URL: http://tripal.info/.
Sheynkman, Gloria M.; Shortreed, Michael R.; Frey, Brian L.; Scalf, Mark; Smith, Lloyd M.
2013-01-01
Each individual carries thousands of non-synonymous single nucleotide variants (nsSNVs) in their genome, each corresponding to a single amino acid polymorphism (SAP) in the encoded proteins. It is important to be able to directly detect and quantify these variations at the protein level in order to study post-transcriptional regulation, differential allelic expression, and other important biological processes. However, such variant peptides are not generally detected in standard proteomic analyses, due to their absence from the generic databases that are employed for mass spectrometry searching. Here, we extend previous work that demonstrated the use of customized SAP databases constructed from sample-matched RNA-Seq data. We collected deep coverage RNA-Seq data from the Jurkat cell line, compiled the set of nsSNVs that are expressed, used this information to construct a customized SAP database, and searched it against deep coverage shotgun MS data obtained from the same sample. This approach enabled detection of 421 SAP peptides mapping to 395 nsSNVs. We compared these peptides to peptides identified from a large generic search database containing all known nsSNVs (dbSNP) and found that more than 70% of the SAP peptides from this dbSNP-derived search were not supported by the RNA-Seq data, and thus are likely false positives. Next, we increased the SAP coverage from the RNA-Seq derived database by utilizing multiple protease digestions, thereby increasing variant detection to 695 SAP peptides mapping to 504 nsSNV sites. These detected SAP peptides corresponded to moderate to high abundance transcripts (30+ transcripts per million, TPM). The SAP peptides included 192 allelic pairs; the relative expression levels of the two alleles were evaluated for 51 of those pairs, and found to be comparable in all cases. PMID:24175627
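The customized-database approach above builds a protein search database containing sample-specific variant (SAP) sequences and their tryptic peptides. A rough sketch of that construction step follows; the sequence, positions, and function names are invented for illustration, and a real pipeline would generate sequences for every expressed nsSNV from the RNA-Seq calls before MS searching.

```python
import re

def apply_sap(protein_seq, pos, alt_aa):
    """Return the variant protein with a single amino acid polymorphism
    (SAP) substituted at 1-based position `pos`."""
    assert 1 <= pos <= len(protein_seq)
    return protein_seq[:pos - 1] + alt_aa + protein_seq[pos:]

def tryptic_peptides(seq):
    """In-silico tryptic digest: cleave after K or R, except before P."""
    return [p for p in re.split(r'(?<=[KR])(?!P)', seq) if p]

def sap_peptides(protein_seq, pos, alt_aa):
    """Tryptic peptides of the variant protein that span the SAP site;
    these are the entries a customized SAP database would add."""
    variant = apply_sap(protein_seq, pos, alt_aa)
    spanning, start = [], 0
    for pep in tryptic_peptides(variant):
        if start < pos <= start + len(pep):
            spanning.append(pep)
        start += len(pep)
    return spanning

# Toy example: Y -> F substitution at residue 5 of an invented sequence.
variant_peps = sap_peptides("MKTAYIAKQR", 5, "F")
```

The multiple-protease strategy reported in the abstract would repeat the digest step with other cleavage rules (e.g. chymotryptic) to raise SAP-site coverage.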
Dynamic motifs in socio-economic networks
NASA Astrophysics Data System (ADS)
Zhang, Xin; Shao, Shuai; Stanley, H. Eugene; Havlin, Shlomo
2014-12-01
Socio-economic networks are of central importance in economic life. We develop a method of identifying and studying motifs in socio-economic networks by focusing on “dynamic motifs,” i.e., evolutionary connection patterns that, because of “node acquaintances” in the network, occur much more frequently than random patterns. We examine two evolving bipartite networks: i) the world-wide commercial ship chartering market and ii) the ship build-to-order market. We find similar dynamic motifs in both bipartite networks, even though they describe different economic activities. We also find that “influence” and “persistence” are strong factors in the interaction behavior of organizations. When two companies are doing business with the same customer, it is highly probable that another customer, who currently has a business relationship with only one of the two companies, will become a customer of the second in the future. This is the effect of influence. Persistence means that companies with close business ties to customers tend to maintain their relationships over a long period of time.
Handling imbalance data in churn prediction using combined SMOTE and RUS with bagging method
NASA Astrophysics Data System (ADS)
Pura Hartati, Eka; Adiwijaya; Arif Bijaksana, Moch
2018-03-01
Customer churn has become a significant problem and a challenge for telecommunication companies such as PT. Telkom Indonesia. The company must assess the extent of the churn problem so that management can adopt appropriate strategies to minimize churn and retain customers. The churn data in this company, categorized as churn Atas Permintaan Sendiri (APS), are imbalanced, and class imbalance is one of the challenging tasks in machine learning. This study investigates how to handle class imbalance in churn prediction by combining the Synthetic Minority Over-sampling Technique (SMOTE) and Random Under-Sampling (RUS) with the Bagging method, for better churn prediction performance. The dataset used is Broadband Internet data collected from Telkom Regional 6 Kalimantan. The research first applies data preprocessing to balance the imbalanced dataset and to select features via the SMOTE and RUS sampling techniques, and then builds a churn prediction model using Bagging and C4.5.
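The two resampling steps named above can be sketched in pure Python. This is a minimal illustration with invented 2-D toy points, not the paper's pipeline (which feeds the rebalanced data to Bagging with C4.5): SMOTE synthesizes minority samples by interpolating toward nearby minority neighbours, while RUS simply drops random majority samples.

```python
import math
import random

def smote(minority, n_synthetic, k=2, seed=42):
    """SMOTE: each synthetic minority sample lies on the segment between
    a real sample and one of its k nearest minority-class neighbours."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_synthetic):
        x = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not x),
                            key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        out.append([a + gap * (b - a) for a, b in zip(x, nb)])
    return out

def random_undersample(majority, n, seed=42):
    """RUS: keep a random subset of n majority-class samples."""
    return random.Random(seed).sample(majority, n)

# Invented toy minority class (e.g. churn-APS customers in feature space).
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
synthetic = smote(minority, 5)
```

Production code would typically use the `imbalanced-learn` library's `SMOTE` and `RandomUnderSampler` instead of hand-rolling these steps.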
Adaptation of commercial microscopes for advanced imaging applications
NASA Astrophysics Data System (ADS)
Brideau, Craig; Poon, Kelvin; Stys, Peter
2015-03-01
Today's commercially available microscopes offer a wide array of options to accommodate common imaging experiments. Occasionally, an experimental goal will require an unusual light source, filter, or even irregular sample that is not compatible with existing equipment. In these situations the ability to modify an existing microscopy platform with custom accessories can greatly extend its utility and allow for experiments not possible with stock equipment. Light source conditioning/manipulation such as polarization, beam diameter or even custom source filtering can easily be added with bulk components. Custom and after-market detectors can be added to external ports using optical construction hardware and adapters. This paper will present various examples of modifications carried out on commercial microscopes to address both atypical imaging modalities and research needs. Violet and near-ultraviolet source adaptation, custom detection filtering, and laser beam conditioning and control modifications will be demonstrated. The availability of basic "building block" parts will be discussed with respect to user safety, construction strategies, and ease of use.
Odibo, Anthony O; Francis, Andre; Cahill, Alison G; Macones, George A; Crane, James P; Gardosi, Jason
2011-03-01
To derive coefficients for developing a customized growth chart for a Mid-Western US population, and to estimate the association between pregnancy outcomes and small-for-gestational-age (SGA) status defined by the customized growth chart compared with a population-based growth chart for the USA. A retrospective cohort study of an ultrasound database was conducted using 54,433 pregnancies meeting inclusion criteria. Coefficients for customized centiles were derived using 42,277 pregnancies and compared with those obtained from other populations. Two adverse outcome indicators were defined (greater than 7-day stay in the neonatal unit, and stillbirth [SB]), and the risk for each outcome was calculated for the groups of pregnancies defined as SGA by the population standard and SGA by the customized standard, using 12,456 pregnancies as the validation sample. The growth potential expressed as weight at 40 weeks in this population was 3524 g (standard error: 402 g). In the validation population, 4055 cases of SGA were identified using both population and customized standards. The cases additionally identified as SGA by the customized method had a significantly increased risk of each of the adverse outcome categories. The sensitivity and specificity of those identified as SGA by the customized method only, for detecting pregnancies at risk of SB, were 32.7% (95% confidence interval [CI] 27.0-38.8%) and 95.1% (95% CI 94.7-95.0%), versus 0.8% (95% CI 0.1-2.7%) and 98.0% (95% CI 97.8-98.2%) for those identified by only the population-based method. SGA defined by customized growth potential identifies substantially more pregnancies at risk of adverse outcome than the currently used national standard for fetal growth.
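The screening metrics reported above follow the standard definitions. As a small worked sketch (the confusion-matrix counts below are hypothetical, chosen only to land near the reported 32.7% / 95% figures, and the Wald interval is one simple CI choice, not necessarily the study's method):

```python
import math

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

def wald_ci(p, n, z=1.96):
    """Normal-approximation (Wald) 95% confidence interval for a proportion
    estimated from n observations."""
    half = z * math.sqrt(p * (1.0 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)

# Hypothetical counts, NOT the study's data: 98 of 300 SB pregnancies
# flagged SGA by the customized-only standard; 11400 of 12000 non-SB
# pregnancies correctly not flagged.
sens, spec = sensitivity_specificity(tp=98, fn=202, tn=11400, fp=600)
```

For small counts or proportions near 0 or 1, a Wilson or exact (Clopper-Pearson) interval is usually preferred over the Wald approximation.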
The importance of the criteria of residential buildings from the perspective of future users
NASA Astrophysics Data System (ADS)
Sirochmanová, Lenka; Kozlovská, Mária; Bašková, Renáta
2016-06-01
Developers need to know what is important to their customers when preparing the construction of new residential buildings. The paper deals with determining the importance of structural, material, cost, time and environmental criteria of residential buildings from the perspective of future owners. The research methodology that provided the information was a questionnaire survey. The research was conducted along two lines: the first is dedicated to the main construction domains of a residential building, and the second deals with the specific criteria within those main construction domains. The order of importance of the main domains and the specific criteria is determined by analyzing the data through descriptive characteristics: median, mode, variance, average value and weight of importance.
Six Lessons We Learned Applying Six Sigma
NASA Technical Reports Server (NTRS)
Carroll, Napoleon; Casleton, Christa H.
2005-01-01
As Chief Financial Officer of Kennedy Space Center (KSC), I'm responsible not only for financial planning and accounting but also for building strong partnerships with the CFO's customers, who include Space Shuttle and International Space Station operations as well as all who manage the KSC Spaceport. My never-ending goal is to design, manage and continuously improve our core business processes so that they deliver world-class products and services to the CFO's customers. I became interested in Six Sigma as Christa Casleton (KSC's first Six Sigma Black Belt) applied Six Sigma tools and methods to our Plan and Account for Travel Costs process. Her analysis was fresh, innovative and thorough, but even more impressive was her approach to ensuring ongoing, continuous process improvement. Encouraged by the results, I launched two more process improvement initiatives aimed at applying Six Sigma principles to CFO processes that not only touch most of my employees but also have direct customer impact. As many of you know, Six Sigma is a measurement scale that compares the output of a process with customer requirements. That's straightforward, but it demands that you not only understand your processes but also know your products and the critical customer requirements. The objective is to isolate and eliminate the causes of process variation so that the customer sees consistently high quality.
Customer quality and type 2 diabetes from the patients' perspective: a cross-sectional study.
Tabrizi, Jafar S; Wilson, Andrew J; O'Rourke, Peter K
2010-12-18
Quality in health care can be seen as having three principal dimensions: service, technical and customer quality. This study aimed to measure Customer Quality in relation to self-management of Type 2 diabetes. A cross-sectional survey of 577 people with Type 2 diabetes was carried out in Australia. The 13-item Patient Activation Measure was used to evaluate Customer Quality based on self-reported knowledge, skills and confidence in four stages of self-management. All statistical analyses were conducted using SPSS 13.0. All participants achieved scores at the level of stage 1, but ten percent did not achieve score levels consistent with stage 2, and a further 16% did not reach the action stage. Seventy-four percent reported capacity for taking action in self-management, and 38% reported the highest Customer Quality score and the ability to adapt their actions to changes in health and environment. Participants with higher educational attainment, better diabetes control status, and those who maintained continuity of care reported a higher Customer Quality score, reflecting a higher capacity for self-management. Specific capacity-building programs for health care providers and people with Type 2 diabetes are needed to increase their knowledge and skills and to improve their confidence in self-management, in order to achieve improved quality of delivered care and better health outcomes.
Database resources of the National Center for Biotechnology Information: 2002 update
Wheeler, David L.; Church, Deanna M.; Lash, Alex E.; Leipe, Detlef D.; Madden, Thomas L.; Pontius, Joan U.; Schuler, Gregory D.; Schriml, Lynn M.; Tatusova, Tatiana A.; Wagner, Lukas; Rapp, Barbara A.
2002-01-01
In addition to maintaining the GenBank nucleic acid sequence database, the National Center for Biotechnology Information (NCBI) provides data analysis and retrieval resources that operate on the data in GenBank and a variety of other biological data made available through NCBI’s web site. NCBI data retrieval resources include Entrez, PubMed, LocusLink and the Taxonomy Browser. Data analysis resources include BLAST, Electronic PCR, OrfFinder, RefSeq, UniGene, HomoloGene, Database of Single Nucleotide Polymorphisms (dbSNP), Human Genome Sequencing, Human MapViewer, Human–Mouse Homology Map, Cancer Chromosome Aberration Project (CCAP), Entrez Genomes, Clusters of Orthologous Groups (COGs) database, Retroviral Genotyping Tools, SAGEmap, Gene Expression Omnibus (GEO), Online Mendelian Inheritance in Man (OMIM), the Molecular Modeling Database (MMDB) and the Conserved Domain Database (CDD). Augmenting many of the web applications are custom implementations of the BLAST program optimized to search specialized data sets. All of the resources can be accessed through the NCBI home page at http://www.ncbi.nlm.nih.gov. PMID:11752242
A streamlined build system foundation for developing HPC software
DOE Office of Scientific and Technical Information (OSTI.GOV)
White, Chris; Harrison, Cyrus; Hornung, Richard
2017-02-09
BLT bundles custom CMake macros, unit testing frameworks for C++ and Fortran, and a set of smoke tests for common HPC dependencies. The combination of these three provides a foundation for quickly bootstrapping a CMake-based system for developing HPC software.
The Long-Term Pavement Performance Program Roadmap: A Strategic Plan
DOT National Transportation Integrated Search
1995-09-01
The goal of the ongoing, 20-year long-term pavement performance (LTPP) studies is to give State and Provincial transportation departments- the owners and customers of the LTPP program-the information and tools they need to build and maintain longer-l...
Beyond Black Boxes: Bringing Transparency and Aesthetics Back to Scientific Investigation.
ERIC Educational Resources Information Center
Resnick, Mitchel; Berg, Robbie; Eisenberg, Michael
2000-01-01
Presents a set of case studies in which students create, customize, and personalize their own scientific instruments. Finds that students become engaged in scientific inquiry not only through observing and measuring, but also through designing and building. (Author/CCM)
2011-03-21
throughout the experimental runs. Reliable and validated measures of anxiety (Spielberger, 1983), as well as custom-constructed questionnaires about...Crowd modeling and simulation technologies. Transactions on Modeling and Computer Simulation, 20(4). Spielberger, C. D. (1983
Measuring Effectiveness in a Virtual Library
ERIC Educational Resources Information Center
Finch, Jannette L.
2010-01-01
Measuring quality of service in academic libraries traditionally includes quantifiable data such as collection size, staff counts, circulation numbers, reference service statistics, qualitative analyses of customer satisfaction, shelving accuracy, and building comfort. In the libraries of the third millennium, virtual worlds, Web content and…
SRNL Tagging and Tracking Video
None
2018-01-16
SRNL generates a next-generation satellite-based tracking system. The tagging and tracking system can work in remote wilderness areas, inside buildings, underground, and in other areas not well served by traditional GPS. It's a perfect response to customer needs and market demand.
The School Building Principal and Inventory Control: A Case for Computerization.
ERIC Educational Resources Information Center
Stronge, James
1987-01-01
General and special purpose database programs are appropriate for inventory control at the school building level. A fixed asset equipment inventory example illustrates the feasibility of computerized inventory control. (MLF)
Updated database on natural radioactivity in building materials in Europe.
Trevisi, R; Leonardi, F; Risica, S; Nuccetelli, C
2018-07-01
The paper presents the latest collection of activity concentration data for natural radionuclides (²²⁶Ra, ²³²Th and ⁴⁰K) in building materials. This database contains about 24,200 samples of both bulk materials and their constituents (bricks, concrete, cement, aggregates) and superficial materials used in most European Union Member States and some other European countries. The collection also includes radiological information about some NORM residues and by-products (by-product gypsum, metallurgical slags, fly and bottom ashes, and red mud), which can be of radiological concern if recycled into building materials as secondary raw materials. Moreover, radon emanation and radon exhalation rate data are reported for bricks and concrete. Copyright © 2018 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Friedman, Debra; Hoffman, Phillip
2001-01-01
Describes creation of a relational database at the University of Washington supporting ongoing academic planning at several levels and affecting the culture of decision making. Addresses getting started; sharing the database; questions, worries, and issues; improving access to high-demand courses; the advising function; management of instructional…
Performance related issues in distributed database systems
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
The key elements of research performed during the year-long effort of this project are: Investigate the effects of heterogeneity in distributed real-time systems; Study the requirements of TRAC towards building a heterogeneous database system; Study the effects of performance modeling on distributed database performance; and Experiment with an ORACLE-based heterogeneous system.
Keyless Entry: Building a Text Database Using OCR Technology.
ERIC Educational Resources Information Center
Grotophorst, Clyde W.
1989-01-01
Discusses the use of optical character recognition (OCR) technology to produce an ASCII text database. A tutorial on digital scanning and OCR is provided, and a systems integration project which used the Calera CDP-3000XF scanner and text retrieval software to construct a database of dissertations at George Mason University is described. (four…
A UNIMARC Bibliographic Format Database for ABCD
ERIC Educational Resources Information Center
Megnigbeto, Eustache
2012-01-01
Purpose: ABCD is a web-based open and free software suite for library management derived from the UNESCO CDS/ISIS software technology. The first version was launched officially in December 2009 with a MARC 21 bibliographic format database. This paper aims to detail the building of the UNIMARC bibliographic format database for ABCD.…
The Design of Lexical Database for Indonesian Language
NASA Astrophysics Data System (ADS)
Gunawan, D.; Amalia, A.
2017-03-01
Kamus Besar Bahasa Indonesia (KBBI), the official dictionary of the Indonesian language, provides lists of words with their meanings, and its online version can be accessed via the Internet. Another online dictionary is Kateglo. KBBI online and Kateglo only provide an interface for humans; a machine cannot easily retrieve data from these dictionaries without advanced techniques. However, lexical information about words is required in research and application development related to natural language processing, text mining, information retrieval and sentiment analysis. To address this requirement, we need to build a lexical database that provides well-defined, structured information about words. A well-known lexical database is WordNet, which provides the relations among words in English. This paper proposes the design of a lexical database for the Indonesian language based on a combination of the KBBI 4th edition, Kateglo and the WordNet structure. Knowledge representation using semantic networks depicts the relations among words and provides a new structure of lexical database for the Indonesian language. The result of this design can be used as the foundation for building the lexical database for the Indonesian language.
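A WordNet-style lexical database of the kind proposed above boils down to word entries linked by typed semantic relations. Here is a minimal in-memory sketch (the class, its API, and the Indonesian example entries are invented for illustration; the paper's actual schema combines KBBI, Kateglo and WordNet structures):

```python
class Lexicon:
    """Minimal WordNet-style lexical database: word entries carrying a
    definition plus typed semantic relations (synonym, hypernym, ...)."""

    def __init__(self):
        # word -> {"definition": str, "relations": [(relation_type, word)]}
        self.entries = {}

    def add_word(self, word, definition):
        self.entries[word] = {"definition": definition, "relations": []}

    def add_relation(self, word, rel_type, target):
        self.entries[word]["relations"].append((rel_type, target))

    def related(self, word, rel_type):
        """All words linked to `word` by the given relation type."""
        return [w for t, w in self.entries[word]["relations"] if t == rel_type]

# Invented example entries: "kucing" (cat) is a kind of "hewan" (animal).
lex = Lexicon()
lex.add_word("kucing", "mamalia karnivora peliharaan")
lex.add_word("hewan", "organisme hidup yang dapat bergerak")
lex.add_relation("kucing", "hypernym", "hewan")
```

A production version would persist this graph (e.g. in a relational or graph database) so NLP tools can query it programmatically, which is exactly what the human-only KBBI/Kateglo interfaces lack.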
NASA Technical Reports Server (NTRS)
Srivastava, Priyaka; Kraus, Jeff; Murawski, Robert; Golden, Bertsel, Jr.
2015-01-01
NASAs Space Communications and Navigation (SCaN) program manages three active networks: the Near Earth Network, the Space Network, and the Deep Space Network. These networks simultaneously support NASA missions and provide communications services to customers worldwide. To efficiently manage these resources and their capabilities, a team of student interns at the NASA Glenn Research Center is developing a distributed system to model the SCaN networks. Once complete, the system shall provide a platform that enables users to perform capacity modeling of current and prospective missions with finer-grained control of information between several simulation and modeling tools. This will enable the SCaN program to access a holistic view of its networks and simulate the effects of modifications in order to provide NASA with decisional information. The development of this capacity modeling system is managed by NASAs Strategic Center for Education, Networking, Integration, and Communication (SCENIC). Three primary third-party software tools offer their unique abilities in different stages of the simulation process. MagicDraw provides UMLSysML modeling, AGIs Systems Tool Kit simulates the physical transmission parameters and de-conflicts scheduled communication, and Riverbed Modeler (formerly OPNET) simulates communication protocols and packet-based networking. SCENIC developers are building custom software extensions to integrate these components in an end-to-end space communications modeling platform. A central control module acts as the hub for report-based messaging between client wrappers. Backend databases provide information related to mission parameters and ground station configurations, while the end user defines scenario-specific attributes for the model. The eight SCENIC interns are working under the direction of their mentors to complete an initial version of this capacity modeling system during the summer of 2015. 
The intern team is composed of four students in Computer Science, two in Computer Engineering, one in Electrical Engineering, and one studying Space Systems Engineering.
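The hub-and-wrapper architecture described above (a central control module routing report-based messages between client wrappers around each tool) can be sketched as follows. All class names, message fields, and values here are illustrative assumptions, not the actual SCENIC implementation:

```python
class ControlModule:
    """Central hub: routes report-based messages between client wrappers."""
    def __init__(self):
        self.wrappers = {}  # wrapper name -> ClientWrapper
        self.log = []       # every report that passed through the hub

    def register(self, name, wrapper):
        self.wrappers[name] = wrapper

    def send(self, sender, recipient, report):
        # Reports never travel tool-to-tool directly; the hub records
        # each one and forwards it to the addressed wrapper.
        message = {"from": sender, "to": recipient, "report": report}
        self.log.append(message)
        self.wrappers[recipient].receive(message)


class ClientWrapper:
    """Wraps one external tool (e.g. an STK or Riverbed Modeler client)."""
    def __init__(self, name, hub):
        self.name = name
        self.hub = hub
        self.inbox = []
        hub.register(name, self)

    def receive(self, message):
        self.inbox.append(message)

    def publish(self, recipient, report):
        self.hub.send(self.name, recipient, report)


# Hypothetical exchange: a link-budget report flows from the STK wrapper
# to the Riverbed wrapper via the hub.
hub = ControlModule()
stk = ClientWrapper("stk", hub)
riverbed = ClientWrapper("riverbed", hub)
stk.publish("riverbed", {"link_margin_db": 3.2})
print(riverbed.inbox[0]["report"])  # → {'link_margin_db': 3.2}
```

Centralizing the message flow in one hub is what lets the backend databases and the end user's scenario attributes feed every tool consistently.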
Ingargiola, Antonino; Laurence, Ted; Boutelle, Robert; Weiss, Shimon; Michalet, Xavier
2017-01-01
Archival of experimental data in public databases has increasingly become a requirement for most funding agencies and journals. These data-sharing policies have the potential to maximize data reuse and to enable confirmatory as well as novel studies. However, the lack of standard data formats can severely hinder data reuse. In photon-counting-based single-molecule fluorescence experiments, data is stored in a variety of vendor-specific or even setup-specific (custom) file formats, making data interchange prohibitively laborious unless the same hardware-software combination is used. Moreover, the number of available techniques and setup configurations makes it difficult to find a common standard. To address this problem, we developed Photon-HDF5 (www.photon-hdf5.org), an open data format for timestamp-based single-molecule fluorescence experiments. Building on the solid foundation of HDF5, Photon-HDF5 provides a platform- and language-independent, easy-to-use file format that is self-describing and supports rich metadata. Photon-HDF5 supports different types of measurements by separating raw data (e.g., photon timestamps and detector IDs) from measurement metadata. This approach allows representing several measurement types and setup configurations within the same core structure and makes it possible to extend the format in a backward-compatible way. Complementing the format specifications, we provide open source software to create and convert Photon-HDF5 files, together with code examples in multiple languages showing how to read Photon-HDF5 files. Photon-HDF5 allows sharing data in a format suitable for long-term archival, avoiding the effort to document custom binary formats and increasing interoperability with different analysis software. We encourage participation of the single-molecule community to extend interoperability and to help define future versions of Photon-HDF5. PMID:28649160
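The abstract's central design point, keeping raw per-photon data separate from measurement metadata, can be illustrated with a simplified sketch of the file hierarchy. Group and field names below are paraphrased from memory of the published specification and may not be exhaustive; consult www.photon-hdf5.org for the authoritative layout:

```text
/                          root group
├── description            free-text summary of the measurement
├── photon_data/           raw data, one entry per detected photon
│   ├── timestamps         integer photon arrival times
│   ├── detectors          detector ID for each timestamp
│   └── measurement_specs/ metadata identifying the measurement type
├── setup/                 optical setup description (excitation, detection)
├── identity/              authorship and file-creation metadata
└── provenance/            information about the original data file
```

Because the raw arrays in `photon_data/` have the same shape regardless of technique, one reader can handle many measurement types, and new metadata groups can be added without breaking existing readers, which is what the abstract means by backward-compatible extension.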
Low-carbon building assessment and multi-scale input-output analysis
NASA Astrophysics Data System (ADS)
Chen, G. Q.; Chen, H.; Chen, Z. M.; Zhang, Bo; Shao, L.; Guo, S.; Zhou, S. Y.; Jiang, M. M.
2011-01-01
This paper presents a low-carbon building evaluation framework: detailed carbon emission accounting procedures covering nine stages of the building life cycle (building construction, fitment, outdoor facility construction, transportation, operation, waste treatment, property management, demolition, and disposal), supported by integrated carbon intensity databases based on multi-scale input-output analysis. Such accounts are essential for low-carbon planning, procurement and supply chain design, and logistics management.
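The stage-by-stage account reduces to a simple sum: total life-cycle emissions are the activity quantity in each stage multiplied by that stage's carbon intensity, summed over the nine stages. A minimal sketch, in which the stage names follow the paper but the quantities and intensities are invented placeholders rather than data from the study:

```python
# The nine life-cycle stages named in the paper.
STAGES = [
    "construction", "fitment", "outdoor facilities", "transportation",
    "operation", "waste treatment", "property management",
    "demolition", "disposal",
]

def life_cycle_emissions(quantities, intensities):
    """Total emissions = sum over stages of activity quantity x carbon intensity.

    quantities:  stage -> activity amount (e.g. monetary or physical units)
    intensities: stage -> carbon intensity from the input-output databases
    """
    return sum(quantities[s] * intensities[s] for s in STAGES)

# Placeholder inputs: one unit of activity per stage, intensity 2.0 each.
quantities = {s: 1.0 for s in STAGES}
intensities = {s: 2.0 for s in STAGES}
print(life_cycle_emissions(quantities, intensities))  # → 18.0
```

In the framework itself, the `intensities` values would come from the multi-scale input-output databases rather than being supplied by hand.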
Analyzing critical material demand: A revised approach.
Nguyen, Ruby Thuy; Fishman, Tomer; Zhao, Fu; Imholte, D D; Graedel, T E
2018-07-15
Apparent consumption has been widely used as a metric to estimate material demand. However, with technology advancement and the complexity of material use, this metric has become less useful in tracking material flows, estimating recycling feedstocks, and conducting life cycle assessment of critical materials. We call for future research efforts to focus on building a multi-tiered consumption database for the global trade network of critical materials. This approach will help track how raw materials are processed into major components (e.g., motor assemblies) and eventually incorporated into complete pieces of equipment (e.g., wind turbines). Foreseeable challenges would involve: 1) difficulty in obtaining a comprehensive picture of trade partners due to business-sensitive information, 2) complexity of the materials going into the components of a machine, and 3) difficulty maintaining such a database. We propose ways to address these challenges, such as making use of digital design, learning from the experience of building similar databases, and developing a strategy for financial sustainability. We expect that, with the advancement of information technology, even small steps toward building such a database will contribute significantly to our understanding of material flows in society and the associated human impacts on the environment. Copyright © 2018 Elsevier B.V. All rights reserved.
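The multi-tiered structure the authors propose (raw materials feeding components, components feeding equipment) can be sketched as nested mappings. The neodymium/motor-assembly/wind-turbine figures below are invented placeholders for illustration, not values from the paper:

```python
# Tier 1: kg of each raw material embodied in one unit of each component.
material_per_component = {
    "motor assembly": {"neodymium": 4.0, "copper": 12.0},
}

# Tier 2: count of each component in one piece of finished equipment.
components_per_equipment = {
    "wind turbine": {"motor assembly": 1},
}

def material_in_equipment(equipment, material):
    """Trace a raw material through the component tier into equipment."""
    total = 0.0
    for component, count in components_per_equipment[equipment].items():
        total += count * material_per_component[component].get(material, 0.0)
    return total

print(material_in_equipment("wind turbine", "neodymium"))  # → 4.0
```

A real database would extend both tiers with trade-partner and vintage information, which is where the business-sensitivity and maintenance challenges the authors identify come in.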
ZINC: A Free Tool to Discover Chemistry for Biology
2012-01-01
ZINC is a free public resource for ligand discovery. The database contains over twenty million commercially available molecules in biologically relevant representations that may be downloaded in popular ready-to-dock formats and subsets. The Web site also enables searches by structure, biological activity, physical property, vendor, catalog number, name, and CAS number. Small custom subsets may be created, edited, shared, docked, downloaded, and conveyed to a vendor for purchase. The database is maintained and curated for a high purchasing success rate and is freely available at zinc.docking.org. PMID:22587354
Global GIS database; digital atlas of South Pacific
Hearn, P.P.; Hare, T.M.; Schruben, P.; Sherrill, D.; LaMar, C.; Tsushima, P.
2001-01-01
This CD-ROM contains a digital atlas of the countries of the South Pacific. This atlas is part of a global database compiled from USGS and other data sources at a nominal scale of 1:1 million and is intended to be used as a regional-scale reference and analytical tool by government officials, researchers, the private sector, and the general public. The atlas includes free GIS software or may be used with ESRI's ArcView software. Customized ArcView tools, specifically designed to make the atlas easier to use, are also included.
Global GIS database; digital atlas of Africa
Hearn, P.P.; Hare, T.M.; Schruben, P.; Sherrill, D.; LaMar, C.; Tsushima, P.
2001-01-01
This CD-ROM contains a digital atlas of the countries of Africa. This atlas is part of a global database compiled from USGS and other data sources at a nominal scale of 1:1 million and is intended to be used as a regional-scale reference and analytical tool by government officials, researchers, the private sector, and the general public. The atlas includes free GIS software or may be used with ESRI's ArcView software. Customized ArcView tools, specifically designed to make this atlas easier to use, are also included.
Global GIS database; digital atlas of South Asia
Hearn, P.P.; Hare, T.M.; Schruben, P.; Sherrill, D.; LaMar, C.; Tsushima, P.
2001-01-01
This CD-ROM contains a digital atlas of the countries of South Asia. This atlas is part of a global database compiled from USGS and other data sources at a nominal scale of 1:1 million and is intended to be used as a regional-scale reference and analytical tool by government officials, researchers, the private sector, and the general public. The atlas includes free GIS software or may be used with ESRI's ArcView software. Customized ArcView tools, specifically designed to make the atlas easier to use, are also included.