Virtual Queue in a Centralized Database Environment
NASA Astrophysics Data System (ADS)
Kar, Amitava; Pal, Dibyendu Kumar
2010-10-01
In the era of the Internet, almost anything, whether gathering knowledge, planning a holiday or booking tickets, can be obtained online. This paper calculates various queuing measures for bookings or purchases made over the Internet, subject to limits on the number of tickets or seats available. Such transactions involve many database activities, such as reads and writes. The paper treats the time at which a service is requested as the arrival, and the time taken to provide the required information as the service, and from these it estimates the arrival and service distributions and the various queuing measures. For simplicity the database is treated as centralized, since the alternative concept of a distributed database would considerably complicate the calculation.
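As a concrete illustration of the kind of queuing measures referred to above, the following minimal sketch computes standard M/M/1 quantities (utilization, queue lengths, waiting times) for a centralized database server; the arrival and service rates are assumed placeholder values, not figures estimated in the paper.

```python
# Minimal M/M/1 sketch: the paper estimates arrival/service distributions from
# database request logs; here lambda_ (arrivals/s) and mu (services/s) are
# assumed illustrative values, not figures from the paper.
lambda_ = 8.0   # mean request (arrival) rate
mu = 10.0       # mean service rate of the centralized database

rho = lambda_ / mu                 # server utilization
L   = rho / (1 - rho)              # mean number of requests in the system
Lq  = rho**2 / (1 - rho)           # mean number waiting in the queue
W   = 1 / (mu - lambda_)           # mean time in system (response time)
Wq  = rho / (mu - lambda_)         # mean waiting time before service

print(f"utilization={rho:.2f}, L={L:.2f}, Lq={Lq:.2f}, W={W:.3f}s, Wq={Wq:.3f}s")
```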
Negative Effects of Learning Spreadsheet Management on Learning Database Management
ERIC Educational Resources Information Center
Vágner, Anikó; Zsakó, László
2015-01-01
A lot of students learn spreadsheet management before database management. Their similarities can cause a lot of negative effects when learning database management. In this article, we consider these similarities and explain what can cause problems. First, we analyse the basic concepts such as table, database, row, cell, reference, etc. Then, we…
Statistical distribution of building lot frontage: application for Tokyo downtown districts
NASA Astrophysics Data System (ADS)
Usui, Hiroyuki
2018-03-01
The frontage of a building lot is a determinant factor of the residential environment. The statistical distribution of building lot frontages shows how the perimeters of urban blocks are shared by building lots for a given density of buildings and roads. For practitioners in urban planning, this is indispensable for identifying potential districts that comprise a high percentage of building lots with narrow frontage after subdivision, and for reconsidering appropriate criteria for the density of buildings and roads as residential environment indices. In the literature, however, the statistical distribution of building lot frontages and its relation to the density of buildings and roads has not been fully researched. In this paper, based on an empirical study of the downtown districts of Tokyo, it is found that (1) a log-normal distribution fits the observed distribution of building lot frontages better than a gamma distribution, which is the model of the size distribution of Poisson Voronoi cells on closed curves; (2) the distribution of building lot frontages follows a log-normal distribution whose parameters are the gross building density, road density, average road width, the coefficient of variation of building lot frontage, and the ratio of the number of building lot frontages to the number of buildings; and (3) the coefficient of variation of building lot frontages and the ratio of the number of building lot frontages to the number of buildings are approximately 0.60 and 1.19, respectively.
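The model comparison in finding (1) can be sketched as follows; this hedged example fits a log-normal and a gamma distribution to synthetic frontage data and compares them by log-likelihood, standing in for the paper's fit to surveyed Tokyo frontages.

```python
# Hedged sketch of the finding-(1) comparison: fit a log-normal and a gamma
# distribution to frontage data and compare log-likelihoods. The frontages
# below are synthetic placeholders for the surveyed Tokyo data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
frontages = rng.lognormal(mean=1.6, sigma=0.6, size=500)   # metres, synthetic

ln_params = stats.lognorm.fit(frontages, floc=0)            # fix location at 0
ga_params = stats.gamma.fit(frontages, floc=0)

ln_loglik = np.sum(stats.lognorm.logpdf(frontages, *ln_params))
ga_loglik = np.sum(stats.gamma.logpdf(frontages, *ga_params))
print(f"log-normal log-likelihood: {ln_loglik:.1f}")
print(f"gamma      log-likelihood: {ga_loglik:.1f}")        # higher is better
```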
A Test-Bed of Secure Mobile Cloud Computing for Military Applications
2016-09-13
searching databases. This kind of application is a typical example of mobile cloud computing (MCC). MCC has lots of applications in the military... [Final Report, 1 Aug 2014 - 31 Jul 2016: A Test-bed of Secure Mobile Cloud Computing for Military Applications. Army Research Office, Research Triangle Park, NC. Keywords: test-bed, mobile cloud computing, security, military applications. Approved for public release; distribution unlimited.]
Status, upgrades, and advances of RTS2: the open source astronomical observatory manager
NASA Astrophysics Data System (ADS)
Kubánek, Petr
2016-07-01
RTS2 is an open source observatory control system. In development since the early 2000s, it has continued to receive new features over the last two years. RTS2 is a modular, network-based distributed control system featuring telescope drivers with advanced tracking and pointing capabilities, fast camera drivers, and high-level modules for the "business logic" of the observatory, connected to an SQL database. Running on every continent, it has accumulated a long record of controlling parts of observatories or complete observatory setups.
21 CFR 610.2 - Requests for samples and protocols; official release.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Biologics Evaluation and Research, a manufacturer shall not distribute a lot of a product until the lot is... Evaluation and Research, a manufacturer shall not distribute a lot of a biological product until the lot is... 21 Food and Drugs 7 2010-04-01 2010-04-01 false Requests for samples and protocols; official...
21 CFR 610.2 - Requests for samples and protocols; official release.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Biologics Evaluation and Research, a manufacturer shall not distribute a lot of a product until the lot is... Evaluation and Research, a manufacturer shall not distribute a lot of a biological product until the lot is... 21 Food and Drugs 7 2011-04-01 2010-04-01 true Requests for samples and protocols; official...
ERIC Educational Resources Information Center
Moore, Pam
2010-01-01
The Internet and electronic commerce (e-commerce) generate lots of data. Data must be stored, organized, and managed. Database administrators, or DBAs, work with database software to find ways to do this. They identify user needs, set up computer databases, and test systems. They ensure that systems perform as they should and add people to the…
Applications of Ontologies in Knowledge Management Systems
NASA Astrophysics Data System (ADS)
Rehman, Zobia; Kifor, Claudiu V.
2014-12-01
Enterprises are realizing that their core asset in the 21st century is knowledge. In an organization, knowledge resides in databases, knowledge bases, filing cabinets and people's heads. Organizational knowledge is distributed in nature, and its poor management causes repetition of activities across the enterprise. To get true benefits from this asset, it is important for an organization to "know what it knows". That is why many organizations are investing a lot in managing their knowledge. Artificial intelligence techniques have made a huge contribution to organizational knowledge management. In this article we review the applications of ontologies in the knowledge management realm.
Design of a sampling plan to detect ochratoxin A in green coffee.
Vargas, E A; Whitaker, T B; Dos Santos, E A; Slate, A B; Lima, F B; Franca, R C A
2006-01-01
The establishment of maximum limits for ochratoxin A (OTA) in coffee by importing countries requires that coffee-producing countries develop scientifically based sampling plans to assess OTA contents in lots of green coffee before the coffee enters the market, thus reducing consumer exposure to OTA, minimizing the number of lots rejected, and reducing financial loss for producing countries. A study was carried out to design an official sampling plan to determine OTA in green coffee produced in Brazil. Twenty-five lots of green coffee (type 7 - approximately 160 defects) were sampled according to an experimental protocol in which 16 test samples were taken from each lot (total of 16 kg), resulting in a total of 800 OTA analyses. The total, sampling, sample preparation, and analytical variances were 10.75 (CV = 65.6%), 7.80 (CV = 55.8%), 2.84 (CV = 33.7%), and 0.11 (CV = 6.6%), respectively, assuming a regulatory limit of 5 microg kg(-1) OTA and using a 1 kg sample, Romer RAS mill, 25 g sub-samples, and high performance liquid chromatography. The observed OTA distribution among the 16 sample results per lot was compared to several theoretical distributions. The two-parameter log-normal distribution was selected to model OTA test results for green coffee as it gave the best fit across all 25 lot distributions. Specific computer software was developed using the variance and distribution information to predict the probability of accepting or rejecting coffee lots at specific OTA concentrations. The acceptance probability was used to compute an operating characteristic (OC) curve specific to a sampling plan design. The OC curve was used to predict the rejection of good lots (sellers' or exporters' risk) and the acceptance of bad lots (buyers' or importers' risk).
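A rough sketch of how such an OC curve can be computed from the reported variability: test results for a lot at a given true concentration are modeled as log-normal with a constant coefficient of variation taken from the total CV above. The acceptance rule and constant-CV assumption here are illustrative, not the study's certified software.

```python
# Hedged OC-curve sketch: one 1 kg test result for a lot at true concentration
# c is modelled as log-normal with a constant total CV (~65.6% at 5 ug/kg).
import numpy as np
from scipy import stats

cv = 0.656                      # total CV of a test result (sampling+prep+analysis)
limit = 5.0                     # acceptance limit, ug/kg (illustrative)

sigma2 = np.log(1.0 + cv**2)    # log-normal shape parameter from the CV
sigma = np.sqrt(sigma2)

for c in [1, 2, 3, 5, 8, 12, 20]:               # true lot concentration, ug/kg
    mu = np.log(c) - sigma2 / 2                 # so that E[test result] = c
    p_accept = stats.lognorm.cdf(limit, sigma, scale=np.exp(mu))
    print(f"true={c:>3} ug/kg  P(test result <= {limit}) = {p_accept:.3f}")
```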
Community Organizing for Database Trial Buy-In by Patrons
ERIC Educational Resources Information Center
Pionke, J. J.
2015-01-01
Database trials do not often garner a lot of feedback. Using community-organizing techniques can not only potentially increase the amount of feedback received but also deepen the relationship between the librarian and his or her constituent group. This is a case study of the use of community-organizing techniques in a series of database trials for…
Kiermeier, Andreas; Mellor, Glen; Barlow, Robert; Jenson, Ian
2011-04-01
The aims of this work were to determine the distribution and concentration of Escherichia coli O157 in lots of beef destined for grinding (manufacturing beef) that failed to meet Australian requirements for export, to use these data to better understand the performance of sampling plans based on the binomial distribution, and to consider alternative approaches for evaluating sampling plans. For each of five lots from which E. coli O157 had been detected, 900 samples from the external carcass surface were tested. E. coli O157 was not detected in three lots, whereas in two lots E. coli O157 was detected in 2 and 74 samples. For lots in which E. coli O157 was not detected in the present study, the E. coli O157 level was estimated to be <12 cells per 27.2-kg carton. For the most contaminated carton, the total number of E. coli O157 cells was estimated to be 813. In the two lots in which E. coli O157 was detected, the pathogen was detected in 1 of 12 and 2 of 12 cartons. The use of acceptance sampling plans based on a binomial distribution can provide a falsely optimistic view of the value of sampling as a control measure when applied to assessment of E. coli O157 contamination in manufacturing beef. Alternative approaches to understanding sampling plans, which do not assume homogeneous contamination throughout the lot, appear more realistic. These results indicate that despite the application of stringent sampling plans, sampling and testing approaches are inefficient for controlling microbiological quality.
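The point about falsely optimistic binomial plans can be sketched numerically: the example below compares the probability of accepting a lot under a homogeneous (binomial) contamination model with a clustered scenario in which only one carton in twelve is contaminated. The sample size and probabilities are illustrative assumptions, not the study's data.

```python
# Hedged sketch: acceptance on zero positives out of n samples.
n = 60          # samples tested per lot (illustrative)
p_homog = 0.02  # per-sample detection probability if contamination were uniform

p_accept_homog = (1 - p_homog) ** n   # P(zero positives) under the binomial model
print(f"binomial model: P(lot accepted) = {p_accept_homog:.3f}")

# Clustered case: only 1 of 12 cartons contaminated, detectable only if a
# sample comes from that carton (and then with probability p_within).
frac_contaminated = 1 / 12
p_within = 0.10
p_pos = frac_contaminated * p_within  # effective per-sample detection probability
p_accept_clustered = (1 - p_pos) ** n
print(f"clustered lot : P(lot accepted) = {p_accept_clustered:.3f}")
```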
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mulford, Roberta Nancy
Particle sizes determined for a single lot of incoming Russian fuel and for a lot of fuel after aqueous processing are compared with particle sizes measured on fuel after ball-milling. The single samples of each type are believed to have particle size distributions typical of oxide from similar lots, as the processing of fuel lots is fairly uniform. Variation between lots is, as yet, uncharacterized. Sampling and particle size measurement methods are discussed elsewhere.
Study on parallel and distributed management of RS data based on spatial database
NASA Astrophysics Data System (ADS)
Chen, Yingbiao; Qian, Qinglan; Wu, Hongqiao; Liu, Shijin
2009-10-01
With the rapid development of current earth-observing technology, the storage and management of RS image data and the publication of the derived information have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a single background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment, so a heavy burden is placed on the background server. Second, there is no unified, standard and rational organization of multi-sensor RS data for storage and management, and much information is lost or not recorded at storage time. To address these two problems, the paper puts forward a framework for a parallel and distributed RS image data management and storage system, based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. For storage, RS data are not divided into binary large objects stored in a conventional relational database system; instead, they are reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, this paper sets up a framework based on a parallel server built from several commodity computers; under this framework, the background process is divided into two parts, the common Web process and the parallel process.
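A hedged sketch of what such a "Pyramid, Block, Layer, Epoch" solid index might look like in code is given below; the key fields and file paths are illustrative assumptions, since the paper stores the index in a spatial database rather than in in-memory structures.

```python
# Illustrative tile key for a "Pyramid, Block, Layer, Epoch" index: each image
# tile is addressed by resolution level, spatial block, band and period.
from dataclasses import dataclass

@dataclass(frozen=True)
class TileKey:
    pyramid_level: int   # resolution level (0 = full resolution)
    block: tuple         # (row, col) of the spatial block
    layer: str           # band or sensor layer
    epoch: str           # acquisition period

index = {
    TileKey(0, (12, 7), "B4", "2009-06"): "/rs_store/p0/b12_7/B4_200906.img",
    TileKey(1, (6, 3), "B4", "2009-06"): "/rs_store/p1/b6_3/B4_200906.img",
}

query = TileKey(0, (12, 7), "B4", "2009-06")
print(index.get(query, "tile not in logical image database"))
```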
Tsuchiyama, Tomoyuki; Miyazaki, Hitoshi; Terada, Hisaya; Nakajima, Masahiro
2015-01-01
Shiitake mushrooms (Lentinula edodes) cultivated on bed-logs are known to accumulate radiocaesium. Since the Fukushima-Daiichi nuclear power plant accident (2011), the violation rate has been higher for log-cultivated shiitake than for agricultural products or other foodstuffs. When testing shiitake mushrooms for radionuclide contamination, the validity of the sampling plan can be severely compromised by the heterogeneous contamination within shiitake lots. Currently, few data are available on the statistical properties of the radiocaesium contamination of log-cultivated shiitake. In this paper, shiitake lots contaminated by radiocaesium were identified and the distribution of the radiocaesium concentration within the lots was investigated. The risk of misclassifying shiitake lots was predicted from the operating characteristic curve generated from Monte Carlo simulations, and the performance of various sampling plans was evaluated. This study provides useful information for deciding on an acceptable level of misclassification risk.
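A minimal Monte Carlo sketch of the misclassification-risk calculation described above: within-lot concentrations are drawn from an assumed log-normal distribution, composite samples are averaged and compared against the limit, and the acceptance probability is estimated. The limit, within-lot CV and sample size are assumptions for illustration, not the paper's fitted values.

```python
# Hedged Monte Carlo sketch of an OC curve for a composite-sample plan.
import numpy as np

rng = np.random.default_rng(1)
limit = 100.0          # Bq/kg, general-foodstuff limit (illustrative)
cv = 1.0               # assumed within-lot coefficient of variation
k = 5                  # items pooled into one composite sample
n_sim = 20000

sigma2 = np.log(1 + cv**2)

for true_mean in [50, 80, 100, 120, 200]:   # true lot mean, Bq/kg
    mu = np.log(true_mean) - sigma2 / 2     # so the lot mean equals true_mean
    samples = rng.lognormal(mu, np.sqrt(sigma2), size=(n_sim, k)).mean(axis=1)
    p_accept = np.mean(samples <= limit)
    print(f"lot mean {true_mean:>4} Bq/kg -> P(accept) = {p_accept:.3f}")
```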
Quantum private query based on single-photon interference
NASA Astrophysics Data System (ADS)
Xu, Sheng-Wei; Sun, Ying; Lin, Song
2016-08-01
Quantum private query (QPQ) has become a research hotspot recently. In particular, quantum key distribution (QKD)-based QPQ attracts a lot of attention because of its practicality. Various QPQ protocols of this kind have been proposed, based on different quantum communication technologies. Single-photon interference is one such technology, on which the well-known QKD protocol GV95 is based. In this paper, we propose two QPQ protocols based on single-photon interference. The first one is simpler and easier to realize; the second one is loss tolerant, flexible, and more practical than the first. Furthermore, we analyze both the user privacy and the database privacy of the proposed protocols.
Monitoring of services with non-relational databases and map-reduce framework
NASA Astrophysics Data System (ADS)
Babik, M.; Souto, F.
2012-12-01
Service Availability Monitoring (SAM) is a well-established monitoring framework that performs regular measurements of the core site services and reports the corresponding availability and reliability of the Worldwide LHC Computing Grid (WLCG) infrastructure. One of the existing extensions of SAM is Site Wide Area Testing (SWAT), which gathers monitoring information from the worker nodes via instrumented jobs. This generates quite a lot of monitoring data to process, as there are several data points for every job and several million jobs are executed every day. The recent uptake of non-relational databases opens a new paradigm in the large-scale storage and distributed processing of systems with heavy read-write workloads. For SAM this brings new possibilities to improve its model, from performing aggregation of measurements to storing raw data and subsequent re-processing. Both SAM and SWAT are currently tuned to run at top performance, reaching some of the limits in storage and processing power of their existing Oracle relational database. We investigated the usability and performance of non-relational storage together with its distributed data processing capabilities. For this, several popular systems have been compared. In this contribution we describe our investigation of the existing non-relational databases suited for monitoring systems covering Cassandra, HBase and MongoDB. Further, we present our experiences in data modeling and prototyping map-reduce algorithms focusing on the extension of the already existing availability and reliability computations. Finally, possible future directions in this area are discussed, analyzing the current deficiencies of the existing Grid monitoring systems and proposing solutions to leverage the benefits of the non-relational databases to get more scalable and flexible frameworks.
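The map-reduce style aggregation mentioned above can be sketched in a few lines: raw probe results are mapped to (service, status) pairs and reduced to an availability fraction. The record fields and statuses here are assumed for illustration and are not SAM/SWAT's actual schema.

```python
# Hedged map-reduce sketch: map raw test results to (service, ok-flag) pairs,
# then reduce each service's flags to an availability fraction.
from collections import defaultdict

raw_results = [
    {"service": "CE-siteA", "status": "OK"},
    {"service": "CE-siteA", "status": "CRITICAL"},
    {"service": "SRM-siteB", "status": "OK"},
    {"service": "CE-siteA", "status": "OK"},
]

def map_phase(record):
    # emit (key, value) pairs: 1 if the probe succeeded, 0 otherwise
    yield record["service"], 1 if record["status"] == "OK" else 0

def reduce_phase(key, values):
    values = list(values)
    return key, sum(values) / len(values)   # availability fraction

grouped = defaultdict(list)
for rec in raw_results:
    for key, value in map_phase(rec):
        grouped[key].append(value)

for key, values in grouped.items():
    print(reduce_phase(key, values))        # e.g. ('CE-siteA', 0.666...)
```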
Data sharing system for lithography APC
NASA Astrophysics Data System (ADS)
Kawamura, Eiichi; Teranishi, Yoshiharu; Shimabara, Masanori
2007-03-01
We have developed a simple and cost-effective data sharing system between fabs for lithography advanced process control (APC). Lithography APC requires process flow, inter-layer information, history information, mask information and so on, so an inter-APC data sharing system becomes necessary when lots are to be processed in multiple fabs (usually two fabs). The development cost and maintenance cost also have to be taken into account. The system handles the minimum information necessary to make trend predictions for the lots. Three types of data have to be shared for precise trend prediction. The first is device information for the lots, e.g., the process flow of the device and inter-layer information. The second is mask information from mask suppliers, e.g., pattern characteristics and pattern widths. The last is history data of the lots. Device information is an electronic file and easy to handle; the electronic file is common between APCs and uploaded into the database. For mask information sharing, mask information described in a common format is obtained from the mask vendor via a Wide Area Network (WAN) and stored in the mask-information data server. This information is periodically transferred to one specific lithography-APC server and compiled into the database, and that server periodically delivers the mask information to every other lithography-APC server. The process-history data sharing system mainly consists of a function for delivering process-history data: when production lots are shipped to another fab, the product-related process-history data are delivered by the lithography-APC server at the shipping site. We have confirmed the function and effectiveness of the data sharing systems.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-14
... annual average of lots released in FY 2010 (6,752), number of recalls made (1,881), and total number of... fill lot numbers for the total number of dosage units of each strength or potency distributed (e.g., 50... manufacture and distribution of a product including any recalls. These recordkeeping requirements serve...
Strategic Decision-Making Learning from Label Distributions: An Approach for Facial Age Estimation.
Zhao, Wei; Wang, Han
2016-06-28
Nowadays, label distribution learning is among the state-of-the-art methodologies in facial age estimation. It takes the age of each facial image instance as a label distribution with a series of age labels rather than the single chronological age label that is commonly used. However, this methodology is deficient in its simple decision-making criterion: the final predicted age is only selected at the one with maximum description degree. In many cases, different age labels may have very similar description degrees. Consequently, blindly deciding the estimated age by virtue of the highest description degree would miss or neglect other valuable age labels that may contribute a lot to the final predicted age. In this paper, we propose a strategic decision-making label distribution learning algorithm (SDM-LDL) with a series of strategies specialized for different types of age label distribution. Experimental results from the most popular aging face database, FG-NET, show the superiority and validity of all the proposed strategic decision-making learning algorithms over the existing label distribution learning and other single-label learning algorithms for facial age estimation. The inner properties of SDM-LDL are further explored with more advantages.
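The decision-making issue can be illustrated with a small sketch: given a predicted age label distribution, the conventional rule takes the maximum-degree label, whereas a strategy that uses the whole distribution (here, a simple expectation) also exploits neighbouring labels. The distribution values below are illustrative and not output of the SDM-LDL model.

```python
# Hedged sketch: max-degree decision vs. an expectation over the label
# distribution, to show how neighbouring age labels can contribute.
import numpy as np

ages = np.arange(20, 31)
degrees = np.array([0.02, 0.04, 0.08, 0.13, 0.16, 0.17,
                    0.16, 0.12, 0.07, 0.03, 0.02])
degrees = degrees / degrees.sum()                # normalize description degrees

argmax_age = ages[np.argmax(degrees)]            # single-label style decision
expected_age = float(np.dot(ages, degrees))      # uses the whole distribution

print(f"max-degree age = {argmax_age}, expectation-based age = {expected_age:.1f}")
```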
Strategic Decision-Making Learning from Label Distributions: An Approach for Facial Age Estimation
Zhao, Wei; Wang, Han
2016-01-01
Nowadays, label distribution learning is among the state-of-the-art methodologies in facial age estimation. It takes the age of each facial image instance as a label distribution with a series of age labels rather than the single chronological age label that is commonly used. However, this methodology is deficient in its simple decision-making criterion: the final predicted age is only selected at the one with maximum description degree. In many cases, different age labels may have very similar description degrees. Consequently, blindly deciding the estimated age by virtue of the highest description degree would miss or neglect other valuable age labels that may contribute a lot to the final predicted age. In this paper, we propose a strategic decision-making label distribution learning algorithm (SDM-LDL) with a series of strategies specialized for different types of age label distribution. Experimental results from the most popular aging face database, FG-NET, show the superiority and validity of all the proposed strategic decision-making learning algorithms over the existing label distribution learning and other single-label learning algorithms for facial age estimation. The inner properties of SDM-LDL are further explored with more advantages. PMID:27367691
Designing a two-rank acceptance sampling plan for quality inspection of geospatial data products
NASA Astrophysics Data System (ADS)
Tong, Xiaohua; Wang, Zhenhua; Xie, Huan; Liang, Dan; Jiang, Zuoqin; Li, Jinchao; Li, Jun
2011-10-01
To address the disadvantages of classical sampling plans designed for traditional industrial products, we propose a novel two-rank acceptance sampling plan (TRASP) for the inspection of geospatial data outputs based on the acceptance quality level (AQL). The first-rank sampling plan inspects the lot consisting of map sheets, and the second inspects the lot consisting of features in an individual map sheet. The TRASP design is formulated as an optimization problem with respect to sample size and acceptance number, which covers two lot size cases. The first case is for a small lot size, with nonconformities modeled by a hypergeometric distribution function, and the second is for a larger lot size, with nonconformities modeled by a Poisson distribution function. The proposed TRASP is illustrated through two empirical case studies. Our analysis demonstrates that: (1) the proposed TRASP provides a general approach for quality inspection of geospatial data outputs consisting of non-uniform items and (2) the proposed acceptance sampling plan based on TRASP performs better than other classical sampling plans. It overcomes the drawbacks of percent sampling, i.e., "strictness for large lot size, toleration for small lot size," and those of a national standard used specifically for industrial outputs, i.e., "lots with different sizes corresponding to the same sampling plan."
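The two lot-size cases can be sketched with standard distribution functions: a hypergeometric acceptance probability for the small-lot case and a Poisson approximation for the large-lot case. The lot sizes, sample sizes and acceptance numbers below are illustrative, not values from the case studies.

```python
# Hedged sketch of the two TRASP lot-size cases.
from scipy import stats

def p_accept_hypergeom(N, D, n, c):
    """P(at most c nonconformities in a sample of n from a lot of N with D bad)."""
    return stats.hypergeom.cdf(c, N, D, n)

def p_accept_poisson(n, p, c):
    """Large-lot case: number of nonconformities ~ Poisson(n*p)."""
    return stats.poisson.cdf(c, n * p)

# small lot: 50 map sheets, 5 nonconforming, inspect 10, accept if <= 1 bad
print(p_accept_hypergeom(N=50, D=5, n=10, c=1))
# large lot: inspect 200 features, 2% nonconforming rate, accept if <= 2 bad
print(p_accept_poisson(n=200, p=0.02, c=2))
```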
Information as Resources: A View toward the 21st Century - Let's Construct Databases by Ourselves -
NASA Astrophysics Data System (ADS)
Ohmi, Akira
A highly developed information-oriented society based on "Information Network Technology" will be realized in the 21st century. In enterprises, fundamental research will be regarded as increasingly important, and the effective use of information as a resource will be indispensable. From the viewpoint of the international distribution of information, Japan has been criticized for offering too little of its science and technology information overseas, although, for example, the steel industry has offered many English-language house-organ technical journals abroad, and several information firms have recently begun translating Japanese information into English and providing it overseas. However, some problems remain: (1) the information is not integrated, (2) there is no coordination among the firms, and (3) others. The author therefore proposes the communal use of a machine translation system and the construction of a database for overseas users that integrates the firms' work while preserving the individuality of each.
Practical quantum private query with better performance in resisting joint-measurement attack
NASA Astrophysics Data System (ADS)
Wei, Chun-Yan; Wang, Tian-Yin; Gao, Fei
2016-04-01
As a kind of practical protocol, quantum-key-distribution (QKD)-based quantum private queries (QPQs) have drawn lots of attention. However, joint-measurement (JM) attack poses a noticeable threat to the database security in such protocols. That is, by JM attack a malicious user can illegally elicit many more items from the database than the average amount an honest one can obtain. Taking Jacobi et al.'s protocol as an example, by JM attack a malicious user can obtain as many as 500 bits, instead of the expected 2.44 bits, from a 104-bit database in one query. It is a noticeable security flaw in theory, and would also arise in application with the development of quantum memories. To solve this problem, we propose a QPQ protocol based on a two-way QKD scheme, which behaves much better in resisting JM attack. Concretely, the user Alice cannot get more database items by conducting JM attack on the qubits because she has to send them back to Bob (the database holder) before knowing which of them should be jointly measured. Furthermore, JM attack by both Alice and Bob would be detected with certain probability, which is quite different from previous protocols. Moreover, our protocol retains the good characters of QKD-based QPQs, e.g., it is loss tolerant and robust against quantum memory attack.
USDA-ARS?s Scientific Manuscript database
The Dietary Supplement Ingredient Database (DSID) is a federal initiative to provide analytical validation of ingredients in dietary supplements. The first release on vitamins and minerals in adult MVMs is now available. Multiple lots of >100 representative adult MVMs were chemically analyzed for ...
Säde, Elina; Penttinen, Katri; Björkroth, Johanna; Hultman, Jenni
2017-04-01
Understanding the factors influencing meat bacterial communities is important, as these communities are largely responsible for meat spoilage. The composition and structure of a bacterial community on a high-O2 modified-atmosphere packaged beef product were examined after packaging, on the use-by date and two days after, to determine whether the communities at each stage were similar to those in samples taken from different production lots. Furthermore, we examined whether the taxa associated with product spoilage were distributed across production lots. Results from 16S rRNA amplicon sequencing showed that while the early samples harbored distinct bacterial communities, after 8-12 days of storage at 6 °C the communities were similar to those in samples from different lots, comprising mainly the common meat spoilage bacteria Carnobacterium spp., Brochothrix spp., Leuconostoc spp. and Lactococcus spp. Interestingly, abundant operational taxonomic units associated with product spoilage were shared between the production lots, suggesting that the bacteria able to spoil the product were constant contaminants in the production chain. A characteristic succession pattern and the distribution of common spoilage bacteria between lots suggest that both the packaging type and the initial community structure influenced the development of the spoilage bacterial community. Copyright © 2016 Elsevier Ltd. All rights reserved.
Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System
Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen
2015-01-01
The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with a pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, and building such a database requires a lot of time and effort. As the range of the indoor environment becomes larger, the labor increases. To provide better indoor positioning services while reducing the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, an advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experimental results, with a 72.2% probability the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared with that without Kriging. PMID:26343673
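A hedged sketch of the interpolation step: reference-point RSS values are interpolated to unsurveyed points with Gaussian-process regression, a close relative of the Kriging used in the paper. The access-point location, reference-point coordinates and RSS values below are synthetic assumptions.

```python
# Hedged sketch: densify an RSS fingerprint database by spatial interpolation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
rp_xy = rng.uniform(0, 20, size=(40, 2))                  # surveyed RP positions (m)
ap_xy = np.array([5.0, 10.0])                             # assumed AP location
dist = np.linalg.norm(rp_xy - ap_xy, axis=1)
rss = -40 - 20 * np.log10(np.maximum(dist, 1.0)) + rng.normal(0, 2, 40)  # dBm

gp = GaussianProcessRegressor(kernel=RBF(length_scale=5.0) + WhiteKernel(1.0),
                              normalize_y=True)
gp.fit(rp_xy, rss)

query = np.array([[10.0, 12.0], [18.0, 3.0]])             # unsurveyed grid points
pred, std = gp.predict(query, return_std=True)
for q, m, s in zip(query, pred, std):
    print(f"point {q}: predicted RSS = {m:.1f} dBm (std {s:.1f})")
```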
Received Signal Strength Database Interpolation by Kriging for a Wi-Fi Indoor Positioning System.
Jan, Shau-Shiun; Yeh, Shuo-Ju; Liu, Ya-Wen
2015-08-28
The main approach for a Wi-Fi indoor positioning system is based on received signal strength (RSS) measurements, and the fingerprinting method is utilized to determine the user position by matching the RSS values with a pre-surveyed RSS database. Building an RSS fingerprint database is essential for an RSS-based indoor positioning system, and building such a database requires a lot of time and effort. As the range of the indoor environment becomes larger, the labor increases. To provide better indoor positioning services while reducing the labor required to establish the positioning system, an indoor positioning system with an appropriate spatial interpolation method is needed. In addition, an advantage of the RSS approach is that the signal strength decays as the transmission distance increases, and this signal propagation characteristic is applied to an interpolated database with the Kriging algorithm in this paper. Using the distribution of reference points (RPs) at measured points, the signal propagation model of the Wi-Fi access point (AP) in the building can be built and expressed as a function. The function, as the spatial structure of the environment, can create the RSS database quickly in different indoor environments. Thus, in this paper, a Wi-Fi indoor positioning system based on the Kriging fingerprinting method is developed. As shown in the experimental results, with a 72.2% probability the error of the extended RSS database with Kriging is less than 3 dBm compared to the surveyed RSS database. Importantly, the positioning error of the developed Wi-Fi indoor positioning system with Kriging is reduced by 17.9% on average compared with that without Kriging.
Interactive Database of Pulsar Flux Density Measurements
NASA Astrophysics Data System (ADS)
Koralewska, O.; Krzeszowski, K.; Kijak, J.; Lewandowski, W.
2012-12-01
The number of astronomical observations is steadily growing, giving rise to the need to catalogue the obtained results. There are many databases created to store different types of data and serve a variety of purposes, e.g. databases providing basic data for astronomical objects (SIMBAD Astronomical Database), databases devoted to one type of astronomical object (ATNF Pulsar Database), or to a set of values of a specific parameter (Lorimer 1995 - a database of flux density measurements for 280 pulsars at frequencies up to 1606 MHz), etc. We found that creating an online database of pulsar flux measurements, provided with facilities for plotting diagrams and histograms, calculating mean values for a chosen set of data, filtering parameter values and adding new measurements by registered users, could be useful in further studies of pulsar spectra.
Plant conservation priorities of Xinjiang region, China
NASA Astrophysics Data System (ADS)
Li, L. P.; Cui, W. H.; Wang, T.; Tian, S.; Xing, W. J.; Yin, L. K.; Abdusalih, N.; Jiang, Y. M.
2017-02-01
As an important region on the Silk Road, Xinjiang has a good opportunity to develop its economy. At the same time, however, its natural environment is facing a big challenge. To better protect plant diversity, it is urgent to make a thorough conservation plan. With a full database of vascular and medicinal plant distributions and nature reserve plant lists and boundaries in Xinjiang, China, we analysed the plant diversity hotspots and protection gaps and proposed plant conservation priorities for this region. Differing from the widely accepted view that many plants are not included in nature reserves, we found that most of the plants ( > 90%) are actually included in the current nature reserves. We believe that, compared with establishing more nature reserves, improving the management of the existing ones is also important. Furthermore, the few unprotected plants ( < 10%) are distributed mostly in the regions of Aletai, Tacheng, Zhaosu, Manasi, Qitai and Hetian, which could be the future conservation priorities.
Secure Database Management Study.
1978-12-01
covers cases involving industrial economics (e.g., industrial spies) and commercial finances (e.g., fraud). Privacy--Protection of data about people...California, Berkeley [STONM76a]. * The approach to protection taken in INGRES [STOM74] has attracted a lot of interest. Queries, in a high level query...Material Command Support Activity (NMCSA), and another DoD agency, Cullinane Corporation developed a prototype version of the IDS database system on a
NASA Astrophysics Data System (ADS)
Erberich, Stephan G.; Hoppe, Martin; Jansen, Christian; Schmidt, Thomas; Thron, Armin; Oberschelp, Walter
2001-08-01
In the last few years more and more university hospitals as well as private hospitals have changed to digital information systems for patient records, diagnostic files and digital images. Not only does patient management become easier; it is also remarkable how much clinical research can profit from Picture Archiving and Communication Systems (PACS) and diagnostic databases, especially from image databases. Although images are available at one's fingertips, difficulties arise when image data need to be processed, e.g. segmented, classified or co-registered, which usually demands a lot of computational power. Today's clinical environment supports PACS very well, but real image processing is still under-developed. The purpose of this paper is to introduce a parallel cluster of standard distributed systems and its software components, and to show how such a system can be integrated into a hospital environment. To demonstrate the cluster technique we present our clinical experience with the crucial but computationally intensive motion correction of clinical routine and research functional MRI (fMRI) data, as it is processed in our lab on a daily basis.
Developing a Multi-Dimensional Hydrodynamics Code with Astrochemical Reactions
NASA Astrophysics Data System (ADS)
Kwak, Kyujin; Yang, Seungwon
2015-08-01
The Atacama Large Millimeter/submillimeter Array (ALMA) has revealed high-resolution molecular lines, some of which are still unidentified. Because the formation of these astrochemical molecules has seldom been studied in traditional chemistry, observations of new molecular lines have drawn a lot of attention not only from astronomers but also from experimental and theoretical chemists. Theoretical calculations of the formation of these astrochemical molecules have been carried out, providing reaction rates for some important molecules, and some of the theoretical predictions have been measured in laboratories. The reaction rates for the astronomically important molecules are now collected into databases, some of which are publicly available. By utilizing these databases, we develop a multi-dimensional hydrodynamics code that includes the reaction rates of astrochemical molecules. Because this type of hydrodynamics code is able to trace molecular formation in a non-equilibrium fashion, it is useful for studying the formation history of these molecules, which affects their spatial distribution. We present the development procedure of this code and some test problems in order to verify and validate the developed code.
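A toy sketch of the chemistry coupling described above: a tiny two-species network is advanced with its rate equations, the kind of update a hydrodynamics step would call per cell. The reactions and rate coefficients are schematic placeholders, not values taken from the astrochemical databases mentioned in the abstract.

```python
# Toy chemical network update, heavily simplified: H + H -> H2 (rate k_form)
# and first-order H2 destruction (rate k_dest). Both coefficients are assumed
# placeholders; a real run would pull rates from the public databases cited.
from scipy.integrate import solve_ivp

k_form = 1.0e-17   # cm^3 s^-1, assumed two-body formation coefficient
k_dest = 1.0e-11   # s^-1, assumed first-order destruction rate

def rates(t, n):
    n_H, n_H2 = n
    form = k_form * n_H * n_H      # H2 formation rate per unit volume
    dest = k_dest * n_H2           # H2 destruction rate per unit volume
    return [-2.0 * form + 2.0 * dest, form - dest]

n0 = [1.0e4, 0.0]                  # initial densities in cm^-3
sol = solve_ivp(rates, (0.0, 1.0e13), n0, method="LSODA", rtol=1e-8)
print("final n_H, n_H2 (cm^-3):", sol.y[0, -1], sol.y[1, -1])
```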
LX-17-1 Stockpile Returned Material Lot Comparison
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gagliardi, F.; Pease, S.; Willey, T.
2015-02-18
Many different lots of LX-17 have been produced over the years. Two varieties of LX-17, LX-17-0 and LX-17-1, have at one point or another been a part of the Livermore stockpile systems. LX-17-0 was made with dry-aminated TATB whereas LX-17-1 was made with wet-aminated TATB. Both versions have the same TATB to Kel-F 800 mass ratio of 92.5%/7.5%. Both kinds of LX-17 were formulated at Holston during the late 1970s or early to mid-1980s and were certified to have met the necessary specifications that cover the purity, particle size range, explosive to binder ratio, etc. In recent years, Trevor Willey and others have performed a detailed evaluation of solid parts made from each of the LX-17 lots manufactured at Holston. Using the Advanced Light Source at LBNL, Willey and his colleagues radiographed many samples from isostatic pressings using the same scanning conditions. In their investigation they identified that even though the bulk composition can be the same, there may exist a large spread in how smoothly the TATB and binder were distributed within the radiographed volume of different lots of material.1 Overall, the dry-aminated TATB-based material, LX-17-0, had a smooth TATB and binder distribution, whereas the wet-aminated TATB-based LX-17-1 showed a wide range of binder distributions. The results for five different LX-17-1 lots are shown in Figure 1. The wide variation in material distribution has raised the question of whether or not this sort of variability will cause significant differences in mechanical behavior.
The Anthrax Vaccine Debate: A Medical Review for Commanders
2001-04-01
tested, certified, and released the new lots for distribution.65 BioPort has a total of 32 lots of Anthrax Vaccine, Adsorbed in storage for ...cit., 1744. 159. New anthrax vaccines have been developed and are ready for clinical testing. But so far, lack of funding has prevented the ...anthrax vaccine. The FDA has not yet certified the new facilities and has not released new lots for sale. DoD has not used any of these
Broiler Campylobacter Contamination and Human Campylobacteriosis in Iceland ▿ †
Callicott, Kenneth A.; Harðardóttir, Hjördís; Georgsson, Franklín; Reiersen, Jarle; Friðriksdóttir, Vala; Gunnarsson, Eggert; Michel, Pascal; Bisaillon, Jean-Robert; Kristinsson, Karl G.; Briem, Haraldur; Hiett, Kelli L.; Needleman, David S.; Stern, Norman J.
2008-01-01
To examine whether there is a relationship between the degree of Campylobacter contamination observed in product lots of retail Icelandic broiler chicken carcasses and the incidence of human disease, 1,617 isolates from 327 individual product lots were genetically matched (using the flaA short variable region [SVR]) to 289 isolates from cases of human campylobacteriosis whose onset was within approximately 2 weeks of the date of processing. When there was genetic identity between broiler isolates and human isolates within the appropriate time frame, a retail product lot was classified as implicated in human disease. According to the results of this analysis, there were multiple clusters of human disease linked to the same process lot or lots. Implicated and nonimplicated retail product lots were compared for four lot descriptors: lot size, prevalence, mean contamination, and maximum contamination (as characterized by direct rinse plating). For retail product distributed fresh, Mann-Whitney U tests showed that implicated product lots had significantly (P = 0.0055) higher mean contamination than nonimplicated lots. The corresponding median values were 3.56 log CFU/carcass for implicated lots and 2.72 log CFU/carcass for nonimplicated lots. For frozen retail product, implicated lots were significantly (P = 0.0281) larger than nonimplicated lots. When the time frame was removed, retail product lots containing Campylobacter flaA SVR genotypes also seen in human disease had significantly higher mean and maximum contamination numbers than lots containing no genotypes seen in human disease, for both fresh and frozen product. Our results suggest that cases of broiler-borne campylobacteriosis may occur in clusters and that the differences in mean contamination levels may provide a basis for regulatory action that is more specific than a presence-absence standard. PMID:18791017
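The reported comparison can be sketched as follows; the Mann-Whitney U test is applied to synthetic implicated and nonimplicated contamination samples whose medians are set near the reported 3.56 and 2.72 log CFU/carcass, purely for illustration.

```python
# Hedged sketch of the fresh-product comparison with a Mann-Whitney U test.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)
implicated = rng.normal(3.56, 0.6, size=30)       # log CFU/carcass, synthetic
non_implicated = rng.normal(2.72, 0.6, size=120)  # log CFU/carcass, synthetic

stat, p_value = mannwhitneyu(implicated, non_implicated, alternative="greater")
print(f"U = {stat:.0f}, one-sided p = {p_value:.4f}")
```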
9 CFR 381.191 - Distribution of inspected products to small lot buyers.
Code of Federal Regulations, 2010 CFR
2010-01-01
... small lot buyers (such as small restaurants), distributors or jobbers may remove inspected and passed... not bear an official inspection mark: Provided, That the individual non-consumer-packaged carcasses bear the official inspection legend and the official establishment number of the establishment that...
Rousselet, Jérôme; Imbert, Charles-Edouard; Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre
2013-01-01
Mapping species spatial distribution using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available in Google Street View. We designed a standardized procedure for evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors such as the coverage of the Google Street View network with regards to sampling grid size and the spatial distribution of host trees with regards to the road network may also be determinant.
Study on parallel and distributed management of RS data based on spatial data base
NASA Astrophysics Data System (ADS)
Chen, Yingbiao; Qian, Qinglan; Liu, Shijin
2006-12-01
With the rapid development of current earth-observing technology, the storage and management of RS image data and the publication of the derived information have become a bottleneck for its application and popularization. There are two prominent problems in RS image data storage and management systems. First, a single background server can hardly handle the heavy processing of the great volume of RS data stored at different nodes in a distributed environment, so a heavy burden is placed on the background server. Second, there is no unified, standard and rational organization of multi-sensor RS data for storage and management, and much information is lost or not recorded at storage time. To address these two problems, the paper puts forward a framework for a parallel and distributed RS image data management and storage system, based on a parallel background server and a distributed data management system. Toward these two goals, this paper studies the following key techniques and draws some instructive conclusions. The paper puts forward a solid index of "Pyramid, Block, Layer, Epoch" according to the properties of RS image data. With this solid index mechanism, a rational organization of multi-sensor RS image data of different resolutions, areas, bands and periods is achieved. For storage, RS data are not divided into binary large objects stored in a conventional relational database system; instead, they are reconstructed through the above solid index mechanism, and a logical image database for the RS image data files is constructed. In system architecture, this paper sets up a framework based on a parallel server built from several commodity computers; under this framework, the background process is divided into two parts, the common Web process and the parallel process.
Dekri, Anissa; Garcia, Jacques; Goussard, Francis; Vincent, Bruno; Denux, Olivier; Robinet, Christelle; Dorkeld, Franck; Roques, Alain; Rossi, Jean-Pierre
2013-01-01
Mapping species spatial distribution using spatial inference and prediction requires a lot of data. Occurrence data are generally not easily available from the literature and are very time-consuming to collect in the field. For that reason, we designed a survey to explore to what extent large-scale databases such as Google Maps and Google Street View could be used to derive valid occurrence data. We worked with the Pine Processionary Moth (PPM) Thaumetopoea pityocampa because the larvae of that moth build silk nests that are easily visible. The presence of the species at one location can therefore be inferred from visual records derived from the panoramic views available in Google Street View. We designed a standardized procedure for evaluating the presence of the PPM on a sampling grid covering the landscape under study. The outputs were compared to field data. We investigated two landscapes using grids of different extent and mesh size. Data derived from Google Street View were highly similar to field data in the large-scale analysis based on a square grid with a mesh of 16 km (96% of matching records). Using a 2 km mesh size led to a strong divergence between field and Google-derived data (46% of matching records). We conclude that the Google database might provide useful occurrence data for mapping the distribution of species whose presence can be visually evaluated, such as the PPM. However, the accuracy of the output strongly depends on the spatial scales considered and on the sampling grid used. Other factors such as the coverage of the Google Street View network with regards to sampling grid size and the spatial distribution of host trees with regards to the road network may also be determinant. PMID:24130675
Comparison of the efficiency between two sampling plans for aflatoxins analysis in maize
Mallmann, Adriano Olnei; Marchioro, Alexandro; Oliveira, Maurício Schneider; Rauber, Ricardo Hummes; Dilkin, Paulo; Mallmann, Carlos Augusto
2014-01-01
The variance and performance of two sampling plans for aflatoxin quantification in maize were evaluated. Eight lots of maize were sampled using two plans: manual, using a sampling spear for kernels; and automatic, using a continuous flow to collect milled maize. The total variance and the sampling, preparation, and analysis variances were determined and compared between plans through multifactor analysis of variance. Four theoretical distribution models were used to compare the aflatoxin quantification distributions in the eight maize lots. The acceptance and rejection probabilities for a lot at a given aflatoxin concentration were determined using the variance and the information on the selected distribution model to build the operating characteristic (OC) curves. Sampling and total variance were lower for the automatic plan. The OC curve from the automatic plan reduced both consumer and producer risks in comparison to the manual plan. The automatic plan is more efficient than the manual one because it expresses the real aflatoxin contamination in maize more accurately. PMID:24948911
Polarization in the land distribution, land use and land cover change in the Amazon
D'ANTONA, Alvaro; VANWEY, Leah; LUDEWIGS, Thomas
2013-01-01
The objective of this article is to present polarization of the agrarian structure as a single, more complete representation of the dynamics of changing land distribution and land use/cover, and thus of the rural milieu of Amazonia, than models emphasizing rural exodus and the consolidation of land into large agropastoral enterprises. Data were collected in 2003 using social surveys on a sample of 587 lots randomly selected from among 5,086 lots on a cadastral map produced in the 1970s. Georeferencing of current property boundaries in the location of these previously demarcated lots allows us to relate sociodemographic and biophysical variables of the surveyed properties to the changes in boundaries that have occurred since the 1970s. As have other authors in other Amazonian regions, we found concentration of land ownership into larger properties. The approach we took, however, showed that changes in the distribution of land ownership are not limited to the appearance of larger properties, those with 200 ha or more; there also exists substantial division of earlier lots into properties with fewer than five hectares, many without any agropastoral use. These two trends are juxtaposed against the decline in establishments with between five and 200 ha. The variation across groups in land use/land cover and population distribution shows the necessity of developing conceptual models that, whether from socioeconomic, demographic or environmental perspectives, look beyond a single group of people or properties. PMID:24639597
The logical primitives of thought: Empirical foundations for compositional cognitive models.
Piantadosi, Steven T; Tenenbaum, Joshua B; Goodman, Noah D
2016-07-01
The notion of a compositional language of thought (LOT) has been central in computational accounts of cognition from earliest attempts (Boole, 1854; Fodor, 1975) to the present day (Feldman, 2000; Penn, Holyoak, & Povinelli, 2008; Fodor, 2008; Kemp, 2012; Goodman, Tenenbaum, & Gerstenberg, 2015). Recent modeling work shows how statistical inferences over compositionally structured hypothesis spaces might explain learning and development across a variety of domains. However, the primitive components of such representations are typically assumed a priori by modelers and theoreticians rather than determined empirically. We show how different sets of LOT primitives, embedded in a psychologically realistic approximate Bayesian inference framework, systematically predict distinct learning curves in rule-based concept learning experiments. We use this feature of LOT models to design a set of large-scale concept learning experiments that can determine the most likely primitives for psychological concepts involving Boolean connectives and quantification. Subjects' inferences are most consistent with a rich (nonminimal) set of Boolean operations, including first-order, but not second-order, quantification. Our results more generally show how specific LOT theories can be distinguished empirically. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Piascik, Robert S.
2011-01-01
Several cracks were detected in stringers located beneath the foam on the External Tank (ET) following the launch scrub of Space Transportation System (STS)-133 on November 5, 2010. The stringer material was aluminum-lithium (Al-Li) 2090-T83 fabricated from sheets that were nominally 0.064 inches thick. The mechanical properties of the stringer material were known to vary between different material lots, with the stringers from ET-137 (predominantly lots 620853 and 620854) having the highest yield and ultimate stresses. Subsequent testing determined that these same lots also had the lowest fracture toughness properties. The NASA Engineering and Safety Center (NESC) supported the Space Shuttle Program (SSP)-led investigation. The objective of this investigation was to develop a database of test results to provide validation for structural analysis models, independently confirm test results obtained from other investigators, and determine the proximate cause of the anomalously low fracture toughness observed in stringer lots 620853 and 620854. This document contains the outcome of the investigation.
NASA Astrophysics Data System (ADS)
Setiawan, E. P.; Rosadi, D.
2017-01-01
Portfolio selection problems conventionally mean "minimizing the risk, given a certain level of return" from some financial assets. This problem is frequently solved with quadratic or linear programming methods, depending on the risk measure used in the objective function. However, the solutions obtained by these methods are real numbers, which may cause problems in real applications because each asset usually has a minimum transaction lot. Classical approaches considering minimum transaction lots were developed based on the linear mean absolute deviation (MAD), variance (as in Markowitz's model), and semi-variance as risk measures. In this paper we investigate portfolio selection with minimum transaction lots using conditional value at risk (CVaR) as the risk measure. The mean-CVaR methodology involves only the part of the tail of the distribution that contributes to high losses. This approach works better with non-symmetric return probability distributions. Solutions of this model can be found with genetic algorithm (GA) methods. We provide real examples using stocks from the Indonesian stock market.
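A hedged sketch of the lot-constrained mean-CVaR idea: portfolios are integer numbers of lots, CVaR is estimated from scenario returns, and a crude random search stands in for the paper's genetic algorithm. Prices, lot sizes, the budget and the return scenarios are synthetic assumptions.

```python
# Hedged sketch: scenario-based CVaR for integer-lot portfolios, with random
# search standing in for a genetic algorithm.
import numpy as np

rng = np.random.default_rng(4)
n_assets, n_scen = 4, 1000
scen_returns = rng.normal(0.001, 0.02, size=(n_scen, n_assets))   # scenario returns
price = np.array([50.0, 20.0, 10.0, 5.0])                          # price per share
lot_size = np.array([100, 100, 500, 500])                          # min transaction lots
budget = 100_000.0
alpha = 0.95

def cvar(losses, alpha):
    var = np.quantile(losses, alpha)
    return losses[losses >= var].mean()    # average loss beyond the VaR level

def evaluate(lots):
    cost = np.sum(lots * lot_size * price)
    if cost > budget or cost == 0:
        return np.inf                      # infeasible or empty portfolio
    weights = (lots * lot_size * price) / cost
    losses = -scen_returns @ weights       # portfolio loss per scenario
    return cvar(losses, alpha)

best_lots, best_cvar = None, np.inf
for _ in range(5000):                      # random search in place of a GA
    lots = rng.integers(0, 6, size=n_assets)
    c = evaluate(lots)
    if c < best_cvar:
        best_lots, best_cvar = lots, c

print("best lots per asset:", best_lots, "estimated CVaR:", round(best_cvar, 4))
```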
75 FR 70128 - 2011 Changes for Domestic Mailing Services
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-17
...LOT, RDI, and Five-Digit ZIP. The Postal Service certifies software meeting its standards until the... Delivery Point Validation (DPV) service in conjunction with CASS-Certified address matching software... interface between address-matching software and the LACSLink database service. 1.21.2 Interface...
Variability of Acoustic Transmissions in a Shallow Water Area,
1981-05-01
as changes in the probability density and distribution functions, and the test is sensitive to these...transducer on the bottom, i.e. no delay variations, and another with a lot of movements. The left parts of the figure show projections of spreading...change a lot from one ping group (matrix) to the next (see Figs. 9c and 10c). Comparing the runs tests (Figs. 9a and 10a) we see that there is a lot
Begg, Graham S; Elliott, Martin J; Cullen, Danny W; Iannetta, Pietro P M; Squire, Geoff R
2008-10-01
The implementation of co-existence in the commercialisation of GM crops requires GM and non-GM products to be segregated in production and supply. However, maintaining segregation in oilseed rape will be made difficult by the highly persistent nature of this species. An understanding of its population dynamics is needed to predict persistence and develop potential strategies for control, while to ensure segregation is being achieved, the production of GM oilseed rape must be accompanied by the monitoring of GM levels in crop or seed populations. Heterogeneity in the spatial distribution of oilseed rape has the potential to affect both control and monitoring and, although a universal phenomenon in arable weeds and harvested seed lots, spatial heterogeneity in oilseed rape populations remains to be demonstrated and quantified. Here we investigate the distribution of crop and volunteer populations in a commercial field before and during the cultivation of the first conventional oilseed rape (winter) crop since the cultivation of a GM glufosinate-tolerant oilseed rape crop (spring) three years previously. GM presence was detected by ELISA for the PAT protein in each of three morphologically distinguishable phenotypes: autumn germinating crop-type plants (3% GM), autumn-germinating 'regrowths' (72% GM) and spring germinating 'small-type' plants (17% GM). Statistical models (Poisson log-normal and binomial logit-normal) were used to describe the spatial distribution of these populations at multiple spatial scales in the field and of GM presence in the harvested seed lot. Heterogeneity was a consistent feature in the distribution of GM and conventional oilseed rape. Large trends across the field (50 x 400 m) and seed lot (4 x 1.5 x 1.5 m) were observed in addition to small-scale heterogeneity, less than 20 m in the field and 20 cm in the seed lot. The heterogeneity was greater for the 'regrowth' and 'small' phenotypes, which were likely to be volunteers and included most of the GM plants detected, than for the largely non-GM 'crop' phenotype. The implications of the volunteer heterogeneity for field management and GM-sampling are discussed.
NASA Astrophysics Data System (ADS)
Sinaga, A. T.; Wangsaputra, R.
2018-03-01
The development of technology causes the needs for products and services to become increasingly complex, diverse, and fluctuating. This increases the level of inter-company dependency within production chains. To be able to compete, efficiency improvements need to be made collaboratively across the production chain network. One of these efforts is to harmonize production and distribution activities in the network. This paper describes the harmonization of production and distribution activities by applying a push-pull system and a supply hub in the production chain between two companies. The research methodology begins with empirical and literature studies, followed by formulating research questions, developing mathematical models, conducting trials and analyses, and drawing conclusions. The relationship between the two companies is described in an MINLP mathematical model with the total cost of the production chain as the objective function. The decisions generated by the model are the size of the production lot, the size of the delivery lot, the number of kanban, the frequency of delivery, and the number of understock and overstock lots.
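The paper's MINLP model is not reproduced here; the sketch below assumes a deliberately simplified cost structure (one setup cost, a holding cost at the supply hub, and a per-delivery cost, all hypothetical) to show how the delivery lot size trades off the cost components that the model's decision variables control.

```python
import math

def total_cost(delivery_lot, demand, setup_cost, holding_cost, delivery_cost):
    """Simplified production-chain cost for one planning period (illustrative only)."""
    deliveries = math.ceil(demand / delivery_lot)   # frequency of delivery
    avg_inventory = delivery_lot / 2                # average stock held at the supply hub
    return (setup_cost
            + deliveries * delivery_cost
            + avg_inventory * holding_cost)

demand = 1200
costs = {q: total_cost(q, demand, setup_cost=500, holding_cost=2.0, delivery_cost=80)
         for q in range(50, 651, 50)}
best = min(costs, key=costs.get)
print(best, round(costs[best], 1))   # lot size with the lowest total cost in this sketch
```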
Universal scaling of the distribution of land in urban areas
NASA Astrophysics Data System (ADS)
Riascos, A. P.
2017-09-01
In this work, we explore the spatial structure of built zones and green areas in diverse western cities by analyzing the probability distribution of areas and a coefficient that characterizes their respective shapes. From the analysis of diverse datasets describing land lots in urban areas, we found that the distributions of built-up areas and natural zones in cities obey inverse power laws with a similar scaling across the cities explored. On the other hand, by studying the distribution of lot shapes in urban regions, we are able to detect global differences in the spatial structure of the distribution of land. Our findings introduce information about spatial patterns that emerge in the structure of urban settlements; this knowledge is useful for understanding urban growth, improving existing models of cities, and in the contexts of sustainability and studies of human mobility in urban areas, among other applications.
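A standard way to estimate such a power-law exponent from lot-area data is the continuous maximum-likelihood estimator; the following minimal sketch uses synthetic Pareto-distributed areas and an illustrative lower cut-off, not the datasets analysed in the paper.

```python
import numpy as np

def power_law_alpha(areas, x_min):
    """Maximum-likelihood exponent for P(A) ~ A^(-alpha), A >= x_min."""
    x = np.asarray(areas, dtype=float)
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# synthetic lot areas drawn from a Pareto distribution with density exponent 2.5
rng = np.random.default_rng(2)
areas = 100.0 * (1.0 + rng.pareto(1.5, size=10_000))   # shape 1.5 -> alpha = 2.5
print(round(power_law_alpha(areas, x_min=100.0), 2))
```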
[Integrated DNA barcoding database for identifying Chinese animal medicine].
Shi, Lin-Chun; Yao, Hui; Xie, Li-Fang; Zhu, Ying-Jie; Song, Jing-Yuan; Zhang, Hui; Chen, Shi-Lin
2014-06-01
In order to construct an integrated DNA barcoding database for identifying Chinese animal medicines, the authors and their collaborators have carried out extensive research on identifying Chinese animal medicines using DNA barcoding technology. Sequences from GenBank have been analyzed simultaneously. Three different methods, BLAST, barcoding gap, and tree building, have been used to confirm the reliability of the barcode records in the database. The integrated DNA barcoding database for identifying Chinese animal medicine has been constructed from three parts: specimen, sequence, and literature information. The database contains about 800 animal medicines together with their adulterants and closely related species. Unknown specimens can be identified by pasting their sequence record into the window on the ID page of the species identification system for traditional Chinese medicine (www.tcmbarcode.cn). The integrated DNA barcoding database for identifying Chinese animal medicine is significantly important for animal species identification, rare and endangered species conservation, and the sustainable utilization of animal resources.
Fujikawa, Hiroshi
2017-01-01
Microbial concentration in samples of a food product lot has been generally assumed to follow the log-normal distribution in food sampling, but this distribution cannot accommodate the concentration of zero. In the present study, first, a probabilistic study with the most probable number (MPN) technique was done for a target microbe present at a low (or zero) concentration in food products. Namely, based on the number of target pathogen-positive samples in the total samples of a product found by a qualitative, microbiological examination, the concentration of the pathogen in the product was estimated by means of the MPN technique. The effects of the sample size and the total sample number of a product were then examined. Second, operating characteristic (OC) curves for the concentration of a target microbe in a product lot were generated on the assumption that the concentration of a target microbe could be expressed with the Poisson distribution. OC curves for Salmonella and Cronobacter sakazakii in powdered formulae for infants and young children were successfully generated. The present study suggested that the MPN technique and the Poisson distribution would be useful for qualitative microbiological test data analysis for a target microbe whose concentration in a lot is expected to be low.
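The OC-curve idea can be sketched as follows, assuming a presence/absence test on n samples of fixed mass and Poisson-distributed cells; the sampling plan and concentrations below are illustrative, not the values used in the study.

```python
import numpy as np
from scipy import stats

def prob_accept(conc_per_g, n_samples, sample_g, c_accept=0):
    """P(lot accepted) when at most c_accept samples may test positive.

    Under a Poisson distribution, a sample of `sample_g` grams is negative
    with probability exp(-conc * sample_g); the positives are then Binomial.
    """
    p_pos = 1.0 - np.exp(-conc_per_g * sample_g)
    return stats.binom.cdf(c_accept, n_samples, p_pos)

for conc in [0.001, 0.005, 0.01, 0.05]:        # cells per gram
    print(conc, round(prob_accept(conc, n_samples=30, sample_g=25), 3))
```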
Welcome to Fermilab Butterflies!!
Butterflies are fascinating insects, and there's a lot to learn about them! Join our expert, Tom Peterson, Fermilab's Butterfly Expert, and explore our Butterfly Links. Graphics and Page Design: Rory Parilac; Content: Tom Peterson and Rory Parilac; Database and Lasso Code: Liz Quigg.
An integrated eVoucher mechanism for flexible loads in real-time retail electricity market
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chen, Tao; Pourbabak, Hajir; Liang, Zheming
This study proposes an innovative economic and engineering coupled framework to encourage typical flexible loads or load aggregators, such as parking lots with high penetration of electric vehicles, to participate directly in the real-time retail electricity market based on an integrated eVoucher program. The integrated eVoucher program entails demand side management, either in the positive or negative direction, following a popular customer-centric design principle. It provides the extra economic benefit to end-users and reduces the risk associated with the wholesale electricity market for electric distribution companies (EDCs), meanwhile improving the potential resilience of the distribution networks with consideration for frequency deviations. When implemented, the eVoucher program allows typical flexible loads, such as electric vehicle parking lots, to adjust their demand and consumption behavior according to financial incentives from an EDC. A distribution system operator (DSO) works as a third party to hasten negotiations between such parking lots and EDCs, as well as the price clearing process. Eventually, both electricity retailers and power system operators will benefit from the active participation of the flexible loads and energy customers.
Modeling Altruistic and Aggressive Driver Behavior in a No-Notice Evacuation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brandstetter, Tim; Garrow, Dr. Laurie; Hunter, Dr. Michael
2007-01-01
This study examines the impact of altruistic and aggressive driver behavior on the effectiveness of an evacuation for a section of downtown Atlanta. The study area includes 37 signalized intersections, seven ramps, and 48 parking lots that vary by size, type (lot versus garage), peak volume, and number of ingress and egress points. A detailed microscopic model of the study area was created in VISSIM. Different scenarios examined the impacts of driver behavior on parking lot discharge rates and the loading rates from side streets on primary evacuation routes. A new methodology was created to accurately represent parking lot discharge rates. This study is also unique in that it assumes a "worst case scenario" that occurs with no advance notice during the morning peak period, when vehicles must transition from inbound to outbound routes. Simulation results indicate that while overall network clearance times are similar across scenarios, the distribution of delay on individual routes and across parking lots differ markedly. More equitable solutions (defined as the allocation of delay from parking lots and side streets to main evacuation routes) were observed with altruistic driver behavior.
Two lots of sodium nitrate fertilizer derived from Chilean caliche were analyzed to determine the distribution of perchlorate throughout the material. Although our samples represent a limited amount, we found that distribution was essentially homogeneous in any 100-g portion. Whe...
Distributed Learning and Constructivist Philosophy (Uzaktan Ögretim Ve Yapilandirmaci Felsefe)
ERIC Educational Resources Information Center
Tekinarslan, Erkan
2003-01-01
Distance education and its new form of distributed learning have been used in many countries to provide education to people who need training. Recent developments in instructional technology enable the institutions to distribute their education to more people in distant places than ever before. The field of distributed learning has a lot of…
Sampling requirements for forage quality characterization of rectangular hay bales
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheaffer, C.C.; Martin, N.P.; Jewett, J.G.
2000-02-01
Commercial lots of alfalfa (Medicago sativa L.) hay are often bought and sold on the basis of forage quality. Proper sampling is essential to obtain accurate forage quality results for pricing of alfalfa hay, but information about sampling is limited to small, 20- to 40-kg rectangular bales. Their objectives were to determine the within-bale variation in 400-kg rectangular bales and to determine the number and distribution of core samples required to represent the crude protein (CP), acid detergent fiber (ADF), neutral detergent fiber (NDF), and dry matter (DM) concentration in commercial lots of alfalfa hay. Four bales were selected from each of three hay lots and core sampled nine times per side for a total of 54 cores per bale. There was no consistent pattern of forage quality variation within bales. Averaged across lots, any portion of a bale was highly correlated with bale grand means for CP, ADF, NDF, and DM. Three lots of hay were probed six times per bale, one core per bale side from 55, 14, and 14 bales per lot. For determination of CP, ADF, NDF, and DM concentration, total core numbers required to achieve an acceptable standard error (SE) were minimized by sampling once per bale. Bootstrap analysis of data from the most variable hay lot suggested that forage quality of any lot of 400-kg alfalfa hay bales should be adequately represented by 12 bales sampled once per bale.
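The bootstrap step can be sketched as follows, with synthetic single-core crude-protein values standing in for the measured lot data; it shows how the standard error of the lot mean shrinks as more bales are sampled once each.

```python
import numpy as np

rng = np.random.default_rng(3)
# pretend these are crude-protein (CP) measurements from single cores, one per bale
cp_cores = rng.normal(loc=20.0, scale=1.5, size=55)   # % CP, illustrative lot

def bootstrap_se(values, n_sampled, n_boot=5000):
    """Bootstrap SE of the lot mean when only n_sampled cores are taken."""
    means = [rng.choice(values, size=n_sampled, replace=True).mean()
             for _ in range(n_boot)]
    return np.std(means, ddof=1)

for n in (4, 8, 12, 20):
    print(n, round(bootstrap_se(cp_cores, n), 3))
```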
Lifetime assessment analysis of Galileo Li/SO2 cells: Final report
DOE Office of Scientific and Technical Information (OSTI.GOV)
Levy, S.C.; Jaeger, C.D.; Bouchard, D.A.
Galileo Li/SO2 cells from five lots and five storage temperatures were studied to establish a database from which the performance of flight modules may be predicted. Nondestructive tests consisting of complex impedance analysis and a 15-s pulse were performed on all cells. Chemical analysis was performed on one cell from each lot/storage group, and the remaining cells were discharged at Galileo mission loads. An additional number of cells were placed on high-temperature accelerated aging storage for 6 months and then discharged. All data were statistically analyzed. Results indicate that the present Galileo design Li/SO2 cell will satisfy electrical requirements for a 10-year mission. 10 figs., 4 tabs.
The future application of GML database in GIS
NASA Astrophysics Data System (ADS)
Deng, Yuejin; Cheng, Yushu; Jing, Lianwen
2006-10-01
In 2004, the Geography Markup Language (GML) Implementation Specification (version 3.1.1) was published by the Open Geospatial Consortium, Inc. Now more and more applications in geospatial data sharing and interoperability depend on GML. The primary purpose of GML is the exchange and transportation of geo-information through standard modeling and encoding of geographic phenomena. However, the problem of how to organize and access large amounts of GML data effectively arises in applications. Research on GML databases focuses on these problems. The effective storage of GML data is a hot topic in GIS communities today. A GML Database Management System (GDBMS) mainly deals with the storage and management of GML data. Two types of XML database are currently distinguished: Native XML Databases and XML-Enabled Databases. Since GML is an application of the XML standard to geographic data, XML database systems can also be used for the management of GML. In this paper, we review the state of the art of XML databases, including storage, indexing, query languages, management systems and so on, and then move on to GML databases. At the end, the future prospects of GML databases in GIS applications are presented.
Study of data I/O performance on distributed disk system in mask data preparation
NASA Astrophysics Data System (ADS)
Ohara, Shuichiro; Odaira, Hiroyuki; Chikanaga, Tomoyuki; Hamaji, Masakazu; Yoshioka, Yasuharu
2010-09-01
Data volume is getting larger every day in Mask Data Preparation (MDP). In the meantime, faster data handling is always required. MDP flows typically introduce a Distributed Processing (DP) system to meet this demand, because using hundreds of CPUs is a reasonable solution. However, even if the number of CPUs is increased, the throughput may saturate because hard disk I/O and network speeds can become bottlenecks. MDP therefore needs to invest heavily not only in hundreds of CPUs but also in storage and network devices to raise the throughput. NCS introduces a new distributed processing system called "NDE". NDE is a distributed disk system that increases throughput without a large investment, because it is designed to use multiple conventional hard drives appropriately over the network. In this paper, NCS studies I/O performance with the OASIS® data format on NDE and shows how it contributes to achieving high throughput.
NCBI-compliant genome submissions: tips and tricks to save time and money.
Pirovano, Walter; Boetzer, Marten; Derks, Martijn F L; Smit, Sandra
2017-03-01
Genome sequences nowadays play a central role in molecular biology and bioinformatics. These sequences are shared with the scientific community through sequence databases. The sequence repositories of the International Nucleotide Sequence Database Collaboration (INSDC, comprising GenBank, ENA and DDBJ) are the largest in the world. Preparing an annotated sequence in such a way that it will be accepted by the database is challenging because many validation criteria apply. In our opinion, it is an undesirable situation that researchers who want to submit their sequence need either a lot of experience or help from partners to get the job done. To save valuable time and money, we list a number of recommendations for people who want to submit an annotated genome to a sequence database, as well as for tool developers, who could help to ease the process. © The Author 2015. Published by Oxford University Press. For Permissions, please email: journals.permissions@oup.com.
Five years database of landslides and floods affecting Swiss transportation networks
NASA Astrophysics Data System (ADS)
Voumard, Jérémie; Derron, Marc-Henri; Jaboyedoff, Michel
2017-04-01
Switzerland is a country threatened by many natural hazards. Many events occur in the built environment, affecting infrastructure, buildings, or transportation networks and occasionally producing expensive damage. This is why large landslides are generally well studied and monitored in Switzerland to reduce the financial and human risks. However, we have noticed a lack of data on the small events that have impacted roads and railways in recent years. We have therefore collected in a database all reported natural hazard events that have affected the Swiss transportation networks since 2012. More than 800 road and railway closures were recorded in the five years from 2012 to 2016. These events are classified into six classes: earth flow, debris flow, rockfall, flood, avalanche, and others. Data come from Swiss online press articles sorted by Google Alerts. The search is based on more than thirty keywords in three languages (Italian, French, German). After verifying that an article indeed relates to an event that affected a road or a railway track, the event is studied in detail. We finally obtain information on about sixty attributes per event, covering the event date, event type, localisation, meteorological conditions, as well as impacts and damage to the track and human damage. From this database, many trends over the five years of data collection can be outlined: in particular, the spatial and temporal distributions of the events, as well as their consequences in terms of traffic (closure duration, deviation, etc.). Even if the database is imperfect (because of the way it was built and the short time period considered), it highlights the non-negligible impact of small natural hazard events on roads and railways in Switzerland at the national level. This database helps to better understand and quantify these events and to better integrate them in risk assessment.
BGDB: a database of bivalent genes.
Li, Qingyan; Lian, Shuabin; Dai, Zhiming; Xiang, Qian; Dai, Xianhua
2013-01-01
Bivalent gene is a gene marked with both H3K4me3 and H3K27me3 epigenetic modification in the same area, and is proposed to play a pivotal role related to pluripotency in embryonic stem (ES) cells. Identification of these bivalent genes and understanding their functions are important for further research of lineage specification and embryo development. So far, lots of genome-wide histone modification data were generated in mouse and human ES cells. These valuable data make it possible to identify bivalent genes, but no comprehensive data repositories or analysis tools are available for bivalent genes currently. In this work, we develop BGDB, the database of bivalent genes. The database contains 6897 bivalent genes in human and mouse ES cells, which are manually collected from scientific literature. Each entry contains curated information, including genomic context, sequences, gene ontology and other relevant information. The web services of BGDB database were implemented with PHP + MySQL + JavaScript, and provide diverse query functions. Database URL: http://dailab.sysu.edu.cn/bgdb/
A statistical walk through the IAU MDC database
NASA Astrophysics Data System (ADS)
Andreić, Željko; Šegon, Damir; Vida, Denis
2014-02-01
The IAU MDC database is an important tool for the study of meteor showers. Throughout its history, the amount of data in the database for particular showers, and also their extent, has varied significantly. Thus, a systematic check of the current database (as of 1st of June, 2014) was performed, and the results are reported and discussed in this paper. The most obvious finding is that the database contains showers for which only basic radiant data are available, showers for which a full set of radiant and orbital data is provided, and showers with data spanning anywhere in between. As a lot of current work on meteor showers involves D-criteria for orbital similarity, this automatically excludes showers without orbital data from such work. A test run to compare showers only by their radiant data was performed, and this was found to be inadequate for testing shower similarities. A few inconsistencies and typographic errors were found and are briefly described here.
NASA Astrophysics Data System (ADS)
Gentry, Jeffery D.
2000-05-01
A relational database is a powerful tool for collecting and analyzing the vast amounts of interrelated data associated with the manufacture of composite materials. A relational database contains many individual database tables that store data that are related in some fashion. Manufacturing process variables as well as quality assurance measurements can be collected and stored in database tables indexed according to lot numbers, part type or individual serial numbers. Relationships between manufacturing process and product quality can then be correlated over a wide range of product types and process variations. This paper presents details on how relational databases are used to collect, store, and analyze process variables and quality assurance data associated with the manufacture of advanced composite materials. Important considerations are covered including how the various types of data are organized and how relationships between the data are defined. Employing relational database techniques to establish correlative relationships between process variables and quality assurance measurements is then explored. Finally, the benefits of database techniques such as data warehousing, data mining and web based client/server architectures are discussed in the context of composite material manufacturing.
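A minimal sketch of such a schema, using SQLite and illustrative (not the author's) table and column names, shows how process variables and quality-assurance measurements keyed to the same lot number can be joined for correlation studies.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE lots (
    lot_number   TEXT PRIMARY KEY,
    part_type    TEXT,
    cure_date    TEXT
);
CREATE TABLE process_variables (        -- e.g. autoclave temperature, pressure
    lot_number   TEXT REFERENCES lots(lot_number),
    variable     TEXT,
    value        REAL
);
CREATE TABLE qa_measurements (          -- e.g. void content, tensile strength
    lot_number   TEXT REFERENCES lots(lot_number),
    measurement  TEXT,
    value        REAL
);
""")
conn.execute("INSERT INTO lots VALUES ('L-001', 'wing skin', '2000-05-01')")
conn.execute("INSERT INTO process_variables VALUES ('L-001', 'cure_temp_C', 180.0)")
conn.execute("INSERT INTO qa_measurements VALUES ('L-001', 'void_content_pct', 1.2)")

# join process and quality data by lot number to look for correlations
rows = conn.execute("""
SELECT p.value AS cure_temp, q.value AS void_content
FROM process_variables p JOIN qa_measurements q USING (lot_number)
WHERE p.variable = 'cure_temp_C' AND q.measurement = 'void_content_pct'
""").fetchall()
print(rows)
```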
Begg, Graham S; Cullen, Danny W; Iannetta, Pietro P M; Squire, Geoff R
2007-02-01
Testing of seed and grain lots is essential in the enforcement of GM labelling legislation and needs reliable procedures for which the associated errors have been identified and minimised. In this paper we consider the testing of oilseed rape seed lots obtained from the harvest of a non-GM crop known to be contaminated by volunteer plants from a GM herbicide-tolerant variety. The objective was to identify and quantify the error associated with the testing of these lots, from the initial sampling to completion of the real-time PCR assay with which the level of GM contamination was quantified. The results showed that, under the controlled conditions of a single laboratory, the error associated with the real-time PCR assay was negligible in comparison with the sampling error, which was exacerbated by heterogeneity in the distribution of GM seeds, most notably at a small scale, i.e. 25 cm3. Sampling error was reduced by one to two thirds on the application of appropriate homogenisation procedures.
NASA Technical Reports Server (NTRS)
Goodrich, Kenneth H.; Sliwa, Steven M.; Lallman, Frederick J.
1989-01-01
Airplane designs are currently being proposed with a multitude of lifting and control devices. Because of the redundancy in ways to generate moments and forces, there are a variety of strategies for trimming each airplane. A linear optimum trim solution (LOTS) is derived using a Lagrange formulation. LOTS enables the rapid calculation of the longitudinal load distribution resulting in the minimum trim drag in level, steady-state flight for airplanes with a mixture of three or more aerodynamic surfaces and propulsive control effectors. Comparisons of the trim drags obtained using LOTS, a direct constrained optimization method, and several ad hoc methods are presented for vortex-lattice representations of a three-surface airplane and two-surface airplane with thrust vectoring. These comparisons show that LOTS accurately predicts the results obtained from the nonlinear optimization and that the optimum methods result in trim drag reductions of up to 80 percent compared to the ad hoc methods.
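The LOTS derivation itself is not reproduced here; the sketch below assumes trim drag is a quadratic function of the surface loads and the trim conditions are linear equality constraints, and solves that idealized problem in closed form through the Lagrange (KKT) system.

```python
import numpy as np

def lagrange_trim(Q, A, b):
    """Minimize 0.5*x'Qx subject to A x = b by solving the KKT system.

    x : load carried by each surface/effector
    Q : positive-definite matrix mapping loads to (induced) trim drag
    A : linear trim constraints, e.g. total lift and pitching-moment balance
    """
    n, m = Q.shape[0], A.shape[0]
    kkt = np.block([[Q, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([np.zeros(n), b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]                       # optimal load distribution (multipliers discarded)

# three lifting surfaces, illustrative drag matrix and trim requirements
Q = np.diag([2.0, 3.0, 5.0])             # per-surface induced-drag weights
A = np.array([[1.0, 1.0, 1.0],           # loads must sum to the required lift
              [2.0, -1.0, -4.0]])        # moments about the c.g. must cancel
b = np.array([1.0, 0.0])
print(lagrange_trim(Q, A, b))
```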
The Problem of Plagiarism: Students Who Copy May Not Know They've Committed an Offense
ERIC Educational Resources Information Center
MacDonell, Colleen
2005-01-01
With so many middle and high school students using subscription databases and the Web to complete assignments, there's a lot more cutting and pasting taking place than educators would like to see. And while it's understandable that teachers would be tempted to give failing grades to plagiarized work, it is unfair to students who may not even know…
21 CFR 211.196 - Distribution records.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 4 2010-04-01 2010-04-01 false Distribution records. 211.196 Section 211.196 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) DRUGS... contain lot or control numbers. (Approved by the Office of Management and Budget under control number 0910...
Pezzoli, Lorenzo; Pineda, Silvia; Halkyer, Percy; Crespo, Gladys; Andrews, Nick; Ronveaux, Olivier
2009-03-01
To estimate the yellow fever (YF) vaccine coverage for the endemic and non-endemic areas of Bolivia and to determine whether selected districts had acceptable levels of coverage (>70%). We conducted two surveys of 600 individuals (25 x 12 clusters) to estimate coverage in the endemic and non-endemic areas. We assessed 11 districts using lot quality assurance sampling (LQAS). The lot (district) sample was 35 individuals with six as decision value (alpha error 6% if true coverage 70%; beta error 6% if true coverage 90%). To increase feasibility, we divided the lots into five clusters of seven individuals; to investigate the effect of clustering, we calculated alpha and beta by conducting simulations where each cluster's true coverage was sampled from a normal distribution with a mean of 70% or 90% and standard deviations of 5% or 10%. Estimated coverage was 84.3% (95% CI: 78.9-89.7) in endemic areas, 86.8% (82.5-91.0) in non-endemic and 86.0% (82.8-89.1) nationally. LQAS showed that four lots had unacceptable coverage levels. In six lots, results were inconsistent with the estimated administrative coverage. The simulations suggested that the effect of clustering the lots is unlikely to have significantly increased the risk of making incorrect accept/reject decisions. Estimated YF coverage was high. Discrepancies between administrative coverage and LQAS results may be due to incorrect population data. Even allowing for clustering in LQAS, the statistical errors would remain low. Catch-up campaigns are recommended in districts with unacceptable coverage.
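The clustering simulation can be sketched as follows, using the abstract's sampling plan (five clusters of seven individuals, decision value six) and drawing cluster-level coverage from a normal distribution; the standard deviations are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def wrong_decision_rate(mean_cov, sd_cov, good_lot, n_sim=100_000,
                        clusters=5, per_cluster=7, decision=6):
    """Simulate LQAS decisions when coverage varies between clusters.

    A lot is accepted if at most `decision` unvaccinated individuals are
    found among clusters*per_cluster sampled; `good_lot` says whether the
    lot should have been accepted (true coverage 90%) or rejected (70%).
    """
    cov = np.clip(rng.normal(mean_cov, sd_cov, size=(n_sim, clusters)), 0, 1)
    unvaccinated = rng.binomial(per_cluster, 1 - cov).sum(axis=1)
    accepted = unvaccinated <= decision
    return (~accepted).mean() if good_lot else accepted.mean()

print("error when true coverage is 90%:", round(wrong_decision_rate(0.90, 0.10, True), 3))
print("error when true coverage is 70%:", round(wrong_decision_rate(0.70, 0.10, False), 3))
```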
Drug-Path: a database for drug-induced pathways
Zeng, Hui; Cui, Qinghua
2015-01-01
Some databases for drug-associated pathways have been built and are publicly available. However, the pathways curated in most of these databases are drug-action or drug-metabolism pathways. In recent years, high-throughput technologies such as microarray and RNA-sequencing have produced lots of drug-induced gene expression profiles. Interestingly, drug-induced gene expression profiles frequently show distinct patterns, indicating that drugs normally induce the activation or repression of distinct pathways. Therefore, these pathways contribute to studying the mechanisms of drugs and to drug repurposing. Here, we present Drug-Path, a database of drug-induced pathways, which was generated by KEGG pathway enrichment analysis of drug-induced upregulated and downregulated genes based on drug-induced gene expression datasets in Connectivity Map. Drug-Path provides user-friendly interfaces to retrieve, visualize and download the drug-induced pathway data in the database. In addition, the genes deregulated by a given drug are highlighted in the pathways. All data were organized using SQLite. The web site was implemented using Django, a Python web framework. Finally, we believe that this database will be useful for related research. Database URL: http://www.cuilab.cn/drugpath PMID:26130661
Amadoz, Alicia; González-Candelas, Fernando
2007-04-20
Most research scientists working in the fields of molecular epidemiology, population and evolutionary genetics are confronted with the management of large volumes of data. Moreover, the data used in studies of infectious diseases are complex and usually derive from different institutions such as hospitals or laboratories. Since no public database scheme incorporating clinical and epidemiological information about patients and molecular information about pathogens is currently available, we have developed an information system, composed of a main database and a web-based interface, which integrates both types of data and satisfies requirements of good organization, simple accessibility, data security and multi-user support. From the moment a patient arrives at a hospital or health centre until the processing and analysis of molecular sequences obtained from infectious pathogens in the laboratory, a lot of information is collected from different sources. We have divided the most relevant data into 12 conceptual modules around which we have organized the database schema. Our schema is very complete and covers many aspects of sample sources, samples, laboratory processes, molecular sequences, phylogenetic results, clinical tests and results, clinical information, treatments, pathogens, transmissions, outbreaks and bibliographic information. Communication between end-users and the selected Relational Database Management System (RDBMS) is carried out by default through a command-line window or through a user-friendly, web-based interface which provides access and management tools for the data. epiPATH is an information system for managing clinical and molecular information from infectious diseases. It facilitates daily work related to infectious pathogens and the sequences obtained from them. This software is intended for local installation in order to safeguard private data and provides advanced SQL users with the flexibility to adapt it to their needs. The database schema, tool scripts and web-based interface are free software, but the data stored in our database server are not publicly available. epiPATH is distributed under the terms of the GNU General Public License. More details about epiPATH can be found at http://genevo.uv.es/epipath.
Development and implementation of a quality assurance program for a hormonal contraceptive implant.
Owen, Derek H; Jenkins, David; Cancel, Aida; Carter, Eli; Dorflinger, Laneta; Spieler, Jeff; Steiner, Markus J
2013-04-01
The importance of the distribution of safe, effective and cost-effective pharmaceutical products in resource-constrained countries is the subject of increasing attention. FHI 360 has developed a program aimed at evaluating the quality of a contraceptive implant manufactured in China, while the product is being registered in an increasing number of countries and distributed by international procurement agencies. The program consists of (1) independent product testing; (2) ongoing evaluation of the manufacturing facility through audits and inspections; and (3) post-marketing surveillance. This article focuses on the laboratory testing of the product. The various test methods were chosen from the following test method compendia, the United States Pharmacopeia (USP), British Pharmacopeia (BP), International Organization for Standardization (ISO), the American Society for Testing and Materials (ASTM), or lot release tests mandated by Chinese regulatory requirements. Each manufactured lot is independently tested prior to its distribution to countries supported by this program. In addition, a more detailed annual testing program includes evaluation of the active ingredient (levonorgestrel), the final product and the packaging material. Over the first 4 years of this 5-year project, all tested lots met the established quality criteria. The quality assurance program developed for this contraceptive implant has helped ensure that a safe product was being introduced into developing country family planning programs. This program provides a template for establishing quality assurance programs for other cost-effective pharmaceutical products that have not yet received stringent regulatory approval and are being distributed in resource-poor settings. Copyright © 2013 Elsevier Inc. All rights reserved.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-21
... recalls made (1,881), and total number of adverse experience reports received (143,883) in FY 2010. The..., the fill lot numbers for the total number of dosage units of each strength or potency distributed (e.g... each step in the manufacture and distribution of a product including any
Albuquerque, F S; Peso-Aguiar, M C; Assunção-Albuquerque, M J T
2008-11-01
The goal of this study was to document the distribution and establishment of A. fulica, as well as its feeding preferences and behaviour in situ. The study was carried out in the city of Lauro de Freitas, Bahia state, Brazil, between November 2001 and November 2002. We used catch-per-unit-effort methods to determine abundance, distribution, habitat choice and food preferences. The abundance and distribution of A. fulica were greatest in the urban area, mainly near the coastline. Lots and house gardens were the most preferred sites during active hours. The results indicated that A. fulica started its activity at the end of the evening and stopped in mid-morning. Its preferred foods were vascular plants such as Hibiscus syriacus, Ricinus communis, Carica papaya, Galinsonga coccinea, Lippia alba, Ixora coccinea, Musa parasidisiaca, Mentha spicata and Cymbopogon citrates. Our results indicate that A. fulica is well adapted and established in this city and that modified environments facilitate its establishment and dispersion. However, human perturbation, such as the clearance of lots, could be limiting for the persistence of A. fulica populations.
Equilibrium sampling by reweighting nonequilibrium simulation trajectories
NASA Astrophysics Data System (ADS)
Yang, Cheng; Wan, Biao; Xu, Shun; Wang, Yanting; Zhou, Xin
2016-03-01
Based on equilibrium molecular simulations, it is usually difficult to efficiently visit the whole conformational space of complex systems, which are separated into some metastable regions by high free energy barriers. Nonequilibrium simulations could enhance transitions among these metastable regions and then be applied to sample equilibrium distributions in complex systems, since the associated nonequilibrium effects can be removed by employing the Jarzynski equality (JE). Here we present such a systematical method, named reweighted nonequilibrium ensemble dynamics (RNED), to efficiently sample equilibrium conformations. The RNED is a combination of the JE and our previous reweighted ensemble dynamics (RED) method. The original JE reproduces equilibrium from lots of nonequilibrium trajectories but requires that the initial distribution of these trajectories is equilibrium. The RED reweights many equilibrium trajectories from an arbitrary initial distribution to get the equilibrium distribution, whereas the RNED has both advantages of the two methods, reproducing equilibrium from lots of nonequilibrium simulation trajectories with an arbitrary initial conformational distribution. We illustrated the application of the RNED in a toy model and in a Lennard-Jones fluid to detect its liquid-solid phase coexistence. The results indicate that the RNED sufficiently extends the application of both the original JE and the RED in equilibrium sampling of complex systems.
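The reweighting step at the core of this approach can be sketched as follows, with synthetic work values and observables; each nonequilibrium trajectory contributes with weight exp(-W/kT), as prescribed by the Jarzynski equality.

```python
import numpy as np

rng = np.random.default_rng(5)

def je_reweighted_average(observable, work, kT=1.0):
    """Equilibrium estimate <A> from nonequilibrium trajectories via exp(-W/kT) weights."""
    w = np.exp(-(work - work.min()) / kT)   # shift by min(W) for numerical stability
    return np.sum(w * observable) / np.sum(w)

# synthetic end-point observable and work values for 10^4 driven trajectories
work = rng.gamma(shape=2.0, scale=0.5, size=10_000)         # W >= 0, illustrative
observable = 1.0 + 0.3 * work + rng.normal(0, 0.1, 10_000)  # correlated with W

print("unweighted mean :", round(observable.mean(), 3))
print("JE-reweighted   :", round(je_reweighted_average(observable, work), 3))
```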
Density Distributions of Cyclotrimethylenetrinitramines (RDX)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffman, D M
2002-03-19
As part of the US Army Foreign Comparative Testing (FCT) program, the density distributions of six samples of class 1 RDX were measured using the density gradient technique. This technique was used in an attempt to distinguish between RDX crystallized by a French manufacturer (designated insensitive RDX, or IRDX) and RDX manufactured at Holston Army Ammunition Plant (HAAP), the current source of RDX for the Department of Defense (DoD). Two samples from different lots of French IRDX had an average density of 1.7958 ± 0.0008 g/cc. The theoretical density of a perfect RDX crystal is 1.806 g/cc. This yields 99.43% of the theoretical maximum density (TMD). For two HAAP RDX lots the average density was 1.786 ± 0.002 g/cc, only 98.89% TMD. Several other techniques were used for preliminary characterization of one lot of French IRDX and two lots of HAAP RDX. Light scattering, SEM and polarized optical microscopy (POM) showed that SNPE and Holston RDX had the appropriate particle size distribution for Class 1 RDX. High performance liquid chromatography showed quantities of HMX in HAAP RDX. French IRDX also showed a 1.1 °C higher melting point compared to HAAP RDX in differential scanning calorimetry (DSC), consistent with no melting point depression due to the HMX contaminant. A second part of the program involved characterization of Holston RDX recrystallized using the French process. After reprocessing, the average density of the Holston RDX increased to 1.7907 g/cc. Apparently HMX in RDX can act as a nucleating agent in the French RDX recrystallization process. The French IRDX contained no HMX, which is assumed to account for its higher density and narrower density distribution. Reprocessing of RDX from Holston improved the average density compared to the original Holston RDX, but the resulting HIRDX was not as dense as the original French IRDX. Recrystallized Holston IRDX crystals were much larger (3-500 µm or more) than either the original class 1 HAAP RDX or French IRDX.
Kumar, Pankaj; Chaitanya, Pasumarthy S; Nagarajaram, Hampapathalu A
2011-01-01
PSSRdb (Polymorphic Simple Sequence Repeats database) (http://www.cdfd.org.in/PSSRdb/) is a relational database of polymorphic simple sequence repeats (PSSRs) extracted from 85 different species of prokaryotes. Simple sequence repeats (SSRs) are tandem repeats of nucleotide motifs of sizes 1-6 bp and are highly polymorphic. SSR mutations in and around coding regions affect transcription and translation of genes. Such changes underpin phase variations and antigenic variations seen in some bacteria. Although SSR-mediated phase variation and antigenic variation have been well studied in some bacteria, many other prokaryotic species remain to be investigated for SSR-mediated adaptive and other evolutionary advantages. As part of our ongoing studies on SSR polymorphism in prokaryotes, we compared the genome sequences of the various strains and isolates available for 85 different species of prokaryotes, extracted a number of SSRs showing length variations, and created a relational database called PSSRdb. This database gives useful information such as the location of PSSRs in genomes, length variation across genomes, the regions harboring PSSRs, etc. The information provided in this database is very useful for further research and analysis of SSRs in prokaryotes.
3D Partition-Based Clustering for Supply Chain Data Management
NASA Astrophysics Data System (ADS)
Suhaibah, A.; Uznir, U.; Anton, F.; Mioc, D.; Rahman, A. A.
2015-10-01
Supply Chain Management (SCM) is the management of the flow of products and goods from their point of origin to the point of consumption. During the SCM process, the information and datasets gathered for this application are massive and complex. This is due to its several processes such as procurement, product development and commercialization, physical distribution, outsourcing and partnerships. For practical application, SCM datasets need to be managed and maintained to provide better service to the three main categories of users: distributors, customers and suppliers. To manage these datasets, a data constellation structure is used to accommodate the data in a spatial database. However, this situation creates a few problems in geospatial databases; for example, database performance deteriorates, especially during query operations. We strongly believe that a more practical hierarchical tree structure is required for efficient SCM processing. Besides that, a three-dimensional approach is required for the management of SCM datasets, since they involve multi-level locations such as shop lots and residential apartments. The 3D R-Tree has been increasingly used for 3D geospatial database management due to its simplicity and extensibility. However, it suffers from serious overlaps between nodes. In this paper, we propose partition-based clustering for the construction of a hierarchical tree structure. Several datasets are tested using the proposed method, and the percentage of overlapping nodes and the volume coverage are computed and compared with the original 3D R-Tree and other practical approaches. The experiments demonstrated in this paper substantiate that the hierarchical structure of the proposed partition-based clustering is capable of preserving minimal overlap and coverage. The query performance was tested using 300,000 points of an SCM dataset and the results are presented in this paper. This paper also discusses the outlook of the structure for future reference.
ERIC Educational Resources Information Center
Phillips, Ian
2015-01-01
Is it a good thing to have a lot of evidence? Surely the historian would answer that yes, it is: the more evidence that can be used, the better. The problem with this approach, though, is that too much data can be overwhelming for the history student--and, in Ian Phillips's experience, for the history student teacher. In this article Phillips…
A virtual observatory for photoionized nebulae: the Mexican Million Models database (3MdB).
NASA Astrophysics Data System (ADS)
Morisset, C.; Delgado-Inglada, G.; Flores-Fajardo, N.
2015-04-01
Photoionization models obtained with numerical codes are widely used to study the physics of the interstellar medium (planetary nebulae, HII regions, etc). Grids of models are performed to understand the effects of the different parameters used to describe the regions on the observables (mainly emission line intensities). Most of the time, only a small part of the computed results of such grids are published, and they are sometimes hard to obtain in a user-friendly format. We present here the Mexican Million Models dataBase (3MdB), an effort to resolve both of these issues in the form of a database of photoionization models, easily accessible through the MySQL protocol, and containing a lot of useful outputs from the models, such as the intensities of 178 emission lines, the ionic fractions of all the ions, etc. Some examples of the use of the 3MdB are also presented.
Paoletti, Claudia; Esbensen, Kim H
2015-01-01
Material heterogeneity influences the effectiveness of sampling procedures. Most sampling guidelines used for the assessment of food and/or feed commodities are based on classical statistical distribution requirements (the normal, binomial, and Poisson distributions) and almost universally rely on the assumption of randomness. However, this is unrealistic. The scientific food and feed community recognizes a strong preponderance of non-random distribution within commodity lots, which should be a more realistic prerequisite for the definition of effective sampling protocols. Nevertheless, these heterogeneity issues are overlooked, as the prime focus is often placed only on financial, time, equipment, and personnel constraints instead of mandating acquisition of documented representative samples under realistic heterogeneity conditions. This study shows how the principles promulgated in the Theory of Sampling (TOS) and practically tested over 60 years provide an effective framework for dealing with the complete set of adverse aspects of both compositional and distributional heterogeneity (material sampling errors), as well as with the errors incurred by the sampling process itself. The results of an empirical European Union study on genetically modified soybean heterogeneity, Kernel Lot Distribution Assessment, are summarized, as they have a strong bearing on the issue of proper sampling protocol development. TOS principles apply universally in the food and feed realm and must therefore be considered the only basis for development of valid sampling protocols free from distributional constraints.
NASA Astrophysics Data System (ADS)
Hafiz Mohd Hazir, Mohd; Muda, Tuan Mohamad Tuan
2016-06-01
The Malaysian rubber industry, especially the upstream sector, depends heavily on smallholders to produce latex or cup lumps. Identification and monitoring of rubber smallholders are essential tasks for the sustainability of the Malaysian rubber industry, allowing the authorised agencies that support smallholders to plan, organise, and manage better. This paper introduces a method for calculating the total number of smallholders and identifying the location of their planted rubber areas. The scope of this study focused only on land owners as rubber smallholders in the selected study area of Negeri Sembilan. The land use map provided by the Department of Agriculture Malaysia gave information on the distribution of rubber areas in Malaysia, while the cadastral map from the Department of Survey and Mapping Malaysia was used specifically to identify the land owner of each rubber parcel or lot. Both datasets were analysed and processed with ArcGIS software to extract the information, and the results were then compared with the Malaysian Rubber Board smallholders database.
Yang, Y.; Van Metre, P.C.; Mahler, B.J.; Wilson, J.T.; Ligouis, B.; Razzaque, M.; Schaeffer, D.J.; Werth, C.J.
2010-01-01
Carbonaceous material (CM) particles are the principal vectors transporting polycyclic aromatic hydrocarbons (PAHs) into urban waters via runoff; however, characteristics of CM particles in urban watersheds and their relative contributions to PAH contamination remain unclear. Our objectives were to identify the sources and distribution of CM particles in an urban watershed and to determine the types of CMs that were the dominant sources of PAHs in the lake and stream sediments. Samples of soils, parking lot and street dust, and streambed and lake sediment were collected from the Lake Como watershed in Fort Worth, Texas. Characteristics of CM particles determined by organic petrography and a significant correlation between PAH concentrations and organic carbon in coal tar, asphalt, and soot indicate that these three CM particle types are the major sources and carriers of PAHs in the watershed. Estimates of the distribution of PAHs in CM particles indicate that coal-tar pitch, used in some pavement sealcoats, is a dominant source of PAHs in the watershed, and contributes as much as 99% of the PAHs in sealed parking lot dust, 92% in unsealed parking lot dust, 88% in commercial area soil, 71% in streambed sediment, and 84% in surficial lake sediment. © 2010 American Chemical Society.
Building a generalized distributed system model
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
A number of topics related to building a generalized distributed system model are discussed. The effects of distributed database modeling on evaluation of transaction rollbacks, the measurement of effects of distributed database models on transaction availability measures, and a performance analysis of static locking in replicated distributed database systems are covered.
Analysis of quantitative data obtained from toxicity studies showing non-normal distribution.
Kobayashi, Katsumi
2005-05-01
The data obtained from toxicity studies are examined for homogeneity of variance but usually not for normality of distribution. In this study, I examined the measured items of a carcinogenicity/chronic toxicity study in rats for both homogeneity of variance and normal distribution. It was observed that a lot of hematology and biochemistry items showed non-normal distributions. For testing the normal distribution of data obtained from toxicity studies, the data of the concurrent control group may be examined, and for data that show a non-normal distribution, robust non-parametric tests may be applied.
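A minimal sketch of such a workflow, on synthetic skewed data, tests the concurrent control group for normality and falls back to a robust non-parametric comparison when the assumption fails.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
control = rng.lognormal(mean=1.0, sigma=0.6, size=50)   # skewed, e.g. a biochemistry item
treated = rng.lognormal(mean=1.2, sigma=0.6, size=50)

w, p_normal = stats.shapiro(control)                    # test the control group for normality
if p_normal < 0.05:
    stat, p = stats.mannwhitneyu(control, treated)      # robust non-parametric comparison
    test = "Mann-Whitney U"
else:
    stat, p = stats.ttest_ind(control, treated)
    test = "t-test"
print(test, round(p, 4))
```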
MED32/442: Internet Lectures: A five years experience at the University of Vienna
Kritz, H; Najemnik, C; Sinzinger, H
1999-01-01
Introduction: Internet lectures are a very useful specialized tool for distributing information to interested people. Our experience started in 1994 and we want to give a survey. Methods: We started with a special lecture on atherosclerosis and improved the method over the last years. We had a lot of problems with certification and identification procedures, but they are solved now. In the last years we added web casting tools. Results: At this time we have 80-140 students per term participating in the lectures; these are students attending a certificate course. Additionally, we distribute our content over two homepages that we have installed to support our work: http://www.lipidforum.at and http://www.billrothhaus.at (10000 hits/month). Discussion: The Internet will be the most important tool for future teaching. It is an interesting experience to teach students from all over the world, and you learn a lot from these people yourself.
A comparison of LMC and SDL complexity measures on binomial distributions
NASA Astrophysics Data System (ADS)
Piqueira, José Roberto C.
2016-02-01
The concept of complexity has been widely discussed in the last forty years, with a lot of contributions coming from all areas of human knowledge, including Philosophy, Linguistics, History, Biology, Physics, Chemistry and many others, and with mathematicians trying to give a rigorous view of it. In this sense, thermodynamics meets information theory and, by using the entropy definition, López-Ruiz, Mancini and Calbet proposed a definition for complexity that is referred to as the LMC measure. Shiner, Davison and Landsberg, by slightly changing the LMC definition, proposed the SDL measure, and both LMC and SDL are satisfactory for measuring complexity in a lot of problems. Here, the SDL and LMC measures are applied to the case of a binomial probability distribution, trying to clarify how the length of the data set implies complexity and how the success probability of the repeated trials determines how complex the whole set is.
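For reference, the two measures can be computed directly from the binomial probability mass function; the sketch below uses the standard definitions (LMC complexity as normalized entropy times disequilibrium, and the simplest SDL form, disorder times order) with illustrative values of n and p.

```python
import numpy as np
from scipy import stats

def lmc_sdl(probs):
    """LMC complexity C = H*D and SDL complexity Gamma = Delta*(1 - Delta)."""
    q = np.asarray(probs, dtype=float)
    n_states = len(q)
    nz = q[q > 0]
    h = -np.sum(nz * np.log(nz)) / np.log(n_states)   # normalized Shannon entropy (disorder Delta)
    d = np.sum((q - 1.0 / n_states) ** 2)             # disequilibrium from the uniform distribution
    return h * d, h * (1.0 - h)

for n, prob in [(10, 0.5), (100, 0.5), (100, 0.05)]:
    pmf = stats.binom.pmf(np.arange(n + 1), n, prob)
    c_lmc, c_sdl = lmc_sdl(pmf)
    print(f"n={n:<4} p={prob:<5} LMC={c_lmc:.4f} SDL={c_sdl:.4f}")
```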
Computer Science Research in Europe.
1984-08-29
Topics receiving the most attention include multi-database structure and the dependencies between databases and multi-databases. At the University of Newcastle (Newcastle University, UK), a multi-database system for distributed data management has been completed. At INRIA, work continues on the communications requirements of distributed database systems and on protocols; a project called SIRIUS was established there in 1977.
A Framework for Optimizing Phytosanitary Thresholds in Seed Systems.
Choudhury, Robin Alan; Garrett, Karen A; Klosterman, Steven J; Subbarao, Krishna V; McRoberts, Neil
2017-10-01
Seedborne pathogens and pests limit production in many agricultural systems. Quarantine programs help prevent the introduction of exotic pathogens into a country, but few regulations directly apply to reducing the reintroduction and spread of endemic pathogens. Use of phytosanitary thresholds helps limit the movement of pathogen inoculum through seed, but the costs associated with rejected seed lots can be prohibitive for voluntary implementation of phytosanitary thresholds. In this paper, we outline a framework to optimize thresholds for seedborne pathogens, balancing the cost of rejected seed lots and benefit of reduced inoculum levels. The method requires relatively small amounts of data, and the accuracy and robustness of the analysis improves over time as data accumulate from seed testing. We demonstrate the method first and illustrate it with a case study of seedborne oospores of Peronospora effusa, the causal agent of spinach downy mildew. A seed lot threshold of 0.23 oospores per seed could reduce the overall number of oospores entering the production system by 90% while removing 8% of seed lots destined for distribution. Alternative mitigation strategies may result in lower economic losses to seed producers, but have uncertain efficacy. We discuss future challenges and prospects for implementing this approach.
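The trade-off at the heart of the framework can be sketched as follows, assuming (purely for illustration) a log-normal distribution of oospores per seed across lots; it reports, for a given threshold, the share of lots rejected and the share of total inoculum removed.

```python
import numpy as np

rng = np.random.default_rng(7)
# illustrative oospores-per-seed levels for 10,000 seed lots
oospores = rng.lognormal(mean=-3.0, sigma=1.5, size=10_000)

def threshold_tradeoff(levels, threshold):
    """Share of lots rejected and share of total inoculum removed at a threshold."""
    rejected = levels > threshold
    lots_rejected = rejected.mean()
    inoculum_removed = levels[rejected].sum() / levels.sum()
    return lots_rejected, inoculum_removed

for t in (0.05, 0.1, 0.23, 0.5):
    lots, inoc = threshold_tradeoff(oospores, t)
    print(f"threshold={t:<5} lots rejected={lots:5.1%} inoculum removed={inoc:5.1%}")
```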
Salary Management System for Small and Medium-sized Enterprises
NASA Astrophysics Data System (ADS)
Hao, Zhang; Guangli, Xu; Yuhuan, Zhang; Yilong, Lei
In Small and Medium-sized Enterprises (SMEs), wage entry, calculation, and totaling used to be done manually; the data volume is quite large, processing is slow, and errors are easy to make, resulting in low efficiency. The main purpose of this paper is to present the basis of a salary management system: establishing a scientific database and a computerized payroll system, using the computer to replace much of the past manual work in order to reduce duplicated staff labor and improve working efficiency. The system combines the actual needs of SMEs and, through in-depth study and practice of the C/S mode, the PowerBuilder 10.0 development tool, databases, and the SQL language, completes the requirements analysis, database design, and application design and development for a payroll system. Database files for wages, departments, units, and personnel are included, with data management, department management, personnel management, and other functions; query, add, delete, and modify operations are realized through control and management of the database. The system has a reasonable design and relatively complete functions, and testing shows stable operation that meets the basic needs of the work.
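A minimal relational sketch of the kind of schema and operations described, using SQLite in Python as a stand-in for the PowerBuilder/SQL client-server stack; all table and column names are illustrative and not taken from the actual system.

```python
import sqlite3

# Illustrative schema: department, employee, and wage tables with a basic
# query, standing in for the payroll database files the paper describes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE department (dept_id INTEGER PRIMARY KEY, name TEXT NOT NULL);
CREATE TABLE employee (
    emp_id   INTEGER PRIMARY KEY,
    name     TEXT NOT NULL,
    dept_id  INTEGER REFERENCES department(dept_id)
);
CREATE TABLE wage (
    emp_id    INTEGER REFERENCES employee(emp_id),
    month     TEXT NOT NULL,
    base      REAL NOT NULL,
    bonus     REAL DEFAULT 0,
    deduction REAL DEFAULT 0
);
""")
conn.execute("INSERT INTO department VALUES (1, 'Sales')")
conn.execute("INSERT INTO employee VALUES (100, 'Li Wei', 1)")
conn.execute("INSERT INTO wage VALUES (100, '2016-01', 4000, 500, 120)")

# Monthly payroll total per department, replacing the former manual tally.
for row in conn.execute("""
    SELECT d.name, w.month, SUM(w.base + w.bonus - w.deduction) AS payroll
    FROM wage w
    JOIN employee e  ON w.emp_id  = e.emp_id
    JOIN department d ON e.dept_id = d.dept_id
    GROUP BY d.name, w.month
"""):
    print(row)
```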
Distribution Grid Integration Unit Cost Database | Solar Research | NREL
NREL's Distribution Grid Integration Unit Cost Database contains unit cost information for different components that may be used in association with PV. It includes information from the California utility unit cost guides on traditional
Technology Used for Realization of the Reform in Informal Areas.
NASA Astrophysics Data System (ADS)
Qirko, K.
2008-12-01
ORGANIZATION OF STRUCTURE AND ADMINISTRATION OF ALUIZNI Law no. 9482, dated 03.03.2006, "On legalization, urban planning and integration of unauthorized buildings", entered into force on May 15, 2006. The Council of Ministers, with its decision no. 289, dated 17.05.2006, established the Agency for the Legalization, Urbanization, and Integration of the Informal Zones/Buildings (ALUIZNI), with its twelve local bodies. ALUIZNI began its activity, in reliance on Law no. 9482, dated 03.03.2006, "On legalization, urban planning and integration of unauthorized buildings", in July 2006. The administration of this agency was completed during this period and is composed of the General Directorate and twelve regional directorates. As of today, this institution has 300 employees. The administrative structure of ALUIZNI is organized to achieve the objectives of the reform and to solve the problems arising during its completion. The following sectors have been established to achieve the objectives: the sector of compensation of owners; the sector of cartography; the sector of geographic information system (GIS) data elaboration and information technology; the sector of urban planning; the sector of registration of legalized properties; and the human resources sector. Following this vision, digital aerial photography of the Republic of Albania is in the process of realization, from which we will obtain, for the first time, an orthophoto and a digital map covering the entire territory of the country. This cartographic product will serve all government and private institutions. All other systems, such as the territory management system, the property registration system, the population registration system, the address system, urban planning studies and systems, and the definition of boundaries of administrative and touristic zones, will be established based on this cartographic system. The cartographic product will have the parameters mentioned below, divided into lots (2.3 MEuro). Lot I includes the urban zone, 1,200 km2; it will have a resolution of 8 cm per pixel and will be produced as an orthophoto and a digital vectorized map. Lot II includes the rural zone, 12,000 km2; an orthophoto with a resolution of 8 cm per pixel will be produced. Lot III includes the mountainous zone, 15,000 km2; an orthophoto with a resolution of 30 cm per pixel will be produced. All the technical documentation of the process will be produced digitally, based on the digital map, and it will form the main database. We have established the sector of geographic information system (GIS) data elaboration and information technology with the purpose of assuring transparency and correctness of the process and of providing permanently useful information for various purposes (1.1 MEuro). GIS is a modern technology that processes and connects different kinds of information. The main objective of this sector is the establishment of self-declaration databases, with 30 characteristics for each declaration, and a database for the process, with 40 characteristics for each property, including cartographic, geographic, and construction data.
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1992-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed data sets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image data sets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate distributed database search.
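As a toy illustration of the pointer-caching idea (cross-referencing two data sets by caching only pointers and deferring any data movement until a pointer is dereferenced), here is a small sketch; the names and structures are hypothetical and do not represent the VIEWCACHE interface.

```python
# Toy sketch of pointer-based cross-referencing: a "view cache" stores only
# (record-id, record-id) pointer pairs that satisfy a cross-database match,
# so no bulk data moves between sites until a pointer is dereferenced.
# All names here are illustrative, not the VIEWCACHE API.

catalog_a = {"src1": {"ra": 10.1, "dec": -5.0}, "src2": {"ra": 20.3, "dec": 2.2}}
catalog_b = {"obsX": {"ra": 10.1, "dec": -5.0, "flux": 3.2}}

def cross_reference(a, b, tol=0.05):
    """Return cached pointer pairs (key_a, key_b) for positionally matching records."""
    cache = []
    for ka, ra in a.items():
        for kb, rb in b.items():
            if abs(ra["ra"] - rb["ra"]) < tol and abs(ra["dec"] - rb["dec"]) < tol:
                cache.append((ka, kb))          # store pointers only, no data copied
    return cache

pointer_cache = cross_reference(catalog_a, catalog_b)
# Data are fetched only when a cached pointer is actually dereferenced:
for ka, kb in pointer_cache:
    print(ka, "->", catalog_b[kb]["flux"])
```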
Growth of enterotoxigenic Staphylococcus aureus in povi masima, a traditional Pacific island food.
Wong, T L; Whyte, R J; Graham, C G; Saunders, D; Schumacher, J; Hudson, J A
2004-01-01
To obtain preliminary data on the microbiology and hurdles to pathogen growth in the traditional Pacific Island food povi masima, which is essentially beef brisket cured in brine. Six containers of povi masima were prepared and two were inoculated with five enterotoxigenic strains of Staphylococcus aureus. The povi masima were divided into two lots, each containing two uninoculated control containers and one inoculated container. Lot 1 was incubated at room temperature (20 degrees C) and lot 2 under refrigeration (4-5 degrees C) for up to 98 days. During storage, samples were removed and tested for aerobic plate count, coagulase-producing staphylococci, Clostridium perfringens, staphylococcal enterotoxin and various chemical parameters of the food. Coagulase-producing staphylococci and aerobic plate counts grew to high levels in both the inoculated and uninoculated lots stored at room temperature, but enterotoxin was only detected at one time point in these lots, and this may represent a false positive result. The concentration of NaCl in the meat increased with time as concentrations equilibrated, and nitrite was rapidly lost in those lots stored at room temperature. Storage at 4-5 degrees C prevented proliferation of coagulase-producing staphylococci. For safe curing and storage, this food should be kept under refrigeration, as this prevented growth of staphylococci. Optimum storage would also be achieved with improved attempts to ensure equal distribution of NaCl prior to storage. Under conditions traditionally used to cure and store this food, enterotoxigenic staphylococci can grow to numbers where toxigenesis might occur, especially during the early stages of curing when the salt has not yet diffused from the brine into the meat.
MAJOR TRANSPORT MECHANISMS OF PYRETHROIDS IN RESIDENTIAL SETTINGS AND EFFECTS OF MITIGATION MEASURES
Davidson, Paul C; Jones, Russell L; Harbourt, Christopher M; Hendley, Paul; Goodwin, Gregory E; Sliz, Bradley A
2014-01-01
The major pathways for transport of pyrethroids were determined in runoff studies conducted at a full-scale test facility in central California, USA. The 6 replicate house lots were typical of front lawns and house fronts of California residential developments and consisted of stucco walls, garage doors, driveways, and residential lawn irrigation sprinkler systems. Each of the 6 lots also included a rainfall simulator to generate artificial rainfall events. Different pyrethroids were applied to 5 surfaces—driveway, garage door and adjacent walls, lawn, lawn perimeter (grass near the house walls), and house walls above grass. The volume of runoff water from each house lot was measured, sampled, and analyzed to determine the amount of pyrethroid mass lost from each surface. Applications to 3 of the house lots were made using the application practices typically used prior to recent label changes, and applications were made to the other 3 house lots according to the revised application procedures. Results from the house lots using the historic application procedures showed that losses of the compounds applied to the driveway and garage door (including the adjacent walls) were 99.75% of total measured runoff losses. The greatest losses were associated with significant rainfall events rather than lawn irrigation events. However, runoff losses were 40 times less using the revised application procedures recently specified on pyrethroid labels. Environ Toxicol Chem 2014;33:52–60. © 2013 The Authors. Environmental Toxicology and Chemistry published by Wiley Periodicals, Inc. on behalf of SETAC. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited. PMID:24105831
Asteroids in three-body mean motion resonances with planets
NASA Astrophysics Data System (ADS)
Smirnov, Evgeny A.; Dovgalev, Ilya S.; Popova, Elena A.
2018-04-01
We have identified all asteroids in three-body mean-motion resonances for all possible planet configurations. The identification was done dynamically: the orbits of the asteroids were integrated for 100,000 yr and the set of resonant arguments was numerically analyzed. We found that every possible planet configuration hosts many resonant asteroids. In total, 65,972 resonant asteroids (≈14.1% of the 467,303 objects in the AstDyS database) have been identified.
Renormalizability of quasiparton distribution functions
Ishikawa, Tomomi; Ma, Yan-Qing; Qiu, Jian-Wei; ...
2017-11-21
Quasi-parton distribution functions have received a lot of attention in both the perturbative QCD and lattice QCD communities in recent years because they not only carry good information on the parton distribution functions, but also can be evaluated by lattice QCD simulations. However, unlike the parton distribution functions, the quasi-parton distribution functions have perturbative ultraviolet power divergences because they are not defined by twist-2 operators. In this article, we identify all sources of ultraviolet divergences for the quasi-parton distribution functions in coordinate space, and demonstrate that the power divergences, as well as all logarithmic divergences, can be renormalized multiplicatively to all orders in QCD perturbation theory.
A framework for analysis of large database of old art paintings
NASA Astrophysics Data System (ADS)
Da Rugna, Jérome; Chareyron, Gaël; Pillay, Ruven; Joly, Morwena
2011-03-01
For many years, many museums and countries have been organizing the high-definition digitization of their collections. In consequence, they generate massive data for each object. In this paper, we focus only on art painting collections. Nevertheless, we face a very large database with heterogeneous data. Indeed, the image collection includes very old and recent scans of negative photos, digital photos, multi- and hyperspectral acquisitions, X-ray acquisitions, and also front, back and lateral photos. Moreover, we have noted that art paintings suffer from much degradation: cracks, softening, artifacts, human damage and corruption over time. Considering that, it appears necessary to develop specific approaches and methods dedicated to digital art painting analysis. Consequently, this paper presents a complete framework to evaluate, compare and benchmark image processing algorithms devoted to this task.
Code of Federal Regulations, 2010 CFR
2010-04-01
... designs, manufactures, fabricates, assembles, or processes a finished device. Manufacturer includes but is... numbers, or both, from which the history of the manufacturing, packaging, labeling, and distribution of a unit, lot, or batch of finished devices can be determined. (e) Design history file (DHF) means a...
HPV (Human Papillomavirus) Gardasil® Vaccine - what you need to know
... all lots of Gardasil® (quadrivalent HPV vaccine) already distributed in the United States have expired. Continue using this VIS when administering Gardasil®. When all remaining doses have expired in May of 2017, this VIS will be removed.
SAADA: Astronomical Databases Made Easier
NASA Astrophysics Data System (ADS)
Michel, L.; Nguyen, H. N.; Motch, C.
2005-12-01
Many astronomers wish to share datasets with their community but do not have enough manpower to develop databases having the functionalities required for high-level scientific applications. The SAADA project aims at automating the creation and deployment process of such databases. A generic but scientifically relevant data model has been designed which allows one to build databases by providing only a limited number of product mapping rules. Databases created by SAADA rely on a relational database supporting JDBC and covered by a Java layer including a lot of generated code. Such databases can simultaneously host spectra, images, source lists and plots. Data are grouped in user-defined collections whose content can be seen as one unique set per data type even if their formats differ. Datasets can be correlated with each other using qualified links. These links help, for example, to handle the nature of a cross-identification (e.g., a distance or a likelihood) or to describe their scientific content (e.g., by associating a spectrum to a catalog entry). The SAADA query engine is based on a language well suited to the data model which can handle constraints on linked data, in addition to classical astronomical queries. These constraints can be applied to the linked objects (number, class and attributes) and/or to the link qualifier values. Databases created by SAADA are accessed through a rich WEB interface or a Java API. We are currently developing an interoperability module implementing VO protocols.
A Bayesian Framework of Uncertainties Integration in 3D Geological Model
NASA Astrophysics Data System (ADS)
Liang, D.; Liu, X.
2017-12-01
A 3D geological model can describe complicated geological phenomena in an intuitive way, but its application may be limited by uncertain factors. Great progress has been made over the years, yet many studies decompose the uncertainties of a geological model and analyze them separately, item by item from each source, ignoring the comprehensive impact of multi-source uncertainties. To evaluate the synthetical uncertainty, we choose probability distributions to quantify uncertainty and propose a Bayesian framework for uncertainty integration. With this framework, we integrate data errors, spatial randomness, and cognitive information into a posterior distribution to evaluate the synthetical uncertainty of a geological model. Uncertainties propagate and accumulate in the modeling process, and the gradual integration of multi-source uncertainty is a kind of simulation of this uncertainty propagation. Bayesian inference accomplishes the uncertainty updating in the modeling process. The maximum entropy principle works well for estimating the prior probability distribution, ensuring that the prior is subject to the constraints supplied by the given information with minimum prejudice. In the end, we obtain a posterior distribution that evaluates the synthetical uncertainty of the geological model. This posterior distribution represents the synthetical impact of all the uncertain factors on the spatial structure of the geological model. The framework provides a solution for evaluating the synthetical impact of multi-source uncertainties on a geological model and an approach for studying the uncertainty propagation mechanism in geological modeling.
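A minimal sketch of the kind of Bayesian update described, assuming a Gaussian prior (the maximum-entropy choice under mean and variance constraints) on an interface depth and Gaussian observation errors; the numbers and the conjugate Gaussian-Gaussian form are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

# Illustrative Bayesian update: a maximum-entropy (Gaussian, given mean and
# variance constraints) prior on an interface depth is combined with noisy
# borehole observations to give a posterior that summarizes the synthetical
# uncertainty.  All numbers are hypothetical.

prior_mean, prior_var = 120.0, 25.0 ** 2       # prior depth belief (m)
obs = np.array([112.0, 118.5, 115.2])           # borehole depths (m), with errors
obs_var = 5.0 ** 2                              # data-error variance (m^2)

# Conjugate Gaussian update: posterior precision is the sum of precisions.
post_prec = 1.0 / prior_var + len(obs) / obs_var
post_var = 1.0 / post_prec
post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)

print(f"posterior depth: {post_mean:.1f} m, posterior std: {post_var ** 0.5:.1f} m")
```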
Floré, Katelijne M J; Fiers, Tom; Delanghe, Joris R
2008-01-01
In recent years a number of point-of-care testing (POCT) glucometers were introduced on the market. We investigated the analytical variability (lot-to-lot variation, calibration error, inter-instrument and inter-operator variability) of glucose POCT systems in a university hospital environment and compared these results with the analytical needs required for tight glucose monitoring. The reference hexokinase method was compared to different POCT systems based on glucose oxidase (blood gas instruments) or glucose dehydrogenase (handheld glucometers). Based upon daily internal quality control data, total errors were calculated for the various glucose methods and the analytical variability of the glucometers was estimated. The total error of the glucometers exceeded by far the desirable analytical specifications (based on a biological variability model). Lot-to-lot variation, inter-instrument variation and inter-operator variability contributed approximately equally to the total variance. Because the distribution of hematocrit values in a hospital environment is broad, converting blood glucose into plasma values using a fixed factor further increases variance. The percentage of outliers exceeded the ISO 15197 criteria over a broad glucose concentration range. The total analytical variation of handheld glucometers is larger than expected. Clinicians should be aware that the variability of glucose measurements obtained by blood gas instruments is lower than that of results obtained with handheld glucometers on capillary blood.
Sampling, testing and modeling particle size distribution in urban catch basins.
Garofalo, G; Carbone, M; Piro, P
2014-01-01
The study analyzed the particle size distribution of particulate matter (PM) retained in two catch basins located, respectively, near a parking lot and a traffic intersection, both with high levels of traffic activity. The treatment performance of a filter medium was also evaluated by laboratory testing. The experimental treatment results and the field data were then used as inputs to a numerical model which described, on a qualitative basis, the hydrological response of the two catchments draining into each catch basin and the quality of treatment provided by the filter during the measured rainfall. The results show that PM concentrations were on average around 300 mg/L (parking lot site) and 400 mg/L (road site) for the 10 rainfall-runoff events observed. PM with a particle diameter of <45 μm represented 40-50% of the total PM mass. The numerical model showed that a catch basin with a filter unit can remove 30 to 40% of the PM load, depending on the storm characteristics.
Burade, Vinod; Bhowmick, Subhas; Maiti, Kuntal; Zalawadia, Rishit; Jain, Deepak; Rajamannar, Thennati
2017-05-01
The liposomal formulation of doxorubicin [doxorubicin (DXR) hydrochloride (HCl) liposome injection, Caelyx ® ] alters the tissue distribution of DXR as compared with nonliposomal DXR, resulting in an improved benefit-risk profile. We conducted studies in murine models to compare the plasma and tissue distribution of a proposed generic DXR HCl liposome injection developed by Sun Pharmaceuticals Industries Limited (SPIL DXR HCl liposome injection) with Caelyx ® . The plasma and tissue distributions of the SPIL and reference DXR HCl liposome injections were compared in syngeneic fibrosarcoma-bearing BALB/c mice and Sprague-Dawley rats. Different batches and different lots of the same batch of the reference product were also compared with each other. The SPIL and reference DXR HCl liposome injections exhibited generally comparable plasma and tissue distribution profiles in both models. While minor differences were observed between the two products in some tissues, different batches and lots of the reference product also showed some differences in the distribution of various analytes in some tissues. The ratios of estimated free to encapsulated DXR for plasma and tissue were generally comparable between the SPIL and reference DXR HCl liposome injections in both models, indicating similar extents of absorption into the tissues and similar rates of drug release from liposomes. The plasma and tissue distribution profiles of the SPIL and reference DXR HCl liposome injections were shown to be generally comparable. Inconsistencies between the products observed in some tissues were thought to be due to biological variation.
PKI security in large-scale healthcare networks.
Mantas, Georgios; Lymberopoulos, Dimitrios; Komninos, Nikos
2012-06-01
During the past few years, many PKIs (Public Key Infrastructures) have been proposed for healthcare networks in order to ensure secure communication services and exchange of data among healthcare professionals. However, these healthcare PKIs face a plethora of challenges, especially when deployed over large-scale healthcare networks. In this paper, we propose a PKI to ensure security in a large-scale Internet-based healthcare network connecting a wide spectrum of healthcare units geographically distributed within a wide region. Furthermore, the proposed PKI addresses the trust issues that arise in a large-scale healthcare network, including multi-domain PKIs.
Automation in drug inventory management saves personnel time and budget.
Awaya, Toshio; Ohtaki, Ko-ichi; Yamada, Takehiro; Yamamoto, Kuniko; Miyoshi, Toshiyuki; Itagaki, Yu-ichi; Tasaki, Yoshikazu; Hayase, Nobumasa; Matsubara, Kazuo
2005-05-01
Automation of drug distribution processes helps pharmacists create new clinical services. We have improved the drug inventory control system, which is seamlessly connected with the physician order-entry system. This control system application, named Artima, allows inventory functions to be performed faster and more efficiently in real time. The quantities of medicines used in our hospital are automatically fixed and arranged into sale-package units, and are ordered from each wholesaler by fax modem every day. Artima can search the lot number and expiration date of each drug in the purchase and delivery records. These functions are powerful and useful for patient safety and cost containment. We surveyed the inventory amount stored in the computer database, and evaluated the time required for inventory management by tabulating the working records of employees over the past decade. Inventory decreased by 70% along with the continuous improvement of the system during the past decade. The workload of inventory management in each section of the Pharmacy Department as well as in clinical units was dramatically reduced after the implementation of this system. The automation of drug inventory management allows new clinical positions to be created for pharmacists. This system could also pay for itself in time.
Fishes of the Cusiana River (Meta River basin, Colombia), with an identification key to its species
Urbano-Bonilla, Alexander; Ballen, Gustavo A.; Herrera-R, Guido A.; Jhon Zamudio; Herrera-Collazos, Edgar E.; DoNascimiento, Carlos; Saúl Prada-Pedreros; Maldonado-Ocampo, Javier A.
2018-01-01
The Cusiana River sub-basin has been identified as a priority conservation area in the Orinoco region of Colombia due to its high species diversity. This study presents an updated checklist and identification key for fishes of the Cusiana River sub-basin. The checklist was assembled through direct examination of specimens deposited in the main Colombian ichthyological collections. A total of 2020 lots from 167 different localities in the Cusiana River sub-basin, ranging from 153 to 2970 m in elevation, were examined. The highest number of records came from the piedmont region (1091, 54.0%), followed by the Llanos (878, 43.5%) and Andean (51, 2.5%) regions. A total of 241 species distributed in 9 orders, 40 families, and 158 genera were found. The observed fish species richness (241) represents 77.7% of the 314 estimated species (95% CI = 276.1-394.8). The use of databases to develop lists of fish species is not entirely reliable; therefore, taxonomic verification of specimens in collections is essential. The results will facilitate comparisons with other sub-basins of the Orinoquia which are not categorized as areas of importance for conservation in Colombia. PMID:29416408
Data Mining on Distributed Medical Databases: Recent Trends and Future Directions
NASA Astrophysics Data System (ADS)
Atilgan, Yasemin; Dogan, Firat
As computerization in healthcare services increases, the amount of available digital data is growing at an unprecedented rate, and as a result healthcare organizations are much more able to store data than to extract knowledge from it. Today the major challenge is to transform these data into useful information and knowledge. It is important for healthcare organizations to use stored data to improve quality while reducing cost. This paper first investigates data mining applications on centralized medical databases and how they are used for diagnostics and population health, then introduces distributed databases. The integration needs and issues of distributed medical databases are described. Finally, the paper focuses on data mining studies on distributed medical databases.
Jordan, D; McEwen, S A; Wilson, J B; McNab, W B; Lammerding, A M
1999-05-01
A study was conducted to provide a quantitative description of the amount of tag (mud, soil, and bedding) adhered to the hides of feedlot beef cattle and to appraise the statistical reliability of a subjective rating system for assessing this trait. Initially, a single rater obtained baseline data by assessing 2,417 cattle for 1 month at an Ontario beef processing plant. Analysis revealed that there was a strong tendency for animals within sale-lots to have a similar total tag score (intralot correlation = 0.42). Baseline data were summarized by fitting a linear model describing an individual's total tag score as the sum of their lot mean tag score (LMTS) plus an amount representing normal variation within the lot. LMTSs predicted by the linear model were adequately described by a beta distribution with parameters nu = 3.12 and omega = 5.82 scaled to fit on the 0-to-9 interval. Five raters, trained in use of the tag scoring system, made 1,334 tag score observations in a commercial abattoir, allowing reliability to be assessed at the individual level and at the lot level. High values for reliability were obtained for individual total tag score (0.84) and lot total tag score (0.83); these values suggest that the tag scoring system could be used in the marketing and slaughter of Ontario beef cattle to improve the cleanliness of animals presented for slaughter in an effort to control the entry of microbial contamination into abattoirs. Implications for the use of the tag scoring system in research are discussed.
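The lot-level model reported above can be sketched numerically: lot mean tag scores are drawn from a beta(3.12, 5.82) distribution rescaled to the 0-9 interval, and individual scores add within-lot variation. The within-lot standard deviation below is an assumed value (chosen so the simulated intralot correlation lands near the reported 0.42), not a figure from the paper.

```python
import numpy as np
from scipy.stats import beta

# Sketch of the reported lot-level model: lot mean tag scores (LMTS) follow a
# beta(3.12, 5.82) distribution rescaled to the 0-9 scoring interval, and an
# individual's total tag score is its LMTS plus within-lot variation.
rng = np.random.default_rng(0)

n_lots, animals_per_lot = 200, 40
lmts = 9.0 * beta.rvs(3.12, 5.82, size=n_lots, random_state=rng)

within_sd = 1.6  # assumed within-lot spread (not reported in the abstract)
scores = lmts[:, None] + rng.normal(0.0, within_sd, size=(n_lots, animals_per_lot))
scores = scores.clip(0, 9)

# Rough intralot correlation: between-lot variance over total variance,
# for comparison with the reported value of 0.42.
icc = lmts.var() / scores.var()
print(f"mean LMTS: {lmts.mean():.2f}, simulated intralot correlation: {icc:.2f}")
```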
NASA Astrophysics Data System (ADS)
Onodera, Natsuo; Mizukami, Masayuki
This paper estimates several quantitative indices of the production and distribution of scientific and technical databases based on various recent publications and attempts to compare the indices internationally. Raw data used for the estimation are drawn mainly from the Database Directory (published by MITI) for database production and from some domestic and foreign study reports for database revenues. The ratios of the indices among Japan, the US and Europe for database usage are similar to those for general scientific and technical activities such as population and R&D expenditures. But Japanese contributions to the production, revenue and cross-country distribution of databases are still lower than those of the US and European countries. An international comparison of relative database activities between the public and private sectors is also discussed.
21 CFR 660.36 - Samples and protocols.
Code of Federal Regulations, 2010 CFR
2010-04-01
... ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Reagent Red Blood Cells § 660.36 Samples... a cell panel intended for identification of unexpected antibodies. The sample shall be packaged as... distribution of each lot of Reagent Red Blood Cells for detection or identification of unexpected antibodies...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-18
..., process deviation, or contamination with microorganisms where any lot of the food has entered distribution... DEPARTMENT OF HEALTH AND HUMAN SERVICES Food and Drug Administration [Docket No. FDA-2013-N-1119] Agency Information Collection Activities; Proposed Collection; Comment Request; Food Canning...
Performance related issues in distributed database systems
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
The key elements of research performed during the year-long effort of this project are: investigate the effects of heterogeneity in distributed real-time systems; study the requirements of TRAC towards building a heterogeneous database system; study the effects of performance modeling on distributed database performance; and experiment with an ORACLE-based heterogeneous system.
Key features for ATA / ATR database design in missile systems
NASA Astrophysics Data System (ADS)
Özertem, Kemal Arda
2017-05-01
Automatic target acquisition (ATA) and automatic target recognition (ATR) are two vital tasks for missile systems, and having robust detection and recognition algorithms is crucial for overall system performance. In order to have a robust target detection and recognition algorithm, an extensive image database is required. Automatic target recognition algorithms use the database of images in the training and testing steps of the algorithm. This directly affects the recognition performance, since the training accuracy is driven by the quality of the image database. In addition, the performance of an automatic target detection algorithm can be measured effectively by using an image database. There are two main ways of designing an ATA / ATR database. The first and easy way is to use a scene generator. A scene generator can model objects by considering their material information, the atmospheric conditions, the detector type and the terrain. Designing an image database with a scene generator is inexpensive and allows many different scenarios to be created quickly and easily. However, the major drawback of using a scene generator is its low fidelity, since the images are created virtually. The second and difficult way is to design the database using real-world images. Designing an image database with real-world images is a lot more costly and time consuming; however, it offers high fidelity, which is critical for missile algorithms. In this paper, critical concepts in ATA / ATR database design with real-world images are discussed. Each concept is discussed from the perspective of ATA and ATR separately. For the implementation stage, some possible solutions and trade-offs for creating the database are proposed, and all proposed approaches are compared to each other with regard to their pros and cons.
A Hybrid Multilevel Storage Architecture for Electric Power Dispatching Big Data
NASA Astrophysics Data System (ADS)
Yan, Hu; Huang, Bibin; Hong, Bowen; Hu, Jing
2017-10-01
Electric power dispatching is the center of the whole power system. Over its long run time, the power dispatching center has accumulated a large amount of data. These data are now stored in different specialized power systems and form many isolated islands of information. Integrating these data and performing comprehensive analysis can greatly improve the intelligence level of power dispatching. In this paper, a hybrid multilevel storage architecture for electric power dispatching big data is proposed. It introduces relational and NoSQL databases to establish a power grid panoramic data center, effectively meeting the storage needs of power dispatching big data, including unified storage of structured and unstructured data, fast access to massive real-time data, data version management and so on. It can be a solid foundation for follow-up in-depth analysis of power dispatching big data.
NASA Astrophysics Data System (ADS)
Abd-Elmotaal, Hussein; Kühtreiber, Norbert
2016-04-01
In the framework of the IAG African Geoid Project, there are a lot of large data gaps in its gravity database. These gaps are filled initially using an unequal-weight least-squares prediction technique. This technique uses a generalized Hirvonen covariance function model to replace the empirically determined covariance function. The generalized Hirvonen covariance function model has a sensitive parameter which is related to the curvature parameter of the covariance function at the origin. This paper studies the effect of the curvature parameter on the least-squares prediction results, especially in the large data gaps appearing in the African gravity database. An optimum estimation of the curvature parameter has also been carried out. A wide comparison among the results obtained in this research, along with their accuracy, is given and thoroughly discussed.
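A small sketch of least-squares prediction with a generalized Hirvonen covariance model, where the exponent stands in for the curvature-related parameter discussed above; the parameterization, the synthetic data and all numerical values are assumptions for illustration, not the project's actual implementation.

```python
import numpy as np

def hirvonen(dist, c0, d, p):
    """Assumed generalized Hirvonen covariance: C(s) = C0 / (1 + (s/d)^2)**p.
    The exponent p plays the role of the curvature-related parameter."""
    return c0 / (1.0 + (dist / d) ** 2) ** p

def lsq_predict(xy_obs, val_obs, xy_new, c0=100.0, d=30.0, p=1.0, noise=1.0):
    """Least-squares (collocation-style) prediction: s_hat = C_np (C_pp + D)^-1 l."""
    dpp = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    dnp = np.linalg.norm(xy_new[:, None, :] - xy_obs[None, :, :], axis=-1)
    cpp = hirvonen(dpp, c0, d, p) + noise * np.eye(len(xy_obs))
    cnp = hirvonen(dnp, c0, d, p)
    return cnp @ np.linalg.solve(cpp, val_obs)

# Synthetic illustration: predict a value inside a data gap for several
# curvature parameters and compare how the interpolated value changes.
rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, size=(40, 2))                 # observation locations (km)
vals = np.sin(pts[:, 0] / 20.0) * 10 + rng.normal(0, 1, 40)
gap_point = np.array([[50.0, 50.0]])
for p in (0.5, 1.0, 2.0):
    print(p, lsq_predict(pts, vals, gap_point, p=p))
```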
Standard line slopes as a measure of a relative matrix effect in quantitative HPLC-MS bioanalysis.
Matuszewski, B K
2006-01-18
A simple experimental approach for studying and identifying the relative matrix effect (for example "plasma-to-plasma" and/or "urine-to-urine") in quantitative analyses by HPLC-MS/MS is described. Using as a database a large number of examples of methods developed in recent years in our laboratories, the relationship between the precision of standard line slopes constructed in five different lots of a biofluid (for example plasma) and the reliability of determination of the concentration of an analyte in a particular plasma lot (or subject) was examined. In addition, the precision of standard line slopes was compared when stable isotope-labeled analytes versus analogs were used as internal standards (IS). Also, in some cases, a direct comparison of standard line slopes was made when different HPLC-MS interfaces (APCI versus ESI) were used for the assay of the same compound, using the same IS and the same sample preparation and chromatographic separation conditions. In selected cases, the precision of standard line slopes in five different lots of a biofluid was compared with precision values determined five times in a single lot. The results of these studies indicated that the variability of standard line slopes in different lots of a biofluid [precision of standard line slopes expressed as coefficient of variation, CV (%)] may serve as a good indicator of a relative matrix effect and, it is suggested, this precision value should not exceed 3-4% for the method to be considered reliable and free from relative matrix effect liability. Based on the results presented, in order to assess the relative matrix effect in bioanalytical methods, it is recommended to perform assay precision and accuracy determination in five different lots of a biofluid, instead of repeat (n=5) analysis in the same, single biofluid lot, to calculate standard line slopes and the precision of these slopes, and to use a <3-4% slope precision value as a guide for method applicability to support clinical studies. It was also demonstrated that when stable isotope-labeled analytes were used as internal standards, the precision of standard line slopes in five different lots of a biofluid was < or = 2.4% irrespective of the HPLC-MS interface utilized. This clearly indicated that, in all cases studied, the use of a stable isotope-labeled IS eliminated the relative matrix effect. Also, the utilization of the APCI interface instead of ESI led to the elimination of the relative matrix effect in all cases studied. When the precision of standard line slope values exceeds the 3-4% limit, the method may require improvements (more efficient chromatography, a more selective extraction, a stable isotope-labeled IS instead of an analog as the IS, and/or a change in the HPLC-MS interface) to eliminate the relative matrix effect and to improve assay selectivity.
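The slope-precision criterion can be sketched in a few lines: fit a standard line in each of five biofluid lots, compute the coefficient of variation of the slopes, and compare it with the suggested 3-4% limit. The data below are hypothetical.

```python
import numpy as np

def slope_cv(conc, responses_by_lot):
    """CV (%) of standard-line slopes fitted in different biofluid lots.

    conc: standard concentrations; responses_by_lot: one response array per lot.
    A CV above roughly 3-4% would flag a possible relative matrix effect,
    following the criterion suggested in the paper.
    """
    slopes = np.array([np.polyfit(conc, resp, 1)[0] for resp in responses_by_lot])
    return 100.0 * slopes.std(ddof=1) / slopes.mean()

# Hypothetical peak-area ratios measured in five different plasma lots
conc = np.array([1, 5, 10, 50, 100, 500], dtype=float)
rng = np.random.default_rng(3)
lots = [0.02 * conc * rng.normal(1.0, 0.02) + rng.normal(0, 0.01, conc.size)
        for _ in range(5)]
print(f"slope CV across lots: {slope_cv(conc, lots):.1f}%")
```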
NASA Astrophysics Data System (ADS)
Casajus Ramo, A.; Graciani Diaz, R.
2012-12-01
The DIRAC framework for distributed computing has been designed as a group of collaborating components, agents and servers, with a persistent database back-end. Components communicate with each other using DISET, an in-house protocol that provides Remote Procedure Call (RPC) and file transfer capabilities. This approach has given DIRAC a modular and stable design by enforcing stable interfaces across releases, but it made it complicated to scale further with commodity hardware. To scale DIRAC further, components needed to send more queries between them. Using RPC to do so requires a lot of processing power just to handle the secure handshake required to establish each connection. DISET now provides a way to keep stable connections and to send and receive queries between components: only one handshake is required to send and receive any number of queries. Using this new communication mechanism, DIRAC now provides a new type of component called an Executor. Executors process any task (such as resolving the input data of a job) sent to them by a task dispatcher. This task dispatcher takes care of persisting the state of the tasks to the storage backend and distributing them among all the Executors based on the requirements of each task. In case of a high load, several Executors can be started to process the extra load and stopped once the tasks have been processed. This new approach to handling tasks in DIRAC makes Executors easy to replace and replicate, thus enabling DIRAC to scale further beyond the current approach based on polling agents.
Wei, Qunshan; Zhu, Gefu; Wu, Peng; Cui, Li; Zhang, Kaisong; Zhou, Jingjing; Zhang, Wenru
2010-01-01
The pollutants in urban storm runoff, which lead to non-point source contamination of the water environment around cities, are of great concern. The distributions of typical contaminants and the variations of their species in short-term storm runoff from different land surfaces in Xiamen City were investigated. The concentrations of various contaminants, including organic matter, nutrients (i.e., N and P) and heavy metals, were significantly higher in parking lot and road runoff than in roof and lawn runoff. The early runoff samples from the traffic road and parking lot contained very high total nitrogen (TN 6-19 mg/L) and total phosphorus (TP 1-3 mg/L). A large proportion (around 60%) of TN existed as total dissolved nitrogen (TDN) species in most runoff. The percentage of TDN and the percentage of total dissolved phosphorus remained relatively stable during the rain events and did not decrease as dramatically as TN and TP. In addition, only parking lot and road runoff were contaminated by heavy metals, and Pb (25-120 microg/L) and Zn (0.1-1.2 mg/L) were the major heavy metals contaminating both types of runoff. Soluble Pb and Zn existed predominantly as labile complex species (50%-99%), which may be adsorbed onto the surfaces of suspended particles and could easily be released when pH decreases. This could have a great impact on the environment.
21 CFR 606.165 - Distribution and receipt; procedures and records.
Code of Federal Regulations, 2010 CFR
2010-04-01
... SERVICES (CONTINUED) BIOLOGICS CURRENT GOOD MANUFACTURING PRACTICE FOR BLOOD AND BLOOD COMPONENTS Records..., or for crossmatched blood and blood components, the name of the recipient. (c) Receipt records shall contain the name and address of the collecting facility, date received, donor or lot number assigned by...
Realizing the Vision of Zero Software Defects
2011-05-16
...data, and lots more ... The Vision of Zero Defect Software: Is it possible? Yes, but with some caveats. Is it applicable to all types of
Code of Federal Regulations, 2010 CFR
2010-07-01
... under FIFRA sections 3, 4 or 24(c). (2) An application for an experimental use permit under FIFRA... distribution of a pesticide. Batch means a specific quantity or lot of a test, control, or reference substance... to a test system. Control substance means any chemical substance or mixture, or any other material...
Code of Federal Regulations, 2013 CFR
2013-07-01
... under FIFRA sections 3, 4 or 24(c). (2) An application for an experimental use permit under FIFRA... distribution of a pesticide. Batch means a specific quantity or lot of a test, control, or reference substance... to a test system. Control substance means any chemical substance or mixture, or any other material...
Code of Federal Regulations, 2012 CFR
2012-07-01
... under FIFRA sections 3, 4 or 24(c). (2) An application for an experimental use permit under FIFRA... distribution of a pesticide. Batch means a specific quantity or lot of a test, control, or reference substance... to a test system. Control substance means any chemical substance or mixture, or any other material...
Code of Federal Regulations, 2011 CFR
2011-07-01
... under FIFRA sections 3, 4 or 24(c). (2) An application for an experimental use permit under FIFRA... distribution of a pesticide. Batch means a specific quantity or lot of a test, control, or reference substance... to a test system. Control substance means any chemical substance or mixture, or any other material...
Code of Federal Regulations, 2014 CFR
2014-07-01
... under FIFRA sections 3, 4 or 24(c). (2) An application for an experimental use permit under FIFRA... distribution of a pesticide. Batch means a specific quantity or lot of a test, control, or reference substance... to a test system. Control substance means any chemical substance or mixture, or any other material...
Preparation of Chemical Compounds for the U.S. Army Drug Development Program
1990-09-14
Aldrich, Lot No. ML0824ML; Johns Manville, no Lot No.; J.T. Baker, Lot No. A42837; Fisher Scientific, Lot No. 885835-60; Ashland, Lot No. 0701768E; ... Aldrich, Lot No. 03905ET; Moore-Tec, no Lot No.; Lot Nos. KAPM and KDPA; Aaper, Lot Nos. R9529, 89D19-R, 89K06; Kodak, Lot No. 807198C; Johns Manville, Lot ... 89-K06-R and 90-A124-R; Fisher, Lot Nos. 881166-60, 895184-36 and 894961-36; Kodak, Lot No. 807198C; Johns Manville, Lot Nos. G5P34633 and 3P-291
Mesoscopic Fluorescence Molecular Tomography for Evaluating Engineered Tissues.
Ozturk, Mehmet S; Chen, Chao-Wei; Ji, Robin; Zhao, Lingling; Nguyen, Bao-Ngoc B; Fisher, John P; Chen, Yu; Intes, Xavier
2016-03-01
Optimization of regenerative medicine strategies includes the design of biomaterials, development of cell-seeding methods, and control of cell-biomaterial interactions within the engineered tissues. Among these steps, one paramount challenge is to non-destructively image the engineered tissues in their entirety to assess structure, function, and molecular expression. It is especially important to be able to enable cell phenotyping and monitor the distribution and migration of cells throughout the bulk scaffold. Advanced fluorescence microscopic techniques are commonly employed to perform such tasks; however, they are limited to superficial examination of tissue constructs. Therefore, the field of tissue engineering and regenerative medicine would greatly benefit from the development of molecular imaging techniques which are capable of non-destructive imaging of three-dimensional cellular distribution and maturation within a tissue-engineered scaffold beyond the limited depth of current microscopic techniques. In this review, we focus on an emerging depth-resolved optical mesoscopic imaging technique, termed laminar optical tomography (LOT) or mesoscopic fluorescence molecular tomography (MFMT), which enables longitudinal imaging of cellular distribution in thick tissue engineering constructs at depths of a few millimeters and with relatively high resolution. The physical principle, image formation, and instrumentation of LOT/MFMT systems are introduced. Representative applications in tissue engineering include imaging the distribution of human mesenchymal stem cells embedded in hydrogels, imaging of bio-printed tissues, and in vivo applications.
WLN's Database: New Directions.
ERIC Educational Resources Information Center
Ziegman, Bruce N.
1988-01-01
Describes features of the Western Library Network's database, including the database structure, authority control, contents, quality control, and distribution methods. The discussion covers changes in distribution necessitated by increasing telecommunications costs and the development of optical data disk products. (CLB)
Interest of LQAS method in a survey of HTLV-I infection in Benin (West Africa).
Houinato, Dismand; Preux, Pierre-Marie; Charriere, Bénédicte; Massit, Bruno; Avodé, Gilbert; Denis, François; Dumas, Michel; Boutros-Toni, Fernand; Salamon, Roger
2002-02-01
HTLV-I is heterogeneously distributed in Sub-Saharan Africa. Traditional survey methods such as cluster sampling can provide information for a country or region of interest. However, they cannot identify small areas with higher prevalences of infection to help in health policy planning. Identification of such areas could be done with a Lot Quality Assurance Sampling (LQAS) method, which is currently used in industry to identify poor performance in assembly lines. The LQAS method was used in Atacora (Northern Benin) between March and May 1998 to identify areas with an HTLV-I seroprevalence higher than 4%. Sixty-five subjects were randomly selected in each of the 36 communes (lots) of this department. Lots were classified as unacceptable when the sample contained at least one positive subject. The LQAS method identified 25 (69.4%) communes with a prevalence higher than 4%. Using stratified sampling theory, the overall HTLV-I seroprevalence was 4.5% (95% CI: 3.6-5.4%). These data show the value of applying the LQAS method under field conditions to detect clusters of infection.
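Assuming simple binomial sampling, the operating characteristic of this design (n = 65 per commune, lot flagged as unacceptable if at least one subject is positive) can be sketched directly; the prevalence values below are illustrative.

```python
# Operating characteristic of the LQAS design described: n = 65 subjects per
# commune, and the lot is classified "unacceptable" if at least one subject is
# HTLV-I positive.  Assuming simple binomial sampling, the probability of
# flagging a commune with true prevalence p is 1 - (1 - p)**65.
n = 65
for p in (0.01, 0.02, 0.04, 0.08):
    p_flag = 1.0 - (1.0 - p) ** n
    print(f"prevalence {p:.0%}: probability commune is flagged = {p_flag:.2f}")
```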
VIEWCACHE: An incremental pointer-based access method for autonomous interoperable databases
NASA Technical Reports Server (NTRS)
Roussopoulos, N.; Sellis, Timos
1993-01-01
One of the biggest problems facing NASA today is to provide scientists efficient access to a large number of distributed databases. Our pointer-based incremental database access method, VIEWCACHE, provides such an interface for accessing distributed datasets and directories. VIEWCACHE allows database browsing and searches that perform inter-database cross-referencing with no actual data movement between database sites. This organization and processing is especially suitable for managing Astrophysics databases which are physically distributed all over the world. Once the search is complete, the set of collected pointers pointing to the desired data is cached. VIEWCACHE includes spatial access methods for accessing image datasets, which provide much easier query formulation by referring directly to the image and very efficient search for objects contained within a two-dimensional window. We will develop and optimize a VIEWCACHE External Gateway Access to database management systems to facilitate database search.
Flynn, Timothy Corcoran; Thompson, David H; Hyun, Seok-Hee
2013-10-01
In this study, the authors sought to determine the molecular weight distribution of three hyaluronic acids (Belotero Balance, Restylane, and Juvéderm Ultra) and their rates of degradation following exposure to hyaluronidase. Lot consistency of Belotero Balance also was analyzed. Three lots of Belotero Balance were analyzed using liquid chromatography techniques. The product was found to have high-molecular-weight and low-molecular-weight species. One lot of Belotero Balance was compared to one lot each of Juvéderm Ultra and Restylane. Molecular weights of the species were analyzed. The hyaluronic acids were exposed to ovine testicular hyaluronidase at six time points (baseline and 0.5, 1, 2, 6, and 24 hours) to determine degradation rates. Belotero Balance lots were remarkably consistent. Belotero Balance had the largest high-molecular-weight species, followed by Juvéderm Ultra and Restylane (p < 0.001). Low-molecular-weight differences among all three hyaluronic acids were not statistically significant. Percentages of high-molecular-weight polymer differ among the three materials, with Belotero Balance having the highest fraction of high-molecular-weight polymer. Degradation of the high-molecular-weight species over time showed different molecular weights of the high-molecular-weight fraction. Rates of degradation of the hyaluronic acids following exposure to ovine testicular hyaluronidase were similar. All hyaluronic acids were fully degraded at 24 hours. Fractions of high-molecular-weight polymer differ across the hyaluronic acids tested. The low-molecular-weight differences are not statistically significant. The high-molecular-weight products have different molecular weights at the 0.5- and 2-hour time points when exposed to ovine testicular hyaluronidase and are not statistically different at 24 hours.
Pezzoli, Lorenzo; Andrews, Nick; Ronveaux, Olivier
2010-05-01
Vaccination programmes targeting disease elimination aim to achieve very high coverage levels (e.g. 95%). We calculated the precision of different clustered lot quality assurance sampling (LQAS) designs in computer-simulated surveys to provide local health officers in the field with preset LQAS plans to simply and rapidly assess programmes with high coverage targets. We calculated the sample size (N), decision value (d) and misclassification errors (alpha and beta) of several LQAS plans by running 10 000 simulations. We kept the upper coverage threshold (UT) at 90% or 95% and decreased the lower threshold (LT) progressively by 5%. We measured the proportion of simulations with < or = d unvaccinated individuals if the coverage was set at the UT (pUT) to calculate beta (1-pUT), and the proportion of simulations with > d unvaccinated individuals if the coverage was LT% (pLT) to calculate alpha (1-pLT). We divided N into clusters (between 5 and 10) and recalculated the errors, hypothesising that the coverage would vary in the clusters according to a binomial distribution with preset standard deviations of 0.05 and 0.1 from the mean lot coverage. We selected the plans fulfilling these criteria: alpha < or = 5% and beta < or = 20% in the unclustered design; alpha < or = 10% and beta < or = 25% when the lots were divided into five clusters. When the interval between UT and LT was larger than 10% (e.g. 15%), we were able to select precise LQAS plans dividing the lot into five clusters with N = 50 (5 x 10) and d = 4 to evaluate programmes with a 95% coverage target and d = 7 to evaluate programmes with a 90% target. These plans will considerably increase the feasibility and the rapidity of conducting LQAS in the field.
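A simplified (unclustered) version of the simulation procedure described can be sketched as follows: simulate lots at the upper and lower coverage thresholds and count misclassifications against the decision value d. The clustering step is omitted, and the specific lower threshold used below (80%, i.e. a 15% interval below the 95% target) is an assumption for illustration.

```python
import numpy as np

def lqas_errors(n, d, upper, lower, sims=10_000, seed=0):
    """Approximate alpha and beta for an unclustered LQAS plan by simulation,
    mirroring the procedure in the abstract: simulate lots at the upper and
    lower coverage thresholds and compare the number of unvaccinated
    individuals with the decision value d."""
    rng = np.random.default_rng(seed)
    unvacc_at_upper = rng.binomial(n, 1.0 - upper, size=sims)
    unvacc_at_lower = rng.binomial(n, 1.0 - lower, size=sims)
    beta = np.mean(unvacc_at_upper > d)    # failing a lot that truly meets the target
    alpha = np.mean(unvacc_at_lower <= d)  # accepting a lot with unacceptably low coverage
    return alpha, beta

# One of the selected plans: N = 50, d = 4, 95% target against an assumed 80% lower threshold
print(lqas_errors(n=50, d=4, upper=0.95, lower=0.80))
```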
77 FR 53906 - Notice of Proposed Withdrawal and Opportunity for Public Meeting; California
Federal Register 2010, 2011, 2012, 2013, 2014
2012-09-04
...\\; Sec. 32, lots 4 and 5; Sec. 34, lot 4. T. 13 N., R. 10 E., Sec. 2, lot 1, and lots 3 to 15, inclusive; Sec. 9, lots 8, 12, and 13, and SW\\1/4\\NE\\1/4\\; Sec. 10, lots 1 to 10, inclusive, E\\1/2\\NE\\1/4\\, E\\1/2..., inclusive, S\\1/2\\ of lot 5, S\\1/2\\ of lot 8, lots 11 and 13; Sec. 19, lot 24; Sec. 20, lots 1, 2, 3, and 8...
The dye-sensitized solar cell database.
Venkatraman, Vishwesh; Raju, Rajesh; Oikonomopoulos, Solon P; Alsberg, Bjørn K
2018-04-03
Dye-sensitized solar cells (DSSCs) have garnered a lot of attention in recent years. The solar energy to power conversion efficiency of a DSSC is influenced by various components of the cell such as the dye, electrolyte, electrodes and additives among others leading to varying experimental configurations. A large number of metal-based and metal-free dye sensitizers have now been reported and tools using such data to indicate new directions for design and development are on the rise. DSSCDB, the first of its kind dye-sensitized solar cell database, aims to provide users with up-to-date information from publications on the molecular structures of the dyes, experimental details and reported measurements (efficiencies and spectral properties) and thereby facilitate a comprehensive and critical evaluation of the data. Currently, the DSSCDB contains over 4000 experimental observations spanning multiple dye classes such as triphenylamines, carbazoles, coumarins, phenothiazines, ruthenium and porphyrins. The DSSCDB offers a web-based, comprehensive source of property data for dye sensitized solar cells. Access to the database is available through the following URL: www.dyedb.com .
Performance analysis of static locking in replicated distributed database systems
NASA Technical Reports Server (NTRS)
Kuang, Yinghong; Mukkamala, Ravi
1991-01-01
Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects, because they are difficult to evaluate through analysis and time consuming to evaluate through simulation. A technique is used that combines simulation and analysis to closely illustrate the impact of deadlocks and to evaluate the performance of a replicated distributed database with both shared and exclusive locks.
Heterogeneous distributed query processing: The DAVID system
NASA Technical Reports Server (NTRS)
Jacobs, Barry E.
1985-01-01
The objective of the Distributed Access View Integrated Database (DAVID) project is the development of an easy to use computer system with which NASA scientists, engineers and administrators can uniformly access distributed heterogeneous databases. Basically, DAVID will be a database management system that sits alongside already existing database and file management systems. Its function is to enable users to access the data in other languages and file systems without having to learn the data manipulation languages. Given here is an outline of a talk on the DAVID project and several charts.
Development of stable Grid service at the next generation system of KEKCC
NASA Astrophysics Data System (ADS)
Nakamura, T.; Iwai, G.; Matsunaga, H.; Murakami, K.; Sasaki, T.; Suzuki, S.; Takase, W.
2017-10-01
A lot of experiments in the field of accelerator-based science are actively running at the High Energy Accelerator Research Organization (KEK) in Japan, using the SuperKEKB and J-PARC accelerators. These days at KEK, the computing demand from the various experiments for data processing, analysis, and MC simulation is steadily increasing. This is not only the case for high-energy experiments; the computing requirements of the hadron and neutrino experiments and of some astro-particle physics projects are also rapidly increasing due to very high precision measurements. Under this situation, several projects supported by KEK, the Belle II, T2K, ILC and KAGRA experiments, are going to utilize the Grid computing infrastructure as their main computing resource. The Grid system and services at KEK, which are already in production, were upgraded for more stable operation at the same time as the full-scale hardware replacement of the KEK Central Computer System (KEKCC). The next-generation KEKCC system started operation at the beginning of September 2016. The basic Grid services, e.g. BDII, VOMS, LFC, the CREAM computing element and the StoRM storage element, are built on a more robust hardware configuration. Since raw data transfer is one of the most important tasks for the KEKCC, two redundant GridFTP servers are attached to the StoRM service instances with 40 Gbps network bandwidth on the LHCONE routing. These are dedicated to Belle II raw data transfer to other sites, apart from the servers used for data transfer by the other VOs. Additionally, we prepare a redundant configuration for database-oriented services like LFC and AMGA by using LifeKeeper. The LFC service consists of two read/write servers and two read-only servers for the Belle II experiment, each with an individual database for the purpose of load balancing. The FTS3 service is newly deployed as a service for Belle II data distribution. A CVMFS stratum-0 service has been started for the Belle II software repository, and a stratum-1 service is prepared for the other VOs. In this way, there are many upgrades to the production Grid services at the KEK Computing Research Center. In this paper, we introduce the detailed hardware configuration of the Grid instances and several mechanisms used to construct a robust Grid system in the next-generation KEKCC.
Architecture Knowledge for Evaluating Scalable Databases
2015-01-16
problems, arising from the proliferation of new data models and distributed technologies for building scalable, available data stores. Architects must... longer are relational databases the de facto standard for building data repositories. Highly distributed, scalable “NoSQL” databases [11] have emerged... This is especially challenging at the data storage layer. The multitude of competing NoSQL database technologies creates a complex and rapidly
USDA-ARS?s Scientific Manuscript database
Beef cattle backgrounding in the US functions as an intermediate step between cow-calf enterprises and feedlot finishing. Beef cattle backgrounding operations receive weaned calves of different growth stages from cow-calf operations and prepare them for feedlot finishing. Many beef cattle backgrounding operati...
21 CFR 660.36 - Samples and protocols.
Code of Federal Regulations, 2014 CFR
2014-04-01
..., whenever a new donor is used, a sample of red blood cells from each new donor used in a cell panel intended... ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Reagent Red Blood Cells § 660.36 Samples... distribution of each lot of Reagent Red Blood Cells for detection or identification of unexpected antibodies...
21 CFR 660.36 - Samples and protocols.
Code of Federal Regulations, 2013 CFR
2013-04-01
..., whenever a new donor is used, a sample of red blood cells from each new donor used in a cell panel intended... ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Reagent Red Blood Cells § 660.36 Samples... distribution of each lot of Reagent Red Blood Cells for detection or identification of unexpected antibodies...
21 CFR 660.36 - Samples and protocols.
Code of Federal Regulations, 2012 CFR
2012-04-01
..., whenever a new donor is used, a sample of red blood cells from each new donor used in a cell panel intended... ADDITIONAL STANDARDS FOR DIAGNOSTIC SUBSTANCES FOR LABORATORY TESTS Reagent Red Blood Cells § 660.36 Samples... distribution of each lot of Reagent Red Blood Cells for detection or identification of unexpected antibodies...
Accelerated life testing and reliability of high K multilayer ceramic capacitors
NASA Technical Reports Server (NTRS)
Minford, W. J.
1981-01-01
The reliability of one lot of high K multilayer ceramic capacitors was evaluated using accelerated life testing. The degradation in insulation resistance was characterized as a function of voltage and temperature. The times to failure at a given voltage-temperature stress conformed to a lognormal distribution with a standard deviation of approximately 0.5.
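As a rough illustration of the lognormal failure-time analysis described in this abstract, the sketch below fits a lognormal distribution to a set of hypothetical failure times with SciPy; the failure times and the recovered parameters are invented for illustration and are not the paper's data.

```python
import numpy as np
from scipy import stats

# Hypothetical times to failure (hours) of capacitors at one voltage-temperature stress.
failure_times = np.array([120., 180., 240., 310., 400., 520., 640., 800., 1000., 1300.])

# Fit a lognormal distribution; fixing loc=0 models log(t) as normally distributed.
shape, loc, scale = stats.lognorm.fit(failure_times, floc=0)

# For a lognormal, `shape` is the standard deviation of log(t) --
# the quantity reported as roughly 0.5 in the abstract above.
print(f"sigma of log-times: {shape:.2f}")
print(f"median time to failure: {scale:.0f} h")
```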
Bayesian techniques for surface fuel loading estimation
Kathy Gray; Robert Keane; Ryan Karpisz; Alyssa Pedersen; Rick Brown; Taylor Russell
2016-01-01
A study by Keane and Gray (2013) compared three sampling techniques for estimating surface fine woody fuels. Known amounts of fine woody fuel were distributed on a parking lot, and researchers estimated the loadings using different sampling techniques. An important result was that precise estimates of biomass required intensive sampling for both the planar intercept...
Programmed database system at the Chang Gung Craniofacial Center: part II--digitizing photographs.
Chuang, Shiow-Shuh; Hung, Kai-Fong; de Villa, Glenda H; Chen, Philip K T; Lo, Lun-Jou; Chang, Sophia C N; Yu, Chung-Chih; Chen, Yu-Ray
2003-07-01
Archival tools for digital clinical images are only beginning to be developed, and those used for digital images in advertising do not fulfill clinical requirements. Storing a large number of conventional photographic slides requires a lot of space and special conditions, and despite special precautions the slides still degrade, most commonly through the appearance of fungus flecks. With recent advances in digital technology, it is now possible to store voluminous numbers of photographs on a computer hard drive and keep them for a long time. A self-programmed interface has been developed to integrate a database and an image browser system that can build and locate needed archive files in a matter of seconds with the click of a button. The required hardware and software are available off the shelf. There are 25,200 patients recorded in the database, involving 24,331 procedures; the image files cover 6,384 patients with 88,366 digital picture files. From 1999 through 2002, NT$400,000 was saved using the new system. Photographs can be managed with the integrated database and browser software for archiving, which allows labeling of individual photographs with demographic information and browsing. Digitized images are not only more efficient and economical than conventional slide images, but they also facilitate clinical studies.
NASA Astrophysics Data System (ADS)
Dong, L.
2017-12-01
Abstract: The original urban surface structure has changed considerably because of rapid urbanization, and the impermeable area has increased greatly, which puts great pressure on city flood control and drainage. The Songmushan reservoir basin, an area with a high degree of urbanization, is taken as an example. Landsat pixels are decomposed with a linear spectral mixture model, and the proportion of urban area in each pixel is taken as the impervious rate. Based on impervious rate data from before and after urbanization, a physically based distributed hydrological model, the Liuxihe Model, is used to simulate the hydrological processes. The research shows that flood forecasting for a highly urbanized area with the Liuxihe Model performs well and can meet the accuracy requirements of city flood control and drainage. The increase of impervious area speeds up flow concentration, increases the peak flow, advances the time of the peak flow, and increases the runoff coefficient. Key words: Liuxihe Model; Impervious rate; City flood control and drainage; Urbanization; Songmushan reservoir basin
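As a hedged sketch of the linear spectral mixture step mentioned above, the following code unmixes a single pixel against three endmember spectra with non-negative least squares and reads off the urban fraction as the impervious rate; the endmember and pixel reflectances are invented, and NNLS is used as a simple stand-in for a fully constrained unmixing solver.

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical endmember spectra (columns: impervious, vegetation, soil) for 4 bands.
E = np.array([[0.30, 0.05, 0.20],
              [0.35, 0.08, 0.25],
              [0.40, 0.45, 0.30],
              [0.45, 0.25, 0.40]])

pixel = np.array([0.33, 0.28, 0.38, 0.37])   # hypothetical Landsat pixel reflectance

# Non-negative least squares gives endmember fractions; normalize so they sum to 1.
fractions, _ = nnls(E, pixel)
fractions /= fractions.sum()

impervious_rate = fractions[0]   # proportion of urban/impervious cover in the pixel
print(f"impervious rate: {impervious_rate:.2f}")
```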
Free fatty acid particles in protein formulations, part 2: contribution of polysorbate raw material.
Siska, Christine C; Pierini, Christopher J; Lau, Hollis R; Latypov, Ramil F; Fesinmeyer, R Matthew; Litowski, Jennifer R
2015-02-01
Polysorbate 20 (PS20) is a nonionic surfactant frequently used to stabilize protein biopharmaceuticals. During the development of mAb formulations containing PS20, small clouds of particles were observed in solutions stored in vials. The degree of particle formation was dependent on PS20 concentration. The particles were characterized by reversed-phase HPLC after dissolution and labeling with the fluorescent dye 1-pyrenyldiazomethane. The analysis showed that the particles consisted of free fatty acids (FFAs), with the distribution of types consistent with those found in the PS20 raw material. Protein solutions formulated with polysorbate 80, a chemically similar nonionic surfactant, showed a substantial delay in particle formation over time compared with PS20. Multiple lots of polysorbates were evaluated for FFA levels, each exhibiting differences based on polysorbate type and lot. Polysorbates purchased in more recent years show a greater distribution and quantity of FFA and also a greater propensity to form particles. This work shows that the quality control of polysorbate raw materials could play an important role in biopharmaceutical product quality. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
Preliminary surficial geologic map database of the Amboy 30 x 60 minute quadrangle, California
Bedford, David R.; Miller, David M.; Phelps, Geoffrey A.
2006-01-01
The surficial geologic map database of the Amboy 30x60 minute quadrangle presents characteristics of surficial materials for an area approximately 5,000 km2 in the eastern Mojave Desert of California. This map consists of new surficial mapping conducted between 2000 and 2005, as well as compilations of previous surficial mapping. Surficial geology units are mapped and described based on depositional process and age categories that reflect the mode of deposition, pedogenic effects occurring post-deposition, and, where appropriate, the lithologic nature of the material. The physical properties recorded in the database focus on those that drive hydrologic, biologic, and physical processes such as particle size distribution (PSD) and bulk density. This version of the database is distributed with point data representing locations of samples for both laboratory determined physical properties and semi-quantitative field-based information. Future publications will include the field and laboratory data as well as maps of distributed physical properties across the landscape tied to physical process models where appropriate. The database is distributed in three parts: documentation, spatial map-based data, and printable map graphics of the database. Documentation includes this file, which provides a discussion of the surficial geology and describes the format and content of the map data, a database 'readme' file, which describes the database contents, and FGDC metadata for the spatial map information. Spatial data are distributed as Arc/Info coverage in ESRI interchange (e00) format, or as tabular data in the form of DBF3-file (.DBF) file formats. Map graphics files are distributed as Postscript and Adobe Portable Document Format (PDF) files, and are appropriate for representing a view of the spatial database at the mapped scale.
On Study of Application of Big Data and Cloud Computing Technology in Smart Campus
NASA Astrophysics Data System (ADS)
Tang, Zijiao
2017-12-01
We live in an era of networks and information, producing and facing a large amount of data every day; traditional databases cannot easily store, process, and analyze such mass data, so big data technology emerged. The development and operation of big data rest on cloud computing, which provides sufficient space and resources to process and analyze data with big data technology. The goal of smart campus construction is to improve information systems in colleges and universities, so it is necessary to combine big data and cloud computing technologies in the construction of a smart campus, so that the campus database system and campus management system are integrated rather than isolated, and to serve smart campus construction by integrating, storing, processing, and analyzing mass data.
Design and implementation of a distributed large-scale spatial database system based on J2EE
NASA Astrophysics Data System (ADS)
Gong, Jianya; Chen, Nengcheng; Zhu, Xinyan; Zhang, Xia
2003-03-01
With the increasing maturity of distributed object technology, CORBA, .NET and EJB are widely used in the traditional IT field. However, the theory and practice of distributed spatial databases need further improvement because of the tension between large-scale spatial data and limited network bandwidth, and between short sessions and long transaction processing. The differences and trends among CORBA, .NET and EJB are discussed in detail; then the concept, architecture and characteristics of a distributed large-scale seamless spatial database system based on J2EE are presented, comprising a GIS client application, a web server, a GIS application server and a spatial data server. The design and implementation of the GIS client application components based on JavaBeans, the GIS engine based on servlets, and the GIS application server based on GIS Enterprise JavaBeans (containing session beans and entity beans) are explained. In addition, experiments on the relation between spatial data volume and response time under different conditions were conducted, which show that a distributed spatial database system based on J2EE can be used to manage, distribute and share large-scale spatial data on the Internet. Lastly, a distributed large-scale seamless image database on the Internet is presented.
Comparison of the Frontier Distributed Database Caching System to NoSQL Databases
NASA Astrophysics Data System (ADS)
Dykstra, Dave
2012-12-01
One of the main attractions of non-relational “NoSQL” databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
Comparison of the Frontier Distributed Database Caching System to NoSQL Databases
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dykstra, Dave
One of the main attractions of non-relational NoSQL databases is their ability to scale to large numbers of readers, including readers spread over a wide area. The Frontier distributed database caching system, used in production by the Large Hadron Collider CMS and ATLAS detector projects for Conditions data, is based on traditional SQL databases but also adds high scalability and the ability to be distributed over a wide-area for an important subset of applications. This paper compares the major characteristics of the two different approaches and identifies the criteria for choosing which approach to prefer over the other. It also compares in some detail the NoSQL databases used by CMS and ATLAS: MongoDB, CouchDB, HBase, and Cassandra.
NASA Astrophysics Data System (ADS)
Viegas, F.; Malon, D.; Cranshaw, J.; Dimitrov, G.; Nowak, M.; Nairz, A.; Goossens, L.; Gallas, E.; Gamboa, C.; Wong, A.; Vinek, E.
2010-04-01
The TAG files store summary event quantities that allow a quick selection of interesting events. This data will be produced at a nominal rate of 200 Hz, and is uploaded into a relational database for access from websites and other tools. The estimated database volume is 6TB per year, making it the largest application running on the ATLAS relational databases, at CERN and at other voluntary sites. The sheer volume and high rate of production make this application a challenge to data and resource management, in many aspects. This paper will focus on the operational challenges of this system. These include: uploading the data from files to CERN's and remote sites' databases; distributing the TAG metadata that is essential to guide the user through event selection; controlling resource usage of the database, from the user query load to the strategy of cleaning and archiving of old TAG data.
Lai, Edward Chia-Cheng; Man, Kenneth K C; Chaiyakunapruk, Nathorn; Cheng, Ching-Lan; Chien, Hsu-Chih; Chui, Celine S L; Dilokthornsakul, Piyameth; Hardy, N Chantelle; Hsieh, Cheng-Yang; Hsu, Chung Y; Kubota, Kiyoshi; Lin, Tzu-Chieh; Liu, Yanfang; Park, Byung Joo; Pratt, Nicole; Roughead, Elizabeth E; Shin, Ju-Young; Watcharathanakij, Sawaeng; Wen, Jin; Wong, Ian C K; Yang, Yea-Huei Kao; Zhang, Yinghong; Setoguchi, Soko
2015-11-01
This study describes the availability and characteristics of databases in Asian-Pacific countries and assesses the feasibility of a distributed network approach in the region. A web-based survey was conducted among investigators using healthcare databases in the Asia-Pacific countries. Potential survey participants were identified through the Asian Pharmacoepidemiology Network. Investigators from a total of 11 databases participated in the survey. Database sources included four nationwide claims databases from Japan, South Korea, and Taiwan; two nationwide electronic health records from Hong Kong and Singapore; a regional electronic health record from western China; two electronic health records from Thailand; and cancer and stroke registries from Taiwan. We identified 11 databases with capabilities for distributed network approaches. Many country-specific coding systems and terminologies have been already converted to international coding systems. The harmonization of health expenditure data is a major obstacle for future investigations attempting to evaluate issues related to medical costs.
Performance analysis of static locking in replicated distributed database systems
NASA Technical Reports Server (NTRS)
Kuang, Yinghong; Mukkamala, Ravi
1991-01-01
Data replication and transaction deadlocks can severely affect the performance of distributed database systems. Many current evaluation techniques ignore these aspects because they are difficult to evaluate through analysis and time consuming to evaluate through simulation. Here, a technique is discussed that combines simulation and analysis to closely illustrate the impact of deadlock and evaluate the performance of replicated distributed databases with both shared and exclusive locks.
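The abstract above does not give the hybrid simulation/analysis method itself, but the flavour of the shared/exclusive locking question can be sketched with a toy Monte Carlo estimate of how often two concurrent transactions conflict; the item count, transaction size, and share ratio below are arbitrary assumptions, not values from the paper.

```python
import random

def conflict_probability(n_items=100, txn_size=5, share_ratio=0.7, trials=20000):
    """Toy Monte Carlo estimate of the chance that two concurrent transactions
    conflict, given that reads take shared locks and writes take exclusive locks.
    Two lock requests on the same item conflict unless both are shared."""
    conflicts = 0
    for _ in range(trials):
        # Each transaction: item id -> True if it only needs a shared lock.
        t1 = {random.randrange(n_items): random.random() < share_ratio
              for _ in range(txn_size)}
        t2 = {random.randrange(n_items): random.random() < share_ratio
              for _ in range(txn_size)}
        if any(item in t2 and not (shared and t2[item])
               for item, shared in t1.items()):
            conflicts += 1
    return conflicts / trials

print(conflict_probability())
```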
A Database for Decision-Making in Training and Distributed Learning Technology
1998-04-01
developer must answer these questions: ♦ Who will develop the courseware? Should we outsource? ♦ What media should we use? How much will it cost? ♦ What... to develop, the database can be useful for answering staffing questions and planning transitions to technology-assisted courses. The database... of distributed learning curricula in comparison to traditional methods. To develop a military-wide distributed learning plan, the existing course
Methods To Determine the Silicone Oil Layer Thickness in Sprayed-On Siliconized Syringes.
Loosli, Viviane; Germershaus, Oliver; Steinberg, Henrik; Dreher, Sascha; Grauschopf, Ulla; Funke, Stefanie
2018-01-01
The silicone lubricant layer in prefilled syringes has been investigated with regard to siliconization process performance, prefilled syringe functionality, and drug product attributes, such as subvisible particle levels, in several studies in the past. However, adequate methods to characterize the silicone oil layer thickness and distribution are limited, and systematic evaluation is missing. In this study, white light interferometry was evaluated to close this gap in method understanding. White light interferometry demonstrated a good accuracy of 93-99% for MgF2-coated, curved standards covering a thickness range of 115-473 nm. Thickness measurements for sprayed-on siliconized prefilled syringes with different representative silicone oil distribution patterns (homogeneous, pronounced siliconization at flange or needle side, respectively) showed high instrument (0.5%) and analyst precision (4.1%). Different white light interferometry instrument parameters (autofocus, protective shield, syringe barrel dimensions input, type of non-siliconized syringe used as base reference) had no significant impact on the measured average layer thickness. The obtained values from white light interferometry applying a fully developed method (12 radial lines, 50 mm measurement distance, 50 measurement points) were in agreement with orthogonal results from combined white and laser interferometry and 3D-laser scanning microscopy. The investigated syringe batches (lots A and B) exhibited comparable longitudinal silicone oil layer thicknesses ranging from 170-190 nm to 90-100 nm from flange to tip and homogeneously distributed silicone layers over the syringe barrel circumference (110-135 nm). Empty break-loose (4-4.5 N) and gliding forces (2-2.5 N) were comparably low for both analyzed syringe lots. A silicone oil layer thickness of 100-200 nm was thus sufficient for adequate functionality in this particular study. Filling the syringe with a surrogate solution, including short-term exposure and emptying, did not significantly influence the silicone oil layer at the investigated silicone level. It thus appears reasonable to use this approach to characterize silicone oil layers in filled syringes over time. The developed method non-destructively characterizes the layer thickness and distribution of silicone oil in empty syringes and provides fast access to reliable results. The gained information can be further used to support optimization of siliconization processes and increase the understanding of syringe functionality. LAY ABSTRACT: Silicone oil layers as lubricant are required to ensure functionality of prefilled syringes. Methods evaluating these layers are limited, and systematic evaluation is missing. The aim of this study was to develop and assess white light interferometry as an analytical method to characterize sprayed-on silicone oil layers in 1 mL prefilled syringes. White light interferometry showed a good accuracy (93-99%) as well as instrument and analyst precision (0.5% and 4.1%, respectively). Different applied instrument parameters had no significant impact on the measured layer thickness. The obtained values from white light interferometry applying a fully developed method concurred with orthogonal results from 3D-laser scanning microscopy and combined white light and laser interferometry. The average layer thicknesses in two investigated syringe lots gradually decreased from 170-190 nm at the flange to 100-90 nm at the needle side.
The silicone layers were homogeneously distributed over the syringe barrel circumference (110-135 nm) for both lots. Empty break-loose (4-4.5 N) and gliding forces (2-2.5 N) were comparably low for both analyzed syringe lots. Syringe filling with a surrogate solution, including short-term exposure and emptying, did not significantly affect the silicone oil layer. The developed, non-destructive method provided reliable results to characterize the silicone oil layer thickness and distribution in empty siliconized syringes. This information can be further used to support optimization of siliconization processes and increase understanding of syringe functionality. © PDA, Inc. 2018.
Packets Distributing Evolutionary Algorithm Based on PSO for Ad Hoc Network
NASA Astrophysics Data System (ADS)
Xu, Xiao-Feng
2018-03-01
Wireless communication networks have features such as limited bandwidth, changing channels and dynamic topology. Ad hoc networks face many difficulties in access control, bandwidth allocation, resource assignment and congestion control. Therefore, a wireless packet-distributing evolutionary algorithm based on PSO (DPSO) for ad hoc networks is proposed. Firstly, the impact of parameters on network performance is analyzed and studied to obtain an effective network performance function. Secondly, the improved PSO evolutionary algorithm is used to solve the optimization problem from local to global in the process of distributing network packets. The simulation results show that the algorithm can ensure fairness and timeliness of network transmission, as well as improve the integrated resource utilization efficiency of ad hoc networks.
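As a minimal sketch of the particle swarm optimization idea behind DPSO (not the paper's algorithm or its network-performance function), the following code minimizes a stand-in objective over normalized packet-allocation weights; all parameter choices and the objective itself are illustrative assumptions.

```python
import numpy as np

def pso(objective, dim=4, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (minimization) on the unit hypercube."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_particles, dim))          # particle positions
    v = np.zeros_like(x)                        # particle velocities
    pbest = x.copy()                            # personal best positions
    pbest_val = np.apply_along_axis(objective, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()    # global best position
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, 1.0)
        vals = np.apply_along_axis(objective, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Stand-in objective: imagine x are packet-allocation weights and we penalize
# unfairness (variance) plus total load (sum of squares).
best, value = pso(lambda x: np.var(x) + np.sum(x**2))
print(best, value)
```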
miRNEST database: an integrative approach in microRNA search and annotation
Szcześniak, Michał Wojciech; Deorowicz, Sebastian; Gapski, Jakub; Kaczyński, Łukasz; Makałowska, Izabela
2012-01-01
Despite accumulating data on animal and plant microRNAs and their functions, existing public miRNA resources usually collect miRNAs from a very limited number of species. A lot of microRNAs, including those from model organisms, remain undiscovered. As a result there is a continuous need to search for new microRNAs. We present miRNEST (http://mirnest.amu.edu.pl), a comprehensive database of animal, plant and virus microRNAs. The core part of the database is built from our miRNA predictions conducted on Expressed Sequence Tags of 225 animal and 202 plant species. The miRNA search was performed based on sequence similarity, and as many as 10,004 miRNA candidates in 221 animal and 199 plant species were discovered. Out of them, only 299 have already been deposited in miRBase. Additionally, miRNEST has been integrated with external miRNA data from the literature and 13 databases, which include miRNA sequences, small RNA sequencing data, expression, polymorphism and target data, as well as links to external miRNA resources, whenever applicable. All this makes miRNEST a considerable miRNA resource in terms of the number of species covered (544), integrating scattered miRNA data into a uniform format with a user-friendly web interface. PMID:22135287
Distribution System Upgrade Unit Cost Database
Horowitz, Kelsey
2017-11-30
This database contains unit cost information for different components that may be used to integrate distributed photovoltaic (D-PV) systems onto distribution systems. Some of these upgrades and costs may also apply to integration of other distributed energy resources (DER). Which components are required, and how many of each, is system-specific and should be determined by analyzing the effects of distributed PV at a given penetration level on the circuit of interest in combination with engineering assessments on the efficacy of different solutions to increase the ability of the circuit to host additional PV as desired. The current state of the distribution system should always be considered in these types of analysis. The data in this database were collected from a variety of utilities, PV developers, technology vendors, and published research reports. Where possible, we have included information on the source of each data point and relevant notes. In some cases where data provided are sensitive or proprietary, we were not able to specify the source, but provide other information that may be useful to the user (e.g. year, location where equipment was installed). NREL has carefully reviewed these sources prior to inclusion in this database. Additional information about the database, data sources, and assumptions is included in the "Unit_cost_database_guide.doc" file included in this submission. This guide provides important information on what costs are included in each entry. Please refer to this guide before using the unit cost database for any purpose.
Khatibi, Piyum A.; McMaster, Nicole J.; Musser, Robert; Schmale, David G.
2014-01-01
Fuel ethanol co-products known as distillers’ dried grains with solubles (DDGS) are a significant source of energy, protein, and phosphorous in animal feed. Fuel ethanol production may concentrate mycotoxins present in corn into DDGS. One hundred and forty one corn DDGS lots collected in 2011 from 78 ethanol plants located in 12 states were screened for the mycotoxins deoxynivalenol (DON), 15-acetyldeoxynivalenol (15-ADON), 3-acetyldeoxynivalenol (3-ADON), nivalenol (NIV), and zearalenone (ZON). DON ranged from <0.50 to 14.62 μg g−1, 15-ADON ranged from <0.10 to 7.55 μg g−1, and ZON ranged from <0.10 to 2.12 μg g−1. None of the DDGS lots contained 3-ADON or NIV. Plants in OH had the highest levels of DON overall (mean of 9.51 μg g−1), and plants in NY, MI, IN, NE, and WI had mean DON levels >1 and <4 μg g−1. Twenty six percent (36/141) of the DDGS lots contained 1.0 to 5.0 μg g−1 DON, 2% (3/141) contained >5.0 and <10.0 μg g−1 DON, and 3% (4/141) contained >10.0 μg g−1 DON. All DDGS lots contaminated with unacceptable levels of DON evaded detection prior to their commercial distribution and were likely sold as feed products. PMID:24674933
NASA Technical Reports Server (NTRS)
Moroh, Marsha
1988-01-01
A methodology for building interfaces of resident database management systems to a heterogeneous distributed database management system under development at NASA, the DAVID system, was developed. The feasibility of that methodology was demonstrated by construction of the software necessary to perform the interface task. The interface terminology developed in the course of this research is presented. The work performed and the results are summarized.
Improving the Plasticity of LIMS Implementation: LIMS Extension through Microsoft Excel
NASA Technical Reports Server (NTRS)
Culver, Mark
2017-01-01
A Laboratory Information Management System (LIMS) is database software with many built-in tools ideal for handling and documenting most laboratory processes in an accurate and consistent manner, making it an indispensable tool for the modern laboratory. However, many LIMS end users will find that, for analyses with unique considerations such as standard curves, multiple-stage incubations, or logical considerations, a base LIMS distribution may not ideally suit their needs. These considerations bring about the need for extension languages, which can extend the functionality of a LIMS. While these languages do provide the implementation team the functionality required to accommodate these special laboratory analyses, they are usually too complex for the end user to modify to compensate for natural changes in laboratory operations. The LIMS utilized by our laboratory offers a unique and easy-to-use choice for an extension language, one that is already heavily relied upon not only in science but also in most academic and business pursuits: Microsoft Excel. The validity of Microsoft Excel as a pseudo programming language and its usability and versatility as a LIMS extension language will be discussed. The NELAC implications and overall drawbacks of this LIMS configuration will also be discussed.
CMO: Cruise Metadata Organizer for JAMSTEC Research Cruises
NASA Astrophysics Data System (ADS)
Fukuda, K.; Saito, H.; Hanafusa, Y.; Vanroosebeke, A.; Kitayama, T.
2011-12-01
JAMSTEC's Data Research Center for Marine-Earth Sciences manages and distributes a wide variety of observational data and samples obtained from JAMSTEC research vessels and deep sea submersibles. Generally, metadata are essential to identify how data and samples were obtained. In JAMSTEC, cruise metadata include cruise information such as cruise ID, name of vessel and research theme, and diving information such as dive number, name of submersible and position of diving point. They are submitted by chief scientists of research cruises in the Microsoft Excel spreadsheet format, and registered into a data management database to confirm receipt of observational data files, cruise summaries, and cruise reports. The cruise metadata are also published via "JAMSTEC Data Site for Research Cruises" within two months after the end of each cruise. Furthermore, these metadata are distributed with observational data, images and samples via several data and sample distribution websites after a publication moratorium period. However, there are two operational issues in the metadata publishing process. One is duplicated effort and asynchronous metadata across multiple distribution websites, caused by manual metadata entry into individual websites by administrators. The other is differences in the data types or representation of metadata on each website. To solve those problems, we have developed a cruise metadata organizer (CMO) which allows cruise metadata to be connected from the data management database to several distribution websites. CMO is comprised of three components: an Extensible Markup Language (XML) database, Enterprise Application Integration (EAI) software, and a web-based interface. The XML database is used because of its flexibility for any change of metadata. Daily differential uptake of metadata from the data management database to the XML database is automatically processed via the EAI software. Some metadata are entered into the XML database using the web-based interface by a metadata editor in CMO as needed. Then daily differential uptake of metadata from the XML database to the databases of several distribution websites is automatically processed using a convertor defined by the EAI software. Currently, CMO is available for three distribution websites: "Deep Sea Floor Rock Sample Database GANSEKI", "Marine Biological Sample Database", and "JAMSTEC E-library of Deep-sea Images". CMO is planned to provide "JAMSTEC Data Site for Research Cruises" with metadata in the future.
Cook, David W; Oleary, Paul; Hunsucker, Jeff C; Sloan, Edna M; Bowers, John C; Blodgett, Robert J; Depaola, Angelo
2002-01-01
From June 1998 to July 1999, 370 lots of oysters in the shell were sampled at 275 different establishments (71%, restaurants or oyster bars; 27%, retail seafood markets; and 2%, wholesale seafood markets) in coastal and inland markets throughout the United States. The oysters were harvested from the Gulf (49%), Pacific (14%), Mid-Atlantic (18%), and North Atlantic (11%) Coasts of the United States and from Canada (8%). Densities of Vibrio vulnificus and Vibrio parahaemolyticus were determined using a modification of the most probable number (MPN) techniques described in the Food and Drug Administration's Bacteriological Analytical Manual. DNA probes and enzyme immunoassay were used to identify suspect isolates and to determine the presence of the thermostable direct hemolysin gene associated with pathogenicity of V. parahaemolyticus. Densities of both V. vulnificus and V. parahaemolyticus in market oysters from all harvest regions followed a seasonal distribution, with highest densities in the summer. Highest densities of both organisms were observed in oysters harvested from the Gulf Coast, where densities often exceeded 10,000 MPN/g. The majority (78%) of lots harvested from the North Atlantic, Pacific, and Canadian Coasts had V. vulnificus densities below the detectable level of 0.2 MPN/g; none exceeded 100 MPN/g. V. parahaemolyticus densities were greater than those of V. vulnificus in lots from these same areas, with some lots exceeding 1,000 MPN/g for V. parahaemolyticus. Some lots from the Mid-Atlantic states exceeded 10,000 MPN/g for both V. vulnificus and V. parahaemolyticus. Overall, there was a significant correlation between V. vulnificus and V. parahaemolyticus densities (r = 0.72, n = 202, P < 0.0001), but neither density correlated with salinity. Storage time significantly affected the V. vulnificus (10% decrease per day) and V. parahaemolyticus (7% decrease per day) densities in market oysters. The thermostable direct hemolysin gene associated with V. parahaemolyticus virulence was detected in 9 of 3,429 (0.3%) V. parahaemolyticus cultures and in 8 of 198 (4.0%) lots of oysters. These data can be used to estimate the exposure of raw oyster consumers to V. vulnificus and V. parahaemolyticus.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Muñoz-Jaramillo, Andrés; Windmueller, John C.; Amouzou, Ernest C.
2015-02-10
In this work, we take advantage of 11 different sunspot group, sunspot, and active region databases to characterize the area and flux distributions of photospheric magnetic structures. We find that, when taken separately, different databases are better fitted by different distributions (as has been reported previously in the literature). However, we find that all our databases can be reconciled by the simple application of a proportionality constant, and that, in reality, different databases are sampling different parts of a composite distribution. This composite distribution is made up by linear combination of Weibull and log-normal distributions, where a pure Weibull (log-normal) characterizes the distribution of structures with fluxes below (above) 10²¹ Mx (10²² Mx). Additionally, we demonstrate that the Weibull distribution shows the expected linear behavior of a power-law distribution (when extended to smaller fluxes), making our results compatible with the results of Parnell et al. We propose that this is evidence of two separate mechanisms giving rise to visible structures on the photosphere: one directly connected to the global component of the dynamo (and the generation of bipolar active regions), and the other with the small-scale component of the dynamo (and the fragmentation of magnetic structures due to their interaction with turbulent convection).
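The composite distribution described above can be sketched as a simple linear combination of Weibull and log-normal probability density functions; the mixing fraction and all distribution parameters below are invented placeholders, not the fitted values from the paper.

```python
import numpy as np
from scipy import stats

def composite_pdf(flux, frac=0.6, wb_shape=0.8, wb_scale=3e20,
                  ln_sigma=1.0, ln_median=3e22):
    """Linear combination of a Weibull and a log-normal pdf over magnetic flux (Mx).
    All parameter values here are illustrative, not fitted values."""
    weibull = stats.weibull_min.pdf(flux, wb_shape, scale=wb_scale)
    lognorm = stats.lognorm.pdf(flux, ln_sigma, scale=ln_median)
    return frac * weibull + (1.0 - frac) * lognorm

flux = np.logspace(19, 24, 6)   # fluxes from 1e19 to 1e24 Mx
print(composite_pdf(flux))
```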
Bridging the Gap between the Data Base and User in a Distributed Environment.
ERIC Educational Resources Information Center
Howard, Richard D.; And Others
1989-01-01
The distribution of databases physically separates users from those who administer the data and from those who perform database administration. By drawing on the work of social scientists in reliability and validity, a set of concepts and a list of questions to ensure data quality were developed. (Author/MLW)
A Web-based open-source database for the distribution of hyperspectral signatures
NASA Astrophysics Data System (ADS)
Ferwerda, J. G.; Jones, S. D.; Du, Pei-Jun
2006-10-01
With the coming of age of field spectroscopy as a non-destructive means to collect information on the physiology of vegetation, there is a need for storage of signatures and, more importantly, their metadata. Without the proper organisation of metadata, the signatures themselves are of limited use. In order to facilitate re-distribution of data, a database for the storage and distribution of hyperspectral signatures and their metadata was designed. The database was built using open-source software, and can be used by the hyperspectral community to share their data. Data are uploaded through a simple web-based interface. The database recognizes major file formats by ASD, GER and International Spectronics. The database source code is available for download through the hyperspectral.info web domain, and we happily invite suggestions for additions and modifications to the database, to be submitted through the online forums on the same website.
1983-10-01
Multiversion Data 2-18; 2.7.1 Multiversion Timestamping 2-20; 2.7.2 Multiversion Locking 2-20; 2.8 Combining the Techniques 2-22; 3. Database Recovery Algorithms... See [THEM79, GIFF79] for details. 2.7 Multiversion Data. Let us return to a database system model where each logical data item is stored at one DM... In a multiversion database each Write wi[x] produces a new copy (or version) of x, denoted xi. Thus, the value of x is a set of versions. For each
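To make the multiversion idea in the excerpt above concrete, here is a toy sketch (not the report's algorithms) in which every write appends a timestamped version and a read returns the latest version visible at the reader's timestamp; the class and its interface are invented for illustration.

```python
from collections import defaultdict

class MultiversionStore:
    """Each write w_i[x] appends a new version of x instead of overwriting it."""
    def __init__(self):
        self.versions = defaultdict(list)   # item -> sorted list of (timestamp, value)

    def write(self, item, value, ts):
        self.versions[item].append((ts, value))
        self.versions[item].sort()

    def read(self, item, ts):
        # Return the value of the latest version with timestamp <= ts.
        visible = [v for t, v in self.versions[item] if t <= ts]
        return visible[-1] if visible else None

db = MultiversionStore()
db.write("x", 10, ts=1)
db.write("x", 42, ts=5)
print(db.read("x", ts=3))   # -> 10 (the older version remains readable)
print(db.read("x", ts=9))   # -> 42
```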
Leonids 2017 from Norway – A bright surprise!
NASA Astrophysics Data System (ADS)
Gaarder, K.
2018-01-01
I am very pleased to have been able to observe near maximum activity of the Leonids, and clearly witnessed the unequal mass distribution during these hours. A lot of bright Leonids were seen, followed by a short period of high activity of fainter meteors, before a sharp drop in activity. The Leonids is undoubtedly a shower to watch closely, with its many variations in activity level and magnitude distribution. I already look forward to observing the next years’ display, hopefully under a dark and clear sky, filled with bright meteors!
NASA Astrophysics Data System (ADS)
Kumlander, Deniss
The globalization of companies' operations and competition between software vendors demand improving the quality of delivered software while decreasing its overall cost. At the same time, this introduces many problems into the software development process, as it produces distributed organizations that break the co-location rule of modern software development methodologies. Here we propose a reformulation of the ambassador position that increases its productivity in order to bridge the communication and workflow gap by managing the entire communication process rather than concentrating purely on the communication result.
Jeffery, Caroline; Beckworth, Colin; Hadden, Wilbur C; Ouma, Joseph; Lwanga, Stephen K; Valadez, Joseph J
2016-01-01
Beginning in 2003, Uganda used Lot Quality Assurance Sampling (LQAS) to assist district managers collect and use data to improve their human immunodeficiency virus (HIV)/AIDS program. Uganda's LQAS-database (2003-2012) covers up to 73 of 112 districts. Our multidistrict analysis of the LQAS data-set at 2003-2004 and 2012 examined gender variation among adults who ever tested for HIV over time, and attributes associated with testing. Conditional logistic regression matched men and women by community with seven model effect variables. HIV testing prevalence rose from 14% (men) and 12% (women) in 2003-2004 to 62% (men) and 80% (women) in 2012. In 2003-2004, knowing the benefits of testing (Odds Ratio [OR] = 6.09, 95% CI = 3.01-12.35), knowing where to get tested (OR = 2.83, 95% CI = 1.44-5.56), and secondary education (OR = 3.04, 95% CI = 1.19-7.77) were significantly associated with HIV testing. By 2012, knowing the benefits of testing (OR = 3.63, 95% CI = 2.25-5.83), where to get tested (OR = 5.15, 95% CI = 3.26-8.14), primary education (OR = 2.01, 95% CI = 1.39-2.91), being female (OR = 3.03, 95% CI = 2.53-3.62), and being married (OR = 1.81, 95% CI = 1.17-2.8) were significantly associated with HIV testing. HIV testing prevalence in Uganda has increased dramatically, more for women than men. Our results concurred with other authors that education, knowledge of HIV, and marriage (women only) are associated with testing for HIV and suggest that couples testing is more prevalent than other authors.
Magari, Robert T
2002-03-01
The effect of different lot-to-lot variability levels on the prediction of stability is studied based on two statistical models for estimating degradation in real-time and accelerated stability tests. Lot-to-lot variability is considered as random in both models and is attributed to two sources: variability at time zero and variability of the degradation rate. Real-time stability tests are modeled as a function of time, while accelerated stability tests are modeled as a function of time and temperature. Several data sets were simulated, and a maximum likelihood approach was used for estimation. The 95% confidence intervals for the degradation rate depend on the amount of lot-to-lot variability. When lot-to-lot degradation rate variability is relatively large (CV > or = 8%), the estimated confidence intervals do not represent the trend for individual lots. In such cases it is recommended to analyze each lot individually. Copyright 2002 Wiley-Liss, Inc. and the American Pharmaceutical Association J Pharm Sci 91: 893-899, 2002
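A minimal simulation sketch of the two random lot-to-lot components described above (value at time zero and degradation rate), followed by per-lot slope estimates as recommended when rate variability is large; all numbers, units, and the linear model are assumptions for illustration, not the paper's models.

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(0, 25, 3)

def simulate_lots(n_lots=6, mean_rate=-0.4, rate_cv=0.10,
                  intercept_sd=0.5, noise_sd=0.3):
    """Simulate real-time stability data where both the value at time zero and
    the degradation rate vary from lot to lot (rate_cv ~ lot-to-lot rate CV)."""
    data = {}
    for lot in range(n_lots):
        intercept = 100.0 + rng.normal(0.0, intercept_sd)
        rate = mean_rate * (1.0 + rng.normal(0.0, rate_cv))
        data[lot] = intercept + rate * months + rng.normal(0.0, noise_sd, months.size)
    return data

# When lot-to-lot rate variability is large, fit each lot individually:
for lot, y in simulate_lots().items():
    slope, _ = np.polyfit(months, y, 1)
    print(f"lot {lot}: estimated degradation rate {slope:.3f} per month")
```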
1985-12-01
Relational to Network Query Translator for a Distributed Database Management System. Thesis. Kevin H. Mahoney, Captain, USAF. AFIT/GCS/ENG/85D-7... Network Query Translator for a Distributed Database Management System - Thesis presented to the Faculty of the School of Engineering of the Air Force Institute of Technology, Air University, in partial fulfillment of the requirements for the degree of Master of Science in Computer Systems - Kevin H. Mahoney
DOE Office of Scientific and Technical Information (OSTI.GOV)
2011-07-15
1) Configured servers: In coordination with the INSIGHT team, a hardware configuration was selected. Two nodes were purchased, configured, and shipped with a compatible OS and database installation. The servers have been stress tested for reliability as they use leading edge technologies. Each node has two CPUs and 12 cores per CPU with maximum onboard memory for high performance. 2) LIM and Experimental module: The original BioSig system was developed for cancer research. Accordingly, the LIM system and its corresponding web pages are being modified to facilitate (i) pathogen-donor interactions, (ii) media composition, and (iii) chemical and siRNA plate configurations. The LIM system has been redesigned. The revised system allows design of new media and tracking it from lot to lot so that variations in the phenotypic responses can be traced to a specific media and lot number. Similar associations are also possible with other experimental factors (e.g., donor-pathogen, siRNA, and chemical). Furthermore, the design of the experimental variables has also been revised to (i) interact with the newly developed LIM system, (ii) simplify experimental specifications, and (iii) test for potential operator error during data entry. Part of the complication has been due to the handshake between the multiple teams that provide the small molecule plates and the team that creates assay plates. Our efforts have focused on harmonizing these interactions (e.g., various data formats) so that each assay plate can be mapped to its source and a correct set of experimental variables can be associated with each image. For example, depending upon the source of the chemical plates, they may have different formats. We have developed a canonical representation that registers the SMILES code for each chemical compound along with its physiochemical properties. The LIM schema works in conjunction with customized Web pages. 3) Import of images and computed descriptors module: In coordination with the INSIGHT team, policies were designed to route images and computed representations into BioSig. This module examines for completion of image analysis, and imports images, computed masks, and descriptors into BioSig. A database API for efficient retrieval of a selection of descriptors (among thousands) was designed and implemented. 4) Computed segmentation masks from external software were imported, boundaries computed, and overlaid on images for quality control.
Rule-based deduplication of article records from bibliographic databases.
Jiang, Yu; Lin, Can; Meng, Weiyi; Yu, Clement; Cohen, Aaron M; Smalheiser, Neil R
2014-01-01
We recently designed and deployed a metasearch engine, Metta, that sends queries and retrieves search results from five leading biomedical databases: PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Central Register of Controlled Trials. Because many articles are indexed in more than one of these databases, it is desirable to deduplicate the retrieved article records. This is not a trivial problem because data fields contain a lot of missing and erroneous entries, and because certain types of information are recorded differently (and inconsistently) in the different databases. The present report describes our rule-based method for deduplicating article records across databases and includes an open-source script module that can be deployed freely. Metta was designed to satisfy the particular needs of people who are writing systematic reviews in evidence-based medicine. These users want the highest possible recall in retrieval, so it is important to err on the side of not deduplicating any records that refer to distinct articles, and it is important to perform deduplication online in real time. Our deduplication module is designed with these constraints in mind. Articles that share the same publication year are compared sequentially on parameters including PubMed ID number, digital object identifier, journal name, article title and author list, using text approximation techniques. In a review of Metta searches carried out by public users, we found that the deduplication module was more effective at identifying duplicates than EndNote without making any erroneous assignments.
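A minimal sketch of rule-based duplicate detection in the spirit described above (same publication year required, exact ID matches decide, otherwise fuzzy title comparison); the field names, threshold, and rules are simplified assumptions rather than Metta's actual module.

```python
from difflib import SequenceMatcher

def same_article(rec_a, rec_b, title_threshold=0.9):
    """Simplified duplicate test between two article records from different
    databases. Records are dicts; missing fields are simply skipped."""
    # Rule 1: records from different publication years are never merged.
    if rec_a.get("year") != rec_b.get("year"):
        return False
    # Rule 2: an exact match on PubMed ID or DOI decides immediately.
    for key in ("pmid", "doi"):
        if rec_a.get(key) and rec_a.get(key) == rec_b.get(key):
            return True
    # Rule 3: otherwise require near-identical titles (text approximation).
    t1, t2 = rec_a.get("title", "").lower(), rec_b.get("title", "").lower()
    if t1 and t2:
        return SequenceMatcher(None, t1, t2).ratio() >= title_threshold
    # Err on the side of NOT deduplicating when evidence is missing.
    return False

a = {"year": 2014, "doi": "10.1000/xyz", "title": "Rule-based deduplication of article records"}
b = {"year": 2014, "title": "Rule-based deduplication of article records."}
print(same_article(a, b))   # True via title similarity
```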
Distributed Episodic Exploratory Planning (DEEP)
2008-12-01
API). For DEEP, Hibernate offered the following advantages: • Abstracts SQL by utilizing HQL so any database with a Java Database Connectivity... Hibernate SQL; ICCRTS International Command and Control Research and Technology Symposium; JDB Java Distributed Blackboard; JDBC Java Database Connectivity... selected because of its opportunistic reasoning capabilities and implemented in Java for platform independence. Java was chosen for ease of
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-13
... scrapers, 1 bone scraper handle, 1 lot of mussel shells, 1 lot of red ochre, 2 bone awls, 1 lot of charcoal... lots of bag residue, 4 lots of animal bones, 1 stone net sinker, 1 lot of tin can fragments, 2...
Monte Carlo simulations of product distributions and contained metal estimates
Gettings, Mark E.
2013-01-01
Estimation of product distributions of two factors was simulated by conventional Monte Carlo techniques using factor distributions that were independent (uncorrelated). Several simulations using uniform distributions of factors show that the product distribution has a central peak approximately centered at the product of the medians of the factor distributions. Factor distributions that are peaked, such as Gaussian (normal) produce an even more peaked product distribution. Piecewise analytic solutions can be obtained for independent factor distributions and yield insight into the properties of the product distribution. As an example, porphyry copper grades and tonnages are now available in at least one public database and their distributions were analyzed. Although both grade and tonnage can be approximated with lognormal distributions, they are not exactly fit by them. The grade shows some nonlinear correlation with tonnage for the published database. Sampling by deposit from available databases of grade, tonnage, and geological details of each deposit specifies both grade and tonnage for that deposit. Any correlation between grade and tonnage is then preserved and the observed distribution of grades and tonnages can be used with no assumption of distribution form.
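A minimal Monte Carlo sketch of the product-distribution idea above: the product of two independent uniform factors is compared against the product of their medians, and an illustrative lognormal grade times lognormal tonnage gives a contained-metal distribution; the parameters are invented, not taken from the porphyry copper database.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Product of two independent uniform factors; compare the simulated median
# with the product of the factor medians (2 * 4 = 8).
u = rng.uniform(1.0, 3.0, n) * rng.uniform(2.0, 6.0, n)
print("uniform-factor product: median", np.median(u), "vs product of medians", 2.0 * 4.0)

# Grade x tonnage with illustrative lognormal factors gives contained metal.
grade = rng.lognormal(mean=np.log(0.5), sigma=0.5, size=n)      # percent Cu
tonnage = rng.lognormal(mean=np.log(100.0), sigma=1.0, size=n)  # Mt of ore
contained = grade / 100.0 * tonnage                             # Mt of metal
print("contained metal: median %.2f Mt, mean %.2f Mt" % (np.median(contained), contained.mean()))
```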
New laboratory approach to study Titan ionospheric chemistry
NASA Astrophysics Data System (ADS)
Thissen, R.; Dutuit, O.; Pernot, P.; Carrasco, N.; Lilensten, J.; Quirico, E.; Schmitt, B.
The exploration of Titan reveals a very complex chemistry occurring in the ionospheric region of the atmosphere. In order to interpret the observations performed by the Cassini spectrometers, we need to improve our description of the ion-molecule chemistry involving nitrogen and hydrocarbons. Up to now, models have been based on databases compiled over the years. These are quite complete for describing the major ions, but lack accuracy for some of them, totally neglect questions of isomerization or chemical functionality in the description of ionic species, and still miss many inputs for ionic species heavier than 50 daltons. We propose to improve the databases by systematic measurements of ion-molecule reaction rates, and further structural description, by means of a high resolution mass spectrometer allowing MS/MS structural analysis of the ionic species. A thorough evaluation of current databases by means of uncertainty propagation will guide our choice of the most important reactions to be studied. This study should also lead to educated choices for simplifying the chemistry, which is mandatory in order to include the chemistry in 3D or fluid models of the atmosphere. We also plan to use extracts from tholins as a molecular source for our analysis.
Low-Rank Linear Dynamical Systems for Motor Imagery EEG.
Zhang, Wenchang; Sun, Fuchun; Tan, Chuanqi; Liu, Shaobo
2016-01-01
The common spatial pattern (CSP) and other spatiospectral feature extraction methods have become the most effective and successful approaches to the problem of motor imagery electroencephalography (MI-EEG) pattern recognition from multichannel neural activity in recent years. However, these methods need a lot of preprocessing and postprocessing, such as filtering, de-meaning, and spatiospectral feature fusion, which easily influence classification accuracy. In this paper, we utilize linear dynamical systems (LDSs) for EEG signal feature extraction and classification. The LDS model has many advantages, such as simultaneous generation of spatial and temporal feature matrices, freedom from preprocessing or postprocessing, and low cost. Furthermore, a low-rank matrix decomposition approach is introduced to remove noise and resting-state components in order to improve the robustness of the system. We then propose a low-rank LDS algorithm that decomposes the LDS feature subspace on a finite Grassmannian and obtains better performance. Extensive experiments are carried out on the public datasets "BCI Competition III Dataset IVa" and "BCI Competition IV Database 2a." The results show that our proposed three methods yield higher accuracies compared with prevailing approaches such as CSP and CSSP.
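As a hedged sketch of the general LDS feature idea (not the paper's algorithm or its low-rank Grassmannian extension), the code below fits a simple linear dynamical system to a multichannel segment using PCA for the observation matrix and least squares for the transition matrix; the channel count, state dimension, and random data are assumptions.

```python
import numpy as np

def lds_features(X, n_states=4):
    """Fit a simple linear dynamical system z_{t+1} = A z_t, y_t = C z_t to a
    multichannel signal X (channels x time) and return (A, C) as features."""
    X = X - X.mean(axis=1, keepdims=True)
    # Observation matrix C from the leading principal components of the channels.
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    C = U[:, :n_states]
    Z = C.T @ X                                   # latent state trajectory
    # Transition matrix A by least squares between successive latent states.
    A = Z[:, 1:] @ np.linalg.pinv(Z[:, :-1])
    return A, C

# Hypothetical 22-channel EEG segment, 250 samples.
eeg = np.random.default_rng(0).standard_normal((22, 250))
A, C = lds_features(eeg)
print(A.shape, C.shape)   # (4, 4) (22, 4)
```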
Pape-Haugaard, Louise; Frank, Lars
2011-01-01
A major obstacle to ensuring ubiquitous information is the use of heterogeneous systems in eHealth. The objective of this paper is to illustrate how an architecture for distributed eHealth databases can be designed without losing the characteristic features of traditional sustainable databases. The approach is first to explain the traditional architecture of centralized and homogeneous distributed database computing, then to present a possible architectural framework for obtaining sustainability across disparate systems, i.e. heterogeneous databases, and to conclude with a discussion. It is shown that, by using relaxed ACID properties on a service-oriented architecture, it is possible to achieve the data consistency that is essential for sustainable interoperability.
Database System Design and Implementation for Marine Air-Traffic-Controller Training
2017-06-01
Naval Postgraduate School, Monterey, California. Thesis. Approved for public release; distribution is unlimited. Database System Design and Implementation for Marine Air-Traffic-Controller Training. This project focused on the design, development, and implementation of a centralized
India's Computational Biology Growth and Challenges.
Chakraborty, Chiranjib; Bandyopadhyay, Sanghamitra; Agoramoorthy, Govindasamy
2016-09-01
India's computational science is growing swiftly due to the outburst of internet and information technology services. The bioinformatics sector of India has been transforming rapidly by creating a competitive position in global bioinformatics market. Bioinformatics is widely used across India to address a wide range of biological issues. Recently, computational researchers and biologists are collaborating in projects such as database development, sequence analysis, genomic prospects and algorithm generations. In this paper, we have presented the Indian computational biology scenario highlighting bioinformatics-related educational activities, manpower development, internet boom, service industry, research activities, conferences and trainings undertaken by the corporate and government sectors. Nonetheless, this new field of science faces lots of challenges.
Effects of greening and community reuse of vacant lots on crime
Kondo, Michelle; Hohl, Bernadette; Han, SeungHoon; Branas, Charles
2016-01-01
The Youngstown Neighborhood Development Corporation initiated a ‘Lots of Green’ programme to reuse vacant land in 2010. We performed a difference-in-differences analysis of the effects of this programme on crime in and around newly treated lots, in comparison to crimes in and around randomly selected and matched, untreated vacant lot controls. The effects of two types of vacant lot treatments on crime were tested: a cleaning and greening ‘stabilisation’ treatment and a ‘community reuse’ treatment mostly involving community gardens. The combined effects of both types of vacant lot treatments were also tested. After adjustment for various sociodemographic factors, linear and Poisson regression models demonstrated statistically significant reductions in all crime classes for at least one lot treatment type. Regression models adjusted for spatial autocorrelation found the most consistent significant reductions in burglaries around stabilisation lots, and in assaults around community reuse lots. Spill-over crime reduction effects were found in contiguous areas around newly treated lots. Significant increases in motor vehicle thefts around both types of lots were also found after they had been greened. Community-initiated vacant lot greening may have a greater impact on reducing more serious, violent crimes. PMID:28529389
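A minimal difference-in-differences sketch in the spirit of the analysis described above, using an ordinary least squares regression with a treated-by-post interaction on made-up monthly crime counts; the data, effect size, and variable names are invented, and no spatial adjustment or sociodemographic controls are included.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200

# Hypothetical monthly crime counts around greened (treated) and control vacant lots,
# before and after the intervention.
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = greened lot, 0 = untreated control lot
    "post": rng.integers(0, 2, n),      # 1 = after the intervention period
})
df["crimes"] = (5 - 1.5 * df["treated"] * df["post"]
                + rng.normal(0, 1, n)).clip(lower=0)

# The coefficient on treated:post is the difference-in-differences estimate.
model = smf.ols("crimes ~ treated + post + treated:post", data=df).fit()
print(model.params["treated:post"])
```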
Effects of distributed database modeling on evaluation of transaction rollbacks
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. The effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks, is studied in a partitioned distributed database system. Six probabilistic models are developed, along with expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results so obtained are compared to results from simulation. It is concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.
Effects of distributed database modeling on evaluation of transaction rollbacks
NASA Technical Reports Server (NTRS)
Mukkamala, Ravi
1991-01-01
Data distribution, degree of data replication, and transaction access patterns are key factors in determining the performance of distributed database systems. In order to simplify the evaluation of performance measures, database designers and researchers tend to make simplistic assumptions about the system. Here, researchers investigate the effect of modeling assumptions on the evaluation of one such measure, the number of transaction rollbacks in a partitioned distributed database system. The researchers developed six probabilistic models and expressions for the number of rollbacks under each of these models. Essentially, the models differ in terms of the available system information. The analytical results obtained are compared to results from simulation. It was concluded that most of the probabilistic models yield overly conservative estimates of the number of rollbacks. The effect of transaction commutativity on system throughput is also grossly underestimated when such models are employed.
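The abstract does not spell out the six models, but the general point (that the assumed data-access distribution changes the estimated number of rollbacks) can be illustrated with a small Monte Carlo sketch. The conflict rule, parameters, and access distributions below are hypothetical, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_rollbacks(n_items=1000, n_txn=200, items_per_txn=5,
                       skew=None, trials=500):
    """Count transactions that conflict with an earlier concurrent transaction
    (and would therefore roll back) under a given access distribution."""
    if skew is None:
        p = np.full(n_items, 1.0 / n_items)           # uniform-access assumption
    else:
        w = 1.0 / np.arange(1, n_items + 1) ** skew   # Zipf-like hot spots
        p = w / w.sum()
    total = 0
    for _ in range(trials):
        locked = set()
        for _ in range(n_txn):
            items = rng.choice(n_items, size=items_per_txn, replace=False, p=p)
            if locked.intersection(items):
                total += 1             # conflict -> rollback
            else:
                locked.update(items)
    return total / trials

print("uniform access assumption :", expected_rollbacks())
print("skewed (hot-spot) access  :", expected_rollbacks(skew=1.0))
```

The gap between the two printed estimates is the kind of modeling-assumption effect the paper quantifies analytically.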
A Hybrid Data Mining Approach for Credit Card Usage Behavior Analysis
NASA Astrophysics Data System (ADS)
Tsai, Chieh-Yuan
The credit card is one of the most popular e-payment approaches in current online e-commerce. To retain valuable customers, card issuers invest a lot of money in maintaining good relationships with their customers. Although several efforts have been made to study card usage motivation, few studies focus on credit card usage behavior analysis when time periods change from t to t+1. To address this issue, an integrated data mining approach is proposed in this paper. First, customer profiles and transaction data at time period t are retrieved from databases. Second, a LabelSOM neural network groups customers into segments and identifies critical characteristics for each group. Third, a fuzzy decision tree algorithm is used to construct usage behavior rules for interesting customer groups. Finally, these rules are used to analyze the behavior changes between time periods t and t+1. An implementation case using a practical credit card database provided by a commercial bank in Taiwan is presented to show the benefits of the proposed framework.
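A minimal sketch of the pipeline's shape, using scikit-learn's KMeans and a crisp decision tree as stand-ins for the paper's LabelSOM and fuzzy decision tree; the synthetic customer features are invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(1)

# Hypothetical period-t profile/transaction features:
# [monthly spend, number of transactions, revolving balance ratio]
X_t = np.column_stack([
    rng.gamma(2.0, 500.0, 300),
    rng.poisson(12, 300),
    rng.beta(2, 5, 300),
])

# Step 2 stand-in: segment customers (KMeans instead of LabelSOM).
segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_t)

# Step 3 stand-in: describe an interesting segment with readable rules
# (a crisp decision tree instead of a fuzzy one).
target = (segments == segments[np.argmax(X_t[:, 0])]).astype(int)   # "big spenders"
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X_t, target)
print(export_text(tree, feature_names=["spend", "n_txn", "revolve_ratio"]))

# Step 4 idea: apply the same rules to period t+1 data and compare which
# customers change segment membership between t and t+1.
```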
Advanced overlay: sampling and modeling for optimized run-to-run control
NASA Astrophysics Data System (ADS)
Subramany, Lokesh; Chung, WoongJae; Samudrala, Pavan; Gao, Haiyong; Aung, Nyan; Gomez, Juan Manuel; Gutjahr, Karsten; Park, DongSuk; Snow, Patrick; Garcia-Medina, Miguel; Yap, Lipkong; Demirer, Onur Nihat; Pierson, Bill; Robinson, John C.
2016-03-01
In recent years, overlay (OVL) control schemes have become more complicated in order to meet the ever-shrinking margins of advanced technology nodes. As a result, new challenges must be addressed for effective run-to-run OVL control. This work addresses two of these challenges with new advanced analysis techniques: (1) sampling optimization for run-to-run control and (2) the bias-variance trade-off in modeling. The first challenge in a high-order OVL control strategy is to optimize the number of measurements and their locations on the wafer, so that the "sample plan" of measurements provides high-quality information about the OVL signature on the wafer with acceptable metrology throughput. We solve this trade-off between accuracy and throughput by using a smart sampling scheme which utilizes various design-based and data-based metrics to increase model accuracy and reduce model uncertainty while avoiding wafer-to-wafer and within-wafer measurement noise caused by metrology, scanner, or process. This sort of sampling scheme, combined with an advanced field-by-field extrapolated modeling algorithm, helps to maximize model stability and minimize on-product overlay (OPO). Second, the use of higher-order overlay models means more degrees of freedom, which enables increased capability to correct for complicated overlay signatures, but also increases sensitivity to process- or metrology-induced noise. This is also known as the bias-variance trade-off. A high-order model that minimizes the bias between the modeled and raw overlay signature on a single wafer will also have higher variation from wafer to wafer or lot to lot, unless an advanced modeling approach is used. In this paper, we characterize the bias-variance trade-off to find the optimal scheme. The sampling and modeling solutions proposed in this study are validated by advanced process control (APC) simulations to estimate run-to-run performance, by lot-to-lot and wafer-to-wafer model term monitoring to estimate stability, and ultimately by high-volume manufacturing tests that monitor OPO using densely measured OVL data.
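The bias-variance trade-off described above can be sketched numerically: fitting low- and high-order polynomials to a noisy, hypothetical per-wafer overlay signature shows the higher-order model tracking a single wafer more closely while varying more from wafer to wafer. The signature, noise level, and model orders below are illustrative, not the study's actual models.

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.linspace(-1, 1, 25)                       # field positions across a wafer
true_signature = 5 + 8 * x - 6 * x**2            # nm, hypothetical overlay signature

def fit_many_wafers(order, n_wafers=200, noise_nm=2.0):
    """Fit one polynomial per wafer; report bias and wafer-to-wafer variance."""
    fits = []
    for _ in range(n_wafers):
        y = true_signature + rng.normal(0.0, noise_nm, x.size)   # metrology noise
        coeffs = np.polyfit(x, y, order)
        fits.append(np.polyval(coeffs, x))
    fits = np.array(fits)
    bias = np.mean(np.abs(fits.mean(axis=0) - true_signature))   # systematic error
    variance = fits.var(axis=0).mean()                           # wafer-to-wafer spread
    return bias, variance

for order in (1, 2, 6):
    b, v = fit_many_wafers(order)
    print(f"order {order}: mean |bias| = {b:.3f} nm, variance = {v:.3f} nm^2")
```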
Photometric model of diffuse surfaces described as a distribution of interfaced Lambertian facets.
Simonot, Lionel
2009-10-20
The Lambertian model for diffuse reflection is widely used for the sake of its simplicity. Nevertheless, this model is known to be inaccurate in describing many real-world objects, including those that present a matte surface. To overcome this difficulty, we propose a photometric model in which the surfaces are described as a distribution of facets, each consisting of a flat interface on a Lambertian background. Compared to the Lambertian model, it includes two additional physical parameters: an interface roughness parameter and the ratio between the refractive indices of the background binder and of the upper medium. The Torrance-Sparrow model (a distribution of strictly specular facets) and the Oren-Nayar model (a distribution of strictly Lambertian facets) appear as special cases.
Does the perception that stress affects health matter? The association with health and mortality.
Keller, Abiola; Litzelman, Kristin; Wisk, Lauren E; Maddox, Torsheika; Cheng, Erika Rose; Creswell, Paul D; Witt, Whitney P
2012-09-01
This study sought to examine the relationship among the amount of stress, the perception that stress affects health, and health and mortality outcomes in a nationally representative sample of U.S. adults. Data from the 1998 National Health Interview Survey were linked to prospective National Death Index mortality data through 2006. Separate logistic regression models were used to examine the factors associated with current health status and psychological distress. Cox proportional hazard models were used to determine the impact of perceiving that stress affects health on all-cause mortality. Each model specifically examined the interaction between the amount of stress and the perception that stress affects health, controlling for sociodemographic, health behavior, and access to health care factors. 33.7% of nearly 186 million (unweighted n = 28,753) U.S. adults perceived that stress affected their health a lot or to some extent. Both higher levels of reported stress and the perception that stress affects health were independently associated with an increased likelihood of worse health and mental health outcomes. The amount of stress and the perception that stress affects health interacted such that those who reported a lot of stress and that stress impacted their health a lot had a 43% increased risk of premature death (HR = 1.43, 95% CI [1.2, 1.7]). High amounts of stress and the perception that stress impacts health are each associated with poor health and mental health. Individuals who perceived that stress affects their health and reported a large amount of stress had an increased risk of premature death. PsycINFO Database Record (c) 2012 APA, all rights reserved.
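The modelling approach described in the abstract (regression with a stress-by-perception interaction) can be sketched as follows with statsmodels; the data frame is synthetic and the variable names are hypothetical, not those of the National Health Interview Survey.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 5000
df = pd.DataFrame({
    "high_stress": rng.integers(0, 2, n),         # reports a lot of stress
    "perceives_effect": rng.integers(0, 2, n),    # believes stress harms health
    "age": rng.integers(18, 85, n),
})
# Synthetic outcome with a positive interaction between the two exposures.
lin_pred = (-2.0 + 0.4 * df.high_stress + 0.3 * df.perceives_effect
            + 0.5 * df.high_stress * df.perceives_effect + 0.01 * (df.age - 50))
df["poor_health"] = rng.binomial(1, 1 / (1 + np.exp(-lin_pred)))

model = smf.logit("poor_health ~ high_stress * perceives_effect + age",
                  data=df).fit(disp=0)
print(np.exp(model.params))   # odds ratios, including the interaction term
```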
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-23
... which it handles and executes odd lot and mixed lot orders. If CBSX is not displaying the NBBO and... mixed lot orders will be handled and executed in a more consistent manner with round lot orders. An "odd lot" order is an order for a quantity that is less than 100. A "mixed lot" order is an order for a...
Daniels, Austin L; Randolph, Theodore W
2018-05-01
The presence of subvisible particles in formulations of therapeutic proteins is a risk factor for adverse immune responses. Although the immunogenic potential of particulate contaminants likely depends on particle structural characteristics (e.g., composition, size, and shape), exact structure-immunogenicity relationships are unknown. Images recorded by flow imaging microscopy reflect information about particle morphology, but flow microscopy is typically used to determine only particle size distributions, neglecting information on particle morphological features that may be immunologically relevant. We recently developed computational techniques that utilize the Kullback-Leibler divergence and multidimensional scaling to compare the morphological properties of particles in sets of flow microscopy images. In the current work, we combined these techniques with expectation maximization cluster analyses and used them to compare flow imaging microscopy data sets that had been collected by the U.S. Food and Drug Administration after severe adverse drug reactions (including 7 fatalities) were observed in patients who had been administered some lots of peginesatide formulations. Flow microscopy images of particle populations found in the peginesatide lots associated with severe adverse reactions in patients were readily distinguishable from images of particles in lots where severe adverse reactions did not occur. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.
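A minimal sketch of the analysis pipeline named above: pairwise Kullback-Leibler divergences between per-lot particle-morphology histograms, multidimensional scaling of the symmetrised divergence matrix, and Gaussian-mixture (expectation-maximisation) clustering. The histograms are synthetic stand-ins for flow-imaging feature distributions, and the two-cluster setup is purely illustrative.

```python
import numpy as np
from scipy.stats import entropy
from sklearn.manifold import MDS
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

def lot_histogram(shift):
    """Synthetic morphology-feature histogram (e.g. aspect ratio) for one lot."""
    samples = rng.normal(1.5 + shift, 0.3, 2000)
    hist, _ = np.histogram(samples, bins=30, range=(0.5, 3.5), density=True)
    return hist + 1e-9            # avoid zero bins for the KL divergence

lots = [lot_histogram(0.0) for _ in range(8)] + [lot_histogram(0.6) for _ in range(4)]
n = len(lots)

# Symmetrised KL divergence between every pair of lots.
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = 0.5 * (entropy(lots[i], lots[j]) + entropy(lots[j], lots[i]))

# Embed lots in 2-D with MDS, then cluster with an EM-fitted Gaussian mixture.
embedding = MDS(n_components=2, dissimilarity="precomputed",
                random_state=0).fit_transform(D)
labels = GaussianMixture(n_components=2, random_state=0).fit_predict(embedding)
print(labels)   # lots with shifted morphology should fall into their own cluster
```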
Jones, Gabrielle; Pihier, Nathalie; Vanbockstael, Caroline; Le Hello, Simon; Cadel Six, Sabrina; Fournet, Nelly; Jourdan-da Silva, Nathalie
2016-01-01
A prolonged outbreak of Salmonella enterica serotype Enteritidis occurred in northern France between December 2014 and April 2015. Epidemiological investigations following the initial notification on 30 December 2014 of five cases of salmonellosis (two confirmed S. Enteritidis) in young children residing in the Somme department revealed that all cases frequented the same food bank A. Further epidemiological, microbiological and food trace-back investigations indicated frozen beefburgers as the source of the outbreak and the suspected lot originating from Poland was recalled on 22 January 2015. On 2 March 2015 a second notification of S. Enteritidis cases in the Somme reinitiated investigations that confirmed a link with food bank A and with consumption of frozen beefburgers from the same Polish producer. In the face of a possible persistent source of contamination, all frozen beefburgers distributed by food bank A and from the same origin were blocked on 3 March 2015. Microbiological analyses confirmed contamination by S. Enteritidis of frozen beefburgers from a second lot remaining in cases’ homes. A second recall was initiated on 6 March 2015 and all frozen beefburgers from the Polish producer remain blocked after analyses identified additional contaminated lots over several months of production. PMID:27748250
Distribution Characteristics of Air-Bone Gaps – Evidence of Bias in Manual Audiometry
Margolis, Robert H.; Wilson, Richard H.; Popelka, Gerald R.; Eikelboom, Robert H.; Swanepoel, De Wet; Saly, George L.
2015-01-01
Objective Five databases were mined to examine distributions of air-bone gaps obtained by automated and manual audiometry. Differences in distribution characteristics were examined for evidence of influences unrelated to the audibility of test signals. Design The databases provided air- and bone-conduction thresholds that permitted examination of air-bone gap distributions that were free of ceiling and floor effects. Cases with conductive hearing loss were eliminated based on air-bone gaps, tympanometry, and otoscopy, when available. The analysis is based on 2,378,921 threshold determinations from 721,831 subjects from five databases. Results Automated audiometry produced air-bone gaps that were normally distributed suggesting that air- and bone-conduction thresholds are normally distributed. Manual audiometry produced air-bone gaps that were not normally distributed and show evidence of biasing effects of assumptions of expected results. In one database, the form of the distributions showed evidence of inclusion of conductive hearing losses. Conclusions Thresholds obtained by manual audiometry show tester bias effects from assumptions of the patient’s hearing loss characteristics. Tester bias artificially reduces the variance of bone-conduction thresholds and the resulting air-bone gaps. Because the automated method is free of bias from assumptions of expected results, these distributions are hypothesized to reflect the true variability of air- and bone-conduction thresholds and the resulting air-bone gaps. PMID:26627469
Accounting and Accountability for Distributed and Grid Systems
NASA Technical Reports Server (NTRS)
Thigpen, William; McGinnis, Laura F.; Hacker, Thomas J.
2001-01-01
While the advent of distributed and grid computing systems will open new opportunities for scientific exploration, the reality of such implementations could prove to be a system administrator's nightmare. A lot of effort is being spent on identifying and resolving the obvious problems of security, scheduling, authentication and authorization. Lurking in the background, though, are the largely unaddressed issues of accountability and usage accounting: (1) mapping resource usage to resource users; (2) defining usage economies or methods for resource exchange; (3) describing implementation standards that minimize and compartmentalize the tasks required for a site to participate in a grid.
Comprehensive analysis of statistical and model-based overlay lot disposition methods
NASA Astrophysics Data System (ADS)
Crow, David A.; Flugaur, Ken; Pellegrini, Joseph C.; Joubert, Etienne L.
2001-08-01
Overlay lot disposition algorithms in lithography occupy some of the highest-leverage decision points in the microelectronic manufacturing process. In a typical large-volume sub-0.18 μm fab the lithography lot disposition decision is made about 500 times per day. Each decision will send a lot of wafers either to the next irreversible process step or back to rework in an attempt to improve unacceptable overlay performance. In the case of rework, the intention is that the reworked lot will represent better yield (and thus more value) than the original lot and that the enhanced lot value will exceed the cost of rework. Given that the estimated cost of reworking a critical-level lot is around $10,000 (based upon the opportunity cost of consuming time on a state-of-the-art DUV scanner), we are faced with the implication that the lithography lot disposition decision process impacts up to $5 million per day in decisions. That means that a 1% error rate in this decision process represents over $18 million per year lost in profit for a representative site. Remarkably, despite this huge leverage, the lithography lot disposition decision algorithm usually receives minimal attention. In many cases, this lack of attention has resulted in the retention of sub-optimal algorithms from earlier process generations and a significant negative impact on the economic output of many high-volume manufacturing sites. An ideal lot-dispositioning algorithm would be one that results in the best economic decision being made every time: lots would only be reworked where the expected value (EV) of the reworked lot minus the expected value of the original lot exceeds the cost of the rework: EV(reworked lot) - EV(original lot) > COST(rework process). Calculating the above expected values in real time has generally been deemed too complicated and maintenance-intensive to be practical for fab operations, so a simplified rule is typically used.
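The disposition rule quoted above translates directly into code. The sketch below is schematic only: the toy yield model, dollar figures, and function names are hypothetical, not those used in the paper.

```python
def expected_lot_value(overlay_error_nm, wafers=25, die_per_wafer=600,
                       value_per_good_die=20.0, spec_nm=30.0):
    """Toy yield model: yield falls off as measured overlay approaches spec."""
    yield_fraction = max(0.0, 1.0 - (overlay_error_nm / spec_nm) ** 2)
    return wafers * die_per_wafer * value_per_good_die * yield_fraction

def should_rework(measured_nm, expected_after_rework_nm, rework_cost=10_000.0):
    """Rework only if EV(reworked lot) - EV(original lot) > COST(rework)."""
    gain = (expected_lot_value(expected_after_rework_nm)
            - expected_lot_value(measured_nm))
    return gain > rework_cost

print(should_rework(measured_nm=28.0, expected_after_rework_nm=12.0))  # True
print(should_rework(measured_nm=13.0, expected_after_rework_nm=12.0))  # False
```

The paper's point is that fabs typically replace this expected-value comparison with a simpler threshold rule, at a measurable economic cost.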
Umeh, Rich; Oguche, Stephen; Oguonu, Tagbo; Pitmang, Simon; Shu, Elvis; Onyia, Jude-Tony; Daniyam, Comfort A; Shwe, David; Ahmad, Abdullahi; Jongert, Erik; Catteau, Grégory; Lievens, Marc; Ofori-Anyinam, Opokua; Leach, Amanda
2014-11-12
For regulatory approval, consistency in manufacturing of vaccine lots is expected to be demonstrated in confirmatory immunogenicity studies using two-sided equivalence trials. This randomized, double-blind study (NCT01323972) assessed consistency of three RTS,S/AS01 malaria vaccine batches formulated from commercial-scale purified antigen bulk lots in terms of anti-CS-responses induced. Healthy children aged 5-17 months were randomized (1:1:1:1) to receive RTS,S/AS01 at 0-1-2 months from one of three commercial-scale purified antigen bulk lots (1600 litres-fermentation scale; commercial-scale lots), or a comparator vaccine batch made from pilot-scale purified antigen bulk lot (20 litres-fermentation scale; pilot-scale lot). The co-primary objectives were to first demonstrate consistency of antibody responses against circumsporozoite (CS) protein at one month post-dose 3 for the three commercial-scale lots and second demonstrate non-inferiority of anti-CS antibody responses at one month post-dose 3 for the commercial-scale lots compared to the pilot-scale lot. Safety and reactogenicity were evaluated as secondary endpoints. One month post-dose-3, anti-CS antibody geometric mean titres (GMT) for the 3 commercial scale lots were 319.6 EU/ml (95% confidence interval (CI): 268.9-379.8), 241.4 EU/ml (207.6-280.7), and 302.3 EU/ml (259.4-352.3). Consistency for the RTS,S/AS01 commercial-scale lots was demonstrated as the two-sided 95% CI of the anti-CS antibody GMT ratio between each pair of lots was within the range of 0.5-2.0. GMT of the pooled commercial-scale lots (285.8 EU/ml (260.7-313.3)) was non-inferior to the pilot-scale lot (271.7 EU/ml (228.5-323.1)). Each RTS,S/AS01 lot had an acceptable tolerability profile, with infrequent reports of grade 3 solicited symptoms. No safety signals were identified and no serious adverse events were considered related to vaccination. RTS,S/AS01 lots formulated from commercial-scale purified antigen bulk batches induced a consistent anti-CS antibody response, and the anti-CS GMT of pooled commercial-scale lots was non-inferior to that of a lot formulated from a pilot-scale antigen bulk batch. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.
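The equivalence criterion used in the study (the two-sided 95% CI of the anti-CS GMT ratio between lots must lie within 0.5-2.0) can be sketched on log-transformed titres as follows; the simulated titres and sample sizes are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

def gmt_ratio_ci(titres_a, titres_b, alpha=0.05):
    """Approximate 95% CI for the GMT ratio of two lots, on log titres."""
    la, lb = np.log(titres_a), np.log(titres_b)
    diff = la.mean() - lb.mean()
    se = np.sqrt(la.var(ddof=1) / la.size + lb.var(ddof=1) / lb.size)
    df = la.size + lb.size - 2
    t = stats.t.ppf(1 - alpha / 2, df)
    return np.exp(diff - t * se), np.exp(diff + t * se)

# Simulated anti-CS titres (EU/ml) for two commercial-scale lots.
lot1 = rng.lognormal(np.log(320), 0.8, 200)
lot2 = rng.lognormal(np.log(240), 0.8, 200)

lo, hi = gmt_ratio_ci(lot1, lot2)
print(f"GMT ratio CI: {lo:.2f} - {hi:.2f}; equivalent: {lo >= 0.5 and hi <= 2.0}")
```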
Private database queries based on counterfactual quantum key distribution
NASA Astrophysics Data System (ADS)
Zhang, Jia-Li; Guo, Fen-Zhuo; Gao, Fei; Liu, Bin; Wen, Qiao-Yan
2013-08-01
Based on the fundamental concept of quantum counterfactuality, we propose a protocol to achieve quantum private database queries, which is a theoretical study of how counterfactuality can be employed beyond counterfactual quantum key distribution (QKD). By adding crucial detecting apparatus to the device of QKD, the privacy of both the distrustful user and the database owner can be guaranteed. Furthermore, the proposed private-database-query protocol makes full use of the low efficiency in the counterfactual QKD, and by adjusting the relevant parameters, the protocol obtains excellent flexibility and extensibility.
Sandiford, P
1993-09-01
In recent years Lot quality assurance sampling (LQAS), a method derived from production-line industry, has been advocated as an efficient means to evaluate the coverage rates achieved by child immunization programmes. This paper examines the assumptions on which LQAS is based and the effect that these assumptions have on its utility as a management tool. It shows that the attractively low sample sizes used in LQAS are achieved at the expense of specificity unless unrealistic assumptions are made about the distribution of coverage rates amongst the immunization programmes to which the method is applied. Although it is a very sensitive test and its negative predictive value is probably high in most settings, its specificity and positive predictive value are likely to be low. The implications of these strengths and weaknesses with regard to management decision-making are discussed.
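The trade-off discussed above can be made concrete by computing the operating characteristics of an LQAS plan from the binomial distribution. The sample size, decision rule, and coverage thresholds below are illustrative values, not the paper's.

```python
from scipy.stats import binom

# Illustrative LQAS plan: sample n children and "accept" the programme's
# coverage if at most d of them are unvaccinated (hypothetical values).
n, d = 19, 5
p_good = 0.10   # unvaccinated fraction in a truly adequate programme
p_bad = 0.30    # unvaccinated fraction in an inadequate programme

sensitivity = binom.cdf(d, n, p_good)        # accept an adequate programme
specificity = 1 - binom.cdf(d, n, p_bad)     # reject an inadequate programme
print(f"sensitivity = {sensitivity:.3f}")    # very high, as the paper notes
print(f"specificity = {specificity:.3f}")    # much lower for borderline programmes

# Predictive values then depend on how coverage is distributed across the
# programmes being assessed, which is exactly the assumption the paper examines.
```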
NASA Astrophysics Data System (ADS)
Tsou, Jia-Chi; Hejazi, Seyed Reza; Rasti Barzoki, Morteza
2012-12-01
The economic production quantity (EPQ) model is a well-known and commonly used inventory control technique. However, the model is built on an unrealistic assumption that all the produced items need to be of perfect quality. Having relaxed this assumption, some researchers have studied the effects of the imperfect products on the inventory control techniques. This article, thus, attempts to develop an EPQ model with continuous quality characteristic and rework. To this end, this study assumes that a produced item follows a general distribution pattern, with its quality being perfect, imperfect or defective. The analysis of the model developed indicates that there is an optimal lot size, which generates minimum total cost. Moreover, the results show that the optimal lot size of the model equals that of the classical EPQ model in case imperfect quality percentage is zero or even close to zero.
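For orientation, the classical EPQ lot size (to which the article's model reduces when the imperfect-quality fraction approaches zero) is easy to compute; the cost figures below are arbitrary, and the simple rework charge is an illustration rather than the article's actual cost function.

```python
import numpy as np

def classic_epq(demand, setup_cost, holding_cost, production_rate):
    """Classical EPQ lot size: Q* = sqrt(2*K*D / (h*(1 - D/P)))."""
    return np.sqrt(2 * setup_cost * demand /
                   (holding_cost * (1 - demand / production_rate)))

def annual_cost(q, demand, setup_cost, holding_cost, production_rate,
                imperfect_fraction=0.0, rework_cost_per_unit=0.0):
    """Setup + holding + a simple per-unit rework charge for imperfect items
    (an illustration only; the article's model is more detailed)."""
    setup = setup_cost * demand / q
    holding = 0.5 * holding_cost * q * (1 - demand / production_rate)
    rework = rework_cost_per_unit * imperfect_fraction * demand
    return setup + holding + rework

D, K, h, P = 12_000, 400.0, 3.0, 36_000    # hypothetical demand, costs, capacity
q_star = classic_epq(D, K, h, P)
print(f"classical EPQ lot size: {q_star:.0f} units")
print(f"annual cost at Q*, 5% imperfect: {annual_cost(q_star, D, K, h, P, 0.05, 2.0):.0f}")
```

Because the rework charge in this toy cost function does not depend on the lot size, the minimising lot size stays at the classical EPQ value, mirroring the article's limiting case.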
7 CFR 983.52 - Failed lots/rework procedure.
Code of Federal Regulations, 2011 CFR
2011-01-01
..., ARIZONA, AND NEW MEXICO Regulations § 983.52 Failed lots/rework procedure. (a) Substandard pistachios. Each lot of substandard pistachios may be reworked to meet aflatoxin or quality requirements. The... reporting. If a lot fails to meet the aflatoxin and/or the quality requirements of this part, a failed lot...
Surviving the Glut: The Management of Event Streams in Cyberphysical Systems
NASA Astrophysics Data System (ADS)
Buchmann, Alejandro
Alejandro Buchmann is Professor in the Department of Computer Science, Technische Universität Darmstadt, where he heads the Databases and Distributed Systems Group. He received his MS (1977) and PhD (1980) from the University of Texas at Austin. He was an Assistant/Associate Professor at the Institute for Applied Mathematics and Systems IIMAS/UNAM in Mexico, doing research on databases for CAD, geographic information systems, and object-oriented databases. At Computer Corporation of America (later Xerox Advanced Information Systems) in Cambridge, Mass., he worked in the areas of active databases and real-time databases, and at GTE Laboratories, Waltham, in the areas of distributed object systems and the integration of heterogeneous legacy systems. In 1991 he returned to academia and joined T.U. Darmstadt. His current research interests are at the intersection of middleware, databases, event-based distributed systems, ubiquitous computing, and very large distributed systems (P2P, WSN). Much of the current research is concerned with guaranteeing quality of service and reliability properties in these systems, for example, scalability, performance, transactional behaviour, consistency, and end-to-end security. Many research projects involve collaboration with industry and cover a broad spectrum of application domains. Further information can be found at http://www.dvs.tu-darmstadt.de
Olives, Casey; Pagano, Marcello
2013-02-01
Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined. We present Bayesian-LQAS (B-LQAS), an approach to incorporating prior information into the choice of the LQAS sample size and decision rule, and explore its properties through a numerical study. Additionally, we analyse vaccination coverage data from UNICEF's State of the World's Children in 1968-1989 and 2008 to exemplify the performance of LQAS and B-LQAS. Results of our numerical study show that the choice of LQAS sample size and decision rule is sensitive to the distribution of prior information, as well as to individual beliefs about the importance of correct classification. Application of the B-LQAS approach to the UNICEF data improves specificity and PPV in both time periods (1968-1989 and 2008) with minimal reductions in sensitivity and negative predictive value. LQAS is shown to be a robust tool that is not necessarily prone to poor specificity and PPV as previously alleged. In situations where prior or historical data are available, B-LQAS can lead to improvements in expected performance.
Bauermann, Fernando V; Flores, Eduardo F; Falkenberg, Shollie M; Weiblen, Rudi; Ridpath, Julia F
2014-01-01
The detection of an emerging pestivirus species, "HoBi-like virus," in fetal bovine serum (FBS) labeled as U.S. origin, but packaged in Europe, raised concerns that HoBi-like virus may have entered the United States. In the current study, 90 lots of FBS originating in North America (NA) were screened for pestivirus antigen and antibodies. Lots in group 1 (G1, 72 samples) and group 2 (G2, 9 samples) originated in NA and were packaged in the United States. Group 3 (G3) was composed of 9 lots collected in NA and processed in Europe. Lots in G1 were claimed negative for Bovine viral diarrhea virus (BVDV), while lots in G2 and G3 were claimed positive by the commercial processor. All lots in G1 and G2 tested negative by reverse transcription polymerase chain reaction (RT-PCR) using HoBi-like-specific primers. Two G1 lots tested positive by BVDV RT-PCR. One of these was also positive by virus isolation. All G2 lots were positive by BVDV RT-PCR. In addition, four G2 lots were VI positive while 1 lot was antigen-capture enzyme-linked immunosorbent assay (ELISA) positive. Two G3 lots were positive by HoBi-like-specific RT-PCR tests. All lots were negative for HoBi_D32/00 neutralizing antibodies. Seven lots (4 G1; 1 G2; 2 G3) had antibodies against BVDV by virus neutralization and/or antigen-capture ELISA. While there is no evidence of HoBi-like viruses in NA based on tested samples, further studies are required to validate HoBi-like virus-free status and develop means to prevent the spread of HoBi-like virus into NA.
Assessing gull abundance and food availability in urban parking lots
Clark, Daniel E.; Whitney, Jillian J.; MacKenzie, Kenneth G.; Koenen, Kiana K. G.; DeStefano, Stephen
2015-01-01
Feeding birds is a common activity throughout the world; yet, little is known about the extent of feeding gulls in urban areas. We monitored 8 parking lots in central Massachusetts, USA, during the fall and winter of 2011 to 2013 in 4 monitoring sessions to document the number of gulls present, the frequency of human–gull feeding interactions, and the effectiveness of signage and direct interaction in reducing human-provisioned food. Parking lots were divided between “education” and “no-education” lots. In education lots, we erected signs about problems caused when people feed birds and also asked people to stop feeding birds. We did not erect signs or ask people to stop feeding birds at no-education lots. We spent >1,200 hours in parking lots (range = 136 to 200 hours per parking lot), and gulls were counted every 20 minutes. We conducted >4,000 counts, and ring-billed gulls (Larus delawarensis) accounted for 98% of all gulls. Our educational efforts were minimally effective. There were fewer feedings (P = 0.01) in education lots during one of the monitoring sessions but significantly more gulls (P = 0.008) in education lots during 2 monitoring sessions. While there was a marginal decrease (P = 0.055) in the number of feedings after no-education lots were transformed into education lots, there was no difference in gull numbers in these lots (P = 0.16). Education appears to have some influence in reducing the number of people feeding gulls, but our efforts were not able to reduce the number of human feeders or the amount of food enough to influence the number of gulls using parking lots.
Stadler, David; Sulyok, Michael; Schuhmacher, Rainer; Berthiller, Franz; Krska, Rudolf
2018-05-01
Multi-mycotoxin determination by LC-MS is commonly based on external solvent-based or matrix-matched calibration and, if necessary, correction for the method bias. In everyday practice, the method bias (expressed as the apparent recovery R_A), which may be caused by losses during the recovery process and/or signal suppression/enhancement, is evaluated by replicate analysis of a single spiked lot of a matrix. However, R_A may vary for different lots of the same matrix, i.e., lot-to-lot variation, which can result in a higher relative expanded measurement uncertainty (U_r). We applied a straightforward procedure for the calculation of U_r from the within-laboratory reproducibility, which is also called intermediate precision, and the uncertainty of R_A (u_r,RA). To estimate the contribution of the lot-to-lot variation to U_r, the measurement results of one replicate of seven different lots of figs and maize and of seven replicates of a single lot of these matrices, respectively, were used to calculate U_r. The lot-to-lot variation contributed to u_r,RA and thus to U_r for the majority of the 66 evaluated analytes in both figs and maize. The major contributions of the lot-to-lot variation to u_r,RA were differences in analyte recovery in figs and relative matrix effects in maize. U_r estimated from long-term participation in proficiency test schemes was 58%. Provided proper validation, a fit-for-purpose U_r of 50% was proposed for measurement results obtained by an LC-MS-based multi-mycotoxin assay, independent of the concentration of the analytes.
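The way U_r is assembled from the within-laboratory reproducibility and the uncertainty of the apparent recovery can be sketched as a root-sum-of-squares combination with a coverage factor of 2; the numbers are invented, and the authors' exact combination may differ in detail.

```python
import numpy as np

def expanded_uncertainty(rel_sd_intermediate, rel_u_apparent_recovery, k=2.0):
    """Relative expanded uncertainty: U_r = k * sqrt(u_Rw^2 + u_RA^2)."""
    return k * np.sqrt(rel_sd_intermediate**2 + rel_u_apparent_recovery**2)

# Hypothetical relative uncertainties (as fractions):
u_rw = 0.12                 # within-laboratory reproducibility
u_ra_single_lot = 0.10      # u(R_A) from replicates of one spiked lot
u_ra_multi_lot = 0.20       # u(R_A) from single spikes of seven different lots

print(f"U_r, single-lot validation : {expanded_uncertainty(u_rw, u_ra_single_lot):.0%}")
print(f"U_r, with lot-to-lot term  : {expanded_uncertainty(u_rw, u_ra_multi_lot):.0%}")
```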
New model for distributed multimedia databases and its application to networking of museums
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1998-02-01
This paper proposes a new distributed multimedia database system in which the databases storing MPEG-2 videos and/or super-high-definition images are connected together through B-ISDNs, and also describes an example of the networking of museums on the basis of the proposed database system. The proposed database system introduces the new concept of the 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of image databases as one logical database. A user terminal issues a content retrieval request to the retrieval manager located nearest to the user terminal on the network. The retrieved contents are then sent directly through the B-ISDNs to the user terminal from the server which stores the designated contents. In this case, the designated logical database dynamically generates the best combination of retrieval parameters, such as the data transfer path, on the basis of the system environment. The generated retrieval parameters are then used to select the most suitable data transfer path on the network. In this way, the best combination of these parameters is adapted to the distributed multimedia database system.
ARACHNID: A prototype object-oriented database tool for distributed systems
NASA Technical Reports Server (NTRS)
Younger, Herbert; Oreilly, John; Frogner, Bjorn
1994-01-01
This paper discusses the results of a Phase 2 SBIR project sponsored by NASA and performed by MIMD Systems, Inc. A major objective of this project was to develop specific concepts for improved performance in accessing large databases. An object-oriented and distributed approach was used for the general design, while a geographical decomposition was used as a specific solution. The resulting software framework is called ARACHNID. The Faint Source Catalog developed by NASA was the initial database testbed. This is a database of many giga-bytes, where an order of magnitude improvement in query speed is being sought. This database contains faint infrared point sources obtained from telescope measurements of the sky. A geographical decomposition of this database is an attractive approach to dividing it into pieces. Each piece can then be searched on individual processors with only a weak data linkage between the processors being required. As a further demonstration of the concepts implemented in ARACHNID, a tourist information system is discussed. This version of ARACHNID is the commercial result of the project. It is a distributed, networked, database application where speed, maintenance, and reliability are important considerations. This paper focuses on the design concepts and technologies that form the basis for ARACHNID.
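The geographical decomposition idea can be sketched as a simple spatial bucketing of catalogue sources by sky coordinates, so that a cone search only touches the cells (and hence the processors) that can overlap the query region. The grid resolution, record layout, and toy catalogue below are hypothetical, and RA wrap-around near 0/360 degrees is ignored.

```python
import math
from collections import defaultdict

CELL_DEG = 10.0   # hypothetical grid resolution for the decomposition

def cell_of(ra_deg, dec_deg):
    """Map a source to the sky cell (conceptually, the processor) owning it."""
    return (int(ra_deg // CELL_DEG), int((dec_deg + 90.0) // CELL_DEG))

def build_partitions(sources):
    parts = defaultdict(list)
    for src in sources:                        # src = (name, ra, dec, flux)
        parts[cell_of(src[1], src[2])].append(src)
    return parts

def cone_search(parts, ra0, dec0, radius_deg):
    """Search only the cells that can overlap the cone (weak data linkage)."""
    hits = []
    span = int(radius_deg // CELL_DEG) + 1
    c0 = cell_of(ra0, dec0)
    for di in range(-span, span + 1):
        for dj in range(-span, span + 1):
            for name, ra, dec, flux in parts.get((c0[0] + di, c0[1] + dj), []):
                if math.hypot((ra - ra0) * math.cos(math.radians(dec0)),
                              dec - dec0) <= radius_deg:
                    hits.append(name)
    return hits

catalog = [("IRAS-A", 120.3, -15.2, 0.8), ("IRAS-B", 121.1, -14.7, 1.2),
           ("IRAS-C", 300.0, 45.0, 0.5)]
parts = build_partitions(catalog)
print(cone_search(parts, 120.5, -15.0, 1.0))   # ['IRAS-A', 'IRAS-B']
```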
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Lot numbers. 46.20 Section 46.20 Agriculture... Receivers § 46.20 Lot numbers. An identifying lot number shall be assigned to each shipment of produce to be sold on consignment or joint account or for the account of another person or firm. A lot number should...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Lot numbers. 46.20 Section 46.20 Agriculture... Receivers § 46.20 Lot numbers. An identifying lot number shall be assigned to each shipment of produce to be sold on consignment or joint account or for the account of another person or firm. A lot number should...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Lot numbers. 46.20 Section 46.20 Agriculture... Receivers § 46.20 Lot numbers. An identifying lot number shall be assigned to each shipment of produce to be sold on consignment or joint account or for the account of another person or firm. A lot number should...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Lot numbers. 46.20 Section 46.20 Agriculture... Receivers § 46.20 Lot numbers. An identifying lot number shall be assigned to each shipment of produce to be sold on consignment or joint account or for the account of another person or firm. A lot number should...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Lot numbers. 46.20 Section 46.20 Agriculture... Receivers § 46.20 Lot numbers. An identifying lot number shall be assigned to each shipment of produce to be sold on consignment or joint account or for the account of another person or firm. A lot number should...
Content Based Image Retrieval based on Wavelet Transform coefficients distribution
Lamard, Mathieu; Cazuguel, Guy; Quellec, Gwénolé; Bekri, Lynda; Roux, Christian; Cochener, Béatrice
2007-01-01
In this paper we propose a content-based image retrieval method for diagnosis aid in medical fields. We characterize images without extracting significant features, by building signatures from the distribution of wavelet transform coefficients. Retrieval is carried out by computing signature distances between the query and database images. Several signatures are proposed; they use a model of the wavelet coefficient distribution. To enhance results, a weighted distance between signatures is used and an adapted wavelet basis is proposed. Retrieval efficiency is given for different databases, including a diabetic retinopathy, a mammography, and a face database. Results are promising: the retrieval efficiency is higher than 95% in some cases using an optimization process. PMID:18003013
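A minimal sketch of the signature idea using PyWavelets: build a signature from per-subband statistics of the wavelet coefficients and rank database images by a weighted distance to the query signature. The distribution model and adapted wavelet basis of the paper are not reproduced; the statistics, weights, and random images are simplistic stand-ins.

```python
import numpy as np
import pywt

def wavelet_signature(image, wavelet="db2", levels=3):
    """Signature = (mean of |c|, std of c) for each detail subband."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    sig = []
    for detail_level in coeffs[1:]:              # skip the approximation band
        for band in detail_level:                # horizontal, vertical, diagonal
            sig.extend([np.abs(band).mean(), band.std()])
    return np.asarray(sig)

def weighted_distance(sig_a, sig_b, weights=None):
    if weights is None:
        weights = np.ones_like(sig_a)
    return np.sqrt(np.sum(weights * (sig_a - sig_b) ** 2))

rng = np.random.default_rng(6)
query = rng.random((128, 128))
database = {f"img{i}": rng.random((128, 128)) for i in range(5)}

q_sig = wavelet_signature(query)
ranked = sorted(database,
                key=lambda k: weighted_distance(q_sig, wavelet_signature(database[k])))
print(ranked)   # database images ordered by signature distance to the query
```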
78 FR 51742 - Notice of Proposed Withdrawal and Opportunity for Public Meeting; California
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-21
..., inclusive, and SE\\1/ 4\\NE\\1/4\\; sec. 26, lots 11, 12, and 13, S\\1/2\\SE\\1/4\\NE\\1/4\\, and N\\1/ 2\\NE\\1/4\\SE\\1/4..., lots 13 to 16, inclusive, and 18 to 22, inclusive, and a portion of lot 8 as described in the Donation... platted. sec. 19, lots 11, 13, and 16 to 19, inclusive; sec. 20, lot 4; sec. 29, lots 7 and 11; sec. 32...
Design considerations, architecture, and use of the Mini-Sentinel distributed data system.
Curtis, Lesley H; Weiner, Mark G; Boudreau, Denise M; Cooper, William O; Daniel, Gregory W; Nair, Vinit P; Raebel, Marsha A; Beaulieu, Nicolas U; Rosofsky, Robert; Woodworth, Tiffany S; Brown, Jeffrey S
2012-01-01
We describe the design, implementation, and use of a large, multiorganizational distributed database developed to support the Mini-Sentinel Pilot Program of the US Food and Drug Administration (FDA). As envisioned by the US FDA, this implementation will inform and facilitate the development of an active surveillance system for monitoring the safety of medical products (drugs, biologics, and devices) in the USA. A common data model was designed to address the priorities of the Mini-Sentinel Pilot and to leverage the experience and data of participating organizations and data partners. A review of existing common data models informed the process. Each participating organization designed a process to extract, transform, and load its source data, applying the common data model to create the Mini-Sentinel Distributed Database. Transformed data were characterized and evaluated using a series of programs developed centrally and executed locally by participating organizations. A secure communications portal was designed to facilitate queries of the Mini-Sentinel Distributed Database and transfer of confidential data, analytic tools were developed to facilitate rapid response to common questions, and distributed querying software was implemented to facilitate rapid querying of summary data. As of July 2011, information on 99,260,976 health plan members was included in the Mini-Sentinel Distributed Database. The database includes 316,009,067 person-years of observation time, with members contributing, on average, 27.0 months of observation time. All data partners have successfully executed distributed code and returned findings to the Mini-Sentinel Operations Center. This work demonstrates the feasibility of building a large, multiorganizational distributed data system in which organizations retain possession of their data that are used in an active surveillance system. Copyright © 2012 John Wiley & Sons, Ltd.
Integrating a local database into the StarView distributed user interface
NASA Technical Reports Server (NTRS)
Silberberg, D. P.
1992-01-01
A distributed user interface to the Space Telescope Data Archive and Distribution Service (DADS) known as StarView is being developed. The DADS architecture consists of the data archive as well as a relational database catalog describing the archive. StarView is a client/server system in which the user interface is the front-end client to the DADS catalog and archive servers. Users query the DADS catalog from the StarView interface. Query commands are transmitted via a network and evaluated by the database. The results are returned via the network and are displayed on StarView forms. Based on the results, users decide which data sets to retrieve from the DADS archive. Archive requests are packaged by StarView and sent to DADS, which returns the requested data sets to the users. The advantages of distributed client/server user interfaces over traditional one-machine systems are well known. Since users run software on machines separate from the database, the overall client response time is much faster. Also, since the server is free to process only database requests, the database response time is much faster. Disadvantages inherent in this architecture are slow overall database access time due to the network delays, lack of a 'get previous row' command, and that refinements of a previously issued query must be submitted to the database server, even though the domain of values has already been returned by the previous query. This architecture also does not allow users to cross-correlate DADS catalog data with other catalogs. Clearly, a distributed user interface would be more powerful if it overcame these disadvantages. A local database is being integrated into StarView to overcome these disadvantages. When a query is made through a StarView form, which is often composed of fields from multiple tables, it is translated to an SQL query and issued to the DADS catalog. At the same time, a local database table is created to contain the resulting rows of the query. The returned rows are displayed on the form as well as inserted into the local database table. Identical results are produced by reissuing the query to either the DADS catalog or to the local table. Relational databases do not provide a 'get previous row' function because of the inherent complexity of retrieving previous rows of multiple-table joins. However, since this function is easily implemented on a single table, StarView uses the local table to retrieve the previous row. Also, StarView issues subsequent query refinements to the local table instead of the DADS catalog, eliminating the network transmission overhead. Finally, other catalogs can be imported into the local database for cross-correlation with local tables. Overall, it is believed that this is a more powerful architecture for distributed database user interfaces.
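The local-table idea described above can be sketched with Python's built-in sqlite3: the first query goes to the remote catalogue (mocked here as a plain list), its rows are cached in a local table, and subsequent refinements and 'previous row' navigation run against that table only. Table, column, and instrument names are hypothetical.

```python
import sqlite3

def remote_catalog_query(min_exptime):
    """Stand-in for a query issued to the remote DADS catalogue server."""
    rows = [("obs1", "WFPC2", 1200.0), ("obs2", "FOC", 300.0),
            ("obs3", "WFPC2", 2400.0)]
    return [r for r in rows if r[2] >= min_exptime]

local = sqlite3.connect(":memory:")
local.execute("CREATE TABLE results (obs_id TEXT, instrument TEXT, exptime REAL)")

# First query: hit the remote catalogue once, cache the rows locally.
local.executemany("INSERT INTO results VALUES (?, ?, ?)", remote_catalog_query(200.0))
local.commit()

# Refinement of the previous query: no network round trip is needed.
refined = local.execute(
    "SELECT obs_id, exptime FROM results WHERE instrument = ? ORDER BY exptime",
    ("WFPC2",)).fetchall()
print(refined)

# 'Get previous row' is trivial on the single local table: keep an index into
# the ordered result set and step it backwards.
cursor_pos = len(refined) - 1
previous_row = refined[max(cursor_pos - 1, 0)]
print(previous_row)
```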
Torresi, Joseph; Heron, Leon G; Qiao, Ming; Marjason, Joanne; Chambonneau, Laurent; Bouckenooghe, Alain; Boaz, Mark; van der Vliet, Diane; Wallace, Derek; Hutagalung, Yanee; Nissen, Michael D; Richmond, Peter C
2015-09-22
The recombinant yellow fever-17D-dengue virus, live, attenuated, tetravalent dengue vaccine (CYD-TDV) has undergone extensive clinical trials. Here safety and consistency of immunogenicity of phase III manufacturing lots of CYD-TDV were evaluated and compared with a phase II lot and placebo in a dengue-naïve population. Healthy 18-60 year-olds were randomly assigned in a 3:3:3:3:1 ratio to receive three subcutaneous doses of either CYD-TDV from any one of three phase III lots or a phase II lot, or placebo, respectively in a 0, 6, 12 month dosing schedule. Neutralising antibody geometric mean titres (PRNT50 GMTs) for each of the four dengue serotypes were compared in sera collected 28 days after the third vaccination-equivalence among lots was demonstrated if the lower and upper limits of the two-sided 95% CIs of the GMT ratio were ≥0.5 and ≤2.0, respectively. 712 participants received vaccine or placebo and 614 (86%) completed the study; 17 (2.4%) participants withdrew after adverse events. Equivalence of phase III lots was demonstrated for 11 of 12 pairwise comparisons. One of three comparisons for serotype 2 was not statistically equivalent. GMTs for serotype 2 in phase III lots were close to each other (65.9, 44.1 and 58.1, respectively). Phase III lots can be produced in a consistent manner with predictable immune response and acceptable safety profile similar to previously characterised phase II lots. The phase III lots may be considered as not clinically different as statistical equivalence was shown for serotypes 1, 3 and 4 across the phase III lots. For serotype 2, although equivalence was not shown between two lots, the GMTs observed in the phase III lots were consistently higher than those for the phase II lot. As such, in our view, biological equivalence for all serotypes was demonstrated. Copyright © 2015 The Authors. Published by Elsevier Ltd.. All rights reserved.
7 CFR 983.152 - Failed lots/rework procedure.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 8 2012-01-01 2012-01-01 false Failed lots/rework procedure. 983.152 Section 983.152..., ARIZONA, AND NEW MEXICO Rules and Regulations § 983.152 Failed lots/rework procedure. (a) Inshell rework... the lot has been reworked and tested, it fails the aflatoxin test for a second time, the lot may be...
7 CFR 983.152 - Failed lots/rework procedure.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 8 2013-01-01 2013-01-01 false Failed lots/rework procedure. 983.152 Section 983.152..., ARIZONA, AND NEW MEXICO Rules and Regulations § 983.152 Failed lots/rework procedure. (a) Inshell rework... the lot has been reworked and tested, it fails the aflatoxin test for a second time, the lot may be...
7 CFR 983.152 - Failed lots/rework procedure.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 8 2011-01-01 2011-01-01 false Failed lots/rework procedure. 983.152 Section 983.152..., ARIZONA, AND NEW MEXICO Rules and Regulations § 983.152 Failed lots/rework procedure. (a) Inshell rework... the lot has been reworked and tested, it fails the aflatoxin test for a second time, the lot may be...
7 CFR 983.152 - Failed lots/rework procedure.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 8 2014-01-01 2014-01-01 false Failed lots/rework procedure. 983.152 Section 983.152..., ARIZONA, AND NEW MEXICO Rules and Regulations § 983.152 Failed lots/rework procedure. (a) Inshell rework... the lot has been reworked and tested, it fails the aflatoxin test for a second time, the lot may be...
Analysis of Learning Curve Fitting Techniques.
1987-09-01
1986. 15. Neter, John, and others. Applied Linear Regression Models. Homewood, IL: Irwin. 16. SAS User's Guide: Basics, Version 5 Edition. SAS... Linear Regression Techniques (15:23-52). Random errors are assumed to be normally distributed when using ordinary least-squares, according to Johnston... lot estimated by the improvement curve formula. For a more detailed explanation of the ordinary least-squares technique, see Neter, et al., Applied Linear Regression Models.
Computer Aided Synthesis or Measurement Schemes for Telemetry applications
1997-09-02
5.2.5. Frame structure generation The algorithm generating the frame structure should take as inputs the sampling frequency requirements of the channels...these channels into the frame structure. Generally there can be a lot of ways to divide channels among groups. The algorithm implemented in...groups) first. The algorithm uses the function "try_permutation" recursively to distribute channels among the groups, and the function "try_subtable
Powder Bed Layer Characteristics: The Overseen First-Order Process Input
NASA Astrophysics Data System (ADS)
Mindt, H. W.; Megahed, M.; Lavery, N. P.; Holmes, M. A.; Brown, S. G. R.
2016-08-01
Powder Bed Additive Manufacturing offers unique advantages in terms of manufacturing cost, lot size, and product complexity compared to traditional processes such as casting, where a minimum lot size is mandatory to achieve economic competitiveness. Many studies—both experimental and numerical—are dedicated to the analysis of how process parameters such as heat source power, scan speed, and scan strategy affect the final material properties. Apart from the general urge to increase the build rate using thicker powder layers, the coating process and how the powder is distributed on the processing table has received very little attention to date. This paper focuses on the first step of every powder bed build process: Coating the process table. A numerical study is performed to investigate how powder is transferred from the source to the processing table. A solid coating blade is modeled to spread commercial Ti-6Al-4V powder. The resulting powder layer is analyzed statistically to determine the packing density and its variation across the processing table. The results are compared with literature reports using the so-called "rain" models. A parameter study is performed to identify the influence of process table displacement and wiper velocity on the powder distribution. The achieved packing density and how that affects subsequent heat source interaction with the powder bed is also investigated numerically.
Process evaluation distributed system
NASA Technical Reports Server (NTRS)
Moffatt, Christopher L. (Inventor)
2006-01-01
The distributed system includes a database server, an administration module, a process evaluation module, and a data display module. The administration module is in communication with the database server for providing observation criteria information to the database server. The process evaluation module is in communication with the database server for obtaining the observation criteria information from the database server and collecting process data based on the observation criteria information. The process evaluation module utilizes a personal digital assistant (PDA). A data display module in communication with the database server, including a website for viewing collected process data in a desired metrics form, the data display module also for providing desired editing and modification of the collected process data. The connectivity established by the database server to the administration module, the process evaluation module, and the data display module, minimizes the requirement for manual input of the collected process data.
Heterogeneous distributed databases: A case study
NASA Technical Reports Server (NTRS)
Stewart, Tracy R.; Mukkamala, Ravi
1991-01-01
Alternatives are reviewed for accessing distributed heterogeneous databases and a recommended solution is proposed. The current study is limited to the Automated Information Systems Center at the Naval Sea Combat Systems Engineering Station at Norfolk, VA. This center maintains two databases located on Digital Equipment Corporation's VAX computers running under the VMS operating system. The first database, ICMS, resides on a VAX 11/780 and has been implemented using VAX DBMS, a CODASYL-based system. The second database, CSA, resides on a VAX 6460 and has been implemented using the ORACLE relational database management system (RDBMS). Both databases are used for configuration management within the U.S. Navy. Different customer bases are supported by each database. ICMS tracks U.S. Navy ships and major systems (anti-sub, sonar, etc.). Even though the major systems on ships and submarines have totally different functions, some of the equipment within the major systems is common to both ships and submarines.
Database Search Strategies & Tips. Reprints from the Best of "ONLINE" [and]"DATABASE."
ERIC Educational Resources Information Center
Online, Inc., Weston, CT.
Reprints of 17 articles presenting strategies and tips for searching databases online appear in this collection, which is one in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…
CSI Index Of Customer's Satisfaction Applied In The Area Of Public Transport
NASA Astrophysics Data System (ADS)
Poliaková, Adela
2015-06-01
In Western countries, new visions are being applied to quality control for integrated public transport systems. Public transport puts the customer at the centre of decision making aimed at achieving satisfaction with the provided service. Surveys are conducted among customers on an ongoing basis. Many companies have built large databases containing over 30,000 customer responses, which document the current satisfaction levels across the public transport service. Measuring customer satisfaction with a provided service is a difficult task: the quality criteria are not clearly defined, and it is therefore difficult to define customer satisfaction. The paper introduces a possible application of the CSI index under the conditions of public transport in the Slovak Republic.
A new concept for creating the basic map
NASA Astrophysics Data System (ADS)
Parzyński, Zenon
2014-12-01
Many changes have been made to the legislative regulations associated with geodesy during the implementation of the INSPIRE Directive in Poland (among others, the structure of databases). There have also been great changes concerning the basic map and the method of its creation and updating. A new concept for creating the basic map is presented in this article.
Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.
Ze Wang; Chi Man Wong; Feng Wan
2017-07-01
An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method outperforms the Pan-Tompkins (PT) algorithm both in its native form and with an added denoising step.
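The AFD itself is not reproduced here, but the overall shape of the method (derive an energy signal, then pick R-peak locations from it) can be sketched with a simplified band-pass-plus-energy detector in SciPy; the filter settings and the synthetic ECG are illustrative only.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

fs = 360                                    # Hz, as in the MIT-BIH recordings
t = np.arange(0, 10, 1 / fs)

# Synthetic noisy ECG: narrow Gaussian "R waves" every second plus noise.
rng = np.random.default_rng(7)
ecg = sum(np.exp(-((t - k) ** 2) / (2 * 0.01 ** 2)) for k in range(1, 10))
ecg = ecg + 0.2 * rng.normal(size=t.size) + 0.3 * np.sin(2 * np.pi * 0.3 * t)

# Band-pass around the QRS band, then square to obtain an energy envelope.
b, a = butter(3, [5 / (fs / 2), 25 / (fs / 2)], btype="band")
energy = filtfilt(b, a, ecg) ** 2

# Detect R-peaks as prominent maxima of the energy signal.
peaks, _ = find_peaks(energy, height=0.3 * energy.max(), distance=int(0.4 * fs))
print(np.round(t[peaks], 2))   # expected near 1.0 s, 2.0 s, ..., 9.0 s
```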
Exploring the role of cranberry polyphenols in periodontitis: A brief review
Mukherjee, Malancha; Bandyopadhyay, Prasanta; Kundu, Debabrata
2014-01-01
Cranberry juice polyphenols have gained importance over the past decade due to their promising health benefits. The bioactive components, proanthocyanidins, are mainly responsible for the protective effect. A lot has been said about their role in urinary tract infection and other systemic diseases, but little is known about their oral benefits. An extensive search was carried out in the PubMed database using the terms “cranberry polyphenols” and “periodontitis” together. The institute library was also thoroughly scrutinized for all relevant information. Thus, a paper was formulated, the aim of which was to review the effect of the high-molecular-weight cranberry fraction on oral tissues and periodontal diseases. PMID:24872617
a Review on State-Of Face Recognition Approaches
NASA Astrophysics Data System (ADS)
Mahmood, Zahid; Muhammad, Nazeer; Bibi, Nargis; Ali, Tauseef
Automatic Face Recognition (FR) is a challenging task in the field of pattern recognition and, despite the huge amount of research over the past several decades, it still remains an open research problem. This is primarily due to the variability in facial images, such as non-uniform illumination, low resolution, occlusion, and/or variation in pose. Due to its non-intrusive nature, FR is an attractive biometric modality and has gained a lot of attention in the biometric research community. Driven by the enormous number of potential application domains, many algorithms have been proposed for FR. This paper presents an overview of state-of-the-art FR algorithms, focusing on their performance on publicly available databases. We highlight the conditions of the image databases with regard to the recognition rate of each approach. This is useful both as a quick research overview and as a guide for practitioners choosing an algorithm for their specific FR application. To provide a comprehensive survey, the paper divides the FR algorithms into three categories: (1) intensity-based, (2) video-based, and (3) 3D-based FR algorithms. In each category, the most commonly used algorithms and their performance on standard face databases are reported, and a brief critical discussion is carried out.
A novel deep learning algorithm for incomplete face recognition: Low-rank-recovery network.
Zhao, Jianwei; Lv, Yongbiao; Zhou, Zhenghua; Cao, Feilong
2017-10-01
Many methods have been proposed for the recognition of complete face images. However, in real applications, the images to be recognized are usually incomplete, and such recognition is more difficult to achieve. In this paper, a novel convolutional neural network framework, named a low-rank-recovery network (LRRNet), is proposed to overcome this difficulty effectively, inspired by matrix completion and deep learning techniques. The proposed LRRNet first recovers the incomplete face images via matrix completion with a truncated nuclear norm regularization solution, and then extracts some low-rank parts of the recovered images as the filters. With these filters, important features are obtained by means of binarization and histogram algorithms. Finally, these features are classified with classical support vector machines (SVMs). The proposed LRRNet achieves a high face recognition rate for heavily corrupted images, and performs well and efficiently, especially in the case of large databases. Extensive experiments on several benchmark databases demonstrate that the proposed LRRNet performs better than some other excellent robust face recognition methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
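The first stage (recovering an incomplete image by low-rank matrix completion) can be sketched with a standard singular-value soft-thresholding iteration; this is a generic illustration of matrix completion, not the truncated-nuclear-norm solver or the network used in the paper.

```python
import numpy as np

def complete_low_rank(observed, mask, tau=2.0, n_iter=200):
    """Fill missing entries (mask == False) by iterative SVD soft-thresholding."""
    X = np.where(mask, observed, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_low = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt   # shrink singular values
        X = np.where(mask, observed, X_low)                  # keep known pixels fixed
    return X

rng = np.random.default_rng(8)
true_img = rng.random((40, 5)) @ rng.random((5, 40))         # rank-5 "face" patch
mask = rng.random(true_img.shape) > 0.4                      # ~40% of pixels missing
recovered = complete_low_rank(true_img, mask)

err = (np.linalg.norm((recovered - true_img)[~mask])
       / np.linalg.norm(true_img[~mask]))
print(f"relative error on missing pixels: {err:.3f}")
```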
2010-09-01
Snippet (report front matter): 2. SCIL Architecture; 3. Assertions; List of Figures (Figure 1: SCIL architecture); acronym list including LAN (Local Area Network), ODBC (Open Database Connectivity), SCIL (Social-Cultural Content in Language), UMD.
NASA Technical Reports Server (NTRS)
Handley, Thomas H., Jr.; Collins, Donald J.; Doyle, Richard J.; Jacobson, Allan S.
1991-01-01
Viewgraphs on DataHub knowledge-based assistance for science visualization and analysis using large distributed databases. Topics covered include: DataHub functional architecture; data representation; logical access methods; preliminary software architecture; LinkWinds; data knowledge issues; expert systems; and data management.
Fiacco, P. A.; Rice, W. H.
1991-01-01
Computerized medical record systems require structured database architectures for information processing. However, the data must be transferable across heterogeneous platforms and software systems. Client-server architecture allows for distributed processing of information among networked computers and provides the flexibility needed to link diverse systems together effectively. We have incorporated this client-server model with a graphical user interface into an outpatient medical record system, known as SuperChart, for the Department of Family Medicine at SUNY Health Science Center at Syracuse. SuperChart was developed using SuperCard and Oracle. SuperCard uses modern object-oriented programming to support a hypermedia environment. Oracle is a powerful relational database management system that incorporates a client-server architecture. This provides both a distributed database and distributed processing, which improves performance. PMID:1807732
Zhou, Weiqi; Troy, Austin; Grove, Morgan
2008-05-01
This article investigates how remotely sensed lawn characteristics, such as parcel lawn area and parcel lawn greenness, combined with household characteristics, can be used to predict household lawn fertilization practices on private residential lands. This study involves two watersheds, Glyndon and Baisman's Run, in Baltimore County, Maryland, USA. Parcel lawn area and lawn greenness were derived from high-resolution aerial imagery using an object-oriented classification approach. Four indicators of household characteristics, including lot size, square footage of the house, housing value, and housing age were obtained from a property database. Residential lawn care survey data combined with remotely sensed parcel lawn area and greenness data were used to estimate two measures of household lawn fertilization practices, household annual fertilizer nitrogen application amount (N_yr) and household annual fertilizer nitrogen application rate (N_ha_yr). Using multiple regression with multi-model inferential procedures, we found that a combination of parcel lawn area and parcel lawn greenness best predicts N_yr, whereas a combination of parcel lawn greenness and lot size best predicts variation in N_ha_yr. Our analyses show that household fertilization practices can be effectively predicted by remotely sensed lawn indices and household characteristics. This has significant implications for urban watershed managers and modelers.
Bieda, Angela; Hirschfeld, Gerrit; Schönfeld, Pia; Brailovskaia, Julia; Zhang, Xiao Chi; Margraf, Jürgen
2017-04-01
Research into positive aspects of the psyche is growing as psychologists learn more about the protective role of positive processes in the development and course of mental disorders, and about their substantial role in promoting mental health. With increasing globalization, there is strong interest in studies examining positive constructs across cultures. To obtain valid cross-cultural comparisons, measurement invariance for the scales assessing positive constructs has to be established. The current study aims to assess the cross-cultural measurement invariance of questionnaires for 6 positive constructs: Social Support (Fydrich, Sommer, Tydecks, & Brähler, 2009), Happiness (Subjective Happiness Scale; Lyubomirsky & Lepper, 1999), Life Satisfaction (Diener, Emmons, Larsen, & Griffin, 1985), Positive Mental Health Scale (Lukat, Margraf, Lutz, van der Veld, & Becker, 2016), Optimism (revised Life Orientation Test [LOT-R]; Scheier, Carver, & Bridges, 1994) and Resilience (Schumacher, Leppert, Gunzelmann, Strauss, & Brähler, 2004). Participants included German (n = 4,453), Russian (n = 3,806), and Chinese (n = 12,524) university students. Confirmatory factor analyses and measurement invariance testing demonstrated at least partial strong measurement invariance for all scales except the LOT-R and Subjective Happiness Scale. The latent mean comparisons of the constructs indicated differences between national groups. Potential methodological and cultural explanations for the intergroup differences are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
History and Medicine: ex voto as a tool for health and epidemiological surveillance.
Nante, N; Azzolini, E; Troiano, G; Serafini, A; Gentile, A; Messina, G
2016-01-01
An ex voto is an offering to a divinity, a saint, or the Virgin Mary for a mercy received. From the analysis of an ex voto it is possible to obtain a lot of information, and therefore it can be used as a tool for health and epidemiological surveillance, to study morbidity in the past. The aim of this study was the creation of a database to reconstruct epidemiological events and diseases, using ex voto as a source of health surveillance. We chose to study votive pictures using three types of sources: pictures photographed in person, on-line archives, and books and photographic collections. Ex voto were saved on a hard disk, numbered, inserted in a database, and then analyzed using Stata®. A total of 6231 ex voto were collected and catalogued in our database. Ex voto referring to diseases are the most represented (41%), but they have decreased over time. Road accidents (21.4%) show a constant increase, especially with the appearance of cars and motorcycles. Aggressions (5.45%) decrease constantly; warlike accidents (4.44%) had a peak in the period including both world wars; non-professional accidents (10.60%) and accidents at work (3.79%) increase without peaks; maritime accidents (8.88%) show non-uniform ups and downs over time. The database lets us reconstruct epidemiological events of the past which are not deducible from other sources. Our purpose is to expand our source data in space and time in order to perform an interesting comparison between past and present.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-23
...-lot'' order is an order for a quantity that is less than 100. A ``mixed-lot'' order is an order for a... mixed-lot order) are processed in the same manner as are round-lot orders, except (i) If an incoming odd... mixed-lot orders of HOLDRS in accordance with Rule 52.8, as described above. However, the prospectuses...
Can “Cleaned and Greened” Lots Take on the Role of Public Greenspace?
Megan Heckert; Michelle Kondo
2018-01-01
Cities are increasingly greening vacant lots to reduce blight. Such programs could reduce inequities in urban greenspace access, but whether and how greened lots are used remains unclear. We surveyed three hundred greened lots in Philadelphia for signs of use and compared characteristics of used and nonused lots. We found physical signs of use that might be found in...
The use of knowledge-based Genetic Algorithm for starting time optimisation in a lot-bucket MRP
NASA Astrophysics Data System (ADS)
Ridwan, Muhammad; Purnomo, Andi
2016-01-01
In production planning, Material Requirement Planning (MRP) is usually developed on a time-bucket system, in which a period in the MRP represents time, usually a week. MRP has been successfully implemented in Make To Stock (MTS) manufacturing, where production activity must be started before customer demand is received. However, to be implemented successfully in Make To Order (MTO) manufacturing, the conventional MRP requires modification to bring it in line with the real situation. In MTO manufacturing, the delivery schedule to customers is defined strictly and must be fulfilled in order to increase customer satisfaction. On the other hand, the company prefers to keep a constant number of workers, hence the production lot size should be constant as well. Since a bucket in the conventional MRP system represents time, usually a week, a strict delivery schedule cannot be accommodated. Fortunately, there is a modified time-bucket MRP system, called the lot-bucket MRP system, proposed by Casimir in 1999. In the lot-bucket MRP system, a bucket represents a lot, and the lot size is preferably constant. The time to finish each lot can vary depending on the due date of the lot. The starting time of a lot must be determined so that every lot has a reasonable production time. So far there is no formal method to determine the optimum starting times in the lot-bucket MRP system. A trial-and-error process is usually used, but it sometimes causes several lots to have very short production times, making the lot-bucket MRP infeasible to execute. This paper presents the use of a Genetic Algorithm (GA) for optimisation of starting times in a lot-bucket MRP system. Even though the GA is well known as a powerful searching algorithm, improvement is still required to increase its chance of finding the optimum solution in a shorter time. A knowledge-based system has been embedded in the proposed GA as the improvement effort, and the improved GA is shown to have superior performance when solving a lot-bucket MRP problem.
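A minimal sketch of how a GA could search for lot starting times is given below. It assumes each lot must finish by its due date and before the next lot starts, and that the objective is to maximize the shortest production window; the due dates, GA parameters and penalty weight are hypothetical, and the paper's knowledge-based improvements are not reproduced.

```python
import random

# Hypothetical due dates (in days) for 5 constant-size lots.
due = [10, 18, 25, 34, 40]

def decode(chrom):
    """Map a chromosome of fractions in [0, 1] to sorted starting times."""
    return sorted(g * due[-1] for g in chrom)

def fitness(chrom):
    s = decode(chrom)
    finish = s[1:] + [due[-1]]                     # lot i runs from s[i] to finish[i]
    prod_times = [f - st for st, f in zip(s, finish)]
    lateness = sum(max(0.0, f - d) for f, d in zip(finish, due))
    return min(prod_times) - 100.0 * lateness      # maximise the shortest window

def ga(pop_size=40, gens=200, pm=0.1):
    pop = [[random.random() for _ in due] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, len(due))
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < pm:
                child[random.randrange(len(due))] = random.random()   # mutation
            children.append(child)
        pop = elite + children
    best = max(pop, key=fitness)
    return decode(best), fitness(best)

starts, score = ga()
print("starting times:", [round(s, 1) for s in starts], "fitness:", round(score, 2))
```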
Cho, Min-Chul; Kim, So Young; Jeong, Tae-Dong; Lee, Woochang; Chun, Sail; Min, Won-Ki
2014-11-01
Verification of a new reagent lot's suitability is necessary to ensure that results for patients' samples are consistent before and after reagent lot changes. A typical procedure is to measure results of some patients' samples along with quality control (QC) materials. In this study, the results of patients' samples and QC materials across reagent lot changes were analysed. In addition, a recommendation regarding adjustment of the QC target range upon reagent lot changes is proposed. Results of patients' samples and QC materials from 360 reagent lot change events involving 61 analytes and eight instrument platforms were analysed. The between-lot differences for the patients' samples (ΔP) and the QC materials (ΔQC) were tested by Mann-Whitney U tests. The size of the between-lot differences in the QC data was calculated as multiples of the standard deviation (SD). The ΔP and ΔQC values differed significantly in only 7.8% of the reagent lot change events. This frequency was not affected by the assay principle or the QC material source. One SD was proposed as the cutoff for maintaining the pre-existing target range after a reagent lot change. While non-commutable QC material results were infrequent in the present study, our data confirmed that QC materials have limited usefulness when assessing new reagent lots. Also, a 1 SD standard for establishing a new QC target range after a reagent lot change event was proposed. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
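A small sketch of the kind of lot-change check described above, using hypothetical patient and QC values: a two-sided Mann-Whitney U test on the patient results across the lot change, and the QC shift expressed in SD multiples against the proposed 1 SD cutoff.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Hypothetical results for the same patient samples measured on the old and
# new reagent lots, plus QC results on both lots (units arbitrary).
old_lot_patients = np.array([5.1, 6.3, 4.8, 7.2, 5.9, 6.1, 4.5, 5.5])
new_lot_patients = np.array([5.3, 6.4, 4.9, 7.5, 6.0, 6.3, 4.6, 5.8])
qc_old = np.array([6.02, 6.10, 5.95, 6.05])
qc_new = np.array([6.25, 6.30, 6.18, 6.28])
qc_sd = 0.15                      # established SD of the QC target range

# Between-lot difference in patient results (two-sided Mann-Whitney U test).
_, p_value = mannwhitneyu(old_lot_patients, new_lot_patients, alternative="two-sided")

# Between-lot QC shift expressed as multiples of the QC SD.
qc_shift_sd = (qc_new.mean() - qc_old.mean()) / qc_sd

print(f"patient-sample p-value: {p_value:.3f}")
print(f"QC shift: {qc_shift_sd:.2f} SD")
if abs(qc_shift_sd) > 1.0:
    print("QC shift exceeds 1 SD: consider establishing a new target range")
```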
12 CFR 1011.20 - Unlawful sales practices-regulatory provisions.
Code of Federal Regulations, 2013 CFR
2013-01-01
... lot is suitable for a septic tank operation or there is reasonable assurance that the lot can be served by a central sewage system; (3) The lot is legally accessible; and (4) The lot is free from...
12 CFR 1011.20 - Unlawful sales practices-regulatory provisions.
Code of Federal Regulations, 2012 CFR
2012-01-01
... lot is suitable for a septic tank operation or there is reasonable assurance that the lot can be served by a central sewage system; (3) The lot is legally accessible; and (4) The lot is free from...
12 CFR 1011.20 - Unlawful sales practices-regulatory provisions.
Code of Federal Regulations, 2014 CFR
2014-01-01
... lot is suitable for a septic tank operation or there is reasonable assurance that the lot can be served by a central sewage system; (3) The lot is legally accessible; and (4) The lot is free from...
Statistical assessment of DNA extraction reagent lot variability in real-time quantitative PCR
Bushon, R.N.; Kephart, C.M.; Koltun, G.F.; Francy, D.S.; Schaefer, F. W.; Lindquist, H.D. Alan
2010-01-01
Aims: The aim of this study was to evaluate the variability in lots of a DNA extraction kit using real-time PCR assays for Bacillus anthracis, Francisella tularensis and Vibrio cholerae. Methods and Results: Replicate aliquots of three bacteria were processed in duplicate with three different lots of a commercial DNA extraction kit. This experiment was repeated in triplicate. Results showed that cycle threshold values were statistically different among the different lots. Conclusions: Differences in DNA extraction reagent lots were found to be a significant source of variability for qPCR results. Steps should be taken to ensure the quality and consistency of reagents. Minimally, we propose that standard curves should be constructed for each new lot of extraction reagents, so that lot-to-lot variation is accounted for in data interpretation. Significance and Impact of the Study: This study highlights the importance of evaluating variability in DNA extraction procedures, especially when different reagent lots are used. Consideration of this variability in data interpretation should be an integral part of studies investigating environmental samples with unknown concentrations of organisms. © 2010 The Society for Applied Microbiology.
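One way to implement the suggested practice of constructing a standard curve per extraction-reagent lot is sketched below with made-up Ct values; the slope/intercept fit and the usual efficiency formula are shown so that lot-to-lot variation can be carried into quantification.

```python
import numpy as np

# Hypothetical Ct values for a 10-fold dilution series, one curve per
# extraction-reagent lot (all numbers are illustrative only).
log10_copies = np.array([6, 5, 4, 3, 2], dtype=float)
ct_by_lot = {
    "lot_A": np.array([15.1, 18.4, 21.8, 25.2, 28.6]),
    "lot_B": np.array([15.9, 19.3, 22.7, 26.1, 29.5]),
}

curves = {}
for lot, ct in ct_by_lot.items():
    slope, intercept = np.polyfit(log10_copies, ct, 1)
    efficiency = 10.0 ** (-1.0 / slope) - 1.0      # amplification efficiency
    curves[lot] = (slope, intercept)
    print(f"{lot}: slope={slope:.2f}, intercept={intercept:.1f}, efficiency={efficiency:.1%}")

def quantify(ct_value, lot):
    """Copies in an unknown sample, using the curve of the lot actually used."""
    slope, intercept = curves[lot]
    return 10.0 ** ((ct_value - intercept) / slope)

print(f"unknown at Ct 24.0 with lot_A: {quantify(24.0, 'lot_A'):,.0f} copies")
```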
Rita, Ingride; Pereira, Carla; Barros, Lillian; Ferreira, Isabel C F R
2018-08-15
Given increasing consumer demand for novelty, tea companies have been presenting new added-value products such as reserve lots of aromatic plants. Herein, infusions from different lots of three aromatic plants were assessed in terms of phenolic composition (HPLC-DAD-ESI/MS) and antioxidant properties (reducing power, free radical scavenging and lipid peroxidation inhibition capacity). The Cymbopogon citratus (C. citratus; main compound 5-O-caffeoylquinic acid) and Aloysia citrodora (A. citrodora; prevalence of verbascoside) reserve lots revealed higher phenolic compound concentrations than the respective standard lots. The Thymus × citriodorus (T. citriodorus; main compound rosmarinic acid) standard lot presented higher amounts of phenolic acids than the reserve lot; nonetheless, total flavonoids and phenolic compounds were not significantly different. The differences in antioxidant activity between the two lots were more noticeable in C. citratus, with the reserve lot presenting the highest activity. This study provides evidence of the differences in these plants' chemical composition and bioactivity depending on the harvesting conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Cao, Xuesong; Jiang, Ling; Hu, Ruimin
2006-10-01
Currently, applications of surveillance systems have become increasingly widespread. But there are few surveillance platforms that can meet the requirements of large-scale, cross-regional, and flexible surveillance business. In this paper, we present a distributed surveillance system platform to improve the safety and security of society. The system is built with an object-oriented middleware called the Internet Communications Engine (ICE). This middleware helps our platform integrate many of society's surveillance resources and accommodate a diverse range of surveillance industry requirements. In the following sections, we describe the design concepts of the system in detail and introduce the traits of ICE.
Wolff, J
2005-12-01
Since national limits have been introduced for the content of DON and ZEA in cereals and cereal products designated for human consumption, it is highly important to understand how these toxins are distributed during sorting, cleaning and further processing to bakery products and pasta. Cereals from several crops were analysed before and after sorting and cleaning. After milling, flours, breads, semolinas, pastas and other products were analysed. The results show that the distributions of DON and ZEA differed: ZEA was more effectively removed than DON. The efficacy of the various processes varied markedly from one lot to the other.
DOT National Transportation Integrated Search
1989-01-01
Joint-use park-and-ride lots have proven successful in Virginia as well as other states. As expected, there are both positive and negative aspects of such lots; these are described in this report. In addition, information on incentives to lot owners,...
McKenzie, Jennifer Helen; Alwis, K Udeni; Sordillo, Joanne E; Kalluri, Kesava Srinivas; Milton, Donald Kirby
2011-06-01
Measurement of environmental endotoxin exposures is complicated by variability encountered using current biological assay methods, arising in part from lot-to-lot variability of the Limulus amebocyte lysate (LAL) reagents. Therefore, we investigated the lot-to-lot repeatability of commercially available recombinant Factor C (rFC) kits as an alternative to LAL. Specifically, we compared endotoxin estimates obtained from rFC assay of twenty indoor dust samples, using four different extraction and assay media, to endotoxin estimates previously obtained by LAL assay and to amounts of 3-hydroxy fatty acids (3-OHFA) in lipopolysaccharide (LPS) measured by gas chromatography-mass spectrometry (GC-MS). We found that lot-to-lot variability of the rFC assay kits does not significantly alter endotoxin estimates in house dust samples when performed using three of the four assay media tested, and that the choice of assay medium significantly altered endotoxin estimates obtained by rFC assay of house dust samples. Our findings demonstrate lot-to-lot reproducibility of rFC assay of environmental samples and suggest that rFC assay performed with Tris buffer or water as the extraction and assay medium may be a suitable choice for developing a standardized methodology for measuring endotoxin in dust samples.
Toward unification of taxonomy databases in a distributed computer environment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kitakami, Hajime; Tateno, Yoshio; Gojobori, Takashi
1994-12-31
All the taxonomy databases constructed with the DNA databases of the international DNA data banks are powerful electronic dictionaries which aid biological research by computer. The taxonomy databases are, however, not consistently unified in a relational format. If we can achieve consistent unification of the taxonomy databases, it will be useful in comparing many research results and in identifying future research directions from existing results. In particular, it will be useful in comparing relationships between phylogenetic trees inferred from molecular data and those constructed from morphological data. The goal of the present study is to unify the existent taxonomy databases and eliminate inconsistencies (errors) that are present in them. Inconsistencies occur particularly in the restructuring of the existent taxonomy databases, since classification rules for constructing the taxonomy have rapidly changed with biological advancements. A repair system is needed to remove inconsistencies in each data bank and mismatches among data banks. This paper describes a new methodology for removing both inconsistencies and mismatches from the databases in a distributed computer environment. The methodology is implemented in a relational database management system, SYBASE.
NASA Astrophysics Data System (ADS)
Scherer, L.; Pfister, S.
2015-12-01
Hydropower ranks first among renewable sources of power production and provides about 16% of global electricity. While it is praised for its low greenhouse gas emissions, it is criticized for its large water consumption, which surpasses that of all conventional and most renewable energy sources (except for bioenergy) by far. Previous studies mostly applied a gross evaporation approach, where all the current evaporation from the plant's reservoir is allocated to hydropower. In contrast, we considered only net evaporation, the difference between current evaporation and the actual evapotranspiration before construction of the reservoir. In addition, we take into account local water stress, its monthly fluctuations and the storage effects of the reservoir in order to assess the impacts on water availability for other users. We apply the method to a large dataset of almost 1500 globally distributed hydropower plants (HPPs), covering ~43% of global annual electricity generation, by combining reservoir information from the Global Reservoir and Dam (GRanD) database with information on electricity generation from the CARMA database. While we can confirm that the gross water consumption of hydropower is generally large (production-weighted average of 97 m3/GJ), other users are not necessarily deprived of water. On the contrary, in many cases they even benefit from the reservoir, because water is stored in the wet season and released in the dry season, thereby alleviating water stress. The production-weighted water scarcity footprint of the analyzed HPPs amounts to -41 m3 H2Oe/GJ. It has to be noted that the impacts vary a lot among individual plants. Larger HPPs generally consume less water per unit of electricity generated, but the benefits related to alleviating water scarcity are also lower. Overall, reservoirs promote both energy and water security. Other environmental impacts such as flow alterations, as well as social impacts, should however also be considered, as they can be enormous.
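A toy sketch of the accounting described above, using invented monthly numbers: net evaporation is reservoir evaporation minus pre-impoundment evapotranspiration, divided by annual generation and weighted by a monthly stress factor. The study's storage-effect credits, which can make the footprint negative, are omitted here, so this sketch stays non-negative.

```python
# Hypothetical monthly values for one reservoir (hm3 = million cubic metres).
reservoir_evap = [10, 12, 18, 25, 30, 34, 36, 33, 26, 18, 12, 10]   # hm3/month
pre_dam_et     = [ 6,  7, 10, 14, 17, 19, 20, 18, 14, 10,  7,  6]   # hm3/month
stress_weight  = [0.1, 0.1, 0.2, 0.4, 0.8, 1.0, 1.0, 0.9, 0.6, 0.3, 0.2, 0.1]
annual_generation_gj = 1.2e6     # electricity generated per year (illustrative)

gross_m3 = sum(e * 1e6 for e in reservoir_evap)
net_m3 = sum((e - p) * 1e6 for e, p in zip(reservoir_evap, pre_dam_et))
scarcity_m3eq = sum((e - p) * 1e6 * w
                    for e, p, w in zip(reservoir_evap, pre_dam_et, stress_weight))

print(f"gross consumption:  {gross_m3 / annual_generation_gj:.1f} m3/GJ")
print(f"net consumption:    {net_m3 / annual_generation_gj:.1f} m3/GJ")
print(f"scarcity footprint: {scarcity_m3eq / annual_generation_gj:.1f} m3 H2Oe/GJ")
```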
Selbig, William R.; Bannerman, Roger T.
2011-01-01
The U.S. Geological Survey, in cooperation with the Wisconsin Department of Natural Resources (WDNR) and in collaboration with the Root River Municipal Stormwater Permit Group, monitored eight urban source areas representing six types of source areas in or near Madison, Wis., in an effort to improve characterization of particle-size distributions in urban stormwater by use of fixed-point sample collection methods. The types of source areas were parking lot, feeder street, collector street, arterial street, rooftop, and mixed use. This information can then be used by environmental managers and engineers when selecting the most appropriate control devices for the removal of solids from urban stormwater. Mixed-use and parking-lot study areas had the lowest median particle sizes (42 and 54 µm, respectively), followed by the collector street study area (70 µm). Both arterial street and institutional roof study areas had similar median particle sizes of approximately 95 µm. Finally, the feeder street study area showed the largest median particle size of nearly 200 µm. Median particle sizes measured as part of this study were somewhat comparable to those reported in previous studies from similar source areas. The majority of particle mass in four of the six source areas was silt and clay particles less than 32 µm in size. Distributions of particles ranging up to 500 µm were highly variable both within and between source areas. Results of this study suggest that substantial variability in the data can inhibit the development of a single particle-size distribution that is representative of stormwater runoff generated from a single source area or land use. Continued development of improved sample collection methods, such as the depth-integrated sample arm, may reduce variability in particle-size distributions by mitigating the effect of sediment bias inherent in a fixed-point sampler.
Singh, J; Jain, D C; Sharma, R S; Verghese, T
1996-06-01
Lot Quality Assurance Sampling (LQAS) and the standard EPI methodology (30-cluster sampling) were used to evaluate immunization coverage in a Primary Health Center (PHC) where coverage levels were reported to be more than 85%. Of 27 sub-centers (lots) evaluated by LQAS, only 2 were accepted for child coverage, whereas none was accepted for tetanus toxoid (TT) coverage in mothers. LQAS data were combined to obtain an estimate of coverage in the entire population; 41% (95% CI 36-46) of infants were immunized appropriately for their ages, while 42% (95% CI 37-47) of their mothers had received a second/booster dose of TT. TT coverage in 149 contemporary mothers sampled in the EPI survey was also 42% (95% CI 31-52). Although the results of the two sampling methods were consistent with each other, a big gap was evident between reported coverage (in children as well as mothers) and the survey results. LQAS was found to be operationally feasible, but it cost 40% more and required 2.5 times more time than the EPI survey. LQAS, therefore, is not a good substitute for the current EPI methodology to evaluate immunization coverage in a large administrative area. However, LQAS has potential as a method to monitor health programs on a routine basis in small population sub-units, especially in areas with high and heterogeneously distributed immunization coverage.
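For readers unfamiliar with LQAS decision rules, the sketch below computes provider and consumer risks for an accept-if-at-least-d-of-n rule from the binomial distribution; the n = 19 sample size and the 85%/50% coverage thresholds are illustrative, not the thresholds used in this study.

```python
from scipy.stats import binom

def lqas_risks(n, d, p_high=0.85, p_low=0.50):
    """Risks of the LQAS rule 'accept the lot if at least d of n sampled
    children are immunized'. The 85%/50% thresholds are illustrative only."""
    provider_risk = binom.cdf(d - 1, n, p_high)        # reject a good lot
    consumer_risk = 1.0 - binom.cdf(d - 1, n, p_low)   # accept a bad lot
    return provider_risk, consumer_risk

for d in range(12, 18):
    pr, cr = lqas_risks(n=19, d=d)
    print(f"n=19, accept if >= {d} immunized: provider risk={pr:.3f}, consumer risk={cr:.3f}")
```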
Jones, Gabrielle; Pihier, Nathalie; Vanbockstael, Caroline; Le Hello, Simon; Cadel Six, Sabrina; Fournet, Nelly; Jourdan-da Silva, Nathalie
2016-10-06
A prolonged outbreak of Salmonella enterica serotype Enteritidis occurred in northern France between December 2014 and April 2015. Epidemiological investigations following the initial notification on 30 December 2014 of five cases of salmonellosis (two confirmed S. Enteritidis) in young children residing in the Somme department revealed that all cases frequented the same food bank A. Further epidemiological, microbiological and food trace-back investigations indicated frozen beefburgers as the source of the outbreak and the suspected lot originating from Poland was recalled on 22 January 2015. On 2 March 2015 a second notification of S. Enteritidis cases in the Somme reinitiated investigations that confirmed a link with food bank A and with consumption of frozen beefburgers from the same Polish producer. In the face of a possible persistent source of contamination, all frozen beefburgers distributed by food bank A and from the same origin were blocked on 3 March 2015. Microbiological analyses confirmed contamination by S. Enteritidis of frozen beefburgers from a second lot remaining in cases' homes. A second recall was initiated on 6 March 2015 and all frozen beefburgers from the Polish producer remain blocked after analyses identified additional contaminated lots over several months of production. This article is copyright of The Authors, 2016.
Design of special purpose database for credit cooperation bank business processing network system
NASA Astrophysics Data System (ADS)
Yu, Yongling; Zong, Sisheng; Shi, Jinfa
2011-12-01
With the popularization of e-finance in cities, e-finance construction is shifting to the vast rural market and developing rapidly in depth. Developing a business processing network system suitable for rural credit cooperative banks makes business processing convenient and has good application prospects. In this paper, we analyse the necessity of adopting a special-purpose distributed database in a credit cooperative bank system, give the corresponding distributed database system structure, and design the special-purpose database and interface technology. The application in Tongbai Rural Credit Cooperatives has shown that the system has better performance and higher efficiency.
Baumstark, Annette; Pleus, Stefan; Schmid, Christina; Link, Manuela; Haug, Cornelia; Freckmann, Guido
2012-01-01
Background Accurate and reliable blood glucose (BG) measurements require that different test strip lots of the same BG monitoring system provide comparable measurement results. Only a small number of studies addressing this question have been published. Methods In this study, four test strip lots for each of five different BG systems [Accu-Chek® Aviva (system A), FreeStyle Lite® (system B), GlucoCheck XL (system C), Pura™/mylife™ Pura (system D), and OneTouch® Verio™ Pro (system E)] were evaluated with procedures according to DIN EN ISO 15197:2003. The BG system measurement results were compared with the manufacturer’s measurement procedure (glucose oxidase or hexokinase method). Relative bias according to Bland and Altman and system accuracy according to ISO 15197 were analyzed. A BG system consists of the BG meter itself and the test strips. Results The maximum lot-to-lot difference between any two of the four evaluated test strip lots per BG system was 1.0% for system E, 2.1% for system A, 3.1% for system C, 6.9% for system B, and 13.0% for system D. Only two systems (systems A and B) fulfill the criteria of DIN EN ISO 15197:2003 with each test strip lot. Conclusions Considerable lot-to-lot variability between test strip lots of the same BG system was found. These variations add to other sources of inaccuracy with the specific BG system. Manufacturers should regularly and effectively check the accuracy of their BG meters and test strips even between different test strip lots to minimize risk of false treatment decisions. PMID:23063033
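A rough sketch of the two analyses mentioned above, with invented glucose values: mean relative bias between meter and reference, and the DIN EN ISO 15197:2003 accuracy criterion as it is commonly cited (at least 95% of results within ±15 mg/dL of the comparison method below 75 mg/dL, or within ±20% at or above it). The actual standard requires far more than six samples; the numbers here only illustrate the calculation.

```python
import numpy as np

def iso15197_2003_pass(meter, reference):
    """DIN EN ISO 15197:2003 system-accuracy criterion as commonly cited:
    at least 95% of results within +/-15 mg/dL of the comparison method for
    reference values < 75 mg/dL, or within +/-20% at >= 75 mg/dL."""
    meter = np.asarray(meter, dtype=float)
    reference = np.asarray(reference, dtype=float)
    within = np.where(reference < 75.0,
                      np.abs(meter - reference) <= 15.0,
                      np.abs(meter - reference) <= 0.20 * reference)
    return within.mean() >= 0.95, within.mean()

def relative_bias_percent(meter, reference):
    """Mean relative difference from the comparison method, in percent."""
    meter = np.asarray(meter, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return 100.0 * np.mean((meter - reference) / reference)

# Illustrative glucose values (mg/dL) for two test strip lots of one meter.
ref = np.array([60, 80, 110, 150, 220, 300], dtype=float)
lot1 = np.array([58, 83, 114, 155, 228, 310], dtype=float)
lot2 = np.array([55, 98, 125, 172, 250, 340], dtype=float)

for name, lot in [("lot 1", lot1), ("lot 2", lot2)]:
    passed, frac = iso15197_2003_pass(lot, ref)
    print(f"{name}: bias={relative_bias_percent(lot, ref):+.1f}%, "
          f"{frac:.0%} within limits, pass={passed}")
```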
Automated Planning and Scheduling for Space Mission Operations
NASA Technical Reports Server (NTRS)
Chien, Steve; Jonsson, Ari; Knight, Russell
2005-01-01
Research Trends: a) Finite-capacity scheduling under more complex constraints and increased problem dimensionality (subcontracting, overtime, lot splitting, inventory, etc.). b) Integrated planning and scheduling. c) Mixed-initiative frameworks. d) Management of uncertainty (proactive and reactive). e) Autonomous agent architectures and distributed production management. f) Integration of machine learning capabilities. g) Wider scope of applications: 1) analysis of supplier/buyer protocols & tradeoffs; 2) integration of strategic & tactical decision-making; and 3) enterprise integration.
NASA Technical Reports Server (NTRS)
Brunstrom, Anna; Leutenegger, Scott T.; Simha, Rahul
1995-01-01
Traditionally, allocation of data in distributed database management systems has been determined by off-line analysis and optimization. This technique works well for static database access patterns, but is often inadequate for frequently changing workloads. In this paper we address how to dynamically reallocate data for partitionable distributed databases with changing access patterns. Rather than complicated and expensive optimization algorithms, a simple heuristic is presented and shown, via an implementation study, to improve system throughput by 30 percent in a local area network based system. Based on artificial wide area network delays, we show that dynamic reallocation can improve system throughput by a factor of two and a half for wide area networks. We also show that individual site load must be taken into consideration when reallocating data, and provide a simple policy that incorporates load in the reallocation decision.
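The paper's heuristic is not reproduced here, but a minimal sketch of the general idea, moving a fragment to its most frequent accessor unless that site is already loaded, might look like the following; the fragment names, access counts and the load proxy are invented.

```python
from collections import defaultdict

# access_counts[fragment][site]: accesses observed during the last interval.
access_counts = {
    "orders":    {"site_a": 120, "site_b": 15, "site_c": 5},
    "customers": {"site_a": 30,  "site_b": 90, "site_c": 10},
    "inventory": {"site_a": 20,  "site_b": 25, "site_c": 80},
}
current_site = {"orders": "site_b", "customers": "site_b", "inventory": "site_c"}
load_limit = 2    # crude load proxy: max fragments a site may host

def reallocate(access_counts, current_site, load_limit):
    load = defaultdict(int)
    for site in current_site.values():
        load[site] += 1
    plan = dict(current_site)
    for frag, counts in access_counts.items():
        best = max(counts, key=counts.get)            # most frequent accessor
        if best != plan[frag] and load[best] < load_limit:
            load[plan[frag]] -= 1                     # move the fragment there
            load[best] += 1
            plan[frag] = best
    return plan

print(reallocate(access_counts, current_site, load_limit))
```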
Mahler, Barbara J.; Van Metre, Peter C.; Wilson, Jennifer T.
2004-01-01
Samples of creek bed sediment collected near seal-coated parking lots in Austin, Texas, by the City of Austin during 2001–02 had unusually elevated concentrations of polycyclic aromatic hydrocarbons (PAHs). To investigate the possibility that PAHs from seal-coated parking lots might be transported to urban creeks, the U.S. Geological Survey, in cooperation with the City of Austin, sampled runoff and scrapings from four test plots and 13 urban parking lots. The surfaces sampled comprise coal-tar-emulsion-sealed, asphalt-emulsion-sealed, unsealed asphalt, and unsealed concrete. Particulates and filtered water in runoff and surface scrapings were analyzed for PAHs. In addition, particulates in runoff were analyzed for major and trace elements. Samples of all three media from coal-tar-sealed parking lots had concentrations of PAHs higher than those from any other type of surface. The mean total PAH concentration in particulates in runoff from parking lots in use was 3,500,000, 620,000, and 54,000 micrograms per kilogram from coal-tar-sealed, asphalt-sealed, and unsealed (asphalt and concrete combined) lots, respectively. The probable effect concentration sediment quality guideline is 22,800 micrograms per kilogram. The mean total PAH (sum of detected PAHs) concentration in filtered water from parking lots in use was 8.6 micrograms per liter for coal-tar-sealed lots; the one sample analyzed from an asphalt-sealed lot had a concentration of 5.1 micrograms per liter, and the one sample analyzed from an unsealed asphalt lot had a concentration of 0.24 microgram per liter. The mean total PAH concentration in scrapings was 23,000,000, 820,000, and 14,000 micrograms per kilogram from coal-tar-sealed, asphalt-sealed, and unsealed asphalt lots, respectively. Concentrations of lead and zinc in particulates in runoff frequently exceeded the probable effect concentrations, but trace element concentrations showed no consistent variation with parking lot surface type.
Pedestrian and traffic safety in parking lots at SNL/NM : audit background report.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, Paul Ernest
2009-03-01
This report supplements audit 2008-E-0009, conducted by the ES&H, Quality, Safeguards & Security Audits Department, 12870, during fall and winter of FY 2008. The study evaluates slips, trips and falls, the leading cause of reportable injuries at Sandia. In 2007, almost half of over 100 such incidents occurred in parking lots. During the course of the audit, over 5000 observations were collected in 10 parking lots across SNL/NM. Based on benchmarks and trends of pedestrian behavior, the report proposes pedestrian-friendly features and attributes to improve pedestrian safety in parking lots. Less safe pedestrian behavior is associated with older parking lots lacking pedestrian-friendly features and attributes, like those for buildings 823, 887 and 811. Conversely, safer pedestrian behavior is associated with newer parking lots that have designated walkways, intra-lot walkways and sidewalks. Observations also revealed that motorists are in widespread noncompliance with parking lot speed limits and stop signs and markers.
Heuristic for Critical Machine Based a Lot Streaming for Two-Stage Hybrid Production Environment
NASA Astrophysics Data System (ADS)
Vivek, P.; Saravanan, R.; Chandrasekaran, M.; Pugazhenthi, R.
2017-03-01
Lot streaming in a hybrid flowshop (HFS) is encountered in many real-world problems. This paper deals with a heuristic approach for lot streaming based on critical-machine considerations for a two-stage hybrid flowshop. The first stage has two identical parallel machines and the second stage has only one machine. The second-stage machine is considered critical for valid reasons; this kind of problem is known to be NP-hard. A mathematical model was developed for the selected problem. The simulation modelling and analysis were carried out in Extend V6 software. A heuristic was developed for obtaining an optimal lot streaming schedule. Eleven cases of lot streaming were considered. The proposed heuristic was verified and validated by real-time simulation experiments. All possible lot streaming strategies, and the possible sequences under each strategy, were simulated and examined. The heuristic consistently yielded the optimal schedule in all eleven cases. A procedure for identifying the best lot streaming strategy was suggested.
Olives, Casey; Pagano, Marcello
2013-01-01
Background Lot Quality Assurance Sampling (LQAS) is a provably useful tool for monitoring health programmes. Although LQAS ensures acceptable Producer and Consumer risks, the literature alleges that the method suffers from poor specificity and positive predictive values (PPVs). We suggest that poor LQAS performance is due, in part, to variation in the true underlying distribution. However, until now the role of the underlying distribution in expected performance has not been adequately examined. Methods We present Bayesian-LQAS (B-LQAS), an approach to incorporating prior information into the choice of the LQAS sample size and decision rule, and explore its properties through a numerical study. Additionally, we analyse vaccination coverage data from UNICEF’s State of the World’s Children in 1968–1989 and 2008 to exemplify the performance of LQAS and B-LQAS. Results Results of our numerical study show that the choice of LQAS sample size and decision rule is sensitive to the distribution of prior information, as well as to individual beliefs about the importance of correct classification. Application of the B-LQAS approach to the UNICEF data improves specificity and PPV in both time periods (1968–1989 and 2008) with minimal reductions in sensitivity and negative predictive value. Conclusions LQAS is shown to be a robust tool that is not necessarily prone to poor specificity and PPV as previously alleged. In situations where prior or historical data are available, B-LQAS can lead to improvements in expected performance. PMID:23378151
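A simplified sketch of the Bayesian idea, not the paper's exact procedure: evaluate an LQAS rule's expected sensitivity, specificity and positive predictive value by averaging over a Beta prior on true coverage. The prior parameters, the 80% threshold and the n = 19 rule are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import beta, binom

def expected_performance(n, d, prior_a, prior_b, threshold=0.8, m=20000):
    """Monte Carlo estimate of the expected sensitivity, specificity and PPV
    of the LQAS rule 'accept if at least d of n sampled units are covered',
    averaging over a Beta(prior_a, prior_b) prior on true coverage p.
    All numbers are illustrative, not those used in the paper."""
    p = beta.rvs(prior_a, prior_b, size=m, random_state=0)
    accept = 1.0 - binom.cdf(d - 1, n, p)        # P(accept | p)
    high = p >= threshold                        # truly high-coverage areas
    sensitivity = accept[high].mean()
    specificity = 1.0 - accept[~high].mean()
    ppv = (accept * high).mean() / accept.mean()
    return sensitivity, specificity, ppv

for d in (14, 15, 16):
    sens, spec, ppv = expected_performance(n=19, d=d, prior_a=8, prior_b=3)
    print(f"d={d}: sensitivity={sens:.2f}, specificity={spec:.2f}, PPV={ppv:.2f}")
```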
Processing SPARQL queries with regular expressions in RDF databases
2011-01-01
Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225
Processing SPARQL queries with regular expressions in RDF databases.
Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon
2011-03-29
As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
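To make the query class concrete, here is a hedged example of a SPARQL query with a regular-expression FILTER, submitted through the SPARQLWrapper Python library; the endpoint URL is a placeholder and the label/keyword pattern is only illustrative. The framework in the paper concerns how such queries are evaluated inside the engine, not how they are submitted.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint; substitute the URL of an actual RDF store.
endpoint = SPARQLWrapper("http://example.org/sparql")
endpoint.setQuery("""
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?s ?label WHERE {
        ?s rdfs:label ?label .
        FILTER regex(?label, "kinase", "i")   # regular-expression pattern
    } LIMIT 20
""")
endpoint.setReturnFormat(JSON)

results = endpoint.query().convert()
for row in results["results"]["bindings"]:
    print(row["s"]["value"], "-", row["label"]["value"])
```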
ERIC Educational Resources Information Center
Lynch, Clifford A.
1997-01-01
Union catalogs and distributed search systems are two ways users can locate materials in print and electronic formats. This article examines the advantages and limitations of both approaches and argues that they should be considered complementary rather than competitive. Discusses technologies creating linkage between catalogs and databases and…
Distributed Structure-Searchable Toxicity (DSSTox) Database Network: Making Public Toxicity Data Resources More Accessible and U sable for Data Exploration and SAR Development
Many sources of public toxicity data are not currently linked to chemical structure, are not ...
SPANG: a SPARQL client supporting generation and reuse of queries for distributed RDF databases.
Chiba, Hirokazu; Uchiyama, Ikuo
2017-02-08
Toward improved interoperability of distributed biological databases, an increasing number of datasets have been published in the standardized Resource Description Framework (RDF). Although the powerful SPARQL Protocol and RDF Query Language (SPARQL) provides a basis for exploiting RDF databases, writing SPARQL code is burdensome for users including bioinformaticians. Thus, an easy-to-use interface is necessary. We developed SPANG, a SPARQL client that has unique features for querying RDF datasets. SPANG dynamically generates typical SPARQL queries according to specified arguments. It can also call SPARQL template libraries constructed in a local system or published on the Web. Further, it enables combinatorial execution of multiple queries, each with a distinct target database. These features facilitate easy and effective access to RDF datasets and integrative analysis of distributed data. SPANG helps users to exploit RDF datasets by generation and reuse of SPARQL queries through a simple interface. This client will enhance integrative exploitation of biological RDF datasets distributed across the Web. This software package is freely available at http://purl.org/net/spang .
7 CFR 983.52 - Failed lots/rework procedure.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 8 2013-01-01 2013-01-01 false Failed lots/rework procedure. 983.52 Section 983.52..., ARIZONA, AND NEW MEXICO Regulations § 983.52 Failed lots/rework procedure. (a) Substandard pistachios... committee may establish, with the Secretary's approval, appropriate rework procedures. (b) Failed lot...
7 CFR 983.52 - Failed lots/rework procedure.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 8 2012-01-01 2012-01-01 false Failed lots/rework procedure. 983.52 Section 983.52..., ARIZONA, AND NEW MEXICO Regulations § 983.52 Failed lots/rework procedure. (a) Substandard pistachios... committee may establish, with the Secretary's approval, appropriate rework procedures. (b) Failed lot...
7 CFR 983.52 - Failed lots/rework procedure.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 8 2014-01-01 2014-01-01 false Failed lots/rework procedure. 983.52 Section 983.52..., ARIZONA, AND NEW MEXICO Regulations § 983.52 Failed lots/rework procedure. (a) Substandard pistachios... committee may establish, with the Secretary's approval, appropriate rework procedures. (b) Failed lot...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Lot seal. 29.35 Section 29.35 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Definitions § 29.35 Lot seal. A seal approved by the Director for sealing lots of...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Lot seal. 29.35 Section 29.35 Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Regulations Definitions § 29.35 Lot seal. A seal approved by the Director for sealing lots of...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dirmyer, Matthew R.
This report serves as a follow up to our initial development lot 1 chemical analysis report (LA-UR-16-21970). The purpose of that report was to determine whether or not certain combinations of resin lots and curing agent lots resulted in chemical differences in the final material. One finding of that report suggested that pad P053389 was different from the three other pads analyzed. This report consists of chemical analysis of P053387, P053388, and a reinvestigation of P053389, all of which came from the potentially suspect combination of resin lot and curing agent lot. The goal of this report is to determine whether the observations relating to P053389 were isolated to that particular pad or systemic to that combination of resin and curing agent lot. The following suite of analyses were performed on the pads: Differential Scanning Calorimetry (DSC), Thermogravimetric Analysis (TGA), Fourier Transform Infrared Spectroscopy (FT-IR), and Solid State Nuclear Magnetic Resonance (NMR). The overall conclusions of the study are that pads P053387 and P053388 behave consistently with the pads of other resin lot and curing agent lot combinations, and that the chemical observations made regarding pad P053389 are isolated to that pad and not representative of an issue with that resin lot and curing agent lot combination.
Active learning methods for interactive image retrieval.
Gosselin, Philippe Henri; Cord, Matthieu
2008-07-01
Active learning methods have been considered with increased interest in the statistical learning community. Initially developed within a classification framework, a lot of extensions are now being proposed to handle multimedia applications. This paper provides algorithms within a statistical framework to extend active learning for online content-based image retrieval (CBIR). The classification framework is presented with experiments to compare several powerful classification techniques in this information retrieval context. Focusing on interactive methods, active learning strategy is then described. The limitations of this approach for CBIR are emphasized before presenting our new active selection process RETIN. First, as any active method is sensitive to the boundary estimation between classes, the RETIN strategy carries out a boundary correction to make the retrieval process more robust. Second, the criterion of generalization error to optimize the active learning selection is modified to better represent the CBIR objective of database ranking. Third, a batch processing of images is proposed. Our strategy leads to a fast and efficient active learning scheme to retrieve sets of online images (query concept). Experiments on large databases show that the RETIN method performs well in comparison to several other active strategies.
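The RETIN boundary correction and batch optimization are not reproduced here, but the basic active-learning loop they build on (train an SVM on the labelled feedback and ask the user about the images closest to the decision boundary) can be sketched as follows with synthetic descriptors and a simulated user.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 32))                  # stand-in image descriptors
relevance = (features[:, 0] + features[:, 1] > 0).astype(int)   # simulated user/oracle

# Initial feedback: a few relevant and a few irrelevant examples.
labeled = list(np.flatnonzero(relevance == 1)[:5]) + list(np.flatnonzero(relevance == 0)[:5])

clf = None
for _ in range(5):                                     # interactive feedback rounds
    clf = SVC(kernel="rbf", gamma="scale").fit(features[labeled], relevance[labeled])
    margin = np.abs(clf.decision_function(features))   # distance to the boundary
    margin[labeled] = np.inf                           # never re-ask labelled images
    batch = np.argsort(margin)[:10]                    # most ambiguous images
    labeled.extend(batch.tolist())                     # simulated user labels them

ranking = np.argsort(-clf.decision_function(features)) # final database ranking
print("top-10 images returned for the query concept:", ranking[:10])
```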
NASA Astrophysics Data System (ADS)
Angelats, E.; Parés, M. E.; Kumar, P.
2018-05-01
Accessible cities with accessible services are an old claim of people with reduced mobility. But this demand is still far from becoming a reality, as a lot of work remains to be done. The first step towards accessible cities is to know the real situation of the cities and their pavement infrastructure. Detailed maps or databases on street slopes, access to sidewalks, mobility in public parks and gardens, etc. are required. In this paper, we propose to use smartphone-based photogrammetric point clouds as a starting point to create accessible maps or databases. This paper analyses the performance of these point clouds and the complexity of the image acquisition procedure required to obtain them. The paper proves, through two test cases, that smartphone technology is an economical and feasible solution to obtain the required information, which is quite often sought by city planners to generate accessibility maps. The proposed approach paves the way to generating, in the near term, accessibility maps through the use of point clouds derived from crowdsourced smartphone imagery.
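One of the accessibility measures mentioned, street or sidewalk slope, could be estimated from such point clouds roughly as below: fit a plane to a patch of points by least squares and report the grade. The synthetic patch, the noise level and the ~8% ramp figure are illustrative assumptions, not values from the paper.

```python
import numpy as np

def grade_percent(points_xyz):
    """Fit a plane z = a*x + b*y + c to a point-cloud patch (N x 3, metres)
    by least squares and return its grade (slope) as a percentage."""
    pts = np.asarray(points_xyz, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    (a, b, _), *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    return 100.0 * float(np.hypot(a, b))

# Synthetic sidewalk patch with ~4% slope along x plus photogrammetric noise.
rng = np.random.default_rng(2)
x = rng.uniform(0, 5, 200)
y = rng.uniform(0, 2, 200)
z = 0.04 * x + rng.normal(scale=0.005, size=200)
patch = np.c_[x, y, z]

print(f"estimated grade: {grade_percent(patch):.1f}%"
      "  (flag, for example, if above a ~8% ramp limit)")
```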
Human silhouette matching based on moment invariants
NASA Astrophysics Data System (ADS)
Sun, Yong-Chao; Qiu, Xian-Jie; Xia, Shi-Hong; Wang, Zhao-Qi
2005-07-01
This paper applies silhouette matching based on moment invariants to infer human motion parameters from the video sequences of a single monocular uncalibrated camera. Currently, there are two ways of tracking human motion: marker-based and markerless. A hybrid framework is introduced in this paper to recover the input video contents. A standard 3D motion database is built in advance using the marker technique. Given a video sequence, human silhouettes are extracted along with the viewpoint information of the camera, which is utilized to project the standard 3D motion database onto a 2D one. The video recovery problem is therefore formulated as a matching issue: finding the body pose in the standard 2D library that is most similar to the one in the video image. The framework is applied to trampoline sport, where we can obtain complicated human motion parameters from single-camera video sequences, and many experiments demonstrate that this approach is feasible in the field of monocular video-based 3D motion reconstruction.
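A minimal sketch of silhouette matching with moment invariants, using OpenCV's Hu moments on binary masks and a log-scale distance for ranking; the toy shapes below merely stand in for extracted human silhouettes and the projected 2D pose library.

```python
import cv2
import numpy as np

def hu_signature(silhouette):
    """7 Hu moment invariants of a binary silhouette, on a log scale."""
    hu = cv2.HuMoments(cv2.moments(silhouette, binaryImage=True)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

def best_match(query_sil, library_sils):
    """Index of the library silhouette whose Hu signature is closest."""
    q = hu_signature(query_sil)
    dists = [np.linalg.norm(q - hu_signature(s)) for s in library_sils]
    return int(np.argmin(dists))

# Toy stand-ins for extracted silhouettes and the projected 2D pose library.
circle = np.zeros((100, 100), np.uint8)
cv2.circle(circle, (50, 50), 30, 255, -1)
rectangle = np.zeros((100, 100), np.uint8)
cv2.rectangle(rectangle, (20, 35), (80, 65), 255, -1)
query = np.zeros((100, 100), np.uint8)
cv2.circle(query, (45, 55), 25, 255, -1)

print("best match index:", best_match(query, [circle, rectangle]))  # expect 0
```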
Stavelin, Anne; Riksheim, Berit Oddny; Christensen, Nina Gade; Sandberg, Sverre
2016-05-01
Providers of external quality assurance (EQA)/proficiency testing schemes have traditionally focused on evaluation of measurement procedures and participant performance and little attention has been given to reagent lot variation. The aim of the present study was to show the importance of reagent lot registration and evaluation in EQA schemes. Results from the Noklus (Norwegian Quality Improvement of Primary Care Laboratories) urine albumin/creatinine ratio (ACR) and prothrombin time international normalized ratio (INR) point-of-care EQA schemes from 2009-2015 were used as examples in this study. The between-participant CV for Afinion ACR increased from 6%-7% to 11% in 3 consecutive surveys. This increase was caused by differences between albumin reagent lots that were also observed when fresh urine samples were used. For the INR scheme, the CoaguChek INR results increased with the production date of the reagent lots, with reagent lot medians increasing from 2.0 to 2.5 INR and from 2.7 to 3.3 INR (from the oldest to the newest reagent lot) for 2 control levels, respectively. These differences in lot medians were not observed when native patient samples were used. Presenting results from different reagent lots in EQA feedback reports can give helpful information to the participants that may explain their deviant EQA results. Information regarding whether the reagent lot differences found in the schemes can affect patient samples is important and should be communicated to the participants as well as to the manufacturers. EQA providers should consider registering and evaluating results from reagent lots. © 2016 American Association for Clinical Chemistry.
Bidlake, W.R.
2002-01-01
An investigation of evapotranspiration, vegetation quantity and composition, and depth to the water table below the land surface was made at three sites in two fallowed agricultural lots on the 15,800-hectare Tule Lake National Wildlife Refuge in northern California during the 2000 growing season. All three sites had been farmed during 1999, but were not irrigated since the 1999 growing season. Vegetation at the lot C1B and lot 6 stubble sites included weedy species and small grain plants. The lot 6 cover crop site supported a crop of cereal rye that had been planted during the previous winter. Percentage of coverage by live vegetation ranged from 0 to 43.2 percent at the lot C1B site, from approximately 0 to 63.2 percent at the lot 6 stubble site, and it was estimated to range from 0 to greater than 90 percent at the lot 6 cover crop site. Evapotranspiration was measured using the Bowen ratio energy balance technique and it was estimated using a model that was based on the Priestley-Taylor equation and a model that was based on reference evapotranspiration with grass as the reference crop. Total evapotranspiration during May to October varied little among the three evapotranspiration measurement sites, although the timing of evapotranspiration losses did vary among the sites. Total evapotranspiration from the lot C1B site was 426 millimeters, total evapotranspiration from the lot 6 stubble site was 444 millimeters, and total evapotranspiration from the lot 6 cover crop site was 435 millimeters. The months of May to July accounted for approximately 78 percent of the total evapotranspiration from the lot C1B site, approximately 63 percent of the evapotranspiration from the lot 6 stubble site, and approximately 86 percent of the total evapotranspiration from the lot 6 cover crop site. Estimated growing season precipitation accounted for 16 percent of the growing-season evapotranspiration at the lot C1B site and for 17 percent of the growing-season evapotranspiration at the lot 6 stubble and cover crop sites. The ratio of evapotranspiration rate to the reference evapotranspiration rate was strongly correlated with percentage of site coverage by vegetation at the lot C1B and lot 6 stubble sites (correlation coefficient = 0.95, sample size = 6), where percentage of site coverage was determined from quantitative vegetation surveys. It is concluded that evapotranspiration was mediated by the vegetation at all three sites, and that the differences in seasonal timing of evapotranspiration losses were caused by differences in timing of vegetation growth and development and senescence among the sites. Depth to the water table below the land surface at lot C1B ranged from 0.67 meters in early July to greater than 1.39 meters in late August. Depth to the water table at lot 6 ranged from 0.77 meter in late May to greater than 1.40 meters in late August.
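For reference, a Priestley-Taylor estimate of daily evapotranspiration, which one of the study's models is based on, can be computed as sketched below; the coefficient alpha = 1.26 and the input radiation and temperature values are typical illustrative numbers, not the study's calibration.

```python
import math

def priestley_taylor_et(rn_mj, g_mj, temp_c, alpha=1.26):
    """Daily evapotranspiration (mm/day) from the Priestley-Taylor equation.

    rn_mj, g_mj : net radiation and soil heat flux (MJ m-2 day-1)
    temp_c      : mean air temperature (degrees C)
    alpha=1.26 is the commonly used Priestley-Taylor coefficient; a calibrated
    study value may differ."""
    es = 0.6108 * math.exp(17.27 * temp_c / (temp_c + 237.3))   # saturation vapour pressure (kPa)
    delta = 4098.0 * es / (temp_c + 237.3) ** 2                 # slope of the curve (kPa/degC)
    gamma = 0.066                                               # psychrometric constant (kPa/degC)
    lam = 2.45                                                  # latent heat of vaporisation (MJ/kg)
    et_mj = alpha * delta / (delta + gamma) * (rn_mj - g_mj)    # energy used for ET
    return et_mj / lam                                          # 1 kg m-2 of water = 1 mm

print(f"{priestley_taylor_et(rn_mj=14.0, g_mj=1.0, temp_c=18.0):.1f} mm/day")
```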
Laser Illumination Modality of Photoacoustic Imaging Technique for Prostate Cancer
NASA Astrophysics Data System (ADS)
Peng, Dong-qing; Peng, Yuan-yuan; Guo, Jian; Li, Hui
2016-02-01
Photoacoustic imaging (PAI) has recently emerged as a promising imaging technique for prostate cancer. But there are still many challenges in PAI for prostate cancer detection, such as the laser illumination modality. Knowledge of the absorbed light distribution in prostate tissue is essential, since the distribution of absorbed light energy influences the imaging depth and range of PAI. In order to compare different laser illumination modalities of the photoacoustic imaging technique for prostate cancer, an optical model of the human prostate was established and combined with the Monte Carlo simulation method to calculate the light absorption distribution in the prostate tissue. The characteristics of the light absorption distribution for the transurethral and transrectal illumination cases, and for tumors at different locations, were compared with each other. The relevant conclusions are significant for optimizing the light illumination in a PAI system for prostate cancer detection.
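A toy one-dimensional Monte Carlo sketch of photon-weight absorption in a homogeneous slab gives the flavour of such simulations; the optical coefficients are illustrative and the scattering step is deliberately crude (a realistic prostate model would be multi-layered, three-dimensional and sample a Henyey-Greenstein phase function).

```python
import math
import random

# Illustrative optical properties (per mm) at a near-infrared wavelength.
mu_a, mu_s = 0.03, 1.0            # absorption and scattering coefficients
mu_t = mu_a + mu_s
n_photons, depth_mm, n_bins = 20000, 50.0, 50
absorbed = [0.0] * n_bins

random.seed(1)
for _ in range(n_photons):
    z, uz, w = 0.0, 1.0, 1.0      # depth, direction cosine, photon weight
    while w > 1e-4:
        z += uz * (-math.log(random.random()) / mu_t)   # sample free path
        if not 0.0 <= z < depth_mm:
            break                                       # photon left the slab
        dw = w * mu_a / mu_t                            # fraction absorbed here
        absorbed[int(z / depth_mm * n_bins)] += dw
        w -= dw
        # Crude redirection; a realistic model samples a proper phase function.
        uz = random.uniform(-1.0, 1.0)

total = sum(absorbed)
print("fraction of absorbed energy deposited in the first 10 mm:",
      round(sum(absorbed[: n_bins // 5]) / total, 2))
```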
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 8 2010-01-01 2010-01-01 false Lot number. 987.102 Section 987.102 Agriculture... RIVERSIDE COUNTY, CALIFORNIA Administrative Rules Definitions § 987.102 Lot number. Lot number is synonymous with code and means a combination of letters or numbers, or both, acceptable to the Committee, showing...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 8 2011-01-01 2011-01-01 false Lot number. 987.102 Section 987.102 Agriculture... RIVERSIDE COUNTY, CALIFORNIA Administrative Rules Definitions § 987.102 Lot number. Lot number is synonymous with code and means a combination of letters or numbers, or both, acceptable to the Committee, showing...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 8 2013-01-01 2013-01-01 false Lot number. 987.102 Section 987.102 Agriculture... RIVERSIDE COUNTY, CALIFORNIA Administrative Rules Definitions § 987.102 Lot number. Lot number is synonymous with code and means a combination of letters or numbers, or both, acceptable to the Committee, showing...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 8 2014-01-01 2014-01-01 false Lot number. 987.102 Section 987.102 Agriculture... RIVERSIDE COUNTY, CALIFORNIA Administrative Rules Definitions § 987.102 Lot number. Lot number is synonymous with code and means a combination of letters or numbers, or both, acceptable to the Committee, showing...
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 8 2012-01-01 2012-01-01 false Lot number. 987.102 Section 987.102 Agriculture... RIVERSIDE COUNTY, CALIFORNIA Administrative Rules Definitions § 987.102 Lot number. Lot number is synonymous with code and means a combination of letters or numbers, or both, acceptable to the Committee, showing...
Compressing DNA sequence databases with coil.
White, W Timothy J; Hendy, Michael D
2008-05-20
Publicly available DNA sequence databases such as GenBank are large, and are growing at an exponential rate. The sheer volume of data being dealt with presents serious storage and data communications problems. Currently, sequence data is usually kept in large "flat files," which are then compressed using standard Lempel-Ziv (gzip) compression - an approach which rarely achieves good compression ratios. While much research has been done on compressing individual DNA sequences, surprisingly little has focused on the compression of entire databases of such sequences. In this study we introduce the sequence database compression software coil. We have designed and implemented a portable software package, coil, for compressing and decompressing DNA sequence databases based on the idea of edit-tree coding. coil is geared towards achieving high compression ratios at the expense of execution time and memory usage during compression - the compression time represents a "one-off investment" whose cost is quickly amortised if the resulting compressed file is transmitted many times. Decompression requires little memory and is extremely fast. We demonstrate a 5% improvement in compression ratio over state-of-the-art general-purpose compression tools for a large GenBank database file containing Expressed Sequence Tag (EST) data. Finally, coil can efficiently encode incremental additions to a sequence database. coil presents a compelling alternative to conventional compression of flat files for the storage and distribution of DNA sequence databases having a narrow distribution of sequence lengths, such as EST data. Increasing compression levels for databases having a wide distribution of sequence lengths is a direction for future work.
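The abstract frames coil's slow compression as a one-off investment that is amortised over repeated transmissions. As a back-of-envelope illustration of that amortisation argument (all sizes, times and the link speed are hypothetical, not measurements from the paper), the sketch below estimates how many downloads it takes for a slower but stronger compressor to beat a faster one overall.

```python
def break_even_downloads(original_mb, ratio_fast, time_fast_s, ratio_slow, time_slow_s, link_mb_per_s):
    """Number of transmissions after which the slower/stronger compressor wins overall.

    ratio_* : compressed size as a fraction of the original (smaller is better).
    Total cost = one-off compression time + n * transfer time of the compressed file.
    """
    per_download_saving = (ratio_fast - ratio_slow) * original_mb / link_mb_per_s
    extra_compression_cost = time_slow_s - time_fast_s
    if per_download_saving <= 0:
        return None  # the slower compressor never pays off
    n = extra_compression_cost / per_download_saving
    return max(1, int(n) + 1)

# Hypothetical numbers: 1000 MB EST file, gzip-like baseline vs. a 5% better ratio but 10x slower
print(break_even_downloads(original_mb=1000, ratio_fast=0.30, time_fast_s=300,
                           ratio_slow=0.25, time_slow_s=3000, link_mb_per_s=5))
```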
Benchmarking distributed data warehouse solutions for storing genomic variant information
Wiewiórka, Marek S.; Wysakowicz, Dawid P.; Okoniewski, Michał J.
2017-01-01
Genomic-based personalized medicine encompasses storing, analysing and interpreting genomic variants as its central issues. At a time when thousands of patients' sequenced exomes and genomes are becoming available, there is a growing need for efficient database storage and querying. The answer could be the application of modern distributed storage systems and query engines. However, the application of large genomic variant databases to this problem has not been sufficiently explored in the literature. To investigate the effectiveness of modern columnar storage [column-oriented Database Management System (DBMS)] and query engines, we have developed a prototypic genomic variant data warehouse, populated with large generated content of genomic variants and phenotypic data. Next, we have benchmarked performance of a number of combinations of distributed storages and query engines on a set of SQL queries that address biological questions essential for both research and medical applications. In addition, a non-distributed, analytical database (MonetDB) has been used as a baseline. Comparison of query execution times confirms that distributed data warehousing solutions outperform classic relational DBMSs. Moreover, pre-aggregation and further denormalization of data, which reduce the number of distributed join operations, significantly improve query performance by several orders of magnitude. Most of the distributed back-ends offer good performance for complex analytical queries, while the Optimized Row Columnar (ORC) format paired with Presto and Parquet with Spark 2 query engines provide, on average, the lowest execution times. Apache Kudu, on the other hand, is the only solution that guarantees a sub-second performance for simple genome range queries returning a small subset of data, where low-latency response is expected, while still offering decent performance for running analytical queries. In summary, research and clinical applications that require the storage and analysis of variants from thousands of samples can benefit from the scalability and performance of distributed data warehouse solutions. Database URL: https://github.com/ZSI-Bio/variantsdwh PMID:29220442
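The benchmark attributes much of the gain to denormalization and pre-aggregation that avoid distributed joins. The toy sketch below illustrates that idea with pandas on an invented variant/genotype layout; it is not the paper's data warehouse, and the table and column names are assumptions.

```python
import pandas as pd

# Toy normalized tables: variants and per-sample genotype calls (names are invented)
variants = pd.DataFrame({"variant_id": [1, 2, 3],
                         "chrom": ["1", "1", "2"],
                         "pos": [101, 202, 303]})
calls = pd.DataFrame({"variant_id": [1, 1, 2, 3, 3, 3],
                      "sample_id": ["s1", "s2", "s1", "s1", "s2", "s3"],
                      "genotype": ["0/1", "1/1", "0/1", "0/1", "0/0", "1/1"]})

# Normalized query: carrier counts per variant require a join every time it is asked.
joined = calls.merge(variants, on="variant_id")
per_variant = (joined[joined["genotype"] != "0/0"]
               .groupby(["variant_id", "chrom", "pos"]).size()
               .rename("carriers").reset_index())

# Denormalized + pre-aggregated table: the join and count are paid once at load time,
# so the analytical query becomes a cheap scan/filter over a materialised table.
carriers_by_region = per_variant
print(carriers_by_region[carriers_by_region["chrom"] == "1"])
```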
NASA Astrophysics Data System (ADS)
Jaggi, Chandra K.; Mittal, Mandeep; Khanna, Aditi
2013-09-01
In this article, an Economic Order Quantity (EOQ) model has been developed with unreliable supply, where each received lot may contain a random fraction of defective items with a known distribution. Thus, inspection of each lot becomes essential in almost all situations. Moreover, its role becomes more significant when the items are deteriorating in nature. It is assumed that defective items are salvaged as a single batch after the screening process. Further, it has been observed that the demand as well as the price for certain consumer items increases linearly with time, especially under inflationary conditions. Owing to this fact, this article investigates the impact of defective items on the retailer's ordering policy for deteriorating items under inflation when both demand and price vary with the passage of time. The proposed model optimises the order quantity by maximising the retailer's expected profit. Results are demonstrated with the help of a numerical example and a sensitivity analysis is also presented to provide managerial insights into practice.
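The article's full model (deterioration, inflation, time-varying demand and price) is not reproduced in the abstract, so the sketch below only illustrates the general mechanism of maximising an expected profit rate over the order quantity when lots contain a random defective fraction, in the spirit of simpler EOQ-with-imperfect-quality models; every parameter value is assumed.

```python
import numpy as np

def expected_profit_per_unit_time(Q, demand=1000.0, price=50.0, cost=25.0, salvage=20.0,
                                  order_cost=100.0, screen_cost=0.5, hold_cost=5.0,
                                  defect_mean=0.04):
    """Crude expected profit rate for order quantity Q with a random defective fraction.

    All parameter values are illustrative assumptions, not values from the article; the
    holding-cost term uses a simple average-inventory approximation.
    """
    good = Q * (1.0 - defect_mean)                 # expected sellable units per lot
    cycle = good / demand                          # expected cycle length
    revenue = price * good + salvage * Q * defect_mean
    expenses = cost * Q + order_cost + screen_cost * Q + hold_cost * (good / 2.0) * cycle
    return (revenue - expenses) / cycle

qs = np.arange(50, 2001, 10)
profits = [expected_profit_per_unit_time(q) for q in qs]
print("profit-maximising order quantity (sketch):", qs[int(np.argmax(profits))])
```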
ERIC Educational Resources Information Center
Kim, Deok-Hwan; Chung, Chin-Wan
2003-01-01
Discusses the collection fusion problem of image databases, concerned with retrieving relevant images by content based retrieval from image databases distributed on the Web. Focuses on a metaserver which selects image databases supporting similarity measures and proposes a new algorithm which exploits a probabilistic technique using Bayesian…
Development of an Accelerated Hydrogen Embrittlement Test for Manganese Phosphated Steels
2011-05-01
from the same vendor (RSL Testing Systems) and lot (HT/HTP) were used. See Appendix B for the certifications for lots HT/HTP. Note: lot HTP is...Based on the vendor certification of the notched tensile specimens from RSL lot HT/HTP, the NTS was 373 ksi and the average load at failure was...Systems Lot HT/HTP). Note that use of a higher reference NTS and load to failure is a conservative approach since the specimens will experience
Manasa, Justen; Lessells, Richard; Rossouw, Theresa; Naidu, Kevindra; Van Vuuren, Cloete; Goedhals, Dominique; van Zyl, Gert; Bester, Armand; Skingsley, Andrew; Stott, Katharine; Danaviah, Siva; Chetty, Terusha; Singh, Lavanya; Moodley, Pravi; Iwuji, Collins; McGrath, Nuala; Seebregts, Christopher J.; de Oliveira, Tulio
2014-01-01
Abstract Substantial amounts of data have been generated from patient management and academic exercises designed to better understand the human immunodeficiency virus (HIV) epidemic and design interventions to control it. A number of specialized databases have been designed to manage huge data sets from HIV cohort, vaccine, host genomic and drug resistance studies. Besides databases from cohort studies, most of the online databases contain limited curated data and are thus sequence repositories. HIV drug resistance has been shown to have a great potential to derail the progress made thus far through antiretroviral therapy. Thus, a lot of resources have been invested in generating drug resistance data for patient management and surveillance purposes. Unfortunately, most of the data currently available relate to subtype B even though >60% of the epidemic is caused by HIV-1 subtype C. A consortium of clinicians, scientists, public health experts and policy markers working in southern Africa came together and formed a network, the Southern African Treatment and Resistance Network (SATuRN), with the aim of increasing curated HIV-1 subtype C and tuberculosis drug resistance data. This article describes the HIV-1 data curation process using the SATuRN Rega database. The data curation is a manual and time-consuming process done by clinical, laboratory and data curation specialists. Access to the highly curated data sets is through applications that are reviewed by the SATuRN executive committee. Examples of research outputs from the analysis of the curated data include trends in the level of transmitted drug resistance in South Africa, analysis of the levels of acquired resistance among patients failing therapy and factors associated with the absence of genotypic evidence of drug resistance among patients failing therapy. All these studies have been important for informing first- and second-line therapy. This database is a free password-protected open source database available on www.bioafrica.net. Database URL: http://www.bioafrica.net/regadb/ PMID:24504151
7 CFR 932.151 - Incoming regulations.
Code of Federal Regulations, 2010 CFR
2010-01-01
... inspection station other than the one where the lot was sampled. (b) Lot identification. Immediately upon... complete Form COC 3A or 3C, weight and grade report or such other lot identification form as may be... performance of all actions connected with the identification of lots of olives, the weighing of boxes or bins...
7 CFR 27.12 - Classification request for each lot of cotton.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 2 2014-01-01 2014-01-01 false Classification request for each lot of cotton. 27.12... CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Requests § 27.12 Classification request for each lot of cotton. For each lot or mark of cotton of which the...
7 CFR 27.12 - Classification request for each lot of cotton.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Classification request for each lot of cotton. 27.12... CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Requests § 27.12 Classification request for each lot of cotton. For each lot or mark of cotton of which the...
7 CFR 27.12 - Classification request for each lot of cotton.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 2 2013-01-01 2013-01-01 false Classification request for each lot of cotton. 27.12... CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Requests § 27.12 Classification request for each lot of cotton. For each lot or mark of cotton of which the...
7 CFR 27.12 - Classification request for each lot of cotton.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Classification request for each lot of cotton. 27.12... CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Requests § 27.12 Classification request for each lot of cotton. For each lot or mark of cotton of which the...
7 CFR 27.12 - Classification request for each lot of cotton.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 2 2012-01-01 2012-01-01 false Classification request for each lot of cotton. 27.12... CONTAINER REGULATIONS COTTON CLASSIFICATION UNDER COTTON FUTURES LEGISLATION Regulations Classification Requests § 27.12 Classification request for each lot of cotton. For each lot or mark of cotton of which the...
46 CFR 160.077-23 - Production tests and inspections.
Code of Federal Regulations, 2010 CFR
2010-10-01
... fixed anchor, or (C) a tensile test machine that is capable of holding a given tension. The assembly... testing of each incoming lot of inflation chamber material before using that lot in production; (iii) Have... inspector must perform or supervise testing and inspection of at least one PFD lot in each five lots...
A Comparative Study on the Lot Release Systems for Vaccines as of 2016.
Fujita, Kentaro; Naito, Seishiro; Ochiai, Masaki; Konda, Toshifumi; Kato, Atsushi
2017-09-25
Many countries have already established their own vaccine lot release systems, each designed for that country's situation, while the World Health Organization promotes the convergence of these regulatory systems so that vaccines of assured quality are provided globally. We conducted a questionnaire-based investigation of the lot release systems for vaccines in 7 countries and 2 regions. We found that a review of the summary protocol by the National Regulatory Authorities was commonly applied for the independent lot release of vaccines; however, we also noted some diversity between countries, especially with regard to testing policy. Some countries and regions, including Japan, regularly tested every lot of vaccines, whereas the frequency of these tests was reduced in other countries and regions based on the risk assessment of these products. Test items selected for lot release varied among the countries or regions investigated, although there was a tendency to prioritize potency tests. An understanding of the lot release policies may contribute to improving and harmonizing the lot release system globally in the future.
Mostaguir, Khaled; Hoogland, Christine; Binz, Pierre-Alain; Appel, Ron D
2003-08-01
The Make 2D-DB tool has been previously developed to help build federated two-dimensional gel electrophoresis (2-DE) databases on one's own web site. The purpose of our work is to extend the strength of the first package and to build a more efficient environment. Such an environment should be able to fulfill the different needs and requirements arising from both the growing use of 2-DE techniques and the increasing amount of distributed experimental data.
Establishment of an international database for genetic variants in esophageal cancer.
Vihinen, Mauno
2016-10-01
The establishment of a database has been suggested in order to collect, organize, and distribute genetic information about esophageal cancer. The World Organization for Specialized Studies on Diseases of the Esophagus and the Human Variome Project will be in charge of a central database of information about esophageal cancer-related variations from publications, databases, and laboratories; in addition to genetic details, clinical parameters will also be included. The aim will be to get all the central players in research, clinical, and commercial laboratories to contribute. The database will follow established recommendations and guidelines. The database will require a team of dedicated curators with different backgrounds. Numerous layers of systematics will be applied to facilitate computational analyses. The data items will be extensively integrated with other information sources. The database will be distributed as open access to ensure exchange of the data with other databases. Variations will be reported in relation to reference sequences on three levels (DNA, RNA, and protein) whenever applicable. In the first phase, the database will concentrate on genetic variations including both somatic and germline variations for susceptibility genes. Additional types of information can be integrated at a later stage. © 2016 New York Academy of Sciences.
Greening vacant lots to reduce violent crime: a randomised controlled trial
Garvin, Eugenia C; Cannuscio, Carolyn C; Branas, Charles C
2014-01-01
Background Vacant lots are often overgrown with unwanted vegetation and filled with trash, making them attractive places to hide illegal guns, conduct illegal activities such as drug sales and prostitution, and engage in violent crime. There is some evidence that greening vacant lots is associated with reductions in violent crime. Methods We performed a randomised controlled trial of vacant lot greening to test the impact of this intervention on police reported crime and residents’ perceptions of safety and disorder. Greening consisted of cleaning the lots, planting grass and trees, and building a wooden fence around the perimeter. We randomly allocated two vacant lot clusters to the greening intervention or to the control status (no intervention). Administrative data were used to determine crime rates, and local resident interviews at baseline (n=29) and at follow-up (n=21) were used to assess perceptions of safety and disorder. Results Unadjusted difference-in-differences estimates showed a non-significant decrease in the number of total crimes and gun assaults around greened vacant lots compared with control. People around the intervention vacant lots reported feeling significantly safer after greening compared with those living around control vacant lots (p<0.01). Conclusions In this study, greening was associated with reductions in certain gun crimes and improvements in residents’ perceptions of safety. A larger randomised controlled trial is needed to further investigate the link between vacant lot greening and violence reduction. PMID:22871378
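The unadjusted difference-in-differences estimate quoted above is simple arithmetic: the change around the greened lots minus the change around the control lots. The sketch below shows that computation on made-up counts, not the study's data.

```python
def difference_in_differences(treat_pre, treat_post, control_pre, control_post):
    """Unadjusted DiD estimate: (treated change) minus (control change)."""
    return (treat_post - treat_pre) - (control_post - control_pre)

# Hypothetical monthly gun-assault counts around the lot clusters
did = difference_in_differences(treat_pre=12, treat_post=9, control_pre=11, control_post=10)
print("difference-in-differences estimate:", did)  # negative = relative reduction after greening
```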
Crystal Structure Predictions Using Adaptive Genetic Algorithm and Motif Search methods
NASA Astrophysics Data System (ADS)
Ho, K. M.; Wang, C. Z.; Zhao, X.; Wu, S.; Lyu, X.; Zhu, Z.; Nguyen, M. C.; Umemoto, K.; Wentzcovitch, R. M. M.
2017-12-01
Materials informatics is a new initiative that has attracted a lot of attention in recent scientific research. The basic strategy is to construct comprehensive data sets and use machine learning to solve a wide variety of problems in material design and discovery. In pursuit of this goal, a key element is the quality and completeness of the databases used. Recent advances in the development of crystal structure prediction algorithms have made them a complementary and more efficient approach to exploring the structure/phase space of materials using computers. In this talk, we discuss the importance of structural motifs and motif networks in crystal structure predictions. Correspondingly, powerful methods are developed to improve the sampling of the low-energy structure landscape.
NASA Astrophysics Data System (ADS)
Samsinar, Riza; Suseno, Jatmiko Endro; Widodo, Catur Edi
2018-02-01
The distribution network is the part of the power grid closest to the customers of electric service providers such as PT. PLN. The dispatching center of a power grid company is also the data center of the grid, where a great amount of operating information is gathered. The valuable information contained in these data matters greatly for power grid operating management. Data warehousing with online analytical processing (OLAP) has been used to manage and analyse this large volume of data. The specific outputs of the online analytics information system resulting from data warehouse processing with OLAP are chart and query reports. The chart reports comprise the load distribution chart over time, the distribution chart by area, the substation region chart, and the electric load usage chart. The results of the OLAP process show the development of the electric load distribution, provide analysis of electric power consumption, and offer an alternative way of presenting information related to peak load.
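The OLAP outputs described are essentially roll-ups of load readings by time period, area and substation. As a toy illustration (column names and values are invented, and this is not the system described), the sketch below builds one such roll-up with pandas.

```python
import pandas as pd

# Invented load readings: substation area, month, peak load in MW
readings = pd.DataFrame({
    "area":  ["North", "North", "South", "South", "North", "South"],
    "month": ["2017-01", "2017-02", "2017-01", "2017-02", "2017-03", "2017-03"],
    "peak_mw": [120.0, 132.5, 98.0, 101.5, 140.0, 97.0],
})

# OLAP-style roll-up: peak load per area per month (input for a "load distribution chart")
cube = readings.pivot_table(index="month", columns="area", values="peak_mw", aggfunc="max")
print(cube)
print("system peak by month:\n", cube.max(axis=1))
```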
Voltage stress effects on microcircuit accelerated life test failure rates
NASA Technical Reports Server (NTRS)
Johnson, G. M.
1976-01-01
The applicability of Arrhenius and Eyring reaction rate models for describing microcircuit aging characteristics as a function of junction temperature and applied voltage was evaluated. The results of a matrix of accelerated life tests with a single metal oxide semiconductor microcircuit operated at six different combinations of temperature and voltage were used to evaluate the models. A total of 450 devices from two different lots were tested at ambient temperatures between 200 C and 250 C and applied voltages between 5 Vdc and 15 Vdc. A statistical analysis of the surface-related failure data resulted in bimodal failure distributions comprising two lognormal distributions: a 'freak' distribution observed early in time and a 'main' distribution observed later in time. The Arrhenius model was shown to provide a good description of device aging as a function of temperature at a fixed voltage. The Eyring model also appeared to provide a reasonable description of main-distribution device aging as a function of temperature and voltage. Circuit diagrams are shown.
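For the Arrhenius model referenced above, the standard acceleration factor between a use temperature and a stress temperature is AF = exp[(Ea/k)(1/T_use - 1/T_stress)]. The sketch below evaluates it; the activation energy and temperatures are illustrative, not values fitted in the report.

```python
import math

BOLTZMANN_EV = 8.617e-5  # eV per kelvin

def arrhenius_acceleration(ea_ev, t_use_c, t_stress_c):
    """Acceleration factor AF = exp[(Ea/k) * (1/T_use - 1/T_stress)], temperatures in kelvin."""
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / BOLTZMANN_EV) * (1.0 / t_use - 1.0 / t_stress))

# Hypothetical: Ea = 1.0 eV, use at a 55 C junction, stress at a 225 C junction
print(round(arrhenius_acceleration(1.0, 55.0, 225.0), 1))
```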
A design for the geoinformatics system
NASA Astrophysics Data System (ADS)
Allison, M. L.
2002-12-01
Informatics integrates and applies information technologies with scientific and technical disciplines. A geoinformatics system targets the spatially based sciences. The system is not a master database, but will collect pertinent information from disparate databases distributed around the world. Seamless interoperability of databases promises quantum leaps in productivity not only for scientific researchers but also for many areas of society including business and government. The system will incorporate: acquisition of analog and digital legacy data; efficient information and data retrieval mechanisms (via data mining and web services); accessibility to and application of visualization, analysis, and modeling capabilities; online workspace, software, and tutorials; GIS; integration with online scientific journal aggregates and digital libraries; access to real time data collection and dissemination; user-defined automatic notification and quality control filtering for selection of new resources; and application to field techniques such as mapping. In practical terms, such a system will provide the ability to gather data over the Web from a variety of distributed sources, regardless of computer operating systems, database formats, and servers. Search engines will gather data about any geographic location, above, on, or below ground, covering any geologic time, and at any scale or detail. A distributed network of digital geolibraries can archive permanent copies of databases at risk of being discontinued and those that continue to be maintained by the data authors. The geoinformatics system will generate results from widely distributed sources to function as a dynamic data network. Instead of posting a variety of pre-made tables, charts, or maps based on static databases, the interactive dynamic system creates these products on the fly, each time an inquiry is made, using the latest information in the appropriate databases. Thus, in the dynamic system, a map generated today may differ from one created yesterday and one to be created tomorrow, because the databases used to make it are constantly (and sometimes automatically) being updated.
Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu
2009-02-01
The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphic user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (Large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate the LDSF can significantly improve the sensitivity of the result validation procedure.
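The LDSF strategy searches with a deliberately large mass error tolerance and then filters hits using a statistical MET estimated from high-confidence identifications. The sketch below shows only that filtering step, on simulated mass errors standing in for real search results; the mean-plus-or-minus-3-SD window is an assumed choice of statistical MET.

```python
import random
import statistics

random.seed(0)
# Simulated precursor mass errors (ppm): high-confidence hits cluster tightly,
# plus a few outliers that a large-MET search lets through.
high_conf_errors = [random.gauss(1.5, 0.8) for _ in range(200)]
all_hits = high_conf_errors + [random.uniform(-20, 20) for _ in range(40)]

# Estimate the statistical MET from the high-confidence subset (mean +/- 3 SD)
mu = statistics.mean(high_conf_errors)
sd = statistics.stdev(high_conf_errors)
lower, upper = mu - 3 * sd, mu + 3 * sd

filtered = [e for e in all_hits if lower <= e <= upper]
print(f"MET window: [{lower:.2f}, {upper:.2f}] ppm, kept {len(filtered)} of {len(all_hits)} hits")
```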
[The biomedical periodicals of Hungarian editions--historical overview].
Berhidi, Anna; Geges, József; Vasas, Lívia
2006-03-12
The majority of Hungarian scientific results are published in international periodicals in foreign languages, yet publications in Hungarian scientific periodicals should not be ignored. This study analyses Hungarian-edition biomedical periodicals from different points of view. A list of 119 titles was compiled from different databases, containing both the core and the peripheral journals of the biomedical field. These periodicals were then examined empirically, title by title. Thirteen of the titles have ceased publication; among the remaining 106 Hungarian scientific journals, 10 are published in English. Of the remaining majority, which are published in Hungarian, only a few appear in international databases. Although a quarter of the Hungarian biomedical journals meet the requirements to be represented in international databases, these periodicals are not indexed. Forty-two biomedical periodicals are available online, although a quarter of these have restricted access. Two-thirds of the Hungarian biomedical journals provide detailed instructions to authors, which inform publishing doctors and researchers of the requirements of a biomedical periodical. The increasing number of Hungarian biomedical journals is welcome news, but it would be important for frequently cited, high-quality publications to appear in Hungarian journals: the more publications are cited, the more the journals and their authors gain in prestige at home and internationally.
Requirements for benchmarking personal image retrieval systems
NASA Astrophysics Data System (ADS)
Bouguet, Jean-Yves; Dulong, Carole; Kozintsev, Igor; Wu, Yi
2006-01-01
It is now common to have accumulated tens of thousands of personal pictures. Efficient access to that many pictures can only be achieved with a robust image retrieval system. This application is of high interest to Intel processor architects: it is highly compute intensive and could motivate end users to upgrade their personal computers to the next generations of processors. A key question is how to assess the robustness of a personal image retrieval system. Personal image databases are very different from the digital libraries that have been used by many content-based image retrieval systems [1]. For example, a personal image database has a lot of pictures of people, but a small set of different people, typically family, relatives, and friends. Pictures are taken in a limited set of places such as home, work, school, and vacation destinations. The most frequent queries are searches for people and for places. These attributes, and many others, affect how a personal image retrieval system should be benchmarked, and the benchmarks need to differ from existing ones based on, for example, art images or medical images. The attributes of the data set do not change the list of components needed for benchmarking such systems as specified in [2]: data sets, query tasks, ground truth, evaluation measures, and benchmarking events. This paper proposes a way to build these components so that they are representative of personal image databases and of the corresponding usage models.
Ramirez-Gonzalez, Ricardo; Caccamo, Mario; MacLean, Daniel
2011-10-01
Scientists now use high-throughput sequencing technologies and short-read assembly methods to create draft genome assemblies in just days. Tools and pipelines, such as assemblers and workflow management environments, make it easy for a non-specialist to implement complicated pipelines and produce genome assemblies and annotations very quickly. Such accessibility results in a proliferation of assemblies and associated files, often for many organisms. These assemblies are used as working references by many different workers, from a bioinformatician doing gene prediction to a bench scientist designing primers for PCR. Here we describe Gee Fu, a database tool for genomic assembly and feature data, including next-generation sequence alignments. Gee Fu is a Ruby-on-Rails web application on a feature database that provides web and console interfaces for input, visualization of feature data via AnnoJ, access to data through a web-service interface, an API for direct data access by Ruby scripts, and access to feature data stored in BAM files. Gee Fu provides a platform for storing and sharing different versions of an assembly and associated features that can be accessed and updated by bench biologists and bioinformaticians in ways that are easy and useful for each. Availability: http://tinyurl.com/geefu. Contact: dan.maclean@tsl.ac.uk.
Monitoring health interventions – who's afraid of LQAS?
Pezzoli, Lorenzo; Kim, Sung Hye
2013-01-01
Lot quality assurance sampling (LQAS) is used to evaluate health services. Subunits of a population (lots) are accepted or rejected according to the number of failures in a random sample (N) of a given lot. If failures are greater than the decision value (d), we reject the lot and recommend corrective actions in the lot (i.e. intervention area); if they are equal to or less than d, we accept it. We used LQAS to monitor coverage during the last 3 days of a meningitis vaccination campaign in Niger. We selected one health area (lot) per day reporting the lowest administrative coverage in the previous 2 days. In the sampling plan we required: N to be small enough to allow us to evaluate one lot per day, deciding to sample 16 individuals from the selected villages of each health area, using probability proportionate to population size; thresholds and d to vary according to the administrative coverage reported; α ≤5% (meaning that, if we had conducted the survey 100 times, we would have accepted the lot up to five times when real coverage was at an unacceptable level); and β ≤20% (meaning that we would have rejected the lot up to 20 times when real coverage was equal to or above the satisfactory level). We classified all three lots as having acceptable coverage. LQAS appeared to be a rapid, simple, and statistically sound method for in-process coverage assessment. We encourage colleagues in the field to consider using LQAS in complement with other monitoring techniques such as house-to-house monitoring. PMID:24206650
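The sampling plan fixes N = 16 and chooses the decision value d so that alpha stays at or below 5% and beta at or below 20% for the chosen coverage thresholds. The sketch below computes those binomial risks for one assumed pair of thresholds (95% satisfactory, 70% unacceptable); the article does not state a single fixed threshold pair, so these values are illustrative.

```python
from math import comb

def prob_at_most(d, n, failure_prob):
    """P(failures <= d) for a binomial(n, failure_prob) sample."""
    return sum(comb(n, k) * failure_prob**k * (1 - failure_prob)**(n - k) for k in range(d + 1))

def lqas_risks(n, d, upper_coverage, lower_coverage):
    """alpha: accepting a lot whose true coverage is at the unacceptable (lower) level.
    beta: rejecting a lot whose true coverage is at the satisfactory (upper) level."""
    alpha = prob_at_most(d, n, 1 - lower_coverage)
    beta = 1 - prob_at_most(d, n, 1 - upper_coverage)
    return alpha, beta

# Assumed thresholds for illustration: satisfactory 95% coverage, unacceptable 70%
n, d = 16, 1
alpha, beta = lqas_risks(n, d, upper_coverage=0.95, lower_coverage=0.70)
print(f"N={n}, d={d}: alpha={alpha:.3f}, beta={beta:.3f}")
```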
Development of small scale cell culture models for screening poloxamer 188 lot-to-lot variation.
Peng, Haofan; Hall, Kaitlyn M; Clayton, Blake; Wiltberger, Kelly; Hu, Weiwei; Hughes, Erik; Kane, John; Ney, Rachel; Ryll, Thomas
2014-01-01
Shear protectants such as poloxamer 188 play a critical role in protecting cells during cell culture bioprocessing. Lot-to-lot variation of poloxamer 188 was experienced during a routine technology transfer across sites of similar scale and equipment. Cell culture medium containing a specific poloxamer 188 lot resulted in an unusual drop in cell growth, viability, and titer during manufacturing runs. After switching poloxamer lots, culture performance returned to the expected level. In order to control the quality of poloxamer 188 and thus maintain better consistency in manufacturing, multiple small scale screening models were developed. Initially, a 5L bioreactor model was established to evaluate cell damage by high sparge rates with different poloxamer 188 lots. Subsequently, a more robust, simple, and efficient baffled shake flask model was developed. The baffled shake flask model can be performed in a high throughput manner to investigate the cell damage in a bubbling environment. The main cause of the poor performance was the loss of protection, rather than toxicity. It was also suggested that suspicious lots can be identified using different cell line and media. The screening methods provide easy, yet remarkable models for understanding and controlling cell damage due to raw material lot variation as well as studying the interaction between poloxamer 188 and cells. © 2014 American Institute of Chemical Engineers.
Rita, Ingride; Pereira, Carla; Barros, Lillian; Santos-Buelga, Celestino; Ferreira, Isabel C F R
2016-10-12
Mentha spicata L., commonly known as spearmint, is widely used in both fresh and dry forms, for infusion preparation or in European and Indian cuisines. Recently, with the evolution of the tea market, several novel products with added value have been emerging, and the standard lots have evolved to reserve lots, with special harvest requirements that confer enhanced organoleptic and sensorial characteristics. The apical leaves of these batches are collected under specific conditions and thus have a different chemical profile. In the present study, standard and reserve lots of M. spicata were assessed in terms of the antioxidants present in infusions prepared from the different lots. The reserve lots presented higher concentrations of all the identified compounds than the standard lots, with 326 and 188 μg/mL of total phenolic compounds, respectively. Both types of samples presented rosmarinic acid as the most abundant phenolic compound, at concentrations of 169 and 101 μg/mL for reserve and standard lots, respectively. The antioxidant activity was higher in the reserve lots, which had the highest total phenolic compound content, with EC50 values ranging from 152 to 336 μg/mL.
Laassri, Majid; Mee, Edward T; Connaughton, Sarah M; Manukyan, Hasmik; Gruber, Marion; Rodriguez-Hernandez, Carmen; Minor, Philip D; Schepelmann, Silke; Chumakov, Konstantin; Wood, David J
2018-06-22
Bovine viral diarrhoea virus (BVDV) is a cattle pathogen that has previously been reported to be present in bovine raw materials used in the manufacture of biological products for human use. Seven lots of trivalent measles, mumps and rubella (MMR) vaccine and 1 lot of measles vaccine from the same manufacturer, together with 17 lots of foetal bovine serum (FBS) from different vendors, 4 lots of horse serum, 2 lots of bovine trypsin and 5 lots of porcine trypsin were analysed for BVDV using recently developed techniques, including PCR assays for BVDV detection, a qRT-PCR and immunofluorescence-based virus replication assays, and deep sequencing to identify and genotype BVDV genomes. All FBS lots and one lot of bovine-derived trypsin were PCR-positive for the presence of BVDV genome; in contrast all vaccine lots and the other samples were negative. qRT-PCR based virus replication assay and immunofluorescence-based infection assay detected no infectious BVDV in the PCR-positive samples. Complete BVDV genomes were generated from FBS samples by deep sequencing, and all were BVDV type 1. These data confirmed that BVDV nucleic acid may be present in bovine-derived raw materials, but no infectious virus or genomic RNA was detected in the final vaccine products. Copyright © 2018 International Alliance for Biological Standardization. All rights reserved.
Keith B. Aubry; Catherine M. Raley; Kevin S. McKelvey
2017-01-01
The availability of spatially referenced environmental data and species occurrence records in online databases enable practitioners to easily generate species distribution models (SDMs) for a broad array of taxa. Such databases often include occurrence records of unknown reliability, yet little information is available on the influence of data quality on SDMs generated...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE PISTACHIOS GROWN IN CALIFORNIA, ARIZONA, AND NEW MEXICO Definitions § 983.18 Lot. Lot means any quantity of pistachios that is submitted for...
75 FR 36673 - Notice of Inventory Completion: Public Museum of West Michigan, Grand Rapids, MI
Federal Register 2010, 2011, 2012, 2013, 2014
2010-06-28
... red ochre, 1 shell bracelet, 1 lot of bird bone, 1 flint flake, and 1 projectile point fragment. At an... 33 associated funerary objects are 1 Busycon shell dipper, 16 lots of bone awls and fragments, 1... lots of polished bone, 1 pottery vessel, and 1 lot of turtle carapace fragments. In 1879, human remains...
38 CFR 36.4255 - Loans for the acquisition of a lot.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2010-07-01 2010-07-01 false Loans for the acquisition of a lot. 36.4255 Section 36.4255 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... of a lot. (a) A loan to finance all or part of the cost of acquisition by the veteran of a lot on...
38 CFR 36.4255 - Loans for the acquisition of a lot.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2013-07-01 2013-07-01 false Loans for the acquisition of a lot. 36.4255 Section 36.4255 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... of a lot. (a) A loan to finance all or part of the cost of acquisition by the veteran of a lot on...
38 CFR 36.4255 - Loans for the acquisition of a lot.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2012-07-01 2012-07-01 false Loans for the acquisition of a lot. 36.4255 Section 36.4255 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... of a lot. (a) A loan to finance all or part of the cost of acquisition by the veteran of a lot on...
38 CFR 36.4255 - Loans for the acquisition of a lot.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2011-07-01 2011-07-01 false Loans for the acquisition of a lot. 36.4255 Section 36.4255 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... of a lot. (a) A loan to finance all or part of the cost of acquisition by the veteran of a lot on...
38 CFR 36.4255 - Loans for the acquisition of a lot.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 38 Pensions, Bonuses, and Veterans' Relief 2 2014-07-01 2014-07-01 false Loans for the acquisition of a lot. 36.4255 Section 36.4255 Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS... of a lot. (a) A loan to finance all or part of the cost of acquisition by the veteran of a lot on...
Individualized grid-enabled mammographic training system
NASA Astrophysics Data System (ADS)
Yap, M. H.; Gale, A. G.
2009-02-01
The PERFORMS self-assessment scheme measures individuals' skills in identifying key mammographic features on sets of known cases. One aspect of this is that it allows radiologists' skills to be trained, based on their data from this scheme. Consequently, a new strategy is introduced to provide revision training based on mammographic features that the radiologist has had difficulty with in these sets. To do this requires a lot of random cases to provide dynamic, unique, and up-to-date training modules for each individual. We propose GIMI (Generic Infrastructure in Medical Informatics) middleware as the solution to harvest cases from distributed grid servers. The GIMI middleware enables existing and legacy data to support healthcare delivery, research, and training. It is technology-agnostic, data-agnostic, and has a security policy. The trainee examines each case, indicating the location of regions of interest, and completes an evaluation form covering mammographic feature labelling, diagnosis, and decisions. For feedback, the trainee can choose to have immediate feedback after examining each case or batch feedback after examining a number of cases. All the trainees' results are recorded in a database which also contains their trainee profiles. A full report can be prepared for the trainee after they have completed their training. This project demonstrates the practicality of a grid-based individualised training strategy and its efficacy in generating dynamic training modules within the coverage/outreach of the GIMI middleware. The advantages and limitations of the approach are discussed together with future plans.
NASA Astrophysics Data System (ADS)
van Rooijen, Wim-Jan; Rodriguez, Ben
2002-12-01
A complex production mask-house faces the issue of handling and understanding the logistics information from the production process of the masks. We managed to control key performance indicators like cycle-time, flow-factor, line-speed, WIP, etc. To improve the line flow, we set up rules for optimising batching at operations and forbade batching between operations, we defined maximum and minimum WIP at the operations, scheduled the urgency of the different lots and built rules for bottleneck management. We also restricted the number of "hot lots". By migrating to the modern MES (manufacturing execution system) MaTISSe, which manages the shopfloor control, and a reporting database, we are able to eliminate the time deviations within our data caused by data extraction for different reports at different moments. This gives us a better understanding of our fixed bottleneck and faster recognition of the temporary bottlenecks caused by the unavailability of machines or personnel. In this paper we describe the features and advantages of our new MES, as well as the migration process. We have already achieved considerable benefits. Our plan is to extend decision support within the MES, to help both managers and operators make the right decisions. The project behind this paper reaped the major benefits described here and we are looking forward to further challenges and successes.
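Cycle time, flow factor, WIP and line speed are standard line-control indicators; by Little's law, WIP equals throughput times cycle time, and the flow factor is cycle time divided by raw process time. The sketch below computes these from invented lot-tracking numbers; exact KPI definitions vary between fabs, so treat the formulas as a generic reading, not MaTISSe's.

```python
def line_kpis(lots_completed, period_days, avg_cycle_time_days, raw_process_time_days):
    """Basic line KPIs from lot tracking data (all inputs are illustrative)."""
    throughput = lots_completed / period_days                  # lots per day
    wip = throughput * avg_cycle_time_days                     # Little's law estimate of WIP (lots)
    flow_factor = avg_cycle_time_days / raw_process_time_days  # >1 means lots wait in queues
    line_speed = raw_process_time_days / avg_cycle_time_days   # simple proxy: fraction of ideal speed
    return {"throughput_per_day": throughput, "wip_lots": wip,
            "flow_factor": flow_factor, "line_speed": line_speed}

print(line_kpis(lots_completed=90, period_days=30, avg_cycle_time_days=12.0, raw_process_time_days=4.0))
```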
DSSTox Website Launch: Improving Public Access to Databases for Building Structure-Toxicity Prediction Models
Ann M. Richard
US Environmental Protection Agency, Research Triangle Park, NC, USA
Distributed: Decentralized set of standardized, field-delimited databases,...
Progress Report on the DSSTox Database Network: Newly Launched Website, Applications, Future Plans
Progress will be reported on development of the Distributed Structure-Searchable Toxicity (DSSTox) Database Network and the newly launched public website that coordinates and...
ERIC Educational Resources Information Center
Pettersson, Rune
Different kinds of pictorial databases are described with respect to aims, user groups, search possibilities, storage, and distribution. Some specific examples are given for databases used for the following purposes: (1) labor markets for artists; (2) document management; (3) telling a story; (4) preservation (archives and museums); (5) research;…
Li, Jian; Yang, Yu-Guang; Chen, Xiu-Bo; Zhou, Yi-Hua; Shi, Wei-Min
2016-08-19
A novel quantum private database query protocol is proposed, based on passive round-robin differential phase-shift quantum key distribution. Compared with previous quantum private database query protocols, the present protocol has the following unique merits: (i) the user Alice can obtain one and only one key bit, so that both the efficiency and the security of the protocol can be ensured; and (ii) it does not require changing the length difference of the two arms in a Mach-Zehnder interferometer and simply chooses two pulses passively to interfere, so it is much simpler and more practical. The present protocol is also proved to be secure in terms of both user security and database security.
Okayasu, Hiromasa; Brown, Alexandra E; Nzioki, Michael M; Gasasira, Alex N; Takane, Marina; Mkanda, Pascal; Wassilak, Steven G F; Sutter, Roland W
2014-11-01
To assess the quality of supplementary immunization activities (SIAs), the Global Polio Eradication Initiative (GPEI) has used cluster lot quality assurance sampling (C-LQAS) methods since 2009. However, since the inception of C-LQAS, questions have been raised about the optimal balance between operational feasibility and precision of classification of lots to identify areas with low SIA quality that require corrective programmatic action. To determine if an increased precision in classification would result in differential programmatic decision making, we conducted a pilot evaluation in 4 local government areas (LGAs) in Nigeria with an expanded LQAS sample size of 16 clusters (instead of the standard 6 clusters) of 10 subjects each. The results showed greater heterogeneity between clusters than the assumed standard deviation of 10%, ranging from 12% to 23%. Comparing the distribution of 4-outcome classifications obtained from all possible combinations of 6-cluster subsamples to the observed classification of the 16-cluster sample, we obtained an exact match in classification in 56% to 85% of instances. We concluded that the 6-cluster C-LQAS provides acceptable classification precision for programmatic action. Considering the greater resources required to implement an expanded C-LQAS, the improvement in precision was deemed insufficient to warrant the effort. Published by Oxford University Press on behalf of the Infectious Diseases Society of America 2014. This work is written by (a) US Government employee(s) and is in the public domain in the US.
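The pilot compared the 16-cluster classification with the classifications obtained from all possible 6-cluster subsamples. The sketch below reproduces that comparison mechanically on made-up per-cluster coverage values, using a simple two-outcome threshold rule rather than the programme's four-outcome scheme; the data and threshold are assumptions.

```python
from itertools import combinations
from statistics import mean

# Invented per-cluster coverage proportions (10 children per cluster, 16 clusters)
clusters = [0.9, 0.8, 1.0, 0.7, 0.9, 0.6, 0.8, 0.9, 1.0, 0.7, 0.8, 0.9, 0.6, 0.8, 1.0, 0.9]

def classify(sample, threshold=0.80):
    """Toy two-outcome rule: 'pass' if mean coverage across sampled clusters >= threshold."""
    return "pass" if mean(sample) >= threshold else "fail"

full_call = classify(clusters)
subsample_calls = [classify(sub) for sub in combinations(clusters, 6)]
agreement = sum(call == full_call for call in subsample_calls) / len(subsample_calls)
print(f"16-cluster call: {full_call}; 6-cluster subsamples agreeing: {agreement:.0%} of {len(subsample_calls)}")
```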
Jacobs, C M; Utterback, P L; Parsons, C M; Rice, D; Smith, B; Hinds, M; Liebergesell, M; Sauber, T
2008-03-01
An experiment using 216 Hy-Line W-36 pullets was conducted to evaluate transgenic maize grain containing the cry34Ab1 and cry35Ab1 genes from a Bacillus thuringiensis (Bt) strain and the phosphinothricin ace-tyltransferase (pat) gene from Streptomyces viridochromogenes. Expression of the cry34Ab1 and cry35Ab1 genes confers resistance to corn rootworms, and the pat gene confers tolerance to herbicides containing glufosinate-ammonium. Pullets (20 wk of age) were placed in cage lots (3 hens/cage, 2 cages/lot) and were randomly assigned to 1 of 3 corn-soybean meal dietary treatments (12 lots/treatment) formulated with the following maize grains: near-isogenic control (control), conventional maize, and transgenic test corn line 59122 containing event DAS-59122-7. Differences between 59122 and control group means were evaluated with statistical significance at P < 0.05. Body weight and gain, egg production, egg mass, and feed efficiency for hens fed the 59122 corn were not significantly different from the respective values for hens fed diets formulated with control maize grain. Egg component weights, Haugh unit measures, and egg weight class distribution were similar regardless of the corn source. This research indicates that performance of hens fed diets containing 59122 maize grain, as measured by egg production and egg quality, was similar to that of hens fed diets formulated with near-isogenic corn grain.
Domain Regeneration for Cross-Database Micro-Expression Recognition
NASA Astrophysics Data System (ADS)
Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying
2018-05-01
In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples are from two different micro-expression databases. Under this setting, the training and testing samples have different feature distributions, and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called Target Sample Re-Generator (TSRG). By using TSRG, we are able to re-generate the samples from the target micro-expression database so that the re-generated target samples share the same or similar feature distributions with the original source samples. For this reason, we can then use the classifier learned on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments based on the SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.
Bohn, Justin; Eddings, Wesley; Schneeweiss, Sebastian
2017-03-15
Distributed networks of health-care data sources are increasingly being utilized to conduct pharmacoepidemiologic database studies. Such networks may contain data that are not physically pooled but instead are distributed horizontally (separate patients within each data source) or vertically (separate measures within each data source) in order to preserve patient privacy. While multivariable methods for the analysis of horizontally distributed data are frequently employed, few practical approaches have been put forth to deal with vertically distributed health-care databases. In this paper, we propose 2 propensity score-based approaches to vertically distributed data analysis and test their performance using 5 example studies. We found that these approaches produced point estimates close to what could be achieved without partitioning. We further found a performance benefit (i.e., lower mean squared error) for sequentially passing a propensity score through each data domain (called the "sequential approach") as compared with fitting separate domain-specific propensity scores (called the "parallel approach"). These results were validated in a small simulation study. This proof-of-concept study suggests a new multivariable analysis approach to vertically distributed health-care databases that is practical, preserves patient privacy, and warrants further investigation for use in clinical research applications that rely on health-care databases. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
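A minimal sketch of the two approaches on simulated vertically partitioned data: the parallel variant fits a separate propensity model per data domain, while the sequential variant carries the first domain's score forward as an extra covariate for the next domain. This is an illustration of the idea with scikit-learn, not the authors' implementation, and the simulated covariates are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
domain_a = rng.normal(size=(n, 3))   # e.g. claims covariates held by one data partner
domain_b = rng.normal(size=(n, 3))   # e.g. lab covariates held by another partner
logit = 0.8 * domain_a[:, 0] - 0.5 * domain_b[:, 1]
treatment = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Parallel: independent domain-specific propensity scores
ps_a = LogisticRegression().fit(domain_a, treatment).predict_proba(domain_a)[:, 1]
ps_b = LogisticRegression().fit(domain_b, treatment).predict_proba(domain_b)[:, 1]

# Sequential: domain B augments its model with the score passed on from domain A
seq_features_b = np.column_stack([domain_b, ps_a])
ps_seq = LogisticRegression().fit(seq_features_b, treatment).predict_proba(seq_features_b)[:, 1]

true_ps = 1 / (1 + np.exp(-logit))
for name, ps in [("parallel (domain B only)", ps_b), ("sequential", ps_seq)]:
    print(name, "MSE vs true propensity:", round(float(np.mean((ps - true_ps) ** 2)), 4))
```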
System for Performing Single Query Searches of Heterogeneous and Dispersed Databases
NASA Technical Reports Server (NTRS)
Maluf, David A. (Inventor); Okimura, Takeshi (Inventor); Gurram, Mohana M. (Inventor); Tran, Vu Hoang (Inventor); Knight, Christopher D. (Inventor); Trinh, Anh Ngoc (Inventor)
2017-01-01
The present invention is a distributed computer system of heterogeneous databases joined in an information grid and configured with an Application Programming Interface hardware which includes a search engine component for performing user-structured queries on multiple heterogeneous databases in real time. This invention reduces overhead associated with the impedance mismatch that commonly occurs in heterogeneous database queries.
Organization and dissemination of multimedia medical databases on the WWW.
Todorovski, L; Ribaric, S; Dimec, J; Hudomalj, E; Lunder, T
1999-01-01
In the paper, we focus on the problem of building and disseminating multimedia medical databases on the World Wide Web (WWW). The current results of the ongoing project of building a prototype dermatology images database and its WWW presentation are presented. The dermatology database is part of an ambitious plan concerning an organization of a network of medical institutions building distributed and federated multimedia databases of a much wider scale.
Recent advances on terrain database correlation testing
NASA Astrophysics Data System (ADS)
Sakude, Milton T.; Schiavone, Guy A.; Morelos-Borja, Hector; Martin, Glenn; Cortes, Art
1998-08-01
Terrain database correlation is a major requirement for interoperability in distributed simulation. There are numerous situations in which terrain database correlation problems can occur that, in turn, lead to a lack of interoperability in distributed training simulations. Examples are the use of different run-time terrain databases derived from inconsistent source data, the use of different resolutions, and the use of different data models between databases for both terrain and culture data. IST has been developing a suite of software tools, named ZCAP, to address terrain database interoperability issues. In this paper we discuss recent enhancements made to this suite, including improved algorithms for sampling and calculating line-of-sight, an improved method for measuring terrain roughness, and the application of a sparse matrix method to the terrain remediation solution developed at the Visual Systems Lab of the Institute for Simulation and Training. We review the application of some of these new algorithms to the terrain correlation measurement processes. The application of these new algorithms improves our support for very large terrain databases and provides the capability for performing test replications to estimate the sampling error of the tests. With this set of tools, a user can quantitatively assess the degree of correlation between large terrain databases.
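Line-of-sight agreement is one of the correlation measures such tools compute between terrain databases. The sketch below is a generic sampled line-of-sight test over a small height grid with nearest-cell terrain lookup, not the ZCAP algorithm; real implementations interpolate the surface and handle geographic coordinates.

```python
def line_of_sight(heights, a, b, observer_h=2.0, target_h=2.0, samples=50):
    """True if the segment from point a to point b clears the terrain in `heights`.

    heights: 2-D list indexed as heights[row][col]; a, b: (row, col) tuples.
    A simple sampled test with nearest-cell terrain lookup; real tools interpolate.
    """
    (r0, c0), (r1, c1) = a, b
    z0 = heights[r0][c0] + observer_h
    z1 = heights[r1][c1] + target_h
    for i in range(1, samples):
        t = i / samples
        r, c = r0 + t * (r1 - r0), c0 + t * (c1 - c0)
        ray_z = z0 + t * (z1 - z0)
        if heights[round(r)][round(c)] > ray_z:
            return False
    return True

# Toy 5x5 height grid with a ridge in the middle
grid = [[0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 30, 30, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
print(line_of_sight(grid, (0, 0), (4, 4)))   # blocked by the ridge -> False
print(line_of_sight(grid, (0, 0), (0, 4)))   # clear along the top row -> True
```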
Development, deployment and operations of ATLAS databases
NASA Astrophysics Data System (ADS)
Vaniachine, A. V.; Schmitt, J. G. v. d.
2008-07-01
In preparation for ATLAS data taking, a coordinated shift from development towards operations has occurred in ATLAS database activities. In addition to development and commissioning activities in databases, ATLAS is active in the development and deployment (in collaboration with the WLCG 3D project) of the tools that allow the worldwide distribution and installation of databases and related datasets, as well as the actual operation of this system on ATLAS multi-grid infrastructure. We describe development and commissioning of major ATLAS database applications for online and offline. We present the first scalability test results and ramp-up schedule over the initial LHC years of operations towards the nominal year of ATLAS running, when the database storage volumes are expected to reach 6.1 TB for the Tag DB and 1.0 TB for the Conditions DB. ATLAS database applications require robust operational infrastructure for data replication between online and offline at Tier-0, and for the distribution of the offline data to Tier-1 and Tier-2 computing centers. We describe ATLAS experience with Oracle Streams and other technologies for coordinated replication of databases in the framework of the WLCG 3D services.
PIGD: a database for intronless genes in the Poaceae.
Yan, Hanwei; Jiang, Cuiping; Li, Xiaoyu; Sheng, Lei; Dong, Qing; Peng, Xiaojian; Li, Qian; Zhao, Yang; Jiang, Haiyang; Cheng, Beijiu
2014-10-01
Intronless genes are a feature of prokaryotes; however, they are widespread and unequally distributed among eukaryotes and represent an important resource to study the evolution of gene architecture. Although many databases on exons and introns exist, there is currently no cohesive database that collects intronless genes in plants into a single database. In this study, we present the Poaceae Intronless Genes Database (PIGD), a user-friendly web interface to explore information on intronless genes from different plants. Five Poaceae species, Sorghum bicolor, Zea mays, Setaria italica, Panicum virgatum and Brachypodium distachyon, are included in the current release of PIGD. Gene annotations and sequence data were collected and integrated from different databases. The primary focus of this study was to provide gene descriptions and gene product records. In addition, functional annotations, subcellular localization prediction and taxonomic distribution are reported. PIGD allows users to readily browse, search and download data. BLAST and comparative analyses are also provided through this online database, which is available at http://pigd.ahau.edu.cn/. PIGD provides a solid platform for the collection, integration and analysis of intronless genes in the Poaceae. As such, this database will be useful for subsequent bio-computational analysis in comparative genomics and evolutionary studies.
Caspard, Herve; Coelingh, Kathleen L; Mallory, Raburn M; Ambrose, Christopher S
2016-09-30
This analysis examined potential causes of the lack of vaccine effectiveness (VE) of live attenuated influenza vaccine (LAIV) against A/H1N1pdm09 viruses in the United States (US) during the 2013-2014 season. Laboratory studies have demonstrated reduced thermal stability of A/California/07/2009, the A/H1N1pdm09 strain utilized in LAIV from 2009 through 2013-2014. Post hoc analyses of a 2013-2014 test-negative case-control (TNCC) effectiveness study investigated associations between vaccine shipping conditions and LAIV lot effectiveness. Investigational sites provided the LAIV lot numbers administered to each LAIV recipient enrolled in the study, and the vaccine distributor used by the site for commercially purchased vaccine. Additionally, a review was conducted of 2009-2014 pediatric observational TNCC effectiveness studies of LAIV, summarizing effectiveness by type/subtype, season, and geographic location. In the 2013-2014 TNCC study, the proportion of LAIV recipients who tested positive for H1N1pdm09 was significantly higher among children who received a lot released between August 1 and September 15, 2013, compared with a lot shipped either earlier or later (21% versus 4%; P<0.01). A linear relationship was observed between the proportion of subjects testing positive for H1N1pdm09 and outdoor temperatures during truck unloading at distributors' central locations. The review of LAIV VE studies showed that in the 2010-2011 and 2013-2014 influenza seasons, no significant effectiveness of LAIV against H1N1pdm09 was demonstrated for the trivalent or quadrivalent formulations of LAIV in the US, respectively, in contrast to significant effectiveness against A/H3N2 and B strains during 2010-2014. This study showed that the lack of VE observed with LAIV in the US against H1N1pdm09 viruses was associated with exposure of some LAIV lots to temperatures above recommended storage conditions during US distribution, and is likely explained by the increased susceptibility of the A/California/7/2009 (H1N1pdm09) LAIV strain to thermal degradation. NCT01997450. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 7 Agriculture 3 2012-01-01 2012-01-01 false Lot number or other lot identification of vegetable seed in containers of more than 1 pound. 201.30b Section 201.30b Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED)...
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 3 2011-01-01 2011-01-01 false Lot number or other lot identification of vegetable seed in containers of more than 1 pound. 201.30b Section 201.30b Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED)...
Code of Federal Regulations, 2013 CFR
2013-01-01
... 7 Agriculture 3 2013-01-01 2013-01-01 false Lot number or other lot identification of vegetable seed in containers of more than 1 pound. 201.30b Section 201.30b Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED)...
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 3 2010-01-01 2010-01-01 false Lot number or other lot identification of vegetable seed in containers of more than 1 pound. 201.30b Section 201.30b Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED)...
Code of Federal Regulations, 2014 CFR
2014-01-01
... 7 Agriculture 3 2014-01-01 2014-01-01 false Lot number or other lot identification of vegetable seed in containers of more than 1 pound. 201.30b Section 201.30b Agriculture Regulations of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE (CONTINUED)...
Effects of greening and community reuse of vacant lots on crime
M. Kondo; B. Hohl; S. Han; C. Branas
2016-01-01
The Youngstown Neighborhood Development Corporation initiated a "Lots of Green" programme to reuse vacant land in 2010. We performed a difference-in-differences analysis of the effects of this programme on crime in and around newly treated lots, in comparison to crimes in and around randomly selected and matched, untreated vacant lot controls. The effects of two types...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-26
... Change Amending Its Rules To Remove the Concept of an ``Odd Lot Dealer'' May 19, 2011. Pursuant to... remove the concept of an ``Odd Lot Dealer.'' The text of the proposed rule change is available at the... proposes to amend its rules to remove the concept of an Odd Lot Dealer. An Odd Lot Dealer is any Market...
Neighborhood blight, stress, and health: a walking trial of urban greening and ambulatory heart rate
Eugenia C. South; Michelle C. Kondo; Rose A. Cheney; Charles C. Branas
2015-01-01
We measured dynamic stress responses using ambulatory heart rate monitoring as participants in Philadelphia, Pennsylvania walked past vacant lots before and after a greening remediation treatment of randomly selected lots. Being in view of a greened vacant lot decreased heart rate significantly more than did being in view of a nongreened vacant lot or not in view of...
Digital Video of Live-Scan Fingerprint Data
National Institute of Standards and Technology Data Gateway
NIST Digital Video of Live-Scan Fingerprint Data (PC database for purchase) NIST Special Database 24 contains MPEG-2 (Moving Picture Experts Group) compressed digital video of live-scan fingerprint data. The database is being distributed for use in developing and testing of fingerprint verification systems.
"Mr. Database" : Jim Gray and the History of Database Technologies.
Hanwahr, Nils C
2017-12-01
Although the widespread use of the term "Big Data" is comparatively recent, it invokes a phenomenon in the developments of database technology with distinct historical contexts. The database engineer Jim Gray, known as "Mr. Database" in Silicon Valley before his disappearance at sea in 2007, was involved in many of the crucial developments since the 1970s that constitute the foundation of exceedingly large and distributed databases. Jim Gray was involved in the development of relational database systems based on the concepts of Edgar F. Codd at IBM in the 1970s before he went on to develop principles of Transaction Processing that enable the parallel and highly distributed performance of databases today. He was also involved in creating forums for discourse between academia and industry, which influenced industry performance standards as well as database research agendas. As a co-founder of the San Francisco branch of Microsoft Research, Gray increasingly turned toward scientific applications of database technologies, e.g., leading the TerraServer project, an online database of satellite images. Inspired by Vannevar Bush's idea of the memex, Gray laid out his vision of a Personal Memex as well as a World Memex, eventually postulating a new era of data-based scientific discovery termed "Fourth Paradigm Science". This article gives an overview of Gray's contributions to the development of database technology as well as his research agendas and shows that central notions of Big Data have been occupying database engineers for much longer than the actual term has been in use.
Dynamic Stall Computations Using a Zonal Navier-Stokes Model
1988-06-01
Naval Postgraduate School, Monterey, California. Master's thesis: Dynamic Stall Calculations Using a Zonal Navier-Stokes Model, by Jack H. Conroyd, Jr., June 1988. Thesis co-advisors: M. F. Platzer and Lawrence W. Carr. Approved for public release; distribution is unlimited. The views expressed in this thesis are those of the author and do not reflect the...
Analysis of the U.S. Navy Office of Women’s Policy Facebook Use
2015-06-01
groups: "Being in a workplace where there's … only a handful of [women] or none at all … a lot of junior sailors have reached out and said, 'You know …'" As women in the military, these posts display a level of frustration. Users may be reluctant to share with male counterparts or in the workplace. … The use of social media within the workplace as a tool for communication
The Information Age: An Anthology on Its Impact and Consequences
1997-01-01
the United States, but it soon assumed global proportions as information and its collection, management, and distribution became the hallmarks of...on election day, soon forgotten in the enjoyment of power, is over," he argues. There is a simple reason for this, Huber maintains. Just as you can...than the value of all U.S. exports. Thus a lot of commerce that looks domestic to an economist—such as the Stouffer's frozen dinner you bought last
New Orleans, Louisiana, Mississippi River, and Lake Pontchartrain
1973-06-22
SL2-05-397 (22 June 1973) --- New Orleans, Louisiana, Mississippi River, and Lake Pontchartrain (31.0N, 91.0W) can all be seen in this single detailed view. The marshlands of the Atchafalaya Basin, previously the main drainage way for the Mississippi River, can be seen to be partially silted as a result of sediments. The long, narrow field patterns fronting on the river are called the "Long Lot" system of equal land distribution, based on the French Napoleonic Civil Code. Photo credit: NASA
NASA Astrophysics Data System (ADS)
Fukuoka, Hiroshi; Watanabe, Eisuke
2017-04-01
Since Keefer published his 1984 paper on earthquake magnitude, affected area, and the maximum epicentral/fault distance of induced landslide distributions, which showed the envelope of the plots, many studies on this topic have been conducted. It has generally been supposed that landslides are triggered by shallow quakes and that more landslides are likely to occur when heavy rainfall immediately precedes the quake. In order to confirm this, we collected 22 case records of earthquake-induced landslide distributions in Japan and examined the effects of hypocenter depth and antecedent precipitation. The JMA (Japan Meteorological Agency) magnitudes of the cases range from 4.5 to 9.0. Analysis of hypocenter depth showed that deeper quakes cause wider distributions. Antecedent precipitation was evaluated using the Soil Water Index (SWI), which was developed by JMA for issuing landslide alerts. We could not find a meaningful correlation between SWI and the earthquake-induced landslide distribution. Additionally, we found that a smaller minimum size of collected landslides results in a wider distribution, especially between 1,000 and 100,000 m2.
Security in the CernVM File System and the Frontier Distributed Database Caching System
NASA Astrophysics Data System (ADS)
Dykstra, D.; Blomer, J.
2014-06-01
Both the CernVM File System (CVMFS) and the Frontier Distributed Database Caching System (Frontier) distribute centrally updated data worldwide for LHC experiments using http proxy caches. Neither system provides privacy or access control on reading the data, but both control access to updates of the data and can guarantee the authenticity and integrity of the data transferred to clients over the internet. CVMFS has since its early days required digital signatures and secure hashes on all distributed data, and recently Frontier has added X.509-based authenticity and integrity checking. In this paper we detail and compare the security models of CVMFS and Frontier.
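The integrity guarantee described here can be illustrated with a generic sketch: data fetched through an untrusted proxy cache is only trusted after its secure hash matches a reference obtained from a trusted, signed channel. This uses Python's standard hashlib and is not the CVMFS or Frontier code.

```python
# Generic content-integrity check of the kind both systems rely on: the client
# recomputes a secure hash of data fetched through an untrusted proxy and
# compares it against a hash obtained from a trusted (signed) channel.
import hashlib

def verify_payload(payload: bytes, trusted_sha256_hex: str) -> bool:
    digest = hashlib.sha256(payload).hexdigest()
    return digest == trusted_sha256_hex

data = b"calibration blob fetched via an http proxy cache"
reference = hashlib.sha256(data).hexdigest()   # e.g. published in a signed catalog
assert verify_payload(data, reference)
```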
Application of new type of distributed multimedia databases to networked electronic museum
NASA Astrophysics Data System (ADS)
Kuroda, Kazuhide; Komatsu, Naohisa; Komiya, Kazumi; Ikeda, Hiroaki
1999-01-01
Recently, various kinds of multimedia application systems have actively been developed based on advanced high-speed communication networks, computer processing technologies, and digital content-handling technologies. Against this background, this paper proposes a new distributed multimedia database system which can effectively perform a new function of cooperative retrieval among distributed databases. The proposed system introduces a new concept of a 'retrieval manager', which functions as an intelligent controller so that the user can recognize a set of distributed databases as one logical database. The logical database dynamically generates and performs a preferred combination of retrieval parameters on the basis of both directory data and the system environment. Moreover, a concept of 'domain' is defined in the system as a managing unit of retrieval; retrieval can effectively be performed by cooperative processing among multiple domains. A communication language and protocols are also defined in the system and are used in every communication action. A language interpreter in each machine translates the communication language into the internal language used by that machine, so that internal modules such as the DBMS and user interface modules can freely be selected. A concept of a 'content-set' is also introduced. A content-set is defined as a package of contents that are related to each other, and the system handles a content-set as one object. The user terminal can effectively control the display of retrieved contents by referring to data indicating the relations among the contents in the content-set. In order to verify the function of the proposed system, a networked electronic museum was experimentally built. The results of this experiment indicate that the proposed system can effectively retrieve the desired contents under the control of a number of distributed domains, and that the system can work effectively even as it becomes large.
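A minimal sketch of the retrieval-manager idea follows: one logical front end fans a query out to several domains and merges the results. Domain names, record formats, and the query interface are hypothetical, not the system described above.

```python
# Minimal sketch of a "retrieval manager": the user sees one logical database
# while the manager fans a query out to several domains and merges the results.
# Domain names and the query format are hypothetical.
class Domain:
    def __init__(self, name, records):
        self.name = name
        self.records = records            # list of dicts describing contents

    def search(self, keyword):
        return [r for r in self.records if keyword in r["title"]]

class RetrievalManager:
    def __init__(self, domains):
        self.domains = domains

    def query(self, keyword):
        results = []
        for domain in self.domains:       # cooperative retrieval across domains
            results.extend(domain.search(keyword))
        return results

museum = RetrievalManager([
    Domain("paintings", [{"title": "ukiyo-e landscape"}]),
    Domain("sculpture", [{"title": "bronze landscape relief"}]),
])
print(museum.query("landscape"))
```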
The distribution of common construction materials at risk to acid deposition in the United States
NASA Astrophysics Data System (ADS)
Lipfert, Frederick W.; Daum, Mary L.
Information on the geographic distribution of various types of exposed materials is required to estimate the economic costs of damage to construction materials from acid deposition. This paper focuses on the identification, evaluation and interpretation of data describing the distributions of exterior construction materials, primarily in the United States. This information could provide guidance on how data needed for future economic assessments might be acquired in the most cost-effective ways. Materials distribution surveys from 16 cities in the U.S. and Canada and five related databases from government agencies and trade organizations were examined. Data on residential buildings are more commonly available than on nonresidential buildings; little geographically resolved information on distributions of materials in infrastructure was found. Survey results generally agree with the appropriate ancillary databases, but the usefulness of the databases is often limited by their coarse spatial resolution. Information on those materials which are most sensitive to acid deposition is especially scarce. Since a comprehensive error analysis has never been performed on the data required for an economic assessment, it is not possible to specify the corresponding detailed requirements for data on the distributions of materials.
Effect of Soybean Casein Digest Agar Lot on Number of Bacillus stearothermophilus Spores Recovered †
Pflug, I. J.; Smith, Geraldine M.; Christensen, Ronald
1981-01-01
In recent years it has become increasingly apparent that Bacillus stearothermophilus spores are affected by various environmental factors that influence the performance of the spores as biological indicators. One environmental factor is the recovery medium. The effect of different lots of commercial soybean casein digest agar on the number of colony-forming units per plate was examined in two series of experiments: (i) several lots of medium from two manufacturers were compared in single experiments, and (ii) paired media experiments with four lots of medium were carried out and yielded three-point survivor curves. The results demonstrate that commercial soybean casein digest agar is variable on a lot-to-lot basis. The variation was lowest when recovering unheated or minimally heated spores and increased greatly with the severity of heating. PMID:16345822
Parking lot sealcoat: An unrecognized source of urban polycyclic aromatic hydrocarbons
Mahler, B.J.; Van Metre, P.C.; Bashara, T.J.; Wilson, J.T.; Johns, D.A.
2005-01-01
Polycyclic aromatic hydrocarbons (PAHs) are a ubiquitous contaminant in urban environments. Although numerous sources of PAHs to urban runoff have been identified, their relative importance remains uncertain. We show that a previously unidentified source of urban PAHs, parking lot sealcoat, may dominate loading of PAHs to urban water bodies in the United States. Particles in runoff from parking lots with coal-tar emulsion sealcoat had mean concentrations of PAHs of 3500 mg/kg, 65 times higher than the mean concentration from unsealed asphalt and cement lots. Diagnostic ratios of individual PAHs indicating sources are similar for particles from coal-tar emulsion sealed lots and suspended sediment from four urban streams. Contaminant yields projected to the watershed scale for the four associated watersheds indicate that runoff from sealed parking lots could account for the majority of stream PAH loads.
Hinton, Devon E; Barlow, David H; Reis, Ria; de Jong, Joop
2016-12-01
We present a general model of why "thinking a lot" is a key presentation of distress in many cultures and examine how "thinking a lot" plays out in the Cambodian cultural context. We argue that the complaint of "thinking a lot" indicates the presence of a certain causal network of psychopathology that is found across cultures, but that this causal network is localized in profound ways. We show, using a Cambodian example, that examining "thinking a lot" in a cultural context is a key way of investigating the local bio-cultural ontology of psychopathology. Among Cambodian refugees, a typical episode of "thinking a lot" begins with ruminative-type negative cognitions, in particular worry and depressive thoughts. Next these negative cognitions may induce mental symptoms (e.g., poor concentration, forgetfulness, and "zoning out") and somatic symptoms (e.g., migraine headache, migraine-like blurry vision such as scintillating scotomas, dizziness, palpitations). Subsequently the very fact of "thinking a lot" and the induced symptoms may give rise to multiple catastrophic cognitions. Soon, as distress escalates, in a kind of looping, other negative cognitions such as trauma memories may be triggered. All these processes are highly shaped by the Cambodian socio-cultural context. The article shows that Cambodian trauma survivors have a locally specific illness reality that centers on dynamic episodes of "thinking a lot," or on what might be called the "thinking a lot" causal network.
Maritime Operations in Disconnected, Intermittent, and Low-Bandwidth Environments
2013-06-01
of a Dynamic Distributed Database (DDD) is a core element enabling the distributed operation of networks and applications, as described in this...document. The DDD is a database containing all the relevant information required to reconfigure the applications, routing, and other network services...optimize application configuration. Figure 5 gives a snapshot of entries in the DDD. In current testing, the DDD is replicated using Domino
24 CFR 1715.20 - Unlawful sales practices-regulatory provisions.
Code of Federal Regulations, 2011 CFR
2011-04-01
... septic tank operation or there is reasonable assurance that the lot can be served by a central sewage system; (3) The lot is legally accessible; and (4) The lot is free from periodic flooding. ...
24 CFR 1715.20 - Unlawful sales practices-regulatory provisions.
Code of Federal Regulations, 2014 CFR
2014-04-01
... septic tank operation or there is reasonable assurance that the lot can be served by a central sewage system; (3) The lot is legally accessible; and (4) The lot is free from periodic flooding. ...
24 CFR 1715.20 - Unlawful sales practices-regulatory provisions.
Code of Federal Regulations, 2013 CFR
2013-04-01
... septic tank operation or there is reasonable assurance that the lot can be served by a central sewage system; (3) The lot is legally accessible; and (4) The lot is free from periodic flooding. ...
24 CFR 1715.20 - Unlawful sales practices-regulatory provisions.
Code of Federal Regulations, 2012 CFR
2012-04-01
... septic tank operation or there is reasonable assurance that the lot can be served by a central sewage system; (3) The lot is legally accessible; and (4) The lot is free from periodic flooding. ...
A knowledge base architecture for distributed knowledge agents
NASA Technical Reports Server (NTRS)
Riedesel, Joel; Walls, Bryan
1990-01-01
A tuple space based object oriented model for knowledge base representation and interpretation is presented. An architecture for managing distributed knowledge agents is then implemented within the model. The general model is based upon a database implementation of a tuple space. Objects are then defined as an additional layer upon the database. The tuple space may or may not be distributed depending upon the database implementation. A language for representing knowledge and inference strategy is defined whose implementation takes advantage of the tuple space. The general model may then be instantiated in many different forms, each of which may be a distinct knowledge agent. Knowledge agents may communicate using tuple space mechanisms as in the LINDA model as well as using more well known message passing mechanisms. An implementation of the model is presented describing strategies used to keep inference tractable without giving up expressivity. An example applied to a power management and distribution network for Space Station Freedom is given.
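A toy sketch of the LINDA-style tuple-space operations referred to above (out, rd, in) over a simple in-memory store; it is only an illustration of the coordination model, not the implementation described in the abstract.

```python
# Toy LINDA-style tuple space: out() writes a tuple, rd() reads a matching
# tuple without removing it, in_() reads and removes it. A pattern may use
# None as a wildcard. This is only a sketch of the coordination model.
class TupleSpace:
    def __init__(self):
        self._tuples = []

    def out(self, tup):
        self._tuples.append(tup)

    def _match(self, tup, pattern):
        return len(tup) == len(pattern) and all(
            p is None or p == t for t, p in zip(tup, pattern))

    def rd(self, pattern):
        return next(t for t in self._tuples if self._match(t, pattern))

    def in_(self, pattern):
        t = self.rd(pattern)
        self._tuples.remove(t)
        return t

space = TupleSpace()
space.out(("sensor", "bus-A", 120.5))          # one knowledge agent posts a fact
print(space.rd(("sensor", "bus-A", None)))     # another agent reads it
```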
NASA Astrophysics Data System (ADS)
WANG, Qingrong; ZHU, Changfeng
2017-06-01
Integration of distributed heterogeneous data sources is a key issue in big data applications. In this paper, the strategy of variable precision is introduced into the concept lattice, and a one-to-one mapping between the variable precision concept lattice and the ontology concept lattice is constructed to produce a local ontology by building the variable precision concept lattice for each subsystem. A distributed generation algorithm for the variable precision concept lattice based on an ontology of heterogeneous databases is then proposed, drawing on the close relationship between concept lattices and ontology construction. Finally, taking the main concept lattice generated from the existing heterogeneous databases as the standard, a case study has been carried out to verify the feasibility and validity of this algorithm, and the differences between the main concept lattice and the standard concept lattice are compared. The analysis results show that this algorithm can automatically carry out the construction of a distributed concept lattice over heterogeneous data sources.
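For orientation, a small sketch of plain formal-concept computation over a binary context follows, built by closing the object intents under intersection; it omits the variable-precision extension and the ontology mapping discussed above, and the context is invented.

```python
# Compute the intents of a formal concept lattice from a small binary context
# by closing the object intents under intersection (a standard FCA construction).
# This plain version omits the variable-precision thresholding used in the paper.
objects = {
    "obj1": {"attrA", "attrB"},
    "obj2": {"attrB", "attrC"},
    "obj3": {"attrA", "attrB", "attrC"},
}
all_attrs = frozenset().union(*objects.values())

intents = {frozenset(all_attrs)}                        # top intent
for intent in map(frozenset, objects.values()):
    intents |= {intent & known for known in intents}    # incremental closure

for intent in sorted(intents, key=len):
    extent = [o for o, attrs in objects.items() if intent <= attrs]
    print(sorted(intent), "->", extent)
```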
Li, Jian; Yang, Yu-Guang; Chen, Xiu-Bo; Zhou, Yi-Hua; Shi, Wei-Min
2016-01-01
A novel quantum private database query protocol is proposed, based on passive round-robin differential phase-shift quantum key distribution. Compared with previous quantum private database query protocols, the present protocol has the following unique merits: (i) the user Alice can obtain one and only one key bit so that both the efficiency and security of the present protocol can be ensured, and (ii) it does not require to change the length difference of the two arms in a Mach-Zehnder interferometer and just chooses two pulses passively to interfere with so that it is much simpler and more practical. The present protocol is also proved to be secure in terms of the user security and database security. PMID:27539654
Information resources at the National Center for Biotechnology Information.
Woodsmall, R M; Benson, D A
1993-01-01
The National Center for Biotechnology Information (NCBI), part of the National Library of Medicine, was established in 1988 to perform basic research in the field of computational molecular biology as well as build and distribute molecular biology databases. The basic research has led to new algorithms and analysis tools for interpreting genomic data and has been instrumental in the discovery of human disease genes for neurofibromatosis and Kallmann syndrome. The principal database responsibility is the National Institutes of Health (NIH) genetic sequence database, GenBank. NCBI, in collaboration with international partners, builds, distributes, and provides online and CD-ROM access to over 112,000 DNA sequences. Another major program is the integration of multiple sequences databases and related bibliographic information and the development of network-based retrieval systems for Internet access. PMID:8374583
Evaluation and validity of a LORETA normative EEG database.
Thatcher, R W; North, D; Biver, C
2005-04-01
To evaluate the reliability and validity of a Z-score normative EEG database for Low Resolution Electromagnetic Tomography (LORETA), EEG digital samples (2-second intervals sampled at 128 Hz, 1 to 2 minutes, eyes closed) were acquired from 106 normal subjects, and the cross-spectrum was computed and multiplied by the Key Institute's LORETA 2,394 gray matter pixel T Matrix. After a log10 transform or a Box-Cox transform, the mean and standard deviation of the *.lor files were computed for each of the 2,394 gray matter pixels, from 1 to 30 Hz, for each of the subjects. Tests of Gaussianity were computed in order to best approximate a normal distribution for each frequency and gray matter pixel. The relative sensitivity of a Z-score database was computed by measuring the approximation to a Gaussian distribution. The validity of the LORETA normative database was evaluated by the degree to which confirmed brain pathologies were localized using the LORETA normative database. Log10 and Box-Cox transforms approximated a Gaussian distribution with 95.64% to 99.75% accuracy. The percentage of normative Z-score values at 2 standard deviations ranged from 1.21% to 3.54%, and the percentage of Z-scores at 3 standard deviations ranged from 0% to 0.83%. Left temporal lobe epilepsy, right sensory motor hematoma and a right hemisphere stroke exhibited maximum Z-score deviations in the same locations as the pathologies. We conclude: (1) adequate approximation to a Gaussian distribution can be achieved using LORETA by using a log10 transform or a Box-Cox transform and parametric statistics, (2) a Z-score normative database is valid with adequate sensitivity when using LORETA, and (3) the Z-score LORETA normative database also consistently localized known pathologies to the expected Brodmann areas as a hypothesis test based on the surface EEG before computing LORETA.
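A numerical sketch of the core normative-database step described here (a log10 transform followed by Z-scoring a test subject against group means and standard deviations per pixel and frequency), using synthetic values rather than the LORETA pipeline.

```python
# Sketch of the normative Z-score step: log10-transform current-source power,
# then Z-score a test subject against the normative mean and SD computed per
# gray-matter pixel and frequency. Synthetic numbers, not the LORETA pipeline.
import numpy as np

rng = np.random.default_rng(1)
n_subjects, n_pixels, n_freqs = 106, 2394, 30
norm_power = rng.lognormal(mean=0.0, sigma=0.5, size=(n_subjects, n_pixels, n_freqs))

log_norm = np.log10(norm_power)                 # approximate Gaussianity
mu = log_norm.mean(axis=0)                      # normative mean per pixel/frequency
sd = log_norm.std(axis=0, ddof=1)               # normative SD per pixel/frequency

test_power = rng.lognormal(mean=0.0, sigma=0.5, size=(n_pixels, n_freqs))
z = (np.log10(test_power) - mu) / sd            # deviation map for the test subject
print((np.abs(z) > 2).mean())                   # fraction of values beyond 2 SD
```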
GPCALMA: A Tool For Mammography With A GRID-Connected Distributed Database
NASA Astrophysics Data System (ADS)
Bottigli, U.; Cerello, P.; Cheran, S.; Delogu, P.; Fantacci, M. E.; Fauci, F.; Golosio, B.; Lauria, A.; Lopez Torres, E.; Magro, R.; Masala, G. L.; Oliva, P.; Palmiero, R.; Raso, G.; Retico, A.; Stumbo, S.; Tangaro, S.
2003-09-01
The GPCALMA (Grid Platform for Computer Assisted Library for MAmmography) collaboration involves several departments of physics, INFN (National Institute of Nuclear Physics) sections, and Italian hospitals. The aim of this collaboration is to develop a tool that can help radiologists in the early detection of breast cancer. GPCALMA has built a large distributed database of digitised mammographic images (about 5500 images corresponding to 1650 patients) and developed CAD (Computer Aided Detection) software which is integrated in a station that can also be used to acquire new images, serve as an archive, and perform statistical analysis. The images (18×24 cm2, digitised by a CCD linear scanner with a 85 μm pitch and 4096 gray levels) are completely described: pathological ones have a consistent characterization with the radiologist's diagnosis and histological data, while non-pathological ones correspond to patients with a follow-up of at least three years. The distributed database is realized through the connection of all the hospitals and research centers using GRID technology. In each hospital, local patients' digital images are stored in the local database. Using the GRID connection, GPCALMA will allow each node to work on distributed database data as well as local database data. Using its database, the GPCALMA tools perform several analyses. A texture analysis, i.e. an automated classification into adipose, dense or glandular texture, can be provided by the system. The GPCALMA software also allows classification of pathological features, in particular the analysis of massive lesions (both opacities and spiculated lesions) and of microcalcification clusters. The detection of pathological features is made using neural network software that provides a selection of areas showing a given "suspicion level" of lesion occurrence. The performance of the GPCALMA system will be presented in terms of ROC (Receiver Operating Characteristic) curves. The results of the GPCALMA system as a "second reader" will also be presented.
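A brief sketch of how CAD performance of this kind could be summarized with an ROC curve from per-region suspicion scores, using scikit-learn; the labels and scores below are synthetic and do not come from GPCALMA.

```python
# Sketch of summarizing CAD output as an ROC curve: each candidate region has a
# "suspicion level" score and a ground-truth label. Synthetic data, not GPCALMA.
import numpy as np
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(2)
labels = rng.binomial(1, 0.3, size=500)            # 1 = confirmed lesion
scores = rng.normal(loc=labels * 1.2, scale=1.0)   # higher = more suspicious

fpr, tpr, _ = roc_curve(labels, scores)
print("area under the ROC curve:", round(auc(fpr, tpr), 3))
```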
Klee, Kathrin; Ernst, Rebecca; Spannagl, Manuel; Mayer, Klaus F X
2007-08-30
Apollo, a genome annotation viewer and editor, has become a widely used genome annotation and visualization tool for distributed genome annotation projects. When using Apollo for annotation, database updates are carried out by uploading intermediate annotation files into the respective database. This non-direct database upload is laborious and evokes problems of data synchronicity. To overcome these limitations we extended the Apollo data adapter with a generic, configurable web service client that is able to retrieve annotation data in a GAME-XML-formatted string and pass it on to Apollo's internal input routine. This Apollo web service adapter, Apollo2Go, simplifies the data exchange in distributed projects and aims to render the annotation process more comfortable. The Apollo2Go software is freely available from ftp://ftpmips.gsf.de/plants/apollo_webservice.
Klee, Kathrin; Ernst, Rebecca; Spannagl, Manuel; Mayer, Klaus FX
2007-01-01
Background Apollo, a genome annotation viewer and editor, has become a widely used genome annotation and visualization tool for distributed genome annotation projects. When using Apollo for annotation, database updates are carried out by uploading intermediate annotation files into the respective database. This non-direct database upload is laborious and evokes problems of data synchronicity. Results To overcome these limitations we extended the Apollo data adapter with a generic, configurable web service client that is able to retrieve annotation data in a GAME-XML-formatted string and pass it on to Apollo's internal input routine. Conclusion This Apollo web service adapter, Apollo2Go, simplifies the data exchange in distributed projects and aims to render the annotation process more comfortable. The Apollo2Go software is freely available from . PMID:17760972
7 CFR 33.7 - Less than carload lot.
Code of Federal Regulations, 2010 CFR
2010-01-01
... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Definitions § 33.7 Less than carload lot. Less than carload lot means a quantity of apples in packages not exceeding 20,000 pounds gross weight or 400...
7 CFR 33.7 - Less than carload lot.
Code of Federal Regulations, 2014 CFR
2014-01-01
... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Definitions § 33.7 Less than carload lot. Less than carload lot means a quantity of apples in packages not exceeding 20,000 pounds gross weight or 400...
7 CFR 33.7 - Less than carload lot.
Code of Federal Regulations, 2012 CFR
2012-01-01
... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Definitions § 33.7 Less than carload lot. Less than carload lot means a quantity of apples in packages not exceeding 20,000 pounds gross weight or 400...
7 CFR 33.7 - Less than carload lot.
Code of Federal Regulations, 2011 CFR
2011-01-01
... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Definitions § 33.7 Less than carload lot. Less than carload lot means a quantity of apples in packages not exceeding 20,000 pounds gross weight or 400...
7 CFR 33.7 - Less than carload lot.
Code of Federal Regulations, 2013 CFR
2013-01-01
... ISSUED UNDER AUTHORITY OF THE EXPORT APPLE ACT Definitions § 33.7 Less than carload lot. Less than carload lot means a quantity of apples in packages not exceeding 20,000 pounds gross weight or 400...
Parking lot sealcoat: an unrecognized source of urban polycyclic aromatic hydrocarbons
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbara J. Mahler; Peter C. Van Metre; Thomas J. Bashara
2005-08-01
Polycyclic aromatic hydrocarbons (PAHs) are a ubiquitous contaminant in urban environments. Although numerous sources of PAHs to urban runoff have been identified, their relative importance remains uncertain. The authors show that a previously unidentified source of urban PAHs, parking lot sealcoat, may dominate loading of PAHs to urban water bodies in the United States. Particles in runoff from parking lots with coal-tar emulsion sealcoat had mean concentrations of PAHs of 3500 mg/kg, 65 times higher than the mean concentration from unsealed asphalt and cement lots. Diagnostic ratios of individual PAHs indicating sources are similar for particles from coal-tar emulsion sealed lots and suspended sediment from four urban streams. Contaminant yields projected to the watershed scale for the four associated watersheds indicate that runoff from sealed parking lots could account for the majority of stream PAH loads. 35 refs., 6 figs., 2 tabs.
Incomplete Data in Smart Grid: Treatment of Values in Electric Vehicle Charging Data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Majipour, Mostafa; Chu, Peter; Gadh, Rajit
2014-11-03
In this paper, five imputation methods, namely Constant (zero), Mean, Median, Maximum Likelihood, and Multiple Imputation, have been applied to compensate for missing values in Electric Vehicle (EV) charging data. The outcome of each of these methods has been used as the input to a prediction algorithm to forecast the EV load in the next 24 hours at each individual outlet. The data is real-world data at the outlet level from the UCLA campus parking lots. Given the sparsity of the data, both Median and Constant (=zero) imputations improved the prediction results. Since in most missing-value cases in our database all values of that instance are missing, the multivariate imputation methods did not improve the results significantly compared to univariate approaches.
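A compact sketch of three of the univariate imputations compared here (constant zero, mean, median), applied with scikit-learn's SimpleImputer to a toy outlet-by-hour load matrix; the values are made up and do not represent the UCLA dataset.

```python
# Sketch of three univariate imputations (constant zero, mean, median) applied
# to a toy outlet-by-hour load matrix with missing readings. Made-up values.
import numpy as np
from sklearn.impute import SimpleImputer

loads = np.array([[1.2, np.nan, 0.0],
                  [np.nan, 3.3, 0.4],
                  [1.0, 3.1, np.nan]])        # kWh per outlet per hour

imputers = {
    "constant0": SimpleImputer(strategy="constant", fill_value=0.0),
    "mean": SimpleImputer(strategy="mean"),
    "median": SimpleImputer(strategy="median"),
}
for name, imp in imputers.items():
    print(name)
    print(imp.fit_transform(loads))
```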
Expression and Organization of Geographic Spatial Relations Based on Topic Maps
NASA Astrophysics Data System (ADS)
Liang, H. J.; Wang, H.; Cui, T. J.; Guo, J. F.
2017-09-01
Spatial relations are one of the important components of Geographical Information Science and spatial databases. There has been a great deal of research on spatial relations, and many different spatial relations have been proposed. The relationships among these spatial relations, such as hierarchy, are complex, and this creates difficulties for the application and teaching of these spatial relations. This paper summarizes some common spatial relations, extracts the topic types, association types and resource types of these spatial relations using the technology of Topic Maps, and builds the relationships among these spatial relations. Finally, this paper uses Java and Ontopia to build a topic map among these common spatial relations, forms a complex knowledge network of spatial relations, and realizes the effective management and retrieval of spatial relations.
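The implementation described above uses Java and Ontopia; purely for illustration, a language-agnostic sketch of the topic-map organization (typed topics plus typed associations) is given below in Python, with invented topic names.

```python
# Language-agnostic sketch of organizing spatial relations as a topic map:
# topics carry a topic type, and typed associations link related relations.
# The paper's implementation uses Java and Ontopia; this is only illustrative.
from collections import defaultdict

topics = {}                                     # topic name -> topic type
associations = defaultdict(set)                 # association type -> set of topic pairs

def add_topic(name, topic_type):
    topics[name] = topic_type

def associate(assoc_type, a, b):
    associations[assoc_type].add((a, b))

add_topic("topological relation", "spatial relation category")
add_topic("disjoint", "topological relation")
add_topic("contains", "topological relation")
add_topic("inside", "topological relation")
associate("is-a", "disjoint", "topological relation")
associate("is-a", "contains", "topological relation")
associate("converse-of", "contains", "inside")

print(sorted(associations["is-a"]))
```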
[The food legislation evaluation of environmental chemicals in freshwater fish].
Krüger, K E
1990-07-01
Over the last decade and a half, various regulatory limits have been set for evaluating pollutants in fish under food law. Their intended goal, effective consumer protection, has nevertheless been missed for various reasons: the production and distribution of environmental pollutants cannot be suppressed by limits for food; the selective elimination of limit-exceeding individuals from a lot is impossible; and treating natural pollutants, such as geogenic mercury, in the same way as anthropogenic ones seems indefensible with regard to food law. Proposals are therefore made to regulate only anthropogenic pollutants by law when regulatory limits are to be supplemented. In the case of natural distribution, less stringent advisory limits seem more suitable.
Serial network simplifies the design of multiple microcomputer systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Folkes, D.
1981-01-01
Recently there has been a lot of interest in developing network communication schemes for carrying digital data between locally distributed computing stations. Many of these schemes have focused on distributed networking techniques for data processing applications. These applications suggest the use of a serial, multipoint bus, where a number of remote intelligent units act as slaves to a central or host computer. Each slave would be serially addressable from the host and would perform required operations upon being addressed by the host. Based on an MK3873 single-chip microcomputer, the SCU 20 is designed to be such a remote slave device. The capabilities of the SCU 20 and its use in systems applications are examined.
Distributed Ship Navigation Control System Based on Dual Network
NASA Astrophysics Data System (ADS)
Yao, Ying; Lv, Wu
2017-10-01
The navigation system is very important for a ship's normal operation. There are many devices and sensors in the navigation system that guarantee the ship's regular work. In the past, these devices and sensors were usually connected via a CAN bus for high performance and reliability. However, with the development of related devices and sensors, the navigation system also needs high information throughput and remote data sharing. To meet these new requirements, we propose a communication method based on a dual network which contains a CAN bus and industrial Ethernet. We also introduce multiple distributed control terminals with a cooperative strategy based on the idea of synchronizing status by multicasting UDP messages containing operation timestamps, making the system more efficient and reliable.
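A minimal sketch of the status-synchronization idea: each control terminal multicasts a small UDP message carrying the operation and a timestamp. The group address, port, and message fields below are placeholders, not the system's actual protocol.

```python
# Minimal sketch of synchronizing terminal status by multicasting a UDP message
# that carries the operation and a timestamp. Group address, port, and fields
# are placeholders only.
import json
import socket
import time

GROUP, PORT = "239.1.1.1", 5007       # placeholder multicast group and port

def broadcast_status(operation: str) -> None:
    msg = json.dumps({"op": operation, "ts": time.time()}).encode()
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)
    sock.sendto(msg, (GROUP, PORT))
    sock.close()

broadcast_status("set_heading:045")
```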
Peng, Jinye; Babaguchi, Noboru; Luo, Hangzai; Gao, Yuli; Fan, Jianping
2010-07-01
Digital video now plays an important role in supporting more profitable online patient training and counseling, and integration of patient training videos from multiple competitive organizations in the health care network will result in better offerings for patients. However, privacy concerns often prevent multiple competitive organizations from sharing and integrating their patient training videos. In addition, patients with infectious or chronic diseases may not want the online patient training organizations to identify who they are or even which video clips they are interested in. Thus, there is an urgent need to develop more effective techniques to protect both video content privacy and access privacy. In this paper, we have developed a new approach to construct a distributed Hippocratic video database system for supporting more profitable online patient training and counseling. First, a new database modeling approach is developed to support concept-oriented video database organization and assign a degree of privacy of the video content for each database level automatically. Second, a new algorithm is developed to protect the video content privacy at the level of individual video clips by filtering out the privacy-sensitive human objects automatically. In order to integrate the patient training videos from multiple competitive organizations for constructing a centralized video database indexing structure, a privacy-preserving video sharing scheme is developed to support privacy-preserving distributed classifier training and prevent statistical inferences from the videos that are shared for cross-validation of video classifiers. Our experiments on large-scale video databases have also provided very convincing results.
Database Development for Ocean Impacts: Imaging, Outreach, and Rapid Response
2012-09-30
DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. ... hoses (Applied Ocean Physics & Engineering department, WHOI) to evaluate wear, and to locate in mooring optical cables used in the Right Whale monitoring
Relativistic quantum private database queries
NASA Astrophysics Data System (ADS)
Sun, Si-Jia; Yang, Yu-Guang; Zhang, Ming-Ou
2015-04-01
Recently, Jakobi et al. (Phys Rev A 83, 022301, 2011) suggested the first practical private database query protocol (J-protocol) based on the Scarani et al. (Phys Rev Lett 92, 057901, 2004) quantum key distribution protocol. Unfortunately, the J-protocol is just a cheat-sensitive private database query protocol. In this paper, we present an idealized relativistic quantum private database query protocol based on Minkowski causality and the properties of quantum information. Also, we prove that the protocol is secure in terms of the user security and the database security.
The Network Configuration of an Object Relational Database Management System
NASA Technical Reports Server (NTRS)
Diaz, Philip; Harris, W. C.
2000-01-01
The networking and implementation of the Oracle Database Management System (ODBMS) requires developers to have knowledge of the UNIX operating system as well as all the features of the Oracle Server. The server is an object relational database management system (DBMS). By using distributed processing, processes are split up between the database server and client application programs. The DBMS handles all the responsibilities of the server. The workstations running the database application concentrate on the interpretation and display of data.
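A minimal client-side sketch of the split described above, in which the Oracle server executes the SQL while the client program only fetches and displays rows. It assumes the python-oracledb driver; the connection details and table are placeholders.

```python
# Minimal client-side sketch of distributed processing: the Oracle server
# executes the SQL, the client only fetches and displays the rows.
# Assumes the python-oracledb driver; connection details are placeholders.
import oracledb

conn = oracledb.connect(user="app_user", password="secret",
                        dsn="dbhost.example.com/ORCLPDB1")
with conn.cursor() as cur:
    cur.execute("SELECT instrument_id, status FROM telemetry WHERE rownum <= 10")
    for row in cur:
        print(row)
conn.close()
```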
Sato, Ko; Kikuchi, Yuki; Masago, Yoshifumi; Ohmiya, Suguru; Ito, Hiroko; Omura, Tatsuo; Nishimura, Hidekazu
2015-11-01
Currently in Japan, the only approved influenza vaccine is the inactivated vaccine, which is injected subcutaneously. On the other hand, a live vaccine is available elsewhere in the world. Flumist, an intranasal live influenza vaccine which contains four strains of infectious viruses, has been used in the United States for more than 10 years; the vaccine has been found effective in clinical trials, although it has some limitations, such as restrictions on who may receive it, strict storage conditions, and a relatively short expiration date. It is not yet approved in Japan, but it is available through personal import by some medical institutions and prescribed at the doctor's discretion. However, in Japan there is no system for checking whether the vaccine contains appropriate amounts of infectious viruses. In the present study, we purchased 2013-14 and 2014-15 season lots of Flumist from a parallel importer and measured the amount of infectious virus of each component using the focus assay. For the type A influenza viruses, the titers of both the H1N1pdm09 and H3N2 viruses were 1/30 of the lower limit shown in the package insert in the 2013-14 lot and 1/10 in the 2014-15 lot, while the titers of both type B viruses, B/Massachusetts and B/Brisbane, marginally cleared the lower limit. The digital PCR analysis showed that the absolute genome copy numbers of type A viruses were 1/10 of those of type B viruses. The relatively higher titer of B/Massachusetts also gradually decreased over time during storage at 4°C and finally reached the lower limit about one week before the expiration date. If the vaccine is officially approved for use in Japan in the future, studies will be required to elucidate the minimum viral titers of the components necessary for an effective live vaccine. In addition, there should be a system to check the titer during the distribution process in Japan.
77 FR 39726 - Land Acquisitions: Pueblo of Santa Clara
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-05
... Indian Affairs, 1001 Indian School Road NW., Albuquerque, NM 87104-2303; Telephone (505) 563-3337, sandy...., Sec. 17, lots 1 to 8, inclusive; Sec. 18, lots 5 to 12, inclusive; Sec. 19, lots 12 to 17, inclusive...
Field performance of a porous asphaltic pavement.
DOT National Transportation Integrated Search
1992-01-01
The Virginia Department of Transportation constructed a 2.52-acre parking lot of porous asphaltic pavement in Warrenton, Virginia. Runoff from the lot was collected and monitored for quantity, detention time, and quality. Prior to the lot opening for...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent
We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database. About 60% of the entries are absent for SFCOMPO. The method estimates missing values of a property from a probability distribution created from the existing data for the property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data, and compares favorably against results obtained by replacing missing information with constant values.
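A toy sketch of the idea follows: for each incomplete column, missing values are drawn from the empirical distribution of the observed entries, and the draw is repeated to generate several completed instances. This illustrates the concept only and is not the MCBDG implementation.

```python
# Toy sketch of imputation-based database generation: missing entries in each
# column are drawn from the empirical distribution of that column's observed
# values, and the draw is repeated to produce several completed instances.
import numpy as np

rng = np.random.default_rng(3)
data = np.array([[1.0, np.nan, 3.0],
                 [2.0, 5.0, np.nan],
                 [np.nan, 4.0, 2.5],
                 [1.5, 4.5, 2.8]])

def complete_once(table):
    filled = table.copy()
    for j in range(table.shape[1]):
        observed = table[~np.isnan(table[:, j]), j]
        missing = np.isnan(filled[:, j])
        filled[missing, j] = rng.choice(observed, size=missing.sum())
    return filled

instances = [complete_once(data) for _ in range(10)]   # e.g. 10 training instances
print(instances[0])
```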
GlobTherm, a global database on thermal tolerances for aquatic and terrestrial organisms.
Bennett, Joanne M; Calosi, Piero; Clusella-Trullas, Susana; Martínez, Brezo; Sunday, Jennifer; Algar, Adam C; Araújo, Miguel B; Hawkins, Bradford A; Keith, Sally; Kühn, Ingolf; Rahbek, Carsten; Rodríguez, Laura; Singer, Alexander; Villalobos, Fabricio; Ángel Olalla-Tárraga, Miguel; Morales-Castilla, Ignacio
2018-03-13
How climate affects species distributions is a longstanding question receiving renewed interest owing to the need to predict the impacts of global warming on biodiversity. Is climate change forcing species to live near their critical thermal limits? Are these limits likely to change through natural selection? These and other important questions can be addressed with models relating geographical distributions of species with climate data, but inferences made with these models are highly contingent on non-climatic factors such as biotic interactions. Improved understanding of climate change effects on species will require extensive analysis of thermal physiological traits, but such data are both scarce and scattered. To overcome current limitations, we created the GlobTherm database. The database contains experimentally derived species' thermal tolerance data currently comprising over 2,000 species of terrestrial, freshwater, intertidal and marine multicellular algae, plants, fungi, and animals. The GlobTherm database will be maintained and curated by iDiv with the aim to keep expanding it, and enable further investigations on the effects of climate on the distribution of life on Earth.
Distributed data collection for a database of radiological image interpretations
NASA Astrophysics Data System (ADS)
Long, L. Rodney; Ostchega, Yechiam; Goh, Gin-Hua; Thoma, George R.
1997-01-01
The National Library of Medicine, in collaboration with the National Center for Health Statistics and the National Institute for Arthritis and Musculoskeletal and Skin Diseases, has built a system for collecting radiological interpretations for a large set of x-ray images acquired as part of the data gathered in the second National Health and Nutrition Examination Survey. This system is capable of delivering across the Internet 5- and 10-megabyte x-ray images to Sun workstations equipped with X Window based 2048 X 2560 image displays, for the purpose of having these images interpreted for the degree of presence of particular osteoarthritic conditions in the cervical and lumbar spines. The collected interpretations can then be stored in a database at the National Library of Medicine, under control of the Illustra DBMS. This system is a client/server database application which integrates (1) distributed server processing of client requests, (2) a customized image transmission method for faster Internet data delivery, (3) distributed client workstations with high resolution displays, image processing functions and an on-line digital atlas, and (4) relational database management of the collected data.
Fast parallel algorithms that compute transitive closure of a fuzzy relation
NASA Technical Reports Server (NTRS)
Kreinovich, Vladik YA.
1993-01-01
The notion of a transitive closure of a fuzzy relation is very useful for clustering in pattern recognition, for fuzzy databases, etc. The original algorithm proposed by L. Zadeh (1971) requires the computation time O(n^4), where n is the number of elements in the relation. In 1974, J. C. Dunn proposed an O(n^2) algorithm. Since we must compute n(n-1)/2 different values s(a, b) (a not equal to b) that represent the fuzzy relation, and we need at least one computational step to compute each of these values, we cannot compute all of them in less than O(n^2) steps. So, Dunn's algorithm is in this sense optimal. For small n, it is ok. However, for big n (e.g., for big databases), it is still a lot, so it would be desirable to decrease the computation time (this problem was formulated by J. Bezdek). Since this decrease cannot be done on a sequential computer, the only way to do it is to use a computer with several processors working in parallel. We show that on a parallel computer, transitive closure can be computed in time O((log2 n)^2).
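For reference, a short sketch of max-min transitive closure by repeated max-min composition on a small fuzzy relation; this direct version costs O(n^3) per pass and is only an illustration, not Dunn's O(n^2) algorithm or the parallel scheme analysed in the paper.

```python
# Max-min transitive closure of a fuzzy relation by repeated composition:
# T <- max(T, T o T) until nothing changes. This direct version is O(n^3)
# per pass and only illustrates the notion being closed over.
import numpy as np

def maxmin_compose(A, B):
    # (A o B)[i, j] = max_k min(A[i, k], B[k, j])
    return np.max(np.minimum(A[:, :, None], B[None, :, :]), axis=1)

def transitive_closure(R):
    T = R.copy()
    while True:
        T_next = np.maximum(T, maxmin_compose(T, T))
        if np.array_equal(T_next, T):
            return T
        T = T_next

S = np.array([[1.0, 0.8, 0.0],
              [0.8, 1.0, 0.4],
              [0.0, 0.4, 1.0]])
print(transitive_closure(S))
```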
Insecticides: effects on cutthroat trout of repeated exposure to DDT
Allison, Don; Kallman, Burton J.; Cope, Oliver B.; Van Valin, Charles C.
1963-01-01
Cutthroat trout were periodically exposed to p,p'-DDT, in acetone solution or in the food. Excessive mortality occurred only in lots treated with high concentrations of DDT, probably as a result of decreased resistance to nonspecific stressors. Surviving fish in these lots were significantly larger than those in the control lot, or in the lots treated with low concentrations of DDT. The number and volume of eggs produced was not reduced by DDT, but mortality among sac fry appeared to be highest in the lots treated with high concentrations. The data suggest that the sublethal concentrations of DDT ordinarily encountered in the environment are unlikely to damage a fishery.
Digital Management and Curation of the National Rock and Ore Collections at NMNH, Smithsonian
NASA Astrophysics Data System (ADS)
Cottrell, E.; Andrews, B.; Sorensen, S. S.; Hale, L. J.
2011-12-01
The National Museum of Natural History, Smithsonian Institution, is home to the world's largest curated rock collection. The collection houses 160,680 physical rock and ore specimen lots ("samples"), all of which already have a digital record that can be accessed by the public through a searchable web interface (http://collections.mnh.si.edu/search/ms/). In addition, there are 66 accessions pending that when catalogued will add approximately 60,000 specimen lots. NMNH's collections are digitally managed on the KE EMu° platform which has emerged as the premier system for managing collections in natural history museums worldwide. In 2010 the Smithsonian released an ambitious 5 year Digitization Strategic Plan. In Mineral Sciences, new digitization efforts in the next five years will focus on integrating various digital resources for volcanic specimens. EMu sample records will link to the corresponding records for physical eruption information housed within the database of Smithsonian's Global Volcanism Program (GVP). Linkages are also planned between our digital records and geochemical databases (like EarthChem or PetDB) maintained by third parties. We anticipate that these linkages will increase the use of NMNH collections as well as engender new scholarly directions for research. Another large project the museum is currently undertaking involves the integration of the functionality of in-house designed Transaction Management software with the EMu database. This will allow access to the details (borrower, quantity, date, and purpose) of all loans of a given specimen through its catalogue record. We hope this will enable cross-referencing and fertilization of research ideas while avoiding duplicate efforts. While these digitization efforts are critical, we propose that the greatest challenge to sample curation is not posed by digitization and that a global sample registry alone will not ensure that samples are available for reuse. We suggest instead that the ability of the Earth science community to identify and preserve important collections and make them available for future study is limited by personnel and space resources from the level of the individual PI to the level of national facilities. Moreover, when it comes to specimen "estate planning," the cultural attitudes of scientists, institutions, and funding agencies are often inadequate to provide for long-term specimen curation - even if specimen discovery is enabled by digital registry. Timely access to curated samples requires that adequate resources be devoted to the physical care of specimens (facilities) and to the personnel costs associated with curation - from the conservation, storage, and inventory management of specimens, to the dispersal of samples for research, education, and exhibition.
Selbig, William R.
2014-01-01
A new sample collection system was developed to improve the representation of sediment in stormwater by integrating the entire water column. The depth-integrated sampler arm (DISA) was able to mitigate sediment stratification bias in storm water, thereby improving the characterization of particle size distribution from urban source areas. Collector streets had the lowest median particle diameter of 8 μm, followed by parking lots, arterial streets, feeder streets, and residential and mixed land use (32, 43, 50, 80 and 95 μm, respectively). Results from this study suggest there is no single distribution of particles that can be applied uniformly to runoff in urban environments; however, integrating more of the entire water column during the sample collection can address some of the shortcomings of a fixed-point sampler by reducing variability and bias caused by the stratification of solids in a water column.
Genetic variation and population structure of Cucumber green mottle mosaic virus.
Rao, Li-Xia; Guo, Yushuang; Zhang, Li-Li; Zhou, Xue-Ping; Hong, Jian; Wu, Jian-Xiang
2017-05-01
Cucumber green mottle mosaic virus (CGMMV) is a single-stranded, positive sense RNA virus infecting cucurbitaceous plants. In recent years, CGMMV has become an important pathogen of cucurbitaceous crops including watermelon, pumpkin, cucumber and bottle gourd in China, causing serious losses to their production. In this study, we surveyed CGMMV infection in various cucurbitaceous crops grown in Zhejiang Province and in several seed lots purchased from local stores with the dot enzyme-linked immunosorbent assay (dot-ELISA), using a CGMMV specific monoclonal antibody. Seven CGMMV isolates obtained from watermelon, grafted watermelon or oriental melon samples were cloned and sequenced. Identity analysis showed that the nucleotide identities of the seven complete genome sequences ranged from 99.2 to 100%. Phylogenetic analysis of the seven CGMMV isolates as well as 24 other CGMMV isolates from the GenBank database showed that all CGMMV isolates could be grouped into two distinct monophyletic clades according to geographic distribution, i.e. Asian isolates for subtype I and European isolates for subtype II, indicating that population diversification of CGMMV isolates may be affected by geographical distribution. Site variation rate analysis of CGMMV found that the overall variation rate was below 8% and mainly ranged from 2 to 5%, indicating that the CGMMV genomic sequence is highly conserved. Base substitution type analysis of CGMMV showed a mutational bias, with more transitions (A↔G and C↔T) than transversions (A↔C, A↔T, G↔C and G↔T). Most of the variation occurring in the CGMMV genome resulted in non-synonymous substitutions, and the variation rate of some sites was higher than 30% because of this mutational bias. Selection constraint analysis of CGMMV ORFs showed strong negative selection acting on the replication-associated protein, similar to what occurs for other plant RNA viruses. Finally, potential recombination analysis identified isolate Ec as a recombinant with a low degree of confidence.
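As a hedged illustration of the base-substitution analysis described above (counting A↔G/C↔T transitions versus transversions and per-site variation rates), the following Python sketch runs on a toy alignment; the sequences are invented and this is not the authors' pipeline.

```python
# Minimal sketch (not from the paper): counting transitions vs. transversions
# and per-site variation across a small set of aligned sequences.
# The alignment below is illustrative, not the CGMMV isolates of the study.
from collections import Counter

TRANSITIONS = {frozenset("AG"), frozenset("CT")}

def substitution_types(ref, alt):
    """Classify each mismatch between two aligned sequences."""
    counts = Counter()
    for a, b in zip(ref, alt):
        if a == b or "-" in (a, b):
            continue
        kind = "transition" if frozenset((a, b)) in TRANSITIONS else "transversion"
        counts[kind] += 1
    return counts

def site_variation_rates(alignment):
    """Fraction of sequences differing from the most common base at each site."""
    rates = []
    for column in zip(*alignment):
        _, top = Counter(column).most_common(1)[0]
        rates.append(1 - top / len(column))
    return rates

if __name__ == "__main__":
    aln = ["ATGGCGTACT", "ATGACGTACC", "ATGGCGCACT"]
    print(substitution_types(aln[0], aln[1]))
    print(site_variation_rates(aln))
```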
Choosing a Cluster Sampling Design for Lot Quality Assurance Sampling Surveys
Hund, Lauren; Bedrick, Edward J.; Pagano, Marcello
2015-01-01
Lot quality assurance sampling (LQAS) surveys are commonly used for monitoring and evaluation in resource-limited settings. Recently several methods have been proposed to combine LQAS with cluster sampling for more timely and cost-effective data collection. For some of these methods, the standard binomial model can be used for constructing decision rules as the clustering can be ignored. For other designs, considered here, clustering is accommodated in the design phase. In this paper, we compare these latter cluster LQAS methodologies and provide recommendations for choosing a cluster LQAS design. We compare technical differences in the three methods and determine situations in which the choice of method results in a substantively different design. We consider two different aspects of the methods: the distributional assumptions and the clustering parameterization. Further, we provide software tools for implementing each method and clarify misconceptions about these designs in the literature. We illustrate the differences in these methods using vaccination and nutrition cluster LQAS surveys as example designs. The cluster methods are not sensitive to the distributional assumptions but can result in substantially different designs (sample sizes) depending on the clustering parameterization. However, none of the clustering parameterizations used in the existing methods appears to be consistent with the observed data, and, consequently, choice between the cluster LQAS methods is not straightforward. Further research should attempt to characterize clustering patterns in specific applications and provide suggestions for best-practice cluster LQAS designs on a setting-specific basis. PMID:26125967
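To make the contrast with the standard binomial model concrete, the following sketch computes the misclassification risks of a simple (non-clustered) LQAS decision rule; the sample size, decision value and coverage thresholds are illustrative, not taken from the paper.

```python
# Illustrative sketch of a standard (non-clustered) LQAS decision rule, as a
# baseline for the cluster designs discussed above; thresholds are examples only.
from scipy.stats import binom

def lqas_errors(n, d, p_upper, p_lower):
    """For the rule 'accept the lot if >= d successes out of n':
    alpha = P(reject | true coverage = p_upper)
    beta  = P(accept | true coverage = p_lower)."""
    alpha = binom.cdf(d - 1, n, p_upper)      # too few successes despite high coverage
    beta = 1 - binom.cdf(d - 1, n, p_lower)   # enough successes despite low coverage
    return alpha, beta

if __name__ == "__main__":
    # Example: n = 19 children, accept if at least 13 are vaccinated,
    # upper threshold 80% coverage, lower threshold 50% coverage.
    a, b = lqas_errors(19, 13, 0.80, 0.50)
    print(f"alpha (misclassify a good lot): {a:.3f}")
    print(f"beta  (misclassify a bad lot):  {b:.3f}")
```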
NASA Astrophysics Data System (ADS)
Huang, Yeu-Shiang; Wang, Ruei-Pei; Ho, Jyh-Wen
2015-07-01
Due to the constantly changing business environment, producers often have to deal with customers by adopting different procurement policies. That is, manufacturers confront not only predictable and regular orders, but also unpredictable and irregular orders. In this study, from the perspective of upstream manufacturers, both regular and irregular orders are considered in coping with the situation in which an uncertain demand is faced by the manufacturer, and a capacity confirming mechanism is used to examine such demand. If the demand is less than or equal to the capacity of the ordinary production channel, the general supply channel is utilised to fully account for the manufacturing process, but if the demand is greater than the capacity of the ordinary production channel, the contingency production channel would be activated along with the ordinary channel to satisfy the upcoming high demand. In addition, the reproductive property of the probability distribution is employed to represent the order quantity of the two types of demand. Accordingly, the optimal production rates and lot sizes for both channels are derived to provide managers with insights for further production planning.
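As a rough illustration of how the reproductive property can be used here, the sketch below assumes the regular and irregular order quantities are independent normal random variables (so their sum is again normal) and estimates how often total demand exceeds the ordinary channel's capacity; all parameter values are invented.

```python
# Sketch (illustrative parameters): using the reproductive property of the
# normal distribution to combine regular and irregular order quantities and
# estimate how often the contingency production channel is activated.
from math import sqrt
from scipy.stats import norm

def contingency_activation_prob(mu_reg, sd_reg, mu_irr, sd_irr, capacity):
    """P(total demand > ordinary-channel capacity), assuming independent
    normal demands whose sum is again normal (reproductive property)."""
    mu_total = mu_reg + mu_irr
    sd_total = sqrt(sd_reg**2 + sd_irr**2)
    return norm.sf(capacity, loc=mu_total, scale=sd_total)

if __name__ == "__main__":
    p = contingency_activation_prob(mu_reg=800, sd_reg=60,
                                    mu_irr=150, sd_irr=90, capacity=1000)
    print(f"Probability the contingency channel is needed: {p:.2%}")
```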
Barban, V; Girerd, Y; Aguirre, M; Gulia, S; Pétiard, F; Riou, P; Barrere, B; Lang, J
2007-04-12
We have retrospectively analyzed 12 bulk lots of yellow fever vaccine Stamaril, produced between 1990 and 2002 and prepared from the same seed lot that has been in continuous use since 1990. All vaccine batches displayed identical genome sequence. Only four nucleotide substitutions were observed, compared to previously published sequence, with no incidence at amino-acid level. Fine analysis of viral plaque size distribution was used as an additional marker for genetic stability and demonstrated a remarkable homogeneity of the viral population. The total virus load, measured by qRT-PCR, was also homogeneous pointing out reproducibility of the vaccine production process. Mice inoculated intracerebrally with the different bulks exhibited a similar average survival time, and ratio between in vitro potency and mouse LD(50) titers remained constant from batch-to-batch. Taken together, these data demonstrate the genetic stability of the strain at mass production level over a period of 12 years and reinforce the generally admitted idea of the safety of YF17D-based vaccines.
The nutritional intake of undergraduates at the University of Zimbabwe College of Health Sciences.
Cooper, R G; Chifamba, J
2009-01-01
In developing countries the cost of treating disease is much higher than that of preventing it, so there is now a lot of interest in understanding nutrition. In this pilot study we selected a cohort of pre-clinical students studying at the College of Health Sciences in the University of Zimbabwe. This study was carried out to investigate the gender-based weekly consumption of different food categories amongst University of Zimbabwe students. Semi-structured questionnaires were distributed to 100 undergraduate students (male = 47; female = 52). The proportion of male and female respondents, age and body weight did not differ significantly. The principal foods consumed by males in the largest proportions included sadza and cerevita; naartjies, bananas and avocado pears; tomatoes, onions, covo and spinach; beef; and condensed milk and powdered milk. Females frequently ate a lot of bread, cerevita, sadza and cereal; lemons and avocado pears; onions, tomatoes, rape and covo; beef and soya meat; creamer, powdered milk and milk. This study suggests that females consumed a greater variety of foods, including the infrequent types, by comparison with men.
Library Micro-Computing, Vol. 2. Reprints from the Best of "ONLINE" [and]"DATABASE."
ERIC Educational Resources Information Center
Online, Inc., Weston, CT.
Reprints of 19 articles pertaining to library microcomputing appear in this collection, the second of two volumes on this topic in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1)…
Web Database Development: Implications for Academic Publishing.
ERIC Educational Resources Information Center
Fernekes, Bob
This paper discusses the preliminary planning, design, and development of a pilot project to create an Internet accessible database and search tool for locating and distributing company data and scholarly work. Team members established four project objectives: (1) to develop a Web accessible database and decision tool that creates Web pages on the…
Prevalence and geographical distribution of Usher syndrome in Germany.
Spandau, Ulrich H M; Rohrschneider, Klaus
2002-06-01
To estimate the prevalence of Usher syndrome in Heidelberg and Mannheim and to map its geographical distribution in Germany. Usher syndrome patients were ascertained through the databases of the Low Vision Department at the University of Heidelberg, and of the patient support group Pro Retina. Ophthalmic and audiologic examinations and medical records were used to classify patients into one of the subtypes. The database of the University of Heidelberg contains 247 Usher syndrome patients, 63 with Usher syndrome type 1 (USH1) and 184 with Usher syndrome type 2 (USH2). The USH1:USH2 ratio in the Heidelberg database was 1:3. The Pro Retina database includes 248 Usher syndrome patients, 21 with USH1 and 227 with USH2. The total number of Usher syndrome patients was 424, with 75 USH1 and 349 USH2 patients; 71 patients were in both databases. The prevalence of Usher syndrome in Heidelberg and suburbs was calculated to be 6.2 per 100,000 inhabitants. There seems to be a homogeneous distribution in Germany for both subtypes. Knowledge of the high prevalence of Usher syndrome, with up to 5,000 patients in Germany, should lead to increased awareness and timely diagnosis by ophthalmologists and otologists. It should also ensure that these patients receive good support through hearing and vision aids.
Blake, M.C.; Jones, D.L.; Graymer, R.W.; digital database by Soule, Adam
2000-01-01
This digital map database, compiled from previously published and unpublished data, and new mapping by the authors, represents the general distribution of bedrock and surficial deposits in the mapped area. Together with the accompanying text file (mageo.txt, mageo.pdf, or mageo.ps), it provides current information on the geologic structure and stratigraphy of the area covered. The database delineates map units that are identified by general age and lithology following the stratigraphic nomenclature of the U.S. Geological Survey. The scale of the source maps limits the spatial resolution (scale) of the database to 1:62,500 or smaller.
Angermeier, Paul L.; Frimpong, Emmanuel A.
2009-01-01
The need for integrated and widely accessible sources of species traits data to facilitate studies of ecology, conservation, and management has motivated development of traits databases for various taxa. In spite of the increasing number of traits-based analyses of freshwater fishes in the United States, no consolidated database of traits of this group exists publicly, and much useful information on these species is documented only in obscure sources. The largely inaccessible and unconsolidated traits information makes large-scale analysis involving many fishes and/or traits particularly challenging. FishTraits is a database of >100 traits for 809 (731 native and 78 exotic) fish species found in freshwaters of the conterminous United States, including 37 native families and 145 native genera. The database contains information on four major categories of traits: (1) trophic ecology, (2) body size and reproductive ecology (life history), (3) habitat associations, and (4) salinity and temperature tolerances. Information on geographic distribution and conservation status is also included. Together, we refer to the traits, distribution, and conservation status information as attributes. Descriptions of attributes are available here. Many sources were consulted to compile attributes, including state and regional species accounts and other databases.
Greenland, K; Rondy, M; Chevez, A; Sadozai, N; Gasasira, A; Abanida, E A; Pate, M A; Ronveaux, O; Okayasu, H; Pedalino, B; Pezzoli, L
2011-07-01
To evaluate oral poliovirus vaccine (OPV) coverage of the November 2009 round in five Northern Nigeria states with ongoing wild poliovirus transmission using clustered lot quality assurance sampling (CLQAS). We selected four local government areas in each pre-selected state and sampled six clusters of 10 children in each Local Government Area, defined as the lot area. We used three decision thresholds to classify OPV coverage: 75-90%, 55-70% and 35-50%. A full lot was completed, but we also assessed in retrospect the potential time-saving benefits of stopping sampling when a lot had been classified. We accepted two local government areas (LGAs) with vaccination coverage above 75%. Of the remaining 18 rejected LGAs, 11 also failed to reach 70% coverage, of which four also failed to reach 50%. The average time taken to complete a lot was 10 h. By stopping sampling when a decision was reached, we could have classified lots in 5.3, 7.7 and 7.3 h on average at the 90%, 70% and 50% coverage targets, respectively. Clustered lot quality assurance sampling was feasible and useful to estimate OPV coverage in Northern Nigeria. The multi-threshold approach provided useful information on the variation of OPV vaccination coverage. CLQAS is a very timely tool, allowing corrective actions to be taken directly in insufficiently covered areas.
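A minimal sketch of the multi-threshold classification step is given below; the decision values are placeholders for a 60-child lot (6 clusters of 10), and a real CLQAS design would derive them while accounting for between-cluster variation.

```python
# Illustrative sketch of multi-threshold lot classification from a cluster
# sample (6 clusters x 10 children); the decision values are placeholders,
# not those of the Nigeria survey.
def classify_lot(cluster_counts, decision_values):
    """cluster_counts: vaccinated children per cluster (out of 10 each).
    decision_values: {coverage_target: minimum vaccinated children in the lot}."""
    total = sum(cluster_counts)
    passed = [t for t, d in sorted(decision_values.items()) if total >= d]
    return total, (max(passed) if passed else None)

if __name__ == "__main__":
    decision_values = {0.50: 27, 0.70: 39, 0.90: 51}   # hypothetical rules for n = 60
    counts = [9, 7, 8, 6, 9, 8]                        # one lot's six clusters
    total, highest = classify_lot(counts, decision_values)
    print(f"{total}/60 vaccinated; highest coverage target passed: {highest}")
```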
Implementing model-based system engineering for the whole lifecycle of a spacecraft
NASA Astrophysics Data System (ADS)
Fischer, P. M.; Lüdtke, D.; Lange, C.; Roshani, F.-C.; Dannemann, F.; Gerndt, A.
2017-09-01
Design information of a spacecraft is collected over all phases in the lifecycle of a project. A lot of this information is exchanged between different engineering tasks and business processes. In some lifecycle phases, model-based system engineering (MBSE) has introduced system models and databases that help to organize such information and to keep it consistent for everyone. Nevertheless, none of the existing databases approached the whole lifecycle yet. Virtual Satellite is the MBSE database developed at DLR. It has been used for quite some time in Phase A studies and is currently extended for implementing it in the whole lifecycle of spacecraft projects. Since it is unforeseeable which future use cases such a database needs to support in all these different projects, the underlying data model has to provide tailoring and extension mechanisms to its conceptual data model (CDM). This paper explains the mechanisms as they are implemented in Virtual Satellite, which enables extending the CDM along the project without corrupting already stored information. As an upcoming major use case, Virtual Satellite will be implemented as MBSE tool in the S2TEP project. This project provides a new satellite bus for internal research and several different payload missions in the future. This paper explains how Virtual Satellite will be used to manage configuration control problems associated with such a multi-mission platform. It discusses how the S2TEP project starts using the software for collecting the first design information from concurrent engineering studies, then making use of the extension mechanisms of the CDM to introduce further information artefacts such as functional electrical architecture, thus linking more and more processes into an integrated MBSE approach.
External access to ALICE controls conditions data
NASA Astrophysics Data System (ADS)
Jadlovský, J.; Jadlovská, A.; Sarnovský, J.; Jajčišin, Š.; Čopík, M.; Jadlovská, S.; Papcun, P.; Bielek, R.; Čerkala, J.; Kopčík, M.; Chochula, P.; Augustinus, A.
2014-06-01
ALICE Controls data produced by the commercial SCADA system WINCCOA is stored in an ORACLE database on the private experiment network. The SCADA system allows for basic access and processing of the historical data. More advanced analysis requires tools like ROOT and therefore needs a separate access method to the archives. The present scenario expects that detector experts create simple WINCCOA scripts, which retrieve and store data in a form usable for further studies. This relatively simple procedure generates a lot of administrative overhead - users have to request the data, experts need to run the script, and the results have to be exported outside of the experiment network. The new mechanism profits from a database replica, which is running on the CERN campus network. Access to this database is not restricted and there is no risk of generating a heavy load affecting the operation of the experiment. The tools presented in this paper allow access to this data. The users can use web-based tools to generate the requests, consisting of the data identifiers and the period of time of interest. The administrators maintain full control over the data - an authorization and authentication mechanism helps to assign privileges to selected users and restrict access to certain groups of data. An advanced caching mechanism allows the user to profit from the presence of already processed data sets. This feature significantly reduces the time required for debugging as the retrieval of raw data can last tens of minutes. A highly configurable client allows for information retrieval bypassing the interactive interface. This method is for example used by ALICE Offline to extract operational conditions after a run is completed. Last but not least, the software can be easily adapted to any underlying database structure and is therefore not limited to WINCCOA.
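The sketch below illustrates the general shape of such a non-interactive, caching retrieval client; the endpoint URL, query parameters and cache layout are hypothetical placeholders, not the actual ALICE tools.

```python
# Sketch of a non-interactive, caching retrieval client in the spirit of the
# tools described above. The endpoint URL, query parameters and cache layout
# are hypothetical placeholders, not the actual ALICE interfaces.
import hashlib, json, pathlib, urllib.request

CACHE_DIR = pathlib.Path("conditions_cache")
ENDPOINT = "https://example.cern.ch/conditions/export"   # placeholder

def fetch_conditions(data_id, start, end):
    """Return archived conditions data, reusing a local cache when the same
    identifier/period has already been processed."""
    key = hashlib.sha1(f"{data_id}|{start}|{end}".encode()).hexdigest()
    cache_file = CACHE_DIR / f"{key}.json"
    if cache_file.exists():                      # profit from already processed data
        return json.loads(cache_file.read_text())
    url = f"{ENDPOINT}?id={data_id}&from={start}&to={end}"
    with urllib.request.urlopen(url) as response:
        data = json.loads(response.read())
    CACHE_DIR.mkdir(exist_ok=True)
    cache_file.write_text(json.dumps(data))
    return data
```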
Income distribution patterns from a complete social security database
NASA Astrophysics Data System (ADS)
Derzsy, N.; Néda, Z.; Santos, M. A.
2012-11-01
We analyze the income distribution of employees for 9 consecutive years (2001-2009) using a complete social security database for an economically important district of Romania. The database contains detailed information on more than half a million taxpayers, including their monthly salaries from all employers where they worked. Besides studying the characteristic distribution functions in the high and low/medium income limits, the database allows us a detailed dynamical study by following the time-evolution of the taxpayers' income. To our knowledge, this is the first extensive study of this kind (a previous Japanese taxpayers survey was limited to two years). In the high income limit we prove once again the validity of Pareto's law, obtaining a perfect scaling on four orders of magnitude in the rank for all the studied years. The obtained Pareto exponents are quite stable with values around α≈2.5, in spite of the fact that during this period the economy developed rapidly and a financial-economic crisis also hit Romania in 2007-2008. For the low and medium income category we confirmed the exponential-type income distribution. Following the income of employees in time, we have found that the top limit of the income distribution is a highly dynamical region with strong fluctuations in the rank. In this region, the observed dynamics is consistent with a multiplicative random growth hypothesis. Contrary to previous results obtained for Japanese employees, we find that the logarithmic growth-rate is not independent of the income.
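For readers who want to reproduce the tail analysis on their own data, the following sketch estimates a Pareto exponent from the top of an income sample with a rank-size (log-log) fit; the sample is synthetic, not the Romanian social security data.

```python
# Sketch: estimating the Pareto exponent of the high-income tail from a
# rank-size (log-log) regression, as commonly done for income data.
# The sample below is synthetic, not the data analyzed in the study.
import numpy as np

def pareto_exponent(incomes, top_fraction=0.01):
    """Fit rank ~ C * income^(-alpha) on the top `top_fraction` of incomes."""
    x = np.sort(np.asarray(incomes))[::-1]            # descending
    n_top = max(int(len(x) * top_fraction), 10)
    top = x[:n_top]
    ranks = np.arange(1, n_top + 1)
    slope, _ = np.polyfit(np.log(top), np.log(ranks), 1)
    return -slope                                     # alpha

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n = 500_000
    sample = (np.exp(rng.normal(7.5, 0.6, n))                 # log-normal bulk
              * np.where(rng.random(n) < 0.02,                # heavy upper tail
                         (1 - rng.random(n)) ** -0.4, 1.0))
    print(f"Estimated Pareto exponent: {pareto_exponent(sample):.2f}")
```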
Development and Applications of Laminar Optical Tomography for In Vivo Imaging
NASA Astrophysics Data System (ADS)
Burgess, Sean A.
Laminar optical tomography (LOT) is an optical imaging technique capable of making depth-resolved measurements of absorption and fluorescence contrast in scattering tissue. LOT was first demonstrated in 2004 by Hillman et al [1]. The technique combines a non-contact laser scanning geometry, similar to a low magnification confocal microscope, with the imaging principles of diffuse optical tomography (DOT). This thesis describes the development and application of a second generation LOT system, which acquires both fluorescence and multi-wavelength measurements simultaneously and is better suited for in vivo measurements. Chapter 1 begins by reviewing the interactions of light with tissue that form the foundation of optical imaging. A range of related optical imaging techniques and the basic principles of LOT imaging are then described. In Chapter 2, the development of the new LOT imaging system is described including the implementation of a series of interfaces to allow clinical imaging. System performance is then evaluated on a range of imaging phantoms. Chapter 3 describes two in vivo imaging applications explored using the second generation LOT system, first in a clinical setting where skin lesions were imaged, and then in a laboratory setting where LOT imaging was performed on exposed rat cortex. The final chapter provides a brief summary and describes future directions for LOT. LOT has the potential to find applications in medical diagnostics, surgical guidance, and in-situ monitoring owing to its sensitivity to absorption and fluorescence contrast as well as its ability to provide depth sensitive measures. Optical techniques can characterize blood volume and oxygenation, two important biological parameters, through measurements at different wavelengths. Fluorescence measurements, either from autofluorescence or fluorescent dyes, have shown promise for identifying and analyzing lesions in various epithelial tissues including skin [2, 3], colon [4], esophagus [5, 6], oral mucosa [7, 8], and cervix [9]. The desire to capture these types of measurements with LOT motivated much of the work presented here.
Extending GIS Technology to Study Karst Features of Southeastern Minnesota
NASA Astrophysics Data System (ADS)
Gao, Y.; Tipping, R. G.; Alexander, E. C.; Alexander, S. C.
2001-12-01
This paper summarizes ongoing research on karst feature distribution of southeastern Minnesota. The main goals of this interdisciplinary research are: 1) to look for large-scale patterns in the rate and distribution of sinkhole development; 2) to conduct statistical tests of hypotheses about the formation of sinkholes; 3) to create management tools for land-use managers and planners; and 4) to deliver geomorphic and hydrogeologic criteria for making scientifically valid land-use policies and ethical decisions in karst areas of southeastern Minnesota. Existing county and sub-county karst feature datasets of southeastern Minnesota have been assembled into a large GIS-based database capable of analyzing the entire data set. The central database management system (DBMS) is a relational GIS-based system interacting with three modules: GIS, statistical and hydrogeologic modules. ArcInfo and ArcView were used to generate a series of 2D and 3D maps depicting karst feature distributions in southeastern Minnesota. IRIS Explorer™ was used to produce satisfying 3D maps and animations using data exported from the GIS-based database. Nearest-neighbor analysis has been used to test sinkhole distributions in different topographic and geologic settings. All current nearest-neighbor analyses indicate that sinkholes in southeastern Minnesota are not evenly distributed in this area (i.e., they tend to be clustered). More detailed statistical methods such as cluster analysis, histograms, probability estimation, correlation and regression have been used to study the spatial distributions of some mapped karst features of southeastern Minnesota. A sinkhole probability map for Goodhue County has been constructed based on sinkhole distribution, bedrock geology, depth to bedrock, GIS buffer analysis and nearest-neighbor analysis. A series of karst features for Winona County including sinkholes, springs, seeps, stream sinks and outcrops has been mapped and entered into the Karst Feature Database of Southeastern Minnesota. The Karst Feature Database of Winona County is being expanded to include all the mapped karst features of southeastern Minnesota. Air photos from the 1930s to 1990s of the Spring Valley Cavern Area in Fillmore County were scanned and geo-referenced into our GIS system. This technology has proved very useful for identifying sinkholes and studying the rate of sinkhole development.
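A minimal sketch of a nearest-neighbor (Clark-Evans) clustering test of the kind mentioned above is shown below; the sinkhole coordinates and study area are synthetic.

```python
# Sketch of a nearest-neighbor (Clark-Evans) test for clustering of mapped
# features such as sinkholes. Coordinates are synthetic; a real analysis would
# use projected sinkhole locations and the mapped study-area extent.
import numpy as np
from scipy.spatial import cKDTree

def clark_evans_ratio(points, area):
    """R < 1 suggests clustering, R ~ 1 randomness, R > 1 dispersion."""
    pts = np.asarray(points, dtype=float)
    tree = cKDTree(pts)
    d, _ = tree.query(pts, k=2)           # k=2: nearest neighbor other than self
    observed = d[:, 1].mean()
    expected = 0.5 / np.sqrt(len(pts) / area)
    return observed / expected

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    centers = rng.uniform(0, 10_000, size=(20, 2))
    sinkholes = np.vstack([c + rng.normal(0, 150, size=(30, 2)) for c in centers])
    print(f"R = {clark_evans_ratio(sinkholes, area=10_000**2):.2f}  (clustered if < 1)")
```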
Slattery, Martha L; Herrick, Jennifer S; Stevens, John R; Wolff, Roger K; Mullany, Lila E
2017-01-01
Determination of functional pathways regulated by microRNAs (miRNAs), while an essential step in developing therapeutics, is challenging. Some miRNAs have been studied extensively; others have limited information. In this study, we focus on 254 miRNAs previously identified as being associated with colorectal cancer and their database-identified validated target genes. We use RNA-Seq data to evaluate messenger RNA (mRNA) expression for 157 subjects who also had miRNA expression data. In the replication phase of the study, we replicated associations between 254 miRNAs associated with colorectal cancer and mRNA expression of database-identified target genes in normal colonic mucosa. In the discovery phase of the study, we evaluated expression of 18 miRNAs (those with 20 or fewer database-identified target genes along with miR-21-5p, miR-215-5p, and miR-124-3p, which have more than 500 database-identified target genes) with expression of 17 434 mRNAs to identify new targets in colon tissue. Seed region matches between miRNA and newly identified targeted mRNA were used to help determine direct miRNA-mRNA associations. From the replication of the 121 miRNAs that had at least 1 database-identified target gene using mRNA expression methods, 97.9% were expressed in normal colonic mucosa. Of the 8622 target miRNA-mRNA associations identified in the database, 2658 (30.2%) were associated with gene expression in normal colonic mucosa after adjusting for multiple comparisons. Of the 133 miRNAs with database-identified target genes by non-mRNA expression methods, 97.2% were expressed in normal colonic mucosa. After adjustment for multiple comparisons, 2416 miRNA-mRNA associations remained significant (19.8%). Results from the discovery phase, based on detailed examination of 18 miRNAs, identified more than 80 000 miRNA-mRNA associations that had not previously been linked to the miRNA. Of these miRNA-mRNA associations, 15.6% and 14.8% had seed matches for GRCh38 and GRCh37, respectively. Our data suggest that miRNA target gene databases are incomplete; pathways derived from these databases have similar deficiencies. Although we know a lot about several miRNAs, little is known about other miRNAs in terms of their targeted genes. We encourage others to use their data to continue to further identify and validate miRNA-targeted genes.
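The following sketch shows one way such miRNA-mRNA association tests with a multiple-comparison adjustment can be set up (Spearman correlation with Benjamini-Hochberg control); the data are synthetic, and the study's own statistical model may differ.

```python
# Sketch of miRNA-mRNA association testing with a multiple-comparison
# adjustment. Spearman correlation + Benjamini-Hochberg is one reasonable
# choice, not necessarily the study's exact model. Data are synthetic.
import numpy as np
from scipy.stats import spearmanr
from statsmodels.stats.multitest import multipletests

def mirna_mrna_associations(mirna_expr, mrna_expr, fdr=0.05):
    """mirna_expr: (subjects,) array; mrna_expr: (subjects, genes) array."""
    rhos, pvals = [], []
    for g in range(mrna_expr.shape[1]):
        rho, p = spearmanr(mirna_expr, mrna_expr[:, g])
        rhos.append(rho)
        pvals.append(p)
    reject, p_adj, _, _ = multipletests(pvals, alpha=fdr, method="fdr_bh")
    return np.array(rhos), p_adj, reject

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    mirna = rng.normal(size=157)                 # 157 subjects, as in the study
    mrna = rng.normal(size=(157, 200))
    mrna[:, 0] = -0.6 * mirna + rng.normal(scale=0.5, size=157)   # one true target
    rho, p_adj, sig = mirna_mrna_associations(mirna, mrna)
    print(f"significant associations after FDR: {sig.sum()} (gene 0 rho = {rho[0]:.2f})")
```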
The Raid distributed database system
NASA Technical Reports Server (NTRS)
Bhargava, Bharat; Riedl, John
1989-01-01
Raid, a robust and adaptable distributed database system for transaction processing (TP), is described. Raid is a message-passing system, with server processes on each site to manage concurrent processing, consistent replicated copies during site failures, and atomic distributed commitment. A high-level layered communications package provides a clean location-independent interface between servers. The latest design of the package delivers messages via shared memory in a configuration with several servers linked into a single process. Raid provides the infrastructure to investigate various methods for supporting reliable distributed TP. Measurements on TP and server CPU time are presented, along with data from experiments on communications software, consistent replicated copy control during site failures, and concurrent distributed checkpointing. A software tool for evaluating the implementation of TP algorithms in an operating-system kernel is proposed.
7 CFR 983.52 - Failed lots/rework procedure.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Agreements and Orders; Fruits, Vegetables, Nuts), DEPARTMENT OF AGRICULTURE PISTACHIOS GROWN IN CALIFORNIA, ARIZONA, AND NEW MEXICO Regulations § 983.52 Failed lots/rework procedure. (a) Substandard pistachios. Each lot of substandard pistachios may be reworked to meet aflatoxin or quality requirements. The...
1. Context view showing cabin on Lot 2 in foreground ...
1. Context view showing cabin on Lot 2 in foreground (17419 North Shore Drive) and east side of Frank-Jensen Summer Home on Lot 3 in background. - Frank-Jensen Summer Home, 17423 North Lake Shore Drive, Telma, Chelan County, WA
Joint Air-to-Surface Standoff Missile (JASSM)
2015-12-01
Selected Acquisition Report extract (table and front-matter residue): Missile Reliability (KSA) (CPD para 6.2.8) values reported by production lot and in IOT&E (e.g., 4th Lot .91, IOT&E .80, 4th Lot .85); references the ORD 303-95-III dated January 20, 2004, with no change explanations; acronyms include IOT&E - Initial Operational Test and Evaluation and KSA; other items listed include the Actuator Control Card (Lots 12 and 4), Systems Engineering Program Support/Program Tooling and Test Equipment, and the JASSM-ER Standard Data Protocol.
Research on Some Bus Transport Networks with Random Overlapping Clique Structure
NASA Astrophysics Data System (ADS)
Yang, Xu-Hua; Wang, Bo; Wang, Wan-Liang; Sun, You-Xian
2008-11-01
On the basis of investigating the statistical data of bus transport networks of three big cities in China, we propose that each bus route is a clique (maximal complete subgraph) and that a bus transport network (BTN) consists of a lot of cliques, which intensively connect and overlap with each other. We study the network properties, which include the degree distribution, the multiple edges' overlapping time distribution, the distribution of the overlap size between any two overlapping cliques, and the distribution of the number of cliques that a node belongs to. Naturally, the cliques also constitute a network, with the overlapping nodes being their multiple links. We also study its network properties such as degree distribution, clustering, average path length, and so on. We propose that a BTN has the properties of random clique increment and random clique overlap; at the same time, a BTN is a small-world network that is highly clique-clustered and highly clique-overlapped. Finally, we introduce a BTN evolution model, whose simulation results agree well with the statistical laws that emerge in real BTNs.
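The route-as-clique construction can be illustrated with a few lines of Python using networkx; the routes below are invented, not the surveyed Chinese networks.

```python
# Sketch of the "each route is a clique" view of a bus transport network:
# build the stop-level graph from routes and measure overlaps between routes.
# The route lists are made up for illustration.
import itertools
from collections import Counter
import networkx as nx

routes = [
    ["A", "B", "C", "D", "E"],
    ["C", "D", "E", "F", "G"],
    ["E", "G", "H", "I"],
]

G = nx.Graph()
for route in routes:                       # each route connects all its stops: a clique
    G.add_edges_from(itertools.combinations(route, 2))

degree_hist = Counter(dict(G.degree()).values())
overlap_sizes = [len(set(r1) & set(r2))
                 for r1, r2 in itertools.combinations(routes, 2)
                 if set(r1) & set(r2)]
cliques_per_stop = Counter(stop for route in routes for stop in set(route))

print("degree distribution:", dict(degree_hist))
print("overlap sizes between overlapping routes:", overlap_sizes)
print("number of routes each stop belongs to:", dict(cliques_per_stop))
print("average path length:", nx.average_shortest_path_length(G))
print("clustering:", nx.average_clustering(G))
```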
Growth of Salmonella during sprouting of alfalfa seeds associated with salmonellosis outbreaks.
Stewart, D S; Reineke, K F; Ulaszek, J M; Tortorello, M L
2001-05-01
Growth of Salmonella was assessed during sprouting of naturally contaminated alfalfa seeds associated with two outbreaks of salmonellosis. Salmonella was determined daily in sprouts and sprout rinse water samples by a three-tube most probable number (MPN) procedure and a commercial enzyme immunoassay (EIA). Growth of Salmonella in the sprouts was reflected in the rinse water, and the MPNs of the two samples were generally in agreement within approximately 1 log. The results from EIA testing of sprouts and water samples were also in agreement. The pathogen was present in the seed at less than 1 MPN/g, and it increased in number to maximum population levels of 10(2) to 10(3) MPN/g in one seed lot and 10(2) to 10(4) MPN/g in the other seed lot. Maximum populations of the pathogen were apparent by day 2 of sprouting. These results show the ability of the pathogen to grow to detectable levels during the sprouting process, and they provide support for the recommendation to test the sprout water for the presence of pathogens 48 h after starting seed sprouting. The effectiveness of a 10-min, 20,000-microg/ml (ppm) calcium hypochlorite treatment of the outbreak-associated seeds was studied. For both seed lots, the hypochlorite treatment caused a reduction, but not elimination, of Salmonella contamination in the finished sprouts. These results confirm the need to test each production batch for the presence of pathogens, even after 20,000 microg/ml (ppm) hypochlorite treatment of seeds, so that contaminated product is not distributed.
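For reference, the maximum-likelihood calculation behind a three-tube MPN estimate can be sketched as follows; the tube counts and dilution volumes are illustrative examples, not the study's data.

```python
# Sketch of the maximum-likelihood calculation behind a three-tube MPN
# estimate; tube counts and dilution volumes below are illustrative only.
import numpy as np
from scipy.optimize import minimize_scalar

def mpn(positives, tubes, grams_per_tube):
    """MPN per gram from a dilution series via maximum likelihood."""
    p = np.asarray(positives); n = np.asarray(tubes); v = np.asarray(grams_per_tube)

    def neg_log_likelihood(log_lam):
        lam = np.exp(log_lam)
        prob_pos = 1 - np.exp(-lam * v)
        return -np.sum(p * np.log(prob_pos + 1e-300) - (n - p) * lam * v)

    res = minimize_scalar(neg_log_likelihood, bounds=(-10, 15), method="bounded")
    return np.exp(res.x)

if __name__ == "__main__":
    # Three-tube series at 0.1, 0.01 and 0.001 g per tube: 3, 1, 0 positive tubes.
    print(f"MPN ~ {mpn([3, 1, 0], [3, 3, 3], [0.1, 0.01, 0.001]):.1f} per gram")
```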
Hoijemberg, Pablo A; Pelczer, István
2018-01-05
A lot of time is spent by researchers in the identification of metabolites in NMR-based metabolomic studies. The usual metabolite identification starts employing public or commercial databases to match chemical shifts thought to belong to a given compound. Statistical total correlation spectroscopy (STOCSY), in use for more than a decade, speeds the process by finding statistical correlations among peaks, being able to create a better peak list as input for the database query. However, the (normally not automated) analysis becomes challenging due to the intrinsic issue of peak overlap, where correlations of more than one compound appear in the STOCSY trace. Here we present a fully automated methodology that analyzes all STOCSY traces at once (every peak is chosen as driver peak) and overcomes the peak overlap obstacle. Peak overlap detection by clustering analysis and sorting of traces (POD-CAST) first creates an overlap matrix from the STOCSY traces, then clusters the overlap traces based on their similarity and finally calculates a cumulative overlap index (COI) to account for both strong and intermediate correlations. This information is gathered in one plot to help the user identify the groups of peaks that would belong to a single molecule and perform a more reliable database query. The simultaneous examination of all traces reduces the time of analysis, compared to viewing STOCSY traces by pairs or small groups, and condenses the redundant information in the 2D STOCSY matrix into bands containing similar traces. The COI helps in the detection of overlapping peaks, which can be added to the peak list from another cross-correlated band. POD-CAST overcomes the generally overlooked and underestimated presence of overlapping peaks and it detects them to include them in the search of all compounds contributing to the peak overlap, enabling the user to accelerate the metabolite identification process with more successful database queries and searching all tentative compounds in the sample set.
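The core STOCSY step (correlating every spectral point with a driver peak) and the grouping of similar traces can be sketched as below; this is not the authors' POD-CAST implementation, only the underlying idea run on synthetic spectra.

```python
# Sketch of the idea underlying STOCSY-based analysis: correlate every spectral
# variable with every other (each point acting as "driver"), then group similar
# correlation traces by hierarchical clustering. Spectra are synthetic.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)
n_samples, n_points = 40, 120
concentrations = rng.lognormal(size=(n_samples, 2))          # two "compounds"
peaks = np.zeros((2, n_points))
peaks[0, [10, 45, 80]] = 1.0                                  # compound A peaks
peaks[1, [30, 45, 95]] = 1.0                                  # compound B peaks (overlap at 45)
spectra = concentrations @ peaks + rng.normal(0, 0.05, (n_samples, n_points))

stocsy = np.corrcoef(spectra.T)      # trace k = correlation of point k with all points

# Cluster the traces at the main peak positions; an overlapping peak (45)
# correlates with both groups, which is what an overlap index would flag.
peak_positions = [10, 30, 45, 80, 95]
traces = stocsy[peak_positions]
clusters = fcluster(linkage(traces, method="average"), t=2, criterion="maxclust")
print(dict(zip(peak_positions, clusters)))
```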
Pedretti, Kevin
2008-11-18
A compute processor allocator architecture for allocating compute processors to run applications in a multiple processor computing apparatus is distributed among a subset of processors within the computing apparatus. Each processor of the subset includes a compute processor allocator. The compute processor allocators can share a common database of information pertinent to compute processor allocation. A communication path permits retrieval of information from the database independently of the compute processor allocators.
Secure Indoor Localization Based on Extracting Trusted Fingerprint
Yin, Xixi; Zheng, Yanliu; Wang, Chun
2018-01-01
Indoor localization based on WiFi has attracted a lot of research effort because of the widespread application of WiFi. Fingerprinting techniques have received much attention due to their simplicity and compatibility with existing hardware. However, existing fingerprinting localization algorithms may not resist abnormal received signal strength indication (RSSI), such as unexpected environmental changes, impaired access points (APs) or the introduction of new APs. Traditional fingerprinting algorithms do not consider the problem of new APs and impaired APs in the environment when using RSSI. In this paper, we propose a secure fingerprinting localization (SFL) method that is robust to variable environments, impaired APs and the introduction of new APs. In the offline phase, a voting mechanism and a fingerprint database update method are proposed. We use the mutual cooperation between reference anchor nodes to update the fingerprint database, which can reduce the interference caused by the user measurement data. We analyze the standard deviation of RSSI, mobilize the reference points in the database to vote on APs and then calculate the trust factors of APs based on the voting results. In the online phase, we first make a judgment about the new APs and the broken APs, then extract the secure fingerprints according to the trusted factors of APs and obtain the localization results by using the trusted fingerprints. In the experiment section, we demonstrate the proposed method and find that the proposed strategy can resist abnormal RSSI and can improve the localization accuracy effectively compared with the existing fingerprinting localization algorithms. PMID:29401755
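A simplified sketch of trust-weighted fingerprint matching in this spirit is given below; the trust-factor computation (inverse RSSI standard deviation) is a stand-in for the paper's voting mechanism, and all numbers are illustrative.

```python
# Sketch of trust-weighted fingerprint matching. The trust computation here
# (stable APs earn higher trust) simplifies the paper's voting mechanism;
# all values are illustrative.
import numpy as np

def trust_factors(rssi_history):
    """rssi_history: (observations, n_aps); stable APs earn higher trust."""
    std = rssi_history.std(axis=0)
    return 1.0 / (1.0 + std)

def localize(measurement, fingerprints, positions, trust, k=3):
    """Trust-weighted k-nearest-neighbor position estimate."""
    diffs = fingerprints - measurement
    dists = np.sqrt(((trust * diffs) ** 2).sum(axis=1))
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-9)
    return (positions[nearest] * weights[:, None]).sum(axis=0) / weights.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    positions = np.array([[0, 0], [0, 5], [5, 0], [5, 5]], dtype=float)
    fingerprints = rng.uniform(-80, -40, size=(4, 6))          # 4 reference points, 6 APs
    history = fingerprints[0] + rng.normal(0, [1, 1, 8, 1, 1, 1], size=(50, 6))
    trust = trust_factors(history)                             # AP 2 is unstable -> low trust
    estimate = localize(fingerprints[1] + rng.normal(0, 1, 6), fingerprints, positions, trust)
    print("estimated position:", estimate.round(2), " trust:", trust.round(2))
```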
Practical private database queries based on a quantum-key-distribution protocol
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakobi, Markus; Humboldt-Universitaet zu Berlin, D-10117 Berlin; Simon, Christoph
2011-02-15
Private queries allow a user, Alice, to learn an element of a database held by a provider, Bob, without revealing which element she is interested in, while limiting her information about the other elements. We propose to implement private queries based on a quantum-key-distribution protocol, with changes only in the classical postprocessing of the key. This approach makes our scheme both easy to implement and loss tolerant. While unconditionally secure private queries are known to be impossible, we argue that an interesting degree of security can be achieved by relying on fundamental physical principles instead of unverifiable security assumptions in order to protect both the user and the database. We think that the scope exists for such practical private queries to become another remarkable application of quantum information in the footsteps of quantum key distribution.
Gherghel, Iulian; Papeş, Monica; Brischoux, François; Sahlean, Tiberiu; Strugariu, Alexandru
2016-01-01
The genus Laticauda (Reptilia: Elapidae), commonly known as sea kraits, comprises eight species of marine amphibious snakes distributed along the shores of the Western Pacific Ocean and the Eastern Indian Ocean. We review the information available on the geographic range of sea kraits and analyze their distribution patterns. Generally, we found that south and south-west of Japan, Philippines Archipelago, parts of Indonesia, and Vanuatu have the highest diversity of sea krait species. Further, we compiled the information available on sea kraits' occurrences from a variety of sources, including museum records, field surveys, and the scientific literature. The final database comprises 694 occurrence records, with Laticauda colubrina having the highest number of records and Laticauda schistorhyncha the lowest. The occurrence records were georeferenced and compiled as a database for each sea krait species. This database can be freely used for future studies.
Cellular Manufacturing System with Dynamic Lot Size Material Handling
NASA Astrophysics Data System (ADS)
Khannan, M. S. A.; Maruf, A.; Wangsaputra, R.; Sutrisno, S.; Wibawa, T.
2016-02-01
Material handling takes an important role in Cellular Manufacturing System (CMS) design. In several studies of CMS design, material handling was assumed to be per piece or with a constant lot size. In real industrial practice, lot size may change during the rolling period to cope with demand changes. This study develops a CMS model with dynamic lot size material handling. Integer Linear Programming is used to solve the problem. The objective function of this model is minimizing total expected cost, consisting of machinery depreciation cost, operating costs, inter-cell material handling cost, intra-cell material handling cost, machine relocation costs, setup costs, and production planning cost. This model determines the optimum cell formation and optimum lot size. Numerical examples are elaborated in the paper to illustrate the characteristics of the model.
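A toy version of the integer program involved can be written with the PuLP package (assumed available); it only assigns machines and parts to cells to minimize inter-cell moves and omits the paper's dynamic lot sizes, setup, relocation and depreciation terms.

```python
# Toy sketch of a cell-formation integer program: assign machines and parts to
# cells to minimize inter-cell handling. It omits most terms of the paper's
# model (dynamic lot sizes, setups, relocation, depreciation) and uses PuLP.
import pulp

machines, parts, cells = range(4), range(5), range(2)
# routing[p] = machines visited by part p (illustrative data)
routing = {0: [0, 1], 1: [0, 2], 2: [1, 3], 3: [2, 3], 4: [0, 3]}

x = pulp.LpVariable.dicts("machine_in_cell", (machines, cells), cat="Binary")
y = pulp.LpVariable.dicts("part_in_cell", (parts, cells), cat="Binary")
# z[p][m][c] = 1 if part p sits in cell c but needs machine m outside that cell
z = pulp.LpVariable.dicts("intercell_move", (parts, machines, cells), cat="Binary")

prob = pulp.LpProblem("cell_formation", pulp.LpMinimize)
prob += pulp.lpSum(z[p][m][c] for p in parts for m in routing[p] for c in cells)

for m in machines:
    prob += pulp.lpSum(x[m][c] for c in cells) == 1      # each machine in one cell
for p in parts:
    prob += pulp.lpSum(y[p][c] for c in cells) == 1      # each part in one cell
for c in cells:
    prob += pulp.lpSum(x[m][c] for m in machines) >= 1   # no empty cells
for p in parts:
    for m in routing[p]:
        for c in cells:
            prob += z[p][m][c] >= y[p][c] - x[m][c]      # count an inter-cell move

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for c in cells:
    ms = [m for m in machines if x[m][c].value() == 1]
    ps = [p for p in parts if y[p][c].value() == 1]
    print(f"cell {c}: machines {ms}, parts {ps}")
print("inter-cell moves:", int(pulp.value(prob.objective)))
```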
Mugshot Identification Database (MID)
National Institute of Standards and Technology Data Gateway
NIST Mugshot Identification Database (MID) (Web, free access) NIST Special Database 18 is being distributed for use in development and testing of automated mugshot identification systems. The database consists of three CD-ROMs, containing a total of 3248 images of variable size using lossless compression. A newer version of the compression/decompression software on the CDROM can be found at the website http://www.nist.gov/itl/iad/ig/nigos.cfm as part of the NBIS package.
Database Entity Persistence with Hibernate for the Network Connectivity Analysis Model
2014-04-01
time savings in the Java coding development process. Appendices A and B describe address setup procedures for installing the MySQL database...development environment is required: • The open source MySQL Database Management System (DBMS) from Oracle, which is a Java Database Connectivity (JDBC...compliant DBMS • MySQL JDBC Driver library that comes as a plug-in with the Netbeans distribution • The latest Java Development Kit with the latest
DNApod: DNA polymorphism annotation database from next-generation sequence read archives.
Mochizuki, Takako; Tanizawa, Yasuhiro; Fujisawa, Takatomo; Ohta, Tazro; Nikoh, Naruo; Shimizu, Tokurou; Toyoda, Atsushi; Fujiyama, Asao; Kurata, Nori; Nagasaki, Hideki; Kaminuma, Eli; Nakamura, Yasukazu
2017-01-01
With the rapid advances in next-generation sequencing (NGS), datasets for DNA polymorphisms among various species and strains have been produced, stored, and distributed. However, reliability varies among these datasets because the experimental and analytical conditions used differ among assays. Furthermore, such datasets have been frequently distributed from the websites of individual sequencing projects. It is desirable to integrate DNA polymorphism data into one database featuring uniform quality control that is distributed from a single platform at a single place. DNA polymorphism annotation database (DNApod; http://tga.nig.ac.jp/dnapod/) is an integrated database that stores genome-wide DNA polymorphism datasets acquired under uniform analytical conditions, and this includes uniformity in the quality of the raw data, the reference genome version, and evaluation algorithms. DNApod genotypic data are re-analyzed whole-genome shotgun datasets extracted from sequence read archives, and DNApod distributes genome-wide DNA polymorphism datasets and known-gene annotations for each DNA polymorphism. This new database was developed for storing genome-wide DNA polymorphism datasets of plants, with crops being the first priority. Here, we describe our analyzed data for 679, 404, and 66 strains of rice, maize, and sorghum, respectively. The analytical methods are available as a DNApod workflow in an NGS annotation system of the DNA Data Bank of Japan and a virtual machine image. Furthermore, DNApod provides tables of links of identifiers between DNApod genotypic data and public phenotypic data. To advance the sharing of organism knowledge, DNApod offers basic and ubiquitous functions for multiple alignment and phylogenetic tree construction by using orthologous gene information.
NASA Astrophysics Data System (ADS)
Romeuf, Nathalie; Roux, Lionel; Faralli, Alain
2017-04-01
The Provence region provides many limestone and biocalcarenite outcrops, Oligocene and Miocene in age. These outcrops allow us to study a key period in Mediterranean geological history: the Corsica-Sardinia rotation and the spreading of the Liguro-Provençal Basin. These sedimentary rocks can be studied at several levels: At middle school, past biodiversity and a paleogeographic reconstruction can be approached through the very rich fossil content of the limestones and a Miocene fossils database developed by the Lithotheque-PACA group, At high school, a comparison between several zones (from the Côte Bleue to the north, outcrops in the Vaucluse area) can be made in order to study the Miocene transgression that followed the opening of the Liguro-Provençal basin. These rocks have been extensively quarried as building stone used in many monuments of the Provence region. The nature of the crust between Provence and Corsica can be determined by using edusismo tools (determination of the P-wave velocity through oceanic and continental crust). At the university, the complexity of a transgression can be understood: the correlation of stratigraphic data at different places in the same zone shows the complex geometry of the topographic transgression surface and the dynamics of the Liguro-Provençal opening, which stopped many million years before the end of the Miocene transgression. This can be used to introduce the model of thermal subsidence and vertical facies variation, to demonstrate the non-constant pace of the transgression, and even to show that transgression/regression cycles with different periods are superimposed. The aim of our project is to present the link between fieldwork, the exploitation of a pedagogical database (the Lithothèque-PACA website: http://www.lithotheque.ac-aix-marseille.fr/) and the studies carried out in the classroom. In fact, we have guided several field trips for teachers to allow them to appreciate the abundance of possible pedagogical material based on regional geological resources in Provence. The training has been complemented by conferences, pedagogical practical work and web documents. We hope these suggestions have allowed teachers to work from scientific data (instead of generic pedagogical materials) linked to their students' immediate environment. The Lithotheque-PACA website presents regional geological data on sites of interest for science teachers at middle and high school. The goal of the site is to simplify the teachers' work in preparing field trips with students by providing, in particular: - Scientific geological data on pedagogical sites, - Access and outcrop conditions that help ensure student safety, - Pedagogical guidance aligned with the official curricula, showing some ways to use the geological objects, - Documents useful for teachers: photographs of landscapes, outcrops, rocks and details (fossils, minerals, tectonics,...), schematic cross-sections, geological maps... - A database on Miocene fossils preserved in the regional museums of Natural History.
7 CFR 42.103 - Purpose and scope.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... CONDITION OF FOOD CONTAINERS Procedures for Stationary Lot Sampling and Inspection § 42.103 Purpose and... stationary lots of packaged foods. This subpart shall be used to determine the acceptability of a lot based...
Code of Federal Regulations, 2010 CFR
2010-01-01
..., and read the new Percent of Lots Expected to be Accepted, Pas, which results when using these skip lot... point, proceed vertically to the curve and then horizontally to the left to the vertical axis. From this...
Code of Federal Regulations, 2011 CFR
2011-01-01
..., and read the new Percent of Lots Expected to be Accepted, Pas, which results when using these skip lot... point, proceed vertically to the curve and then horizontally to the left to the vertical axis. From this...
Homogeneity of GAFCHROMIC EBT2 film among different lot numbers
Takahashi, Yutaka; Tanaka, Atsushi; Hirayama, Takamitsu; Yamaguchi, Tsuyoshi; Katou, Hiroaki; Takahara, Keiko; Okamoto, Yoshiaki; Teshima, Teruki
2012-01-01
EBT2 film is widely used for quality assurance in radiation therapy. The purpose of this study was to investigate the homogeneity of EBT2 film among various lots, and the dose dependence of heterogeneity. EBT2 film was positioned in the center of a flatbed scanner and scanned in transmission mode at 75 dpi. Homogeneity was investigated by evaluating gray value and net optical density (netOD) with the red color channel. The dose dependence of heterogeneity in a single sheet from five lots was investigated at 0.5, 2, and 3 Gy. The maximum coefficient of variation as evaluated by netOD in a single film was 3.0% in one lot, but no higher than 0.5% in the other lots. Dose dependence of heterogeneity was observed on evaluation by gray value but not on evaluation by netOD. These results suggest that EBT2 should be examined for each lot number before clinical use, and that the dose calibration curve should be constructed using netOD. PMID:22766947
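For orientation, net optical density and the coefficient of variation used to quantify homogeneity can be computed as in the sketch below; the pixel-value arrays are synthetic stand-ins for scanned film regions.

```python
# Sketch: net optical density from red-channel pixel values and the
# coefficient of variation used to quantify film homogeneity.
# The pixel arrays are synthetic, not scanned EBT2 data.
import numpy as np

def net_od(pv_unexposed, pv_exposed):
    """netOD = log10(PV_before / PV_after), pixel by pixel (red channel)."""
    return np.log10(pv_unexposed / pv_exposed)

def coefficient_of_variation(values):
    return values.std() / values.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    pv_before = rng.normal(45000, 300, size=(100, 100))     # 16-bit scanner values
    pv_after = rng.normal(30000, 300, size=(100, 100))      # after irradiation (illustrative)
    nod = net_od(pv_before, pv_after)
    print(f"mean netOD = {nod.mean():.3f}, CV = {coefficient_of_variation(nod):.2%}")
```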
Susceptibility to Cracking of Different Lots of CDR35 Capacitors
NASA Technical Reports Server (NTRS)
Teverovsky, Alexander
2017-01-01
On-orbit flight anomalies that occurred after several months of operation were attributed to excessive leakage currents in CDR35 style 0.47 microF 50 V capacitors operating at 10 V. In this work, a lot of capacitors similar to the lot that caused the anomaly was evaluated in parallel with another lot of similar parts to assess their susceptibility to cracking under manual soldering conditions and to gain insight into a possible mechanism of failure. Leakage currents in capacitors were monitored at different voltages and environmental conditions before and after terminal solder dip testing that was used to simulate thermal shock during manual soldering. Results of cross-sectioning, acoustic microscopy, and measurements of electrical and mechanical characteristics of the parts have been analyzed, and possible mechanisms of failure considered. It is shown that the susceptibility to cracking and failures caused by manual soldering is lot-related. Recommendations are given for testing that would help to select lots that are more robust against manual soldering stresses and mitigate the risk of failures.
Mohammed, Azad; Peterman, Paul; Echols, Kathy; Feltz, Kevin; Tegerdine, George; Manoo, Anton; Maraj, Dexter; Agard, John; Orazio, Carl
2011-01-01
Concentrations of polychlorinated biphenyls (PCBs) and organochlorine pesticides (OCPs) were determined in nearshore marine surficial sediments from three locations in Trinidad. Sediments were sampled at Sea Lots on the west coast, in south Port-of-Spain Harbor; south of Sea Lots at Caroni Lagoon National Park; and on Trinidad's east coast at Manzanilla. Total PCB concentrations in Sea Lots sediments ranged from 62 to 601 ng/g (dry weight, dw), which was higher than at Caroni and Manzanilla, 13 and 8 ng/g dw, respectively. Total OCP concentrations at Sea Lots ranged from 44.5 to 145 ng/g dw, compared with 13.1 and 23.8 ng/g dw for Caroni and Manzanilla, respectively. The concentrations of PCBs and of some OCPs in sediments from Sea Lots were above the Canadian interim sediment quality guidelines. To date, this is the first report on the levels of PCBs and other organochlorine compounds from Trinidad and Tobago.
2017-09-01
NAVAL POSTGRADUATE SCHOOL, MONTEREY, CALIFORNIA. THESIS: DATABASE CREATION AND STATISTICAL ANALYSIS: FINDING CONNECTIONS BETWEEN TWO OR MORE SECONDARY... Approved for public release. Distribution is unlimited. Contents excerpt: 1.1 Problem and Motivation; 1.2 DOD Applicability; 1.3 Research...
Checkpointing and Recovery in Distributed and Database Systems
ERIC Educational Resources Information Center
Wu, Jiang
2011-01-01
A transaction-consistent global checkpoint of a database records a state of the database which reflects the effect of only completed transactions and not the results of any partially executed transactions. This thesis establishes the necessary and sufficient conditions for a checkpoint of a data item (or the checkpoints of a set of data items) to…
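A minimal sketch of the definition above: a transaction-consistent checkpoint captures only the effects of committed transactions and ignores the writes of transactions still in progress. The tiny in-memory "database" below is a hypothetical illustration, not the thesis's model.

```python
# Illustrative only: a checkpoint that records committed state and never
# exposes the buffered writes of in-flight transactions.
import copy

class TinyDB:
    def __init__(self):
        self.committed = {}   # committed state of all data items
        self.active = {}      # txn_id -> pending (uncommitted) writes

    def begin(self, txn_id):
        self.active[txn_id] = {}

    def write(self, txn_id, key, value):
        self.active[txn_id][key] = value      # buffered until commit

    def commit(self, txn_id):
        self.committed.update(self.active.pop(txn_id))

    def checkpoint(self):
        # Transaction-consistent: pending writes in self.active are ignored.
        return copy.deepcopy(self.committed)

db = TinyDB()
db.begin("t1"); db.write("t1", "x", 1); db.commit("t1")
db.begin("t2"); db.write("t2", "y", 2)        # t2 not yet committed
print(db.checkpoint())                         # {'x': 1} only
```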
Library Micro-Computing, Vol. 1. Reprints from the Best of "ONLINE" [and]"DATABASE."
ERIC Educational Resources Information Center
Online, Inc., Weston, CT.
Reprints of 18 articles pertaining to library microcomputing appear in this collection, the first of two volumes on this topic in a series of volumes of reprints from "ONLINE" and "DATABASE" magazines. Edited for information professionals who use electronically distributed databases, these articles address such topics as: (1) an integrated library…
Airborne endotoxin concentrations at a large open-lot dairy in southern Idaho.
Dungan, Robert S; Leytem, April B
2009-01-01
Endotoxins are derived from gram-negative bacteria and are a potential respiratory health risk for animals and humans. To determine the potential for endotoxin transport from a large open-lot dairy, total airborne endotoxin concentrations were determined at an upwind location (background) and five downwind locations on three separate days. The downwind locations were situated at the edge of the lot, 200 and 1390 m downwind from the lot, and downwind from a manure composting area and wastewater holding pond. When the wind was predominantly from the west, the average endotoxin concentration at the upwind location was 24 endotoxin units (EU) m(-3), whereas at the edge of the lot on the downwind side it was 259 EU m(-3). At 200 and 1390 m downwind from the edge of the lot, the average endotoxin concentrations were 168 and 49 EU m(-3), respectively. Average airborne endotoxin concentrations downwind from the composting site (36 EU m(-3)) and wastewater holding pond (89 EU m(-3)) and 1390 m from the edge of the lot were not significantly different from the upwind location. There were no significant correlations between the ambient weather data collected and endotoxin concentrations over the experimental period. The downwind data show that the airborne endotoxin concentrations decreased exponentially with distance from the lot edge. Increasing an individual's distance from the dairy should therefore lower the risk of airborne endotoxin exposure and associated health effects.
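Using only the average concentrations quoted above (259, 168 and 49 EU m-3 at 0, 200 and 1390 m), the reported exponential decrease can be checked with a simple fit of C(d) = C0 * exp(-k d). The sketch below is an illustrative re-analysis, not the study's method; with only three points the fitted parameters should be read as rough estimates.

```python
# Illustrative fit (not from the study) of the reported average downwind
# concentrations to an exponential decay with distance from the lot edge.
import numpy as np
from scipy.optimize import curve_fit

distance_m = np.array([0.0, 200.0, 1390.0])        # downwind distances (m)
endotoxin_eu_m3 = np.array([259.0, 168.0, 49.0])   # average concentrations

def decay(d, c0, k):
    return c0 * np.exp(-k * d)

(c0, k), _ = curve_fit(decay, distance_m, endotoxin_eu_m3, p0=(260.0, 1e-3))
print(f"C0 ~ {c0:.0f} EU/m^3, k ~ {k:.2e} per m")
print(f"Predicted at 500 m: {decay(500.0, c0, k):.0f} EU/m^3")
```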
Volatilization of polycyclic aromatic hydrocarbons from coal-tar-sealed pavement
Van Metre, Peter C.; Majewski, Michael S.; Mahler, Barbara J.; Foreman, William T.; Braun, Christopher L.; Wilson, Jennifer T.; Burbank, Teresa L.
2012-01-01
Coal-tar-based pavement sealants, a major source of PAHs to urban water bodies, are a potential source of volatile PAHs to the atmosphere. An initial assessment of volatilization of PAHs from coal-tar-sealed pavement is presented here in which we measured summertime gas-phase PAH concentrations 0.03 m and 1.28 m above the pavement surface of seven sealed (six with coal-tar-based sealant and one with asphalt-based sealant) and three unsealed (two asphalt and one concrete) parking lots in central Texas. PAHs also were measured in parking lot dust. The geometric mean concentration of the sum of eight frequently detected PAHs (ΣPAH8) in the 0.03-m samples above sealed lots (1320 ng m-3) during the hottest part of the day was 20 times greater than that above unsealed lots (66.5 ng m-3). The geometric mean concentration in the 1.28-m samples above sealed lots (138 ng m-3) was five times greater than above unsealed lots (26.0 ng m-3). Estimated PAH flux from the sealed lots was 60 times greater than that from unsealed lots (geometric means of 88 and 1.4 μg m-2 h-1, respectively). Although the data set presented here is small, the much higher estimated fluxes from sealed pavement than from unsealed pavement indicate that coal-tar-based sealants are emitting PAHs to urban air at high rates compared to other paved surfaces.
Volatilization of polycyclic aromatic hydrocarbons from coal-tar-sealed pavement.
Van Metre, Peter C; Majewski, Michael S; Mahler, Barbara J; Foreman, William T; Braun, Christopher L; Wilson, Jennifer T; Burbank, Teresa L
2012-06-01
Coal-tar-based pavement sealants, a major source of PAHs to urban water bodies, are a potential source of volatile PAHs to the atmosphere. An initial assessment of volatilization of PAHs from coal-tar-sealed pavement is presented here in which we measured summertime gas-phase PAH concentrations 0.03 m and 1.28 m above the pavement surface of seven sealed (six with coal-tar-based sealant and one with asphalt-based sealant) and three unsealed (two asphalt and one concrete) parking lots in central Texas. PAHs also were measured in parking lot dust. The geometric mean concentration of the sum of eight frequently detected PAHs (ΣPAH(8)) in the 0.03-m samples above sealed lots (1320 ng m(-3)) during the hottest part of the day was 20 times greater than that above unsealed lots (66.5 ng m(-3)). The geometric mean concentration in the 1.28-m samples above sealed lots (138 ng m(-3)) was five times greater than above unsealed lots (26.0 ng m(-3)). Estimated PAH flux from the sealed lots was 60 times greater than that from unsealed lots (geometric means of 88 and 1.4 μg m(-2) h(-1), respectively). Although the data set presented here is small, the much higher estimated fluxes from sealed pavement than from unsealed pavement indicate that coal-tar-based sealants are emitting PAHs to urban air at high rates compared to other paved surfaces. Published by Elsevier Ltd.
An Improved Algorithm to Generate a Wi-Fi Fingerprint Database for Indoor Positioning
Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi
2013-01-01
The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase. PMID:23966197
An improved algorithm to generate a Wi-Fi fingerprint database for indoor positioning.
Chen, Lina; Li, Binghao; Zhao, Kai; Rizos, Chris; Zheng, Zhengqi
2013-08-21
The major problem of Wi-Fi fingerprint-based positioning technology is the signal strength fingerprint database creation and maintenance. The significant temporal variation of received signal strength (RSS) is the main factor responsible for the positioning error. A probabilistic approach can be used, but the RSS distribution is required. The Gaussian distribution or an empirically-derived distribution (histogram) is typically used. However, these distributions are either not always correct or require a large amount of data for each reference point. Double peaks of the RSS distribution have been observed in experiments at some reference points. In this paper a new algorithm based on an improved double-peak Gaussian distribution is proposed. Kurtosis testing is used to decide if this new distribution, or the normal Gaussian distribution, should be applied. Test results show that the proposed algorithm can significantly improve the positioning accuracy, as well as reduce the workload of the off-line data training phase.
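A simplified sketch of the idea described in the two abstracts above: test the RSS samples of a reference point for non-normal kurtosis and, depending on the outcome, model them with a single Gaussian or a two-component ("double-peak") Gaussian mixture. The decision rule, significance level, and mixture fit shown here are assumptions for illustration; the paper's actual double-peak model and kurtosis test may differ in detail.

```python
# Simplified illustration (not the authors' exact algorithm): choose between
# a single Gaussian and a two-component Gaussian mixture for the RSS
# distribution at one reference point, based on a kurtosis test.
import numpy as np
from scipy.stats import kurtosistest, norm
from sklearn.mixture import GaussianMixture

def fit_rss_distribution(rss_samples: np.ndarray, alpha: float = 0.05):
    """Return a callable pdf for the RSS distribution at one reference point."""
    _, p_value = kurtosistest(rss_samples)
    if p_value >= alpha:
        # Kurtosis consistent with a normal distribution: single Gaussian.
        mu, sigma = rss_samples.mean(), rss_samples.std(ddof=1)
        return lambda x: norm.pdf(x, mu, sigma)
    # Otherwise model the double-peak behaviour with a 2-component mixture.
    gmm = GaussianMixture(n_components=2).fit(rss_samples.reshape(-1, 1))
    return lambda x: np.exp(gmm.score_samples(np.asarray(x).reshape(-1, 1)))

# Example with synthetic double-peak RSS data (dBm), for illustration only:
rng = np.random.default_rng(1)
rss = np.concatenate([rng.normal(-60, 2, 300), rng.normal(-72, 2, 300)])
pdf = fit_rss_distribution(rss)
print(pdf(np.array([-60.0, -66.0, -72.0])))
```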
7 CFR 42.121 - Sampling and inspection procedures.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 7 Agriculture 2 2011-01-01 2011-01-01 false Sampling and inspection procedures. 42.121 Section 42... REGULATIONS STANDARDS FOR CONDITION OF FOOD CONTAINERS Skip Lot Sampling and Inspection Procedures § 42.121 Sampling and inspection procedures. (a) Following skip lot procedure authorization, inspect every lot...
7 CFR 42.121 - Sampling and inspection procedures.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 7 Agriculture 2 2010-01-01 2010-01-01 false Sampling and inspection procedures. 42.121 Section 42... REGULATIONS STANDARDS FOR CONDITION OF FOOD CONTAINERS Skip Lot Sampling and Inspection Procedures § 42.121 Sampling and inspection procedures. (a) Following skip lot procedure authorization, inspect every lot...
Modelling Carpool and Transit Park-and-Ride Lots
DOT National Transportation Integrated Search
1997-01-01
Park-and-Ride (PnR) lots are an increasingly common element of many areas' plans for air quality conformity. However, few, if any, travel models estimate the impacts of carpool PnR lots, and it is not at all clear that they always improve air quality....
Privacy-Aware Location Database Service for Granular Queries
NASA Astrophysics Data System (ADS)
Kiyomoto, Shinsaku; Martin, Keith M.; Fukushima, Kazuhide
Future mobile markets are expected to increasingly embrace location-based services. This paper presents a new system architecture for location-based services, which consists of a location database and distributed location anonymizers. The service is privacy-aware in the sense that the location database always maintains a degree of anonymity. The location database service permits three different levels of query and can thus be used to implement a wide range of location-based services. Furthermore, the architecture is scalable and employs simple functions that are similar to those found in general database systems.
Nuclear Forensics Analysis with Missing and Uncertain Data
Langan, Roisin T.; Archibald, Richard K.; Lamberti, Vincent
2015-10-05
We have applied a new imputation-based method for analyzing incomplete data, called Monte Carlo Bayesian Database Generation (MCBDG), to the Spent Fuel Isotopic Composition (SFCOMPO) database, in which about 60% of the entries are missing. The method estimates missing values of a property from a probability distribution created from the existing data for that property, and then generates multiple instances of the completed database for training a machine learning algorithm. Uncertainty in the data is represented by an empirical or an assumed error distribution. The method makes few assumptions about the underlying data and compares favorably against results obtained by replacing missing information with constant values.
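A simplified illustration of the imputation idea described above (not the MCBDG implementation): fill each missing value by sampling from the empirical distribution of the observed values in the same column, and repeat the procedure to obtain several completed copies of the database for downstream training. The column names and toy table are invented for the example.

```python
# Illustrative sketch: generate multiple completed copies of a table with
# missing entries by sampling each gap from its column's empirical distribution.
import numpy as np
import pandas as pd

def generate_completed_copies(df: pd.DataFrame, n_copies: int = 5, seed: int = 0):
    rng = np.random.default_rng(seed)
    copies = []
    for _ in range(n_copies):
        filled = df.copy()
        for col in df.columns:
            observed = df[col].dropna().to_numpy()
            missing = filled[col].isna()
            # Draw each missing entry from the column's observed values.
            filled.loc[missing, col] = rng.choice(observed, size=missing.sum())
        copies.append(filled)
    return copies

# Example with a toy table where most of one column is missing:
toy = pd.DataFrame({"burnup": [10.2, np.nan, np.nan, 31.5, np.nan],
                    "u235":   [1.1, 0.8, np.nan, 0.5, 0.9]})
for c in generate_completed_copies(toy, n_copies=2):
    print(c, end="\n\n")
```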
A multidisciplinary database for global distribution
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolfe, P.J.
The issue of selenium toxicity in the environment has been documented in the scientific literature for over 50 years. Recent studies reveal a complex connection between selenium and human and animal populations. This article introduces a bibliographic citation database on selenium in the environment developed for global distribution via the Internet by the University of Wyoming Libraries. The database incorporates material from commercial sources, print abstracts, indexes, and U.S. government literature, resulting in a multidisciplinary resource. Relevant disciplines include biology, medicine, veterinary science, botany, chemistry, geology, pollution, aquatic sciences, ecology, and others. It covers the years 1985-1996 for most subject material, with additional years being added as resources permit.
Shoukourian, S K; Vasilyan, A M; Avagyan, A A; Shukurian, A K
1999-01-01
A formalized "top to bottom" design approach was described in [1] for distributed applications built on databases, which were considered as a medium between virtual and real user environments for a specific medical application. Merging different components within a unified distributed application posits new essential problems for software. Particularly protection tools, which are sufficient separately, become deficient during the integration due to specific additional links and relationships not considered formerly. E.g., it is impossible to protect a shared object in the virtual operating room using only DBMS protection tools, if the object is stored as a record in DB tables. The solution of the problem should be found only within the more general application framework. Appropriate tools are absent or unavailable. The present paper suggests a detailed outline of a design and testing toolset for access differentiation systems (ADS) in distributed medical applications which use databases. The appropriate formal model as well as tools for its mapping to a DMBS are suggested. Remote users connected via global networks are considered too.
Code of Federal Regulations, 2010 CFR
2010-01-01
... of the Department of Agriculture (Continued) AGRICULTURAL MARKETING SERVICE (Marketing Agreements and... Administrative Rules and Regulations Definitions § 993.104 Lot. (a) Lot for the purposes of §§ 993.49 and 993.149... containers, processed in any continuous production of one calendar day, and offered for inspection as a new...
76 FR 42067 - Inspection and Weighing of Grain in Combined and Single Lots
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-18
.... * * * * * (b) * * * (1) General. If grain in a carrier is offered for inspection or weighing service as one lot... weighing service procedures that GIPSA's Federal Grain Inspection Service (FGIS) performs under the... the inspection and weighing of such container lots to the official service provider's area of...
46 CFR 160.053-4 - Inspections and tests.
Code of Federal Regulations, 2014 CFR
2014-10-01
...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Work Vests, Unicellular Plastic Foam § 160.053-4 Inspections and.... (b) Manufacturer's inspections and tests. Manufacturers of approved work vests shall maintain quality... samples from each lot to maintain the quality of their product. (c) Lot size. A lot shall consist of not...
46 CFR 160.053-4 - Inspections and tests.
Code of Federal Regulations, 2013 CFR
2013-10-01
...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Work Vests, Unicellular Plastic Foam § 160.053-4 Inspections and.... (b) Manufacturer's inspections and tests. Manufacturers of approved work vests shall maintain quality... samples from each lot to maintain the quality of their product. (c) Lot size. A lot shall consist of not...
46 CFR 160.053-4 - Inspections and tests.
Code of Federal Regulations, 2012 CFR
2012-10-01
...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Work Vests, Unicellular Plastic Foam § 160.053-4 Inspections and.... (b) Manufacturer's inspections and tests. Manufacturers of approved work vests shall maintain quality... samples from each lot to maintain the quality of their product. (c) Lot size. A lot shall consist of not...
9 CFR 351.19 - Refusal of certification for specific lots.
Code of Federal Regulations, 2011 CFR
2011-01-01
... AND VOLUNTARY INSPECTION AND CERTIFICATION CERTIFICATION OF TECHNICAL ANIMAL FATS FOR EXPORT Remedies... lot of technical animal fat is ineligible for certification under § 351.3, or any materials to be used in a lot of technical animal fat would make the technical animal fat ineligible for such...
9 CFR 351.19 - Refusal of certification for specific lots.
Code of Federal Regulations, 2012 CFR
2012-01-01
... AND VOLUNTARY INSPECTION AND CERTIFICATION CERTIFICATION OF TECHNICAL ANIMAL FATS FOR EXPORT Remedies... lot of technical animal fat is ineligible for certification under § 351.3, or any materials to be used in a lot of technical animal fat would make the technical animal fat ineligible for such...
9 CFR 351.19 - Refusal of certification for specific lots.
Code of Federal Regulations, 2014 CFR
2014-01-01
... AND VOLUNTARY INSPECTION AND CERTIFICATION CERTIFICATION OF TECHNICAL ANIMAL FATS FOR EXPORT Remedies... lot of technical animal fat is ineligible for certification under § 351.3, or any materials to be used in a lot of technical animal fat would make the technical animal fat ineligible for such...
9 CFR 351.19 - Refusal of certification for specific lots.
Code of Federal Regulations, 2013 CFR
2013-01-01
... AND VOLUNTARY INSPECTION AND CERTIFICATION CERTIFICATION OF TECHNICAL ANIMAL FATS FOR EXPORT Remedies... lot of technical animal fat is ineligible for certification under § 351.3, or any materials to be used in a lot of technical animal fat would make the technical animal fat ineligible for such...
46 CFR 160.047-5 - Inspections and tests. 1
Code of Federal Regulations, 2013 CFR
2013-10-01
...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Specification for a Buoyant Vest, Kapok or Fibrous Glass, Adult... labeled buoyant vests shall— (1) Maintain quality control of the materials used, the manufacturing methods.... (b) Lot size and sampling. (1) A lot consists of 500 buoyant vests or fewer. (2) A new lot begins...
46 CFR 160.047-5 - Inspections and tests. 1
Code of Federal Regulations, 2014 CFR
2014-10-01
...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Specification for a Buoyant Vest, Kapok or Fibrous Glass, Adult... labeled buoyant vests shall— (1) Maintain quality control of the materials used, the manufacturing methods.... (b) Lot size and sampling. (1) A lot consists of 500 buoyant vests or fewer. (2) A new lot begins...
46 CFR 160.047-5 - Inspections and tests. 1
Code of Federal Regulations, 2012 CFR
2012-10-01
...: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Specification for a Buoyant Vest, Kapok or Fibrous Glass, Adult... labeled buoyant vests shall— (1) Maintain quality control of the materials used, the manufacturing methods.... (b) Lot size and sampling. (1) A lot consists of 500 buoyant vests or fewer. (2) A new lot begins...
46 CFR 160.026-6 - Sampling, inspection, and tests of production lots.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Bacteriological limits and salt content MIL-W-15117 and U.S. Public Health “Drinking Water Standards.” (e) Lot..., CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Water, Emergency Drinking (In... lots. (a) General. Containers of emergency drinking water must be tested in accordance with the...
46 CFR 160.026-6 - Sampling, inspection, and tests of production lots.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Bacteriological limits and salt content MIL-W-15117 and U.S. Public Health “Drinking Water Standards.” (e) Lot..., CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Water, Emergency Drinking (In... lots. (a) General. Containers of emergency drinking water must be tested in accordance with the...
46 CFR 160.026-6 - Sampling, inspection, and tests of production lots.
Code of Federal Regulations, 2013 CFR
2013-10-01
... Bacteriological limits and salt content MIL-W-15117 and U.S. Public Health “Drinking Water Standards.” (e) Lot..., CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Water, Emergency Drinking (In... lots. (a) General. Containers of emergency drinking water must be tested in accordance with the...
46 CFR 160.026-6 - Sampling, inspection, and tests of production lots.
Code of Federal Regulations, 2012 CFR
2012-10-01
... Bacteriological limits and salt content MIL-W-15117 and U.S. Public Health “Drinking Water Standards.” (e) Lot..., CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Water, Emergency Drinking (In... lots. (a) General. Containers of emergency drinking water must be tested in accordance with the...
46 CFR 160.026-6 - Sampling, inspection, and tests of production lots.
Code of Federal Regulations, 2014 CFR
2014-10-01
... Bacteriological limits and salt content MIL-W-15117 and U.S. Public Health “Drinking Water Standards.” (e) Lot..., CONSTRUCTION, AND MATERIALS: SPECIFICATIONS AND APPROVAL LIFESAVING EQUIPMENT Water, Emergency Drinking (In... lots. (a) General. Containers of emergency drinking water must be tested in accordance with the...
Fingerprint test data report: FM 5834 test lots No. 1, 3, 4, and 5. [resin matrix composites
NASA Technical Reports Server (NTRS)
1986-01-01
Quality control testing is presented for various lots of resin matrix composites. The tests conducted were the filler test, resin test, fabric test, and prepreg test for lots 1, 3, 4, and 5. The results of the tests are presented in chart form.
Synchronizable Series Expressions. Part 2. Overview of the Theory and Implementation.
1987-11-01
more running time than shown in the table, because time is eventually required in order to collect the garbage it creates. Program Running Time Garbage... possible to simply put an enumerator where it is used.) (loop for x integer from 1 to 4 collect x) - (letS* ((x (Eup 1 :to 4))) (declare (type integer x... below. (loop for x from 1 to 4 and for y = 0 then (1- x) collect (list x y)) - (letS* ((x (Eup 1 :to 4)) (y (Tprevious (1- x) 0))) (Rlist (list x y
Methyl Centralite Coated M10 Propellant for the 25-mm Bushmaster Gun Projectiles
1984-09-01
THE INFORMATION CONTAINED HEREIN SHALL BE USED FOR GOVERNMENT PURPOSES ONLY. Unclassified... bomb dP/dt versus P traces of Lots... RAD-PE-559-15 compared with lots RAD-PE-559-11, 16, and 17 and Swiss lot P-2078... Comparison of Lot RAD-PE... further remove ether and improve coating gradient, 28 to 48 hour water dry time to remove practically all of the coating-acquired alcohol, and a
Coal-Tar-Based Parking Lot Sealcoat: An Unrecognized Source of PAH to Settled House Dust
2010-01-01
Despite much speculation, the principal factors controlling concentrations of polycyclic aromatic hydrocarbons (PAH) in settled house dust (SHD) have not yet been identified. In response to recent reports that dust from pavement with coal-tar-based sealcoat contains extremely high concentrations of PAH, we measured PAH in SHD from 23 apartments and in dust from their associated parking lots, one-half of which had coal-tar-based sealcoat (CT). The median concentration of total PAH (T-PAH) in dust from CT parking lots (4760 μg/g, n = 11) was 530 times higher than that from parking lots with other pavement surface types (asphalt-based sealcoat, unsealed asphalt, concrete [median 9.0 μg/g, n = 12]). T-PAH in SHD from apartments with CT parking lots (median 129 μg/g) was 25 times higher than that in SHD from apartments with parking lots with other pavement surface types (median 5.1 μg/g). Presence or absence of CT on a parking lot explained 48% of the variance in log-transformed T-PAH in SHD. Urban land-use intensity near the residence also had a significant but weaker relation to T-PAH. No other variables tested, including carpeting, frequency of vacuuming, and indoor burning, were significant. PMID:20063893
A photographic method for estimating wear of coal tar sealcoat from parking lots
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mateo Scoggins; Tom Ennis; Nathan Parker
2009-07-01
Coal-tar-based sealcoat has been recognized as an important source of PAHs to the environment through wear and transport via stormwater runoff. Sealcoat removal rates have not been measured or even estimated in the literature due to the complex array of physical and chemical processes involved. A photographic study was conducted that incorporates all sources of wear using 10 coal-tar-sealed parking lots in Austin, Texas, with sealcoat age ranging from 0 to 5 years. Randomly located photographs from each parking lot were analyzed digitally to quantify black sealed areas versus lighter colored unsealed areas at the pixel level. The results indicate that coal tar sealcoat wears off of the driving areas of parking lots at a rate of approximately 4.7% per year, and from the parking areas of the lots at a rate of approximately 1.4% per year. The overall annual loss of sealcoat was calculated at 2.4%. This results in an annual delivery to the environment of 0.51 g of PAHs per m^2 of coal-tar-sealed parking lot. These values provide a more robust and much higher estimate of loading of PAHs from coal-tar-sealcoated parking lots when compared to other available measures. 20 refs., 6 figs.
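As a rough sketch of the pixel-level classification described above (assumed approach, not the study's software), the snippet below thresholds a grayscale photograph into dark "sealed" and light "unsealed" pixels and reports the sealed fraction; comparing fractions across lots of known sealcoat age would then give a wear rate. The brightness threshold and file names are illustrative assumptions.

```python
# Illustrative sketch: classify each pixel of a parking-lot photograph as
# "sealed" (dark) or "unsealed" (light) with a brightness threshold and
# report the sealed fraction of the image.
import numpy as np
from PIL import Image

def sealed_fraction(photo_path: str, threshold: int = 100) -> float:
    gray = np.asarray(Image.open(photo_path).convert("L"))
    return float((gray < threshold).mean())   # share of dark (sealed) pixels

# Hypothetical usage: wear rate from two surveys of the same lot.
# frac_new = sealed_fraction("lot_new.jpg")
# frac_5yr = sealed_fraction("lot_5yr.jpg")
# annual_wear = (frac_new - frac_5yr) / 5.0  # fraction of sealcoat lost per year
```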
"Thinking a Lot" Among the Khwe of South Africa: A Key Idiom of Personal and Interpersonal Distress.
den Hertog, T N; de Jong, M; van der Ham, A J; Hinton, D; Reis, R
2016-09-01
"Thinking too much", and variations such as "thinking a lot", are common idioms of distress across the world. The contextual meaning of this idiom of distress in particular localities remains largely unknown. This paper reports on a systematic study of the content and cause, consequences, and social response and coping related to the local terms |x'an n|a te and |eu-ca n|a te, both translated as "thinking a lot", and was part of a larger ethnographic study among the Khwe of South Africa. Semi-structured exploratory interviews with community members revealed that "thinking a lot" refers to a common experience of reflecting on personal and interpersonal problems. Consequences were described in emotional, psychological, social, behavioral, and physical effects. Coping strategies included social support, distraction, and religious practices. Our contextualized approach revealed meanings and experiences of "thinking a lot" that go beyond a psychological state or psychopathology. The common experience of "thinking a lot" is situated in socio-political, economic, and social context that reflect the marginalized and displaced position of the Khwe. We argue that "thinking a lot" and associated local meanings may vary across settings, may not necessarily indicate psychopathology, and should be understood in individual, interpersonal, community, and socio-political dimensions.
BAAK-BAAK, CARLOS M.; ARANA-GUARDIA, ROGER; CIGARROA-TOLEDO, NOHEMI; LOROÑO-PINO, MARÍA ALBA; REYES-SOLIS, GUADALUPE; MACHAIN-WILLIAMS, CARLOS; BEATY, BARRY J.; EISEN, LARS; GARCÍA-REJÓN, JULIÁN E.
2014-01-01
We assessed the potential for vacant lots and other non-residential settings to serve as source environments for Aedes (Stegomyia) aegypti (L.) in Mérida City, México. Mosquito immatures were collected during November 2011 – June 2013 from residential premises (n = 156 site visits) and non-residential settings represented by vacant lots (50), parking lots (18), and streets/sidewalks (28). Collections totaled 46,025 mosquito immatures of 13 species. Ae. aegypti was the most commonly encountered species, accounting for 81.0% of total immatures, followed by Culex quinquefasciatus Say (12.1%). Site visits to vacant lots (74.0%) were more likely to result in collection of Ae. aegypti immatures than residential premises (35.9%). Tires accounted for 75.5% of Ae. aegypti immatures collected from vacant lots. Our data suggest that vacant lots should be considered for inclusion in mosquito surveillance and control efforts in Mérida City, as they often are located near homes, commonly have abundant vegetation, and frequently harbor accumulations of small and large discarded water-holding containers that we have now demonstrated to serve as development sites for immature mosquitoes. Additionally, we present data on associations of immature production with various container characteristics, such as storage capacity, water quality, and physical location in the environment. PMID:24724299
Coal-tar-based parking lot sealcoat: An unrecognized source of PAH to settled house dust
Mahler, B.J.; Van Metre, P.C.; Wilson, J.T.; Musgrove, M.; Burbank, T.L.; Ennis, T.E.; Bashara, T.J.
2010-01-01
Despite much speculation, the principal factors controlling concentrations of polycyclic aromatic hydrocarbons (PAH) in settled house dust (SHD) have not yet been identified. In response to recent reports that dust from pavement with coal-tar-based sealcoat contains extremely high concentrations of PAH, we measured PAH in SHD from 23 apartments and in dust from their associated parking lots, one-half of which had coal-tar-based sealcoat (CT). The median concentration of total PAH (T-PAH) in dust from CT parking lots (4760 μg/g, n = 11) was 530 times higher than that from parking lots with other pavement surface types (asphalt-based sealcoat, unsealed asphalt, concrete [median 9.0 μg/g, n = 12]). T-PAH in SHD from apartments with CT parking lots (median 129 μg/g) was 25 times higher than that in SHD from apartments with parking lots with other pavement surface types (median 5.1 μg/g). Presence or absence of CT on a parking lot explained 48% of the variance in log-transformed T-PAH in SHD. Urban land-use intensity near the residence also had a significant but weaker relation to T-PAH. No other variables tested, including carpeting, frequency of vacuuming, and indoor burning, were significant. © 2010 American Chemical Society.
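The variance-explained figure quoted above comes from regressing log-transformed T-PAH on a sealcoat indicator. The sketch below shows the general form of such a calculation; the concentrations it uses are synthetic values only loosely inspired by the medians in the abstract, and it is not the study's analysis.

```python
# Illustrative only: regress log10(T-PAH) in house dust on a coal-tar
# sealcoat indicator and report the share of variance explained (R^2).
import numpy as np

rng = np.random.default_rng(42)
ct = np.array([1] * 11 + [0] * 12)                  # CT parking lot: yes/no
log_tpah = np.where(ct == 1,
                    rng.normal(np.log10(129.0), 0.6, ct.size),
                    rng.normal(np.log10(5.1), 0.6, ct.size))

# Simple linear regression with one binary predictor.
slope, intercept = np.polyfit(ct, log_tpah, 1)
predicted = intercept + slope * ct
r_squared = 1.0 - np.sum((log_tpah - predicted) ** 2) / np.sum((log_tpah - log_tpah.mean()) ** 2)
print(f"R^2 = {r_squared:.2f}")
```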
Technology and Its Use in Education: Present Roles and Future Prospects
ERIC Educational Resources Information Center
Courville, Keith
2011-01-01
(Purpose) This article describes two current trends in Educational Technology: distributed learning and electronic databases. (Findings) Topics addressed in this paper include: (1) distributed learning as a means of professional development; (2) distributed learning for content visualization; (3) usage of distributed learning for educational…
MIPS: a database for protein sequences, homology data and yeast genome information.
Mewes, H W; Albermann, K; Heumann, K; Liebl, S; Pfeiffer, F
1997-01-01
The MIPS group (Martinsried Institute for Protein Sequences) at the Max-Planck-Institute for Biochemistry, Martinsried near Munich, Germany, collects, processes and distributes protein sequence data within the framework of the tripartite association of the PIR-International Protein Sequence Database. MIPS contributes nearly 50% of the data input to the PIR-International Protein Sequence Database. The database is distributed on CD-ROM together with PATCHX, an exhaustive supplement of unique, unverified protein sequences from external sources compiled by MIPS. Through its WWW server (http://www.mips.biochem.mpg.de/) MIPS permits internet access to sequence databases, homology data and yeast genome information. (i) Sequence similarity results from the FASTA program are stored in the FASTA database for all proteins from PIR-International and PATCHX. The database is dynamically maintained and permits instant access to FASTA results. (ii) Starting with FASTA database queries, proteins have been classified into families and superfamilies (PROT-FAM). (iii) The HPT (hashed position tree) data structure developed at MIPS is a new approach for rapid sequence and pattern searching. (iv) MIPS provides access to the sequence and annotation of the complete yeast genome, the functional classification of yeast genes (FunCat) and its graphical display, the 'Genome Browser'. A CD-ROM based on the JAVA programming language providing dynamic interactive access to the yeast genome and the related protein sequences has been compiled and is available on request. PMID:9016498