Sample records for the query "provide maximum information"

  1. 78 FR 9035 - Renewal and Revision of a Previously Approved Information Collection; Comment Request; State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-07

    ... maximum advertised speed, technology type and spectrum (if applicable) for each broadband provider... funding to collect the maximum advertised speed and technology type to which various classes of Community... businesses use the data to identify where broadband is available, the advertised speeds and other information...

  2. 20 CFR 10.806 - How are the maximum fees defined?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...

  3. 20 CFR 10.806 - How are the maximum fees defined?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  4. 20 CFR 10.806 - How are the maximum fees defined?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...

  5. 20 CFR 10.806 - How are the maximum fees defined?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...

  6. 20 CFR 10.806 - How are the maximum fees defined?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  7. Marketing and Distributive Education. Wholesaling Curriculum Guide.

    ERIC Educational Resources Information Center

    Northern Illinois Univ., DeKalb. Dept. of Business Education and Administration Services.

    This document is one of four curriculum guides designed to provide the curriculum coordinator with a basis for planning a comprehensive program in the field of marketing as well as to provide marketing and distributive education teachers with maximum flexibility. Introductory information provides directions for using the guide and information on…

  8. Marketing and Distributive Education. Food Marketing Curriculum Guide

    ERIC Educational Resources Information Center

    Northern Illinois Univ., DeKalb. Dept. of Business Education and Administration Services.

    This document is one of four curriculum guides designed to provide the curriculum coordinator with a basis for planning a comprehensive program in the field of marketing as well as to provide marketing and distributive education teachers with maximum flexibility. Introductory information provides directions for using the guide and information on…

  9. Comparing methods to estimate Reineke’s maximum size-density relationship species boundary line slope

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2010-01-01

    Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...

  10. Modelling information flow along the human connectome using maximum flow.

    PubMed

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, previously well-known network measures of integration between brain regions have been constructed under the key assumption that information flows strictly along the shortest paths possible between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using the maximum flow to quantify information flow along all possible paths within the brain, so as to implement an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept regarding maximum flow provide insight into how network structure shapes information flow, in contrast to graph theory, and suggest future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
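    The maximum-flow framework this abstract describes can be illustrated with a toy network. Below is a minimal Edmonds-Karp sketch in which hypothetical connection strengths act as edge capacities; it shows the general technique only, not the authors' pipeline.

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Edmonds-Karp maximum flow. `capacity` maps node -> {neighbor: capacity}."""
    # Build residual capacities, adding zero-capacity reverse edges.
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u in capacity:
        for v in capacity[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # Breadth-first search for an augmenting path in the residual graph.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: current flow is maximal
        # Find the bottleneck capacity along the path, then update residuals.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

# Hypothetical 4-region network; connection strengths act as capacities.
# Two disjoint routes A->B->D (bottleneck 2) and A->C->D (bottleneck 1)
# together carry 3 units, which a shortest-path-only measure would miss.
brain = {"A": {"B": 3, "C": 1}, "B": {"D": 2}, "C": {"D": 4}, "D": {}}
total = max_flow(brain, "A", "D")
```

    The point of the sketch is the contrast the abstract draws: the maximum flow aggregates capacity over all paths, whereas shortest-path measures would credit only one route.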

  11. Marketing and Distributive Education. General Retail Merchandising Curriculum Guide.

    ERIC Educational Resources Information Center

    Northern Illinois Univ., DeKalb. Dept. of Business Education and Administration Services.

    This document is one of four curriculum guides designed to provide the curriculum coordinator with a basis for planning a comprehensive program in the field of marketing as well as to provide marketing and distributive education teachers with maximum flexibility. Introductory information provides directions for using the guide and information on…

  12. EXERGY AND FISHER INFORMATION AS ECOLOGICAL INDEXES

    EPA Science Inventory

    Ecological indices are used to provide summary information about a particular aspect of ecosystem behavior. Many such indices have been proposed and here we investigate two: exergy and Fisher Information. Exergy, a thermodynamically based index, is a measure of maximum amount o...

  13. Assessment of MODIS-EVI, MODIS-NDVI and VEGETATION-NDVI composite data using agricultural measurements: an example at corn fields in western Mexico.

    PubMed

    Chen, Pei-Yu; Fedosejevs, Gunar; Tiscareño-López, Mario; Arnold, Jeffrey G

    2006-08-01

    Although several types of satellite data provide temporal information of the land use at no cost, digital satellite data applications for agricultural studies are limited compared to applications for forest management. This study assessed the suitability of vegetation indices derived from the TERRA-Moderate Resolution Imaging Spectroradiometer (MODIS) sensor and SPOT-VEGETATION (VGT) sensor for identifying corn growth in western Mexico. Overall, the Normalized Difference Vegetation Index (NDVI) composites from the VGT sensor based on bi-directional compositing method produced vegetation information most closely resembling actual crop conditions. The NDVI composites from the MODIS sensor exhibited saturated signals starting 30 days after planting, but corresponded to green leaf senescence in April. The temporal NDVI composites from the VGT sensor based on the maximum value method had a maximum plateau for 80 days, which masked the important crop transformation from vegetative stage to reproductive stage. The Enhanced Vegetation Index (EVI) composites from the MODIS sensor reached a maximum plateau 40 days earlier than the occurrence of maximum leaf area index (LAI) and maximum intercepted fraction of photosynthetic active radiation (fPAR) derived from in-situ measurements. The results of this study showed that the 250-m resolution MODIS data did not provide more accurate vegetation information for corn growth description than the 500-m and 1000-m resolution MODIS data.
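    For reference, the NDVI cited throughout this abstract is a simple per-pixel band ratio, and maximum-value compositing keeps the highest index observed in the period. A minimal sketch with illustrative reflectance values, not the study's data:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel."""
    return (nir - red) / (nir + red)

def max_value_composite(ndvi_series):
    """Maximum-value compositing: keep the highest NDVI observed over the
    compositing period, which tends to suppress cloud-contaminated
    (low-NDVI) observations."""
    return max(ndvi_series)

v = ndvi(0.6, 0.2)  # high NIR vs. red reflectance -> vegetated pixel
composite = max_value_composite([0.31, 0.44, 0.38])
```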

  14. Maximum-Likelihood Methods for Processing Signals From Gamma-Ray Detectors

    PubMed Central

    Barrett, Harrison H.; Hunter, William C. J.; Miller, Brian William; Moore, Stephen K.; Chen, Yichun; Furenlid, Lars R.

    2009-01-01

    In any gamma-ray detector, each event produces electrical signals on one or more circuit elements. From these signals, we may wish to determine the presence of an interaction; whether multiple interactions occurred; the spatial coordinates in two or three dimensions of at least the primary interaction; or the total energy deposited in that interaction. We may also want to compute listmode probabilities for tomographic reconstruction. Maximum-likelihood methods provide a rigorous and in some senses optimal approach to extracting this information, and the associated Fisher information matrix provides a way of quantifying and optimizing the information conveyed by the detector. This paper will review the principles of likelihood methods as applied to gamma-ray detectors and illustrate their power with recent results from the Center for Gamma-ray Imaging. PMID:20107527
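    As a toy illustration of the approach this abstract reviews: assuming i.i.d. Gaussian signal noise (a far simpler model than real detector statistics), the maximum-likelihood estimate is the sample mean, and the Fisher information quantifies how tightly the data constrain the parameter.

```python
import random

def ml_estimate_and_fisher(signals, sigma):
    """For i.i.d. Gaussian signals with known noise sigma, the
    maximum-likelihood estimate of the underlying parameter is the
    sample mean, and the Fisher information is N / sigma**2; its
    inverse lower-bounds the estimator variance (Cramer-Rao)."""
    n = len(signals)
    estimate = sum(signals) / n
    fisher = n / sigma ** 2
    return estimate, fisher

rng = random.Random(0)          # fixed seed for reproducibility
true_value, sigma = 5.0, 0.5    # hypothetical interaction parameter
signals = [rng.gauss(true_value, sigma) for _ in range(1000)]
est, info = ml_estimate_and_fisher(signals, sigma)
```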

  15. Communication methods, systems, apparatus, and devices involving RF tag registration

    DOEpatents

    Burghard, Brion J [W. Richland, WA; Skorpik, James R [Kennewick, WA

    2008-04-22

    One technique of the present invention includes a number of Radio Frequency (RF) tags that each have a different identifier. Information is broadcast to the tags from an RF tag interrogator. This information corresponds to a maximum quantity of tag response time slots that are available. This maximum quantity may be less than the total number of tags. The tags each select one of the time slots as a function of the information and a random number provided by each respective tag. The different identifiers are transmitted to the interrogator from at least a subset of the RF tags.
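    The slot-selection scheme in this patent abstract can be simulated in a few lines. The function below is a hypothetical sketch, not the patented protocol: each tag picks a response slot at random, and tags whose slot is uncontested respond without collision.

```python
import random

def assign_slots(tag_ids, num_slots, seed=None):
    """Each tag independently picks one of `num_slots` response time
    slots using its own random number; tags that chose an uncontested
    slot can respond to the interrogator without collision."""
    rng = random.Random(seed)
    choices = {tag: rng.randrange(num_slots) for tag in tag_ids}
    by_slot = {}
    for tag, slot in choices.items():
        by_slot.setdefault(slot, []).append(tag)
    collision_free = [tags[0] for tags in by_slot.values() if len(tags) == 1]
    return choices, collision_free

tags = [f"tag-{i:02d}" for i in range(10)]
# Broadcast says 8 slots are available -- fewer than the 10 tags.
choices, responders = assign_slots(tags, num_slots=8, seed=42)
```

    Tags that collide would retry in a later round; the abstract's point is that the slot count broadcast by the interrogator may be smaller than the tag population.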

  16. Application of the Maximum Entropy Method to Risk Analysis of Mergers and Acquisitions

    NASA Astrophysics Data System (ADS)

    Xie, Jigang; Song, Wenyun

    The maximum entropy (ME) method can be used to analyze the risk of mergers and acquisitions when only pre-acquisition information is available. A practical example of the risk analysis of China listed firms' mergers and acquisitions is provided to demonstrate the feasibility and practicality of the method.

  17. Effects of Smoking on Respiratory Capacity and Control

    ERIC Educational Resources Information Center

    Awan, Shaheen N.; Alphonso, Vania A.

    2007-01-01

    The purpose of this study was to provide information concerning the possible early effects of smoking on measures of respiratory capacity and control in young adult female smokers vs. nonsmokers. In particular, maximum performance test results (vital capacity and maximum phonation time) and measures of air pressures and airflows during voiceless,…

  18. Impact of Threat Level, Task Instruction, and Individual Characteristics on Cold Pressor Pain and Fear among Children and Their Parents.

    PubMed

    Boerner, Katelynn E; Noel, Melanie; Birnie, Kathryn A; Caes, Line; Petter, Mark; Chambers, Christine T

    2016-07-01

    The cold pressor task (CPT) is increasingly used to induce experimental pain in children, but the specific methodology of the CPT is quite variable across pediatric studies. This study examined how subtle variations in CPT methodology (e.g., provision of low- or high-threat information regarding the task; provision or omission of maximum immersion time) may influence children's and parents' perceptions of the pain experience. Forty-eight children (8 to 14 years) and their parents were randomly assigned to receive information about the CPT that varied on 2 dimensions, prior to completing the task: (i) threat level: high-threat (task described as very painful, high pain expressions depicted) or low-threat (standard CPT instructions provided, low pain expressions depicted); (ii) ceiling: informed (provided maximum immersion time) or uninformed (information about maximum immersion time omitted). Parents and children in the high-threat condition expected greater child pain, and these children reported higher perceived threat of pain and state pain catastrophizing. For children in the low-threat condition, an informed ceiling was associated with less state pain catastrophizing during the CPT. Pain intensity, tolerance, and fear during the CPT did not differ by experimental group, but were predicted by child characteristics. Findings suggest that provision of threatening information may impact anticipatory outcomes, but experienced pain was better explained by individual child variables. © 2015 World Institute of Pain.

  19. Modeling of depth to base of Last Glacial Maximum and seafloor sediment thickness for the California State Waters Map Series, eastern Santa Barbara Channel, California

    USGS Publications Warehouse

    Wong, Florence L.; Phillips, Eleyne L.; Johnson, Samuel Y.; Sliter, Ray W.

    2012-01-01

    Models of the depth to the base of Last Glacial Maximum and sediment thickness over the base of Last Glacial Maximum for the eastern Santa Barbara Channel are a key part of the maps of shallow subsurface geology and structure for offshore Refugio to Hueneme Canyon, California, in the California State Waters Map Series. A satisfactory interpolation of the two datasets that accounted for regional geologic structure was developed using geographic information systems modeling and graphics software tools. Regional sediment volumes were determined from the model. Source data files suitable for geographic information systems mapping applications are provided.

  20. Entropy Methods For Univariate Distributions in Decision Analysis

    NASA Astrophysics Data System (ADS)

    Abbas, Ali E.

    2003-03-01

    One of the most important steps in decision analysis practice is the elicitation of the decision-maker's belief about an uncertainty of interest in the form of a representative probability distribution. However, the probability elicitation process is a task that involves many cognitive and motivational biases. Alternatively, the decision-maker may provide other information about the distribution of interest, such as its moments, and the maximum entropy method can be used to obtain a full distribution subject to the given moment constraints. In practice, however, decision makers cannot readily provide moments for the distribution, and are much more comfortable providing information about the fractiles of the distribution of interest or bounds on its cumulative probabilities. In this paper we present a graphical method to determine the maximum entropy distribution between upper and lower probability bounds and provide an interpretation for the shape of the maximum entropy distribution subject to fractile constraints (FMED). We also discuss the problems with the FMED in that it is discontinuous and flat over each fractile interval. We present a heuristic approximation to a distribution if, in addition to its fractiles, we also know it is continuous, and we work through full examples to illustrate the approach.
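    The fractile-constrained maximum entropy distribution (FMED) discussed above is piecewise uniform: each fractile interval's probability mass is spread flat across the interval, which is also why the density is discontinuous at the fractiles. A minimal sketch with illustrative quartile assessments:

```python
def fmed_density(fractiles, cum_probs):
    """Piecewise-constant maximum entropy density between fractiles.

    fractiles: increasing values x_0 < ... < x_n
    cum_probs: cumulative probabilities P(X <= x_i) at those values
    Each interval's mass is spread uniformly, so the density height on
    (x_i, x_{i+1}) is (p_{i+1} - p_i) / (x_{i+1} - x_i).
    """
    pieces = []
    for i in range(len(fractiles) - 1):
        width = fractiles[i + 1] - fractiles[i]
        mass = cum_probs[i + 1] - cum_probs[i]
        pieces.append(((fractiles[i], fractiles[i + 1]), mass / width))
    return pieces

# Illustrative assessments on [0, 10]: P(X<=2)=0.25, P(X<=5)=0.5, P(X<=10)=1
pieces = fmed_density([0, 2, 5, 10], [0.0, 0.25, 0.5, 1.0])
```

    Integrating the flat pieces recovers total probability 1; the jumps in height at each fractile are the discontinuities the abstract criticizes.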

  21. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations

    NASA Astrophysics Data System (ADS)

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-01

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  22. An implementation of the maximum-caliber principle by replica-averaged time-resolved restrained simulations.

    PubMed

    Capelli, Riccardo; Tiana, Guido; Camilloni, Carlo

    2018-05-14

    Inferential methods can be used to integrate experimental information and molecular simulations. The maximum entropy principle provides a framework for using equilibrium experimental data, and it has been shown that replica-averaged simulations, restrained using a static potential, are a practical and powerful implementation of such a principle. Here we show that replica-averaged simulations restrained using a time-dependent potential are equivalent to the principle of maximum caliber, the dynamic version of the principle of maximum entropy, and thus may allow us to integrate time-resolved data in molecular dynamics simulations. We provide an analytical proof of the equivalence as well as a computational validation making use of simple models and synthetic data. Some limitations and possible solutions are also discussed.

  23. Maximum mutual information estimation of a simplified hidden MRF for offline handwritten Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Reichenbach, Stephen E.

    1999-01-01

    Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov Random Field. MMIE provides improved performance over MLE in this application.

  24. 78 FR 78985 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-27

    ...) will publish periodic summaries of proposed projects. To request more information on the proposed... the effects and accomplishments of SAMHSA programs. The following table is an estimated annual... transaction \\1\\ This table represents the maximum additional burden if adult respondents for ATR provide...

  25. 77 FR 38397 - Agency Information Collection (Interest Rate Reduction Refinancing Loan Worksheet) Activities...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-27

    ... . Please refer to ``OMB Control No. 2900- 0386.'' SUPPLEMENTARY INFORMATION: Title: Interest Rate Reduction... guaranty on all interest rate reduction refinancing loan and provide a receipt as proof that the funding... ensure lenders computed the funding fee and the maximum permissible loan amount for interest rate...

  26. 76 FR 66169 - Office of Advocacy and Outreach Federal Financial Assistance Programs

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-10-26

    ...) Information about how to obtain proposal forms and the instructions for completing such forms. (14...) Regulatory information. (8) Definitions. (9) Minimum and maximum budget requests and whether proposals... Content of a proposal. The RFP provides instructions on how to access a funding opportunity. The funding...

  27. Weighing conservation objectives: maximum expected coverage versus endangered species protection

    Treesearch

    Jeffrey L. Arthur; Jeffrey D. Camm; Robert G. Haight; Claire A. Montgomery; Stephen Polasky

    2004-01-01

    Decision makers involved in land acquisition and protection often have multiple conservation objectives and are uncertain about the occurrence of species or other features in candidate sites. Models informing decisions on the selection of sites for reserves need to provide information about cost-efficient trade-offs between objectives and account for incidence uncertainty...

  28. Snow, topography, and the diurnal cycle in streamflow

    USGS Publications Warehouse

    Lundquist, J.D.; Knowles, N.; Dettinger, M.; Cayan, D.

    2002-01-01

    Because snowmelt processes are spatially complex, point measurements, particularly in mountainous regions, are often inadequate to resolve basin-scale characteristics. Satellite measurements provide good spatial sampling but are often infrequent in time, particularly during cloudy weather. Fortunately, hourly measurements of river discharge provide another widely available, but as yet underutilized, source of information, providing direct information on basin output at a fine temporal scale. The hour of maximum discharge recorded each day reflects the travel time between peak melt and the time most water reaches the gauge. Traditional theories, based on numerical models of melt-water percolation through a snowpack and localized, small-basin observations, report that the hour of daily maximum flow becomes earlier as the snowpack thins and matures, reflecting shorter travel times for surface melt to reach the base of the snowpack. However, an examination of hourly discharge from 100 basins in the Western United States, ranging in size from 1.3 km2 to 10,813 km2, reveals a more complex situation. The sequences of seasonal evolution of the hour of maximum discharge are unique to each basin, but within a given basin are remarkably consistent between years, regardless of the size of the snowpack. This seems to imply that basin topography strongly influences the timing of peak flow. In most of the basins examined, at the end of the melt season, the hour of maximum discharge shifts to later in the day, reflecting increased travel times as the snowline retreats to higher elevations.
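    The diagnostic used in this study, the hour of daily maximum discharge, is straightforward to extract from an hourly gauge record. The values below are synthetic, purely to illustrate the computation:

```python
def hour_of_daily_maximum(hourly_flow):
    """Return the hour (0-23) at which the day's maximum discharge occurs."""
    return max(range(len(hourly_flow)), key=lambda h: hourly_flow[h])

# Synthetic diurnal cycle (arbitrary units): melt peaks near midday and
# the discharge peak arrives hours later, after travel through the
# snowpack and channel network.
flows = [5, 5, 4, 4, 4, 4, 5, 6, 7, 8, 9, 10,
         11, 12, 13, 14, 15, 16, 17, 16, 14, 11, 8, 6]
peak_hour = hour_of_daily_maximum(flows)
```

    Tracking this hour day by day through the melt season yields the basin-specific sequences the paper analyzes.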

  29. Matching Fishers’ Knowledge and Landing Data to Overcome Data Missing in Small-Scale Fisheries

    PubMed Central

    Damasio, Ludmila de Melo Alves; Lopes, Priscila F. M.; Guariento, Rafael D.; Carvalho, Adriana R.

    2015-01-01

    Background In small-scale fishery, information provided by fishers has been useful to complement current and past lack of knowledge on species and environment. Methodology Through interviews, 82 fishers from the largest fishing communities on the north and south borders of a Brazilian northeastern coastal state provided estimates of the catch per unit effort (CPUE) and rank of species abundance of their main target fishes for three time points: current year (2013 at the time of the research), 10, and 20 years past. This information was contrasted to other available data sources: scientific sampling of fish landing (2013), governmental statistics (2003), and information provided by expert fishers (1993), respectively. Principal Findings Fishers were more accurate when reporting information about their maximum CPUE for 2013, but except for three species, which they estimated accurately, fishers overestimated their mean CPUE per species. Fishers were also accurate at establishing ranks of abundance of their main target species for all periods. Fishers' beliefs that fish abundance has not changed over the last 10 years (2003–2013) were corroborated by governmental and scientific landing data. Conclusions The comparison between official and formal landing records and fishers' perceptions revealed that fishers are accurate when reporting maximum CPUE, but not when reporting mean CPUE. Moreover, fishers are less precise the less common a species is in their catches, suggesting that they could provide better information for management purposes on their current target species. PMID:26176538
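    The two quantities this study compares, CPUE and species abundance ranks, are simple to compute; the species names and numbers below are hypothetical:

```python
def cpue(catch_kg, effort_hours):
    """Catch per unit effort, a standard relative-abundance index."""
    return catch_kg / effort_hours

def abundance_rank(cpue_by_species):
    """Rank species from most to least abundant by CPUE."""
    return sorted(cpue_by_species, key=cpue_by_species.get, reverse=True)

ranks = abundance_rank({
    "mullet": cpue(90, 20),    # 4.5 kg/h
    "snapper": cpue(120, 40),  # 3.0 kg/h
    "grouper": cpue(30, 30),   # 1.0 kg/h
})
```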

  30. The official websites of blood centers in China: A nationwide cross-sectional study.

    PubMed

    Hu, Huiying; Wang, Jing; Zhu, Ming

    2017-01-01

    Blood collection agencies worldwide are facing ongoing and increasing medical demands for blood products. Many potential donors search for related information online before deciding whether or not to donate blood. However, there is little knowledge of the online information and services provided by blood centers in China, despite the constant increase in internet users. Our research investigates the number of blood centers' official websites and their quality, and highlights the deficiencies that require future improvement. Identified official websites of blood centers were scored using a newly developed evaluation instrument with 42 items concerning technical aspects, information quality, information comprehensiveness and interactive services. Scores of websites were compared between blood centers of different levels (provincial vs. regional blood centers) and locations (blood centers located in economically developed vs. developing regions). Of the 350 blood centers in China, 253 had working official websites, and the mean overall score of these websites was 24.7 out of 42. 79.1% of websites were rated as fair (50-75% of maximum), 5.5% as good (≥75% of maximum) and 15.4% as poor (25-50% of maximum). Websites received very low sub-scores in information quality (mean = 3.8; range 1-8; maximum = 9) and interactive services (3.3; 0-10; 10). Higher proportions of provincial (vs. regional) blood centers and economically developed (vs. developing) blood centers had official websites (p = 0.044 and p = 0.001, respectively), with better overall quality (p<0.001 and p<0.01) and better sub-scores (in all four sections, and in technical aspects and information quality, respectively). Website overall scores were positively correlated with the number of people served by each blood center (p<0.001) and the donation rate of each province (p = 0.046). 
This study suggests there is a need to further develop and improve official websites in China, especially for regional and inland blood centers. The poor information quality and interactive services provided by these websites is of particular concern, given the challenges in blood donor counselling and recruitment.

  31. The official websites of blood centers in China: A nationwide cross-sectional study

    PubMed Central

    Hu, Huiying; Wang, Jing

    2017-01-01

    Background Blood collection agencies worldwide are facing ongoing and increasing medical demands for blood products. Many potential donors search for related information online before deciding whether or not to donate blood. However, there is little knowledge of the online information and services provided by blood centers in China, despite the constant increase in internet users. Our research investigates the number of blood centers’ official websites and their quality, and highlights the deficiencies that require future improvement. Methods Identified official websites of blood centers were scored using a newly developed evaluation instrument with 42 items concerning technical aspects, information quality, information comprehensiveness and interactive services. Scores of websites were compared between blood centers of different levels (provincial vs. regional blood centers) and locations (blood centers located in economically developed vs. developing regions). Results Of the 350 blood centers in China, 253 had working official websites, and the mean overall score of these websites was 24.7 out of 42. 79.1% of websites were rated as fair (50–75% of maximum), 5.5% as good (≥75% of maximum) and 15.4% as poor (25–50% of maximum). Websites received very low sub-scores in information quality (mean = 3.8; range 1–8; maximum = 9) and interactive services (3.3; 0–10; 10). Higher proportions of provincial (vs. regional) blood centers and economically developed (vs. developing) blood centers had official websites (p = 0.044 and p = 0.001, respectively), with better overall quality (p<0.001 and p<0.01) and better sub-scores (in all four sections, and in technical aspects and information quality, respectively). Website overall scores were positively correlated with the number of people served by each blood center (p<0.001) and the donation rate of each province (p = 0.046). 
Conclusions This study suggests there is a need to further develop and improve official websites in China, especially for regional and inland blood centers. The poor information quality and interactive services provided by these websites is of particular concern, given the challenges in blood donor counselling and recruitment. PMID:28793324

  32. Sequence information signal processor for local and global string comparisons

    DOEpatents

    Peterson, John C.; Chow, Edward T.; Waterman, Michael S.; Hunkapillar, Timothy J.

    1997-01-01

    A sequence information signal processing integrated circuit chip designed to perform high speed calculation of a dynamic programming algorithm based upon the algorithm defined by Waterman and Smith. The signal processing chip of the present invention is designed to be a building block of a linear systolic array, the performance of which can be increased by connecting additional sequence information signal processing chips to the array. The chip provides a high speed, low cost linear array processor that can locate highly similar global sequences or segments thereof such as contiguous subsequences from two different DNA or protein sequences. The chip is implemented in a preferred embodiment using CMOS VLSI technology to provide the equivalent of about 400,000 transistors or 100,000 gates. Each chip provides 16 processing elements, and is designed to provide 16 bit, two's complement operation for maximum score precision of between -32,768 and +32,767. It is designed to provide a comparison between sequences as long as 4,194,304 elements without external software and between sequences of unlimited numbers of elements with the aid of external software. Each sequence can be assigned different deletion and insertion weight functions. Each processor is provided with a similarity measure device which is independently variable. Thus, each processor can contribute to maximum value score calculation using a different similarity measure.
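    The Waterman-Smith dynamic programming the chip accelerates can be sketched in software. This scoring-only version (no traceback) uses simple hypothetical match/mismatch/gap weights rather than the chip's configurable weight functions and 16-bit fixed-point arithmetic:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment: maximum local similarity score.

    H[i][j] is the best score of an alignment ending at a[i-1], b[j-1];
    clamping at zero is what makes the alignment local."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

score = smith_waterman("ACACACTA", "AGCACACA")
```

    The systolic array described in the patent parallelizes exactly this recurrence: each processing element handles one antidiagonal cell per clock, so throughput scales with the number of chips in the array.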

  33. Bayesian structural equation modeling in sport and exercise psychology.

    PubMed

    Stenling, Andreas; Ivarsson, Andreas; Johnson, Urban; Lindwall, Magnus

    2015-08-01

    Bayesian statistics is on the rise in mainstream psychology, but applications in sport and exercise psychology research are scarce. In this article, the foundations of Bayesian analysis are introduced, and we will illustrate how to apply Bayesian structural equation modeling in a sport and exercise psychology setting. More specifically, we contrasted a confirmatory factor analysis on the Sport Motivation Scale II estimated with the most commonly used estimator, maximum likelihood, and a Bayesian approach with weakly informative priors for cross-loadings and correlated residuals. The results indicated that the model with Bayesian estimation and weakly informative priors provided a good fit to the data, whereas the model estimated with a maximum likelihood estimator did not produce a well-fitting model. The reasons for this discrepancy between maximum likelihood and Bayesian estimation are discussed as well as potential advantages and caveats with the Bayesian approach.

  14. Software Update.

    ERIC Educational Resources Information Center

    Currents, 2000

    2000-01-01

    A chart of 40 alumni-development database systems provides information on vendor/Web site, address, contact/phone, software name, price range, minimum suggested workstation/suggested server, standard reports/reporting tools, minimum/maximum record capacity, and number of installed sites/client type. (DB)

  15. Combined optical and photoelectric study of the photocycle of 13-cis bacteriorhodopsin.

    PubMed Central

    Gergely, C; Ganea, C; Váró, G

    1994-01-01

    The photocycle of the 13-cis retinal containing bacteriorhodopsin was studied by three different techniques. The optical multichannel analyzer monitored the spectral changes during the photocycle and gave information about the number and the spectra of the intermediates. The absorption kinetic measurements provided the possibility of following the absorbance changes at several characteristic wavelengths. The electric signal provided information about the charge motions during the photocycle. The results reveal the existence of two intermediates in the 13-cis photocycle: one short-lived, with an average lifetime of 1.7 microseconds and an absorption maximum at 620 nm; the other long-lived, with a lifetime of about 50 ms and an absorption maximum around 585 nm. The data analysis suggests that these intermediates are in two parallel branches of the photocycle, and that branching from the intermediate with the shorter lifetime might be responsible for the light-adaptation process. PMID:7948698

  16. Beyond maximum entropy: Fractal pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, R. C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other methods, including Goodness-of-Fit (e.g. Least-Squares and Lucy-Richardson) and Maximum Entropy (ME). Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME.

  17. Gene Regulatory Network Inferences Using a Maximum-Relevance and Maximum-Significance Strategy

    PubMed Central

    Liu, Wei; Zhu, Wen; Liao, Bo; Chen, Xiangtao

    2016-01-01

    Recovering gene regulatory networks from expression data is a challenging problem in systems biology that provides valuable information on the regulatory mechanisms of cells. A number of algorithms based on computational models are currently used to recover network topology. However, most of these algorithms have limitations. For example, many models tend to be complicated because of the “large p, small n” problem. In this paper, we propose a novel regulatory network inference method called the maximum-relevance and maximum-significance network (MRMSn) method, which converts the problem of recovering networks into a problem of how to select the regulator genes for each gene. To solve the latter problem, we present an algorithm that is based on information theory and selects the regulator genes for a specific gene by maximizing the relevance and significance. A first-order incremental search algorithm is used to search for regulator genes. Eventually, a strict constraint is adopted to adjust all of the regulatory relationships according to the obtained regulator genes and thus obtain the complete network structure. We performed our method on five different datasets and compared our method to five state-of-the-art methods for network inference based on information theory. The results confirm the effectiveness of our method. PMID:27829000

  18. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 GBPS. To guarantee acceptable data quality during periods of signal attenuation it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing the maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.

  19. Defining Virtual Interactions: A Taxonomy for Researchers and Practitioners

    DTIC Science & Technology

    1999-11-01

    Engineering and Management of the Air Force Institute of Technology Air University Air Education and Training Command In Partial Fulfillment of the...information technology and produce the maximum benefits for all virtual components involved. DEFINING VIRTUAL INTERACTIONS: A TAXONOMY FOR...allow the human factor to maximize information exchange and provide high quality products to intelligence consumers. Applicability of this research In

  20. High-resolution electron microscope

    NASA Technical Reports Server (NTRS)

    Nathan, R.

    1977-01-01

    Employing a scanning transmission electron microscope as an interferometer, relative phases of diffraction maxima can be determined by analysis of dark-field images. A synthetic aperture technique and Fourier-transform computer processing of amplitude and phase information provide high-resolution images at approximately one angstrom.

  1. Calorific values and combustion chemistry of animal manure

    USDA-ARS?s Scientific Manuscript database

    Combustion chemistry and calorific value analyses are the fundamental information for evaluating different biomass waste-to-energy conversion operations. Specific chemical exergy of manure and other biomass feedstock will provide a measure for the theoretically maximum attainable energy. The specifi...

  2. Domestic embedded reporter program: saving lives and securing tactical operations

    DTIC Science & Technology

    2017-03-01

    estimated to average 1 hour per response, including the time for reviewing instruction, searching existing data sources, gathering and maintaining the...13. ABSTRACT (maximum 200 words) Advances in technology have provided journalists the tools to obtain and share real-time information during domestic...terrorist and mass-shooting incidents. This real-time information-sharing compromises the safety of first responders, victims, and reporters. Real

  3. Maximum power point tracker for photovoltaic power plants

    NASA Astrophysics Data System (ADS)

    Arcidiacono, V.; Corsi, S.; Lambri, L.

    The paper describes two different closed-loop control criteria for the maximum power point tracking of the voltage-current characteristic of a photovoltaic generator. The two criteria are discussed and compared, inter alia, with regard to the setting-up problems that they pose. Although a detailed analysis is not embarked upon, the paper also provides some quantitative information on the energy advantages obtained by using electronic maximum power point tracking systems, as compared with the situation in which the point of operation of the photovoltaic generator is not controlled at all. Lastly, the paper presents two high-efficiency MPPT converters for experimental photovoltaic plants of the stand-alone and the grid-interconnected type.
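    The abstract does not specify the two closed-loop criteria, so as a generic illustration of electronic maximum power point tracking, here is a perturb-and-observe sketch on a toy PV curve; both the PV model and its parameters are hypothetical, chosen only to give a single well-defined power maximum.

```python
import math

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV characteristic: current falls off exponentially near the
    open-circuit voltage v_oc (illustrative model, not a real panel)."""
    i = i_sc * (1.0 - math.exp((v - v_oc) / 3.0))
    return max(v * i, 0.0)

def perturb_and_observe(v=20.0, step=0.2, iters=500):
    """Climb the P-V curve: keep perturbing the operating voltage in the
    same direction while power rises; reverse when it drops."""
    p_prev = pv_power(v)
    direction = 1.0
    for _ in range(iters):
        v += direction * step
        p = pv_power(v)
        if p < p_prev:              # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v
```

    The tracker settles into a small oscillation around the maximum power point, which is the energy advantage the abstract quantifies relative to an uncontrolled operating point.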

  4. Use of electrothermal atomic absorption spectrometry for size profiling of gold and silver nanoparticles.

    PubMed

    Panyabut, Teerawat; Sirirat, Natnicha; Siripinyanond, Atitaya

    2018-02-13

    Electrothermal atomic absorption spectrometry (ETAAS) was applied to investigate the atomization behaviors of gold nanoparticles (AuNPs) and silver nanoparticles (AgNPs) in order to relate them to particle size information. At atomization temperatures from 1400 °C to 2200 °C, the time-dependent atomic absorption peak profiles of AuNPs and AgNPs with sizes varying from 5 nm to 100 nm were examined. With increasing particle size, the maximum absorbance was observed at longer times. The time at maximum absorbance was found to increase linearly with increasing particle size, suggesting that ETAAS can be applied to provide size information on nanoparticles. At an atomization temperature of 1600 °C, mixtures of nanoparticles containing two particle sizes, i.e., 5 nm tannic acid-stabilized AuNPs with 60, 80, or 100 nm citrate-stabilized AuNPs, were investigated, and bimodal peaks were observed. The particle-size-dependent atomization behaviors of nanoparticles show the potential application of ETAAS for providing size information on nanoparticles. The calibration plot between the time at maximum absorbance and the particle size was applied to estimate the particle size of in-house synthesized AuNPs and AgNPs, and the results obtained were in good agreement with those from flow field-flow fractionation (FlFFF) and transmission electron microscopy (TEM) techniques. Furthermore, a linear relationship between the activation energy and the particle size was observed. Copyright © 2017 Elsevier B.V. All rights reserved.
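    The calibration described, time at maximum absorbance increasing linearly with particle size, amounts to a least-squares line and its inverse. A sketch with hypothetical calibration points follows; the numbers mirror the linear trend only and are not the paper's data (numpy assumed).

```python
import numpy as np

# Hypothetical calibration data: particle diameter (nm) vs. time of peak
# absorbance (s). Illustrative values only, not measurements from the study.
size_nm = np.array([5.0, 20.0, 40.0, 60.0, 80.0, 100.0])
t_max_s = np.array([0.52, 0.61, 0.74, 0.86, 0.99, 1.11])

# Least-squares calibration line: t_max = slope * size + intercept.
slope, intercept = np.polyfit(size_nm, t_max_s, 1)

def size_from_peak_time(t):
    """Invert the calibration to estimate particle size from a peak time."""
    return (t - intercept) / slope
```

    An unknown sample's peak time is then mapped straight back to an estimated diameter, which is how the in-house synthesized particles would be sized against FlFFF and TEM.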

  5. Is Europa's Subsurface Water Ocean Warm?

    NASA Technical Reports Server (NTRS)

    Melosh, H. J.; Ekholm, A. G.; Showman, A. P.; Lorenz, R. D.

    2002-01-01

    Europa's subsurface water ocean may be warm: that is, at the temperature of water's maximum density. This provides a natural explanation of chaos melt-through events and leads to a correct estimate of the age of its surface. Additional information is contained in the original extended abstract.

  6. Radiological effluents released from US continental tests, 1961 through 1992. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schoengold, C.R.; DeMarre, M.E.; Kirkwood, E.M.

    1996-08-01

    This report documents all continental tests from September 15, 1961, through September 23, 1992, from which radioactive effluents were released. The report includes both updated information previously published in the publicly available May 1990 report, DOE/NV-317, "Radiological Effluents Released from Announced US Continental Tests 1961 through 1988", and effluent release information on formerly unannounced tests. General information provided for each test includes the date, time, location, type of test, sponsoring laboratory and/or agency or other sponsor, depth of burial, purpose, yield or yield range, extent of release (onsite only or offsite), and category of release (detonation-time versus post-test operations). Where a test with simultaneous detonations is listed, location, depth of burial and yield information are given for each detonation if applicable, as well as the specific source of the release. A summary of each release incident by type of release is included. For a detonation-time release, the effluent curies are expressed at R+12 hours. For controlled releases from tunnel tests, the effluent curies are expressed at both the time of release and at R+12 hours. All other types are listed at the time of the release. In addition, a qualitative statement of the isotopes in the effluent is included for detonation-time and controlled releases, and a quantitative listing is included for all other types. Offsite release information includes the cloud direction, the maximum activity detected in the air offsite, the maximum gamma exposure rate detected offsite, the maximum iodine level detected offsite, and the maximum distance radiation was detected offsite. A release summary includes whatever other pertinent information is available for each release incident. This document includes effluent release information for 433 tests, some of which have simultaneous detonations. However, only 52 of these are designated as having offsite releases.

  7. Bit Error Probability for Maximum Likelihood Decoding of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc P. C.; Rhee, Dojun

    1996-01-01

    In this paper, the bit error probability P(sub b) for maximum likelihood decoding of binary linear codes is investigated. The contribution of each information bit to P(sub b) is considered. For randomly generated codes, it is shown that the conventional high-SNR approximation P(sub b) approximately equal to (d(sub H)/N)P(sub s), where P(sub s) represents the block error probability, holds for systematic encoding only. Also, systematic encoding provides the minimum P(sub b) when the inverse mapping corresponding to the generator matrix of the code is used to retrieve the information sequence. The bit error performances corresponding to other generator matrix forms are also evaluated. Although derived for codes with a randomly generated generator matrix, these results are shown to provide good approximations for codes used in practice. Finally, for decoding methods which require a generator matrix with a particular structure, such as trellis decoding or algebraic-based soft-decision decoding, equivalent schemes that reduce the bit error probability are discussed.
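    The high-SNR approximation P(sub b) ≈ (d(sub H)/N)P(sub s) for systematic encoding can be checked numerically. The Monte Carlo sketch below uses the systematic (7,4) Hamming code (d(sub H) = 3, N = 7) as a stand-in example; it is not the paper's experiment, and numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Systematic generator matrix of the (7,4) Hamming code: G = [I | P].
P = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [1, 1, 1]])
G = np.hstack([np.eye(4, dtype=int), P])
msgs = np.array([[(m >> k) & 1 for k in range(4)] for m in range(16)])
codebook = msgs @ G % 2                       # all 16 codewords

def simulate(p, trials=50_000):
    """Send random codewords over a BSC(p), ML-decode by minimum Hamming
    distance, and measure the block (P_s) and info-bit (P_b) error rates."""
    idx = rng.integers(0, 16, size=trials)
    received = (codebook[idx] + (rng.random((trials, 7)) < p)) % 2
    # Distance from each received word to every codeword, then ML decode.
    dists = (received[:, None, :] != codebook[None, :, :]).sum(axis=2)
    decoded = np.argmin(dists, axis=1)
    P_s = np.mean(decoded != idx)
    # Systematic encoding: the first 4 bits of a codeword are the message.
    P_b = np.mean(codebook[decoded][:, :4] != msgs[idx])
    return P_s, P_b

P_s, P_b = simulate(p=0.02)   # expect P_b close to (3/7) * P_s
```

    At this crossover probability most block errors land on a codeword at the minimum distance 3, so the measured ratio P_b/P_s sits near d(sub H)/N = 3/7, matching the approximation for systematic encoding.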

  8. Kinetics of phase transformations in glass forming systems

    NASA Technical Reports Server (NTRS)

    Ray, Chandra S.

    1994-01-01

    A nucleation rate like curve for a glass can be determined from the functional dependence of the maximum height of its DTA crystallization peak, (delta T)(sub p), on the nucleation temperature, T(sub n). This nucleation rate curve provides information for the temperature range where nucleation for the glass can occur and the temperature where the nucleation rate is a maximum. However, this curve does not provide information for the nucleation rate, I, for the glass at different temperatures. A method for estimating I at different temperatures from (delta T)(sub p) was developed using a Li2O.2SiO2 (LS2) glass. Also, the dielectric constant (epsilon) and the loss factor (tan delta) of a glass-ceramic depend, in part, upon the amount of crystallinity which, in turn, depends upon the nucleation density in the starting glass. It is therefore expected that epsilon and tan delta should have a relationship with nucleation density and hence on the nucleation rate.

  9. Kinetic study on anaerobic oxidation of methane coupled to denitrification.

    PubMed

    Yu, Hou; Kashima, Hiroyuki; Regan, John M; Hussain, Abid; Elbeshbishy, Elsayed; Lee, Hyung-Sool

    2017-09-01

    Monod kinetic parameters provide the information required for kinetic analysis of anaerobic oxidation of methane coupled to denitrification (AOM-D). This information is critical for engineering AOM-D processes in wastewater treatment facilities. We first experimentally determined Monod kinetic parameters for an AOM-D enriched culture and obtained the following values: maximum specific growth rate (μ_max) 0.121/d, maximum substrate-utilization rate (q_max) 28.8 mmol CH4/g cells-d, half maximum-rate substrate concentration (K_s) 83 μM CH4, growth yield (Y) 4.76 g cells/mol CH4, decay coefficient (b) 0.031/d, and threshold substrate concentration (S_min) 28.8 μM CH4. Clone library analysis of 16S rRNA and mcrA gene fragments suggested that AOM-D reactions might have occurred via the syntrophic interaction between denitrifying bacteria (e.g., Ignavibacterium, Acidovorax, and Pseudomonas spp.) and hydrogenotrophic methanogens (Methanobacterium spp.), supporting reverse methanogenesis-dependent AOM-D in our culture. High μ_max and q_max and low K_s for the AOM-D enrichment imply that AOM-D could play a significant role in mitigating atmospheric methane efflux. In addition, these high kinetic features suggest that engineered AOM-D systems may provide a sustainable alternative for nitrogen removal in wastewater treatment. Copyright © 2017 Elsevier Inc. All rights reserved.
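    The reported parameters plug directly into the Monod rate expressions; the snippet below is an illustrative back-of-envelope model, not the authors' code, with units as given in the abstract.

```python
# Monod kinetics with the parameter values reported for the AOM-D enrichment.
Q_MAX  = 28.8    # mmol CH4 / g cells / d  (maximum substrate-utilization rate)
K_S    = 83.0    # uM CH4                  (half maximum-rate concentration)
MU_MAX = 0.121   # 1/d                     (maximum specific growth rate)
B      = 0.031   # 1/d                     (decay coefficient)

def substrate_utilization_rate(s):
    """Monod rate q = q_max * S / (K_s + S), with S in uM CH4."""
    return Q_MAX * s / (K_S + s)

def net_specific_growth_rate(s):
    """Net growth mu = mu_max * S / (K_s + S) - b; zero at S = S_min."""
    return MU_MAX * s / (K_S + s) - B
```

    Evaluating the net growth rate at the reported S_min of 28.8 μM gives approximately zero, a useful internal consistency check on the tabulated parameters.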

  10. 77 FR 12823 - Solicitation of Comments on a Proposed Change to the Disclosure Limitation Policy for Information...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... policy for information reported on fuel ethanol production capacity, (both nameplate and maximum... fuel ethanol production capacity, (both nameplate and maximum sustainable capacity) on Form EIA-819 as... treat all information reported on fuel ethanol production capacity, (both nameplate and maximum...

  11. Natural Resource Information System. Volume 2: System operating procedures and instructions

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A total computer software system description is provided for the prototype Natural Resource Information System designed to store, process, and display data of maximum usefulness to land management decision making. Program modules are described, as are the computer file design, file updating methods, digitizing process, and paper tape conversion to magnetic tape. Operating instructions for the system, data output, printed output, and graphic output are also discussed.

  12. Mixture class recovery in GMM under varying degrees of class separation: frequentist versus Bayesian estimation.

    PubMed

    Depaoli, Sarah

    2013-06-01

    Growth mixture modeling (GMM) represents a technique that is designed to capture change over time for unobserved subgroups (or latent classes) that exhibit qualitatively different patterns of growth. The aim of the current article was to explore the impact of latent class separation (i.e., how similar growth trajectories are across latent classes) on GMM performance. Several estimation conditions were compared: maximum likelihood via the expectation maximization (EM) algorithm and the Bayesian framework implementing diffuse priors, "accurate" informative priors, weakly informative priors, data-driven informative priors, priors reflecting partial-knowledge of parameters, and "inaccurate" (but informative) priors. The main goal was to provide insight about the optimal estimation condition under different degrees of latent class separation for GMM. Results indicated that optimal parameter recovery was obtained though the Bayesian approach using "accurate" informative priors, and partial-knowledge priors showed promise for the recovery of the growth trajectory parameters. Maximum likelihood and the remaining Bayesian estimation conditions yielded poor parameter recovery for the latent class proportions and the growth trajectories. (PsycINFO Database Record (c) 2013 APA, all rights reserved).

  13. The Preliminary Evaluation of Liquid Lubricants for Space Applications by Vacuum Tribometry

    NASA Technical Reports Server (NTRS)

    Jones, W. R., Jr.; Pepper, S. V.; Herrera-Fierro, P.; Feuchter, D.; Toddy, T. J.; Jayne, D. T.; Wheeler, D. R.; Abel, P. B.; Kingsbury, E.; Morales, W.

    1994-01-01

    Four different vacuum tribometers for the evaluation of liquid lubricants for space applications are described. These range from simple ball-on-flat sliders with maximum in-situ control and surface characterization to an instrument bearing apparatus having no in-situ characterization. Thus, the former provides an abundance of surface chemical information but is not particularly simulative of most triboelements. On the other hand, the instrument bearing apparatus is completely simulative, but only allows post-mortem surface chemical information. Two other devices, a four-ball apparatus and a ball-on-plate tribometer, provide varying degrees of surface chemical information and tribo-simulation. Examples of data from each device are presented.

  14. Analysis techniques for residual acceleration data

    NASA Technical Reports Server (NTRS)

    Rogers, Melissa J. B.; Alexander, J. Iwan D.; Snyder, Robert S.

    1990-01-01

    Various aspects of residual acceleration data are of interest to low-gravity experimenters. Maximum and mean values and various other statistics can be obtained from data as collected in the time domain. Additional information may be obtained through manipulation of the data. Fourier analysis is discussed as a means of obtaining information about the dominant frequency components of a given data window. Transformation of data into different coordinate axes is useful in the analysis of experiments with different orientations and can be achieved by the use of a transformation matrix. Application of such analysis techniques to residual acceleration data provides information beyond that contained in a time history and increases the effectiveness of post-flight analysis of low-gravity experiments.

  15. 76 FR 52947 - Clean Water Act Section 303(d): Final Agency Action on 16 Total Maximum Daily Loads (TMDLs) in...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-24

    ..., Sulfate, TDS. 08040203-010 Saline River TDS. 08040204-006 Saline River TDS. 08040206-015 Big Cornie Creek... public to provide EPA with any significant data or information that might impact the 16 TMDLs at Federal...

  16. ARSENIC: CARCINOGENIC MECHANISMS, RISK ASSESSMENT AND THE MAXIMUM CONTAMINANT LEVEL (MCL)

    EPA Science Inventory


    This workshop will provide an up-to-date overview on key issues related to cancer risk assessment of arsenic: carcinogenic mechanisms; application of mechanistic information to risk assessment models; and the development of the MCL for arsenic in drinking water. The two prese...

  17. Fast and Efficient Stochastic Optimization for Analytic Continuation

    DOE PAGES

    Bao, Feng; Zhang, Guannan; Webster, Clayton G; ...

    2016-09-28

    The analytic continuation of imaginary-time quantum Monte Carlo data to extract real-frequency spectra remains a key problem in connecting theory with experiment. Here we present a fast and efficient stochastic optimization method (FESOM) as a more accessible variant of the stochastic optimization method introduced by Mishchenko et al. [Phys. Rev. B 62, 6317 (2000)], and we benchmark the resulting spectra with those obtained by the standard maximum entropy method for three representative test cases, including data taken from studies of the two-dimensional Hubbard model. Generally, we find that our FESOM approach yields spectra similar to the maximum entropy results. In particular, while the maximum entropy method yields superior results when the quality of the data is high, we find that FESOM is able to resolve fine structure with more detail when the quality of the data is poor. In addition, because of its stochastic nature, the method provides detailed information on the frequency-dependent uncertainty of the resulting spectra, while the maximum entropy method does so only for the spectral weight integrated over a finite frequency region. Therefore, we believe that this variant of the stochastic optimization approach provides a viable alternative to the routinely used maximum entropy method, especially for data of poor quality.

  18. Mold heating and cooling microprocessor conversion. Final report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hoffman, D.P.

    Conversion of the microprocessors and software for the Mold Heating and Cooling (MHAC) pump package control systems was initiated to allow required system enhancements and provide data communications capabilities with the Plastics Information and Control System (PICS). The existing microprocessor-based control systems for the pump packages use an Intel 8088-based microprocessor board with a maximum of 64 Kbytes of program memory. The requirements for the system conversion were developed, and hardware has been selected to allow maximum reuse of existing hardware and software while providing the required additional capabilities and capacity. The new hardware will incorporate an Intel 80286-based microprocessor board with an 80287 math coprocessor; the system includes additional memory, I/O, and RS232 communication ports.

  19. Legends of the Dakota, Ojibwe, Winnebago. Teacher's Guide.

    ERIC Educational Resources Information Center

    Fairbanks, Paulette; And Others

    Intended to help teachers present Indian legends for the maximum benefit and enjoyment of students, this guide provides background information and learning activities for seven legends derived from the Dakota, Ojibwe, and Winnebago tribes. Introductory material discusses the history and purposes of tribal legends and outlines student objectives…

  20. Improved Forecasting of Next Day Ozone Concentrations in the Eastern U.S.

    EPA Science Inventory

    There is an urgent need to provide accurate air quality information and forecasts to the general public. A hierarchical space-time model is used to forecast next day spatial patterns of daily maximum 8-hr ozone concentrations. The model combines ozone monitoring data and gridded...

  1. Improved Space-Time Forecasting of next Day Ozone Concentrations in the Eastern U.S.

    EPA Science Inventory

    There is an urgent need to provide accurate air quality information and forecasts to the general public and environmental health decision-makers. This paper develops a hierarchical space-time model for daily 8-hour maximum ozone concentration (O3) data covering much of the easter...

  2. 50 CFR 648.21 - Procedures for determining initial annual amounts.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... maximum probability of overfishing as informed by the OFL distribution will be 35 percent for stocks with... specifications established pursuant to this section may be adjusted by the Regional Administrator, in... the reasons for such an action and providing a 30-day public comment period. (f) Distribution of...

  3. 47 CFR 1.1409 - Commission consideration of the complaint.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... rebuttal, studies that have been conducted. The Commission may also request that one or more of the parties make additional filings or provide additional information. Where one of the parties has failed to... formulas for determining a maximum just and reasonable rate: (1) The following formula shall apply to...

  4. An assessment of the quality and content of information on diverticulitis on the internet.

    PubMed

    Connelly, Tara M; Khan, Mohammad Shoaib; Victory, Liana; Mehmood, Abeera; Cooke, Fiachra

    2018-05-21

    Although commonly the first port of call for medical information, the internet provides unregulated information of variable quality. We aimed to evaluate commonly accessed web-based patient information on diverticulitis using validated and novel scoring systems. The top internet search engines (Google/Bing/Yahoo) were queried using the keyword 'diverticulitis.' The first 20 websites from each were graded using the DISCERN and Journal of the American Medical Association (JAMA) benchmark criteria. A novel diverticulitis-specific score was devised and applied. Thirty-six unique websites were identified. The mean total DISCERN score for all websites was 39.92 ± 12.44 (range = 18-62). No website achieved the maximum DISCERN score of 75. The mean JAMA and diverticulitis scores were 2.5 ± 1.08 (maximum possible score = 4) and 11.08 ± 4.17 (19 points possible) respectively. Fourteen (35.9%) and 20 (51.2%) did not provide the date of last update and authorship respectively. Thirty-three (84.6%) mentioned surgery as a treatment option; however, the majority (69.7%) did not describe the surgery or the possibility of a stoma. All except two described disease symptoms. Only ten (25.64%) provided information on when to seek further medical advice or help. Web-based information on diverticulitis is of variable content and quality. The majority of top websites describe disease symptoms and aetiology; however, information to prompt seeking medical attention if required, descriptions of surgical procedures and the possibility of stoma creation are poorly described in the majority of websites. These findings should be highlighted to patients utilising the internet to obtain information on diverticulitis. Copyright © 2018 Royal College of Surgeons of Edinburgh (Scottish charity number SC005317) and Royal College of Surgeons in Ireland. Published by Elsevier Ltd. All rights reserved.

  5. Preoperative assessment of intracranial tumors with perfusion MR and a volumetric interpolated examination: a comparative study with DSA.

    PubMed

    Wetzel, Stephan G; Cha, Soonmee; Law, Meng; Johnson, Glyn; Golfinos, John; Lee, Peter; Nelson, Peter Kim

    2002-01-01

    In evaluating intracranial tumors, a safe low-cost alternative that provides information similar to that of digital subtraction angiography (DSA) may be of interest. Our purpose was to determine the utility and limitations of a combined MR protocol in assessing (neo-) vascularity in intracranial tumors and their relation to adjacent vessels and to compare the results with those of DSA. Twenty-two consecutive patients with an intracranial tumor who underwent preoperative stereoscopic DSA were examined with contrast-enhanced dynamic T2*-weighted perfusion MR imaging followed by a T1-weighted three-dimensional (3D) MR study (volumetric interpolated brain examination [VIBE]). The maximum relative cerebral blood volume (rCBV) of the tumor was compared with tumor vascularity at DSA. Critical vessel structures were defined in each patient, and VIBE images of these structures were compared with DSA findings. For full exploitation of the 3D data sets, maximum-intensity projection algorithms reconstructed in real time with any desired volume and orientation were used. Tumor blush scores at DSA were significantly correlated with the rCBV measurements (r = 0.75; P <.01, Spearman rank correlation coefficient). In 17 (77%) patients, VIBE provided all relevant information about the venous system, whereas information about critical arteries was partial in 50% of the cases and not relevant in the other 50%. A fast imaging protocol consisting of perfusion MR imaging and a volumetric MR acquisition provides some of the information about tumor (neo-) vascularity and adjacent vascular anatomy that can be obtained with conventional angiography. However, the MR protocol provides insufficient visualization of distal cerebral arteries.

  6. Using iMCFA to Perform the CFA, Multilevel CFA, and Maximum Model for Analyzing Complex Survey Data.

    PubMed

    Wu, Jiun-Yu; Lee, Yuan-Hsuan; Lin, John J H

    2018-01-01

    To construct CFA, MCFA, and maximum MCFA with LISREL v.8 and below, we provide iMCFA (integrated Multilevel Confirmatory Analysis) to examine the potential multilevel factorial structure in the complex survey data. Modeling multilevel structure for complex survey data is complicated because building a multilevel model is not an infallible statistical strategy unless the hypothesized model is close to the real data structure. Methodologists have suggested using different modeling techniques to investigate potential multilevel structure of survey data. Using iMCFA, researchers can visually set the between- and within-level factorial structure to fit MCFA, CFA and/or MAX MCFA models for complex survey data. iMCFA can then yield between- and within-level variance-covariance matrices, calculate intraclass correlations, perform the analyses and generate the outputs for respective models. The summary of the analytical outputs from LISREL is gathered and tabulated for further model comparison and interpretation. iMCFA also provides LISREL syntax of different models for researchers' future use. An empirical and a simulated multilevel dataset with complex and simple structures in the within or between level was used to illustrate the usability and the effectiveness of the iMCFA procedure on analyzing complex survey data. The analytic results of iMCFA using Muthen's limited information estimator were compared with those of Mplus using Full Information Maximum Likelihood regarding the effectiveness of different estimation methods.

  7. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    PubMed

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
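
    The MAP decoder of application (1) can be sketched for a toy case: a population of Poisson neurons with exponential nonlinearities and a standard-normal stimulus prior, decoded by minimizing the convex negative log-posterior. The sizes, weights, and seed below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
T, N = 40, 8                       # time bins, neurons
w = rng.normal(0, 1, size=N)       # known encoding weights, one per neuron
x_true = rng.normal(0, 1, size=T)  # stimulus drawn from the N(0, 1) prior
rates = np.exp(np.outer(x_true, w))  # lambda[t, i] = exp(w[i] * x[t])
spikes = rng.poisson(rates)          # observed population spike counts

def neg_log_posterior(x):
    eta = np.outer(x, w)
    ll = np.sum(spikes * eta - np.exp(eta))  # Poisson log-likelihood (+ const)
    lp = -0.5 * np.sum(x ** 2)               # N(0, 1) log-prior
    return -(ll + lp)

def grad(x):
    eta = np.outer(x, w)
    return -((spikes - np.exp(eta)) @ w - x)

# The log-posterior is concave, so a local optimizer finds the global MAP.
res = minimize(neg_log_posterior, np.zeros(T), jac=grad, method="L-BFGS-B")
x_map = res.x
print(round(np.corrcoef(x_map, x_true)[0, 1], 2))  # decoded vs. true stimulus
```

With several neurons per bin the decoded stimulus correlates strongly with the true one; the same machinery extends to the Gaussian posterior approximation in application (2).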

  8. On the information content of discrete phylogenetic characters.

    PubMed

    Bordewich, Magnus; Deutschmann, Ina Maria; Fischer, Mareike; Kasbohm, Elisa; Semple, Charles; Steel, Mike

    2017-12-16

    Phylogenetic inference aims to reconstruct the evolutionary relationships of different species based on genetic (or other) data. Discrete characters are a particular type of data, which contain information on how the species should be grouped together. However, it has long been known that some characters contain more information than others. For instance, a character that assigns the same state to each species groups all of them together and so provides no insight into the relationships of the species considered. At the other extreme, a character that assigns a different state to each species also conveys no phylogenetic signal. In this manuscript, we study a natural combinatorial measure of the information content of an individual character and analyse properties of characters that provide the maximum phylogenetic information, particularly, the number of states such a character uses and how the different states have to be distributed among the species or taxa of the phylogenetic tree.

  9. Reading Assessment: A Primer for Teachers and Tutors.

    ERIC Educational Resources Information Center

    Caldwell, JoAnne Schudt

    This primer provides the basic information that teachers and tutors need to get started on the complex process of reading assessment. Designed for maximum utility in today's standards-driven classroom, the primer presents simple, practical assessment strategies that are based on theory and research. It takes teachers step by step through learning…

  10. Guidelines for conducting TMDL consultations on selenium

    Treesearch

    Dennis A. Lemly

    2000-01-01

    This report was prepared to provide Environmental Contaminants Specialists in the U.S. Fish and Wildlife Service (Service) with a step-by-step procedure for consultations involving Total Maximum Daily Loads (TMDL's) for selenium. The need for this information stems from recent actions taken by the U.S. Environmental Protection Agency (EPA) that will involve the...

  11. 50 CFR 86.60 - What are the criteria used to select projects for grants?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... vessels (includes education/information) 0-15 points. (4) Include private, local, or other State funds in..., for a maximum of 15 points (8) Provide significant positive economic impacts to a community. For... year in the community 1-5 points. (9) Include multi-State efforts that result in coordinating location...

  12. The changing role of the medical technologist from technologist to information specialist.

    PubMed

    Miller, W G

    2000-01-01

    Pathology laboratory services are dependent on the laboratory information system (LIS) to organize the work, manage the operation, and communicate the results for effective laboratory medicine. For maximum efficiency, staffing for the LIS should be an integral component of laboratory operations and is facilitated by a two-tier structure. A core LIS staff provides system support and continuous services. A group of bench medical technologists have multitasking responsibilities, including LIS support for a specific laboratory work area. The two components form a team that uses staff efficiently to provide ongoing operational services and flexibility for problem solving and new functionality implementation.

  13. Incident Management in Academic Information System using ITIL Framework

    NASA Astrophysics Data System (ADS)

    Palilingan, V. R.; Batmetan, J. R.

    2018-02-01

    Incident management is essential to ensure the continuity of a system. Information systems require incident management so that they can deliver the maximum level of the services they provide. Many of the problems that arise in academic information systems stem from incidents that are not handled properly. The objective of this study is to find an appropriate way to manage incidents so that they do not grow into larger problems. This research uses the ITIL framework, with a technique adopted and developed from the service operations section of that framework. The results show that 84.5% of the incidents appearing in academic information systems can be handled quickly and appropriately, and the remaining 15.5% can be escalated without causing new problems. The incident management model applied enables the academic information system to deliver academic services well and efficiently, and to manage resources appropriately so that incidents are handled quickly and easily.

  14. Two new endemic species of Ameiva (Squamata: Teiidae) from the dry forest of northwestern Peru and additional information on Ameiva concolor Ruthven, 1924.

    PubMed

    Koch, Claudia; Venegas, Pablo J; Rödder, Dennis; Flecks, Morris; Böhme, Wolfgang

    2013-12-04

    We describe two new species of Ameiva Meyer, 1795 from the dry forest of the Northern Peruvian Andes. The new species Ameiva nodam sp. nov. and Ameiva aggerecusans sp. nov. share a divided frontal plate and are differentiated from each other and from their congeners based on genetic (12S and 16S rRNA genes) and morphological characteristics. A. nodam sp. nov. has dilated postbrachials, a maximum known snout-vent length of 101 mm, 10 longitudinal rows of ventral plates, 86-113 midbody granules, 25-35 lamellae under the fourth toe, and a color pattern with 5 longitudinal yellow stripes on the dorsum. Ameiva aggerecusans sp. nov. has postbrachials that are not or only slightly dilated, a maximum known snout-vent length of 99.3 mm, 10-12 longitudinal rows of ventral plates, 73-92 midbody granules, 31-39 lamellae under the fourth toe, and its females and juveniles normally exhibit a cream-colored vertebral stripe on a dark dorsal ground color. We provide information on the intraspecific variation and distribution of A. concolor. Furthermore, we provide information on the environmental niches of the taxa and test for niche conservatism.

  15. Modeling internal ballistics of gas combustion guns.

    PubMed

    Schorge, Volker; Grossjohann, Rico; Schönekess, Holger C; Herbst, Jörg; Bockholdt, Britta; Ekkernkamp, Axel; Frank, Matthias

    2016-05-01

    Potato guns are popular homemade guns which work on the principle of gas combustion. They are usually constructed for recreational rather than criminal purposes. Yet some serious injuries and fatalities due to these guns are reported. As information on the internal ballistics of homemade gas combustion-powered guns is scarce, it is the aim of this work to provide an experimental model of the internal ballistics of these devices and to investigate their basic physical parameters. A gas combustion gun was constructed with a steel tube as the main component. Gas/air mixtures of acetylene, hydrogen, and ethylene were used as propellants for discharging a 46-mm caliber test projectile. Gas pressure in the combustion chamber was captured with a piezoelectric pressure sensor. Projectile velocity was measured with a ballistic speed measurement system. The maximum gas pressure, the maximum rate of pressure rise, the time parameters of the pressure curve, and the velocity and path of the projectile through the barrel as a function of time were determined according to the pressure-time curve. The maximum gas pressure was measured to be between 1.4 bar (ethylene) and 4.5 bar (acetylene). The highest maximum rate of pressure rise was determined for hydrogen at (dp/dt)max = 607 bar/s. The muzzle energy was calculated to be between 67 J (ethylene) and 204 J (acetylene). To conclude, this work provides basic information on the internal ballistics of homemade gas combustion guns. The risk of injury to the operator or bystanders is high, because accidental explosions of the gun due to the high-pressure rise during combustion of the gas/air mixture may occur.
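
    The muzzle energies quoted above follow from the kinetic-energy relation E = ½mv². A quick check with assumed figures (the projectile mass below is an illustrative guess, not a value reported in the abstract):

```python
# Muzzle energy E = 1/2 * m * v^2. The projectile mass and velocity below
# are assumed, illustrative figures, not values measured in the study.
def muzzle_energy(mass_kg, velocity_ms):
    return 0.5 * mass_kg * velocity_ms ** 2

# An assumed 0.13 kg projectile at 56 m/s carries about 204 J, the order
# of magnitude reported here for the acetylene propellant.
print(round(muzzle_energy(0.13, 56.0), 1))  # 203.8
```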

  16. Application of the Maximum Amplitude-Early Rise Correlation to Cycle 23

    NASA Technical Reports Server (NTRS)

    Willson, Robert M.; Hathaway, David H.

    2004-01-01

    On the basis of the maximum amplitude-early rise correlation, cycle 23 could have been predicted to be about the size of the mean cycle as early as 12 mo following cycle minimum. Indeed, estimates for the size of cycle 23 throughout its rise consistently suggested a maximum amplitude that would not differ appreciably from the mean cycle, contrary to predictions based on precursor information. Because cycle 23's average slope during the rising portion of the solar cycle measured 2.4, computed as the difference between the conventional maximum (120.8) and minimum (8) amplitudes divided by the ascent duration in months (47), statistically speaking, it should be a cycle of shorter period. Hence, conventional sunspot minimum for cycle 24 should occur before December 2006, probably near July 2006 (+/-4 mo). However, if cycle 23 proves to be a statistical outlier, then conventional sunspot minimum for cycle 24 would be delayed until after July 2007, probably near December 2007 (+/-4 mo). In anticipation of cycle 24, a chart and table are provided for easy monitoring of the nearness and size of its maximum amplitude once onset has occurred (with respect to the mean cycle and using the updated maximum amplitude-early rise relationship).
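
    The quoted slope can be reproduced directly from the amplitudes and ascent duration given in the abstract:

```python
# Average rising-phase slope = (maximum amplitude - minimum amplitude) / ascent,
# using the conventional amplitudes and ascent duration quoted in the abstract.
max_amp, min_amp, ascent_months = 120.8, 8.0, 47
slope = (max_amp - min_amp) / ascent_months
print(round(slope, 1))  # 2.4, the value quoted for cycle 23
```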

  17. Wave Information Studies of US Coastlines: Hindcast Wave Information for the Great Lakes: Lake Erie

    DTIC Science & Technology

    1991-10-01

    total ice cover) for individual grid cells measuring 5 km square. The GLERL analyzed each half-month data set to provide the maximum, minimum...average, median, and modal ice concentrations for each 5-km cell. The median value, which represents an estimate of the 50-percent point of the ice...incorporating the progression and decay of the time-dependent ice cover was complicated by the fact that different grid cell sizes were used for mapping the ice

  18. Predicting protein β-sheet contacts using a maximum entropy-based correlated mutation measure.

    PubMed

    Burkoff, Nikolas S; Várnai, Csilla; Wild, David L

    2013-03-01

    The problem of ab initio protein folding is one of the most difficult in modern computational biology. The prediction of residue contacts within a protein provides a more tractable immediate step. Recently introduced maximum entropy-based correlated mutation measures (CMMs), such as direct information, have been successful in predicting residue contacts. However, most correlated mutation studies focus on proteins that have large good-quality multiple sequence alignments (MSA) because the power of correlated mutation analysis falls as the size of the MSA decreases. Even with small autogenerated MSAs, however, maximum entropy-based CMMs contain information. To make use of this information, in this article we focus not on general residue contacts but on contacts between residues in β-sheets. The strong constraints and prior knowledge associated with β-contacts are ideally suited for prediction using a method that incorporates an often noisy CMM. Using contrastive divergence, a statistical machine learning technique, we have calculated a maximum entropy-based CMM. We have integrated this measure with a new probabilistic model for β-contact prediction, which is used to predict both residue- and strand-level contacts. Using our model on a standard non-redundant dataset, we significantly outperform a 2D recurrent neural network architecture, achieving a 5% improvement in true positives at the 5% false-positive rate at the residue level. At the strand level, our approach is competitive with state-of-the-art single methods, achieving precision of 61.0% and recall of 55.4%, while not requiring residue solvent accessibility as an input. http://www2.warwick.ac.uk/fac/sci/systemsbiology/research/software/
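
    For context, mutual information between alignment columns is the simplest correlated mutation measure; the maximum entropy-based direct information used above refines it by disentangling direct from indirect couplings. A toy sketch of the basic column-MI measure (the sequences are invented, not from any real MSA):

```python
import math
from collections import Counter

def column_mi(col_a, col_b):
    """Mutual information between two alignment columns, a basic CMM."""
    n = len(col_a)
    ca, cb = Counter(col_a), Counter(col_b)
    cab = Counter(zip(col_a, col_b))
    mi = 0.0
    for (a, b), c in cab.items():
        # p(a,b) * log( p(a,b) / (p(a) p(b)) ), with counts c, ca, cb
        mi += (c / n) * math.log(c * n / (ca[a] * cb[b]))
    return mi

# Toy columns: the first pair co-varies perfectly, the second is unrelated.
coupled_a = list("AAAACCCC")
coupled_b = list("DDDDEEEE")
noise = list("ADAEADAE")
print(column_mi(coupled_a, coupled_b) > column_mi(coupled_a, noise))  # True
```

Direct information replaces these raw pairwise statistics with couplings from a fitted maximum entropy model, which is what makes it robust to transitive correlations.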

  19. Marshall Space Flight Center Engineering Directorate Overview: Launching the Future of Science and Exploration

    NASA Technical Reports Server (NTRS)

    Miley, Steven C.

    2009-01-01

    The Marshall Small Business Association (MSBA) serves as a central point of contact to inform and educate small businesses interested in pursuing contracting and subcontracting opportunities at the Marshall Space Flight Center. The MSBA meets quarterly to provide industry with information about how to do business with Marshall and to share specific information about Marshall's mission, which allows private businesses to envision how they might contribute. For the February 19 meeting, the Engineering Directorate will give an overview of its unique capabilities and how it is organized to provide maximum support for the programs and projects resident at Marshall, for example, the Space Shuttle Propulsion Office, Ares Projects Office, and Science and Mission Systems Office. This briefing provides a top-level summary of the work conducted by Marshall's largest organization, while explaining how resources are deployed to perform the volume of work under Marshall's purview.

  20. Understanding Peripheral Bat Populations Using Maximum-Entropy Suitability Modeling

    PubMed Central

    Barnhart, Paul R.; Gillam, Erin H.

    2016-01-01

    Individuals along the periphery of a species distribution regularly encounter more challenging environmental and climatic conditions than conspecifics near the center of the distribution. Due to these potential constraints, individuals in peripheral margins are expected to change their habitat and behavioral characteristics. Managers typically rely on species distribution maps when developing adequate management practices. However, these range maps are often too simplistic and do not provide adequate information as to what fine-scale biotic and abiotic factors are driving a species' occurrence. In the last decade, habitat suitability modelling has become widely used as a substitute for simplistic distribution mapping, which allows regional managers the ability to fine-tune management resources. The objectives of this study were to use maximum-entropy modeling to produce habitat suitability models for seven species that have a peripheral margin intersecting the state of North Dakota, according to current IUCN distributions, and to determine the vegetative and climatic characteristics driving these models. Mistnetting resulted in the documentation of five species outside the IUCN distribution in North Dakota, indicating that current range maps for North Dakota, and potentially the northern Great Plains, are in need of update. Maximum-entropy modeling showed that temperature, and not precipitation, was the variable most important for model production. This fine-scale result highlights the importance of habitat suitability modelling, as this information cannot be extracted from distribution maps. Our results provide baseline information needed for future research about how and why individuals residing in the peripheral margins of a species' distribution may show marked differences in habitat use as a result of urban expansion, habitat loss, and climate change compared to more centralized populations. PMID:27935936

  1. Today's CIO: catalyst for managed care change.

    PubMed

    Sanchez, P

    1997-05-01

    As the impact of managed care increases and capitation becomes all pervasive, healthcare providers' attention to cost control will intensify. For integrated delivery networks (IDNs) to be competitive, today's CIO must leverage managed care as a catalyst for change, and use a sophisticated information system toolset as the means to an integrated end. One area many CIOs target for fast results and maximum cost savings is resource management. This article reviews how Dick Escue, chief information officer at Baptist Memorial Health Care Corporation (Memphis, TN), uses electronic information management systems to integrate and conserve the resources of Baptist's widespread healthcare organization.

  2. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…

  3. A pairwise maximum entropy model accurately describes resting-state human brain networks

    PubMed Central

    Watanabe, Takamitsu; Hirose, Satoshi; Wada, Hiroyuki; Imai, Yoshio; Machida, Toru; Shirouzu, Ichiro; Konishi, Seiki; Miyashita, Yasushi; Masuda, Naoki

    2013-01-01

    The resting-state human brain networks underlie fundamental cognitive functions and consist of complex interactions among brain regions. However, the level of complexity of the resting-state networks has not been quantified, which has prevented comprehensive descriptions of the brain activity as an integrative system. Here, we address this issue by demonstrating that a pairwise maximum entropy model, which takes into account region-specific activity rates and pairwise interactions, can be robustly and accurately fitted to resting-state human brain activities obtained by functional magnetic resonance imaging. Furthermore, to validate the approximation of the resting-state networks by the pairwise maximum entropy model, we show that the functional interactions estimated by the pairwise maximum entropy model reflect anatomical connexions more accurately than the conventional functional connectivity method. These findings indicate that a relatively simple statistical model not only captures the structure of the resting-state networks but also provides a possible method to derive physiological information about various large-scale brain networks. PMID:23340410
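
    A pairwise maximum entropy model of binary activity can be fitted by exact gradient ascent when the system is small enough to enumerate all states. This sketch uses simulated, independent "activity" data and illustrative learning settings, not the fMRI pipeline of the paper:

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 3
states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

# "Empirical" first and second moments from binarized activity (simulated).
data = (rng.random((500, n)) < np.array([0.3, 0.5, 0.7])).astype(float)
emp_mean = data.mean(axis=0)
emp_corr = (data.T @ data) / len(data)

h = np.zeros(n)        # region-specific activity biases
J = np.zeros((n, n))   # pairwise couplings (diagonal kept at zero)
for _ in range(2000):  # exact gradient ascent on the log-likelihood
    energy = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
    p = np.exp(energy)
    p /= p.sum()
    model_mean = p @ states
    model_corr = states.T @ (p[:, None] * states)
    h += 0.1 * (emp_mean - model_mean)
    J += 0.1 * (emp_corr - model_corr)
    np.fill_diagonal(J, 0.0)

print(np.round(np.abs(model_mean - emp_mean).max(), 3))  # moments matched
```

The fitted J plays the role of the "functional interactions" the authors compare against anatomical connexions; for realistic numbers of regions the partition function is approximated rather than enumerated.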

  4. WASTE TREATMENT PLANT (WTP) LIQUID EFFLUENT TREATABILITY EVALUATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LUECK, K.J.

    2004-10-18

    A forecast of the radioactive, dangerous liquid effluents expected to be produced by the Waste Treatment Plant (WTP) was provided by Bechtel National, Inc. (BNI 2004). The forecast represents the liquid effluents generated from the processing of Tank Farm waste through the end-of-mission for the WTP. The WTP forecast is provided in the Appendices. The WTP liquid effluents will be stored, treated, and disposed of in the Liquid Effluent Retention Facility (LERF) and the Effluent Treatment Facility (ETF). Both facilities are located in the 200 East Area and are operated by Fluor Hanford, Inc. (FH) for the U.S. Department of Energy (DOE). The treatability of the WTP liquid effluents in the LERF/ETF was evaluated. The evaluation was conducted by comparing the forecast to the LERF/ETF treatability envelope (Aromi 1997), which provides information on the items which determine if a liquid effluent is acceptable for receipt and treatment at the LERF/ETF. The format of the evaluation corresponds directly to the outline of the treatability envelope document. Except where noted, the maximum annual average concentrations over the range of the 27-year forecast were evaluated against the treatability envelope. This is an acceptable approach because the volume capacity in the LERF Basin will equalize the minimum and maximum peaks. Background information on the LERF/ETF design basis is provided in the treatability envelope document.

  5. Best practices for missing data management in counseling psychology.

    PubMed

    Schlomer, Gabriel L; Bauman, Sheri; Card, Noel A

    2010-01-01

    This article urges counseling psychology researchers to recognize and report how missing data are handled, because consumers of research cannot accurately interpret findings without knowing the amount and pattern of missing data or the strategies that were used to handle those data. Patterns of missing data are reviewed, and some of the common strategies for dealing with them are described. The authors provide an illustration in which data were simulated and evaluate 3 methods of handling missing data: mean substitution, multiple imputation, and full information maximum likelihood. Results suggest that mean substitution is a poor method for handling missing data, whereas both multiple imputation and full information maximum likelihood are recommended alternatives to this approach. The authors suggest that researchers fully consider and report the amount and pattern of missing data and the strategy for handling those data in counseling psychology research and that editors advise researchers of this expectation.
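
    The drawback of mean substitution is easy to demonstrate: it leaves the observed mean unchanged but shrinks the variance, which biases downstream estimates. A small simulation with roughly 30% of values missing completely at random (all numbers are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(50, 10, size=2000)    # the complete variable
mask = rng.random(2000) < 0.3        # ~30% missing completely at random

observed = x[~mask]
mean_imputed = x.copy()
mean_imputed[mask] = observed.mean() # mean substitution

# The observed mean is preserved exactly, but the standard deviation
# shrinks (by about sqrt(0.7) here), which is why mean substitution is
# discouraged in favor of multiple imputation or FIML.
print(round(x.std(), 2), round(mean_imputed.std(), 2))
```

Multiple imputation and full information maximum likelihood avoid this shrinkage by propagating uncertainty about the missing values rather than replacing them with a constant.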

  6. Bovine milk proteome: Quantitative changes in normal milk exosomes, milk fat globule membranes and whey proteomes resulting from Staphylococcus aureus mastitis

    USDA-ARS?s Scientific Manuscript database

    Knowledge of milk protein composition/expression in healthy cows and cows with mastitis will provide information important for the dairy food industry, mammary biology and immune function in the mammary gland. To facilitate maximum protein discovery, milk was fractioned into whey, milk fat globule ...

  7. 14 CFR 331.21 - What information must operators or providers submit in their applications for reimbursement?

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... instructed in the appendix to this part. (j) If you need professional accounting services to assist in the... services, up to a maximum reimbursement of $2,000. You may claim reimbursement only for professional services; your own time in applying for reimbursement is not reimbursable. Any claim for professional...

  8. 10 CFR Appendix A to Part 625 - Standard Sales Provisions

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... of the NS will be publicized on the Fossil Energy web page http://www.fe.doe.gov/programs/reserves... authorized to do so, in the form of corporate minutes, the Authorized Signature List, or the General... information will be provided to DOE by the offeror on the SPR on-line offer form: (1) Maximum MLI Quantity...

  9. 10 CFR Appendix A to Part 625 - Standard Sales Provisions

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... of the NS will be publicized on the Fossil Energy web page http://www.fe.doe.gov/programs/reserves... authorized to do so, in the form of corporate minutes, the Authorized Signature List, or the General... information will be provided to DOE by the offeror on the SPR on-line offer form: (1) Maximum MLI Quantity...

  10. 10 CFR Appendix A to Part 625 - Standard Sales Provisions

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... of the NS will be publicized on the Fossil Energy web page http://www.fe.doe.gov/programs/reserves... authorized to do so, in the form of corporate minutes, the Authorized Signature List, or the General... information will be provided to DOE by the offeror on the SPR on-line offer form: (1) Maximum MLI Quantity...

  11. 10 CFR Appendix A to Part 625 - Standard Sales Provisions

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... of the NS will be publicized on the Fossil Energy web page http://www.fe.doe.gov/programs/reserves... authorized to do so, in the form of corporate minutes, the Authorized Signature List, or the General... information will be provided to DOE by the offeror on the SPR on-line offer form: (1) Maximum MLI Quantity...

  12. 10 CFR Appendix A to Part 625 - Standard Sales Provisions

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... of the NS will be publicized on the Fossil Energy web page http://www.fe.doe.gov/programs/reserves... authorized to do so, in the form of corporate minutes, the Authorized Signature List, or the General... information will be provided to DOE by the offeror on the SPR on-line offer form: (1) Maximum MLI Quantity...

  13. Charge Efficiency Tests of Lead/Acid Batteries

    NASA Technical Reports Server (NTRS)

    Rowlette, J. J.

    1984-01-01

    Current, voltage, and gas evolution measured during charge/discharge cycles. Series of standardized tests for evaluating charging efficiency of lead/acid storage batteries described in report. Purpose of tests to provide information for design of battery charger that allows maximum recharge efficiency for electric-vehicle batteries consistent with other operating parameters, such as range, water loss, and cycle life.
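
    Charging efficiency in such tests is typically a coulombic ratio: ampere-hours delivered on discharge over ampere-hours accepted on charge, each integrated from a current-time trace. A sketch with invented constant-current values (not the report's data):

```python
import numpy as np

# Coulombic (charge) efficiency = Ah out / Ah in, integrated from
# current-time traces. All values here are invented for illustration.
t = np.linspace(0.0, 5.0, 501)         # hours
i_charge = np.full_like(t, 10.0)       # 10 A constant charging current
i_discharge = np.full_like(t, 8.5)     # 8.5 A constant discharge current

def amp_hours(current, time):
    """Trapezoidal integral of a current trace over time."""
    return float(np.sum(0.5 * (current[1:] + current[:-1]) * np.diff(time)))

efficiency = amp_hours(i_discharge, t) / amp_hours(i_charge, t)
print(round(efficiency, 3))  # 0.85
```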

  14. Energetic Phenomena on the Sun: The Solar Maximum Mission Flare Workshop. Proceedings

    NASA Technical Reports Server (NTRS)

    Kundu, Mukul (Editor); Woodgate, Bruce (Editor)

    1986-01-01

    The general objectives of the conference were as follows: (1) Synthesize flare studies after three years of Solar Maximum Mission (SMM) data analysis. Encourage a broader participation in the SMM data analysis and combine this more fully with theory and other data sources: data obtained with other spacecraft such as HINOTORI, P78-1, and ISEE-3, and with the Very Large Array (VLA) and many other ground-based instruments. Many coordinated data sets, unprecedented in their breadth of coverage and multiplicity of sources, had been obtained within the structure of the Solar Maximum Year (SMY). (2) Stimulate joint studies, and publication in the general scientific literature. The intended primary benefit was for informal collaborations to be started or broadened at the Workshops, with subsequent publications. (3) Provide a special publication resulting from the Workshop.

  15. NOLIN: A nonlinear laminate analysis program

    NASA Technical Reports Server (NTRS)

    Kibler, J. J.

    1975-01-01

    A nonlinear, plane-stress, laminate analysis program, NOLIN, was developed which accounts for laminae nonlinearity under inplane shear and transverse extensional stress. The program determines the nonlinear stress-strain behavior of symmetric laminates subjected to any combination of inplane shear and biaxial extensional loadings. The program has the ability to treat different stress-strain behavior in tension and compression, and predicts laminate failure using any or all of maximum stress, maximum strain, and quadratic interaction failure criteria. A brief description of the program is presented, including discussion of the flow of information and details of the input required. Sample problems and a complete listing of the program are also provided.

  16. White paper updating conclusions of 1998 ILAW performance assessment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    MANN, F.M.

    The purpose of this document is to provide a comparison of the estimated immobilized low-activity waste (LAW) disposal system performance against established performance objectives using the best estimates for parameters and models to describe the system. The principal advances in knowledge since the last performance assessment (known as the 1998 ILAW PA [Mann 1998a]) have been in site-specific information and data on the waste form performance for BNFL, Inc. relevant glass formulations. The white paper also estimates the maximum release rates for technetium and other key radionuclides and chemicals from the waste form. Finally, this white paper provides limited information on the impact of changes in waste form loading.

  17. Mapping hurricane rita inland storm tide

    USGS Publications Warehouse

    Berenbrock, C.; Mason, R.R.; Blanchard, S.F.

    2009-01-01

    Flood-inundation data are most useful for decision makers when presented in the context of maps of affected communities and (or) areas. But because the data are scarce and rarely cover the full extent of the flooding, interpolation and extrapolation of the information are needed. Many geographic information systems provide various interpolation tools, but these tools often ignore the effects of the topographic and hydraulic features that influence flooding. A barrier mapping method was developed to improve maps of storm tide produced by Hurricane Rita. Maps were developed for the maximum storm tide and at 3-h intervals from midnight (00:00 hours) through noon (12:00 hours) on 24 September 2005. The improved maps depict storm-tide elevations and the extent of flooding. The extent of storm-tide inundation from the improved maximum storm-tide map was compared with the extent of flood inundation from a map prepared by the Federal Emergency Management Agency (FEMA). The boundaries from these two maps generally compared quite well, especially along the Calcasieu River. Also, a cross-section profile that parallels the Louisiana coast was developed from the maximum storm-tide map and included FEMA high-water marks. © 2009 Blackwell Publishing Ltd.

  18. Mapping Hurricane Rita inland storm tide

    USGS Publications Warehouse

    Berenbrock, Charles; Mason, Jr., Robert R.; Blanchard, Stephen F.; Simonovic, Slobodan P.

    2009-01-01

    Flood-inundation data are most useful for decision makers when presented in the context of maps of affected communities and (or) areas. But because the data are scarce and rarely cover the full extent of the flooding, interpolation and extrapolation of the information are needed. Many geographic information systems (GIS) provide various interpolation tools, but these tools often ignore the effects of the topographic and hydraulic features that influence flooding. A barrier mapping method was developed to improve maps of storm tide produced by Hurricane Rita. Maps were developed for the maximum storm tide and at 3-hour intervals from midnight (0000 hour) through noon (1200 hour) on September 24, 2005. The improved maps depict storm-tide elevations and the extent of flooding. The extent of storm-tide inundation from the improved maximum storm-tide map was compared to the extent of flood inundation from a map prepared by the Federal Emergency Management Agency (FEMA). The boundaries from these two maps generally compared quite well, especially along the Calcasieu River. Also, a cross-section profile that parallels the Louisiana coast was developed from the maximum storm-tide map and included FEMA high-water marks.

  19. The efficiency frontier approach to economic evaluation of health-care interventions.

    PubMed

    Caro, J Jaime; Nord, Erik; Siebert, Uwe; McGuire, Alistair; McGregor, Maurice; Henry, David; de Pouvourville, Gérard; Atella, Vincenzo; Kolominsky-Rabas, Peter

    2010-10-01

    IQWiG commissioned an international panel of experts to develop methods for the assessment of the relation of benefits to costs in the German statutory health-care system. The panel recommended that IQWiG inform German decision makers of the net costs and value of additional benefits of an intervention in the context of relevant other interventions in that indication. To facilitate guidance regarding maximum reimbursement, this information is presented in an efficiency plot with costs on the horizontal axis and value of benefits on the vertical. The efficiency frontier links the interventions that are not dominated and provides guidance. A technology that lies on the frontier or to its left is reasonably efficient, while one falling to its right requires further justification for reimbursement at that price. This information does not automatically give the maximum reimbursement, as other considerations may be relevant. Given that the estimates are for a specific indication, they do not address priority setting across the health-care system. This approach informs decision makers about efficiency of interventions, conforms to the mandate and is consistent with basic economic principles. Empirical testing of its feasibility and usefulness is required.
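    The frontier construction described above can be sketched as a simple dominance filter. This is an illustrative simplification (the function name and data are hypothetical, and the actual IQWiG methodology involves further steps beyond plain dominance):

```python
# Sketch of an efficiency frontier as a dominance filter: among interventions
# plotted as (cost, value-of-benefit) points, keep only those not dominated
# by an intervention with lower-or-equal cost and at least as much benefit.
def efficiency_frontier(interventions):
    # Sort by ascending cost; ties broken by descending benefit, so an
    # equal-cost option with lower benefit is filtered out.
    pts = sorted(interventions, key=lambda p: (p[0], -p[1]))
    frontier, best_benefit = [], float("-inf")
    for cost, benefit in pts:
        if benefit > best_benefit:   # not dominated by anything cheaper
            frontier.append((cost, benefit))
            best_benefit = benefit
    return frontier

# Hypothetical interventions: (annual cost, value of benefit)
points = [(10, 5), (20, 4), (15, 8), (30, 10), (25, 9)]
print(efficiency_frontier(points))  # [(10, 5), (15, 8), (25, 9), (30, 10)]
```

    A new technology can then be judged by where it falls relative to the returned frontier points.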

  20. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    Upper bounds on high-speed satellite collision probability, P (sub c), have been investigated. Previous methods assume an individual position error covariance matrix is available for each object; the two matrices are combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information for only one of the two objects was available, either some default shape was used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high-speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high-speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations), or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and that the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? In that case the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information. The various usual methods of finding a maximum P (sub c) are then of no use because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think that, given an assumption of no covariance information, an analyst might still attempt to determine the error covariance matrix that results in an upper bound on the P (sub c). Without some guidance on limits to the shape, size, and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object is exceptionally small, this method results in a maximum P (sub c) too large to be of practical use. For example, assume that the miss distance is equal to the current ISS alert volume along-track (+ or -) distance of 25 kilometers and that the at-risk area has a 70-meter radius. The maximum (degenerate ellipse) P (sub c) is about 0.00136. At 40 kilometers, the maximum P (sub c) would be 0.00085, which is still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P (sub c) associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst. Some improvement may be made with respect to this problem by realizing that while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P (sub c) which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
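    The degenerate-ellipse figures quoted in the abstract are consistent with a simple closed form: for a 1-D Gaussian along the miss vector at distance d, maximizing the probability mass within an at-risk radius R over the unknown standard deviation (the maximum occurs at sigma = d, for d much larger than R) gives Pc_max ≈ 2R / (d·sqrt(2πe)). This is a standard maximization result reconstructed here for illustration, not code from the paper:

```python
import math

# Maximum collision probability for the degenerate-ellipse case: the unknown
# relative position error collapses to a 1-D Gaussian along the miss vector.
# Maximizing over the unknown standard deviation (for d >> R) yields
#     Pc_max ~= 2R / (d * sqrt(2*pi*e))
def pc_max_degenerate(miss_distance_m, at_risk_radius_m):
    return 2.0 * at_risk_radius_m / (miss_distance_m * math.sqrt(2.0 * math.pi * math.e))

# Figures quoted in the abstract (70 m at-risk radius):
print(round(pc_max_degenerate(25_000.0, 70.0), 5))   # 0.00136
print(round(pc_max_degenerate(40_000.0, 70.0), 5))   # 0.00085
```

    At 340 km the same expression drops just below the 0.0001 ISS maneuver threshold, matching the abstract's third figure.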

  1. Note: Fully integrated active quenching circuit achieving 100 MHz count rate with custom technology single photon avalanche diodes.

    PubMed

    Acconcia, G; Labanca, I; Rech, I; Gulinatti, A; Ghioni, M

    2017-02-01

    The minimization of Single Photon Avalanche Diodes (SPADs) dead time is a key factor to speed up photon counting and timing measurements. We present a fully integrated Active Quenching Circuit (AQC) able to provide a count rate as high as 100 MHz with custom technology SPAD detectors. The AQC can also operate the new red enhanced SPAD and provide the timing information with a timing jitter Full Width at Half Maximum (FWHM) as low as 160 ps.

  2. Selection and implementation of a distributed phased archive for a multivendor incremental approach to PACS

    NASA Astrophysics Data System (ADS)

    Smith, Edward M.; Wandtke, John; Robinson, Arvin E.

    1999-07-01

    The selection criteria for the archive were based on the objectives of the Medical Information, Communication and Archive System (MICAS), a multi-vendor incremental approach to PACS. These objectives include interoperability between all components, seamless integration of the Radiology Information System (RIS) with MICAS and eventually other hospital databases, demonstrated DICOM compliance of all components prior to acceptance, and automated workflow that can be programmed to meet changes in the healthcare environment. The long-term multi-modality archive is being implemented in three or more phases, with the first phase designed to provide a 12- to 18-month storage solution. This decision was made because the cost per GB of storage is rapidly decreasing and the speed at which data can be retrieved is increasing with time. The open solution selected allows incorporation of leading-edge, 'best of breed' hardware and software and provides maximum flexibility of workflow both within and outside of radiology. The selected solution is media independent, supports multiple jukeboxes, provides expandable storage capacity, and will provide redundancy and fault tolerance at minimal cost. Some of the required attributes of the archive include a scalable archive strategy, a virtual image database with global query, and an object-oriented database. The selection process took approximately 10 months, with Cemax-Icon being the vendor selected. Prior to signing a purchase order, Cemax-Icon performed a site survey, agreed upon the acceptance test protocol, and provided a written guarantee of connectivity between their archive and the imaging modalities and other MICAS components.

  3. Laboratory Training Manual on the Use of Radionuclides and Radiation in Animal Research, Third Edition.

    ERIC Educational Resources Information Center

    International Atomic Energy Agency, Vienna (Austria).

    This publication is written for those researchers who are interested in the use of radionuclides and radiation in the animal science field. Part I presents topics intended to provide the theoretical base of radionuclides which is important in order to design an experiment for drawing maximum information from it. The topics included in this…

  4. 78 FR 27341 - Restrictions on Legal Assistance With Respect to Criminal Proceedings in Tribal Courts

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-05-10

    ... under the LSC's adoption of the Freedom of Information Act, 42 U.S.C. 2996d(g), and the LSC regulation... maximum jail sentence that any tribal court may impose from one to three years for any single offense... a defendant is indigent, providing the defendant with a licensed defense attorney at the tribe's...

  5. Stereometric Analysis Of Static Equilibrium In CNS Disorders

    NASA Astrophysics Data System (ADS)

    Sheffer, D. B.; Lehmkuhl, D. L.; Herron, R. E.

    1980-07-01

    A primary aim in the physical rehabilitation of individuals with severe head or spinal injuries resulting in hemiparesis, tetraparesis or paraparesis, is the restoration of the functional motor abilities controlling involved muscle groups. The regaining of trunk postural stability provides a valuable antecedent in the recovery of use of the arms and legs. Accurate objective information must be provided to the therapist for assessment of treatment regimes. At present, few objective practical methods are available to furnish this evaluative information. Therefore the purpose of this study was to investigate the use of a biostereometric range of motion sensor for recording and quantifying trunk static equilibrium in individuals undergoing therapy for head trauma. The sensor located the relative position of the C-7 vertebra of the patient in space using continual monitoring of spherical coordinates. Results of the test protocol included: plots of the movement of the trunk excursions, determination of the maximum area of excursion and a trunk sway index (the relationship of the maximum area, the total excursion distance and a time factor). Further results demonstrated that the biostereometric sensor yielded quantitative documentation of improvement in a patient undergoing therapy.

  6. Improvements in lake water budget computations using Landsat data

    NASA Technical Reports Server (NTRS)

    Gervin, J. C.; Shih, S. F.

    1979-01-01

    A supervised multispectral classification was performed on Landsat data for Lake Okeechobee's extensive littoral zone to provide two types of information. First, the acreage of a given plant species as measured by satellite was combined with a more accurate transpiration rate to give a better estimate of evapotranspiration from the littoral zone. Second, the surface area covered by plant communities was used to develop a better estimate of the water surface as a function of lake stage. Based on this information, more detailed representations of evapotranspiration and total water surface (and hence total lake volume) were provided to the water balance budget model for lake volume predictions. The model results based on satellite-derived information demonstrated a 94 percent reduction in cumulative lake stage error and a 70 percent reduction in the maximum deviation of the lake stage.

  7. Remote sensing for rural development planning in Africa

    NASA Technical Reports Server (NTRS)

    Dunford, C.; Mouat, D. A.; Norton-Griffiths, M.; Slaymaker, D. M.

    1983-01-01

    Multilevel remote-sensing techniques were combined to provide land resource and land-use information for rural development planning in Arusha Region, Tanzania. Enhanced Landsat imagery, supplemented by low-level aerial survey data, slope angle data from topographic sheets, and existing reports on vegetation and soil conditions, was used jointly by image analysts and district-level land-management officials to divide the region's six districts into land-planning units. District-planning officials selected a number of these land-planning units for priority planning and development activities. For the priority areas, natural color aerial photographs provided detailed information for land-use planning discussions between district officials and villagers. Consideration of the efficiency of this remote sensing approach leads to general recommendations for similar applications. The technology and timing of data collection and interpretation activities should allow maximum participation by intended users of the information.

  8. A perspective of synthetic aperture radar for remote sensing

    NASA Technical Reports Server (NTRS)

    Skolnik, M. I.

    1978-01-01

    The characteristics and capabilities of synthetic aperture radar (SAR) are discussed so as to identify those features particularly unique to SAR. SAR and optical images were compared. SAR is an example of radar that provides more information about a target than simply its location. It is the spatial resolution and imaging capability of SAR that have made its application of interest, especially from spaceborne platforms. However, for maximum utility to remote sensing, it was proposed that other information be extracted from SAR data, such as the cross section with frequency and polarization.

  9. Toxicity studies on agent GA (Phase 2): 90 day subchronic study of GA (Tabun) in cd rats. Appendices. Final report, July 1985-August 1991

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1992-03-01

    The purpose of the report is to provide essential toxicologic information on Tabun administration over a 90 day period. This toxicologic information may be used to adjust the maximum-tolerated dose for subsequent dominant-lethal and two-generation reproduction studies. The objectives were to determine the toxic effects of nerve agent exposure (e.g., target organs); and to determine the effects of nerve agent GA on sperm morphology and motility and vaginal cytology.

  10. Determination of contact angle from the maximum height of enlarged drops on solid surfaces

    NASA Astrophysics Data System (ADS)

    Behroozi, F.

    2012-04-01

    Measurement of the liquid/solid contact angle provides useful information on the wetting properties of fluids. In 1870, the German physicist Georg Hermann Quincke (1834-1924) published the functional relation between the maximum height of an enlarged drop and its contact angle. Quincke's relation offered an alternative to the direct measurement of contact angle, which in practice suffers from several experimental uncertainties. In this paper, we review Quincke's original derivation and show that it is based on a hidden assumption. We then present a new derivation that exposes this assumption and clarifies the conditions under which Quincke's relation is valid. To explore Quincke's relation experimentally, we measure the maximum height of enlarged water drops on several substrates and calculate the contact angle in each case. Our results are in good agreement with contact angles measured directly from droplet images.
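    For a large, puddle-like drop, the relation can be written with the capillary length l_c = sqrt(γ/ρg) as h = 2·l_c·sin(θ/2), which inverts directly to give the contact angle from a measured maximum height. The sketch below assumes this standard capillary-statics form (not quoted from the paper) and nominal room-temperature water properties:

```python
import math

# Contact angle from the maximum height of a large (puddle-like) drop,
# using the standard capillary-statics form  h = 2 * l_c * sin(theta/2),
# where l_c = sqrt(gamma / (rho * g)) is the capillary length.
def capillary_length(gamma, rho, g=9.81):
    return math.sqrt(gamma / (rho * g))

def max_height(theta_deg, gamma, rho):
    lc = capillary_length(gamma, rho)
    return 2.0 * lc * math.sin(math.radians(theta_deg) / 2.0)

def contact_angle(h, gamma, rho):
    lc = capillary_length(gamma, rho)
    return math.degrees(2.0 * math.asin(h / (2.0 * lc)))

# Assumed water properties at room temperature: gamma = 0.072 N/m, rho = 1000 kg/m^3
h = max_height(110.0, 0.072, 1000.0)              # hydrophobic surface, theta = 110 deg
print(round(contact_angle(h, 0.072, 1000.0), 1))  # round-trip recovers 110.0
```

    For water the capillary length is about 2.7 mm, so maximum puddle heights of a few millimeters are expected.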

  11. How much a quantum measurement is informative?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Arno, Michele; ICFO-Institut de Ciencies Fotoniques, E-08860 Castelldefels, Barcelona; Quit Group, Dipartimento di Fisica, via Bassi 6, I-27100 Pavia

    2014-12-04

    The informational power of a quantum measurement is the maximum amount of classical information that the measurement can extract from any ensemble of quantum states. We discuss its main properties. Informational power is an additive quantity, being equivalent to the classical capacity of a quantum-classical channel. The informational power of a quantum measurement is the maximum of the accessible information of a quantum ensemble that depends on the measurement. We present some examples where the symmetry of the measurement allows its informational power to be derived analytically.

  12. Optimal tuning of a confined Brownian information engine.

    PubMed

    Park, Jong-Min; Lee, Jae Sung; Noh, Jae Dong

    2016-03-01

    A Brownian information engine is a device extracting mechanical work from a single heat bath by exploiting the information on the state of a Brownian particle immersed in the bath. As with any engine, it is important to find the optimal operating condition that yields the maximum extracted work or power. The optimal condition for a Brownian information engine with a finite cycle time τ has been rarely studied because of the difficulty in finding the nonequilibrium steady state. In this study, we introduce a model for the Brownian information engine and develop an analytic formalism for its steady-state distribution for any τ. We find that the extracted work per engine cycle is maximum when τ approaches infinity, while the power is maximum when τ approaches zero.

  13. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  14. Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations

    DTIC Science & Technology

    2017-08-21

    distributions, and we discuss some applications for engineered and biological information transmission systems. Keywords: information theory; minimum...of its interpretation as a measure of the amount of information communicable by a neural system to groups of downstream neurons. Previous authors...of the maximum entropy approach. Our results also have relevance for engineered information transmission systems. We show that empirically measured

  15. A double-gaussian, percentile-based method for estimating maximum blood flow velocity.

    PubMed

    Marzban, Caren; Illian, Paul R; Morison, David; Mourad, Pierre D

    2013-11-01

    Transcranial Doppler sonography allows for the estimation of blood flow velocity, whose maximum value, especially at systole, is often of clinical interest. Given that observed values of flow velocity are subject to noise, a useful notion of "maximum" requires a criterion for separating the signal from the noise. All commonly used criteria produce a point estimate (i.e., a single value) of maximum flow velocity at any time and therefore convey no information on the distribution or uncertainty of flow velocity. This limitation has clinical consequences especially for patients in vasospasm, whose largest flow velocities can be difficult to measure. Therefore, a method for estimating flow velocity and its uncertainty is desirable. A gaussian mixture model is used to separate the noise from the signal distribution. The time series of a given percentile of the latter, then, provides a flow velocity envelope. This means of estimating the flow velocity envelope naturally allows for displaying several percentiles (e.g., 95th and 99th), thereby conveying uncertainty in the highest flow velocity. Such envelopes were computed for 59 patients and were shown to provide reasonable and useful estimates of the largest flow velocities compared with a standard algorithm. Moreover, we found that the commonly used envelope was generally consistent with the 90th percentile of the signal distribution derived via the gaussian mixture model. Separating the observed distribution of flow velocity into a noise component and a signal component, using a double-gaussian mixture model, allows for the percentiles of the latter to provide meaningful measures of the largest flow velocities and their uncertainty.
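    The double-gaussian separation can be sketched as a small two-component EM fit followed by reading a percentile off the higher-mean (signal) component. Everything here (initialization, synthetic data, function names) is illustrative rather than the authors' implementation:

```python
import math
import random
from statistics import NormalDist

# Fit a two-component 1-D Gaussian mixture by EM; the higher-mean component
# is treated as the signal, and a flow-velocity percentile is read off it.
def fit_two_gaussians(data, iters=50):
    xs = sorted(data)
    n = len(xs)
    mu = [xs[n // 4], xs[3 * n // 4]]             # crude quartile initialization
    sd = [max((xs[-1] - xs[0]) / 4.0, 1e-6)] * 2
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []                                  # E-step: responsibilities
        for x in data:
            p = [w[k] * NormalDist(mu[k], sd[k]).pdf(x) for k in range(2)]
            s = p[0] + p[1]
            if s == 0.0:                           # numeric underflow guard
                resp.append([0.5, 0.5])
            else:
                resp.append([p[0] / s, p[1] / s])
        for k in range(2):                         # M-step: update parameters
            nk = sum(r[k] for r in resp)
            w[k] = nk / n
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            sd[k] = math.sqrt(max(var, 1e-12))
    return w, mu, sd

def signal_percentile(mu, sd, q=0.90):
    k = 0 if mu[0] > mu[1] else 1                  # higher-mean component = signal
    return NormalDist(mu[k], sd[k]).inv_cdf(q)

random.seed(0)
data = ([random.gauss(20.0, 5.0) for _ in range(500)]       # noise (cm/s)
        + [random.gauss(100.0, 15.0) for _ in range(500)])  # signal (cm/s)
w, mu, sd = fit_two_gaussians(data)
envelope = signal_percentile(mu, sd, 0.90)  # ~ mu_signal + 1.28 * sd_signal
```

    Displaying several such percentiles (e.g., 90th, 95th, 99th) conveys the uncertainty in the highest flow velocities, as described above.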

  16. 14 CFR 23.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating altitude. 23.1527 Section... Information § 23.1527 Maximum operating altitude. (a) The maximum altitude up to which operation is allowed... established. (b) A maximum operating altitude limitation of not more than 25,000 feet must be established for...

  17. Naval Petroleum and Oil Shale Reserves Combined Financial Statements September 30, 1994 and 1993 and Management Overview and Supplemental Financial and Management Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    NONE

    1994-12-31

    This report presents the results of the independent certified public accountant's audit of the Department of Energy's (Department) Naval Petroleum and Oil Shale Reserves (NPOSR) financial statements as of September 30, 1994. The auditors have expressed an unqualified opinion on the 1994 statements. Their reports on the NPOSR internal control structure and on compliance with laws and regulations, and management letter on addressing needed improvements are also provided. NPOSR consists of petroleum reserves in California and Wyoming, and oil shale reserves in Colorado and Utah. The Government's interests in NPOSR are managed by the Department through its headquarters office in Washington, D.C. In addition, the Department has site offices in both California and Wyoming that are responsible for contractor oversight functions. Daily operations are conducted under contract by two management and operating contractors. By law, NPOSR was authorized to produce crude oil at the maximum efficient rate for six years. The law allowed production to be extended for three year periods, provided that the President of the United States certified that continued maximum production was in the best interest of the nation. The current three year period ends on April 5, 1997. Additional information about NPOSR is provided in the overview and notes to the financial statements.

  18. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

    In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of the maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083

  19. 14 CFR 23.1524 - Maximum passenger seating configuration.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum passenger seating configuration. 23.1524 Section 23.1524 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum...

  20. Concertina browsers: a formative evaluation of user preference.

    PubMed

    Harper, Simon; Christophorou, Nicola

    2008-09-01

    Evidence suggests that concertina browsers - browsers with the facility to expand and contract sections of information - are important in providing the reader with an enhanced cognition of small to medium amounts of information. These systems have been shown to be useful for visually disabled users surfing the World Wide Web (Web), and with the development of the Mobile Web, there has been renewed interest in their use. This is due to the similarities of reduced or constrained vision found to exist between visually impaired users and the users of mobile devices. The cognition of information fragments is key to the user experience and the reduction of 'information overload'; as such we are concerned with assisting designers of concertina browsers in providing an enhanced user experience by ascertaining user preference through a formative evaluation of concertina summaries. This aspect of browsing is important because in all concertina systems there is a distinct cognition speed/depth trade-off. Here we investigate a number of these concertina summarization techniques against each other. We describe a formative evaluation which concludes that users prefer concertina summarization of Web documents starting from 6.25% slices of both the top and bottom and expanding from the top in 2% steps to a target maximum of 18.50% (being 12.25% from the top and 6.25% from the bottom). These preferences were found to be representative of documents of less than 600 words of content, and included the preference to not fragment an individual sentence even if that meant slightly exceeding the target starting, maximum, and step percentage slices.
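    The preferred summarization policy can be sketched as a word-budget slicer that never splits a sentence (so a slice may slightly exceed its target, as the participants preferred). The function and sample data are hypothetical; only the percentages come from the study:

```python
# Sketch of the preferred concertina summarization policy: slice a target
# fraction of the document's words from the top and from the bottom, always
# keeping whole sentences (a slice may slightly exceed its target).
def concertina_slices(sentences, top_frac=0.1225, bottom_frac=0.0625):
    total_words = sum(len(s.split()) for s in sentences)
    top, words = [], 0
    for s in sentences:
        top.append(s)
        words += len(s.split())
        if words >= top_frac * total_words:    # stop once the target is met
            break
    bottom, words = [], 0
    for s in reversed(sentences[len(top):]):   # avoid overlap with the top slice
        bottom.insert(0, s)
        words += len(s.split())
        if words >= bottom_frac * total_words:
            break
    return top, bottom

# Hypothetical document: ten sentences of seven words each (70 words total)
doc = ["this sentence has exactly seven words total."] * 10
top, bottom = concertina_slices(doc)
```

    With the study's starting fractions, the top slice here takes two sentences (14 of 70 words, just over 12.25%) and the bottom slice one.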

  1. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
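    For concreteness, the quantity that the C_k sequence lower-bounds can be brute-forced on a toy instance. This is illustrative only (the instance is made up, and the whole point of the C_2, C_3, ... bounds is to avoid this exponential enumeration):

```python
from itertools import product

# Brute-force minimum of a quadratic polynomial over {0,1}^n: the quantity
# that the polynomial-time bounds C_2, C_3, ... approximate from below.
def qubo_min(linear, quadratic):
    n = len(linear)
    best, best_x = None, None
    for x in product((0, 1), repeat=n):
        val = sum(linear[i] * x[i] for i in range(n))
        val += sum(c * x[i] * x[j] for (i, j), c in quadratic.items())
        if best is None or val < best:
            best, best_x = val, x
    return best, best_x

# Hypothetical 3-variable instance: f(x) = 2*x0 - 3*x1 + x2 + 4*x0*x1 - 2*x1*x2
print(qubo_min([2, -3, 1], {(0, 1): 4, (1, 2): -2}))  # (-4, (0, 1, 1))
```

    A valid lower bound C_k for this instance would have to be at most -4; the persistencies described above would pin coordinates of the minimizing assignment (0, 1, 1) without enumerating all 2^n points.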

  2. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.

  3. F-16 Instructional Sequencing Plan Report.

    DTIC Science & Technology

    1981-03-01

    information). 2. Interference (learning of some tasks interferes with the learning of other tasks when they possess similar but confusing differences)...profound effect on the total training expense. This increases the desirability of systematic, precise methods of syllabus generation. Inherent in a given...Least cost: the syllabus must make maximum use of the expensive-to-acquire resource; select sequences which provide a least-total-cost method of

  4. Energy Emergency Management Information System (EEMIS): Functional requirements

    NASA Astrophysics Data System (ADS)

    1980-10-01

    These guidelines state that in order to create the widest practicable competition, the system's requirements, with few exceptions, must be expressed in functional terms without reference to specific hardware or software products, and that wherever exceptions are made a statement of justification must be provided. In addition, these guidelines set forth a recommended maximum threshold limit of annual contract value for schedule contract procurements.

  5. A Comparison of Item Selection Procedures Using Different Ability Estimation Methods in Computerized Adaptive Testing Based on the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Ho, Tsung-Han

    2010-01-01

    Computerized adaptive testing (CAT) provides a highly efficient alternative to the paper-and-pencil test. By selecting items that match examinees' ability levels, CAT not only can shorten test length and administration time but it can also increase measurement precision and reduce measurement error. In CAT, maximum information (MI) is the most…

  6. Leachate Testing of Hamlet City Lake, North Carolina, Sediment

    DTIC Science & Technology

    1992-11-01

    Sediment leaching studies of Hamlet City Lake, Hamlet, NC, were conducted in...laboratories at the U.S. Army Engineer Waterways Experiment Station. The purpose of these studies was to provide quantitative information on the...conditions similar to landfarming. The study involved three elements: batch leach tests, column leach tests, and simulations using the Hydrologic

  7. Guaranteed convergence of the Hough transform

    NASA Astrophysics Data System (ADS)

    Soffer, Menashe; Kiryati, Nahum

    1995-01-01

    The straight-line Hough Transform using normal parameterization with a continuous voting kernel is considered. It transforms the collinearity detection problem to a problem of finding the global maximum of a two-dimensional function above a domain in the parameter space. The principle is similar to robust regression using fixed scale M-estimation. Unlike standard M-estimation procedures, the Hough Transform does not rely on a good initial estimate of the line parameters: the global optimization problem is approached by exhaustive search on a grid that is usually as fine as computationally feasible. The global maximum of a general function above a bounded domain cannot be found by a finite number of function evaluations. Only if sufficient a-priori knowledge about the smoothness of the objective function is available can convergence to the global maximum be guaranteed. The extraction of a-priori information and its efficient use are the main challenges in real global optimization problems. The global optimization problem in the Hough Transform is essentially how fine the parameter space quantization should be in order not to miss the true maximum. More than thirty years after Hough patented the basic algorithm, the problem is still essentially open. In this paper an attempt is made to identify a-priori information on the smoothness of the objective (Hough) function and to introduce sufficient conditions for the convergence of the Hough Transform to the global maximum. An image model with several application dependent parameters is defined. Edge point location errors as well as background noise are accounted for. Minimal parameter space quantization intervals that guarantee convergence are obtained. Focusing policies for multi-resolution Hough algorithms are developed. Theoretical support for bottom-up processing is provided. Due to the randomness of errors and noise, convergence guarantees are probabilistic.
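    The grid search at the heart of the method can be sketched with a minimal normal-parameterization Hough transform; the quantization steps below are exactly the kind of "sufficiently fine grid" the convergence analysis is about (the step values here are illustrative, not the paper's derived bounds):

```python
import math

# Minimal normal-parameterization Hough transform: each point (x, y) votes
# for all (theta, rho) with rho = x*cos(theta) + y*sin(theta); the global
# accumulator maximum recovers the line parameters.
def hough_line(points, theta_step_deg=1.0, rho_step=0.1, rho_max=20.0):
    n_theta = int(180 / theta_step_deg)
    n_rho = int(2 * rho_max / rho_step)
    acc = {}
    for x, y in points:
        for t in range(n_theta):
            theta = math.radians(t * theta_step_deg)
            rho = x * math.cos(theta) + y * math.sin(theta)
            r = int((rho + rho_max) / rho_step)    # quantize rho into a bin
            if 0 <= r < n_rho:
                acc[(t, r)] = acc.get((t, r), 0) + 1
    (t, r), _ = max(acc.items(), key=lambda kv: kv[1])  # global maximum
    return t * theta_step_deg, (r + 0.5) * rho_step - rho_max

# Points on the line x + y = 10, i.e. theta = 45 deg, rho = 10/sqrt(2) ~= 7.07
pts = [(i, 10 - i) for i in range(11)]
theta, rho = hough_line(pts)
```

    Too coarse a (theta, rho) grid would spread these votes over neighboring bins and could miss the true maximum, which is the quantization question the abstract addresses.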

  8. [Study of the health food information for cancer patients on Japanese websites].

    PubMed

    Kishimoto, Keiko; Yoshino, Chie; Fukushima, Noriko

    2010-08-01

    The aim of this paper is to evaluate the reliability of websites providing health food information for cancer patients and to assess how readily this information can be obtained online. We used four common Japanese search engines (Yahoo!, Google, goo, and MSN) to retrieve websites on Dec. 2, 2008, with the search keywords "health food" and "cancer". The first 100 hits from each search engine were screened against three conditions. This retrieval yielded 64 unique websites, of which 54 contained information about health food factors. Two scales were used to evaluate the quality of the content of these 54 websites. On the scale measuring the reliability of information on the Web, the average score was 2.69+/-1.70 (maximum 6) and the median was 2.5. The other scale covered items that should be checked in order to use this information safely; on it, the average score was 0.72+/-1.22 (maximum 5) and the median was 0. For three of the engines, ranking correlated only poorly with the latter score, and several top-ranked websites scored 0. The 54 websites were each retrieved by one to four of the engines (average 1.9). Both scale scores were positively but very weakly correlated with the number of search engines. A high ranking and retrieval by multiple search engines were therefore of little help in picking out more reliable information.

  9. Maximum Mass-Particle Velocities in Kantor's Information Mechanics

    NASA Astrophysics Data System (ADS)

    Sverdlik, Daniel I.

    1989-02-01

    Kantor's information mechanics links phenomena previously regarded as not treatable by a single theory. It is used here to calculate the maximum velocities ν_m of single particles. For the electron, ν_m/c ≈ 1 - 1.253814×10^-77. The largest ν_m corresponds to ν_m/c ≈ 1 - 1.097864×10^-122 for a single mass particle with a rest mass of 3.078496×10^-5 g. This is the fastest that matter can move. Either information mechanics or classical mechanics can be used to show that ν_m is smaller for heavier particles. That ν_m is also smaller for lighter particles can be deduced from an information mechanics argument alone.

  10. 78 FR 13914 - Submission for Review: Survivor Annuity Election for a Spouse, RI 20-63; Cover Letter Giving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-01

    ... 20-63; Cover Letter Giving Information About the Cost To Elect Less Than the Maximum Survivor Annuity, RI 20-116; Cover Letter Giving Information About the Cost To Elect the Maximum Survivor Annuity, RI... other Federal agencies the opportunity to comment on a revised information collection request (ICR 3206...

  11. 78 FR 42986 - Submission for Review: Survivor Annuity Election for a Spouse, RI 20-63; Cover Letter Giving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-18

    .... This letter may be used to ask for more information. Analysis Agency: Retirement Operations, Retirement... 20-63; Cover Letter Giving Information About The Cost To Elect Less Than the Maximum Survivor Annuity, RI 20-116; Cover Letter Giving Information About The Cost To Elect the Maximum Survivor Annuity, RI...

  12. Effectiveness of Visual Methods in Information Procedures for Stem Cell Recipients and Donors

    PubMed Central

    Sarıtürk, Çağla; Gereklioğlu, Çiğdem; Korur, Aslı; Asma, Süheyl; Yeral, Mahmut; Solmaz, Soner; Büyükkurt, Nurhilal; Tepebaşı, Songül; Kozanoğlu, İlknur; Boğa, Can; Özdoğu, Hakan

    2017-01-01

    Objective: Obtaining informed consent from hematopoietic stem cell recipients and donors is a critical step in the transplantation process. Anxiety may affect their understanding of the provided information. However, use of audiovisual methods may facilitate understanding. In this prospective randomized study, we investigated the effectiveness of using an audiovisual method of providing information to patients and donors in combination with the standard model. Materials and Methods: A 10-min informational animation was prepared for this purpose. In total, 82 participants were randomly assigned to two groups: group 1 received the additional audiovisual information and group 2 received standard information. A 20-item questionnaire was administered to participants at the end of the informational session. Results: A reliability test and factor analysis showed that the questionnaire was reliable and valid. For all participants, the mean overall satisfaction score was 184.8±19.8 (maximum possible score of 200). However, for satisfaction with information about written informed consent, group 1 scored significantly higher than group 2 (p=0.039). Satisfaction level was not affected by age, education level, or differences between the physicians conducting the informative session. Conclusion: This study shows that using audiovisual tools may contribute to a better understanding of the informed consent procedure and potential risks of stem cell transplantation. PMID:27476890

  13. Procedures for estimating the frequency of commercial airline flights encountering high cabin ozone levels

    NASA Technical Reports Server (NTRS)

    Holdeman, J. D.

    1979-01-01

    Three analytical problems in estimating the frequency at which commercial airline flights will encounter high cabin ozone levels are formulated and solved: namely, estimating flight-segment mean levels, estimating maximum-per-flight levels, and estimating the maximum average level over a specified flight interval. For each problem, solution procedures are given for different levels of input information - from complete cabin ozone data, which provides a direct solution, to limited ozone information, such as ambient ozone means and standard deviations, with which several assumptions are necessary to obtain the required estimates. Each procedure is illustrated by an example case calculation that uses simultaneous cabin and ambient ozone data obtained by the NASA Global Atmospheric Sampling Program. Critical assumptions are discussed and evaluated, and the several solutions for each problem are compared. Example calculations are also performed to illustrate how variations in latitude, altitude, season, retention ratio, flight duration, and cabin ozone limits affect the estimated probabilities.
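    When only ambient means and standard deviations are available, assumptions are needed, as the abstract notes. One simple assumption (ours, for illustration; the paper's actual procedures are more detailed) is that the segment-mean cabin ozone level is normally distributed, in which case the probability of exceeding a limit follows directly from the standard normal CDF:

```python
import math

def exceedance_probability(mean, std, limit):
    """P(level > limit) for a normally distributed level, using the
    standard normal CDF built from math.erf (no SciPy required)."""
    z = (limit - mean) / std
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return 1.0 - cdf
```

    For a hypothetical segment mean of 0.05 ppm with a standard deviation of 0.02 ppm, the probability of exceeding a 0.09 ppm limit is about 0.023, the familiar upper tail beyond two standard deviations.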

  14. Eyesafe laser cloud mapper

    NASA Astrophysics Data System (ADS)

    Woodall, Milton A., II; Minch, J. R.; Nunez, J.; Keeter, Howard S.; Johnson, Anthony M.

    1990-07-01

    The performance of eyesafe erbium:glass lasers operating at a wavelength of 1.54 μm has been tested under various natural and manmade obscurants. To obtain the maximum amount of information, two distinct system configurations were employed. The first, a laser cloud mapper, was designed to provide a direct depth profile of smoke density and reflectivity as well as target position. The second configuration was a production military laser rangefinder. It is representative of systems currently incorporated in tactical armored vehicles and was used to provide a direct indication of target range.

  15. Assessing Family Planning Service Quality And User Experiences In Social Franchising Programme - Case Studies From Two Rural Districts In Pakistan.

    PubMed

    Azmat, Syed Khurram; Ali, Moazzam; Hameed, Waqas; Awan, Muhammad Ali

    2018-01-01

    Studies have documented the impact of quality family planning services on improved contraceptive uptake and continuation; however, relatively little is known about the quality of service provision, especially in the context of social franchising. This study examined the quality of clinical services and user experiences under two franchised service-provider models in rural Pakistan. This facility-based assessment was carried out during May-June 2015 at 20 randomly selected social franchise providers from Chakwal and Faisalabad. In our case, a franchise health facility was a private clinic, mostly run by a single provider supported by an assistant. Within the selected health facilities, a total of 39 user-provider interactions were observed and the same users were interviewed separately. Most of the health facilities were in the private sector. Comparatively, service providers at Greenstar Social Marketing/Population Services International (GSM/PSI) model franchised facilities had a higher number of rooms and staff employed, with greater provider ownership. Quality-of-service indices showed high scores for both Marie Stopes Society (MSS) and GSM/PSI franchised providers. MSS franchised providers demonstrated a comparative edge in terms of clinical governance and method mix and were more user-focused, while PSI providers offered a broader range of non-FP services. Quality of counselling services was similar in both models. Service providers performed well on all indicators of interpersonal care; however, overall low scores were noted in technical care. For both models, service providers attained an average score of 6.7 (out of a maximum of 8) on waste disposal mechanisms, 12.5 (out of 15) on supplies, 2.7 (out of 4) on user-centred facilities, and 6.5 (out of 11) on clinical governance and respecting clients' privacy. The exit interviews yielded high user satisfaction in both service models.
    The findings suggest that the MSS and GSM/PSI service providers maintained high quality standards in the provision of family planning information, services, and commodities, but overall there was little difference between the two models in terms of quality and satisfaction. The results demonstrate that service quality and client satisfaction are important determinants of the use of clinical contraceptive methods in Pakistan.

  16. Plant Distribution Data Show Broader Climatic Limits than Expert-Based Climatic Tolerance Estimates

    PubMed Central

    Curtis, Caroline A.; Bradley, Bethany A.

    2016-01-01

    Background Although increasingly sophisticated environmental measures are being applied to species distribution models, the focus remains on using climatic data to provide estimates of habitat suitability. Climatic tolerance estimates based on expert knowledge are available for a wide range of plants via the USDA PLANTS database. We aim to test how climatic tolerance inferred from plant distribution records relates to tolerance estimated by experts. Further, we use this information to identify circumstances when species distributions are more likely to approximate climatic tolerance. Methods We compiled expert knowledge estimates of minimum and maximum precipitation and minimum temperature tolerance for over 1800 conservation plant species from the ‘plant characteristics’ information in the USDA PLANTS database. We derived climatic tolerance from distribution data downloaded from the Global Biodiversity Information Facility (GBIF) and corresponding climate from WorldClim. We compared expert-derived climatic tolerance to empirical estimates to find the difference between their inferred climate niches (ΔCN), and tested whether ΔCN was influenced by growth form or range size. Results Climate niches calculated from distribution data were significantly broader than expert-based tolerance estimates (Mann-Whitney p values << 0.001). The average plant could tolerate 24 mm lower minimum precipitation, 14 mm higher maximum precipitation, and 7 °C lower minimum temperatures based on distribution data relative to expert-based tolerance estimates. Species with larger ranges had greater ΔCN for minimum precipitation and minimum temperature. For maximum precipitation and minimum temperature, forbs and grasses tended to have larger ΔCN, while grasses and trees had larger ΔCN for minimum precipitation.
    Conclusion Our results show that climatic tolerances derived from distribution data are consistently broader than USDA PLANTS experts’ estimates and likely provide more robust estimates of climatic tolerance, especially for widespread forbs and grasses. These findings suggest that widely available expert-based climatic tolerance estimates underrepresent species’ fundamental niche and likely fail to capture the realized niche. PMID:27870859

  17. Building pathway graphs from BioPAX data in R.

    PubMed

    Benis, Nirupama; Schokker, Dirkjan; Kramer, Frank; Smits, Mari A; Suarez-Diez, Maria

    2016-01-01

    Biological pathways are increasingly available in the BioPAX format, which uses an RDF model for data storage. One can retrieve the information in this data model in the scripting language R using the package rBiopaxParser, which converts the BioPAX format to one readable in R. It also has a function to build a regulatory network from the pathway information. Here we describe an extension of this function. The new function allows the user to build graphs of entire pathways, including regulated as well as non-regulated elements, and therefore provides a maximum of information. This function is available as part of the rBiopaxParser distribution from Bioconductor.
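    rBiopaxParser is an R/Bioconductor package, so the snippet below is only a language-neutral sketch (in Python, with invented names) of the idea the extended function implements: build a graph from regulation triples while also keeping pathway elements that participate in no regulation, so no information is lost.

```python
def build_pathway_graph(triples, isolated=()):
    """Adjacency-list graph from (source, relation, target) triples.
    Nodes in `isolated` take part in no regulation edge but are kept,
    mirroring the inclusion of non-regulated pathway elements."""
    graph = {}
    for src, rel, dst in triples:
        graph.setdefault(src, []).append((dst, rel))
        graph.setdefault(dst, [])          # ensure targets appear as nodes
    for node in isolated:
        graph.setdefault(node, [])         # non-regulated elements
    return graph
```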

  18. Providing a complete online multimedia patient record.

    PubMed Central

    Dayhoff, R. E.; Kuzmak, P. M.; Kirin, G.; Frank, S.

    1999-01-01

    Seamless integration of all types of patient data is a critical feature for clinical workstation software. The Dept. of Veterans Affairs has developed a multimedia online patient record that includes traditional medical chart information as well as a wide variety of medical images from specialties such as cardiology, pulmonary and gastrointestinal medicine, pathology, radiology, hematology, and nuclear medicine. This online patient record can present data in ways not possible with a paper chart or other physical media. Obtaining a critical mass of information online is essential to achieve the maximum benefits from an integrated patient record system. PMID:10566357

  19. Hospital Based Customization of a Medical Information System

    PubMed Central

    Rath, Marilyn A.; Ferguson, Julie C.

    1983-01-01

    A Medical Information System must be current if it is to be a viable adjunct to patient care within a hospital setting. Hospital-based customization provides a means of achieving this timeliness with maximum user satisfaction. It, however, requires a major commitment in personnel time as well as additional software and training expenses. The enhanced control of system modifications and overall flexibility in planning the change process result in enthusiastic support of this approach by many hospitals. The key factors for success include careful selection of local personnel with adequate vendor support, extensive QA control, thorough auditing/validation and direct user involvement.

  20. Stochastic characteristics of different duration annual maximum rainfall and its spatial difference in China based on information entropy

    NASA Astrophysics Data System (ADS)

    Li, X.; Sang, Y. F.

    2017-12-01

    Mountain torrents, urban floods and other disasters caused by extreme precipitation bring great losses to the ecological environment, social and economic development, and people's lives and property. Studying the spatial distribution of extreme precipitation is therefore of great significance for flood prevention and control. Based on annual maximum rainfall data for 60-min, 6-h and 24-h durations, the paper generates long sequences following the Pearson-III distribution and then uses an information entropy index to study the spatial distribution and differences across durations. The results show that the information entropy of annual maximum rainfall in the southern region is greater than that in the northern region, indicating more obvious stochastic characteristics of annual maximum rainfall in the latter. However, the spatial distribution of stochastic characteristics differs among durations. For example, the stochastic characteristics of 60-min annual maximum rainfall in eastern Tibet are smaller than in the surrounding area, while those of 6-h and 24-h annual maximum rainfall are larger. In the Haihe and Huaihe River Basins, the stochastic characteristics of 60-min annual maximum rainfall are not significantly different from the surrounding area, while those of 6-h and 24-h rainfall are smaller. We conclude that the spatial distribution of information entropy values of annual maximum rainfall at different durations can reflect the spatial distribution of its stochastic characteristics; the results can thus serve as an important scientific basis for flood prevention and control, agriculture, socio-economic development, and urban flood and waterlogging control.
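    The abstract does not specify how its information entropy index is computed; a generic histogram-based Shannon entropy (a simple stand-in with an arbitrary bin count, shown only to make the index concrete) can be sketched as:

```python
import math

def shannon_entropy(samples, n_bins=20):
    """Histogram estimate of Shannon entropy (in nats): more spread-out
    annual-maximum samples give higher entropy, i.e. more obvious
    stochastic characteristics."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / n_bins or 1.0
    counts = [0] * n_bins
    for x in samples:
        counts[min(int((x - lo) / width), n_bins - 1)] += 1
    n = len(samples)
    return -sum(c / n * math.log(c / n) for c in counts if c)
```

    A near-uniform sample scores close to the maximum log(n_bins), while a tightly concentrated sample scores much lower.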

  1. Characteristics of Physical Training Activities of West Coast U.S. Navy Sea-Air-Land Personnel (SEALS)

    DTIC Science & Technology

    1992-11-01

    ...Sea, Air, Land (SEAL) personnel undergoing advanced training. Responses to this questionnaire provided information on the types, frequencies, and...their responses were used to characterize training activity according to the American College of Sports Medicine guidelines for maintenance of aerobic

  2. RESEARCH FOR MANAGING URBAN WATERSHED MICROBIAL CONTAMINATION (PROJECT 1: MANAGING URBAN WATERSHED PATHOGEN CONTAMINATION: 2. EFFECT OF LAND USE AND SEASON ON MICROORGANISM CONCENTRATION ON URBAN STORMWATER RUNOFF; 3. MICROORGANISM DIE-OFF RATES UNDER VARIOUS CONDITIONS.

    EPA Science Inventory

    The Water Supply and Water Resources Division (WSWRD) developed a document entitled Managing Urban Watershed Pathogen Contamination (EPA 600/R-03/111). This document provides information to support specific steps of the total maximum daily load (TMDL) process for meeting water q...

  3. Hydrologic models for land-atmosphere retrospective studies of the use of LANDSAT and AVHRR data

    NASA Technical Reports Server (NTRS)

    Duchon, Claude E.; Williams, T. H. Lee; Nicks, Arlin D.

    1988-01-01

    The use of a Geographic Information System (GIS) and LANDSAT analysis in conjunction with the Simulator for Water Resources on a Rural Basin (SWRRB) hydrologic model to examine the water balance on the Little Washita River basin is discussed. LANDSAT analysis was used to divide the basin into eight non-contiguous land covers or subareas: rangeland, grazed range, winter wheat, alfalfa/pasture, bare soil, water, woodland, and impervious land (roads, quarry). The use of a geographic information system allowed for the calculation of SWRRB model parameters in each subarea. Four data sets were constructed in order to compare SWRRB estimates of hydrologic processes using two methods of maximum LAI and two methods of watershed subdivision. Maximum LAI was determined from a continental-scale map, which provided a value of 4.5 for the entire basin, and from its association with the type of land cover (eight values). The two methods of watershed subdivision were determined according to drainage subbasin (four subbasins) and the eight land covers. These data sets were used with the SWRRB model to obtain daily hydrologic estimates for 1985. The results of the one-year analysis led to the conclusion that the greater homogeneity of a land-cover subdivision provides better water yield estimates than those based on a drainage-properties subdivision.

  4. Environmental contaminants of emerging concern in seafood--European database on contaminant levels.

    PubMed

    Vandermeersch, Griet; Lourenço, Helena Maria; Alvarez-Muñoz, Diana; Cunha, Sara; Diogène, Jorge; Cano-Sancho, German; Sloth, Jens J; Kwadijk, Christiaan; Barcelo, Damia; Allegaert, Wim; Bekaert, Karen; Fernandes, José Oliveira; Marques, Antonio; Robbens, Johan

    2015-11-01

    Marine pollution gives rise to concern not only about the environment itself but also about the impact on food safety and consequently on public health. European authorities and consumers have therefore become increasingly worried about the transfer of contaminants from the marine environment to seafood. So-called "contaminants of emerging concern" are chemical substances for which no maximum levels have been laid down in EU legislation, or substances for which maximum levels have been provided but which require revision. Adequate information on their presence in seafood is often lacking and thus potential risks cannot be excluded. Assessment of food safety issues related to these contaminants has thus become urgent and imperative. A database (www.ecsafeseafooddbase.eu), containing available information on the levels of contaminants of emerging concern in seafood and providing the most recent data to scientists and regulatory authorities, was developed. The present paper reviews a selection of contaminants of emerging concern in seafood including toxic elements, endocrine disruptors, brominated flame retardants, pharmaceuticals and personal care products, polycyclic aromatic hydrocarbons and derivatives, microplastics and marine toxins. Current status on the knowledge of human exposure, toxicity and legislation are briefly presented and the outcome from scientific publications reporting on the levels of these compounds in seafood is presented and discussed. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Mars Science Laboratory Launch-Arrival Space Study: A Pork Chop Plot Analysis

    NASA Technical Reports Server (NTRS)

    Cianciolo, Alicia Dwyer; Powell, Richard; Lockwood, Mary Kae

    2006-01-01

    Launch-Arrival, or "pork chop", plot analysis can provide mission designers with valuable information and insight into a specific launch and arrival space selected for a mission. The study begins with the array of entry states for each pair of selected Earth launch and Mars arrival dates, and nominal entry, descent and landing trajectories are simulated for each pair. Parameters of interest, such as maximum heat rate, are plotted in launch-arrival space. The plots help to quickly identify launch and arrival regions that are not feasible under current constraints or technology and also provide information as to what technologies may need to be developed to reach a desired region. This paper provides a discussion of the development, application, and results of a pork chop plot analysis to the Mars Science Laboratory mission. This technique is easily applicable to other missions at Mars and other destinations.

  6. A Database of Tornado Events as Perceived by the USArray Transportable Array Network

    NASA Astrophysics Data System (ADS)

    Tytell, J. E.; Vernon, F.; Reyes, J. C.

    2015-12-01

    Over the course of the deployment of Earthscope's USArray Transportable Array (TA) network, there have been numerous tornado events within the changing footprint of the network. The Array Network Facility based in San Diego, California, has compiled a database of these tornado events based on data provided by the NOAA Storm Prediction Center (SPC). The SPC data itself consists of parameters such as start-end point track data for each event, maximum EF intensities, and maximum track widths. Our database is Antelope driven and combines these data from the SPC with detailed station information from the TA network. We are now able to list all available TA stations during any specific tornado event date and also provide a single calculated "nearest" TA station per individual tornado event. We aim to provide this database as a starting resource for those with an interest in investigating tornado signatures within surface pressure and seismic response data. On a larger scale, the database may be of particular interest to the infrasound research community.

  7. Leveraging Intelligent Vehicle Technologies to Maximize Fuel Economy (Presentation)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonder, J.

    2011-11-01

    Advancements in vehicle electronics, along with communication and sensing technologies, have led to a growing number of intelligent vehicle applications. Example systems include those for advanced driver information, route planning and prediction, driver assistance, and crash avoidance. The National Renewable Energy Laboratory is exploring ways to leverage intelligent vehicle systems to achieve fuel savings. This presentation discusses several potential applications, such as providing intelligent feedback to drivers on specific ways to improve their driving efficiency, and using information about upcoming driving to optimize electrified vehicle control strategies for maximum energy efficiency and battery life. The talk also covers the potential of Advanced Driver Assistance Systems (ADAS) and related technologies to deliver significant fuel savings in addition to providing safety and convenience benefits.

  8. Guidelines for Management Information Systems in Canadian Health Care Facilities

    PubMed Central

    Thompson, Larry E.

    1987-01-01

    The MIS Guidelines are a comprehensive set of standards for health care facilities for the recording of staffing, financial, workload, patient care and other management information. The Guidelines enable health care facilities to develop management information systems which identify resources, costs and products to more effectively forecast and control costs and utilize resources to their maximum potential as well as provide improved comparability of operations. The MIS Guidelines were produced by the Management Information Systems (MIS) Project, a cooperative effort of the federal and provincial governments, provincial hospital/health associations, under the authority of the Canadian Federal/Provincial Advisory Committee on Institutional and Medical Services. The Guidelines are currently being implemented on a “test” basis in ten health care facilities across Canada and portions integrated in government reporting as finalized.

  9. New Zealand supereruption provides time marker for the Last Glacial Maximum in Antarctica

    USGS Publications Warehouse

    Dunbar, Nelia W.; Iverson, Nels A.; Van Eaton, Alexa R.; Sigl, Michael; Alloway, Brent V.; Kurbatov, Andrei V.; Mastin, Larry G.; McConnell, Joseph R.; Wilson, Colin J. N.

    2017-01-01

    Multiple, independent time markers are essential to correlate sediment and ice cores from the terrestrial, marine and glacial realms. These records constrain global paleoclimate reconstructions and inform future climate change scenarios. In the Northern Hemisphere, sub-visible layers of volcanic ash (cryptotephra) are valuable time markers due to their widespread dispersal and unique geochemical fingerprints. However, cryptotephra are not as widely identified in the Southern Hemisphere, leaving a gap in the climate record, particularly during the Last Glacial Maximum (LGM). Here we report the first identification of New Zealand volcanic ash in Antarctic ice. The Oruanui supereruption from Taupo volcano (25,580  ±  258 cal. a BP) provides a key time marker for the LGM in the New Zealand sector of the SW Pacific. This finding provides a high-precision chronological link to mid-latitude terrestrial and marine sites, and sheds light on the long-distance transport of tephra in the Southern Hemisphere. As occurred after identification of the Alaskan White River Ash in northern Europe, recognition of ash from the Oruanui eruption in Antarctica dramatically increases the reach and value of tephrochronology, providing links among climate records in widely different geographic areas and depositional environments.

  10. Improved gap size estimation for scaffolding algorithms.

    PubMed

    Sahlin, Kristoffer; Street, Nathaniel; Lundeberg, Joakim; Arvestad, Lars

    2012-09-01

    One of the important steps of genome assembly is scaffolding, in which contigs are linked using information from read-pairs. Scaffolding provides estimates of the order, relative orientation and distance between contigs. We have found that contig distance estimates are generally strongly biased and based on false assumptions. Since erroneous distance estimates can mislead subsequent analysis, it is important to provide unbiased estimation of contig distance. In this article, we show that state-of-the-art programs for scaffolding are using an incorrect model of gap size estimation. We discuss why current maximum likelihood estimators are biased and describe the different cases of bias we are facing. Furthermore, we provide a model for the distribution of reads that span a gap and derive the maximum likelihood equation for the gap length. We motivate why this estimate is sound and show empirically that it outperforms gap estimators in popular scaffolding programs. Our results have consequences for scaffolding software, structural variation detection and library insert-size estimation as commonly performed by read aligners. A reference implementation is provided at https://github.com/SciLifeLab/gapest. Supplementary data are available at Bioinformatics online.
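    The flavor of the bias the authors describe can be reproduced with a toy Monte-Carlo simulation (our construction for illustration, not the paper's model or estimator): read pairs with larger insert sizes have more valid placements across a gap, so observed spanning pairs are a size-biased sample and a naive mean-based estimator undershoots the true gap.

```python
import random
import statistics

def naive_gap_estimate(true_gap, mu=500, sigma=50, read_len=100,
                       n=20000, seed=1):
    """Simulate spanning read pairs (insert ~ N(mu, sigma)) where the
    chance a pair spans the gap grows with its slack, then apply the
    naive estimate g_hat = mu - mean(d1 + d2) - 2*read_len, where
    d1 + d2 is the distance the reads reach into their contigs."""
    random.seed(seed)
    observed = []
    for _ in range(n):
        insert = random.gauss(mu, sigma)
        slack = insert - true_gap - 2 * read_len  # placements ∝ slack
        if slack > 0 and random.random() < slack / (mu + 4 * sigma):
            observed.append(slack)                # observed d1 + d2
    return mu - statistics.mean(observed) - 2 * read_len
```

    With a true gap of 200, the naive estimate comes out around 175 in this toy setup: size-biased sampling inflates the mean observed insert, which the naive formula does not correct for.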

  11. 5 CFR 581.402 - Maximum garnishment limitations.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Maximum garnishment limitations. 581.402... § 581.402 Maximum garnishment limitations. (a) Except as provided in paragraph (b) of this section... Protection Act, as amended), unless a lower maximum garnishment limitation is provided by applicable State or...

  12. Deterministic Seismic Hazard Assessment of Center-East IRAN (55.5-58.5˚ E, 29-31˚ N)

    NASA Astrophysics Data System (ADS)

    Askari, M.; Ney, Beh

    2009-04-01

    Deterministic seismic hazard assessment has been performed for Center-East Iran (55.5-58.5˚E, 29-31˚N), covering Kerman and adjacent regions within a radius of 100 km. A catalogue of earthquakes in the region, including historical and instrumental events, was compiled. A total of 25 potential seismic source zones in the region were delineated as area sources for seismic hazard assessment based on geological, seismological and geophysical information; the minimum distance from each seismic source to the site (Kerman) and the maximum magnitude of each source were then determined. Finally, using the N. A. Abrahamson and J. J. Litehiser (1989) attenuation relationship, the maximum acceleration is estimated to be 0.38g, associated with movement of a blind fault whose maximum magnitude is Ms=5.5.

  13. Algorithms of maximum likelihood data clustering with applications

    NASA Astrophysics Data System (ADS)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter-free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures, whereas the outcome of standard algorithms has a much wider variability.

  14. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation.

    PubMed

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng

    2016-09-20

    A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving non-linear state estimation problems. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm.
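
    The correntropy cost at the heart of the MCUKF replaces squared error with a Gaussian kernel of the error, so gross outliers are exponentially down-weighted. A minimal illustration of the criterion, far simpler than the full filter, is MCC estimation of a location parameter by fixed-point iteration (the kernel bandwidth and all names here are illustrative assumptions, not part of the paper):

```python
import math

def mcc_mean(xs, sigma=1.0, iters=50):
    """Fixed-point iteration maximizing sum_i exp(-(x_i - mu)^2 / (2 sigma^2)).
    Each sample is weighted by its kernel value, so gross outliers get
    exponentially small influence on the estimate."""
    mu = sorted(xs)[len(xs) // 2]  # start from the median for robustness
    for _ in range(iters):
        w = [math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) for x in xs]
        mu = sum(wi * xi for wi, xi in zip(w, xs)) / sum(w)
    return mu
```

    With a heavy-tailed sample, the MCC estimate stays near the bulk of the data while the ordinary mean is dragged toward the outlier, which is the robustness property the filter exploits.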

  15. Maximum Correntropy Unscented Kalman Filter for Spacecraft Relative State Estimation

    PubMed Central

    Liu, Xi; Qu, Hua; Zhao, Jihong; Yue, Pengcheng; Wang, Meng

    2016-01-01

    A new algorithm called maximum correntropy unscented Kalman filter (MCUKF) is proposed and applied to relative state estimation in space communication networks. As is well known, the unscented Kalman filter (UKF) provides an efficient tool for solving non-linear state estimation problems. However, the UKF performs well only under Gaussian noise; its performance may deteriorate substantially in the presence of non-Gaussian noise, especially when the measurements are disturbed by heavy-tailed impulsive noise. By making use of the maximum correntropy criterion (MCC), the proposed algorithm enhances the robustness of the UKF against impulsive noise. In the MCUKF, the unscented transformation (UT) is applied to obtain a predicted state estimate and covariance matrix, and a nonlinear regression method with the MCC cost is then used to reformulate the measurement information. Finally, the UT is applied to the measurement equation to obtain the filter state and covariance matrix. Illustrative examples demonstrate the superior performance of the new algorithm. PMID:27657069

  16. Analysis of ground-water-quality data of the Upper Colorado River basin, water years 1972-92

    USGS Publications Warehouse

    Apodaca, L.E.

    1998-01-01

    As part of the U.S. Geological Survey's National Water-Quality Assessment program, an analysis of the existing ground-water-quality data in the Upper Colorado River Basin study unit is necessary to provide information on historical water-quality conditions. Analysis of the historical data provides information on the availability or lack of data and on water-quality issues. The information gathered from the historical data will be used in the design of ground-water-quality studies in the basin. This report includes an analysis of the ground-water data (well and spring data) available for the Upper Colorado River Basin study unit from water years 1972 to 1992 for major cations and anions, metals and selected trace elements, and nutrients. The data used in the analysis were predominantly from the U.S. Geological Survey National Water Information System and the Colorado Department of Public Health and Environment data bases. A total of 212 sites representing alluvial aquifers and 187 sites representing bedrock aquifers were used in the analysis. The available data were not ideal for conducting a comprehensive basinwide water-quality assessment because of the lack of sufficient geographical coverage. Evaluation of the ground-water data in the study unit was based on the regional environmental setting, which describes the natural and human factors that can affect water quality. In this report, the ground-water-quality information is evaluated on the basis of aquifers or potential aquifers (alluvial, Green River Formation, Mesaverde Group, Mancos Shale, Dakota Sandstone, Morrison Formation, Entrada Sandstone, Leadville Limestone, and Precambrian) and land-use classifications for alluvial aquifers. Most of the ground-water-quality data in the study unit were for major cations and anions and dissolved-solids concentrations. The aquifer with the highest median concentrations of major ions was the Mancos Shale. The U.S. Environmental Protection Agency secondary maximum contaminant level of 500 milligrams per liter for dissolved solids in drinking water was exceeded in about 75 percent of the samples from the Mancos Shale aquifer. The Food and Agriculture Organization of the United Nations guideline of 2,000 milligrams per liter for irrigation water was also exceeded by the median concentration from the Mancos Shale aquifer. For sulfate, the U.S. Environmental Protection Agency proposed maximum contaminant level of 500 milligrams per liter for drinking water was exceeded by the median concentration for the Mancos Shale aquifer; a total of 66 percent of the sites in the Mancos Shale aquifer exceeded the proposed maximum contaminant level. Metal and selected trace-element data were available for some sites, but most of these data were below the detection limit. The median concentrations of iron for the selected aquifers and land-use classifications were below the U.S. Environmental Protection Agency secondary maximum contaminant level of 300 micrograms per liter in drinking water. The median concentration of manganese for the Mancos Shale exceeded the U.S. Environmental Protection Agency secondary maximum contaminant level of 50 micrograms per liter in drinking water. The highest selenium concentrations were in the alluvial aquifer and were associated with rangeland. However, about 22 percent of the selenium values from the Mancos Shale exceeded the U.S. Environmental Protection Agency maximum contaminant level of 50 micrograms per liter in drinking water. Few nutrient data were available for the study unit. The only nutrient species presented in this report were nitrate-plus-nitrite as nitrogen and orthophosphate. Median concentrations of nitrate-plus-nitrite as nitrogen were below the U.S. Environmental Protection Agency maximum contaminant level of 10 milligrams per liter in drinking water except for 0.02 percent of the sites in the alluvial aquifer.

  17. A Comparison of Item Selection Techniques for Testlets

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.

    2010-01-01

    This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…

  18. Leaf photosynthesis and respiration of three bioenergy crops in relation to temperature and leaf nitrogen: how conserved are biochemical model parameters among crop species?

    PubMed Central

    Archontoulis, S. V.; Yin, X.; Vos, J.; Danalatos, N. G.; Struik, P. C.

    2012-01-01

    Given the need for parallel increases in food and energy production from crops in the context of global change, crop simulation models and data sets to feed these models with photosynthesis and respiration parameters are increasingly important. This study provides information on photosynthesis and respiration for three energy crops (sunflower, kenaf, and cynara), reviews relevant information for five other crops (wheat, barley, cotton, tobacco, and grape), and assesses how conserved photosynthesis parameters are among crops. Using large data sets and optimization techniques, the C3 leaf photosynthesis model of Farquhar, von Caemmerer, and Berry (FvCB) and an empirical night respiration model for the tested energy crops accounting for effects of temperature and leaf nitrogen were parameterized. Instead of the common approach of using information on net photosynthesis response to CO2 at the stomatal cavity (An–Ci), the model was parameterized by analysing the photosynthesis response to incident light intensity (An–Iinc). Convincing evidence is provided that the maximum Rubisco carboxylation rate or the maximum electron transport rate was very similar whether derived from An–Ci or from An–Iinc data sets. Parameters characterizing Rubisco limitation, electron transport limitation, the degree to which light inhibits leaf respiration, night respiration, and the minimum leaf nitrogen required for photosynthesis were then determined. Model predictions were validated against independent data sets. Only a few FvCB parameters were conserved among crop species, thus species-specific FvCB model parameters are needed for crop modelling. Therefore, information from readily available but underexplored An–Iinc data should be re-analysed, thereby expanding the potential of combining classical photosynthetic data and the biochemical model. PMID:22021569
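
    For readers unfamiliar with the FvCB model, its core is the minimum of two candidate rates. A minimal sketch with illustrative, not crop-specific, parameter values, assuming the common Rubisco-limited and electron-transport-limited expressions:

```python
def fvcb_net_photosynthesis(Ci, Vcmax=100.0, J=150.0, Rd=1.5,
                            Gamma_star=40.0, Kc=270.0, Ko=165.0, O=210.0):
    """Minimal FvCB sketch: net assimilation as the minimum of the
    Rubisco-limited rate Ac and the electron-transport-limited rate Aj,
    minus day respiration Rd. Ci in ubar; all parameter values illustrative."""
    Km = Kc * (1.0 + O / Ko)                     # effective Michaelis constant
    Ac = Vcmax * (Ci - Gamma_star) / (Ci + Km)   # Rubisco (carboxylation) limit
    Aj = J * (Ci - Gamma_star) / (4.0 * Ci + 8.0 * Gamma_star)  # RuBP-regeneration limit
    return min(Ac, Aj) - Rd
```

    At low Ci the Rubisco-limited rate governs and at high Ci the electron-transport-limited rate takes over, which is exactly the limitation structure whose parameters the study estimates from An–Iinc curves.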

  19. Stratotype for the Mérida Glaciation at Pueblo Llano in the northern Venezuelan Andes

    NASA Astrophysics Data System (ADS)

    Mahaney, W. C.; Milner, M. W.; Voros, J.; Kalm, V.; Hütt, G.; Bezada, M.; Hancock, R. G. V.; Aufreiter, S.

    2000-12-01

    The Mérida Glaciation (cf. Wisconsinan, Weichselian) as proposed by Schubert (1974b) culminated at about 18 ka during the last glacial maximum (LGM) and ended at about 13 ka as indicated by 14C dating and correlation with the Cordillera Oriental of Colombia. Moraines of an early stade of Mérida Glaciation reached to 2800 m a.s.l. and were largely overrun or eradicated by the maximum Wisconsinan advance (LGM); where they outcrop, the older moraines are characterized by eroded, weathered glacial diamictons and outwash fans. At Pueblo Llano in the central Mérida Andes (Cordillera de Trujillo), older to younger beds of contorted glacitectonized diamict, overlying beds of bouldery till and indurated outwash, all belong to the early Mérida stade. Overlying the early Mérida stade, deposits of rhythmically bedded glaciolacustrine sediments are in turn overlain with contorted sand and silt beds capped with outwash. Above the outwash terrace a loop moraine of LGM age completely encircles the margins of the basin. A stream cut exposed by catastrophic (tectonic or surge?) release of meltwater displays a lithostratigraphic succession that is bereft of organic material for radiocarbon dating. Five optically-stimulated luminescence (OSL) dates place the maximum age of the lowest till at 81 ka. Particle size distributions allow clear distinctions between major lithic units. Heavy mineral analysis of the middle and lower coarse units in the section provide information on sediment sourcing and on major lithostratigraphic divisions. Trace element concentrations provide information on the relative homogeneity of the deposits. The HREE (heavy rare earth element) concentrations allow discrimination of the lower till from the rest of the section; the LREE (light rare earth element) concentrations highlight differences between the lower till, LGM till, and the rest of the section.

  20. Development and Preliminary Results of CTAS on Airline Operational Control Center Operations

    NASA Technical Reports Server (NTRS)

    Zelenka, Richard; Beatty, Roger; Falcone, Richard; Engelland, Shawn; Tobias, Leonard (Technical Monitor)

    1998-01-01

    Continued growth and expansion of air traffic and increased air carrier economic pressures have mandated greater flexibility and collaboration in air traffic management. The ability of airspace users to select their own routes, so called "free-flight", and to more actively manage their fleet operations for maximum economic advantage are receiving great attention. A first step toward greater airspace user and service provider collaboration is information sharing. In this work, arrival scheduling and airspace management data generated by the NASA/FAA Center/TRACON Automation System (CTAS) and used by the FAA service provider is shared with an airline with extensive operations within the CTAS operational domain. The design and development of a specialized airline CTAS "repeater" system is described, as well as some preliminary results of the impact and benefits of this information on the air carrier's operations. FAA controller per aircraft scheduling information, such as that provided by CTAS, has never before been shared in real-time with an airline. Expected airline benefits include improved fleet planning and arrival gate management, more informed "hold-go" decisions, and avoidance of costly aircraft diversions to alternate airports when faced with uncertain airborne arrival delays.

  1. Development and Preliminary Results of CTAS on Airline Operational Control Center Operations

    NASA Technical Reports Server (NTRS)

    Zelenka, Richard; Beatty, Roger; Engelland, Shawn

    2004-01-01

    Continued growth and expansion of air traffic and increased air carrier economic pressures have mandated greater flexibility and collaboration in air traffic management. The ability of airspace users to select their own routes, so called "free-flight", and to more actively manage their fleet operations for maximum economic advantage are receiving great attention. A first step toward greater airspace user and service provider collaboration is information sharing. In this work, arrival scheduling and airspace management data generated by the NASA/FAA Center/TRACON Automation System (CTAS) and used by the FAA service provider is shared with an airline with extensive operations within the CTAS operational domain. The design and development of a specialized airline CTAS "repeater" system is described, as well as some preliminary results of the impact and benefits of this information on the air carrier's operations. FAA controller per aircraft scheduling information, such as that provided by CTAS, has never before been shared in real-time with an airline. Expected airline benefits include improved fleet planning and arrival gate management, more informed "hold-go" decisions, and avoidance of costly aircraft diversions to alternate airports when faced with uncertain airborne arrival delays.

  2. 40 CFR 35.635 - Maximum federal share.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Maximum federal share. 35.635 Section... (sections 319(h) and 518(f)) § 35.635 Maximum federal share. (a) The Regional Administrator may provide up... be provided from non-federal sources. (b) The Regional Administrator may increase the maximum federal...

  3. Redox Control For Hanford HLW Feeds VSL-12R2530-1, REV 0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kruger, A. A.; Matlack, Keith S.; Pegg, Ian L.

    2012-12-13

    The principal objectives of this work were to investigate the effects of processing simulated Hanford HLW at the estimated maximum concentrations of nitrates and oxalates and to identify strategies to mitigate any processing issues resulting from high concentrations of nitrates and oxalates. This report provides results for a series of tests that were performed on the DM10 melter system with simulated C-106/AY-102 HLW. The tests employed simulated HLW feeds containing variable amounts of nitrates and waste organic compounds corresponding to maximum concentrations projected for Hanford HLW streams in order to determine their effects on glass production rate, processing characteristics, glass redox conditions, melt pool foaming, and the tendency to form secondary phases. Such melter tests provide information on key process factors such as feed processing behavior, dynamic effects during processing, processing rates, off-gas amounts and compositions, foaming control, etc., that cannot be reliably obtained from crucible melts.

  4. Statistical mechanics of letters in words

    PubMed Central

    Stephens, Greg J.; Bialek, William

    2013-01-01

    We consider words as a network of interacting letters, and approximate the probability distribution of states taken on by this network. Despite the intuition that the rules of English spelling are highly combinatorial and arbitrary, we find that maximum entropy models consistent with pairwise correlations among letters provide a surprisingly good approximation to the full statistics of words, capturing ~92% of the multi-information in four-letter words and even “discovering” words that were not represented in the data. These maximum entropy models incorporate letter interactions through a set of pairwise potentials and thus define an energy landscape on the space of possible words. Guided by the large letter redundancy we seek a lower-dimensional encoding of the letter distribution and show that distinctions between local minima in the landscape account for ~68% of the four-letter entropy. We suggest that these states provide an effective vocabulary which is matched to the frequency of word use and much smaller than the full lexicon. PMID:20866490

  5. Automatic Spike Sorting Using Tuning Information

    PubMed Central

    Ventura, Valérie

    2011-01-01

    Current spike sorting methods focus on clustering neurons’ characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes’ identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only. PMID:19548802

  6. Automatic spike sorting using tuning information.

    PubMed

    Ventura, Valérie

    2009-09-01

    Current spike sorting methods focus on clustering neurons' characteristic spike waveforms. The resulting spike-sorted data are typically used to estimate how covariates of interest modulate the firing rates of neurons. However, when these covariates do modulate the firing rates, they provide information about spikes' identities, which thus far have been ignored for the purpose of spike sorting. This letter describes a novel approach to spike sorting, which incorporates both waveform information and tuning information obtained from the modulation of firing rates. Because it efficiently uses all the available information, this spike sorter yields lower spike misclassification rates than traditional automatic spike sorters. This theoretical result is verified empirically on several examples. The proposed method does not require additional assumptions; only its implementation is different. It essentially consists of performing spike sorting and tuning estimation simultaneously rather than sequentially, as is currently done. We used an expectation-maximization maximum likelihood algorithm to implement the new spike sorter. We present the general form of this algorithm and provide a detailed implementable version under the assumptions that neurons are independent and spike according to Poisson processes. Finally, we uncover a systematic flaw of spike sorting based on waveform information only.

  7. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W.

    2002-01-01

    A simple power law model consisting of a single spectral index, alpha(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index alpha(sub 2) greater than alpha(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter alpha(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotic normality, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated. 
The ML technique is then extended to estimate spectra information from an arbitrary number of astrophysics data sets produced by vastly different science instruments. This theory and its successful implementation will facilitate the interpretation of spectral information from multiple astrophysics missions and thereby permit the derivation of superior spectral parameter estimates based on the combination of data sets.

  8. Exploring the Influence of Topographic Correction and SWIR Spectral Information Inclusion on Burnt Scars Detection From High Resolution EO Imagery: A Case Study Using ASTER imagery

    NASA Astrophysics Data System (ADS)

    Said, Yahia A.; Petropoulos, George; Srivastava, Prashant K.

    2014-05-01

    Information on burned area estimates is of key importance in environmental and ecological studies as well as in fire management, including damage assessment and planning of post-fire recovery of affected areas. Earth Observation (EO) today provides the most efficient way of obtaining such information in a rapid, consistent and cost-effective manner. The present study aimed at exploring the effect of topographic correction on burnt area delineation in conditions characteristic of a Mediterranean environment using ASTER high resolution multispectral remotely sensed imagery. A further objective was to investigate the potential added value of including the shortwave infrared (SWIR) bands in improving the retrieval of burned area cartography from the ASTER data. In particular, the capability of the Maximum Likelihood (ML), Support Vector Machines (SVMs) and Object-based Image Analysis (OBIA) classification techniques was examined for the purposes of our study. The case study is a typical Mediterranean site in Greece where a fire event occurred during the summer of 2007 and for which post-fire ASTER imagery was acquired. Our results indicated that the combination of topographic correction (ortho-rectification) with the inclusion of the SWIR bands returned the most accurate results in terms of burnt area mapping. In terms of image processing methods, OBIA showed the best results and was found to be the most promising approach for burned area mapping, with the least absolute difference from the validation polygon, followed by SVM and ML. All in all, our study provides an important contribution to the understanding of the capability of high resolution imagery such as that from the ASTER sensor, and corroborates the usefulness of topographic correction in particular as an image processing step when delineating burnt areas from such data. 
It also provides further evidence that EO technology can offer an effective practical tool for assessing the extent of ecosystem destruction by wildfires, providing extremely useful information for co-ordinating efforts for the recovery of fire-affected ecosystems after wildfire. Keywords: Remote Sensing, ASTER, Burned area mapping, Maximum Likelihood, Support Vector Machines, Object-based image analysis, Greece

  9. MASTtreedist: visualization of tree space based on maximum agreement subtree.

    PubMed

    Huang, Hong; Li, Yongji

    2013-01-01

    The phylogenetic tree construction process might produce many candidate trees as the "best estimates." As the number of constructed phylogenetic trees grows, the need to efficiently compare their topological or physical structures arises. One tree-comparison software tool, Mesquite's Tree Set Viz module, allows the rapid and efficient visualization of tree comparison distances using multidimensional scaling (MDS). Tree-distance measures, such as Robinson-Foulds (RF), for the topological distance among different trees have been implemented in Tree Set Viz. New and more sophisticated measures such as the Maximum Agreement Subtree (MAST) can be built on top of Tree Set Viz. MAST can detect the common substructures among trees and provide more precise information on the similarity of the trees, but it is NP-hard and difficult to implement. In this article, we present a practical tree-distance metric: MASTtreedist, a MAST-based comparison metric in Mesquite's Tree Set Viz module. In this metric, efficient optimizations for the maximum weight clique problem are applied. The results suggest that the proposed method can efficiently compute the MAST distances among trees, and such tree topological differences can be translated as a scatter of points in two-dimensional (2D) space. We also provide statistical evaluation of the provided measures with respect to RF using experimental data sets. This new comparison module provides a new tree-tree pairwise comparison metric based on the differences in the number of MAST leaves among constructed phylogenetic trees. Such a new phylogenetic tree comparison metric improves the visualization of taxa differences by discriminating small divergences of subtree structures for phylogenetic tree reconstruction.

  10. Information dynamics in living systems: prokaryotes, eukaryotes, and cancer.

    PubMed

    Frieden, B Roy; Gatenby, Robert A

    2011-01-01

    Living systems use information and energy to maintain stable entropy while far from thermodynamic equilibrium. The underlying first principles have not been established. We propose that stable entropy in living systems, in the absence of thermodynamic equilibrium, requires an information extremum (maximum or minimum), which is invariant to first order perturbations. Proliferation and death represent key feedback mechanisms that promote stability even in a non-equilibrium state. A system moves to low or high information depending on its energy status, as the benefit of information in maintaining and increasing order is balanced against its energy cost. Prokaryotes, which lack specialized energy-producing organelles (mitochondria), are energy-limited and constrained to an information minimum. Acquisition of mitochondria is viewed as a critical evolutionary step that, by allowing eukaryotes to achieve a sufficiently high energy state, permitted a phase transition to an information maximum. This state, in contrast to the prokaryote minima, allowed evolution of complex, multicellular organisms. A special case is a malignant cell, which is modeled as a phase transition from a maximum to minimum information state. The minimum leads to a predicted power-law governing the in situ growth that is confirmed by studies measuring growth of small breast cancers. We find living systems achieve a stable entropic state by maintaining an extreme level of information. The evolutionary divergence of prokaryotes and eukaryotes resulted from acquisition of specialized energy organelles that allowed transition from information minima to maxima, respectively. Carcinogenesis represents a reverse transition: of an information maximum to minimum. The progressive information loss is evident in accumulating mutations, disordered morphology, and functional decline characteristics of human cancers. 
The findings suggest energy restriction is a critical first step that triggers the genetic mutations that drive somatic evolution of the malignant phenotype.

  11. Projection of Maximum Software Maintenance Manning Levels.

    DTIC Science & Technology

    1982-06-01

    maintenance team development and for outyear support resource estimation, and to provide an analysis of applications of the model in areas other...by General Research Corporation of Santa Barbara, Ca., indicated that the Planning and Resource Management Information System (PARRIS) at the Air Force...determined that when the optimal input effort is applied, steps in the development would be achieved at a rate proportional to V(t). Thus the work-rate could

  12. Effects of experimental design on calibration curve precision in routine analysis

    PubMed Central

    Pimentel, Maria Fernanda; Neto, Benício de Barros; Saldanha, Teresa Cristina B.

    1998-01-01

    A computational program which compares the efficiencies of different experimental designs with those of maximum precision (D-optimized designs) is described. The program produces confidence interval plots for a calibration curve and provides information about the number of standard solutions, concentration levels and suitable concentration ranges to achieve an optimum calibration. Some examples of the application of this novel computational program are given, using both simulated and real data. PMID:18924816
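
    The link between design and calibration precision can be made concrete for a straight-line calibration, where the prediction variance at a concentration x0 is sigma^2 * (1/n + (x0 - xbar)^2 / Sxx), so concentrating standards at the range ends maximizes Sxx. A sketch comparing an evenly spaced design with an end-point design (D-optimal for a straight line); the concentration values are arbitrary:

```python
def prediction_variance_factor(design, x0):
    """Unit-sigma^2 variance of the fitted calibration line at x0:
    1/n + (x0 - xbar)^2 / Sxx, for ordinary least squares."""
    n = len(design)
    xbar = sum(design) / n
    sxx = sum((x - xbar) ** 2 for x in design)
    return 1.0 / n + (x0 - xbar) ** 2 / sxx

even = [0, 2, 4, 6, 8, 10]       # six standards spread evenly over the range
extreme = [0, 0, 0, 10, 10, 10]  # D-optimal for a straight line: ends only
```

    The end-point design gives narrower confidence intervals at the range extremes because it maximizes Sxx, though in practice intermediate levels are retained to allow a lack-of-fit check, which is exactly the trade-off such a program lets an analyst explore.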

  13. Georgia fishery study: implications for dose calculations. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, M.D.S.

    Fish consumption will contribute a major portion of the estimated individual and population doses from L-Reactor liquid releases and Cs-137 remobilization in Steel Creek. It is therefore important that the values for fish consumption used in dose calculations be as realistic as possible. Since publication of the L-Reactor Environmental Information Document (EID), data have become available on sport fishing in the Savannah River. These data provide SRP with site-specific sport fish harvest and consumption values for use in dose calculations. The Georgia fishery data support the total population fish consumption and calculated dose reported in the EID. The data indicate, however, that both the EID average and maximum individual fish consumption have been underestimated, although each to a different degree. The average fish consumption value used in the EID is approximately 3% below the lower limit of the fish consumption range calculated using the Georgia data. Maximum fish consumption in the EID has been underestimated by approximately 60%, and doses to the maximum individual should also be recalculated. Future dose calculations should utilize an average adult fish consumption value of 11.3 kg/yr, and a maximum adult fish consumption value of 34 kg/yr. Consumption values for the teen and child age groups should be increased proportionally: (1) teen average = 8.5; maximum = 25.9 kg/yr; and (2) child average = 3.6; maximum = 11.2 kg/yr. 8 refs.

  14. Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2006-01-01

    This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…

  15. Signal detection theory and vestibular perception: III. Estimating unbiased fit parameters for psychometric functions.

    PubMed

    Chaudhuri, Shomesh E; Merfeld, Daniel M

    2013-03-01

    Psychophysics generally relies on estimating a subject's ability to perform a specific task as a function of an observed stimulus. For threshold studies, the fitted functions are called psychometric functions. While fitting psychometric functions to data acquired using adaptive sampling procedures (e.g., "staircase" procedures), investigators have encountered a bias in the spread ("slope" or "threshold") parameter that has been attributed to the serial dependency of the adaptive data. Using simulations, we confirm this bias for cumulative Gaussian parametric maximum likelihood fits on data collected via adaptive sampling procedures, and then present a bias-reduced maximum likelihood fit that substantially reduces the bias without reducing the precision of the spread parameter estimate and without reducing the accuracy or precision of the other fit parameters. As a separate topic, we explain how to implement this bias reduction technique using generalized linear model fits as well as other numeric maximum likelihood techniques such as the Nelder-Mead simplex. We then provide a comparison of the iterative bootstrap and observed information matrix techniques for estimating parameter fit variance from adaptive sampling procedure data sets. The iterative bootstrap technique is shown to be slightly more accurate; however, the observed information technique executes in a small fraction (0.005 %) of the time required by the iterative bootstrap technique, which is an advantage when a real-time estimate of parameter fit variance is required.
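
    The paper's bias-reduced fit is not reproduced here, but the baseline it improves on can be sketched: a plain maximum likelihood fit of a cumulative Gaussian psychometric function, using a coarse grid search for transparency (a real fit would use a numeric optimizer such as Nelder-Mead, as the abstract mentions). All names, the grid, and the synthetic data are illustrative; the data are drawn independently, not by an adaptive staircase.

```python
import math
import random

def norm_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def neg_log_lik(mu, sigma, stims, resps):
    # negative log-likelihood of yes/no responses under a cumulative Gaussian
    nll = 0.0
    for x, r in zip(stims, resps):
        p = min(max(norm_cdf((x - mu) / sigma), 1e-9), 1.0 - 1e-9)
        nll -= math.log(p) if r else math.log(1.0 - p)
    return nll

def fit_psychometric(stims, resps):
    # coarse grid search over (mu, sigma) for transparency
    best = (float("inf"), 0.0, 1.0)
    for i in range(-20, 21):
        for j in range(1, 41):
            mu, sigma = i * 0.1, j * 0.1
            nll = neg_log_lik(mu, sigma, stims, resps)
            if nll < best[0]:
                best = (nll, mu, sigma)
    return best[1], best[2]

# synthetic yes/no data from a known curve (independent sampling)
random.seed(1)
true_mu, true_sigma = 0.5, 1.0
stims = [random.uniform(-3.0, 3.0) for _ in range(1000)]
resps = [random.random() < norm_cdf((x - true_mu) / true_sigma) for x in stims]
mu_hat, sigma_hat = fit_psychometric(stims, resps)
```

    With independent sampling the plain ML fit recovers both parameters; the bias the paper addresses arises specifically when the stimuli are chosen adaptively.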

  16. Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dam, Wim van; Howard, Mark; Department of Physics, University of California, Santa Barbara, California 93106

    2011-07-15

    We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiolkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.

  17. MaxEnt-Based Ecological Theory: A Template for Integrated Catchment Theory

    NASA Astrophysics Data System (ADS)

    Harte, J.

    2017-12-01

    The maximum information entropy procedure (MaxEnt) is both a powerful tool for inferring least-biased probability distributions from limited data and a framework for the construction of complex systems theory. The maximum entropy theory of ecology (METE) describes remarkably well widely observed patterns in the distribution, abundance and energetics of individuals and taxa in relatively static ecosystems. An extension to ecosystems undergoing change in response to disturbance or natural succession (DynaMETE) is in progress. I describe the structure of both the static and the dynamic theory and show a range of comparisons with census data. I then propose a generalization of the MaxEnt approach that could provide a framework for a predictive theory of both static and dynamic, fully-coupled, eco-socio-hydrological catchment systems.
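
    As a minimal illustration of the MaxEnt procedure the abstract builds on, the sketch below infers the least-biased distribution of a six-sided die constrained to a given mean (Jaynes' classic example). The maximum entropy solution has the exponential form p_i ∝ exp(λ·i), and the multiplier λ is found by bisection since the implied mean is monotone in λ. Function names are illustrative.

```python
import math

def maxent_die(target_mean, tol=1e-12):
    # least-biased distribution on {1..6} with a fixed mean: p_i ∝ exp(lam * i)
    vals = list(range(1, 7))

    def mean_for(lam):
        w = [math.exp(lam * v) for v in vals]
        z = sum(w)
        return sum(v * wv for v, wv in zip(vals, w)) / z

    # bisection on lam: mean_for is strictly increasing in lam
    lo, hi = -10.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * v) for v in vals]
    z = sum(w)
    return [wv / z for wv in w]

p = maxent_die(4.5)
mean = sum(v * pv for v, pv in zip(range(1, 7), p))
```

    For a target mean above 3.5 the multiplier is positive, so probability shifts smoothly toward the larger faces without assuming anything beyond the constraint itself.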

  18. Bipartite entangled stabilizer mutually unbiased bases as maximum cliques of Cayley graphs

    NASA Astrophysics Data System (ADS)

    van Dam, Wim; Howard, Mark

    2011-07-01

    We examine the existence and structure of particular sets of mutually unbiased bases (MUBs) in bipartite qudit systems. In contrast to well-known power-of-prime MUB constructions, we restrict ourselves to using maximally entangled stabilizer states as MUB vectors. Consequently, these bipartite entangled stabilizer MUBs (BES MUBs) provide no local information, but are sufficient and minimal for decomposing a wide variety of interesting operators including (mixtures of) Jamiołkowski states, entanglement witnesses, and more. The problem of finding such BES MUBs can be mapped, in a natural way, to that of finding maximum cliques in a family of Cayley graphs. Some relationships with known power-of-prime MUB constructions are discussed, and observables for BES MUBs are given explicitly in terms of Pauli operators.

  19. [Development of MEDUC-PG14 survey to assess postgraduate teaching in medical specialties].

    PubMed

    Pizarro, Margarita; Solís, Nancy; Rojas, Viviana; Díaz, Luis Antonio; Padilla, Oslando; Letelier, Luz María; Aizman, Andrés; Sarfatis, Alberto; Olivos, Trinidad; Soza, Alejandro; Delfino, Alejandro; Latorre, Gonzalo; Ivanovic-Zuvic, Danisa; Hoyl, Trinidad; Bitran, Marcela; Arab, Juan Pablo; Riquelme, Arnoldo

    2015-08-01

    Feedback is one of the most important tools to improve teaching in medical education. The aim of this study was to develop an instrument to assess the performance of clinical postgraduate teachers in medical specialties. A qualitative methodology consisting of interviews and focus groups, followed by a quantitative methodology to generate consensus, was employed. After generating the instrument, psychometric tests were performed to assess construct validity (factor analysis) and reliability (Cronbach’s alpha). Experts in medical education, teachers and residents of a medical school participated in interviews and focus groups. With this information, 26 categories (79 items) were proposed and reduced to 14 items (Likert scale 1-5) by an experts’ Delphi panel, generating the MEDUC-PG14 survey, which was answered by 123 residents from different programs of medical specialties. Factor analysis showed three domains: teaching and evaluation, respectful behavior towards patients and the health care team, and providing feedback. The global score was 4.46 ± 0.94 (89% of the maximum). One of the teachers’ strengths, as evaluated by their residents, was “respectful behavior”, with 4.85 ± 0.42 (97% of the maximum). “Providing feedback” obtained 4.09 ± 1.0 points (81.8% of the maximum). The MEDUC-PG14 survey had a Cronbach’s alpha coefficient of 0.947. The MEDUC-PG14 survey is a useful and reliable guide for teacher evaluation in medical specialty programs, and it also provides feedback to improve the educational skills of postgraduate clinical teachers.
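
    The reliability statistic reported above can be computed directly from raw item scores. A minimal sketch of Cronbach's alpha, assuming one row of Likert item scores per respondent; the function names and toy data are illustrative, not the study's data.

```python
def variance(xs):
    # unbiased sample variance
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(rows):
    # rows: one list of item scores per respondent
    k = len(rows[0])
    item_cols = [[row[i] for row in rows] for i in range(k)]
    totals = [sum(row) for row in rows]
    item_var = sum(variance(col) for col in item_cols)
    return k / (k - 1) * (1.0 - item_var / variance(totals))

# perfectly consistent items give alpha = 1; noisier items give less
a_perfect = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
a_noisy = cronbach_alpha([[4, 5], [5, 4], [3, 3], [5, 5]])
```
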

  20. Information matrix estimation procedures for cognitive diagnostic models.

    PubMed

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
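
    For a model simple enough to verify by hand, the two estimators introduced above can be sketched for a scalar Bernoulli likelihood. With a correctly specified model the inverse observed information and the sandwich estimator A⁻¹BA⁻¹ coincide, mirroring the simulation result; the names and toy data are illustrative, not the CDM setting of the paper.

```python
def bernoulli_mle_cov(xs):
    # xs: 0/1 observations; returns the MLE p-hat and two variance estimates
    n = len(xs)
    p = sum(xs) / n
    # observed information: minus the second derivative of the log-likelihood
    obs_info = sum(x / p**2 + (1 - x) / (1 - p)**2 for x in xs)
    # empirical cross-product of per-observation score contributions
    cross = sum((x / p - (1 - x) / (1 - p)) ** 2 for x in xs)
    var_obs = 1.0 / obs_info          # inverse observed information
    var_sand = cross / obs_info**2    # sandwich A^{-1} B A^{-1}, scalar case
    return p, var_obs, var_sand

xs = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
p_hat, v_obs, v_sand = bernoulli_mle_cov(xs)
```

    Here both variances equal p̂(1−p̂)/n; under misspecification the two estimators diverge, which is exactly the diagnostic the sandwich form exploits.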

  1. Traffic sign recognition based on a context-aware scale-invariant feature transform approach

    NASA Astrophysics Data System (ADS)

    Yuan, Xue; Hao, Xiaoli; Chen, Houjin; Wei, Xueye

    2013-10-01

    A new context-aware scale-invariant feature transform (CASIFT) approach is proposed, which is designed for the use in traffic sign recognition (TSR) systems. The following issues remain in previous works in which SIFT is used for matching or recognition: (1) SIFT is unable to provide color information; (2) SIFT only focuses on local features while ignoring the distribution of global shapes; (3) the template with the maximum number of matching points selected as the final result is unstable, especially for images with simple patterns; and (4) SIFT is liable to result in errors when different images share the same local features. In order to resolve these problems, a new CASIFT approach is proposed. The contributions of the work are as follows: (1) color angular patterns are used to provide the color distinguishing information; (2) a CASIFT which effectively combines local and global information is proposed; and (3) a method for computing the similarity between two images is proposed, which focuses on the distribution of the matching points, rather than using the traditional SIFT approach of selecting the template with the maximum number of matching points as the final result. The proposed approach is particularly effective in dealing with traffic signs which have rich colors and varied global shape distribution. Experiments are performed to validate the effectiveness of the proposed approach in TSR systems, and the experimental results are satisfying even for images containing traffic signs that have been rotated, damaged, altered in color, have undergone affine transformations, or images which were photographed under different weather or illumination conditions.

  2. Analysis of cutting force signals by wavelet packet transform for surface roughness monitoring in CNC turning

    NASA Astrophysics Data System (ADS)

    García Plaza, E.; Núñez López, P. J.

    2018-01-01

    On-line monitoring of surface finish in machining processes has proven to be a substantial advancement over traditional post-process quality control techniques by reducing inspection times and costs and by avoiding the manufacture of defective products. This study applied techniques for processing cutting force signals based on the wavelet packet transform (WPT) method for the monitoring of surface finish in computer numerical control (CNC) turning operations. The behaviour of 40 mother wavelets was analysed using three techniques: global packet analysis (G-WPT), and the application of two packet reduction criteria: maximum energy (E-WPT) and maximum entropy (SE-WPT). The optimum signal decomposition level (Lj) was determined to eliminate noise and to obtain information correlated to surface finish. The results obtained with the G-WPT method provided an in-depth analysis of cutting force signals, and frequency ranges and signal characteristics were correlated to surface finish with excellent results in the accuracy and reliability of the predictive models. The radial and tangential cutting force components at low frequency provided most of the information for the monitoring of surface finish. The E-WPT and SE-WPT packet reduction criteria substantially reduced signal processing time, but at the expense of discarding packets with relevant information, which impoverished the results. The G-WPT method was observed to be an ideal procedure for processing cutting force signals applied to the real-time monitoring of surface finish, and was estimated to be highly accurate and reliable at a low analytical-computational cost.
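
    The packet-selection criteria described above can be illustrated with a toy Haar wavelet packet transform; the study's 40 mother wavelets and cutting-force signals are not reproduced, and all names are illustrative. The full tree corresponds to G-WPT, while keeping only the maximum-energy or maximum-entropy packet corresponds to the E-WPT and SE-WPT reduction criteria.

```python
import math

def haar_step(sig):
    # one Haar analysis step: approximation and detail half-bands
    a = [(sig[i] + sig[i + 1]) / math.sqrt(2.0) for i in range(0, len(sig) - 1, 2)]
    d = [(sig[i] - sig[i + 1]) / math.sqrt(2.0) for i in range(0, len(sig) - 1, 2)]
    return a, d

def wpt(sig, level):
    # full wavelet packet tree: split every packet at every level (G-WPT)
    packets = [list(sig)]
    for _ in range(level):
        nxt = []
        for p in packets:
            a, d = haar_step(p)
            nxt.append(a)
            nxt.append(d)
        packets = nxt
    return packets

def energy(p):
    return sum(x * x for x in p)

def shannon_entropy(p):
    # entropy of the normalized coefficient energies within one packet
    e = energy(p)
    if e == 0.0:
        return 0.0
    probs = [x * x / e for x in p if x != 0.0]
    return -sum(q * math.log(q) for q in probs)

# slow sine: nearly all energy lands in the lowest-band packet, which is
# the packet a max-energy criterion (E-WPT) would keep
sig = [math.sin(2.0 * math.pi * i / 32.0) for i in range(64)]
packets = wpt(sig, 2)
energies = [energy(p) for p in packets]
```

    Because the Haar transform is orthonormal, the packet energies sum to the signal energy, which is what makes energy-based packet ranking meaningful.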

  3. Straight and chopped dc performance data for a General Electric 5BT 2366C10 motor and an EV-1 controller

    NASA Technical Reports Server (NTRS)

    Edie, P. C.

    1981-01-01

    Performance data on the General Electric 5BT 2366C10 series wound dc motor and EV-1 Chopper Controller is supplied for the electric vehicle manufacturer. Data is provided for both straight and chopped dc input to the motor, at 2 motor temperature levels. Testing was done at 6 voltage increments to the motor, and 2 voltage increments to the controller. Data results are presented in both tabular and graphical forms. Tabular information includes motor voltage and current input data, motor speed and torque output data, power data and temperature data. Graphical information includes torque-speed, motor power output-speed, torque-current, and efficiency-speed plots under the various operating conditions. The data resulting from this testing shows the speed-torque plots to have the most variance with operating temperature. The maximum motor efficiency is between 86% and 87%, regardless of temperature or mode of operation. When the chopper is utilized, maximum motor efficiency occurs when the chopper duty cycle approaches 100%.

  4. Entropy studies on beam distortion by atmospheric turbulence

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher C.

    2015-09-01

    When a beam propagates through atmospheric turbulence over a known distance, the target beam profile deviates from the projected profile of the beam on the receiver. Intuitively, the unwanted distortion provides information about the atmospheric turbulence. This information is crucial for guiding adaptive optic systems and improving beam propagation results. In this paper, we propose an entropy study based on the image from a plenoptic sensor to provide a measure of information content of atmospheric turbulence. In general, lower levels of atmospheric turbulence will have a smaller information size while higher levels of atmospheric turbulence will cause significant expansion of the information size, which may exceed the maximum capacity of a sensing system and jeopardize the reliability of an AO system. Therefore, the entropy function can be used to analyze the turbulence distortion and evaluate performance of AO systems. In fact, it serves as a metric that can tell the improvement of beam correction in each iteration step. In addition, it points out the limitation of an AO system at optimized correction as well as the minimum information needed for wavefront sensing to achieve certain levels of correction. In this paper, we will demonstrate the definition of the entropy function and how it is related to evaluating information (randomness) carried by atmospheric turbulence.
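
    The abstract does not specify the entropy function in detail; as a generic sketch of one plausible ingredient, the code below computes the Shannon entropy of a binned intensity histogram, which grows as turbulence spreads information across more of the sensor. Names, bin count, and data are illustrative assumptions.

```python
import math
from collections import Counter

def histogram_entropy(pixels, bins=16, lo=0.0, hi=1.0):
    # Shannon entropy (bits) of a binned intensity histogram
    width = (hi - lo) / bins
    counts = Counter(min(int((p - lo) / width), bins - 1) for p in pixels)
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# intensities spread over all 16 bins give maximal entropy (4 bits);
# a uniform (undistorted) frame concentrates in one bin and gives 0
h_spread = histogram_entropy([i / 256.0 for i in range(256)])
h_flat = histogram_entropy([0.5] * 100)
```
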

  5. Spatially varying stress state in the central U.S. from joint inversion of focal mechanism and maximum horizontal stress data

    NASA Astrophysics Data System (ADS)

    Carlson, G.; Johnson, K. M.; Rupp, J. A.

    2017-12-01

    The Midcontinental United States continues to experience anomalously high rates of seismicity and generate large earthquakes despite its location in the cratonic interior, far from any plate boundary. There is renewed interest in Midcontinent seismicity with the concern that fluid injection within the Illinois basin could induce seismicity. In order to better understand the seismic hazard and inform studies of risk mitigation, we present an assessment of the contemporary crustal stress state in the Illinois basin and surrounding region, looking specifically at how the orientation of maximum horizontal compressive stress varies throughout the region. This information will help identify which faults are critically stressed and therefore most likely to fail under increased pore pressures. We conduct a Bayesian stress inversion of focal mechanism solutions and maximum horizontal stress orientations from borehole breakout, core fracture, overcoring, hydraulic fracture, and strain gauge measurements for maximum horizontal compressive stress orientations across the Midcontinent region and produce a map of expected faulting styles. Because distinguishing the slipping fault plane from the auxiliary nodal plane is ambiguous for focal mechanisms, the choice of the fault plane and associated slip vector to use in the inversion is important in the estimation of the stress tensor. The stress inversion provides an objective means to estimate nonlinear parameters including the spatial smoothing parameter, unknown data uncertainties, as well as the selection of focal mechanism nodal planes. We find a systematic rotation of the maximum horizontal stress orientation (SHmax) across a 1000 km width of the Midcontinent. We find that SHmax rotates from N60E to E/W orientation across the southern Illinois basin and returns to N60E in the western Appalachian basin. The stress regime is largely consistent with strike-slip faulting with pockets of a reverse-faulting stress regime near the New Madrid and Wabash Valley seismic zones.

  6. Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Deng, Hui; Ansley, Timothy; Chang, Hua-Hua

    2010-01-01

    In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…

  7. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  8. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  9. Peer Review for EPA’s Proposed Approaches to Inform the Derivation of a Maximum Contaminant Level Goal for Perchlorate in Drinking Water

    EPA Science Inventory

    EPA is developing approaches to inform the derivation of a Maximum Contaminant Level Goal (MCLG) for perchlorate in drinking water under the Safe Drinking Water Act. EPA previously conducted an independent, external, scientific peer review of the draft biologically-based dose-res...

  10. User’s manual to update the National Wildlife Refuge System Water Quality Information System (WQIS)

    USGS Publications Warehouse

    Chojnacki, Kimberly A.; Vishy, Chad J.; Hinck, Jo Ellen; Finger, Susan E.; Higgins, Michael J.; Kilbride, Kevin

    2013-01-01

    National Wildlife Refuges may have impaired water quality resulting from historic and current land uses, upstream sources, and aerial pollutant deposition. National Wildlife Refuge staff have limited time available to identify and evaluate potential water quality issues. As a result, water quality–related issues may not be resolved until a problem has already arisen. The National Wildlife Refuge System Water Quality Information System (WQIS) is a relational database developed for use by U.S. Fish and Wildlife Service staff to identify existing water quality issues on refuges in the United States. The WQIS database relies on a geospatial overlay analysis of data layers for ownership, streams and water quality. The WQIS provides summary statistics of 303(d) impaired waters and total maximum daily loads for the National Wildlife Refuge System at the national, regional, and refuge level. The WQIS allows U.S. Fish and Wildlife Service staff to be proactive in addressing water quality issues by identifying and understanding the current extent and nature of 303(d) impaired waters and subsequent total maximum daily loads. Water quality data are updated bi-annually, making it necessary to refresh the WQIS to maintain up-to-date information. This manual outlines the steps necessary to update the data and reports in the WQIS.

  11. Very Large Array Observations of the Sun with Related Observations Using the SMM (Solar Maximum Mission) Satellite

    DTIC Science & Technology

    1988-10-12

    white light sunspots (black dots) but these regions are associated with intense radiation at 20 cm wave- material would, however, be invisible in X...spots. The intense, million-degree radiation at 6 cm lies above sunspot umbrae in coronal regions where the longitudinal magnetic field strength Hi...capable of measuring the radio intensity and polarization with high angular and time resolution, thereby providing information about the preburst heating

  12. Test Review: Woodcock, R. W., Schrank, F. A., Mather, N., & McGrew, K. S. 2007). "Woodcock-Johnson III Tests of Achievement, Form C/Brief Battery." Rolling Meadows, IL: Riverside

    ERIC Educational Resources Information Center

    Grenwelge, Cheryl H.

    2009-01-01

    The Woodcock Johnson III Brief Assessment is a "maximum performance test" (Reynolds, Livingston, Willson, 2006) that is designed to assess the upper levels of knowledge and skills of the test taker using both power and speed to obtain a large amount of information in a short period of time. The Brief Assessment also provides an adequate…

  13. Statistical Properties of Maximum Likelihood Estimators of Power Law Spectra Information

    NASA Technical Reports Server (NTRS)

    Howell, L. W., Jr.

    2003-01-01

    A simple power law model consisting of a single spectral index, sigma(sub 1), is believed to be an adequate description of the galactic cosmic-ray (GCR) proton flux at energies below 10(exp 13) eV, with a transition at the knee energy, E(sub k), to a steeper spectral index sigma(sub 2) greater than sigma(sub 1) above E(sub k). The maximum likelihood (ML) procedure was developed for estimating the single parameter sigma(sub 1) of a simple power law energy spectrum and generalized to estimate the three spectral parameters of the broken power law energy spectrum from simulated detector responses and real cosmic-ray data. The statistical properties of the ML estimator were investigated and shown to have the three desirable properties: (P1) consistency (asymptotically unbiased), (P2) efficiency (asymptotically attains the Cramer-Rao minimum variance bound), and (P3) asymptotically normally distributed, under a wide range of potential detector response functions. Attainment of these properties necessarily implies that the ML estimation procedure provides the best unbiased estimator possible. While simulation studies can easily determine if a given estimation procedure provides an unbiased estimate of the spectra information, and whether or not the estimator is approximately normally distributed, attainment of the Cramer-Rao bound (CRB) can only be ascertained by calculating the CRB for an assumed energy spectrum-detector response function combination, which can be quite formidable in practice. However, the effort in calculating the CRB is very worthwhile because it provides the necessary means to compare the efficiency of competing estimation techniques and, furthermore, provides a stopping rule in the search for the best unbiased estimator. Consequently, the CRB for both the simple and broken power law energy spectra are derived herein and the conditions under which they are attained in practice are investigated.
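
    For the simple (unbroken) power law case described above, the ML estimator of the spectral index has a well-known closed form: alpha-hat = 1 + n / Σ ln(x_i / x_min). The sketch below checks it against inverse-CDF samples from a known index; all names, the seed, and the chosen index are illustrative, not the paper's detector-response setting.

```python
import math
import random

def powerlaw_index_mle(xs, xmin):
    # closed-form ML estimate of alpha for p(x) ∝ x^(-alpha), x >= xmin
    n = len(xs)
    return 1.0 + n / sum(math.log(x / xmin) for x in xs)

# inverse-CDF sampling from a single power law with a known index
random.seed(42)
alpha_true, xmin = 2.7, 1.0
xs = [xmin * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
      for _ in range(5000)]
alpha_hat = powerlaw_index_mle(xs, xmin)
```

    The estimator is consistent and asymptotically efficient, which is the scalar analogue of properties (P1)-(P3) discussed in the abstract.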

  14. Transfer entropy analysis of maternal and fetal heart rate coupling.

    PubMed

    Marzbanrad, Faezeh; Kimura, Yoshitaka; Endo, Miyuki; Palaniswami, Marimuthu; Khandoker, Ahsan H

    2015-01-01

    Although evidence of the short term relationship between maternal and fetal heart rates has been found in previous model-based studies, knowledge about the mechanism and patterns of the coupling during gestation is still limited. In this study, a model-free method based on Transfer Entropy (TE) was applied to quantify the maternal-fetal heart rate couplings in both directions. Furthermore, analysis of the lag at which TE was maximum and its changes throughout gestation, provided more information about the mechanism of coupling and its latency. Experimental results based on fetal electrocardiograms (fECGs) and maternal ECG showed the evidence of coupling for 62 out of 65 healthy mothers and fetuses in each direction, by statistically validating against the surrogate pairs. The fetuses were divided into three gestational age groups: early (16-25 weeks), mid (26-31 weeks) and late (32-41 weeks) gestation. The maximum TE from maternal to fetal heart rate significantly increased from early to mid gestation, while the coupling delay on both directions decreased significantly from mid to late gestation. These changes occur concomitant with the maturation of the fetal sensory and autonomic nervous systems with advancing gestational age. In conclusion, the application of TE with delays revealed detailed information about the changes in fetal-maternal heart rate coupling strength and latency throughout gestation, which could provide novel clinical markers of fetal development and well-being.
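
    A minimal model-free sketch of lagged transfer entropy on binary series follows; the paper's fECG processing is not reproduced, and the names and the toy coupling (x copying y with a fixed delay) are illustrative. TE at lag τ measures how much y[t−τ] helps predict x[t] beyond x's own immediate history.

```python
import math
import random
from collections import Counter

def transfer_entropy(x, y, lag):
    # TE from y to x in bits, with history length 1:
    # sum over states of p(x_t, x_{t-1}, y_{t-lag}) * log2 of the
    # ratio p(x_t | x_{t-1}, y_{t-lag}) / p(x_t | x_{t-1})
    triples = [(x[t], x[t - 1], y[t - lag]) for t in range(max(lag, 1), len(x))]
    n = len(triples)
    c_xxy = Counter(triples)
    c_xy = Counter((a, b) for a, b, c in triples)   # (x_t, x_{t-1})
    c_hy = Counter((b, c) for a, b, c in triples)   # (x_{t-1}, y_{t-lag})
    c_h = Counter(b for a, b, c in triples)         # x_{t-1}
    te = 0.0
    for (a, b, c), k in c_xxy.items():
        te += (k / n) * math.log2(k * c_h[b] / (c_xy[(a, b)] * c_hy[(b, c)]))
    return te

# toy coupling: x copies y with a 2-step delay, so TE peaks at lag 2
random.seed(7)
y = [random.randint(0, 1) for _ in range(2000)]
x = [0, 0] + y[:-2]
te_by_lag = {lag: transfer_entropy(x, y, lag) for lag in (1, 2, 3)}
```

    Scanning TE over lags and taking the argmax is the same device the study uses to read off coupling latency, here recovering the built-in 2-step delay.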

  15. Physiographic and land cover attributes of the Puget Lowland and the active streamflow gaging network, Puget Sound Basin

    USGS Publications Warehouse

    Konrad, Christopher; Sevier, Maria

    2014-01-01

    Geospatial information for the active streamflow gaging network in the Puget Sound Basin was compiled to support regional monitoring of stormwater effects to small streams. The compilation includes drainage area boundaries and physiographic and land use attributes that affect hydrologic processes. Three types of boundaries were used to tabulate attributes: Puget Sound Watershed Characterization analysis units (AU); the drainage area of active streamflow gages; and the catchments of Regional Stream Monitoring Program (RSMP) sites. The active streamflow gaging network generally includes sites that represent the ranges of attributes for lowland AUs, although there are few sites with low elevations (less than 60 meters), low precipitation (less than 1 meter per year), or high stream density (greater than 5 kilometers per square kilometer). The active streamflow gaging network can serve to provide streamflow information in some AUs and RSMP sites, particularly where the streamflow gage measures streamflow generated from a part of the AU or that drains to the RSMP site, and that part of the AU or RSMP site is a significant fraction of the drainage area of the streamgage. The maximum fraction of each AU or RSMP catchment upstream of a streamflow gage and the maximum fraction of any one gaged basin in an AU or RSMP along with corresponding codes are provided in the attribute tables.

  16. Using Instrumental Variable (IV) Tests to Evaluate Model Specification in Latent Variable Structural Equation Models*

    PubMed Central

    Kirby, James B.; Bollen, Kenneth A.

    2009-01-01

    Structural Equation Modeling with latent variables (SEM) is a powerful tool for social and behavioral scientists, combining many of the strengths of psychometrics and econometrics into a single framework. The most common estimator for SEM is the full-information maximum likelihood estimator (ML), but there is continuing interest in limited information estimators because of their distributional robustness and their greater resistance to structural specification errors. However, the literature discussing model fit for limited information estimators for latent variable models is sparse compared to that for full information estimators. We address this shortcoming by providing several specification tests based on the 2SLS estimator for latent variable structural equation models developed by Bollen (1996). We explain how these tests can be used to not only identify a misspecified model, but to help diagnose the source of misspecification within a model. We present and discuss results from a Monte Carlo experiment designed to evaluate the finite sample properties of these tests. Our findings suggest that the 2SLS tests successfully identify most misspecified models, even those with modest misspecification, and that they provide researchers with information that can help diagnose the source of misspecification. PMID:20419054

  17. Geotechnical parameter spatial distribution stochastic analysis based on multi-precision information assimilation

    NASA Astrophysics Data System (ADS)

    Wang, C.; Rubin, Y.

    2014-12-01

    Spatial distribution of the important geotechnical parameter compression modulus Es contributes considerably to the understanding of the underlying geological processes and the adequate assessment of the mechanical effects of Es on differential settlement of large continuous structure foundations. These analyses should be derived using an assimilating approach that combines in-situ static cone penetration tests (CPT) with borehole experiments. To achieve such a task, the Es distribution of a stratum of silty clay in region A of China Expo Center (Shanghai) is studied using the Bayesian-maximum entropy method. This method rigorously and efficiently integrates the multiple precisions of different geotechnical investigations and sources of uncertainty. Single CPT samplings were modeled as a rational probability density curve by maximum entropy theory. A spatial prior multivariate probability density function (PDF) and the likelihood PDF of the CPT positions were built from borehole experiments and the potential value of the prediction point; then, by numerical integration over the CPT probability density curves, the posterior probability density curve of the prediction point was calculated within the Bayesian reverse interpolation framework. The results were compared between the Gaussian Sequential Stochastic Simulation and Bayesian methods. The differences were also discussed between single CPT samplings of normal distribution and the simulated probability density curve based on maximum entropy theory. It is shown that the study of Es spatial distributions can be improved by properly incorporating CPT sampling variation into the interpolation process, whereas more informative estimations are generated by considering CPT uncertainty at the estimation points. The calculation illustrates the significance of stochastic Es characterization in a stratum, and identifies limitations associated with inadequate geostatistical interpolation techniques. These characterization results will provide a multi-precision information assimilation method for other geotechnical parameters.

  18. Phase transitions in a system of long rods on two-dimensional lattices by means of information theory

    NASA Astrophysics Data System (ADS)

    Vogel, E. E.; Saravia, G.; Ramirez-Pastor, A. J.

    2017-12-01

    The orientational phase transitions that occur in the deposition of long rods of length k (in lattice units) are characterized by information theory techniques. We calculate the absolute value of an order parameter δ, which weights the relative orientations of the deposited rods and varies between 0.0 (random orientation) and 1.0 (fully oriented along either of the two equivalent directions of an L×L square lattice). A Monte Carlo (MC) algorithm is implemented to induce a dynamics allowing for accommodation of the rods at any given density or coverage θ (the ratio of occupied sites to all sites in the lattice). The files storing δ(t) (with time t measured in MC steps) are then processed by the data recognizer wlzip, based on data-compression techniques, yielding the information content measured by a parameter η(θ). This allows us to recognize two maxima separated by a well-defined minimum in η(θ) provided k ≥ 7. The first maximum is associated with an isotropic-nematic (I-N) phase transition occurring at intermediate density, while the second maximum is associated with some kind of nematic-isotropic transition at high coverage. For k < 7, the curves of η(θ) are almost constant, presenting a very broad maximum which can hardly be associated with a phase transition. The study varies L and k, allowing for a basic scaling of the found critical densities towards the thermodynamic limit. These calculations confirm the tendency obtained by different methods for the intermediate-density I-N phase transition, while this tendency is established here for the high-density phase transition.
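The abstract does not spell out the formula for δ; one plausible concrete form for rods deposited horizontally or vertically is |N_h − N_v| / (N_h + N_v), which is 0 for random orientations and 1 for full alignment. The sketch below uses that hypothetical definition:

```python
import random

random.seed(1)

def order_parameter(rods):
    """|n_h - n_v| / (n_h + n_v) for rods labeled 'h' or 'v'."""
    n_h = rods.count('h')
    n_v = rods.count('v')
    return abs(n_h - n_v) / (n_h + n_v)

isotropic = [random.choice('hv') for _ in range(10000)]  # random orientations
nematic = ['h'] * 10000                                   # fully aligned phase

d_iso = order_parameter(isotropic)  # near 0 (isotropic)
d_nem = order_parameter(nematic)    # exactly 1 (nematic)
```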

  19. Human vision is determined based on information theory.

    PubMed

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-03

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.

  20. Human vision is determined based on information theory

    NASA Astrophysics Data System (ADS)

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.

  1. Human vision is determined based on information theory

    PubMed Central

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-01-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition. PMID:27808236
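For comparison with the intensity-only view that the paper argues against, Wien's displacement law locates the intensity peak of a blackbody at the Sun's effective temperature near 500 nm; the paper's reported optima (555 nm photopic, 508 nm scotopic) differ because they also account for entropy and atmospheric filtering. A quick check of the intensity peak alone:

```python
# Wien's displacement law: lambda_peak = b / T for a blackbody.
WIEN_B = 2.897771955e-3   # Wien displacement constant, m*K
T_SUN = 5778.0            # approximate effective temperature of the Sun, K

lambda_peak_nm = WIEN_B / T_SUN * 1e9   # intensity peak, ~501 nm
```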

  2. Regional early flood warning system: design and implementation

    NASA Astrophysics Data System (ADS)

    Chang, L. C.; Yang, S. N.; Kuo, C. L.; Wang, Y. F.

    2017-12-01

    This study proposes a prototype of a regional early flood inundation warning system for Tainan City, Taiwan. AI techniques are used to forecast multi-step-ahead regional flood inundation maps during storm events. The computing time is only a few seconds, which enables real-time regional flood inundation forecasting. A database is built to organize the data and information needed for building the real-time forecasting models, maintaining the relations among forecast points, and displaying forecast results, while real-time data acquisition is another key task because the model requires immediate access to rain gauge information to provide forecast services. All database-related programs are constructed in Microsoft SQL Server using Visual C# to extract real-time hydrological data, manage the data, store the forecast results and supply the information to a visual map-based display. The regional early flood inundation warning system uses up-to-date Web technologies, driven by the database and real-time data acquisition, to display the online forecast flood inundation depths in the study area. The user-friendly interface sequentially shows the inundated area on Google Maps together with the maximum inundation depth and its location, and provides a KMZ file download of the results for viewing in Google Earth. The developed system provides the relevant information and online forecast results that help city authorities make decisions during typhoon events and take actions to mitigate losses.

  3. A method for classification of multisource data using interval-valued probabilities and its application to HIRIS data

    NASA Technical Reports Server (NTRS)

    Kim, H.; Swain, P. H.

    1991-01-01

    A method of classifying multisource data in remote sensing is presented. The proposed method considers each data source as an information source providing a body of evidence, represents statistical evidence by interval-valued probabilities, and uses Dempster's rule to integrate information from multiple data sources. The method is applied to the problem of ground-cover classification of multispectral data combined with digital terrain data such as elevation, slope, and aspect. The method is then applied to simulated 201-band High Resolution Imaging Spectrometer (HIRIS) data by dividing the very high-dimensional data source into smaller, more manageable pieces based on global statistical correlation information. It produces higher classification accuracy than the maximum likelihood (ML) classification method when the Hughes phenomenon is apparent.
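The evidence-fusion step can be sketched with the standard point-valued form of Dempster's rule of combination (the paper itself works with interval-valued probabilities, which this toy does not capture). Class names and mass values below are hypothetical:

```python
from itertools import product

def dempster(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) by Dempster's rule."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb          # mass assigned to disjoint hypotheses
    k = 1.0 - conflict                   # normalization by total non-conflict
    return {s: m / k for s, m in combined.items()}

# Two evidence sources over ground-cover classes {water, forest} (made-up masses)
W, F = frozenset({'water'}), frozenset({'forest'})
WF = W | F                               # ignorance: "either class"
spectral = {W: 0.6, F: 0.3, WF: 0.1}     # e.g. a multispectral source
terrain = {W: 0.2, F: 0.5, WF: 0.3}      # e.g. elevation/slope/aspect source

fused = dempster(spectral, terrain)
```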

  4. Optimal firing rate estimation

    NASA Technical Reports Server (NTRS)

    Paulin, M. G.; Hoffman, L. F.

    2001-01-01

    We define a measure for evaluating the quality of a predictive model of the behavior of a spiking neuron. This measure, information gain per spike (Is), indicates how much more information is provided by the model than if the prediction were made by specifying the neuron's average firing rate over the same time period. We apply a maximum Is criterion to optimize the performance of Gaussian smoothing filters for estimating neural firing rates. With data from bullfrog vestibular semicircular canal neurons and data from simulated integrate-and-fire neurons, the optimal bandwidth for firing rate estimation is typically similar to the average firing rate. Precise timing and average rate models are limiting cases that perform poorly. We estimate that bullfrog semicircular canal sensory neurons transmit in the order of 1 bit of stimulus-related information per spike.
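The Gaussian-smoothed firing-rate estimate underlying the bandwidth optimization can be sketched as a kernel sum over spike times. The spike train and bandwidth below are hypothetical (a regular 20 Hz train, with the bandwidth set near the inter-spike interval, loosely echoing the finding that the optimal bandwidth is comparable to the average rate):

```python
import math

def smoothed_rate(spike_times, t, bandwidth):
    """Gaussian-kernel firing-rate estimate (spikes/s) at time t."""
    norm = bandwidth * math.sqrt(2 * math.pi)
    return sum(math.exp(-0.5 * ((t - s) / bandwidth) ** 2)
               for s in spike_times) / norm

# Hypothetical regular 20 Hz spike train over one second
spikes = [i / 20 for i in range(20)]
rate_mid = smoothed_rate(spikes, 0.5, bandwidth=0.05)  # ~20 spikes/s
```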

  5. Applications of quantum entropy to statistics

    NASA Astrophysics Data System (ADS)

    Silver, R. N.; Martz, H. F.

    This paper develops two generalizations of the maximum entropy (ME) principle. First, Shannon classical entropy is replaced by von Neumann quantum entropy to yield a broader class of information divergences (or penalty functions) for statistics applications. Negative relative quantum entropy enforces convexity, positivity, non-local extensivity and prior correlations such as smoothness. This enables the extension of ME methods from their traditional domain of ill-posed in-verse problems to new applications such as non-parametric density estimation. Second, given a choice of information divergence, a combination of ME and Bayes rule is used to assign both prior and posterior probabilities. Hyperparameters are interpreted as Lagrange multipliers enforcing constraints. Conservation principles are proposed to act statistical regularization and other hyperparameters, such as conservation of information and smoothness. ME provides an alternative to hierarchical Bayes methods.

  6. The effects of age, viewing distance, display type, font type, colour contrast and number of syllables on the legibility of Korean characters.

    PubMed

    Kong, Yong-Ku; Lee, Inseok; Jung, Myung-Chul; Song, Young-Woong

    2011-05-01

    This study evaluated the effects of age (20s and 60s), viewing distance (50 cm, 200 cm), display type (paper, monitor), font type (Gothic, Ming), colour contrast (black letters on white background, white letters on black background) and number of syllables (one, two) on the legibility of Korean characters, using four legibility measures (minimum letter size for 100% correctness, maximum letter size for 0% correctness, minimum letter size for the least discomfort and maximum letter size for the most discomfort). Ten subjects in each age group read the four letters presented on a slide (letter size varied from 80 pt to 2 pt). Subjects also subjectively rated the reading discomfort of the letters on a 4-point scale (1 = no discomfort, 4 = most discomfort). According to the ANOVA procedure, age, viewing distance and font type significantly affected all four dependent variables (p < 0.05), while the main effect of colour contrast was not statistically significant for any measure. Two-syllable words remained legible at smaller letter sizes than one-syllable words on the two correctness measures. The younger group could read letters about two times smaller than the older group could, and letters viewed at 50 cm could be about three times smaller than those viewed at 200 cm. Gothic fonts remained legible at smaller sizes than Ming fonts, and monitors supported smaller sizes than paper for the correctness measures and the maximum letter size for the most discomfort. A comparison of the correctness and discomfort results showed that people generally preferred letter sizes larger than those they could merely read. The findings of this study may provide basic information for setting a global standard of letter size or font type to improve the legibility of characters written in Korean. STATEMENT OF RELEVANCE: The results of this study will provide basic information and guidelines for setting standards of letter size and font type to improve the legibility of characters written in Korean, and may also be useful to those designing visual displays.

  7. Controlling sunbathing safety during the summer holidays - The solar UV campaign at Baltic Sea coast in 2015.

    PubMed

    Guzikowski, Jakub; Czerwińska, Agnieszka E; Krzyścin, Janusz W; Czerwiński, Michał A

    2017-08-01

    Information regarding the intensity of surface UV radiation provided to the public is frequently given as a daily maximum UV Index (UVI) based on a prognostic model. The quality of the UV forecast depends on the accuracy of the predicted total column ozone and cloudiness. The daily variability of the UVI is needed to determine the risk of UV overexposure during outdoor activities. Various methods of estimating the instantaneous UVI and the maximum duration of UV exposure (the time to receive a dose equal to the minimal erythemal dose, MED) at the site of sunbathing were compared. The UV indices were obtained during a field experiment on the Baltic Sea coast from 13 to 24 July 2015. The following UVI calculation models were considered: UVI measurements by simple hand-held biometers (Silver Crest, Oregon Scientific, or the more advanced Solarmeter 6.5); our smartphone models based on cloud cover observations at the site combined with either the cloudless-sky UVI forecast (available for any site to all smartphone users) or the measured UVI; and 24 h weather predictions by an ensemble of 10 models (with various cloud parameterizations). The direct UV measurements, even by a simple biometer, provided useful UVI estimates. The smartphone applications yielded good agreement with the UV measurements. The weather prediction models for cloudless-sky conditions could provide valuable information if nearly cloudless conditions (cloudless sky or slightly scattered clouds) were observed at the sunbathing site. Copyright © 2017 Elsevier B.V. All rights reserved.
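The link between UVI and safe exposure time can be sketched with two standard conversion assumptions that are not taken from the paper: one UVI unit corresponds to 0.025 W/m² of erythemally weighted irradiance, and an MED of 250 J/m² is typical for fair, sun-sensitive skin (MED is strongly skin-type dependent):

```python
# Time to accumulate one MED at a constant UV Index (illustrative assumptions).
UVI_TO_WM2 = 0.025   # W/m^2 of erythemal irradiance per UV Index unit
MED_JM2 = 250.0      # J/m^2; assumed MED for sun-sensitive skin

def minutes_to_med(uvi):
    """Minutes of constant-UVI exposure needed to reach one MED."""
    return MED_JM2 / (uvi * UVI_TO_WM2) / 60.0

t_uvi8 = minutes_to_med(8)   # roughly 21 minutes at UVI 8
```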

  8. The in vitro wear behavior of experimental resin-based composites derived from a commercial formulation.

    PubMed

    Finlay, Nessa; Hahnel, Sebastian; Dowling, Adam H; Fleming, Garry J P

    2013-04-01

    To investigate the short- and long-term in vitro wear resistance of experimental resin-based composites (RBCs) derived from a commercial formulation. Six experimental RBCs were manufactured by manipulating the monomeric resin composition and the filler characteristics of Grandio (Voco GmbH, Cuxhaven, Germany). The Oregon Health Sciences University (OHSU) oral wear simulator was used in the presence of a food-like slurry to simulate three-body abrasion and attrition wear for 50,000, 150,000 and 300,000 cycles. A three-dimensional image of each wear facet was created and the total volumetric wear (mm(3)) and maximum wear depth (μm) were quantified for the RBC and antagonist. Statistical analyses of the total volumetric wear and maximum wear depth data (two- and one-way analyses of variance (ANOVA), with Tukey's post hoc tests where required) and regression analyses, were conducted at p=0.05. Two-way ANOVAs identified a significant effect of RBC material×wear cycles, RBC material and wear cycles (all p<0.0001). Regression analyses showed significant increases in the total volumetric wear (p≤0.001) and maximum wear depth data (p≤0.004) for all RBCs with increasing wear cycles. Differences between all RBC materials were evident after ≥150,000 wear cycles and antagonist wear provided valuable information to support the experimental findings. Wear simulating machines can provide an indication of the clinical performance but clinical performance is multi-factorial and wear is only a single facet. Employing experimental RBCs provided by a dental manufacturer rather than using self-manufactured RBCs or dental products provides increased experimental control by limiting the variables involved. Copyright © 2013 Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  9. Marginal and Random Intercepts Models for Longitudinal Binary Data With Examples From Criminology.

    PubMed

    Long, Jeffrey D; Loeber, Rolf; Farrington, David P

    2009-01-01

    Two models for the analysis of longitudinal binary data are discussed: the marginal model and the random intercepts model. In contrast to the linear mixed model (LMM), the two models for binary data are not subsumed under a single hierarchical model. The marginal model provides group-level information whereas the random intercepts model provides individual-level information including information about heterogeneity of growth. It is shown how a type of numerical averaging can be used with the random intercepts model to obtain group-level information, thus approximating individual and marginal aspects of the LMM. The types of inferences associated with each model are illustrated with longitudinal criminal offending data based on N = 506 males followed over a 22-year period. Violent offending indexed by official records and self-report were analyzed, with the marginal model estimated using generalized estimating equations and the random intercepts model estimated using maximum likelihood. The results show that the numerical averaging based on the random intercepts can produce prediction curves almost identical to those obtained directly from the marginal model parameter estimates. The results provide a basis for contrasting the models and the estimation procedures and key features are discussed to aid in selecting a method for empirical analysis.
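The "numerical averaging" idea can be sketched directly: integrate the subject-specific logistic curve over the fitted intercept distribution to recover a group-level (marginal) probability. The coefficients below are hypothetical, and the averaging is done by Monte Carlo rather than quadrature:

```python
import math
import random

random.seed(0)

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical random-intercepts fit: logit P(offend) = b0_i + b1 * age,
# with subject intercepts b0_i ~ Normal(mu0, sd0).
mu0, sd0, b1 = -1.0, 1.5, 0.05

def marginal_prob(age, n=20000):
    """Group-level probability via numerical averaging over intercepts."""
    total = sum(expit(random.gauss(mu0, sd0) + b1 * age) for _ in range(n))
    return total / n

p_subject = expit(mu0 + b1 * 10)   # subject-specific curve at the mean intercept
p_marginal = marginal_prob(10)     # population-averaged probability
```

Averaging the sigmoid over a symmetric intercept distribution pulls the marginal probability toward 0.5, which is why marginal and subject-specific coefficients differ in scale for binary outcomes.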

  10. Simulated environmental transport distances of Lepeophtheirus salmonis in Loch Linnhe, Scotland, for informing aquaculture area management structures.

    PubMed

    Salama, N K G; Murray, A G; Rabe, B

    2016-04-01

    In the majority of salmon farming countries, production occurs in zones where practices are coordinated to manage disease agents such as Lepeophtheirus salmonis. To inform the structure of zones in specific systems, models have been developed accounting for parasite biology and system hydrodynamics. These models provide individual system farm relationships, and as such, it may be beneficial to produce more generalized principles for informing structures. Here, we use six different forcing scenarios to provide simulations from a previously described model of the Loch Linnhe system, Scotland, to assess the maximum dispersal distance of lice particles released from 12 sites transported over 19 day. Results indicate that the median distance travelled is 6.1 km from release site with <2.5% transported beyond 15 km, which occurs from particles originating from half of the release sites, with an absolute simulated distance of 36 km observed. This provides information suggesting that the disease management areas developed for infectious salmon anaemia control may also have properties appropriate for salmon lice management in Scottish coastal waters. Additionally, general numerical descriptors of the simulated relative lice abundance reduction with increased distance from release location are proposed. © 2015 Crown copyright. © 2015 John Wiley & Sons Ltd.

  11. Multidimensional biochemical information processing of dynamical patterns

    NASA Astrophysics Data System (ADS)

    Hasegawa, Yoshihiko

    2018-02-01

    Cells receive signaling molecules by receptors and relay information via sensory networks so that they can respond properly depending on the type of signal. Recent studies have shown that cells can extract multidimensional information from dynamical concentration patterns of signaling molecules. We herein study how biochemical systems can process multidimensional information embedded in dynamical patterns. We model the decoding networks by linear response functions, and optimize the functions with the calculus of variations to maximize the mutual information between patterns and output. We find that, when the noise intensity is lower, decoders with different linear response functions, i.e., distinct decoders, can extract much information. However, when the noise intensity is higher, distinct decoders do not provide the maximum amount of information. This indicates that, when transmitting information by dynamical patterns, embedding information in multiple patterns is not optimal when the noise intensity is very large. Furthermore, we explore the biochemical implementations of these decoders using control theory and demonstrate that these decoders can be implemented biochemically through the modification of cascade-type networks, which are prevalent in actual signaling pathways.

  12. Multidimensional biochemical information processing of dynamical patterns.

    PubMed

    Hasegawa, Yoshihiko

    2018-02-01

    Cells receive signaling molecules by receptors and relay information via sensory networks so that they can respond properly depending on the type of signal. Recent studies have shown that cells can extract multidimensional information from dynamical concentration patterns of signaling molecules. We herein study how biochemical systems can process multidimensional information embedded in dynamical patterns. We model the decoding networks by linear response functions, and optimize the functions with the calculus of variations to maximize the mutual information between patterns and output. We find that, when the noise intensity is lower, decoders with different linear response functions, i.e., distinct decoders, can extract much information. However, when the noise intensity is higher, distinct decoders do not provide the maximum amount of information. This indicates that, when transmitting information by dynamical patterns, embedding information in multiple patterns is not optimal when the noise intensity is very large. Furthermore, we explore the biochemical implementations of these decoders using control theory and demonstrate that these decoders can be implemented biochemically through the modification of cascade-type networks, which are prevalent in actual signaling pathways.
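The abstract's central point, that the extractable information shrinks as noise intensity grows, can be illustrated with the textbook additive Gaussian channel (a generic stand-in, not the authors' decoder model):

```python
import math

def gaussian_mi_bits(signal_var, noise_var):
    """Mutual information (bits per use) of an additive Gaussian channel."""
    return 0.5 * math.log2(1.0 + signal_var / noise_var)

low_noise = gaussian_mi_bits(1.0, 0.01)   # ~3.3 bits per use
high_noise = gaussian_mi_bits(1.0, 10.0)  # ~0.07 bits per use
```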

  13. Analysis of Summer 2002 Melt Extent on the Greenland Ice Sheet using MODIS and SSM/I Data

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Williams, Richard S., Jr.; Steffen, Konrad; Chien, Y. L.; Foster, James L.; Robinson, David A.; Riggs, George A.

    2004-01-01

    Previous work has shown that the summer of 2002 had the greatest area of snow melt extent on the Greenland ice sheet ever recorded using passive-microwave data. In this paper, we compare the 0 degree isotherm derived from the Moderate-Resolution Imaging Spectroradiometer (MODIS) instrument, with Special Sensor Microwave/Imager (SSM/I)-derived melt, at the time of the maximum melt extent in 2002. To validate the MODIS-derived land-surface temperatures (LSTs), we compared the MODIS LSTs with air temperatures from nine stations (using 11 different data points) and found that they agreed to within 2.3 plus or minus 2.09 C, with station temperatures consistently lower than the MODIS LSTs. According to the MODIS LST, the maximum surface melt extended to approximately 2300 m in southern Greenland; while the SSM/I measurements showed that the maximum melt extended to nearly 2700 m in southeastern Greenland. The MODIS and SSM/I data are complementary in providing detailed information about the progression of surface and near-surface melt on the Greenland ice sheet.

  14. Effect of density feedback on the two-route traffic scenario with bottleneck

    NASA Astrophysics Data System (ADS)

    Sun, Xiao-Yan; Ding, Zhong-Jun; Huang, Guo-Hua

    2016-12-01

    In this paper, we investigate the effect of density feedback on the two-route scenario with a bottleneck. Simulation and theoretical analysis show that there exist two critical vehicle entry probabilities, αc1 and αc2. When the vehicle entry probability α ≤ αc1, four different states are identified in the system, i.e. the free flow state, transition state, maximum current state and congestion state, corresponding to three critical reference densities. In the interval αc1 < α < αc2, however, the free flow and transition states disappear, and only the congestion state remains when α ≥ αc2. According to these results, a traffic control center can adjust the reference density so that the system is in the maximum current state. In this case, the capacity of the traffic system reaches its maximum, so drivers can make full use of the roads. We hope that these results can provide good advice for alleviating traffic jams and be useful to traffic control centers designing advanced traveller information systems.
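Why a control center would steer toward the maximum-current state can be seen in the simplest mean-field picture of a single-lane exclusion process, where the current is J(ρ) = ρ(1 − ρ) and peaks at ρ = 1/2 (the paper's two-route bottleneck model is considerably richer than this toy):

```python
# Mean-field current of a single-lane exclusion process, J = rho * (1 - rho).
densities = [i / 100 for i in range(101)]
currents = [rho * (1 - rho) for rho in densities]

rho_star = densities[currents.index(max(currents))]  # maximum-current density
```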

  15. Corroboration of in vivo cartilage pressures with implications for synovial joint tribology and osteoarthritis causation.

    PubMed

    Morrell, Kjirste C; Hodge, W Andrew; Krebs, David E; Mann, Robert W

    2005-10-11

    Pressures on normal human acetabular cartilage have been collected from two implanted instrumented femoral head hemiprostheses. Despite significant differences in subjects' gender, morphology, mobility, and coordination, in vivo pressure measurements from both subjects covered similar ranges, with maximums of 5-6 MPa in gait, and as high as 18 MPa in other movements. Normalized for subject weight and height (nMPa), for free-speed walking the maximum pressure values were 25.2 for the female subject and 24.5 for the male subject. The overall maximum nMPa values were 76.2 for the female subject during rising from a chair at 11 months postoperative and 82.3 for the male subject while descending steps at 9 months postoperative. These unique in vivo data are consistent with corresponding cadaver experiments and model analyses. The collective results, in vitro data, model studies, and now corroborating in vivo data support the self-pressurizing "weeping" theory of synovial joint lubrication and provide unique information to evaluate the influence of in vivo pressure regimes on osteoarthritis causation and the efficacy of augmentations to, and substitutions for, natural cartilage.

  16. Cardiorespiratory performance during prolonged swimming tests with salmonids: a perspective on temperature effects and potential analytical pitfalls.

    PubMed

    Farrell, A P

    2007-11-29

    A prolonged swimming trial is the most common approach in studying steady-state changes in oxygen uptake, cardiac output and tissue oxygen extraction as a function of swimming speed in salmonids. The data generated by these sorts of studies are used here to support the idea that a maximum oxygen uptake is reached during a critical swimming speed test. Maximum oxygen uptake has a temperature optimum. Potential explanations are advanced to explain why maximum aerobic performance falls off at high temperature. The valuable information provided by critical swimming tests can be confounded by non-steady-state swimming behaviours, which typically occur with increasing frequency as salmonids approach fatigue. Two major concerns are noted. Foremost, measurements of oxygen uptake during swimming can considerably underestimate the true cost of transport near critical swimming speed, apparently in a temperature-dependent manner. Second, based on a comparison with voluntary swimming ascents in a raceway, forced swimming trials in a swim tunnel respirometer may underestimate critical swimming speed, possibly because fish in a swim tunnel respirometer are unable to sustain a ground speed.

  17. Maximum-likelihood estimation of recent shared ancestry (ERSA).

    PubMed

    Huff, Chad D; Witherspoon, David J; Simonson, Tatum S; Xing, Jinchuan; Watkins, W Scott; Zhang, Yuhua; Tuohy, Therese M; Neklason, Deborah W; Burt, Randall W; Guthery, Stephen L; Woodward, Scott R; Jorde, Lynn B

    2011-05-01

    Accurate estimation of recent shared ancestry is important for genetics, evolution, medicine, conservation biology, and forensics. Established methods estimate kinship accurately for first-degree through third-degree relatives. We demonstrate that chromosomal segments shared by two individuals due to identity by descent (IBD) provide much additional information about shared ancestry. We developed a maximum-likelihood method for the estimation of recent shared ancestry (ERSA) from the number and lengths of IBD segments derived from high-density SNP or whole-genome sequence data. We used ERSA to estimate relationships from SNP genotypes in 169 individuals from three large, well-defined human pedigrees. ERSA is accurate to within one degree of relationship for 97% of first-degree through fifth-degree relatives and 80% of sixth-degree and seventh-degree relatives. We demonstrate that ERSA's statistical power approaches the maximum theoretical limit imposed by the fact that distant relatives frequently share no DNA through a common ancestor. ERSA greatly expands the range of relationships that can be estimated from genetic data and is implemented in a freely available software package.
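ERSA's full likelihood models both the number and the lengths of IBD segments, plus background population sharing; the toy below keeps only the length component, under the standard approximation that IBD segment lengths are roughly exponential with mean 100/d cM for relatives separated by d meioses. The data are simulated, not from the paper:

```python
import random

random.seed(42)

MEIOSES = 4    # e.g. first cousins are separated by about 4 meioses
N_SEG = 200    # number of observed IBD segments (simulated)

# Simulate segment lengths (cM): roughly Exponential with mean 100 / d
lengths = [random.expovariate(MEIOSES / 100.0) for _ in range(N_SEG)]

# MLE of the number of meioses under the exponential length model:
# d_hat = 100 / mean(segment length)
d_hat = 100.0 * N_SEG / sum(lengths)
```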

  18. Analysis of Summer 2002 Melt Extent on the Greenland Ice Sheet using MODIS and SSM/I Data

    NASA Technical Reports Server (NTRS)

    Hall, Dorothy K.; Williams, Richard S.; Steffen, Konrad; Chien, Janet Y. L.

    2004-01-01

    Previous work has shown that the summer of 2002 had the greatest area of snow melt extent on the Greenland ice sheet ever recorded using passive-microwave data. In this paper, we compare the 0 deg. isotherm derived from the Moderate-Resolution Imaging Spectroradiometer (MODIS) instrument, with Special Sensor Microwave/Imager (SSM/I)-derived melt, at the time of the maximum melt extent in 2002. To validate the MODIS-derived land-surface temperatures (LSTs), we compared the MODIS LSTs with air temperatures from nine stations (using 11 different data points) and found that they agreed to within 2.3 +/- 2.09 C, with station temperatures consistently lower than the MODIS LSTs. According to the MODIS LST, the maximum surface melt extended to approx. 2300 m in southern Greenland; while the SSM/I measurements showed that the maximum melt extended to nearly 2700 m in southeastern Greenland. The MODIS and SSM/I data are complementary in providing detailed information about the progression of surface and near-surface melt on the Greenland ice sheet.

  19. Analysis of summer 2002 melt extent on the Greenland ice sheet using MODIS and SSM/I data

    USGS Publications Warehouse

    Hall, D.K.; Williams, R.S.; Steffen, K.; Chien, Janet Y.L.

    2004-01-01

    Previous work has shown that the summer of 2002 had the greatest area of snow melt extent on the Greenland ice sheet ever recorded using passive-microwave data. In this paper, we compare the 0° isotherm derived from the Moderate-Resolution Imaging Spectroradiometer (MODIS) instrument, with Special Sensor Microwave/Imager (SSM/I)-derived melt, at the time of the maximum melt extent in 2002. To validate the MODIS-derived land-surface temperatures (LSTs), we compared the MODIS LSTs with air temperatures from nine stations (using 11 different data points) and found that they agreed to within 2.3 ± 2.09 °C, with station temperatures consistently lower than the MODIS LSTs. According to the MODIS LST, the maximum surface melt extended to ~2300 m in southern Greenland; while the SSM/I measurements showed that the maximum melt extended to nearly 2700 m in southeastern Greenland. The MODIS and SSM/I data are complementary in providing detailed information about the progression of surface and near-surface melt on the Greenland ice sheet.

  20. Analysis of summer 2002 melt extent on the Greenland ice sheet using MODIS and SSM/I data

    USGS Publications Warehouse

    Hall, D. K.; Williams, R.S.; Steffen, K.; Chien, Janet Y.L.

    2004-01-01

    Previous work has shown that the summer of 2002 had the greatest area of snow melt extent on the Greenland ice sheet ever recorded using passive-microwave data. In this paper, we compare the 0 °C isotherm derived from the Moderate-Resolution Imaging Spectroradiometer (MODIS) instrument with Special Sensor Microwave/Imager (SSM/I)-derived melt at the time of the maximum melt extent in 2002. To validate the MODIS-derived land-surface temperatures (LSTs), we compared the MODIS LSTs with air temperatures from nine stations (using 11 different data points) and found that they agreed to within 2.3 ± 2.09 °C, with station temperatures consistently lower than the MODIS LSTs. According to the MODIS LST, the maximum surface melt extended to approximately 2300 m in southern Greenland, while the SSM/I measurements showed that the maximum melt extended to nearly 2700 m in southeastern Greenland. The MODIS and SSM/I data are complementary in providing detailed information about the progression of surface and near-surface melt on the Greenland ice sheet.

  1. The effect of high leverage points on the logistic ridge regression estimator having multicollinearity

    NASA Astrophysics Data System (ADS)

    Ariffin, Syaiba Balqish; Midi, Habshah

    2014-06-01

    This article is concerned with the performance of the logistic ridge regression estimation technique in the presence of multicollinearity and high leverage points. In logistic regression, multicollinearity exists among predictors and in the information matrix. The maximum likelihood estimator suffers a huge setback in the presence of multicollinearity, which causes regression estimates to have unduly large standard errors. To remedy this problem, a logistic ridge regression estimator is put forward. It is evident that the logistic ridge regression estimator outperforms the maximum likelihood approach in handling multicollinearity. The effect of high leverage points on the performance of the logistic ridge regression estimator is then investigated through a real data set and a simulation study. The findings signify that the logistic ridge regression estimator fails to provide better parameter estimates in the presence of both high leverage points and multicollinearity.
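    The contrast the abstract draws between maximum likelihood and ridge-penalized estimation can be sketched numerically. The following is an illustrative toy implementation, not the authors' code: plain gradient descent on the ridge-penalized negative log-likelihood, where `lam = 0` recovers ordinary maximum likelihood and a positive `lam` shrinks the coefficients that multicollinearity would otherwise inflate. The sample data are invented.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def ridge_logistic(X, y, lam=0.0, lr=0.1, iters=2000):
    """Gradient descent on the ridge-penalized negative log-likelihood.
    lam = 0 recovers ordinary maximum likelihood estimation."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(iters):
        grad = [lam * b for b in beta]      # gradient of the ridge penalty
        for xi, yi in zip(X, y):
            err = sigmoid(sum(b * x for b, x in zip(beta, xi))) - yi
            for j in range(p):
                grad[j] += err * xi[j]      # gradient of the log-likelihood
        beta = [b - lr * g / n for b, g in zip(beta, grad)]
    return beta

# Two nearly collinear predictors (second column tracks the first)
# and a separable toy response -- the setting where ML inflates estimates.
X = [[1.0, 1.1], [2.0, 2.1], [3.0, 2.9], [4.0, 4.0],
     [-1.0, -1.1], [-2.0, -2.0]]
y = [1, 1, 1, 1, 0, 0]
beta_ml = ridge_logistic(X, y, lam=0.0)     # coefficients keep growing
beta_ridge = ridge_logistic(X, y, lam=5.0)  # penalty keeps them bounded
```

The penalized fit has a strictly smaller coefficient norm on collinear data, which is the stabilizing effect the abstract attributes to the ridge estimator.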

  2. First passage Brownian functional properties of snowmelt dynamics

    NASA Astrophysics Data System (ADS)

    Dubey, Ashutosh; Bandyopadhyay, Malay

    2018-04-01

    In this paper, we model snowmelt dynamics in terms of a Brownian motion (BM) with purely time-dependent drift and diffusion and examine its first-passage properties by proposing and examining several Brownian functionals which characterize the lifetime and reactivity of such stochastic processes. We introduce several probability distribution functions (PDFs) associated with such time-dependent BMs. For instance, for a BM with initial starting point x0, we derive analytical expressions for: (i) the PDF P(tf|x0) of the first passage time tf, which specifies the lifetime of such a stochastic process; (ii) the PDF P(A|x0) of the area A swept out until the first passage time, which provides valuable information about the total fresh water availability during melting; (iii) the PDF P(M) associated with the maximum size M of the BM process before the first passage time; and (iv) the joint PDF P(M; tm) of the maximum size M and its occurrence time tm before the first passage time. P(M) and P(M; tm) are useful in determining the time of maximum fresh water availability and in calculating the total maximum amount of available fresh water. These PDFs are examined for power-law time-dependent drift and diffusion, which match quite well with the available data on snowmelt dynamics.
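    The functionals listed above (first-passage time tf, area A, maximum M) are straightforward to estimate for a single path by direct simulation. A minimal Monte Carlo sketch, assuming for simplicity a constant negative drift rather than the paper's power-law time-dependent one (all parameter values here are illustrative):

```python
import random

def first_passage_functionals(x0=1.0, mu=-0.5, sigma=0.4, dt=0.01,
                              t_max=100.0, rng=random):
    """Simulate dX = mu*dt + sigma*dW from X(0) = x0 until X first hits 0.
    Returns (t_f, A, M): the first-passage time, the area under the path
    up to t_f, and the running maximum before t_f."""
    x, t, area, peak = x0, 0.0, 0.0, x0
    while x > 0 and t < t_max:
        area += x * dt                              # accumulate A = integral of X dt
        x += mu * dt + sigma * dt ** 0.5 * rng.gauss(0.0, 1.0)
        t += dt
        peak = max(peak, x)                         # track the running maximum M
    return t, area, peak

random.seed(42)
samples = [first_passage_functionals() for _ in range(500)]
mean_tf = sum(s[0] for s in samples) / len(samples)  # Monte Carlo estimate of <t_f>
```

A time-dependent drift such as the paper's power law can be dropped in by replacing `mu` with a function of `t` inside the loop; histograms of the three returned quantities then approximate P(tf|x0), P(A|x0) and P(M).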

  3. Summary of resources available to small water systems for meeting the 10 ppb arsenic drinking water limit.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krumhansl, James Lee; Thomson, Bruce M.; Ziegler, Matt

    2007-01-01

    With the lowering of the EPA maximum contaminant level of arsenic from 50 parts per billion (ppb) to 10 ppb, many public water systems in the country and in New Mexico in particular, are faced with making decisions about how to bring their system into compliance. This document provides detail on the options available to the water systems and the steps they need to take to achieve compliance with this regulation. Additionally, this document provides extensive resources and reference information for additional outreach support, financing options, vendors for treatment systems, and media pilot project results.

  4. Using simulation to interpret experimental data in terms of protein conformational ensembles.

    PubMed

    Allison, Jane R

    2017-04-01

    In their biological environment, proteins are dynamic molecules, necessitating an ensemble structural description. Molecular dynamics simulations and solution-state experiments provide complementary information in the form of atomically detailed coordinates and averaged values or distributions of structural properties or related quantities. Recently, increases in the temporal and spatial scale of conformational sampling, and comparison of the more diverse conformational ensembles thus generated, have revealed the importance of sampling rare events. Excitingly, new methods based on maximum entropy and Bayesian inference promise to provide a statistically sound mechanism for combining experimental data with molecular dynamics simulations. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Detection of main tidal frequencies using least squares harmonic estimation method

    NASA Astrophysics Data System (ADS)

    Mousavian, R.; Hossainali, M. Mashhadi

    2012-11-01

    In this paper, the efficiency of the method of Least Squares Harmonic Estimation (LS-HE) for detecting the main tidal frequencies is investigated. Using this method, the tidal spectrum of sea level data is evaluated at two tidal stations: Bandar Abbas in the south of Iran and Workington on the eastern coast of the UK. The amplitudes of the tidal constituents at these two stations are not the same. Moreover, in contrast to the Workington station, the Bandar Abbas tidal record is not an equispaced time series. Therefore, the analysis of the hourly tidal observations at Bandar Abbas and Workington provides reasonable insight into the efficiency of this method for analyzing the frequency content of tidal time series. Furthermore, applying the method of Fourier transform to the Workington tidal record provides an independent source of information for evaluating the tidal spectrum produced by the LS-HE method. According to the obtained results, the spectra of these two tidal records contain the components with the maximum amplitudes among those expected in this time span, as well as some new frequencies from the list of known constituents. In addition, in terms of the frequencies with maximum amplitude, the power spectra derived from the two aforementioned methods are the same. These results demonstrate the ability of LS-HE to identify the frequencies with maximum amplitude in both tidal records.
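    The core of LS-HE, and the reason it can handle the non-equispaced Bandar Abbas record where an ordinary FFT cannot, is a least-squares fit of sine/cosine pairs at candidate frequencies to arbitrarily spaced samples. A simplified sketch of that fit (the two-constituent synthetic signal and frequency values are illustrative only, not the stations' data):

```python
import math

def lshe_amplitudes(t, y, freqs):
    """Least-squares fit of y(t) ~ sum_k a_k cos(2*pi*f_k*t) + b_k sin(2*pi*f_k*t).
    Works for unevenly spaced t, unlike an FFT. Returns one amplitude per frequency."""
    cols = []
    for f in freqs:
        cols.append([math.cos(2 * math.pi * f * ti) for ti in t])
        cols.append([math.sin(2 * math.pi * f * ti) for ti in t])
    m, n = len(cols), len(t)
    # Normal equations A^T A x = A^T y
    ata = [[sum(cols[i][k] * cols[j][k] for k in range(n)) for j in range(m)]
           for i in range(m)]
    aty = [sum(cols[i][k] * y[k] for k in range(n)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for i in range(m):
        piv = max(range(i, m), key=lambda r: abs(ata[r][i]))
        ata[i], ata[piv] = ata[piv], ata[i]
        aty[i], aty[piv] = aty[piv], aty[i]
        for r in range(i + 1, m):
            fac = ata[r][i] / ata[i][i]
            for c in range(i, m):
                ata[r][c] -= fac * ata[i][c]
            aty[r] -= fac * aty[i]
    x = [0.0] * m
    for i in range(m - 1, -1, -1):
        x[i] = (aty[i] - sum(ata[i][c] * x[c] for c in range(i + 1, m))) / ata[i][i]
    return [math.hypot(x[2 * k], x[2 * k + 1]) for k in range(len(freqs))]

# Unevenly spaced samples over roughly 26 days (times in days)
t = [0.13 * i + 0.05 * math.sin(1.7 * i) for i in range(200)]
# Two synthetic semidiurnal constituents with amplitudes 1.2 and 0.5
y = [1.2 * math.cos(2 * math.pi * 1.9323 * ti)
     + 0.5 * math.sin(2 * math.pi * 2.0 * ti) for ti in t]
amps = lshe_amplitudes(t, y, [1.9323, 2.0])
```

With a noise-free signal and the true frequencies in the candidate set, the fit recovers the two amplitudes essentially exactly; scanning a grid of candidate frequencies and plotting the fitted amplitude gives the LS-HE spectrum.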

  6. [Ethical considerations in genomic cohort study].

    PubMed

    Choi, Eun Kyung; Kim, Ock-Joo

    2007-03-01

    During the last decade, genomic cohort studies have been developed in many countries by linking health data and genetic data in stored samples. Genomic cohort studies are expected to find key genetic components that contribute to common diseases, thereby promising great advances in genome medicine. While many countries endeavor to build biobank systems, biobank-based genome research has raised important ethical concerns including genetic privacy, confidentiality, discrimination, and informed consent. Informed consent for biobanks poses an important question: whether true informed consent is possible in population-based genomic cohort research, where the nature of future studies is unforeseeable when consent is obtained. Due to the sensitive character of genetic information, protecting privacy and keeping confidentiality become important topics. To minimize ethical problems and achieve its scientific goals to the maximum degree, each country strives to build its population-based genomic cohort research project by organizing public consultation, seeking public and expert consensus in research, and providing safeguards to protect privacy and confidentiality.

  7. e-Labs and Work Objects: Towards Digital Health Economies

    NASA Astrophysics Data System (ADS)

    Ainsworth, John D.; Buchan, Iain E.

    The optimal provision of healthcare and public health services requires the synthesis of evidence from multiple disciplines. It is necessary to understand the genetic, environmental, behavioural and social determinants of disease and health-related states; to balance the effectiveness of interventions with their costs; to ensure the maximum safety and acceptability of interventions; and to provide fair access to care services for given populations. Ever-expanding databases of knowledge and local health information, and the ability to employ computationally expensive methods, promise much for decisions that are both supported by best evidence and locally relevant. This promise will, however, not be realised without providing health professionals with the tools to make sense of this information-rich environment and to collaborate across disciplines. We propose, as a solution to this problem, the e-Lab and Work Objects model as a sense-making platform for digital health economies, bringing together data, methods and people for timely health intelligence.

  8. Inverse statistics and information content

    NASA Astrophysics Data System (ADS)

    Ebadi, H.; Bolgorian, Meysam; Jafari, G. R.

    2010-12-01

    Inverse statistics analysis studies the distribution of investment horizons needed to achieve a predefined level of return. This distribution provides a maximum investment horizon which determines the most likely horizon for gaining a specific return. There exists a significant difference between the inverse statistics of financial market data and those of a fractional Brownian motion (fBm) as an uncorrelated time series, which is a suitable criterion for measuring the information content in financial data. In this paper we perform this analysis for the DJIA and S&P500 as two developed markets and the Tehran price index (TEPIX) as an emerging market. We also compare these probability distributions with the fBm probability, to detect whether the behavior of the stocks is the same as fBm.
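    Inverse statistics inverts the usual question: instead of measuring the return after a fixed horizon, it measures the first horizon that achieves a fixed return. A toy sketch of that procedure on a synthetic price series (the drift, volatility and return-level values are illustrative, not taken from the paper):

```python
import math
import random

def investment_horizons(prices, rho=0.05, max_tau=250):
    """For each start day i, find the smallest waiting time tau with
    log(p[i+tau] / p[i]) >= rho.  The mode of the resulting histogram
    is the 'optimal investment horizon' of inverse statistics."""
    logp = [math.log(p) for p in prices]
    horizons = []
    for i in range(len(logp) - 1):
        for tau in range(1, min(max_tau, len(logp) - i)):
            if logp[i + tau] - logp[i] >= rho:
                horizons.append(tau)       # first horizon reaching return rho
                break
    return horizons

# Synthetic geometric-Brownian price path with a small upward drift
random.seed(7)
prices = [100.0]
for _ in range(1500):
    prices.append(prices[-1] * math.exp(0.002 + 0.01 * random.gauss(0.0, 1.0)))
taus = investment_horizons(prices)
```

Comparing the histogram of `taus` for market data against the same statistic computed on an fBm surrogate is the comparison the abstract describes.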

  9. Satellite information on Orlando, Florida. [coordination of LANDSAT and Skylab data and EREP photography

    NASA Technical Reports Server (NTRS)

    Hannah, J. W.; Thomas, G. L.; Esparza, F.

    1975-01-01

    A land use map of Orange County, Florida was prepared from EREP photography, while LANDSAT and EREP multispectral scanner data were used to provide more detailed information on Orlando and its suburbs. The generalized maps were prepared by tracing the patterns on an overlay, using an enlarging viewer. Digital analysis of the multispectral scanner data was basically the maximum likelihood classification method, with training sample input and computer printer mapping of the results. Urban features delineated by the maps are discussed. It is concluded that computer classification, accompanied by human interpretation and manual simplification, can produce land use maps which are useful on a regional, county, and city basis.
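    The maximum likelihood classification step mentioned above assigns each pixel to the class whose training statistics make it most probable. A minimal sketch using an independent (diagonal-covariance) Gaussian model per class; the class names and reflectance values below are invented for illustration:

```python
import math

def train_ml(samples):
    """Per-class mean and diagonal variance from training samples:
    {cls: [feature_vectors]} -> {cls: (means, variances)}."""
    model = {}
    for cls, vecs in samples.items():
        d = len(vecs[0])
        mu = [sum(v[j] for v in vecs) / len(vecs) for j in range(d)]
        var = [max(sum((v[j] - mu[j]) ** 2 for v in vecs) / len(vecs), 1e-6)
               for j in range(d)]          # floor avoids zero variance
        model[cls] = (mu, var)
    return model

def classify_ml(model, x):
    """Pick the class maximizing the Gaussian log-likelihood of pixel x."""
    best, best_ll = None, float("-inf")
    for cls, (mu, var) in model.items():
        ll = -sum(math.log(v) + (xi - m) ** 2 / v
                  for xi, m, v in zip(x, mu, var)) / 2.0
        if ll > best_ll:
            best, best_ll = cls, ll
    return best

# Hypothetical two-band training pixels for two cover classes
training = {
    "urban": [[0.80, 0.20], [0.90, 0.25], [0.85, 0.30]],
    "water": [[0.10, 0.05], [0.15, 0.10], [0.12, 0.08]],
}
model = train_ml(training)
label = classify_ml(model, [0.82, 0.22])   # falls near the urban statistics
```

Full implementations use a full covariance matrix per class, but the decision rule, maximize the class-conditional likelihood, is the same.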

  10. BUILD: A community development simulation game, appendix A

    NASA Technical Reports Server (NTRS)

    Orlando, J. A.; Pennington, A. J.

    1973-01-01

    The computer-based urban decision-making game BUILD is described. BUILD is aimed at: (1) allowing maximum expression of value positions by participants through resolution of intense, task-oriented conflicts; (2) heuristically gathering information on both the technical and social functioning of the city through feedback from participants; (3) providing community participants with access to technical expertise in urban decision making, and exposing professionals to the value positions of the community; and (4) laying the groundwork for eventual development of an actual policy-making tool. A brief description of the roles, sample input/output formats, an initial scenario, and information on accessing the game through a time-sharing system are included.

  11. Columbia SMA Project: A Randomized, Control Trial of the Effects of Exercise on Motor Function and Strength in Patients with Spinal Muscular Atrophy (SMA)

    DTIC Science & Technology

    2012-06-01

    Reference values of maximum isometric muscle force obtained in 270 children aged 4-16 years by hand-held dynamometry. Neuromuscul Disord. 2001;11(5...evaluation of specific muscle groups responsible for fatigue-related changes. Since fiber type proportion is determined by its innervation, evaluating muscle ... fiber output provides down-stream information about the integrity of the motor neuron. Objective To determine the association between muscle

  12. Missile Manufacturing Technology Conference Held at Hilton Head Island, South Carolina on 22-26 September 1975. Panel Presentations. Test Equipment

    DTIC Science & Technology

    1975-01-01

    in the computer in 16-bit parallel computer DIO transfers at the maximum computer I/O speed. It then transmits this data in a bit-serial echo...maximum DIO rate under computer interrupt control. The LCI also provides station interrupt information for transfer to the computer under computer...been in daily operation since 1973. The SAM-D Missile system is currently in the Engineering Development phase which precedes the Production and

  13. Lumped Parameter experiments for Single Mode Fiber Laser Cutting of Thin Stainless Steel Plate

    NASA Astrophysics Data System (ADS)

    Lai, Shengying; Jia, Ye; Han, Bing; Wang, Jun; Liu, Zongkai; Ni, Xiaowu; Shen, Zhonghua; Lu, Jian

    2017-06-01

    The present work reports the effects of laser cutting parameters for stainless steel, including workpiece thickness, cutting speed, defocus length and assist gas pressure. The cutting kerf width, dross attachment and cut-edge squareness deviation are examined to provide information on cutting quality. The results show that with increasing thickness, the cutting speed decreases at a rate of about 27%. The optimal ranges of cutting speed, defocus length and gas pressure for maximum cut quality are obtained.

  14. Composite Riflescope

    NASA Technical Reports Server (NTRS)

    1989-01-01

    Bushnell Division of Bausch & Lomb's Armor-Sight riflescope combines the company's world-renowned optics with a graphite composite (Graphlon VI) developed for space applications. The riflescope is 10 percent lighter than aluminum scopes, and, because its thermal expansion coefficient is near zero, optical distortion from heat and cold extremes is eliminated. It is fogproof and waterproof; advanced multicoated optics provide maximum light transmission to brighten target ranges. Bushnell was assisted by NIAC/USC in searching for technical information on graphic composites and in overcoming difficulties with bonding and porosity.

  15. Mapping from Space - Ontology Based Map Production Using Satellite Imageries

    NASA Astrophysics Data System (ADS)

    Asefpour Vakilian, A.; Momeni, M.

    2013-09-01

    Determination of the maximum ability for feature extraction from satellite imageries based on an ontology procedure using cartographic feature determination is the main objective of this research. Therefore, a special ontology has been developed to extract the maximum volume of information available in different high resolution satellite imageries and compare it to the map information layers required at each specific scale due to the unified specification for surveying and mapping. Ontology seeks to provide an explicit and comprehensive classification of entities in all spheres of being. This study proposes a new method for automatic maximum map feature extraction and reconstruction from high resolution satellite images. For example, in order to extract building blocks to produce 1 : 5000 scale and smaller maps, the road networks located around the building blocks should be determined. Thus, a new building index has been developed based on concepts obtained from the ontology. Building blocks have been extracted with completeness of about 83%. Then, road networks have been extracted and reconstructed to create a uniform network with less discontinuity. In this case, building blocks have been extracted with proper performance and the false positive value from the confusion matrix was reduced by about 7%. Results showed that vegetation cover and water features have been extracted completely (100%) and about 71% of limits have been extracted. Also, the proposed method had the ability to produce a map with the largest scale possible, equal to or smaller than 1 : 5000, from any multispectral high resolution satellite imagery.

  16. Mapping from Space - Ontology Based Map Production Using Satellite Imageries

    NASA Astrophysics Data System (ADS)

    Asefpour Vakilian, A.; Momeni, M.

    2013-09-01

    Determination of the maximum ability for feature extraction from satellite imageries based on an ontology procedure using cartographic feature determination is the main objective of this research. Therefore, a special ontology has been developed to extract the maximum volume of information available in different high resolution satellite imageries and compare it to the map information layers required at each specific scale due to the unified specification for surveying and mapping. Ontology seeks to provide an explicit and comprehensive classification of entities in all spheres of being. This study proposes a new method for automatic maximum map feature extraction and reconstruction from high resolution satellite images. For example, in order to extract building blocks to produce 1 : 5000 scale and smaller maps, the road networks located around the building blocks should be determined. Thus, a new building index has been developed based on concepts obtained from the ontology. Building blocks have been extracted with completeness of about 83%. Then, road networks have been extracted and reconstructed to create a uniform network with less discontinuity. In this case, building blocks have been extracted with proper performance and the false positive value from the confusion matrix was reduced by about 7%. Results showed that vegetation cover and water features have been extracted completely (100%) and about 71% of limits have been extracted. Also, the proposed method had the ability to produce a map with the largest scale possible, equal to or smaller than 1 : 5000, from any multispectral high resolution satellite imagery.

  17. 75 FR 3763 - Proposed Collection; Comment Request for Review of a Currently Approved Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-22

    ... information about the cost to elect less than the maximum survivor annuity. This letter may be used to decline... about the cost to elect the maximum survivor annuity. This letter may be used to ask for more... who do not have a former spouse who is entitled to a survivor annuity benefit. RI 20-63B is for those...

  18. Radiotherapy-induced Cherenkov luminescence imaging in a human body phantom.

    PubMed

    Ahmed, Syed Rakin; Jia, Jeremy Mengyu; Bruza, Petr; Vinogradov, Sergei; Jiang, Shudong; Gladstone, David J; Jarvis, Lesley A; Pogue, Brian W

    2018-03-01

    Radiation therapy produces Cherenkov optical emission in tissue, and this light can be utilized to activate molecular probes. The feasibility of sensing luminescence from a tissue molecular oxygen sensor from within a human body phantom was examined using the geometry of the axillary lymph node region. Detection of regions down to 30-mm deep was feasible with submillimeter spatial resolution with the total quantity of the phosphorescent sensor PtG4 near 1 nanomole. Radiation sheet scanning in an epi-illumination geometry provided optimal coverage, and maximum intensity projection images provided illustration of the concept. This work provides the preliminary information needed to attempt this type of imaging in vivo. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  19. ERTS-1 observes algal blooms in Lake Erie and Utah Lake

    NASA Technical Reports Server (NTRS)

    Strong, A. E.

    1973-01-01

    During late summer when the surface waters of Lake Erie reach their maximum temperature an algal bloom is likely to develop. Such phenomena have been noticed on other shallow lakes using ERTS-1 and characterize eutrophic conditions. The concentration of the algae into long streamers provides additional information on surface circulations. To augment the ERTS-1 MSS data of Lake Erie an aircraft was flown to provide correlative thermal-IR and additional multiband photographs. The algal bloom is highly absorptive in the visible wavelengths but reverses contrast with the surrounding water in the near-IR bands. The absorption of shortwave energy heats the dark brown algal mass, providing a hot surface target for the thermal-IR scanner.

  20. Binary versus non-binary information in real time series: empirical results and maximum-entropy matrix models

    NASA Astrophysics Data System (ADS)

    Almog, Assaf; Garlaschelli, Diego

    2014-09-01

    The dynamics of complex systems, from financial markets to the brain, can be monitored in terms of multiple time series of activity of the constituent units, such as stocks or neurons, respectively. While the main focus of time series analysis is on the magnitude of temporal increments, a significant piece of information is encoded into the binary projection (i.e. the sign) of such increments. In this paper we provide further evidence of this by showing strong nonlinear relations between binary and non-binary properties of financial time series. These relations are a novel quantification of the fact that extreme price increments occur more often when most stocks move in the same direction. We then introduce an information-theoretic approach to the analysis of the binary signature of single and multiple time series. Through the definition of maximum-entropy ensembles of binary matrices and their mapping to spin models in statistical physics, we quantify the information encoded into the simplest binary properties of real time series and identify the most informative property given a set of measurements. Our formalism is able to accurately replicate, and mathematically characterize, the observed binary/non-binary relations. We also obtain a phase diagram allowing us to identify, based only on the instantaneous aggregate return of a set of multiple time series, a regime where the so-called ‘market mode’ has an optimal interpretation in terms of collective (endogenous) effects, a regime where it is parsimoniously explained by pure noise, and a regime where it can be regarded as a combination of endogenous and exogenous factors. Our approach allows us to connect spin models, simple stochastic processes, and ensembles of time series inferred from partial information.
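    The "binary projection" the abstract refers to is simply the matrix of increment signs. A toy illustration of the simplest binary property used in such analyses, the per-time-step fraction of series moving up (the instantaneous aggregate behind the "market mode" discussion); the sample increments are made up:

```python
def aggregate_up_fraction(increments):
    """increments[s][t] is the increment of series s at time t.
    Returns, for each time step, the fraction of series with a positive
    increment -- the simplest binary property of a multivariate time series."""
    n_series = len(increments)
    n_times = len(increments[0])
    return [sum(1 for s in increments if s[t] > 0) / n_series
            for t in range(n_times)]

# Three series, three time steps
signs = aggregate_up_fraction([[0.5, -1.2, 2.0],
                               [1.1, -0.3, -0.4],
                               [0.2, 0.7, -1.5]])
# signs[0] == 1.0: all three series moved up at t = 0
```

The nonlinear relation the paper reports is between this aggregate sign statistic and the magnitude of the increments: extreme moves cluster at time steps where the fraction is near 0 or 1, i.e. where most series move together.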

  1. Undergraduate medical textbooks do not provide adequate information on intravenous fluid therapy: a systematic survey and suggestions for improvement.

    PubMed

    Powell, Arfon G M T; Paterson-Brown, Simon; Drummond, Gordon B

    2014-02-20

    Inappropriate prescribing of intravenous (IV) fluid, particularly 0.9% sodium chloride, causes post-operative complications. Fluid prescription is often left to junior medical staff and is frequently poorly managed. One reason for poor intravenous fluid prescribing practices could be inadequate coverage of this topic in the textbooks that are used. We formulated a comprehensive set of topics related to important common clinical situations involving IV fluid therapy (routine fluid replacement, fluid loss, fluid overload), to assess the adequacy of textbooks in common use. We assessed 29 medical textbooks widely available to students in the UK, scoring the presence of information provided by each book on each of the topics. The scores indicated how fully the topics were considered: not at all, partly, and adequately. No attempt was made to judge the quality of the information, because there is no consensus on these topics. The maximum score that a book could achieve was 52. Three of the topics we chose were not considered by any of the books. Discounting these topics as "too esoteric", the maximum possible score became 46. One textbook gained a score of 45, but the general score was poor (median 11, quartiles 4, 21). In particular, coverage of routine postoperative management was inadequate. Textbooks for undergraduates cover the topic of intravenous therapy badly, which may partly explain the poor knowledge and performance of junior doctors in this important field. Systematic revision of current textbooks might improve knowledge and practice by junior doctors. Careful definition of the remit and content of textbooks should be applied more widely to ensure quality and "fitness for purpose", and avoid omission of vital knowledge.

  2. Undergraduate medical textbooks do not provide adequate information on intravenous fluid therapy: a systematic survey and suggestions for improvement

    PubMed Central

    2014-01-01

    Background Inappropriate prescribing of intravenous (IV) fluid, particularly 0.9% sodium chloride, causes post-operative complications. Fluid prescription is often left to junior medical staff and is frequently poorly managed. One reason for poor intravenous fluid prescribing practices could be inadequate coverage of this topic in the textbooks that are used. Methods We formulated a comprehensive set of topics related to important common clinical situations involving IV fluid therapy (routine fluid replacement, fluid loss, fluid overload), to assess the adequacy of textbooks in common use. We assessed 29 medical textbooks widely available to students in the UK, scoring the presence of information provided by each book on each of the topics. The scores indicated how fully the topics were considered: not at all, partly, and adequately. No attempt was made to judge the quality of the information, because there is no consensus on these topics. Results The maximum score that a book could achieve was 52. Three of the topics we chose were not considered by any of the books. Discounting these topics as “too esoteric”, the maximum possible score became 46. One textbook gained a score of 45, but the general score was poor (median 11, quartiles 4, 21). In particular, coverage of routine postoperative management was inadequate. Conclusions Textbooks for undergraduates cover the topic of intravenous therapy badly, which may partly explain the poor knowledge and performance of junior doctors in this important field. Systematic revision of current textbooks might improve knowledge and practice by junior doctors. Careful definition of the remit and content of textbooks should be applied more widely to ensure quality and “fitness for purpose”, and avoid omission of vital knowledge. PMID:24555812

  3. Control system for maximum use of adhesive forces of a railway vehicle in a tractive mode

    NASA Astrophysics Data System (ADS)

    Spiryagin, Maksym; Lee, Kwan Soo; Yoo, Hong Hee

    2008-04-01

    The realization of maximum adhesive forces for a railway vehicle is a very difficult process, because it involves using tractive efforts and depends on the friction characteristics in the contact zone between wheels and rails. Tractive efforts are realized by means of the tractive torques of motors, and their maximum values can produce negative effects such as slip and skid. These situations usually happen when information about friction conditions is lacking. These negative processes have a major influence on the wear of the contact bodies and tractive units. Therefore, many existing control systems for vehicles rely on a prediction of the friction coefficient between wheels and rails, because measuring the friction coefficient during the movement of a running vehicle is very difficult. One way to solve this task is to use noise spectrum analysis for friction coefficient detection. This noise phenomenon has not been clearly studied and analyzed. In this paper, we propose an adhesion control system for railway vehicles based on an observer, which allows one to determine the maximum tractive torque based on the optimal adhesive force between the wheels (wheel pair) of a railway vehicle and the rails (rail track), depending on the weight load from wheel to rail, the friction conditions in the contact zone, the lateral displacement of the wheel set, and wheel slip. As a result, it allows a railway vehicle to be driven in a tractive mode at the maximum adhesion force for real friction conditions.

  4. On the Importance of Cycle Minimum in Sunspot Cycle Prediction

    NASA Technical Reports Server (NTRS)

    Wilson, Robert M.; Hathaway, David H.; Reichmann, Edwin J.

    1996-01-01

    The characteristics of the minima between sunspot cycles are found to provide important information for predicting the amplitude and timing of the following cycle. For example, the time of the occurrence of sunspot minimum sets the length of the previous cycle, which is correlated by the amplitude-period effect to the amplitude of the next cycle, with cycles of shorter (longer) than average length usually being followed by cycles of larger (smaller) than average size (true for 16 of 21 sunspot cycles). Likewise, the size of the minimum at cycle onset is correlated with the size of the cycle's maximum amplitude, with cycles of larger (smaller) than average size minima usually being associated with larger (smaller) than average size maxima (true for 16 of 22 sunspot cycles). Also, it was found that the size of the previous cycle's minimum and maximum relates to the size of the following cycle's minimum and maximum with an even-odd cycle number dependency. The latter effect suggests that cycle 23 will have a minimum and maximum amplitude probably larger than average in size (in particular, minimum smoothed sunspot number Rm = 12.3 +/- 7.5 and maximum smoothed sunspot number RM = 198.8 +/- 36.5, at the 95-percent level of confidence), further suggesting (by the Waldmeier effect) that it will have a faster than average rise to maximum (fast-rising cycles have ascent durations of about 41 +/- 7 months). Thus, if, as expected, onset for cycle 23 will be December 1996 +/- 3 months, based on smoothed sunspot number, then the length of cycle 22 will be about 123 +/- 3 months, inferring that it is a short-period cycle and that cycle 23 maximum amplitude probably will be larger than average in size (from the amplitude-period effect), having an RM of about 133 +/- 39 (based on the usual +/- 30 percent spread that has been seen between observed and predicted values), with maximum amplitude occurrence likely sometime between July 1999 and October 2000.

  5. Maximum a posteriori resampling of noisy, spatially correlated data

    NASA Astrophysics Data System (ADS)

    Goff, John A.; Jenkins, Chris; Calder, Brian

    2006-08-01

    In any geologic application, noisy data are sources of consternation for researchers, inhibiting interpretability and marring images with unsightly and unrealistic artifacts. Filtering is the typical solution to dealing with noisy data. However, filtering commonly suffers from ad hoc (i.e., uncalibrated, ungoverned) application. We present here an alternative to filtering: a newly developed method for correcting noise in data by finding the "best" value given available information. The motivating rationale is that data points that are close to each other in space cannot differ by "too much," where "too much" is governed by the field covariance. Data with large uncertainties will frequently violate this condition and therefore ought to be corrected, or "resampled." Our solution for resampling is determined by the maximum of the a posteriori density function defined by the intersection of (1) the data error probability density function (pdf) and (2) the conditional pdf, determined by the geostatistical kriging algorithm applied to proximal data values. A maximum a posteriori solution can be computed sequentially going through all the data, but the solution depends on the order in which the data are examined. We approximate the global a posteriori solution by randomizing this order and taking the average. A test with a synthetic data set sampled from a known field demonstrates quantitatively and qualitatively the improvement provided by the maximum a posteriori resampling algorithm. 
The method is also applied to three marine geology/geophysics data examples, demonstrating the viability of the method for diverse applications: (1) three generations of bathymetric data on the New Jersey shelf with disparate data uncertainties; (2) mean grain size data from the Adriatic Sea, which is a combination of both analytic (low uncertainty) and word-based (higher uncertainty) sources; and (3) side-scan backscatter data from the Martha's Vineyard Coastal Observatory which are, as is typical for such data, affected by speckle noise. Compared to filtering, maximum a posteriori resampling provides an objective and optimal method for reducing noise, and better preservation of the statistical properties of the sampled field. The primary disadvantage is that maximum a posteriori resampling is a computationally expensive procedure.
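    Under Gaussian assumptions the resampling step has a closed form: the a posteriori density is the product of the data error pdf N(d, s_d^2) and the kriging conditional pdf N(k, s_k^2), so its maximum is the precision-weighted mean of the two estimates. The sketch below illustrates just this one-point update (the kriging estimate and variance are supplied directly; the paper's sequential, randomized-order averaging is not reproduced):

```python
import numpy as np

def map_resample(values, errors, kriged, kriged_var):
    """MAP of the product of two Gaussians: data error pdf N(d, s_d^2)
    times kriging conditional N(k, s_k^2). The maximum of the product
    is the precision-weighted mean of the two estimates."""
    w_d = 1.0 / errors**2       # precision of the observation
    w_k = 1.0 / kriged_var      # precision of the kriging prediction
    return (w_d * values + w_k * kriged) / (w_d + w_k)

# A datum with large uncertainty is pulled toward the value implied
# by its neighbors:
d = np.array([10.0])    # observed value
s = np.array([4.0])     # large data uncertainty
k = np.array([2.0])     # kriging estimate from proximal data
v = np.array([1.0])     # kriging variance
print(map_resample(d, s, k, v))   # ~2.47: dominated by the neighbors
```

Data whose error pdf is wide relative to the kriging variance are corrected strongly, which is exactly the "cannot differ by too much" rationale described above.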

  6. Quantitation of circulating tumor cells in blood samples from ovarian and prostate cancer patients using tumor-specific fluorescent ligands.

    PubMed

    He, Wei; Kularatne, Sumith A; Kalli, Kimberly R; Prendergast, Franklyn G; Amato, Robert J; Klee, George G; Hartmann, Lynn C; Low, Philip S

    2008-10-15

    Quantitation of circulating tumor cells (CTCs) can provide information on the stage of a malignancy, onset of disease progression and response to therapy. In an effort to more accurately quantitate CTCs, we have synthesized fluorescent conjugates of 2 high-affinity tumor-specific ligands (folate-AlexaFluor 488 and DUPA-FITC) that bind tumor cells >20-fold more efficiently than fluorescent antibodies. Here we determine whether these tumor-specific dyes can be exploited for quantitation of CTCs in peripheral blood samples from cancer patients. A CTC-enriched fraction was isolated from the peripheral blood of ovarian and prostate cancer patients by an optimized density gradient centrifugation protocol and labeled with the aforementioned fluorescent ligands. CTCs were then quantitated by flow cytometry. CTCs were detected in 18 of 20 ovarian cancer patients (mean 222 CTCs/ml; median 15 CTCs/ml; maximum 3,118 CTCs/ml), whereas CTC numbers in 16 gender-matched normal volunteers were negligible (mean 0.4 CTCs/ml; median 0.3 CTCs/ml; maximum 1.5 CTCs/ml; p < 0.001, chi(2)). CTCs were also detected in 10 of 13 prostate cancer patients (mean 26 CTCs/ml, median 14 CTCs/ml, maximum 94 CTCs/ml) but not in 18 gender-matched healthy donors (mean 0.8 CTCs/ml, median 1, maximum 3 CTC/ml; p < 0.0026, chi(2)). Tumor-specific fluorescent antibodies were much less efficient in quantitating CTCs because of their lower CTC labeling efficiency. Use of tumor-specific fluorescent ligands to label CTCs in peripheral blood can provide a simple, accurate and sensitive method for determining the number of cancer cells circulating in the bloodstream.

  7. Optimizing the design of small-sized nucleus breeding programs for dairy cattle with minimal performance recording.

    PubMed

    Kariuki, C M; Komen, H; Kahi, A K; van Arendonk, J A M

    2014-12-01

    Dairy cattle breeding programs in developing countries are constrained by minimal and erratic pedigree and performance recording on cows on commercial farms. Small-sized nucleus breeding programs offer a viable alternative. Deterministic simulations using selection index theory were performed to determine the optimum design for small-sized nucleus schemes for dairy cattle. The nucleus was made up of 197 bulls and 243 cows distributed in 8 non-overlapping age classes. Each year 10 sires and 100 dams were selected to produce the next generation of male and female selection candidates. Conception rates and sex ratio were fixed at 0.90 and 0.50, respectively, translating to 45 male and 45 female candidates joining the nucleus per year. Commercial recorded dams provided information for genetic evaluation of selection candidates (bulls) in the nucleus. Five strategies were defined: nucleus records only [within-nucleus dam performance (DP)], progeny records in addition to nucleus records [progeny testing (PT)], genomic information only [genomic selection (GS)], dam performance records in addition to genomic information (GS+DP), and progeny records in addition to genomic information (GS+PT). Alternative PT, GS, GS+DP, and GS+PT schemes differed in the number of progeny per sire and size of reference population. The maximum number of progeny records per sire was 30, and the maximum size of the reference population was 5,000. Results show that GS schemes had higher responses and lower accuracies compared with other strategies, with the higher response being due to shorter generation intervals. Compared with similar sized progeny-testing schemes, genomic-selection schemes would have lower accuracies but these are offset by higher responses per year, which might provide additional incentive for farmers to participate in recording. Copyright © 2014 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
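    The trade-off described here, lower accuracy offset by a shorter generation interval, follows directly from the standard annual-response equation ΔG = i · r · σa / L. The numbers below are illustrative assumptions, not values from the paper:

```python
def response_per_year(intensity, accuracy, sd_additive, gen_interval_years):
    """Annual genetic gain: delta_G = i * r * sigma_a / L."""
    return intensity * accuracy * sd_additive / gen_interval_years

# Illustrative values (not from the paper): progeny testing is more
# accurate but must wait for daughter records; genomic selection is less
# accurate but halves the generation interval.
pt = response_per_year(intensity=2.0, accuracy=0.80, sd_additive=1.0,
                       gen_interval_years=6.0)
gs = response_per_year(intensity=2.0, accuracy=0.60, sd_additive=1.0,
                       gen_interval_years=3.0)
print(pt, gs)   # GS gains more per year despite its lower accuracy
```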

  8. Performance Analysis of IEEE 802.11g TCM Waveforms Transmitted over a Channel with Pulse-Noise Interference

    DTIC Science & Technology

    2007-06-01

    Table 2 lists the best (maximum free distance) rate r = 2/3 punctured convolutional code information weight structure, where the free distance is the minimum Hamming distance between all pairs of non-zero paths (from [12]).

  9. Central system of psychosocial support to the Czech victims affected by the tsunami in Southeast Asia.

    PubMed

    Vymetal, Stepan

    2006-01-01

    The tsunami disaster affected several countries in Southeast Asia in December 2004 and killed or affected many tourists, most of them from Europe. Eight Czech citizens died, and about 500 Czechs were seriously mentally traumatized. The psychosocial needs of tourists included: (1) protection; (2) treatment; (3) safety; (4) relief; (5) psychological first aid; (6) connecting with family members; (7) transportation home; (8) information about possible mental reactions to trauma; (9) information about the normality of their reaction; (10) procedural and environmental orientation; (11) reinforcement of personal competencies; and (12) psycho-trauma therapy. The Ministry of Foreign Affairs of the Czech Republic was in charge of general emergency management. Psychosocial support was coordinated by the Ministry of Interior of the Czech Republic, which is connected to the Central Crisis Staff of the Czech Government. The major cooperative partners were: the Ministry of Foreign Affairs, the Ministry of Defence, the Ministry of Health, Czech Airlines, psychosocial intervention teams of the Czech Republic, and the Czech Association of Clinical Psychologists. The main goals of relief workers were: (1) to bring home the maximum number of Czech citizens; (2) to provide relevant information to the maximum number of affected Czech citizens; (3) to provide relevant information to rescue workers and professionals; and (4) to prepare a working regional network for psychosocial support. Major activities of the Ministry of Interior (psychology section) included: (1) establishing a psychological helpline; (2) running a team of psychological assistance (assistance in the Czech airports, psychological monitoring of tourists, crisis intervention, psychological first aid, assistance in the collection of DNA material from relatives); (3) drafting and distributing specific information materials (brochures, leaflets, address lists, printed and electronic instructions); (4) communicating via the media and advertising; and (5) providing analysis and research studies. Central coordination of psychosocial support proved successful in the first phase after the disaster. Plans must be developed for better cooperation in the psychosocial field in the Czech Republic, and better collaboration with journalists is needed to reduce secondary psycho-trauma. There is a need for intensive international cooperation in the psychosocial field and for building a network at the global level.

  10. Central System of Psychosocial Support to the Czech Victims Affected by the Tsunami in Southeast Asia.

    PubMed

    Vymetal, Stepan

    2006-02-01

    The tsunami disaster affected several countries in Southeast Asia in December 2004 and killed or affected many tourists, most of them from Europe. Eight Czech citizens died, and about 500 Czechs were seriously mentally traumatized. The psychosocial needs of tourists included: (1) protection; (2) treatment; (3) safety; (4) relief; (5) psychological first aid; (6) connecting with family members; (7) transportation home; (8) information about possible mental reactions to trauma; (9) information about the normality of their reaction; (10) procedural and environmental orientation; (11) reinforcement of personal competencies; and (12) psycho-trauma therapy. The Ministry of Foreign Affairs of the Czech Republic was in charge of general emergency management. Psychosocial support was coordinated by the Ministry of Interior of the Czech Republic, which is connected to the Central Crisis Staff of the Czech Government. The major cooperative partners were: the Ministry of Foreign Affairs, the Ministry of Defence, the Ministry of Health, Czech Airlines, psychosocial intervention teams of the Czech Republic, and the Czech Association of Clinical Psychologists. The main goals of relief workers were: (1) to bring home the maximum number of Czech citizens; (2) to provide relevant information to the maximum number of affected Czech citizens; (3) to provide relevant information to rescue workers and professionals; and (4) to prepare a working regional network for psychosocial support. Major activities of the Ministry of Interior (psychology section) included: (1) establishing a psychological helpline; (2) running a team of psychological assistance (assistance in the Czech airports, psychological monitoring of tourists, crisis intervention, psychological first aid, assistance in the collection of DNA material from relatives); (3) drafting and distributing specific information materials (brochures, leaflets, address lists, printed and electronic instructions); (4) communicating via the media and advertising; and (5) providing analysis and research studies. Central coordination of psychosocial support proved successful in the first phase after the disaster. Plans must be developed for better cooperation in the psychosocial field in the Czech Republic, and better collaboration with journalists is needed to reduce secondary psycho-trauma. There is a need for intensive international cooperation in the psychosocial field and for building a network at the global level.

  11. Deterministic physical systems under uncertain initial conditions: the case of maximum entropy applied to projectile motion

    NASA Astrophysics Data System (ADS)

    Montecinos, Alejandra; Davis, Sergio; Peralta, Joaquín

    2018-07-01

    The kinematics and dynamics of deterministic physical systems have been a foundation of our understanding of the world since Galileo and Newton. For real systems, however, uncertainty is largely present via external forces such as friction or lack of precise knowledge about the initial conditions of the system. In this work we focus on the latter case and describe the use of inference methodologies in solving the statistical properties of classical systems subject to uncertain initial conditions. In particular we describe the application of the formalism of maximum entropy (MaxEnt) inference to the problem of projectile motion, given information about the average horizontal range over many realizations. By using MaxEnt we can invert the problem and use the provided information on the average range to reduce the original uncertainty in the initial conditions. Also, additional insight into the initial condition's probabilities, and the projectile path distribution itself, can be achieved based on the value of the average horizontal range. The wide applicability of this procedure, as well as its ease of use, reveals a useful tool with which to revisit a large number of physics problems, from classrooms to frontier research.
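    A minimal numerical sketch of the MaxEnt inversion (an assumed setup, not the paper's exact formulation): take the launch angle as the uncertain initial condition with a uniform prior, constrain only the average horizontal range, and solve for the Lagrange multiplier by bisection. MaxEnt then gives p(theta) proportional to exp(-lam * R(theta)).

```python
import numpy as np

g, v = 9.81, 20.0                              # fixed speed (assumption)
theta = np.linspace(1e-3, np.pi / 2 - 1e-3, 2000)
R = v**2 * np.sin(2 * theta) / g               # range as a function of angle

def mean_range(lam):
    """Average range under the MaxEnt distribution exp(-lam * R)."""
    w = np.exp(-lam * R)
    return np.sum(w * R) / np.sum(w)

# Bisection for the multiplier reproducing a given average range, e.g. 25 m.
# mean_range is monotonically decreasing in lam (its derivative is -Var[R]).
target = 25.0
lo, hi = -5.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mean_range(mid) > target:
        lo = mid
    else:
        hi = mid
print(round(mean_range(0.5 * (lo + hi)), 3))   # 25.0: the constraint is met
```

With the multiplier fixed, the same weights give the updated probabilities of the initial angles and, through them, the distribution of projectile paths.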

  12. A new feature extraction method for signal classification applied to cord dorsum potentials detection

    PubMed Central

    Vidaurre, D.; Rodríguez, E. E.; Bielza, C.; Larrañaga, P.; Rudomin, P.

    2012-01-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods. PMID:22929924
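    The two-step recipe, denoising by convolution and then scoring each local maximum by its amplitude and its distance to the principal maximum, can be sketched as follows. The moving-average kernel and the particular scoring form are illustrative assumptions, not the authors' exact choices:

```python
import numpy as np

def cdp_features(signal, smooth_win=5):
    """Smooth by convolution, locate interior local maxima, and assign each
    a coefficient built from its amplitude and its distance to the most
    important maximum of the signal."""
    kernel = np.ones(smooth_win) / smooth_win
    s = np.convolve(signal, kernel, mode="same")   # denoise
    peaks = [i for i in range(1, len(s) - 1) if s[i - 1] < s[i] >= s[i + 1]]
    main = max(peaks, key=lambda i: s[i])          # principal maximum
    # amplitude discounted by distance to the principal maximum (assumed form)
    return {i: s[i] / (1 + abs(i - main)) for i in peaks}

feats = cdp_features(np.array([0.0, 1, 0, 3, 0, 1, 0]), smooth_win=1)
print(feats)   # the peak at index 3 keeps its full amplitude; others are discounted
```

Coefficients of this kind would then serve as the input features for the gradient boosting classification trees.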

  13. A new feature extraction method for signal classification applied to cord dorsum potential detection.

    PubMed

    Vidaurre, D; Rodríguez, E E; Bielza, C; Larrañaga, P; Rudomin, P

    2012-10-01

    In the spinal cord of the anesthetized cat, spontaneous cord dorsum potentials (CDPs) appear synchronously along the lumbo-sacral segments. These CDPs have different shapes and magnitudes. Previous work has indicated that some CDPs appear to be specially associated with the activation of spinal pathways that lead to primary afferent depolarization and presynaptic inhibition. Visual detection and classification of these CDPs provides relevant information on the functional organization of the neural networks involved in the control of sensory information and allows the characterization of the changes produced by acute nerve and spinal lesions. We now present a novel feature extraction approach for signal classification, applied to CDP detection. The method is based on an intuitive procedure. We first remove by convolution the noise from the CDPs recorded in each given spinal segment. Then, we assign a coefficient for each main local maximum of the signal using its amplitude and distance to the most important maximum of the signal. These coefficients will be the input for the subsequent classification algorithm. In particular, we employ gradient boosting classification trees. This combination of approaches allows a faster and more accurate discrimination of CDPs than is obtained by other methods.

  14. Probable flood predictions in ungauged coastal basins of El Salvador

    USGS Publications Warehouse

    Friedel, M.J.; Smith, M.E.; Chica, A.M.E.; Litke, D.

    2008-01-01

    A regionalization procedure is presented and used to predict probable flooding in four ungauged coastal river basins of El Salvador: Paz, Jiboa, Grande de San Miguel, and Goascoran. The flood-prediction problem is sequentially solved for two regions: upstream mountains and downstream alluvial plains. In the upstream mountains, a set of rainfall-runoff parameter values and recurrent peak-flow discharge hydrographs are simultaneously estimated for 20 tributary-basin models. Application of dissimilarity equations among tributary basins (soft prior information) permitted development of a parsimonious parameter structure subject to information content in the recurrent peak-flow discharge values derived using regression equations based on measurements recorded outside the ungauged study basins. The estimated joint set of parameter values formed the basis from which probable minimum and maximum peak-flow discharge limits were then estimated revealing that prediction uncertainty increases with basin size. In the downstream alluvial plain, model application of the estimated minimum and maximum peak-flow hydrographs facilitated simulation of probable 100-year flood-flow depths in confined canyons and across unconfined coastal alluvial plains. The regionalization procedure provides a tool for hydrologic risk assessment and flood protection planning that is not restricted to the case presented herein. © 2008 ASCE.
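    The regression step can be illustrated with a conventional log-log regional relation between peak flow and drainage area. The gauge data and the two-sigma bounds below are hypothetical, meant only to show how minimum and maximum peak-flow limits arise and why the spread grows with discharge; the study's actual equations are not reproduced:

```python
import numpy as np

# Hypothetical gauged basins: drainage area (km^2) and peak flow (m^3/s).
area = np.array([10.0, 25.0, 60.0, 120.0, 300.0])
qpeak = np.array([45.0, 90.0, 170.0, 280.0, 560.0])

# Fit log Q = log a + b log A, then bound predictions for an ungauged
# basin with the residual spread (multiplicative, so the absolute
# uncertainty grows with basin size).
b, loga = np.polyfit(np.log(area), np.log(qpeak), 1)
resid = np.log(qpeak) - (loga + b * np.log(area))
spread = resid.std(ddof=1)

a_ungauged = 80.0                                  # km^2, hypothetical
q_hat = np.exp(loga + b * np.log(a_ungauged))
q_lo, q_hi = q_hat * np.exp(-2 * spread), q_hat * np.exp(2 * spread)
print(round(q_hat, 1), round(q_lo, 1), round(q_hi, 1))
```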

  15. Coherent Wave Measurement Buoy Arrays to Support Wave Energy Extraction

    NASA Astrophysics Data System (ADS)

    Spada, F.; Chang, G.; Jones, C.; Janssen, T. T.; Barney, P.; Roberts, J.

    2016-02-01

    Wave energy is the most abundant form of hydrokinetic energy in the United States and wave energy converters (WECs) are being developed to extract the maximum possible power from the prevailing wave climate. However, maximum wave energy capture is currently limited by the narrow banded frequency response of WECs as well as extended protective shutdown requirements during periods of large waves. These limitations must be overcome in order to maximize energy extraction, thus significantly decreasing the cost of wave energy and making it a viable energy source. Techno-economic studies of several WEC devices have shown significant potential to improve wave energy capture efficiency through operational control strategies that incorporate real-time information about local surface wave motions. Integral Consulting Inc., with ARPA-E support, is partnering with Sandia National Laboratories and Spoondrift LLC to develop a coherent array of wave-measuring devices to relay and enable the prediction of wave-resolved surface dynamics at a WEC location ahead of real time. This capability will provide necessary information to optimize power production of WECs through control strategies, thereby allowing for a single WEC design to perform more effectively across a wide range of wave environments. The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0000514.

  16. A U.S. Geological Survey Data Standard (Specifications for representation of geographic point locations for information interchange)

    USGS Publications Warehouse

    ,

    1983-01-01

    This standard establishes uniform formats for geographic point location data. Geographic point location refers to the use of a coordinate system to define the position of a point that may be on, above, or below the Earth's surface. It provides a means for representing these data in digital form for the purpose of interchanging information among data systems and improving clarity and accuracy of interpersonal communications. This document is an expansion and clarification of National Bureau of Standards FIPS PUB 70, issued October 24, 1980. There are minor editorial changes, plus the following additions and modifications: (1) The representation of latitude and longitude using radian measure was added. (2) Alternate 2 for Representation of Hemispheric Information was deleted. (3) Use of the maximum precision for all numerical values was emphasized. The Alternate Representation of Precision was deleted. (4) The length of the zone representation for the State Plane Coordinate System was standardized. (5) The term altitude was substituted for elevation throughout to conform with international usage. (6) Section 3, Specifications for Altitude Data, was expanded and upgraded significantly to the same level of detail as for the horizontal values. (7) A table delineating the coverage of Universal Transverse Mercator zones and the longitudes of the Central Meridians was added and the other tables renumbered. (8) The total length of the representation of point location data at maximum precision was standardized.
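    As a small illustration of one representation question such a standard must settle, here is a generic degrees-minutes-seconds to signed decimal degrees conversion. It does not reproduce the fixed-field formats of the standard itself:

```python
def to_decimal_degrees(deg, minutes, seconds, hemisphere):
    """Convert degrees-minutes-seconds plus a hemisphere letter (N/S/E/W)
    to signed decimal degrees. Generic conversion for illustration only;
    the exact fixed-field record layouts of the standard are not shown."""
    sign = -1.0 if hemisphere in ("S", "W") else 1.0
    return sign * (deg + minutes / 60.0 + seconds / 3600.0)

print(round(to_decimal_degrees(105, 16, 30, "W"), 6))   # -105.275
```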

  17. Geographical information system (GIS) application for flood prediction at Sungai Sembrong

    NASA Astrophysics Data System (ADS)

    Kamin, Masiri; Ahmad, Nor Farah Atiqah; Razali, Siti Nooraiin Mohd; Hilaham, Mashuda Mohamad; Rahman, Mohamad Abdul; Ngadiman, Norhayati; Sahat, Suhaila

    2017-10-01

    Flooding is one of the natural disasters that most often beset Malaysia, and the 2007 event was the worst flood ever recorded in Johor. Flood reporting has mainly focused on rising water levels rather than on delineating the flooded area. This study examined the effectiveness of a Geographic Information System (GIS) for flood prediction, taking Sg. Sembrong, Batu Pahat, Johor as the study area. It combined a hydrological model and a water balance model to display the expected flood area for future reference. Minimum, maximum and average rainfall data for January 2007 at Sg. Sembrong were used. The data show that no flooding occurs at the minimum and average rainfall values of 17.2 mm and 2 mm, respectively. At the maximum rainfall of 203 mm, the flooded area was 9,983 hectares, with a greatest water depth of 2 m. The results show that combining hydrological and water balance models in GIS is well suited to obtaining preliminary flood information immediately. GIS is also a powerful tool in hydrological engineering, helping engineers and planners to visualize flood events, perform flood analysis, solve problems, and make rational, accurate and efficient decisions.

  18. Elemental conservation units: communicating extinction risk without dictating targets for protection.

    PubMed

    Wood, Chris C; Gross, Mart R

    2008-02-01

    Conservation biologists mostly agree on the need to identify and protect biodiversity below the species level but have not yet resolved the best approach. We addressed 2 issues relevant to this debate. First, we distinguished between the abstract goal of preserving the maximum amount of unique biodiversity and the pragmatic goal of minimizing the loss of ecological goods and services given that further loss of biodiversity seems inevitable. Second, we distinguished between the scientific task of assessing extinction risk and the normative task of choosing targets for protection. We propose that scientific advice on extinction risk be given at the smallest meaningful scale: the elemental conservation unit (ECU). An ECU is a demographically isolated population whose probability of extinction over the time scale of interest (say 100 years) is not substantially affected by natural immigration from other populations. Within this time frame, the loss of an ECU would be irreversible without human intervention. Society's decision to protect an ECU ought to reflect human values that have social, economic, and political dimensions. Scientists can best inform this decision by providing advice about the probability that an ECU will be lost and the ecological and evolutionary consequences of that loss in a form that can be integrated into landscape planning. The ECU approach provides maximum flexibility to decision makers and ensures that the scientific task of assessing extinction risk informs, but remains distinct from, the normative social challenge of setting conservation targets.

  19. Seasonal changes in the thermoenergetics of the marsupial sugar glider, Petaurus breviceps.

    PubMed

    Holloway, J C; Geiser, F

    2001-11-01

    Little information is available on seasonal changes in thermal physiology and energy expenditure in marsupials. To provide new information on the subject, we quantified how body mass, body composition, metabolic rate, maximum heat production, body temperature and thermal conductance change with season in sugar gliders (Petaurus breviceps) held in outdoor aviaries. Sugar gliders increased body mass in autumn to a peak in May/June, which was caused to a large extent by an increase in body fat content. Body mass then declined to minimum values in August/September. Resting metabolic rate both below and above the thermoneutral zone (TNZ) was higher in summer than in winter and the lower critical temperature of the TNZ occurred at a higher ambient temperature (Ta) in summer. The basal metabolic rate was as much as 45% below that predicted from allometric equations for placental mammals and was about 15% lower in winter than in summer. In contrast, maximum heat production was raised significantly by about 20% in winter. This, together with an approximately 20% decrease in thermal conductance, resulted in a 13 degrees C reduction of the minimum effective Ta that gliders were able to withstand. Our study provides the first evidence that, despite the apparent lack of functional brown adipose tissue, sugar gliders are able to significantly increase heat production in winter. Moreover, the lower thermoregulatory heat production at most Ta values in winter, when food in the wild is scarce, should allow them to reduce energy expenditure.
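    The "percent of predicted" comparison can be illustrated with the textbook Kleiber relation for placental mammals, BMR ≈ 70 · M^0.75 kcal/day. Both the relation used here and the body mass are illustrative assumptions; the paper's own allometric equations are not quoted:

```python
def kleiber_bmr_kcal_per_day(mass_kg):
    """Textbook Kleiber allometric prediction for placental mammals,
    BMR ~ 70 * M^0.75 kcal/day. Used here only to illustrate the
    'percent of predicted' comparison, not the paper's equations."""
    return 70.0 * mass_kg ** 0.75

predicted = kleiber_bmr_kcal_per_day(0.13)   # ~130 g sugar glider (assumed mass)
measured = 0.55 * predicted                  # '45% below predicted'
print(round(measured / predicted, 2))        # 0.55
```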

  20. Maximum Relative Entropy of Coherence: An Operational Coherence Measure.

    PubMed

    Bu, Kaifeng; Singh, Uttam; Fei, Shao-Ming; Pati, Arun Kumar; Wu, Junde

    2017-10-13

    The operational characterization of quantum coherence is the cornerstone in the development of the resource theory of coherence. We introduce a new coherence quantifier based on maximum relative entropy. We prove that the maximum relative entropy of coherence is directly related to the maximum overlap with maximally coherent states under a particular class of operations, which provides an operational interpretation of the maximum relative entropy of coherence. Moreover, we show that, for any coherent state, there are examples of subchannel discrimination problems such that this coherent state allows for a higher probability of successfully discriminating subchannels than that of all incoherent states. This advantage of coherent states in subchannel discrimination can be exactly characterized by the maximum relative entropy of coherence. By introducing a suitable smooth maximum relative entropy of coherence, we prove that the smooth maximum relative entropy of coherence provides a lower bound of one-shot coherence cost, and the maximum relative entropy of coherence is equivalent to the relative entropy of coherence in the asymptotic limit. Similar to the maximum relative entropy of coherence, the minimum relative entropy of coherence has also been investigated. We show that the minimum relative entropy of coherence provides an upper bound of one-shot coherence distillation, and in the asymptotic limit the minimum relative entropy of coherence is equivalent to the relative entropy of coherence.
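    For reference, the standard definitions behind this quantifier (notation follows the resource-theory literature; the set of incoherent states is denoted by I):

```latex
% Max-relative entropy and the induced coherence quantifier:
D_{\max}(\rho \,\|\, \sigma) = \log \min \{\lambda : \rho \le \lambda \sigma\},
\qquad
C_{\max}(\rho) = \min_{\sigma \in \mathcal{I}} D_{\max}(\rho \,\|\, \sigma).
```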

  1. Factors influencing the delivery of abortion services in Ontario: a descriptive study.

    PubMed

    Ferris, L E; McMain-Klein, M; Iron, K

    1998-01-01

    Although Canadian women have had the right to obtain legal induced abortions for the past decade, access to the procedure is still limited and controversial in many areas. Chiefs of obstetrics and gynecology, chiefs of staff, directors of nursing and other health professionals at 163 general hospitals in Ontario, Canada, were asked to provide information on issues concerning the availability of abortion services at their facility. The hospital participation rate was 97% and the individual response rate was 75%. Nearly one-half (48%) of hospitals perform abortions. Approximately 36% of these hospitals do so up to a maximum gestational age of 12 weeks, 23% to a maximum of 13-16 weeks, 37% to a maximum of 17-20 weeks and 4% at greater than 20 weeks. Hospital factors, including resources and policies, did not significantly influence whether abortions are provided. However, these factors did affect the number performed, whether there were gestational limitations and the choice of procedure. About 13% of provider hospitals indicated that staff training contributes to the existence of gestational age limits, and 24% said that it directly influences procedure choice. Only 18% of hospitals reported that their physicians have received additional training outside of their medical school or medical residency education to learn abortion techniques or to gain new skills. Forty-five percent of hospitals that provide abortions had experienced harassment within the past two years, and 15% reported that this harassment has directly affected their staff members' willingness to provide abortions. Judging from their provision of obstetric care, many hospitals in Ontario are capable of offering abortion services but do not. Some of the reasons for this failure are related to the procedure itself, while others may be related to resource issues that affect the delivery of other medical services as well. 
Variation in the availability of abortions is due to a shortage of clinicians performing the procedure, and training directly influences gestational limits and procedural choices.

  2. Quantum and Information Thermodynamics: A Unifying Framework Based on Repeated Interactions

    NASA Astrophysics Data System (ADS)

    Strasberg, Philipp; Schaller, Gernot; Brandes, Tobias; Esposito, Massimiliano

    2017-04-01

    We expand the standard thermodynamic framework of a system coupled to a thermal reservoir by considering a stream of independently prepared units repeatedly put into contact with the system. These units can be in any nonequilibrium state and interact with the system with an arbitrary strength and duration. We show that this stream constitutes an effective resource of nonequilibrium free energy, and we identify the conditions under which it behaves as a heat, work, or information reservoir. We also show that this setup provides a natural framework to analyze information erasure ("Landauer's principle") and feedback-controlled systems ("Maxwell's demon"). In the limit of a short system-unit interaction time, we further demonstrate that this setup can be used to provide a thermodynamically sound interpretation to many effective master equations. We discuss how nonautonomously driven systems, micromasers, lasing without inversion and the electronic Maxwell demon can be thermodynamically analyzed within our framework. While the present framework accounts for quantum features (e.g., squeezing, entanglement, coherence), we also show that quantum resources do not offer any advantage compared to classical ones in terms of the maximum extractable work.
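    The resource bookkeeping in this setup rests on the standard nonequilibrium free energy (H is the Hamiltonian, T the reservoir temperature, S the von Neumann entropy), with extractable work bounded by its decrease:

```latex
F(\rho) = \operatorname{tr}(H\rho) - k_B T\, S(\rho),
\qquad
W_{\mathrm{ext}} \le F(\rho_{\mathrm{initial}}) - F(\rho_{\mathrm{final}}).
```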

  3. Integrating Research and Education at the National Center for Atmospheric Research at the Interface of Formal and Informal Education

    NASA Astrophysics Data System (ADS)

    Johnson, R.; Foster, S.

    2005-12-01

    The National Center for Atmospheric Research (NCAR) in Boulder, Colorado, is a leading institution in scientific research, education and service associated with exploring and understanding our atmosphere and its interactions with the Sun, the oceans, the biosphere, and human society. NCAR draws thousands of public and scientific visitors from around the world to its Mesa Laboratory facility annually for educational as well as research purposes. Public visitors include adult visitors, clubs, and families on an informal visit to NCAR and its exhibits, as well as classroom and summer camp groups. Additionally, NCAR provides extensive computational and visualization services, which can be used not only for scientific, but also public informational purposes. As such, NCAR's audience provides an opportunity to address both formal and informal education through the programs that we offer. The University Corporation for Atmospheric Research (UCAR) Office of Education and Outreach works with NCAR to develop and implement a highly-integrated strategy for reaching both formal and informal audiences through programs that range from events and exhibits to professional development (for scientists and educators) and bilingual distance learning. The hallmarks of our program include close collaboration with scientists, multi-purposing resources where appropriate for maximum efficiency, and a commitment to engage populations historically underrepresented in science in the geosciences.

  4. Relationships in subtribe Diocleinae (Leguminosae; Papilionoideae) inferred from internal transcribed spacer sequences from nuclear ribosomal DNA.

    PubMed

    Varela, Eduardo S; Lima, João P M S; Galdino, Alexsandro S; Pinto, Luciano da S; Bezerra, Walderly M; Nunes, Edson P; Alves, Maria A O; Grangeiro, Thalles B

    2004-01-01

    The complete sequences of nuclear ribosomal DNA (nrDNA) internal transcribed spacer regions (ITS/5.8S) were determined for species belonging to six genera from the subtribe Diocleinae as well as for the anomalous genera Calopogonium and Pachyrhizus. Phylogenetic trees constructed by distance matrix, maximum parsimony and maximum likelihood methods showed that Calopogonium and Pachyrhizus were outside the clade Diocleinae (Canavalia, Camptosema, Cratylia, Dioclea, Cymbosema, and Galactia). This finding supports previous morphological, phytochemical, and molecular evidence that Calopogonium and Pachyrhizus do not belong to the subtribe Diocleinae. Within the true Diocleinae clade, the clustering of genera and species was congruent with morphology-based classifications, suggesting that ITS/5.8S sequences can provide enough informative sites to allow resolution below the genus level. This is the first evidence on the phylogeny of subtribe Diocleinae based on nuclear DNA sequences.
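    The distance-matrix analyses mentioned above start from pairwise distances between aligned sequences. As an illustrative aside, here is a minimal sketch of the simplest such measure, the uncorrected p-distance, using made-up toy alignments (not real Diocleinae ITS data):

```python
def p_distance(a, b):
    """Uncorrected p-distance: fraction of differing sites between two
    aligned sequences, skipping positions where either sequence has a gap."""
    pairs = [(x, y) for x, y in zip(a, b) if x != "-" and y != "-"]
    return sum(x != y for x, y in pairs) / len(pairs)

# toy aligned fragments (illustrative only, not real data)
seqs = {"Dioclea":     "ACGTACGTAC",
        "Canavalia":   "ACGTACGAAC",
        "Pachyrhizus": "ATGTTCGAAC"}
names = list(seqs)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        print(a, b, p_distance(seqs[a], seqs[b]))
```

    Such a matrix of pairwise distances is the input to distance-based tree-building methods such as neighbor joining.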

  5. Annoyance caused by propeller airplane flyover noise

    NASA Technical Reports Server (NTRS)

    Mccurdy, D. A.; Powell, C. A.

    1984-01-01

    Laboratory experiments were conducted to provide information on quantifying the annoyance response of people to propeller airplane noise. The items of interest were current noise metrics, tone corrections, duration corrections, critical band corrections, and the effects of engine type, operation type, maximum takeoff weight, blade passage frequency, and blade tip speed. In each experiment, 64 subjects judged the annoyance of recordings of propeller and jet airplane operations presented at D-weighted sound pressure levels of 70, 80, and 90 dB in a testing room which simulates the outdoor acoustic environment. The first experiment examined 11 propeller airplanes with maximum takeoff weights greater than or equal to 5700 kg. The second experiment examined 14 propeller airplanes weighing 5700 kg or less. Five jet airplanes were included in each experiment. For both the heavy and light propeller airplanes, perceived noise level and perceived level (Stevens Mark VII procedure) predicted annoyance better than other current noise metrics.

  6. Application of the maximum entropy principle to determine ensembles of intrinsically disordered proteins from residual dipolar couplings.

    PubMed

    Sanchez-Martinez, M; Crehuet, R

    2014-12-21

    We present a method based on the maximum entropy principle that can re-weight an ensemble of protein structures based on data from residual dipolar couplings (RDCs). The RDCs of intrinsically disordered proteins (IDPs) provide information on the secondary structure elements present in an ensemble; however, even two sets of RDCs are not enough to fully determine the distribution of conformations, and the force field used to generate the structures has a pervasive influence on the refined ensemble. Two physics-based coarse-grained force fields, Profasi and Campari, are able to predict the secondary structure elements present in an IDP, but even after including the RDC data, the re-weighted ensembles differ between the two force fields. The spread of the resulting IDP ensembles thus highlights the need for better force fields. We distribute our algorithm in an open-source Python code.
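    The maximum-entropy re-weighting the abstract describes can be sketched for the simplest case of a single linear constraint: find weights proportional to exp(λx) whose weighted average matches an experimental value. This is an illustrative sketch with hypothetical observable values, not the authors' distributed code:

```python
import math

def reweight(values, target, lo=-50.0, hi=50.0, tol=1e-10):
    """Maximum-entropy re-weighting for one linear constraint: find weights
    w_i proportional to exp(lam * x_i) whose weighted mean equals `target`.
    The weighted mean is monotone in lam, so bisection on lam suffices."""
    def weighted_mean(lam):
        m = max(lam * x for x in values)        # subtract max exponent for stability
        w = [math.exp(lam * x - m) for x in values]
        z = sum(w)
        return sum(wi * x for wi, x in zip(w, values)) / z, [wi / z for wi in w]
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        mean, w = weighted_mean(mid)
        if abs(mean - target) < tol:
            break
        if mean < target:
            lo = mid
        else:
            hi = mid
    return w

# hypothetical back-calculated observable (e.g. one RDC) for 5 conformers
x = [0.1, 0.4, 0.5, 0.8, 1.2]
w = reweight(x, target=0.7)
print(sum(wi * xi for wi, xi in zip(w, x)))  # ≈ 0.7, the imposed average
```

    Real applications constrain many observables at once, which requires solving for a vector of Lagrange multipliers iteratively rather than a single bisection.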

  7. Bounding species distribution models

    USGS Publications Warehouse

    Stohlgren, T.J.; Jarnevich, C.S.; Esaias, W.E.; Morisette, J.T.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used. © 2011 Current Zoology.
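    The "bounding" alteration described above amounts to clamping each environmental predictor to the range observed in the model-development data before extrapolating. A minimal sketch, with hypothetical predictor names and values:

```python
def bound_predictors(grid, training):
    """Clamp each environmental predictor in `grid` (a list of dicts, one per
    map cell) to the [min, max] range observed in `training`, so the fitted
    model is never queried outside the environmental bounds of its data."""
    bounds = {k: (min(r[k] for r in training), max(r[k] for r in training))
              for k in training[0]}
    return [{k: min(max(v, bounds[k][0]), bounds[k][1])
             for k, v in cell.items()}
            for cell in grid]

# hypothetical training records and one extrapolation cell outside their bounds
training = [{"tmax": 30.0, "precip": 200.0}, {"tmax": 38.0, "precip": 450.0}]
grid = [{"tmax": 42.0, "precip": 100.0}]
print(bound_predictors(grid, training))
# → [{'tmax': 38.0, 'precip': 200.0}]
```

    The more conservative alternative the abstract mentions is to mask such cells out entirely rather than clamp them.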

  8. Bounding Species Distribution Models

    NASA Technical Reports Server (NTRS)

    Stohlgren, Thomas J.; Jarnevich, Catherine S.; Morisette, Jeffrey T.; Esaias, Wayne E.

    2011-01-01

    Species distribution models are increasing in popularity for mapping suitable habitat for species of management concern. Many investigators now recognize that extrapolations of these models with geographic information systems (GIS) might be sensitive to the environmental bounds of the data used in their development, yet there is no recommended best practice for "clamping" model extrapolations. We relied on two commonly used modeling approaches: classification and regression tree (CART) and maximum entropy (Maxent) models, and we tested a simple alteration of the model extrapolations, bounding extrapolations to the maximum and minimum values of primary environmental predictors, to provide a more realistic map of suitable habitat of hybridized Africanized honey bees in the southwestern United States. Findings suggest that multiple models of bounding, and the most conservative bounding of species distribution models, like those presented here, should probably replace the unbounded or loosely bounded techniques currently used [Current Zoology 57 (5): 642-647, 2011].

  9. Netest: A Tool to Measure the Maximum Burst Size, Available Bandwidth and Achievable Throughput

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Guojun; Tierney, Brian

    2003-01-31

    Distinguishing available bandwidth and achievable throughput is essential for improving network applications' performance. Achievable throughput is the throughput considering a number of factors such as network protocol, host speed, network path, and TCP buffer space, whereas available bandwidth only considers the network path. Without understanding this difference, trying to improve network applications' performance is like "blind men feeling the elephant" [4]. In this paper, we define and distinguish bandwidth and throughput, and debate which part of each is achievable and which is available. Also, we introduce and discuss a new concept - Maximum Burst Size - that is crucial to network performance and bandwidth sharing. A tool, netest, is introduced to help users determine the available bandwidth; it provides information to achieve better throughput while fairly sharing the available bandwidth, thus reducing misuse of the network.

  10. Multivariate normal maximum likelihood with both ordinal and continuous variables, and data missing at random.

    PubMed

    Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C

    2018-04-01

    A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal-continuous SEM is implemented in OpenMx, free and open-source software.
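    The conditional-probability split described above can be illustrated for one continuous and one ordinal indicator under a probit model. This sketch assumes a standardized bivariate normal latent structure with arbitrary thresholds; it is an illustration of the factorization idea, not the OpenMx implementation:

```python
import math

def phi(z):   # standard normal density
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def Phi(z):   # standard normal cumulative distribution
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def joint_lik(x, category, rho, thresholds=(-math.inf, 0.0, 1.0, math.inf)):
    """Likelihood of one (continuous x, ordinal category) observation:
    the ordinal variable is a thresholded latent N(0,1) correlated with x
    by rho, and the joint density is split as f(x) * P(category | x),
    one of the two symmetric factorizations the abstract mentions."""
    f_x = phi(x)
    mu_c = rho * x                        # conditional mean of the latent
    sd_c = math.sqrt(1.0 - rho * rho)     # conditional sd of the latent
    lo, hi = thresholds[category], thresholds[category + 1]
    p_cat = Phi((hi - mu_c) / sd_c) - Phi((lo - mu_c) / sd_c)
    return f_x * p_cat

# category probabilities telescope, so summing over categories recovers f(x)
x, rho = 0.3, 0.5
total = sum(joint_lik(x, c, rho) for c in range(3))
print(abs(total - phi(x)) < 1e-12)  # → True
```

    With more ordinal indicators, P(categories | continuous) becomes a multivariate normal rectangle probability, which is the integration whose cost the abstract quantifies.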

  11. Refractory metal alloys and composites for space power systems

    NASA Technical Reports Server (NTRS)

    Stephens, Joseph R.; Petrasek, Donald W.; Titran, Robert H.

    1988-01-01

    Space power requirements for future NASA and other U.S. missions will range from a few kilowatts to megawatts of electricity. Maximum efficiency is a key goal of any power system in order to minimize weight and size so that the space shuttle may be used a minimum number of times to put the power supply into orbit. Nuclear power has been identified as the primary source to meet these high levels of electrical demand. One way to achieve maximum efficiency is to operate the power supply, energy conversion system, and related components at relatively high temperatures. NASA Lewis Research Center has undertaken a research program on advanced technology of refractory metal alloys and composites that will provide baseline information for space power systems in the 1990s and the 21st century. Basic research on the tensile and creep properties of fibers, matrices, and composites is discussed.

  12. A spacecraft attitude and articulation control system design for the Comet Halley intercept mission

    NASA Technical Reports Server (NTRS)

    Key, R. W.

    1981-01-01

    An attitude and articulation control system design for the Comet Halley 1986 intercept mission is presented. A spacecraft dynamics model consisting of five hinge-connected rigid bodies is used to analyze the spacecraft attitude and articulation control system performance. Inertial and optical information are combined to generate scan platform pointing commands. The comprehensive spacecraft model has been developed into a digital computer simulation program, which provides performance characteristics and insight pertaining to the control and dynamics of a Halley Intercept spacecraft. It is shown that scan platform pointing error has a maximum value of 1.8 milliradians during the four-minute closest approach interval. It is also shown that the jitter, or scan platform pointing rate error, would have a maximum value of 2.5 milliradians/second for the nominal 1000 km closest approach distance trajectory and associated environment model.

  13. Mixed integer nonlinear programming model of wireless pricing scheme with QoS attribute of bandwidth and end-to-end delay

    NASA Astrophysics Data System (ADS)

    Irmeilyana, Puspita, Fitri Maya; Indrawati

    2016-02-01

    The pricing for wireless networks is developed by considering linearity factors, price elasticity and price factors. A mixed integer nonlinear programming wireless pricing model is proposed as a nonlinear programming problem that can be solved optimally using LINGO 13.0. The solutions are expected to give some information about the connections between the acceptance factor and the price. Previous models focused on bandwidth as the sole QoS attribute. The models attempt to maximize the total price for a connection based on QoS parameters. The QoS attributes used here are the bandwidth and the end-to-end delay that affect the traffic. The maximum price is achieved when the provider determines the required increment or decrement of the price in response to the QoS change and the amount of the QoS value.

  14. Patient-oriented methotrexate information sites on the Internet: a review of completeness, accuracy, format, reliability, credibility, and readability.

    PubMed

    Thompson, Andrew E; Graydon, Sara L

    2009-01-01

    With continuing use of the Internet, rheumatologists are referring patients to various websites to gain information about medications and diseases. Our goal was to develop and evaluate a Medication Website Assessment Tool (MWAT) for use by health professionals, and to explore the overall quality of methotrexate information presented on common English-language websites. Identification of websites was performed using a search strategy on the search engine Google. The first 250 hits were screened. Inclusion criteria included those English-language websites from authoritative sources, trusted medical, physicians', and common health-related websites. Websites from pharmaceutical companies, online pharmacies, and where the purpose seemed to be primarily advertisements were also included. Product monographs or technical-based web pages and web pages where the information was clearly directed at patients with cancer were excluded. Two reviewers independently scored each included web page for completeness and accuracy, format, readability, reliability, and credibility. An overall ranking was provided for each methotrexate information page. Twenty-eight web pages were included in the analysis. The average score for completeness and accuracy was 15.48+/-3.70 (maximum 24) with 10 out of 28 pages scoring 18 (75%) or higher. The average format score was 6.00+/-1.46 (maximum 8). The Flesch-Kincaid Grade Level revealed an average grade level of 10.07+/-1.84, with 5 out of 28 websites written at a reading level less than grade 8; however, no web page scored at a grade 5 to 6 level. An overall ranking was calculated identifying 8 web pages as appropriate sources of accurate and reliable methotrexate information. With the enormous amount of information available on the Internet, it is important to direct patients to web pages that are complete, accurate, readable, and credible sources of information. 
We identified web pages that may serve the interests of both rheumatologists and patients.

  15. Bayesian or Laplacean inference, entropy and information theory and information geometry in data and signal processing

    NASA Astrophysics Data System (ADS)

    Mohammad-Djafari, Ali

    2015-01-01

    The main object of this tutorial article is first to review the main inference tools using Bayesian approach, Entropy, Information theory and their corresponding geometries. This review is focused mainly on the ways these tools have been used in data, signal and image processing. After a short introduction of the different quantities related to the Bayes rule, the entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, Fisher information, we will study their use in different fields of data and signal processing such as: entropy in source separation, Fisher information in model order selection, different Maximum Entropy based methods in time series spectral estimation and finally, general linear inverse problems.
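    Two of the central quantities reviewed above, Shannon entropy and the Kullback-Leibler divergence, can be computed directly for discrete distributions. A minimal sketch with made-up distributions:

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum p log p, in nats."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def kl(p, q):
    """Kullback-Leibler divergence D(p || q) = sum p log(p / q)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# the uniform distribution is the maximum-entropy choice with no constraints
uniform = [0.25] * 4
p = [0.7, 0.1, 0.1, 0.1]
print(entropy(uniform))   # log 4, the maximum over 4 outcomes
print(kl(p, uniform))     # positive; equals log 4 - H(p)
```

    The identity D(p || uniform) = log n - H(p) makes the link between the two quantities explicit: minimizing divergence from the uniform distribution is the same as maximizing entropy.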

  16. Implementing informative priors for heterogeneity in meta-analysis using meta-regression and pseudo data.

    PubMed

    Rhodes, Kirsty M; Turner, Rebecca M; White, Ian R; Jackson, Dan; Spiegelhalter, David J; Higgins, Julian P T

    2016-12-20

    Many meta-analyses combine results from only a small number of studies, a situation in which the between-study variance is imprecisely estimated when standard methods are applied. Bayesian meta-analysis allows incorporation of external evidence on heterogeneity, providing the potential for more robust inference on the effect size of interest. We present a method for performing Bayesian meta-analysis using data augmentation, in which we represent an informative conjugate prior for between-study variance by pseudo data and use meta-regression for estimation. To assist in this, we derive predictive inverse-gamma distributions for the between-study variance expected in future meta-analyses. These may serve as priors for heterogeneity in new meta-analyses. In a simulation study, we compare approximate Bayesian methods using meta-regression and pseudo data against fully Bayesian approaches based on importance sampling techniques and Markov chain Monte Carlo (MCMC). We compare the frequentist properties of these Bayesian methods with those of the commonly used frequentist DerSimonian and Laird procedure. The method is implemented in standard statistical software and provides a less complex alternative to standard MCMC approaches. An importance sampling approach produces almost identical results to standard MCMC approaches, and results obtained through meta-regression and pseudo data are very similar. On average, data augmentation provides closer results to MCMC, if implemented using restricted maximum likelihood estimation rather than DerSimonian and Laird or maximum likelihood estimation. The methods are applied to real datasets, and an extension to network meta-analysis is described. The proposed method facilitates Bayesian meta-analysis in a way that is accessible to applied researchers. © 2016 The Authors. Statistics in Medicine Published by John Wiley & Sons Ltd.
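    For reference, the DerSimonian and Laird moment estimator of between-study variance, the frequentist baseline against which the Bayesian methods above are compared, can be sketched as follows (the effect sizes and within-study variances are made up for illustration):

```python
def dersimonian_laird(effects, variances):
    """DerSimonian-Laird moment estimate of the between-study variance tau^2:
    tau^2 = max(0, (Q - (k - 1)) / (sum(w) - sum(w^2)/sum(w))),
    where Q is Cochran's heterogeneity statistic with weights w_i = 1/v_i."""
    w = [1.0 / v for v in variances]
    sw = sum(w)
    y_bar = sum(wi * yi for wi, yi in zip(w, effects)) / sw
    q = sum(wi * (yi - y_bar) ** 2 for wi, yi in zip(w, effects))
    k = len(effects)
    denom = sw - sum(wi * wi for wi in w) / sw
    return max(0.0, (q - (k - 1)) / denom)

# hypothetical log odds ratios and within-study variances for 4 small trials
effects = [0.2, -0.1, 0.5, 0.3]
variances = [0.04, 0.09, 0.05, 0.06]
print(dersimonian_laird(effects, variances))
```

    With so few studies the estimate is imprecise and is truncated at zero whenever Q falls below its degrees of freedom, which is exactly the situation the abstract's informative priors are designed to address.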

  17. Elemental and charge state composition of the fast solar wind observed with SMS instruments on WIND

    NASA Technical Reports Server (NTRS)

    Gloeckler, G.; Galvin, A. B.; Ipavich, F. M.; Hamilton, D. C.; Bochsler, P.; Geiss, J.; Fisk, L. A.; Wilken, B.

    1995-01-01

    The elemental composition and charge state distributions of heavy ions of the solar wind provide essential information about: (1) atom-ion separation processes in the solar atmosphere leading to the 'FIP effect' (the overabundance of low First Ionization Potential (FIP) elements in the solar wind compared to the photosphere); and (2) coronal temperature profiles, as well as mechanisms which heat the corona and accelerate the solar wind. This information is required for solar wind acceleration models. The SWICS instrument on Ulysses measures for all solar wind flow conditions the relative abundance of about 8 elements and 20 charge states of the solar wind. Furthermore, the Ulysses high-latitude orbit provides an unprecedented look at the solar wind from the polar coronal holes near solar minimum conditions. The MASS instrument on the WIND spacecraft is a high-mass resolution solar wind ion mass spectrometer that routinely provides not only the abundances and charge states of all elements easily measured with SWICS, but also of N, Mg, and S. The MASS sensor was fully operational at the end of 1994 and has sampled the in-ecliptic solar wind composition in both the slow and the corotating fast streams. This unique combination of SWICS on Ulysses and MASS on WIND allows us to view for the first time the solar wind from two regions of the large coronal hole. Observations with SWICS in the coronal hole wind: (1) indicate that the FIP effect is small; and (2) allow us to determine the altitude of the maximum in the electron temperature profile, and indicate a maximum temperature of approximately 1.5 MK. New results from the SMS instruments on Wind will be compared with results from SWICS on Ulysses.

  18. Pacific walrus coastal haulout database, 1852-2016— Background report

    USGS Publications Warehouse

    Fischbach, Anthony S.; Kochnev, Anatoly A.; Garlich-Miller, Joel L.; Jay, Chadwick V.

    2016-01-01

    Walruses are large benthic predators that rest out of water between foraging bouts. Coastal “haulouts” (places where walruses rest) are formed by adult males in summer and sometimes by females and young when sea ice is absent, and are often used repeatedly across seasons and years. Understanding the geography and historical use of haulouts provides a context for conservation efforts. We summarize information on Pacific walrus haulouts from available reports (n =151), interviews with coastal residents and aviators, and personal observations of the authors. We provide this in the form of a georeferenced database that can be queried and displayed with standard geographic information system and database management software. The database contains 150 records of Pacific walrus haulouts, with a summary of basic characteristics on maximum haulout aggregation size, age-sex composition, season of use, and decade of most recent use. Citations to reports are provided in the appendix and as a bibliographic database. Haulouts were distributed across the coasts of the Pacific walrus range; however, the largest (maximum >10,000 walruses) of the haulouts reported in the recent 4 decades (n=19) were concentrated on the Russian shores in regions near the Bering Strait and northward into the western Chukchi Sea (n=17). Haulouts of adult female and young walruses primarily occurred in the Bering Strait region and areas northward, with others occurring in the central Bering Sea, Gulf of Anadyr, and Saint Lawrence Island regions. The Gulf of Anadyr was the only region to contain female and young walrus haulouts, which formed after the northward spring migration and prior to autumn ice formation.

  19. Lessons Learned in the Integration of Earth Remote Sensing Data within the NOAA/NWS Damage Assessment Toolkit

    NASA Astrophysics Data System (ADS)

    Molthan, A.; Schultz, L. A.; McGrath, K.; Bell, J. R.; Cole, T.; Meyer, P. J.; Burks, J.; Camp, P.; Angle, K.

    2016-12-01

    Following the occurrence of a suspected or known tornado, meteorologists with NOAA's National Weather Service are tasked with performing a detailed ground survey to map the impacts of the tornado, identify specific damage indicators, and link those damage indicators to the Enhanced Fujita scale as an estimate of the intensity of the tornado at various points along the damage path. Over the past few years, NOAA/NWS meteorologists have developed the NOAA/NWS Damage Assessment Toolkit (DAT), a smartphone- and web-based application to support the collection of damage information, editing of the damage survey, and final publication. This allows meteorologists in the field to sample the damage track, collect geotagged photos with notations of damaged areas, and aggregate the information into a more detailed survey, whereas previous efforts may have been limited to start and end locations, maximum width, and maximum intensity. To support these damage assessment efforts, various Earth remote sensing data sets were incorporated into the DAT, following preliminary activities using remote sensing to support select NOAA/NWS field offices after the widespread outbreak of tornadoes that occurred in the southeastern United States on April 27, 2011. These efforts included the collection of various products in collaboration with multiple federal agencies and commercial providers, with particular emphasis upon the USGS Hazards Data Distribution System, hosting and sharing of these products through geospatial platforms, partnerships with forecasters to better understand their needs, and the development and delivery of training materials. This presentation will provide an overview of the project along with strengths and weaknesses, opportunities for future work and improvements, and best practices learned during the "research to applications" process supported by the NASA Applied Sciences: Disasters program.

  20. 77 FR 76169 - Increase in Maximum Tuition and Fee Amounts Payable under the Post-9/11 GI Bill

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    .... Correspondence $9,324.89. Post 9/11 Entitlement Charge Amount for Tests Licensing and Certification Tests... DEPARTMENT OF VETERANS AFFAIRS Increase in Maximum Tuition and Fee Amounts Payable under the Post... this notice is to inform the public of the increase in the Post-9/11 GI Bill maximum tuition and fee...

  1. Information theoretical assessment of visual communication with wavelet coding

    NASA Astrophysics Data System (ADS)

    Rahman, Zia-ur

    1995-06-01

    A visual communication channel can be characterized by the efficiency with which it conveys information, and the quality of the images restored from the transmitted data. Efficient data representation requires the use of constraints of the visual communication channel. Our information theoretic analysis combines the design of the wavelet compression algorithm with the design of the visual communication channel. Shannon's communication theory, Wiener's restoration filter, and the critical design factors of image gathering and display are combined to provide metrics for measuring the efficiency of data transmission, and for quantitatively assessing the visual quality of the restored image. These metrics are: a) the mutual information (Eta) between the radiance field and the restored image, and b) the efficiency of the channel, which can be roughly measured as the ratio (Eta)/H, where H is the average number of bits being used to transmit the data. Huck et al. (Journal of Visual Communication and Image Representation, Vol. 4, No. 2, 1993) have shown that channels designed to maximize (Eta) also maximize the visual quality of the restored image. Our assessment provides a framework for designing channels which provide the highest possible visual quality for a given amount of data under the critical design limitations of the image gathering and display devices. Results show that a trade-off exists between the maximum realizable information of the channel and its efficiency: an increase in one leads to a decrease in the other. The final selection of which of these quantities to maximize is, of course, application dependent.

  2. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140

  3. An automatic data system for vibration modal tuning and evaluation

    NASA Technical Reports Server (NTRS)

    Salyer, R. A.; Jung, E. J., Jr.; Huggins, S. L.; Stephens, B. L.

    1975-01-01

    A digitally based automatic modal tuning and analysis system developed to provide an operational capability beginning at 0.1 hertz is described. The elements of the system, which provides unique control features, maximum operator visibility, and rapid data reduction and documentation, are briefly described; and the operational flow is discussed to illustrate the full range of capabilities and the flexibility of application. The successful application of the system to a modal survey of the Skylab payload is described. Information about the Skylab test article, coincident-quadrature analysis of modal response data, orthogonality, and damping calculations is included in the appendixes. Recommendations for future application of the system are also made.

  4. Digital processing of satellite imagery application to jungle areas of Peru

    NASA Technical Reports Server (NTRS)

    Pomalaza, J. C. (Principal Investigator); Pomalaza, C. A.; Espinoza, J.

    1976-01-01

    The author has identified the following significant results. The use of clustering methods permits the development of relatively fast classification algorithms that could be implemented in an inexpensive computer system with a limited amount of memory. Analysis of CCTs using these techniques can provide a great deal of detail, permitting the use of the maximum resolution of LANDSAT imagery. Potential cases were detected in which other classification techniques, using a Gaussian approximation for the distribution functions, could be applied to advantage. For jungle areas, channels 5 and 7 can provide enough information to delineate drainage patterns, swamp and wet areas, and make a reasonably broad classification of forest types.

  5. Reduction of temperature rise in high-speed photography

    NASA Technical Reports Server (NTRS)

    Slater, Howard A.

    1987-01-01

    Information is provided on filtration with glass and infrared absorbing and reflecting filters. Glass and infrared filtration is a simple and effective method to reduce the radiation heat transfer associated with continuous high intensity tungsten lamps. The results of a filtration experiment are explained. The figures provide starting points for quantifying the effectiveness of various filters and associated light intensities. The combination of a spectrally selective reflector (hot or cold mirror) based on multilayer thin film principles and heat absorbing or infrared opaque glass results in the maximum reduction in temperature rise with a minimum of incident light loss. Use is recommended of a voltage regulator to further control temperature rise and incident light values.

  6. Reduction of temperature rise in high-speed photography

    NASA Technical Reports Server (NTRS)

    Slater, Howard A.

    1988-01-01

    Information is provided on filtration with glass and infrared absorbing and reflecting filters. Glass and infrared filtration is a simple and effective method to reduce the radiation heat transfer associated with continuous high intensity tungsten lamps. The results of a filtration experiment are explained. The figures provide starting points for quantifying the effectiveness of various filters and associated light intensities. The combination of a spectrally selective reflector (hot or cold mirror) based on multilayer thin film principles and heat absorbing or infrared opaque glass results in the maximum reduction in temperature rise with a minimum of incident light loss. Use is recommended of a voltage regulator to further control temperature rise and incident light values.

  7. Compatibility: drugs and parenteral nutrition

    PubMed Central

    Miranda, Talita Muniz Maloni; Ferraresi, Andressa de Abreu

    2016-01-01

    ABSTRACT Objective Standardization and systematization of data to provide quick access to the compatibility of leading injectable drugs used in hospitals with parenteral nutrition. Methods We selected 55 injectable drugs analyzed individually with two types of parenteral nutrition: 2-in-1 and 3-in-1. The following variables were considered: active ingredient, compatibility of drugs with the parenteral nutrition with or without lipids, and maximum drug concentration after dilution for the drugs compatible with parenteral nutrition. Drugs were classified as compatible, incompatible and untested. Results After analysis, relevant information on each product’s compatibility with parenteral nutrition was summarized in a table. Conclusion Systematization of compatibility data provided quick and easy access, and enabled standardizing pharmacists’ work. PMID:27074235

  8. Potential distribution dataset of honeybees in Indian Ocean Islands: Case study of Zanzibar Island.

    PubMed

    Mwalusepo, Sizah; Muli, Eliud; Nkoba, Kiatoko; Nguku, Everlyn; Kilonzo, Joseph; Abdel-Rahman, Elfatih M; Landmann, Tobias; Fakih, Asha; Raina, Suresh

    2017-10-01

    Honeybees ( Apis mellifera ) are principal insect pollinators, whose worldwide distribution and abundance are known to largely depend on climatic conditions. However, presence-record datasets on the potential distribution of honeybees in the Indian Ocean Islands remain poorly documented. Presence records in shapefile format, together with the probability of occurrence of honeybees under different temperature change scenarios, are provided in this article for Zanzibar Island. The maximum entropy (Maxent) package was used to analyse the potential distribution of honeybees. The dataset provides information on the current and future distribution of honeybees on Zanzibar Island. The dataset is of great importance for improving stakeholders' understanding of the role of temperature change in the spatial distribution of honeybees.

  9. The Maximum Likelihood Solution for Inclination-only Data

    NASA Astrophysics Data System (ADS)

    Arason, P.; Levi, S.

    2006-12-01

    The arithmetic means of inclination-only data are known to introduce a shallowing bias. Several methods have been proposed to estimate unbiased means of the inclination along with measures of the precision. Most of the inclination-only methods were designed to maximize the likelihood function of the marginal Fisher distribution. However, the exact analytical form of the maximum likelihood function is fairly complicated, and all these methods require various assumptions and approximations that are inappropriate for many data sets. For some steep and dispersed data sets, the estimates provided by these methods are significantly displaced from the peak of the likelihood function to systematically shallower inclinations. The problem in locating the maximum of the likelihood function is partly due to difficulties in accurately evaluating the function for all values of interest. This is because some elements of the log-likelihood function increase exponentially as precision parameters increase, leading to numerical instabilities. In this study we succeeded in analytically cancelling exponential elements from the likelihood function, and we are now able to calculate its value for any location in the parameter space and for any inclination-only data set, with full accuracy. Furthermore, we can now calculate the partial derivatives of the likelihood function with the desired accuracy. Locating the maximum likelihood without the assumptions required by previous methods is now straightforward. The information to separate the mean inclination from the precision parameter will be lost for very steep and dispersed data sets. It is worth noting that the likelihood function always has a maximum value. However, for some dispersed and steep data sets with few samples, the likelihood function takes its highest value on the boundary of the parameter space, i.e. at inclinations of +/- 90 degrees, but with relatively well defined dispersion.
Our simulations indicate that this occurs quite frequently for certain data sets, and relatively small perturbations in the data will drive the maxima to the boundary. We interpret this to indicate that, for such data sets, the information needed to separate the mean inclination and the precision parameter is permanently lost. To assess the reliability and accuracy of our method we generated a large number of random Fisher-distributed data sets and used seven methods to estimate the mean inclination and precision parameter. These comparisons are described by Levi and Arason at the 2006 AGU Fall meeting. The results of the various methods are very favourable to our new robust maximum likelihood method, which, on average, is the most reliable, and whose mean inclination estimates are the least biased toward shallow values. Further information on our inclination-only analysis can be obtained from: http://www.vedur.is/~arason/paleomag
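The numerical trick described in this abstract (analytically cancelling exponential elements so the log-likelihood stays finite for any precision parameter) can be sketched for the marginal Fisher inclination density f(I) ∝ (κ / (2 sinh κ)) cos I · exp(κ sin I sin I0) · I0(κ cos I cos I0). This is an illustrative reconstruction, not the authors' code; it relies on the exponentially scaled Bessel function i0e so that neither sinh κ nor I0 is ever evaluated directly:

```python
import numpy as np
from scipy.special import i0e  # exponentially scaled modified Bessel I0

def loglik_inclination(inc, inc0, kappa):
    """Log-likelihood of inclination-only data (radians) under the marginal
    Fisher distribution, with large exponentials cancelled analytically:
    log(2 sinh k) = k + log(1 - exp(-2k)) and log I0(x) = x + log(i0e(x)),
    so only the bounded combination k*(cos(inc - inc0) - 1) remains."""
    x = kappa * np.cos(inc) * np.cos(inc0)
    per_sample = (np.log(kappa)
                  - np.log1p(-np.exp(-2.0 * kappa))
                  + np.log(np.cos(inc))
                  + kappa * (np.cos(inc - inc0) - 1.0)  # cancelled exponentials
                  + np.log(i0e(x)))
    return per_sample.sum()
```

For moderate κ this agrees with the naive evaluation, but it remains finite for κ in the thousands, where sinh κ and I0 overflow double precision.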

  10. A parametric method for determining the number of signals in narrow-band direction finding

    NASA Astrophysics Data System (ADS)

    Wu, Qiang; Fuhrmann, Daniel R.

    1991-08-01

    A novel and more accurate method to determine the number of signals in the multisource direction finding problem is developed. The information-theoretic criteria of Yin and Krishnaiah (1988) are applied to a set of quantities which are evaluated from the log-likelihood function. Based on proven asymptotic properties of the maximum likelihood estimation, these quantities have the properties required by the criteria. Since the information-theoretic criteria use these quantities instead of the eigenvalues of the estimated correlation matrix, this approach possesses the advantage of not requiring a subjective threshold, and also provides higher performance than when eigenvalues are used. Simulation results are presented and compared to those obtained from the nonparametric method given by Wax and Kailath (1985).
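For contrast with the parametric approach above, the classical nonparametric eigenvalue-based criterion of Wax and Kailath (1985), against which the paper compares its results, can be sketched as follows (an illustrative MDL implementation, not the authors' code):

```python
import numpy as np

def mdl_num_signals(eigvals, n_snapshots):
    """Estimate the number of sources from the eigenvalues of the sample
    correlation matrix using the MDL criterion: the smallest p-k eigenvalues
    should be equal (noise floor), so the ratio of their geometric to
    arithmetic mean is penalised together with model complexity."""
    lam = np.sort(np.asarray(eigvals, dtype=float))[::-1]
    p = lam.size
    mdl = np.empty(p)
    for k in range(p):
        tail = lam[k:]                        # candidate noise eigenvalues
        geo = np.exp(np.mean(np.log(tail)))
        arith = np.mean(tail)
        mdl[k] = (-n_snapshots * (p - k) * np.log(geo / arith)
                  + 0.5 * k * (2 * p - k) * np.log(n_snapshots))
    return int(np.argmin(mdl))
```

The parametric method in the paper replaces the eigenvalues with quantities derived from the log-likelihood function, keeping the same threshold-free selection principle.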

  11. MAP Fault Localization Based on Wide Area Synchronous Phasor Measurement Information

    NASA Astrophysics Data System (ADS)

    Zhang, Yagang; Wang, Zengping

    2015-02-01

    In the research of complicated electrical engineering, the emergence of phasor measurement units (PMU) is a landmark event. The establishment and application of wide area measurement system (WAMS) in power system has made widespread and profound influence on the safe and stable operation of complicated power system. In this paper, taking full advantage of wide area synchronous phasor measurement information provided by PMUs, we have carried out precise fault localization based on the principles of maximum posteriori probability (MAP). Large numbers of simulation experiments have confirmed that the results of MAP fault localization are accurate and reliable. Even if there are interferences from white Gaussian stochastic noise, the results from MAP classification are also identical to the actual real situation.

  12. PSAW/MicroSWIS [Microminiature Surface Acoustic Wave (SAW) based Wireless Instrumentation System]

    NASA Technical Reports Server (NTRS)

    Heermann, Doug; Krug, Eric

    2004-01-01

    This Final Report for the PSAW/MicroSWIS Program is provided in compliance with contract number NAS3-01118. This report documents the overall progress of the program and presents project objectives, work carried out, and results obtained. The Program Conceptual Design Package stated the following objectives: To develop a sensor/transceiver network that can support networking operations within spacecraft with sufficient bandwidth so that (1) flight control data, (2) avionics data, (3) payload/experiment data, and (4) prognostic health monitoring sensory information can flow to appropriate locations at frequencies that contain the maximum amount of information content but require minimum interconnect and power: a very high speed, low power, programmable modulation, spread-spectrum radio sensor/transceiver.

  13. Agreement and reliability of pelvic floor measurements during rest and on maximum Valsalva maneuver using three-dimensional translabial ultrasound and virtual reality imaging.

    PubMed

    Speksnijder, L; Oom, D M J; Koning, A H J; Biesmeijer, C S; Steegers, E A P; Steensma, A B

    2016-08-01

    Imaging of the levator ani hiatus provides valuable information for the diagnosis and follow-up of patients with pelvic organ prolapse (POP). This study compared measurements of levator ani hiatal volume during rest and on maximum Valsalva, obtained using conventional three-dimensional (3D) translabial ultrasound and virtual reality imaging. Our objectives were to establish their agreement and reliability, and their relationship with prolapse symptoms and POP quantification (POP-Q) stage. One hundred women with an intact levator ani were selected from our tertiary clinic database. Information on clinical symptoms was obtained using standardized questionnaires. Ultrasound datasets were analyzed using a rendered volume with a slice thickness of 1.5 cm, at the level of minimal hiatal dimensions, during rest and on maximum Valsalva. The levator area (in cm(2) ) was measured and multiplied by 1.5 to obtain the levator ani hiatal volume (in cm(3) ) on conventional 3D ultrasound. Levator ani hiatal volume (in cm(3) ) was measured semi-automatically by virtual reality imaging using a segmentation algorithm. Twenty patients were chosen randomly to analyze intra- and interobserver agreement. The mean difference between levator hiatal volume measurements on 3D ultrasound and by virtual reality was 1.52 cm(3) (95% CI, 1.00-2.04 cm(3) ) at rest and 1.16 cm(3) (95% CI, 0.56-1.76 cm(3) ) during maximum Valsalva (P < 0.001). Both intra- and interobserver intraclass correlation coefficients were ≥ 0.96 for conventional 3D ultrasound and > 0.99 for virtual reality. Patients with prolapse symptoms or POP-Q Stage ≥ 2 had significantly larger hiatal measurements than those without symptoms or POP-Q Stage < 2. Levator ani hiatal volume at rest and on maximum Valsalva is significantly smaller when using virtual reality compared with conventional 3D ultrasound; however, this difference does not seem clinically important. Copyright © 2015 ISUOG. Published by John Wiley & Sons Ltd.
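The conventional 3D ultrasound volume described above is a simple product of the levator area on the rendered slice of minimal hiatal dimensions and the 1.5 cm slice thickness; a minimal sketch with hypothetical function names:

```python
def hiatal_volume_cm3(levator_area_cm2, slice_thickness_cm=1.5):
    """Levator ani hiatal volume: area (cm^2) measured in the plane of
    minimal hiatal dimensions multiplied by the rendered slice thickness
    (cm), as used for the conventional 3D ultrasound measurements."""
    return levator_area_cm2 * slice_thickness_cm
```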

  14. Developing Architectures and Technologies for an Evolvable NASA Space Communication Infrastructure

    NASA Technical Reports Server (NTRS)

    Bhasin, Kul; Hayden, Jeffrey

    2004-01-01

    Space communications architecture concepts play a key role in the development and deployment of NASA's future exploration and science missions. Once a mission is deployed, the communication link to the user needs to provide maximum information delivery and flexibility to handle the expected large and complex data sets and to enable direct interaction with the spacecraft and experiments. In human and robotic missions, communication systems need to offer maximum reliability with robust two-way links for software uploads and virtual interactions. Identifying the capabilities to cost effectively meet the demanding space communication needs of 21st century missions, proper formulation of the requirements for these missions, and identifying the early technology developments that will be needed can only be resolved with architecture design. This paper will describe the development of evolvable space communication architecture models and the technologies needed to support Earth sensor web and collaborative observation formation missions; robotic scientific missions for detailed investigation of planets, moons, and small bodies in the solar system; human missions for exploration of the Moon, Mars, Ganymede, Callisto, and asteroids; human settlements in space, on the Moon, and on Mars; and great in-space observatories for observing other star systems and the universe. The resulting architectures will enable the reliable, multipoint, high data rate capabilities needed on demand to provide continuous, maximum coverage of areas of concentrated activities, such as in the vicinity of outposts in-space, on the Moon or on Mars.

  15. Analysis of the hydrological safety of dams combining two numerical tools: Iber and DualSPHysics

    NASA Astrophysics Data System (ADS)

    González-Cao, J.; García-Feal, O.; Domínguez, J. M.; Crespo, A. J. C.; Gómez-Gesteira, M.

    2018-02-01

    The upgrade of the hydrological safety of dams is a critical issue to avoid failures that can dramatically affect people and assets. This paper shows a numerical methodology to analyse the safety of the Belesar dam (NW Spain) based on two different numerical codes. First, a mesh-based code named Iber, suited to deal with large 2-D domains, is used to simulate the impoundment. The initial conditions and the inlet provided to Iber correspond to the maximum water elevation and the maximum expected inflow to the impoundment defined in the technical specifications of the dam, which correspond to the most hazardous operating conditions of the dam. Iber provides information about the time needed for water to attain the crest of the dam when floodgates are closed. In addition, it also provides the velocity of discharge when gates are opened. Then, a mesh-free code named DualSPHysics, which is especially suited to deal with complex and violent 3-D flows, is used to reproduce the behaviour of one of the spillways of the dam starting from the results obtained with Iber, which are used as inlet conditions for DualSPHysics. The combined results of both models show that the left spillway can discharge the surplus of water associated with the maximum inflow to the reservoir, provided the spillway gates are opened before the dam is overtopped. In addition, the water depth measured on the spillway is considerably lower than the height of the lateral walls, preventing overtopping. Finally, velocities at different points of the spillway were in good agreement with theoretical values.

  16. Report: Total Maximum Daily Load Program Needs Better Data and Measures to Demonstrate Environmental Results

    EPA Pesticide Factsheets

    Report #2007-P-00036, September 19, 2007. EPA does not have comprehensive information on the outcomes of the Total Maximum Daily Load (TMDL) program nationwide, nor national data on TMDL implementation activities.

  17. Probabilistic Structural Analysis Methods (PSAM) for select space propulsion system structural components

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.

    1987-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  18. Probabilistic Structural Analysis Methods for select space propulsion system structural components (PSAM)

    NASA Technical Reports Server (NTRS)

    Cruse, T. A.; Burnside, O. H.; Wu, Y.-T.; Polch, E. Z.; Dias, J. B.

    1988-01-01

    The objective is the development of several modular structural analysis packages capable of predicting the probabilistic response distribution for key structural variables such as maximum stress, natural frequencies, transient response, etc. The structural analysis packages are to include stochastic modeling of loads, material properties, geometry (tolerances), and boundary conditions. The solution is to be in terms of the cumulative probability of exceedance distribution (CDF) and confidence bounds. Two methods of probability modeling are to be included as well as three types of structural models - probabilistic finite-element method (PFEM); probabilistic approximate analysis methods (PAAM); and probabilistic boundary element methods (PBEM). The purpose in doing probabilistic structural analysis is to provide the designer with a more realistic ability to assess the importance of uncertainty in the response of a high performance structure. Probabilistic Structural Analysis Method (PSAM) tools will estimate structural safety and reliability, while providing the engineer with information on the confidence that should be given to the predicted behavior. Perhaps most critically, the PSAM results will directly provide information on the sensitivity of the design response to those variables which are seen to be uncertain.

  19. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

    Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise then this maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
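The difference between maximum-likelihood and finite-temperature maximum-entropy decoding can be illustrated on a toy Ising model small enough to enumerate exactly: decoding each bit by the sign of its Boltzmann-averaged magnetization, rather than by the ground state alone, uses information from the excited states, as described above. This is a sketch of the idea only, not the annealer experiment:

```python
import itertools
import numpy as np

def boltzmann_bit_marginals(h, J, beta):
    """Finite-temperature (maximum-entropy) decoding for a small Ising model
    E(s) = -sum_i h_i s_i - sum_{i<j} J_ij s_i s_j, with s_i in {-1, +1} and
    J symmetric with zero diagonal.  Returns <s_i> under the Boltzmann
    distribution at inverse temperature beta; decode bit i by sign(<s_i>)."""
    n = len(h)
    states = np.array(list(itertools.product([-1, 1], repeat=n)))
    E = -(states @ h) - 0.5 * np.einsum('si,ij,sj->s', states, J, states)
    w = np.exp(-beta * (E - E.min()))   # shift energies for numerical stability
    p = w / w.sum()
    return states.T @ p
```

As beta grows the marginals approach the ground-state (maximum-likelihood) decision; at finite beta they implement the Boltzmann-weighted, maximum-entropy decision described in the abstract.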

  20. Finite-Element Modelling of the Acoustic Input Admittance of the Newborn Ear Canal and Middle Ear.

    PubMed

    Motallebzadeh, Hamid; Maftoon, Nima; Pitaro, Jacob; Funnell, W Robert J; Daniel, Sam J

    2017-02-01

    Admittance measurement is a promising tool for evaluating the status of the middle ear in newborns. However, the newborn ear is anatomically very different from the adult one, and the acoustic input admittance is different than in adults. To aid in understanding the differences, a finite-element model of the newborn ear canal and middle ear was developed and its behaviour was studied for frequencies up to 2000 Hz. Material properties were taken from previous measurements and estimates. The simulation results were within the range of clinical admittance measurements made in newborns. Sensitivity analyses of the material properties show that in the canal model, the maximum admittance and the frequency at which that maximum admittance occurs are affected mainly by the stiffness parameter; in the middle-ear model, the damping is as important as the stiffness in influencing the maximum admittance magnitude but its effect on the corresponding frequency is negligible. Scaling up the geometries increases the admittance magnitude and shifts the resonances to lower frequencies. The results suggest that admittance measurements can provide more information about the condition of the middle ear when made at multiple frequencies around its resonance.

  1. Bayesian logistic regression approaches to predict incorrect DRG assignment.

    PubMed

    Suleiman, Mani; Demirhan, Haydar; Boyd, Leanne; Girosi, Federico; Aksakalli, Vural

    2018-05-07

    Episodes of care involving similar diagnoses and treatments and requiring similar levels of resource utilisation are grouped to the same Diagnosis-Related Group (DRG). In jurisdictions which implement DRG based payment systems, DRGs are a major determinant of funding for inpatient care. Hence, service providers often dedicate auditing staff to the task of checking that episodes have been coded to the correct DRG. The use of statistical models to estimate an episode's probability of DRG error can significantly improve the efficiency of clinical coding audits. This study implements Bayesian logistic regression models with weakly informative prior distributions to estimate the likelihood that episodes require a DRG revision, comparing these models with each other and to classical maximum likelihood estimates. All Bayesian approaches had more stable model parameters than maximum likelihood. The best performing Bayesian model improved overall classification performance by 6% compared to maximum likelihood, and by 34% compared to random classification. We found that the original DRG, coder and the day of coding all have a significant effect on the likelihood of DRG error. Use of Bayesian approaches has improved model parameter stability and classification accuracy. This method has already led to improved audit efficiency in an operational capacity.
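A minimal sketch of the modelling idea: a zero-mean Gaussian prior on logistic-regression weights turns maximum likelihood into a MAP problem whose parameters stay finite even on separable data, one source of the parameter stability noted above. Function and variable names are illustrative, not from the study:

```python
import numpy as np
from scipy.optimize import minimize

def map_logistic(X, y, prior_sd=1.0):
    """MAP estimate of logistic-regression weights under independent
    N(0, prior_sd^2) priors (a weakly informative prior).  y is 0/1."""
    def neg_log_post(w):
        z = X @ w
        # Bernoulli log-likelihood with logit z, written stably:
        ll = np.sum(y * z - np.logaddexp(0.0, z))
        log_prior = -0.5 * np.sum(w ** 2) / prior_sd ** 2
        return -(ll + log_prior)
    w0 = np.zeros(X.shape[1])
    return minimize(neg_log_post, w0, method='BFGS').x
```

On perfectly separable data the pure maximum-likelihood estimate diverges, while the MAP estimate is finite, and a tighter prior (smaller prior_sd) shrinks it further toward zero.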

  2. Nonequilibrium-thermodynamics approach to open quantum systems

    NASA Astrophysics Data System (ADS)

    Semin, Vitalii; Petruccione, Francesco

    2014-11-01

    Open quantum systems are studied from the thermodynamical point of view unifying the principle of maximum informational entropy and the hypothesis of relaxation times hierarchy. The result of the unification is a non-Markovian and local-in-time master equation that provides a direct connection for dynamical and thermodynamical properties of open quantum systems. The power of the approach is illustrated by the application to the damped harmonic oscillator and the damped driven two-level system, resulting in analytical expressions for the non-Markovian and nonequilibrium entropy and inverse temperature.

  3. In situ spectroradiometric quantification of ERTS data. [Prescott and Phoenix, Arizona

    NASA Technical Reports Server (NTRS)

    Yost, E. F. (Principal Investigator)

    1975-01-01

    The author has identified the following significant results. Analyses of ERTS-1 photographic data were made to quantitatively relate ground reflectance measurements to photometric characteristics of the images. Digital image processing of photographic data resulted in a nomograph to correct for atmospheric effects over arid terrain. Optimum processing techniques to derive maximum geologic information from desert areas were established. Additive color techniques to provide quantitative measurements of surface water between different orbits were developed which were accepted as the standard flood mapping techniques using ERTS.

  4. Plasma parameters and structures of the X4 flare of 19 May 1984 as observed by SMM-XRP.

    NASA Astrophysics Data System (ADS)

    Schmelz, J. T.; Saba, J. L. R.; Strong, K. T.

    The eruption of a large flare on the east limb of the Sun was observed by the X-ray Polychromator (XRP) on board the Solar Maximum Mission (SMM) on 19 May 1984. The XRP Flat Crystal Spectrometer (FCS) made polychromatic soft X-ray images during the preflare, flare and postflare phases. The XRP Bent Crystal Spectrometer (BCS) provided information on the temperature and dynamics of the hot (Te > 8×10(6) K) coronal plasma from spectra integrated spatially over the whole region.

  5. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors

    PubMed Central

    van de Schoot, Rens; Broere, Joris J.; Perryck, Koen H.; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E.

    2015-01-01

    Background The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods First, we show how to specify prior distributions and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-) specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size the more the results rely on the prior specification. Conclusion We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis. PMID:25765534

  6. Analyzing small data sets using Bayesian estimation: the case of posttraumatic stress symptoms following mechanical ventilation in burn survivors.

    PubMed

    van de Schoot, Rens; Broere, Joris J; Perryck, Koen H; Zondervan-Zwijnenburg, Mariëlle; van Loey, Nancy E

    2015-01-01

    Background : The analysis of small data sets in longitudinal studies can lead to power issues and often suffers from biased parameter values. These issues can be solved by using Bayesian estimation in conjunction with informative prior distributions. By means of a simulation study and an empirical example concerning posttraumatic stress symptoms (PTSS) following mechanical ventilation in burn survivors, we demonstrate the advantages and potential pitfalls of using Bayesian estimation. Methods : First, we show how to specify prior distributions and by means of a sensitivity analysis we demonstrate how to check the exact influence of the prior (mis-) specification. Thereafter, we show by means of a simulation the situations in which the Bayesian approach outperforms the default maximum likelihood approach. Finally, we re-analyze empirical data on burn survivors which provided preliminary evidence of an aversive influence of a period of mechanical ventilation on the course of PTSS following burns. Results : Not surprisingly, maximum likelihood estimation showed insufficient coverage as well as power with very small samples. Only when Bayesian analysis was used in conjunction with informative priors did power increase to acceptable levels. As expected, we showed that the smaller the sample size the more the results rely on the prior specification. Conclusion : We show that two issues often encountered during analysis of small samples, power and biased parameters, can be solved by including prior information into Bayesian analysis. We argue that the use of informative priors should always be reported together with a sensitivity analysis.
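The small-sample behaviour discussed in these two records can be seen in the simplest conjugate case: a normal prior on a normal mean. With few observations the posterior leans on the prior; as n grows the data dominate. An illustrative sketch (it assumes a known data standard deviation, which is not from the study):

```python
import numpy as np

def posterior_mean(data, prior_mean, prior_sd, sigma=1.0):
    """Normal-normal conjugate posterior mean with known data SD `sigma`:
    a precision-weighted average of the prior mean and the sample total,
    so the prior's influence shrinks as the sample size n grows."""
    data = np.asarray(data, dtype=float)
    n = data.size
    posterior_precision = 1.0 / prior_sd ** 2 + n / sigma ** 2
    return ((prior_mean / prior_sd ** 2 + data.sum() / sigma ** 2)
            / posterior_precision)
```

With one observation of 5.0 and a standard-normal prior centred at 0, the posterior mean sits halfway at 2.5; with a hundred such observations it moves to about 4.95, illustrating why a sensitivity analysis of the prior matters most at small n.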

  7. Examining the Prey Mass of Terrestrial and Aquatic Carnivorous Mammals: Minimum, Maximum and Range

    PubMed Central

    Tucker, Marlee A.; Rogers, Tracey L.

    2014-01-01

    Predator-prey body mass relationships are a vital part of food webs across ecosystems and provide key information for predicting the susceptibility of carnivore populations to extinction. Despite this, there has been limited research on the minimum and maximum prey size of mammalian carnivores. Without information on large-scale patterns of prey mass, we limit our understanding of predation pressure, trophic cascades and susceptibility of carnivores to decreasing prey populations. The majority of studies that examine predator-prey body mass relationships focus on either a single or a subset of mammalian species, which limits the strength of our models as well as their broader application. We examine the relationship between predator body mass and the minimum, maximum and range of their prey's body mass across 108 mammalian carnivores, from weasels to baleen whales (Carnivora and Cetacea). We test whether mammals show a positive relationship between prey and predator body mass, as in reptiles and birds, as well as examine how environment (aquatic and terrestrial) and phylogenetic relatedness play a role in this relationship. We found that phylogenetic relatedness is a strong driver of predator-prey mass patterns in carnivorous mammals and accounts for a higher proportion of variance compared with the biological drivers of body mass and environment. We show a positive predator-prey body mass pattern for terrestrial mammals as found in reptiles and birds, but no relationship for aquatic mammals. Our results will benefit our understanding of trophic interactions, the susceptibility of carnivores to population declines and the role of carnivores within ecosystems. PMID:25162695

  8. Examining the prey mass of terrestrial and aquatic carnivorous mammals: minimum, maximum and range.

    PubMed

    Tucker, Marlee A; Rogers, Tracey L

    2014-01-01

    Predator-prey body mass relationships are a vital part of food webs across ecosystems and provide key information for predicting the susceptibility of carnivore populations to extinction. Despite this, there has been limited research on the minimum and maximum prey size of mammalian carnivores. Without information on large-scale patterns of prey mass, we limit our understanding of predation pressure, trophic cascades and susceptibility of carnivores to decreasing prey populations. The majority of studies that examine predator-prey body mass relationships focus on either a single or a subset of mammalian species, which limits the strength of our models as well as their broader application. We examine the relationship between predator body mass and the minimum, maximum and range of their prey's body mass across 108 mammalian carnivores, from weasels to baleen whales (Carnivora and Cetacea). We test whether mammals show a positive relationship between prey and predator body mass, as in reptiles and birds, as well as examine how environment (aquatic and terrestrial) and phylogenetic relatedness play a role in this relationship. We found that phylogenetic relatedness is a strong driver of predator-prey mass patterns in carnivorous mammals and accounts for a higher proportion of variance compared with the biological drivers of body mass and environment. We show a positive predator-prey body mass pattern for terrestrial mammals as found in reptiles and birds, but no relationship for aquatic mammals. Our results will benefit our understanding of trophic interactions, the susceptibility of carnivores to population declines and the role of carnivores within ecosystems.

  9. Reconstruction of Absorbed Doses to Fibroglandular Tissue of the Breast of Women undergoing Mammography (1960 to the Present)

    PubMed Central

    Thierry-Chef, Isabelle; Simon, Steven L.; Weinstock, Robert M.; Kwon, Deukwoo; Linet, Martha S.

    2013-01-01

    The assessment of potential benefits versus harms from mammographic examinations as described in the controversial breast cancer screening recommendations of the U.S. Preventive Task Force included limited consideration of absorbed dose to the fibroglandular tissue of the breast (glandular tissue dose), the tissue at risk for breast cancer. Epidemiological studies on cancer risks associated with diagnostic radiological examinations often lack accurate information on glandular tissue dose, and there is a clear need for better estimates of these doses. Our objective was to develop a quantitative summary of glandular tissue doses from mammography by considering sources of variation over time in key parameters including imaging protocols, x-ray target materials, voltage, filtration, incident air kerma, compressed breast thickness, and breast composition. We estimated the minimum, maximum, and mean values for glandular tissue dose for populations of exposed women within 5-year periods from 1960 to the present, with the minimum to maximum range likely including 90% to 95% of the entirety of the dose range from mammography in North America and Europe. Glandular tissue dose from a single view in mammography is presently about 2 mGy, about one-sixth the dose in the 1960s. The ratio of our estimates of maximum to minimum glandular tissue doses for average-size breasts was about 100 in the 1960s compared to a ratio of about 5 in recent years. Findings from our analysis provide quantitative information on glandular tissue doses from mammographic examinations which can be used in epidemiologic studies of breast cancer. PMID:21988547

  10. A fresh look at the Last Glacial Maximum using Paleoclimate Data Assimilation

    NASA Astrophysics Data System (ADS)

    Malevich, S. B.; Tierney, J. E.; Hakim, G. J.; Tardif, R.

    2017-12-01

    Quantifying climate conditions during the Last Glacial Maximum (~21 ka) can help us to understand climate responses to forcing and climate states that are poorly represented in the instrumental record. Paleoclimate proxies may be used to estimate these climate conditions, but proxies are sparsely distributed and possess uncertainties from environmental and biogeochemical processes. Alternatively, climate model simulations provide a full-field view, but may predict unrealistic climate states or states not faithful to proxy records. Here, we use data assimilation - combining climate proxy records with a theoretical understanding from climate models - to produce field reconstructions of the LGM that leverage the information from both data and models. To date, data assimilation has mainly been used to produce reconstructions of climate fields through the last millennium. We expand this approach in order to produce climate fields for the Last Glacial Maximum using an ensemble Kalman filter assimilation. Ensemble samples were formed from the output of multiple models, including CCSM3, CESM2.1, and HadCM3. These model simulations are combined with marine sediment proxies for upper ocean temperature (TEX86, UK'37, Mg/Ca and δ18O of foraminifera), utilizing forward models based on a newly developed suite of Bayesian proxy system models. We also incorporate age model and radiocarbon reservoir uncertainty into our reconstructions using Bayesian age modeling software. The resulting fields show familiar patterns based on comparison with previous proxy-based reconstructions, but additionally reveal novel patterns of large-scale shifts in ocean-atmosphere dynamics, as the surface temperature data inform upon atmospheric circulation and precipitation patterns.
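
The ensemble Kalman filter update at the heart of such a reconstruction can be sketched for a single directly observed scalar (say, sea-surface temperature at one site). This is a perturbed-observation EnKF reduced to one dimension; the prior spread, observation value, and error below are illustrative assumptions, not values from the study:

```python
import random

def enkf_update(ensemble, y_obs, obs_err_sd, rng):
    """Perturbed-observation EnKF update for a directly observed scalar state."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    r = obs_err_sd ** 2
    k = var / (var + r)  # Kalman gain: weight given to the observation
    # Each member is nudged toward its own perturbed copy of the observation.
    return [x + k * (y_obs + rng.gauss(0, obs_err_sd) - x) for x in ensemble]

rng = random.Random(42)
prior = [rng.gauss(18.0, 2.0) for _ in range(500)]  # hypothetical model prior, degC
posterior = enkf_update(prior, 15.0, 1.0, rng)      # hypothetical proxy SST of 15 degC

pm = sum(prior) / len(prior)
am = sum(posterior) / len(posterior)
print(pm, am)  # analysis mean lies between prior mean and observation
```

The analysis mean moves toward the proxy value in proportion to the gain, and the ensemble spread shrinks, which is how proxy information constrains the model fields.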

  11. Tropical Africa: Land use, biomass, and carbon estimates for 1980

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, S.; Gaston, G.; Daniels, R.C.

    1996-06-01

    This document describes the contents of a digital database containing maximum potential aboveground biomass, land use, and estimated biomass and carbon data for 1980 and describes a methodology that may be used to extend this data set to 1990 and beyond based on population and land cover data. The biomass data and carbon estimates are for woody vegetation in Tropical Africa. These data were collected to reduce the uncertainty associated with the possible magnitude of historical releases of carbon from land use change. Tropical Africa is defined here as encompassing 22.7 × 10⁶ km² of the earth's land surface and includes those countries that for the most part are located in Tropical Africa. Countries bordering the Mediterranean Sea and in southern Africa (i.e., Egypt, Libya, Tunisia, Algeria, Morocco, South Africa, Lesotho, Swaziland, and Western Sahara) have maximum potential biomass and land cover information but do not have biomass or carbon estimates. The database was developed using the GRID module in the ARC/INFO™ geographic information system. Source data were obtained from the Food and Agriculture Organization (FAO), the U.S. National Geophysical Data Center, and a limited number of biomass-carbon density case studies. These data were used to derive the maximum potential and actual (ca. 1980) aboveground biomass-carbon values at regional and country levels. The land-use data provided were derived from a vegetation map originally produced for the FAO by the International Institute of Vegetation Mapping, Toulouse, France.

  12. Developability assessment of clinical drug products with maximum absorbable doses.

    PubMed

    Ding, Xuan; Rose, John P; Van Gelder, Jan

    2012-05-10

    Maximum absorbable dose refers to the maximum amount of an orally administered drug that can be absorbed in the gastrointestinal tract. Maximum absorbable dose, or D(abs), has proved to be an important parameter for quantifying the absorption potential of drug candidates. The purpose of this work is to validate the use of D(abs) in a developability assessment context, and to establish an appropriate protocol and interpretation criteria for this application. Three methods for calculating D(abs) were compared by assessing how well the methods predicted the absorption limit for a set of real clinical candidates. D(abs) was calculated for these clinical candidates by means of a simple equation and two computer simulation programs, GastroPlus and a program developed at Eli Lilly and Company. Results from single dose escalation studies in Phase I clinical trials were analyzed to identify the maximum absorbable doses for these compounds. Compared to the clinical results, the equation and both simulation programs provide conservative estimates of D(abs), but in general the D(abs) values from the computer simulations are more accurate, which suggests an advantage for the simulations in developability assessment. Computer simulations also revealed the complex behavior associated with absorption saturation and suggested in most cases that the D(abs) limit is not likely to be reached in a typical clinical dose range. On the basis of the validation findings, an approach is proposed for assessing absorption potential, and best practices are discussed for the use of D(abs) estimates to inform clinical formulation development strategies. Copyright © 2012 Elsevier B.V. All rights reserved.
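
The "simple equation" contrasted with the simulations above is commonly written as D(abs) = S × Ka × SIV × SITT (solubility × absorption rate constant × small-intestinal volume × small-intestinal transit time). A sketch with illustrative inputs; the 250 mL volume and 270 min transit time are conventional defaults from the literature, and the solubility and Ka values are hypothetical, not taken from this study:

```python
def max_absorbable_dose(solubility_mg_per_ml, ka_per_min,
                        siv_ml=250.0, sitt_min=270.0):
    """Simple-equation estimate: D(abs) = S * Ka * SIV * SITT."""
    return solubility_mg_per_ml * ka_per_min * siv_ml * sitt_min

# Hypothetical candidate: S = 0.05 mg/mL, Ka = 0.03 min^-1.
dabs = max_absorbable_dose(0.05, 0.03)
print(f"{dabs:.2f} mg")  # 101.25 mg
```

If the intended clinical dose exceeds this estimate, solubility- or permeability-limited absorption becomes a developability risk.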

  13. Psychophysically determined forces of dynamic pushing for female industrial workers: Comparison of two apparatuses.

    PubMed

    Ciriello, Vincent M; Maikala, Rammohan V; Dempsey, Patrick G; O'Brien, Niall V

    2010-01-01

    Using psychophysics, the maximum acceptable forces for pushing have been previously developed using a magnetic particle brake (MPB) treadmill at the Liberty Mutual Research Institute for Safety. The objective of this study was to investigate the reproducibility of maximum acceptable initial and sustained forces while performing a pushing task at a frequency of 1 min⁻¹ both on a MPB treadmill and on a high-inertia pushcart. This is important because our pushing guidelines are used extensively as an ergonomic redesign strategy and we would like the information to be as applicable as possible to cart pushing. On two separate days, nineteen female industrial workers performed a 40-min MPB treadmill pushing task and a 2-hr pushcart task, in the context of a larger experiment. During pushing, the subjects were asked to select a workload they could sustain for 8 h without "straining themselves or without becoming unusually tired, weakened, overheated or out of breath." The results demonstrated that maximum acceptable initial and sustained forces of pushing determined on the high-inertia pushcart were 0.8% and 2.5% lower than on the MPB treadmill. The results also show that the maximum acceptable sustained force of the MPB treadmill task was 0.5% higher than the maximum acceptable sustained force of Snook and Ciriello (1991). Overall, the findings confirm that the existing pushing data developed by the Liberty Mutual Research Institute for Safety still provide an accurate estimate of maximum acceptable forces for the selected combination of distance and frequency of push for female industrial workers.

  14. Predicting punching acceleration from selected strength and power variables in elite karate athletes: a multiple regression analysis.

    PubMed

    Loturco, Irineu; Artioli, Guilherme Giannini; Kobal, Ronaldo; Gil, Saulo; Franchini, Emerson

    2014-07-01

    This study investigated the relationship between punching acceleration and selected strength and power variables in 19 professional karate athletes from the Brazilian National Team (9 men and 10 women; age, 23 ± 3 years; height, 1.71 ± 0.09 m; and body mass [BM], 67.34 ± 13.44 kg). Punching acceleration was assessed under 4 different conditions in a randomized order: (a) fixed distance aiming to attain maximum speed (FS), (b) fixed distance aiming to attain maximum impact (FI), (c) self-selected distance aiming to attain maximum speed, and (d) self-selected distance aiming to attain maximum impact. The selected strength and power variables were as follows: maximal dynamic strength in bench press and squat-machine, squat and countermovement jump height, mean propulsive power in bench throw and jump squat, and mean propulsive velocity in jump squat with 40% of BM. Upper- and lower-body power and maximal dynamic strength variables were positively correlated to punch acceleration in all conditions. Multiple regression analysis also revealed predictive variables: relative mean propulsive power in squat jump (W·kg⁻¹) and maximal dynamic strength (1 repetition maximum) in both bench press and squat-machine exercises. An impact-oriented instruction and a self-selected distance to start the movement seem to be crucial to reach the highest acceleration during punching execution. This investigation, while demonstrating strong correlations between punching acceleration and strength-power variables, also provides important information for coaches, especially for designing better training strategies to improve punching speed.

  15. Combat cueing

    NASA Astrophysics Data System (ADS)

    Kachejian, Kerry C.; Vujcic, Doug

    1998-08-01

    The combat cueing (CBT-Q) research effort will develop and demonstrate a portable tactical information system that will enhance the effectiveness of small unit military operations by providing real-time target cueing information to individual warfighters and teams. CBT-Q consists of a network of portable radio frequency (RF) 'modules' and is controlled by a body-worn 'user station' utilizing a head-mounted display. On the battlefield, CBT-Q modules will detect an enemy transmitter and instantly provide the warfighter with the emitter's location. During the 'fog of battle', CBT-Q would tell the warfighter, 'Look here, right now,' giving individuals visibility into the RF spectrum, resulting in faster target engagement times, increased survivability, and a reduced potential for fratricide. CBT-Q technology can support both mounted and dismounted tactical forces involved in land, sea and air warfighting operations. The CBT-Q system combines robust geolocation and signal sorting algorithms with hardware and software modularity to offer maximum utility to the warfighter. A single CBT-Q module can provide threat RF detection. Three networked CBT-Q modules can provide emitter positions using a time difference of arrival (TDOA) technique. The TDOA approach relies on timing and positioning data derived from the global positioning system. The information will be displayed on a variety of displays, including a flat-panel head-mounted display. The end result of the program will be the demonstration of the system with US Army Scouts in an operational environment.
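
The TDOA idea can be sketched as a search for the point whose range differences to three receivers match the measured delay differences. The geometry and the normalized propagation speed (so delays equal range differences) are illustrative assumptions, not CBT-Q's actual algorithm:

```python
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tdoa_locate(receivers, tdoas, step=1.0, extent=100.0):
    """Brute-force grid search for the point whose range differences
    (relative to receiver 0) best match the measured TDOAs."""
    best, best_err = None, float("inf")
    steps = int(extent / step) + 1
    for iy in range(steps):
        for ix in range(steps):
            p = (ix * step, iy * step)
            d0 = dist(p, receivers[0])
            err = sum((dist(p, r) - d0 - t) ** 2
                      for r, t in zip(receivers[1:], tdoas))
            if err < best_err:
                best, best_err = p, err
    return best

# Hypothetical geometry in normalized units (propagation speed = 1).
rx = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]
emitter = (30.0, 40.0)
d0 = dist(emitter, rx[0])
tdoas = [dist(emitter, r) - d0 for r in rx[1:]]
print(tdoa_locate(rx, tdoas))  # recovers (30.0, 40.0)
```

Each TDOA constrains the emitter to a hyperbola; with three receivers the two hyperbolas intersect at the emitter, which is why three networked modules suffice for a position fix.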

  16. Quantum information density scaling and qubit operation time constraints of CMOS silicon-based quantum computer architectures

    NASA Astrophysics Data System (ADS)

    Rotta, Davide; Sebastiano, Fabio; Charbon, Edoardo; Prati, Enrico

    2017-06-01

    Even the quantum simulation of an apparently simple molecule such as Fe₂S₂ requires a considerable number of qubits, of the order of 10⁶, while more complex molecules such as alanine (C₃H₇NO₂) require about a hundred times more. In order to assess such a multimillion scale of identical qubits and control lines, the silicon platform seems to be one of the most indicated routes as it naturally provides, together with qubit functionalities, the capability of nanometric, serial, and industrial-quality fabrication. The scaling trend of microelectronic devices predicting that computing power would double every 2 years, known as Moore's law, according to the new slope set after the 32-nm node of 2009, suggests that the technology roadmap will achieve the 3-nm manufacturability limit proposed by Kelly around 2020. Today, circuital quantum information processing architectures are predicted to take advantage of the scalability ensured by silicon technology. However, the maximum amount of quantum information per unit surface that can be stored in silicon-based qubits and the consequent space constraints on qubit operations have never been addressed so far. This represents one of the key parameters toward the implementation of quantum error correction for fault-tolerant quantum information processing and its dependence on the features of the technology node. The maximum quantum information per unit surface virtually storable and controllable in the compact exchange-only silicon double quantum dot qubit architecture is expressed as a function of the complementary metal-oxide-semiconductor (CMOS) technology node, and the size scale optimizing both physical qubit operation time and quantum error correction requirements is assessed by reviewing the physical and technological constraints. According to the requirements imposed by the quantum error correction method and the constraints given by the typical strength of the exchange coupling, we determine the workable operation frequency range of a silicon CMOS quantum processor to be between 1 and 100 GHz. Such a constraint limits the feasibility of fault-tolerant quantum information processing with CMOS technology only to the most advanced nodes. The compatibility with classical CMOS control circuitry is discussed, focusing on the cryogenic CMOS operation required to bring the classical controller as close as possible to the quantum processor and to enable interfacing thousands of qubits on the same chip via time-division, frequency-division, and space-division multiplexing. The operation time range prospected for cryogenic control electronics is found to be compatible with the operation time expected for qubits. By combining the forecast of the development of scaled technology nodes with operation time and classical circuitry constraints, we derive a maximum quantum information density for logical qubits of 2.8 and 4 Mqb/cm² for the 10- and 7-nm technology nodes, respectively, for the Steane code. The density is one and two orders of magnitude less for surface codes and for concatenated codes, respectively. Such values provide a benchmark for the development of fault-tolerant quantum algorithms by circuital quantum information based on silicon platforms and a guideline for other technologies in general.

  17. A medical application integrating remote 3D visualization tools to access picture archiving and communication system on mobile devices.

    PubMed

    He, Longjun; Ming, Xing; Liu, Qian

    2014-04-01

    With computing capability and display size growing, the mobile device has been used as a tool to help clinicians view patient information and medical images anywhere and anytime. However, for direct interactive 3D visualization, which plays an important role in radiological diagnosis, the mobile device cannot provide a satisfactory quality of experience for radiologists. This paper developed a medical system that can get medical images from the picture archiving and communication system (PACS) on the mobile device over the wireless network. In the proposed application, the mobile device got patient information and medical images through a proxy server connected to the PACS server. Meanwhile, the proxy server integrated a range of 3D visualization techniques, including maximum intensity projection, multi-planar reconstruction and direct volume rendering, to provide shape, brightness, depth and location information generated from the original sectional images for radiologists. Furthermore, an algorithm that changes remote render parameters automatically to adapt to the network status was employed to improve the quality of experience. Finally, performance issues regarding the remote 3D visualization of the medical images over the wireless network of the proposed application were also discussed. The results demonstrated that this proposed medical application could provide a smooth interactive experience in the WLAN and 3G networks.
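
Of the three rendering techniques listed, maximum intensity projection is the simplest to sketch: each output pixel keeps the brightest voxel along its ray through the volume. The toy volume below is illustrative; the proxy server's actual renderer is not described at this level of detail:

```python
def maximum_intensity_projection(volume):
    """Project a volume (list of z-slices, each a 2-D list of intensities)
    along z by keeping the brightest voxel on each ray."""
    depth, rows, cols = len(volume), len(volume[0]), len(volume[0][0])
    return [[max(volume[z][y][x] for z in range(depth))
             for x in range(cols)]
            for y in range(rows)]

# Toy 2x2x2 stack of intensities standing in for sectional images.
vol = [
    [[10, 200], [30, 40]],
    [[50, 60], [70, 255]],
]
print(maximum_intensity_projection(vol))  # [[50, 200], [70, 255]]
```

MIP is popular for vascular imaging because high-intensity contrast-filled vessels survive the projection regardless of depth.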

  18. Habitat requirements and burrowing depths of rodents in relation to shallow waste burial sites

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gano, K.A.; States, J.B.

    1982-05-01

    The purpose of this paper is to provide a review of the literature and summarize information on factors affecting habitat selection and maximum recorded burrowing depths for representative small mammals that we consider most likely to inhabit waste burial sites in arid and semi-arid regions of the West. The information is intended for waste management designers who need to know what to expect from small mammals that may be present at a particular site. Waste repositories could be designed to exclude the deep burrowing rodents of a region by creating an unattractive habitat over the waste. Summaries are given for habitat requirements of each group along with generalized modifications that could be employed to deter habitation. Representatives from the major groups considered to be deep burrowers are discussed. Further, detailed information about a particular species can be obtained from the references cited.

  19. Regression analysis of informative current status data with the additive hazards model.

    PubMed

    Zhao, Shishun; Hu, Tao; Ma, Ling; Wang, Peijie; Sun, Jianguo

    2015-04-01

    This paper discusses regression analysis of current status failure time data arising from the additive hazards model in the presence of informative censoring. Many methods have been developed for regression analysis of current status data under various regression models if the censoring is noninformative, and there also exists a large literature on parametric analysis of informative current status data in the context of tumorigenicity experiments. In this paper, a semiparametric maximum likelihood estimation procedure is presented; in this method, a copula model is employed to describe the relationship between the failure time of interest and the censoring time. Furthermore, I-splines are used to approximate the nonparametric functions involved, and the asymptotic consistency and normality of the proposed estimators are established. A simulation study is conducted and indicates that the proposed approach works well for practical situations. An illustrative example is also provided.

  20. Trellises and Trellis-Based Decoding Algorithms for Linear Block Codes. Part 3; The MAP and Related Decoding Algorithms

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Fossorier, Marc

    1998-01-01

    In a coded communication system with equiprobable signaling, MLD minimizes the word error probability and delivers the most likely codeword associated with the corresponding received sequence. This decoding has two drawbacks. First, minimization of the word error probability is not equivalent to minimization of the bit error probability. Therefore, MLD becomes suboptimum with respect to the bit error probability. Second, MLD delivers a hard-decision estimate of the received sequence, so that information is lost between the input and output of the ML decoder. This information is important in coded schemes where the decoded sequence is further processed, such as concatenated coding schemes and multi-stage and iterative decoding schemes. In this chapter, we first present a decoding algorithm which both minimizes the bit error probability and provides the corresponding soft information at the output of the decoder. This algorithm is referred to as the MAP (maximum a posteriori probability) decoding algorithm.
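
The word-optimal versus bit-optimal distinction above can be made concrete with a toy single-parity-check code over a binary symmetric channel: symbol-by-symbol MAP decisions come from per-bit posteriors summed over all codewords, and they need not form a codeword at all. A minimal sketch (this brute-force sum over codewords illustrates the MAP criterion, not the efficient trellis-based algorithm of the chapter):

```python
from itertools import product

def map_bit_decisions(codewords, received, p):
    """Symbol-by-symbol MAP decoding over a binary symmetric channel with
    crossover probability p, assuming equiprobable codewords."""
    def likelihood(c):
        d = sum(ci != ri for ci, ri in zip(c, received))  # Hamming distance
        return p ** d * (1 - p) ** (len(c) - d)
    weights = [likelihood(c) for c in codewords]
    total = sum(weights)
    bits = []
    for i in range(len(received)):
        # Posterior probability that bit i equals 1, summed over codewords.
        p1 = sum(w for c, w in zip(codewords, weights) if c[i] == 1) / total
        bits.append(1 if p1 > 0.5 else 0)
    return bits

# Length-3 single-parity-check code: the four even-weight words.
code = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0]
print(map_bit_decisions(code, (1, 1, 1), 0.1))  # [1, 1, 1] -- not a codeword
```

Here each bit's posterior favors 1, so the bit-optimal output is (1, 1, 1) even though that word violates the parity check, while an ML decoder would be forced to pick one of the three equally likely codewords at distance 1.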

  1. Position feedback system for volume holographic storage media

    DOEpatents

    Hays, Nathan J [San Francisco, CA; Henson, James A [Morgan Hill, CA; Carpenter, Christopher M [Sunnyvale, CA; Akin, Jr William R. [Morgan Hill, CA; Ehrlich, Richard M [Saratoga, CA; Beazley, Lance D [San Jose, CA

    1998-07-07

    A method of holographic recording in a photorefractive medium wherein stored holograms may be retrieved with maximum signal-to-noise ratio (SNR) is disclosed. A plurality of servo blocks containing position feedback information is recorded in the crystal and made non-erasable by heating the crystal. The servo blocks are recorded at specific increments, either angular or frequency, depending on whether wavelength or angular multiplexing is applied, and each servo block is defined by one of five patterns. Data pages are then recorded at positions or wavelengths enabling each data page to be subsequently reconstructed with servo patterns which provide position feedback information. The method of recording data pages and servo blocks is consistent with conventional practices. In addition, the recording system also includes components (e.g. a voice coil motor) which respond to position feedback information and adjust the angular position of the reference angle of a reference beam to maximize SNR by reducing crosstalk, thereby improving storage capacity.

  2. Environmental Performance Information Use by Conservation Agency Staff

    NASA Astrophysics Data System (ADS)

    Wardropper, Chloe Bradley

    2018-04-01

    Performance-based conservation has long been recognized as crucial to improving program effectiveness, particularly when environmental conditions are dynamic. Yet few studies have investigated the use of environmental performance information by staff of conservation organizations. This article identifies attitudinal, policy and organizational factors influencing the use of a type of performance information—water quality information—by Soil and Water Conservation District staff in the Upper Mississippi River Basin region. An online survey (n = 277) revealed a number of important variables associated with greater information use. Variables included employees' prosocial motivation, or the belief that they helped people and natural resources through their job, the perceived trustworthiness of data, the presence of a U.S. Clean Water Act Total Maximum Daily Load standard designation, and staff discretion to prioritize programs locally. Conservation programs that retain motivated staff and provide them the resources and flexibility to plan and evaluate their work with environmental data may increase conservation effectiveness under changing conditions.

  3. 77 FR 55175 - Civil Penalties

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-07

    ... [Docket No. NHTSA-2012-0131; Notice 1] RIN 2127-AL16 Civil Penalties AGENCY: National Highway Traffic... proposes to increase the maximum civil penalty amounts for violations of motor vehicle safety requirements... and consumer information provisions. Specifically, this proposes increases in maximum civil penalty...

  4. Restoration of color in a remote sensing image and its quality evaluation

    NASA Astrophysics Data System (ADS)

    Zhang, Zuxun; Li, Zhijiang; Zhang, Jianqing; Wang, Zhihe

    2003-09-01

    This paper is focused on the restoration of color remote sensing images (including airborne photos). A complete approach is recommended. It proposes that two main aspects should be addressed in restoring a remote sensing image: the restoration of spatial information and the restoration of photometric information. In this proposal, the restoration of spatial information can be performed by using the modulation transfer function (MTF) as the degradation function, in which the MTF is obtained by measuring the edge curve of the original image. The restoration of photometric information can be performed by an improved local maximum entropy algorithm. Furthermore, a valid approach to processing color remote sensing images is recommended: the color remote sensing image is split into three monochromatic images corresponding to three visible light bands, and the three images are synthesized after being processed separately with psychological color vision restriction. Finally, three novel evaluation variables are derived based on image restoration to evaluate the restoration quality in terms of spatial restoration quality and photometric restoration quality. An evaluation is provided at last.

  5. Information Entropy Production of Maximum Entropy Markov Chains from Spike Trains

    NASA Astrophysics Data System (ADS)

    Cofré, Rodrigo; Maldonado, Cesar

    2018-01-01

    We consider the maximum entropy Markov chain inference approach to characterize the collective statistics of neuronal spike trains, focusing on the statistical properties of the inferred model. We review large deviations techniques useful in this context to describe properties of accuracy and convergence in terms of sampling size. We use these results to study the statistical fluctuation of correlations, distinguishability and irreversibility of maximum entropy Markov chains. We illustrate these applications using simple examples where the large deviation rate function is explicitly obtained for maximum entropy models of relevance in this field.
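
The irreversibility of a Markov chain mentioned above is measured by its entropy production rate, which vanishes exactly when the chain satisfies detailed balance. A small self-contained sketch (the chains below are illustrative, not inferred from spike trains):

```python
import math

def stationary(P, iters=2000):
    """Stationary distribution of a row-stochastic matrix by power iteration."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def entropy_production_rate(P):
    """sigma = sum_ij pi_i P_ij ln( (pi_i P_ij) / (pi_j P_ji) );
    zero iff the chain satisfies detailed balance (is reversible)."""
    pi = stationary(P)
    n = len(P)
    s = 0.0
    for i in range(n):
        for j in range(n):
            if P[i][j] > 0 and P[j][i] > 0:
                fwd = pi[i] * P[i][j]
                bwd = pi[j] * P[j][i]
                s += fwd * math.log(fwd / bwd)
    return s

# A reversible birth-death chain versus a biased ring that carries a
# steady probability current (hence is irreversible).
reversible = [[0.5, 0.5, 0.0], [0.25, 0.5, 0.25], [0.0, 0.5, 0.5]]
biased_ring = [[0.1, 0.8, 0.1], [0.1, 0.1, 0.8], [0.8, 0.1, 0.1]]
print(entropy_production_rate(reversible), entropy_production_rate(biased_ring))
```

The reversible chain yields a rate of essentially zero, while the biased ring produces entropy at a strictly positive rate, which is the irreversibility signature the abstract studies for inferred maximum entropy chains.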

  6. Maximum Power Training and Plyometrics for Cross-Country Running.

    ERIC Educational Resources Information Center

    Ebben, William P.

    2001-01-01

    Provides a rationale for maximum power training and plyometrics as conditioning strategies for cross-country runners, examining: an evaluation of training methods (strength training and maximum power training and plyometrics); biomechanic and velocity specificity (role in preventing injury); and practical application of maximum power training and…

  7. Geographic, geologic, and hydrologic summaries of intermontane basins of the northern Rocky Mountains, Montana

    USGS Publications Warehouse

    Kendy, Eloise; Tresch, R.E.

    1996-01-01

    This report combines a literature review with new information to provide summaries of the geography, geology, and hydrology of each of 32 intermontane basins in western Montana. The summary of each intermontane basin includes concise descriptions of topography, areal extent, altitude, climate, 1990 population, land and water use, geology, surface water, aquifer hydraulic characteristics, ground-water flow, and ground-water quality. If present, geothermal features are described. Average annual and monthly temperature and precipitation are reported from one National Weather Service station in each basin. Streamflow data, including the drainage area, period of record, and average, minimum, and maximum historical streamflow, are reported for all active and discontinued USGS streamflow-gaging stations in each basin. Monitoring-well data, including the well depth, aquifer, period of record, and minimum and maximum historical water levels, are reported for all long-term USGS monitoring wells in each basin. Brief descriptions of geologic, geophysical, and potentiometric- surface maps available for each basin also are included. The summary for each basin also includes a bibliography of hydrogeologic literature. When used alone or in conjunction with regional RASA reports, this report provides a practical starting point for site-specific hydrogeologic investigations.

  8. Crowd macro state detection using entropy model

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Yuan, Mengqi; Su, Guofeng; Chen, Tao

    2015-08-01

    In the crowd security research area, a primary concern is to identify the macro state of crowd behaviors to prevent disasters and to supervise the crowd behaviors. The entropy is used to describe the macro state of a self-organization system in physics. The entropy change indicates the system macro state change. This paper provides a method to construct crowd behavior microstates and the corresponding probability distribution using the individuals' velocity information (magnitude and direction). Then an entropy model was built to describe the crowd behavior macro state. Simulation experiments and video detection experiments were conducted. It was verified that in the disordered state, the crowd behavior entropy is close to the theoretical maximum entropy, while in the ordered state, the entropy is much lower than half of the theoretical maximum entropy. The crowd behavior macro state sudden change leads to the entropy change. The proposed entropy model is more applicable than the order parameter model in crowd behavior detection. By recognizing the entropy mutation, it is possible to detect the crowd behavior macro state automatically by utilizing cameras. Results will provide data support on crowd emergency prevention and on manual emergency intervention.
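
A minimal sketch of the idea: the Shannon entropy of a binned heading distribution sits near the theoretical maximum for a disordered crowd and far below it when motion is ordered. The microstate construction here is simplified to directions only (velocity magnitudes are ignored), and the crowds are simulated:

```python
import math
import random

def direction_entropy(angles_deg, n_bins=8):
    """Shannon entropy (bits) of the heading distribution over n_bins sectors."""
    counts = [0] * n_bins
    for a in angles_deg:
        counts[int(a % 360 // (360 / n_bins))] += 1
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

rng = random.Random(1)
disordered = [rng.uniform(0, 360) for _ in range(4000)]      # milling crowd
ordered = [rng.gauss(112.5, 5) % 360 for _ in range(4000)]   # common heading

h_max = math.log2(8)  # theoretical maximum for 8 bins: 3 bits
print(direction_entropy(disordered), direction_entropy(ordered), h_max)
```

A sudden drop from near `h_max` toward zero is the kind of entropy mutation the abstract proposes to detect automatically from camera tracks.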

  9. Developing a short version of the Toronto Structured Interview for Alexithymia using item response theory.

    PubMed

    Sekely, Angela; Taylor, Graeme J; Bagby, R Michael

    2018-03-17

    The Toronto Structured Interview for Alexithymia (TSIA) was developed to provide a structured interview method for assessing alexithymia. One drawback of this instrument is the amount of time it takes to administer and score. The current study used item response theory (IRT) methods to analyze data from a large heterogeneous multi-language sample (N = 842) to investigate whether a subset of items could be selected to create a short version of the instrument. Samejima's (1969) graded response model was used to fit the item responses. Items providing maximum information were retained in the short model, resulting in the elimination of 12 items from the original 24. Despite the 50% reduction in the number of items, 65.22% of the information was retained. Further studies are needed to validate the short version. A short version of the TSIA is potentially of practical value to clinicians and researchers with time constraints. Copyright © 2018. Published by Elsevier B.V.
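
Selecting items by maximum information can be sketched with the simpler two-parameter logistic (2PL) model rather than the graded response model actually used; for a 2PL item the Fisher information is I(θ) = a²P(θ)(1 − P(θ)). The item parameters below are hypothetical:

```python
import math

def p_2pl(theta, a, b):
    """2PL probability of endorsing an item with discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta, a, b):
    """Fisher information of a 2PL item: I(theta) = a^2 * P * (1 - P)."""
    p = p_2pl(theta, a, b)
    return a * a * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b).
items = [(0.8, -1.0), (2.0, 0.0), (1.2, 0.5), (2.5, 1.5), (0.5, 0.0)]
theta = 0.0  # trait level where we want maximum measurement precision
ranked = sorted(items, key=lambda ab: -item_information(theta, *ab))
print(ranked[:2])  # the two most informative items at theta = 0
```

Highly discriminating items whose difficulty sits near the target trait level contribute the most information, which is why an information-based short form can retain most of the precision with half the items.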

  10. Validation of an electronic device for measuring driving exposure.

    PubMed

    Huebner, Kyla D; Porter, Michelle M; Marshall, Shawn C

    2006-03-01

    This study sought to evaluate an on-board diagnostic system (CarChip) for collecting driving exposure data in older drivers. Drivers (N = 20) aged 60 to 86 years from Winnipeg and surrounding communities participated. Information on driving exposure was obtained via the CarChip and global positioning system (GPS) technology on a driving course, and obtained via the CarChip and surveys over a week of driving. Velocities and distances were measured over the road course to validate the accuracy of the CarChip compared to GPS for those parameters. The results show that the CarChip does provide valid distance measurements and slightly lower maximum velocities than GPS measures. From the results obtained in this study, it was determined that retrospective self-reports of weekly driving distances are inaccurate. Therefore, an on-board diagnostic system (OBDII) electronic device like the CarChip can provide valid and detailed information about driving exposure that would be useful for studies of crash rates or driving behavior.

  11. A hot implantation study on the evolution of defects in He ion implanted MgO(1 0 0)

    NASA Astrophysics Data System (ADS)

    Fedorov, A. V.; van Huis, M. A.; van Veen, A.

    2002-05-01

    Ion implantation at elevated temperature, so-called hot implantation, was used to study the nucleation and thermal stability of defects. In this work, MgO(1 0 0) single crystal samples were implanted with 30 keV He ions at various implantation temperatures. The implantation doses ranged from 10¹⁴ to 10¹⁶ cm⁻². The implantation-introduced defects were subsequently studied by thermal helium desorption spectroscopy (THDS) and Doppler broadening positron beam analysis (PBA). The THDS study provides vital information on the kinetics of He release from the sample. The PBA technique, being sensitive to open-volume defects, provides complementary information on cavity evolution. The THDS study has shown that in most cases helium release is characterised by an activation energy of Q = 4.7 ± 0.5 eV with a maximum release temperature of Tmax = 1830 K. By applying a first-order desorption model, the pre-exponential factor is estimated as ν = 4.3 × 10¹¹ s⁻¹.
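
The reported Q, ν, and Tmax are mutually consistent under a first-order desorption model, which can be checked by integrating the rate equation dn/dT = −(ν/β) n exp(−Q/kT) over a linear temperature ramp and locating the release-rate peak. The 3 K/s ramp rate β is an assumption chosen for illustration; the paper's actual heating rate is not given here:

```python
import math

def desorption_peak(nu, q_ev, beta, t_start=1200.0, t_end=2200.0, dT=0.05):
    """Integrate first-order desorption dn/dT = -(nu/beta) n exp(-Q/kT)
    under a linear ramp; return the temperature of maximum release rate."""
    k_b = 8.617e-5          # Boltzmann constant, eV/K
    n = 1.0                 # initial He inventory (arbitrary units)
    t, t_peak, r_peak = t_start, t_start, 0.0
    while t < t_end and n > 1e-12:
        rate_const = (nu / beta) * math.exp(-q_ev / (k_b * t))
        release = rate_const * n        # amount desorbed per kelvin
        if release > r_peak:
            r_peak, t_peak = release, t
        n -= release * dT               # forward-Euler depletion step
        t += dT
    return t_peak

# Q = 4.7 eV and nu = 4.3e11 s^-1 from the abstract; beta = 3 K/s assumed.
print(desorption_peak(4.3e11, 4.7, 3.0))  # peak near 1830 K
```

With these parameters the simulated desorption peak lands near the reported Tmax of 1830 K, which is the Redhead-type consistency check behind extracting Q and ν from a THDS peak.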

  12. Integration of medical imaging into a multi-institutional hospital information system structure.

    PubMed

    Dayhoff, R E

    1995-01-01

    The Department of Veterans Affairs (VA) is providing integrated text and image data to its clinical users at its Washington and Baltimore medical centers and, soon, at nine other medical centers. The DHCP Imaging System records clinically significant diagnostic images selected by medical specialists in a variety of departments, including cardiology, gastroenterology, pathology, dermatology, surgery, radiology, podiatry, dentistry, and emergency medicine. These images, which include color and gray scale images, and electrocardiogram waveforms, are displayed on workstations located throughout the medical centers. Integration of clinical images with the VA's electronic mail system allows transfer of data from one medical center to another. The ability to incorporate transmitted text and image data into on-line patient records at the collaborating sites is an important aspect of professional consultation. In order to achieve the maximum benefits from an integrated patient record system, a critical mass of information must be available for clinicians. When there is also seamless support for administration, it becomes possible to re-engineer the processes involved in providing medical care.

  13. Optimal protocol for maximum work extraction in a feedback process with a time-varying potential

    NASA Astrophysics Data System (ADS)

    Kwon, Chulan

    2017-12-01

    The nonequilibrium nature of information thermodynamics is characterized by the inequality or non-negativity of the total entropy change of the system, memory, and reservoir. Mutual information change plays a crucial role in the inequality, in particular if work is extracted and the paradox of Maxwell's demon is raised. We consider the Brownian information engine where the protocol set of the harmonic potential is initially chosen by the measurement and varies in time. We confirm the inequality of the total entropy change by calculating, in detail, the entropic terms including the mutual information change. We rigorously find the optimal values of the time-dependent protocol for maximum extraction of work both for the finite-time and the quasi-static process.

  14. Clean Water Act Approved Total Maximum Daily Load (TMDL) Documents

    EPA Pesticide Factsheets

    Information from Approved and Established TMDL Documents as well as TMDLs that have been Withdrawn. This includes the pollutants identified in the TMDL Document, the 303(d) Listed Water(s) that the TMDL Document addresses and the associated Cause(s) of Impairment. The National Total Maximum Daily Load (TMDL) Tracking System (NTTS) contains information on waters that are Not Supporting their designated uses. These waters are listed by the state as impaired under Section 303(d) of the Clean Water Act.

  15. 75 FR 62136 - Notice of Maximum Amount of Assistance Under the Individuals and Households Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-07

    ... DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency Notice of Maximum Amount of.... ACTION: Notice. SUMMARY: FEMA gives notice of the maximum amount for assistance under the Individuals and.... 5174, prescribes that FEMA must annually adjust the maximum amounts for assistance provided under the...

  16. 40 CFR 35.175 - Maximum federal share.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Maximum federal share. 35.175 Section 35.175 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE...)) § 35.175 Maximum federal share. The Regional Administrator may provide a maximum of 75 percent of the...

  17. 78 FR 64523 - Notice of Maximum Amount of Assistance Under the Individuals and Households Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-29

    ... DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency Notice of Maximum Amount of.... ACTION: Notice. SUMMARY: FEMA gives notice of the maximum amount for assistance under the Individuals and....C. 5174, prescribes that FEMA must annually adjust the maximum amount for assistance provided under...

  18. 40 CFR 35.195 - Maximum federal share.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Maximum federal share. 35.195 Section 35.195 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE... 1443(b)) § 35.195 Maximum federal share. The Regional Administrator may provide a maximum of 75 percent...

  19. 77 FR 61425 - Notice of Maximum Amount of Assistance Under the Individuals and Households Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-09

    ... DEPARTMENT OF HOMELAND SECURITY Federal Emergency Management Agency Notice of Maximum Amount of.... ACTION: Notice. SUMMARY: FEMA gives notice of the maximum amount for assistance under the Individuals and....C. 5174, prescribes that FEMA must annually adjust the maximum amount for assistance provided under...

  20. 40 CFR 35.685 - Maximum federal share.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Maximum federal share. 35.685 Section... (section 1443(b)) § 35.685 Maximum federal share. (a) The Regional Administrator may provide up to 75 percent of the approved work plan costs. (b) The Regional Administrator may increase the maximum federal...

  1. Unification of field theory and maximum entropy methods for learning probability densities

    NASA Astrophysics Data System (ADS)

    Kinney, Justin B.

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.
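    As a toy illustration of maximum entropy density estimation (a minimal sketch on a grid, not Kinney's field-theory method): with a single mean constraint, the maxent solution is the exponential family p(x) ∝ exp(λx), and the Lagrange multiplier λ can be found by bisection because the mean is monotone in λ.

```python
import numpy as np

def maxent_density(x, target_mean, lo=-200.0, hi=200.0, iters=200):
    """Maximum entropy distribution on grid x subject to a mean constraint.
    The exponential-family form p(x) ~ exp(lam * x) is the maxent solution;
    lam is found by bisection so that the distribution's mean matches
    target_mean. (Illustrative grid/pmf version, names are assumptions.)"""
    def mean_of(lam):
        w = np.exp(lam * (x - x.mean()))   # shift exponent for stability
        p = w / w.sum()
        return float((p * x).sum())
    for _ in range(iters):               # mean_of is increasing in lam
        mid = 0.5 * (lo + hi)
        if mean_of(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = np.exp(lam * (x - x.mean()))
    return w / w.sum(), lam

x = np.linspace(0.0, 1.0, 501)
p, lam = maxent_density(x, target_mean=0.3)
print(round(float((p * x).sum()), 4))    # 0.3
```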

  2. Unification of field theory and maximum entropy methods for learning probability densities.

    PubMed

    Kinney, Justin B

    2015-09-01

    The need to estimate smooth probability distributions (a.k.a. probability densities) from finite sampled data is ubiquitous in science. Many approaches to this problem have been described, but none is yet regarded as providing a definitive solution. Maximum entropy estimation and Bayesian field theory are two such approaches. Both have origins in statistical physics, but the relationship between them has remained unclear. Here I unify these two methods by showing that every maximum entropy density estimate can be recovered in the infinite smoothness limit of an appropriate Bayesian field theory. I also show that Bayesian field theory estimation can be performed without imposing any boundary conditions on candidate densities, and that the infinite smoothness limit of these theories recovers the most common types of maximum entropy estimates. Bayesian field theory thus provides a natural test of the maximum entropy null hypothesis and, furthermore, returns an alternative (lower entropy) density estimate when the maximum entropy hypothesis is falsified. The computations necessary for this approach can be performed rapidly for one-dimensional data, and software for doing this is provided.

  3. The Impact of Ocean Observations in Seasonal Climate Prediction

    NASA Technical Reports Server (NTRS)

    Rienecker, Michele; Keppenne, Christian; Kovach, Robin; Marshak, Jelena

    2010-01-01

    The ocean provides the most significant memory for the climate system. Hence, a critical element in climate forecasting with coupled models is the initialization of the ocean with states from an ocean data assimilation system. Remotely-sensed ocean surface fields (e.g., sea surface topography, SST, winds) are now available for extensive periods and have been used to constrain ocean models to provide a record of climate variations. Since the ocean is virtually opaque to electromagnetic radiation, the assimilation of these satellite data is essential to extracting the maximum information content. More recently, the Argo drifters have provided unprecedented sampling of the subsurface temperature and salinity. Although the duration of this observation set has been too short to provide solid statistical evidence of its impact, there are indications that Argo improves the forecast skill of coupled systems. This presentation will address the impact these different observations have had on seasonal climate predictions with the GMAO's coupled model.

  4. Assessment of the quality of web-based information on bunions.

    PubMed

    Chong, Yew Ming; Fraval, Andrew; Chandrananth, Janan; Plunkett, Virginia; Tran, Phong

    2013-08-01

    The Internet provides a large source of health-related information for patients. However, information on the Internet is mostly unregulated, ranging from factually correct to misleading or contradictory. The objective of this study was to determine the quality of information available on the World Wide Web on the topic of bunions. Websites were identified using 3 search engines (Google, Yahoo, and MSN) and the search term bunions. The first 30 websites in each search were analyzed. Websites were assessed using the validated DISCERN rating instrument to determine the quality of health content and treatment information. The DISCERN tool possesses moderate to very good interobserver agreement, ranging from .41 to .82. A total of 90 websites were assessed. Forty-eight sites were duplicates, leaving 42 unique sites. Of these, 60% (25/42) provided patient-related information, 21% (9/42) were advertisements, 7% (3/42) promoted medical centers, 5% (2/42) were dead links, 5% (2/42) were news articles, and 2% (1/42) provided doctor's information. Among the unique sites, the average overall DISCERN score was 2.9 out of a maximum of 5 (range, 1.8 to 4.6). Only 24% (10/42) of websites were classified as "good" or "excellent." Although most websites contained information on symptoms, causes, risk factors, investigations, and treatment options for bunions, 60% (25/42) did not provide adequate information on the benefits of each treatment, 45% (19/42) did not describe any risks of treatment, and 76% (32/42) did not describe how treatment choices affect overall quality of life. The sources of information were clear in 33% (14/42), and the date when this information was reviewed was given in only 50% (21/42). Only 29% (12/42) of websites had been updated within the past 2 years. Overall, the quality of patient information on bunions varies widely.
We believe clinicians should guide patients in identifying the best possible and genuine information on the World Wide Web. Patients are commonly using the internet as an information resource, in spite of the highly variable quality of this information. They should be encouraged to exercise caution and to utilize only well-known sites.

  5. Role of projection in the control of bird flocks

    PubMed Central

    Pearce, Daniel J. G.; Miller, Adam M.; Rowlands, George; Turner, Matthew S.

    2014-01-01

    Swarming is a conspicuous behavioral trait observed in bird flocks, fish shoals, insect swarms, and mammal herds. It is thought to improve collective awareness and offer protection from predators. Many current models involve the hypothesis that information coordinating motion is exchanged among neighbors. We argue that such local interactions alone are insufficient to explain the organization of large flocks of birds and that the mechanism for the exchange of long-range information necessary to control their density remains unknown. We show that large flocks self-organize to the maximum density at which a typical individual still can see out of the flock in many directions. Such flocks are marginally opaque—an external observer also still can see a substantial fraction of sky through the flock. Although this seems intuitive, we show it need not be the case; flocks might easily be highly diffuse or entirely opaque. The emergence of marginal opacity strongly constrains how individuals interact with one another within large swarms. It also provides a mechanism for global interactions: an individual can respond to the projection of the flock that it sees. This provides for faster information transfer and hence rapid flock dynamics, another advantage over local models. From a behavioral perspective, it optimizes the information available to each bird while maintaining the protection of a dense, coherent flock. PMID:25002501

  6. Principle of Maximum Fisher Information from Hardy’s Axioms Applied to Statistical Systems

    PubMed Central

    Frieden, B. Roy; Gatenby, Robert A.

    2014-01-01

    Consider a finite-sized, multidimensional system in a parameter state a. The system is in either a state of equilibrium or general non-equilibrium, and may obey either classical or quantum physics. L. Hardy’s mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N = max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N = max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I = Imax. This is important because many physical laws have been derived, assuming as a working hypothesis that I = Imax. These derivations include uses of the principle of Extreme physical information (EPI). Examples of such derivations were of the De Broglie wave hypothesis, quantum wave equations, Maxwell’s equations, new laws of biology (e.g. of Coulomb force-directed cell development, and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I = Imax itself derives, from suitably extended Hardy axioms, thereby eliminates its need to be assumed in these derivations. Thus, uses of I = Imax and EPI express physics at its most fundamental level – its axiomatic basis in math. PMID:24229152

  7. Principle of maximum Fisher information from Hardy's axioms applied to statistical systems.

    PubMed

    Frieden, B Roy; Gatenby, Robert A

    2013-10-01

    Consider a finite-sized, multidimensional system in parameter state a. The system is either at statistical equilibrium or general nonequilibrium, and may obey either classical or quantum physics. L. Hardy's mathematical axioms provide a basis for the physics obeyed by any such system. One axiom is that the number N of distinguishable states a in the system obeys N=max. This assumes that N is known as deterministic prior knowledge. However, most observed systems suffer statistical fluctuations, for which N is therefore only known approximately. Then what happens if the scope of the axiom N=max is extended to include such observed systems? It is found that the state a of the system must obey a principle of maximum Fisher information, I=I(max). This is important because many physical laws have been derived, assuming as a working hypothesis that I=I(max). These derivations include uses of the principle of extreme physical information (EPI). Examples of such derivations were of the De Broglie wave hypothesis, quantum wave equations, Maxwell's equations, new laws of biology (e.g., of Coulomb force-directed cell development and of in situ cancer growth), and new laws of economic fluctuation and investment. That the principle I=I(max) itself derives from suitably extended Hardy axioms thereby eliminates its need to be assumed in these derivations. Thus, uses of I=I(max) and EPI express physics at its most fundamental level, its axiomatic basis in math.
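    For reference, the Fisher information that observations x carry about a scalar parameter a, the quantity extremized in the EPI framework above (I = I_max), has the standard form:

```latex
% Fisher information of a scalar parameter a in a density p(x|a);
% the expectation of the squared score.
I(a) = \int \! dx \; p(x \mid a)
       \left( \frac{\partial \ln p(x \mid a)}{\partial a} \right)^{2}
```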

  8. Environmental contaminants of emerging concern in seafood – European database on contaminant levels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vandermeersch, Griet, E-mail: griet.vandermeersch@ilvo.vlaanderen.be; Lourenço, Helena Maria; Alvarez-Muñoz, Diana

    Marine pollution gives rise to concern not only about the environment itself but also about the impact on food safety and consequently on public health. European authorities and consumers have therefore become increasingly worried about the transfer of contaminants from the marine environment to seafood. So-called “contaminants of emerging concern” are chemical substances for which no maximum levels have been laid down in EU legislation, or substances for which maximum levels have been provided but which require revision. Adequate information on their presence in seafood is often lacking and thus potential risks cannot be excluded. Assessment of food safety issues related to these contaminants has thus become urgent and imperative. A database (www.ecsafeseafooddbase.eu), containing available information on the levels of contaminants of emerging concern in seafood and providing the most recent data to scientists and regulatory authorities, was developed. The present paper reviews a selection of contaminants of emerging concern in seafood including toxic elements, endocrine disruptors, brominated flame retardants, pharmaceuticals and personal care products, polycyclic aromatic hydrocarbons and derivatives, microplastics and marine toxins. The current status of knowledge on human exposure, toxicity and legislation is briefly presented, and the outcome from scientific publications reporting on the levels of these compounds in seafood is presented and discussed. - Highlights: • Development of a European database regarding contaminants of emerging concern. • Current status on knowledge of human exposure, toxicity and legislation. • Review on the occurrence of contaminants of emerging concern in seafood.

  9. Taxonomically-linked growth phenotypes during arsenic stress among arsenic resistant bacteria isolated from soils overlying the Centralia coal seam fire.

    PubMed

    Dunivin, Taylor K; Miller, Justine; Shade, Ashley

    2018-01-01

    Arsenic (As), a toxic element, has impacted life since early Earth. Thus, microorganisms have evolved many As resistance and tolerance mechanisms to improve their survival outcomes given As exposure. We isolated As resistant bacteria from Centralia, PA, the site of an underground coal seam fire that has been burning since 1962. From a 57.4°C soil collected from a vent above the fire, we isolated 25 unique aerobic As resistant bacterial strains spanning seven genera. We examined their diversity, resistance gene content, transformation abilities, inhibitory concentrations, and growth phenotypes. Although As concentrations were low at the time of soil collection (2.58 ppm), isolates had high minimum inhibitory concentrations (MICs) of arsenate and arsenite (>300 mM and 20 mM respectively), and most isolates were capable of arsenate reduction. We screened isolates (PCR and sequencing) using 12 published primer sets for six As resistance genes (AsRGs). Genes encoding arsenate reductase (arsC) and arsenite efflux pumps (arsB, ACR3(2)) were present, and phylogenetic incongruence between 16S rRNA genes and AsRGs provided evidence for horizontal gene transfer. A detailed investigation of differences in isolate growth phenotypes across As concentrations (lag time to exponential growth, maximum growth rate, and maximum OD590) showed a relationship with taxonomy, providing information that could help to predict an isolate's performance given As exposure in situ. Our results suggest that microbiological management and remediation of environmental As could be informed by taxonomically-linked As tolerance, potential for resistance gene transferability, and the rare biosphere.

  10. Networked sensors for the combat forces

    NASA Astrophysics Data System (ADS)

    Klager, Gene

    2004-11-01

    Real-time and detailed information is critical to the success of ground combat forces. Current manned reconnaissance, surveillance, and target acquisition (RSTA) capabilities are not sufficient to cover battlefield intelligence gaps, provide Beyond-Line-of-Sight (BLOS) targeting, and supply the ambush avoidance information necessary for combat forces operating in hostile situations, in complex terrain, and in military operations in urban terrain. This paper describes a current US Army program developing advanced networked unmanned/unattended sensor systems to survey these gaps and provide the Commander with real-time, pertinent information. Networked Sensors for the Combat Forces plans to develop and demonstrate a new generation of low-cost distributed unmanned sensor systems organic to the RSTA Element. Networked unmanned sensors will provide remote monitoring of gaps, will increase a unit's area of coverage, and will provide the commander organic assets to complete his Battlefield Situational Awareness (BSA) picture for direct and indirect fire weapons, early warning, and threat avoidance. Current efforts include developing sensor packages for unmanned ground vehicles, small unmanned aerial vehicles, and unattended ground sensors using advanced sensor technologies. These sensors will be integrated with robust networked communications and Battle Command tools for mission planning, intelligence "reachback", and sensor data management. The network architecture design is based on a model with a three-part modular design: 1) standardized sensor message protocols, 2) Sensor Data Management, and 3) Service-Oriented Architecture. This simple model provides maximum flexibility for data exchange, information management, and distribution. Products include sensor suites optimized for unmanned platforms, stationary and mobile versions of the Sensor Data Management Center, Battle Command planning tools, networked communications, and sensor management software. Details of these products and recent test results will be presented.

  11. The maximum entropy production and maximum Shannon information entropy in enzyme kinetics

    NASA Astrophysics Data System (ADS)

    Dobovišek, Andrej; Markovič, Rene; Brumen, Milan; Fajmut, Aleš

    2018-04-01

    We demonstrate that the maximum entropy production principle (MEPP) serves as a physical selection principle for the description of the most probable non-equilibrium steady states in simple enzymatic reactions. A theoretical approach is developed, which enables maximization of the density of entropy production with respect to the enzyme rate constants for the enzyme reaction in a steady state. Mass and Gibbs free energy conservations are considered as optimization constraints. In such a way computed optimal enzyme rate constants in a steady state yield also the most uniform probability distribution of the enzyme states. This accounts for the maximal Shannon information entropy. By means of the stability analysis it is also demonstrated that maximal density of entropy production in that enzyme reaction requires flexible enzyme structure, which enables rapid transitions between different enzyme states. These results are supported by an example, in which density of entropy production and Shannon information entropy are numerically maximized for the enzyme Glucose Isomerase.
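    The link the authors draw between MEPP-optimal rate constants and "the most uniform probability distribution of the enzyme states" rests on a basic fact: among all distributions over n states, the uniform one maximizes the Shannon entropy, with H_max = ln n. A quick numerical check of that fact (illustrative only, not the paper's optimization):

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy H = -sum_i p_i ln p_i (in nats); zero-probability
    states contribute nothing."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-(p * np.log(p)).sum())

# Among distributions over n enzyme states, the uniform one attains
# H_max = ln(n); compare against randomly drawn distributions.
rng = np.random.default_rng(0)
n = 4
h_uniform = shannon_entropy(np.full(n, 1.0 / n))
random_h = [shannon_entropy(rng.dirichlet(np.ones(n))) for _ in range(1000)]

print(round(h_uniform, 4))           # ln(4) ≈ 1.3863
print(h_uniform >= max(random_h))    # True
```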

  12. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
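    The core idea of FIML (independent of the multilevel SEM machinery in the article) is that each incomplete case contributes the likelihood of just its observed variables, rather than being dropped or imputed. A minimal NumPy sketch under a multivariate-normal model; the function names and toy data are illustrative assumptions:

```python
import numpy as np

def _mvn_logpdf(x, mu, cov):
    """Log-density of a multivariate normal via slogdet/solve (no SciPy)."""
    diff = x - mu
    _, logdet = np.linalg.slogdet(cov)
    maha = diff @ np.linalg.solve(cov, diff)
    return -0.5 * (x.size * np.log(2.0 * np.pi) + logdet + maha)

def fiml_loglik(data, mu, sigma):
    """Full-information ML log-likelihood of (mu, sigma) given rows with
    missing entries (NaN): each case contributes the density of its
    *observed* variables only, so no case is dropped and nothing is imputed."""
    total = 0.0
    for row in data:
        obs = ~np.isnan(row)
        if obs.any():
            total += _mvn_logpdf(row[obs], mu[obs], sigma[np.ix_(obs, obs)])
    return float(total)

mu = np.array([0.0, 0.0])
sigma = np.array([[1.0, 0.5], [0.5, 1.0]])
data = np.array([[0.2, -0.1],
                 [1.0, np.nan],      # partially observed case still counts
                 [np.nan, -0.4]])
print(fiml_loglik(data, mu, sigma) < 0.0)   # finite, negative log-likelihood
```

    In practice this likelihood is maximized over (mu, sigma) by an optimizer or EM; the sketch only shows how incomplete cases enter it.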

  13. Muons in air showers at the Pierre Auger Observatory: Measurement of atmospheric production depth

    NASA Astrophysics Data System (ADS)

    Aab, A.; Abreu, P.; Aglietta, M.; Ahlers, M.; Ahn, E. J.; Al Samarai, I.; Albuquerque, I. F. M.; Allekotte, I.; Allen, J.; Allison, P.; Almela, A.; Alvarez Castillo, J.; Alvarez-Muñiz, J.; Alves Batista, R.; Ambrosio, M.; Aminaei, A.; Anchordoqui, L.; Andringa, S.; Aramo, C.; Arqueros, F.; Asorey, H.; Assis, P.; Aublin, J.; Ave, M.; Avenier, M.; Avila, G.; Badescu, A. M.; Barber, K. B.; Bäuml, J.; Baus, C.; Beatty, J. J.; Becker, K. H.; Bellido, J. A.; Berat, C.; Bertou, X.; Biermann, P. L.; Billoir, P.; Blanco, F.; Blanco, M.; Bleve, C.; Blümer, H.; Boháčová, M.; Boncioli, D.; Bonifazi, C.; Bonino, R.; Borodai, N.; Brack, J.; Brancus, I.; Brogueira, P.; Brown, W. C.; Buchholz, P.; Bueno, A.; Buscemi, M.; Caballero-Mora, K. S.; Caccianiga, B.; Caccianiga, L.; Candusso, M.; Caramete, L.; Caruso, R.; Castellina, A.; Cataldi, G.; Cazon, L.; Cester, R.; Chavez, A. G.; Cheng, S. H.; Chiavassa, A.; Chinellato, J. A.; Chudoba, J.; Cilmo, M.; Clay, R. W.; Cocciolo, G.; Colalillo, R.; Collica, L.; Coluccia, M. R.; Conceição, R.; Contreras, F.; Cooper, M. J.; Coutu, S.; Covault, C. E.; Criss, A.; Cronin, J.; Curutiu, A.; Dallier, R.; Daniel, B.; Dasso, S.; Daumiller, K.; Dawson, B. R.; de Almeida, R. M.; De Domenico, M.; de Jong, S. J.; de Mello Neto, J. R. T.; De Mitri, I.; de Oliveira, J.; de Souza, V.; del Peral, L.; Deligny, O.; Dembinski, H.; Dhital, N.; Di Giulio, C.; Di Matteo, A.; Diaz, J. C.; Díaz Castro, M. L.; Diep, P. N.; Diogo, F.; Dobrigkeit, C.; Docters, W.; D'Olivo, J. C.; Dong, P. N.; Dorofeev, A.; Dorosti Hasankiadeh, Q.; Dova, M. T.; Ebr, J.; Engel, R.; Erdmann, M.; Erfani, M.; Escobar, C. O.; Espadanal, J.; Etchegoyen, A.; Facal San Luis, P.; Falcke, H.; Fang, K.; Farrar, G.; Fauth, A. C.; Fazzini, N.; Ferguson, A. P.; Fernandes, M.; Fick, B.; Figueira, J. M.; Filevich, A.; Filipčič, A.; Fox, B. D.; Fratu, O.; Fröhlich, U.; Fuchs, B.; Fuji, T.; Gaior, R.; García, B.; Garcia Roca, S. 
T.; Garcia-Gamez, D.; Garcia-Pinto, D.; Garilli, G.; Gascon Bravo, A.; Gate, F.; Gemmeke, H.; Ghia, P. L.; Giaccari, U.; Giammarchi, M.; Giller, M.; Glaser, C.; Glass, H.; Gomez Albarracin, F.; Gómez Berisso, M.; Gómez Vitale, P. F.; Gonçalves, P.; Gonzalez, J. G.; Gookin, B.; Gorgi, A.; Gorham, P.; Gouffon, P.; Grebe, S.; Griffith, N.; Grillo, A. F.; Grubb, T. D.; Guardincerri, Y.; Guarino, F.; Guedes, G. P.; Hansen, P.; Harari, D.; Harrison, T. A.; Harton, J. L.; Haungs, A.; Hebbeker, T.; Heck, D.; Heimann, P.; Herve, A. E.; Hill, G. C.; Hojvat, C.; Hollon, N.; Holt, E.; Homola, P.; Hörandel, J. R.; Horvath, P.; Hrabovský, M.; Huber, D.; Huege, T.; Insolia, A.; Isar, P. G.; Islo, K.; Jandt, I.; Jansen, S.; Jarne, C.; Josebachuili, M.; Kääpä, A.; Kambeitz, O.; Kampert, K. H.; Kasper, P.; Katkov, I.; Kégl, B.; Keilhauer, B.; Keivani, A.; Kemp, E.; Kieckhafer, R. M.; Klages, H. O.; Kleifges, M.; Kleinfeller, J.; Krause, R.; Krohm, N.; Krömer, O.; Kruppke-Hansen, D.; Kuempel, D.; Kunka, N.; La Rosa, G.; LaHurd, D.; Latronico, L.; Lauer, R.; Lauscher, M.; Lautridou, P.; Le Coz, S.; Leão, M. S. A. B.; Lebrun, D.; Lebrun, P.; Leigui de Oliveira, M. A.; Letessier-Selvon, A.; Lhenry-Yvon, I.; Link, K.; López, R.; Lopez Agüera, A.; Louedec, K.; Lozano Bahilo, J.; Lu, L.; Lucero, A.; Ludwig, M.; Lyberis, H.; Maccarone, M. C.; Malacari, M.; Maldera, S.; Maller, J.; Mandat, D.; Mantsch, P.; Mariazzi, A. G.; Marin, V.; Mariş, I. C.; Marsella, G.; Martello, D.; Martin, L.; Martinez, H.; Martínez Bravo, O.; Martraire, D.; Masías Meza, J. J.; Mathes, H. J.; Mathys, S.; Matthews, A. J.; Matthews, J.; Matthiae, G.; Maurel, D.; Maurizio, D.; Mayotte, E.; Mazur, P. O.; Medina, C.; Medina-Tanco, G.; Melissas, M.; Melo, D.; Menichetti, E.; Menshikov, A.; Messina, S.; Meyhandan, R.; Mićanović, S.; Micheletti, M. I.; Middendorf, L.; Minaya, I. A.; Miramonti, L.; Mitrica, B.; Molina-Bueno, L.; Mollerach, S.; Monasor, M.; Monnier Ragaigne, D.; Montanet, F.; Morello, C.; Moreno, J. 
C.; Mostafá, M.; Moura, C. A.; Muller, M. A.; Müller, G.; Münchmeyer, M.; Mussa, R.; Navarra, G.; Navas, S.; Necesal, P.; Nellen, L.; Nelles, A.; Neuser, J.; Niechciol, M.; Niemietz, L.; Niggemann, T.; Nitz, D.; Nosek, D.; Novotny, V.; Nožka, L.; Ochilo, L.; Olinto, A.; Oliveira, M.; Ortiz, M.; Pacheco, N.; Pakk Selmi-Dei, D.; Palatka, M.; Pallotta, J.; Palmieri, N.; Papenbreer, P.; Parente, G.; Parra, A.; Pastor, S.; Paul, T.; Pech, M.; PÈ©kala, J.; Pelayo, R.; Pepe, I. M.; Perrone, L.; Pesce, R.; Petermann, E.; Peters, C.; Petrera, S.; Petrolini, A.; Petrov, Y.; Piegaia, R.; Pierog, T.; Pieroni, P.; Pimenta, M.; Pirronello, V.; Platino, M.; Plum, M.; Porcelli, A.; Porowski, C.; Prado, R. R.; Privitera, P.; Prouza, M.; Purrello, V.; Quel, E. J.; Querchfeld, S.; Quinn, S.; Rautenberg, J.; Ravel, O.; Ravignani, D.; Revenu, B.; Ridky, J.; Riggi, S.; Risse, M.; Ristori, P.; Rizi, V.; Roberts, J.; Rodrigues de Carvalho, W.; Rodriguez Cabo, I.; Rodriguez Fernandez, G.; Rodriguez Rojo, J.; Rodríguez-Frías, M. D.; Ros, G.; Rosado, J.; Rossler, T.; Roth, M.; Roulet, E.; Rovero, A. C.; Rühle, C.; Saffi, S. J.; Saftoiu, A.; Salamida, F.; Salazar, H.; Saleh, A.; Salesa Greus, F.; Salina, G.; Sánchez, F.; Sanchez-Lucas, P.; Santo, C. E.; Santos, E.; Santos, E. M.; Sarazin, F.; Sarkar, B.; Sarmento, R.; Sato, R.; Scharf, N.; Scherini, V.; Schieler, H.; Schiffer, P.; Schmidt, A.; Scholten, O.; Schoorlemmer, H.; Schovánek, P.; Schulz, A.; Schulz, J.; Sciutto, S. J.; Segreto, A.; Settimo, M.; Shadkam, A.; Shellard, R. C.; Sidelnik, I.; Sigl, G.; Sima, O.; Śmiałkowski, A.; Šmída, R.; Snow, G. R.; Sommers, P.; Sorokin, J.; Squartini, R.; Srivastava, Y. N.; Stanič, S.; Stapleton, J.; Stasielak, J.; Stephan, M.; Stutz, A.; Suarez, F.; Suomijärvi, T.; Supanitsky, A. D.; Sutherland, M. S.; Swain, J.; Szadkowski, Z.; Szuba, M.; Taborda, O. A.; Tapia, A.; Tartare, M.; Thao, N. T.; Theodoro, V. M.; Tiffenberg, J.; Timmermans, C.; Todero Peixoto, C. 
J.; Toma, G.; Tomankova, L.; Tomé, B.; Tonachini, A.; Torralba Elipe, G.; Torres Machado, D.; Travnicek, P.; Trovato, E.; Tueros, M.; Ulrich, R.; Unger, M.; Urban, M.; Valdés Galicia, J. F.; Valiño, I.; Valore, L.; van Aar, G.; van den Berg, A. M.; van Velzen, S.; van Vliet, A.; Varela, E.; Vargas Cárdenas, B.; Varner, G.; Vázquez, J. R.; Vázquez, R. A.; Veberič, D.; Verzi, V.; Vicha, J.; Videla, M.; Villaseñor, L.; Vlcek, B.; Vorobiov, S.; Wahlberg, H.; Wainberg, O.; Walz, D.; Watson, A. A.; Weber, M.; Weidenhaupt, K.; Weindl, A.; Werner, F.; Whelan, B. J.; Widom, A.; Wiencke, L.; Wilczyńska, B.; Wilczyński, H.; Will, M.; Williams, C.; Winchen, T.; Wittkowski, D.; Wundheiler, B.; Wykes, S.; Yamamoto, T.; Yapici, T.; Younk, P.; Yuan, G.; Yushkov, A.; Zamorano, B.; Zas, E.; Zavrtanik, D.; Zavrtanik, M.; Zaw, I.; Zepeda, A.; Zhou, J.; Zhu, Y.; Zimbres Silva, M.; Ziolkowski, M.; Pierre Auger Collaboration

    2014-07-01

    The surface detector array of the Pierre Auger Observatory provides information about the longitudinal development of the muonic component of extensive air showers. Using the timing information from the flash analog-to-digital converter traces of surface detectors far from the shower core, it is possible to reconstruct a muon production depth distribution. We characterize the goodness of this reconstruction for zenith angles around 60° and different energies of the primary particle. From these distributions, we define X^μ_max as the depth along the shower axis where the production of muons reaches its maximum. We explore the potential of X^μ_max as a useful observable for inferring the mass composition of ultrahigh-energy cosmic rays. Likewise, we assess its ability to constrain hadronic interaction models.
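
    The peak-finding step can be sketched numerically. The snippet below is an illustrative toy, not the Observatory's reconstruction (which fits the full profile): it locates the depth of maximum muon production from a binned production-depth profile by quadratic interpolation around the highest bin. The profile values are invented.

```python
# Toy sketch: depth of maximum muon production from a binned profile.
# Assumes uniform binning; the profile below is made up for illustration.

def xmax_mu(depths, counts):
    """depths: bin centres in g/cm^2; counts: muons produced per bin."""
    i = max(range(len(counts)), key=counts.__getitem__)
    if i == 0 or i == len(counts) - 1:
        return depths[i]  # peak at the edge: no interpolation possible
    # Parabola through the peak bin and its two neighbours.
    y0, y1, y2 = counts[i - 1], counts[i], counts[i + 1]
    h = depths[1] - depths[0]
    shift = 0.5 * h * (y0 - y2) / (y0 - 2 * y1 + y2)
    return depths[i] + shift

depths = [400.0 + 50.0 * k for k in range(9)]   # 400 ... 800 g/cm^2
counts = [2, 5, 11, 18, 21, 17, 10, 5, 2]       # invented muon profile
peak = xmax_mu(depths, counts)                  # lies between 550 and 600
```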

  14. Occam’s Quantum Strop: Synchronizing and Compressing Classical Cryptic Processes via a Quantum Channel

    NASA Astrophysics Data System (ADS)

    Mahoney, John R.; Aghamohammadi, Cina; Crutchfield, James P.

    2016-02-01

    A stochastic process’ statistical complexity stands out as a fundamental property: the minimum information required to synchronize one process generator to another. How much information is required, though, when synchronizing over a quantum channel? Recent work demonstrated that representing causal similarity as quantum state-indistinguishability provides a quantum advantage. We generalize this to synchronization and offer a sequence of constructions that exploit extended causal structures, finding a substantial increase in the quantum advantage. We demonstrate that maximum compression is determined by the process’ cryptic order, a classical, topological property closely allied to Markov order, itself a measure of historical dependence. We introduce an efficient algorithm that computes the quantum advantage and close by noting that the advantage comes at a cost: one trades off prediction for generation complexity.

  15. The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.

    PubMed

    Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C

    2017-06-01

    The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given, and the method of maximum likelihood is proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution are illustrated with an uncensored data set, and its fit is compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), the Akaike information criterion (AIC), the Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistic show that the EETE distribution provides a more reasonable fit than the other competing distributions.
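
    The model-selection criteria the comparison relies on are easy to illustrate. The sketch below fits a simple exponential model by maximum likelihood (closed form; the EETE density itself is not reproduced here) and computes AIC and BIC on simulated data; all numbers are illustrative.

```python
import math
import random

# Illustrative maximum-likelihood fit plus AIC/BIC for a one-parameter
# exponential model (a stand-in for the EETE fitting described above).

def exp_loglik(lam, data):
    """Log-likelihood of an exponential(rate=lam) model."""
    return sum(math.log(lam) - lam * x for x in data)

random.seed(1)
data = [random.expovariate(0.5) for _ in range(200)]  # simulated lifetimes

lam_hat = 1.0 / (sum(data) / len(data))  # closed-form MLE: 1 / sample mean
ll = exp_loglik(lam_hat, data)
k, n = 1, len(data)                      # one parameter, n observations
aic = 2 * k - 2 * ll                     # Akaike information criterion
bic = k * math.log(n) - 2 * ll           # Bayesian information criterion
```

Lower AIC/BIC indicates the preferred model; BIC penalizes parameters more heavily once n > e².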

  16. Data base development and research and editorial support

    NASA Technical Reports Server (NTRS)

    1988-01-01

    The Life Sciences Bibliographic Data Base was created in 1981 and subsequently expanded. A systematic, professional system was developed to collect, organize, and disseminate information about scientific publications resulting from research. The data base consists of bibliographic information and hard copies of all research papers published by Life Sciences-supported investigators. Technical improvements were instituted in the data base. To minimize costs, take advantage of advances in personal computer technology, and achieve maximum flexibility and control, the data base was transferred from the JSC computer to personal computers at George Washington University (GWU). GWU also performed a range of related activities, such as conducting in-depth searches on a variety of subjects, retrieving scientific literature, preparing presentations, summarizing research progress, answering correspondence requiring reference support, and providing writing and editorial support.

  17. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network

    PubMed Central

    Qu, Xiaobo; He, Yifan

    2018-01-01

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction because of the fixed convolutional kernels in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms the state-of-the-art methods. PMID:29509666

  18. Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network.

    PubMed

    Du, Xiaofeng; Qu, Xiaobo; He, Yifan; Guo, Di

    2018-03-06

    Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction because of the fixed convolutional kernels in their building modules. To restore various scales of image details, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and build a shallow network under limited computational resources. The proposed network has the following two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum competitive strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms the state-of-the-art methods.
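
    The "maximum competitive strategy" can be sketched in miniature: filters of several scales respond to the same input and, position by position, the strongest response wins (a maxout-style competition). Plain 1-D moving averages stand in for the learned convolutional kernels; this illustrates the idea, not the authors' network.

```python
# Multi-scale competition sketch: per-position maximum over the responses
# of filters with different receptive-field widths.

def conv1d_same(signal, width):
    """Centered moving average with zero padding (stand-in for a kernel)."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half): i + half + 1]
        out.append(sum(window) / width)
    return out

def maxout(responses):
    """Element-wise competition: keep the maximum response per position."""
    return [max(vals) for vals in zip(*responses)]

signal = [0, 0, 1, 3, 1, 0, 0, 2, 2, 0]
scales = [1, 3, 5]                       # three competing filter widths
responses = [conv1d_same(signal, w) for w in scales]
fused = maxout(responses)                # the sharpest scale wins at peaks
```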

  19. Occam's Quantum Strop: Synchronizing and Compressing Classical Cryptic Processes via a Quantum Channel.

    PubMed

    Mahoney, John R; Aghamohammadi, Cina; Crutchfield, James P

    2016-02-15

    A stochastic process' statistical complexity stands out as a fundamental property: the minimum information required to synchronize one process generator to another. How much information is required, though, when synchronizing over a quantum channel? Recent work demonstrated that representing causal similarity as quantum state-indistinguishability provides a quantum advantage. We generalize this to synchronization and offer a sequence of constructions that exploit extended causal structures, finding a substantial increase in the quantum advantage. We demonstrate that maximum compression is determined by the process' cryptic order, a classical, topological property closely allied to Markov order, itself a measure of historical dependence. We introduce an efficient algorithm that computes the quantum advantage and close by noting that the advantage comes at a cost: one trades off prediction for generation complexity.

  20. Optimization of hydrometric monitoring network in urban drainage systems using information theory.

    PubMed

    Yazdi, J

    2017-10-01

    Regular and continuous monitoring of urban runoff, in both quality and quantity aspects, is of great importance for controlling and managing surface runoff. Due to the considerable costs of establishing new gauges, optimization of the monitoring network is essential. This research proposes an approach for site selection of new discharge stations in urban areas, based on entropy theory in conjunction with multi-objective optimization tools and numerical models. The modeling framework provides an optimal trade-off between the maximum possible information content and the minimum shared information among stations. This approach was applied to the main surface-water collection system in Tehran to determine new optimal monitoring points under cost considerations. Experimental results on this drainage network show that the obtained cost-effective designs noticeably outperform the consulting engineers' proposal in terms of both information content and shared information. The research also identified the most frequently occurring sites on the Pareto front, which may help decision makers prioritize gauge installation at those locations of the network.
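
    The entropy-based criterion can be sketched as a greedy search: each added station should contribute high marginal entropy while sharing little mutual information with stations already chosen. The toy discretized discharge records below are invented, and the greedy heuristic stands in for the paper's multi-objective optimization.

```python
import math
from collections import Counter

# Greedy information-based site selection sketch: maximize marginal entropy
# minus mutual information with already-selected stations.

def entropy(xs):
    n = len(xs)
    return -sum(c / n * math.log(c / n) for c in Counter(xs).values())

def mutual_info(xs, ys):
    return entropy(xs) + entropy(ys) - entropy(list(zip(xs, ys)))

records = {                          # hypothetical discretized discharge classes
    "A": [0, 1, 2, 1, 0, 2, 1, 2],
    "B": [0, 1, 2, 1, 0, 2, 1, 2],   # duplicates A: fully redundant
    "C": [2, 0, 1, 2, 1, 0, 0, 1],   # different pattern: informative
}

def greedy_select(records, k):
    chosen = []
    while len(chosen) < k:
        best = max(
            (s for s in records if s not in chosen),
            key=lambda s: entropy(records[s])
            - sum(mutual_info(records[s], records[t]) for t in chosen),
        )
        chosen.append(best)
    return chosen

sites = greedy_select(records, 2)    # skips the redundant duplicate
```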

  1. Interactive communication channel

    NASA Astrophysics Data System (ADS)

    Chan, R. H.; Mann, M. R.; Ciarrocchi, J. A.

    1985-10-01

    Discussed is an interactive communications channel (ICC) for providing a digital computer with high-performance multi-channel interfaces. Sixteen full-duplex channels can be serviced by the ICC, with the sequence or scan pattern being programmable and dependent upon the number of channels and their speed. A channel buffer system, operating on a byte basis, is used for line interfacing and character exchange. The ICC performs frame-start and frame-end functions, bit stripping, and bit stuffing. Data are stored in memory in block format (256 bytes maximum) under program control, and the ICC maintains byte address information and a block byte count. Data exchange with memory is made by cycle steals. Error detection is also provided using a cyclic redundancy check technique.
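
    The bit-stuffing step mentioned above follows the standard HDLC-style rule: after five consecutive 1s in the payload, insert a 0 so data can never imitate the frame flag. A minimal sketch (bits modelled as lists of ints; not the ICC hardware itself):

```python
# HDLC-style bit stuffing/stripping: insert a 0 after five consecutive 1s
# on transmit, and remove it again on receive.

def bit_stuff(bits):
    out, run = [], 0
    for b in bits:
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            out.append(0)   # stuffed bit
            run = 0
    return out

def bit_unstuff(bits):
    out, run, i = [], 0, 0
    while i < len(bits):
        b = bits[i]
        out.append(b)
        run = run + 1 if b == 1 else 0
        if run == 5:
            i += 1          # skip the stuffed 0
            run = 0
        i += 1
    return out

payload = [1, 1, 1, 1, 1, 1, 0, 1]
stuffed = bit_stuff(payload)     # a 0 is inserted after the first five 1s
```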

  2. Predicting Presynaptic and Postsynaptic Neurotoxins by Developing Feature Selection Technique

    PubMed Central

    Yang, Yunchun; Zhang, Chunmei; Chen, Rong; Huang, Po

    2017-01-01

    Presynaptic and postsynaptic neurotoxins are proteins that act at the presynaptic and postsynaptic membranes. Correctly predicting presynaptic and postsynaptic neurotoxins will provide important clues for drug-target discovery and drug design. In this study, we developed a theoretical method to discriminate presynaptic neurotoxins from postsynaptic neurotoxins. A strict and objective benchmark dataset was constructed to train and test our proposed model. The dipeptide composition was used to formulate neurotoxin samples. The analysis of variance (ANOVA) was used to find the optimal feature set that produces the maximum accuracy. In the jackknife cross-validation test, an overall accuracy of 94.9% was achieved. We believe that the proposed model will provide important information for the study of neurotoxins. PMID:28303250
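
    ANOVA-based feature selection of this kind scores each feature (here, each dipeptide frequency would be one feature) by the one-way F statistic between the two classes and keeps the top-ranked features. A minimal sketch with an invented two-feature data set:

```python
# ANOVA feature ranking sketch: larger F means the feature separates the
# two classes better. The two "dipeptide" features below are made up.

def f_score(group_a, group_b):
    """One-way ANOVA F statistic for two groups of scalar values."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    grand = (sum(group_a) + sum(group_b)) / (na + nb)
    ss_between = na * (ma - grand) ** 2 + nb * (mb - grand) ** 2
    ss_within = (sum((x - ma) ** 2 for x in group_a)
                 + sum((x - mb) ** 2 for x in group_b))
    df_b, df_w = 1, na + nb - 2
    return (ss_between / df_b) / (ss_within / df_w)

# feature -> (values in class 1, values in class 2)
features = {
    "AA": ([0.10, 0.12, 0.11], [0.30, 0.29, 0.31]),  # well separated
    "AC": ([0.20, 0.25, 0.15], [0.21, 0.18, 0.24]),  # overlapping
}
ranking = sorted(features, key=lambda f: f_score(*features[f]), reverse=True)
```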

  3. seawaveQ: an R package providing a model and utilities for analyzing trends in chemical concentrations in streams with a seasonal wave (seawave) and adjustment for streamflow (Q) and other ancillary variables

    USGS Publications Warehouse

    Ryberg, Karen R.; Vecchia, Aldo V.

    2013-01-01

    The seawaveQ R package fits a parametric regression model (seawaveQ) to pesticide concentration data from streamwater samples to assess variability and trends. The model incorporates the strong seasonality and high degree of censoring common in pesticide data and users can incorporate numerous ancillary variables, such as streamflow anomalies. The model is fitted to pesticide data using maximum likelihood methods for censored data and is robust in terms of pesticide, stream location, and degree of censoring of the concentration data. This R package standardizes this methodology for trend analysis, documents the code, and provides help and tutorial information, as well as providing additional utility functions for plotting pesticide and other chemical concentration data.
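
    seawaveQ itself is an R package; as a language-neutral illustration of the censored maximum-likelihood idea it builds on, the sketch below fits an exponential concentration model in which some samples are only known to fall below a detection limit (left-censored). The data, limit, and grid-search optimizer are all illustrative assumptions, not the seawaveQ model.

```python
import math

# Censored maximum likelihood sketch: detected values contribute the density,
# censored values ("< limit") contribute the CDF at the detection limit.

def loglik(lam, detected, n_censored, limit):
    ll = sum(math.log(lam) - lam * x for x in detected)
    ll += n_censored * math.log(1.0 - math.exp(-lam * limit))
    return ll

detected = [0.8, 1.5, 2.2, 0.9, 3.1]   # concentrations above the limit
n_censored, limit = 5, 0.5             # five samples reported only as "<0.5"

# Crude grid search stands in for a proper optimizer.
lam_hat = max((l / 1000 for l in range(1, 3001)),
              key=lambda lam: loglik(lam, detected, n_censored, limit))
```

Ignoring the censored samples would bias the rate low; including their CDF contribution pulls the estimate toward the many small values.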

  4. 48 CFR 39.103 - Modular contracting.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... CATEGORIES OF CONTRACTING ACQUISITION OF INFORMATION TECHNOLOGY General 39.103 Modular contracting. (a) This section implements Section 5202, Incremental Acquisition of Information Technology, of the Clinger-Cohen... technology. Consistent with the agency's information technology architecture, agencies should, to the maximum...

  5. Regularized maximum pure-state input-output fidelity of a quantum channel

    NASA Astrophysics Data System (ADS)

    Ernst, Moritz F.; Klesse, Rochus

    2017-12-01

    As a toy model for the capacity problem in quantum information theory we investigate finite and asymptotic regularizations of the maximum pure-state input-output fidelity F(N) of a general quantum channel N. We show that the asymptotic regularization F̃(N) is lower bounded by the maximum output ∞-norm ν∞(N) of the channel. For N being a Pauli channel, we find that both quantities are equal.
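
    The Pauli-channel case can be checked numerically. The sketch below estimates the maximum output ∞-norm of a depolarizing channel N(ρ) = (1−p)ρ + p·I/2 (one example of a Pauli channel) by sampling pure qubit inputs and taking the largest output eigenvalue; for this channel the closed form is 1 − p/2. This is an illustration, not the paper's proof.

```python
import cmath
import math
import random

def largest_eig_2x2(a, b, c, d):
    """Largest eigenvalue of the 2x2 Hermitian matrix [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    disc = cmath.sqrt(tr * tr - 4 * det)
    return max((tr + disc).real / 2, (tr - disc).real / 2)

def nu_inf_depolarizing(p, samples=500, seed=0):
    """Max output eigenvalue of N(rho) = (1-p) rho + p I/2 over pure inputs."""
    rng = random.Random(seed)
    best = 0.0
    for _ in range(samples):
        # random pure state |psi> = (cos t, e^{i phi} sin t)
        t, phi = rng.uniform(0, math.pi), rng.uniform(0, 2 * math.pi)
        a0, a1 = math.cos(t), cmath.exp(1j * phi) * math.sin(t)
        r00 = (1 - p) * abs(a0) ** 2 + p / 2
        r11 = (1 - p) * abs(a1) ** 2 + p / 2
        r01 = (1 - p) * a0 * a1.conjugate()
        best = max(best, largest_eig_2x2(r00, r01, r01.conjugate(), r11))
    return best

p = 0.3
nu_est = nu_inf_depolarizing(p)
nu_closed = 1 - p / 2   # output eigenvalues are exactly 1 - p/2 and p/2
```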

  6. 32 CFR 286.22 - General provisions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... OF INFORMATION ACT PROGRAM DOD FREEDOM OF INFORMATION ACT PROGRAM REGULATION Release and Processing Procedures § 286.22 General provisions. (a) Public information. (1) Since the policy of the Department of Defense is to make the maximum amount of information available to the public consistent with its other...

  7. Potentiometric surface in the Central Oklahoma (Garber-Wellington) aquifer, Oklahoma, 2009

    USGS Publications Warehouse

    Mashburn, Shana L.; Magers, Jessica

    2011-01-01

    A study of the hydrogeology of the Central Oklahoma aquifer was started in 2008 to provide the Oklahoma Water Resources Board (OWRB) hydrogeologic data and a groundwater flow model that can be used as a tool to help manage the aquifer. The 1973 Oklahoma water law requires the OWRB to do hydrologic investigations of Oklahoma's aquifers (termed 'groundwater basins') and to determine amounts of water that may be withdrawn by permitted water users. 'Maximum annual yield' is a term used by OWRB to describe the total amount of water that can be withdrawn from a specific aquifer in any year while allowing a minimum 20-year life of the basin (Oklahoma Water Resources Board, 2010). Currently (2010), the maximum annual yield has not been determined for the Central Oklahoma aquifer. Until the maximum annual yield determination is made, water users are issued a temporary permit by the OWRB for 2 acre-feet/acre per year. The objective of the study, in cooperation with the Oklahoma Water Resources Board, was to study the hydrogeology of the Central Oklahoma aquifer to provide information that will enable the OWRB to determine the maximum annual yield of the aquifer based on different proposed management plans. Groundwater flow models are typically used by the OWRB as a tool to help determine the maximum annual yield. This report presents the potentiometric surface of the Central Oklahoma aquifer based on water-level data collected in 2009 as part of the current (2010) hydrologic study. The U.S. Geological Survey (USGS) Hydrologic Investigations Atlas HA-724 by Christenson and others (1992) presents the 1986-87 potentiometric-surface map. This 1986-87 potentiometric-surface map was made as part of the USGS National Water-Quality Assessment pilot project for the Central Oklahoma aquifer that examined the geochemical and hydrogeological processes operating in the aquifer. 
An attempt was made to obtain water-level measurements for the 2009 potentiometric-surface map from the wells used for the 1986-87 potentiometric-surface map. Well symbols with circles on the 2009 potentiometric-surface map (fig. 1) indicate wells that were used for the 1986-87 potentiometric-surface map.

  8. 29 CFR 778.101 - Maximum nonovertime hours.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 3 2010-07-01 2010-07-01 false Maximum nonovertime hours. 778.101 Section 778.101 Labor... Requirements Introductory § 778.101 Maximum nonovertime hours. As a general standard, section 7(a) of the Act provides 40 hours as the maximum number that an employee subject to its provisions may work for an employer...

  9. A direct comparison of spine rotational stiffness and dynamic spine stability during repetitive lifting tasks.

    PubMed

    Graham, Ryan B; Brown, Stephen H M

    2012-06-01

    Stability of the spinal column is critical to bear loads, allow movement, and at the same time avoid injury and pain. However, there has been a debate in recent years as to how best to define and quantify spine stability, with the outcome being that different methods are used without a clear understanding of how they relate to one another. Therefore, the goal of the present study was to directly compare lumbar spine rotational stiffness, calculated with an EMG-driven biomechanical model, to local dynamic spine stability calculated using Lyapunov analyses of kinematic data, during a series of continuous dynamic lifting challenges. Twelve healthy male subjects performed 30 repetitive lifts under three varying load and three varying rate conditions. With an increase in the load lifted (constant rate) there was a significant increase in mean, maximum, and minimum spine rotational stiffness (p<0.001) and a significant increase in local dynamic stability (p<0.05); both stability measures were moderately to strongly related to one another (r=-0.55 to -0.71). With an increase in lifting rate (constant load), there was also a significant increase in mean and maximum spine rotational stiffness (p<0.01); however, there was a non-significant decrease in the minimum rotational stiffness and a non-significant decrease in local dynamic stability (p>0.05). Weak linear relationships were found for the varying rate conditions (r=-0.02 to -0.27). The results suggest that spine rotational stiffness and local dynamic stability are closely related to one another, as they provided similar information when movement rate was controlled. However, based on the results from the changing lifting rate conditions, it is evident that both models provide unique information and that future research is required to completely understand the relationship between the two models. 
Using both techniques concurrently may provide the best information regarding the true effects of (in)stability under different loading and movement scenarios, and in comparing healthy and clinical populations. Copyright © 2012 Elsevier Ltd. All rights reserved.
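
    The local dynamic stability measure rests on the maximum Lyapunov exponent: the average exponential rate at which nearby states diverge. The toy below estimates that exponent for the logistic map from its log-derivative along an orbit; it is a stand-in for the kinematic analysis (Lyapunov estimation from real movement data uses delay embedding and nearest-neighbour tracking, not shown).

```python
import math

# Lyapunov exponent sketch for f(x) = r x (1 - x): average log|f'(x)|
# along an orbit. Positive -> nearby trajectories diverge (locally unstable).

def lyapunov_logistic(r, x0=0.2, n=10000, burn=100):
    x, acc, count = x0, 0.0, 0
    for i in range(n):
        if i >= burn:                      # skip the transient
            acc += math.log(abs(r * (1 - 2 * x)))
            count += 1
        x = r * x * (1 - x)
    return acc / count

lam_chaotic = lyapunov_logistic(4.0)   # near log 2 > 0: chaotic regime
lam_stable = lyapunov_logistic(2.8)    # negative: perturbations decay
```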

  10. Distribution and variability of nitrogen and phosphorus in the alluvial, High Plains, Rush Springs, and Blaine aquifers in western Oklahoma

    USGS Publications Warehouse

    Becker, C.J.

    1994-01-01

    Aquifers are the primary source of water for drinking and agricultural purposes in western Oklahoma. Health concerns about consuming nitrogen and an increased reliance on ground water for drinking necessitate a better understanding of the cause and effect of contamination from nutrients. The purpose of this project was to compile nutrient data from the National Water Information System data base for the alluvial aquifers west of longitude 98 degrees W. and from three bedrock aquifers, the High Plains, Rush Springs, and Blaine, and to provide this information in a report for future projects and for the facilitation of nutrient source management. The scope of the work consisted of (1) compiling ground-water quality data concerning nitrogen and phosphorus ions, (2) constructing boxplots illustrating data variability, (3) constructing maps for each aquifer showing locations of wells where nitrogen and phosphorus ions were measured in ground water and where concentrations of nitrate and nitrite, reported as nitrogen, exceed the maximum contaminant level, and (4) calculating summary statistics. Nutrient data were obtained from the U.S. Geological Survey data base called the National Water Information System. Data were restricted to ground-water samples, but no restrictions were placed on well and water use or date and time of sampling. Compiled nutrient data consist of dissolved and total concentrations of the common nitrogen and phosphorus ions measured in ground water. For nitrogen these ions include nitrate, nitrite, ammonium, and nitrite plus nitrate. All concentrations are reported in milligrams per liter as nitrogen. Phosphorus in ground water is measured as the orthophosphate ion and is reported in milligrams per liter as phosphorus. Nutrient variability is illustrated by a standard boxplot. The data are presented by aquifer, or by hydrologic subregion for alluvial aquifers, with one boxplot constructed for each nutrient compound if more than four analyses are present. 
Maps for each aquifer show where nitrogen and phosphorus have been measured in ground water and where the concentrations of nitrate and nitrite exceed the maximum contaminant level. A statistical summary for each aquifer and subregion shows whether censored data were present, the number of samples in each data set, the largest minimum reporting level for each nutrient compound, the percentiles used to construct boxplots, and the minimum and maximum values. Also given are the number of wells sampled in each aquifer and the number of wells exceeding the maximum contaminant level.

  11. Three faces of entropy for complex systems: Information, thermodynamics, and the maximum entropy principle

    NASA Astrophysics Data System (ADS)

    Thurner, Stefan; Corominas-Murtra, Bernat; Hanel, Rudolf

    2017-09-01

    There are at least three distinct ways to conceptualize entropy: entropy as an extensive thermodynamic quantity of physical systems (Clausius, Boltzmann, Gibbs), entropy as a measure for information production of ergodic sources (Shannon), and entropy as a means for statistical inference on multinomial processes (Jaynes maximum entropy principle). Even though these notions represent fundamentally different concepts, the functional form of the entropy for thermodynamic systems in equilibrium, for ergodic sources in information theory, and for independent sampling processes in statistical systems, is degenerate, H(p) = -∑_i p_i log p_i. For many complex systems, which are typically history-dependent, nonergodic, and nonmultinomial, this is no longer the case. Here we show that for such processes, the three entropy concepts lead to different functional forms of entropy, which we will refer to as S_EXT for extensive entropy, S_IT for the source information rate in information theory, and S_MEP for the entropy functional that appears in the so-called maximum entropy principle, which characterizes the most likely observable distribution functions of a system. We explicitly compute these three entropy functionals for three concrete examples: for Pólya urn processes, which are simple self-reinforcing processes, for sample-space-reducing (SSR) processes, which are simple history-dependent processes that are associated with power-law statistics, and finally for multinomial mixture processes.
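
    The degenerate Shannon form is easy to evaluate, and a Pólya urn shows what "self-reinforcing" means in practice: each drawn ball is returned together with a copy of itself, so the process is history-dependent. A minimal sketch (colours and counts are illustrative):

```python
import math
import random
from collections import Counter

def shannon_entropy(p):
    """H(p) = -sum_i p_i log p_i (natural log)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def polya_urn(draws, seed=0):
    """Draw with reinforcement: every drawn ball is returned with a copy."""
    rng = random.Random(seed)
    urn = ["red", "blue"]            # start with one ball of each colour
    history = []
    for _ in range(draws):
        ball = rng.choice(urn)
        history.append(ball)
        urn.append(ball)             # reinforcement makes draws dependent
    return history

p_uniform = [0.25] * 4
H = shannon_entropy(p_uniform)       # equals log 4 for a uniform 4-state source
history = polya_urn(100)
freq = Counter(history)              # colour fractions settle at random levels
```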

  12. Upper Limb Asymmetry in the Sense of Effort Is Dependent on Force Level

    PubMed Central

    Mitchell, Mark; Martin, Bernard J.; Adamo, Diane E.

    2017-01-01

    Previous studies have shown that asymmetries in upper limb sensorimotor function are dependent on the source of sensory and motor information, hand preference and differences in hand strength. Further, the utilization of sensory and motor information and the mode of control of force may differ between the right hand/left hemisphere and left hand/right hemisphere systems. To more clearly understand the unique contribution of hand strength and intrinsic differences to the control of grasp force, we investigated hand/hemisphere differences when the source of force information was encoded at two different force levels, corresponding to 20 and 70% of the maximum voluntary contraction of the right and left hand of each participant. Eleven adult males who demonstrated a stronger right than left maximum grasp force were requested to match a right- or left-hand 20 or 70% maximal voluntary contraction reference force with the opposite hand. During the matching task, visual feedback corresponding to the production of the reference force was available and was then removed when the contralateral hand performed the match. The matching relative force error was significantly different between hands for the 70% MVC reference force but not for the 20% MVC reference force. Directional asymmetries, quantified as the matching force constant error, showed that right hand overshoots and left hand undershoots were force dependent and primarily due to greater undershoots when matching the right hand reference force with the left hand. Findings further suggest that the interaction between internal sources of information, such as efferent copy and proprioception, as well as hand strength differences, appears to be hand/hemisphere system dependent. 
Investigations of force matching tasks under conditions whereby force level is varied and visual feedback of the reference force is available provide critical baseline information for building effective interventions for asymmetric (stroke-related, Parkinson’s Disease) and symmetric (Amyotrophic Lateral Sclerosis) upper limb recovery in neurological conditions where the various sources of sensory-motor information have been significantly altered by the disease process. PMID:28491047

  13. Upper Limb Asymmetry in the Sense of Effort Is Dependent on Force Level.

    PubMed

    Mitchell, Mark; Martin, Bernard J; Adamo, Diane E

    2017-01-01

    Previous studies have shown that asymmetries in upper limb sensorimotor function are dependent on the source of sensory and motor information, hand preference and differences in hand strength. Further, the utilization of sensory and motor information and the mode of control of force may differ between the right hand/left hemisphere and left hand/right hemisphere systems. To more clearly understand the unique contribution of hand strength and intrinsic differences to the control of grasp force, we investigated hand/hemisphere differences when the source of force information was encoded at two different force levels, corresponding to 20 and 70% of the maximum voluntary contraction of the right and left hand of each participant. Eleven adult males who demonstrated a stronger right than left maximum grasp force were requested to match a right- or left-hand 20 or 70% maximal voluntary contraction reference force with the opposite hand. During the matching task, visual feedback corresponding to the production of the reference force was available and was then removed when the contralateral hand performed the match. The matching relative force error was significantly different between hands for the 70% MVC reference force but not for the 20% MVC reference force. Directional asymmetries, quantified as the matching force constant error, showed that right hand overshoots and left hand undershoots were force dependent and primarily due to greater undershoots when matching the right hand reference force with the left hand. Findings further suggest that the interaction between internal sources of information, such as efferent copy and proprioception, as well as hand strength differences, appears to be hand/hemisphere system dependent. 
Investigations of force matching tasks under conditions whereby force level is varied and visual feedback of the reference force is available provide critical baseline information for building effective interventions for asymmetric (stroke-related, Parkinson's Disease) and symmetric (Amyotrophic Lateral Sclerosis) upper limb recovery in neurological conditions where the various sources of sensory-motor information have been significantly altered by the disease process.

  14. Joint reconstruction of activity and attenuation in Time-of-Flight PET: A Quantitative Analysis.

    PubMed

    Rezaei, Ahmadreza; Deroose, Christophe M; Vahle, Thomas; Boada, Fernando; Nuyts, Johan

    2018-03-01

    Joint activity and attenuation reconstruction methods from time of flight (TOF) positron emission tomography (PET) data provide an effective solution to attenuation correction when no (or incomplete/inaccurate) information on the attenuation is available. One of the main barriers limiting their use in clinical practice is the lack of validation of these methods on a relatively large patient database. In this contribution, we aim at validating the activity reconstructions of the maximum likelihood activity reconstruction and attenuation registration (MLRR) algorithm on a whole-body patient data set. Furthermore, a partial validation (since the scale problem of the algorithm is avoided for now) of the maximum likelihood activity and attenuation reconstruction (MLAA) algorithm is also provided. We present a quantitative comparison of the joint reconstructions to the current clinical gold-standard maximum likelihood expectation maximization (MLEM) reconstruction with CT-based attenuation correction. Methods: The whole-body TOF-PET emission data of each patient data set is processed as a whole to reconstruct an activity volume covering all the acquired bed positions, which helps to reduce the problem of a scale per bed position in MLAA to a global scale for the entire activity volume. Three reconstruction algorithms are used: MLEM, MLRR and MLAA. A maximum likelihood (ML) scaling of the single scatter simulation (SSS) estimate to the emission data is used for scatter correction. The reconstruction results are then analyzed in different regions of interest. Results: The joint reconstructions of the whole-body patient data set provide better quantification in case of PET and CT misalignments caused by patient and organ motion. Our quantitative analysis shows a difference of -4.2% (±2.3%) and -7.5% (±4.6%) between the joint reconstructions of MLRR and MLAA compared to MLEM, averaged over all regions of interest, respectively. 
Conclusion: Joint activity and attenuation estimation methods provide a useful means to estimate the tracer distribution in cases where CT-based attenuation images are subject to misalignments or are not available. With an accurate estimate of the scatter contribution in the emission measurements, the joint TOF-PET reconstructions are within clinically acceptable accuracy. Copyright © 2018 by the Society of Nuclear Medicine and Molecular Imaging, Inc.
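
    The MLEM baseline referred to above has a compact multiplicative update: the activity estimate is scaled by the back-projected ratio of measured to forward-projected counts. A toy sketch on an invented 3×3 system (not the TOF, scatter-corrected clinical pipeline):

```python
# MLEM sketch: x_j <- x_j * backproject(y / (A x))_j / sensitivity_j.
# Noise-free toy system; converges to the true activity here.

def forward(A, x):
    return [sum(aij * xj for aij, xj in zip(row, x)) for row in A]

def mlem(A, y, iters=2000):
    m, n = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]
    x = [1.0] * n                          # uniform non-negative start
    for _ in range(iters):
        proj = forward(A, x)               # forward projection
        ratio = [yi / pi for yi, pi in zip(y, proj)]
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [xj * bj / sj for xj, bj, sj in zip(x, back, sens)]
    return x

A = [[0.8, 0.1, 0.1],
     [0.1, 0.8, 0.1],
     [0.1, 0.1, 0.8]]                      # invented system matrix
x_true = [10.0, 5.0, 2.0]
y = forward(A, x_true)                     # noise-free "measured" counts
x_hat = mlem(A, y)
```

A useful property visible even in the toy: each MLEM iteration preserves the total measured counts.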

  15. How to pass information and deliver energy to a network of implantable devices within the human body.

    PubMed

    Sun, Mingui; Hackworth, Steven A; Tang, Zhide; Gilbert, Gary; Cardin, Sylvain; Sclabassi, Robert J

    2007-01-01

    It has been envisioned that a body network can be built to collect data from, and transport information to, implanted miniature devices at multiple sites within the human body. Currently, two problems of utmost importance remain unsolved: 1) how to link information between a pair of implants at a distance, and 2) how to provide electric power to these implants, allowing them to function and communicate? In this paper, we present new solutions to these problems by minimizing the intra-body communication distances. We show that, based on a study of human anatomy, the maximum distance from the body surface to the deepest point inside the body is approximately 15 cm. This finding provides an upper bound for the lengths of communication pathways required to reach the body's interior. We also show that these pathways do not have to cross any joints within the body. In order to implement the envisioned body network, we present the design of a new device, called an energy pad. This small, lightweight device can easily interface with the skin to perform data communication with, and supply power to, miniature implants.

  16. Persistent maritime traffic monitoring for the Canadian Arctic

    NASA Astrophysics Data System (ADS)

    Ulmke, M.; Battistello, G.; Biermann, J.; Mohrdieck, C.; Pelot, R.; Koch, W.

    2017-05-01

    This paper presents results of the Canadian-German research project PASSAGES (Protection and Advanced Surveillance System for the Arctic: Green, Efficient, Secure) on an advanced surveillance system for the safety and security of maritime operations in Arctic areas. The motivation for a surveillance system for the Northwest Passage is the projected growth of maritime traffic along Arctic sea routes and the need to secure Canada's sovereignty by controlling its Arctic waters, as well as to protect the safety of international shipping and the intactness of the Arctic marine environment. To ensure border security and to detect and prevent illegal activities, it is necessary to develop a system for surveillance and reconnaissance that brings together all related means, assets, organizations, processes and structures to build one homogeneous and integrated system. The harsh Arctic conditions require a new surveillance concept that fuses heterogeneous sensor data, contextual information, and available pre-processed surveillance data, and combines all components to efficiently extract and provide the maximum available amount of information. The fusion of all these heterogeneous data and information will provide improved and comprehensive situation awareness for risk assessment and decision support of different stakeholder groups such as governmental authorities, commercial users and Northern communities.

  17. Maximum work extraction and implementation costs for nonequilibrium Maxwell's demons.

    PubMed

    Sandberg, Henrik; Delvenne, Jean-Charles; Newton, Nigel J; Mitter, Sanjoy K

    2014-10-01

    We determine the maximum amount of work extractable in finite time by a demon performing continuous measurements on a quadratic Hamiltonian system subjected to thermal fluctuations, in terms of the information extracted from the system. The maximum-work demon is found to apply a high-gain continuous feedback involving a Kalman-Bucy estimate of the system state, and it operates in nonequilibrium. A simple and concrete electrical implementation of the feedback protocol is proposed, which allows for analytic expressions of the flows of energy, entropy, and information inside the demon. This lets us show that any implementation of the demon must necessarily include an external power source, which we prove both from classical thermodynamic arguments and from a version of Landauer's memory erasure argument extended to nonequilibrium linear systems.

  18. Linking trading ratio with TMDL (total maximum daily load) allocation matrix and uncertainty analysis.

    PubMed

    Zhang, H X

    2008-01-01

    An innovative approach to total maximum daily load (TMDL) allocation and implementation is watershed-based pollutant trading. Given the inherent scientific uncertainty in the tradeoffs between point and nonpoint sources, setting trading ratios can be a contentious issue and has already been listed as an obstacle by several pollutant trading programs. One of the fundamental reasons that a trading ratio is often set high (e.g., greater than 2) is to allow for uncertainty in the level of control needed to attain water quality standards, and to provide a buffer in case traded reductions are less effective than expected. However, most of the available studies did not provide an approach to explicitly address the determination of the trading ratio, and uncertainty analysis has rarely been linked to it. This paper presents a practical methodology for estimating an "equivalent trading ratio (ETR)" and links uncertainty analysis with trading ratio determination in the TMDL allocation process. Determination of the ETR can provide a preliminary evaluation of the tradeoffs between various combinations of point and nonpoint source control strategies on ambient water quality improvement. A greater portion of nonpoint-source load reduction in the overall TMDL load reduction generally correlates with greater uncertainty and thus requires a greater trading ratio. Rigorous quantification of the trading ratio will enhance the scientific basis, and thus public acceptance, of more informed decisions in the overall watershed-based pollutant trading program. (c) IWA Publishing 2008.
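    The buffering role of a trading ratio greater than 1, as described above, is simple arithmetic. A minimal sketch with hypothetical figures (none of the numbers or variable names come from the paper):

    ```python
    # Hypothetical scenario, for illustration only: a point source must offset
    # 100 kg/yr of discharge by purchasing nonpoint-source reduction credits.
    required_point_equivalent = 100.0  # kg/yr of point-source reduction to offset
    trading_ratio = 2.0                # ratio > 1 buffers nonpoint-source uncertainty

    # The buyer must secure trading_ratio times as much nonpoint reduction ...
    nonpoint_reduction_needed = required_point_equivalent * trading_ratio

    # ... because each unit of nonpoint reduction earns only 1/ratio units of credit.
    credit_per_unit = 1.0 / trading_ratio
    ```

    With a ratio of 2, twice the nominal nonpoint reduction must be secured, which is exactly the "buffer" against traded reductions being less effective than expected.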

  19. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.

    The present study was conducted to investigate the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were examined using freshly prepared stomachs ligatured at the tip of a burette, and the maximum amount of distilled water held by each stomach was measured (ml). Stomach volume is correlated with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume = 0.0000089W^2.93. Gastric emptying time was estimated using a qualitative X-radiography technique, in which fish of various sizes were fed to satiation and imaged at different times since feeding. All experimental fish were fed to satiation using radio-opaque barium sulphate (BaSO4) paste injected into wet shrimp in proportion to body weight. The BaSO4 was found suitable for tracking the movement of feed/prey in the stomach over time, allowing the gastric emptying time of scat fish to be estimated. Qualitative X-radiography observation of gastric motility showed that fish (200 g) fed a maximum satiation meal (circa 11 g) completely emptied their stomachs within 30-36 h. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of scat fish in captivity.

  20. Food consumption and digestion time estimation of spotted scat, Scatophagus argus, using X-radiography technique

    NASA Astrophysics Data System (ADS)

    Hashim, Marina; Abidin, Diana Atiqah Zainal; Das, Simon K.; Ghaffar, Mazlan Abd.

    2014-09-01

    The present study was conducted to investigate the food consumption pattern and gastric emptying time of the spotted scat, Scatophagus argus, fed to satiation under laboratory conditions, using an X-radiography technique. Prior to the feeding experiment, the stomach volumes of fish of various sizes were examined using freshly prepared stomachs ligatured at the tip of a burette, and the maximum amount of distilled water held by each stomach was measured (ml). Stomach volume is correlated with maximum food intake (Smax), and maximum stomach distension can be estimated by the allometric model volume = 0.0000089W^2.93. Gastric emptying time was estimated using a qualitative X-radiography technique, in which fish of various sizes were fed to satiation and imaged at different times since feeding. All experimental fish were fed to satiation using radio-opaque barium sulphate (BaSO4) paste injected into wet shrimp in proportion to body weight. The BaSO4 was found suitable for tracking the movement of feed/prey in the stomach over time, allowing the gastric emptying time of scat fish to be estimated. Qualitative X-radiography observation of gastric motility showed that fish (200 g) fed a maximum satiation meal (circa 11 g) completely emptied their stomachs within 30-36 h. The results of the present study provide the first baseline information on the stomach volume and gastric emptying of scat fish in captivity.
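    The allometric stomach-volume model quoted in the two records above can be evaluated directly. A minimal sketch (the function name is ours; W is body weight in g, and the output is in ml, with the coefficient and exponent taken as reported in the abstract):

    ```python
    def stomach_volume_ml(weight_g):
        # allometric model reported in the abstract: volume = 0.0000089 * W**2.93
        return 0.0000089 * weight_g ** 2.93

    # predicted maximum stomach distension for the 200 g fish discussed in the record
    v200 = stomach_volume_ml(200.0)
    ```

    Because the exponent is close to 3, predicted stomach volume scales nearly isometrically with body weight.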

  1. 75 FR 77629 - Office of Special Education and Rehabilitative Services; Overview Information; National Institute...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-13

    ... DEPARTMENT OF EDUCATION Office of Special Education and Rehabilitative Services; Overview... Secretary for Special Education and Rehabilitative Services may change the maximum amount through a notice... Secretary for Special Education and Rehabilitative Services may change the maximum project period through a...

  2. 48 CFR 245.7304 - Informal bid procedures.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Inventory 245.7304 Informal bid procedures. (a) Upon approval of the plant clearance officer, the contractor...— (1) Maximum practical competition is maintained; (2) Sources solicited are recorded; and (3) Informal... plant clearance officer prior to soliciting bids from other prospective bidders. ...

  3. Knowledge of appropriate acetaminophen doses and potential toxicities in an adult clinic population.

    PubMed

    Stumpf, Janice L; Skyles, Amy J; Alaniz, Cesar; Erickson, Steven R

    2007-01-01

    To evaluate the knowledge of appropriate doses and potential toxicities of acetaminophen and assess the ability to recognize products containing acetaminophen in an adult outpatient setting. Cross-sectional, prospective study. University adult general internal medicine (AGIM) clinic. 104 adult patients presenting to the clinic over consecutive weekdays in December 2003. Three-page, written questionnaire. Ability of patients to identify maximum daily doses and potential toxicities of acetaminophen and recognize products that contain acetaminophen. A large percentage of participants (68.3%) reported pain on a daily or weekly basis, and 78.9% reported use of acetaminophen in the past 6 months. Only 2 patients correctly identified the maximum daily dose of regular acetaminophen, and just 3 correctly identified the maximum dose of extra-strength acetaminophen. Furthermore, 28 patients were unsure of the maximum dose of either product. Approximately 63% of participants either had not received or were unsure whether information on the possible danger of high doses of acetaminophen had been previously provided to them. When asked to identify potential problems associated with high doses of acetaminophen, 43.3% of patients noted the liver would be affected. The majority of the patients (71.2%) recognized Tylenol as containing acetaminophen, but fewer than 15% correctly identified Vicodin, Darvocet, Tylox, Percocet, and Lorcet as containing acetaminophen. Although nearly 80% of this AGIM population reported recent acetaminophen use, their knowledge of the maximum daily acetaminophen doses and potential toxicities associated with higher doses was poor and appeared to be independent of education level, age, and race. This indicates a need for educational efforts to all patients receiving acetaminophen-containing products, especially since the ability to recognize multi-ingredient products containing acetaminophen was likewise poor.

  4. Strength determination of brittle materials as curved monolithic structures.

    PubMed

    Hooi, P; Addison, O; Fleming, G J P

    2014-04-01

    The dental literature is replete with "crunch the crown" monotonic load-to-failure studies of all-ceramic materials despite fracture behavior being dominated by the indenter contact surface. Load-to-failure data provide no information on stress patterns, and comparisons among studies are impossible owing to variable testing protocols. We investigated the influence of nonplanar geometries on the maximum principal stress of curved discs tested in biaxial flexure in the absence of analytical solutions. Radii of curvature analogous to elements of complex dental geometries and a finite element analysis method were integrated with experimental testing as a surrogate solution to calculate the maximum principal stress at failure. We employed soda-lime glass discs, a planar control (group P, n = 20), with curvature applied to the remaining discs by slump forming to different radii of curvature (30, 20, 15, and 10 mm; groups R30-R10). The mean deflection (group P) and radii of curvature obtained on slumping (groups R30-R10) were determined by profilometry before and after annealing and surface treatment protocols. Finite element analysis used the biaxial flexure load-to-failure data to determine the maximum principal stress at failure. Mean maximum principal stresses and load to failure were analyzed with one-way analyses of variance and post hoc Tukey tests (α = 0.05). The measured radii of curvature differed significantly among groups, and the radii of curvature were not influenced by annealing. Significant increases in the mean load to failure were observed as the radius of curvature was reduced. The maximum principal stress did not demonstrate sensitivity to radius of curvature. The findings highlight the sensitivity of failure load to specimen shape. The data also support the synergistic use of bespoke computational analysis with conventional mechanical testing and highlight a solution to complications with complex specimen geometries.

  5. First annual register of allergenic pollen in Talca, Chile.

    PubMed

    Mardones, P; Grau, M; Araya, J; Córdova, A; Pereira, I; Peñailillo, P; Silva, R; Moraga, A; Aguilera-Insunza, R; Yepes-Nuñez, J J; Palomo, I

    2013-01-01

    There are no data on atmospheric pollen in Talca. In the present work, our aim is to describe the amount of pollen grains in the atmosphere of the city of Talca likely to cause pollinosis in its inhabitants. A volumetric Hirst sampler (Burkard seven-day recording device) was used to study pollen levels. It was placed in the centre of Talca from May 2007 to April 2008. The highest airborne presence of pollen, as measured in weekly averages, was Platanus acerifolia, with a maximum weekly daily average of 203 grains/m³ registered during September and October. The second highest was Acer pseudoplatanus, with a maximum weekly daily average of 116 grains/m³. Populus spp. had a maximum weekly daily average of 103 grains/m³. Olea europaea reached 19 grains/m³ in November. Grasses presented high pollen counts, with a maximum weekly daily average of 27 grains/m³ from the end of August until the end of January. Pollens of Plantago spp., Rumex acetosella, and Chenopodium spp. had a similar distribution and were present from October to April, with maximum weekly daily averages of 7 grains/m³, 7 grains/m³, and 3 grains/m³, respectively. Significant concentrations of Ambrosia artemisiifolia were detected from February until April. The population of Talca was exposed to high concentrations of allergenic pollen, such as P. acerifolia, A. pseudoplatanus, and grasses, in the months of August through November. The detection of O. europaea and A. artemisiifolia is important as these are emergent pollens in the city of Talca. Aerobiological monitoring will provide the community with reliable information about the levels of allergenic pollens, improving treatment and quality of life of patients with respiratory allergy. Copyright © 2011 SEICAP. Published by Elsevier España. All rights reserved.

  6. Anatomical and neuromuscular variables strongly predict maximum knee extension torque in healthy men.

    PubMed

    Trezise, J; Collier, N; Blazevich, A J

    2016-06-01

    This study examined the relative influence of anatomical and neuromuscular variables on maximal isometric and concentric knee extensor torque and provided a comparative dataset for healthy young males. Quadriceps cross-sectional area (CSA), fascicle length (lf) and fascicle angle (θf) from the four quadriceps components; agonist (EMG:M) and antagonist muscle activity and percent voluntary activation (%VA); patellar tendon moment arm distance (MA); and maximal voluntary isometric and concentric (60°·s⁻¹) torques were measured in 56 men. Linear regression models predicting maximum torque were ranked using Akaike's Information Criterion corrected for small samples (AICc), and Pearson's correlation coefficients assessed relationships between variables. The best-fit models explained up to 72% of the variance in maximal voluntary knee extension torque. The combination 'CSA + θf + EMG:M + %VA' best predicted maximum isometric torque (R² = 72%, AICc weight = 0.38) and 'CSA + θf + MA' (R² = 65%, AICc weight = 0.21) best predicted maximum concentric torque. Proximal quadriceps CSA was included in all models rather than the traditionally used mid-muscle CSA. Fascicle angle appeared consistently in all models despite its weak correlation with maximum torque in isolation, emphasising the importance of examining interactions among variables. While muscle activity was important for torque prediction in both contraction modes, MA strongly influenced only maximal concentric torque. These models identify the main sources of inter-individual differences strongly influencing maximal knee extension torque production in healthy men. The comparative dataset allows the identification of potential variables to target (i.e. weaknesses) in individuals.
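    The AICc-based model ranking described above can be sketched generically. A minimal illustration, assuming ordinary least-squares fits; only n = 56 and the candidate variable sets come from the record, while the residual sums of squares are made up for illustration:

    ```python
    import math

    def aicc(rss, n, k):
        # AIC for a least-squares fit (n observations, k parameters),
        # plus the small-sample correction term
        aic = n * math.log(rss / n) + 2 * k
        return aic + (2 * k * (k + 1)) / (n - k - 1)

    def akaike_weights(scores):
        # relative likelihoods of the candidate models, normalized to sum to 1
        best = min(scores)
        rel = [math.exp(-(s - best) / 2.0) for s in scores]
        total = sum(rel)
        return [r / total for r in rel]

    # hypothetical residual sums of squares for three nested candidate models
    candidates = {
        "CSA":                      (5200.0, 2),
        "CSA + theta_f":            (4100.0, 3),
        "CSA + theta_f + EMG + VA": (2900.0, 5),
    }
    scores = {name: aicc(rss, 56, k) for name, (rss, k) in candidates.items()}
    weights = dict(zip(scores, akaike_weights(list(scores.values()))))
    ```

    The model with the lowest AICc receives the largest Akaike weight, mirroring how the study reports weights of 0.38 and 0.21 for its best isometric and concentric models.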

  7. A deployment of broadband seismic stations in two deep gold mines, South Africa

    USGS Publications Warehouse

    McGarr, Arthur F.; Boettcher, Margaret S.; Fletcher, Jon Peter B.; Johnston, Malcolm J.; Durrheim, R.; Spottiswoode, S.; Milev, A.

    2009-01-01

    In-mine seismic networks throughout the TauTona and Mponeng gold mines provide precise locations and seismic source parameters of earthquakes. They also support small-scale experimental projects, including NELSAM (Natural Earthquake Laboratory in South African Mines), which is intended to record, at close hand, seismic rupture of a geologic fault that traverses the project region near the deepest part of TauTona. To resolve some questions regarding the in-mine and NELSAM networks, we deployed four portable broadband seismic stations at deep sites within TauTona and Mponeng for one week during September 2007 and recorded ground acceleration. Moderately large earthquakes within our temporary network were recorded with sufficiently high signal-to-noise that we were able to integrate the acceleration to ground velocity and displacement, from which moment tensors could be determined. We resolved the questions concerning the NELSAM and in-mine networks by using these moment tensors to calculate synthetic seismograms at various network recording sites for comparison with the ground motion recorded at the same locations. We also used the peak velocity of the S wave pulse, corrected for attenuation with distance, to estimate the maximum slip within the rupture zone of an earthquake. We then combined the maximum slip and seismic moment with results from laboratory friction experiments to estimate maximum slip rates within the same high-slip patches of the rupture zone. For the four largest earthquakes recorded within our network, all with magnitudes near 2, these inferred maximum slips range from 4 to 27 mm and the corresponding maximum slip rates range from 1 to 6 m/s. These results, in conjunction with information from previous ground motion studies, indicate that underground support should be capable of withstanding peak ground velocities of at least 5 m/s.

  8. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  9. Bioprocess development workflow: Transferable physiological knowledge instead of technological correlations.

    PubMed

    Reichelt, Wieland N; Haas, Florian; Sagmeister, Patrick; Herwig, Christoph

    2017-01-01

    Microbial bioprocesses need to be designed to be transferable from lab scale to production scale as well as between setups. Although substantial effort is invested in controlling technological parameters, usually the only true constant in a process is the actual producer of the product: the cell. Hence, instead of solely controlling technological process parameters, the focus should increasingly be laid on physiological parameters. This contribution aims at illustrating a workflow of data life-cycle management with special focus on physiology. Information processing condenses the data into physiological variables, while information mining condenses the variables further into physiological descriptors. This basis facilitates data analysis aimed at a physiological explanation for observed phenomena in productivity. Targeting transferability, we demonstrate this workflow using an industrially relevant Escherichia coli process for recombinant protein production and substantiate the following three points: (1) The postinduction phase is independent, in terms of productivity and physiology, of the preinduction variables specific growth rate and biomass at induction. (2) The specific substrate uptake rate during the induction phase was found to significantly impact the maximum specific product titer. (3) The time point of maximum specific titer can be predicted by an easily accessible physiological variable: while the maximum specific titers were reached at different time points (19.8 ± 7.6 h), those maxima were all reached within a very narrow window of cumulatively consumed substrate dSn (3.1 ± 0.3 g/g). Concluding, this contribution provides a workflow for gaining a physiological view of the process and illustrates its potential benefits. © 2016 American Institute of Chemical Engineers Biotechnol. Prog., 33:261-270, 2017. © 2016 American Institute of Chemical Engineers.

  10. Likelihood of Tree Topologies with Fossils and Diversification Rate Estimation.

    PubMed

    Didier, Gilles; Fau, Marine; Laurin, Michel

    2017-11-01

    Since the diversification process cannot be directly observed at the human scale, it has to be studied from the information available, namely the extant taxa and the fossil record. In this sense, phylogenetic trees including both extant taxa and fossils are the most complete representations of the diversification process that one can get. Such phylogenetic trees can be reconstructed from molecular and morphological data, to some extent. Among the temporal information of such phylogenetic trees, fossil ages are by far the most precisely known (divergence times are inferences calibrated mostly with fossils). We propose here a method to compute the likelihood of a phylogenetic tree with fossils in which the only considered time information is the fossil ages, and apply it to the estimation of the diversification rates from such data. Since it is required in our computation, we provide a method for determining the probability of a tree topology under the standard diversification model. Testing our approach on simulated data shows that the maximum likelihood rate estimates from the phylogenetic tree topology and the fossil dates are almost as accurate as those obtained by taking into account all the data, including the divergence times. Moreover, they are substantially more accurate than the estimates obtained only from the exact divergence times (without taking into account the fossil record). We also provide an empirical example composed of 50 Permo-Carboniferous eupelycosaur (early synapsid) taxa ranging in age from about 315 Ma (Late Carboniferous) to 270 Ma (shortly after the end of the Early Permian). Our analyses suggest a speciation (cladogenesis, or birth) rate of about 0.1 per lineage and per myr, a marginally lower extinction rate, and a considerable hidden paleobiodiversity of early synapsids. [Extinction rate; fossil ages; maximum likelihood estimation; speciation rate.]. © The Author(s) 2017. Published by Oxford University Press, on behalf of the Society of Systematic Biologists. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  11. Universally-Usable Interactive Electronic Physics Instructional And Educational Materials

    NASA Astrophysics Data System (ADS)

    Gardner, John

    2000-03-01

    Recent developments of technologies that promote full accessibility of electronic information by future generations of people with print disabilities will be described. ("Print disabilities" include low vision, blindness, and dyslexia.) The guiding philosophy of these developments is that information should be created and transmitted in a form that is as display-independent as possible, and that the user should have maximum freedom over how that information is to be displayed. This philosophy leads to maximum usability by everybody and is, in the long run, the only way to assure truly equal access. Research efforts to be described include access to mathematics and scientific notation and to graphs, tables, charts, diagrams, and general object-oriented graphics.

  12. Knowledge of appropriate acetaminophen use: A survey of college-age women.

    PubMed

    Stumpf, Janice L; Liao, Allison C; Nguyen, Stacy; Skyles, Amy J; Alaniz, Cesar

    To evaluate college-age women's knowledge of appropriate doses and potential toxicities of acetaminophen, competency in interpreting Drug Facts label dosing information, and ability to recognize products containing acetaminophen. In this cross-sectional prospective study, a 20-item written survey was provided to female college students at a University of Michigan fundraising event in March 2015. A total of 203 female college students, 18-24 years of age, participated in the study. Pain was experienced on a daily or weekly basis by 22% of the subjects over the previous 6 months, and 83% reported taking acetaminophen. The maximum 3-gram daily dose of extra-strength acetaminophen was correctly identified by 64 participants; an additional 51 subjects indicated the generally accepted 4 grams daily as the maximum dose. When provided with the Tylenol Drug Facts label, 68.5% correctly identified the maximum amount of regular-strength acetaminophen recommended for a healthy adult. Hepatotoxicity was associated with high acetaminophen doses by 63.6% of participants, significantly more than those who selected distracter responses (P < 0.001). Knowledge of liver damage as a potential toxicity was correlated with age 20 years and older (P < 0.001) but was independent from race and ethnicity and level of alcohol consumption. Although more than one-half of the subjects (58.6%) recognized that Tylenol contained acetaminophen, fewer than one-fourth correctly identified other acetaminophen-containing products. Despite ongoing educational campaigns, a large proportion of the college-age women who participated in our study did not know and could not interpret the maximum recommended daily dose from Drug Facts labeling, did not know that liver damage was a potential toxicity of acetaminophen, and could not recognize acetaminophen-containing products. These data suggest a continued role for pharmacists in educational efforts targeted to college-age women. Copyright © 2018 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  13. Assessing the performance of winter footwear using a new maximum achievable incline method.

    PubMed

    Hsu, Jennifer; Li, Yue; Dutta, Tilak; Fernie, Geoff

    2015-09-01

    More informative tests of winter footwear performance are required in order to identify footwear that will prevent injurious slips and falls on icy conditions. In this study, eight participants tested four styles of winter boots on smooth wet ice. The surface was progressively tilted to create increasing longitudinal and cross-slopes until participants could no longer continue standing or walking. Maximum achievable incline angles provided consistent measures of footwear slip resistance and demonstrated better resolution than mechanical tests. One footwear outsole material and tread combination outperformed the others on wet ice allowing participants to successfully walk on steep longitudinal slopes of 17.5° ± 1.9° (mean ± SD). By further exploiting the methodology to include additional surfaces and contaminants, such tests could be used to optimize tread designs and materials that are ideal for reducing the risk of slips and falls. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  14. Estimation of potential maximum biomass of trout in Wyoming streams to assist management decisions

    USGS Publications Warehouse

    Hubert, W.A.; Marwitz, T.D.; Gerow, K.G.; Binns, N.A.; Wiley, R.W.

    1996-01-01

    Fishery managers can benefit from knowledge of the potential maximum biomass (PMB) of trout in streams when making decisions on the allocation of resources to improve fisheries. Resources are most likely to be expended on streams with high PMB and with large differences between PMB and currently measured biomass. We developed and tested a model that uses four easily measured habitat variables to estimate PMB (the upper 90th percentile of predicted mean biomass) of trout (Oncorhynchus spp., Salmo trutta, and Salvelinus fontinalis) in Wyoming streams. The habitat variables were proportion of cover, elevation, wetted width, and channel gradient. The PMB model was constructed from data on 166 stream reaches throughout Wyoming and validated on an independent data set of 50 stream reaches. Prediction of PMB, in combination with estimation of current biomass and information on habitat quality, can provide managers with insight into the extent to which management actions may enhance trout biomass.

  15. Changes in Phenolic Compounds and Phytotoxicity of the Spanish-Style Green Olive Processing Wastewaters by Aspergillus niger B60.

    PubMed

    Papadaki, Eugenia; Tsimidou, Maria Z; Mantzouridou, Fani Th

    2018-05-16

    This study systematically investigated the degradation kinetics and changes in the composition of phenolic compounds in Spanish-style Chalkidiki green olive processing wastewaters (TOPWs) during treatment using Aspergillus niger B60. The fungal growth and phenol degradation kinetics were described sufficiently by the Logistic and Edward models, respectively. The maximum specific growth rate (2.626 1/d) and the maximum degradation rate (0.690 1/h) were observed at 1500 mg/L of total polar phenols, indicating the applicability of the process in TOPWs with a high concentration of phenolic compounds. Hydroxytyrosol and the other simple phenols were depleted after 3-8 days. The newly formed secoiridoid derivatives identified by HPLC-DAD-FLD and LC-MS are likely produced by oleoside and oleuropein aglycon via the action of fungal β-glucosidase and esterase. The treated streams were found to be less phytotoxic with reduced chemical oxygen demand by up to 76%. Findings will provide useful information for the subsequent treatment of residual contaminants.
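    The logistic growth kinetics used above have a standard closed form. A minimal sketch, using the reported maximum specific growth rate of 2.626 1/d but hypothetical initial and maximum biomass values (the function name and those two values are ours, not from the paper):

    ```python
    import math

    def logistic_biomass(t, x0, xmax, mu):
        # closed-form solution of the logistic equation dX/dt = mu * X * (1 - X / xmax)
        return xmax / (1.0 + (xmax / x0 - 1.0) * math.exp(-mu * t))

    # mu_max = 2.626 1/d as reported; x0 and xmax (g/L) are hypothetical
    mu_max, x0, xmax = 2.626, 0.1, 10.0
    curve = [logistic_biomass(t, x0, xmax, mu_max) for t in (0.0, 1.0, 2.0, 5.0)]
    ```

    The curve starts at x0, grows nearly exponentially at rate mu while biomass is small, and plateaus at xmax, which is why the logistic model describes batch fungal growth well.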

  16. Algorithms and Complexity Results for Genome Mapping Problems.

    PubMed

    Rajaraman, Ashok; Zanetti, Joao Paulo Pereira; Manuch, Jan; Chauve, Cedric

    2017-01-01

    Genome mapping algorithms aim at computing an ordering of a set of genomic markers based on local ordering information such as adjacencies and intervals of markers. In most genome mapping models, markers are assumed to occur uniquely in the resulting map. We introduce algorithmic questions that consider repeats, i.e., markers that can have several occurrences in the resulting map. We show that, provided with an upper bound on the copy number of repeated markers and with intervals that span full repeat copies, called repeat spanning intervals, the problem of deciding if a set of adjacencies and repeat spanning intervals admits a genome representation is tractable if the target genome can contain linear and/or circular chromosomal fragments. We also show that extracting a maximum cardinality or weight subset of repeat spanning intervals given a set of adjacencies that admits a genome realization is NP-hard but fixed-parameter tractable in the maximum copy number and the number of adjacent repeats, and tractable if intervals contain a single repeated marker.

  17. Principle of maximum entropy for reliability analysis in the design of machine components

    NASA Astrophysics Data System (ADS)

    Zhang, Yimin

    2018-03-01

    We studied the reliability of machine components with parameters that follow an arbitrary statistical distribution using the principle of maximum entropy (PME). We used PME to select the statistical distribution that best fits the available information. We also established a probability density function (PDF) and a failure probability model for the parameters of mechanical components using the concept of entropy and the PME. We obtained the first four moments of the state function for reliability analysis and design. Furthermore, we attained an estimate of the PDF with the fewest human bias factors using the PME. This function was used to calculate the reliability of the machine components, including a connecting rod, a vehicle half-shaft, a front axle, a rear axle housing, and a leaf spring, which have parameters that typically follow a non-normal distribution. Simulations were conducted for comparison. This study provides a design methodology for the reliability of mechanical components for practical engineering projects.
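
    The core computation the abstract describes — selecting the distribution that maximizes entropy subject to known constraints — can be sketched for a discrete support with a single mean constraint (a generic illustration of the PME, not the authors' mechanical-reliability formulation):

    ```python
    import math

    def maxent_with_mean(xs, target_mean, tol=1e-10):
        """Maximum-entropy pmf on support xs subject to sum(p*x) = target_mean.
        The solution has the exponential-family form p_i ∝ exp(-lam * x_i);
        the multiplier lam is found by bisection."""
        def mean_for(lam):
            w = [math.exp(-lam * x) for x in xs]
            z = sum(w)
            return sum(wi * xi for wi, xi in zip(w, xs)) / z
        lo, hi = -50.0, 50.0
        while hi - lo > tol:
            mid = 0.5 * (lo + hi)
            # mean_for is decreasing in lam: too small a lam gives too large a mean
            if mean_for(mid) > target_mean:
                lo = mid
            else:
                hi = mid
        lam = 0.5 * (lo + hi)
        w = [math.exp(-lam * x) for x in xs]
        z = sum(w)
        return [wi / z for wi in w]

    xs = [0, 1, 2, 3, 4]
    p = maxent_with_mean(xs, 1.0)
    entropy = -sum(pi * math.log(pi) for pi in p)
    ```

    The constrained distribution necessarily has lower entropy than the uniform one on the same support, which is what "fewest bias factors" means operationally: no structure beyond what the constraints force.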

  18. Image degradation by glare in radiologic display devices

    NASA Astrophysics Data System (ADS)

    Badano, Aldo; Flynn, Michael J.

    1997-05-01

    No electronic devices are currently available that can display digital radiographs without loss of visual information compared to traditional transilluminated film. Light scattering within the glass faceplate of cathode-ray tube (CRT) devices causes excessive glare that reduces image contrast. This glare, along with ambient light reflection, has been recognized as a significant limitation for radiologic applications. Efforts to control the effects of glare and ambient light reflection in CRTs include the use of absorptive glass and thin film coatings. In the near future, flat panel displays (FPDs) with thin emissive structures should provide very low-glare, high-performance devices. We have used an optical Monte Carlo simulation to evaluate the effect of glare on image quality for typical CRT and flat panel display devices. The trade-off between display brightness and image contrast is described. For CRT systems, achieving a good glare ratio requires a reduction of brightness to 30-40 percent of the maximum potential brightness. For FPD systems, similar glare performance can be achieved while maintaining 80 percent of the maximum potential brightness.

  19. Maximum likelihood techniques applied to quasi-elastic light scattering

    NASA Technical Reports Server (NTRS)

    Edwards, Robert V.

    1992-01-01

    An automatic procedure is needed for reliably estimating the quality of particle-size measurements from QELS (Quasi-Elastic Light Scattering). Obtaining the measurement itself, before any error estimates can be made, is difficult because it derives very indirectly from a signal generated by the motion of particles in the system and requires the solution of an inverse problem. The eigenvalue structure of the transform that generates the signal is such that an arbitrarily small amount of noise can obliterate parts of any practical inversion spectrum. This project uses Maximum Likelihood Estimation (MLE) as a framework to generate a theory and a functioning set of software that oversees the measurement process and extracts the particle size information while providing error estimates for those measurements. The theory involved verifying a correct form of the covariance matrix for the noise on the measurement and then estimating particle size parameters using a modified histogram approach.

  20. A Synthesis of Solar Cycle Prediction Techniques

    NASA Technical Reports Server (NTRS)

    Hathaway, David H.; Wilson, Robert M.; Reichmann, Edwin J.

    1999-01-01

    A number of techniques currently in use for predicting solar activity on a solar cycle timescale are tested with historical data. Some techniques, e.g., regression and curve fitting, work well as solar activity approaches maximum and provide a month-by-month description of future activity, while others, e.g., geomagnetic precursors, work well near solar minimum but only provide an estimate of the amplitude of the cycle. A synthesis of different techniques is shown to provide a more accurate and useful forecast of solar cycle activity levels. A combination of two uncorrelated geomagnetic precursor techniques provides a more accurate prediction for the amplitude of a solar activity cycle at a time well before activity minimum. This combined precursor method gives a smoothed sunspot number maximum of 154 plus or minus 21 at the 95% level of confidence for the next cycle maximum. A mathematical function dependent on the time of cycle initiation and the cycle amplitude is used to describe the level of solar activity month by month for the next cycle. As the time of cycle maximum approaches a better estimate of the cycle activity is obtained by including the fit between previous activity levels and this function. This Combined Solar Cycle Activity Forecast gives, as of January 1999, a smoothed sunspot maximum of 146 plus or minus 20 at the 95% level of confidence for the next cycle maximum.
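
    The combination of two uncorrelated precursor predictions can be illustrated with standard inverse-variance weighting of independent estimates (a generic sketch; the paper does not specify this exact weighting, and the numbers below are hypothetical, not from the paper):

    ```python
    def combine_uncorrelated(est1, var1, est2, var2):
        """Inverse-variance weighted combination of two uncorrelated estimates.
        The combined variance is always smaller than either input variance."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        combined = (w1 * est1 + w2 * est2) / (w1 + w2)
        combined_var = 1.0 / (w1 + w2)
        return combined, combined_var

    # Hypothetical precursor predictions of the smoothed sunspot number maximum.
    amp, var = combine_uncorrelated(150.0, 15.0**2, 160.0, 20.0**2)
    ```

    This is why combining two uncorrelated precursors tightens the prediction: the combined variance is the harmonic-style sum 1/(1/σ₁² + 1/σ₂²), below both inputs.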

  1. An ERTS-1 investigation for Lake Ontario and its basin

    NASA Technical Reports Server (NTRS)

    Polcyn, F. C.; Falconer, A. (Principal Investigator); Wagner, T. W.; Rebel, D. L.

    1975-01-01

    The author has identified the following significant results. Methods of manual, semi-automatic, and automatic (computer) data processing were evaluated, as were the requirements for spatial physiographic and limnological information. The coupling of specially processed ERTS data with simulation models of the watershed precipitation/runoff process provides potential for water resources management. Optimal and full use of the data requires a mix of data processing and analysis techniques, including single band editing, two band ratios, and multiband combinations. A combination of maximum likelihood ratio and near-IR/red band ratio processing was found to be particularly useful.

  2. Effects of immersion depth on super-resolution properties of index-different microsphere-assisted nanoimaging

    NASA Astrophysics Data System (ADS)

    Zhou, Yi; Tang, Yan; He, Yu; Liu, Xi; Hu, Song

    2018-03-01

    In applications of microsphere-assisted super-resolution imaging such as biomedical visualization and microfluidic detection, liquids are widely used as background media. For the first time, we quantitatively demonstrate that the maximum irradiances, focal lengths, and waists of photonic nanojets (PNJs) vary systematically with immersion depth (IMD). The experimental observations also illustrate the trends of the lateral magnification and field of view (FOV) during the gradual evaporation of ethyl alcohol. This work provides exact quantitative information for the proper selection of microspheres and IMD for the high-quality discernment of nanostructures.

  3. Methods and application of system identification in shock and vibration.

    NASA Technical Reports Server (NTRS)

    Collins, J. D.; Young, J. P.; Kiefling, L.

    1972-01-01

    A logical picture is presented of current useful system identification techniques in the shock and vibration field. A technology tree diagram is developed for the purpose of organizing and categorizing the widely varying approaches according to the fundamental nature of each. Specific examples of accomplished activity for each identification category are noted and discussed. To provide greater insight into the most current trends in the system identification field, a somewhat detailed description is presented of the essential features of a recently developed technique that is based on making the maximum use of all statistically known information about a system.

  4. Natural environment design criteria for the Space Station definition and preliminary design

    NASA Astrophysics Data System (ADS)

    Vaughan, W. W.; Green, C. E.

    1985-03-01

    The natural environment design criteria for the Space Station Program (SSP) definition and preliminary design are presented. Information on the atmospheric, dynamic and thermodynamic environments, meteoroids, radiation, magnetic fields, physical constants, etc. is provided with the intention of enabling all groups involved in the definition and preliminary design studies to proceed with a common and consistent set of natural environment criteria requirements. The space station program elements (SSPE) shall be designed with no operational sensitivity to natural environment conditions during assembly, checkout, stowage, launch, and orbital operations to the maximum degree practical.

  5. Natural environment design criteria for the Space Station definition and preliminary design

    NASA Technical Reports Server (NTRS)

    Vaughan, W. W.; Green, C. E.

    1985-01-01

    The natural environment design criteria for the Space Station Program (SSP) definition and preliminary design are presented. Information on the atmospheric, dynamic and thermodynamic environments, meteoroids, radiation, magnetic fields, physical constants, etc. is provided with the intention of enabling all groups involved in the definition and preliminary design studies to proceed with a common and consistent set of natural environment criteria requirements. The space station program elements (SSPE) shall be designed with no operational sensitivity to natural environment conditions during assembly, checkout, stowage, launch, and orbital operations to the maximum degree practical.

  6. 3D-printed upper limb prostheses: a review.

    PubMed

    Ten Kate, Jelle; Smit, Gerwin; Breedveld, Paul

    2017-04-01

    This paper aims to provide an overview with quantitative information of existing 3D-printed upper limb prostheses. We will identify the benefits and drawbacks of 3D-printed devices to enable improvement of current devices based on the demands of prostheses users. A review was performed using Scopus, Web of Science and websites related to 3D-printing. Quantitative information on the mechanical and kinematic specifications and the 3D-printing technology used was extracted from the papers and websites. The overview (58 devices) provides the general specifications, the mechanical and kinematic specifications of the devices and information regarding the 3D-printing technology used for hands. The overview shows prostheses for all different upper limb amputation levels with different types of control and a maximum material cost of $500. A wide range of prostheses have been 3D-printed, of which the majority are used by children. Evidence with respect to the user acceptance, functionality and durability of the 3D-printed hands is lacking. Contrary to what is often claimed, 3D-printing is not necessarily cheap, e.g., injection moulding can be cheaper. Conversely, 3D-printing provides a promising possibility for individualization, e.g., personalized socket, colour, shape and size, without the need for adjusting the production machine. Implications for rehabilitation: Upper limb deficiency is a condition in which a part of the upper limb is missing as a result of a congenital limb deficiency or as a result of an amputation. A prosthetic hand can restore some of the functions of a missing limb and help the user in performing activities of daily living. Using 3D-printing technology is one of the solutions to manufacture hand prostheses. This overview provides information about the general, mechanical and kinematic specifications of all the devices, and it provides information about the 3D-printing technology used to print the hands.

  7. Opinion of the Scientific Committee on Consumer Safety (SCCS) - Revision of the opinion on o-Phenylphenol, Sodium o-phenylphenate and Potassium o-phenylphenate (OPP), in cosmetic products.

    PubMed

    Bernauer, Ulrike

    2016-08-01

    o-Phenylphenol, Sodium o-phenylphenate and Potassium o-phenylphenate (CAS n. 90-43-7, 132-27-4, 13707-65-8) are regulated as preservatives in Annex V/7 of the Cosmetics Regulation (EC) n. 1223/2009 at a maximum concentration of 0.2% (as phenol). In February 2013, the Commission received a risk assessment submitted by the French Agency ANSM (Agence nationale de sécurité des médicaments et des produits de santé) which raised concerns about the use of o-Phenylphenol as a preservative in cosmetic products. In the context of the ANSM report (Evaluation du risque lié à l'utilisation de l'orthophénylphénol CAS n. 90-43-7 dans les produits cosmétiques), o-Phenylphenol has been identified as likely to be an endocrine disruptor. The report concludes that the maximum authorised concentration (currently 0.2%) of o-Phenylphenol for use as a preservative should be revised due to a low margin of safety. In January 2014, in response to a call for data on o-Phenylphenol by the Commission, Industry submitted a safety dossier in order to defend the current use of o-Phenylphenol, Sodium o-phenylphenate and Potassium o-phenylphenate (CAS n. 90-43-7, 132-27-4, 13707-65-8) as preservatives in cosmetic formulations at a maximum concentration of 0.2% (as phenol). The use of o-Phenylphenol as a preservative at a maximum concentration of 0.2% in leave-on cosmetic products is not safe. Also, in view of further exposures including non-cosmetic uses (see Anses, 2014), the maximum concentration of o-Phenylphenol in leave-on cosmetic products should be lowered. However, the maximum use concentration of up to 0.15% proposed by the applicant can be considered safe. The use of o-Phenylphenol as a preservative at a maximum concentration of 0.2% in rinse-off cosmetic products is considered safe. Based on the information provided, no conclusions on safe use can be drawn for Sodium o-phenylphenate and Potassium o-phenylphenate.
In vitro data indicate an absent or very weak binding affinity of OPP to the oestrogen receptor, in line with limited stimulation of proliferation in oestrogen-responsive cells. No information is available on androgenic and anti-androgenic effects of OPP in vitro. Agonistic or antagonistic effects on thyroid hormones were not observed with OPP. There may be a potential for injury to the visual system attributable to OPP. Aggregate exposure to OPP should be considered. Copyright © 2016. Published by Elsevier Inc.

  8. Entropy-based goodness-of-fit test: Application to the Pareto distribution

    NASA Astrophysics Data System (ADS)

    Lequesne, Justine

    2013-08-01

    Goodness-of-fit tests based on entropy have been introduced in [13] for testing normality. The maximum entropy distribution in a class of probability distributions defined by linear constraints induces a Pythagorean equality between the Kullback-Leibler information and an entropy difference. This allows one to propose a goodness-of-fit test for maximum entropy parametric distributions which is based on the Kullback-Leibler information. We will focus on the application of the method to the Pareto distribution. The power of the proposed test is computed through Monte Carlo simulation.
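
    A minimal sketch of the entropy-based testing idea, shown here for the exponential case (the maximum entropy distribution for a fixed mean on the positive axis) rather than the Pareto case treated in the paper. The Vasicek spacing estimator and the bound H ≤ 1 + log(mean) are standard; the specific comparison below is illustrative, not the paper's statistic:

    ```python
    import math
    import random

    def vasicek_entropy(sample, m):
        """Vasicek spacing estimator of differential entropy, with the usual
        clamping of order statistics at the sample boundaries."""
        xs = sorted(sample)
        n = len(xs)
        total = 0.0
        for i in range(n):
            lo = xs[max(i - m, 0)]
            hi = xs[min(i + m, n - 1)]
            total += math.log(n * (hi - lo) / (2 * m))
        return total / n

    random.seed(1)
    sample = [random.expovariate(1.0) for _ in range(500)]
    h = vasicek_entropy(sample, m=15)
    # Maximum-entropy bound for positive data with this mean (attained by the
    # exponential distribution); a large gap signals a poor exponential fit.
    bound = 1.0 + math.log(sum(sample) / len(sample))
    gap = bound - h
    ```

    The Pythagorean identity from the abstract turns this gap into a Kullback-Leibler divergence between the data and the fitted maximum entropy model; critical values would come from Monte Carlo simulation as in the paper.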

  9. Exploiting the Maximum Entropy Principle to Increase Retrieval Effectiveness.

    ERIC Educational Resources Information Center

    Cooper, William S.

    1983-01-01

    Presents information retrieval design approach in which queries of computer-based system consist of sets of terms, either unweighted or weighted with subjective term precision estimates, and retrieval outputs ranked by probability of usefulness estimated by "maximum entropy principle." Boolean and weighted request systems are discussed.…

  10. Bayesian Monte Carlo and Maximum Likelihood Approach for Uncertainty Estimation and Risk Management: Application to Lake Oxygen Recovery Model

    EPA Science Inventory

    Model uncertainty estimation and risk assessment is essential to environmental management and informed decision making on pollution mitigation strategies. In this study, we apply a probabilistic methodology, which combines Bayesian Monte Carlo simulation and Maximum Likelihood e...

  11. 49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...

  12. 49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...

  13. 49 CFR 192.620 - Alternative maximum allowable operating pressure for certain steel pipelines.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... of a maximum allowable operating pressure based on higher stress levels in the following areas: Take... pipeline at the increased stress level under this section with conventional operation; and (ii) Describe... targeted audience; and (B) Include information about the integrity management activities performed under...

  14. 40 CFR 1039.140 - What is my engine's maximum engine power?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES... 1065, based on the manufacturer's design and production specifications for the engine. This information... power values for an engine are based on maximum engine power. For example, the group of engines with...

  15. 40 CFR 1039.140 - What is my engine's maximum engine power?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ...) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES... 1065, based on the manufacturer's design and production specifications for the engine. This information... power values for an engine are based on maximum engine power. For example, the group of engines with...

  16. 40 CFR 1039.140 - What is my engine's maximum engine power?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ...) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES... 1065, based on the manufacturer's design and production specifications for the engine. This information... power values for an engine are based on maximum engine power. For example, the group of engines with...

  17. 40 CFR 1039.140 - What is my engine's maximum engine power?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...) AIR POLLUTION CONTROLS CONTROL OF EMISSIONS FROM NEW AND IN-USE NONROAD COMPRESSION-IGNITION ENGINES... 1065, based on the manufacturer's design and production specifications for the engine. This information... power values for an engine are based on maximum engine power. For example, the group of engines with...

  18. 49 CFR 450.7 - Marking.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... APPROVAL OF CARGO CONTAINERS GENERAL General Provisions § 450.7 Marking. (a) On each container that construction begins on or after January 1, 1984, all maximum gross weight markings on the container must be consistent with the maximum gross weight information on the safety approval plate. (b) On each container that...

  19. 49 CFR 450.7 - Marking.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... APPROVAL OF CARGO CONTAINERS GENERAL General Provisions § 450.7 Marking. (a) On each container that construction begins on or after January 1, 1984, all maximum gross weight markings on the container must be consistent with the maximum gross weight information on the safety approval plate. (b) On each container that...

  20. 49 CFR 450.7 - Marking.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... APPROVAL OF CARGO CONTAINERS GENERAL General Provisions § 450.7 Marking. (a) On each container that construction begins on or after January 1, 1984, all maximum gross weight markings on the container must be consistent with the maximum gross weight information on the safety approval plate. (b) On each container that...

  1. 49 CFR 450.7 - Marking.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... APPROVAL OF CARGO CONTAINERS GENERAL General Provisions § 450.7 Marking. (a) On each container that construction begins on or after January 1, 1984, all maximum gross weight markings on the container must be consistent with the maximum gross weight information on the safety approval plate. (b) On each container that...

  2. 49 CFR 450.7 - Marking.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... APPROVAL OF CARGO CONTAINERS GENERAL General Provisions § 450.7 Marking. (a) On each container that construction begins on or after January 1, 1984, all maximum gross weight markings on the container must be consistent with the maximum gross weight information on the safety approval plate. (b) On each container that...

  3. Comparative analysis of package inserts of local and imported antihypertensive medications in Palestine.

    PubMed

    Qatmosh, Sandra A; Koni, Amer A; Qeeno, Baraa G; Arandy, Dina A; Abu-Hashia, Maysa W; Al-Hroub, Bahaa M; Zyoud, Sa'ed H

    2017-09-25

    Package inserts (PIs), as a reliable reference for patients and health care providers, should provide accurate, complete and up-to-date information. The purpose of the current study is to assess and compare the PIs of antihypertensive agents locally produced in Palestine and their imported counterparts. Thirty-five PIs were assessed for the presence of 31 information statements using a scoring method. Word counts of 20 headings and subheadings were used to evaluate and compare local and imported PIs for information quantity. None of the analysed PIs fulfilled all the criteria. All of them included the brand name, active ingredients, indications, directions for use, adverse drug reactions, drug-drug interactions, pregnancy and lactation considerations, and storage, whereas none of them, either local or imported, included the shelf life or instructions to convert tablets or capsules into liquid forms. Additionally, only one (5%) imported PI and no (0%) local PIs mentioned the duration of therapy. Moreover, 93.4% of local PIs were deficient in the areas of inactive ingredients and date of last revision, and 86.7% did not mention the drug dose and the possibility of tablet splitting. Furthermore, the maximum dose was not indicated in 90% of imported and 86.7% of local PIs. In general, imported PIs contained more detailed information than their local counterparts; the differences in medians between local and imported PIs ranged from 1.5-fold for pregnancy considerations to >42.00-fold for the effect on the ability to drive and use machines. The findings of this study revealed the superiority of imported over local PIs in both the quality and quantity of information provided. This emphasises the need for appropriate measures to be taken by the Ministry of Health and local manufacturers to ensure the efficiency of local PIs in providing accurate, complete and up-to-date information.

  4. PC index as a proxy of the solar wind energy that entered into the magnetosphere and energy accumulated in the magnetosphere

    NASA Astrophysics Data System (ADS)

    Troshichev, Oleg; Sormakov, Dmitry

    The PC index has been approved by the International Association of Geomagnetism and Aeronomy (Merida, Mexico, 2013) as a new international index of magnetic activity. The use of the PC index as a proxy for the solar wind energy that enters the magnetosphere principally distinguishes it from the AL and Dst indices, which characterize the energy released in the magnetosphere in the form of substorms and magnetic storms. This conclusion is based on analyses of the relationships between polar cap magnetic activity (the PC index) and solar wind parameters on the one hand, and between changes in PC and the development of magnetospheric substorms (AL index) and magnetic storms (Dst index) on the other. In this study, the relationships between the PC and Dst indices in the course of more than 200 magnetic storms observed in the epoch of solar maximum (1998-2004) were examined for different classes of storms separated by kind and intensity. The statistical analysis demonstrates that depression of the geomagnetic field starts to develop as soon as the PC index steadily exceeds a threshold level of ~1.5 mV/m; the storm intensity (DstMIN) follows the maximum of PC in the course of the storm with a delay of ~1 hour. The main features of magnetic storms are determined, irrespective of their class and intensity, by the accumulated-mean PC value (PCAM): a storm develops as long as PCAM increases, reaches maximal intensity when PCAM attains its maximum, and starts to decay as soon as PCAM declines. The course of the “anomalous” magnetic storm of January 21-22, 2005, which lasted many hours (with intensity of ≈ -100 nT) under conditions of northward or near-zero BZ, is well governed by the behavior of the accumulated-mean PCAM index; this storm should therefore be regarded as an ordinary phenomenon.
The conclusion is that the PC index provides unique on-line information on the solar wind energy that enters the magnetosphere, and the PCAM index provides information on the energy accumulated in the magnetosphere.
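
    The accumulated-mean index PCAM described above is the running mean of PC from a chosen start time; a minimal sketch with hypothetical hourly values (only the ~1.5 mV/m threshold comes from the text):

    ```python
    def accumulated_mean(series):
        """Running (accumulated) mean: out[k] = mean(series[0..k])."""
        out, total = [], 0.0
        for k, v in enumerate(series, start=1):
            total += v
            out.append(total / k)
        return out

    # Hypothetical hourly PC values (mV/m) around a storm.
    pc = [0.5, 0.8, 1.2, 2.0, 3.1, 3.5, 2.8, 2.0, 1.4, 1.0]
    pcam = accumulated_mean(pc)
    # Storm onset: first hour at which PC steadily exceeds ~1.5 mV/m.
    onset = next(i for i, v in enumerate(pc) if v > 1.5)
    ```

    By the abstract's criterion, maximal storm intensity would be expected near the hour at which `pcam` peaks, and decay once `pcam` starts to decline.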

  5. Thresholds of information leakage for speech security outside meeting rooms.

    PubMed

    Robinson, Matthew; Hopkins, Carl; Worrall, Ken; Jackson, Tim

    2014-09-01

    This paper describes an approach to provide speech security outside meeting rooms where a covert listener might attempt to extract confidential information. Decision-based experiments are used to establish a relationship between an objective measurement of the Speech Transmission Index (STI) and a subjective assessment relating to the threshold of information leakage. This threshold is defined for a specific percentage of English words that are identifiable with a maximum safe vocal effort (e.g., "normal" speech) used by the meeting participants. The results demonstrate that it is possible to quantify an offset that links STI with a specific threshold of information leakage which describes the percentage of words identified. The offsets for male talkers are shown to be approximately 10 dB larger than for female talkers. Hence for speech security it is possible to determine offsets for the threshold of information leakage using male talkers as the "worst case scenario." To define a suitable threshold of information leakage, the results show that a robust definition can be based upon 1%, 2%, or 5% of words identified. For these percentages, results are presented for offset values corresponding to different STI values in a range from 0.1 to 0.3.

  6. A mutual information-Dempster-Shafer based decision ensemble system for land cover classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Pahlavani, Parham; Bigdeli, Behnaz

    2017-12-01

    Hyperspectral images contain extremely rich spectral information that offers great potential for discriminating between various land cover classes. However, these images are usually composed of tens or hundreds of spectrally close bands, which results in high redundancy and a large amount of computation time in hyperspectral classification. Furthermore, in the presence of mixed-coverage pixels, crisp classifiers produce omission and commission errors. This paper presents a mutual information-Dempster-Shafer system through an ensemble classification approach for the classification of hyperspectral data. First, mutual information is applied to split the data into a few independent partitions to overcome high dimensionality. Then, a fuzzy maximum likelihood classifier is applied to each band subset. Finally, Dempster-Shafer fusion is applied to the results of the fuzzy classifiers. To assess the proposed method, a crisp ensemble system, with a support vector machine as the crisp classifier and weighted majority voting as the crisp fusion method, is applied to the hyperspectral data. Furthermore, a dimension reduction system is used to assess the effectiveness of the mutual information band splitting of the proposed method. The proposed methodology provides interesting conclusions on the effectiveness and potential of mutual information-Dempster-Shafer based classification of hyperspectral data.
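
    The band-partitioning criterion rests on mutual information between bands; a minimal sketch for two quantized sequences, computed from empirical joint counts (the binning and data are illustrative, not the authors' implementation):

    ```python
    import math
    from collections import Counter

    def mutual_information(a, b):
        """MI (in nats) between two equally long discrete sequences,
        estimated from empirical marginal and joint counts."""
        n = len(a)
        pa, pb = Counter(a), Counter(b)
        pab = Counter(zip(a, b))
        mi = 0.0
        for (x, y), c in pab.items():
            pxy = c / n
            # pxy / (px * py) simplifies to c * n / (count_x * count_y)
            mi += pxy * math.log(c * n / (pa[x] * pb[y]))
        return mi

    # Hypothetical quantized "bands": band2 mostly follows band1, band3 has a
    # different cyclic pattern, so MI(band1, band2) should exceed MI(band1, band3).
    band1 = [0, 0, 1, 1, 2, 2, 0, 1, 2, 0, 1, 2]
    band2 = [0, 0, 1, 1, 2, 1, 0, 1, 2, 0, 1, 2]
    band3 = [0, 1, 2, 0, 1, 2, 0, 1, 2, 0, 1, 2]
    ```

    Bands with high mutual information are redundant and belong in the same partition; low-MI bands carry complementary information and can be split across subsets.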

  7. Factors affecting the estimate of primary production from space

    NASA Technical Reports Server (NTRS)

    Balch, W. M.; Byrne, C. F.

    1994-01-01

    Remote sensing of primary production in the euphotic zone has been based mostly on visible-band water-leaving radiance measured with the Coastal Zone Color Scanner. There are some robust, simple relationships for calculating integral production based on surface measurements, but they also require knowledge of photoadaptive parameters, such as maximum photosynthesis, which currently cannot be obtained from space. A 17,000-station data set is used to show that space-based estimates of maximum photosynthesis could improve predictions of psi, the water column light utilization index, which is an important term in many primary productivity models. Temperature is also examined as a factor for predicting hydrographic structure and primary production. A simple model is used to relate temperature and maximum photosynthesis; the model incorporates (1) the positive relationship between maximum photosynthesis and temperature and (2) the strongly negative relationship between temperature and nitrate in the ocean (which directly affects maximum growth rates via nitrogen limitation). Since these two factors relate to carbon and nitrogen, 'balanced carbon/nitrogen assimilation' was calculated using the Redfield ratio. It is expected that the relationship between maximum balanced carbon assimilation and temperature is concave-down, with the peak dependent on nitrate uptake kinetics, temperature-nitrate relationships, and the carbon:chlorophyll ratio. These predictions were compared with the sea truth data. The minimum turnover time for nitrate was also calculated using this approach. Lastly, sea surface temperature gradients were used to predict the slope of isotherms (a proxy for the slope of isopycnals in many waters). Sea truth data show that at size scales of several hundred kilometers, surface temperature gradients can provide information on the slope of isotherms in the top 200 m of the water column. 
This is directly relevant to the supply of nutrients into the surface mixed layer, which is useful for predicting integral biomass and primary production.

  8. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics.

    PubMed

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-04-06

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods.
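
    The Poisson maximum-likelihood deconvolution at the core of such algorithms is classically solved by the multiplicative Richardson-Lucy iteration; below is a minimal 1D sketch under a known, normalized PSF (not the authors' multi-frame algorithm, which adds regularization, frame selection, and PSF estimation):

    ```python
    def convolve(signal, psf):
        """Circular 1D convolution: out[i] = sum_k signal[i-k] * psf[k]."""
        n = len(signal)
        return [sum(signal[(i - k) % n] * psf[k] for k in range(len(psf)))
                for i in range(n)]

    def correlate(signal, psf):
        """Adjoint of `convolve`: out[i] = sum_k signal[i+k] * psf[k]."""
        n = len(signal)
        return [sum(signal[(i + k) % n] * psf[k] for k in range(len(psf)))
                for i in range(n)]

    def richardson_lucy(observed, psf, iters=50):
        """Multiplicative fixed-point iteration maximizing the Poisson
        log likelihood sum_i (obs_i * log((A est)_i) - (A est)_i)."""
        n = len(observed)
        est = [sum(observed) / n] * n  # flat, non-negative initial estimate
        for _ in range(iters):
            blurred = convolve(est, psf)
            ratio = [o / max(b, 1e-12) for o, b in zip(observed, blurred)]
            est = [e * c for e, c in zip(est, correlate(ratio, psf))]
        return est

    # Hypothetical test case: a two-spike signal blurred by a 3-tap PSF.
    truth = [0, 0, 10, 0, 0, 0, 5, 0]
    psf = [0.25, 0.5, 0.25]
    observed = convolve(truth, psf)
    restored = richardson_lucy(observed, psf)
    ```

    The multiplicative update preserves non-negativity automatically, which is why Poisson-model restoration is attractive for photon-limited AO imagery; the paper's joint multi-frame likelihood extends the same fixed-point idea across several observed frames.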

  9. Time-motion analysis of goalball players in attacks: differences of the player positions and the throwing techniques.

    PubMed

    Monezi, Lucas Antônio; Magalhães, Thiago Pinguelli; Morato, Márcio Pereira; Mercadante, Luciano Allegretti; Furtado, Otávio Luis Piva da Cunha; Misuta, Milton Shoiti

    2018-03-26

    In this study, we aimed to analyse goalball players time-motion variables (distance covered, time spent, maximum and average velocities) in official goalball match attacks, taking into account the attack phases (preparation and throwing), player position (centres and wings) and throwing techniques (frontal, spin and between the legs). A total of 365 attacks were assessed using a video based method (2D) through manual tracking using the Dvideo system. Inferential non-parametric statistics were applied for comparison of preparation vs. throwing phase, wings vs. centres and, among the throwing techniques, frontal, spin and between the legs. Significant differences were found between the attack preparation versus the throwing phase for all player time-motion variables: distance covered, time spent, maximum player velocity and average player velocity. Wing players performed most of the throws (85%) and covered longer distances than centres (1.65 vs 0.31 m). The between the legs and the spin throwing techniques presented greater values for most of the time-motion variables (distance covered, time spent and maximum player velocity) than did the frontal technique in both attack phases. These findings provide important information regarding players' movement patterns during goalball matches that can be used to plan more effective training.

  10. Robust Multi-Frame Adaptive Optics Image Restoration Algorithm Using Maximum Likelihood Estimation with Poisson Statistics

    PubMed Central

    Li, Dongming; Sun, Changming; Yang, Jinhua; Liu, Huan; Peng, Jiaqi; Zhang, Lijuan

    2017-01-01

    An adaptive optics (AO) system provides real-time compensation for atmospheric turbulence. However, an AO image is usually of poor contrast because of the nature of the imaging process, meaning that the image contains information coming from both out-of-focus and in-focus planes of the object, which also brings about a loss in quality. In this paper, we present a robust multi-frame adaptive optics image restoration algorithm via maximum likelihood estimation. Our proposed algorithm uses a maximum likelihood method with image regularization as the basic principle, and constructs the joint log likelihood function for multi-frame AO images based on a Poisson distribution model. To begin with, a frame selection method based on image variance is applied to the observed multi-frame AO images to select images with better quality to improve the convergence of a blind deconvolution algorithm. Then, by combining the imaging conditions and the AO system properties, a point spread function estimation model is built. Finally, we develop our iterative solutions for AO image restoration addressing the joint deconvolution issue. We conduct a number of experiments to evaluate the performances of our proposed algorithm. Experimental results show that our algorithm produces accurate AO image restoration results and outperforms the current state-of-the-art blind deconvolution methods. PMID:28383503

  11. Summer temperature metrics for predicting brook trout (Salvelinus fontinalis) distribution in streams

    USGS Publications Warehouse

    Parrish, Donna; Butryn, Ryan S.; Rizzo, Donna M.

    2012-01-01

    We developed a methodology to predict brook trout (Salvelinus fontinalis) distribution using summer temperature metrics as predictor variables. Our analysis used long-term fish and hourly water temperature data from the Dog River, Vermont (USA). Commonly used metrics (e.g., mean, maximum, maximum 7-day maximum) tend to smooth the data, so information on temperature variation is lost. Therefore, we developed a new set of metrics (called event metrics) to capture temperature variation by describing the frequency, area, duration, and magnitude of events that exceeded a user-defined temperature threshold. We used thresholds of 16, 18, 20, and 22°C. We built linear discriminant models and tested and compared the event metrics against the commonly used metrics. Correct classification of the observations was 66% with event metrics and 87% with commonly used metrics. However, combined event and commonly used metrics correctly classified 92%. Of the four individual temperature thresholds, it was difficult to assess which threshold had the “best” accuracy. The 16°C threshold had slightly fewer misclassifications; however, the 20°C threshold had the fewest extreme misclassifications. Our method leveraged the volumes of existing long-term data and provided a simple, systematic, and adaptable framework for monitoring changes in fish distribution, specifically in the case of irregular, extreme temperature events.
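The event metrics described (frequency, duration, area, and magnitude of threshold exceedances) can be computed directly from an hourly temperature series; this is a minimal sketch of one plausible implementation, not the authors' code.

```python
def event_metrics(temps, threshold):
    """Frequency, total duration (h), area (degC*h), and peak magnitude
    (degC) of excursions above a threshold in an hourly series."""
    events, current = [], []
    for t in temps:
        if t > threshold:
            current.append(t)
        elif current:
            events.append(current)   # an exceedance event just ended
            current = []
    if current:
        events.append(current)
    peak = max((max(e) for e in events), default=threshold)
    return {
        'frequency': len(events),
        'duration_h': sum(len(e) for e in events),
        'area_degC_h': sum(v - threshold for e in events for v in e),
        'magnitude_degC': peak - threshold,
    }

# two exceedance events in a short hourly record, threshold 20 degC
metrics = event_metrics([15, 17, 21, 22, 19, 21, 15], threshold=20.0)
```

Unlike a seasonal mean or maximum, these quantities preserve how often, how long, and how intensely the threshold was exceeded.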

  12. Information-seeking behaviour and information needs of LGBTQ health professionals: a follow-up study.

    PubMed

    Morris, Martin; Roberto, K R

    2016-09-01

    Except for one study in 2004, the literature has no data on the information-seeking behaviour of lesbian, gay, bisexual, transgender and queer/questioning (LGBTQ) health professionals. After a decade of change for LGBTQ people, and the growth of electronic information sources and social networks, it is appropriate to revisit this subject. The aim was to gain an updated understanding of the information-seeking behaviour of LGBTQ health professionals and of how medical libraries can provide a culturally competent service to such users. A mixed-methods approach was adopted, combining a Web-based questionnaire with email follow-up discussions. One hundred and twenty-three complete responses were received, mostly from the USA and Canada, between November 2012 and October 2013. LGBTQ health professionals remain more comfortable seeking LGBTQ health information from a medical librarian whom they know to be LGBTQ, because they perceive LGBTQ librarians as more likely to have specialist knowledge, or through concern that non-LGBTQ librarians may be more likely to react in a stigmatising or discriminatory way. The study also provides evidence suggesting that online chat has marginal appeal for respondents seeking LGBTQ health information, despite its anonymity. Medical libraries seeking to demonstrate their cultural competency should provide visible evidence of this, such as through the creation of dedicated resource lists, promotion of LGBTQ literature on the library's website, and display of other symbols or statements supporting diversity. Opportunities exist for LGBTQ health professionals and medical librarians to work together to ensure that medical libraries are culturally competent and welcoming spaces for LGBTQ patrons, that library collections match their needs, and in the creation of guides to ensure maximum access to the results of LGBTQ health research. Medical libraries should also consider nominating and, if necessary, training a specialist in LGBTQ health information. Such measures are more likely to be successful than reliance on online chat, despite contrary suggestions in the literature. © 2016 Health Libraries Group.

  13. Next Generation Electromagnetic Pump Analysis Tools (PLM DOC-0005-2188). Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stregy, Seth; Dasilva, Ana; Yilmaz, Serkan

    2015-10-29

    This report provides the broad historical review of EM Pump development and details of MATRIX development under this project. This report summarizes the efforts made to modernize the legacy performance models used in previous EM Pump designs and the improvements made to the analysis tools. This report provides information on Tasks 1, 3, and 4 of the entire project. The research for Task 4 builds upon Task 1: Update EM Pump Databank and Task 3: Modernize the Existing EM Pump Analysis Model, which are summarized within this report. Where research for Task 2: Insulation Materials Development and Evaluation identified parameters applicable to the analysis model with Task 4, the analysis code was updated, and analyses were made for additional materials. The important design variables for the manufacture and operation of an EM Pump that the model improvement can evaluate are: space constraints; voltage capability of insulation system; maximum flux density through iron; flow rate and outlet pressure; efficiency and manufacturability. The development of the next-generation EM Pump analysis tools during this two-year program provides information in three broad areas: status of analysis model development; improvements made to older simulations; and comparison to experimental data.

  14. Opto-Mechanical Design of FIR Diagnostic System for C-2W

    NASA Astrophysics Data System (ADS)

    Beall, Michael; Deng, B. H.; Settles, G.; Rouillard, M.; Schroeder, J.; Gota, H.; Thompson, M.; Snitchler, G.; Ziaei, S.; the TAE Team

    2016-10-01

    The goal of the C-2W far-infrared (FIR) diagnostic system is to provide highly accurate, simultaneous polarimetry and interferometry information about the generation, equilibrium and time evolution of the advanced beam-driven field-reversed configuration (FRC). Thorough spatial coverage of the confinement vessel will be provided by a set of 14 chords at the central plane, with half of the chords tilted at a 15° angle to provide additional polarimetry information. Due to the very low (< 0.5°) Faraday rotation expected in the field-reversed plasma, the system has a design goal of 0.25 μm maximum allowable vibration over the lifetime of the shot. Due to large eddy-current forces from simulation of magnetic-field ramp-up, a non-metallic canvas phenolic material has been selected for the primary breadboards, which are mounted on a rigid, sand-filled support structure. Given the size of the structure and the magnetic impact, the support structure does not use pneumatic or mechanical isolation. Dynamic vibration analysis with Ansys, based on measurements of local ground vibration and simulations of magnetic forces, predicts that the system will meet the design goal.

  15. Simultaneous confocal fluorescence microscopy and optical coherence tomography for drug distribution and tissue integrity assessment

    NASA Astrophysics Data System (ADS)

    Rinehart, Matthew T.; LaCroix, Jeffrey; Henderson, Marcus; Katz, David; Wax, Adam

    2011-03-01

    The effectiveness of microbicidal gels, topical products developed to prevent infection by sexually transmitted diseases including HIV/AIDS, is governed by extent of gel coverage, pharmacokinetics of active pharmaceutical ingredients (APIs), and integrity of vaginal epithelium. While biopsies provide localized information about drug delivery and tissue structure, in vivo measurements are preferable in providing objective data on API and gel coating distribution as well as tissue integrity. We are developing a system combining confocal fluorescence microscopy with optical coherence tomography (OCT) to simultaneously measure local concentrations and diffusion coefficients of APIs during transport from microbicidal gels into tissue, while assessing tissue integrity. The confocal module acquires 2-D images of fluorescent APIs multiple times per second allowing analysis of lateral diffusion kinetics. The custom Fourier domain OCT module has a maximum a-scan rate of 54 kHz and provides depth-resolved tissue integrity information coregistered with the confocal fluorescence measurements. The combined system is validated by imaging phantoms with a surrogate fluorophore. Time-resolved API concentration measured at fixed depths is analyzed for diffusion kinetics. This multimodal system will eventually be implemented in vivo for objective evaluation of microbicide product performance.

  16. Magnetic Pair Creation Attenuation Altitude Constraints in Gamma-Ray Pulsars

    NASA Astrophysics Data System (ADS)

    Baring, Matthew; Story, Sarah

    The Fermi gamma-ray pulsar database now exceeds 150 sources and has defined an important part of Fermi's science legacy, providing rich information for the interpretation of young energetic pulsars and old millisecond pulsars. Among the well established population characteristics is the common occurrence of exponential turnovers in the 1-10 GeV range. These turnovers are too gradual to arise from magnetic pair creation in the strong magnetic fields of pulsar inner magnetospheres, so their energy can be used to provide lower bounds to the typical altitude of GeV band emission. We explore such constraints due to single-photon pair creation transparency at and below the turnover energy. Our updated computations span both domains where general relativistic influences are important and locales where flat spacetime photon propagation is modified by rotational aberration effects. The altitude bounds, typically in the range of 2-5 stellar radii, provide key information on the emission altitude in radio-quiet pulsars that do not possess double-peaked pulse profiles. However, the exceptional case of the Crab pulsar provides an altitude bound of around 20% of the light cylinder radius if pair transparency persists out to 350 GeV, the maximum energy detected by MAGIC. This is an impressive new physics-based constraint on the Crab's gamma-ray emission locale.

  17. Two-point method uncertainty during control and measurement of cylindrical element diameters

    NASA Astrophysics Data System (ADS)

    Glukhov, V. I.; Shalay, V. V.; Radev, H.

    2018-04-01

    The topic of the article is devoted to the urgent problem of the reliability of measurements of the geometric specifications of technical products. The purpose of the article is to improve the quality of control of parts' linear sizes by the two-point measurement method. The article's task is to investigate methodical extended uncertainties in measuring the linear sizes of cylindrical elements. The investigation method is geometric modeling of the shape and location deviations of element surfaces in a rectangular coordinate system. The studies were carried out for elements of various service use, taking into account their informativeness, corresponding to the kinematic pair classes in theoretical mechanics and the number of constrained degrees of freedom in the datum element function. Cylindrical elements with informativeness of 4, 2, 1 and 0 (zero) were investigated. The uncertainties in two-point measurements were estimated by comparing the results of linear dimension measurements with the maximum and minimum functional diameters of the element material. Methodical uncertainty is formed when cylindrical elements with maximum informativeness have shape deviations of the cut and curvature types. Methodical uncertainty is also formed when measuring the element's average size for all types of shape deviations. The two-point measurement method cannot take into account the location deviations of a dimensional element, so its use for elements with informativeness less than the maximum creates unacceptable methodical uncertainties in measurements of the maximum, minimum and average linear dimensions. Similar methodical uncertainties also exist in the arbitration control of the linear dimensions of cylindrical elements by limiting two-point gauges.
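A toy model illustrates why the two-point method is direction-dependent: for an element with an oval (elliptical) form deviation, the caliper width between parallel tangents varies with measurement direction, so the measured size brackets, but need not equal, a single functional diameter. The semi-axes below are hypothetical, not values from the article.

```python
import math

# Hypothetical element cross-section with an oval form deviation:
# ellipse with semi-axes a and b (nominal diameter ~20 units).
a, b = 10.05, 9.95
widths = []
for k in range(180):
    th = math.pi * k / 180.0
    # two-point (caliper) width between parallel tangents at angle th
    widths.append(2.0 * math.sqrt((a * math.cos(th))**2
                                  + (b * math.sin(th))**2))
d_max, d_min = max(widths), min(widths)
d_avg = sum(widths) / len(widths)   # averaging smooths the form deviation
```

The maximum and minimum two-point results recover the extreme diameters, while the average lies strictly between them, hiding the form deviation.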

  18. NALDB: nucleic acid ligand database for small molecules targeting nucleic acid

    PubMed Central

    Kumar Mishra, Subodh; Kumar, Amit

    2016-01-01

    Nucleic acid ligand database (NALDB) is a unique database that provides detailed information about the experimental data of small molecules that were reported to target several types of nucleic acid structures. NALDB is the first ligand database that contains ligand information for all types of nucleic acids. NALDB contains more than 3500 ligand entries with detailed pharmacokinetic and pharmacodynamic information such as target name, target sequence, ligand 2D/3D structure, SMILES, molecular formula, molecular weight, net formal charge, AlogP, number of rings, number of hydrogen bond donors and acceptors, and potential energy, along with their Ki, Kd and IC50 values. All these details on a single platform should be helpful for the development and betterment of novel ligands targeting nucleic acids, which could serve as potential targets in different diseases including cancers and neurological disorders. With a maximum of 255 conformers for each ligand entry, our database is a multi-conformer database and can facilitate the virtual screening process. NALDB provides powerful web-based search tools that make database searching efficient and simplified, with options for text as well as structure queries. NALDB also provides a multi-dimensional advanced search tool which can screen the database molecules on the basis of molecular properties of the ligand provided by database users. A 3D structure visualization tool has also been included for 3D structure representation of ligands. NALDB offers inclusive pharmacological information and a structurally flexible set of small molecules with their three-dimensional conformers that can accelerate virtual screening and other modeling processes and eventually complement nucleic acid-based drug discovery research. NALDB is routinely updated and freely available at http://bsbe.iiti.ac.in/bsbe/naldb/HOME.php. Database URL: http://bsbe.iiti.ac.in/bsbe/naldb/HOME.php PMID:26896846

  19. Information flow in layered networks of non-monotonic units

    NASA Astrophysics Data System (ADS)

    Schittler Neves, Fabio; Martim Schubert, Benno; Erichsen, Rubem, Jr.

    2015-07-01

    Layered neural networks are feedforward structures that yield robust parallel and distributed pattern recognition. Even though much attention has been paid to pattern retrieval properties in such systems, many aspects of their dynamics are not yet well characterized or understood. In this work we study, at different temperatures, the memory activity and information flows through layered networks in which the elements are the simplest binary odd non-monotonic function. Our results show that, considering a standard Hebbian learning approach, the network information content has its maximum always at the monotonic limit, even though the maximum memory capacity can be found at non-monotonic values for small enough temperatures. Furthermore, we show that such systems exhibit rich macroscopic dynamics, including not only fixed point solutions of its iterative map, but also cyclic and chaotic attractors that also carry information.
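One common choice for the "simplest binary odd non-monotonic" unit in this literature is a reversed-wedge function, which inverts the sign response once the local field exceeds a width parameter b. The sketch below, with assumed sizes and standard Hebbian couplings, propagates a stored pattern one layer forward and measures the retrieval overlap; it is an illustration of the setup, not the paper's model or parameters.

```python
import random
random.seed(0)

def g(h, b):
    """Reversed-wedge unit: sign response for |h| <= b, inverted outside."""
    s = 1 if h > 0 else -1
    return s if abs(h) <= b else -s

N, P, b = 200, 5, 2.0                      # assumed layer size, patterns, width
patterns = [[random.choice((-1, 1)) for _ in range(N)] for _ in range(P)]
# Hebbian couplings between consecutive layers
J = [[sum(patterns[m][i] * patterns[m][j] for m in range(P)) / N
      for j in range(N)] for i in range(N)]
layer = patterns[0][:]                     # input layer clamped to pattern 0
fields = [sum(J[i][j] * layer[j] for j in range(N)) for i in range(N)]
next_layer = [g(h, b) for h in fields]
overlap = sum(p * s for p, s in zip(patterns[0], next_layer)) / N
```

With few stored patterns, the local fields stay inside the monotonic window |h| <= b, so the next layer retrieves the pattern with high overlap; shrinking b pushes fields past the wedge and degrades retrieval.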

  20. Solar radiation control using nematic curvilinear aligned phase (NCAP) liquid crystal technology

    NASA Astrophysics Data System (ADS)

    vanKonynenburg, Peter; Marsland, Stephen; McCoy, James

    1987-11-01

    A new, advanced liquid crystal technology has made economical, large area, electrically-controlled windows a commercial reality. The new technology, Nematic Curvilinear Aligned Phase (NCAP), is based on a polymeric material containing small droplets of nematic liquid crystal which is coated and laminated between transparent electrodes and fabricated into large area field effect devices. NCAP windows feature variable solar transmission and reflection through a voltage-controlled scattering mechanism. Laminated window constructions provide the excellent transmission and visibility of glass in the powered condition. In the unpowered condition, the windows are highly translucent, and provide 1) blocked vision for privacy, security, and obscuration of information, and 2) glare control and solar shading. The stability is excellent during accelerated aging tests. Degradation mechanisms which can limit performance and lifetime are discussed. Maximum long term stability is achieved by product designs that incorporate the appropriate window materials to provide environmental protection.

  1. Development of a Common User Interface for the Launch Decision Support System

    NASA Technical Reports Server (NTRS)

    Scholtz, Jean C.

    1991-01-01

    The Launch Decision Support System (LDSS) is software to be used by the NASA Test Director (NTD) in the firing room during countdown. This software is designed to assist the NTD with time management, that is, when to resume from a hold condition. This software will assist the NTD in making and evaluating alternate plans and will keep him advised of the existing situation. As such, the interface to this software must be designed to provide the maximum amount of information in the clearest fashion and in a timely manner. This research involves applying user interface guidelines to a mature prototype of LDSS and developing displays that will enable the users to easily and efficiently obtain information from the LDSS displays. This research also extends previous work on organizing and prioritizing human-computer interaction knowledge.

  2. Satsurblia: New Insights of Human Response and Survival across the Last Glacial Maximum in the Southern Caucasus

    PubMed Central

    Pinhasi, Ron; Meshveliani, Tengiz; Matskevich, Zinovi; Bar-Oz, Guy; Weissbrod, Lior; Miller, Christopher E.; Wilkinson, Keith; Lordkipanidze, David; Jakeli, Nino; Kvavadze, Eliso; Higham, Thomas F. G.; Belfer-Cohen, Anna

    2014-01-01

    The region of western Georgia (Imereti) has been a major geographic corridor for human migrations during the Middle and Upper Palaeolithic (MP/UP). Knowledge of the MP and UP in this region, however, stems mostly from a small number of recent excavations at the sites of Ortvale Klde, Dzudzuana, Bondi, and Kotias Klde. These provide an absolute chronology for the Late MP and MP–UP transition, but only a partial perspective on the nature and timing of UP occupations, and limited data on how human groups in this region responded to the harsh climatic oscillations between 37,000–11,500 years before present. Here we report new UP archaeological sequences from fieldwork in Satsurblia cave in the same region. A series of living surfaces with combustion features, faunal remains, stone and bone tools, and ornaments provide new information about human occupations in this region (a) prior to the Last Glacial Maximum (LGM) at 25.5–24.4 ka cal. BP and (b) after the LGM at 17.9–16.2 ka cal. BP. The latter provides new evidence in the southern Caucasus for human occupation immediately after the LGM. The results of the campaigns in Satsurblia and Dzudzuana suggest that at present the most plausible scenario is one of a hiatus in the occupation of this region during the LGM (between 24.4–17.9 ka cal. BP). Analysis of the living surfaces at Satsurblia offers information about human activities such as the production and utilisation of lithics and bone tools, butchering, cooking and consumption of meat and wild cereals, the utilisation of fibers, and the use of certain woods. Microfaunal and palynological analyses point to fluctuations in the climate with consequent shifts in vegetation and the faunal spectrum not only before and after the LGM, but also during the two millennia following the end of the LGM. PMID:25354048

  3. Satsurblia: new insights of human response and survival across the Last Glacial Maximum in the southern Caucasus.

    PubMed

    Pinhasi, Ron; Meshveliani, Tengiz; Matskevich, Zinovi; Bar-Oz, Guy; Weissbrod, Lior; Miller, Christopher E; Wilkinson, Keith; Lordkipanidze, David; Jakeli, Nino; Kvavadze, Eliso; Higham, Thomas F G; Belfer-Cohen, Anna

    2014-01-01

    The region of western Georgia (Imereti) has been a major geographic corridor for human migrations during the Middle and Upper Palaeolithic (MP/UP). Knowledge of the MP and UP in this region, however, stems mostly from a small number of recent excavations at the sites of Ortvale Klde, Dzudzuana, Bondi, and Kotias Klde. These provide an absolute chronology for the Late MP and MP-UP transition, but only a partial perspective on the nature and timing of UP occupations, and limited data on how human groups in this region responded to the harsh climatic oscillations between 37,000-11,500 years before present. Here we report new UP archaeological sequences from fieldwork in Satsurblia cave in the same region. A series of living surfaces with combustion features, faunal remains, stone and bone tools, and ornaments provide new information about human occupations in this region (a) prior to the Last Glacial Maximum (LGM) at 25.5-24.4 ka cal. BP and (b) after the LGM at 17.9-16.2 ka cal. BP. The latter provides new evidence in the southern Caucasus for human occupation immediately after the LGM. The results of the campaigns in Satsurblia and Dzudzuana suggest that at present the most plausible scenario is one of a hiatus in the occupation of this region during the LGM (between 24.4-17.9 ka cal. BP). Analysis of the living surfaces at Satsurblia offers information about human activities such as the production and utilisation of lithics and bone tools, butchering, cooking and consumption of meat and wild cereals, the utilisation of fibers, and the use of certain woods. Microfaunal and palynological analyses point to fluctuations in the climate with consequent shifts in vegetation and the faunal spectrum not only before and after the LGM, but also during the two millennia following the end of the LGM.

  4. Summary of Optical-Backscatter and Suspended-Sediment Data, Tomales Bay Watershed, California, Water Years 2004, 2005, and 2006

    USGS Publications Warehouse

    Curtis, Jennifer A.

    2007-01-01

    The U.S. Geological Survey, in cooperation with Point Reyes National Seashore, is studying suspended-sediment transport dynamics in the two primary tributaries to Tomales Bay, Lagunitas Creek and Walker Creek. Suspended-sediment samples and continuous optical backscatter (turbidity) data were collected at three locations during water years 2004–06 (October 1, 2003–September 30, 2006): at two sites in the Lagunitas Creek watershed and at one site in the Walker Creek watershed. Sediment samples were analyzed for suspended-sediment concentration, grain size, and turbidity. Data were used to estimate mean daily and annual seasonal suspended-sediment discharge, which were published in U.S. Geological Survey Annual Water-Data Reports. Data were utilized further in this report to develop field-based optical-backscatter calibration equations, which then were used to derive a continuous time series (15-minute interval) of suspended-sediment concentrations. Sensor fouling and aggradation of the channel bed occurred periodically throughout the project period, resulting in data loss. Although periods of data loss occurred, collection of optical sensor data improved our understanding of suspended-sediment dynamics in the Lagunitas Creek and Walker Creek watersheds by providing continuous time-series storm event data that were analyzed to determine durations of elevated sediment concentrations (periods of time when suspended-sediment concentration was greater than 100 mg/L). Data derived from this project contributed baseline suspended-sediment transport information that will be used to develop and implement sediment total maximum daily loads for Tomales Bay and its tributary watersheds, and provides supporting information for additional total maximum daily loads (pathogens, nutrients, and mercury) and restoration efforts for four federally listed aquatic species that are affected directly by sediment loading in the Tomales Bay watershed.
In addition, this project provided an opportunity to evaluate the suitability of using optical data as a surrogate for more traditional labor-intensive methods of measuring suspended-sediment transport in steep coastal watersheds.
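Field-based optical-backscatter calibrations of the kind described are often power-law regressions of sampled suspended-sediment concentration (SSC) against sensor turbidity, fitted in log-log space and then applied to the continuous 15-minute record. The sketch below uses synthetic, noiseless data and assumed coefficients, not the report's calibration equations.

```python
import numpy as np

# Hypothetical power-law calibration SSC = a * OBS**b, fitted in log-log space
obs = np.array([10.0, 20.0, 50.0, 100.0, 200.0])   # turbidity at sample times
ssc = 2.0 * obs ** 1.3                             # synthetic lab SSC, mg/L
b_fit, log_a = np.polyfit(np.log(obs), np.log(ssc), 1)
a_fit = np.exp(log_a)

# apply the calibration to a continuous 15-minute sensor record
record = np.array([15.0, 80.0, 150.0])
ssc_series = a_fit * record ** b_fit
```

The fitted coefficients convert every 15-minute turbidity reading into an SSC estimate, which is what allows exceedance durations (e.g., SSC > 100 mg/L) to be computed from the sensor record.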

  5. A realistic treatment of geomagnetic Cherenkov radiation from cosmic ray air showers

    NASA Astrophysics Data System (ADS)

    Werner, Klaus; de Vries, Krijn D.; Scholten, Olaf

    2012-09-01

    We present a macroscopic calculation of coherent electromagnetic radiation from air showers initiated by ultra-high energy cosmic rays, based on currents obtained from three-dimensional Monte Carlo simulations of air showers in a realistic geomagnetic field. We discuss the importance of a correct treatment of the index of refraction in air, given by the law of Gladstone and Dale, which affects the pulses enormously for certain configurations, compared to a simplified treatment using a constant index. We predict in particular a geomagnetic Cherenkov radiation, which provides strong signals at high frequencies (GHz) for certain geometries, together with "normal radiation" from the shower maximum, leading to a double peak structure in the frequency spectrum. We also provide some information about the numerical procedures referred to as EVA 1.0.

  6. Advanced probabilistic methods for quantifying the effects of various uncertainties in structural response

    NASA Technical Reports Server (NTRS)

    Nagpal, Vinod K.

    1988-01-01

    The effects of actual variations, also called uncertainties, in geometry and material properties on the structural response of a space shuttle main engine turbopump blade are evaluated. A normal distribution was assumed to represent the uncertainties statistically. Uncertainties were assumed to be totally random, partially correlated, and fully correlated. The magnitudes of these uncertainties were represented in terms of mean and variance. Blade responses, recorded in terms of displacements, natural frequencies, and maximum stress, were evaluated and plotted in the form of probabilistic distributions under combined uncertainties. These distributions provide an estimate of the range of magnitudes of the response and the probability of occurrence of a given response. Most importantly, these distributions provide the information needed to estimate quantitatively the risk in a structural design.
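The probabilistic approach described (normally distributed uncertainties propagated into response distributions) can be sketched with a simple Monte Carlo loop; the response function and distribution parameters below are illustrative stand-ins, not the turbopump blade model.

```python
import random
import statistics

random.seed(1)

def response(E, L):
    """Stand-in structural response (not the blade model): stiffness-like
    quantity that grows with material property E and shrinks with span L."""
    return E / L**3

# normally distributed material property E and geometry L (assumed values)
samples = [response(random.gauss(200.0, 10.0), random.gauss(2.0, 0.05))
           for _ in range(10000)]
mean = statistics.mean(samples)
sd = statistics.stdev(samples)     # spread of the response distribution
```

The sampled distribution's mean and spread play the role of the abstract's probabilistic response plots: they give the expected response range and, via tail fractions, the probability of exceeding a design limit.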

  7. KSC-2012-6105

    NASA Image and Video Library

    2012-11-01

    CAPE CANAVERAL, Fla. – The Orion Exploration Flight Test 1 crew module is undergoing proof pressure testing at the Operations and Checkout Building at NASA's Kennedy Space Center in Florida. The test incrementally pressurizes the spacecraft with breathing air and is designed to demonstrate weld strength capability and structural performance at maximum flight operating pressures. Orion is the exploration spacecraft designed to carry crews to space beyond low Earth orbit. It will provide emergency abort capability, sustain the crew during the space travel and provide safe re-entry from deep space return velocities. The first unpiloted test flight of the Orion is scheduled to launch in 2014 atop a Delta IV rocket and in 2017 on a Space Launch System rocket. For more information, visit http://www.nasa.gov/orion Photo credit: NASA/Ben Smegelsky

  8. KSC-2012-6103

    NASA Image and Video Library

    2012-11-01

    CAPE CANAVERAL, Fla. – The Orion Exploration Flight Test 1 crew module is undergoing proof pressure testing at the Operations and Checkout Building at NASA's Kennedy Space Center in Florida. The test incrementally pressurizes the spacecraft with breathing air and is designed to demonstrate weld strength capability and structural performance at maximum flight operating pressures. Orion is the exploration spacecraft designed to carry crews to space beyond low Earth orbit. It will provide emergency abort capability, sustain the crew during the space travel and provide safe re-entry from deep space return velocities. The first unpiloted test flight of the Orion is scheduled to launch in 2014 atop a Delta IV rocket and in 2017 on a Space Launch System rocket. For more information, visit http://www.nasa.gov/orion Photo credit: NASA/Ben Smegelsky

  9. KSC-2012-6104

    NASA Image and Video Library

    2012-11-01

    CAPE CANAVERAL, Fla. – The Orion Exploration Flight Test 1 crew module is undergoing proof pressure testing at the Operations and Checkout Building at NASA's Kennedy Space Center in Florida. The test incrementally pressurizes the spacecraft with breathing air and is designed to demonstrate weld strength capability and structural performance at maximum flight operating pressures. Orion is the exploration spacecraft designed to carry crews to space beyond low Earth orbit. It will provide emergency abort capability, sustain the crew during the space travel and provide safe re-entry from deep space return velocities. The first unpiloted test flight of the Orion is scheduled to launch in 2014 atop a Delta IV rocket and in 2017 on a Space Launch System rocket. For more information, visit http://www.nasa.gov/orion Photo credit: NASA/Ben Smegelsky

  10. Chitin Adsorbents for Toxic Metals: A Review

    PubMed Central

    Anastopoulos, Ioannis; Bhatnagar, Amit; Bikiaris, Dimitrios N.; Kyzas, George Z.

    2017-01-01

    Wastewater treatment is still a critical issue all over the world. Among the methods examined for the decontamination of wastewaters, adsorption is a promising, cheap, environmentally friendly and efficient procedure. Various types of adsorbents, such as agricultural wastes, compost, nanomaterials and algae, have been used to remove different pollutants. Chitin (poly-β-(1,4)-N-acetyl-d-glucosamine) is the second most abundant natural biopolymer, and it has attracted scientific attention as an inexpensive adsorbent for toxic metals. This review article provides information about the use of chitin as an adsorbent. A list of chitin adsorbents with their maximum adsorption capacities and the best-fitting isotherm and kinetic models is provided. Moreover, thermodynamic studies, regeneration studies, the mechanism of adsorption and the experimental conditions are also discussed in depth. PMID:28067848

  11. 50 CFR 648.21 - Mid-Atlantic Fishery Management Council risk policy.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... to have an atypical life history, the maximum probability of overfishing as informed by the OFL... atypical life history is generally defined as one that has greater vulnerability to exploitation and whose... development process. (2) For stocks determined by the SSC to have a typical life history, the maximum...

  12. 50 CFR 648.21 - Mid-Atlantic Fishery Management Council risk policy.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... to have an atypical life history, the maximum probability of overfishing as informed by the OFL... atypical life history is generally defined as one that has greater vulnerability to exploitation and whose... development process. (2) For stocks determined by the SSC to have a typical life history, the maximum...

  13. 50 CFR 648.21 - Mid-Atlantic Fishery Management Council risk policy.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... to have an atypical life history, the maximum probability of overfishing as informed by the OFL... atypical life history is generally defined as one that has greater vulnerability to exploitation and whose... development process. (2) For stocks determined by the SSC to have a typical life history, the maximum...

  14. 40 CFR 35.705 - Maximum federal share.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Maximum federal share. 35.705 Section 35.705 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE...) § 35.705 Maximum federal share. The Regional Administrator may provide Tribes and Intertribal Consortia...

  15. Estimation of the Relative Severity of Floods in Small Ungauged Catchments for Preliminary Observations on Flash Flood Preparedness: A Case Study in Korea

    PubMed Central

    Kim, Eung Seok; Choi, Hyun Il

    2012-01-01

    An increase in the occurrence of sudden local flooding of great volume and short duration has caused significant danger and loss of life and property in Korea, as well as in many other parts of the world. Since such floods, usually accompanied by rapid runoff and debris flow, rise quite quickly with little or no advance warning in which to prevent flood damage, this study presents a new flash flood indexing methodology to promptly provide preliminary observations regarding emergency preparedness and response to flash flood disasters in small ungauged catchments. Flood runoff hydrographs are generated from a rainfall-runoff model for the annual maximum rainfall series of long-term observed data in the two selected small ungauged catchments. The relative flood severity factors quantifying the characteristics of the flood runoff hydrographs are standardized by the highest recorded maximum value and then averaged to obtain the flash flood index for the flash flood events in each study catchment. It is expected that the regression equations between the proposed flash flood index and rainfall characteristics can provide a baseline database of preliminary information for forecasting local flood severity, in order to facilitate flash flood preparedness in small ungauged catchments. PMID:22690208
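
    The standardize-then-average step behind the flash flood index can be sketched as follows. The severity-factor names and values here are hypothetical, not taken from the study:

```python
def flash_flood_index(events, record_maxima):
    """Average of severity factors, each standardized by its highest
    recorded maximum value (illustrative sketch of the method)."""
    indices = []
    for ev in events:
        standardized = [ev[k] / record_maxima[k] for k in record_maxima]
        indices.append(sum(standardized) / len(standardized))
    return indices

# Hypothetical severity factors for two flash-flood events:
# peak discharge (m^3/s), rising rate (m^3/s/h), response-time factor
events = [
    {"peak": 120.0, "rise_rate": 80.0, "response": 0.9},
    {"peak": 60.0, "rise_rate": 40.0, "response": 0.45},
]
maxima = {"peak": 120.0, "rise_rate": 80.0, "response": 0.9}
print(flash_flood_index(events, maxima))  # first event scores 1.0
```

    Regression of such indices against rainfall characteristics (as the abstract proposes) would then let the index be forecast from rainfall alone in ungauged catchments.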

  16. Tracking the first two seconds: three stages of visual information processing?

    PubMed

    Jacob, Jane; Breitmeyer, Bruno G; Treviño, Melissa

    2013-12-01

    We compared visual priming and comparison tasks to assess information processing of a stimulus during the first 2 s after its onset. In both tasks, a 13-ms prime was followed at varying SOAs by a 40-ms probe. In the priming task, observers identified the probe as rapidly and accurately as possible; in the comparison task, observers determined as rapidly and accurately as possible whether or not the probe and prime were identical. Priming effects attained a maximum at an SOA of 133 ms and then declined monotonically to zero by 700 ms, indicating reliance on relatively brief visuosensory (iconic) memory. In contrast, the comparison effects yielded a multiphasic function, showing a maximum at 0 ms followed by a minimum at 133 ms, followed in turn by a maximum at 240 ms and another minimum at 720 ms, and finally a third maximum at 1,200 ms before declining thereafter. The results indicate three stages of prime processing that we take to correspond to iconic visible persistence, iconic informational persistence, and visual working memory, with the first two used in the priming task and all three in the comparison task. These stages are related to stages presumed to underlie stimulus processing in other tasks, such as those giving rise to the attentional blink.

  17. Optimizing an estuarine water quality monitoring program through an entropy-based hierarchical spatiotemporal Bayesian framework

    NASA Astrophysics Data System (ADS)

    Alameddine, Ibrahim; Karmakar, Subhankar; Qian, Song S.; Paerl, Hans W.; Reckhow, Kenneth H.

    2013-10-01

    The total maximum daily load program aims to monitor more than 40,000 standard violations in around 20,000 impaired water bodies across the United States. Given resource limitations, future monitoring efforts have to be hedged against the uncertainties in the monitored system, while taking into account existing knowledge. In that respect, we have developed a hierarchical spatiotemporal Bayesian model that can be used to optimize an existing monitoring network by retaining stations that provide the maximum amount of information, while identifying locations that would benefit from the addition of new stations. The model assumes the water quality parameters are adequately described by a joint matrix normal distribution. The adopted approach allows for a reduction in redundancies, while emphasizing information richness rather than data richness. The developed approach incorporates the concept of entropy to account for the associated uncertainties. Three different entropy-based criteria are adopted: total system entropy, chlorophyll-a standard violation entropy, and dissolved oxygen standard violation entropy. A multiple attribute decision making framework is adopted to integrate the competing design criteria and to generate a single optimal design. The approach is implemented on the water quality monitoring system of the Neuse River Estuary in North Carolina, USA. The model results indicate that the high priority monitoring areas identified by the total system entropy and the dissolved oxygen violation entropy criteria are largely coincident. The monitoring design based on the chlorophyll-a standard violation entropy proved to be less informative, given the low probabilities of violating the water quality standard in the estuary.
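
    The entropy-based station-retention idea can be sketched under the paper's Gaussian assumption: the differential entropy of a multivariate normal grows with the log-determinant of its covariance, so the most informative subset of stations is the one with the largest joint entropy. The covariance values below are hypothetical:

```python
import math
from itertools import combinations

def det(m):
    # Laplace expansion; adequate for the tiny matrices used here
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def gaussian_entropy(cov):
    """Differential entropy of a k-variate normal: 0.5*ln((2*pi*e)^k * |Sigma|)."""
    k = len(cov)
    return 0.5 * math.log(((2 * math.pi * math.e) ** k) * det(cov))

def best_subset(cov, keep):
    """Retain the `keep` stations whose joint entropy is largest."""
    def sub(idx):
        return [[cov[i][j] for j in idx] for i in idx]
    return max(combinations(range(len(cov)), keep),
               key=lambda idx: gaussian_entropy(sub(idx)))

# Hypothetical 3-station covariance: stations 0 and 1 are highly redundant
cov = [[1.0, 0.9, 0.1],
       [0.9, 1.0, 0.1],
       [0.1, 0.1, 1.0]]
print(best_subset(cov, 2))  # keeps one of the redundant pair plus station 2
```

    This is only the total-system-entropy criterion; the paper additionally weighs standard-violation entropies and combines the criteria through multi-attribute decision making.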

  18. Unsupervised classification of major depression using functional connectivity MRI.

    PubMed

    Zeng, Ling-Li; Shen, Hui; Liu, Li; Hu, Dewen

    2014-04-01

    The current diagnosis of psychiatric disorders, including major depressive disorder, is based largely on self-reported symptoms and clinical signs and may therefore be prone to bias from patients' behaviors and psychiatrists' judgments. This study aims at developing an unsupervised machine learning approach for the accurate identification of major depression based on single resting-state functional magnetic resonance imaging scans in the absence of clinical information. Twenty-four medication-naive patients with major depression and 29 demographically similar healthy individuals underwent resting-state functional magnetic resonance imaging. We first clustered the voxels within the perigenual cingulate cortex into two subregions, a subgenual region and a pregenual region, according to their distinct resting-state functional connectivity patterns and showed that a maximum margin clustering-based unsupervised machine learning approach extracted sufficient information from the subgenual cingulate functional connectivity map to differentiate depressed patients from healthy controls with a group-level clustering consistency of 92.5% and an individual-level classification consistency of 92.5%. It was also revealed that the subgenual cingulate functional connectivity network with the highest discriminative power primarily included the ventrolateral and ventromedial prefrontal cortex, superior temporal gyri and limbic areas, indicating that these connections may play critical roles in the pathophysiology of major depression. The current study suggests that subgenual cingulate functional connectivity network signatures may provide promising objective biomarkers for the diagnosis of major depression and that maximum margin clustering-based unsupervised machine learning approaches may have the potential to inform clinical practice and aid in research on psychiatric disorders. Copyright © 2013 Wiley Periodicals, Inc.

  19. Taxonomically-linked growth phenotypes during arsenic stress among arsenic resistant bacteria isolated from soils overlying the Centralia coal seam fire

    PubMed Central

    Dunivin, Taylor K.; Miller, Justine

    2018-01-01

    Arsenic (As), a toxic element, has impacted life since early Earth. Thus, microorganisms have evolved many As resistance and tolerance mechanisms to improve their survival outcomes given As exposure. We isolated As resistant bacteria from Centralia, PA, the site of an underground coal seam fire that has been burning since 1962. From a 57.4°C soil collected from a vent above the fire, we isolated 25 unique aerobic As resistant bacterial strains spanning seven genera. We examined their diversity, resistance gene content, transformation abilities, inhibitory concentrations, and growth phenotypes. Although As concentrations were low at the time of soil collection (2.58 ppm), isolates had high minimum inhibitory concentrations (MICs) of arsenate and arsenite (>300 mM and 20 mM respectively), and most isolates were capable of arsenate reduction. We screened isolates (PCR and sequencing) using 12 published primer sets for six As resistance genes (AsRGs). Genes encoding arsenate reductase (arsC) and arsenite efflux pumps (arsB, ACR3(2)) were present, and phylogenetic incongruence between 16S rRNA genes and AsRGs provided evidence for horizontal gene transfer. A detailed investigation of differences in isolate growth phenotypes across As concentrations (lag time to exponential growth, maximum growth rate, and maximum OD590) showed a relationship with taxonomy, providing information that could help to predict an isolate’s performance given As exposure in situ. Our results suggest that microbiological management and remediation of environmental As could be informed by taxonomically-linked As tolerance, potential for resistance gene transferability, and the rare biosphere. PMID:29370270

  20. Information Retrieval Performance of Probabilistically Generated, Problem-Specific Computerized Provider Order Entry Pick-Lists: A Pilot Study

    PubMed Central

    Rothschild, Adam S.; Lehmann, Harold P.

    2005-01-01

    Objective: The aim of this study was to preliminarily determine the feasibility of probabilistically generating problem-specific computerized provider order entry (CPOE) pick-lists from a database of explicitly linked orders and problems from actual clinical cases. Design: In a pilot retrospective validation, physicians reviewed internal medicine cases consisting of the admission history and physical examination and orders placed using CPOE during the first 24 hours after admission. They created coded problem lists and linked orders from individual cases to the problem for which they were most indicated. Problem-specific order pick-lists were generated by including a given order in a pick-list if the probability of linkage of order and problem (PLOP) equaled or exceeded a specified threshold. PLOP for a given linked order-problem pair was computed as its prevalence among the other cases in the experiment with the given problem. The orders that the reviewer linked to a given problem instance served as the reference standard to evaluate its system-generated pick-list. Measurements: Recall, precision, and length of the pick-lists. Results: Average recall reached a maximum of .67 with a precision of .17 and pick-list length of 31.22 at a PLOP threshold of 0. Average precision reached a maximum of .73 with a recall of .09 and pick-list length of .42 at a PLOP threshold of .9. Recall varied inversely with precision in classic information retrieval behavior. Conclusion: We preliminarily conclude that it is feasible to generate problem-specific CPOE pick-lists probabilistically from a database of explicitly linked orders and problems. Further research is necessary to determine the usefulness of this approach in real-world settings. PMID:15684134
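
    The PLOP computation and thresholding described above can be sketched as follows. The case data, problem, and order names are hypothetical:

```python
from collections import Counter

def pick_list(cases, problem, target_case, threshold):
    """Orders linked to `problem` in cases other than the target case,
    kept when their prevalence among those cases (the PLOP) meets the
    threshold -- an illustrative sketch of the paper's procedure."""
    others = [c for c in cases if c is not target_case and problem in c]
    if not others:
        return []
    counts = Counter(order for c in others for order in c[problem])
    return sorted(o for o, n in counts.items() if n / len(others) >= threshold)

# Hypothetical cases: each maps a coded problem to its linked orders
cases = [
    {"pneumonia": {"chest xray", "blood culture", "ceftriaxone"}},
    {"pneumonia": {"chest xray", "ceftriaxone"}},
    {"pneumonia": {"chest xray", "sputum culture"}},
]
print(pick_list(cases, "pneumonia", cases[0], 0.9))  # ['chest xray']
print(pick_list(cases, "pneumonia", cases[0], 0.5))  # longer, noisier list
```

    Raising the threshold shortens the pick-list and raises precision at the cost of recall, which is the trade-off the study measured.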

  1. Cloud-based adaptive exon prediction for DNA analysis.

    PubMed

    Putluri, Srinivasareddy; Zia Ur Rahman, Md; Fathima, Shaik Yasmeen

    2018-02-01

    Cloud computing offers significant research and economic benefits to healthcare organisations, and cloud services provide a safe place for storing and managing large amounts of sensitive genomic data. Under the conventional flow of gene information, sequencing laboratories send raw and inferred information over the Internet to several sequence libraries; using a cloud service minimises DNA sequencing storage costs. In this study, the authors put forward a novel genomic informatics system using Amazon Cloud Services, in which genomic sequence information is stored and accessed for processing. Accurate identification of exon regions in a DNA sequence is a key task in bioinformatics that aids disease identification and drug design. The three-base periodicity property of exons forms the basis of all exon identification techniques, and adaptive signal processing techniques have been found promising in comparison with several other methods. Several adaptive exon predictors (AEPs) are developed using variable normalised least mean square and its maximum normalised variants to reduce computational complexity. Finally, the various AEPs are evaluated in terms of sensitivity, specificity and precision on standard genomic datasets taken from the National Center for Biotechnology Information genomic sequence database.
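
    The three-base periodicity property mentioned above is commonly detected via the discrete Fourier transform of base indicator sequences: coding regions show a spectral peak at frequency N/3. A minimal sketch of that signal (not the authors' adaptive AEP filters) is:

```python
import cmath

def period3_power(seq):
    """Power of the DFT at k = N/3, summed over the four base indicator
    sequences -- the classic three-base periodicity measure for exons."""
    n = len(seq)
    k = n / 3.0
    total = 0.0
    for base in "ACGT":
        coeff = sum(cmath.exp(-2j * cmath.pi * k * i / n)
                    for i, b in enumerate(seq) if b == base)
        total += abs(coeff) ** 2
    return total

# A strongly period-3 toy sequence vs. a homogeneous one
periodic = "ACG" * 30
flat = "A" * 90
print(period3_power(periodic) > period3_power(flat))  # True
```

    In practice this measure is computed in sliding windows along the genome; the paper's adaptive predictors instead track the periodicity with low-complexity LMS-family filters.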

  2. Optimization strategy for and structural properties of traffic efficiency under bounded information accessibility

    NASA Astrophysics Data System (ADS)

    Ahn, Sanghyun; Ha, Seungwoong; Kim, Soo Yong

    2016-06-01

    A vital challenge for many socioeconomic systems is determining the optimum use of limited information. Traffic systems, wherein the range of resources is limited, are a particularly good example of this challenge. Given bounded information accessibility arising from, for example, high costs or technical limitations, we develop a new optimization strategy to improve the efficiency of a traffic system with signals and intersections. Numerous studies, including that of Chowdhury and Schadschneider (whose method we denote by ChSch), have attempted to achieve the maximum vehicle speed or the minimum wait time for a given traffic condition. In this paper, we introduce a modified version of ChSch with an independently functioning, decentralized control system. With the new model, we determine the optimization strategy under bounded information accessibility, which proves the existence of an optimal point for phase transitions in the system. The paper also provides insight that traffic engineers can apply to create more efficient traffic systems by analyzing the area and symmetry of local sites. We support our results with a statistical analysis using empirical traffic data from Seoul, Korea.

  3. Experimental Design for Estimating Unknown Hydraulic Conductivity in a Confined Aquifer using a Genetic Algorithm and a Reduced Order Model

    NASA Astrophysics Data System (ADS)

    Ushijima, T.; Yeh, W.

    2013-12-01

    An optimal experimental design algorithm is developed to select locations for a network of observation wells that provides the maximum information about unknown hydraulic conductivity in a confined, anisotropic aquifer. The design employs a maximal information criterion that chooses, among competing designs, the one that maximizes the sum of squared sensitivities while conforming to specified design constraints. Because the formulated problem is non-convex and contains integer variables (necessitating a combinatorial search), it may be difficult, if not impossible, to solve for a realistically scaled model through traditional mathematical programming techniques. Genetic algorithms (GAs) are designed to search out the global optimum; however, because a GA requires a large number of calls to a groundwater model, the formulated optimization problem may still be infeasible to solve. To overcome this, proper orthogonal decomposition (POD) is applied to the groundwater model to reduce its dimension. The information matrix in the full model space can then be searched without solving the full model.
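
    A much-simplified sketch of the maximal-information criterion: score each candidate well by its sum of squared sensitivities and keep the top-scoring set. This ignores the combinatorial interactions and design constraints that the paper handles with a GA, and the sensitivity values are hypothetical:

```python
def greedy_design(sensitivities, n_wells):
    """Greedily pick observation locations maximizing the summed squared
    sensitivities to the unknown parameters (illustrative only)."""
    chosen = []
    remaining = set(range(len(sensitivities)))
    for _ in range(n_wells):
        best = max(remaining,
                   key=lambda i: sum(s * s for s in sensitivities[i]))
        chosen.append(best)
        remaining.remove(best)
    return sorted(chosen)

# Hypothetical head sensitivities (rows: candidate wells, cols: parameters)
S = [[0.1, 0.2],
     [0.9, 0.4],
     [0.3, 0.3],
     [0.5, 0.8]]
print(greedy_design(S, 2))  # the two wells with the largest squared sensitivity
```

    In the paper the sensitivities themselves come from a reduced-order (POD) groundwater model, which is what makes evaluating many candidate designs affordable.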

  4. Small size transformer provides high power regulation with low ripple and maximum control

    NASA Technical Reports Server (NTRS)

    Manoli, R.; Ulrich, B. R.

    1971-01-01

    Single, variable transformer/choke device does work of several. Technique reduces drawer assembly physical size and design and manufacturing cost. Device provides power, voltage, current, and impedance regulation while maintaining maximum control of linearity and ensuring extremely low ripple. Nulling is controlled to very fine degree.

  5. [Understanding of medical information provided during orthognathic surgery consultations].

    PubMed

    Poynard, S; Pare, A; Bonin Goga, B; Laure, B; Goga, D

    2014-06-01

    A prospective study was conducted from November 2012 to May 2013 to assess what patients had understood after their preoperative consultations for orthognathic surgery. We studied the impact of a written document created in the department, containing the information given during the consultation. Fifty patients were asked to complete two questionnaires the day before surgery. The first was used to assess what the patients had understood; it included 20 multiple-choice questions on information given during consultation and in the written document. For each item, the patient had to check what they thought was the right answer. Each correct answer was graded 1 and each incorrect answer or no answer was graded 0, for a maximum score of 20/20. The second questionnaire was used to assess the written document, with each item graded from 1 to 10 (Likert-type scale). Thirty-two patients answered both questionnaires. The average score for the first was 15.03/20, significantly higher than the theoretical average set at 10 (P<0.05). The written document was found understandable (score 8.47/10) and its information easy to find (score 7.28/10). The document provided answers to the patients' questions (score 7.50/10), using information given during consultation (score 7.56/10). The two consultations and the written document helped patients better understand orthognathic care and surgery. Copyright © 2014. Published by Elsevier Masson SAS.

  6. Methods for Environments and Contaminants: Drinking Water

    EPA Pesticide Factsheets

    EPA’s Safe Drinking Water Information System Federal Version (SDWIS/FED) includes information on populations served and violations of maximum contaminant levels or required treatment techniques by the nation’s 160,000 public water systems.

  7. 78 FR 49370 - Inflation Adjustment of Maximum Forfeiture Penalties

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-08-14

    ... ``civil monetary penalties provided by law'' at least once every four years. DATES: Effective September 13... increases the maximum civil monetary forfeiture penalties available to the Commission under its rules... maximum civil penalties established in that section to account for inflation since the last adjustment to...

  8. 40 CFR 35.265 - Maximum federal share.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Maximum federal share. 35.265 Section 35.265 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE....265 Maximum federal share. The Regional Administrator may provide up to 60 percent of the approved...

  9. 40 CFR 35.715 - Maximum federal share.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Maximum federal share. 35.715 Section 35.715 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE... Monitoring (section 28) § 35.715 Maximum federal share. The Regional Administrator may provide up to 75...

  10. 40 CFR 35.295 - Maximum federal share.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Maximum federal share. 35.295 Section 35.295 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE... Maximum federal share. The Regional Administrator may provide State agencies up to 50 percent of the...

  11. 40 CFR 35.649 - Maximum federal share.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Maximum federal share. 35.649 Section 35.649 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE... and Training (section 23(a)(2)) § 35.649 Maximum federal share. The Regional Administrator may provide...

  12. Room-temperature storage of medications labeled for refrigeration.

    PubMed

    Cohen, Victor; Jellinek, Samantha P; Teperikidis, Leftherios; Berkovits, Elliot; Goldman, William M

    2007-08-15

    Data regarding the recommended maximum duration that refrigerated medications available in hospital pharmacies may be stored safely at room temperature were collected and compiled in tabular format. During May and June of 2006, the prescribing information for medications labeled for refrigeration, as obtained from the supplier, was reviewed for data addressing room-temperature storage. Telephone surveys of the products' manufacturers were conducted when this information was not available in the prescribing information. Medications were included in the review if they were labeled to be stored at 2-8 degrees C and purchased by the pharmacy department for uses indicated on the hospital formulary. Frozen antibiotics thawed in the refrigerator and extemporaneously compounded medications were excluded. The U.S. Pharmacopeia's definition of room temperature (20-25 degrees C [68-77 degrees F]) was used for this review. Of the 189 medications listed in AHFS Drug Information 2006 for storage in a refrigerator, 89 were present in the pharmacy department's refrigerator. Since six manufacturers were unable to provide information for 10 medications, only 79 medications were included in the review. Information regarding the room-temperature storage of these 79 medications was compiled and arranged in tabular format; the resulting table may help to avoid unnecessary drug loss and expenditures due to improper storage temperatures.

  13. The Equivalence of Information-Theoretic and Likelihood-Based Methods for Neural Dimensionality Reduction

    PubMed Central

    Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.

    2015-01-01

    Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448

  14. What the Sunspot Record Tells Us About Space Climate

    NASA Technical Reports Server (NTRS)

    Hathaway, David H.; Wilson, Robert M.

    2004-01-01

    The records concerning the number, sizes, and positions of sunspots provide a direct means of characterizing solar activity over nearly 400 years. Sunspot numbers are strongly correlated with modern measures of solar activity, including: 10.7-cm radio flux, total irradiance, x-ray flares, sunspot area, the baseline level of geomagnetic activity, and the flux of galactic cosmic rays. The Group Sunspot Number provides information on 27 sunspot cycles, far more than any of the modern measures of solar activity, and enough to provide important details about long-term variations in solar activity or Space Climate. The sunspot record shows: 1) sunspot cycles have periods of 131 plus or minus 14 months with a normal distribution; 2) sunspot cycles are asymmetric with a fast rise and slow decline; 3) the rise time from minimum to maximum decreases with cycle amplitude; 4) large amplitude cycles are preceded by short period cycles; 5) large amplitude cycles are preceded by high minima; 6) although the two hemispheres remain linked in phase, there are significant asymmetries in the activity in each hemisphere; 7) the rate at which the active latitudes drift toward the equator is anti-correlated with the cycle period; 8) the rate at which the active latitudes drift toward the equator is positively correlated with the amplitude of the cycle after the next; 9) there has been a significant secular increase in the amplitudes of the sunspot cycles since the end of the Maunder Minimum (1715); and 10) there is weak evidence for a quasi-periodic variation in the sunspot cycle amplitudes with a period of about 90 years. These characteristics indicate that the next solar cycle should have a maximum smoothed sunspot number of about 145 plus or minus 30 in 2010, while the following cycle should have a maximum of about 70 plus or minus 30 in 2023.

  15. Focused Belief Measures for Uncertainty Quantification in High Performance Semantic Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joslyn, Cliff A.; Weaver, Jesse R.

    In web-scale semantic data analytics there is a great need for methods which aggregate uncertainty claims, on the one hand respecting the information provided as accurately as possible, while on the other still being tractable. Traditional statistical methods are more robust, but only represent distributional, additive uncertainty. Generalized information theory methods, including fuzzy systems and Dempster-Shafer (DS) evidence theory, represent multiple forms of uncertainty, but are computationally and methodologically difficult. We require methods which provide an effective balance between the complete representation of the full complexity of uncertainty claims in their interaction, while satisfying the needs of both computational complexity and human cognition. Here we build on Jøsang's subjective logic to posit methods in focused belief measures (FBMs), where a full DS structure is focused to a single event. The resulting ternary logical structure is posited to be able to capture the minimal amount of generalized complexity needed at a maximum of computational efficiency. We demonstrate the efficacy of this approach in a web ingest experiment over the 2012 Billion Triple dataset from the Semantic Web Challenge.

  16. Multi-perspective analysis and spatiotemporal mapping of air pollution monitoring data.

    PubMed

    Kolovos, Alexander; Skupin, André; Jerrett, Michael; Christakos, George

    2010-09-01

    Space-time data analysis and assimilation techniques in atmospheric sciences typically consider input from monitoring measurements. The input is often processed in a manner that acknowledges characteristics of the measurements (e.g., underlying patterns, fluctuation features) under conditions of uncertainty; it also leads to the derivation of secondary information that serves study-oriented goals, and provides input to space-time prediction techniques. We present a novel approach that blends a rigorous space-time prediction model (Bayesian maximum entropy, BME) with a cognitively informed visualization of high-dimensional data (spatialization). The combined BME and spatialization approach (BME-S) is used to study monthly averaged NO2 and mean annual SO4 measurements in California over the 15-year period 1988-2002. Using the original scattered measurements of these two pollutants BME generates spatiotemporal predictions on a regular grid across the state. Subsequently, the prediction network undergoes the spatialization transformation into a lower-dimensional geometric representation, aimed at revealing patterns and relationships that exist within the input data. The proposed BME-S provides a powerful spatiotemporal framework to study a variety of air pollution data sources.

  17. Groundwater ages and mixing in the Piceance Basin natural gas province, Colorado

    USGS Publications Warehouse

    McMahon, Peter B.; Thomas, Judith C.; Hunt, Andrew G.

    2013-01-01

    Reliably identifying the effects of energy development on groundwater quality can be difficult because baseline assessments of water quality completed before the onset of energy development are rare and because interactions between hydrocarbon reservoirs and aquifers can be complex, involving both natural and human processes. Groundwater age and mixing data can strengthen interpretations of monitoring data from those areas by providing better understanding of the groundwater flow systems. Chemical, isotopic, and age tracers were used to characterize groundwater ages and mixing with deeper saline water in three areas of the Piceance Basin natural gas province. The data revealed a complex array of groundwater ages (50,000 years) and mixing patterns in the basin that helped explain concentrations and sources of methane in groundwater. Age and mixing data also can strengthen the design of monitoring programs by providing information on time scales at which water quality changes in aquifers might be expected to occur. This information could be used to establish maximum allowable distances of monitoring wells from energy development activity and the appropriate duration of monitoring.

  18. Life-time risk of mortality due to different levels of alcohol consumption in seven European countries: implications for low-risk drinking guidelines.

    PubMed

    Shield, Kevin D; Gmel, Gerrit; Gmel, Gerhard; Mäkelä, Pia; Probst, Charlotte; Room, Robin; Rehm, Jürgen

    2017-09-01

    Low-risk alcohol drinking guidelines require a scientific basis that extends beyond individual or group judgements of risk. Life-time mortality risks, judged against established thresholds for acceptable risk, may provide such a basis for guidelines. Therefore, the aim of this study was to estimate alcohol mortality risks for seven European countries based on different average daily alcohol consumption amounts. The maximum acceptable voluntary premature mortality risk was set at one in 1000, with sensitivity analyses at one in 100. Life-time mortality risks for different alcohol consumption levels were estimated by combining disease-specific relative risk and mortality data for seven European countries with different drinking patterns (Estonia, Finland, Germany, Hungary, Ireland, Italy and Poland). Alcohol consumption data were obtained from the Global Information System on Alcohol and Health, relative risk data from meta-analyses and mortality information from the World Health Organization. The variation in the life-time mortality risk at drinking levels relevant for setting guidelines was less than that observed at high drinking levels. In Europe, the percentage of adults consuming above a risk threshold of one in 1000 ranged from 20.6 to 32.9% for women and from 35.4 to 54.0% for men. Life-time risk of premature mortality under current guideline maximums ranged from 2.5 to 44.8 deaths per 1000 women and from 2.9 to 35.8 deaths per 1000 men (in Finland and Estonia, respectively). If low-risk alcohol guidelines were based on an acceptable risk of one in 1000 premature deaths, guideline maximums for Europe should be 8-10 g/day for women and 15-20 g/day for men, and some of the current European guidelines would require downward revision. © 2017 Society for the Study of Addiction.

  19. A robot control formalism based on an information quality concept

    NASA Technical Reports Server (NTRS)

    Ekman, A.; Torne, A.; Stromberg, D.

    1994-01-01

    A relevance measure based on Jaynes' maximum entropy principle is introduced. Information quality is defined as the conjunction of accuracy and relevance. The formalism based on information quality is developed for one-agent applications. The robot requires a well-defined working environment in which the properties of each object are accurately specified.

  20. 32 CFR 637.14 - Use of National Crime Information Center (NCIC).

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 32 National Defense 4 2010-07-01 2010-07-01 true Use of National Crime Information Center (NCIC). 637.14 Section 637.14 National Defense Department of Defense (Continued) DEPARTMENT OF THE ARMY... Use of National Crime Information Center (NCIC). Provost marshals will make maximum use of NCIC...

  1. Proposed U.S. Geological Survey standard for digital orthophotos

    USGS Publications Warehouse

    Hooper, David; Caruso, Vincent

    1991-01-01

    The U.S. Geological Survey has added the new category of digital orthophotos to the National Digital Cartographic Data Base. This differentially rectified digital image product enables users to take advantage of the properties of current photoimagery as a source of geographic information. The product and accompanying standard were implemented in spring 1991. The digital orthophotos will be quadrangle based and cast on the Universal Transverse Mercator projection and will extend beyond the 3.75-minute or 7.5-minute quadrangle area at least 300 meters to form a rectangle. The overedge may be used for mosaicking with adjacent digital orthophotos. To provide maximum information content and utility to the user, metadata (header) records exist at the beginning of the digital orthophoto file. Header information includes the photographic source type, date, instrumentation used to create the digital orthophoto, and information relating to the DEM that was used in the rectification process. Additional header information is included on transformation constants from the 1927 and 1983 North American Datums to the orthophoto internal file coordinates to enable the user to register overlays on either datum. The quadrangle corners in both datums are also imprinted on the image. Flexibility has been built into the digital orthophoto format for future enhancements, such as the provision to include the corresponding digital elevation model elevations used to rectify the orthophoto. The digital orthophoto conforms to National Map Accuracy Standards and provides valuable mapping data that can be used as a tool for timely revision of standard map products, for land use and land cover studies, and as a digital layer in a geographic information system.

  2. Limits to anaerobic energy and cytosolic concentration in the living cell.

    PubMed

    Paglietti, A

    2015-01-01

    For many physical systems at any given temperature, the set of all states where the system's free energy reaches its largest value can be determined from the system's constitutive equations of internal energy and entropy, once a state of that set is known. Such an approach is fraught with complications when applied to a living cell, because the cell's cytosol contains thousands of solutes, and thus thousands of state variables, which makes determination of its state impractical. We show here that, when looking for the maximum energy that the cytosol can store and release, detailed information on cytosol composition is redundant. Compatibility with the cell's life requires that a single variable representing the overall concentration of cytosol solutes must fall between defined limits, which can be determined by dehydrating and overhydrating the cell to its maximum capacity. The same limits are shown to determine, in particular, the maximum amount of free energy that a cell can supply in fast anaerobic processes, starting from any given initial state. For a typical skeletal muscle in normal physiological conditions this energy, i.e., the maximum anaerobic capacity to do work, is calculated to be about 960 J per kg of muscular mass. This energy decreases as the overall concentration of solutes in the cytosol is increased. Similar results apply to any kind of cell. They provide an essential tool to understand and control the macroscopic response of single cells and multicellular tissues alike. The applications include sport physiology, cell aging, disease-produced cell damage, and drug absorption capacity, to mention the most obvious ones.

  3. Continuity vs. the Crowd-Tradeoffs Between Continuous and Intermittent Citizen Hydrology Streamflow Observations.

    PubMed

    Davids, Jeffrey C; van de Giesen, Nick; Rutten, Martine

    2017-07-01

    Hydrologic data has traditionally been collected with permanent installations of sophisticated and accurate but expensive monitoring equipment at limited numbers of sites. Consequently, observation frequency and costs are high, but spatial coverage of the data is limited. Citizen Hydrology can possibly overcome these challenges by leveraging easily scaled mobile technology and local residents to collect hydrologic data at many sites. However, understanding of how decreased observational frequency impacts the accuracy of key streamflow statistics such as minimum flow, maximum flow, and runoff is limited. To evaluate this impact, we randomly selected 50 active United States Geological Survey streamflow gauges in California. We used 7 years of historical 15-min flow data from 2008 to 2014 to develop minimum flow, maximum flow, and runoff values for each gauge. To mimic lower frequency Citizen Hydrology observations, we developed a bootstrap randomized subsampling with replacement procedure. We calculated the same statistics, and their respective distributions, from 50 subsample iterations with four different subsampling frequencies ranging from daily to monthly. Minimum flows were estimated within 10% for half of the subsample iterations at 39 (daily) and 23 (monthly) of the 50 sites. However, maximum flows were estimated within 10% at only 7 (daily) and 0 (monthly) sites. Runoff volumes were estimated within 10% for half of the iterations at 44 (daily) and 12 (monthly) sites. Watershed flashiness most strongly impacted accuracy of minimum flow, maximum flow, and runoff estimates from subsampled data. Depending on the questions being asked, lower frequency Citizen Hydrology observations can provide useful hydrologic information.
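
    The subsampling experiment can be sketched as follows. This is a simplified variant (fixed observation spacing with a random start, rather than the paper's bootstrap with replacement), and the synthetic lognormal flow record, thresholds and frequencies are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic 7-year, 15-minute flow record (a real study would load gauge data)
    n = 7 * 365 * 96                       # 96 fifteen-minute values per day
    flow = np.exp(rng.normal(3.0, 1.0, size=n))
    true_min, true_max, true_runoff = flow.min(), flow.max(), flow.sum()

    def subsample_stats(flow, step, iters=50):
        """Mimic lower-frequency citizen observations: take `iters` strided
        subsamples (random start) and report the fraction of iterations in which
        each statistic lands within 10% of its full-record value."""
        hits = {"min": 0, "max": 0, "runoff": 0}
        for _ in range(iters):
            sub = flow[rng.integers(step)::step]
            est_runoff = sub.mean() * len(flow)   # scale the mean to the full record
            hits["min"] += abs(sub.min() - true_min) <= 0.10 * true_min
            hits["max"] += abs(sub.max() - true_max) <= 0.10 * true_max
            hits["runoff"] += abs(est_runoff - true_runoff) <= 0.10 * true_runoff
        return {k: v / iters for k, v in hits.items()}

    daily = subsample_stats(flow, step=96)         # ~1 observation per day
    monthly = subsample_stats(flow, step=96 * 30)  # ~1 observation per month
    print(daily, monthly)
    ```

    Even on synthetic data the pattern the abstract reports emerges: runoff (a volume integral) tolerates infrequent sampling far better than maximum flow (a short-lived extreme).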

  4. A review of odour impact criteria in selected countries around the world.

    PubMed

    Brancher, Marlon; Griffiths, K David; Franco, Davide; de Melo Lisboa, Henrique

    2017-02-01

    Exposure to environmental odour can result in annoyance, health effects and depreciation of property values. Therefore, many jurisdictions classify odour as an atmospheric pollutant and regulate emissions and/or impacts from odour-generating activities at a national, state or municipal level. In this work, a critical review of odour regulations in selected jurisdictions of 28 countries is presented. Individual approaches were identified as: comparing ambient air odour concentration and individual chemical statistics against impact criteria (maximum impact standard); using fixed and variable separation distances (separation distance standard); limiting the emission rate of odorant mixtures and individual chemical species (maximum emission standard); limiting the number of complaints received or the annoyance level determined via community surveys (maximum annoyance standard); and requiring use of best available technologies (BAT) to minimize odour emissions (technology standard). The comparison of model-predicted odour concentration statistics against odour impact criteria (OIC) is identified as one of the most common tools used by regulators to evaluate the risk of odour impacts in planning-stage assessments and is also used to inform assessment of odour impacts of existing facilities. Special emphasis is given to summarizing OIC (concentration percentile and threshold) and the manner in which they are applied. The manner in which short-term peak odour concentrations are related to model time-step mean concentrations (peak-to-mean effects) is also described. Furthermore, the fundamentals of odorant properties, dimensions of nuisance odour, odour sampling and analysis methods and dispersion modelling guidance are provided. Common elements of mature and effective odour regulation frameworks are identified and an integrated multi-tool strategy is recommended. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Limits to anaerobic energy and cytosolic concentration in the living cell

    NASA Astrophysics Data System (ADS)

    Paglietti, A.

    2015-11-01

    For many physical systems at any given temperature, the set of all states where the system's free energy reaches its largest value can be determined from the system's constitutive equations of internal energy and entropy, once a state of that set is known. Such an approach is fraught with complications when applied to a living cell, because the cell's cytosol contains thousands of solutes, and thus thousands of state variables, which makes determination of its state impractical. We show here that, when looking for the maximum energy that the cytosol can store and release, detailed information on cytosol composition is redundant. Compatibility with cell's life requires that a single variable that represents the overall concentration of cytosol solutes must fall between defined limits, which can be determined by dehydrating and overhydrating the cell to its maximum capacity. The same limits are shown to determine, in particular, the maximum amount of free energy that a cell can supply in fast anaerobic processes, starting from any given initial state. For a typical skeletal muscle in normal physiological conditions this energy, i.e., the maximum anaerobic capacity to do work, is calculated to be about 960 J per kg of muscular mass. Such energy decreases as the overall concentration of solutes in the cytosol is increased. Similar results apply to any kind of cell. They provide an essential tool to understand and control the macroscopic response of single cells and multicellular cellular tissues alike. The applications include sport physiology, cell aging, disease produced cell damage, drug absorption capacity, to mention the most obvious ones.

  6. Task Performance with List-Mode Data

    NASA Astrophysics Data System (ADS)

    Caucci, Luca

    This dissertation investigates the application of list-mode data to detection, estimation, and image reconstruction problems, with an emphasis on emission tomography in medical imaging. We begin by introducing a theoretical framework for list-mode data, which we use to define two observers that operate on list-mode data. These observers are applied to the problem of detecting a signal (known in shape and location) buried in a random lumpy background. We then consider maximum-likelihood methods for the estimation of numerical parameters from list-mode data, and we characterize the performance of these estimators via the Fisher information matrix. Reconstruction from PET list-mode data is then considered. In a process we call "double maximum-likelihood" reconstruction, we consider a simple PET imaging system and use maximum-likelihood methods to first estimate a parameter vector for each pair of gamma-ray photons detected by the hardware. The collection of these parameter vectors forms a list, which is then fed to another maximum-likelihood algorithm for volumetric reconstruction over a grid of voxels. Efficient parallel implementation of the algorithms discussed above is then presented. In this work, we take advantage of two low-cost, mass-produced computing platforms that have recently appeared on the market, and we provide some details on implementing our algorithms on these devices. We conclude this dissertation by elaborating on a possible application of list-mode data to X-ray digital mammography. We argue that today's CMOS detectors and computing platforms have become fast enough to make X-ray digital mammography list-mode data acquisition and processing feasible.
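
    A minimal, hypothetical illustration of the list-mode estimation idea: each list entry is one event attribute (here a 1-D position blurred by a Gaussian point-spread function), the maximum-likelihood estimate of the source position is the sample mean of the list, and the Fisher information N/σ² gives the Cramér-Rao bound that the estimator variance should approach. All parameters are invented for the sketch:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    true_pos, psf_sigma, n_events = 2.0, 0.5, 400   # hypothetical imaging parameters

    # Repeat many list-mode acquisitions; for a Gaussian PSF the maximum-likelihood
    # estimate of the source position is simply the mean of the event list.
    estimates = np.array([
        rng.normal(true_pos, psf_sigma, size=n_events).mean()
        for _ in range(2000)
    ])

    # Fisher information for N independent Gaussian events is N / sigma^2, so the
    # Cramer-Rao lower bound on the estimator variance is sigma^2 / N.
    fisher_info = n_events / psf_sigma**2
    crlb = 1.0 / fisher_info

    print(estimates.var(), crlb)  # the ML estimator essentially attains the bound
    ```

    Working directly on the event list avoids binning the data, which is the practical appeal of list-mode processing that the dissertation develops.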

  7. Seismic risk assessment for Poiana Uzului (Romania) buttress dam on Uz river

    NASA Astrophysics Data System (ADS)

    Moldovan, Iren-Adelina; Toma-Danila, Dragos; Paerele, Cosmin Marian; Emilian Toader, Victorin; Petruta Constantin, Angela; Ghita, Cristian

    2017-04-01

    One of the most important requirements for dam safety is seismic risk assessment. This objective is accomplished by rating dams into seismic risk classes using the approach of Bureau and Ballentine (2002) and Bureau (2003), taking into account the maximum expected peak ground motions at the dam site, the structural vulnerability and the downstream risk characteristics. The maximum expected ground motions at the dam site were obtained using probabilistic seismic hazard assessment approaches. The structural vulnerability was obtained from the dam's characteristics (age, height, water volume), and the downstream risk was assessed using human, economic, touristic, historic and cultural heritage information for the areas that might be flooded in the case of a dam failure. Several flooding scenarios have been performed. The results of the work consist of local and regional seismic information, specific characteristics of the dam, seismic hazard values for different return periods and risk classes. The final goal of these studies is to provide, in the near future, the local emergency services with warnings of a potential dam failure and ensuing flood following a large earthquake, allowing further public training for evacuation. Acknowledgments This work was partially supported by the Partnership in Priority Areas Program - PNII, under MEN-UEFISCDI, DARING Project no. 69/2014 and the Nucleu Program - PN 16-35, Project no. 03 01 and 01 06.

  8. Dynamic Financial Constraints: Distinguishing Mechanism Design from Exogenously Incomplete Regimes*

    PubMed Central

    Karaivanov, Alexander; Townsend, Robert M.

    2014-01-01

    We formulate and solve a range of dynamic models of constrained credit/insurance that allow for moral hazard and limited commitment. We compare them to full insurance and exogenously incomplete financial regimes (autarky, saving only, borrowing and lending in a single asset). We develop computational methods based on mechanism design, linear programming, and maximum likelihood to estimate, compare, and statistically test these alternative dynamic models with financial/information constraints. Our methods can use both cross-sectional and panel data and allow for measurement error and unobserved heterogeneity. We estimate the models using data on Thai households running small businesses from two separate samples. We find that in the rural sample, the exogenously incomplete saving only and borrowing regimes provide the best fit using data on consumption, business assets, investment, and income. Family and other networks help consumption smoothing there, as in a moral hazard constrained regime. In contrast, in urban areas, we find mechanism design financial/information regimes that are decidedly less constrained, with the moral hazard model fitting best combined business and consumption data. We perform numerous robustness checks in both the Thai data and in Monte Carlo simulations and compare our maximum likelihood criterion with results from other metrics and data not used in the estimation. A prototypical counterfactual policy evaluation exercise using the estimation results is also featured. PMID:25246710

  9. Non-stationary hydrologic frequency analysis using B-spline quantile regression

    NASA Astrophysics Data System (ADS)

    Nasri, B.; Bouezmarni, T.; St-Hilaire, A.; Ouarda, T. B. M. J.

    2017-11-01

    Hydrologic frequency analysis is commonly used by engineers and hydrologists to provide basic information for the planning, design and management of hydraulic and water resources systems under the assumption of stationarity. However, with increasing evidence of climate change, the assumption of stationarity, which is a prerequisite for traditional frequency analysis, may no longer hold, and hence the results of conventional analyses become questionable. In this study, we consider a framework for frequency analysis of extremes based on B-spline quantile regression, which allows data to be modelled in the presence of non-stationarity and/or dependence on covariates with linear and non-linear dependence. A Markov chain Monte Carlo (MCMC) algorithm was used to estimate quantiles and their posterior distributions. A coefficient of determination and the Bayesian information criterion (BIC) for quantile regression are used to select the best model, i.e., for each quantile we choose the degree and number of knots of the appropriate B-spline quantile regression model. The method is applied to annual maximum and minimum streamflow records in Ontario, Canada. Climate indices are considered to describe the non-stationarity in the variable of interest and to estimate the quantiles in this case. The results show large differences between the non-stationary quantiles and their stationary equivalents for annual maximum and minimum discharges with high annual non-exceedance probabilities.
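
    A minimal sketch of B-spline quantile regression, assuming a clamped cubic basis built with the Cox-de Boor recursion and the standard linear-programming formulation of pinball-loss minimization; the data, knots and quantile level are illustrative, and the MCMC/BIC machinery of the study is not reproduced:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def _basis(x, t, k, i):
        """Cox-de Boor recursion for the i-th B-spline basis function of degree k."""
        if k == 0:
            return ((t[i] <= x) & (x < t[i + 1])).astype(float)
        out = np.zeros_like(x, dtype=float)
        if t[i + k] > t[i]:
            out += (x - t[i]) / (t[i + k] - t[i]) * _basis(x, t, k - 1, i)
        if t[i + k + 1] > t[i + 1]:
            out += (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * _basis(x, t, k - 1, i + 1)
        return out

    def bspline_design(x, knots, degree=3):
        """Design matrix of a clamped B-spline basis over the given knots."""
        t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]
        return np.column_stack([_basis(x, t, degree, i)
                                for i in range(len(t) - degree - 1)])

    def fit_quantile(B, y, tau):
        """Quantile regression as a linear program: minimize the pinball loss
        tau*u + (1-tau)*v subject to B@beta + u - v = y, with u, v >= 0."""
        n, p = B.shape
        c = np.r_[np.zeros(p), tau * np.ones(n), (1 - tau) * np.ones(n)]
        A_eq = np.hstack([B, np.eye(n), -np.eye(n)])
        bounds = [(None, None)] * p + [(0, None)] * (2 * n)
        return linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs").x[:p]

    rng = np.random.default_rng(7)
    x = rng.uniform(0, 1, 200)
    y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, 200)   # nonlinear signal + noise
    B = bspline_design(x, np.linspace(0, 1, 6))
    q90 = B @ fit_quantile(B, y, tau=0.90)
    print((y <= q90).mean())   # close to 0.90 by construction
    ```

    In a non-stationary setting, x would be time or a climate index rather than an abstract covariate, so the fitted quantile curve itself drifts with the covariate instead of being a fixed number.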

  10. Using radar-derived parameters to forecast lightning cessation for nonisolated storms

    NASA Astrophysics Data System (ADS)

    Davey, Matthew J.; Fuelberg, Henry E.

    2017-03-01

    Lightning impacts operations at the Kennedy Space Center (KSC) and other outdoor venues leading to injuries, inconvenience, and detrimental economic impacts. This research focuses on cases of "nonisolated" lightning which we define as one cell whose flashes have ceased although it is still embedded in weak composite reflectivity (Z ≥ 15 dBZ) with another cell that is still producing flashes. The objective is to determine if any radar-derived parameters provide useful information about the occurrence of lightning cessation in remnant storms. The data set consists of 50 warm season (May-September) nonisolated storms near KSC during 2013. The research utilizes the National Lightning Detection Network, the second generation Lightning Detection and Ranging network, and polarized radar data. These data are merged and analyzed using the Warning Decision Support System-Integrated Information at 1 min intervals. Our approach only considers 62 parameters, most of which are related to the noninductive charging mechanism. They included the presence of graupel at various thermal altitudes, maximum reflectivity of the decaying storm at thermal altitudes, maximum connecting composite reflectivity between the decaying cell and active cell, minutes since the previous flash, and several others. Results showed that none of the parameters reliably indicated lightning cessation for even our restrictive definition of nonisolated storms. Additional research is needed before cessation can be determined operationally with the high degree of accuracy required for safety.

  11. Federal interagency nature‐like fishway passage design guidelines for Atlantic coast diadromous fishes

    USGS Publications Warehouse

    Turek, James; Haro, Alexander J.; Towler, Brett

    2016-01-01

    The National Marine Fisheries Service (NMFS), the U.S. Geological Survey (USGS) and the U.S. Fish and Wildlife Service (USFWS) have collaborated to develop passage design guidance for use by engineers and other restoration practitioners considering and designing nature‐like fishways (NLFs). The primary purpose of these guidelines is to provide a summary of existing fish swimming and leaping performance data and the best available scientific information on safe, timely and effective passage for 14 diadromous fish species using Atlantic Coast rivers and streams. These guidelines apply to passage sites where complete barrier removal is not possible. This technical memorandum presents seven key physical design parameters based on the biometrics, swimming mode and swimming performance of each target fish species, for application in the design of NLFs addressing passage of a single species or an assemblage of these species. The passage parameters include six dimensional guidelines recommended for minimum weir opening width and depth, minimum pool length, width and depth, and maximum channel slope, along with a maximum flow velocity guideline for each species. While these guidelines are targeted for the design of step‐pool NLFs, the information may also have application in the design of other NLF types being considered at passage restoration sites, in grade control necessary for infrastructure protection upstream of some dam removals, and in considering passage performance at sites such as natural bedrock features.

  12. Interactive effects of carbon footprint information and its accessibility on value and subjective qualities of food products.

    PubMed

    Kimura, Atsushi; Wada, Yuji; Kamada, Akiko; Masuda, Tomohiro; Okamoto, Masako; Goto, Sho-ichi; Tsuzuki, Daisuke; Cai, Dongsheng; Oka, Takashi; Dan, Ippeita

    2010-10-01

    We aimed to explore the interactive effects of the accessibility of information and the degree of carbon footprint score on consumers' value judgments of food products. Participants (n=151, undergraduate students in Japan) rated their maximum willingness to pay (WTP) for four food products varying in the information accessibility (active-search or read-only conditions) and in the carbon footprint values (low, middle, high, or non-display) provided. We also assessed further effects of information accessibility and carbon footprint value on other product attributes utilizing the subjective estimation of taste, quality, healthiness, and environmental friendliness. Results of the experiment demonstrated an interactive effect of information accessibility and the degree of carbon emission on consumer valuation of carbon footprint-labeled food. The carbon footprint value had a stronger impact on participants' WTP in the active-search condition than in the read-only condition. Similar to WTP, the results of the subjective ratings for product qualities also exhibited an interactive effect of the two factors on the rating of environmental friendliness for products. These results imply that the perceived environmental friendliness inferable from a carbon footprint label contributes to creating value for a food product.

  13. Information transmission on hybrid networks

    NASA Astrophysics Data System (ADS)

    Chen, Rongbin; Cui, Wei; Pu, Cunlai; Li, Jie; Ji, Bo; Gakis, Konstantinos; Pardalos, Panos M.

    2018-01-01

    Many real-world communication networks have a hybrid nature, with both fixed and mobile nodes; mobile phone networks, for example, are mainly composed of fixed base stations and mobile phones. In this paper, we discuss the information transmission process on hybrid networks with both fixed and mobile nodes. The fixed nodes (base stations) are connected as a spatial lattice on the plane, forming the information-carrying backbone, while the mobile nodes (users), which are the sources and destinations of information packets, connect to their current nearest fixed nodes to deliver and receive information packets. We observe a phase transition of the traffic load in the hybrid network as the packet generation rate rises through a critical value, which measures the network's capacity for packet delivery. We obtain the optimal speed of the moving nodes that leads to the maximum network capacity. We further improve the network capacity by rewiring the fixed nodes and by considering the current load of the fixed nodes during packet transmission. Our purpose is to optimize the capacity of hybrid networks from the perspective of network science and to provide some insights for the construction of future communication infrastructures.
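
    A toy sketch of the traffic phase transition on a lattice backbone, assuming greedy Manhattan routing and a fixed forwarding capacity per station; mobile users are abstracted into random source/destination stations, and all parameters are invented rather than taken from the paper:

    ```python
    from collections import deque
    import numpy as np

    def simulate(rate, size=8, capacity=4, steps=300, seed=0):
        """Toy hybrid-network traffic model: a lattice of base stations forwards
        packets one Manhattan hop per step, at most `capacity` per station.
        Returns the number of packets still queued after `steps` steps."""
        rng = np.random.default_rng(seed)
        queues = {(i, j): deque() for i in range(size) for j in range(size)}
        for _ in range(steps):
            # mobile users inject `rate` packets with random source/destination
            for _ in range(rate):
                src = tuple(int(v) for v in rng.integers(size, size=2))
                dst = tuple(int(v) for v in rng.integers(size, size=2))
                if src != dst:
                    queues[src].append(dst)
            arrivals = []
            for node, q in queues.items():
                for _ in range(min(capacity, len(q))):  # limited forwarding capacity
                    dst = q.popleft()
                    if node[0] != dst[0]:               # greedy Manhattan routing
                        nxt = (node[0] + (1 if dst[0] > node[0] else -1), node[1])
                    else:
                        nxt = (node[0], node[1] + (1 if dst[1] > node[1] else -1))
                    if nxt != dst:                      # delivered on arrival
                        arrivals.append((nxt, dst))
            for nxt, dst in arrivals:
                queues[nxt].append(dst)
        return sum(len(q) for q in queues.values())

    low = simulate(rate=5)     # below the critical rate: load stays bounded
    high = simulate(rate=200)  # above it: queues grow without limit
    print(low, high)
    ```

    Sweeping `rate` and watching the residual load jump from bounded to growing is the simplest way to see the free-flow/congestion phase transition the abstract refers to.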

  14. Information theoretic analysis of linear shift-invariant edge-detection operators

    NASA Astrophysics Data System (ADS)

    Jiang, Bo; Rahman, Zia-ur

    2012-06-01

    Generally, the design of digital image processing algorithms and of image gathering devices remains separate. Consequently, the performance of digital image processing algorithms is evaluated without taking into account the influence of the image gathering process. However, experiments show that the image gathering process has a profound impact on the performance of digital image processing and on the quality of the resulting images. Huck et al. proposed a definitive theoretical analysis of visual communication channels, in which the different parts, such as image gathering, processing, and display, are assessed in an integrated manner using Shannon's information theory. We perform an end-to-end, information-theory-based system analysis to assess linear shift-invariant edge-detection algorithms. We evaluate the performance of the different algorithms as a function of the characteristics of the scene and of the parameters, such as sampling and additive noise, that define the image gathering system. The edge-detection algorithm is regarded as having high performance only if the information rate from the scene to the edge image approaches its maximum possible value. This goal can be achieved only by jointly optimizing all processes. Our information-theoretic assessment provides a new tool that allows us to compare different linear shift-invariant edge detectors in a common environment.

  15. Past and Present Large Solid Rocket Motor Test Capabilities

    NASA Technical Reports Server (NTRS)

    Kowalski, Robert R.; Owen, David B., II

    2011-01-01

    A study was performed to identify the current and historical trends in the capability of solid rocket motor testing in the United States. The study focused on test positions capable of testing solid rocket motors of at least 10,000 lbf thrust. Top-level information was collected for two distinct data points plus/minus a few years: 2000 (Y2K) and 2010 (Present). Data was combined from many sources, but primarily focused on data from the Chemical Propulsion Information Analysis Center's Rocket Propulsion Test Facilities Database, and heritage Chemical Propulsion Information Agency/M8 Solid Rocket Motor Static Test Facilities Manual. Data for the Rocket Propulsion Test Facilities Database and heritage M8 Solid Rocket Motor Static Test Facilities Manual is provided to the Chemical Propulsion Information Analysis Center directly from the test facilities. Information for each test cell for each time period was compiled and plotted to produce a graphical display of the changes for the nation, NASA, Department of Defense, and commercial organizations during the past ten years. Major groups of plots include test facility by geographic location, test cells by status/utilization, and test cells by maximum thrust capability. The results are discussed.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bloomquist, R.G.

    District heating and cooling (DHC) can provide multiple opportunities to reduce air emissions associated with space conditioning and electricity generation, which contribute 30% to 50% of all such emissions. When DHC is combined with cogeneration (CHP), maximum reductions in sulfur oxides (SOx), nitrogen oxides (NOx), carbon dioxide (CO2), particulates, and ozone-depleting chlorofluorocarbon (CFC) refrigerants can most effectively be achieved. Although significant improvements in air quality have been documented in Europe and Scandinavia due to DHC and CHP implementation, accurately predicting such improvements has been difficult. Without acceptable quantification methods, regulatory bodies are reluctant to grant air emissions credits, and local community leaders are unwilling to invest in DHC and CHP as preferred methods of providing energy or strategies for air quality improvement. The recent development and release of a number of computer models designed specifically to provide quantification of air emissions that can result from DHC and CHP implementation should help provide local, state, and national policymakers with information vital to increasing support and investment in DHC development.

  17. 5 CFR 338.601 - Prohibition of maximum-age requirements.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... requirements. A maximum-age requirement may not be applied in either competitive or noncompetitive examinations for positions in the competitive service except as provided by: (a) Section 3307 of title 5, United States Code; or (b) Public Law 93-259 which authorizes OPM to establish a maximum-age requirement after...

  18. 5 CFR 338.601 - Prohibition of maximum-age requirements.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... requirements. A maximum-age requirement may not be applied in either competitive or noncompetitive examinations for positions in the competitive service except as provided by: (a) Section 3307 of title 5, United States Code; or (b) Public Law 93-259 which authorizes OPM to establish a maximum-age requirement after...

  19. 75 FR 58505 - Regulation Z; Truth in Lending

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-24

    ... requirement applicable to higher-priced mortgage loans, for loans that exceed the maximum principal balance.... 1639D). For loans that exceed the Freddie Mac maximum principal balance, TILA Section 129D provides that...)). The current maximum principal balance for a mortgage loan to be eligible for purchase by Freddie Mac...

  20. 40 CFR 35.245 - Maximum federal share.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 1 2010-07-01 2010-07-01 false Maximum federal share. 35.245 Section 35.245 Protection of Environment ENVIRONMENTAL PROTECTION AGENCY GRANTS AND OTHER FEDERAL ASSISTANCE... (section 23(a)(2)) § 35.245 Maximum federal share. The Regional Administrator may provide up to 50 percent...
