Sample records for maximum information utilization

  1. 77 FR 27777 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-11

    ... include the bundling of separately billed drugs, clinical laboratory tests, and other items "to maximum... the estimated burden; (3) ways to enhance the quality, utility, and clarity of the information to be... Quality Incentive Program (QIP); Use: The Medicare Prescription Drug, Improvement, and Modernization Act of...

  2. The Role of the United States Book Exchange in the Nationwide Library and Information Services Network. National Program for Libraries and Information Services Related Paper No. 27.

    ERIC Educational Resources Information Center

    Ball, Alice Dulany

    The National Commission on Libraries and Information Science's (NCLIS) nationwide information program is based in part on the sharing of resources. The United States Book Exchange (USBE) and its existing services may have a role in this program, since the USBE's major function is the preservation and maximum utilization of publications through…

  3. Constrained Maximum Likelihood Estimation for Model Calibration Using Summary-level Information from External Big Data Sources

    PubMed Central

    Chatterjee, Nilanjan; Chen, Yi-Hau; Maas, Paige; Carroll, Raymond J.

    2016-01-01

    Information from various public and private data sources of extremely large sample sizes is now increasingly available for research purposes. Statistical methods are needed for utilizing information from such big data sources while analyzing data from individual studies that may collect more detailed information required for addressing specific hypotheses of interest. In this article, we consider the problem of building regression models based on individual-level data from an “internal” study while utilizing summary-level information, such as information on parameters for reduced models, from an “external” big data source. We identify a set of very general constraints that link internal and external models. These constraints are used to develop a framework for semiparametric maximum likelihood inference that allows the distribution of covariates to be estimated using either the internal sample or an external reference sample. We develop extensions for handling complex stratified sampling designs, such as case-control sampling, for the internal study. Asymptotic theory and variance estimators are developed for each case. We use simulation studies and a real data application to assess the performance of the proposed methods in contrast to the generalized regression (GR) calibration methodology that is popular in the sample survey literature. PMID:27570323
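The linkage idea in this abstract can be sketched in a few lines: fit the internal regression subject to the constraint that the implied reduced-model slope matches the externally reported one. This is a hedged toy version (plain equality-constrained least squares on simulated data, with invented parameter names), not the authors' semiparametric estimator.

```python
import numpy as np

# Simulated "internal" study: y depends on two correlated covariates.
rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 1.5 * x2 + rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])

# The "external" big data source reports the reduced-model slope of y on x1
# alone. Omitted-variable algebra links it to the full model:
#   g1 = b1 + b2 * cov(x1, x2) / var(x1)
gamma = np.cov(x1, x2)[0, 1] / np.var(x1)
g1 = 2.0 + 1.5 * gamma                 # pretend this came from the external source
a = np.array([0.0, 1.0, gamma])        # constraint vector: a @ beta = g1

# KKT system for  min ||y - X b||^2  subject to  a^T b = g1
K = np.block([[2 * X.T @ X, a[:, None]],
              [a[None, :], np.zeros((1, 1))]])
rhs = np.concatenate([2 * X.T @ y, [g1]])
sol = np.linalg.solve(K, rhs)
beta_constrained = sol[:3]

print(beta_constrained)                # constrained estimate of (b0, b1, b2)
print(a @ beta_constrained - g1)       # constraint residual (near zero)
```

The constrained fit honours the external summary exactly while still being driven by the individual-level internal data.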

  4. Use of General Principles in Teaching Biochemistry.

    ERIC Educational Resources Information Center

    Fernandez, Rolando Hernandez; Tomey, Agustin Vicedo

    1991-01-01

    Presents Principles of Biochemistry for use as main focus of a biochemistry course. The nine guiding ideas are the principles of continual turnover, macromolecular organization, molecular recognition, multiplicity of utilization, maximum efficiency, gradual change, interrelationship, transformational reciprocity, and information transfer. In use…

  5. 18 CFR 125.3 - Schedule of records and periods of retention.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... and agreements. 4. Accountants' and auditors' reports. Information Technology Management 5. Automatic... licensees (less nuclear). 13.2 Production—Nuclear. 14. Transmission and distribution—Public utilities and... Collection 29. Customers' service applications and contracts. 30. Rate schedules. 31. Maximum demand and...

  6. Handling Missing Data With Multilevel Structural Equation Modeling and Full Information Maximum Likelihood Techniques.

    PubMed

    Schminkey, Donna L; von Oertzen, Timo; Bullock, Linda

    2016-08-01

    With increasing access to population-based data and electronic health records for secondary analysis, missing data are common. In the social and behavioral sciences, missing data frequently are handled with multiple imputation methods or full information maximum likelihood (FIML) techniques, but healthcare researchers have not embraced these methodologies to the same extent and more often use either traditional imputation techniques or complete case analysis, which can compromise power and introduce unintended bias. This article is a review of options for handling missing data, concluding with a case study demonstrating the utility of multilevel structural equation modeling using full information maximum likelihood (MSEM with FIML) to handle large amounts of missing data. MSEM with FIML is a parsimonious and hypothesis-driven strategy to cope with large amounts of missing data without compromising power or introducing bias. This technique is relevant for nurse researchers faced with ever-increasing amounts of electronic data and decreasing research budgets. © 2016 Wiley Periodicals, Inc.
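The core of FIML is that each case contributes the likelihood of whatever it actually observed, so partially observed cases are not discarded. A minimal sketch, using an EM algorithm for a bivariate normal with simulated missingness (not the MSEM software the review discusses):

```python
import numpy as np

# Simulate bivariate normal data with 30% of variable 2 missing at random.
rng = np.random.default_rng(1)
n = 500
true_mu = np.array([1.0, -1.0])
true_cov = np.array([[1.0, 0.6], [0.6, 2.0]])
data = rng.multivariate_normal(true_mu, true_cov, size=n)
miss = rng.random(n) < 0.3
data[miss, 1] = np.nan

mu = np.nanmean(data, axis=0)
cov = np.eye(2)
for _ in range(100):                      # EM iterations
    s1 = np.zeros(2)
    s2 = np.zeros((2, 2))
    for row in data:
        if np.isnan(row[1]):
            # E-step: condition the missing variable on the observed one
            b = cov[1, 0] / cov[0, 0]
            m2 = mu[1] + b * (row[0] - mu[0])
            v2 = cov[1, 1] - b * cov[0, 1]
            x = np.array([row[0], m2])
            s1 += x
            s2 += np.outer(x, x) + np.array([[0.0, 0.0], [0.0, v2]])
        else:
            s1 += row
            s2 += np.outer(row, row)
    mu = s1 / n                           # M-step: update mean and covariance
    cov = s2 / n - np.outer(mu, mu)

print(mu)    # close to the true means despite 30% missingness
```

A complete-case analysis would simply drop the 30% of rows with a missing value; the EM fit above uses them and recovers the full-data parameters.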

  7. Methods for utilizing maximum power from a solar array

    NASA Technical Reports Server (NTRS)

    Decker, D. K.

    1972-01-01

    A preliminary study of maximum power utilization methods was performed for an outer planet spacecraft using an ion thruster propulsion system and a solar array as the primary energy source. The problems which arise from operating the array at or near the maximum power point of its I-V characteristic are discussed. Two closed loop system configurations which use extremum regulators to track the array's maximum power point are presented. Three open loop systems are presented that either: (1) measure the maximum power of each array section and compute the total array power, (2) utilize a reference array to predict the characteristics of the solar array, or (3) utilize impedance measurements to predict the maximum power utilization. The advantages and disadvantages of each system are discussed and recommendations for further development are made.
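The extremum-regulator idea behind closed-loop configurations like these can be sketched as a perturb-and-observe hill climber on a toy array model; the diode constants below are illustrative assumptions, not values from the report.

```python
import numpy as np

def array_current(v, isc=5.0, i0=1e-9, vt=1.5):
    """Toy single-diode solar array model: I as a function of V."""
    return isc - i0 * (np.exp(v / vt) - 1.0)

def array_power(v):
    return v * array_current(v)

# Perturb and observe: step the operating voltage, keep the direction that
# increases power; on a power drop, reverse and shrink the perturbation.
v, step = 10.0, 0.5
p_prev = array_power(v)
for _ in range(200):
    v_new = v + step
    p_new = array_power(v_new)
    if p_new < p_prev:
        step = -step * 0.7
    v, p_prev = v_new, p_new

# Compare against a brute-force search for the true maximum power point.
grid = np.linspace(0.1, 40.0, 4000)
v_true = grid[np.argmax(array_power(grid))]
print(v, v_true)
```

The tracker settles near the true maximum power point without ever needing the full I-V curve, which is the appeal of the extremum-regulator approach.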

  8. A low-power, high-throughput maximum-likelihood convolutional decoder chip for NASA's 30/20 GHz program

    NASA Technical Reports Server (NTRS)

    Mccallister, R. D.; Crawford, J. J.

    1981-01-01

    It is pointed out that the NASA 30/20 GHz program will place in geosynchronous orbit a technically advanced communication satellite which can process time-division multiple access (TDMA) information bursts with a data throughput in excess of 4 Gbps. To guarantee acceptable data quality during periods of signal attenuation, it will be necessary to provide a significant forward error correction (FEC) capability. Convolutional decoding (utilizing maximum-likelihood techniques) was identified as the most attractive FEC strategy. Design trade-offs regarding a maximum-likelihood convolutional decoder (MCD) in a single-chip CMOS implementation are discussed.

  9. Maximum demand charge rates for commercial and industrial electricity tariffs in the United States

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McLaren, Joyce; Gagnon, Pieter; Zimny-Schmitt, Daniel

    NREL has assembled a list of U.S. retail electricity tariffs and their associated demand charge rates for the Commercial and Industrial sectors. The data were obtained from the Utility Rate Database. Keep the following information in mind when interpreting the data: (1) These data were interpreted and transcribed manually from utility tariff sheets, which are often complex. It is a certainty that these data contain errors, and they should therefore only be used as a reference. Actual utility tariff sheets should be consulted if an action requires this type of data. (2) These data only contain tariffs that were entered into the Utility Rate Database. Since not all tariffs are designed in a format that can be entered into the Database, this list is incomplete - it does not contain all tariffs in the United States. (3) These data may have changed since this list was developed. (4) Many of the underlying tariffs have additional restrictions or requirements that are not represented here. For example, they may only be available to the agricultural sector or closed to new customers. (5) If there are multiple demand charge elements in a given tariff, the maximum demand charge is the sum of each of the elements at any point in time. Where tiers were present, the highest rate tier was assumed. The value is a maximum for the year, and may be significantly different from demand charge rates at other times in the year. Utility Rate Database: https://openei.org/wiki/Utility_Rate_Database
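Note (5)'s aggregation rule can be stated as a one-line function; the tariff below is invented for illustration and is not from the Utility Rate Database.

```python
# Maximum demand charge for a tariff with several demand-charge elements:
# sum the elements, taking the highest rate tier of each tiered element.
def max_demand_charge(elements):
    """elements: list of lists of tier rates ($/kW), one list per element."""
    return sum(max(tiers) for tiers in elements)

tariff = [
    [4.50],                 # hypothetical facilities demand charge, flat
    [2.00, 3.25, 5.75],     # hypothetical seasonal demand charge, three tiers
]
print(max_demand_charge(tariff))   # 4.50 + 5.75 = 10.25
```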

  10. 76 FR 12963 - Request for Information (NOT-ES-11-007): Needs and Approaches for Assessing the Human Health...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-09

    ... intramural and extramural research efforts that address the combined health effects of multiple environmental... construed as a funding opportunity or grant program. Input from all interested parties is welcome including... program priorities and recommends funding levels to assure maximum utilization of available resources in...

  11. Reading Assessment: A Primer for Teachers and Tutors.

    ERIC Educational Resources Information Center

    Caldwell, JoAnne Schudt

    This primer provides the basic information that teachers and tutors need to get started on the complex process of reading assessment. Designed for maximum utility in today's standards-driven classroom, the primer presents simple, practical assessment strategies that are based on theory and research. It takes teachers step by step through learning…

  12. 75 FR 79982 - Authority To Designate Financial Market Utilities as Systemically Important

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-12-21

    ....regulations.gov . Electronic submission of comments allows the commenter maximum time to prepare and submit a... Research; the Director of the Federal Insurance Office; and a State insurance commissioner, a State banking... Designation 1. What quantitative and qualitative information should the Council use to measure the factors it...

  13. A numerical identifiability test for state-space models--application to optimal experimental design.

    PubMed

    Hidalgo, M E; Ayesa, E

    2001-01-01

    This paper describes a mathematical tool for identifiability analysis, easily applicable to high-order non-linear systems modelled in state-space and implementable in simulators with a time-discrete approach. This procedure also permits a rigorous analysis of the expected estimation errors (average and maximum) in calibration experiments. The methodology is based on the recursive numerical evaluation of the information matrix during the simulation of a calibration experiment and on the setting-up of a group of information parameters based on geometric interpretations of this matrix. As an example of the utility of the proposed test, the paper presents its application to an optimal experimental design of ASM Model No. 1 calibration, in order to estimate the maximum specific growth rate μH and the concentration of heterotrophic biomass XBH.
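The numerical recipe the abstract describes - accumulate output sensitivities into an information matrix, then read identifiability and error bounds off it - can be sketched with a toy model; the exponential model and noise level here are assumptions, not ASM Model No. 1.

```python
import numpy as np

def model(t, p):
    """Toy two-parameter output: a * exp(-b * t)."""
    a, b = p
    return a * np.exp(-b * t)

def sensitivities(t, p, eps=1e-6):
    """Finite-difference d(output)/d(parameter) at each sample time."""
    base = model(t, p)
    cols = []
    for i in range(len(p)):
        dp = np.array(p, dtype=float)
        dp[i] += eps
        cols.append((model(t, dp) - base) / eps)
    return np.column_stack(cols)

t = np.linspace(0.0, 5.0, 50)          # simulated calibration experiment
p = [2.0, 0.8]
S = sensitivities(t, p)
fim = S.T @ S / 0.1**2                 # information matrix, noise sd = 0.1

eigvals = np.linalg.eigvalsh(fim)
print(eigvals)                         # no near-zero eigenvalue: identifiable
cov_lower_bound = np.linalg.inv(fim)   # Cramér-Rao bound on estimation error
print(np.sqrt(np.diag(cov_lower_bound)))
```

A near-zero eigenvalue of the information matrix would flag a parameter combination the experiment cannot estimate, which is exactly what such a test looks for when ranking candidate experimental designs.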

  14. Ion-thruster propellant utilization

    NASA Technical Reports Server (NTRS)

    Kaufman, H. R.

    1971-01-01

    The evaluation and understanding of maximum propellant utilization, with mercury used as the propellant are presented. The primary-electron region in the ion chamber of a bombardment thruster is analyzed at maximum utilization. The results of this analysis, as well as experimental data from a range of ion-chamber configurations, show a nearly constant loss rate for unionized propellant at maximum utilization over a wide range of total propellant flow rate. The discharge loss level of 1000 eV/ion was used as a definition of maximum utilization, but the exact level of this definition has no effect on the qualitative results and little effect on the quantitative results. There are obvious design applications for the results of this investigation, but the results are particularly significant whenever efficient throttled operation is required.

  15. A Method of Maximum Power Control in Single-phase Utility Interactive Photovoltaic Generation System by using PWM Current Source Inverter

    NASA Astrophysics Data System (ADS)

    Neba, Yasuhiko

    This paper deals with maximum power point tracking (MPPT) control of photovoltaic generation with a single-phase utility interactive inverter. The photovoltaic arrays are connected to the utility through a PWM current source inverter. The use of pulsating dc current and voltage allows the maximum power point to be searched. The inverter can regulate the array voltage and hold the arrays at the maximum power point. This paper gives the control method and the experimental results.

  16. The Clifton Youth StrengthsExplorer Assessment: Identifying the Talents of Today's Youth

    ERIC Educational Resources Information Center

    Educational Horizons, 2006

    2006-01-01

    The aim of many educators is to help youth reach their maximum potential. The Clifton Youth StrengthsExplorer gives teachers a tool to help identify the talents of their students, as well as actionable suggestions for utilizing those talents. Such information can help teachers to individualize the ways in which they respond to youths, and the…

  17. The development of the ATC selection battery : a new procedure to make maximum use of available information when correcting correlations for restriction in range due to selection.

    DOT National Transportation Integrated Search

    1978-09-01

    A five-test selection battery was given to select Air Traffic Controllers. Data were collected on two new tests being considered for incorporation into the battery. To determine the utility of the old and new tests, it is necessary to correlate the t...

  18. Maximum saliency bias in binocular fusion

    NASA Astrophysics Data System (ADS)

    Lu, Yuhao; Stafford, Tom; Fox, Charles

    2016-07-01

    Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.
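The three selection rules the abstract compares can be illustrated on a toy two-percept stimulus. The posteriors and utilities are invented, and expected utility is simplified to posterior times the utility of the chosen percept, so this is only a schematic of the competing hypotheses, not the study's model fit.

```python
import numpy as np

post = np.array([0.55, 0.45])    # p(percept | sense data), invented
util = np.array([+0.2, -1.5])    # signed utility of acting on each percept

# Maximum a posteriori: pick the most probable percept.
map_choice = int(np.argmax(post))
# Maximum expected utility (simplified): weight posteriors by signed utility.
meu_choice = int(np.argmax(post * util))
# Maximum salience: weight posteriors by unsigned utility magnitude,
# so a strongly negative outcome also biases perception toward that percept.
salience_choice = int(np.argmax(post * np.abs(util)))

print(map_choice, meu_choice, salience_choice)
```

With these numbers only the salience rule selects the second percept, which is the kind of divergence between hypotheses the experiment exploits.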

  19. Ground-water resources in the tri-state region adjacent to the Lower Delaware River

    USGS Publications Warehouse

    Barksdale, Henry C.; Greenman, David W.; Lang, Solomon Max; Hilton, George Stockbridge; Outlaw, Donald E.

    1958-01-01

    The maximum beneficial utilization of the ground-water resources cannot be accomplished in haphazard fashion. It must be planned and controlled on the basis of sound, current information about the hydrology of the various aquifers. Continued and, in some areas, intensified investigations of the ground-water resources of the region should form the basis for such planning and control.

  20. A perspective of synthetic aperture radar for remote sensing

    NASA Technical Reports Server (NTRS)

    Skolnik, M. I.

    1978-01-01

    The characteristics and capabilities of synthetic aperture radar are discussed so as to identify those features particularly unique to SAR. SAR and optical images are compared. The SAR is an example of radar that provides more information about a target than simply its location. It is the spatial resolution and imaging capability of SAR that have made its application of interest, especially from spaceborne platforms. However, for maximum utility to remote sensing, it was proposed that other information be extracted from SAR data, such as the cross section with frequency and polarization.

  1. Assessing methanotrophy and carbon fixation for biofuel production by Methanosarcina acetivorans

    DOE PAGES

    Nazem-Bokaee, Hadi; Gopalakrishnan, Saratram; Ferry, James G.; ...

    2016-01-17

    Methanosarcina acetivorans is a model archaeon with renewed interest due to its unique reversible methane production pathways. However, the mechanism and relevant pathways implicated in (co)utilizing novel carbon substrates in this organism are still not fully understood. This paper provides a comprehensive inventory of thermodynamically feasible routes for anaerobic methane oxidation, co-reactant utilization, and maximum carbon yields of major biofuel candidates by M. acetivorans. Here, an updated genome-scale metabolic model of M. acetivorans is introduced (iMAC868 containing 868 genes, 845 reactions, and 718 metabolites) by integrating information from two previously reconstructed metabolic models (i.e., iVS941 and iMB745), modifying 17 reactions, adding 24 new reactions, and revising 64 gene-protein-reaction associations based on newly available information. The new model establishes improved predictions of growth yields on native substrates and is capable of correctly predicting the knockout outcomes for 27 out of 28 gene deletion mutants. By tracing a bifurcated electron flow mechanism, the iMAC868 model predicts thermodynamically feasible (co)utilization pathway of methane and bicarbonate using various terminal electron acceptors through the reversal of the aceticlastic pathway. In conclusion, this effort paves the way in informing the search for thermodynamically feasible ways of (co)utilizing novel carbon substrates in the domain Archaea.

  2. Security and Interdependency in a Public Cloud: A Game Theoretic Approach

    DTIC Science & Technology

    2014-08-29

    maximum utility can be reached (i.e., Pareto efficiency). However, the examples of perverse incentives and information inequality (where this feedback...interdependent structure. Cloud computing gives way to two types of interdependent relationships: cloud host-to-client and cloud client-to-client... Client-to-client interdependency is much less studied than the above-mentioned cloud host-to-client relationship. Although it can still carry the

  3. Performance Investigations of a Large Centrifugal Compressor from an Experimental Turbojet Engine

    NASA Technical Reports Server (NTRS)

    Ginsburg, Ambrose; Creagh, John W. R.; Ritter, William K.

    1948-01-01

    An investigation was conducted on a large centrifugal compressor from an experimental turbojet engine to determine the performance of the compressor and to obtain fundamental information on the aerodynamic problems associated with large centrifugal-type compressors. The results of the research conducted on the compressor indicated that the compressor would not meet the desired engine-design air-flow requirements (78 lb/sec) because of an air-flow restriction in the vaned collector (diffuser). Revision of the vaned collector resulted in an increased air-flow capacity over the speed range and showed improved matching of the impeller and diffuser components. At maximum flow, the original compressor utilized approximately 90 percent of the available geometric throat area at the vaned-collector inlet and the revised compressor utilized approximately 94 percent, regardless of impeller speed. The ratio of the maximum weight flows of the revised and original compressors was less than the ratio of effective critical throat areas of the two compressors because of the large pressure losses in the impeller near the impeller inlet, and the difference increased with an increase in impeller speed. In order to further increase the pressure ratio and maximum weight flow of the compressor, the impeller must be modified to eliminate the pressure losses therein.

  4. Extreme Maximum Land Surface Temperatures.

    NASA Astrophysics Data System (ADS)

    Garratt, J. R.

    1992-09-01

    There are numerous reports in the literature of observations of land surface temperatures. Some of these, almost all made in situ, reveal maximum values in the 50°-70°C range, with a few, made in desert regions, near 80°C. Consideration of a simplified form of the surface energy balance equation, utilizing likely upper values of absorbed shortwave flux (1000 W m⁻²) and screen air temperature (55°C), suggests that surface temperatures in the vicinity of 90°-100°C may occur for dry, darkish soils of low thermal conductivity (0.1-0.2 W m⁻¹ K⁻¹). Numerical simulations confirm this and suggest that temperature gradients in the first few centimeters of soil may reach 0.5°-1°C mm⁻¹ under these extreme conditions. The study bears upon the intrinsic interest of identifying extreme maximum temperatures and yields interesting information regarding the comfort zone of animals (including man).
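The balance the abstract invokes can be reproduced roughly by bisection: absorbed shortwave is shed as longwave emission plus a sensible heat term, with conduction into the dry soil neglected. The emissivity and transfer coefficient below are illustrative guesses, not values from the paper.

```python
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m-2 K-4

def residual(ts, s_abs=1000.0, ta=328.15, emiss=0.9, h=2.0):
    """Energy balance residual: absorbed shortwave minus outgoing terms.
    ta = 55 degC screen air temperature in kelvin; emiss and h assumed."""
    return s_abs - emiss * SIGMA * ts**4 - h * (ts - ta)

# Bisect for the surface temperature that closes the balance.
lo, hi = 300.0, 420.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if residual(mid) > 0:
        lo = mid
    else:
        hi = mid

ts_c = 0.5 * (lo + hi) - 273.15
print(ts_c)              # lands in the 90-100 degC range the paper derives
```

With a weak sensible-heat sink, emission alone must carry nearly the full 1000 W m⁻², which is why the equilibrium surface temperature climbs well above the air temperature.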

  5. Natural Resource Information System. Volume 1: Overall description

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A prototype computer-based Natural Resource Information System was designed which could store, process, and display data of maximum usefulness to land management decision making. The system includes graphic input and display, the use of remote sensing as a data source, and it is useful at multiple management levels. A survey established current decision making processes and functions, information requirements, and data collection and processing procedures. The applications of remote sensing data and processing requirements were established. Processing software was constructed and a data base established using high-altitude imagery and map coverage of selected areas of SE Arizona. Finally a demonstration of system processing functions was conducted utilizing material from the data base.

  6. Internet resources for the anaesthesiologist.

    PubMed

    Johnson, Edward

    2012-05-01

    There is considerable useful information about anaesthesia available on the World Wide Web. However, at present, it is very incomplete and scattered around many sites. Many anaesthetists find it difficult to get the right information they need because of the sheer volume of information available on the internet. This article starts with the basics of the Internet, how to utilize the search engine at the maximum and presents a comprehensive list of important websites. These important websites, which are felt to offer high educational value for the anaesthesiologists, have been selected from an extensive search on the Internet. Top-rated anaesthesia websites, web blogs, forums, societies, e-books, e-journals and educational resources are elaborately discussed with relevant URLs.

  7. Internet resources for the anaesthesiologist

    PubMed Central

    Johnson, Edward

    2012-01-01

    There is considerable useful information about anaesthesia available on the World Wide Web. However, at present, it is very incomplete and scattered around many sites. Many anaesthetists find it difficult to get the right information they need because of the sheer volume of information available on the internet. This article starts with the basics of the Internet, how to utilize the search engine at the maximum and presents a comprehensive list of important websites. These important websites, which are felt to offer high educational value for the anaesthesiologists, have been selected from an extensive search on the Internet. Top-rated anaesthesia websites, web blogs, forums, societies, e-books, e-journals and educational resources are elaborately discussed with relevant URLs. PMID:22923818

  8. Guidelines for Management Information Systems in Canadian Health Care Facilities

    PubMed Central

    Thompson, Larry E.

    1987-01-01

    The MIS Guidelines are a comprehensive set of standards for health care facilities for the recording of staffing, financial, workload, patient care and other management information. The Guidelines enable health care facilities to develop management information systems which identify resources, costs and products to more effectively forecast and control costs and utilize resources to their maximum potential as well as provide improved comparability of operations. The MIS Guidelines were produced by the Management Information Systems (MIS) Project, a cooperative effort of the federal and provincial governments, provincial hospital/health associations, under the authority of the Canadian Federal/Provincial Advisory Committee on Institutional and Medical Services. The Guidelines are currently being implemented on a “test” basis in ten health care facilities across Canada and portions integrated in government reporting as finalized.

  9. Exploiting Acoustic and Syntactic Features for Automatic Prosody Labeling in a Maximum Entropy Framework

    PubMed Central

    Sridhar, Vivek Kumar Rangarajan; Bangalore, Srinivas; Narayanan, Shrikanth S.

    2009-01-01

    In this paper, we describe a maximum entropy-based automatic prosody labeling framework that exploits both language and speech information. We apply the proposed framework to both prominence and phrase structure detection within the Tones and Break Indices (ToBI) annotation scheme. Our framework utilizes novel syntactic features in the form of supertags and a quantized acoustic–prosodic feature representation that is similar to linear parameterizations of the prosodic contour. The proposed model is trained discriminatively and is robust in the selection of appropriate features for the task of prosody detection. The proposed maximum entropy acoustic–syntactic model achieves pitch accent and boundary tone detection accuracies of 86.0% and 93.1% on the Boston University Radio News corpus, and, 79.8% and 90.3% on the Boston Directions corpus. The phrase structure detection through prosodic break index labeling provides accuracies of 84% and 87% on the two corpora, respectively. The reported results are significantly better than previously reported results and demonstrate the strength of maximum entropy model in jointly modeling simple lexical, syntactic, and acoustic features for automatic prosody labeling. PMID:19603083

  10. Evaluation of Maximal Oxygen Uptake (VO2max) and Submaximal Estimates of VO2max Before, During and After Long Duration ISS Missions

    NASA Technical Reports Server (NTRS)

    Moore, Alan; Evetts, Simon; Feiveson, Alan; Lee, Stuart; McCleary, Frank; Platts, Steven

    2009-01-01

    NASA's Human Research Program Integrated Research Plan (HRP-47065) serves as a road-map identifying critically needed information for future space flight operations (Lunar, Martian). VO2max (often termed aerobic capacity) reflects the maximum rate at which oxygen can be taken up and utilized by the body during exercise. Lack of in-flight and immediate postflight VO2max measurements was one area identified as a concern. The risk associated with not knowing this information is: Unnecessary Operational Limitations due to Inaccurate Assessment of Cardiovascular Performance (HRP-47065).

  11. Three-class ROC analysis--the equal error utility assumption and the optimality of three-class ROC surface using the ideal observer.

    PubMed

    He, Xin; Frey, Eric C

    2006-08-01

    Previously, we have developed a decision model for three-class receiver operating characteristic (ROC) analysis based on decision theory. The proposed decision model maximizes the expected decision utility under the assumption that incorrect decisions have equal utilities under the same hypothesis (equal error utility assumption). This assumption reduced the dimensionality of the "general" three-class ROC analysis and provided a practical figure-of-merit to evaluate the three-class task performance. However, it also limits the generality of the resulting model because the equal error utility assumption will not apply for all clinical three-class decision tasks. The goal of this study was to investigate the optimality of the proposed three-class decision model with respect to several other decision criteria. In particular, besides the maximum expected utility (MEU) criterion used in the previous study, we investigated the maximum-correctness (MC) (or minimum-error), maximum likelihood (ML), and Neyman-Pearson (N-P) criteria. We found that by making assumptions for both MEU and N-P criteria, all decision criteria lead to the previously proposed three-class decision model. As a result, this model maximizes the expected utility under the equal error utility assumption, maximizes the probability of making correct decisions, satisfies the N-P criterion in the sense that it maximizes the sensitivity of one class given the sensitivities of the other two classes, and the resulting ROC surface contains the maximum likelihood decision operating point. While the proposed three-class ROC analysis model is not optimal in the general sense due to the use of the equal error utility assumption, the range of criteria for which it is optimal increases its applicability for evaluating and comparing a range of diagnostic systems.
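Under a utility matrix, the expected-utility decision is a simple argmax over utility-weighted posteriors. A sketch with invented utilities chosen to satisfy the equal error utility assumption (in each column, the two wrong-decision utilities are equal):

```python
import numpy as np

# U[decide, truth]: diagonal entries reward correct decisions; within each
# column the two incorrect decisions carry the same (invented) utility.
U = np.array([
    [1.0, -0.5, -0.2],
    [-0.8, 1.0, -0.2],
    [-0.8, -0.5, 1.0],
])
post = np.array([0.2, 0.5, 0.3])    # posterior over the three classes

expected_utility = U @ post         # expected utility of each decision
decision = int(np.argmax(expected_utility))
print(decision, expected_utility)
```

Here the middle class wins despite the asymmetric penalties; varying the two free error utilities sweeps out the decision operating points that make up the three-class ROC surface.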

  12. Progress in the Development and Utilization of Ferrography

    DTIC Science & Technology

    1975-12-31

    in greater detail in Figs 5 and 6, could be due to some form of chemical attack or the incomplete balling-up of many small particles. The size of... When, however, the wearing surfaces started to fail and become rougher with time, the maximum size of the particles increased by as much as one or two orders of magnitude. ... Prepared for: Office of Naval Research, 31 December 1975. DISTRIBUTED BY: National Technical Information Service, U.S. DEPARTMENT OF COMMERCE

  13. Clinical information transfer and data capture in the acute myocardial infarction pathway: an observational study.

    PubMed

    Kesavan, Sujatha; Kelay, Tanika; Collins, Ruth E; Cox, Benita; Bello, Fernando; Kneebone, Roger L; Sevdalis, Nick

    2013-10-01

    Acute myocardial infarctions (MIs) or heart attacks are the result of a complete or an incomplete occlusion of the lumen of the coronary artery with a thrombus. Prompt diagnosis and early coronary intervention results in maximum myocardial salvage, hence time to treat is of the essence. Adequate, accurate and complete information is vital during the early stages of admission of an MI patient and can significantly impact the quality and safety of patient care. This study aimed to record how clinical information is captured between different clinical teams during a patient's journey through the MI care pathway, and to review the flow of information within that pathway. A prospective, descriptive, structured observational study to assess (i) current clinical information systems (CIS) utilization and (ii) real-time information availability within an acute cardiac care setting was carried out. Completeness and availability of patient information capture across four key stages of the MI care pathway were assessed prospectively. Thirteen separate information systems were utilized during the four phases of the MI pathway. Observations revealed fragmented CIS utilization, with users accessing an average of six systems to gain a complete set of patient information. Data capture was found to vary between each pathway stage and in both patient cohort risk groupings. The highest level of information completeness (100%) was observed only in the discharge stage of the MI care pathway. The lowest level of information completeness (58%) was observed in the admission stage. The study highlights fragmentation, CIS duplication, and discrepancies in the current clinical information capture and data transfer across the MI care pathway in an acute cardiac care setting. The development of an integrated and user-friendly electronic data capture and transfer system would reduce duplication and would facilitate efficient and complete information provision at the point of care. © 2012 John Wiley & Sons Ltd.

  14. Fast depth decision for HEVC inter prediction based on spatial and temporal correlation

    NASA Astrophysics Data System (ADS)

    Chen, Gaoxing; Liu, Zhenyu; Ikenaga, Takeshi

    2016-07-01

    High efficiency video coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by roughly doubling compression efficiency. To enhance compression accuracy, partition sizes in HEVC range from 4x4 to 64x64. However, the manifold partition sizes dramatically increase encoding complexity. This paper proposes a fast depth decision based on spatial and temporal correlation: spatial correlation utilizes coding tree unit (CTU) splitting information, and temporal correlation utilizes the CTU indicated by the motion vector predictor in inter prediction, to determine the maximum depth searched in each CTU. Experimental results show that the proposed method saves about 29.1% of the original processing time with a 0.9% BD-bitrate increase on average.
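    The spatio-temporal depth cap can be illustrated with a minimal sketch (hypothetical neighbour depth values; the paper's actual decision rule and thresholds are more elaborate):

    ```python
    # Sketch of a spatio-temporal fast depth decision for HEVC.
    # Neighbour depths are hypothetical inputs, not the paper's exact rule.
    def predict_max_depth(left_depth, above_depth, colocated_depth, full_depth=3):
        """Cap the CU quadtree search depth for the current CTU using the
        maximum depth reached by spatially adjacent CTUs and the temporally
        co-located CTU pointed to by the motion vector predictor."""
        candidates = [d for d in (left_depth, above_depth, colocated_depth)
                      if d is not None]
        if not candidates:          # no correlation information: search everything
            return full_depth
        # Allow one level beyond the deepest neighbour, clamped to the legal range.
        return min(full_depth, max(candidates) + 1)
    ```

    Capping the search this way skips the deepest partition levels whenever no neighbouring CTU needed them, which is where the reported encoding-time savings come from.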

  15. Transformer overload and bubble evolution: Proceedings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Addis, G.; Lindgren, S.

    1988-06-01

    The EPRI workshop on Transformer Overload Characteristics and Bubble Evolution was held to review the findings of investigations over the past 7-8 years to determine whether enough information is now available for utilities to establish safe loading practices. Sixteen papers were presented, including a utility review, physical and dielectric effects of gas and bubble formation from cellulose-insulated transformers, transformer life characteristics, gas bubble studies and impulse tests on distribution transformers, mathematical modeling of bubble evolution, transformer overload characteristics, variation of PD-strength for oil-paper insulation, a survey on maximum safe operating hot spot temperature, and overload management. The meeting concluded with a general discussion covering the existing state of knowledge and the need for additional research. Sixteen papers have been cataloged separately.

  16. Kinetic study on anaerobic oxidation of methane coupled to denitrification.

    PubMed

    Yu, Hou; Kashima, Hiroyuki; Regan, John M; Hussain, Abid; Elbeshbishy, Elsayed; Lee, Hyung-Sool

    2017-09-01

    Monod kinetic parameters provide information required for kinetic analysis of anaerobic oxidation of methane coupled to denitrification (AOM-D). This information is critical for engineering AOM-D processes in wastewater treatment facilities. We first experimentally determined Monod kinetic parameters for an AOM-D enriched culture and obtained the following values: maximum specific growth rate (μmax) 0.121/d, maximum substrate-utilization rate (qmax) 28.8 mmol CH4/g cells-d, half-maximum-rate substrate concentration (Ks) 83 μM CH4, growth yield (Y) 4.76 g cells/mol CH4, decay coefficient (b) 0.031/d, and threshold substrate concentration (Smin) 28.8 μM CH4. Clone library analysis of 16S rRNA and mcrA gene fragments suggested that AOM-D reactions might have occurred via syntrophic interaction between denitrifying bacteria (e.g., Ignavibacterium, Acidovorax, and Pseudomonas spp.) and hydrogenotrophic methanogens (Methanobacterium spp.), supporting reverse methanogenesis-dependent AOM-D in our culture. High μmax and qmax and low Ks for the AOM-D enrichment imply that AOM-D could play a significant role in mitigating atmospheric methane efflux. In addition, these kinetic features suggest that engineered AOM-D systems may provide a sustainable alternative for nitrogen removal in wastewater treatment. Copyright © 2017 Elsevier Inc. All rights reserved.
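    As a rough numerical illustration, the reported Monod parameters can be plugged into the standard rate expressions (the unit conversions and the net-growth form below are our assumptions, not the paper's code):

    ```python
    def monod_rates(S, q_max=28.8, K_s=83e-3, Y=4.76e-3, b=0.031):
        """Monod-type rates using the paper's reported parameter values,
        with units converted so S is in mmol CH4/L (83 uM = 0.083 mmol/L,
        4.76 g cells/mol = 4.76e-3 g cells/mmol).
        Returns (q, mu_net): specific CH4-utilization rate
        [mmol CH4/g cells-d] and net specific growth rate [1/d]."""
        q = q_max * S / (K_s + S)    # substrate-utilization kinetics
        mu_net = Y * q - b           # growth from utilization, minus decay
        return q, mu_net
    ```

    At methane concentrations well above Ks the utilization rate approaches qmax, while mu_net crosses zero near the reported threshold concentration Smin, which is the consistency the sketch is meant to show.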

  17. Covariance Manipulation for Conjunction Assessment

    NASA Technical Reports Server (NTRS)

    Hejduk, M. D.

    2016-01-01

    The manipulation of space object covariances to try to provide additional or improved information to conjunction risk assessment is not an uncommon practice. Types of manipulation include fabricating a covariance when it is missing or unreliable to force the probability of collision (Pc) to a maximum value ('PcMax'), scaling a covariance to try to improve its realism or see the effect of covariance volatility on the calculated Pc, and constructing the equivalent of an epoch covariance at a convenient future point in the event ('covariance forecasting'). In bringing these methods to bear for Conjunction Assessment (CA) operations, however, some do not remain fully consistent with best practices for conducting risk management, some seem to be of relatively low utility, and some require additional information before they can contribute fully to risk analysis. This study describes some basic principles of modern risk management (following the Kaplan construct) and then examines the PcMax and covariance forecasting paradigms for alignment with these principles; it then further examines the expected utility of these methods in the modern CA framework. Both paradigms are found to be not without utility, but only in situations that are somewhat carefully circumscribed.
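    One widely cited form of the 'PcMax' idea has a closed form under a small hard-body, isotropic encounter-plane approximation; the sketch below is that textbook simplification (our choice of form, not the operational CA algorithm):

    ```python
    import math

    def pc_collision(R, d, s2):
        """Small hard-body approximation to the probability of collision for
        an isotropic encounter-plane covariance with variance s2, miss
        distance d, and combined hard-body radius R."""
        return (R * R / (2.0 * s2)) * math.exp(-d * d / (2.0 * s2))

    def pc_max(R, d):
        """Maximum of pc_collision over all covariance scalings. Setting the
        derivative with respect to s2 to zero gives the optimum s2 = d^2/2,
        hence Pc_max = R^2 / (e * d^2)."""
        return R * R / (math.e * d * d)
    ```

    Fabricating a covariance to force this maximum yields a conservative Pc when the real covariance is missing or unreliable, which is the first manipulation the abstract describes.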

  18. Quantum Parameter Estimation: From Experimental Design to Constructive Algorithm

    NASA Astrophysics Data System (ADS)

    Yang, Le; Chen, Xi; Zhang, Ming; Dai, Hong-Yi

    2017-11-01

    In this paper we design a two-step scheme to estimate the model parameter ω0 of a quantum system: first we utilize the Fisher information with respect to an intermediate variable v = cos(ω0 t) to determine an optimal initial state and to seek optimal parameters of the POVM measurement operators; second we explore how to estimate ω0 from v by choosing t when a priori knowledge of ω0 is available. Our optimal initial state achieves the maximum quantum Fisher information. The formulation of the optimal time t is obtained and the complete algorithm for parameter estimation is presented. We further explore how the lower bound of the estimation deviation depends on the a priori information of the model. Supported by the National Natural Science Foundation of China under Grant Nos. 61273202, 61673389, and 61134008

  19. Maximum Entropy Principle for Transportation

    NASA Astrophysics Data System (ADS)

    Bilich, F.; DaSilva, R.

    2008-11-01

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
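    For contrast with the paper's constraint-free dependence formulation, the standard doubly-constrained entropy-maximizing trip-distribution model can be sketched via iterative balancing (illustrative deterrence function and parameter values):

    ```python
    import math

    def entropy_trip_distribution(O, D, cost, beta=0.1, iters=200):
        """Doubly-constrained entropy-maximizing trip distribution (the
        standard objective-with-constraints formulation the paper contrasts
        with its dependence formulation):
            T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij),
        with balancing factors A, B found by iterative proportional fitting
        so that row sums match origins O and column sums match destinations D."""
        n, m = len(O), len(D)
        f = [[math.exp(-beta * cost[i][j]) for j in range(m)] for i in range(n)]
        A = [1.0] * n
        B = [1.0] * m
        for _ in range(iters):  # alternate row and column balancing
            A = [1.0 / sum(f[i][j] * B[j] * D[j] for j in range(m)) for i in range(n)]
            B = [1.0 / sum(f[i][j] * A[i] * O[i] for i in range(n)) for j in range(m)]
        return [[A[i] * O[i] * B[j] * D[j] * f[i][j] for j in range(m)] for i in range(n)]
    ```

    In the paper's dependence formulation the information carried here by the constraints (and by factors too awkward to express as constraints) is instead encoded in regression-estimated dependence coefficients.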

  20. Lessons Learned from the Deployment of a Hydrologic Science Observations Data Model

    NASA Astrophysics Data System (ADS)

    Beran, B.; Valentine, D.; Zaslavsky, I.; van Ingen, C.

    2007-12-01

    The CUAHSI Hydrologic Information System project is developing information technology infrastructure to support hydrologic science. The CUAHSI Observations Data Model (ODM) is a data model that stores hydrologic observations in a system designed to optimize data retrieval for integrated analysis of information collected by multiple investigators. ODM v1 provides a distinct view into what information the community has determined is important to store, and what views of the data the community needs. As we began to work with ODM v1, we discovered problems with the approach of tightly linking the community views of data to the database model. Design decisions for ODM v1 hindered the ability to utilize the data model as the aggregated information catalog needed for the cyberinfrastructure. Different development groups took different approaches to populating the data model and handling its complexity, ranging from populating the ODM with a bare minimum of constraints to creating a fully constrained data model. This made the integration of different tools difficult. In the end, we decided to utilize the fully populated model, which ensures maximum compatibility with the data sources. Groups also discovered that while the data model's central concept was optimized for retrieval of individual observations, in practice the concept of a data series is better for managing data, yet there is no link between data series and data values in ODM v1. We are beginning to develop ODM v2 as a series of profiles. By utilizing profiles, we intend to make the core information model smaller, more manageable, and simpler to understand and populate. We intend to keep the community semantics, improve the linkages between data series and data values, and enhance data discovery for the CUAHSI cyberinfrastructure.

  1. On the error probability of general tree and trellis codes with applications to sequential decoding

    NASA Technical Reports Server (NTRS)

    Johannesson, R.

    1973-01-01

    An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random binary tree codes is derived and shown to be independent of the length of the tree. An upper bound on the average error probability for maximum-likelihood decoding of the ensemble of random L-branch binary trellis codes of rate R = 1/n is derived which separates the effects of the tail length T and the memory length M of the code. It is shown that the bound is independent of the length L of the information sequence. This implication is investigated by computer simulations of sequential decoding utilizing the stack algorithm. These simulations confirm the implication and further suggest an empirical formula for the true undetected decoding error probability with sequential decoding.

  2. Straight and chopped dc performance data for a General Electric 5BT 2366C10 motor and an EV-1 controller

    NASA Technical Reports Server (NTRS)

    Edie, P. C.

    1981-01-01

    Performance data on the General Electric 5BT 2366C10 series wound dc motor and EV-1 Chopper Controller is supplied for the electric vehicle manufacturer. Data is provided for both straight and chopped dc input to the motor, at 2 motor temperature levels. Testing was done at 6 voltage increments to the motor, and 2 voltage increments to the controller. Data results are presented in both tabular and graphical forms. Tabular information includes motor voltage and current input data, motor speed and torque output data, power data and temperature data. Graphical information includes torque-speed, motor power output-speed, torque-current, and efficiency-speed plots under the various operating conditions. The data resulting from this testing shows the speed-torque plots to have the most variance with operating temperature. The maximum motor efficiency is between 86% and 87%, regardless of temperature or mode of operation. When the chopper is utilized, maximum motor efficiency occurs when the chopper duty cycle approaches 100%.

  3. Maternal and child health and family planning service utilization in Guatemala: implications for service integration.

    PubMed

    Seiber, Eric E; Hotchkiss, David R; Rous, Jeffrey J; Berruti, Andrés A

    2005-07-01

    Does the utilization of modern maternal and child health (MCH) services influence subsequent contraceptive use? The answer to this question holds important implications for proposals which advocate MCH and family planning service integration. This study uses data from the 1995/6 Guatemalan Demographic Health Survey and its 1997 Providers Census to test the influence of MCH service utilization on individual contraceptive use decisions. We use a full-information maximum likelihood regression model to control for unobserved heterogeneity. This model produces estimates of the MCH effect, independent of individual women's underlying receptiveness to MCH and contraceptive messages. The results of the analysis indicate that the intensity of MCH service use is indeed positively associated with subsequent contraceptive use among Guatemalan women, even after controlling for observed and unobserved individual- , household- , and community-level factors. Importantly, this finding holds even after controlling for the unobserved factors that 'predispose' some women to use both types of services. Simulations reveal that, for these Guatemalan women, key determinants such as age and primary schooling work indirectly through MCH service use to increase contraceptive utilization.

  4. [Geographical coverage of the Mexican Healthcare System and a spatial analysis of utilization of its General Hospitals in 1998].

    PubMed

    Hernández-Avila, Juan E; Rodríguez, Mario H; Rodríguez, Norma E; Santos, René; Morales, Evangelina; Cruz, Carlos; Sepúlveda-Amor, Jaime

    2002-01-01

    To describe the geographical coverage of the Mexican Healthcare System (MHS) services and to assess the utilization of its General Hospitals. A Geographic Information System (GIS) was used to include sociodemographic data by locality, the geographical location of all MHS healthcare services, and data on hospital discharge records. A maximum likelihood estimation model was developed to assess the utilization levels of 217 MHS General Hospitals. The model included data on human resources, additional infrastructure, and the population within a 25 km radius. In 1998, 10,806 localities with 72 million inhabitants had at least one public healthcare unit, and 97.2% of the population lived within 50 km of a healthcare unit; however, over 18 million people lived in rural localities without a healthcare unit. The mean annual hospital occupation rate was 48.5 +/- 28.5 per 100 bed/years, with high variability within and between states. Hospital occupation was significantly associated with the number of physicians in the unit, and in the Mexican Institute of Social Security units utilization was associated with additional health infrastructure, and with the population's poverty index. GIS analysis allows improved estimation of the coverage and utilization of MHS hospitals.

  5. Lack of utility of a decision support system to mitigate delays in admission from the operating room to the postanesthesia care unit.

    PubMed

    Ehrenfeld, Jesse M; Dexter, Franklin; Rothman, Brian S; Minton, Betty Sue; Johnson, Diane; Sandberg, Warren S; Epstein, Richard H

    2013-12-01

    When the phase I postanesthesia care unit (PACU) is at capacity, completed cases need to be held in the operating room (OR), causing a "PACU delay." Statistical methods based on historical data can optimize PACU staffing to achieve the least possible labor cost at a given service level. A decision support process to alert PACU charge nurses that the PACU is at or near maximum census might be effective in lessening the incidence of delays and reducing over-utilized OR time, but only if alerts are timely (i.e., neither too late nor too early to act upon) and the PACU slot can be cleared quickly. We evaluated the maximum potential benefit of such a system, using assumptions deliberately biased toward showing utility. We extracted 3 years of electronic PACU data from a tertiary care medical center. At this hospital, PACU admissions were limited by neither inadequate PACU staffing nor insufficient PACU beds. We developed a model decision support system that simulated alerts to the PACU charge nurse. PACU census levels were reconstructed from the data at a 1-minute level of resolution and used to evaluate if subsequent delays would have been prevented by such alerts. The model assumed there was always a patient ready for discharge and an available hospital bed. The time from each alert until the maximum census was exceeded ("alert lead time") was determined. Alerts were judged to have utility if the alert lead time fell between various intervals from 15 or 30 minutes to 60, 75, or 90 minutes after triggering. In addition, utility for reducing over-utilized OR time was assessed using the model by determining if 2 patients arrived from 5 to 15 minutes of each other when the PACU census was at 1 patient less than the maximum census. At most, 23% of alerts arrived 30 to 60 minutes prior to the admission that resulted in the PACU exceeding the specified maximum capacity. When the notification window was extended to 15 to 90 minutes, the maximum utility was <50%. 
At most, 45% of alerts potentially would have resulted in reassigning the last available PACU slot to 1 OR versus another within 15 minutes of the original assignment. Despite multiple biases that favored effectiveness, the maximum potential benefit of a decision support system to mitigate PACU delays on the day of surgery was below the 70% minimum threshold for utility of automated decision support messages, previously established via meta-analysis. Neither reduction in PACU delays nor reassignment of promised PACU slots based on reducing over-utilized OR time was realized sufficiently to warrant further development of the system. Based on these results, the only evidence-based method of reducing PACU delays is to adjust PACU staffing and staff scheduling using computational algorithms to match the historical workload (e.g., as developed in 2001).

  6. Evaluation of the biophysical limitations on photosynthesis of four varietals of Brassica rapa

    NASA Astrophysics Data System (ADS)

    Pleban, J. R.; Mackay, D. S.; Aston, T.; Ewers, B.; Weinig, C.

    2014-12-01

    Evaluating the performance of agricultural varietals can support the identification of genotypes that will increase yield and can inform management practices. The biophysical limitations on photosynthesis are among the key factors that necessitate evaluation. This study evaluated how four biophysical limitations on photosynthesis, stomatal response to vapor pressure deficit, maximum carboxylation rate by Rubisco (Ac), rate of photosynthetic electron transport (Aj), and triose phosphate use (At), vary between four Brassica rapa genotypes. Leaf gas exchange data were used in an ecophysiological process model to conduct this evaluation. The Terrestrial Regional Ecosystem Exchange Simulator (TREES) integrates the carbon uptake and utilization rate-limiting factors for plant growth. A Bayesian framework integrated in TREES used net A as the target to estimate the four limiting factors for each genotype. As a first step, the Bayesian framework was used for outlier detection, with data points outside the 95% confidence interval of the model estimate eliminated. Next, parameter estimation facilitated the evaluation of how the limiting factors on A differ between genotypes. Parameters evaluated included maximum carboxylation rate (Vcmax), quantum yield (ϕJ), the ratio between Vcmax and electron transport rate (J), and triose phosphate utilization (TPU). Finally, as triose phosphate utilization has been shown to play no major role in limiting A in many plants, the inclusion of At in the models was evaluated using the deviance information criterion (DIC). The outlier detection resulted in a narrowing of the estimated parameter distributions, allowing for greater differentiation of genotypes. Results show genotypes vary in how these limitations shape assimilation. The range in Vcmax, a key parameter in Ac, was 203.2-223.9 umol m-2 s-1, while the range in ϕJ, a key parameter in Aj, was 0.463-0.497.
The added complexity of the TPU limitation did not improve model performance in the genotypes assessed based on DIC. By identifying how varietals differ in their biophysical limitations on photosynthesis genotype selection can be informed for agricultural goals. Further work aims at applying this approach to a fifth limiting factor on photosynthesis, mesophyll conductance.

  7. 7 CFR 1778.11 - Maximum grants.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 12 2010-01-01 2010-01-01 false Maximum grants. 1778.11 Section 1778.11 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE (CONTINUED) EMERGENCY AND IMMINENT COMMUNITY WATER ASSISTANCE GRANTS § 1778.11 Maximum grants. (a) Grants not...

  8. lakemorpho: Calculating lake morphometry metrics in R.

    PubMed

    Hollister, Jeffrey; Stachelek, Joseph

    2017-01-01

    Metrics describing the shape and size of lakes, known as lake morphometry metrics, are important for any limnological study. In cases where a lake has long been the subject of study, these data are often already collected and openly available. Many other lakes have these data collected, but access is challenging as they are often stored on individual computers (or worse, in filing cabinets) and are available only to the primary investigators. The vast majority of lakes fall into a third category in which the data are not available. This makes broad-scale modelling of lake ecology a challenge, as some of the key information about in-lake processes is unavailable. While this valuable in situ information may be difficult to obtain, several national datasets exist that may be used to model and estimate lake morphometry. In particular, digital elevation models and hydrography have been shown to be predictive of several lake morphometry metrics. The R package lakemorpho has been developed to utilize these data and estimate the following morphometry metrics: surface area, shoreline length, major axis length, minor axis length, major and minor axis length ratio, shoreline development, maximum depth, mean depth, volume, maximum lake length, mean lake width, maximum lake width, and fetch. In this software tool article we describe the motivation behind developing lakemorpho, discuss the implementation in R, and describe the use of lakemorpho with an example of a typical use case.
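    lakemorpho itself is an R package; as a language-neutral illustration, one of the listed metrics, shoreline development, follows directly from surface area and shoreline length (this is the standard limnological definition, not the package's code):

    ```python
    import math

    def shoreline_development(surface_area, shoreline_length):
        """Shoreline development index: the ratio of the observed shoreline
        length to the circumference of a circle with the same surface area.
        A value of 1 indicates a perfectly circular lake; larger values
        indicate a more convoluted shoreline (units must be consistent,
        e.g. m and m^2)."""
        return shoreline_length / (2.0 * math.sqrt(math.pi * surface_area))
    ```

    Because both inputs can be derived from national hydrography datasets, this is one of the metrics that remains computable even for lakes with no in situ data.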

  9. Real-Time Radar-Based Tracking and State Estimation of Multiple Non-Conformant Aircraft

    NASA Technical Reports Server (NTRS)

    Cook, Brandon; Arnett, Timothy; Macmann, Owen; Kumar, Manish

    2017-01-01

    In this study, a novel solution for automated tracking of multiple unknown aircraft is proposed. Many current methods use transponders to self-report state information and augment track identification. While conformant aircraft typically report transponder information to alert surrounding aircraft of its state, vehicles may exist in the airspace that are non-compliant and need to be accurately tracked using alternative methods. In this study, a multi-agent tracking solution is presented that solely utilizes primary surveillance radar data to estimate aircraft state information. Main research challenges include state estimation, track management, data association, and establishing persistent track validity. In an effort to realize these challenges, techniques such as Maximum a Posteriori estimation, Kalman filtering, degree of membership data association, and Nearest Neighbor Spanning Tree clustering are implemented for this application.
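    A minimal stand-in for the association stage might look as follows (greedy nearest-neighbour pairing on predicted track positions; the study itself uses degree-of-membership association and Nearest Neighbor Spanning Tree clustering):

    ```python
    import math

    def nearest_neighbor_associate(tracks, detections):
        """Greedy nearest-neighbour data association: pair each track's
        predicted position with the closest not-yet-used radar detection.
        Tracks and detections are (x, y) tuples; returns a list of
        (track_index, detection_index) pairs."""
        pairs, used = [], set()
        for ti, t in enumerate(tracks):
            best, best_d = None, float("inf")
            for di, d in enumerate(detections):
                if di not in used and math.dist(t, d) < best_d:
                    best, best_d = di, math.dist(t, d)
            if best is not None:
                used.add(best)
                pairs.append((ti, best))
        return pairs
    ```

    In a full tracker the associated detections would then feed per-track Kalman filter updates, with unassociated detections spawning candidate tracks.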

  10. A new terminal guidance sensor system for asteroid intercept or rendezvous missions

    NASA Astrophysics Data System (ADS)

    Lyzhoft, Joshua; Basart, John; Wie, Bong

    2016-02-01

    This paper presents the initial conceptual study results of a new terminal guidance sensor system for asteroid intercept or rendezvous missions, which explores the use of visual, infrared, and radar devices. As was demonstrated by NASA's Deep Impact mission, visual cameras can be effectively utilized for hypervelocity intercept terminal guidance for a 5 kilometer target. Other systems, such as Raytheon's EKV (Exoatmospheric Kill Vehicle), employ a different scheme that utilizes infrared target information to intercept ballistic missiles. Another example that uses infrared information is the NEOWISE telescope, which is used for asteroid detection and tracking. This paper describes the signal-to-noise ratio estimation problem for infrared sensors, minimum and maximum range of detection, and computational validation using GPU-accelerated simulations. Small targets (50-100 m in diameter) are considered, and scaled polyhedron models of known objects are utilized, such as Comet 67P/Churyumov-Gerasimenko (target of the Rosetta mission), asteroid 101955 Bennu (target of the OSIRIS-REx mission), and asteroid 433 Eros. A parallelized ray tracing algorithm to simulate realistic surface-to-surface shadowing of a given celestial body is developed. By using the simulated models and parameters given from the formulation of the different sensors, impact mission scenarios are used to verify the feasibility of intercepting a small target.

  11. Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks.

    PubMed

    Wang, Zhaowei; Zeng, Peng; Zhou, Mingtuo; Li, Dong; Wang, Jintao

    2017-01-13

    Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs' demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays.
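    The intra-cluster maximum-consensus step can be sketched as follows (idealized: symmetric links and zero communication delay, which is exactly the assumption the Revised-CMTS extension relaxes for bounded delays):

    ```python
    def max_consensus_sync(clocks, neighbors, rounds=10):
        """Maximum-consensus synchronization sketch: in each round every node
        sets its logical clock to the maximum of its own clock and the clocks
        it hears from its neighbours, so a connected network converges to the
        network-wide maximum. `neighbors[i]` lists the indices node i can
        hear; all updates in a round use the previous round's values."""
        clocks = list(clocks)
        for _ in range(rounds):
            clocks = [max([clocks[i]] + [clocks[j] for j in neighbors[i]])
                      for i in range(len(clocks))]
        return clocks
    ```

    In CMTS the same idea runs within each cluster, after which overlapping nodes carry the time messages between adjacent clusters for inter-cluster synchronization.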

  12. Cluster-Based Maximum Consensus Time Synchronization for Industrial Wireless Sensor Networks †

    PubMed Central

    Wang, Zhaowei; Zeng, Peng; Zhou, Mingtuo; Li, Dong; Wang, Jintao

    2017-01-01

    Time synchronization is one of the key technologies in Industrial Wireless Sensor Networks (IWSNs), and clustering is widely used in WSNs for data fusion and information collection to reduce redundant data and communication overhead. Considering IWSNs’ demand for low energy consumption, fast convergence, and robustness, this paper presents a novel Cluster-based Maximum consensus Time Synchronization (CMTS) method. It consists of two parts: intra-cluster time synchronization and inter-cluster time synchronization. Based on the theory of distributed consensus, the proposed method utilizes the maximum consensus approach to realize the intra-cluster time synchronization, and adjacent clusters exchange the time messages via overlapping nodes to synchronize with each other. A Revised-CMTS is further proposed to counteract the impact of bounded communication delays between two connected nodes, because the traditional stochastic models of the communication delays would distort in a dynamic environment. The simulation results show that our method reduces the communication overhead and improves the convergence rate in comparison to existing works, as well as adapting to the uncertain bounded communication delays. PMID:28098750

  13. Georgia fishery study: implications for dose calculations. Revision 1

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Turcotte, M.D.S.

    Fish consumption will contribute a major portion of the estimated individual and population doses from L-Reactor liquid releases and Cs-137 remobilization in Steel Creek. It is therefore important that the values for fish consumption used in dose calculations be as realistic as possible. Since publication of the L-Reactor Environmental Information Document (EID), data have become available on sport fishing in the Savannah River. These data provide SRP with site-specific sport fish harvest and consumption values for use in dose calculations. The Georgia fishery data support the total population fish consumption and calculated dose reported in the EID. The data indicate, however, that both the EID average and maximum individual fish consumption have been underestimated, although each to a different degree. The average fish consumption value used in the EID is approximately 3% below the lower limit of the fish consumption range calculated using the Georgia data. Maximum fish consumption in the EID has been underestimated by approximately 60%, and doses to the maximum individual should also be recalculated. Future dose calculations should utilize an average adult fish consumption value of 11.3 kg/yr, and a maximum adult fish consumption value of 34 kg/yr. Consumption values for the teen and child age groups should be increased proportionally: (1) teen average = 8.5; maximum = 25.9 kg/yr; and (2) child average = 3.6; maximum = 11.2 kg/yr. 8 refs.

  14. Slider--maximum use of probability information for alignment of short sequence reads and SNP detection.

    PubMed

    Malhis, Nawar; Butterfield, Yaron S N; Ester, Martin; Jones, Steven J M

    2009-01-01

    A plethora of alignment tools have been created that are designed to best fit different types of alignment conditions. While some of these are made for aligning Illumina Sequence Analyzer reads, none of them fully utilizes its probability (prb) output. In this article, we introduce a new alignment approach (Slider) that reduces the alignment problem space by utilizing each read base's probabilities given in the prb files. Compared with other aligners, Slider has higher alignment accuracy and efficiency. In addition, given that Slider matches bases with probabilities other than the most probable, it significantly reduces the percentage of base mismatches. The result is that its SNP predictions are more accurate than other SNP prediction approaches used today that start from the most probable sequence, including those using base quality.
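    The core idea, scoring against the full per-base probability matrix rather than only the most probable call, can be sketched as follows (a toy scoring function for illustration, not Slider's algorithm):

    ```python
    import math

    def prb_alignment_score(prb, reference):
        """Score a candidate alignment position by summing, per read cycle,
        the log of the probability the sequencer assigned to the reference
        base at that cycle. `prb` is a list of per-cycle probability rows
        ordered [A, C, G, T]; a higher (less negative) score means the full
        probability profile supports this placement, even where the
        most-probable base call disagrees with the reference."""
        index = {"A": 0, "C": 1, "G": 2, "T": 3}
        return sum(math.log(prb[i][index[base]])
                   for i, base in enumerate(reference))
    ```

    Using all four probabilities per cycle is what lets such an approach tolerate plausible mismatches that a most-probable-sequence aligner would penalize outright.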

  15. seawaveQ: an R package providing a model and utilities for analyzing trends in chemical concentrations in streams with a seasonal wave (seawave) and adjustment for streamflow (Q) and other ancillary variables

    USGS Publications Warehouse

    Ryberg, Karen R.; Vecchia, Aldo V.

    2013-01-01

    The seawaveQ R package fits a parametric regression model (seawaveQ) to pesticide concentration data from streamwater samples to assess variability and trends. The model incorporates the strong seasonality and high degree of censoring common in pesticide data and users can incorporate numerous ancillary variables, such as streamflow anomalies. The model is fitted to pesticide data using maximum likelihood methods for censored data and is robust in terms of pesticide, stream location, and degree of censoring of the concentration data. This R package standardizes this methodology for trend analysis, documents the code, and provides help and tutorial information, as well as providing additional utility functions for plotting pesticide and other chemical concentration data.

  16. Estimation of descriptive statistics for multiply censored water quality data

    USGS Publications Warehouse

    Helsel, Dennis R.; Cohn, Timothy A.

    1988-01-01

This paper extends the work of Gilliom and Helsel (1986) on procedures for estimating descriptive statistics of water quality data that contain “less than” observations. Previously, procedures were evaluated when only one detection limit was present. Here we investigate the performance of estimators for data that have multiple detection limits. Probability plotting and maximum likelihood methods perform substantially better than the simple substitution procedures now commonly in use. Therefore, simple substitution procedures (e.g., substitution of the detection limit) should be avoided. Probability plotting methods are more robust than maximum likelihood methods to misspecification of the parent distribution, and their use should be encouraged in the typical situation where the parent distribution is unknown. When utilized correctly, “less than” values frequently contain nearly as much information for estimating population moments and quantiles as would the same observations had the detection limit been below them.
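    A minimal sketch of the probability-plotting (regression-on-order-statistics) idea for singly left-censored data, assuming a lognormal parent distribution; the `ros_lognormal` helper and the example values are hypothetical, and real multiply-censored data require the fuller procedures the paper evaluates.

```python
from statistics import NormalDist, fmean
import math

def ros_lognormal(detected, n_censored):
    """Regression-on-order-statistics sketch for singly left-censored,
    lognormal-looking data: the n_censored nondetects occupy the lowest
    ranks, and a line fitted to log(detected) versus normal quantiles
    gives (mu, sigma) of the log-concentration distribution."""
    nd = NormalDist()
    detected = sorted(detected)
    n = n_censored + len(detected)
    # Blom-type plotting positions for the uncensored (highest) ranks
    ranks = range(n_censored + 1, n + 1)
    x = [nd.inv_cdf((r - 0.375) / (n + 0.25)) for r in ranks]
    y = [math.log(v) for v in detected]
    xb, yb = fmean(x), fmean(y)
    num = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, y))
    den = sum((xi - xb) ** 2 for xi in x)
    sigma = num / den
    mu = yb - sigma * xb
    return mu, sigma

mu, sigma = ros_lognormal([0.6, 0.9, 1.4, 2.2, 3.5, 5.1], n_censored=4)
# Lognormal mean estimate uses exp(mu + sigma^2 / 2)
print(round(math.exp(mu + sigma ** 2 / 2), 2))
```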

  17. Taking Halo-Independent Dark Matter Methods Out of the Bin

    DOE PAGES

    Fox, Patrick J.; Kahn, Yonatan; McCullough, Matthew

    2014-10-30

We develop a new halo-independent strategy for analyzing emerging DM hints, utilizing the method of extended maximum likelihood. This approach does not require the binning of events, making it uniquely suited to the analysis of emerging DM direct detection hints. It determines a preferred envelope, at a given confidence level, for the DM velocity integral which best fits the data using all available information and can be used even in the case of a single anomalous scattering event. All of the halo-independent information from a direct detection result may then be presented in a single plot, allowing simple comparisons between multiple experiments. This results in the halo-independent analogue of the usual mass and cross-section plots found in typical direct detection analyses, where limit curves may be compared with best-fit regions in halo-space. The method is straightforward to implement, using already-established techniques, and its utility is demonstrated through the first unbinned halo-independent comparison of the three anomalous events observed in the CDMS-Si detector with recent limits from the LUX experiment.

  18. An Upper Bound on Orbital Debris Collision Probability When Only One Object has Position Uncertainty Information

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

Upper bounds on high speed satellite collision probability, P (sub c), have been investigated. Previous methods assume an individual position error covariance matrix is available for each object, with the two matrices combined into a single, relative position error covariance matrix. Components of the combined error covariance are then varied to obtain a maximum P (sub c). If error covariance information for only one of the two objects is available, either some default shape has been used or nothing could be done. An alternative is presented that uses the known covariance information along with a critical value of the missing covariance to obtain an approximate but useful P (sub c) upper bound. There are various avenues along which an upper bound on the high speed satellite collision probability has been pursued. Typically, for the collision plane representation of the high speed collision probability problem, the predicted miss position in the collision plane is assumed fixed. Then the shape (aspect ratio of ellipse), the size (scaling of standard deviations), or the orientation (rotation of ellipse principal axes) of the combined position error ellipse is varied to obtain a maximum P (sub c). Regardless of the exact details of the approach, previously presented methods all assume that an individual position error covariance matrix is available for each object and that the two are combined into a single, relative position error covariance matrix. This combined position error covariance matrix is then modified according to the chosen scheme to arrive at a maximum P (sub c). But what if error covariance information for one of the two objects is not available? In that case, the analyst has commonly defaulted to the situation in which only the relative miss position and velocity are known, without any corresponding state error covariance information.
The various usual methods of finding a maximum P (sub c) are then of no use, because the analyst defaults to no knowledge of the combined, relative position error covariance matrix. It is reasonable to think that, given an assumption of no covariance information, an analyst might still attempt to determine the error covariance matrix that results in an upper bound on the P (sub c). Without some guidance on limits to the shape, size, and orientation of the unknown covariance matrix, the limiting case is a degenerate ellipse lying along the relative miss vector in the collision plane. Unless the miss position is exceptionally large or the at-risk object is exceptionally small, this method results in a maximum P (sub c) too large to be of practical use. For example, assume that the miss distance is equal to the current ISS alert volume along-track (+ or -) distance of 25 kilometers and that the at-risk area has a 70 meter radius. The maximum (degenerate ellipse) P (sub c) is about 0.00136. At 40 kilometers, the maximum P (sub c) would be 0.00085, which is still almost an order of magnitude larger than the ISS maneuver threshold of 0.0001. In fact, a miss distance of almost 340 kilometers is necessary to reduce the maximum P (sub c) associated with this degenerate ellipse to the ISS maneuver threshold value. Such a result is frequently of no practical value to the analyst. Some improvement may be made by realizing that while the position error covariance matrix of one of the objects (usually the debris object) may not be known, the position error covariance matrix of the other object (usually the asset) is almost always available. Making use of the position error covariance information for the one object provides an improvement in finding a maximum P (sub c) which, in some cases, may offer real utility. The equations to be used are presented and their use discussed.
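    The degenerate-ellipse figures quoted above can be reproduced with a short sketch: for a 1-D Gaussian error of standard deviation sigma along the miss vector, P (sub c) is approximately (2a / (sigma sqrt(2 pi))) exp(-d^2 / (2 sigma^2)) for an at-risk radius a much smaller than the miss distance d, and this is maximized at sigma = d. The function name is hypothetical, and the formula is a standard 1-D Gaussian approximation rather than the paper's exact derivation.

```python
import math

def degenerate_ellipse_pc_max(miss_distance_m, at_risk_radius_m):
    """Upper-bound collision probability for a degenerate (1-D) error
    ellipse along the miss vector: the 1-D Gaussian density at the miss
    distance, integrated over the at-risk diameter, is maximized at
    sigma equal to the miss distance."""
    d, a = miss_distance_m, at_risk_radius_m
    # P_max = (2a / (d * sqrt(2*pi))) * exp(-1/2), attained at sigma = d
    return (2.0 * a / (d * math.sqrt(2.0 * math.pi))) * math.exp(-0.5)

print(round(degenerate_ellipse_pc_max(25_000.0, 70.0), 5))  # 0.00136
print(round(degenerate_ellipse_pc_max(40_000.0, 70.0), 5))  # 0.00085
```

    Both values match the abstract's examples, confirming the reading of the bound.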

  19. Stacked multilayers of alternating reduced graphene oxide and carbon nanotubes for planar supercapacitors

    NASA Astrophysics Data System (ADS)

    Moon, Geon Dae; Joo, Ji Bong; Yin, Yadong

    2013-11-01

A simple layer-by-layer approach has been developed for constructing 2D planar supercapacitors of multi-stacked reduced graphene oxide and carbon nanotubes. This sandwiched 2D architecture enables the full utilization of the maximum active surface area of rGO nanosheets by using a CNT layer as a porous physical spacer to enhance the permeation of a gel electrolyte inside the structure and reduce the agglomeration of rGO nanosheets along the vertical direction. As a result, the stacked multilayers of rGO and CNTs are capable of offering higher output voltage and current production. Electronic supplementary information (ESI) available: Experimental details, SEM and TEM images and additional electrochemical data. See DOI: 10.1039/c3nr04339h

  20. Understanding the Benefits and Limitations of Increasing Maximum Rotor Tip Speed for Utility-Scale Wind Turbines

    NASA Astrophysics Data System (ADS)

    Ning, A.; Dykes, K.

    2014-06-01

    For utility-scale wind turbines, the maximum rotor rotation speed is generally constrained by noise considerations. Innovations in acoustics and/or siting in remote locations may enable future wind turbine designs to operate with higher tip speeds. Wind turbines designed to take advantage of higher tip speeds are expected to be able to capture more energy and utilize lighter drivetrains because of their decreased maximum torque loads. However, the magnitude of the potential cost savings is unclear, and the potential trade-offs with rotor and tower sizing are not well understood. A multidisciplinary, system-level framework was developed to facilitate wind turbine and wind plant analysis and optimization. The rotors, nacelles, and towers of wind turbines are optimized for minimum cost of energy subject to a large number of structural, manufacturing, and transportation constraints. These optimization studies suggest that allowing for higher maximum tip speeds could result in a decrease in the cost of energy of up to 5% for land-based sites and 2% for offshore sites when using current technology. Almost all of the cost savings are attributed to the decrease in gearbox mass as a consequence of the reduced maximum rotor torque. Although there is some increased energy capture, it is very minimal (less than 0.5%). Extreme increases in tip speed are unnecessary; benefits for maximum tip speeds greater than 100-110 m/s are small to nonexistent.

  1. Utilizing Maximum Power Point Trackers in Parallel to Maximize the Power Output of a Solar (Photovoltaic) Array

    DTIC Science & Technology

    2012-12-01

photovoltaic (PV) system to use a maximum power point tracker (MPPT) to increase... photovoltaic (PV) system to use a maximum power point tracker (MPPT) to increase the power output of the solar array. Currently, most military... MPPT) is an optimizing circuit that is used in conjunction with photovoltaic (PV) arrays to achieve the maximum delivery of power from the array

  2. Optimizing signal output: effects of viscoelasticity and difference frequency on vibroacoustic radiation of tissue-mimicking phantoms

    NASA Astrophysics Data System (ADS)

    Namiri, Nikan K.; Maccabi, Ashkan; Bajwa, Neha; Badran, Karam W.; Taylor, Zachary D.; St. John, Maie A.; Grundfest, Warren S.; Saddik, George N.

    2018-02-01

    Vibroacoustography (VA) is an imaging technology that utilizes the acoustic response of tissues to a localized, low frequency radiation force to generate a spatially resolved, high contrast image. Previous studies have demonstrated the utility of VA for tissue identification and margin delineation in cancer tissues. However, the relationship between specimen viscoelasticity and vibroacoustic emission remains to be fully quantified. This work utilizes the effects of variable acoustic wave profiles on unique tissue-mimicking phantoms (TMPs) to maximize VA signal power according to tissue mechanical properties, particularly elasticity. A micro-indentation method was utilized to provide measurements of the elastic modulus for each biological replica. An inverse relationship was found between elastic modulus (E) and VA signal amplitude among homogeneous TMPs. Additionally, the difference frequency (Δf ) required to reach maximum VA signal correlated with specimen elastic modulus. Peak signal diminished with increasing Δf among the polyvinyl alcohol specimen, suggesting an inefficient vibroacoustic response by the specimen beyond a threshold of resonant Δf. Comparison of these measurements may provide additional information to improve tissue modeling, system characterization, as well as insights into the unique tissue composition of tumors in head and neck cancer patients.

  3. 47 CFR 90.693 - Grandfathering provisions for incumbent licensees.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... shall be calculated using the maximum ERP and the actual height of the antenna above average terrain... using the maximum ERP and the actual HAAT along each radial. Incumbent licensees seeking to utilize an...

  4. 47 CFR 90.693 - Grandfathering provisions for incumbent licensees.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... shall be calculated using the maximum ERP and the actual height of the antenna above average terrain... using the maximum ERP and the actual HAAT along each radial. Incumbent licensees seeking to utilize an...

  5. 14 CFR 23.1583 - Operating limitations.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... multiengine jets 6,000 pounds or less maximum weight in the normal, utility, and acrobatic category... climb requirements of § 23.63(c)(2). (4) For normal, utility, and acrobatic category multiengine jets... equal to the available runway length. (5) For normal, utility, and acrobatic category multiengine jets...

  6. 14 CFR 23.1583 - Operating limitations.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... multiengine jets 6,000 pounds or less maximum weight in the normal, utility, and acrobatic category... climb requirements of § 23.63(c)(2). (4) For normal, utility, and acrobatic category multiengine jets... equal to the available runway length. (5) For normal, utility, and acrobatic category multiengine jets...

  7. The Price of Uncertainty in Security Games

    NASA Astrophysics Data System (ADS)

    Grossklags, Jens; Johnson, Benjamin; Christin, Nicolas

In the realm of information security, lack of information about other users' incentives in a network can lead to inefficient security choices and reductions in individuals' payoffs. We propose, contrast and compare three metrics for measuring the price of uncertainty due to the departure from the payoff-optimal security outcomes under complete information. Per the analogy with other efficiency metrics, such as the price of anarchy, we define the price of uncertainty as the maximum discrepancy in expected payoff in a complete information environment versus the payoff in an incomplete information environment. We consider difference, payoff-ratio, and cost-ratio metrics as canonical nontrivial measurements of the price of uncertainty. We conduct an algebraic, numerical, and graphical analysis of these metrics applied to different well-studied security scenarios proposed in prior work (i.e., best shot, weakest-link, and total effort). In these scenarios, we study how a fully rational expert agent could utilize the metrics to decide whether to gather information about the economic incentives of multiple nearsighted and naïve agents. We find substantial differences between the various metrics and evaluate the appropriateness for security choices in networked systems.
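    A minimal sketch of the three candidate metrics, under the simplifying assumption that each reduces to a scalar comparison of complete- versus incomplete-information outcomes; the paper's exact definitions involve maximization over game parameters, and the helper name is hypothetical.

```python
def price_of_uncertainty(payoff_complete, payoff_incomplete,
                         cost_complete=None, cost_incomplete=None):
    """Three candidate metrics for the price of uncertainty: the
    discrepancy between outcomes under complete vs incomplete
    information, expressed as a difference, a payoff ratio, and
    (when costs are supplied) a cost ratio."""
    metrics = {
        "difference": payoff_complete - payoff_incomplete,
        "payoff_ratio": payoff_complete / payoff_incomplete,
    }
    if cost_complete is not None and cost_incomplete is not None:
        metrics["cost_ratio"] = cost_incomplete / cost_complete
    return metrics

# Illustrative numbers: complete information yields payoff 10 at cost 2;
# uncertainty drops the payoff to 8 and doubles the security cost.
print(price_of_uncertainty(10.0, 8.0, cost_complete=2.0, cost_incomplete=4.0))
```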

  8. Determining the Optimal Work Breakdown Structure for Defense Acquisition Contracts

    DTIC Science & Technology

    2016-03-24

programs. Public utility corresponds with the generally understood concept that having more money is desirable, and having less money is not desirable... From this perspective, program completion on budget provides maximum utility, while being over budget reduces utility as there is less money for other... tree. Utility theory tools were applied using three utility perspectives, and optimal WBSs were identified. Results demonstrated that reporting at WBS

  9. Effective Utilization of Commercial Wireless Networking Technology in Planetary Environments

    NASA Technical Reports Server (NTRS)

    Caulev, Michael (Technical Monitor); Phillip, DeLeon; Horan, Stephen; Borah, Deva; Lyman, Ray

    2005-01-01

The purpose of this research is to investigate the use of commercial, off-the-shelf wireless networking technology in planetary exploration applications involving rovers and sensor webs. The three objectives of this research project are to: 1) simulate the radio frequency environment of proposed landing sites on Mars using actual topographic data, 2) analyze the performance of current wireless networking standards in the simulated radio frequency environment, and 3) propose modifications to the standards for more efficient utilization. In this annual report, we present our results for the second year of research. During this year, the effort has focused on the second objective of analyzing the performance of the IEEE 802.11a and IEEE 802.11b wireless networking standards in the simulated radio frequency environment of Mars. The approach builds upon our previous results, which deterministically modeled the RF environment at selected sites on Mars using high-resolution topographical data. These results provide critical information regarding antenna coverage patterns, maximum link distances, effects of surface clutter, and multipath effects. Using these previous results, the physical layer of these wireless networking standards has now been simulated and analyzed in the Martian environment. We next plan to extend these results to the medium access control layer. Our results give us critical information regarding the performance (data rates, packet error rates, link distances, etc.) of IEEE 802.11a/b wireless networks. This information enables a critical examination of how these wireless networks may be utilized in future Mars missions and how they may possibly be modified for more optimal usage.

  10. Estimating the Rate of Occurrence of Renal Stones in Astronauts

    NASA Technical Reports Server (NTRS)

    Myers, J.; Goodenow, D.; Gokoglu, S.; Kassemi, M.

    2016-01-01

Changes in urine chemistry, during and post flight, potentially increase the risk of renal stones in astronauts. Although much is known about the effects of space flight on urine chemistry, no inflight incidence of renal stones in US astronauts exists and the question "How much does this risk change with space flight?" remains difficult to answer quantitatively. In this discussion, we tackle this question utilizing a combination of deterministic and probabilistic modeling that implements the physics behind free stone growth and agglomeration, speciation of urine chemistry and published observations of population renal stone incidences to estimate changes in the rate of renal stone presentation. The modeling process utilizes a Population Balance Equation based model developed in the companion IWS abstract by Kassemi et al. (2016) to evaluate the maximum growth and agglomeration potential from a specified set of urine chemistry values. Changes in renal stone occurrence rates are obtained from this model in a probabilistic simulation that interrogates the range of possible urine chemistries using Monte Carlo techniques. Subsequently, each randomly sampled urine chemistry undergoes speciation analysis using the well-established Joint Expert Speciation System (JESS) code to calculate critical values, such as ionic strength and relative supersaturation. The Kassemi model utilizes this information to predict the mean and maximum stone size. We close the assessment loop by using a transfer function that estimates the rate of stone formation by combining the relative supersaturation and both the mean and maximum free stone growth sizes. The transfer function is established by a simulation analysis that combines population stone formation rates and Poisson regression. Training this transfer function requires using the output of the aforementioned assessment steps with inputs from known non-stone-former and known stone-former urine chemistries.
Established in a Monte Carlo system, the entire renal stone analysis model produces a probability distribution of the stone formation rate and an expected uncertainty in the estimate. The utility of this analysis will be demonstrated by showing the change in renal stone occurrence predicted by this method using urine chemistry distributions published in Whitson et al. 2009. A comparison of the model predictions to previous assessments of renal stone risk will be used to illustrate initial validation of the model.
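    The assessment loop described above can be sketched as a generic Monte Carlo pipeline; the three stand-in functions below are toy placeholders for the JESS speciation step, the Kassemi growth model, and the Poisson-regression transfer function, not the actual models.

```python
import random

def stone_rate_mc(sample_chemistry, growth_model, transfer_fn,
                  n=10_000, seed=7):
    """Monte Carlo skeleton of the assessment loop: sample a urine
    chemistry, evaluate its stone growth potential, then map chemistry
    and growth to an occurrence rate; returns the rate distribution."""
    rng = random.Random(seed)
    rates = []
    for _ in range(n):
        chem = sample_chemistry(rng)
        max_size = growth_model(chem)
        rates.append(transfer_fn(chem, max_size))
    return rates

# Toy stand-ins for the speciation, growth, and transfer-function steps
sample = lambda rng: {"supersaturation": rng.lognormvariate(0.0, 0.4)}
growth = lambda c: 0.1 * c["supersaturation"]                 # illustrative
transfer = lambda c, s: 0.001 * c["supersaturation"] * (1 + s)

rates = stone_rate_mc(sample, growth, transfer)
print(sum(rates) / len(rates))  # mean occurrence-rate estimate
```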

  11. Estimated cost savings of increased use of intravenous tissue plasminogen activator for acute ischemic stroke in Canada.

    PubMed

    Yip, Todd R; Demaerschalk, Bart M

    2007-06-01

Intravenous tissue plasminogen activator (tPA) is an economically worthwhile but underused treatment option for acute ischemic stroke. We sought to identify the extent of tPA use in Canadian medical centers and the potential savings associated with increased use nationally and by province. We determined the nationwide annual incidence of ischemic stroke from the Canadian Institute of Health Information. The proportion of all ischemic stroke patients who received tPA was derived from published data. Economic analyses that report the expected annual cost savings of tPA were consulted. The analysis was conducted from the perspective of a universal health care system during 1 year. We estimated cost savings with incrementally (e.g., 2%, 4%, 6%, 8%, 10%, 15%, and 20%) increased use of tPA for acute ischemic stroke nationally and provincially. The current average national tPA utilization is 1.4%. For every increase of 2 percentage points in utilization, $757,204 (Canadian) could possibly be saved annually (95% CI maximum loss of $3,823,992 to a maximum savings of $2,201,252). With a 20% rate, >$7.5 million (Canadian) could be saved nationwide the first year. We estimate that even small increases in the proportion of all Canadian ischemic stroke patients receiving tPA could result in substantial realized savings for Canada's health care system.
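    One simple linear reading of the abstract's figures, sketched below, is an assumption for illustration and not the paper's full economic model (which carries a wide confidence interval); the function name is hypothetical.

```python
SAVINGS_PER_2PP = 757_204  # CAD saved per 2-percentage-point utilization gain

def projected_annual_savings(utilization_pct):
    """Linear extrapolation of annual savings (CAD) at a given national
    tPA utilization rate, using the $757,204-per-2-points figure."""
    return SAVINGS_PER_2PP * utilization_pct / 2.0

print(projected_annual_savings(20.0))  # 7572040.0, i.e. >$7.5 million
```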

  12. Off disk-center potential field calculations using vector magnetograms

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, P.; Gary, G. Allen

    1989-01-01

A potential field calculation for off disk-center vector magnetograms that uses all three components of the measured field is investigated. There is neither any need for interpolation of grid points between the image plane and the heliographic plane nor for an extension or a truncation to a heliographic rectangle. Hence, the method provides the maximum information content from the photospheric field as well as the most consistent potential field independent of the viewing angle. The introduction of polarimetric noise produces a less tolerant extrapolation procedure than using the line-of-sight extrapolation, but the resultant standard deviation is still small enough for the practical utility of this method.

  13. Detection of regional air pollution episodes utilizing satellite digital data in the visual range

    NASA Technical Reports Server (NTRS)

    Burke, H.-H. K.

    1982-01-01

    Digital analyses of satellite visible data for selected high-sulfate cases over the northeastern U.S., on July 21 and 22, 1978, are compared with ground-based measurements. Quantitative information on total aerosol loading derived from the satellite digitized data using an atmospheric radiative transfer model is found to agree with the ground measurements, and it is shown that the extent and transport of the haze pattern may be monitored from the satellite data over the period of maximum intensity for the episode. Attention is drawn to the potential benefits of satellite monitoring of pollution episodes demonstrated by the model.

  14. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    NASA Astrophysics Data System (ADS)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah@Rozita

    2014-06-01

Traditional portfolio optimization methods, such as Markowitz's mean-variance model and the semi-variance model, utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality, because maximum and minimum values in the data may strongly influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, the sectorial indices data in FTSE Bursa Malaysia is employed. The results show that stochastic optimization provides a more stable information ratio.
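    A minimal sketch contrasting a single "static" optimization with a resampling-based "stochastic" variant, for the two-asset minimum-variance case; the data and helper are illustrative assumptions, not the paper's FTSE Bursa Malaysia analysis.

```python
import random
import statistics as st

def min_variance_weight(r1, r2):
    """Weight of asset 1 in the two-asset minimum-variance portfolio:
    w1 = (var2 - cov) / (var1 + var2 - 2*cov)."""
    v1, v2 = st.variance(r1), st.variance(r2)
    m1, m2 = st.fmean(r1), st.fmean(r2)
    cov = sum((a - m1) * (b - m2) for a, b in zip(r1, r2)) / (len(r1) - 1)
    return (v2 - cov) / (v1 + v2 - 2 * cov)

rng = random.Random(3)
r1 = [rng.gauss(0.08, 0.20) for _ in range(60)]  # volatile asset
r2 = [rng.gauss(0.05, 0.10) for _ in range(60)]  # calmer asset

# Static: one optimization on the full history
w_static = min_variance_weight(r1, r2)

# Stochastic: resample the history many times and average the weights,
# so extreme observations influence the answer less
ws = []
for _ in range(500):
    idx = [rng.randrange(60) for _ in range(60)]
    ws.append(min_variance_weight([r1[i] for i in idx],
                                  [r2[i] for i in idx]))
w_resampled = st.fmean(ws)

print(round(w_static, 3), round(w_resampled, 3))
```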

  15. Integration of air traffic databases : a case study

    DOT National Transportation Integrated Search

    1995-03-01

    This report describes a case study to show the benefits from maximum utilization of existing air traffic databases. The study demonstrates the utility of integrating available data through developing and demonstrating a methodology addressing the iss...

  16. Markov chain Monte Carlo estimation of quantum states

    NASA Astrophysics Data System (ADS)

    Diguglielmo, James; Messenger, Chris; Fiurášek, Jaromír; Hage, Boris; Samblowski, Aiko; Schmidt, Tabea; Schnabel, Roman

    2009-03-01

We apply a Bayesian data analysis scheme known as Markov chain Monte Carlo to the tomographic reconstruction of quantum states. This method yields a vector, known as the Markov chain, which contains the full statistical information concerning all reconstruction parameters, including their statistical correlations, with no a priori assumptions as to the form of the distribution from which it has been obtained. From this vector we can derive, e.g., the marginal distributions and uncertainties of all model parameters, and also of other quantities such as the purity of the reconstructed state. We demonstrate the utility of this scheme by reconstructing the Wigner function of phase-diffused squeezed states. These states possess non-Gaussian statistics and therefore represent a nontrivial case of tomographic reconstruction. We compare our results to those obtained through pure maximum-likelihood and Fisher information approaches.
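    A minimal Metropolis sampler illustrates the central idea that the chain itself carries the full statistical information about the parameters; this toy Gaussian-mean example is an assumption for illustration, not the paper's quantum-state reconstruction.

```python
import math
import random

def metropolis(log_post, x0, n_steps=20_000, step=0.5, seed=1):
    """Minimal Metropolis sampler: returns a chain whose empirical
    distribution approximates the posterior, from which marginals,
    uncertainties, and derived quantities can all be computed."""
    rng = random.Random(seed)
    chain, x, lp = [], x0, log_post(x0)
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept/reject
            x, lp = xp, lpp
        chain.append(x)
    return chain

# Toy target: posterior of a mean mu given data ~ N(mu, 1), flat prior
data = [1.8, 2.1, 2.4, 1.9, 2.2]
log_post = lambda mu: -0.5 * sum((d - mu) ** 2 for d in data)

chain = metropolis(log_post, x0=0.0)
burned = chain[2_000:]          # discard burn-in
mean = sum(burned) / len(burned)
print(round(mean, 2))           # posterior-mean estimate, near 2.08
```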

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maze, Grace M.

STREAM II is the aqueous transport model of the Weather Information Display (WIND) emergency response system at Savannah River Site. It is used to calculate transport in the event of a chemical or radiological spill into the waterways on the Savannah River Site. Improvements were made to the code (STREAM II V7) to include flow from all site tributaries to the Savannah River total flow and utilize a 4 digit year input. The predicted downstream concentrations using V7 were generally on the same order of magnitude as V6 with slightly lower concentrations and quicker arrival times when all onsite stream flows are contributing to the Savannah River flow. The downstream arrival time at the Savannah River Water Plant ranges from no change to an increase of 8.77%, with minimum changes typically in March/April and maximum changes typically in October/November. The downstream concentrations are generally no more than 15% lower using V7 with the maximum percent change in January through April and minimum changes in June/July.

  18. 77 FR 12823 - Solicitation of Comments on a Proposed Change to the Disclosure Limitation Policy for Information...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-02

    ... policy for information reported on fuel ethanol production capacity, (both nameplate and maximum... fuel ethanol production capacity, (both nameplate and maximum sustainable capacity) on Form EIA-819 as... treat all information reported on fuel ethanol production capacity, (both nameplate and maximum...

  19. 7 CFR 1703.133 - Maximum and minimum amounts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...

  20. 7 CFR 1703.133 - Maximum and minimum amounts.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...

  1. 7 CFR 1703.133 - Maximum and minimum amounts.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...

  2. 7 CFR 1703.133 - Maximum and minimum amounts.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...

  3. 7 CFR 1703.133 - Maximum and minimum amounts.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Maximum and minimum amounts. 1703.133 Section 1703.133 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Combination Loan and Grant...

  4. Utilization of waste heat in trucks for increased fuel economy

    NASA Technical Reports Server (NTRS)

    Leising, C. J.; Purohit, G. P.; Degrey, S. P.; Finegold, J. G.

    1978-01-01

    The waste heat utilization concepts include preheating, regeneration, turbocharging, turbocompounding, and Rankine engine compounding. Predictions are based on fuel-air cycle analyses, computer simulation, and engine test data. All options are evaluated in terms of maximum theoretical improvements, but the Diesel and adiabatic Diesel are also compared on the basis of maximum expected improvement and expected improvement over a driving cycle. The study indicates that Diesels should be turbocharged and aftercooled to the maximum possible level. The results reveal that Diesel driving cycle performance can be increased by 20% through increased turbocharging, turbocompounding, and Rankine engine compounding. The Rankine engine compounding provides about three times as much improvement as turbocompounding but also costs about three times as much. Performance for either can be approximately doubled if applied to an adiabatic Diesel.

  5. Enhancement of maximum attainable ion energy in the radiation pressure acceleration regime using a guiding structure

    DOE PAGES

    Bulanov, S. S.; Esarey, E.; Schroeder, C. B.; ...

    2015-03-13

Radiation Pressure Acceleration is a highly efficient mechanism of laser driven ion acceleration, with the laser energy almost totally transferrable to the ions in the relativistic regime. There is a fundamental limit on the maximum attainable ion energy, which is determined by the group velocity of the laser. In the case of tightly focused laser pulses, which are utilized to get the highest intensity, another factor limiting the maximum ion energy comes into play: the transverse expansion of the target. Transverse expansion makes the target transparent for radiation, thus reducing the effectiveness of acceleration. Utilization of an external guiding structure for the accelerating laser pulse may provide a way of compensating for the group velocity and transverse expansion effects.

  6. 24 CFR 941.103 - Definitions.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... maximum project cost, as described in § 941.306: (1) Demolition of, or remediation of environmental... utility systems, and replacement of off-site underground utility systems, extensive rock and/or soil... preparation), administration, site acquisition, relocation, demolition of, and site remediation of...

  7. Acetate transport and utilization in the rat brain.

    PubMed

    Deelchand, Dinesh K; Shestov, Alexander A; Koski, Dee M; Uğurbil, Kâmil; Henry, Pierre-Gilles

    2009-05-01

    Acetate, a glial-specific substrate, is an attractive alternative to glucose for the study of neuronal-glial interactions. The present study investigates the kinetics of acetate uptake and utilization in the rat brain in vivo during infusion of [2-13C]acetate using NMR spectroscopy. When plasma acetate concentration was increased, the rate of brain acetate utilization (CMR(ace)) increased progressively and reached close to saturation for plasma acetate concentration > 2-3 mM, whereas brain acetate concentration continued to increase. The Michaelis-Menten constant for brain acetate utilization (K(M)(util) = 0.01 +/- 0.14 mM) was much smaller than for acetate transport through the blood-brain barrier (BBB) (K(M)(t) = 4.18 +/- 0.83 mM). The maximum transport capacity of acetate through the BBB (V(max)(t) = 0.96 +/- 0.18 micromol/g/min) was nearly twofold higher than the maximum rate of brain acetate utilization (V(max)(util) = 0.50 +/- 0.08 micromol/g/min). We conclude that, under our experimental conditions, brain acetate utilization is saturated when plasma acetate concentrations increase above 2-3 mM. At such high plasma acetate concentration, the rate-limiting step for glial acetate metabolism is not the BBB, but occurs after entry of acetate into the brain.
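
    The saturation behavior described above follows directly from Michaelis-Menten kinetics. As a sketch, the snippet below plugs the reported mean parameter estimates into the standard rate law (uncertainties and compartmental details are omitted, so this is illustrative only):

```python
# Michaelis-Menten sketch of acetate transport vs. utilization, using the
# mean parameter estimates reported in the abstract (uncertainties omitted).
def mm_rate(conc_mM, vmax, km):
    """Michaelis-Menten rate: V = Vmax * [S] / (Km + [S])."""
    return vmax * conc_mM / (km + conc_mM)

# Transport through the BBB (plasma acetate as substrate).
VMAX_T, KM_T = 0.96, 4.18      # micromol/g/min, mM
# Utilization in the brain (brain acetate as substrate).
VMAX_U, KM_U = 0.50, 0.01      # micromol/g/min, mM

for plasma in (0.5, 1.0, 2.0, 3.0, 5.0):
    print(f"plasma {plasma} mM -> transport {mm_rate(plasma, VMAX_T, KM_T):.3f}")

# With KM_util = 0.01 mM, utilization is already near Vmax for any brain
# acetate concentration above ~0.1 mM, i.e. effectively saturated:
print(round(mm_rate(0.1, VMAX_U, KM_U), 3))  # -> 0.455
```

    Because KM for utilization is two orders of magnitude below KM for transport, utilization saturates long before transport does, consistent with the conclusion that the BBB is not the rate-limiting step at high plasma concentrations.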

  8. Predicting tropical cyclone intensity using satellite measured equivalent blackbody temperatures of cloud tops. [regression analysis

    NASA Technical Reports Server (NTRS)

    Gentry, R. C.; Rodgers, E.; Steranka, J.; Shenk, W. E.

    1978-01-01

    A regression technique was developed to forecast 24-hour changes of the maximum winds for weak (maximum winds less than or equal to 65 kt) and strong (maximum winds greater than 65 kt) tropical cyclones by utilizing satellite-measured equivalent blackbody temperatures around the storm, alone and together with the changes in maximum winds during the preceding 24 hours and the current maximum winds. Independent testing of these regression equations shows that the mean errors made by the equations are lower than the errors in forecasts made by persistence techniques.
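
    The fitting step behind such a technique can be sketched with ordinary least squares. The temperatures and wind changes below are hypothetical, chosen only to show the mechanics (colder cloud tops, i.e. deeper convection, taken to precede intensification); the actual study used multiple predictors and real satellite data:

```python
# Illustrative simple linear regression (hypothetical data): regress the
# 24-hour change in maximum wind (kt) on the mean equivalent blackbody
# temperature of cloud tops (deg C) around the storm.
def ols_fit(x, y):
    """Closed-form simple linear regression: returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return my - slope * mx, slope

temps = [-70.0, -65.0, -60.0, -55.0, -50.0]   # hypothetical mean cloud-top temps
dwind = [20.0, 15.0, 8.0, 3.0, -2.0]          # hypothetical 24-h wind changes (kt)
b0, b1 = ols_fit(temps, dwind)
print(f"predicted 24-h change at -62 C: {b0 + b1 * -62:.1f} kt")
```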

  9. 7 CFR 1740.4 - Maximum amounts of grants.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Maximum amounts of grants. 1740.4 Section 1740.4 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE PUBLIC TELEVISION STATION DIGITAL TRANSITION GRANT PROGRAM Public Television Station Digital...

  10. 7 CFR 1740.4 - Maximum amounts of grants.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Maximum amounts of grants. 1740.4 Section 1740.4 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE PUBLIC TELEVISION STATION DIGITAL TRANSITION GRANT PROGRAM Public Television Station Digital...

  11. 7 CFR 1778.11 - Maximum grants.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... quantity of potable water, or an anticipated acute shortage or significant decline, cannot exceed $150,000... 7 Agriculture 12 2011-01-01 2011-01-01 false Maximum grants. 1778.11 Section 1778.11 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE...

  12. 7 CFR 1703.143 - Maximum and minimum amounts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...

  13. 7 CFR 1703.143 - Maximum and minimum amounts.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...

  14. 7 CFR 1703.143 - Maximum and minimum amounts.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...

  15. 7 CFR 1703.143 - Maximum and minimum amounts.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...

  16. 7 CFR 1703.143 - Maximum and minimum amounts.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Maximum and minimum amounts. 1703.143 Section 1703.143 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Loan Program § 1703.143...

  17. Descriptive Statistics and Cluster Analysis for Extreme Rainfall in Java Island

    NASA Astrophysics Data System (ADS)

    E Komalasari, K.; Pawitan, H.; Faqih, A.

    2017-03-01

    This study aims to describe the regional pattern of extreme rainfall based on maximum daily rainfall for the period 1983 to 2012 in Java Island. Descriptive statistics analysis was performed to obtain the centralization, variation and distribution of the maximum precipitation data. Mean and median are utilized to measure the central tendency of the data, while the interquartile range (IQR) and standard deviation are utilized to measure its variation. In addition, skewness and kurtosis are used to characterize the shape of the distribution of the rainfall data. Cluster analysis using squared Euclidean distance and Ward's method is applied to perform regional grouping. Results of this study show that the mean (average) of maximum daily rainfall in the Java region during the period 1983-2012 is around 80-181 mm, with medians between 75-160 mm and standard deviations between 17 and 82. Cluster analysis produces four clusters and shows that the western area of Java tends to have higher annual maxima of daily rainfall than the northern area, and shows more variability in the annual maximum values.
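
    The descriptive statistics named above can be sketched directly. The rainfall series here is hypothetical, the quartiles use a simple positional convention (conventions vary), and the Ward clustering step is omitted:

```python
# Sketch of the descriptive statistics used above, on a hypothetical series
# of annual maximum daily rainfall (mm) for one station.
import math

def describe(x):
    n = len(x)
    s = sorted(x)
    mean = sum(x) / n
    median = s[n // 2] if n % 2 else (s[n // 2 - 1] + s[n // 2]) / 2
    q1, q3 = s[n // 4], s[(3 * n) // 4]        # simple positional quartiles
    var = sum((v - mean) ** 2 for v in x) / n
    sd = math.sqrt(var)
    skew = sum((v - mean) ** 3 for v in x) / (n * sd ** 3)
    kurt = sum((v - mean) ** 4 for v in x) / (n * sd ** 4) - 3.0
    return {"mean": mean, "median": median, "IQR": q3 - q1,
            "std": sd, "skew": skew, "excess_kurtosis": kurt}

annual_maxima = [82, 95, 110, 74, 130, 160, 88, 101, 99, 145, 77, 120]
stats = describe(annual_maxima)
print(stats)
```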

  18. Improved Drain Current Saturation and Voltage Gain in Graphene-on-Silicon Field Effect Transistors.

    PubMed

    Song, Seung Min; Bong, Jae Hoon; Hwang, Wan Sik; Cho, Byung Jin

    2016-05-04

    Graphene devices for radio frequency (RF) applications are of great interest due to their excellent carrier mobility and saturation velocity. However, the insufficient current saturation in graphene field effect transistors (FETs) is a barrier preventing enhancements of the maximum oscillation frequency and voltage gain, both of which should be improved for RF transistors. Achieving a high output resistance is therefore a crucial step for graphene to be utilized in RF applications. In the present study, we report high output resistances and voltage gains in graphene-on-silicon (GoS) FETs. This is achieved by utilizing bare silicon as a supporting substrate without an insulating layer under the graphene. The GoSFETs exhibit a maximum output resistance of 2.5 MΩ∙μm, maximum intrinsic voltage gain of 28 dB, and maximum voltage gain of 9 dB. This method opens a new route to overcome the limitations of conventional graphene-on-insulator (GoI) FETs and subsequently brings graphene electronics closer to practical usage.
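
    The dB figures quoted above can be checked with the standard voltage-gain conversion, gain(dB) = 20·log10(A_v), where the intrinsic gain is A_v = g_m·r_out. The transconductance value below is hypothetical; only the conversion itself and the reported 2.5 MΩ·μm output resistance come from the abstract:

```python
# Voltage gain in dB: gain_dB = 20 * log10(linear gain).
import math

def gain_db(linear_gain):
    return 20.0 * math.log10(linear_gain)

# A 28 dB intrinsic gain corresponds to a linear gain of ~25:
print(round(10 ** (28 / 20.0), 1))

# Hypothetical g_m of 10 uS/um with the reported 2.5 MOhm*um output
# resistance gives A_v = 10e-6 * 2.5e6 = 25, i.e. ~28 dB:
print(round(gain_db(10e-6 * 2.5e6), 1))
```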

  19. Combat cueing

    NASA Astrophysics Data System (ADS)

    Kachejian, Kerry C.; Vujcic, Doug

    1998-08-01

    The combat cueing (CBT-Q) research effort will develop and demonstrate a portable tactical information system that will enhance the effectiveness of small unit military operations by providing real-time target cueing information to individual warfighters and teams. CBT-Q consists of a network of portable radio frequency (RF) 'modules' and is controlled by a body-worn 'user station' utilizing a head-mounted display. On the battlefield, CBT-Q modules will detect an enemy transmitter and instantly provide the warfighter with the emitter's location. During the 'fog of battle', CBT-Q would tell the warfighter, 'Look here, right now,' giving individuals visibility into the RF spectrum and resulting in faster target engagement times, increased survivability, and a reduced potential for fratricide. CBT-Q technology can support both mounted and dismounted tactical forces involved in land, sea and air warfighting operations. The CBT-Q system combines robust geolocation and signal sorting algorithms with hardware and software modularity to offer maximum utility to the warfighter. A single CBT-Q module can provide threat RF detection. Three networked CBT-Q modules can provide emitter positions using a time difference of arrival (TDOA) technique. The TDOA approach relies on timing and positioning data derived from the Global Positioning System. The information will be displayed on a variety of displays, including a flat-panel head-mounted display. The end result of the program will be the demonstration of the system with US Army Scouts in an operational environment.
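
    The TDOA idea can be illustrated with a toy example. The geometry below is hypothetical, the measurements are noise-free, and a coarse grid search stands in for the system's actual geolocation algorithms: each time difference constrains the emitter to a hyperbola, and three receivers pin down the intersection:

```python
# Toy TDOA localization: three receivers at known GPS-derived positions
# observe one emitter; time differences of arrival (relative to receiver 0)
# are inverted by a simple grid search. All numbers are hypothetical.
import math

C = 299792458.0  # speed of light, m/s
receivers = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0)]
true_emitter = (420.0, 730.0)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Measured TDOAs relative to receiver 0 (noise-free for clarity).
tdoa = [(dist(true_emitter, r) - dist(true_emitter, receivers[0])) / C
        for r in receivers]

def tdoa_residual(p):
    """Sum of squared mismatches between predicted and measured TDOAs."""
    return sum(((dist(p, r) - dist(p, receivers[0])) / C - t) ** 2
               for r, t in zip(receivers, tdoa))

# Coarse grid search over the area of interest (10 m resolution).
best = min(((x, y) for x in range(0, 1001, 10) for y in range(0, 1001, 10)),
           key=tdoa_residual)
print("estimated emitter position:", best)
```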

  20. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    USGS Publications Warehouse

    Cohn, T.A.; Lane, W.L.; Baier, W.G.

    1997-01-01

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.
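
    The iterative structure of EMA can be sketched in a greatly simplified setting. The sketch below assumes a normal distribution rather than the log-Pearson type III used in the paper, and hypothetical data: a fully observed systematic record plus a historical period in which only peaks above a perception threshold T were recorded. Each iteration fills in the expected first and second moments of the censored below-threshold years (via the truncated-normal formulas) and re-estimates the parameters by moments:

```python
# Simplified EMA-style iteration for a normal distribution (the paper uses
# log-Pearson type III; this is an illustrative analogue, not the method
# itself). All flood values are hypothetical.
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2)))

def ema_normal(systematic, historical_peaks, n_hist_years, threshold,
               iters=200):
    n_censored = n_hist_years - len(historical_peaks)
    observed = systematic + historical_peaks
    mu = sum(observed) / len(observed)                  # initial estimates
    sigma = (sum((x - mu) ** 2 for x in observed)
             / len(observed)) ** 0.5                    # from observed data
    n_total = len(observed) + n_censored
    for _ in range(iters):
        a = (threshold - mu) / sigma
        lam = norm_pdf(a) / norm_cdf(a)                 # inverse Mills ratio
        m1 = mu - sigma * lam                           # E[X | X < T]
        v = sigma * sigma * (1 - a * lam - lam * lam)   # Var[X | X < T]
        m2 = v + m1 * m1                                # E[X^2 | X < T]
        s1 = sum(observed) + n_censored * m1
        s2 = sum(x * x for x in observed) + n_censored * m2
        mu = s1 / n_total
        sigma = max(s2 / n_total - mu * mu, 1e-12) ** 0.5
    return mu, sigma

sys_record = [310.0, 455.0, 520.0, 280.0, 610.0, 390.0, 340.0, 470.0]
hist_peaks = [900.0, 840.0]      # only floods above T were recorded
mu, sigma = ema_normal(sys_record, hist_peaks, n_hist_years=50, threshold=800.0)
print(round(mu, 1), round(sigma, 1))
```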

  1. An algorithm for computing moments-based flood quantile estimates when historical flood information is available

    NASA Astrophysics Data System (ADS)

    Cohn, T. A.; Lane, W. L.; Baier, W. G.

    This paper presents the expected moments algorithm (EMA), a simple and efficient method for incorporating historical and paleoflood information into flood frequency studies. EMA can utilize three types of at-site flood information: systematic stream gage record; information about the magnitude of historical floods; and knowledge of the number of years in the historical period when no large flood occurred. EMA employs an iterative procedure to compute method-of-moments parameter estimates. Initial parameter estimates are calculated from systematic stream gage data. These moments are then updated by including the measured historical peaks and the expected moments, given the previously estimated parameters, of the below-threshold floods from the historical period. The updated moments result in new parameter estimates, and the last two steps are repeated until the algorithm converges. Monte Carlo simulations compare EMA, Bulletin 17B's [United States Water Resources Council, 1982] historically weighted moments adjustment, and maximum likelihood estimators when fitting the three parameters of the log-Pearson type III distribution. These simulations demonstrate that EMA is more efficient than the Bulletin 17B method, and that it is nearly as efficient as maximum likelihood estimation (MLE). The experiments also suggest that EMA has two advantages over MLE when dealing with the log-Pearson type III distribution: It appears that EMA estimates always exist and that they are unique, although neither result has been proven. EMA can be used with binomial or interval-censored data and with any distributional family amenable to method-of-moments estimation.

  2. A fresh look at the Last Glacial Maximum using Paleoclimate Data Assimilation

    NASA Astrophysics Data System (ADS)

    Malevich, S. B.; Tierney, J. E.; Hakim, G. J.; Tardif, R.

    2017-12-01

    Quantifying climate conditions during the Last Glacial Maximum (~21 ka) can help us to understand climate responses to forcing and climate states that are poorly represented in the instrumental record. Paleoclimate proxies may be used to estimate these climate conditions, but proxies are sparsely distributed and possess uncertainties from environmental and biogeochemical processes. Alternatively, climate model simulations provide a full-field view, but may predict unrealistic climate states or states not faithful to proxy records. Here, we use data assimilation - combining climate proxy records with a theoretical understanding from climate models - to produce field reconstructions of the LGM that leverage the information from both data and models. To date, data assimilation has mainly been used to produce reconstructions of climate fields through the last millennium. We expand this approach in order to produce climate fields for the Last Glacial Maximum using an ensemble Kalman filter assimilation. Ensemble samples were formed from output from multiple models including CCSM3, CESM2.1, and HadCM3. These model simulations are combined with marine sediment proxies for upper ocean temperature (TEX86, UK'37, Mg/Ca and δ18O of foraminifera), utilizing forward models based on a newly developed suite of Bayesian proxy system models. We also incorporate age model and radiocarbon reservoir uncertainty into our reconstructions using Bayesian age modeling software. The resulting fields show familiar patterns based on comparison with previous proxy-based reconstructions, but additionally reveal novel patterns of large-scale shifts in ocean-atmosphere dynamics, as the surface temperature data inform upon atmospheric circulation and precipitation patterns.
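
    The core update of an ensemble Kalman filter can be shown in a toy scalar case (all numbers hypothetical, nothing like the authors' actual configuration): a prior ensemble from model simulations is nudged toward one proxy observation, with the strength of the nudge set by the ratio of ensemble variance to observation error variance:

```python
# Toy stochastic EnKF update for one scalar field value: the prior ensemble
# comes from model output, and a single proxy observation with error
# variance R updates every member. All numbers are hypothetical.
import random

random.seed(0)
prior = [14.2, 15.1, 13.8, 14.9, 15.4, 14.0, 13.5, 15.8]  # prior SSTs (deg C)
y_obs, R = 13.0, 0.25                                     # proxy estimate, error var

n = len(prior)
xbar = sum(prior) / n
P = sum((x - xbar) ** 2 for x in prior) / (n - 1)  # ensemble (prior) variance
K = P / (P + R)                                    # Kalman gain (H = identity)

# Stochastic EnKF: perturb the observation independently for each member.
posterior = [x + K * (y_obs + random.gauss(0.0, R ** 0.5) - x) for x in prior]
post_mean = sum(posterior) / n
print(f"prior mean {xbar:.2f} -> posterior mean {post_mean:.2f}, gain {K:.2f}")
```

    Because the ensemble spread here exceeds the observation error, the gain is large and the posterior mean moves most of the way toward the proxy value.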

  3. Research of Ancient Architectures in Jin-Fen Area Based on GIS&BIM Technology

    NASA Astrophysics Data System (ADS)

    Jia, Jing; Zheng, Qiuhong; Gao, Huiying; Sun, Hai

    2017-05-01

    The number of well-preserved ancient buildings located in Shanxi Province, which holds the largest share of ancient architectures in China, is about 18,418; among these, 9,053 buildings have a wood-frame structural style. The value of applying BIM (Building Information Modeling) and GIS (Geographic Information System) is gradually being explored and demonstrated in the corresponding fields of ancient architecture's spatial distribution information management, routine maintenance, special conservation & restoration, and the evaluation and simulation of related disasters such as earthquakes. The research objects are ancient architectures in the Jin-Fen area, which were first investigated by Sicheng LIANG and recorded in his work "Chinese ancient architectures survey report". The research objects include those in Sicheng LIANG's investigation, with further adjustments made through the authors' on-site investigation and literature searching & collection. During this research, the spatial distribution Geodatabase of the research objects is established utilizing GIS. The BIM components library for ancient buildings is formed by combining on-site investigation data with precedent classic works such as "Yingzao Fashi", a treatise on architectural methods from the Song Dynasty, the "Yongle Encyclopedia", and "Gongcheng Zuofa Zeli", case collections of engineering practice by the Ministry of Construction of the Qing Dynasty. A building of Guangsheng temple in Hongtong county is selected as an example to elaborate the BIM model construction process based on the BIM components library for ancient buildings.
    Based on the foregoing results of spatial distribution data, attribute data of features, 3D graphic information and the parametric building information model, an information management system for ancient architectures in the Jin-Fen area, utilizing GIS & BIM technology, could be constructed to support further research on seismic disaster analysis and seismic performance simulation.

  4. Real-time validation of receiver state information in optical space-time block code systems.

    PubMed

    Alamia, John; Kurzweg, Timothy

    2014-06-15

    Free space optical interconnect (FSOI) systems are a promising solution to interconnect bottlenecks in high-speed systems. To overcome some sources of diminished FSOI performance caused by close proximity of multiple optical channels, multiple-input multiple-output (MIMO) systems implementing encoding schemes such as space-time block coding (STBC) have been developed. These schemes utilize information pertaining to the optical channel to reconstruct transmitted data. The STBC system is dependent on accurate channel state information (CSI) for optimal system performance. As a result of dynamic changes in optical channels, a system in operation will need to have updated CSI. Therefore, validation of the CSI during operation is a necessary tool to ensure FSOI systems operate efficiently. In this Letter, we demonstrate a method of validating CSI, in real time, through the use of moving averages of the maximum likelihood decoder data, and its capacity to predict the bit error rate (BER) of the system.
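
    The moving-average validation idea can be sketched as follows. The quality metric, window length and threshold below are hypothetical stand-ins (the Letter derives its validity statistic from the ML decoder and relates it to BER); the point is only the mechanism of flagging stale CSI from a running average:

```python
# Sketch of moving-average CSI validation (hypothetical metric/threshold):
# track a running average of a per-symbol decoder quality metric; a
# sustained drop suggests the stored CSI no longer matches the channel
# and should be re-estimated.
from collections import deque

class CsiMonitor:
    def __init__(self, window=8, threshold=0.5):
        self.buf = deque(maxlen=window)
        self.threshold = threshold

    def update(self, metric):
        """Add one decoder metric sample; return True while CSI looks valid."""
        self.buf.append(metric)
        return sum(self.buf) / len(self.buf) >= self.threshold

mon = CsiMonitor()
healthy = [0.9, 0.85, 0.88, 0.92, 0.87, 0.9, 0.86, 0.91]
drifting = [0.4, 0.35, 0.3, 0.32, 0.28, 0.3, 0.31, 0.29]
print(all(mon.update(m) for m in healthy))    # True: averages stay high
print([mon.update(m) for m in drifting][-1])  # False once the window fills with low metrics
```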

  5. 48 CFR 32.503-12 - Maximum unliquidated amount.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... should be fully utilized, along with the services of qualified cost analysis and engineering personnel... GENERAL CONTRACTING REQUIREMENTS CONTRACT FINANCING Progress Payments Based on Costs 32.503-12 Maximum... described in paragraph (a) above is most likely to arise under the following circumstances: (1) The costs of...

  6. 7 CFR 1703.124 - Maximum and minimum grant amounts.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 7 Agriculture 11 2010-01-01 2010-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...

  7. 7 CFR 1703.124 - Maximum and minimum grant amounts.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 11 2013-01-01 2013-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...

  8. 7 CFR 1703.124 - Maximum and minimum grant amounts.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 11 2012-01-01 2012-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...

  9. 7 CFR 1703.124 - Maximum and minimum grant amounts.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 11 2011-01-01 2011-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...

  10. 7 CFR 1703.124 - Maximum and minimum grant amounts.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 11 2014-01-01 2014-01-01 false Maximum and minimum grant amounts. 1703.124 Section 1703.124 Agriculture Regulations of the Department of Agriculture (Continued) RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL DEVELOPMENT Distance Learning and Telemedicine Grant Program § 1703.124...

  11. Assessment of multi-wildfire occurrence data for machine learning based risk modelling

    NASA Astrophysics Data System (ADS)

    Lim, C. H.; Kim, M.; Kim, S. J.; Yoo, S.; Lee, W. K.

    2017-12-01

    The occurrence of East Asian wildfires is mainly caused by human activities, but extreme droughts intensified by climate change have also caused wildfires and allowed them to spread into large-scale fires. Accurate occurrence location data are required for modelling wildfire probability and risk. In South Korea, occurrence data surveyed by the KFS (Korea Forest Service) and MODIS (MODerate-resolution Imaging Spectroradiometer) satellite-based active fire data can be utilized. In this study, these two sources of wildfire occurrence data were assessed to select suitable occurrence data for machine learning based wildfire risk modelling. The MaxEnt (Maximum Entropy) model, based on machine learning, is used for wildfire risk modelling, and the two types of occurrence data together with socio-economic and climate-environment data are applied to the modelling. The KFS survey-based data showed a weak relationship with climate-environmental factors and suffered from uncertain coordinate information. The MODIS-based active fire data included detections outside forests, and many spots did not match actual wildfires. To utilize the MODIS-based active fire data, it was necessary to restrict them to forest areas and to use only high-confidence detections. For the KFS data, it was necessary to analyze separately by damage scale to improve the modelling accuracy. Ultimately, combining the two sources of wildfire occurrence data to construct more accurate information is considered the best way to model wildfire risk.

  12. A mutual information-Dempster-Shafer based decision ensemble system for land cover classification of hyperspectral data

    NASA Astrophysics Data System (ADS)

    Pahlavani, Parham; Bigdeli, Behnaz

    2017-12-01

    Hyperspectral images contain extremely rich spectral information that offer great potential to discriminate between various land cover classes. However, these images are usually composed of tens or hundreds of spectrally close bands, which result in high redundancy and great amount of computation time in hyperspectral classification. Furthermore, in the presence of mixed coverage pixels, crisp classifiers produced errors, omission and commission. This paper presents a mutual information-Dempster-Shafer system through an ensemble classification approach for classification of hyperspectral data. First, mutual information is applied to split data into a few independent partitions to overcome high dimensionality. Then, a fuzzy maximum likelihood classifies each band subset. Finally, Dempster-Shafer is applied to fuse the results of the fuzzy classifiers. In order to assess the proposed method, a crisp ensemble system based on a support vector machine as the crisp classifier and weighted majority voting as the crisp fusion method are applied on hyperspectral data. Furthermore, a dimension reduction system is utilized to assess the effectiveness of mutual information band splitting of the proposed method. The proposed methodology provides interesting conclusions on the effectiveness and potentiality of mutual information-Dempster-Shafer based classification of hyperspectral data.
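
    The fusion step can be illustrated with a minimal implementation of Dempster's rule of combination. The land-cover frame and mass assignments below are hypothetical; the rule itself (intersecting focal elements, renormalizing away conflicting mass) is standard:

```python
# Minimal Dempster's rule of combination for two mass functions over a
# frame of discernment (hypothetical classes), illustrating how per-subset
# classifier outputs could be fused. Focal elements are frozensets.
def dempster_combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources fully disagree")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

URBAN, VEG, WATER = "urban", "vegetation", "water"
m1 = {frozenset({URBAN}): 0.6, frozenset({URBAN, VEG}): 0.3,
      frozenset({URBAN, VEG, WATER}): 0.1}
m2 = {frozenset({VEG}): 0.5, frozenset({URBAN, VEG, WATER}): 0.5}
fused = dempster_combine(m1, m2)
for focal, mass in sorted(fused.items(), key=lambda kv: -kv[1]):
    print(set(focal), round(mass, 3))
```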

  13. Mixture Rasch Models with Joint Maximum Likelihood Estimation

    ERIC Educational Resources Information Center

    Willse, John T.

    2011-01-01

    This research provides a demonstration of the utility of mixture Rasch models. Specifically, a model capable of estimating a mixture partial credit model using joint maximum likelihood is presented. Like the partial credit model, the mixture partial credit model has the beneficial feature of being appropriate for analysis of assessment data…

  14. Paracrine communication maximizes cellular response fidelity in wound signaling

    PubMed Central

    Handly, L Naomi; Pilko, Anna; Wollman, Roy

    2015-01-01

    Population averaging due to paracrine communication can arbitrarily reduce cellular response variability. Yet, variability is ubiquitously observed, suggesting limits to paracrine averaging. It remains unclear whether and how biological systems may be affected by such limits of paracrine signaling. To address this question, we quantify the signal and noise of Ca2+ and ERK spatial gradients in response to an in vitro wound within a novel microfluidics-based device. We find that while paracrine communication reduces gradient noise, it also reduces the gradient magnitude. Accordingly we predict the existence of a maximum gradient signal to noise ratio. Direct in vitro measurement of paracrine communication verifies these predictions and reveals that cells utilize optimal levels of paracrine signaling to maximize the accuracy of gradient-based positional information. Our results demonstrate the limits of population averaging and show the inherent tradeoff in utilizing paracrine communication to regulate cellular response fidelity. DOI: http://dx.doi.org/10.7554/eLife.09652.001 PMID:26448485
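
    The predicted tradeoff can be reproduced in a toy model (all numbers hypothetical, and correlations between overlapping averaging windows ignored): cells along a line read a smooth gradient plus independent noise; averaging over more neighbors shrinks the noise like 1/sqrt(2r+1) but also flattens the gradient, so the gradient signal-to-noise ratio peaks at an intermediate communication range:

```python
# Toy model of the paracrine-averaging tradeoff: averaging radius r reduces
# noise but flattens the gradient, giving an interior SNR maximum.
import math

SIGMA_NOISE = 0.05
POS = range(-150, 151)
signal = {x: math.exp(-(x / 20.0) ** 2) for x in POS}   # hypothetical gradient

def gradient_snr(r):
    n = 2 * r + 1
    xs = [x for x in POS if x - r >= -150 and x + r + 1 <= 150]
    smoothed = {x: sum(signal[x + u] for u in range(-r, r + 1)) / n for x in xs}
    grad = max(abs(smoothed[x + 1] - smoothed[x]) for x in xs[:-1])
    noise = math.sqrt(2.0) * SIGMA_NOISE / math.sqrt(n)  # noise on a difference
    return grad / noise

snr = {r: gradient_snr(r) for r in range(0, 81, 5)}
best = max(snr, key=snr.get)
print("optimal averaging radius:", best)
```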

  15. Research on application of GIS and GPS in inspection and management of city gas pipeline network

    NASA Astrophysics Data System (ADS)

    Zhou, Jin; Meng, Xiangyin; Tao, Tao; Zhang, Fengpei

    2018-01-01

    To solve problems in current gas company patrol management, such as inaccurate attendance records and patrol personnel straying outside their inspection areas, this paper proposes applying the SuperMap iDeskTop 8C plug-in desktop GIS application and development platform, the positioning function of GPS, and the data transmission functions of 3G/4G/GPRS/Ethernet to develop a gas pipeline inspection management system. We build associations among real-time data, pipe network information, patrol data, map information, spatial data and so on to realize bottom-layer data fusion, and use the mobile location system and patrol management client to achieve real-time interaction between the client and the mobile terminal. Practical application shows that the system achieves standardized management of patrol tasks, reasonable evaluation of patrol work, and maximum utilization of patrol resources.

  16. Classifier utility modeling and analysis of hypersonic inlet start/unstart considering training data costs

    NASA Astrophysics Data System (ADS)

    Chang, Juntao; Hu, Qinghua; Yu, Daren; Bao, Wen

    2011-11-01

    Start/unstart detection is one of the most important issues for hypersonic inlets and is also the foundation of protection control of the scramjet. Inlet start/unstart detection can be treated as a standard pattern classification problem, and the training sample costs have to be considered in classifier modeling, as both CFD numerical simulations and wind tunnel experiments of hypersonic inlets cost time and money. To address this, CFD simulation of the inlet is studied as a first step, and the simulation results provide the training data for pattern classification of hypersonic inlet start/unstart. Then classifier modeling technology and maximum classifier utility theories are introduced to analyze the effect of training data cost on classifier utility. In conclusion, it is useful to introduce support vector machine algorithms to acquire the classifier model of hypersonic inlet start/unstart, and the minimum total cost of the hypersonic inlet start/unstart classifier can be obtained by the maximum classifier utility theories.

  17. Maximum utilization of women's potentials.

    PubMed

    1998-01-01

    Balayan's Municipal Center for Women was created to recognize women's role in the family and community in nation-building; to support the dignity and integrity of all people, especially women, and fight against rape, incest, wife beating, sexual harassment, and sexual discrimination; to empower women through education; to use women as equal partners in achieving progress; to end gender bias and discrimination, and improve women's status; and to enact progressive legal and moral change in favor of women and women's rights. The organization's functions in the following areas are described: education and information dissemination, community organizing, the provision of economic and livelihood assistance, women's counseling, health assistance, legislative advocacy and research, legal assistance, women's networking, and monitoring and evaluation.

  18. Static vs stochastic optimization: A case study of FTSE Bursa Malaysia sectorial indices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamat, Nur Jumaadzan Zaleha; Jaaman, Saiful Hafizah; Ahmad, Rokiah Rozita

    2014-06-19

    Traditional portfolio optimization methods in the likes of Markowitz's mean-variance model and the semi-variance model utilize static expected return and volatility risk from historical data to generate an optimal portfolio. The optimal portfolio may not truly be optimal in reality because maximum and minimum values in the data may strongly influence the expected return and volatility risk values. This paper considers distributions of assets' return and volatility risk to determine a more realistic optimized portfolio. For illustration purposes, sectorial indices data from FTSE Bursa Malaysia are employed. The results show that stochastic optimization provides a more stable information ratio.

  19. Pregnant women with substance use disorders: The intersection of history, ethics, and advocacy.

    PubMed

    Acquavita, Shauna P; Kauffman, Sandra S; Talks, Alexandra; Sherman, Kate

    2016-01-01

    Pregnant women with substance use disorders face many obstacles, including obtaining evidence-based treatment and care. This article (1) briefly reviews the history of pregnant women in clinical trials and substance use disorders treatment research; (2) identifies current ethical issues facing researchers studying pregnant women with substance use disorders; (3) presents and describes an ethical framework to utilize; and (4) identifies future directions needed to develop appropriate research and treatment policies and practices. Current research is not providing enough information to clinicians, policy-makers, and the public about maternal and child health and substance use disorders, and the data will not be sufficient to offer maximum benefit until protocols are changed.

  20. Molecular simulation of CO chemisorption on Co(0001) in presence of supercritical fluid solvent: A potential of mean force study

    NASA Astrophysics Data System (ADS)

    Asiaee, Alireza; Benjamin, Kenneth M.

    2016-08-01

    For several decades, heterogeneous catalytic processes have been improved through utilizing supercritical fluids (SCFs) as solvents. While numerous experimental studies have been conducted across a range of chemistries, such as oxidation, pyrolysis, amination, and Fischer-Tropsch synthesis, there is still little fundamental, molecular-level information regarding the role of the SCF in elementary heterogeneous catalytic steps. In this study, the influence of hexane solvent on the adsorption of carbon monoxide on Co(0001), as the first step in the reaction mechanism of many processes involving syngas conversion, is probed. Simulations are performed at various bulk hexane densities, ranging from ideal gas conditions (no SCF hexane) to various near- and super-critical hexane densities. For this purpose, both density functional theory and molecular dynamics simulations are employed to determine the adsorption energy and free energy change during CO chemisorption. Potential of mean force calculations, utilizing umbrella sampling and the weighted histogram analysis method, provide the first commentary on SCF solvent effects on the energetic aspects of the chemisorption process. Simulation results indicate an enhanced stability of CO adsorption on the catalyst surface in the presence of supercritical hexane within the reduced pressure range of 1.0-1.5 at a constant temperature of 523 K. Furthermore, it is shown that the maximum stability of CO in the adsorbed state as a function of supercritical hexane density at 523 K nearly coincides with the maximum isothermal compressibility of bulk hexane at this temperature.

  1. Non-rigid registration between 3D ultrasound and CT images of the liver based on intensity and gradient information

    NASA Astrophysics Data System (ADS)

    Lee, Duhgoon; Nam, Woo Hyun; Lee, Jae Young; Ra, Jong Beom

    2011-01-01

    In order to utilize both ultrasound (US) and computed tomography (CT) images of the liver concurrently for medical applications such as diagnosis and image-guided intervention, non-rigid registration between these two types of images is an essential step, as local deformation between US and CT images exists due to the different respiratory phases involved and due to the probe pressure that occurs in US imaging. This paper introduces a voxel-based non-rigid registration algorithm between the 3D B-mode US and CT images of the liver. In the proposed algorithm, to improve the registration accuracy, we utilize the surface information of the liver and gallbladder in addition to the information of the vessels inside the liver. For an effective correlation between US and CT images, we treat those anatomical regions separately according to their characteristics in US and CT images. Based on a novel objective function using a 3D joint histogram of the intensity and gradient information, vessel-based non-rigid registration is followed by surface-based non-rigid registration in sequence, which improves the registration accuracy. The proposed algorithm is tested for ten clinical datasets and quantitative evaluations are conducted. Experimental results show that the registration error between anatomical features of US and CT images is less than 2 mm on average, even with local deformation due to different respiratory phases and probe pressure. In addition, the lesion registration error is less than 3 mm on average with a maximum of 4.5 mm that is considered acceptable for clinical applications.

  2. Modelling information flow along the human connectome using maximum flow.

    PubMed

    Lyoo, Youngwook; Kim, Jieun E; Yoon, Sujung

    2018-01-01

    The human connectome is a complex network that transmits information between interlinked brain regions. Using graph theory, well-known network measures of integration between brain regions have previously been constructed under the key assumption that information flows strictly along the shortest possible paths between two nodes. However, it is now apparent that information does flow through non-shortest paths in many real-world networks such as cellular networks, social networks, and the internet. In the current hypothesis, we present a novel framework using maximum flow to quantify information flow along all possible paths within the brain, implementing an analogy to network traffic. We hypothesize that the connection strengths of brain networks represent a limit on the amount of information that can flow through the connections per unit of time. This allows us to compute the maximum amount of information flow between two brain regions along all possible paths. Using this novel framework of maximum flow, previous network topological measures are expanded to account for information flow through non-shortest paths. The most important advantage of the current approach using maximum flow is that it can integrate the weighted connectivity data in a way that better reflects the real information flow of the brain network. The current framework and its concept of maximum flow provide insight into how network structure shapes information flow, in contrast to shortest-path graph measures, and suggest future applications such as investigating structural and functional connectomes at a neuronal level. Copyright © 2017 Elsevier Ltd. All rights reserved.
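
    The maximum-flow idea can be illustrated on a toy three-node network. NetworkX stands in for the authors' connectome pipeline; the nodes and capacities below are hypothetical:

```python
import networkx as nx

# Toy directed "connectome": edge capacities model the limit on information
# that a connection can carry per unit time.
G = nx.DiGraph()
G.add_edge("A", "B", capacity=3.0)  # direct (shortest) path A -> B
G.add_edge("A", "C", capacity=2.0)  # detour A -> C -> B
G.add_edge("C", "B", capacity=2.0)

flow_value, flow_dict = nx.maximum_flow(G, "A", "B")
# A shortest-path measure would credit only the direct A -> B edge (3.0);
# maximum flow also counts the detour, giving 3.0 + 2.0 = 5.0.
print(flow_value)  # 5.0
```

    The detour's contribution is exactly what the abstract means by accounting for information flow through non-shortest paths.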

  3. Interactive effects of carbon footprint information and its accessibility on value and subjective qualities of food products.

    PubMed

    Kimura, Atsushi; Wada, Yuji; Kamada, Akiko; Masuda, Tomohiro; Okamoto, Masako; Goto, Sho-ichi; Tsuzuki, Daisuke; Cai, Dongsheng; Oka, Takashi; Dan, Ippeita

    2010-10-01

    We aimed to explore the interactive effects of the accessibility of information and the degree of carbon footprint score on consumers' value judgments of food products. Participants (n=151, undergraduate students in Japan) rated their maximum willingness to pay (WTP) for four food products varying in information accessibility (active-search or read-only conditions) and in the carbon footprint values (low, middle, high, or non-display) provided. We also assessed further effects of information accessibility and carbon footprint value on other product attributes, utilizing subjective estimations of taste, quality, healthiness, and environmental friendliness. Results of the experiment demonstrated an interactive effect of information accessibility and the degree of carbon emission on consumer valuation of carbon footprint-labeled food. The carbon footprint value had a stronger impact on participants' WTP in the active-search condition than in the read-only condition. Similar to WTP, the results of the subjective ratings for product qualities also exhibited an interactive effect of the two factors on the rating of environmental friendliness for products. These results imply that the perceived environmental friendliness inferable from a carbon footprint label contributes to creating value for a food product.

  4. Past and Present Large Solid Rocket Motor Test Capabilities

    NASA Technical Reports Server (NTRS)

    Kowalski, Robert R.; Owen, David B., II

    2011-01-01

    A study was performed to identify the current and historical trends in the capability of solid rocket motor testing in the United States. The study focused on test positions capable of testing solid rocket motors of at least 10,000 lbf thrust. Top-level information was collected for two distinct data points plus/minus a few years: 2000 (Y2K) and 2010 (Present). Data was combined from many sources, but primarily focused on data from the Chemical Propulsion Information Analysis Center's Rocket Propulsion Test Facilities Database and the heritage Chemical Propulsion Information Agency/M8 Solid Rocket Motor Static Test Facilities Manual. Data for the Rocket Propulsion Test Facilities Database and the heritage M8 Solid Rocket Motor Static Test Facilities Manual are provided to the Chemical Propulsion Information Analysis Center directly by the test facilities. Information for each test cell for each time period was compiled and plotted to produce a graphical display of the changes for the nation, NASA, the Department of Defense, and commercial organizations during the past ten years. Major groups of plots include test facility by geographic location, test cells by status/utilization, and test cells by maximum thrust capability. The results are discussed.

  5. Improved Drain Current Saturation and Voltage Gain in Graphene–on–Silicon Field Effect Transistors

    PubMed Central

    Song, Seung Min; Bong, Jae Hoon; Hwang, Wan Sik; Cho, Byung Jin

    2016-01-01

    Graphene devices for radio frequency (RF) applications are of great interest due to their excellent carrier mobility and saturation velocity. However, the insufficient current saturation in graphene field effect transistors (FETs) is a barrier preventing enhancements of the maximum oscillation frequency and voltage gain, both of which should be improved for RF transistors. Achieving a high output resistance is therefore a crucial step for graphene to be utilized in RF applications. In the present study, we report high output resistances and voltage gains in graphene-on-silicon (GoS) FETs. This is achieved by utilizing bare silicon as a supporting substrate without an insulating layer under the graphene. The GoSFETs exhibit a maximum output resistance of 2.5 MΩ∙μm, maximum intrinsic voltage gain of 28 dB, and maximum voltage gain of 9 dB. This method opens a new route to overcome the limitations of conventional graphene-on-insulator (GoI) FETs and subsequently brings graphene electronics closer to practical usage. PMID:27142861

  6. A rod type linear ultrasonic motor utilizing longitudinal traveling waves: proof of concept

    NASA Astrophysics Data System (ADS)

    Wang, Liang; Wielert, Tim; Twiefel, Jens; Jin, Jiamei; Wallaschek, Jörg

    2017-08-01

    This paper proposes a non-resonant linear ultrasonic motor utilizing longitudinal traveling waves. The longitudinal traveling waves in the rod type stator are generated by inducing longitudinal vibrations at one end of the waveguide and eliminating reflections at the opposite end by a passive damper. Owing to Poisson's effect, the stator surface points move on elliptic trajectories and the slider is driven forward by friction. In contrast to many other flexural traveling wave linear ultrasonic motors, the driving direction of the proposed motor is identical to the wave propagation direction. The feasibility of the motor concept is demonstrated theoretically and experimentally. First, the design and operation principle of the motor are presented in detail. Then, the stator is modeled utilizing the transfer matrix method and verified by experimental studies. In addition, experimental parameter studies are carried out to identify the motor characteristics. Finally, the performance of the proposed motor is investigated. Overall, the results indicate very dynamic drive characteristics. The motor prototype achieves a maximum mean velocity of 115 mm/s and a maximum load of 0.25 N. Thereby, the start-up and shutdown times from the maximum speed are lower than 5 ms.

  7. Preoperative assessment of intracranial tumors with perfusion MR and a volumetric interpolated examination: a comparative study with DSA.

    PubMed

    Wetzel, Stephan G; Cha, Soonmee; Law, Meng; Johnson, Glyn; Golfinos, John; Lee, Peter; Nelson, Peter Kim

    2002-01-01

    In evaluating intracranial tumors, a safe, low-cost alternative that provides information similar to that of digital subtraction angiography (DSA) may be of interest. Our purpose was to determine the utility and limitations of a combined MR protocol in assessing (neo-)vascularity in intracranial tumors and their relation to adjacent vessels, and to compare the results with those of DSA. Twenty-two consecutive patients with an intracranial tumor who underwent preoperative stereoscopic DSA were examined with contrast-enhanced dynamic T2*-weighted perfusion MR imaging followed by a T1-weighted three-dimensional (3D) MR study (volumetric interpolated brain examination [VIBE]). The maximum relative cerebral blood volume (rCBV) of the tumor was compared with tumor vascularity at DSA. Critical vessel structures were defined in each patient, and VIBE images of these structures were compared with DSA findings. For full exploitation of the 3D data sets, maximum-intensity projection algorithms reconstructed in real time with any desired volume and orientation were used. Tumor blush scores at DSA were significantly correlated with the rCBV measurements (r = 0.75; P < .01, Spearman rank correlation coefficient). In 17 (77%) patients, VIBE provided all relevant information about the venous system, whereas information about critical arteries was partial in 50% of the cases and not relevant in the other 50%. A fast imaging protocol consisting of perfusion MR imaging and a volumetric MR acquisition provides some of the information about tumor (neo-)vascularity and adjacent vascular anatomy that can be obtained with conventional angiography. However, the MR protocol provides insufficient visualization of distal cerebral arteries.

  8. Maximum propellant utilization in an electron bombardment thruster

    NASA Technical Reports Server (NTRS)

    Kaufman, H. R.; Cohen, A. J.

    1971-01-01

    Current theory and experimental data on propellant utilization in electron bombardment ion thrusters are reviewed. Because the majority of investigations have been conducted with mercury, the presentation emphasizes that propellant. The results are presented in as general a form as possible to facilitate use in areas other than space propulsion.

  9. Utility and Limitations of Using Gene Expression Data to Identify Functional Associations

    PubMed Central

    Peng, Cheng; Shiu, Shin-Han

    2016-01-01

    Gene co-expression has been widely used to hypothesize gene function through guilt-by-association. However, it is not clear to what degree co-expression is informative, whether it can be applied to genes involved in different biological processes, and how the type of dataset impacts inferences about gene functions. Here our goal is to assess the utility and limitations of using co-expression as a criterion to recover functional associations between genes. By determining the percentage of gene pairs in a metabolic pathway with significant expression correlation, we found that many genes in the same pathway do not have similar transcript profiles, and that the choice of dataset, annotation quality, gene function, expression similarity measure, and clustering approach significantly impacts the ability to recover functional associations between genes, using Arabidopsis thaliana as an example. Some datasets are more informative in capturing coordinated expression profiles, and larger datasets are not always better. In addition, to recover the maximum number of known pathways and identify candidate genes with similar functions, it is important to explore multiple dataset combinations, similarity measures, clustering algorithms, and parameters rather exhaustively. Finally, we validated the biological relevance of co-expression cluster memberships with an independent phenomics dataset and found that genes that consistently cluster with leucine degradation genes tend to have similar leucine levels in mutants. This study provides a framework for obtaining gene functional associations by maximizing the information that can be obtained from gene expression datasets. PMID:27935950
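
    The pathway-recovery criterion described above can be sketched as follows. The synthetic expression profiles and the r > 0.7 cutoff are illustrative assumptions, not the study's data or threshold:

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.normal(size=20)  # shared "pathway" signal across 20 samples

# Six hypothetical pathway genes with increasing amounts of private noise.
expr = np.vstack([base + rng.normal(scale=s, size=20)
                  for s in (0.2, 0.3, 0.5, 1.0, 2.0, 4.0)])

r = np.corrcoef(expr)               # pairwise Pearson correlations (genes x genes)
iu = np.triu_indices_from(r, k=1)   # each unordered gene pair counted once
frac = float(np.mean(r[iu] > 0.7))  # share of pairs called "co-expressed"
print(f"{frac:.2f} of pathway gene pairs exceed r > 0.7")
```

    Even with a common underlying signal, noisy genes fall below the cutoff, which mirrors the abstract's finding that many same-pathway genes lack similar transcript profiles.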

  10. Upper Limb Asymmetry in the Sense of Effort Is Dependent on Force Level

    PubMed Central

    Mitchell, Mark; Martin, Bernard J.; Adamo, Diane E.

    2017-01-01

    Previous studies have shown that asymmetries in upper limb sensorimotor function are dependent on the source of sensory and motor information, hand preference, and differences in hand strength. Further, the utilization of sensory and motor information and the mode of control of force may differ between the right hand/left hemisphere and left hand/right hemisphere systems. To more clearly understand the unique contribution of hand strength and intrinsic differences to the control of grasp force, we investigated hand/hemisphere differences when the source of force information was encoded at two different force levels, corresponding to a 20 and 70% maximum voluntary contraction of the right and left hand of each participant. Eleven adult males who demonstrated a stronger right than left maximum grasp force were requested to match a right or left hand 20 or 70% maximal voluntary contraction reference force with the opposite hand. During the matching task, visual feedback corresponding to the production of the reference force was available and then removed when the contralateral hand performed the match. The matching relative force error was significantly different between hands for the 70% MVC reference force but not for the 20% MVC reference force. Directional asymmetries, quantified as the matching force constant error, showed that right hand overshoots and left hand undershoots were force dependent, primarily due to greater undershoots when the left hand matched the right hand reference force. Findings further suggest that the interaction between internal sources of information, such as efferent copy and proprioception, as well as hand strength differences, appears to be hand/hemisphere system dependent. Investigations of force matching tasks under conditions whereby force level is varied and visual feedback of the reference force is available provide critical baseline information for building effective interventions for asymmetric (stroke-related, Parkinson's Disease) and symmetric (Amyotrophic Lateral Sclerosis) upper limb recovery in neurological conditions where the various sources of sensory-motor information have been significantly altered by the disease process. PMID:28491047

  12. Software For Nearly Optimal Packing Of Cargo

    NASA Technical Reports Server (NTRS)

    Fennel, Theron R.; Daughtrey, Rodney S.; Schwaab, Doug G.

    1994-01-01

    PACKMAN computer program used to find nearly optimal arrangements of cargo items in storage containers, subject to such multiple packing objectives as utilization of volumes of containers, utilization of containers up to limits on weights, and other considerations. Automatic packing algorithm employed attempts to find best positioning of cargo items in container, such that volume and weight capacity of container both utilized to maximum extent possible. Written in Common LISP.

  13. Strategy for the elucidation of elemental compositions of trace analytes based on a mass resolution of 100,000 full width at half maximum.

    PubMed

    Kaufmann, Anton

    2010-07-30

    Elemental compositions (ECs) can be elucidated by evaluating the high-resolution mass spectra of unknown or suspected unfragmented analyte ions. Classical approaches utilize the exact mass of the monoisotopic peak (M + 0) and the relative abundance of isotope peaks (M + 1 and M + 2). The availability of high-resolution instruments like the Orbitrap currently permits mass resolutions up to 100,000 full width at half maximum. This not only allows the determination of relative isotopic abundances (RIAs), but also the extraction of other diagnostic information from the spectra, such as fully resolved signals originating from (34)S isotopes and fully or partially resolved signals related to (15)N isotopes (isotopic fine structure). Fully and partially resolved peaks can be evaluated by visual inspection of the measured peak profiles. This approach is shown to be capable of correctly discarding many of the EC candidates which were proposed by commercial EC calculating algorithms. Using this intuitive strategy significantly extends the upper mass range for the successful elucidation of ECs. Copyright 2010 John Wiley & Sons, Ltd.

  14. Maximizing the potential of cropping systems for nematode management.

    PubMed

    Noe, J P; Sasser, J N; Imbriani, J L

    1991-07-01

    Quantitative techniques were used to analyze and determine optimal potential profitability of 3-year rotations of cotton, Gossypium hirsutum cv. Coker 315, and soybean, Glycine max cv. Centennial, with increasing population densities of Hoplolaimus columbus. Data collected from naturally infested on-farm research plots were combined with economic information to construct a microcomputer spreadsheet analysis of the cropping system. Nonlinear mathematical functions were fitted to field data to represent damage functions and population dynamic curves. Maximum yield losses due to H. columbus were estimated to be 20% on cotton and 42% on soybean. Maximum at-harvest population densities were calculated to be 182/100 cm(3) soil for cotton and 149/100 cm(3) soil for soybean. Projected net incomes ranged from a $17.74/ha net loss for the soybean-cotton-soybean sequence to a net profit of $46.80/ha for the cotton-soybean-cotton sequence. The relative profitability of various rotations changed as nematode densities increased, indicating economic thresholds for recommending alternative crop sequences. The utility and power of quantitative optimization was demonstrated for comparisons of rotations under different economic assumptions and with other management alternatives.

  15. Using radar-derived parameters to forecast lightning cessation for nonisolated storms

    NASA Astrophysics Data System (ADS)

    Davey, Matthew J.; Fuelberg, Henry E.

    2017-03-01

    Lightning impacts operations at the Kennedy Space Center (KSC) and other outdoor venues, leading to injuries, inconvenience, and detrimental economic impacts. This research focuses on cases of "nonisolated" lightning, which we define as one cell whose flashes have ceased although it is still embedded in weak composite reflectivity (Z ≥ 15 dBZ) with another cell that is still producing flashes. The objective is to determine if any radar-derived parameters provide useful information about the occurrence of lightning cessation in remnant storms. The data set consists of 50 warm season (May-September) nonisolated storms near KSC during 2013. The research utilizes the National Lightning Detection Network, the second generation Lightning Detection and Ranging network, and polarized radar data. These data are merged and analyzed using the Warning Decision Support System-Integrated Information at 1 min intervals. Our approach considers only 62 parameters, most of which are related to the noninductive charging mechanism. These included the presence of graupel at various thermal altitudes, maximum reflectivity of the decaying storm at thermal altitudes, maximum connecting composite reflectivity between the decaying cell and the active cell, minutes since the previous flash, and several others. Results showed that none of the parameters reliably indicated lightning cessation for even our restrictive definition of nonisolated storms. Additional research is needed before cessation can be determined operationally with the high degree of accuracy required for safety.

  16. Compact pulse generators with soft ferromagnetic cores driven by gunpowder and explosive.

    PubMed

    Ben, Chi; He, Yong; Pan, Xuchao; Chen, Hong; He, Yuan

    2015-12-01

    Compact pulse generators that utilize soft ferromagnets as an initial energy carrier inside a multi-turn coil, with hard ferromagnets providing the initial magnetic field outside the coil, have been studied. Two methods of reducing the magnetic flux in the generators were examined: (1) igniting gunpowder to launch the core out of the generator, and (2) detonating explosives to demagnetize the core. Several types of compact generators were explored to verify feasibility. The generators with an 80-turn coil that utilize gunpowder were capable of producing pulses with an amplitude of 78.6 V and a full width at half maximum of 0.41 ms. The generators with a 37-turn coil that utilize explosive were capable of producing pulses with an amplitude of 1.41 kV and a full width at half maximum of 11.68 μs. Both methods were successful, but they produce voltage waveforms with significantly different characteristics.

  17. Patient-specific parameter estimation in single-ventricle lumped circulation models under uncertainty

    PubMed Central

    Schiavazzi, Daniele E.; Baretta, Alessia; Pennati, Giancarlo; Hsia, Tain-Yen; Marsden, Alison L.

    2017-01-01

    Computational models of cardiovascular physiology can inform clinical decision-making, providing a physically consistent framework to assess vascular pressures and flow distributions, and aiding in treatment planning. In particular, lumped parameter network (LPN) models that make an analogy to electrical circuits offer a fast and surprisingly realistic method to reproduce the circulatory physiology. The complexity of LPN models can vary significantly to account, for example, for cardiac and valve function, respiration, autoregulation, and time-dependent hemodynamics. More complex models provide insight into detailed physiological mechanisms, but their utility is maximized if one can quickly identify patient specific parameters. The clinical utility of LPN models with many parameters will be greatly enhanced by automated parameter identification, particularly if parameter tuning can match non-invasively obtained clinical data. We present a framework for automated tuning of 0D lumped model parameters to match clinical data. We demonstrate the utility of this framework through application to single ventricle pediatric patients with Norwood physiology. Through a combination of local identifiability, Bayesian estimation and maximum a posteriori simplex optimization, we show the ability to automatically determine physiologically consistent point estimates of the parameters and to quantify uncertainty induced by errors and assumptions in the collected clinical data. We show that multi-level estimation, that is, updating the parameter prior information through sub-model analysis, can lead to a significant reduction in the parameter marginal posterior variance. We first consider virtual patient conditions, with clinical targets generated through model solutions, and second application to a cohort of four single-ventricle patients with Norwood physiology. PMID:27155892

  18. 24 CFR 880.503 - Maximum annual commitment and project account.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... may be contracted for in the ACC is the total of the contract rents and utility allowances for all... commitment exceeds the amount actually paid out under the Contract or ACC each year. Payments will be made... the Contract or ACC for a fiscal year exceeds the maximum annual commitment and would cause the amount...

  19. 24 CFR 880.503 - Maximum annual commitment and project account.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... may be contracted for in the ACC is the total of the contract rents and utility allowances for all... commitment exceeds the amount actually paid out under the Contract or ACC each year. Payments will be made... the Contract or ACC for a fiscal year exceeds the maximum annual commitment and would cause the amount...

  20. Item Selection and Ability Estimation Procedures for a Mixed-Format Adaptive Test

    ERIC Educational Resources Information Center

    Ho, Tsung-Han; Dodd, Barbara G.

    2012-01-01

    In this study we compared five item selection procedures using three ability estimation methods in the context of a mixed-format adaptive test based on the generalized partial credit model. The item selection procedures used were maximum posterior weighted information, maximum expected information, maximum posterior weighted Kullback-Leibler…
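
    Maximum-information item selection can be sketched for the simpler dichotomous 2PL model (the study itself uses the generalized partial credit model and several selection procedures; the item bank below is hypothetical):

```python
import math

def item_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: I = a^2 * p * (1 - p)."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b).
bank = [(1.2, -1.0), (0.8, 0.0), (1.5, 0.2), (1.0, 1.5)]

def select_item(theta, administered):
    """Maximum-information selection: the unused item most informative at theta."""
    candidates = [i for i in range(len(bank)) if i not in administered]
    return max(candidates, key=lambda i: item_information(theta, *bank[i]))

print(select_item(0.0, set()))  # item 2: high discrimination, difficulty near theta
```

    The posterior-weighted and Kullback-Leibler procedures compared in the study replace the point evaluation at theta with an average over the interim ability distribution.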

  1. Study of EHD flow generator's efficiencies utilizing pin to single ring and multi-concentric rings electrodes

    NASA Astrophysics Data System (ADS)

    Sumariyah; Kusminart; Hermanto, A.; Nuswantoro, P.

    2016-11-01

    EHD flow, or ionic wind, yielded by corona discharge is a stream of ionized gas. EHD flow is generated by a strong electric field, and its direction follows the electric field lines. In this study, the efficiencies of EHD flow generators utilizing pin to multi-concentric-rings electrodes (P-MRE) and pin to single-ring electrodes (P-SRE) have been measured and compared. EHD flow was generated by applying a 0-10 kV DC high voltage to the pin electrode with positive polarity and to the ring/multi-concentric-rings electrode with negative polarity. The efficiency was calculated as the ratio of the mechanical power of the flow to the electrical power consumed. We obtained a maximum efficiency of 0.54% for the generator utilizing pin to multi-concentric-rings electrodes and 0.23% for the generator utilizing a pin to single-ring electrode; the efficiency with P-MRE was thus 2.34 times that with P-SRE.
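
    The efficiency definition used above, the ratio of mechanical flow power to electrical input power, can be sketched as follows. The kinetic power of the jet, 0.5 * rho * A * v**3, is taken as the mechanical power, and all operating values are hypothetical, not the paper's measurements:

```python
def ehd_efficiency(rho, area, velocity, voltage, current):
    """Efficiency = kinetic power of the jet / electrical input power.

    Mechanical power of the flow: P_m = 0.5 * rho * A * v**3 (W).
    Electrical input power: P_e = V * I (W).
    """
    p_mech = 0.5 * rho * area * velocity ** 3
    p_elec = voltage * current
    return p_mech / p_elec

# Hypothetical operating point: air density, outlet area, jet speed, drive V and I.
eff = ehd_efficiency(rho=1.2, area=1e-4, velocity=2.4, voltage=8e3, current=20e-6)
print(f"{eff * 100:.2f}%")  # 0.52%
```

    The cubic dependence on jet velocity explains why small gains in induced flow speed move the efficiency substantially.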

  2. Investigating cold based summit glaciers through direct access to the glacier base: a case study constraining the maximum age of Chli Titlis glacier, Switzerland

    NASA Astrophysics Data System (ADS)

    Bohleber, Pascal; Hoffmann, Helene; Kerch, Johanna; Sold, Leo; Fischer, Andrea

    2018-01-01

    Cold glaciers at the highest locations of the European Alps have been investigated by drilling ice cores to retrieve their stratigraphic climate records. Findings like the Oetztal ice man have demonstrated that small ice bodies at summit locations of comparatively lower altitudes may also contain old ice if locally frozen to the underlying bedrock. In this case, constraining the maximum age of their lowermost ice part may help to identify past periods with minimum ice extent in the Alps. However, with recent warming and consequent glacier mass loss, these sites may not preserve their unique climate information for much longer. Here we utilized an existing ice cave at Chli Titlis (3030 m), central Switzerland, to perform a case study for investigating the maximum age of cold-based summit glaciers in the Alps. The cave offers direct access to the glacier stratigraphy without the logistical effort required in ice core drilling. In addition, a pioneering exploration had already demonstrated stagnant cold ice conditions at Chli Titlis, albeit more than 25 years ago. Our englacial temperature measurements and the analysis of the isotopic and physical properties of ice blocks sampled at three locations within the ice cave show that cold ice still exists fairly unchanged today. State-of-the-art micro-radiocarbon analysis constrains the maximum age of the ice at Chli Titlis to about 5000 years before present. By this means, the approach presented here will contribute to a future systematic investigation of cold-based summit glaciers, also in the Eastern Alps.

  3. Observation of emission process in hydrogen-like nitrogen Z-pinch discharge with time integrated soft X-ray spectrum pinhole image

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakai, Y.; Kumai, H.; Nakanishi, Y.

    2013-02-15

    The emission spectra of the hydrogen-like nitrogen Balmer line at the wavelength of 13.4 nm in capillary Z-pinch discharge plasma are experimentally examined. Ionization to fully stripped nitrogen at the pinch maximum, and subsequent rapid expansion cooling, are required to establish the population inversion between the principal quantum numbers n = 2 and n = 3. The ionization and recombination processes, with estimated plasma parameters, are evaluated by utilizing a time-integrated spectrum pinhole image containing radial spatial information. A cylindrical capillary plasma is pinched by a triangular pulsed current with peak amplitude of 50 kA and pulse width of 50 ns.

  4. Radiotherapy-induced Cherenkov luminescence imaging in a human body phantom.

    PubMed

    Ahmed, Syed Rakin; Jia, Jeremy Mengyu; Bruza, Petr; Vinogradov, Sergei; Jiang, Shudong; Gladstone, David J; Jarvis, Lesley A; Pogue, Brian W

    2018-03-01

    Radiation therapy produces Cherenkov optical emission in tissue, and this light can be utilized to activate molecular probes. The feasibility of sensing luminescence from a tissue molecular oxygen sensor from within a human body phantom was examined using the geometry of the axillary lymph node region. Detection of regions down to 30-mm deep was feasible with submillimeter spatial resolution with the total quantity of the phosphorescent sensor PtG4 near 1 nanomole. Radiation sheet scanning in an epi-illumination geometry provided optimal coverage, and maximum intensity projection images provided illustration of the concept. This work provides the preliminary information needed to attempt this type of imaging in vivo. (2018) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).

  5. Fan Beam Emission Tomography for Laminar Fires

    NASA Technical Reports Server (NTRS)

    Sivathanu, Yudaya; Lim, Jongmook; Feikema, Douglas

    2003-01-01

    Obtaining information on the instantaneous structure of turbulent and transient flames is important in a wide variety of applications such as fire safety, pollution reduction, flame spread studies, and model validation. Durao et al. have reviewed the different methods of obtaining structure information in reacting flows, including tunable laser absorption spectroscopy, Fourier transform infrared spectroscopy, and emission spectroscopy, to mention a few. Most flames emit significant radiation signatures that are used in various applications such as fire detection, light-off detection, and flame diagnostics. Radiation signatures can be utilized to maximum advantage for determining structural information in turbulent flows. Emission spectroscopy is most advantageous in the infrared regions of the spectrum, principally because these emission lines arise from transitions in the fundamental bands of stable species such as CO2 and H2O. Based on the above, the objective of this work was to develop a fan-beam emission tomography system to obtain local scalar properties, such as temperature and mole fractions of major gas species, from path-integrated multi-wavelength infrared radiation measurements.

  6. Linking Home Plate and Algonquin Class Rocks through Microtextural Analysis: Evidence for Hydrovolcanism in the Inner Basin of Columbia Hills, Gusev Crater

    NASA Technical Reports Server (NTRS)

    Mittlefehldt, David W.; Yingst, R. Aileen; Schmidt, Mariek E.; Herkenhoff, Ken E.

    2007-01-01

    Examining the history of a rock as the summed history of its constituent grains is a proven and powerful strategy that has been used on Earth to maximize the information that can be gleaned from limited samples. Grain size, sorting, roundness, and texture can be observed at the hand-lens scale, and may reveal clues to transport regime (e.g. fluvial, glacial, eolian) and transport distance. Diagenetic minerals may be of a form and textural context that allow identification, and may point to dominant diagenetic processes (e.g. evaporitic concentration, intermittent dissolution, early vs. late diagenetic emplacement). Hand-lens scale features of volcaniclastic particles may be diagnostic of primary vs. recycled (by surface processes) grains and may provide information about eruptive patterns and processes. When the study site is truly remote, such as Mars, and when there are severe limitations on sample return or sample analysis with other methods, examination at the hand-lens scale becomes critical both for extracting the maximum of information and for best utilizing finite analytical capabilities.

  7. Electricity generation and microbial community in response to short-term changes in stack connection of self-stacked submersible microbial fuel cell powered by glycerol.

    PubMed

    Zhao, Nannan; Angelidaki, Irini; Zhang, Yifeng

    2017-02-01

    Stack connection (i.e., in series or parallel) of microbial fuel cells (MFCs) is an efficient way to boost the power output for practical application. However, there is little information available on short-term changes in stack connection and their effect on electricity generation and the microbial community. In this study, a self-stacked submersible microbial fuel cell (SSMFC) powered by glycerol was tested to elucidate this important issue. In series connection, the maximum voltage output reached 1.15 V, while the maximum current was 5.73 mA in parallel. In both connections, the maximum power density increased with the initial glycerol concentration, but glycerol degradation was faster in parallel connection. When the SSMFC was shifted from series to parallel connection, the reactor reached a stable power output without any lag phase, and the anodic microbial community composition remained nearly stable. By contrast, after changing from parallel to series connection, there was a lag period before the system stabilized again, and the microbial community composition became greatly different. This study is the first attempt to elucidate the influence of short-term changes in connection on the performance of an MFC stack, and could provide insight into the practical utilization of MFCs. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Accounting for misclassification error in retrospective smoking data.

    PubMed

    Kenkel, Donald S; Lillard, Dean R; Mathios, Alan D

    2004-10-01

    Recent waves of major longitudinal surveys in the US and other countries include retrospective questions about the timing of smoking initiation and cessation, creating a potentially important but under-utilized source of information on smoking behavior over the life course. In this paper, we explore the extent of, consequences of, and possible solutions to misclassification errors in models of smoking participation that use data generated from retrospective reports. In our empirical work, we exploit the fact that the National Longitudinal Survey of Youth 1979 provides both contemporaneous and retrospective information about smoking status in certain years. We compare the results from four sets of models of smoking participation. The first set of results are from baseline probit models of smoking participation from contemporaneously reported information. The second set of results are from models that are identical except that the dependent variable is based on retrospective information. The last two sets of results are from models that take a parametric approach to account for a simple form of misclassification error. Our preliminary results suggest that accounting for misclassification error is important. However, the adjusted maximum likelihood estimation approach to account for misclassification does not always perform as expected. Copyright 2004 John Wiley & Sons, Ltd.
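
    The parametric adjustment described above can be sketched in a few lines: if retrospective reports flip true smoking status with known false-positive rate a0 and false-negative rate a1, the reported-smoker probability is (1 - a1)·p + a0·(1 - p), and the likelihood of the reports is maximized over p. The rates, data, and grid-search estimator below are illustrative assumptions, not the study's specification:

```python
import math

def reported_prob(p: float, a0: float, a1: float) -> float:
    """Probability of a *reported* smoker when the true smoking probability
    is p, with misclassification rates a0 (false positive) and a1 (false
    negative)."""
    return (1 - a1) * p + a0 * (1 - p)

def log_likelihood(p: float, a0: float, a1: float, reports) -> float:
    q = reported_prob(p, a0, a1)
    return sum(math.log(q) if r else math.log(1 - q) for r in reports)

# Crude grid-search MLE for a constant smoking probability p.
reports = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]          # 30% reported smokers
grid = [i / 100 for i in range(1, 100)]
p_hat = max(grid, key=lambda p: log_likelihood(p, 0.02, 0.10, reports))
# p_hat lands near 0.32: the adjustment pushes the estimate above the
# naive 0.30 because false negatives outweigh false positives here.
```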

  9. Adaptive structured dictionary learning for image fusion based on group-sparse-representation

    NASA Astrophysics Data System (ADS)

    Yang, Jiajie; Sun, Bin; Luo, Chengwei; Wu, Yuzhong; Xu, Limei

    2018-04-01

    Dictionary learning is the key process in sparse representation, one of the most widely used image representation theories in image fusion. Existing dictionary learning methods make poor use of group structure information and of the sparse coefficients. In this paper, we propose a new adaptive structured dictionary learning algorithm and an l1-norm maximum fusion rule that innovatively utilizes grouped sparse coefficients to merge the images. The dictionary learning algorithm requires no prior knowledge of any group structure of the dictionary: by using the characteristics of the dictionary in expressing the signal, it can automatically find the potential structure information hidden in the dictionary. The fusion rule exploits the physical meaning of the group-structured dictionary and makes activity-level judgments on the structure information when the images are being merged, so the fused image retains more significant information. Comparisons have been made with several state-of-the-art dictionary learning methods and fusion rules. The experimental results demonstrate that the dictionary learning algorithm and the fusion rule both outperform the others in terms of several objective evaluation metrics.
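
    The activity-level idea behind an l1-norm maximum rule can be illustrated compactly: for each group of dictionary atoms, keep the source image whose sparse coefficients have the larger l1-norm. This is a hedged, plain-Python sketch with an illustrative group partition, not the authors' exact algorithm:

```python
def l1(xs):
    """l1-norm (sum of absolute values) of a coefficient slice."""
    return sum(abs(x) for x in xs)

def fuse_group_sparse(coeffs_a, coeffs_b, groups):
    """For each (start, stop) group of atoms, keep the source whose
    coefficients have the larger l1-norm (the higher activity level)."""
    fused = list(coeffs_a)
    for start, stop in groups:
        if l1(coeffs_a[start:stop]) < l1(coeffs_b[start:stop]):
            fused[start:stop] = coeffs_b[start:stop]
    return fused

a = [0.9, 0.1, 0.0, 0.2]   # sparse codes of one patch from source image A
b = [0.1, 0.0, 0.7, 0.6]   # sparse codes of the same patch from source image B
fused = fuse_group_sparse(a, b, [(0, 2), (2, 4)])
# first group taken from A (l1 = 1.0 vs 0.1), second from B (0.2 vs 1.3)
```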

  10. 33 CFR 169.5 - How are terms used in this part defined?

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... § 169.15). Gross tons means vessel tonnage measured in accordance with the method utilized by the flag... water and is capable of a maximum speed equal to or exceeding V = 3.7 × displ^0.1667, where “V” is the maximum speed and “displ” is the vessel displacement corresponding to the design waterline in cubic meters...
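
    The quoted threshold, V = 3.7 × displ^0.1667, is straightforward to evaluate. The regulation gives displacement in cubic meters; treating V as meters per second follows the related IMO High-Speed Craft Code convention and is an assumption here, as are the helper names:

```python
def high_speed_threshold(displacement_m3: float) -> float:
    """Speed a craft must meet or exceed to fall under the definition:
    V = 3.7 * displ**0.1667, with displ in cubic meters."""
    return 3.7 * displacement_m3 ** 0.1667

def is_high_speed_craft(max_speed: float, displacement_m3: float) -> bool:
    return max_speed >= high_speed_threshold(displacement_m3)

# e.g. a craft displacing 1000 m^3 must reach roughly 11.7 to qualify
```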

  11. 33 CFR 169.5 - How are terms used in this part defined?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... § 169.15). Gross tons means vessel tonnage measured in accordance with the method utilized by the flag... water and is capable of a maximum speed equal to or exceeding V = 3.7 × displ^0.1667, where “V” is the maximum speed and “displ” is the vessel displacement corresponding to the design waterline in cubic meters...

  12. 33 CFR 169.5 - How are terms used in this part defined?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... § 169.15). Gross tons means vessel tonnage measured in accordance with the method utilized by the flag... water and is capable of a maximum speed equal to or exceeding V = 3.7 × displ^0.1667, where “V” is the maximum speed and “displ” is the vessel displacement corresponding to the design waterline in cubic meters...

  13. 33 CFR 169.5 - How are terms used in this part defined?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... § 169.15). Gross tons means vessel tonnage measured in accordance with the method utilized by the flag... water and is capable of a maximum speed equal to or exceeding V = 3.7 × displ^0.1667, where “V” is the maximum speed and “displ” is the vessel displacement corresponding to the design waterline in cubic meters...

  14. Energy Efficient Cluster Based Scheduling Scheme for Wireless Sensor Networks

    PubMed Central

    Srie Vidhya Janani, E.; Ganesh Kumar, P.

    2015-01-01

    The energy utilization of sensor nodes in a large-scale wireless sensor network points out the crucial need for scalable and energy-efficient clustering protocols. Since sensor nodes usually operate on batteries, the maximum utility of the network depends greatly on ideal use of the energy remaining in these sensor nodes. In this paper, we propose an Energy Efficient Cluster Based Scheduling Scheme for wireless sensor networks that balances sensor network lifetime and energy efficiency. In the first phase of the proposed scheme, the cluster topology is discovered and the cluster head is chosen based on remaining energy level. The cluster head monitors the network energy threshold value to identify the energy drain rate of all its cluster members. In the second phase, a scheduling algorithm allocates time slots to cluster-member data packets, so that congestion is avoided entirely. In the third phase, an energy consumption model is proposed to maintain the maximum residual energy level across the network. Moreover, we also propose a new packet format which is given to all cluster member nodes. The simulation results prove that the proposed scheme greatly contributes to maximum network lifetime, high residual energy, reduced overhead, and maximum delivery ratio. PMID:26495417

  15. Enhancing substrate utilization and power production of a microbial fuel cell with nitrogen-doped carbon aerogel as cathode catalyst.

    PubMed

    Tardy, Gábor Márk; Lóránt, Bálint; Lóka, Máté; Nagy, Balázs; László, Krisztina

    2017-07-01

    The catalytic efficiency of a nitrogen-doped, mesoporous carbon aerogel cathode catalyst was investigated in a two-chambered microbial fuel cell (MFC), using graphite felt as the base material for cathode and anode and peptone as the carbon source. This mesoporous carbon aerogel catalyst layer on the cathode increased the maximum power density (normalized to anode volume) to 2.7 times that obtained with a graphite felt cathode lacking the catalyst layer. At high cathode/anode volume ratios (2 and 3), maximum power density exceeded 40 W m⁻³. At the same time, current density and specific substrate utilization rate increased by 58%, reaching 31.9 A m⁻³ and 18.8 g COD m⁻³ h⁻¹, respectively (normalized to anode volume). Besides increasing the power and the rate of biodegradation, the investigated catalyst decreased the internal resistance from the 450-600 Ω range to 350-370 Ω. Although a Pt/C catalyst proved to be more efficient, a considerable decrease in material costs might be achieved by substituting it with nitrogen-doped carbon aerogel in MFCs; such a cathode still displays an enhanced catalytic effect.

  16. Developing a clinical utility framework to evaluate prediction models in radiogenomics

    NASA Astrophysics Data System (ADS)

    Wu, Yirong; Liu, Jie; Munoz del Rio, Alejandro; Page, David C.; Alagoz, Oguzhan; Peissig, Peggy; Onitilo, Adedayo A.; Burnside, Elizabeth S.

    2015-03-01

    Combining imaging and genetic information to predict disease presence and behavior is being codified into an emerging discipline called "radiogenomics." Optimal evaluation methodologies for radiogenomics techniques have not been established. We aim to develop a clinical decision framework based on utility analysis to assess prediction models for breast cancer. Our data comes from a retrospective case-control study, collecting Gail model risk factors, genetic variants (single nucleotide polymorphisms-SNPs), and mammographic features in Breast Imaging Reporting and Data System (BI-RADS) lexicon. We first constructed three logistic regression models built on different sets of predictive features: (1) Gail, (2) Gail+SNP, and (3) Gail+SNP+BI-RADS. Then, we generated ROC curves for three models. After we assigned utility values for each category of findings (true negative, false positive, false negative and true positive), we pursued optimal operating points on ROC curves to achieve maximum expected utility (MEU) of breast cancer diagnosis. We used McNemar's test to compare the predictive performance of the three models. We found that SNPs and BI-RADS features augmented the baseline Gail model in terms of the area under ROC curve (AUC) and MEU. SNPs improved sensitivity of the Gail model (0.276 vs. 0.147) and reduced specificity (0.855 vs. 0.912). When additional mammographic features were added, sensitivity increased to 0.457 and specificity to 0.872. SNPs and mammographic features played a significant role in breast cancer risk estimation (p-value < 0.001). Our decision framework comprising utility analysis and McNemar's test provides a novel framework to evaluate prediction models in the realm of radiogenomics.
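
    The MEU operating-point selection described above amounts to scanning ROC points and weighting the four outcome utilities by prevalence, sensitivity, and specificity. A hedged sketch: the utility values and prevalence below are illustrative, not the study's, though the three (sensitivity, specificity) pairs are taken from the abstract:

```python
def expected_utility(sens, spec, prevalence, u_tp, u_fn, u_fp, u_tn):
    """Expected utility of operating a test at (sens, spec): each outcome's
    utility weighted by its probability of occurrence."""
    return (prevalence * (sens * u_tp + (1 - sens) * u_fn)
            + (1 - prevalence) * ((1 - spec) * u_fp + spec * u_tn))

def meu_operating_point(roc_points, prevalence, utilities):
    """Pick the (sens, spec) point with maximum expected utility."""
    return max(roc_points,
               key=lambda p: expected_utility(p[0], p[1], prevalence, *utilities))

# (sensitivity, specificity) of the Gail, Gail+SNP, Gail+SNP+BI-RADS models:
roc = [(0.147, 0.912), (0.276, 0.855), (0.457, 0.872)]
best = meu_operating_point(roc, prevalence=0.1,
                           utilities=(1.0, -2.0, -0.1, 0.0))  # (TP, FN, FP, TN)
```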

  17. Evaluating the quality of Internet health resources in pediatric urology.

    PubMed

    Fast, Angela M; Deibert, Christopher M; Hruby, Gregory W; Glassberg, Kenneth I

    2013-04-01

    Many patients and their parents utilize the Internet for health-related information, but quality is largely uncontrolled and unregulated. The Health on the Net Foundation Code (HONcode) and DISCERN Plus were used to evaluate the pediatric urological search terms 'circumcision,' 'vesicoureteral reflux' and 'posterior urethral valves'. A google.com search was performed to identify the top 20 websites for each term. The HONcode toolbar was utilized to determine whether each website was HONcode accredited and report the overall frequency of accreditation for each term. The DISCERN Plus instrument was used to score each website in accordance with the DISCERN Handbook. High and low scoring criteria were then compared. A total of 60 websites were identified. For the search terms 'circumcision', 'posterior urethral valves' and 'vesicoureteral reflux', 25-30% of the websites were HONcode certified. Out of the maximum score of 80, the average DISCERN Plus score was 60 (SD = 12, range 38-78), 40 (SD = 12, range 22-69) and 45 (SD = 19, range 16-78), respectively. The lowest scoring DISCERN criteria included: 'Does it describe how the treatment choices affect overall quality of life?', 'Does it describe the risks of each treatment?' and 'Does it provide details of additional sources of support and information?' (1.35, 1.83 and 1.95 out of 5, respectively). These findings demonstrate the poor quality of information that patients and their parents may use in decision-making and treatment choices. The two lowest scoring DISCERN Plus criteria involved education on quality of life issues and risks of treatment. Physicians should know how to best use these tools to help guide patients and their parents to websites with valid information. Copyright © 2012 Journal of Pediatric Urology Company. Published by Elsevier Ltd. All rights reserved.

  18. When patients have to pay a share of drug costs: effects on frequency of physician visits, hospital admissions and filling of prescriptions.

    PubMed

    Anis, Aslam H; Guh, Daphne P; Lacaille, Diane; Marra, Carlo A; Rashidi, Amir A; Li, Xin; Esdaile, John M

    2005-11-22

    Previous research has shown that patient cost-sharing leads to a reduction in overall health resource utilization. However, in Canada, where health care is provided free of charge except for prescription drugs, the converse may be true. We investigated the effect of prescription drug cost-sharing on overall health care utilization among elderly patients with rheumatoid arthritis. Elderly patients (> or = 65 years) were selected from a population-based cohort with rheumatoid arthritis. Those who had paid the maximum amount of dispensing fees (200 dollars) for the calendar year (from 1997 to 2000) were included in the analysis for that year. We defined the period during which the annual maximum co-payment had not been reached as the "cost-sharing period" and the one beyond which the annual maximum co-payment had been reached as the "free period." We compared health services utilization patterns between these periods during the 4 study years, including the number of hospital admissions, the number of physician visits, the number of prescriptions filled and the number of prescriptions per physician visit. Overall, 2968 elderly patients reached the annual maximum cost-sharing amount at least once during the study periods. Across the 4 years, there were 0.38 more physician visits per month (p < 0.001), 0.50 fewer prescriptions filled per month (p = 0.001) and 0.52 fewer prescriptions filled per physician visit (p < 0.001) during the cost-sharing period than during the free period. Among patients who were admitted to the hospital at least once, there were 0.013 more admissions per month during the cost-sharing period than during the free period (p = 0.03). In a predominantly publicly funded health care system, the implementation of cost-containment policies such as prescription drug cost-sharing may have the unintended effect of increasing overall health utilization among elderly patients with rheumatoid arthritis.

  19. An ethnobotanical survey of indigenous medicinal plants in Hafizabad district, Punjab-Pakistan.

    PubMed

    Umair, Muhammad; Altaf, Muhammad; Abbasi, Arshad Mehmood

    2017-01-01

    The present paper offers considerable information on traditional uses of medicinal plants by the inhabitants of Hafizabad district, Punjab, Pakistan. This is the first quantitative ethnobotanical study from the area to assess the popularity level of medicinal plant species using relative popularity level (RPL) and rank order priority (ROP) indices. Ethnobotanical data were collected by interviewing 166 local informants and 35 traditional health practitioners (THPs) from different localities of Hafizabad district. Demographic features of informants; life form, part used, methods of preparation, modes of application, and ethnomedicinal uses were documented. Ethnobotanical data were analyzed using quantitative tools, i.e. relative frequency of citation (RFC), use value (UV), informant consensus factor (ICF), fidelity level (FL), and the RPL and ROP indices. A total of 85 species belonging to 71 genera and 34 families were documented along with ethnomedicinal uses. Solanum surattense, Withania somnifera, Cyperus rotundus, Solanum nigrum and Melia azedarach were the most utilized medicinal plant species, with the highest use values. The reported ailments were classified into 11 disease categories based on ICF values, and the highest number of plant species was reported to treat dermatological and gastrointestinal disorders. Withania somnifera and Ranunculus sceleratus, with maximum FL (100%), were used against gastrointestinal and urinary disorders, respectively. The RPL and ROP values were calculated to recognize the folk medicinal plant wealth; six out of 32 plant species (19%) were found popular, based on citation by more than half of the maximum number of informants (viz. 26). Consequently, the ROP value for these species was more than 75. The comparative assessment with reported literature revealed 15% resemblance and 6% variation relative to previous data; however, 79% of the uses of the reported species were recorded for the first time.
The diversity of medicinal plant species and the associated traditional knowledge are significant in the primary health care system. Medicinal plant species with high RPL values should be screened in comprehensive phytochemical and pharmacological studies. This could be useful in novel drug discovery and to validate the ethnomedicinal knowledge.

  1. Molecular simulation of CO chemisorption on Co(0001) in presence of supercritical fluid solvent: A potential of mean force study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Asiaee, Alireza; Benjamin, Kenneth M., E-mail: kenneth.benjamin@sdsmt.edu

    2016-08-28

    For several decades, heterogeneous catalytic processes have been improved through utilizing supercritical fluids (SCFs) as solvents. While numerous experimental studies have been established across a range of chemistries, such as oxidation, pyrolysis, amination, and Fischer-Tropsch synthesis, there is still little fundamental, molecular-level information regarding the role of the SCF in elementary heterogeneous catalytic steps. In this study, the influence of hexane solvent on the adsorption of carbon monoxide on Co(0001), the first step in the reaction mechanism of many processes involving syngas conversion, is probed. Simulations are performed at various bulk hexane densities, ranging from ideal gas conditions (no SCF hexane) to various near- and super-critical hexane densities. For this purpose, both density functional theory and molecular dynamics simulations are employed to determine the adsorption energy and free energy change during CO chemisorption. Potential of mean force calculations, utilizing umbrella sampling and the weighted histogram analysis method, provide the first commentary on SCF solvent effects on the energetic aspects of the chemisorption process. Simulation results indicate an enhanced stability of CO adsorption on the catalyst surface in the presence of supercritical hexane within the reduced pressure range of 1.0-1.5 at a constant temperature of 523 K. Furthermore, it is shown that the maximum stability of CO in the adsorbed state as a function of supercritical hexane density at 523 K nearly coincides with the maximum isothermal compressibility of bulk hexane at this temperature.

  2. Comparison of candidate solar array maximum power utilization approaches. [for spacecraft propulsion

    NASA Technical Reports Server (NTRS)

    Costogue, E. N.; Lindena, S.

    1976-01-01

    A study was made of five potential approaches that can be utilized to detect the maximum power point of a solar array while sustaining operations at or near maximum power and without endangering stability or causing array voltage collapse. The approaches studied included: (1) dynamic impedance comparator, (2) reference array measurement, (3) onset of solar array voltage collapse detection, (4) parallel tracker, and (5) direct measurement. The study analyzed the feasibility and adaptability of these approaches to a future solar electric propulsion (SEP) mission, and, specifically, to a comet rendezvous mission. Such missions presented the most challenging requirements to a spacecraft power subsystem in terms of power management over large solar intensity ranges of 1.0 to 3.5 AU. The dynamic impedance approach was found to have the highest figure of merit, and the reference array approach followed closely behind. The results are applicable to terrestrial solar power systems as well as to other than SEP space missions.

  3. Sensitivity analysis of linear CROW gyroscopes and comparison to a single-resonator gyroscope

    NASA Astrophysics Data System (ADS)

    Zamani-Aghaie, Kiarash; Digonnet, Michel J. F.

    2013-03-01

    This study presents numerical simulations of the maximum sensitivity to absolute rotation of a number of coupled resonator optical waveguide (CROW) gyroscopes consisting of a linear array of coupled ring resonators. It examines in particular the impact on the maximum sensitivity of the number of rings, of the relative spatial orientation of the rings (folded and unfolded), of various sequences of coupling ratios between the rings and various sequences of ring dimensions, and of the number of input/output waveguides (one or two) used to inject and collect the light. In all configurations the sensitivity is maximized by proper selection of the coupling ratio(s) and phase bias, and compared to the maximum sensitivity of a resonant waveguide optical gyroscope (RWOG) utilizing a single ring-resonator waveguide with the same radius and loss as each ring in the CROW. Simulations show that although some configurations are more sensitive than others, in spite of numerous claims to the contrary made in the literature, in all configurations the maximum sensitivity is independent of the number of rings, and does not exceed the maximum sensitivity of an RWOG. There are no sensitivity benefits to utilizing any of these linear CROWs for absolute rotation sensing. For equal total footprint, an RWOG is √N times more sensitive, and it is easier to fabricate and stabilize.

  4. An Accurate Scalable Template-based Alignment Algorithm

    PubMed Central

    Gardner, David P.; Xu, Weijia; Miranker, Daniel P.; Ozer, Stuart; Cannone, Jamie J.; Gutell, Robin R.

    2013-01-01

    The rapid determination of nucleic acid sequences is increasing the number of sequences that are available. Inherent in a template or seed alignment is the culmination of the structural and functional constraints that select which mutations are viable during the evolution of the RNA. While we might not fully understand these structural and functional constraints, template-based alignment programs utilize the patterns of sequence conservation to encapsulate the characteristics of viable RNA sequences that are aligned properly. We have developed a program that utilizes the different dimensions of information in rCAD, a large RNA informatics resource, to establish a profile for each position in an alignment. The most significant dimensions include sequence identity and column composition in different phylogenetic taxa. We have compared our method with a maximum of eight alternative alignment methods on different sets of 16S and 23S rRNA sequences with sequence percent identities ranging from 50% to 100%. The results showed that CRWAlign outperformed the other alignment methods in both speed and accuracy. A web-based alignment server is available at http://www.rna.ccbb.utexas.edu/SAE/2F/CRWAlign. PMID:24772376

  5. Education and Library Services for Community Information Utilities.

    ERIC Educational Resources Information Center

    Farquhar, John A.

    The concept of "computer utility"--the provision of computing and information service by a utility in the form of a national network to which any person desiring information could gain access--has been gaining interest among the public and among the technical community. This report on planning community information utilities discusses the…

  6. A preliminary study applying decision analysis to the treatment of caries in primary teeth.

    PubMed

    Tamošiūnas, Vytautas; Kay, Elizabeth; Craven, Rebecca

    2013-01-01

    To determine an optimal treatment strategy for carious deciduous teeth. Manchester Dental Hospital. Decision analysis. The likelihoods of each of the sequelae of caries in deciduous teeth were determined from the literature. The utility of the outcomes from non-treatment and treatment was then measured in 100 parents of children with caries, using a visual analogue scale. Decision analysis was performed which weighted the value of each potential outcome by the probability of its occurrence. A decision tree "fold-back" and sensitivity analysis then determined which treatment strategies, under which circumstances, offered the maximum expected utilities. The decision to leave a carious deciduous tooth unrestored attracted a maximum utility of 76.65, and the overall expected utility for the decision "restore" was 73.27. The decisions to restore or not to restore carious deciduous teeth are therefore of almost equal value. The decision is, however, highly sensitive to the utility value assigned by the patient to the advent of pain. There is no clear advantage to be gained by restoring deciduous teeth if patients' evaluations of outcomes are taken into account. Avoidance of pain and avoidance of procedures which are viewed as unpleasant by parents should be key determinants of clinical decision making about carious deciduous teeth.

  7. The decisive future of inflation

    NASA Astrophysics Data System (ADS)

    Hardwick, Robert J.; Vennin, Vincent; Wands, David

    2018-05-01

    How much more will we learn about single-field inflationary models in the future? We address this question in the context of Bayesian design and information theory. We develop a novel method to compute the expected utility of deciding between models and apply it to a set of futuristic measurements. This necessarily requires one to evaluate the Bayesian evidence many thousands of times over, which is numerically challenging. We show how this can be done using a number of simplifying assumptions and discuss their validity. We also modify the form of the expected utility, as previously introduced in the literature in different contexts, in order to partition each possible future into either the rejection of models at the level of the maximum likelihood or the decision between models using Bayesian model comparison. We then quantify the ability of future experiments to constrain the reheating temperature and the scalar running. Our approach allows us to discuss possible strategies for maximising information from future cosmological surveys. In particular, our conclusions suggest that, in the context of inflationary model selection, a decrease in the measurement uncertainty of the scalar spectral index would be more decisive than a decrease in the uncertainty in the tensor-to-scalar ratio. We have incorporated our approach into a publicly available python class, foxi, that can be readily applied to any survey optimisation problem.

  8. Representation of Probability Density Functions from Orbit Determination using the Particle Filter

    NASA Technical Reports Server (NTRS)

    Mashiku, Alinda K.; Garrison, James; Carpenter, J. Russell

    2012-01-01

    Statistical orbit determination enables us to obtain estimates of the state and the statistical information of its region of uncertainty. In order to obtain an accurate representation of the probability density function (PDF) that incorporates higher order statistical information, we propose the use of nonlinear estimation methods such as the Particle Filter. The Particle Filter (PF) is capable of providing a PDF representation of the state estimates whose accuracy is dependent on the number of particles or samples used. For this method to be applicable to real case scenarios, we need a way of accurately representing the PDF in a compressed manner with little information loss. Hence we propose using Independent Component Analysis (ICA) as a non-Gaussian dimensional reduction method that is capable of maintaining the higher order statistical information obtained using the PF. Methods such as Principal Component Analysis (PCA) are based on utilizing up to second order statistics, and hence will not suffice in maintaining maximum information content. Both the PCA and the ICA are applied to two scenarios, one involving a highly eccentric orbit with a lower a priori uncertainty covariance and one involving a less eccentric orbit with a higher a priori uncertainty covariance, to illustrate the capability of the ICA in relation to the PCA.

  9. How much a quantum measurement is informative?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Arno, Michele; ICFO-Institut de Ciencies Fotoniques, E-08860 Castelldefels, Barcelona; Quit Group, Dipartimento di Fisica, via Bassi 6, I-27100 Pavia

    2014-12-04

    The informational power of a quantum measurement is the maximum amount of classical information that the measurement can extract from any ensemble of quantum states. We discuss its main properties. Informational power is an additive quantity, being equivalent to the classical capacity of a quantum-classical channel. The informational power of a quantum measurement is the maximum of the accessible information of a quantum ensemble that depends on the measurement. We present some examples where the symmetry of the measurement allows its informational power to be derived analytically.

  10. Text grouping in patent analysis using adaptive K-means clustering algorithm

    NASA Astrophysics Data System (ADS)

    Shanie, Tiara; Suprijadi, Jadi; Zulhanif

    2017-03-01

    Patents are a form of intellectual property. Patent analysis is one requirement for understanding the development of technology in each country and worldwide. This study uses patent documents about green tea retrieved from the Espacenet server. Patent documents related to tea technology are widespread, which makes information retrieval (IR) difficult for users. It is therefore necessary to categorize the documents into groups according to the related terms they contain. This study applies statistical text mining to patent title data in two phases: a data preparation stage, which uses text mining methods, and a data analysis stage, which uses statistics. The statistical analysis in this study uses a cluster analysis algorithm, the adaptive K-means clustering algorithm. Results from this study show that, based on the maximum silhouette value, the documents form 87 clusters associated with fifteen terms that can be utilized in information retrieval.
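    The abstract does not spell out the clustering procedure; as a rough illustration of silhouette-guided selection of the number of clusters, here is a minimal sketch using plain k-means on toy 2-D points standing in for document vectors (all data and parameters are invented, and the adaptive algorithm in the paper is more elaborate than this):

```python
import math
import random

def dist2(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50):
    # Plain Lloyd's algorithm with deterministic farthest-first initialization.
    centers = [points[0]]
    while len(centers) < k:
        centers.append(max(points, key=lambda p: min(dist2(p, c) for c in centers)))
    labels = [0] * len(points)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: dist2(p, centers[j])) for p in points]
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                centers[j] = tuple(sum(c) / len(members) for c in zip(*members))
    return labels

def silhouette(points, labels):
    # Mean silhouette coefficient: (b - a) / max(a, b) averaged over points.
    score = 0.0
    for i, p in enumerate(points):
        same = [q for q, l in zip(points, labels) if l == labels[i]]
        a = sum(math.sqrt(dist2(p, q)) for q in same) / max(len(same) - 1, 1)
        b = min(
            sum(math.sqrt(dist2(p, q)) for q in grp) / len(grp)
            for lab in set(labels) if lab != labels[i]
            for grp in [[q for q, l in zip(points, labels) if l == lab]]
        )
        score += (b - a) / max(a, b)
    return score / len(points)

# Toy 2-D "document vectors": three well-separated planted clusters.
rng = random.Random(1)
points = [(rng.gauss(cx, 0.2), rng.gauss(cy, 0.2))
          for cx, cy in [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)] for _ in range(30)]

scores = {k: silhouette(points, kmeans(points, k)) for k in range(2, 7)}
best_k = max(scores, key=scores.get)
print(best_k)  # the silhouette maximum recovers the 3 planted clusters
```

    The same selection rule, scored over the candidate cluster counts, is what "based on the maximum value Silhouette" refers to in the abstract.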

  11. DEM interpolation weight calculation modulus based on maximum entropy

    NASA Astrophysics Data System (ADS)

    Chen, Tian-wei; Yang, Xia

    2015-12-01

    Traditional interpolation of gridded DEMs can produce negative weights. In this article, the principle of maximum entropy is utilized to analyze the model system on which the spatial weight modulus depends. The negative-weight problem of DEM interpolation is addressed by building a maximum entropy model; by adding nonnegativity and first- and second-order moment constraints, the negative-weight problem is solved. The correctness and accuracy of the method were validated with a genetic algorithm implemented in a MATLAB program. The method is compared with Yang Chizhong's interpolation method and with quadratic programming. The comparison shows that the maximum entropy weights fit the spatial relations in both magnitude and scaling, and that their accuracy is superior to the latter two methods.
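    As a toy illustration of why a maximum entropy formulation cannot produce negative weights, the sketch below solves the entropy maximization under only the normalization and first-moment constraints (the paper also imposes a second-order moment constraint, which this sketch omits). The solution then has the Gibbs form w_i ∝ exp(λ v_i), which is strictly positive, and the multiplier λ can be found by bisection because the constrained mean is monotone in λ. All numbers are invented:

```python
import math

def maxent_weights(values, target_mean, tol=1e-12):
    """Entropy-maximizing weights with w_i >= 0, sum(w) = 1, sum(w*v) = target."""
    def weights(lam):
        m = max(lam * v for v in values)            # stabilize the exponentials
        ws = [math.exp(lam * v - m) for v in values]
        z = sum(ws)
        return [w / z for w in ws]

    lo, hi = -500.0, 500.0                          # bracket for the multiplier
    for _ in range(200):                            # bisection: mean is monotone in lam
        mid = 0.5 * (lo + hi)
        mean = sum(w * v for w, v in zip(weights(mid), values))
        if mean < target_mean:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return weights(0.5 * (lo + hi))

# Elevations at four neighboring grid nodes; ask for weights whose weighted
# mean matches a first-moment constraint of 12.0.
v = [10.0, 11.0, 13.0, 15.0]
w = maxent_weights(v, 12.0)
print(all(x >= 0 for x in w))  # True: no negative weights, by construction
```

    A least-squares or exact-interpolation solve of the same constraints can go negative; the exponential form rules that out, which is the point the abstract makes.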

  12. Reduction in maximum time uncertainty of paired time signals

    DOEpatents

    Theodosiou, G.E.; Dawson, J.W.

    1983-10-04

    Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800. 6 figs.

  13. Tailored composite wings with elastically produced chordwise camber

    NASA Technical Reports Server (NTRS)

    Rehfield, Lawrence W.; Chang, Stephen; Zischka, Peter J.; Pickings, Richard D.; Holl, Michael W.

    1991-01-01

    Four structural concepts were created which produce chordwise camber deformation that results in enhanced lift. A wing box can be tailored to utilize each of these with composites. In attempting to optimize the aerodynamic benefits, researchers found that there are two optimum designs that are of interest. There is a weight optimum which corresponds to the maximum lift per unit structural weight. There is also a lift optimum that corresponds to maximum absolute lift. Experience indicates that a large weight penalty accompanies the transition from weight to lift optimum designs. New structural models, the basic deformation mechanisms that are utilized, and typical analytical results are presented. It appears that lift enhancements of sufficient magnitude can be produced to render this type of wing tailoring of practical interest.

  14. Reduction in maximum time uncertainty of paired time signals

    DOEpatents

    Theodosiou, George E.; Dawson, John W.

    1983-01-01

    Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.

  15. Optimizing preoperative blood ordering with data acquired from an anesthesia information management system.

    PubMed

    Frank, Steven M; Rothschild, James A; Masear, Courtney G; Rivers, Richard J; Merritt, William T; Savage, Will J; Ness, Paul M

    2013-06-01

    The maximum surgical blood order schedule (MSBOS) is used to determine preoperative blood orders for specific surgical procedures. Because the list was developed in the late 1970s, many new surgical procedures have been introduced and others improved upon, making the original MSBOS obsolete. The authors describe methods to create an updated, institution-specific MSBOS to guide preoperative blood ordering. Blood utilization data for 53,526 patients undergoing 1,632 different surgical procedures were gathered from an anesthesia information management system. A novel algorithm based on previously defined criteria was used to create an MSBOS for each surgical specialty. The economic implications were calculated based on the number of blood orders placed, but not indicated, according to the MSBOS. Among 27,825 surgical cases that did not require preoperative blood orders as determined by the MSBOS, 9,099 (32.7%) had a type and screen, and 2,643 (9.5%) had a crossmatch ordered. Of 4,644 cases determined to require only a type and screen, 1,509 (32.5%) had a type and crossmatch ordered. By using the MSBOS to eliminate unnecessary blood orders, the authors calculated a potential reduction in hospital charges and actual costs of $211,448 and $43,135 per year, respectively, or $8.89 and $1.81 per surgical patient, respectively. An institution-specific MSBOS can be created, using blood utilization data extracted from an anesthesia information management system along with our proposed algorithm. Using these methods to optimize the process of preoperative blood ordering can potentially improve operating room efficiency, increase patient safety, and decrease costs.
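    The categorization rule itself is not reproduced in the abstract; the following sketch shows how such an algorithm might map per-procedure utilization statistics extracted from an information management system to MSBOS categories. The thresholds, procedure names, and utilization numbers below are all assumptions for illustration, not the authors' published criteria:

```python
def msbos_category(transfusion_rate, mean_units, rate_cutoff=0.05, units_cutoff=0.5):
    # Illustrative decision rule (thresholds are assumptions): procedures that
    # rarely use blood need no preoperative sample, moderately transfused ones
    # need a type and screen, and routinely transfused ones a type and crossmatch.
    if transfusion_rate < rate_cutoff:
        return "no sample"
    if mean_units < units_cutoff:
        return "type and screen"
    return "type and crossmatch"

# Hypothetical per-procedure summaries: (fraction of cases transfused,
# mean units transfused per case), as might be pulled from an AIMS database.
procedures = {
    "laparoscopic appendectomy": (0.01, 0.02),
    "total hip replacement": (0.12, 0.3),
    "liver transplant": (0.95, 6.4),
}
schedule = {name: msbos_category(*stats) for name, stats in procedures.items()}
print(schedule)
```

    Applied over every procedure code in the utilization database, a rule of this shape yields an institution-specific schedule of the kind the abstract describes.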

  16. Optimal tuning of a confined Brownian information engine.

    PubMed

    Park, Jong-Min; Lee, Jae Sung; Noh, Jae Dong

    2016-03-01

    A Brownian information engine is a device extracting mechanical work from a single heat bath by exploiting the information on the state of a Brownian particle immersed in the bath. As for engines, it is important to find the optimal operating condition that yields the maximum extracted work or power. The optimal condition for a Brownian information engine with a finite cycle time τ has been rarely studied because of the difficulty in finding the nonequilibrium steady state. In this study, we introduce a model for the Brownian information engine and develop an analytic formalism for its steady-state distribution for any τ. We find that the extracted work per engine cycle is maximum when τ approaches infinity, while the power is maximum when τ approaches zero.

  17. Expected versus Observed Information in SEM with Incomplete Normal and Nonnormal Data

    ERIC Educational Resources Information Center

    Savalei, Victoria

    2010-01-01

    Maximum likelihood is the most common estimation method in structural equation modeling. Standard errors for maximum likelihood estimates are obtained from the associated information matrix, which can be estimated from the sample using either expected or observed information. It is known that, with complete data, estimates based on observed or…

  18. Orientation-Controllable ZnO Nanorod Array Using Imprinting Method for Maximum Light Utilization in Dye-Sensitized Solar Cells.

    PubMed

    Jeong, Huisu; Song, Hui; Lee, Ryeri; Pak, Yusin; Kumaresan, Yogeenth; Lee, Heon; Jung, Gun Young

    2015-12-01

    We present a holey titanium dioxide (TiO2) film combined with a periodically aligned ZnO nanorod layer (ZNL) for maximum light utilization in dye-sensitized solar cells (DSCs). Both the holey TiO2 film and the ZNL were fabricated simultaneously by an imprint technique with a mold having a vertically aligned ZnO nanorod (NR) array, which was transferred to the TiO2 film after imprinting. The orientation of the transferred ZNL (laid, tilted, or standing ZnO NRs) depended on the pitch and height of the ZnO NRs of the mold. The photoanode composed of the holey TiO2 film with the ZNL utilized the sunlight synergistically owing to enhanced light scattering and absorption. The best power conversion efficiency of 8.5% was achieved with the DSC with the standing ZNL, which represents a 33% improvement over the reference cell with a planar TiO2 film.

  19. Recovery of plastic wastes from dumpsite as refuse-derived fuel and its utilization in small gasification system.

    PubMed

    Chiemchaisri, Chart; Charnnok, Boonya; Visvanathan, Chettiyappan

    2010-03-01

    An effort to utilize solid wastes at a dumpsite as refuse-derived fuel (RDF) was carried out. The produced RDF briquette was then utilized in a gasification system. The wastes were initially examined for their physical composition and chemical characteristics. They contained a high plastic content of 24.6-44.8%, mostly in the form of polyethylene plastic bags. The plastic wastes were purified by separating them from other components through manual separation and a trommel screen, after which their content increased to 82.9-89.7%. Subsequently, they were mixed with a binding agent (cassava root) and transformed into RDF briquettes. The maximum plastic content in the RDF briquette was limited to 55% to maintain physical strength and to keep the chlorine content below its maximum. The RDF briquette was tested in a down-draft gasifier. The produced gas contained an average energy content of 1.76 MJ/m(3), yielding a cold gas efficiency of 66%. The energy production cost of this RDF process was estimated at USD 0.05 per kWh.

  20. The use of information systems to transform utilities and regulatory commissions: The application of geographic information systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wirick, D.W.; Montgomery, G.E.; Wagman, D.C.

    1995-09-01

    One technology that can help utilities remain financially viable in competitive markets and help utilities and regulators better serve the public is information technology. Because geography is an important part of an electric, natural gas, telecommunications, or water utility, computer-based Geographic Information Systems (GIS) and related Automated Mapping/Facilities Management systems are emerging as core technologies for managing an ever-expanding variety of formerly manual or paper-based tasks. This report focuses on GIS as an example of the types of information systems that can be used by utilities and regulatory commissions. Chapter 2 provides general information about information systems and the effects of information on organizations; Chapter 3 explores the conversion of an organization to an information-based one; Chapters 4 and 5 set out GIS as an example of the use of information technologies to transform the operations of utilities and commissions; Chapter 6 describes the use of GIS and other information systems for organizational reengineering efforts; and Chapter 7 examines the regulatory treatment of information systems.

  1. Minimum and Maximum Entropy Distributions for Binary Systems with Known Means and Pairwise Correlations

    DTIC Science & Technology

    2017-08-21

    distributions, and we discuss some applications for engineered and biological information transmission systems. Keywords: information theory; minimum...of its interpretation as a measure of the amount of information communicable by a neural system to groups of downstream neurons. Previous authors...of the maximum entropy approach. Our results also have relevance for engineered information transmission systems. We show that empirically measured

  2. How to Become a Mentalist: Reading Decisions from a Competitor’s Pupil Can Be Achieved without Training but Requires Instruction

    PubMed Central

    Naber, Marnix; Stoll, Josef; Einhäuser, Wolfgang; Carter, Olivia

    2013-01-01

    Pupil dilation is implicated as a marker of decision-making as well as of cognitive and emotional processes. Here we tested whether individuals can exploit another’s pupil to their advantage. We first recorded the eyes of 3 "opponents", while they were playing a modified version of the "rock-paper-scissors" childhood game. The recorded videos served as stimuli to a second set of participants. These "players" played rock-paper-scissors against the pre-recorded opponents in a variety of conditions. When players just observed the opponents’ eyes without specific instruction their probability of winning was at chance. When informed that the time of maximum pupil dilation was indicative of the opponents’ choice, however, players raised their winning probability significantly above chance. When just watching the reconstructed area of the pupil against a gray background, players achieved similar performance, showing that players indeed exploited the pupil, rather than other facial cues. Since maximum pupil dilation was correct about the opponents’ decision only in 60% of trials (chance 33%), we finally tested whether increasing this validity to 100% would allow spontaneous learning. Indeed, when players were given no information, but the pupil was informative about the opponent’s response in all trials, players performed significantly above chance on average and half (5/10) reached significance at an individual level. Together these results suggest that people can in principle use the pupil to detect cognitive decisions in another individual, but that most people have neither explicit knowledge of the pupil’s utility nor have they learnt to use it despite a lifetime of exposure. PMID:23991185

  3. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Zeng, L.

    2013-12-01

    In this study, an efficient full Bayesian approach is developed for optimal sampling-well location design and source parameter identification of groundwater contaminants. An information measure, the relative entropy, is employed to quantify the information gained from indirect concentration measurements when identifying unknown source parameters such as the release time, strength, and location. In this approach, the sampling location that gives the maximum relative entropy is selected as the optimal one. Once the sampling location is determined, a Bayesian approach based on Markov chain Monte Carlo (MCMC) is used to estimate the unknown source parameters. In both the design and the estimation, the contaminant transport equation must be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on an adaptive sparse grid is utilized to construct a surrogate for the contaminant transport model. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of the approach are demonstrated through numerical case studies. Compared with the traditional optimal design, which is based on a Gaussian linear assumption, the method developed in this study can cope with arbitrary nonlinearity. It can be used to assist in groundwater monitoring network design and in the identification of unknown contaminant sources. [Figure captions: contours of the expected information gain, with the optimal observing location at the maximum; posterior marginal probability densities of the unknown parameters for the designed location versus seven randomly chosen locations, with true values marked by vertical lines, showing that the parameters are estimated better with the designed location.]
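    As a hedged illustration of the design criterion, the sketch below estimates the expected relative entropy (the information gain averaged over prior-predictive measurements, i.e. the mutual information between parameter and measurement) for a set of candidate sampling locations in a toy one-parameter source problem, and picks the location with the maximum value. The forward model and all parameters are invented; the paper's transport equation and sparse-grid surrogate are not reproduced here:

```python
import math
import random

def normal_logpdf(y, mu, sigma):
    return -0.5 * ((y - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def expected_info_gain(g, sigma, s_grid, n_mc=4000, seed=0):
    # Monte Carlo estimate of the expected relative entropy between posterior
    # and (uniform, discrete) prior over the source strength s for one
    # candidate location: E_{s,y}[log p(y|s) - log p(y)].
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_mc):
        s = rng.choice(s_grid)                      # draw a strength from the prior
        y = s * g + rng.gauss(0.0, sigma)           # simulate the measurement
        log_lik = normal_logpdf(y, s * g, sigma)
        evidence = sum(math.exp(normal_logpdf(y, sp * g, sigma))
                       for sp in s_grid) / len(s_grid)
        total += log_lik - math.log(evidence)
    return total / n_mc

# Toy forward model: a candidate well's sensitivity to the source decays with
# its distance from the (known, for this toy) source position at x = 3.
s_grid = [i / 10 for i in range(1, 21)]             # discrete prior support for s
locations = {x: math.exp(-abs(x - 3.0)) for x in [0.0, 1.5, 3.0, 4.5, 6.0]}
eig = {x: expected_info_gain(g, 0.2, s_grid) for x, g in locations.items()}
best = max(eig, key=eig.get)
print(best)  # the most sensitive location carries the maximum expected gain
```

    In the paper the expensive likelihood evaluations inside this loop are what the sparse-grid surrogate replaces.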

  4. 14 CFR 23.1527 - Maximum operating altitude.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum operating altitude. 23.1527 Section... Information § 23.1527 Maximum operating altitude. (a) The maximum altitude up to which operation is allowed... established. (b) A maximum operating altitude limitation of not more than 25,000 feet must be established for...

  5. Spatiotemporal approaches to analyzing pedestrian fatalities: the case of Cali, Colombia.

    PubMed

    Fox, Lani; Serre, Marc L; Lippmann, Steven J; Rodríguez, Daniel A; Bangdiwala, Shrikant I; Gutiérrez, María Isabel; Escobar, Guido; Villaveces, Andrés

    2015-01-01

    Injuries among pedestrians are a major public health concern in Colombian cities such as Cali. This is one of the first studies in Latin America to apply Bayesian maximum entropy (BME) methods to visualize and produce fine-scale, highly accurate estimates of citywide pedestrian fatalities. The purpose of this study is to determine the BME method that best estimates pedestrian mortality rates and reduces statistical noise. We further utilized BME methods to identify and differentiate spatial patterns and persistent versus transient pedestrian mortality hotspots. In this multiyear study, geocoded pedestrian mortality data from the Cali Injury Surveillance System (2008 to 2010) and census data were utilized to accurately visualize and estimate pedestrian fatalities. We investigated the effects of temporal and spatial scales, addressing issues arising from the rarity of pedestrian fatality events using 3 BME methods (simple kriging, Poisson kriging, and uniform model Bayesian maximum entropy). To reduce statistical noise while retaining a fine spatial and temporal scale, data were aggregated over 9-month incidence periods and censal sectors. Based on a cross-validation of BME methods, Poisson kriging was selected as the best BME method. Finally, the spatiotemporal and urban built environment characteristics of Cali pedestrian mortality hotspots were linked to intervention measures provided in Mead et al.'s (2014) pedestrian mortality review. The BME space-time analysis in Cali resulted in maps displaying hotspots of high pedestrian fatalities extending over small areas with radii of 0.25 to 1.1 km and temporal durations of 1 month to 3 years. Mapping the spatiotemporal distribution of pedestrian mortality rates identified high-priority areas for prevention strategies. 
The BME results allow us to identify possible intervention strategies according to the persistence and built environment of the hotspot; for example, through enforcement or long-term environmental modifications. BME methods provide useful information on the time and place of injuries and can inform policy strategies by isolating priority areas for interventions, contributing to intervention evaluation, and helping to generate hypotheses and identify the preventative strategies that may be suitable to those areas (e.g., street-level methods: pedestrian crossings, enforcement interventions; or citywide approaches: limiting vehicle speeds). This specific information is highly relevant for public health interventions because it provides the ability to target precise locations.

  6. Direct-to-consumer advertising and its utility in health care decision making: a consumer perspective.

    PubMed

    Deshpande, Aparna; Menon, Ajit; Perri, Matthew; Zinkhan, George

    2004-01-01

    The growth in direct-to-consumer advertising (DTCA) over the past two decades has facilitated the communication of prescription drug information directly to consumers. Data from a 1999 national survey are employed to determine the factors influencing consumers' opinions of the utility of DTC ads for health care decision making. We also analyze whether consumers use DTC ad information in health care decision making and which consumers are the key drivers of such information utilization. The study results suggest that consumers have positive opinions of DTCA utility, varying across demographics and perceptions of certain advertisement features. Specifically, consumers value information about both risks and benefits, but the perception of risk information is more important in shaping opinions of ad utility than the perception of benefit information. Consumers still perceive, however, that the quality of benefit information in DTC ads is better than that of risk information. Opinions about ad utility significantly influence whether information from DTC ads is used in health care decision making.

  7. 24 CFR 241.630 - Maximum insurance against loss.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... AUTHORITIES SUPPLEMENTARY FINANCING FOR INSURED PROJECT MORTGAGES Eligibility Requirements-Supplemental Loans... Individual Utility Meters in Multifamily Projects Without a HUD-Insured or HUD-Held Mortgage Special...

  8. 14 CFR 23.1524 - Maximum passenger seating configuration.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Maximum passenger seating configuration. 23.1524 Section 23.1524 Aeronautics and Space FEDERAL AVIATION ADMINISTRATION, DEPARTMENT OF... Operating Limitations and Information § 23.1524 Maximum passenger seating configuration. The maximum...

  9. Performance of convolutionally encoded noncoherent MFSK modem in fading channels

    NASA Technical Reports Server (NTRS)

    Modestino, J. W.; Mui, S. Y.

    1976-01-01

    The performance of a convolutionally encoded noncoherent multiple-frequency-shift-keyed (MFSK) modem utilizing Viterbi maximum-likelihood decoding and operating on a fading channel is described. Both the lognormal and classical Rician fading channels are considered, under both slow and time-varying channel conditions. Primary interest is in the resulting bit error rate as a function of the ratio between the energy per transmitted information bit and the noise spectral density, parameterized by both the fading channel and code parameters. Fairly general upper bounds on bit error probability are provided and compared with simulation results in the two extremes of zero and infinite channel memory. The efficacy of simple block interleaving in combatting channel memory effects is thoroughly explored. Both quantized and unquantized receiver outputs are considered.
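    For orientation, a minimal hard-decision Viterbi decoder for a rate-1/2, constraint-length-3 convolutional code (generators 7 and 5, octal) is sketched below. This is only the decoding principle the abstract names; the paper itself evaluates noncoherent MFSK with soft (unquantized) metrics over fading channels, which this toy binary-channel example does not model:

```python
def conv_encode(bits):
    # Rate-1/2, constraint-length-3 code, generators 7 and 5 (octal).
    s1 = s0 = 0
    out = []
    for b in bits + [0, 0]:                  # two tail bits flush the register
        out += [b ^ s1 ^ s0, b ^ s0]
        s1, s0 = b, s1
    return out

def viterbi_decode(received, n_bits):
    # Hard-decision Viterbi decoding with Hamming branch metrics.
    INF = float("inf")
    metrics = {0: 0.0}                       # path metric per state (s1 s0 packed)
    paths = {0: []}
    for i in range(n_bits + 2):              # message plus the two tail bits
        r = received[2 * i:2 * i + 2]
        new_m, new_p = {}, {}
        for state, m in metrics.items():
            s1, s0 = state >> 1, state & 1
            for b in (0, 1):
                o = [b ^ s1 ^ s0, b ^ s0]    # branch output for input bit b
                cost = m + (o[0] != r[0]) + (o[1] != r[1])
                nxt = (b << 1) | s1          # shift b into the register
                if cost < new_m.get(nxt, INF):
                    new_m[nxt] = cost
                    new_p[nxt] = paths[state] + [b]
        metrics, paths = new_m, new_p
    return paths[0][:n_bits]                 # best path ending in the zero state

msg = [1, 0, 1, 1, 0, 0, 1, 0]
coded = conv_encode(msg)
noisy = coded[:]
noisy[3] ^= 1                                # flip one channel bit
noisy[10] ^= 1                               # and another, well separated
print(viterbi_decode(noisy, len(msg)) == msg)  # True: both errors corrected
```

    The free distance of this code is 5, so two well-separated channel errors are within its correcting capability, as the example demonstrates.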

  10. Maximum Mass-Particle Velocities in Kantor's Information Mechanics

    NASA Astrophysics Data System (ADS)

    Sverdlik, Daniel I.

    1989-02-01

    Kantor's information mechanics links phenomena previously regarded as not treatable by a single theory. It is used here to calculate the maximum velocities ν_m of single particles. For the electron, ν_m/c ≈ 1 − 1.253814 × 10^-77. The maximum ν_m corresponds to ν_m/c ≈ 1 − 1.097864 × 10^-122 for a single mass particle with a rest mass of 3.078496 × 10^-5 g. This is the fastest that matter can move. Either information mechanics or classical mechanics can be used to show that ν_m is less for heavier particles. That ν_m is less for lighter particles can be deduced from an information mechanics argument alone.

  11. Refractory metal alloys and composites for space nuclear power systems

    NASA Technical Reports Server (NTRS)

    Titran, Robert H.; Stephens, Joseph R.; Petrasek, Donald W.

    1988-01-01

    Space power requirements for future NASA and other U.S. missions will range from a few kilowatts to megawatts of electricity. Maximum efficiency is a key goal of any power system in order to minimize weight and size, so that the Space Shuttle need be used only a minimum number of times to put the power supply into orbit. Nuclear power has been identified as the primary power source to meet these high levels of electrical demand. One method to achieve maximum efficiency is to operate the power supply, energy conversion system, and related components at relatively high temperatures. For systems now in the planning stages, design temperatures range from 1300 K for the immediate future to as high as 1700 K for the advanced systems. NASA Lewis Research Center has undertaken a research program on advanced technology of refractory metal alloys and composites that will provide baseline information for space power systems in the 1990's and the 21st century. Special emphasis is focused on the refractory metal alloys of niobium and on the refractory metal composites which utilize tungsten alloy wires for reinforcement. Basic research on the creep and creep-rupture properties of wires, matrices, and composites is discussed.

  12. The utility of estimating net primary productivity over Alaska using baseline AVHRR data

    USGS Publications Warehouse

    Markon, C.J.; Peterson, Kim M.

    2002-01-01

    Net primary productivity (NPP) is a fundamental ecological variable that provides information about the health and status of vegetation communities. The Normalized Difference Vegetation Index (NDVI), derived from the Advanced Very High Resolution Radiometer (AVHRR), is increasingly being used to model or predict NPP, especially over large remote areas. In this article, seven seasonally based metrics calculated from a seven-year baseline NDVI dataset were used to model NPP over Alaska, USA. For each growing season, they included maximum, mean, and summed NDVI, total days, the product of total days and maximum NDVI, an integral estimate of NDVI, and a summed product of NDVI and solar radiation. Field (plot) derived NPP estimates were assigned to 18 land cover classes from an Alaskan statewide land cover database. Linear relationships between NPP and each NDVI metric were analysed at four scales: plot, 1-km, 10-km and 20-km pixels. Results show moderate to poor relationships between the metrics and NPP estimates for all data sets and scales. Use of NDVI for estimating NPP may be possible, but caution is required because of data seasonality, the scaling process used, and land surface heterogeneity.
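
    The seven seasonal metrics listed above can be sketched in code. This is an illustrative reconstruction, not the authors' exact procedure: the growing-season threshold, sampling interval `dt`, and metric names are assumptions.

```python
import numpy as np

def seasonal_ndvi_metrics(ndvi, solar, dt=1.0, threshold=0.1):
    """Sketch of seven growing-season NDVI metrics (definitions are
    illustrative). `ndvi` and `solar` are 1-D arrays sampled every `dt`
    days; the growing season is taken as the samples where NDVI exceeds
    `threshold`."""
    season = ndvi > threshold
    s = ndvi[season]
    return {
        "max_ndvi": s.max(),
        "mean_ndvi": s.mean(),
        "sum_ndvi": s.sum(),
        "total_days": season.sum() * dt,
        "days_times_max": season.sum() * dt * s.max(),
        # trapezoidal integral estimate of NDVI over the season
        "integrated_ndvi": dt * 0.5 * (s[:-1] + s[1:]).sum(),
        # summed product of NDVI and solar radiation
        "sum_ndvi_solar": (s * solar[season]).sum(),
    }
```

    Each metric would then be regressed against the plot-derived NPP values at the four spatial scales.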

  13. Crowd macro state detection using entropy model

    NASA Astrophysics Data System (ADS)

    Zhao, Ying; Yuan, Mengqi; Su, Guofeng; Chen, Tao

    2015-08-01

    In crowd security research, a primary concern is to identify the macro state of crowd behavior in order to prevent disasters and to supervise the crowd. In physics, entropy is used to describe the macro state of a self-organizing system; an entropy change indicates a change in the system's macro state. This paper provides a method to construct crowd behavior microstates, and the corresponding probability distribution, using the individuals' velocity information (magnitude and direction). An entropy model was then built to describe the crowd behavior macro state. Simulation experiments and video detection experiments were conducted. It was verified that in the disordered state the crowd behavior entropy is close to the theoretical maximum entropy, while in the ordered state the entropy is much lower than half of the theoretical maximum. A sudden change in the crowd's macro state leads to a change in entropy, and the proposed entropy model is more applicable than the order parameter model in crowd behavior detection. By recognizing such abrupt entropy changes, it is possible to detect the crowd behavior macro state automatically using cameras. The results provide data support for crowd emergency prevention and for manual emergency intervention.
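
    The entropy calculation described above can be illustrated in code. This is a minimal sketch, not the paper's exact microstate construction: the speed/direction binning scheme and bin counts are assumptions.

```python
import numpy as np

def crowd_entropy(vx, vy, n_speed=8, n_dir=8):
    """Build microstates from each individual's speed and direction
    (illustrative binning), then take the Shannon entropy of their
    distribution. Returns (entropy, theoretical maximum entropy)."""
    speed = np.hypot(vx, vy)
    direction = np.arctan2(vy, vx)
    hist, _, _ = np.histogram2d(
        speed, direction, bins=(n_speed, n_dir),
        range=[[0.0, max(speed.max(), 1e-9)], [-np.pi, np.pi]])
    p = hist.ravel() / hist.sum()
    p = p[p > 0]                       # 0 * log 0 taken as 0
    entropy = -(p * np.log(p)).sum()
    max_entropy = np.log(n_speed * n_dir)  # uniform distribution over bins
    return entropy, max_entropy
```

    In the ordered case (everyone moving identically) the entropy collapses to zero, while disordered motion pushes it toward the theoretical maximum, matching the qualitative behavior reported above.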

  14. Estimating the Richness of a Population When the Maximum Number of Classes Is Fixed: A Nonparametric Solution to an Archaeological Problem

    PubMed Central

    Eren, Metin I.; Chao, Anne; Hwang, Wen-Han; Colwell, Robert K.

    2012-01-01

    Background Estimating assemblage species or class richness from samples remains a challenging, but essential, goal. Though a variety of statistical tools for estimating species or class richness have been developed, they are all singly-bounded, assuming only a lower bound on the number of species or classes. Nevertheless, there are numerous situations, particularly in the cultural realm, where the maximum number of classes is fixed. For this reason, a new method is needed to estimate richness when both upper and lower bounds are known. Methodology/Principal Findings Here, we introduce a new method for estimating class richness: doubly-bounded confidence intervals (both lower and upper bounds are known). We specifically illustrate our new method using the Chao1 estimator, rarefaction, and extrapolation, although any estimator of asymptotic richness can be used in our method. Using a case study of Clovis stone tools from the North American Lower Great Lakes region, we demonstrate that singly-bounded richness estimators can yield confidence intervals whose upper bound estimates are larger than the possible maximum number of classes, while our new method provides estimates that make empirical sense. Conclusions/Significance Application of the new method for constructing doubly-bounded richness estimates of Clovis stone tools permitted conclusions to be drawn that were not otherwise possible with singly-bounded richness estimates, namely, that Lower Great Lakes Clovis Paleoindians utilized a settlement pattern that was probably more logistical in nature than residential. However, our new method is not limited to archaeological applications. It can be applied to any set of data for which there is a fixed maximum number of classes, whether that be site occupancy models, commercial products (e.g. athletic shoes), or census information (e.g. nationality, religion, age, race). PMID:22666316
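
    A minimal sketch of the idea, using the Chao1 estimator named above. Truncating the point estimate at the fixed maximum is a simplification for illustration; the paper's doubly-bounded confidence interval construction is more involved.

```python
from collections import Counter

def chao1_doubly_bounded(counts, max_classes):
    """Chao1 richness estimate, truncated at a known maximum number of
    classes. `counts` holds the abundance of each observed class."""
    counts = [c for c in counts if c > 0]
    s_obs = len(counts)
    freq = Counter(counts)
    f1, f2 = freq.get(1, 0), freq.get(2, 0)  # singletons and doubletons
    if f2 > 0:
        s_hat = s_obs + f1 * f1 / (2 * f2)
    else:  # bias-corrected form when no doubletons are observed
        s_hat = s_obs + f1 * (f1 - 1) / 2
    # a fixed class inventory means the estimate cannot exceed the maximum
    return min(s_hat, max_classes)
```

    With five observed classes at abundances [1, 1, 2, 3, 5], Chao1 gives 7.0; if the known maximum were 6, the doubly-bounded estimate is capped at 6.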

  15. Pneumatic strength assessment device: design and isometric measurement.

    PubMed

    Paulus, David C; Reiser, Raoul F; Troxell, Wade O

    2004-01-01

    In order to load a muscle optimally during resistance exercise, it should be heavily taxed throughout the entire range of motion for that exercise. However, traditional constant resistance squats only tax the lower-extremity muscles to their limits at the "sticking region," a critical joint configuration of the exercise cycle. Therefore, a linear motion (Smith) exercise machine was modified with pneumatics and appropriate computer control so that it is capable of adjusting force to control velocity within a repetition of the squat exercise or other exercise performed with the device. Prior to application of this device in a dynamic squat setting, the maximum voluntary isometric force (MVIF) produced over a spectrum of knee angles is needed. This would reveal the sticking region and the overall variation in strength capacity. Five incremental knee angles (90, 110, 130, 150, and 170 degrees, where 180 degrees defined full extension) were examined. After obtaining university-approved informed consent, 12 men and 12 women participated in the study. The knee angle was set, and the pneumatic cylinder was pressurized such that the subject could move the barbell slightly but no more than two centimeters. The peak pressure exerted over a five-second maximum effort interval was recorded at each knee angle in random order and then repeated; the average of both efforts was utilized for further analysis. The sticking region occurred consistently at a 90 degree knee angle; however, the maximum force produced varied between 110 degrees and 170 degrees, with the greatest frequency at 150 degrees for both men and women. The percent difference between the maximum and minimum MVIF was 46% for men and 57% for women.

  16. Evaluation of alkali treatment for biodegradation of corn cobs by Aspergillus niger.

    PubMed

    Singh, A; Abidi, A B; Agrawal, A K; Darmwal, N S

    1989-01-01

    The effect of NaOH pretreatment on the biodegradation of corn cobs for the production of cellulase and protein was studied using Aspergillus niger. Delignification of cobs with NaOH remarkably increased the production of cellulase and protein. Treatment of cobs with 2% NaOH was found to be the best with respect to their susceptibility to biodegradation, giving maximum production of cellulose 1,4-beta-cellobiosidase, cellulase, beta-glucosidase, soluble protein, and crude protein; this treatment also led to the highest protein recovery, maximum cellulose utilization, and maximum degradation of substrate.

  17. 78 FR 13914 - Submission for Review: Survivor Annuity Election for a Spouse, RI 20-63; Cover Letter Giving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-01

    ... 20-63; Cover Letter Giving Information About the Cost To Elect Less Than the Maximum Survivor Annuity, RI 20-116; Cover Letter Giving Information About the Cost To Elect the Maximum Survivor Annuity, RI... other Federal agencies the opportunity to comment on a revised information collection request (ICR 3206...

  18. 78 FR 42986 - Submission for Review: Survivor Annuity Election for a Spouse, RI 20-63; Cover Letter Giving...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-18

    .... This letter may be used to ask for more information. Analysis Agency: Retirement Operations, Retirement... 20-63; Cover Letter Giving Information About The Cost To Elect Less Than the Maximum Survivor Annuity, RI 20-116; Cover Letter Giving Information About The Cost To Elect the Maximum Survivor Annuity, RI...

  19. Expected benefits of federally-funded thermal energy storage research

    NASA Astrophysics Data System (ADS)

    Spanner, G. E.; Daellenbach, K. K.; Hughes, K. R.; Brown, D. R.; Drost, M. K.

    1992-09-01

    Pacific Northwest Laboratory (PNL) conducted this study for the Office of Advanced Utility Concepts of the US Department of Energy (DOE). The objective of this study was to develop a series of graphs that depict the long-term benefits of continuing DOE's thermal energy storage (TES) research program in four sectors: building heating, building cooling, utility power production, and transportation. The study was conducted in three steps. The first step was to assess the maximum possible benefits technically achievable in each sector. In some sectors, the maximum benefit was determined by a 'supply side' limitation, and in other sectors, the maximum benefit is determined by a 'demand side' limitation. The second step was to apply economic cost and diffusion models to estimate the benefits that are likely to be achieved by TES under two scenarios: (1) with continuing DOE funding of TES research; and (2) without continued funding. The models all cover the 20-year period from 1990 to 2010. The third step was to prepare graphs that show the maximum technical benefits achievable, the estimated benefits with TES research funding, and the estimated benefits in the absence of TES research funding. The benefits of federally-funded TES research are largely in four areas: displacement of primary energy, displacement of oil and natural gas, reduction in peak electric loads, and emissions reductions.

  20. Extending the maximum operation time of the MNSR reactor.

    PubMed

    Dawahra, S; Khattab, K; Saba, G

    2016-09-01

    An effective modification to extend the maximum operation time of the Miniature Neutron Source Reactor (MNSR) and thereby enhance the utilization of the reactor has been tested using the MCNP4C code. The modification consisted of manually inserting into each of the reactor's inner irradiation tubes a chain of three connected polyethylene containers filled with water; the total height of the chain was 11.5 cm. Replacement of the actual cadmium absorber with a B(10) absorber was needed as well. The rest of the core structure materials and dimensions remained unchanged. A 3-D neutronic model with the new modifications was developed to compare the neutronic parameters of the old and modified cores. The excess reactivities (ρex) of the old and modified cores were 3.954 and 6.241 mk, the maximum reactor operation times were 428 and 1025 min, and the safety reactivity factors were 1.654 and 1.595, respectively. A 139% increase in the maximum reactor operation time was thus obtained for the modified core. This increase enhances the utilization of the MNSR for long irradiations of unknown samples using the NAA technique and increases the amount of radioisotope production in the reactor.
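
    The reported 139% figure follows directly from the quoted operation times:

```python
# Consistency check of the operation times reported above (minutes).
old_time, new_time = 428, 1025
increase = (new_time - old_time) / old_time * 100
print(f"{increase:.0f}% increase in maximum operation time")  # 139%
```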

  1. 14 CFR 23.787 - Baggage and cargo compartments.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... AIRCRAFT AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Design and... critical load distributions at the appropriate maximum load factors corresponding to the flight and ground...

  2. 7 CFR 4284.1009 - Limitations on awards.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE GRANTS Agriculture Innovation Demonstration Centers § 4284.1009 Limitations on awards. The maximum grant award for an agriculture innovation center shall be...

  3. 7 CFR 4284.1009 - Limitations on awards.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE GRANTS Agriculture Innovation Demonstration Centers § 4284.1009 Limitations on awards. The maximum grant award for an agriculture innovation center shall be...

  4. 7 CFR 4284.1009 - Limitations on awards.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE GRANTS Agriculture Innovation Demonstration Centers § 4284.1009 Limitations on awards. The maximum grant award for an agriculture innovation center shall be...

  5. 7 CFR 4284.1009 - Limitations on awards.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE GRANTS Agriculture Innovation Demonstration Centers § 4284.1009 Limitations on awards. The maximum grant award for an agriculture innovation center shall be...

  6. 7 CFR 4284.1009 - Limitations on awards.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE GRANTS Agriculture Innovation Demonstration Centers § 4284.1009 Limitations on awards. The maximum grant award for an agriculture innovation center shall be...

  7. 7 CFR 4290.840 - Maximum term of Financing.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE RURAL BUSINESS INVESTMENT COMPANY ("RBIC") PROGRAM Financing of Enterprises by RBICs Structuring RBIC Financing of Eligible Enterprises-Types of Financings...

  8. Synchrotron-based coherent scatter x-ray projection imaging using an array of monoenergetic pencil beams.

    PubMed

    Landheer, Karl; Johns, Paul C

    2012-09-01

    Traditional projection x-ray imaging utilizes only the information from the primary photons. Low-angle coherent scatter images can be acquired simultaneously with the primary images and provide additional information. In medical applications, scatter imaging can improve x-ray contrast or reduce dose by using information that is currently discarded in radiological images to augment the transmitted radiation information. Other applications include non-destructive testing and security. A system at the Canadian Light Source synchrotron was configured that utilizes multiple pencil beams (up to five) to create both primary and coherent scatter projection images simultaneously. The sample was scanned through the beams using an automated step-and-shoot setup. Pixels were acquired in a hexagonal lattice to maximize packing efficiency; the typical pitch was between 1.0 and 1.6 mm. A Maximum Likelihood-Expectation Maximization-based iterative method was used to disentangle the overlapping information from the flat panel digital x-ray detector. The pixel value of the coherent scatter image was generated by integrating the radial profile (scatter intensity versus scattering angle) over an angular range; different angular ranges maximize the contrast between different materials of interest. A five-beam primary and scatter image set (which had a pixel beam time of 990 ms and a total scan time of 56 min) of a porcine phantom is included, along with a single-beam coherent scatter image of the same phantom for comparison. The muscle-fat contrast was 0.10 ± 0.01 and 1.16 ± 0.03 for the five-beam primary and scatter images, respectively. The air kerma was measured free in air using aluminum oxide optically stimulated luminescent dosimeters. The total area-averaged air kerma for the scan was measured to be 7.2 ± 0.4 cGy, although due to difficulties in small-beam dosimetry this number could be inaccurate.
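
    The scatter-pixel construction described above (integrating the radial profile over a chosen angular window) can be sketched as follows; the function name and window parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def scatter_pixel_value(theta_deg, intensity, theta_lo, theta_hi):
    """Form one coherent-scatter pixel value by integrating the measured
    radial profile (intensity vs. scattering angle) over the angular
    window [theta_lo, theta_hi], which is tuned to maximize contrast
    between materials of interest."""
    mask = (theta_deg >= theta_lo) & (theta_deg <= theta_hi)
    t, i = theta_deg[mask], intensity[mask]
    # trapezoidal integration over the selected angular window
    return 0.5 * ((i[:-1] + i[1:]) * np.diff(t)).sum()
```

    Repeating this per beam position yields the scatter image, while the transmitted intensity at the same positions yields the primary image.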

  9. Convex Accelerated Maximum Entropy Reconstruction

    PubMed Central

    Worley, Bradley

    2016-01-01

    Maximum entropy (MaxEnt) spectral reconstruction methods provide a powerful framework for spectral estimation of nonuniformly sampled datasets. Many methods exist within this framework, usually defined based on the magnitude of a Lagrange multiplier in the MaxEnt objective function. An algorithm is presented here that utilizes accelerated first-order convex optimization techniques to rapidly and reliably reconstruct nonuniformly sampled NMR datasets using the principle of maximum entropy. This algorithm – called CAMERA for Convex Accelerated Maximum Entropy Reconstruction Algorithm – is a new approach to spectral reconstruction that exhibits fast, tunable convergence in both constant-aim and constant-lambda modes. A high-performance, open source NMR data processing tool is described that implements CAMERA, and brief comparisons to existing reconstruction methods are made on several example spectra. PMID:26894476
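
    A toy illustration of the general idea (accelerated first-order convex optimization of a MaxEnt objective) is sketched below. This is NOT the CAMERA algorithm: the objective, step size, and denoising setting are simplifying assumptions, with no FFT or nonuniform sampling operator.

```python
import numpy as np

def maxent_denoise(y, lam=0.1, iters=200, eps=1e-8):
    """Minimize F(x) = 0.5*||x - y||^2 - lam*S(x), with the Shannon-like
    entropy S(x) = -sum(x * log x), over x > 0, using a Nesterov-style
    accelerated gradient loop (a stand-in for the paper's accelerated
    first-order scheme)."""
    x = np.clip(y, eps, None)
    z, t = x.copy(), 1.0
    step = 0.5
    for _ in range(iters):
        grad = (z - y) + lam * (np.log(z) + 1.0)   # dF/dx
        x_new = np.clip(z - step * grad, eps, None)  # keep iterates positive
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = np.clip(x_new + ((t - 1.0) / t_new) * (x_new - x), eps, None)
        x, t = x_new, t_new
    return x
```

    The entropy term pulls the solution toward small positive values, so the output lies strictly between zero and the input, with the residual gradient driven near zero by the accelerated iterations.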

  10. Sub-200 ps CRT in monolithic scintillator PET detectors using digital SiPM arrays and maximum likelihood interaction time estimation.

    PubMed

    van Dam, Herman T; Borghi, Giacomo; Seifert, Stefan; Schaart, Dennis R

    2013-05-21

    Digital silicon photomultiplier (dSiPM) arrays have favorable characteristics for application in monolithic scintillator detectors for time-of-flight positron emission tomography (PET). To fully exploit these benefits, a maximum likelihood interaction time estimation (MLITE) method was developed to derive the time of interaction from the multiple time stamps obtained per scintillation event. MLITE was compared to several deterministic methods. Timing measurements were performed with monolithic scintillator detectors based on novel dSiPM arrays and LSO:Ce,0.2%Ca crystals of 16 × 16 × 10 mm(3), 16 × 16 × 20 mm(3), 24 × 24 × 10 mm(3), and 24 × 24 × 20 mm(3). The best coincidence resolving times (CRTs) for pairs of identical detectors were obtained with MLITE and measured 157 ps, 185 ps, 161 ps, and 184 ps full-width-at-half-maximum (FWHM), respectively. For comparison, a small reference detector, consisting of a 3 × 3 × 5 mm(3) LSO:Ce,0.2%Ca crystal coupled to a single pixel of a dSiPM array, was measured to have a CRT as low as 120 ps FWHM. The results of this work indicate that the influence of the optical transport of the scintillation photons on the timing performance of monolithic scintillator detectors can at least partially be corrected for by utilizing the information contained in the spatio-temporal distribution of the collection of time stamps registered per scintillation event.
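
    The flavor of deriving one interaction time from many time stamps can be sketched as follows. This is a hedged simplification, not the published MLITE method: modeling each stamp as the interaction time plus a known mean transport delay and Gaussian jitter is an assumption, under which the MLE reduces to a precision-weighted average.

```python
import numpy as np

def ml_interaction_time(stamps, mean_delay, sigma):
    """Maximum-likelihood interaction-time estimate under a simplified
    model: stamp_i = t0 + mean_delay_i + N(0, sigma_i^2). The Gaussian
    assumption makes the MLE a precision-weighted average of the
    delay-corrected stamps."""
    w = 1.0 / sigma**2                       # precision weights
    return np.sum(w * (stamps - mean_delay)) / np.sum(w)
```

    In the real detector, the per-pixel delay and jitter depend on the estimated interaction position, which is how the spatio-temporal distribution of time stamps corrects for optical transport.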

  11. Sub-200 ps CRT in monolithic scintillator PET detectors using digital SiPM arrays and maximum likelihood interaction time estimation

    NASA Astrophysics Data System (ADS)

    van Dam, Herman T.; Borghi, Giacomo; Seifert, Stefan; Schaart, Dennis R.

    2013-05-01

    Digital silicon photomultiplier (dSiPM) arrays have favorable characteristics for application in monolithic scintillator detectors for time-of-flight positron emission tomography (PET). To fully exploit these benefits, a maximum likelihood interaction time estimation (MLITE) method was developed to derive the time of interaction from the multiple time stamps obtained per scintillation event. MLITE was compared to several deterministic methods. Timing measurements were performed with monolithic scintillator detectors based on novel dSiPM arrays and LSO:Ce,0.2%Ca crystals of 16 × 16 × 10 mm3, 16 × 16 × 20 mm3, 24 × 24 × 10 mm3, and 24 × 24 × 20 mm3. The best coincidence resolving times (CRTs) for pairs of identical detectors were obtained with MLITE and measured 157 ps, 185 ps, 161 ps, and 184 ps full-width-at-half-maximum (FWHM), respectively. For comparison, a small reference detector, consisting of a 3 × 3 × 5 mm3 LSO:Ce,0.2%Ca crystal coupled to a single pixel of a dSiPM array, was measured to have a CRT as low as 120 ps FWHM. The results of this work indicate that the influence of the optical transport of the scintillation photons on the timing performance of monolithic scintillator detectors can at least partially be corrected for by utilizing the information contained in the spatio-temporal distribution of the collection of time stamps registered per scintillation event.

  12. Online Robot Dead Reckoning Localization Using Maximum Relative Entropy Optimization With Model Constraints

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Urniezius, Renaldas

    2011-03-14

    The principle of maximum relative entropy optimization was analyzed for dead-reckoning localization of a rigid body from observation data collected by two attached accelerometers. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise on each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency between time-series data. Dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing dependency between time-series data. Data from an autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead-reckoning localization.

  13. Reduction in maximum time uncertainty of paired time signals

    DOEpatents

    Theodosiou, G.E.; Dawson, J.W.

    1981-02-11

    Reduction in the maximum time uncertainty (t_max - t_min) of a series of paired time signals t_1 and t_2 varying between two input terminals and representative of a series of single events, where t_1 ≤ t_2 and t_1 + t_2 equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal-selecting means and one or more time delays to increase the minimum value (t_min) of the first signal t_1 closer to t_max and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.

  14. Highly Efficient Nondoped Green Organic Light-Emitting Diodes with Combination of High Photoluminescence and High Exciton Utilization.

    PubMed

    Wang, Chu; Li, Xianglong; Pan, Yuyu; Zhang, Shitong; Yao, Liang; Bai, Qing; Li, Weijun; Lu, Ping; Yang, Bing; Su, Shijian; Ma, Yuguang

    2016-02-10

    Photoluminescence (PL) efficiency and exciton utilization efficiency are two key parameters for harvesting high-efficiency electroluminescence (EL) in organic light-emitting diodes (OLEDs), but it is not easy to combine these two characteristics (high PL efficiency and high exciton utilization) in a single fluorescent material. In this work, an efficient combination was achieved through two concepts, the hybridized local and charge-transfer (CT) state (HLCT) and the "hot exciton", in which the former is responsible for high PL efficiency while the latter contributes to high exciton utilization. On the basis of a tiny chemical modification of TPA-BZP, a green-light donor-acceptor molecule, we designed and synthesized CzP-BZP with this efficient combination: a high PL efficiency of η(PL) = 75% in the solid state and a maximal exciton utilization efficiency of up to 48% (notably, the internal quantum efficiency of η(IQE) = 35% substantially exceeds the 25% spin statistics limit) in an OLED. The nondoped OLED of CzP-BZP exhibited excellent performance: green emission with a CIE coordinate of (0.34, 0.60), a maximum current efficiency of 23.99 cd A(-1), and a maximum external quantum efficiency (EQE, η(EQE)) of 6.95%. This combined HLCT state and "hot exciton" strategy should be a practical way to design next-generation, low-cost, high-efficiency fluorescent OLED materials.
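
    The quoted efficiencies are mutually consistent, as a quick check shows; the ~20% out-coupling fraction used below is a typical assumed value for planar OLEDs, not a number from the abstract.

```python
# Consistency check of the efficiency figures quoted above:
# internal quantum efficiency = PL efficiency x exciton utilization.
eta_pl, eta_exciton = 0.75, 0.48
eta_iqe = eta_pl * eta_exciton
print(f"IQE = {eta_iqe:.0%}")        # 36%, close to the reported ~35%
# With an assumed ~20% light out-coupling fraction, the external
# quantum efficiency lands near the reported 6.95%.
print(f"EQE = {eta_iqe * 0.20:.1%}")
```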

  15. Dextran Utilization During Its Synthesis by Weissella cibaria RBA12 Can Be Overcome by Fed-Batch Fermentation in a Bioreactor.

    PubMed

    Baruah, Rwivoo; Deka, Barsha; Kashyap, Niharika; Goyal, Arun

    2018-01-01

    Weissella cibaria RBA12 produced a maximum of 9 mg/ml dextran (with 90% efficiency) in shake flask culture under the optimized concentrations of medium components, viz. 2% (w/v) each of sucrose, yeast extract, and K2HPO4, after incubation at the optimized conditions of 20 °C and 180 rpm for 24 h. The optimized medium and conditions were used for scale-up of dextran production from Weissella cibaria RBA12 in a 2.5-l working volume under batch fermentation in a bioreactor, which yielded a maximum of 9.3 mg/ml dextran (with 93% efficiency) at 14 h. After 14 h, the dextran produced was utilized by the bacterium until 18 h, in its stationary phase, under sucrose-depleted conditions. Dextran utilization was further studied by fed-batch fermentation using a sucrose feed. Dextran production under fed-batch fermentation in the bioreactor gave 35.8 mg/ml after 32 h, with no decrease in dextran concentration as observed in the batch mode. This showed that utilization of dextran by Weissella cibaria RBA12 is initiated when there is sucrose depletion, and therefore the presence of sucrose can possibly overcome the dextran hydrolysis. This is the first report of utilization of dextran post sucrose depletion by a Weissella sp. studied in a bioreactor.

  16. A semi-empirical model for the estimation of maximum horizontal displacement due to liquefaction-induced lateral spreading

    USGS Publications Warehouse

    Faris, Allison T.; Seed, Raymond B.; Kayen, Robert E.; Wu, Jiaer

    2006-01-01

    During the 1906 San Francisco Earthquake, liquefaction-induced lateral spreading and the resultant ground displacements damaged bridges, buried utilities and lifelines, conventional structures, and other developed works. This paper presents an improved engineering tool for the prediction of maximum displacement due to liquefaction-induced lateral spreading. A semi-empirical approach is employed, combining mechanistic understanding and data from laboratory testing with data and lessons from full-scale earthquake field case histories. The principle of strain potential index, based primarily on correlation of cyclic simple shear laboratory testing results with in-situ Standard Penetration Test (SPT) results, is used as an index to characterize the deformation potential of soils after they liquefy. A Bayesian probabilistic approach is adopted for development of the final predictive model, in order to take fullest advantage of the data available and to deal with the uncertainties intrinsic to back-analyses of field case histories. A case history from the 1906 San Francisco Earthquake is utilized to demonstrate the ability of the resultant semi-empirical model to estimate maximum horizontal displacement due to liquefaction-induced lateral spreading.

  17. Removal of oxytetracycline (OTC) in a synthetic pharmaceutical wastewater by a sequential anaerobic multichamber bed reactor (AMCBR)/completely stirred tank reactor (CSTR) system: biodegradation and inhibition kinetics.

    PubMed

    Sponza, Delia Teresa; Çelebi, Hakan

    2012-01-01

    An anaerobic multichamber bed reactor (AMCBR) was effective in removing both molasses chemical oxygen demand (COD) and the antibiotic oxytetracycline (OTC). The maximum COD and OTC removals were 99% in the sequential AMCBR/completely stirred tank reactor (CSTR) at an OTC concentration of 300 mg L(-1). Acetic, propionic, and butyric acids made up 51%, 29%, and 9% of the total volatile fatty acids (TVFA), respectively. OTC loading rates between 22.22 and 133.33 g OTC m(-3) d(-1) improved the hydrolysis rate of molasses-COD (k), the maximum specific utilization rate of molasses-COD (k(mh)), and the maximum specific utilization rate of TVFA (k(TVFA)). The direct effects of high OTC loadings (155.56 and 177.78 g OTC m(-3) d(-1)) on acidogens and methanogens were evaluated with Haldane inhibition kinetics. A significant decrease of the Haldane inhibition constant was indicative of increasing toxicity at increasing loading rates.
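
    The Haldane substrate-inhibition model referred to above has a standard form, sketched here; the parameter values in the usage example are illustrative, not fitted values from the study.

```python
import math

def haldane_rate(S, vmax, Ks, Ki):
    """Haldane substrate-inhibition kinetics: the specific utilization
    rate rises with substrate concentration S, then falls once the
    inhibition term S^2/Ki dominates. A smaller Ki means stronger
    inhibition (higher toxicity)."""
    return vmax * S / (Ks + S + S * S / Ki)

# The rate peaks at S = sqrt(Ks * Ki), then declines.
def peak_substrate(Ks, Ki):
    return math.sqrt(Ks * Ki)
```

    For example, with vmax = 1, Ks = 1, and Ki = 100 (all illustrative), the rate peaks at S = 10 and declines at higher concentrations, mirroring the inhibition seen at the highest OTC loadings.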

  18. Utility Tax Relief Program in the Netherlands

    DTIC Science & Technology

    2010-01-06

    LQA = Living Quarters Allowance; VAT = Value Added Tax ... savings on their utility bills. However, for DOD civilians receiving Living Quarters Allowance (LQA), DOD realizes the cost savings through reduced LQA payments, unless the civilian's housing expenses exceed the maximum allowable LQA. For the purpose of this report, DOD personnel refers to

  19. Microbial quality, instrumental texture, and color profile evaluation of edible by-products obtained from Barbari goats

    PubMed Central

    Umaraw, Pramila; Pathak, V.; Rajkumar, V.; Verma, Arun K.; Singh, V. P.; Verma, Akhilesh K.

    2015-01-01

    Aim: The study was conducted to estimate the contribution of edible byproducts of Barbari kids to their live and carcass weights, and to assess the textural, color, and microbiological characteristics of these byproducts. Materials and Methods: Percent live weight, percent carcass weight, and texture, color, and microbiological analyses were done for the edible byproducts, viz. liver, heart, kidney, spleen, brain, and testicle, with the longissimus dorsi muscle taken as a reference. Results: The edible byproducts of Barbari kids constitute about 3% of the live weight of an animal, of which liver contributed the maximum (1.47%), followed by testicles (0.69%) and heart (0.41%); the same constituted 3.57, 1.70, and 0.99% of carcass weight, respectively. There were significant (p<0.05) differences among all organs regarding textural properties. Liver required the maximum shear force and work of shear (121.48 N and 32.19 kg-sec), followed by spleen and heart. All organs revealed characteristic color values (L*, a*, b*, chroma, and hue) that were significantly different (p<0.05) from muscle values. The total viable and coliform counts showed slight differences among the organs studied; staphylococcus counts were low, with little difference among organs. Conclusion: Edible byproducts make a significant contribution to carcass weight and could enhance the total edible portion of the carcass. Efficient utilization of these byproducts returns a good source of revenue to the meat industries. The textural and color analyses give information for their incorporation in comminuted meat products, and the microbial study informs storage. This study is, however, a preliminary step toward better utilization of 3% of the live animal, which could increase the saleable cost of the animal by 6.94%. PMID:27047004

  20. Enhanced light out-coupling efficiency of organic light-emitting diodes with an extremely low haze by plasma treated nanoscale corrugation

    NASA Astrophysics Data System (ADS)

    Hwang, Ju Hyun; Lee, Hyun Jun; Shim, Yong Sub; Park, Cheol Hwee; Jung, Sun-Gyu; Kim, Kyu Nyun; Park, Young Wook; Ju, Byeong-Kwon

    2015-01-01

    Extremely low-haze light extraction from organic light-emitting diodes (OLEDs) was achieved by utilizing nanoscale corrugation, which was simply fabricated with plasma treatment and sonication. The haze of the nanoscale corrugation for light extraction (NCLE) corresponds to 0.21% for visible wavelengths, which is comparable to that of bare glass. The OLEDs with NCLE showed enhancements of 34.19% in current efficiency and 35.75% in power efficiency. Furthermore, the OLEDs with NCLE exhibited angle-stable electroluminescence (EL) spectra for different viewing angles, with no change in the full width at half maximum (FWHM) and peak wavelength. The flexibility of the polymer used for the NCLE and the plasma treatment process indicates that the NCLE can be applied to large and flexible OLED displays. Electronic supplementary information (ESI) available. See DOI: 10.1039/c4nr06547f

  1. Microbial quality, instrumental texture, and color profile evaluation of edible by-products obtained from Barbari goats.

    PubMed

    Umaraw, Pramila; Pathak, V; Rajkumar, V; Verma, Arun K; Singh, V P; Verma, Akhilesh K

    2015-01-01

    The study was conducted to estimate the contribution of edible byproducts of Barbari kids to their live and carcass weight, as well as to assess the textural, color, and microbiological characteristics of these byproducts. Percent live weight, percent carcass weight, texture, color, and microbiological analyses were done for the edible byproducts, viz. liver, heart, kidney, spleen, brain, and testicle, with the longissimus dorsi muscle taken as a reference. The edible byproducts of Barbari kids constitute about 3% of the live weight of an animal, of which the liver contributed the maximum (1.47%), followed by testicles (0.69%) and heart (0.41%); the same constituted 3.57, 1.70, and 0.99%, respectively, of carcass weight. There were significant (p<0.05) differences among all organs regarding textural properties. Liver required the maximum shear force and work of shear (121.48 N and 32.19 kg-sec), followed by spleen and heart. All organs revealed characteristic color values (L*, a*, b*, chroma, and hue) that were significantly different (p<0.05) from muscle values. The total viable count and coliform count showed slight differences among the organs studied, and staphylococcus counts were low with little variation among organs. Edible byproducts make a significant contribution to carcass weight, which could enhance the total edible portion of the carcass, and their efficient utilization provides a good source of revenue for the meat industry. The textural and color analyses provide information for their incorporation in comminuted meat products, and the microbial study informs storage. This study is, however, a preliminary step toward better utilization of the 3% of the live animal that could increase the saleable value of the animal by 6.94%.

  2. On the Evolutionary and Biogeographic History of Saxifraga sect. Trachyphyllum (Gaud.) Koch (Saxifragaceae Juss.)

    PubMed Central

    DeChaine, Eric G.; Anderson, Stacy A.; McNew, Jennifer M.; Wendling, Barry M.

    2013-01-01

    Arctic-alpine plants in the genus Saxifraga L. (Saxifragaceae Juss.) provide an excellent system for investigating the process of diversification in northern regions. Yet sect. Trachyphyllum (Gaud.) Koch, which comprises about 8 to 26 species, has still not been explored by molecular systematists, even though taxonomists concur that the section needs to be thoroughly re-examined. Our goals were to use chloroplast trnL-F and nuclear ITS DNA sequence data to circumscribe the section phylogenetically, test models of geographically based population divergence, and assess the utility of morphological characters in estimating evolutionary relationships. To do so, we sequenced both genetic markers for 19 taxa within the section. Phylogenetic inference for sect. Trachyphyllum using maximum likelihood and Bayesian analyses showed that the section is polyphyletic, with S. aspera L. and S. bryoides L. falling outside the main clade. In addition, the analyses supported several taxonomic re-classifications to prior names. We used two approaches to test biogeographic hypotheses: i) a coalescent approach in Mesquite to test the fit of our reconstructed gene trees to geographically based models of population divergence, and ii) maximum likelihood inference in Lagrange. These tests uncovered strong support for an origin of the clade in the Southern Rocky Mountains of North America, followed by dispersal and divergence episodes across refugia. Finally, we adopted a stochastic character mapping approach in SIMMAP to investigate the utility of morphological characters in estimating evolutionary relationships among taxa. We found that few morphological characters were phylogenetically informative and many were misleading. Our molecular analyses provide a foundation for the diversity and evolutionary relationships within sect. Trachyphyllum and hypotheses for better understanding the patterns and processes of divergence in this section, other saxifrages, and plants inhabiting the North Pacific Rim. PMID:23922810

  3. Perovskite photodetectors with both visible-infrared dual-mode response and super-narrowband characteristics towards photo-communication encryption application.

    PubMed

    Wu, Ye; Li, Xiaoming; Wei, Yi; Gu, Yu; Zeng, Haibo

    2017-12-21

    Photo-communication has attracted great attention because of the rapid development of wireless information transmission technology. However, cryptographic communication over light channels remains a great challenge, as it is greatly weakened by their openness. Here, visible-infrared dual-mode narrowband perovskite photodetectors were fabricated and a new photo-communication encryption technique was proposed. For the first time, highly narrowband and two-photon absorption (TPA) photoresponses are demonstrated within a single photodetector. The full width at half maximum (FWHM) of the photoresponse is as narrow as 13.6 nm in the visible range, which is superior to state-of-the-art narrowband photodetectors. Furthermore, these two merits, the narrowband and TPA characteristics, are utilized to encrypt photo-communication based on the above photodetectors. When information and noise signals are sent simultaneously with 532 and 442 nm laser light, the perovskite photodetectors receive only the main information, while a commercial Si photodetector responds to both wavelengths and loses the main information completely. The final data are determined by the secret key through the TPA process, as preset. Such narrowband and TPA detection abilities endow the perovskite photodetectors with great potential in future secure communication and also provide new opportunities and platforms for encryption techniques.

  4. A Novel Hybrid Dimension Reduction Technique for Undersized High Dimensional Gene Expression Data Sets Using Information Complexity Criterion for Cancer Classification

    PubMed Central

    Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan

    2015-01-01

    Gene expression data typically are large, complex, and highly noisy. Their dimension is high, with several thousand genes (i.e., features) but only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings for data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a novel approach using the maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further employ Akaike's information criterion (AIC), the consistent Akaike's information criterion (CAIC), and Bozdogan's information-theoretic measure of complexity (ICOMP) criterion. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate, in performing PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
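    The component-selection step described above can be sketched in Python. This is a minimal illustration using plain probabilistic PCA with AIC only; the authors' hybridized smoothed covariance estimators and the CAIC/ICOMP criteria are not reproduced, and the data dimensions and parameter count are assumptions for illustration.

```python
import numpy as np

def ppca_aic(X, q):
    """AIC for a probabilistic PCA model with q retained components
    (Tipping-Bishop maximum-likelihood solution)."""
    n, p = X.shape
    S = np.cov(X, rowvar=False)                  # sample covariance matrix
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]   # eigenvalues, descending
    sigma2 = lam[q:].mean()                      # ML noise variance: mean of discarded eigenvalues
    # ML log-likelihood of the PPCA model
    ll = -0.5 * n * (p * np.log(2 * np.pi)
                     + np.sum(np.log(lam[:q]))
                     + (p - q) * np.log(sigma2) + p)
    k = p * q - q * (q - 1) // 2 + 1 + p         # loadings + noise variance + mean vector
    return -2.0 * ll + 2.0 * k

rng = np.random.default_rng(0)
n, p, q_true = 500, 6, 2
W = rng.normal(size=(p, q_true)) * 3.0           # strong two-dimensional latent structure
X = rng.normal(size=(n, q_true)) @ W.T + 0.1 * rng.normal(size=(n, p))
aics = {q: ppca_aic(X, q) for q in range(1, p)}
best_q = min(aics, key=aics.get)                 # number of PPCs retained by AIC
```

    In this setting AIC reliably keeps both strong components; with genuinely undersized samples the sample covariance would be singular, which is exactly the situation the authors' smoothed estimators address.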

  5. Readability analysis of internet-based patient information regarding skull base tumors.

    PubMed

    Misra, Poonam; Kasabwala, Khushabu; Agarwal, Nitin; Eloy, Jean Anderson; Liu, James K

    2012-09-01

    Readability is an important consideration in assessing healthcare-related literature. For a source of information to be most beneficial to patients, it should be written at a level appropriate for the audience. The National Institutes of Health recommend that health literature be written at a maximum level of sixth grade. This is not uniformly found in current health literature, putting patients with lower reading levels at a disadvantage. In February 2012, healthcare-oriented education resources were retrieved from websites obtained using the Google search phrase "skull base tumors." Of the first 25 consecutive, unique website hits, 18 websites were found to contain information for patients. Ten different assessment scales were utilized to assess the readability of the patient-specific web pages. Patient-oriented material found online for skull base tumors was written at a significantly higher level than the reading level of the average US patient; the average reading level of this material was found to be at a minimum of eleventh grade across all ten scales. Health-related material on skull base tumors available through the internet can be improved to reach a larger audience without sacrificing the necessary information. Revisions of this material could provide significant benefit for average patients and improve their health care.
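    One commonly used scale of the kind described, the Flesch-Kincaid grade level, can be sketched as follows. The syllable counter is a crude vowel-group heuristic and the sample text is made up; this is not the validated tooling the authors used.

```python
import re

def count_syllables(word):
    """Rough heuristic: count groups of consecutive vowels (minimum one)."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

sample = ("Skull base tumors arise near the bottom of the skull. "
          "Treatment may require a team of specialists.")
grade = flesch_kincaid_grade(sample)
```

    A grade above 6 for patient material would exceed the NIH's recommended maximum reading level.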

  6. Stochastic characteristics of different duration annual maximum rainfall and its spatial difference in China based on information entropy

    NASA Astrophysics Data System (ADS)

    Li, X.; Sang, Y. F.

    2017-12-01

    Mountain torrents, urban floods, and other disasters caused by extreme precipitation bring great losses to the ecological environment, social and economic development, and people's lives and property. The study of the spatial distribution of extreme precipitation is therefore of great significance for flood prevention and control. Based on annual maximum rainfall data for 60-min, 6-h, and 24-h durations, we generated long sequences following the Pearson-III distribution and then used an information entropy index to study the spatial distribution and differences among durations. The results show that the information entropy of annual maximum rainfall in the southern region is greater than that in the northern region, indicating more obvious stochastic characteristics of annual maximum rainfall in the latter. However, the spatial distribution of stochastic characteristics differs among durations. For example, the stochastic characteristics of the 60-min annual maximum rainfall in eastern Tibet are weaker than in the surrounding area, while those of the 6-h and 24-h annual maximum rainfall are stronger. In the Haihe and Huaihe River Basins, the stochastic characteristics of the 60-min annual maximum rainfall do not differ significantly from those of the surrounding area, while those of the 6-h and 24-h rainfall are weaker. We conclude that the spatial distribution of information entropy of annual maximum rainfall for different durations can reflect the spatial distribution of its stochastic characteristics; the results can thus serve as an important scientific basis for flood prevention and control, agriculture, economic and social development, and urban waterlogging control.
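    The entropy index in the abstract can be illustrated with a histogram-based Shannon entropy on simulated rainfall sequences. A Pearson-III variate with zero location parameter reduces to a gamma variate; the shape/scale values and bin layout here are illustrative assumptions, not the study's fitted parameters.

```python
import numpy as np

def shannon_entropy(samples, bins):
    """Shannon entropy (in bits) of a sample, estimated from a histogram."""
    counts, _ = np.histogram(samples, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log2(p)))

rng = np.random.default_rng(1)
# Illustrative annual-maximum rainfall sequences (mm): same shape, different scale
wet = rng.gamma(shape=2.0, scale=30.0, size=100_000)   # high-variance regime
dry = rng.gamma(shape=2.0, scale=10.0, size=100_000)   # low-variance regime
edges = np.linspace(0.0, 400.0, 101)                   # common bin edges so entropies are comparable
h_wet = shannon_entropy(wet, bins=edges)
h_dry = shannon_entropy(dry, bins=edges)
```

    With a common binning, the more variable sequence spreads its probability mass over more bins and yields the higher entropy, which is the sense in which entropy maps onto "stochastic characteristics" here.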

  7. Indirect Measurement of Energy Density of Soft PZT Ceramic Utilizing Mechanical Stress

    NASA Astrophysics Data System (ADS)

    Unruan, Muangjai; Unruan, Sujitra; Inkong, Yutthapong; Yimnirun, Rattikorn

    2017-11-01

    This paper reports on an indirect measurement of the energy density of soft PZT ceramic utilizing mechanical stress. The method works analogously to the Olsen cycle and allows for a large amount of electro-mechanical energy conversion. A maximum energy density of 350 kJ/m3/cycle was found under an applied mechanical stress of 0-312 MPa and an applied electric field of 1-20 kV/cm. The obtained result is substantially higher than the results reported in previous studies of PZT materials utilizing the direct piezoelectric effect.

  8. UNICOS Kernel Internals Application Development

    NASA Technical Reports Server (NTRS)

    Caredo, Nicholas; Craw, James M. (Technical Monitor)

    1995-01-01

    An understanding of UNICOS kernel internals is valuable. However, having the knowledge is only half the value; the second half comes with knowing how to use this information and apply it to the development of tools. The kernel contains vast amounts of useful information that can be utilized. This paper discusses the intricacies of developing utilities that utilize kernel information. In addition, algorithms, logic, and code for accessing kernel information will be discussed. Code segments will be provided that demonstrate how to locate and read kernel structures. Types of applications that can utilize kernel information will also be discussed.

  9. Estimating landscape carrying capacity through maximum clique analysis

    USGS Publications Warehouse

    Donovan, Therese; Warrington, Greg; Schwenk, W. Scott; Dinitz, Jeffrey H.

    2012-01-01

    Habitat suitability (HS) maps are widely used tools in wildlife science and establish a link between wildlife populations and landscape pattern. Although HS maps spatially depict the distribution of optimal resources for a species, they do not reveal the population size a landscape is capable of supporting--information that is often crucial for decision makers and managers. We used a new approach, "maximum clique analysis," to demonstrate how HS maps for territorial species can be used to estimate the carrying capacity, N(k), of a given landscape. We estimated the N(k) of Ovenbirds (Seiurus aurocapillus) and bobcats (Lynx rufus) in an 1153-km2 study area in Vermont, USA. These two species were selected to highlight different approaches in building an HS map as well as computational challenges that can arise in a maximum clique analysis. We derived 30-m2 HS maps for each species via occupancy modeling (Ovenbird) and by resource utilization modeling (bobcats). For each species, we then identified all pixel locations on the map (points) that had sufficient resources in the surrounding area to maintain a home range (termed a "pseudo-home range"). These locations were converted to a mathematical graph, where any two points were linked if two pseudo-home ranges could exist on the landscape without violating territory boundaries. We used the program Cliquer to find the maximum clique of each graph. The resulting estimates of N(k) = 236 Ovenbirds and N(k) = 42 female bobcats were sensitive to different assumptions and model inputs. Estimates of N(k) via alternative, ad hoc methods were 1.4 to > 30 times greater than the maximum clique estimate, suggesting that the alternative results may be upwardly biased. The maximum clique analysis was computationally intensive but could handle problems with < 1500 total pseudo-home ranges (points). Given present computational constraints, it is best suited for species that occur in clustered distributions (where the problem can be broken into several, smaller problems), or for species with large home ranges relative to grid scale, where resampling the points to a coarser resolution can reduce the problem to manageable proportions.
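    The graph step above can be illustrated with a tiny maximum-clique search in pure Python using the Bron-Kerbosch algorithm. Cliquer, the tool the authors used, is a far more optimized branch-and-bound solver; the five-point "landscape" below is made up for illustration.

```python
def maximum_clique(adj):
    """Bron-Kerbosch search; adj maps each vertex to its set of neighbors."""
    best = set()
    def expand(r, p, x):
        nonlocal best
        if not p and not x:          # r is a maximal clique
            if len(r) > len(best):
                best = set(r)
            return
        for v in list(p):
            expand(r | {v}, p & adj[v], x & adj[v])
            p = p - {v}
            x = x | {v}
    expand(set(), set(adj), set())
    return best

# Vertices are candidate pseudo-home-range centers; an edge means the two
# pseudo-home ranges can coexist without violating territory boundaries.
compatible = {
    0: {1, 2},
    1: {0, 2},
    2: {0, 1},
    3: {4},
    4: {3},
}
clique = maximum_clique(compatible)
carrying_capacity = len(clique)   # N(k) estimate for this toy landscape
```

    Here the three mutually compatible ranges {0, 1, 2} form the maximum clique, so the toy landscape supports at most three territories.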

  10. 24 CFR 241.530 - Maximum fees and charges by lender.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... AUTHORITIES SUPPLEMENTARY FINANCING FOR INSURED PROJECT MORTGAGES Eligibility Requirements-Supplemental Loans... Individual Utility Meters in Multifamily Projects Without a HUD-Insured or HUD-Held Mortgage Fees and Charges...

  11. 14 CFR 23.703 - Takeoff warning system.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Design and Construction Control Systems § 23.703 Takeoff warning system. For all airplanes with a maximum weight more than 6,000...

  12. Stochastic information transfer from cochlear implant electrodes to auditory nerve fibers

    NASA Astrophysics Data System (ADS)

    Gao, Xiao; Grayden, David B.; McDonnell, Mark D.

    2014-08-01

    Cochlear implants, also called bionic ears, are implanted neural prostheses that can restore lost human hearing function by direct electrical stimulation of auditory nerve fibers. Previously, an information-theoretic framework for numerically estimating the optimal number of electrodes in cochlear implants has been devised. This approach relies on a model of stochastic action potential generation and a discrete memoryless channel model of the interface between the array of electrodes and the auditory nerve fibers. Using these models, the stochastic information transfer from cochlear implant electrodes to auditory nerve fibers is estimated from the mutual information between channel inputs (the locations of electrodes) and channel outputs (the set of electrode-activated nerve fibers). Here we describe a revised model of the channel output in the framework that avoids the side effects caused by an "ambiguity state" in the original model and also makes fewer assumptions about perceptual processing in the brain. A detailed comparison of how different assumptions about fibers and current-spread modes affect the information transfer in the original and revised models is presented. We also mathematically derive an upper bound on the mutual information in the revised model, which becomes tighter as the number of electrodes increases. We found that the revised model leads to a significantly larger maximum mutual information and corresponding number of electrodes compared with the original model and conclude that the assumptions made in this part of the modeling framework are crucial to the model's overall utility.
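    The mutual-information quantity at the heart of this framework can be computed directly for any discrete memoryless channel. The two-input "electrode" channel below is a toy assumption, not the authors' cochlear model; it only shows how current spread (off-diagonal confusion) reduces I(X;Y).

```python
import numpy as np

def mutual_information(p_x, p_y_given_x):
    """I(X;Y) in bits for a discrete memoryless channel.
    p_x: input distribution, shape (nx,); p_y_given_x: channel matrix, shape (nx, ny)."""
    p_xy = p_x[:, None] * p_y_given_x            # joint distribution p(x, y)
    p_y = p_xy.sum(axis=0)                       # output marginal p(y)
    prod = p_x[:, None] * p_y[None, :]           # independence baseline p(x)p(y)
    mask = p_xy > 0
    return float(np.sum(p_xy[mask] * np.log2(p_xy[mask] / prod[mask])))

p_x = np.array([0.5, 0.5])                       # two equally likely electrode inputs
noiseless = np.eye(2)                            # each electrode excites a distinct fiber set
eps = 0.1                                        # assumed confusion probability from current spread
noisy = np.array([[1 - eps, eps], [eps, 1 - eps]])
i_clean = mutual_information(p_x, noiseless)     # 1 bit: inputs fully distinguishable
i_noisy = mutual_information(p_x, noisy)         # less than 1 bit
```

    Sweeping the number of inputs and the confusion structure is exactly how such a framework locates the electrode count at which added channels stop adding information.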

  13. National Information Utility Seeks to Serve Schools Nationwide.

    ERIC Educational Resources Information Center

    Platzer, Nancy

    1985-01-01

    Outlines the pros and cons of the National Information Utility Program, which is designed to provide current updatable courseware to schools nationwide. The information is broadcast over FM radio and television signals to facilities subscribing to the utility. (MD)

  14. Photoacoustic image reconstruction from ultrasound post-beamformed B-mode image

    NASA Astrophysics Data System (ADS)

    Zhang, Haichong K.; Guo, Xiaoyu; Kang, Hyun Jae; Boctor, Emad M.

    2016-03-01

    A requirement for reconstructing a photoacoustic (PA) image is channel data acquisition synchronized with the laser firing. Unfortunately, most clinical ultrasound (US) systems do not offer an interface to obtain synchronized channel data. To broaden the impact of clinical PA imaging, we propose a PA image reconstruction algorithm utilizing the US B-mode image, which is readily available from clinical scanners. A US B-mode image results from a series of signal processing steps: beamforming, followed by envelope detection, and ending with log compression. However, the image is defocused when PA signals are the input, because the delay function is incorrect. Our approach is to reverse the order of the image processing steps and recover the original US post-beamformed radio-frequency (RF) data, to which a synthetic-aperture-based PA rebeamforming algorithm can then be applied. Taking the B-mode image as input, we first recovered the US post-beamformed RF data by applying log decompression and convolving with an acoustic impulse response to restore carrier frequency information. The US post-beamformed RF data are then used as pre-beamformed RF data for the adaptive PA beamforming algorithm, and a new delay function is applied that accounts for the focus depth in US beamforming being at half the depth of the PA case. The feasibility of the proposed method was validated through simulation and demonstrated experimentally using an acoustic point source. The point source was successfully beamformed from a US B-mode image, and the full width at half maximum (FWHM) of the point improved by a factor of 3.97. Compared to the ground-truth reconstruction using channel data, the FWHM was slightly degraded, by a factor of 1.28, owing to information loss during envelope detection and convolution of the RF information.
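    The log decompression step that this pipeline reverses can be sketched as a simple round trip. The 60 dB dynamic range and the sample envelope values are assumptions; real scanners also quantize the display image, which (like envelope detection) loses information the inverse cannot recover.

```python
import numpy as np

def log_compress(envelope, dynamic_range_db=60.0):
    """Map envelope amplitudes to display dB relative to the peak, floored at -DR."""
    db = 20.0 * np.log10(envelope / envelope.max())
    return np.clip(db, -dynamic_range_db, 0.0)

def log_decompress(db_image, peak):
    """Invert the compression (exact only for values above the dynamic-range floor)."""
    return peak * 10.0 ** (db_image / 20.0)

env = np.array([1.0, 10.0, 100.0, 1000.0])   # illustrative envelope amplitudes
img = log_compress(env)                       # B-mode-style display values in dB
recovered = log_decompress(img, env.max())    # back to envelope amplitudes
```

    Amplitudes that fall below the dynamic-range floor are clipped and cannot be restored, one source of the slight FWHM degradation the abstract reports relative to channel-data reconstruction.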

  15. Data Assimilation using observed streamflow and remotely-sensed soil moisture for improving sub-seasonal-to-seasonal forecasting

    NASA Astrophysics Data System (ADS)

    Arumugam, S.; Mazrooei, A.; Lakshmi, V.; Wood, A.

    2017-12-01

    Subseasonal-to-seasonal (S2S) forecasts of soil moisture and streamflow provide critical information for water and agricultural systems to support short-term planning and management. This study evaluates the role of observed streamflow and remotely sensed soil moisture from the SMAP (Soil Moisture Active Passive) mission in improving S2S streamflow and soil moisture forecasting using data assimilation (DA). We first show the ability to forecast soil moisture at the monthly-to-seasonal time scale by forcing climate forecasts with NASA's Land Information System, and then compare the developed soil moisture forecasts with SMAP data over the Southeast US. Our analyses show significant skill in forecasting real-time soil moisture over 1-3 months using climate information. We also show that the developed soil moisture forecasts capture the observed severe drought conditions (2007-2008) over the Southeast US. Following that, we consider both SMAP data and observed streamflow for improving S2S streamflow and soil moisture forecasts for a pilot study area, the Tar River basin in NC. Toward this, we consider variational assimilation (VAR) of gauge-measured daily streamflow data to improve the initial hydrologic conditions of the Variable Infiltration Capacity (VIC) model. The utility of data assimilation is then assessed in improving S2S forecasts of streamflow and soil moisture through a retrospective analysis. Furthermore, the optimal frequency of data assimilation and the optimal analysis window (number of past observations to use) are assessed in order to achieve the maximum improvement in S2S forecasts of streamflow and soil moisture. The potential utility of updating initial conditions using DA and providing skillful forcings is also discussed.

  16. Dissolved organic phosphorus utilization and alkaline phosphatase activity of the dinoflagellate Gymnodinium impudicum isolated from the South Sea of Korea

    NASA Astrophysics Data System (ADS)

    Oh, Seok Jin; Kwon, Hyeong Kyu; Noh, Il Hyeon; Yang, Han-Soeb

    2010-09-01

    This study investigated alkaline phosphatase (APase) activity and dissolved organic and inorganic phosphorus utilization by the harmful dinoflagellate Gymnodinium impudicum (Fraga et Bravo) Hansen et Moestrup isolated from the South Sea of Korea. Under conditions of limited phosphorus, observation of growth kinetics in batch culture yielded a maximum growth rate (μmax) of 0.41 /day and a half saturation constant (Ks) of 0.71 μM. In time-course experiments, APase was induced as dissolved inorganic phosphorus (DIP) concentrations fell below 0.83 μM, a threshold near the estimated Ks; APase activity increased with further DIP depletion to a maximum of 0.70 pmol/cell/h in the senescent phase. Thus, Ks may be an important index of the threshold DIP concentration for APase induction. G. impudicum utilizes a wide variety of dissolved organic phosphorus compounds in addition to DIP. These results suggest that DIP limitation in the Southern Sea of Korea may have led to the spread of G. impudicum along with the harmful dinoflagellate Cochlodinium polykrikoides in recent years.
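    The growth kinetics reported above follow the Monod (Michaelis-Menten form) relation, which can be written out directly using the abstract's fitted values of μmax = 0.41/day and Ks = 0.71 μM; the substrate concentrations chosen below are illustrative.

```python
def monod_growth(s, mu_max=0.41, ks=0.71):
    """Monod growth rate: mu = mu_max * S / (Ks + S).
    mu_max (/day) and Ks (uM) are the values reported for G. impudicum."""
    return mu_max * s / (ks + s)

half = monod_growth(0.71)       # at S = Ks, the rate is exactly half of mu_max
near_max = monod_growth(50.0)   # growth saturates toward mu_max at high DIP
```

    The half-saturation property is also why Ks is a natural threshold for alkaline phosphatase induction: below Ks, DIP limitation begins to bite sharply.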

  17. Utility Tax Avoidance Program in Germany

    DTIC Science & Technology

    2010-09-29

    Defense Education Activity IMCOM-E Installation Management Command - Europe Region LQA Living Quarters Allowance TRO Tax Relief Office USD(P&R) Under...receive Living Quarters Allowance (LQA), which is designed to cover the actual cost for rent, utilities, and other expenses required by law or custom up...to the maximum allowable LQA amount. Benefits of Participation in the UTAP Participation in UTAP allows DOD personnel to avoid paying VAT on their

  18. Simultaneous measurement of glucose transport and utilization in the human brain

    PubMed Central

    Shestov, Alexander A.; Emir, Uzay E.; Kumar, Anjali; Henry, Pierre-Gilles; Seaquist, Elizabeth R.

    2011-01-01

    Glucose is the primary fuel for brain function, and determining the kinetics of cerebral glucose transport and utilization is critical for quantifying cerebral energy metabolism. The kinetic parameters of cerebral glucose transport, KMt and Vmaxt, in humans have so far been obtained by measuring steady-state brain glucose levels by proton (1H) NMR as a function of plasma glucose levels and fitting steady-state models to these data. Extraction of the kinetic parameters for cerebral glucose transport necessitated assuming a constant cerebral metabolic rate of glucose (CMRglc) obtained from other tracer studies, such as 13C NMR. Here we present new methodology to simultaneously obtain kinetic parameters for glucose transport and utilization in the human brain by fitting both dynamic and steady-state 1H NMR data with a reversible, non-steady-state Michaelis-Menten model. Dynamic data were obtained by measuring brain and plasma glucose time courses during glucose infusions to raise and maintain plasma concentration at ∼17 mmol/l for ∼2 h in five healthy volunteers. Steady-state brain vs. plasma glucose concentrations were taken from literature and the steady-state portions of data from the five volunteers. In addition to providing simultaneous measurements of glucose transport and utilization and obviating assumptions for constant CMRglc, this methodology does not necessitate infusions of expensive or radioactive tracers. Using this new methodology, we found that the maximum transport capacity for glucose through the blood-brain barrier was nearly twofold higher than maximum cerebral glucose utilization. The glucose transport and utilization parameters were consistent with previously published values for human brain. PMID:21791622
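    The reversible Michaelis-Menten transport model described above can be sketched with a forward-Euler integration. All parameter values below are illustrative placeholders, not the authors' fitted human values; they are chosen only so that transport capacity is about twice utilization, mirroring the abstract's finding.

```python
def simulate_brain_glucose(g_plasma=17.0, vmax_t=1.0, kt=2.0, cmr_glc=0.5,
                           g0=1.0, dt=0.01, steps=100_000):
    """Forward-Euler integration of a reversible Michaelis-Menten model:
    dGb/dt = Vmax_t * (Gp/(Kt+Gp) - Gb/(Kt+Gb)) - CMRglc.
    Units are notional mmol/l and per-unit-time; values are illustrative."""
    gb = g0
    for _ in range(steps):
        influx = vmax_t * g_plasma / (kt + g_plasma)   # transport into brain
        efflux = vmax_t * gb / (kt + gb)               # reverse transport out
        gb += dt * (influx - efflux - cmr_glc)         # net change minus utilization
        gb = max(gb, 0.0)
    return gb

gb_ss = simulate_brain_glucose()   # brain glucose settles at a steady state below plasma
```

    At steady state the net transport exactly balances CMRglc, which is the relationship the authors exploit to fit transport and utilization parameters simultaneously from dynamic data.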

  19. A Comparison of Item Selection Techniques for Testlets

    ERIC Educational Resources Information Center

    Murphy, Daniel L.; Dodd, Barbara G.; Vaughn, Brandon K.

    2010-01-01

    This study examined the performance of the maximum Fisher's information, the maximum posterior weighted information, and the minimum expected posterior variance methods for selecting items in a computerized adaptive testing system when the items were grouped in testlets. A simulation study compared the efficiency of ability estimation among the…
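    The maximum Fisher's information criterion compared above can be sketched for standalone 2PL items (ignoring the testlet grouping that is the study's focus). The item bank and ability estimate below are hypothetical.

```python
import math

def fisher_information(theta, a, b):
    """Fisher information of a 2PL item at ability theta: I = a^2 * P * (1 - P),
    where P is the probability of a correct response."""
    p = 1.0 / (1.0 + math.exp(-a * (theta - b)))
    return a * a * p * (1.0 - p)

# Hypothetical item bank: (discrimination a, difficulty b)
bank = [(1.0, -1.5), (1.2, 0.0), (1.8, 0.1), (0.9, 1.5)]
theta_hat = 0.0   # current ability estimate
best_item = max(range(len(bank)),
                key=lambda i: fisher_information(theta_hat, *bank[i]))
```

    The highly discriminating item with difficulty near the current ability estimate wins, which is the behavior that makes maximum-information selection efficient but also prone to overexposing a few items.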

  20. 48 CFR 552.219-73 - Goals for Subcontracting Plan.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...: Goals for Subcontracting Plan (JUN 2005) (a) Maximum practicable utilization of small, HUBZone small... correct deficiencies in a plan within the time specified by the Contracting Officer shall make the offeror...

  1. 48 CFR 552.219-73 - Goals for Subcontracting Plan.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...: Goals for Subcontracting Plan (JUN 2005) (a) Maximum practicable utilization of small, HUBZone small... correct deficiencies in a plan within the time specified by the Contracting Officer shall make the offeror...

  2. 48 CFR 552.219-73 - Goals for Subcontracting Plan.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...: Goals for Subcontracting Plan (JUN 2005) (a) Maximum practicable utilization of small, HUBZone small... correct deficiencies in a plan within the time specified by the Contracting Officer shall make the offeror...

  3. 48 CFR 552.219-73 - Goals for Subcontracting Plan.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...: Goals for Subcontracting Plan (JUN 2005) (a) Maximum practicable utilization of small, HUBZone small... correct deficiencies in a plan within the time specified by the Contracting Officer shall make the offeror...

  4. Nonlinear Performance Seeking Control using Fuzzy Model Reference Learning Control and the Method of Steepest Descent

    NASA Technical Reports Server (NTRS)

    Kopasakis, George

    1997-01-01

    Performance Seeking Control (PSC) attempts to find and control the process at the operating condition that will generate maximum performance. In this paper a nonlinear multivariable PSC methodology is developed, utilizing Fuzzy Model Reference Learning Control (FMRLC) and the method of Steepest Descent or Gradient (SDG). This PSC methodology employs the SDG method to find the operating condition that will generate maximum performance. This operating condition is in turn passed to the FMRLC controller as a set point for the control of the process. The conventional SDG algorithm is modified here so that convergence occurs monotonically. For the FMRLC control, the conventional fuzzy model reference learning control methodology is utilized, with guidelines generated here for effective tuning of the FMRLC controller.
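    The SDG step of the scheme above can be sketched as a gradient climb on a performance map, whose result is then handed to the controller as a set point. The quadratic performance surrogate and step size are illustrative assumptions, not the paper's nonlinear multivariable process model.

```python
def performance(x):
    """Illustrative single-peak performance map, maximized at x = 3.0."""
    return -(x - 3.0) ** 2 + 10.0

def steepest_ascent(f, x0, step=0.1, h=1e-5, iters=200):
    """Climb the numerical gradient toward the maximum-performance operating condition."""
    x = x0
    for _ in range(iters):
        grad = (f(x + h) - f(x - h)) / (2.0 * h)   # central-difference gradient
        x += step * grad                            # fixed-step ascent
    return x

set_point = steepest_ascent(performance, x0=0.0)   # passed to the controller as a set point
```

    With a fixed step size on this concave map the iterates approach the peak monotonically; the paper's modification enforces the same monotone convergence for its conventional SDG algorithm.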

  5. Effects of long-term microgravitation exposure on cell respiration of the rat musculus soleus fibers.

    PubMed

    Veselova, O M; Ogneva, I V; Larina, I M

    2011-07-01

    Cell respiration of the m. soleus fibers was studied in Wistar rats treated with succinic acid and exposed to microgravitation for 35 days. The results indicated that respiration rates during utilization of endogenous and exogenous substrates and the maximum respiration rate decreased in animals subjected to microgravitation without succinate treatment. The respiration rate during utilization of exogenous substrate did not increase in comparison with that on endogenous substrates. Succinic acid prevented the decrease in the respiration rate on endogenous substrates and the maximum respiration rate. On the other hand, the respiration rate on exogenous substrates was reduced in vivarium control rats receiving succinate in comparison with the intact control group. This could indicate a change in the efficiency of complex I of the respiratory chain due to reciprocal regulation of the tricarboxylic acid cycle.

  6. Working memory management and predicted utility

    PubMed Central

    Chatham, Christopher H.; Badre, David

    2013-01-01

    Given the limited capacity of working memory (WM), its resources should be allocated strategically. One strategy is filtering, whereby access to WM is granted preferentially to items with the greatest utility. However, reallocation of WM resources might be required if the utility of maintained information subsequently declines. Here, we present behavioral, computational, and neuroimaging evidence that human participants track changes in the predicted utility of information in WM. First, participants demonstrated behavioral costs when the utility of items already maintained in WM declined and resources should be reallocated. An adapted Q-learning model indicated that these costs scaled with the historical utility of individual items. Finally, model-based neuroimaging demonstrated that frontal cortex tracked the utility of items to be maintained in WM, whereas ventral striatum tracked changes in the utility of items maintained in WM to the degree that these items are no longer useful. Our findings suggest that frontostriatal mechanisms track the utility of information in WM, and that these dynamics may predict delays in the removal of information from WM. PMID:23882196
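    The adapted Q-learning account described above can be sketched with a delta-rule update that tracks an item's predicted utility as reward contingencies change. The learning rate and reward schedule are illustrative assumptions, not the fitted model.

```python
def update_utility(q, reward, alpha=0.2):
    """Q-learning-style delta-rule update of an item's predicted utility."""
    return q + alpha * (reward - q)

q = 0.0
for _ in range(30):              # phase 1: the maintained item keeps paying off
    q = update_utility(q, 1.0)
high = q                          # predicted utility near its maximum
for _ in range(30):              # phase 2: the item's utility declines
    q = update_utility(q, 0.0)
low = q                           # predicted utility decays back toward zero
```

    The gradual decay after reward is withdrawn mirrors the behavioral costs the study reports when items already in working memory lose their utility and resources must be reallocated.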

  7. Maximum likelihood convolutional decoding (MCD) performance due to system losses

    NASA Technical Reports Server (NTRS)

    Webster, L.

    1976-01-01

    A model for predicting the computational performance of a maximum likelihood convolutional decoder (MCD) operating in a noisy carrier reference environment is described. This model is used to develop a subroutine that will be utilized by the Telemetry Analysis Program to compute the MCD bit error rate. When this computational model is averaged over noisy reference phase errors using a high-rate interpolation scheme, the results are found to agree quite favorably with experimental measurements.

  8. Experimental optimal maximum-confidence discrimination and optimal unambiguous discrimination of two mixed single-photon states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Steudle, Gesine A.; Knauer, Sebastian; Herzog, Ulrike

    2011-05-15

    We present an experimental implementation of optimum measurements for quantum state discrimination. Optimum maximum-confidence discrimination and optimum unambiguous discrimination of two mixed single-photon polarization states were performed. For the latter, the states of rank 2 in a four-dimensional Hilbert space were prepared using both path and polarization encoding. Linear optics and single photons from a true single-photon source based on a semiconductor quantum dot were utilized.

  9. Comparison between fluorimetry and oximetry techniques to measure photosynthesis in the diatom Skeletonema costatum cultivated under simulated seasonal conditions.

    PubMed

    Lefebvre, Sébastien; Mouget, Jean-Luc; Loret, Pascale; Rosa, Philippe; Tremblin, Gérard

    2007-02-01

    This study reports a comparison of two techniques for measuring photosynthesis in the ubiquitous diatom Skeletonema costatum: classical oximetry and the more recent modulated fluorimetry. Microalgae in semi-continuous cultures were exposed to five different environmental conditions simulating a seasonal effect with co-varying temperature, photoperiod and incident light. Photosynthesis was assessed by measurements of the gross rate of oxygen evolution (P(B)) and the electron transport rate (ETR). The two techniques were linearly related within seasonal treatments along the course of the P/E curves. The light saturation intensity parameters (Ek and Ek(ETR)) and the maximum electron transport rate increased significantly with the progression of the season, while the maximum light utilization efficiency for ETR (alpha(ETR)) was constant. By contrast, the maximum gross oxygen photosynthetic capacity (Pmax(B)) and the maximum light utilization efficiency for P(B) (alpha(B)) increased from the December to the May treatment but decreased from the May to the July treatment. Both techniques showed clear photoacclimation in microalgae with the progression of the season, as illustrated by changes in photosynthetic parameters. The relationship between the two techniques changed when high temperature, photoperiod and incident light were combined, possibly due to an overestimation of the PAR-averaged chlorophyll-specific absorption cross-section. Despite this change, our results illustrate the suitability of in vivo chlorophyll fluorimetry for estimating primary production in the field.

  10. Dynamic impedance compensation for wireless power transfer using conjugate power

    NASA Astrophysics Data System (ADS)

    Liu, Suqi; Tan, Jianping; Wen, Xue

    2018-02-01

    Wireless power transfer (WPT) via coupled magnetic resonances has been in development for over a decade. However, the frequency splitting phenomenon occurs in the over-coupled region: the two-coil system delivers its maximum output power at the two split angular frequencies, not at the natural resonant angular frequency. Following the maximum power transfer theorem, impedance compensation has been adopted in many WPT projects, but achieving both maximum output power and maximum transmission efficiency in a fixed-frequency mode remains a challenge. In this study, dynamic impedance compensation for WPT is presented, utilizing a compensator within a virtual three-coil WPT system. First, the circuit model was established and the transfer characteristics of the system were studied using circuit theory. Second, the power superposition in the WPT system was analyzed. When a pair of compensating coils was inserted into the transmitter loop, the conjugate power of the compensator loop was created via the magnetic coupling of the two compensating coils. The mechanism of dynamic impedance compensation for wireless power transfer was then explained by investigating the virtual three-coil WPT system. Finally, an experimental circuit of the virtual three-coil WPT system was designed; the experimental results are consistent with the theoretical analysis and achieve the maximum output power and transmission efficiency.
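The frequency splitting this abstract refers to can be reproduced with a textbook series-series two-coil circuit model. The sketch below is illustrative only: all component values are invented assumptions, and it models a plain two-coil link, not the paper's three-coil compensated system.

```python
import math

# Invented-parameter sketch of a series-series two-coil WPT link. In
# the over-coupled region (large mutual inductance M) the load power
# peaks at two split frequencies, not at the natural resonance F0.
L = 100e-6                      # both coil inductances, henries (assumed)
F0 = 100e3                      # natural resonant frequency, Hz (assumed)
W0 = 2.0 * math.pi * F0
C = 1.0 / (L * W0 ** 2)         # tuning capacitance for resonance at F0
M = 0.4 * L                     # mutual inductance, k = 0.4 (over-coupled)
RS, R1, R2, RL, VS = 1.0, 1.0, 1.0, 10.0, 10.0  # ohms / volts (assumed)

def load_power(f):
    """Power delivered to RL from the two-mesh circuit equations."""
    w = 2.0 * math.pi * f
    x = w * L - 1.0 / (w * C)   # identical detuning reactance in each loop
    z1 = RS + R1 + 1j * x
    z2 = RL + R2 + 1j * x
    i2 = 1j * w * M * VS / (z1 * z2 + (w * M) ** 2)
    return abs(i2) ** 2 * RL

freqs = [75e3 + 500.0 * i for i in range(121)]  # sweep 75-135 kHz
f_peak = max(freqs, key=load_power)
# f_peak lands near F0/sqrt(1 +/- k), well away from the 100 kHz
# resonance, and delivers several times the power available at F0.
```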

  11. The Texas space flight liability act and efficient regulation for the private commercial space flight era

    NASA Astrophysics Data System (ADS)

    Johnson, Christopher D.

    2013-12-01

    In the spring of 2011, the American state of Texas passed into law an act limiting the liability of commercial space flight entities. Under it, those companies would not be liable for space flight participant injuries, except in cases of intentional injury or injury proximately caused by the company's gross negligence. An analysis within the framework of international and national space law, but especially informed by the academic discipline of law and economics, discusses the incentives of all relevant parties and attempts to understand whether the law is economically "efficient" (allocating resources so as to yield maximum utility), and suited to further the development of the fledgling commercial suborbital tourism industry. Insights into the Texas law are applicable to other states hoping to foster commercial space tourism and considering space tourism related legislation.

  12. Strategies for converting to a DBMS environment

    NASA Technical Reports Server (NTRS)

    Durban, D. M.

    1984-01-01

    The conversion to data base management systems processing techniques consists of three different strategies - one for each of the major stages in the development process. Each strategy was chosen for its approach in bringing about a smooth, evolutionary transition from one mode of operation to the next. The initial strategy of the indoctrination stage consisted of: (1) providing maximum access to current administrative data as soon as possible; (2) selecting and developing small prototype systems; (3) establishing a user information center as a central focal point for user training and assistance; and (4) developing a training program for programmers, management and ad hoc users in DBMS application and utilization. Security, the role of the data dictionary, data base tuning and capacity planning, and the development of a change of attitude in an automated office are issues meriting consideration.

  13. Online stochastic optimization of radiotherapy patient scheduling.

    PubMed

    Legrain, Antoine; Fortin, Marie-Andrée; Lahrichi, Nadia; Rousseau, Louis-Martin

    2015-06-01

    The effective management of a cancer treatment facility for radiation therapy depends mainly on optimizing the use of the linear accelerators. In this project, we schedule patients on these machines taking into account their priority for treatment, the maximum waiting time before the first treatment, and the treatment duration. We collaborate with the Centre Intégré de Cancérologie de Laval to determine the best scheduling policy. Furthermore, we integrate the uncertainty related to the arrival of patients at the center. We develop a hybrid method combining stochastic optimization and online optimization to better meet the needs of central planning. We use information on the future arrivals of patients to provide an accurate picture of the expected utilization of resources. Results based on real data show that our method outperforms the policies typically used in treatment centers.

  14. Maximum entropy principle for transportation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bilich, F.; Da Silva, R.

    In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
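The "standard formulation" the abstract contrasts against can be sketched compactly: a constraint-based maximum-entropy trip distribution solved by iterative balancing. The origin/destination totals, costs, and deterrence parameter below are invented for illustration; the paper's dependence-coefficient formulation replaces these explicit constraints.

```python
import math

# Sketch of the classical doubly-constrained maximum-entropy (gravity)
# trip-distribution model, T_ij = a_i O_i b_j D_j exp(-beta c_ij),
# fitted by iterative proportional balancing. All numbers are invented.
def trip_distribution(origins, dests, cost, beta, iters=100):
    n, m = len(origins), len(dests)
    a = [1.0] * n
    b = [1.0] * m
    for _ in range(iters):
        for i in range(n):   # balance rows to the origin totals
            a[i] = 1.0 / sum(b[j] * dests[j] * math.exp(-beta * cost[i][j])
                             for j in range(m))
        for j in range(m):   # balance columns to the destination totals
            b[j] = 1.0 / sum(a[i] * origins[i] * math.exp(-beta * cost[i][j])
                             for i in range(n))
    return [[a[i] * origins[i] * b[j] * dests[j] * math.exp(-beta * cost[i][j])
             for j in range(m)] for i in range(n)]

T = trip_distribution([100.0, 200.0], [150.0, 150.0],
                      [[1.0, 2.0], [2.0, 1.0]], beta=0.5)
# Row sums reproduce the origin totals; column sums the destinations.
```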

  15. Population characteristics of a recovering US Virgin Islands red hind spawning aggregation following protection

    PubMed Central

    Nemeth, Richard S.

    2006-01-01

    Many species of groupers form spawning aggregations, dramatic events where hundreds to thousands of individuals gather annually at specific locations for reproduction. Spawning aggregations are often targeted by local fishermen, making them extremely vulnerable to overfishing. The Red Hind Bank Marine Conservation District located in St. Thomas, United States Virgin Islands, was closed seasonally in 1990 and closed permanently in 1999 to protect an important red hind Epinephelus guttatus spawning site. This study provides some of the first information on the population response of a spawning aggregation located within a marine protected area. Tag-and-release fishing and fish transects were used to evaluate population characteristics and habitat utilization patterns of a red hind spawning aggregation between 1999 and 2004. Compared with studies conducted before the permanent closure, the average size of red hind increased mostly during the seasonal closure period (10 cm over 12 yr), but the maximum total length of male red hind increased by nearly 7 cm following permanent closure. Average density and biomass of spawning red hind increased by over 60% following permanent closure, whereas maximum spawning density more than doubled. Information from tag returns indicated that red hind departed the protected area following spawning and migrated 6 to 33 km to a ca. 500 km2 area. Protection of the spawning aggregation site may have also contributed to an overall increase in the size of red hind caught in the commercial fishery, thus increasing the value of the grouper fishery for local fishermen. PMID:16612415

  16. Information dynamics in living systems: prokaryotes, eukaryotes, and cancer.

    PubMed

    Frieden, B Roy; Gatenby, Robert A

    2011-01-01

    Living systems use information and energy to maintain stable entropy while far from thermodynamic equilibrium. The underlying first principles have not been established. We propose that stable entropy in living systems, in the absence of thermodynamic equilibrium, requires an information extremum (maximum or minimum), which is invariant to first-order perturbations. Proliferation and death represent key feedback mechanisms that promote stability even in a non-equilibrium state. A system moves to low or high information depending on its energy status, as the benefit of information in maintaining and increasing order is balanced against its energy cost. Prokaryotes, which lack specialized energy-producing organelles (mitochondria), are energy-limited and constrained to an information minimum. Acquisition of mitochondria is viewed as a critical evolutionary step that, by allowing eukaryotes to achieve a sufficiently high energy state, permitted a phase transition to an information maximum. This state, in contrast to the prokaryote minima, allowed the evolution of complex, multicellular organisms. A special case is a malignant cell, which is modeled as a phase transition from a maximum to a minimum information state. The minimum leads to a predicted power law governing in situ growth that is confirmed by studies measuring the growth of small breast cancers. We find that living systems achieve a stable entropic state by maintaining an extreme level of information. The evolutionary divergence of prokaryotes and eukaryotes resulted from the acquisition of specialized energy organelles that allowed transitions from information minima to maxima, respectively. Carcinogenesis represents the reverse transition: from an information maximum to a minimum. The progressive information loss is evident in the accumulating mutations, disordered morphology, and functional decline characteristic of human cancers. The findings suggest energy restriction is a critical first step that triggers the genetic mutations that drive the somatic evolution of the malignant phenotype.

  17. Optimal route discovery for soft QOS provisioning in mobile ad hoc multimedia networks

    NASA Astrophysics Data System (ADS)

    Huang, Lei; Pan, Feng

    2007-09-01

    In this paper, we propose an optimal route discovery algorithm for ad hoc multimedia networks whose resources keep changing. First, we use stochastic models to measure network resource availability, based on information about the location and movement patterns of the nodes, as well as the link conditions between neighboring nodes. Then, for a given multimedia packet flow to be transmitted from a source to a destination, we formulate the optimal soft-QoS provisioning problem as finding the route that maximizes the probability of satisfying the desired QoS requirements in terms of maximum delay constraints. Based on the stochastic network resource model, we developed three approaches to solve the formulated problem: a centralized approach serving as the theoretical reference, a distributed approach that is more suitable for practical real-time deployment, and a distributed dynamic approach that utilizes updated time information to optimize the routing for each individual packet. Numerical results demonstrate that, using the routes discovered by our distributed algorithm in a changing network environment, multimedia applications can achieve statistically better QoS.
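One standard way to realize such a maximum-probability route (not necessarily the paper's algorithm): if each link independently meets the delay budget with a known probability, maximizing the end-to-end success probability reduces to a shortest-path search under -log(p) edge weights. The toy topology and probabilities below are invented.

```python
import heapq
import math

# Hedged sketch: Dijkstra over -log(p) weights finds the route that
# maximizes the product of per-link success probabilities. The graph
# is a made-up example; a stochastic resource model would supply the
# per-link probabilities in practice.
def best_route(graph, src, dst):
    """graph: {node: [(neighbor, p_link), ...]} with 0 < p_link <= 1."""
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, math.inf):
            continue
        for v, p in graph[u]:
            nd = d - math.log(p)          # multiply probabilities = add -log(p)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [dst], dst
    while node != src:                     # walk predecessors back to src
        node = prev[node]
        path.append(node)
    path.reverse()
    return path, math.exp(-dist[dst])

graph = {
    "S": [("A", 0.9), ("B", 0.99)],
    "A": [("D", 0.9)],
    "B": [("C", 0.95)],
    "C": [("D", 0.95)],
    "D": [],
}
route, p_ok = best_route(graph, "S", "D")
# The three-hop route via B and C (0.99*0.95*0.95 ~ 0.893) beats the
# two-hop route via A (0.9*0.9 = 0.81).
```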

  18. Summary of Sonic Boom Rise Times Observed During FAA Community Response Studies over a 6-Month Period in the Oklahoma City Area

    NASA Technical Reports Server (NTRS)

    Maglieri, Domenic J.; Sothcott, Victor E.

    1990-01-01

    The sonic boom signature data acquired from about 1225 supersonic flights over a 6-month period in 1964 in the Oklahoma City area were enhanced with the addition of data relating to rise times and total signature duration. These latter parameters, not available at the time of publication of the original report on the Oklahoma City sonic boom exposures, are listed in tabular form along with overpressure, positive impulse, positive duration, and waveform category. Airplane operating information and surface weather observations are also included. Sonic boom rise times include readings to the 1/2, 3/4, and maximum overpressure values. Rise time relative probabilities for lateral locations 0, 5, and 10 miles from the ground track are presented, along with the variation of rise times with flight altitude. The tabulated signature data, along with corresponding airplane operating conditions and surface and upper-level atmospheric information, are also available in electronic files in a format for more efficient and effective utilization.

  19. [Hospice and palliative care in the outpatient department].

    PubMed

    Ikenaga, M; Tsuneto, S

    2000-10-01

    In the medical environment, information disclosure to patients and respect for autonomy have spread rapidly. Today, many terminally ill cancer patients wish to spend as much time at home as possible. In such situations, a patient who has been informed that curative treatments are no longer expected to be beneficial can now hope to receive home care and visiting care from hospice/palliative care services. The essential concepts of hospice/palliative care are symptom management, communication, family care and a multidisciplinary approach. These concepts are also important in the outpatient department. In particular, medical staff need to understand and utilize management strategies for the common symptoms from which terminally ill cancer patients suffer (e.g., cancer pain, anorexia/fatigue, dyspnea, nausea/vomiting, constipation, hypercalcemia and psychological symptoms). They also need to know how to use continuous subcutaneous infusion for symptom management in the patient's last few days. The present paper explains the clinical practices of hospice/palliative care in the outpatient department. Also discussed is the support of individual lives so that maximum QOL is provided for patients cared for at home.

  20. Efficient Bayesian experimental design for contaminant source identification

    NASA Astrophysics Data System (ADS)

    Zhang, Jiangjiang; Zeng, Lingzao; Chen, Cheng; Chen, Dingjiang; Wu, Laosheng

    2015-01-01

    In this study, an efficient full Bayesian approach is developed for the optimal sampling well location design and source parameters identification of groundwater contaminants. An information measure, i.e., the relative entropy, is employed to quantify the information gain from concentration measurements in identifying unknown parameters. In this approach, the sampling locations that give the maximum expected relative entropy are selected as the optimal design. After the sampling locations are determined, a Bayesian approach based on Markov Chain Monte Carlo (MCMC) is used to estimate unknown parameters. In both the design and estimation, the contaminant transport equation is required to be solved many times to evaluate the likelihood. To reduce the computational burden, an interpolation method based on the adaptive sparse grid is utilized to construct a surrogate for the contaminant transport equation. The approximated likelihood can be evaluated directly from the surrogate, which greatly accelerates the design and estimation process. The accuracy and efficiency of our approach are demonstrated through numerical case studies. It is shown that the methods can be used to assist in both single sampling location and monitoring network design for contaminant source identifications in groundwater.
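The design criterion above (pick the sampling location maximizing expected relative entropy between posterior and prior) can be illustrated with a toy Monte Carlo sketch. This is not the paper's implementation: the 1-D "transport model", parameter grid, noise level, and candidate locations are all invented assumptions.

```python
import math
import random

# Illustrative sketch of expected-information-gain sampling design:
# for each candidate location, simulate measurements and average the
# KL divergence from prior to posterior over a discrete parameter grid.
random.seed(1)

def forward(theta, x):
    """Toy forward model: concentration decays with distance from a source at x = 2."""
    return theta * math.exp(-abs(x - 2.0))

def expected_relative_entropy(x, thetas, prior, sigma=0.1, n_mc=400):
    gain = 0.0
    for _ in range(n_mc):
        theta_true = random.choices(thetas, weights=prior)[0]
        y = forward(theta_true, x) + random.gauss(0.0, sigma)
        # Posterior over theta given the simulated measurement (Bayes rule).
        like = [math.exp(-0.5 * ((y - forward(t, x)) / sigma) ** 2) for t in thetas]
        z = sum(l * p for l, p in zip(like, prior))
        post = [l * p / z for l, p in zip(like, prior)]
        gain += sum(p * math.log(p / q) for p, q in zip(post, prior) if p > 0)
    return gain / n_mc

thetas = [0.5, 1.0, 1.5]           # candidate source strengths (assumed)
prior = [1.0 / 3.0] * 3
candidates = [0.0, 1.0, 2.0, 4.0]  # candidate well locations (assumed)
best = max(candidates, key=lambda x: expected_relative_entropy(x, thetas, prior))
# Measuring nearest the source (x = 2.0) separates the hypotheses best.
```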

  1. Utility-based early modulation of processing distracting stimulus information.

    PubMed

    Wendt, Mike; Luna-Rodriguez, Aquiles; Jacobsen, Thomas

    2014-12-10

    Humans are selective information processors who efficiently filter out goal-inappropriate stimulus information to gain control over their actions. Nonetheless, stimuli which are both unnecessary for solving a current task and liable to cue an incorrect response (i.e., "distractors") frequently modulate task performance, even when consistently paired with a physical feature that makes them easily discernible from target stimuli. Current models of cognitive control assume adjustment of the processing of distractor information based on the overall distractor utility (e.g., predictive value regarding the appropriate response, likelihood to elicit conflict with target processing). Although studies on distractor interference have supported the notion of utility-based processing adjustment, previous evidence is inconclusive regarding the specificity of this adjustment for distractor information and the stage(s) of processing affected. To assess the processing of distractors during sensory-perceptual phases, we applied EEG recording in a stimulus identification task involving successive distractor-target presentation and manipulated the overall distractor utility. Behavioral measures replicated previously found utility modulations of distractor interference. Crucially, distractor-evoked visual potentials (i.e., posterior N1) were more pronounced in high-utility than low-utility conditions. This effect generalized to distractors unrelated to the utility manipulation, providing evidence for item-unspecific adjustment of early distractor processing to the experienced utility of distractor information. Copyright © 2014 the authors.

  2. Comparison of methods of extracting information for meta-analysis of observational studies in nutritional epidemiology.

    PubMed

    Bae, Jong-Myon

    2016-01-01

    A common method for conducting a quantitative systematic review (QSR) of observational studies in nutritional epidemiology is the "highest versus lowest intake" method (HLM), in which only the information concerning the effect size (ES) of the highest category of a food item relative to its lowest category is collected. In the interval collapsing method (ICM), by contrast, a method suggested to enable maximum utilization of all available information, the ES information is collected by collapsing all categories into a single category. This study aimed to compare the ES and summary effect size (SES) between the HLM and the ICM. A QSR evaluating citrus fruit intake and the risk of pancreatic cancer, with the SES calculated using the HLM, was selected. The ES and SES were estimated by performing a meta-analysis using the fixed-effect model. The directionality and statistical significance of the ES and SES were used as criteria for determining the concordance between the HLM and ICM outcomes. No significant differences were observed in the directionality of the SES extracted by the HLM or the ICM. The application of the ICM, which uses a broader information base, yielded more consistent ES and SES, and narrower confidence intervals, than the HLM. The ICM is advantageous over the HLM owing to its higher statistical accuracy in extracting information for QSRs in nutritional epidemiology, and its application is therefore recommended for future studies.
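The fixed-effect pooling used to compute the SES in both methods is standard inverse-variance weighting, sketched below. The per-study log relative risks and standard errors are invented for illustration.

```python
import math

# Hedged sketch of fixed-effect (inverse-variance) meta-analysis:
# pool per-study effect sizes into a summary effect size (SES) with a
# 95% confidence interval. The three studies below are hypothetical.
def fixed_effect_pool(effects, ses):
    """Return the pooled effect and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Invented log relative risks and standard errors for three studies.
pooled, ci = fixed_effect_pool([-0.22, -0.10, -0.35], [0.10, 0.15, 0.20])
# Collapsing more intake categories (ICM-style) lowers each study's
# standard error, which narrows this interval relative to the HLM.
```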

  3. Report of the Governor's Blue Ribbon Transportation Task Force

    DOT National Transportation Integrated Search

    1982-12-01

    Governor Ray appointed the Blue Ribbon Transportation Task Force to provide guidance concerning specific steps that can be taken to: achieve maximum efficiency in the utilization of transportation resources; preserve essential transportation services...

  4. Applying Bayesian Item Selection Approaches to Adaptive Tests Using Polytomous Items

    ERIC Educational Resources Information Center

    Penfield, Randall D.

    2006-01-01

    This study applied the maximum expected information (MEI) and the maximum posterior-weighted information (MPI) approaches of computer adaptive testing item selection to the case of a test using polytomous items following the partial credit model. The MEI and MPI approaches are described. A simulation study compared the efficiency of ability…

  5. 78 FR 9035 - Renewal and Revision of a Previously Approved Information Collection; Comment Request; State...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-07

    ... maximum advertised speed, technology type and spectrum (if applicable) for each broadband provider... funding to collect the maximum advertised speed and technology type to which various classes of Community... businesses use the data to identify where broadband is available, the advertised speeds and other information...

  6. Two-trait-locus linkage analysis: A powerful strategy for mapping complex genetic traits

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schork, N.J.; Boehnke, M.; Terwilliger, J.D.

    1993-11-01

    Nearly all diseases mapped to date follow clear Mendelian, single-locus segregation patterns. In contrast, many common diseases such as diabetes, psoriasis, several forms of cancer, and schizophrenia are familial and appear to have a genetic component but do not exhibit simple Mendelian transmission. More complex models are required to explain the genetics of these important diseases. In this paper, the authors explore two-trait-locus, two-marker-locus linkage analysis in which two trait loci are mapped simultaneously to separate genetic markers. The authors compare the utility of this approach to standard one-trait-locus, one-marker-locus linkage analysis with and without allowance for heterogeneity. The authors also compare the utility of the two-trait-locus, two-marker-locus analysis to two-trait-locus, one-marker-locus linkage analysis. For common diseases, pedigrees are often bilineal, with disease genes entering via two or more unrelated pedigree members. Since such pedigrees often are avoided in linkage studies, the authors also investigate the relative information content of unilineal and bilineal pedigrees. For the dominant-or-recessive and threshold models that the authors consider, the authors find that two-trait-locus, two-marker-locus linkage analysis can provide substantially more linkage information, as measured by expected maximum lod score, than standard one-trait-locus, one-marker-locus methods, even allowing for heterogeneity, while, for a dominant-or-dominant generating model, one-locus models that allow for heterogeneity extract essentially as much information as the two-trait-locus methods. For these three models, the authors also find that bilineal pedigrees provide sufficient linkage information to warrant their inclusion in such studies. The authors discuss strategies for assessing the significance of the two linkages assumed in two-trait-locus, two-marker-locus models. 37 refs., 1 fig., 4 tabs.

  7. 78 FR 75366 - 30-Day Notice of Proposed Information Collection: Public Housing Energy Audits and Utility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-12-11

    ... Information Collection: Public Housing Energy Audits and Utility Allowances AGENCY: Office of the Chief... Title of Information Collection: Public Housing Energy Audits and Utility Allowances. OMB Approval... C, Energy Audit and Energy Conservation Measures, requires PHAs to complete energy audits once every...

  8. Brain tissues volume measurements from 2D MRI using parametric approach

    NASA Astrophysics Data System (ADS)

    L'vov, A. A.; Toropova, O. A.; Litovka, Yu. V.

    2018-04-01

    The purpose of this paper is to propose a fully automated method for the volume assessment of structures within the human brain. Our statistical approach uses the maximum interdependency principle in the decision-making process for measurement consistency and unequal observations. Outlier detection is performed using the maximum normalized residual test. We propose a statistical model that utilizes knowledge of tissue distribution in the human brain and applies partial data restoration to improve precision. The proposed approach is computationally efficient and independent of the segmentation algorithm used in the application.
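The maximum normalized residual (Grubbs) test mentioned above can be sketched in a few lines. The data and the tabulated critical value (for n = 8, two-sided alpha = 0.05) are illustrative assumptions, not values from the paper.

```python
import statistics

# Minimal sketch of the maximum normalized residual (Grubbs) test for
# flagging a single inconsistent measurement in a sample.
def max_normalized_residual(data):
    """Return the largest |x - mean| / s and the index of that point."""
    mean = statistics.mean(data)
    s = statistics.stdev(data)
    return max((abs(x - mean) / s, i) for i, x in enumerate(data))

measurements = [10.1, 10.3, 9.9, 10.0, 10.2, 10.1, 9.8, 14.0]  # invented
g, suspect = max_normalized_residual(measurements)
G_CRIT = 2.126                     # tabulated Grubbs value, n = 8, alpha = 0.05
is_outlier = g > G_CRIT            # measurements[7] exceeds the threshold
```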

  9. Scheduling for Parallel Supercomputing: A Historical Perspective of Achievable Utilization

    NASA Technical Reports Server (NTRS)

    Jones, James Patton; Nitzberg, Bill

    1999-01-01

    The NAS facility has operated parallel supercomputers for the past 11 years, including the Intel iPSC/860, Intel Paragon, Thinking Machines CM-5, IBM SP-2, and Cray Origin 2000. Across this wide variety of machine architectures, across a span of 10 years, across a large number of different users, and through thousands of minor configuration and policy changes, the utilization of these machines shows three general trends: (1) scheduling using a naive FIFO first-fit policy results in 40-60% utilization, (2) switching to the more sophisticated dynamic backfilling scheduling algorithm improves utilization by about 15 percentage points (yielding about 70% utilization), and (3) reducing the maximum allowable job size further increases utilization. Most surprising is the consistency of these trends. Over the lifetime of the NAS parallel systems, we made hundreds, perhaps thousands, of small changes to hardware, software, and policy, yet, utilization was affected little. In particular these results show that the goal of achieving near 100% utilization while supporting a real parallel supercomputing workload is unrealistic.
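The FIFO-versus-backfilling contrast above can be illustrated with a toy event-driven simulation. This is a simplified backfilling policy (later jobs slip in whenever they fit, with no reservation guarantee, unlike true EASY backfilling), and all job sizes and runtimes are invented; it assumes every job fits on the machine.

```python
# Toy sketch contrasting strict FIFO scheduling with a simplified
# backfilling policy on a machine with `procs` processors. Jobs are
# (processors, runtime) tuples; all values are illustrative.
def simulate(jobs, procs, backfill):
    """Return the makespan of running `jobs` under the chosen policy."""
    queue, running = list(jobs), []
    t, free = 0.0, procs
    while queue or running:
        i = 0
        while i < len(queue):
            width, runtime = queue[i]
            if width <= free:
                running.append((t + runtime, width))
                free -= width
                queue.pop(i)          # next candidate shifts into slot i
            elif backfill:
                i += 1                # skip the blocked job, try later ones
            else:
                break                 # strict FIFO: the head blocks the rest
        running.sort()
        end, width = running.pop(0)   # advance to the next job completion
        t, free = end, free + width
    return t

jobs = [(2, 10.0), (4, 5.0), (2, 10.0)]
fifo_makespan = simulate(jobs, 4, backfill=False)  # the 4-wide job blocks
bf_makespan = simulate(jobs, 4, backfill=True)     # second 2-wide job slips in
# Here backfilling cuts the makespan from 25 to 15 time units, raising
# utilization from 60/(4*25) = 60% to 60/(4*15) = 100%.
```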

  10. Routine health information system utilization and factors associated thereof among health workers at government health institutions in East Gojjam Zone, Northwest Ethiopia.

    PubMed

    Shiferaw, Atsede Mazengia; Zegeye, Dessalegn Tegabu; Assefa, Solomon; Yenit, Melaku Kindie

    2017-08-07

    Using reliable information from routine health information systems over time is an important aid to improving health outcomes, tackling disparities, enhancing efficiency, and encouraging innovation. In Ethiopia, routine health information utilization for enhancing performance is poor among health workers, especially at the peripheral levels of health facilities. Therefore, this study aimed to assess routine health information system utilization and associated factors among health workers at government health institutions in East Gojjam Zone, Northwest Ethiopia. An institution-based cross-sectional study was conducted at government health institutions of East Gojjam Zone, Northwest Ethiopia from April to May, 2013. A total of 668 health workers were selected from government health institutions, using the cluster sampling technique. Data collected using a standard structured and self-administered questionnaire and an observational checklist were cleaned, coded, and entered into Epi-info version 3.5.3, and transferred into SPSS version 20 for further statistical analysis. Variables with a p-value of less than 0.05 in the multiple logistic regression analysis were considered statistically significant factors for the utilization of routine health information systems. The study revealed that 45.8% of the health workers had a good level of routine health information utilization. HMIS training [AOR = 2.72, 95% CI: 1.60, 4.62], good data analysis skills [AOR = 6.40, 95% CI: 3.93, 10.37], supervision [AOR = 2.60, 95% CI: 1.42, 4.75], regular feedback [AOR = 2.20, 95% CI: 1.38, 3.51], and a favorable attitude towards health information utilization [AOR = 2.85, 95% CI: 1.78, 4.54] were found significantly associated with a good level of routine health information utilization. More than half of the health workers working at government health institutions of East Gojjam were poor health information users compared with the findings of other studies. HMIS training, data analysis skills, supervision, regular feedback, and a favorable attitude were factors related to routine health information system utilization. Therefore, comprehensive training, supportive supervision, and regular feedback are highly recommended for improving routine health information utilization among health workers at government health facilities.

  11. The application of a Grey Markov Model to forecasting annual maximum water levels at hydrological stations

    NASA Astrophysics Data System (ADS)

    Dong, Sheng; Chi, Kun; Zhang, Qiyi; Zhang, Xiangdong

    2012-03-01

    Compared with traditional real-time forecasting, this paper proposes a Grey Markov Model (GMM) to forecast the maximum water levels at hydrological stations in the estuary area. The GMM combines the Grey System and Markov theory into a higher-precision model. The GMM takes advantage of the Grey System to predict the trend values and uses the Markov theory to forecast fluctuation values, and thus gives forecast results involving two aspects of information. The procedure for forecasting annual maximum water levels with the GMM contains five main steps: 1) establish the GM (1, 1) model based on the data series; 2) estimate the trend values; 3) establish a Markov Model based on the relative error series; 4) modify the relative errors caused in step 2, and then obtain the relative errors of the second-order estimation; 5) compare the results with measured data and estimate the accuracy. The historical water level records (from 1960 to 1992) at Yuqiao Hydrological Station in the estuary area of the Haihe River near Tianjin, China are utilized to calibrate and verify the proposed model according to the above steps. Every 25 years' data are regarded as a hydro-sequence. Eight groups of simulated results show reasonable agreement between the predicted values and the measured data. The GMM is also applied to the 10 other hydrological stations in the same estuary. The forecast results for all of the hydrological stations are good or acceptable. The feasibility and effectiveness of this new forecasting model have been proved in this paper.
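Steps 1 and 2 of the procedure above (fit the GM(1,1) model and estimate trend values) can be sketched as follows. The input series is an invented geometric sequence, not the Yuqiao station record, and the fitting follows the standard least-squares GM(1,1) formulation.

```python
import math

# Sketch of the GM(1,1) grey model at the core of the Grey Markov
# Model: accumulate the series, fit dx1/dt + a*x1 = b by least
# squares on the grey equation x0(k) + a*z(k) = b, then extrapolate.
def gm11(x0):
    """Fit GM(1,1) to x0 and return predict(k) for the k-th value (k >= 1)."""
    n = len(x0)
    x1 = [sum(x0[:k + 1]) for k in range(n)]              # accumulated series
    z = [0.5 * (x1[k] + x1[k - 1]) for k in range(1, n)]  # background values
    y = x0[1:]
    m = n - 1
    sz, szz = sum(z), sum(v * v for v in z)
    sy, szy = sum(y), sum(v * w for v, w in zip(y, z))
    det = m * szz - sz * sz
    a = (sz * sy - m * szy) / det    # development coefficient
    b = (szz * sy - sz * szy) / det  # grey input
    c = x0[0] - b / a
    def predict(k):
        if k == 1:
            return x0[0]
        # Restored value: difference of consecutive accumulated estimates.
        return c * (math.exp(-a * (k - 1)) - math.exp(-a * (k - 2)))
    return predict

predict = gm11([1.0, 1.1, 1.21, 1.331, 1.4641])  # invented 10%-growth series
# The fitted model extrapolates the 6th value close to the true 1.61051;
# a Markov model on the relative errors would then correct the residue.
```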

  12. Assessing Field-Specific Risk of Soybean Sudden Death Syndrome Using Satellite Imagery in Iowa.

    PubMed

    Yang, S; Li, X; Chen, C; Kyveryga, P; Yang, X B

    2016-08-01

Moderate Resolution Imaging Spectroradiometer (MODIS) satellite imagery from 2004 to 2013 was used to assess field-specific risks of soybean sudden death syndrome (SDS), caused by Fusarium virguliforme, in Iowa. Fields with a high frequency of significant decreases (>10%) in the normalized difference vegetation index (NDVI) observed from late July to mid-August in the historical imagery were hypothesized to be at high SDS risk. These high-risk fields had steeper slopes and shorter distances to flowlines, e.g., creeks and drainages, particularly in the Des Moines Lobe. Field data in 2014 showed a significantly higher SDS level in the high-risk fields than in fields selected without considering NDVI information. On average, low-risk fields had a 10-fold lower F. virguliforme soil density, determined by quantitative polymerase chain reaction, than other surveyed fields. Ordinal logistic regression identified positive correlations between SDS and slope, June NDVI, and May maximum temperature, whereas high June maximum temperature hindered SDS. A modeled SDS risk map showed a clear trend in potential disease occurrence across Iowa. Landsat imagery was analyzed similarly to assess the utility of higher-spatial-resolution data. The results demonstrated the great potential of both MODIS and Landsat imagery for field-specific SDS risk assessment.

  13. Characterization of indoor aerosol temporal variations for the real-time management of indoor air quality

    NASA Astrophysics Data System (ADS)

    Ciuzas, Darius; Prasauskas, Tadas; Krugly, Edvinas; Sidaraviciute, Ruta; Jurelionis, Andrius; Seduikyte, Lina; Kauneliene, Violeta; Wierzbicka, Aneta; Martuzevicius, Dainius

    2015-10-01

The study characterizes dynamic patterns of indoor particulate matter (PM) during various pollution episodes for real-time IAQ management. The variation of PM concentrations was assessed for 20 indoor activities, including cooking-related sources, other thermal sources, and personal care and household products. The pollution episodes were modelled in a full-scale test chamber representing a typical living room with forced ventilation of 0.5 h-1. In most pollution episodes, the maximum particle concentration in the exhaust air was reached within a few minutes. The most rapid increases in particle concentration occurred during thermal-source episodes such as candle, cigarette and incense-stick burning and cooking, while the slowest decay of concentrations was associated with sources emitting ultrafine-particle precursors, such as furniture-polisher spraying and wet mopping with detergent. Placing the particle sensors in the ventilation exhaust versus in the centre of the ceiling yielded comparable results for both maximum concentrations and temporal variations, indicating that both locations are suitable for sensor placement for IAQ management. The data provide information that may be utilized when considering aerosol particle measurements as indicators for the real-time management of IAQ.
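    The post-episode behaviour described above follows the textbook well-mixed-room model: once a source stops, concentration decays roughly exponentially at the air-exchange rate (0.5 h⁻¹ in the study's chamber). A minimal sketch, ignoring deposition and coagulation and using a hypothetical peak concentration:

```python
import math

def decay(c0, ach_per_h, hours):
    """Concentration after `hours` of pure ventilation removal
    in a well-mixed room with air-exchange rate `ach_per_h`."""
    return c0 * math.exp(-ach_per_h * hours)

c0 = 1000.0                    # hypothetical peak concentration (ug/m3)
print(decay(c0, 0.5, 2.0))     # concentration after 2 h at 0.5 h^-1
```

    Sources emitting ultrafine-particle precursors decay more slowly than this because new particles keep forming after the activity ends, which is why the model above is only a lower bound on clearance time.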

  14. Bicriteria Network Optimization Problem using Priority-based Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Gen, Mitsuo; Lin, Lin; Cheng, Runwei

Network optimization is an increasingly important and fundamental issue in fields such as engineering, computer science, operations research, transportation, telecommunication, decision support systems, manufacturing, and airline scheduling. In many applications, however, there are several criteria associated with traversing each edge of a network; for example, cost and flow measures are both important. As a result, there has been recent interest in solving the Bicriteria Network Optimization Problem, which is known to be NP-hard. The efficient set of paths may be very large, possibly exponential in size, so the computational effort required to solve the problem can increase exponentially with problem size in the worst case. In this paper, we propose a genetic algorithm (GA) approach that uses a priority-based chromosome for solving the bicriteria network optimization problem comprising the maximum flow (MXF) model and the minimum cost flow (MCF) model. The objective is to find the set of Pareto-optimal solutions that give the maximum possible flow at minimum cost. The approach also incorporates the Adaptive Weight Approach (AWA), which utilizes information from the current population to readjust weights and obtain search pressure toward a positive ideal point. Numerical experiments on several difficult-to-solve network design problems demonstrate the effectiveness of the proposed method.
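    The adaptive-weight idea can be sketched as follows: each generation, the weights are recomputed from the current population's extreme objective values, so fitness always measures distance toward the ideal point (maximum flow, minimum cost). This is an illustrative scalarization under those assumptions, not the paper's exact formulation, and the objective pairs are made up.

```python
def adaptive_weight_fitness(population):
    """population: list of (flow, cost) objective pairs.
    Returns one fitness per individual; larger = closer to the
    ideal point (max flow, min cost) for the current population."""
    flows = [f for f, _ in population]
    costs = [c for _, c in population]
    f_max, f_min = max(flows), min(flows)
    c_max, c_min = max(costs), min(costs)
    # Weights adapt to the current population's objective ranges
    w_f = 1.0 / (f_max - f_min) if f_max > f_min else 0.0
    w_c = 1.0 / (c_max - c_min) if c_max > c_min else 0.0
    return [w_f * (f - f_min) + w_c * (c_max - c) for f, c in population]

pop = [(10, 80), (8, 60), (12, 140)]   # hypothetical (flow, cost) pairs
print(adaptive_weight_fitness(pop))
```

    Here the first individual scores highest because it is simultaneously close to the population's best flow and best cost, which is exactly the pressure toward the ideal point that AWA is meant to create.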

  15. Bolus-dependent dosimetric effect of positioning errors for tangential scalp radiotherapy with helical tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobb, Eric, E-mail: eclobb2@gmail.com

    2014-04-01

The dosimetric effect of errors in patient position is studied on-phantom as a function of simulated bolus thickness to assess the need for bolus in scalp radiotherapy with tomotherapy. A treatment plan is generated on a cylindrical phantom, mimicking a radiotherapy technique for the scalp that utilizes primarily tangential beamlets. A planning target volume with embedded scalp-like clinical target volumes (CTVs) is planned to a uniform dose of 200 cGy. Translational errors in phantom position are introduced in 1-mm increments and dose is recomputed from the original sinogram. For each error the maximum dose, minimum dose, clinical target dose homogeneity index (HI), and dose-volume histogram (DVH) are presented for simulated bolus thicknesses from 0 to 10 mm. Baseline HI values for all bolus thicknesses were in the 5.5 to 7.0 range, increasing to a maximum of 18.0 to 30.5 for the largest positioning errors when 0 to 2 mm of bolus is used. Utilizing 5 mm of bolus resulted in a maximum HI value of 9.5 for the largest positioning errors. Using 0 to 2 mm of bolus resulted in minimum and maximum dose values of 85% to 94% and 118% to 125% of the prescription dose, respectively; with 5 mm of bolus these values were 98.5% and 109.5%. DVHs showed minimal changes in CTV dose coverage with 5 mm of bolus, even for the largest positioning errors. CTV dose homogeneity becomes increasingly sensitive to errors in patient position as bolus thickness decreases when treating the scalp with primarily tangential beamlets. Performing a radial expansion of the scalp CTV into 5 mm of bolus material minimizes dosimetric sensitivity to positioning errors as large as 5 mm and is therefore recommended.

  16. Water and Sewage Utilities Sector (NAICS 2213)

    EPA Pesticide Factsheets

    Environmental regulation information for water utilities, including drinking and wastewater treatment facilities. Includes links to NESHAP for POTW, compliance information, and information about pretreatment programs.

  17. 41 CFR 102-79.10 - What basic assignment and utilization of space policy governs an Executive agency?

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... must provide a quality workplace environment that supports program operations, preserves the value of... fitness facilities in the workplace when adequately justified. An Executive agency must promote maximum...

  18. Use of screenings to produce HMA mixtures

    DOT National Transportation Integrated Search

    2002-10-01

    Thin-lift hot mix asphalt (HMA) layers are utilized in almost every maintenance and rehabilitation application. These mix types require smaller maximum particle sizes than most conventional HMA surface layers. Although the primary functions of thin-l...

  19. MERCURY SPECIATION AND CAPTURE

    EPA Science Inventory

    In December 2000, the U.S. Environmental Protection Agency (USEPA) announced its intent to regulate mercury emissions from coal-fired electric utility steam generating plants. Maximum achievable control technology (MACT) requirements are to be proposed by December 2003 and finali...

  20. A general methodology for population analysis

    NASA Astrophysics Data System (ADS)

    Lazov, Petar; Lazov, Igor

    2014-12-01

For a given population with N current and M maximum entities, modeled by a Birth-Death Process (BDP) of size M+1, we introduce the utilization parameter ρ, the ratio of the primary birth and death rates in that BDP, which physically determines the (equilibrium) macrostates of the population, and the information parameter ν, interpreted as population information stiffness. The BDP modeling the population is in state n, n=0,1,…,M, if N=n. Given these two key metrics, and applying the continuity law, the equilibrium balance equations for the probability distribution pn=Prob{N=n}, n=0,1,…,M, of the quantity N, and the conservation law, and relying on the fundamental concepts of population information and population entropy, we develop a general methodology for population analysis; by definition, population entropy is the uncertainty related to the population. The essential contribution of this approach is that population information consists of three basic parts: an elastic (Hooke's) or absorption/emission part, a synchronization or inelastic part, and a null part; the first two parts, which uniquely determine the null part (the null part connects them), are the two basic components of the population's Information Spectrum. Population entropy, as the mean value of population information, follows this division of the information. A given population can function in an information-elastic, antielastic or inelastic regime; in an information-linear population, the synchronization part of the information and entropy is absent. The population size, M+1, is the third key metric of the methodology: if a population of infinite size is assumed, most of the key quantities and results established here for populations of finite size vanish.
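    The equilibrium-balance step can be illustrated in the simplest special case: for a BDP with state-independent birth rate λ and death rate μ, detailed balance λ·pn = μ·pn+1 gives pn ∝ ρⁿ with ρ = λ/μ, truncated at the maximum size M. A minimal sketch (this specific rate structure is an assumption for illustration, not the paper's general setting), with population entropy computed as the mean of −log pn:

```python
import math

def bdp_equilibrium(rho, M):
    """Equilibrium distribution p_0..p_M for a truncated BDP with
    constant birth/death rates and utilization parameter rho."""
    weights = [rho ** n for n in range(M + 1)]
    total = sum(weights)
    return [w / total for w in weights]

p = bdp_equilibrium(rho=0.8, M=10)
entropy = -sum(pn * math.log(pn) for pn in p)   # population entropy (nats)
print(p[0], entropy)
```

    Note how ρ alone fixes the macrostate: ρ < 1 concentrates probability at small populations, ρ > 1 at populations near M.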

  1. Maximizing power generation from dark fermentation effluents in microbial fuel cell by selective enrichment of exoelectrogens and optimization of anodic operational parameters.

    PubMed

    Varanasi, Jhansi L; Sinha, Pallavi; Das, Debabrata

    2017-05-01

To selectively enrich an electrogenic mixed consortium capable of utilizing dark fermentative effluents as substrates in microbial fuel cells, and to further enhance power output by optimizing influential anodic operational parameters. A maximum power density of 1.4 W/m3 was obtained by the enriched mixed electrogenic consortium in microbial fuel cells using acetate as substrate. This was increased to 5.43 W/m3 by optimization of the influential anodic parameters. Utilizing dark fermentative effluents as substrates, maximum power densities ranged from 5.2 to 6.2 W/m3 with an average COD removal efficiency of 75% and a coulombic efficiency of 10.6%. A simple strategy is provided for selective enrichment of electrogenic bacteria that can be used in microbial fuel cells for generating power from various dark fermentative effluents.
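    The coulombic efficiency reported above is conventionally computed, for a batch MFC, as the charge actually harvested divided by the charge theoretically available from the COD removed: CE = M·∫I dt / (F·b·V_an·ΔCOD), with M = 32 g/mol O2 and b = 4 electrons per mol O2. A sketch with illustrative numbers (not this study's data):

```python
F = 96485.0          # Faraday constant, C/mol e-
M_O2, b = 32.0, 4    # g/mol O2, mol e- transferred per mol O2

def coulombic_efficiency(charge_C, v_anode_L, delta_cod_g_per_L):
    """Fraction of substrate electrons recovered as current."""
    return (M_O2 * charge_C) / (F * b * v_anode_L * delta_cod_g_per_L)

# e.g. 5 mA average current for 48 h, 0.25 L anode, 1.5 g/L COD removed
charge = 0.005 * 48 * 3600   # integrated current, coulombs
print(coulombic_efficiency(charge, 0.25, 1.5))
```

    Values well below 1, like the 10.6% in the abstract, are typical for mixed consortia because much of the substrate goes to biomass and non-electrogenic metabolism.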

  2. Losses in chopper-controlled DC series motors

    NASA Technical Reports Server (NTRS)

    Hamilton, H. B.

    1982-01-01

Motors for electric vehicle (EV) applications must have different features than dc motors designed for industrial applications. The EV motor application is characterized by the following requirements: (1) the highest possible efficiency from light load to overload, for maximum EV range; (2) large short-time overload capability (the ratio of peak to average power varies from 5:1 in heavy city traffic to 3:1 in suburban driving); and (3) operation from supply voltages of 84 to 144 volts (probably 120 volts maximum). A test facility utilizing a dc generator as a substitute for a battery pack was designed and utilized; criteria for the design of such a facility are presented. Two commercially available EV motors, differing in design detail, were tested. The measured losses are discussed, as are waveforms and their harmonic content, measurements of resistance and inductance, EV motor/chopper application criteria, and motor design considerations.

  3. Influence of temperature on flavour compound production from citrate by Lactobacillus rhamnosus ATCC 7469.

    PubMed

    De Figueroa, R M; Oliver, G; Benito de Cárdenas, I L

    2001-03-01

Citrate utilization by Lactobacillus rhamnosus ATCC 7469 was found to be temperature-dependent. Maximum citrate utilization and the maximum rate of [1,5-14C]citrate incorporation were observed at 37 °C. At this temperature, maximum citrate lyase activity and specific diacetyl and acetoin production (Y(DA%)) were observed. The high levels of alpha-acetolactate synthase and low levels of diacetyl reductase, acetoin reductase and L-lactate dehydrogenase found at 37 °C led to an accumulation of diacetyl and acetoin. Optimum lactic acid production was observed at 45 °C, consistent with the high lactate dehydrogenase activity. NADH oxidase activity increased with culture temperature from 22 °C to 37 °C; thus greater quantities of pyruvate are available for the production of alpha-acetolactate, diacetyl and acetoin, and less diacetyl and acetoin are reduced.

  4. Utilization of Information and Communication Technologies as a Predictor of Educational Stress on Secondary School Students

    ERIC Educational Resources Information Center

    Eskicumali, Ahmet; Arslan, Serhat; Demirtas, Zeynep

    2015-01-01

    The purpose of this study is to examine the relationship between utilization of information and communication technologies and educational stress. Participants were 411 secondary school students. Educational Stress Scale and Utilization of Information and Communication Technologies Scale were used as measures. The relationships between students'…

  5. 18 CFR 38.2 - Communication and information sharing among public utilities and pipelines.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... STANDARDS FOR PUBLIC UTILITY BUSINESS OPERATIONS AND COMMUNICATIONS § 38.2 Communication and information... 18 Conservation of Power and Water Resources 1 2014-04-01 2014-04-01 false Communication and information sharing among public utilities and pipelines. 38.2 Section 38.2 Conservation of Power and Water...

  6. The exometabolome of Clostridium thermocellum reveals overflow metabolism at high cellulose loading

    DOE PAGES

    Holwerda, Evert K.; Thorne, Philip G.; Olson, Daniel G.; ...

    2014-10-21

Background: Clostridium thermocellum is a model thermophilic organism for the production of biofuels from lignocellulosic substrates. The majority of publications studying the physiology of this organism use substrate concentrations of ≤10 g/L. However, industrially relevant concentrations of substrate start at 100 g/L carbohydrate, which corresponds to approximately 150 g/L solids. To gain insight into the physiology of fermentation of high substrate concentrations, we studied the growth on, and utilization of, high concentrations of crystalline cellulose varying from 50 to 100 g/L by C. thermocellum. Results: Using a defined medium, batch cultures of C. thermocellum achieved 93% conversion of cellulose (Avicel) initially present at 100 g/L. The maximum rate of substrate utilization increased with increasing substrate loading. During fermentation of 100 g/L cellulose, growth ceased when about half of the substrate had been solubilized. However, fermentation continued in an uncoupled mode until substrate utilization was almost complete. In addition to commonly reported fermentation products, amino acids - predominantly L-valine and L-alanine - were secreted at concentrations up to 7.5 g/L. Uncoupled metabolism was also accompanied by products not documented previously for C. thermocellum, including isobutanol, meso- and RR/SS-2,3-butanediol and trace amounts of 3-methyl-1-butanol, 2-methyl-1-butanol and 1-propanol. We hypothesize that C. thermocellum uses overflow metabolism to balance its metabolism around the pyruvate node in glycolysis. Conclusion: C. thermocellum is able to utilize industrially relevant concentrations of cellulose, up to 93 g/L. We report here one of the highest degrees of crystalline cellulose utilization observed thus far for a pure culture of C. thermocellum, the highest maximum substrate utilization rate and the highest amount of isobutanol produced by a wild-type organism.

  7. Simultaneous measurement of glucose transport and utilization in the human brain.

    PubMed

    Shestov, Alexander A; Emir, Uzay E; Kumar, Anjali; Henry, Pierre-Gilles; Seaquist, Elizabeth R; Öz, Gülin

    2011-11-01

    Glucose is the primary fuel for brain function, and determining the kinetics of cerebral glucose transport and utilization is critical for quantifying cerebral energy metabolism. The kinetic parameters of cerebral glucose transport, K(M)(t) and V(max)(t), in humans have so far been obtained by measuring steady-state brain glucose levels by proton ((1)H) NMR as a function of plasma glucose levels and fitting steady-state models to these data. Extraction of the kinetic parameters for cerebral glucose transport necessitated assuming a constant cerebral metabolic rate of glucose (CMR(glc)) obtained from other tracer studies, such as (13)C NMR. Here we present new methodology to simultaneously obtain kinetic parameters for glucose transport and utilization in the human brain by fitting both dynamic and steady-state (1)H NMR data with a reversible, non-steady-state Michaelis-Menten model. Dynamic data were obtained by measuring brain and plasma glucose time courses during glucose infusions to raise and maintain plasma concentration at ∼17 mmol/l for ∼2 h in five healthy volunteers. Steady-state brain vs. plasma glucose concentrations were taken from literature and the steady-state portions of data from the five volunteers. In addition to providing simultaneous measurements of glucose transport and utilization and obviating assumptions for constant CMR(glc), this methodology does not necessitate infusions of expensive or radioactive tracers. Using this new methodology, we found that the maximum transport capacity for glucose through the blood-brain barrier was nearly twofold higher than maximum cerebral glucose utilization. The glucose transport and utilization parameters were consistent with previously published values for human brain.
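    The steady-state part of such a model can be sketched directly. Assuming the reversible Michaelis-Menten transport form T = Tmax·(S − P)/(Kt + S + P), where S and P are plasma and brain glucose, and setting transport equal to utilization (T = CMRglc) gives a closed-form brain glucose level. The parameter values below are illustrative placeholders, not the study's fitted estimates:

```python
def brain_glucose_steady_state(S, Tmax, Kt, CMRglc):
    """Brain glucose P at which reversible-MM transport balances
    utilization: solve Tmax*(S - P)/(Kt + S + P) = CMRglc for P."""
    return (S * (Tmax - CMRglc) - Kt * CMRglc) / (Tmax + CMRglc)

# Maximum transport set ~twofold higher than utilization, as reported
print(brain_glucose_steady_state(S=17.0, Tmax=1.0, Kt=1.0, CMRglc=0.5))
```

    In the full method, this relation is fitted jointly with the dynamic (non-steady-state) brain glucose time courses, which is what removes the need to assume a fixed CMRglc from separate tracer studies.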

  8. Stratified and Maximum Information Item Selection Procedures in Computer Adaptive Testing

    ERIC Educational Resources Information Center

    Deng, Hui; Ansley, Timothy; Chang, Hua-Hua

    2010-01-01

    In this study we evaluated and compared three item selection procedures: the maximum Fisher information procedure (F), the a-stratified multistage computer adaptive testing (CAT) (STR), and a refined stratification procedure that allows more items to be selected from the high a strata and fewer items from the low a strata (USTR), along with…

  9. Uncertainty estimation of the self-thinning process by Maximum-Entropy Principle

    Treesearch

    Shoufan Fang; George Z. Gertner

    2000-01-01

    When available information is scarce, the Maximum-Entropy Principle can estimate the distributions of parameters. In our case study, we estimated the distributions of the parameters of the forest self-thinning process based on literature information, and we derived the conditional distribution functions and estimated the 95 percent confidence interval (CI) of the self-...

  10. Impact of Violation of the Missing-at-Random Assumption on Full-Information Maximum Likelihood Method in Multidimensional Adaptive Testing

    ERIC Educational Resources Information Center

    Han, Kyung T.; Guo, Fanmin

    2014-01-01

    The full-information maximum likelihood (FIML) method makes it possible to estimate and analyze structural equation models (SEM) even when data are partially missing, enabling incomplete data to contribute to model estimation. The cornerstone of FIML is the missing-at-random (MAR) assumption. In (unidimensional) computerized adaptive testing…

  11. Peer Review for EPA’s Proposed Approaches to Inform the Derivation of a Maximum Contaminant Level Goal for Perchlorate in Drinking Water

    EPA Science Inventory

    EPA is developing approaches to inform the derivation of a Maximum Contaminant Level Goal (MCLG) for perchlorate in drinking water under the Safe Drinking Water Act. EPA previously conducted an independent, external, scientific peer review of the draft biologically-based dose-res...

  12. Low power data acquisition unit for autonomous geophysical instrumentation

    NASA Astrophysics Data System (ADS)

    Prystai, Andrii

    2017-04-01

The development of autonomous instrumentation for field research is always a challenge that requires knowledge and application of recent advances in technology and component production. Using such advances, a super-low-power, low-cost, stand-alone, GPS-time-synchronized data acquisition unit was created. It makes extended use of microcontroller modules and peripherals, and of special firmware with flexible PLL parameters. The present report discusses the synchronization mode of data sampling in autonomous field instruments that must tolerate random GPS outages. The achieved sampling timing accuracy is better than ±60 ns, without phase jumps or distortion, plus a fixed shift depending on the sample rate. The main application of the system is simultaneous measurement of several channels from magnetic and electric sensors in field conditions for magnetotelluric instruments. The system was first used in the newly developed versions of the LEMI-026 magnetometer and LEMI-423 field station, where it digitizes up to 6 analogue channels with 32-bit resolution in the range ±2.5 V, with digital low-pass filtering and a maximum sample rate of 4 kS/s; it is ready to record within 5 minutes of being turned on. Recently, the system was also successfully utilized with drone-portable magnetometers designed for the detection of metallic objects such as UXO in rural areas, investigation of engineering underground structures, and mapping of ore bodies. Successful tests of the drone-portable system were carried out, and the results are also discussed.

  13. Evaluation and Refinement of a Field-Portable Drinking Water Toxicity Sensor Utilizing Electric Cell-Substrate Impedance Sensing and a Fluidic Biochip

    DTIC Science & Technology

    2014-01-01

Potential interferences tested were chlorine and chloramine (commonly used for drinking water disinfection), geosmin and 2-methyl-isoborneol (MIB… Protection Agency maximum residual disinfectant level for chlorine and chloramine is set at 4 mg l-1 under the Safe Drinking Water Act and thus would… Evaluation and refinement of a field-portable drinking water toxicity sensor utilizing electric cell-substrate impedance sensing and a fluidic

  14. In the queue for total joint replacement: patients' perspectives on waiting times. Ontario Hip and Knee Replacement Project Team.

    PubMed

    Llewellyn-Thomas, H A; Arshinoff, R; Bell, M; Williams, J I; Naylor, C D

    1998-02-01

We assessed patients on the waiting lists of a purposive sample of orthopaedic surgeons in Ontario, Canada, to determine patients' attitudes towards time waiting for hip or knee replacement. We focused on 148 patients who did not have a definite operative date, obtaining complete information on 124 (84%). Symptom severity was assessed with the Western Ontario/McMaster Osteoarthritis Index, and a disease-specific standard gamble was used to elicit patients' overall utility for their arthritic state. Next, in a trade-off task, patients considered a hypothetical choice between a 1-month wait for a surgeon who could provide a 2% risk of post-operative mortality, or a 6-month wait for joint replacement with a 1% risk of post-operative mortality. Waiting times were then shifted systematically until the patient abandoned his/her initial choice, generating a conditional maximum acceptable wait time. Patients were divided in their attitudes, with 57% initially choosing a 6-month wait with a 1% mortality risk. The overall distribution of conditional maximum acceptable wait time scores ranged from 1 to 26 months, with a median of 7 months. Utility values were independently but weakly associated with patients' tolerance of waiting times (adjusted R-square = 0.059, P = 0.004). After splitting the sample along the median into subgroups with relatively 'low' and 'high' tolerance for waiting, the subgroup with the apparently lower tolerance for waiting reported lower utility scores (z = 2.951; P = 0.004) and shorter times since their surgeon first advised them of the need for surgery (z = 3.014; P = 0.003). These results suggest that, in the establishment and monitoring of a queue management system for quality-of-life-enhancing surgery, patients' own perceptions of their overall symptomatic burden and ability to tolerate delayed relief should be considered along with information derived from clinical judgements and pre-weighted health status instruments.

  15. 7 CFR 4280.161 - Direct Loan Process.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... RURAL UTILITIES SERVICE, DEPARTMENT OF AGRICULTURE LOANS AND GRANTS Renewable Energy Systems and Energy... available for direct loans; (2) Applicant and project eligibility criteria; (3) Minimum and maximum loan...; (11) Construction planning and performing development; (12) Requirements after project construction...

  16. 24 CFR 241.565 - Maximum loan amount.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... Purchase and Installation of Energy Conserving Improvements, Solar Energy Systems, and Individual Utility... energy conserving improvements including the purchase thereof, cost of installation, architect's fees... of the energy conserving improvements. (b) An amount which, when added to the existing outstanding...

  17. 24 CFR 241.565 - Maximum loan amount.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... Purchase and Installation of Energy Conserving Improvements, Solar Energy Systems, and Individual Utility... energy conserving improvements including the purchase thereof, cost of installation, architect's fees... of the energy conserving improvements. (b) An amount which, when added to the existing outstanding...

  18. 24 CFR 241.565 - Maximum loan amount.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... Purchase and Installation of Energy Conserving Improvements, Solar Energy Systems, and Individual Utility... energy conserving improvements including the purchase thereof, cost of installation, architect's fees... of the energy conserving improvements. (b) An amount which, when added to the existing outstanding...

  19. 24 CFR 241.565 - Maximum loan amount.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Purchase and Installation of Energy Conserving Improvements, Solar Energy Systems, and Individual Utility... energy conserving improvements including the purchase thereof, cost of installation, architect's fees... of the energy conserving improvements. (b) An amount which, when added to the existing outstanding...

  20. 24 CFR 241.565 - Maximum loan amount.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Purchase and Installation of Energy Conserving Improvements, Solar Energy Systems, and Individual Utility... energy conserving improvements including the purchase thereof, cost of installation, architect's fees... of the energy conserving improvements. (b) An amount which, when added to the existing outstanding...

  1. Budget Impact Analysis of PCSK9 Inhibitors for the Management of Adult Patients with Heterozygous Familial Hypercholesterolemia or Clinical Atherosclerotic Cardiovascular Disease.

    PubMed

    Mallya, Usha G; Boklage, Susan H; Koren, Andrew; Delea, Thomas E; Mullins, C Daniel

    2018-01-01

The aim of this study was to assess the budget impact of introducing the proprotein convertase subtilisin/kexin type 9 inhibitors (PCSK9i) alirocumab and evolocumab to market for the treatment of adults with heterozygous familial hypercholesterolemia or clinical atherosclerotic cardiovascular (CV) disease requiring additional lowering of low-density lipoprotein cholesterol (LDL-C). A 3-year model estimated the costs of lipid-modifying therapy (LMT) and CV events to a hypothetical US health plan of 1 million members, comparing two scenarios: with and without the availability of PCSK9i as add-on therapy to statins. Proportions of patients with uncontrolled LDL-C despite receiving statins, and at risk of CV events, were estimated from real-world data. Total undiscounted annual LMT costs (2017 prices, including PCSK9i costs of $14,563.50), dispensing and healthcare costs, including the costs of CV events, were estimated for all prevalent patients in the target population, based on baseline risk factors. Maximum PCSK9i utilization of 1-5% over 3 years by risk group (following the same pattern as current ezetimibe use) was assumed, with 5-10% as a secondary scenario. Total healthcare budget impacts per target patient (and per member) per month for years 1, 2 and 3 were $3.62 ($0.10), $7.22 ($0.20) and $10.79 ($0.30), respectively, assuming 1-5% maximum PCSK9i utilization, and $15.81 ($0.44), $31.52 ($0.88) and $47.12 ($1.31), respectively, assuming 5-10% utilization. Results were sensitive to changes in model timeframe, years to maximum PCSK9i utilization, and PCSK9i costs. The budget impact of PCSK9i as add-on therapy to statins for patients with hypercholesterolemia is relatively low compared with published estimates for other specialty biologics. Drug cost rebates and discounts are likely to further reduce budget impact.
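    The per-member-per-month (PMPM) figures quoted above are the incremental annual cost spread across all plan members. A toy version of that arithmetic, using the PCSK9i annual cost from the analysis but a purely hypothetical count of treated patients (the study's treated counts are not given in the abstract):

```python
def pmpm(incremental_annual_cost, members):
    """Per-member-per-month budget impact for a health plan."""
    return incremental_annual_cost / (members * 12)

members = 1_000_000
treated = 249                    # hypothetical treated patients
annual_drug_cost = 14_563.50     # PCSK9i annual cost from the analysis
print(pmpm(treated * annual_drug_cost, members))
```

    The real model also nets out offset CV-event costs and displaced LMT costs, which is why its PMPM values are not simply drug cost times uptake.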

  2. Optimal spatio-temporal design of water quality monitoring networks for reservoirs: Application of the concept of value of information

    NASA Astrophysics Data System (ADS)

    Maymandi, Nahal; Kerachian, Reza; Nikoo, Mohammad Reza

    2018-03-01

This paper presents a new methodology for optimizing Water Quality Monitoring (WQM) networks of reservoirs and lakes using the concept of the value of information (VOI) and the results of a calibrated numerical water quality simulation model. Under value of information theory, the water quality at every checkpoint has a specific prior probability that varies in time. After analyzing water quality samples taken from potential monitoring points, the posterior probabilities are updated using Bayes' theorem, and the VOI of the samples is calculated. The stations with maximum VOI are then selected as optimal stations, and this process is repeated for each sampling interval to obtain optimal monitoring locations for each interval. The results of the proposed VOI-based methodology are compared with those obtained using an entropy-theoretic approach. As the results of the two methodologies can be partially different, they are then combined using a weighting method. Finally, the optimal sampling interval and locations of WQM stations are chosen using the Evidential Reasoning (ER) decision-making method. The efficiency and applicability of the methodology are evaluated using available water quantity and quality data for the Karkheh Reservoir in southwestern Iran.
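    The Bayes-update step at the heart of the method can be sketched generically: a checkpoint's prior over quality states is revised by a sample's likelihood, and sampling points whose posteriors shift decisions the most carry the most VOI. The two states, likelihoods, and probabilities below are illustrative, not the paper's values.

```python
def posterior(prior, likelihood):
    """Bayes' theorem over a discrete set of water-quality states:
    posterior_i ∝ prior_i * P(observation | state_i)."""
    joint = [p * l for p, l in zip(prior, likelihood)]
    total = sum(joint)
    return [j / total for j in joint]

prior = [0.7, 0.3]           # P(clean), P(polluted) at a checkpoint
likelihood = [0.2, 0.9]      # P(observed reading | state)
post = posterior(prior, likelihood)
print(post)
```

    A sample has high VOI when, as here, it can flip the most probable state; a sample whose posterior barely moves from the prior adds little value, and its station is a candidate for removal from the network.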

  3. 20 CFR 10.806 - How are the maximum fees defined?

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...

  4. 20 CFR 10.806 - How are the maximum fees defined?

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  5. 20 CFR 10.806 - How are the maximum fees defined?

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...

  6. 20 CFR 10.806 - How are the maximum fees defined?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees... Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time required...

  7. 20 CFR 10.806 - How are the maximum fees defined?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... AMENDED Information for Medical Providers Medical Fee Schedule § 10.806 How are the maximum fees defined? For professional medical services, the Director shall maintain a schedule of maximum allowable fees.../Current Procedural Terminology (HCPCS/CPT) code which represents the relative skill, effort, risk and time...

  8. Human vision is determined based on information theory.

    PubMed

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-03

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.
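    As a point of comparison for the wavelengths quoted above, the intensity-only part of the argument is captured by Wien's displacement law; the entropy-based refinement in the paper (and the atmospheric transmission term) is what moves the optimum away from this simple peak:

```python
# Minimal sketch: the peak of the solar blackbody spectrum via Wien's
# displacement law. This reproduces only the intensity maximum; the paper's
# entropy-based argument refines it.
WIEN_B = 2.898e-3        # Wien displacement constant, m*K
T_SUN = 5778.0           # effective solar temperature, K

def wien_peak_nm(temperature_k):
    """Wavelength of maximum blackbody spectral radiance, in nanometres."""
    return WIEN_B / temperature_k * 1e9

print(round(wien_peak_nm(T_SUN)))  # → 502 (nm), close to the scotopic 508 nm
```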

  9. Human vision is determined based on information theory

    NASA Astrophysics Data System (ADS)

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-11-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition.

  10. Human vision is determined based on information theory

    PubMed Central

    Delgado-Bonal, Alfonso; Martín-Torres, Javier

    2016-01-01

    It is commonly accepted that the evolution of the human eye has been driven by the maximum intensity of the radiation emitted by the Sun. However, the interpretation of the surrounding environment is constrained not only by the amount of energy received but also by the information content of the radiation. Information is related to entropy rather than energy. The human brain follows Bayesian statistical inference for the interpretation of visual space. The maximization of information occurs in the process of maximizing the entropy. Here, we show that the photopic and scotopic vision absorption peaks in humans are determined not only by the intensity but also by the entropy of radiation. We suggest that through the course of evolution, the human eye has not adapted only to the maximum intensity or to the maximum information but to the optimal wavelength for obtaining information. On Earth, the optimal wavelengths for photopic and scotopic vision are 555 nm and 508 nm, respectively, as inferred experimentally. These optimal wavelengths are determined by the temperature of the star (in this case, the Sun) and by the atmospheric composition. PMID:27808236

  11. Neuro-classification of multi-type Landsat Thematic Mapper data

    NASA Technical Reports Server (NTRS)

    Zhuang, Xin; Engel, Bernard A.; Fernandez, R. N.; Johannsen, Chris J.

    1991-01-01

    Neural networks have been successful in image classification and have shown potential for classifying remotely sensed data. This paper presents classifications of multitype Landsat Thematic Mapper (TM) data using neural networks. The Landsat TM image for March 23, 1987, with accompanying ground observation data for a study area in Miami County, Indiana, U.S.A., was utilized to assess recognition of crop residues. Principal components and spectral ratio transformations were performed on the TM data. In addition, a layer of the geographic information system (GIS) for the study site was incorporated to generate GIS-enhanced TM data. This paper discusses (1) the performance of neuro-classification on each type of data, (2) how neural networks recognized each type of data as a new image and (3) comparisons of the results for each type of data obtained using neural networks, maximum likelihood, and minimum distance classifiers.

  12. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95%), and progress is being made towards identifying the mapped spectral classes.
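    The maximum likelihood classification step used above (and in several other records on this page) can be sketched as follows. The two-band training samples are invented; the method, summarizing each class by a mean and covariance and assigning each unknown pixel to the class with the highest Gaussian log-likelihood, is the standard one:

```python
import numpy as np

# Sketch of Gaussian maximum-likelihood classification (training data invented).
def fit_class(samples):
    """Return (mean, covariance) statistics for one spectral class."""
    x = np.asarray(samples, dtype=float)
    return x.mean(axis=0), np.cov(x, rowvar=False)

def ml_classify(pixel, class_stats):
    """Assign a pixel to the class maximizing the Gaussian log-likelihood."""
    best, best_ll = None, -np.inf
    for label, (mu, cov) in class_stats.items():
        diff = pixel - mu
        _, logdet = np.linalg.slogdet(cov)
        ll = -0.5 * (logdet + diff @ np.linalg.solve(cov, diff))
        if ll > best_ll:
            best, best_ll = label, ll
    return best

stats = {
    "water":  fit_class([[10, 12], [11, 13], [9, 11], [10, 14]]),
    "forest": fit_class([[40, 55], [42, 53], [41, 56], [39, 54]]),
}
print(ml_classify(np.array([40.0, 54.0]), stats))  # → forest
```

In the study above the class statistics come from unsupervised clustering of training fields rather than labeled samples, but the assignment rule is the same.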

  13. Mapping forest vegetation with ERTS-1 MSS data and automatic data processing techniques

    NASA Technical Reports Server (NTRS)

    Messmore, J.; Copeland, G. E.; Levy, G. F.

    1975-01-01

    This study was undertaken with the intent of elucidating the forest mapping capabilities of ERTS-1 MSS data when analyzed with the aid of LARS' automatic data processing techniques. The site for this investigation was the Great Dismal Swamp, a 210,000 acre wilderness area located on the Middle Atlantic coastal plain. Due to inadequate ground truth information on the distribution of vegetation within the swamp, an unsupervised classification scheme was utilized. Initially pictureprints, resembling low resolution photographs, were generated in each of the four ERTS-1 channels. Data found within rectangular training fields was then clustered into 13 spectral groups and defined statistically. Using a maximum likelihood classification scheme, the unknown data points were subsequently classified into one of the designated training classes. Training field data was classified with a high degree of accuracy (greater than 95 percent), and progress is being made towards identifying the mapped spectral classes.

  14. Evaluating principal surrogate endpoints with time-to-event data accounting for time-varying treatment efficacy.

    PubMed

    Gabriel, Erin E; Gilbert, Peter B

    2014-04-01

    Principal surrogate (PS) endpoints are relatively inexpensive and easy-to-measure study outcomes that can be used to reliably predict treatment effects on clinical endpoints of interest. Few statistical methods for assessing the validity of potential PSs utilize time-to-event clinical endpoint information, and to our knowledge none allow for the characterization of time-varying treatment effects. We introduce the time-dependent and surrogate-dependent treatment efficacy curve, $\mathrm{TE}(t|s)$, and a new augmented trial design for assessing the quality of a biomarker as a PS. We propose a novel Weibull model and an estimated maximum likelihood method for estimation of the $\mathrm{TE}(t|s)$ curve. We describe the operating characteristics of our methods via simulations. We analyze data from the Diabetes Control and Complications Trial, in which we find evidence of a biomarker with value as a PS.
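    As a hedged sketch of the object being estimated (parameter values and the link between the surrogate and the hazard are invented here; the paper's actual Weibull model differs), a time- and surrogate-dependent efficacy curve can be defined as one minus the treatment/control hazard ratio, TE(t|s) = 1 - h1(t|s)/h0(t):

```python
import math

# Hedged sketch with invented parameters: TE(t|s) = 1 - h1(t|s)/h0(t) using
# Weibull hazards, where the treated-arm scale grows with the surrogate s.
def weibull_hazard(t, shape, scale):
    return (shape / scale) * (t / scale) ** (shape - 1)

def te_curve(t, s, shape=1.5, scale0=10.0, beta=0.3):
    """Efficacy at time t for surrogate level s (illustrative model only)."""
    h0 = weibull_hazard(t, shape, scale0)
    h1 = weibull_hazard(t, shape, scale0 * math.exp(beta * s))
    return 1.0 - h1 / h0

# A larger surrogate response implies greater efficacy at every time t here.
print(round(te_curve(2.0, 1.0), 3) < round(te_curve(2.0, 3.0), 3))  # → True
```

With a common shape parameter, the hazard ratio reduces to exp(-shape*beta*s), so in this toy version TE(t|s) is constant in t; the paper's interest is precisely in models where it is not.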

  15. Which Tibial Tray Design Achieves Maximum Coverage and Ideal Rotation: Anatomic, Symmetric, or Asymmetric? An MRI-based study.

    PubMed

    Stulberg, S David; Goyal, Nitin

    2015-10-01

    Two goals of tibial tray placement in TKA are to maximize coverage and establish proper rotation. Our purpose was to utilize MRI information obtained as part of PSI planning to determine the impact of tibial tray design on the relationship between coverage and rotation. MR images for 100 consecutive knees were uploaded into PSI software. Preoperative planning software was used to evaluate 3 different tray designs: anatomic, symmetric, and asymmetric. Approximately equally good coverage was achieved with all three trays. However, the anatomic compared to symmetric/asymmetric trays required less malrotation (0.3° vs 3.0/2.4°; P < 0.001), with a higher proportion of cases within 5° of neutral (97% vs 73/77%; P < 0.001). In this study, the anatomic tibia optimized the relationship between coverage and rotation. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Development and Implementation of an Ultrasonic Method to Characterize Acoustic and Mechanical Fingernail Properties

    NASA Astrophysics Data System (ADS)

    Vacarescu, Rares Anthony

    The human fingernail is a vital organ used daily and can provide an immense supply of information through the body's biological feedback. By studying the quantitative mechanical and acoustic properties of fingernails, a better understanding of the scarcely investigated field of ungual research can be gained. Investigating fingernail properties with the use of pulse-echo ultrasound is the aim of this thesis. The thesis involves the application of a developed portable ultrasonic device in a hospital-based data collection and the advancement of the ultrasonic methodology to include the calculation of acoustic impedance, density and elasticity. The results of the thesis show that the reflectance method can be utilized to determine fingernail properties with a maximum 17% deviation from literature values. Repeatability of measurements fell within a 95% confidence interval. Thus, the ultrasonic reflectance method was validated and may have potential clinical and cosmetic applications.
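    The reflectance relation such a method relies on is standard acoustics: at an interface the pressure reflection coefficient is R = (Z2 - Z1)/(Z2 + Z1), so a measured R yields the unknown impedance. The measured value below is hypothetical, not taken from the thesis:

```python
# Standard reflectance relation; the measured R here is a hypothetical value.
def impedance_from_reflectance(r, z1=1.48e6):   # z1: water, ~1.48 MRayl
    """Invert R = (Z2 - Z1)/(Z2 + Z1) for the second medium's impedance Z2."""
    return z1 * (1.0 + r) / (1.0 - r)

r_measured = 0.35                                # hypothetical echo amplitude ratio
z_nail = impedance_from_reflectance(r_measured)
print(round(z_nail / 1e6, 2))  # → 3.07 (MRayl)
```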

  17. Far-forward collective scattering measurements by FIR polarimeter-interferometer on J-TEXT tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, P.; Chen, J., E-mail: jiech@hust.edu.cn; Gao, L.

    The multi-channel three-wave polarimeter-interferometer system on the J-TEXT tokamak has been exploited to measure far-forward collective scattering from electron density fluctuations. The diagnostic utilizes far-infrared lasers operated at 432 μm with 17 vertical chords (3 cm chord spacing), covering the entire cross section of the plasma. Scattered laser power is measured using a high-sensitivity Schottky planar diode mixer which can also detect the polarimetric and interferometric phase simultaneously. The system provides a line-integrated measurement of density fluctuations with a maximum measurable wave number k⊥max ≤ 2 cm⁻¹ and a time response up to 350 kHz. Feasibility of the diagnostic has been tested, showing higher sensitivity for detecting fluctuations than the interferometric measurement. The capability of providing spatially resolved information on fluctuations has also been demonstrated in preliminary experimental applications.

  18. Ground-source heat pump case studies and utility programs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lienau, P.J.; Boyd, T.L.; Rogers, R.L.

    1995-04-01

    Ground-source heat pump systems are one of the promising new energy technologies that have shown a rapid increase in usage over the past ten years in the United States. These systems offer substantial benefits to consumers and utilities in energy (kWh) and demand (kW) savings. The purpose of this study was to determine what existing monitored data were available, mainly from electric utilities, on heat pump performance, energy savings and demand reduction for residential, school and commercial building applications. In order to verify performance, information was collected for 253 case studies, mainly from utilities throughout the United States. The case studies were compiled into a database, organized into general information, system information, ground system information, system performance, and additional information. Information was also developed on the status of demand-side management ground-source heat pump programs for about 60 electric utilities and rural electric cooperatives, covering marketing, incentive programs, barriers to market penetration, number of units installed in the service area, and benefits.

  19. The Berkeley SuperNova Ia Program (BSNIP): Dataset and Initial Analysis

    NASA Astrophysics Data System (ADS)

    Silverman, Jeffrey; Ganeshalingam, M.; Kong, J.; Li, W.; Filippenko, A.

    2012-01-01

    I will present spectroscopic data from the Berkeley SuperNova Ia Program (BSNIP), their initial analysis, and the results of attempts to use spectral information to improve cosmological distance determinations to Type Ia supernovae (SNe Ia). The dataset consists of 1298 low-redshift (z < 0.2) optical spectra of 582 SNe Ia observed from 1989 through the end of 2008. Many of the SNe have well-calibrated light curves with measured distance moduli as well as spectra that have been corrected for host-galaxy contamination. I will also describe the spectral classification scheme employed (using the SuperNova Identification code, SNID; Blondin & Tonry 2007), which utilizes a newly constructed set of SNID spectral templates. The sheer size of the BSNIP dataset and the consistency of the observation and reduction methods make this sample unique among all other published SN Ia datasets. I will also discuss measurements of the spectral features of about one-third of the spectra, which were obtained within 20 days of maximum light. I will briefly describe the adopted method of automated, robust spectral-feature definition and measurement, which expands upon similar previous studies. Comparisons of these measurements of SN Ia spectral features to photometric observables will be presented with an eye toward using spectral information to calculate more accurate cosmological distances. Finally, I will comment on related projects which also utilize the BSNIP dataset that are planned for the near future. This research was supported by NSF grant AST-0908886 and the TABASGO Foundation. I am grateful to Marc J. Staley for a Graduate Fellowship.

  20. Net reclassification index at event rate: properties and relationships.

    PubMed

    Pencina, Michael J; Steyerberg, Ewout W; D'Agostino, Ralph B

    2017-12-10

    The net reclassification improvement (NRI) is an attractively simple summary measure quantifying the improvement in performance due to the addition of new risk marker(s) to a prediction model. Originally proposed for settings with well-established classification thresholds, it was quickly extended into applications with no thresholds in common use. Here we aim to explore properties of the NRI at event rate. We express this NRI as a difference in performance measures for the new versus old model and show that the quantity underlying this difference is related to several global as well as decision-analytic measures of model performance. It maximizes the relative utility (standardized net benefit) across all classification thresholds and can be viewed as the Kolmogorov-Smirnov distance between the distributions of risk among events and non-events. It can be expressed as a special case of the continuous NRI, measuring reclassification from the 'null' model with no predictors. It is also a criterion based on the value of information and quantifies the reduction in expected regret for a given regret function, casting the NRI at event rate as a measure of incremental reduction in expected regret. More generally, we find it informative to present plots of standardized net benefit/relative utility for the new versus old model across the domain of classification thresholds. These plots can then be summarized with their maximum values, and the increment in model performance can be described by the NRI at event rate. We provide theoretical examples and a clinical application on the evaluation of prognostic biomarkers for atrial fibrillation. Copyright © 2016 John Wiley & Sons, Ltd.
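    The Kolmogorov-Smirnov interpretation above lends itself to a short empirical sketch (the predicted risks are invented): compute, for each model, the KS distance between risks among events and among non-events, and take the new-minus-old difference:

```python
import numpy as np

# Illustrative sketch with invented risks: the quantity underlying the NRI at
# event rate, computed per model as the Kolmogorov-Smirnov distance between
# predicted risks among events and among non-events.
def ks_distance(risk_events, risk_nonevents):
    grid = np.union1d(risk_events, risk_nonevents)
    cdf_e = np.searchsorted(np.sort(risk_events), grid, side="right") / len(risk_events)
    cdf_n = np.searchsorted(np.sort(risk_nonevents), grid, side="right") / len(risk_nonevents)
    return np.abs(cdf_e - cdf_n).max()

old_e, old_n = np.array([0.3, 0.4, 0.6]), np.array([0.2, 0.3, 0.5])
new_e, new_n = np.array([0.5, 0.7, 0.8]), np.array([0.1, 0.2, 0.4])
nri_at_event_rate = ks_distance(new_e, new_n) - ks_distance(old_e, old_n)
print(round(nri_at_event_rate, 3))  # → 0.667
```

Here the new model separates events from non-events perfectly (KS distance 1), while the old model overlaps heavily (KS distance 1/3), giving an improvement of 2/3.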

  1. 77 FR 35657 - Information Collection Activity; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-06-14

    ... Information Collection Activity; Comment Request AGENCY: Rural Utilities Service, USDA. ACTION: Notice and... 35, as amended), the United States Department of Agriculture (USDA) Rural Development administers rural utilities programs through the Rural Utilities Service. The USDA Rural Development invites...

  2. Attitude determination and calibration using a recursive maximum likelihood-based adaptive Kalman filter

    NASA Technical Reports Server (NTRS)

    Kelly, D. A.; Fermelia, A.; Lee, G. K. F.

    1990-01-01

    An adaptive Kalman filter design that utilizes recursive maximum likelihood parameter identification is discussed. At the center of this design is the Kalman filter itself, which has the responsibility for attitude determination. At the same time, the identification algorithm is continually identifying the system parameters. The approach is applicable to nonlinear, as well as linear systems. This adaptive Kalman filter design has much potential for real time implementation, especially considering the fast clock speeds, cache memory and internal RAM available today. The recursive maximum likelihood algorithm is discussed in detail, with special attention directed towards its unique matrix formulation. The procedure for using the algorithm is described along with comments on how this algorithm interacts with the Kalman filter.
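    A much-simplified 1D stand-in for this structure can be sketched as follows. All numbers are invented, and the paper's recursive maximum likelihood identification is replaced here by a simpler innovation-based running estimate of the measurement-noise variance; only the overall pattern, filtering while simultaneously identifying a noise parameter, is what the sketch shows:

```python
# Minimal 1D adaptive Kalman filter sketch (all numbers invented). A running
# innovation-based estimate of the measurement-noise variance R stands in for
# the recursive maximum likelihood identification stage described above.
def adaptive_kalman(measurements, q=0.01, r0=1.0, forget=0.98):
    x, p, r = 0.0, 1.0, r0
    for z in measurements:
        p = p + q                       # predict (random-walk state model)
        innov = z - x                   # innovation
        s = p + r                       # innovation variance
        gain = p / s
        x = x + gain * innov            # measurement update
        p = (1.0 - gain) * p
        # E[innov^2] = P + R, so innov^2 - P is a noisy sample of R;
        # blend it in with exponential forgetting, floored to stay positive.
        r = forget * r + (1.0 - forget) * max(innov * innov - p, 1e-4)
    return x, r

x, r = adaptive_kalman([1.1, 0.9, 1.05, 0.95, 1.0] * 20)
print(round(x, 2))  # estimate settles near the true level of 1.0
```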

  3. Navigator alignment using radar scan

    DOEpatents

    Doerry, Armin W.; Marquette, Brandeis

    2016-04-05

    The various technologies presented herein relate to the determination and correction of the heading error of a platform. Knowledge of at least one of a maximum Doppler frequency or a minimum Doppler bandwidth pertaining to a plurality of radar echoes can be utilized to facilitate correction of the heading error. Heading error can occur as a result of component drift. In an ideal situation, the boresight direction of an antenna or the front of an aircraft will have associated with it at least one of a maximum Doppler frequency or a minimum Doppler bandwidth. As the boresight direction of the antenna strays from the direction of travel, at least one of the maximum Doppler frequency or the minimum Doppler bandwidth will shift away, either left or right, from the ideal situation.
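    The geometry behind the maximum-Doppler variant can be sketched as follows (platform speed, wavelength and the injected error are invented): the Doppler along a direction offset by angle ψ from the velocity vector is f = (2v/λ)·cos ψ, so an observed peak Doppler below the true maximum reveals the magnitude of the heading error (its sign, left or right, needs the bandwidth cue mentioned above):

```python
import math

# Geometry sketch with invented numbers: recover the magnitude of a heading
# error from the observed peak Doppler, using f = (2v / wavelength) * cos(psi).
def heading_error_deg(peak_doppler_hz, speed_mps, wavelength_m):
    f_max = 2.0 * speed_mps / wavelength_m   # Doppler at zero heading error
    ratio = min(1.0, peak_doppler_hz / f_max)
    return math.degrees(math.acos(ratio))

v, lam = 100.0, 0.03                      # 100 m/s platform, 3 cm radar
f_true_max = 2 * v / lam                  # ≈ 6666.7 Hz
observed = f_true_max * math.cos(math.radians(2.0))  # inject a 2-degree error
print(round(heading_error_deg(observed, v, lam), 1))  # → 2.0
```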

  4. Utilization of information technology in eastern North Carolina physician practices: determining the existence of a digital divide.

    PubMed

    Rosenthal, David A; Layman, Elizabeth J

    2008-02-13

    The United States Department of Health and Human Services (DHHS) has emphasized the importance of utilizing health information technologies, thus making the availability of electronic resources critical for physicians across the country. However, few empirical assessments exist regarding the current status of computerization and utilization of electronic resources in physician offices and physicians' perceptions of the advantages and disadvantages of computerization. Through a survey of physicians' utilization and perceptions of health information technology, this study found that a "digital divide" existed for eastern North Carolina physicians in smaller physician practices. The physicians in smaller practices were less likely to utilize or be interested in utilizing electronic health records, word processing applications, and the Internet.

  5. Juice blends--a way of utilization of under-utilized fruits, vegetables, and spices: a review.

    PubMed

    Bhardwaj, Raju Lal; Pandey, Shruti

    2011-07-01

    The post-harvest shelf life of most fruits and vegetables is very limited due to their perishable nature. In India, more than 20-25 percent of fruits and vegetables are spoiled before utilization. Despite being the world's second largest producer of fruits and vegetables, India processes only 1.5 percent of the total fruits and vegetables produced. Most fruit and vegetable juices turn bitter after extraction due to the conversion of chemical compounds. The utilization of several highly nutritive fruits and vegetables thus remains very limited due to high acidity, astringency, bitterness, and other factors. Blending can improve the flavor, palatability, and nutritive and medicinal value of juices from fruits such as aonla, mango, papaya, pineapple, citrus, ber, pear, apple, and watermelon; vegetables including bottle gourd, carrot, beet root, and bitter gourd; medicinal plants like aloe vera; and spices. All these natural products are valued very highly for their refreshing juice, nutritional value, pleasant flavor, and medicinal properties. Fruits and vegetables are also a rich source of sugars, vitamins, and minerals. However, some fruits and vegetables have an off flavor and bitterness although they are an excellent source of vitamins, enzymes, and minerals. Therefore, blending two or more fruit and vegetable juices with spice extracts for the preparation of nutritive ready-to-serve (RTS) beverages is thought to be a convenient and economic alternative for utilizing these fruits and vegetables. Moreover, blending opens the way to new product development in the form of a natural health drink, which may also serve as an appetizer. The present review focuses on the blending of fruits, under-utilized fruits, vegetables, medicinal plants, and spices in appropriate proportions for the preparation of natural fruit- and vegetable-based nutritive beverages.

  6. 48 CFR 19.705-7 - Liquidated damages.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS The Small Business Subcontracting Program 19.705-7 Liquidated damages. (a) Maximum practicable utilization of small business, veteran-owned small business, service-disabled veteran-owned small business, HUBZone small business, small disadvantaged business and women-owned...

  7. 48 CFR 19.705-7 - Liquidated damages.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS The Small Business Subcontracting Program 19.705-7 Liquidated damages. (a) Maximum practicable utilization of small business, veteran-owned small business, service-disabled veteran-owned small business, HUBZone small business, small disadvantaged business and women-owned...

  8. 48 CFR 19.705-7 - Liquidated damages.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS The Small Business Subcontracting Program 19.705-7 Liquidated damages. (a) Maximum practicable utilization of small business, veteran-owned small business, service-disabled veteran-owned small business, HUBZone small business, small disadvantaged business and women-owned...

  9. 48 CFR 19.705-7 - Liquidated damages.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... SOCIOECONOMIC PROGRAMS SMALL BUSINESS PROGRAMS The Small Business Subcontracting Program 19.705-7 Liquidated damages. (a) Maximum practicable utilization of small business, veteran-owned small business, service-disabled veteran-owned small business, HUBZone small business, small disadvantaged business and women-owned...

  10. Advancements in medicine from aerospace research

    NASA Technical Reports Server (NTRS)

    Wooten, F. T.

    1971-01-01

    NASA has taken the lead in implementing the concept of technology utilization, and the Technology Utilization Program is the first vital step toward ensuring that a technological society derives maximum benefit from the costs of technology. Experience has shown that the active approach to technology transfer is unique and is well received in the medical profession when appropriate problems are tackled. The problem-solving approach is a useful one at the precise time when medicine is recognizing the need for new technology.

  11. A Study to Determine the Cost of Quality Assurance to the Department of Surgery at US Army Medical Department Activity. Fort Benning, Georgia

    DTIC Science & Technology

    1984-07-01

    to make maximum use of available time. General Surgery averaged ninety-six operative procedures per month during 1983, ophthalmology averaged fifteen... health care providers 2. Document evaluation of: a) Surgical care review (tissue review) b) Blood utilization c) Antibiotics utilization d) Pharmacy and...

  12. Comparing digital data processing techniques for surface mine and reclamation monitoring

    NASA Technical Reports Server (NTRS)

    Witt, R. G.; Bly, B. G.; Campbell, W. J.; Bloemer, H. H. L.; Brumfield, J. O.

    1982-01-01

    The results of three techniques used for processing Landsat digital data are compared for their utility in delineating areas of surface mining and subsequent reclamation. An unsupervised clustering algorithm (ISOCLS), a maximum-likelihood classifier (CLASFY), and a hybrid approach utilizing canonical analysis (ISOCLS/KLTRANS/ISOCLS) were compared by means of a detailed accuracy assessment with aerial photography at NASA's Goddard Space Flight Center. Results show that the hybrid approach was superior to the traditional techniques in distinguishing strip mined and reclaimed areas.

  13. Development of a Medicare Beneficiary Comprehension Test: Assessing Medicare Part D Beneficiaries' Comprehension of Their Benefits

    PubMed Central

    Aruru, Meghana V.; Salmon, J. Warren

    2013-01-01

    Background Medicare Part D, the senior prescription drug benefit plan, was introduced through the Medicare Modernization Act of 2003. Medicare beneficiaries receive information about plan options through multiple sources, and it is often assumed by consumer health plans and healthcare providers that beneficiaries can understand and compare plan information. Medicare beneficiaries are older, may have cognitive problems, and may not have a true understanding of managed care. They are more likely than younger persons to have inadequate health literacy, thereby demonstrating significant gaps in knowledge and information about healthcare. Objective To develop a Medicare Beneficiary Comprehension Test (MBCT) to evaluate Medicare beneficiaries' understanding of Part D plan concepts, as presented in the 2008 Medicare & You handbook. Methods A 10-question MBCT was developed using a case-vignette approach that required beneficiaries to read portions of the Medicare & You handbook and answer Part D–related questions associated with healthcare decision-making. The test was divided into 2 sections: (I) insurance concepts and (II) utilization management/appeals and grievances to cover standard terminology, as well as newer utilization management and appeals and grievances procedures that are unique to Part D. The test was administered to 100 beneficiaries at 2 sites—a university geriatrics clinic and a private retirement facility. Beneficiaries were tested for cognition and health literacy before being administered the test. Results The mean score on the MBCT was 3.5 of a maximum of 5, with no statistical difference found between both sites. Ten faculty members and 4 graduate students assessed the content validity of the instrument using a 4-point Likert rating rubric. The construct validity of the instrument was assessed using a principal components analysis with varimax rotation. The principal components analysis yielded 4 factors that were labeled as “Plan D concepts,” “managed care/utilization management,” “cost-sharing,” and “plan comparisons.” The factor analysis indicated that the test is multidimensional and did measure the construct. Conclusions Medicare beneficiaries' understanding of Part D may play a key role in the management of their drug use and health and the associated outcomes. The MBCT and its pending revisions can be administered to beneficiaries with differing health outcomes. Medicare beneficiaries are often faced with several pieces of information involving a complex array of choices amidst bewildering plan options. It is crucial that beneficiaries and/or their family members involved in the decision-making process understand the plan benefits to truly make an informed decision. As the number of Medicare beneficiaries increases over the coming years with the baby boomers, it becomes even more imperative that the elderly have improved access to treatments that can achieve desirable outcomes. Measuring comprehension by Medicare beneficiaries may be an initial step toward understanding more complex issues, such as treatment adherence, decision-making, and, ultimately, trends in healthcare utilization and outcomes. PMID:24991375

  14. 77 FR 26735 - Information Collection Activity; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-05-07

    ... AGENCY: Rural Utilities Service, USDA. ACTION: Notice and request for comments. SUMMARY: In accordance... Department of Agriculture (USDA) Rural Development administers rural utilities programs through the Rural Utilities Service (RUS). The USDA Rural Development invites comments on the following information...

  15. A maximum likelihood convolutional decoder model vs experimental data comparison

    NASA Technical Reports Server (NTRS)

    Chen, R. Y.

    1979-01-01

    This article describes the comparison of a maximum likelihood convolutional decoder (MCD) prediction model with the actual performance of the MCD at the Madrid Deep Space Station. The MCD prediction model is used to develop a subroutine that has been utilized by the Telemetry Analysis Program (TAP) to compute the MCD bit error rate for a given signal-to-noise ratio. The results indicate that the TAP predictions agree well with the experimental measurements. An optimal modulation index can also be found through TAP.

  16. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
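
    The Cramer-Rao (CR) bounds compared against search-based confidence limits above can be illustrated with a short sketch. This is a minimal illustration under stated assumptions: a toy exponential model and finite-difference sensitivities stand in for MNRES's surface-fit sensitivities; all names and values are hypothetical, not from the study.

```python
import numpy as np

def cramer_rao_bounds(jac, sigma):
    """Cramer-Rao lower bounds on parameter standard errors for a
    nonlinear least-squares fit with output noise std `sigma`.
    jac[i, j] is the sensitivity d y_i / d theta_j at the estimate."""
    fisher = jac.T @ jac / sigma**2       # Fisher information matrix
    cov = np.linalg.inv(fisher)           # CR bound on the parameter covariance
    return np.sqrt(np.diag(cov))

# Illustrative model y = theta1 * exp(theta2 * t), sensitivities by
# forward finite differences (a stand-in for surface-fit sensitivities)
t = np.linspace(0.0, 1.0, 50)
theta = np.array([2.0, -1.0])
model = lambda p: p[0] * np.exp(p[1] * t)

eps = 1e-6
jac = np.column_stack([
    (model(theta + eps * np.eye(2)[j]) - model(theta)) / eps
    for j in range(2)
])
bounds = cramer_rao_bounds(jac, sigma=0.1)
```

    As the abstract notes, such CR bounds are only close to search-based error bounds when the estimation problem is nearly linear in the parameters.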

  17. Technical note: comparability of Hrdlička's Catalog of Crania data based on measurement landmark definitions.

    PubMed

    Stojanowski, Christopher M; Euber, Julie K

    2011-09-01

    Archival sources of data are critical anthropological resources that inform inferences about human biology and evolutionary history. Craniometric data are one of the most widely available sources of information on human population history because craniometrics were critical in early 20th century debates about race and biological variation. As such, extensive databases of raw craniometric data were published at the same time that the field was working to standardize measurement protocol. Hrdlička published between 10 and 16 raw craniometric variables for over 8,000 individuals in a series of seven catalogs throughout his career. With a New World emphasis, Hrdlička's data complement those of Howells (1973, 1989) and the two databases have been combined in the past. In this note we verify the consistency of Hrdlička's measurement protocol throughout the Catalog series and compare these definitions to those used by Howells. We conclude that 12 measurements are comparable throughout the Catalogs, with five of these equivalent to Howells' measurements: maximum cranial breadth (XCB), basion-bregma height (BBH), maximum bizygomatic breadth (ZYB), nasal breadth (NLB), and breadth of the upper alveolar arch (MAB). Most of Hrdlička's measurements are not strictly comparable to those of Howells, thus limiting the utility of combined datasets for multivariate analysis. Four measurements are inconsistently defined by Hrdlička and we recommend not using these data: nasal height, orbit breadth, orbit height, and menton-nasion height. This note promotes Hrdlička's tireless efforts at data collection and re-emphasizes observer error as a legitimate concern in craniometry as the field shifts to morphometric digital data acquisition. 2011 Wiley-Liss, Inc.

  18. Proposed U.S. Geological Survey standard for digital orthophotos

    USGS Publications Warehouse

    Hooper, David; Caruso, Vincent

    1991-01-01

    The U.S. Geological Survey has added the new category of digital orthophotos to the National Digital Cartographic Data Base. This differentially rectified digital image product enables users to take advantage of the properties of current photoimagery as a source of geographic information. The product and accompanying standard were implemented in spring 1991. The digital orthophotos will be quadrangle based and cast on the Universal Transverse Mercator projection and will extend beyond the 3.75-minute or 7.5-minute quadrangle area at least 300 meters to form a rectangle. The overedge may be used for mosaicking with adjacent digital orthophotos. To provide maximum information content and utility to the user, metadata (header) records exist at the beginning of the digital orthophoto file. Header information includes the photographic source type, date, instrumentation used to create the digital orthophoto, and information relating to the DEM that was used in the rectification process. Additional header information is included on transformation constants from the 1927 and 1983 North American Datums to the orthophoto internal file coordinates to enable the user to register overlays on either datum. The quadrangle corners in both datums are also imprinted on the image. Flexibility has been built into the digital orthophoto format for future enhancements, such as the provision to include the corresponding digital elevation model elevations used to rectify the orthophoto. The digital orthophoto conforms to National Map Accuracy Standards and provides valuable mapping data that can be used as a tool for timely revision of standard map products, for land use and land cover studies, and as a digital layer in a geographic information system.

  19. The utilization of poisons information resources in Australasia.

    PubMed

    Fountain, J S; Reith, D M; Holt, A

    2014-02-01

    To identify the poisons information resources most commonly utilized by Australasian Emergency Department staff, and to examine attitudes regarding the benefits and user experience of the electronic products used. A survey tool was mailed to six Emergency Departments each in New Zealand and Australia, to be answered by medical and nursing staff. Eighty-six (71.7%) responses were received from the 120 survey forms sent: 70 (81%) responders were medical staff, the remainder nursing. Electronic resources were the most accessed poisons information resource in New Zealand, whereas Australians preferred discussion with a colleague; Poisons Information Centers were the least utilized resource in both countries. With regard to electronic resources, further differences were recognized between the countries in ease of access, ease of use, quality of information, and quantity of information, with New Zealand rated better on all four themes. New Zealand ED staff favored electronic poisons information resources while Australians preferred discussion with a colleague. That Poisons Information Centers were the least utilized resource was surprising. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  20. 75 FR 3763 - Proposed Collection; Comment Request for Review of a Currently Approved Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-22

    ... information about the cost to elect less than the maximum survivor annuity. This letter may be used to decline... about the cost to elect the maximum survivor annuity. This letter may be used to ask for more... who do not have a former spouse who is entitled to a survivor annuity benefit. RI 20-63B is for those...

  1. Decentralized Optimal Dispatch of Photovoltaic Inverters in Residential Distribution Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dall'Anese, Emiliano; Dhople, Sairaj V.; Johnson, Brian B.

    Summary form only given. Decentralized methods for computing optimal real and reactive power setpoints for residential photovoltaic (PV) inverters are developed in this paper. It is known that conventional PV inverter controllers, which are designed to extract maximum power at unity power factor, cannot address secondary performance objectives such as voltage regulation and network loss minimization. Optimal power flow techniques can be utilized to select which inverters will provide ancillary services, and to compute their optimal real and reactive power setpoints according to well-defined performance criteria and economic objectives. Leveraging advances in sparsity-promoting regularization techniques and semidefinite relaxation, this paper shows how such problems can be solved with reduced computational burden and optimality guarantees. To enable large-scale implementation, a novel algorithmic framework is introduced - based on the so-called alternating direction method of multipliers - by which optimal power flow-type problems in this setting can be systematically decomposed into sub-problems that can be solved in a decentralized fashion by the utility and customer-owned PV systems with limited exchanges of information. Since the computational burden is shared among multiple devices and the requirement of all-to-all communication can be circumvented, the proposed optimization approach scales favorably to large distribution networks.
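
    The alternating direction method of multipliers (ADMM) decomposition described above can be sketched on a toy consensus problem. This is not the paper's OPF formulation; it is a generic scalar consensus ADMM in which each "agent" (standing in for an inverter) privately minimizes its own quadratic cost while all agents agree on a shared variable, with only limited information exchanged per iteration.

```python
import numpy as np

def consensus_admm(targets, rho=1.0, iters=100):
    """Toy consensus ADMM: agent i privately minimizes 0.5*(x_i - a_i)^2
    while all agents must agree on a common value z (scaled-dual form)."""
    a = np.asarray(targets, dtype=float)
    x = np.zeros(a.size)     # local variables, one per agent
    z = 0.0                  # shared consensus variable
    u = np.zeros(a.size)     # scaled dual variables
    for _ in range(iters):
        # Local x-updates: argmin 0.5*(x - a_i)^2 + (rho/2)*(x - z + u_i)^2
        x = (a + rho * (z - u)) / (1.0 + rho)
        # Global z-update: average of local views (the only shared step)
        z = np.mean(x + u)
        # Dual update: accumulate each agent's consensus violation
        u = u + x - z
    return z

# Three agents with private targets; consensus converges to their mean
z_star = consensus_admm([1.0, 2.0, 6.0])
```

    The appeal, as in the paper, is that each x-update uses only local data, so the per-agent work stays small and no all-to-all communication is needed.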

  2. Overcoming urban GPS navigation challenges through the use of MEMS inertial sensors and proper verification of navigation system performance

    NASA Astrophysics Data System (ADS)

    Vinande, Eric T.

    This research proposes several means of overcoming urban-environment challenges to ground vehicle global positioning system (GPS) receiver navigation performance through the integration of external sensor information. The effects of narrowband radio frequency interference and signal attenuation, both common in the urban environment, are examined with respect to receiver signal tracking processes. Low-cost microelectromechanical systems (MEMS) inertial sensors, suitable for the consumer market, are the focus of receiver augmentation as they provide an independent measure of motion and are independent of vehicle systems. A method for estimating the mounting angles of an inertial sensor cluster utilizing typical urban driving maneuvers is developed and is able to provide angular measurements within two degrees of truth. The integration of GPS and MEMS inertial sensors is developed utilizing a full state navigation filter. Appropriate statistical methods are developed to evaluate the urban environment navigation improvement due to the addition of MEMS inertial sensors. A receiver evaluation metric that combines accuracy, availability, and maximum error measurements is presented and evaluated over several drive tests. Following a description of proper drive test techniques, record and playback systems are evaluated as the optimal way of testing multiple receivers and/or integrated navigation systems in the urban environment as they simplify vehicle testing requirements.

  3. 78 FR 70915 - Information Collection Activity; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-27

    ... DEPARTMENT OF AGRICULTURE Rural Utilities Service Information Collection Activity; Comment Request AGENCY: Rural Utilities Service, USDA. ACTION: Notice and request for comments. SUMMARY: In accordance with the Paperwork Reduction Act of 1995 (44 U.S.C. Chapter 35, as amended), the Rural Utilities...

  4. 75 FR 51977 - Information Collection Activity; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-24

    ... DEPARTMENT OF AGRICULTURE Rural Utilities Service Information Collection Activity; Comment Request AGENCY: Rural Utilities Service, USDA. ACTION: Notice and request for comments. SUMMARY: In accordance with the Paperwork Reduction Act of 1995 (44 U.S.C. Chapter 35, as amended), the Rural Utilities...

  5. Utilization of Satellite Data to Identify and Monitor Changes in Frequency of Meteorological Events

    NASA Astrophysics Data System (ADS)

    Mast, J. C.; Dessler, A. E.

    2017-12-01

    Increases in temperature and climate variability due to human-induced climate change are increasing the frequency and magnitude of extreme heat events (i.e., heatwaves). This will have a detrimental impact on the health of human populations and the habitability of certain land locations. Here we seek to utilize satellite data records to identify and monitor extreme heat events. We analyze satellite data sets (MODIS and AIRS land surface temperatures (LST) and water vapor profiles (WV)) due to their global coverage and stable calibration. Heat waves are identified based on the frequency of maximum daily temperatures above a threshold, determined as follows. Land surface temperatures are gridded into uniform latitude/longitude bins. Maximum daily temperatures per bin are determined, and probability density functions (PDF) of these maxima are constructed monthly and seasonally. For each bin, a threshold is calculated at the 95th percentile of the PDF of maximum temperatures. For each bin, an extreme heat event is defined based on the frequency of monthly and seasonal days exceeding the threshold. To account for the decreased ability of the human body to thermoregulate with increasing moisture, and to assess the lethality of the heat events, we determine the wet-bulb temperature at the locations of extreme heat events. Preliminary results will be presented.
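
    The per-bin thresholding procedure described above can be sketched as follows. This is a minimal illustration on synthetic data, not the study's code; the array shapes, percentile choice, and values are assumptions for demonstration only.

```python
import numpy as np

def heatwave_days(daily_tmax, percentile=95.0):
    """Count extreme-heat days per grid bin.

    daily_tmax : array of shape (n_days, n_bins) holding daily maximum
    land surface temperatures, one column per lat/lon bin.
    Returns the per-bin threshold and the count of days exceeding it.
    """
    # Per-bin threshold at the chosen percentile of the daily maxima
    thresholds = np.percentile(daily_tmax, percentile, axis=0)
    # A day counts as extreme when its maximum exceeds the bin threshold
    exceedances = daily_tmax > thresholds
    return thresholds, exceedances.sum(axis=0)

# Synthetic example: 90 days of daily maxima over 4 bins
rng = np.random.default_rng(0)
tmax = rng.normal(30.0, 3.0, size=(90, 4))
thr, counts = heatwave_days(tmax)
```

    In practice the PDFs would be built separately per month and season, and the wet-bulb temperature computed at the flagged locations.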

  6. Superfast maximum-likelihood reconstruction for quantum tomography

    NASA Astrophysics Data System (ADS)

    Shang, Jiangwei; Zhang, Zhengyun; Ng, Hui Khoon

    2017-06-01

    Conventional methods for computing maximum-likelihood estimators (MLE) often converge slowly in practical situations, leading to a search for simplifying methods that rely on additional assumptions for their validity. In this work, we provide a fast and reliable algorithm for maximum-likelihood reconstruction that avoids this slow convergence. Our method utilizes the state-of-the-art convex optimization scheme, an accelerated projected-gradient method, that allows one to accommodate the quantum nature of the problem in a different way than in the standard methods. We demonstrate the power of our approach by comparing its performance with other algorithms for n -qubit state tomography. In particular, an eight-qubit situation that purportedly took weeks of computation time in 2005 can now be completed in under a minute for a single set of data, with far higher accuracy than previously possible. This refutes the common claim that MLE reconstruction is slow and reduces the need for alternative methods that often come with difficult-to-verify assumptions. In fact, recent methods assuming Gaussian statistics or relying on compressed sensing ideas are demonstrably inapplicable for the situation under consideration here. Our algorithm can be applied to general optimization problems over the quantum state space; the philosophy of projected gradients can further be utilized for optimization contexts with general constraints.
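
    The projected-gradient idea behind the algorithm can be sketched in a few lines. Note the caveats: this is a plain (unaccelerated) projected-gradient ascent, not the paper's accelerated variant, and the single-qubit POVM and data below are illustrative assumptions.

```python
import numpy as np

def project_to_density(rho):
    """Project a Hermitian matrix onto the density-matrix set (positive
    semidefinite, unit trace) by projecting its eigenvalues onto the
    probability simplex."""
    rho = 0.5 * (rho + rho.conj().T)            # enforce Hermiticity
    w, v = np.linalg.eigh(rho)
    u = np.sort(w)[::-1]                        # eigenvalues, descending
    css = np.cumsum(u)
    k = np.nonzero(u + (1.0 - css) / np.arange(1, len(u) + 1) > 0)[0][-1]
    tau = (css[k] - 1.0) / (k + 1)
    w = np.maximum(w - tau, 0.0)                # clipped, renormalized spectrum
    return (v * w) @ v.conj().T

def pg_mle(povm, freqs, dim, steps=200, lr=0.2):
    """Maximize sum_k f_k log tr(E_k rho) by gradient ascent followed by
    projection back onto the density matrices after each step."""
    rho = np.eye(dim) / dim                     # maximally mixed start
    for _ in range(steps):
        probs = np.array([np.real(np.trace(E @ rho)) for E in povm])
        # Gradient of the log-likelihood: sum_k (f_k / p_k) E_k
        grad = sum(f / max(p, 1e-12) * E for f, p, E in zip(freqs, probs, povm))
        rho = project_to_density(rho + lr * grad)
    return rho

# Single-qubit example: Z- and X-basis measurements, each with weight 1/2,
# form a four-outcome POVM; freqs are ideal data for the state |0><0|
ket0 = np.array([[1.0], [0.0]]); ket1 = np.array([[0.0], [1.0]])
ketp = (ket0 + ket1) / np.sqrt(2); ketm = (ket0 - ket1) / np.sqrt(2)
povm = [0.5 * k @ k.conj().T for k in (ket0, ket1, ketp, ketm)]
freqs = np.array([0.5, 0.0, 0.25, 0.25])
rho_hat = pg_mle(povm, freqs, dim=2)
```

    The projection step is what accommodates "the quantum nature of the problem": every iterate remains a valid density matrix, unlike unconstrained gradient methods.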

  7. ANN based Real-Time Estimation of Power Generation of Different PV Module Types

    NASA Astrophysics Data System (ADS)

    Syafaruddin; Karatepe, Engin; Hiyama, Takashi

    Distributed generation is expected to become more important in the future generation system. Utilities need to find solutions that help manage resources more efficiently. Effective smart grid solutions have used real-time data to help refine and pinpoint inefficiencies while maintaining secure and reliable operating conditions. This paper proposes the application of an Artificial Neural Network (ANN) for the real-time estimation of the maximum power generation of PV modules of different technologies. An intelligent technique is required in this case because the relationship between the maximum power of PV modules and the open-circuit voltage and temperature is nonlinear and cannot easily be expressed analytically for each technology. The proposed ANN method uses input signals of open-circuit voltage and cell temperature, instead of irradiance and ambient temperature, to determine the estimated maximum power generation of PV modules. It is important for the utility to be able to perform this estimation for optimal operating points and for diagnostic purposes, as it may be an early indicator of a need for maintenance and optimal energy management. The proposed method is verified to be accurate through a developed real-time simulator on the daily basis of irradiance and cell temperature changes.
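
    The kind of nonlinear regression described above can be sketched with a small hand-rolled network. Everything here is an assumption for illustration: the synthetic (Voc, temperature) → Pmax relationship is not a physical PV model, and the architecture and training settings are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic training data standing in for measured module behavior:
# Pmax as a smooth nonlinear function of open-circuit voltage and
# cell temperature (purely illustrative, not a PV model)
voc = rng.uniform(30.0, 45.0, 500)        # open-circuit voltage [V]
temp = rng.uniform(10.0, 70.0, 500)       # cell temperature [degC]
pmax = 5.0 * voc - 0.4 * temp - 0.02 * voc * temp + rng.normal(0, 1, 500)

X = np.column_stack([voc, temp])
X = (X - X.mean(0)) / X.std(0)            # standardize inputs
y = (pmax - pmax.mean()) / pmax.std()     # standardize target

# One hidden layer with tanh activation, trained by plain gradient descent
W1 = rng.normal(0, 0.5, (2, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, 16);      b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = np.mean((pred0 - y) ** 2)         # loss before training

lr = 0.05
for _ in range(500):
    h, pred = forward(X)
    err = pred - y                        # gradient of MSE w.r.t. pred (up to 2/N)
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h ** 2) # backprop through tanh
    gW1 = X.T @ dh / len(y); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
loss = np.mean((pred - y) ** 2)           # loss after training
```

    Once trained on real module data, such a network could map live (Voc, cell temperature) readings to estimated maximum power, as the abstract proposes.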

  8. Light dependence of carboxylation capacity for C3 photosynthesis models

    USDA-ARS?s Scientific Manuscript database

    Photosynthesis at high light is often modelled by assuming limitation by the maximum capacity of Rubisco carboxylation at low carbon dioxide concentrations, by electron transport capacity at higher concentrations, and sometimes by triose-phosphate utilization rate at the highest concentrations. Pho...

  9. 36 CFR 3.15 - What is the maximum noise level for the operation of a vessel?

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ...: (1) 75dB(A) measured utilizing test procedures applicable to vessels underway (Society of Automotive... (Society of Automotive Engineers SAE—J2005). (b) An authorized person who has reason to believe that a...

  10. 36 CFR 3.15 - What is the maximum noise level for the operation of a vessel?

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ...: (1) 75dB(A) measured utilizing test procedures applicable to vessels underway (Society of Automotive... (Society of Automotive Engineers SAE—J2005). (b) An authorized person who has reason to believe that a...

  11. 36 CFR 3.15 - What is the maximum noise level for the operation of a vessel?

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ...: (1) 75dB(A) measured utilizing test procedures applicable to vessels underway (Society of Automotive... (Society of Automotive Engineers SAE—J2005). (b) An authorized person who has reason to believe that a...

  12. ALTERNATIVE OXIDANT AND DISINFECTANT TREATMENT STRATEGIES FOR CONTROLLING TRIHALOMETHANE FORMATION

    EPA Science Inventory

    To comply with the maximum contaminant level (MCL) for total trihalomethanes (TTHM), many utilities have modified their pre-oxidation and disinfection practices by switching to alternative oxidants and disinfectants in place of free chlorine. To evaluate the impact of these chang...

  13. REMOVAL OF ALACHLOR FROM DRINKING WATER

    EPA Science Inventory

    Alachlor (Lasso) is a pre-emergent herbicide used in the production of corn and soybeans. U.S. EPA has studied control of alachlor in drinking water treatment processes to define treatability before setting maximum contaminant levels and to assist water utilities in selecting con...

  14. 237 E. Ontario St., January 2017, Lindsay Light Radiological Survey

    EPA Pesticide Factsheets

    Radiological Survey of Right-of-Way Utility Excavation. The measurements within the excavations and of the soil did not exceed the instrument USEPA threshold and ranged from a minimum of 4,800 cpm to a maximum of 8,300 cpm unshielded.

  15. ANIMAL MANURES AS FEEDSTUFFS: CATTLE MANURE FEEDING TRIALS

    EPA Science Inventory

    The utilization of 'as-collected' and processed beef cattle and dairy cow manure, manure screenings and anaerobically digested cattle manures was evaluated on the basis of the results of feeding trials reported in the literature. The maximum level of incorporating these manures i...

  16. Creating an Agent Based Framework to Maximize Information Utility

    DTIC Science & Technology

    2008-03-01

    information utility may be a qualitative description of the information, where one would expect the adjectives low value, fair value, high value. For...operations. Information in this category may have a fair value rating. Finally, many seemingly unrelated events, such as reports of snipers in buildings

  17. Health Information Obtained From the Internet and Changes in Medical Decision Making: Questionnaire Development and Cross-Sectional Survey.

    PubMed

    Chen, Yen-Yuan; Li, Chia-Ming; Liang, Jyh-Chong; Tsai, Chin-Chung

    2018-02-12

    The increasing utilization of the internet has provided a better opportunity for people to search online for health information that was not easily available to them in the past. Studies have reported that searching the internet for health information may influence an individual's decision making and change his or her health-seeking behaviors. The objectives of this study were to (1) develop and validate 2 questionnaires to estimate the strategies of problem-solving in medicine and utilization of online health information, (2) determine the association between searching online for health information and utilization of online health information, and (3) determine the association between online medical help-seeking and utilization of online health information. The Problem Solving in Medicine and Online Health Information Utilization questionnaires were developed and implemented in this study. We conducted confirmatory factor analysis to examine the structure of the factor loadings and intercorrelations for all the items and dimensions. We employed Pearson correlation coefficients to examine the correlations between each dimension of the Problem Solving in Medicine questionnaire and each dimension of the Online Health Information Utilization questionnaire. Furthermore, we conducted structural equation modeling to examine the possible linkage between each of the 6 dimensions of the Problem Solving in Medicine questionnaire and each of the 3 dimensions of the Online Health Information Utilization questionnaire. A total of 457 patients participated in this study. Pearson correlation coefficients ranged from .12 to .41, all with statistical significance, implying that each dimension of the Problem Solving in Medicine questionnaire was significantly associated with each dimension of the Online Health Information Utilization questionnaire. 
Patients with the strategy of online health information search for solving medical problems positively predicted changes in medical decision making (P=.01), consulting with others (P<.001), and promoting self-efficacy on deliberating the online health information (P<.001) based on the online health information they obtained. Present health care professionals have a responsibility to acknowledge that patients' medical decision making may be changed based on additional online health information. Health care professionals should assist patients' medical decision making by initiating as much dialogue with patients as possible, providing credible and convincing health information to patients, and guiding patients where to look for accurate, comprehensive, and understandable online health information. By doing so, patients will avoid becoming overwhelmed with extraneous and often conflicting health information. Educational interventions to promote health information seekers' ability to identify, locate, obtain, read, understand, evaluate, and effectively use online health information are highly encouraged. ©Yen-Yuan Chen, Chia-Ming Li, Jyh-Chong Liang, Chin-Chung Tsai. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 12.02.2018.

  18. Information Alchemy: Transforming Information through Knowledge Utilization.

    ERIC Educational Resources Information Center

    Backer, Thomas E.

    1993-01-01

    Provides an overview of knowledge utilization, what it encompasses, and its three waves of activity in America. Basic principles and strategies to consider are listed, and an example of how knowledge utilization is applied by the Center for Mental Health Services is given. (17 references) (EA)

  19. Teach It, Don't Preach It: The Differential Effects of Directly-communicated and Self-generated Utility Value Information.

    PubMed

    Canning, Elizabeth A; Harackiewicz, Judith M

    2015-03-01

    Social-psychological interventions in education have used a variety of "self-persuasion" or "saying-is-believing" techniques to encourage students to articulate key intervention messages. These techniques are used in combination with more overt strategies, such as the direct communication of messages in order to promote attitude change. However, these different strategies have rarely been systematically compared, particularly in controlled laboratory settings. We focus on one intervention based in expectancy-value theory designed to promote perceptions of utility value in the classroom and test different intervention techniques to promote interest and performance. Across three laboratory studies, we used a mental math learning paradigm in which we varied whether students wrote about utility value for themselves or received different forms of directly-communicated information about the utility value of a novel mental math technique. In Study 1, we examined the difference between directly-communicated and self-generated utility-value information and found that directly-communicated utility-value information undermined performance and interest for individuals who lacked confidence, but that self-generated utility had positive effects. However, Study 2 suggests that these negative effects of directly-communicated utility value can be ameliorated when participants are also given the chance to generate their own examples of utility value, revealing a synergistic effect of directly-communicated and self-generated utility value. In Study 3, we found that individuals who lacked confidence benefited more when everyday examples of utility value were communicated, rather than career and school examples.

  20. Electric utilities and the info-way - are electrics and telcos fellow travelers or competitors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ashworth, M.J.

    1994-03-15

    This article examines the future role of telecommunications and the so-called information superhighway in the operations of electric utilities. Utilities should take advantage of information technology through informal alliances with telecommunications hardware and service suppliers, should limit investments in alternative meter-level technologies to those that are cheap, easily integrated, and flexible, and should consider outsourcing network implementation, maintenance, and management functions.

  1. Comparing methods to estimate Reineke’s maximum size-density relationship species boundary line slope

    Treesearch

    Curtis L. VanderSchaaf; Harold E. Burkhart

    2010-01-01

    Maximum size-density relationships (MSDR) provide natural resource managers useful information about the relationship between tree density and average tree size. Obtaining a valid estimate of how maximum tree density changes as average tree size changes is necessary to accurately describe these relationships. This paper examines three methods to estimate the slope of...

  2. 75 FR 176 - Agency Information Collection Activities; Request for Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-04

    ... previously approved information collection consisting of a customer survey form. OSC is required by law to... the proper performance of OSC functions, including whether the information will have practical utility... to enhance the quality, utility, and clarity of the information to be collected; and (d) ways to...

  3. 75 FR 9003 - Agency Information Collection Activities; Request for Comment

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-02-26

    ... a previously approved information collection consisting of a customer survey form. OSC is required... practical utility; (b) the accuracy of OSC's estimate of the burden of the proposed collections of information; (c) ways to enhance the quality, utility, and clarity of the information to be collected; and (d...

  4. 78 FR 64520 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-10-29

    ... have practical utility; (b) the accuracy of the agency's estimate of the burden of the proposed collection of information; (c) ways to enhance the quality, utility, and clarity of the information to be... through the use of automated collection techniques or other forms of information technology. Proposed...

  5. 77 FR 65532 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-29

    ... information is vital for making prudent financial decisions. Need and Use of the Information: RBS will collect... agency, including whether the information will have practical utility; (b) the accuracy of the agency's... quality, utility and clarity of the information to be collected; (d) ways to minimize the burden of the...

  6. 78 FR 43848 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-22

    ... agency, including whether the information will have practical utility; (b) the accuracy of the agency's... quality, utility and clarity of the information to be collected; (d) ways to minimize the burden of the... across the country. The data and information collected through EARS will inform management decisions...

  7. Maximum mutual information estimation of a simplified hidden MRF for offline handwritten Chinese character recognition

    NASA Astrophysics Data System (ADS)

    Xiong, Yan; Reichenbach, Stephen E.

    1999-01-01

    Understanding of hand-written Chinese characters is at such a primitive stage that models include some assumptions about hand-written Chinese characters that are simply false, so Maximum Likelihood Estimation (MLE) may not be an optimal method for hand-written Chinese character recognition. This concern motivates the research effort to consider alternative criteria. Maximum Mutual Information Estimation (MMIE) is an alternative method for parameter estimation that does not derive its rationale from presumed model correctness, but instead examines the pattern-modeling problem in an automatic recognition system from an information-theoretic point of view. The objective of MMIE is to find a set of parameters such that the resultant model allows the system to derive from the observed data as much information as possible about the class. We consider MMIE for recognition of hand-written Chinese characters using a simplified hidden Markov Random Field. MMIE provides improved performance over MLE in this application.
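
    The contrast between the MLE and MMIE criteria can be made concrete with a tiny numeric sketch. This is a generic formulation of the two objectives for any class-conditional model, not the paper's hidden MRF; the likelihood table, labels, and priors below are made-up illustrative values.

```python
import numpy as np

def mle_objective(logliks, labels):
    """MLE criterion: total log-likelihood of each sample under its own
    class model only.  logliks[i, c] = log p(x_i | class c)."""
    return logliks[np.arange(len(labels)), labels].sum()

def mmie_objective(logliks, labels, log_priors):
    """MMIE criterion: total log-posterior of the correct class, i.e. the
    class-conditional term minus the log-sum over all competing classes,
    so parameters are rewarded for discriminating, not just fitting."""
    joint = logliks + log_priors                 # log p(x|c) + log P(c)
    num = joint[np.arange(len(labels)), labels]
    den = np.logaddexp.reduce(joint, axis=1)     # log sum_c p(x|c) P(c)
    return (num - den).sum()

# Tiny example: 3 samples, 2 character classes, uniform priors
logliks = np.log(np.array([[0.8, 0.4],
                           [0.3, 0.7],
                           [0.6, 0.2]]))
labels = np.array([0, 1, 0])
log_priors = np.log([0.5, 0.5])
```

    Maximizing the first objective tunes each class model in isolation; maximizing the second also pushes down the likelihood that competing classes assign to the sample, which is the information-theoretic motivation the abstract describes.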

  8. The Efficient Utilization of Open Source Information

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baty, Samuel R.

    These are a set of slides on the efficient utilization of open source information. Open source information consists of a vast set of information from a variety of sources. Not only does the quantity of open source information pose a problem, the quality of such information can hinder efforts. To show this, two case studies are mentioned, Iran and North Korea, in order to see how open source information can be utilized. The huge breadth and depth of open source information can complicate an analysis, especially because open information has no guarantee of accuracy. Open source information can provide key insights either directly or indirectly: looking at supporting factors (flow of scientists, products and waste from mines, government budgets, etc.) or direct factors (statements, tests, deployments). Fundamentally, it is the independent verification of information that allows for a more complete picture to be formed. Overlapping sources allow for more precise bounds on times, weights, temperatures, yields or other issues of interest in order to determine capability. Ultimately, a "good" answer almost never comes from an individual, but rather requires the utilization of a wide range of skill sets held by a team of people.

  9. Real-time information management environment (RIME)

    NASA Astrophysics Data System (ADS)

    DeCleene, Brian T.; Griffin, Sean; Matchett, Garry; Niejadlik, Richard

    2000-08-01

    Whereas data mining and exploitation improve the quality and quantity of information available to the user, there remains a mission requirement to assist the end-user in managing the access to this information and ensuring that the appropriate information is delivered to the right user in time to make decisions and take action. This paper discusses TASC's federated architecture to next-generation information management, contrasts the approach against emerging technologies, and quantifies the performance gains. This architecture and implementation, known as Real-time Information Management Environment (RIME), is based on two key concepts: information utility and content-based channelization. The introduction of utility allows users to express the importance and delivery requirements of their information needs in the context of their mission. Rather than competing for resources on a first-come/first-served basis, the infrastructure employs these utility functions to dynamically react to unanticipated loading by optimizing the delivered information utility. Furthermore, commander's resource policies shape these functions to ensure that resources are allocated according to military doctrine. Using information about the desired content, channelization identifies opportunities to aggregate users onto shared channels reducing redundant transmissions. Hence, channelization increases the information throughput of the system and balances sender/receiver processing load.

  10. Performance Analysis of IEEE 802.11g TCM Waveforms Transmitted over a Channel with Pulse-Noise Interference

    DTIC Science & Technology

    2007-06-01

    17 Table 2. Best (maximum free distance) rate r=2/3 punctured convolutional code ...Hamming distance between all pairs of non-zero paths. Table 2 lists the best rate r=2/3, punctured convolutional code information weight structure dB...Table 2. Best (maximum free distance) rate r=2/3 punctured convolutional code information weight structure. (From: [12]). K freed freeB

  11. Scene Semantic Segmentation from Indoor Rgb-D Images Using Encode-Decoder Fully Convolutional Networks

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Li, T.; Pan, L.; Kang, Z.

    2017-09-01

    With increasing attention on the indoor environment and the development of low-cost RGB-D sensors, indoor RGB-D images are easily acquired. However, scene semantic segmentation is still an open area, which restricts indoor applications. The depth information can help to distinguish regions that are difficult to segment out from the RGB images with similar color or texture in indoor scenes. How to utilize the depth information is the key problem of semantic segmentation for RGB-D images. In this paper, we propose an Encode-Decoder Fully Convolutional Network for RGB-D image classification. We use Multiple Kernel Maximum Mean Discrepancy (MK-MMD) as a distance measure to find common and special features of RGB and D images in the network to enhance classification performance automatically. To explore better ways of applying MMD, we designed two strategies: the first calculates MMD for each feature map, and the other calculates MMD for whole batch features. Based on the classification result, we use fully connected CRFs for the semantic segmentation. The experimental results show that our method can achieve good performance on indoor RGB-D image semantic segmentation.
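
    For readers unfamiliar with the distance measure named above, a minimal single-kernel MMD sketch follows (the paper uses the multi-kernel variant, MK-MMD, inside a network; the feature vectors here are made up):

```python
import math

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two feature vectors."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def mmd2(xs, ys, gamma=1.0):
    """Biased estimate of squared Maximum Mean Discrepancy:
    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    kxx = sum(rbf(a, b, gamma) for a in xs for b in xs) / (len(xs) ** 2)
    kyy = sum(rbf(a, b, gamma) for a in ys for b in ys) / (len(ys) ** 2)
    kxy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

# Toy RGB-branch vs depth-branch features (flattened, hypothetical).
rgb_feats = [[0.1, 0.2], [0.0, 0.3]]
d_feats = [[1.1, 0.9], [1.0, 1.2]]
gap = mmd2(rgb_feats, d_feats)
```

    A small MMD indicates the two feature distributions are similar; identical samples give an MMD of zero.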

  12. Linking pesticides and human health: a geographic information system (GIS) and Landsat remote sensing method to estimate agricultural pesticide exposure.

    PubMed

    VoPham, Trang; Wilson, John P; Ruddell, Darren; Rashed, Tarek; Brooks, Maria M; Yuan, Jian-Min; Talbott, Evelyn O; Chang, Chung-Chou H; Weissfeld, Joel L

    2015-08-01

    Accurate pesticide exposure estimation is integral to epidemiologic studies elucidating the role of pesticides in human health. Humans can be exposed to pesticides via residential proximity to agricultural pesticide applications (drift). We present an improved geographic information system (GIS) and remote sensing method, the Landsat method, to estimate agricultural pesticide exposure through matching pesticide applications to crops classified from temporally concurrent Landsat satellite remote sensing images in California. The image classification method utilizes Normalized Difference Vegetation Index (NDVI) values in a combined maximum likelihood classification and per-field (using segments) approach. Pesticide exposure is estimated according to pesticide-treated crop fields intersecting 500 m buffers around geocoded locations (e.g., residences) in a GIS. Study results demonstrate that the Landsat method can improve GIS-based pesticide exposure estimation by matching more pesticide applications to crops (especially temporary crops) classified using temporally concurrent Landsat images compared to the standard method that relies on infrequently updated land use survey (LUS) crop data. The Landsat method can be used in epidemiologic studies to reconstruct past individual-level exposure to specific pesticides according to where individuals are located.
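
    The classification step described above combines NDVI with a maximum likelihood rule. A minimal per-pixel sketch (the per-class statistics are hypothetical stand-ins for training data, not the study's values):

```python
import math

def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    (NIR - Red) / (NIR + Red)."""
    return (nir - red) / (nir + red)

def ml_classify(value, class_stats):
    """Assign `value` to the class with the highest Gaussian
    log-likelihood. class_stats: {label: (mean, std)}."""
    def log_like(v, mu, sigma):
        return (-math.log(sigma * math.sqrt(2 * math.pi))
                - (v - mu) ** 2 / (2 * sigma ** 2))
    return max(class_stats, key=lambda c: log_like(value, *class_stats[c]))

# Hypothetical per-class NDVI statistics (mean, std).
stats = {"water": (-0.2, 0.1), "bare_soil": (0.1, 0.1), "crop": (0.6, 0.15)}
label = ml_classify(ndvi(nir=0.5, red=0.1), stats)
```

    The study applies this idea per field segment rather than per pixel, which suppresses salt-and-pepper noise in the classified map.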

  13. Heat Rejection from a Variable Conductance Heat Pipe Radiator Panel

    NASA Technical Reports Server (NTRS)

    Jaworske, D. A.; Gibson, M. A.; Hervol, D. S.

    2012-01-01

    A titanium-water heat pipe radiator having an innovative proprietary evaporator configuration was evaluated in a large vacuum chamber equipped with liquid nitrogen cooled cold walls. The radiator was manufactured by Advanced Cooling Technologies, Inc. (ACT), Lancaster, PA, and delivered as part of a Small Business Innovative Research effort. The radiator panel consisted of five titanium-water heat pipes operating as thermosyphons, sandwiched between two polymer matrix composite face sheets. The five variable conductance heat pipes were purposely charged with a small amount of non-condensable gas to control heat flow through the condenser. Heat rejection was evaluated over a wide range of inlet water temperature and flow conditions, and heat rejection was calculated in real-time utilizing a data acquisition system programmed with the Stefan-Boltzmann equation. Thermography through an infra-red transparent window identified heat flow across the panel. Under nominal operation, a maximum heat rejection value of over 2200 Watts was identified. The thermal vacuum evaluation of heat rejection provided critical information on understanding the radiator's performance, and in steady state and transient scenarios provided useful information for validating current thermal models in support of the Fission Power Systems Project.
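
    The real-time calculation mentioned above follows directly from the Stefan-Boltzmann law; a minimal sketch, with illustrative emissivity, area, and temperatures rather than the panel's actual values:

```python
# Net radiated power: Q = eps * sigma * A * (T_panel^4 - T_sink^4)
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def heat_rejection(emissivity, area_m2, t_panel_k, t_sink_k=77.0):
    """Net power (W) radiated by a panel at t_panel_k to surroundings
    at t_sink_k (liquid-nitrogen-cooled walls are roughly 77 K)."""
    return emissivity * SIGMA * area_m2 * (t_panel_k**4 - t_sink_k**4)

# Illustrative case: a 2.5 m^2 panel with emissivity 0.85 at 375 K.
q_watts = heat_rejection(0.85, 2.5, 375.0)
```

    With these assumed numbers the panel radiates on the order of a couple of kilowatts, the same regime as the reported 2200 W maximum.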

  14. The phylogenetic utility of acetyltransferase (ARD1) and glutaminyl tRNA synthetase (QtRNA) for reconstructing Cenozoic relationships as exemplified by the large Australian cicada Pauropsalta generic complex.

    PubMed

    Owen, Christopher L; Marshall, David C; Hill, Kathy B R; Simon, Chris

    2015-02-01

    The Pauropsalta generic complex is a large group of cicadas (72 described spp.; >82 undescribed spp.) endemic to Australia. No previous molecular work on deep level relationships within this complex has been conducted, but a recent morphological revision and phylogenetic analysis proposed relationships among the 11 genera. We present here the first comprehensive molecular phylogeny of the complex using five loci (1 mtDNA, 4 nDNA), two of which are from nuclear genes new to cicada systematics. We compare the molecular phylogeny to the morphological phylogeny. We evaluate the phylogenetic informativeness of the new loci to traditional cicada systematics loci to generate a baseline of performance and behavior to aid in gene choice decisions in future systematic and phylogenomic studies. Our maximum likelihood and Bayesian inference phylogenies strongly support the monophyly of most of the newly described genera; however, relationships among genera differ from the morphological phylogeny. A comparison of phylogenetic informativeness among all loci revealed that COI 3rd positions dominate the informativeness profiles relative to all other loci but exhibit some among taxon nucleotide bias. After removing COI 3rd positions, COI 1st positions dominate near the terminals, while the period intron has the most phylogenetic informativeness near the root. Among the nuclear loci, ARD1 and QtRNA have lower phylogenetic informativeness than period intron and elongation factor 1 alpha intron, but the informativeness increases as you move from the tips to the root. The increase in phylogenetic informativeness deeper in the tree suggests these loci may be useful for resolving older relationships. Copyright © 2015. Published by Elsevier Inc.

  15. The Child and Adolescent Psychiatry Trials Network

    ERIC Educational Resources Information Center

    March, John S.; Silva, Susan G.; Compton, Scott; Anthony, Ginger; DeVeaugh-Geiss, Joseph; Califf, Robert; Krishnan, Ranga

    2004-01-01

    Objective: The current generation of clinical trials in pediatric psychiatry often fails to maximize clinical utility for practicing clinicians, thereby diluting its impact. Method: To attain maximum clinical relevance and acceptability, the Child and Adolescent Psychiatry Trials Network (CAPTN) will transport to pediatric psychiatry the practical…

  16. 76 FR 36961 - Standards and Specifications for Timber Products Acceptable for Use by Rural Utilities Service...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-06-24

    ... limited to the equivalent displacement of a knot \\3/8\\ of an inch deep on one face and the maximum round.../2\\ the equivalent displacement of a round knot permitted at that location, provided that the depth...

  17. Proposed industrial recovered materials utilization targets for the textile mill products industry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1979-05-01

    Materials recovery targets were established to represent the maximum technically and economically feasible increase in the use of energy-saving materials by January 1, 1987. This report describes targets for the textile industry and describes how those targets were determined. (MCW)

  18. REAL-TIME CONTROL OF COMBINED SEWER NETWORKS

    EPA Science Inventory

    Real-time control (RTC) is a custom-designed management program for a specific urban sewerage system during a wet-weather event. The function of RTC is to assure efficient operation of the sewerage system and maximum utilization of existing storage capacity, either to fully conta...

  19. DISCOVERING SPATIO-TEMPORAL MODELS OF THE SPREAD OF WEST NILE VIRUS

    EPA Science Inventory

    Understanding interactions among pathogens, hosts, and the environment is important in developing rapid response to disease outbreak. To facilitate the development of control strategies during an outbreak, we have developed a tool for utilizing data to its maximum extent to deter...

  20. Report: Additional Analyses of Mercury Emissions Needed Before EPA Finalizes Rules for Coal-Fired Electric Utilities

    EPA Pesticide Factsheets

    Report #2005-P-00003, February 3, 2005. Evidence indicates that EPA senior management instructed EPA staff to develop a Maximum Achievable Control Technology (MACT) standard for mercury that would result in national emissions of 34 tons annually.

  1. Batch Tests To Determine Activity Distribution and Kinetic Parameters for Acetate Utilization in Expanded-Bed Anaerobic Reactors

    PubMed Central

    Fox, Peter; Suidan, Makram T.

    1990-01-01

    Batch tests to measure maximum acetate utilization rates were used to determine the distribution of acetate utilizers in expanded-bed sand and expanded-bed granular activated carbon (GAC) reactors. The reactors were fed a mixture of acetate and 3-ethylphenol, and they contained the same predominant aceticlastic methanogen, Methanothrix sp. Batch tests were performed both on the entire reactor contents and with media removed from the reactors. Results indicated that activity was evenly distributed within the GAC reactors, whereas in the sand reactor a sludge blanket on top of the sand bed contained approximately 50% of the activity. The Monod half-velocity constant (Ks) for the acetate-utilizing methanogens in two expanded-bed GAC reactors was searched for by combining steady-state results with batch test data. All parameters necessary to develop a model with Monod kinetics were experimentally determined except for Ks. However, Ks was a function of the effluent 3-ethylphenol concentration, and batch test results demonstrated that maximum acetate utilization rates were not a function of the effluent 3-ethylphenol concentration. Addition of a competitive inhibition term into the Monod expression predicted the dependence of Ks on the effluent 3-ethylphenol concentration. A two-parameter search determined a Ks of 8.99 mg of acetate per liter and a Ki of 2.41 mg of 3-ethylphenol per liter. Model predictions were in agreement with experimental observations for all effluent 3-ethylphenol concentrations. Batch tests measured the activity for a specific substrate and determined the distribution of activity in the reactor. The use of steady-state data in conjunction with batch test results reduced the number of unknown kinetic parameters and thereby reduced the uncertainty in the results and the assumptions made. PMID:16348175

  2. Batch tests to determine activity distribution and kinetic parameters for acetate utilization in expanded-bed anaerobic reactors.

    PubMed

    Fox, P; Suidan, M T

    1990-04-01

    Batch tests to measure maximum acetate utilization rates were used to determine the distribution of acetate utilizers in expanded-bed sand and expanded-bed granular activated carbon (GAC) reactors. The reactors were fed a mixture of acetate and 3-ethylphenol, and they contained the same predominant aceticlastic methanogen, Methanothrix sp. Batch tests were performed both on the entire reactor contents and with media removed from the reactors. Results indicated that activity was evenly distributed within the GAC reactors, whereas in the sand reactor a sludge blanket on top of the sand bed contained approximately 50% of the activity. The Monod half-velocity constant (K(s)) for the acetate-utilizing methanogens in two expanded-bed GAC reactors was searched for by combining steady-state results with batch test data. All parameters necessary to develop a model with Monod kinetics were experimentally determined except for K(s). However, K(s) was a function of the effluent 3-ethylphenol concentration, and batch test results demonstrated that maximum acetate utilization rates were not a function of the effluent 3-ethylphenol concentration. Addition of a competitive inhibition term into the Monod expression predicted the dependence of K(s) on the effluent 3-ethylphenol concentration. A two-parameter search determined a K(s) of 8.99 mg of acetate per liter and a K(i) of 2.41 mg of 3-ethylphenol per liter. Model predictions were in agreement with experimental observations for all effluent 3-ethylphenol concentrations. Batch tests measured the activity for a specific substrate and determined the distribution of activity in the reactor. The use of steady-state data in conjunction with batch test results reduced the number of unknown kinetic parameters and thereby reduced the uncertainty in the results and the assumptions made.
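
    The inhibition model described above can be sketched with the reported parameter values (Ks = 8.99 mg/L acetate, Ki = 2.41 mg/L 3-ethylphenol); the maximum rate is a placeholder, since the abstract does not report one:

```python
def acetate_rate(s, i, v_max=1.0, k_s=8.99, k_i=2.41):
    """Monod kinetics with competitive inhibition by 3-ethylphenol.

    s: acetate concentration (mg/L)
    i: 3-ethylphenol concentration (mg/L)
    v_max: maximum utilization rate (placeholder value/units)
    The inhibitor inflates the apparent half-velocity constant:
        rate = v_max * s / (k_s * (1 + i/k_i) + s)
    """
    return v_max * s / (k_s * (1.0 + i / k_i) + s)

# With no inhibitor, the rate at s = k_s is exactly half of v_max.
half = acetate_rate(s=8.99, i=0.0)
# Adding inhibitor lowers the rate at the same substrate level.
inhibited = acetate_rate(s=8.99, i=2.41)
```

    This matches the batch-test observation: the inhibitor changes the apparent Ks but leaves the maximum rate (v_max) untouched.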

  3. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE PAGES

    Ye, Xin; Garikapati, Venu M.; You, Daehyun; ...

    2017-11-08

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.
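
    The closed-form expression the abstract refers to is the familiar logit formula implied by i.i.d. Gumbel errors; a minimal sketch with made-up systematic utilities (not the authors' models):

```python
import math

def mnl_probabilities(utilities):
    """MNL choice probabilities under i.i.d. Gumbel errors:
        P(j) = exp(V_j) / sum_k exp(V_k)."""
    m = max(utilities)  # subtract the max for numerical stability
    exps = [math.exp(v - m) for v in utilities]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical systematic utilities for three travel modes.
probs = mnl_probabilities([1.2, 0.4, -0.5])
```

    It is exactly this closed form that the Gumbel assumption buys; if the error distribution is misspecified, the probabilities (and the maximum likelihood estimates built on them) inherit the error, which is what the proposed test detects.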

  4. A practical method to test the validity of the standard Gumbel distribution in logit-based multinomial choice models of travel behavior

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ye, Xin; Garikapati, Venu M.; You, Daehyun

    Most multinomial choice models (e.g., the multinomial logit model) adopted in practice assume an extreme-value Gumbel distribution for the random components (error terms) of utility functions. This distributional assumption offers a closed-form likelihood expression when the utility maximization principle is applied to model choice behaviors. As a result, model coefficients can be easily estimated using the standard maximum likelihood estimation method. However, maximum likelihood estimators are consistent and efficient only if distributional assumptions on the random error terms are valid. It is therefore critical to test the validity of underlying distributional assumptions on the error terms that form the basis of parameter estimation and policy evaluation. In this paper, a practical yet statistically rigorous method is proposed to test the validity of the distributional assumption on the random components of utility functions in both the multinomial logit (MNL) model and multiple discrete-continuous extreme value (MDCEV) model. Based on a semi-nonparametric approach, a closed-form likelihood function that nests the MNL or MDCEV model being tested is derived. The proposed method allows traditional likelihood ratio tests to be used to test violations of the standard Gumbel distribution assumption. Simulation experiments are conducted to demonstrate that the proposed test yields acceptable Type-I and Type-II error probabilities at commonly available sample sizes. The test is then applied to three real-world discrete and discrete-continuous choice models. For all three models, the proposed test rejects the validity of the standard Gumbel distribution in most utility functions, calling for the development of robust choice models that overcome adverse effects of violations of distributional assumptions on the error terms in random utility functions.

  5. Utilization of different waste proteins to create a novel PGPR-containing bio-organic fertilizer

    PubMed Central

    Huang, Yan; Sun, Li; Zhao, Jianshu; Huang, Rong; Li, Rong; Shen, Qirong

    2015-01-01

    High-quality bio-organic fertilizers (BIOs) cannot be produced without the addition of some proteins, while many waste proteins are haphazardly disposed, causing serious environmental pollution. In this study, several waste proteins were used as additives to assist with the reproduction of the functional microbe (Bacillus amyloliquefaciens SQR9) inoculated into matured composts to produce BIOs. An optimized composition of solid-state fermentation (SSF) raw materials was predicted by response surface methodology and experimental validation. The results showed that 7.61% (w/w, DW, the same below) rapeseed meal, 8.85% expanded feather meal, 6.47% dewatered blue algal sludge and 77.07% chicken compost resulted in maximum biomass of strain SQR9 and the maximum amount of lipopeptides 7 days after SSF. Spectroscopy experiments showed that the inner material structural changes in the novel SSF differed from the control and the novel BIO had higher dissolved organic matter. This study offers a high value-added utilization of waste proteins for producing economical but high-quality BIO. PMID:25586328

  6. The 25 kWe solar thermal Stirling hydraulic engine system: Conceptual design

    NASA Technical Reports Server (NTRS)

    White, Maurice; Emigh, Grant; Noble, Jack; Riggle, Peter; Sorenson, Torvald

    1988-01-01

    The conceptual design and analysis of a solar thermal free-piston Stirling hydraulic engine system designed to deliver 25 kWe when coupled to an 11-meter test bed concentrator is documented. A manufacturing cost assessment for 10,000 units per year was made. The design meets all program objectives including a 60,000 hr design life, dynamic balancing, fully automated control, more than 33.3 percent overall system efficiency, properly conditioned power, maximum utilization of annualized insolation, and projected production costs. The system incorporates a simple, rugged, reliable pool boiler reflux heat pipe to transfer heat from the solar receiver to the Stirling engine. The free-piston engine produces high pressure hydraulic flow which powers a commercial hydraulic motor that, in turn, drives a commercial rotary induction generator. The Stirling hydraulic engine uses hermetic bellows seals to separate helium working gas from hydraulic fluid which provides hydrodynamic lubrication to all moving parts. Maximum utilization of highly refined, field proven commercial components for electric power generation minimizes development cost and risk.

  7. Sort-Mid tasks scheduling algorithm in grid computing.

    PubMed

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim for several researchers is to develop variant scheduling algorithms for achieving optimality, and these have shown good performance for task scheduling regarding resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value via a sorted list of completion times for each task. Then, the maximum average is obtained. Finally, the task that has the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.
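
    The steps described in the abstract can be sketched as follows; this is a simplified reading with a toy completion-time matrix, not the authors' implementation:

```python
def sort_mid(completion):
    """Greedy schedule per the Sort-Mid description.

    completion: {task: {machine: completion_time}}
    Repeatedly pick the unallocated task with the largest average
    completion time and place it on its fastest machine.
    Returns {task: machine}.
    """
    tasks = dict(completion)  # copy so allocated tasks can be deleted
    schedule = {}
    while tasks:
        # Task whose completion-time list has the largest average.
        task = max(tasks, key=lambda t: sum(tasks[t].values()) / len(tasks[t]))
        # Machine giving that task its minimum completion time.
        machine = min(tasks[task], key=tasks[task].get)
        schedule[task] = machine
        del tasks[task]
    return schedule

# Toy instance: three tasks on two machines.
times = {
    "t1": {"m1": 4.0, "m2": 9.0},
    "t2": {"m1": 7.0, "m2": 3.0},
    "t3": {"m1": 2.0, "m2": 2.5},
}
plan = sort_mid(times)
```

    Handling tasks with the worst average prospects first is what the abstract credits for the improved utilization and makespan.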

  8. Performance analysis of a large-grain dataflow scheduling paradigm

    NASA Technical Reports Server (NTRS)

    Young, Steven D.; Wills, Robert W.

    1993-01-01

    A paradigm for scheduling computations on a network of multiprocessors using large-grain data flow scheduling at run time is described and analyzed. The computations to be scheduled must follow a static flow graph, while the schedule itself will be dynamic (i.e., determined at run time). Many applications characterized by static flow exist, and they include real-time control and digital signal processing. With the advent of computer-aided software engineering (CASE) tools for capturing software designs in dataflow-like structures, macro-dataflow scheduling becomes increasingly attractive, if not necessary. For parallel implementations, using the macro-dataflow method allows the scheduling to be insulated from the application designer and enables the maximum utilization of available resources. Further, by allowing multitasking, processor utilizations can approach 100 percent while maintaining maximum speedup. Extensive simulation studies are performed on 4-, 8-, and 16-processor architectures that reflect the effects of communication delays, scheduling delays, algorithm class, and multitasking on performance and speedup gains.

  9. Sort-Mid tasks scheduling algorithm in grid computing

    PubMed Central

    Reda, Naglaa M.; Tawfik, A.; Marzok, Mohamed A.; Khamis, Soheir M.

    2014-01-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim for several researchers is to develop variant scheduling algorithms for achieving optimality, and these have shown good performance for task scheduling regarding resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The new strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to get the average value via a sorted list of completion times for each task. Then, the maximum average is obtained. Finally, the task that has the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan. PMID:26644937

  10. Development and application of a continuous fast microwave pyrolysis system for sewage sludge utilization.

    PubMed

    Zhou, Junwen; Liu, Shiyu; Zhou, Nan; Fan, Liangliang; Zhang, Yaning; Peng, Peng; Anderson, Erik; Ding, Kuan; Wang, Yunpu; Liu, Yuhuan; Chen, Paul; Ruan, Roger

    2018-05-01

    A continuous fast microwave-assisted pyrolysis system was designed, fabricated, and tested with sewage sludge. The system is equipped with continuous biomass feeding, mixing of biomass and microwave absorbent, and separated catalyst upgrading. The effect of the sludge pyrolysis temperature (450, 500, 550, and 600 °C) on the product yield, distribution, and potential energy recovery was investigated. The physical, chemical, and energetic properties of the raw sewage sludge and the bio-oil, char and gas products obtained were analyzed using an elemental analyzer, GC-MS, Micro-GC, SEM and ICP-OES. While the maximum bio-oil yield of 41.39 wt% was obtained at a pyrolysis temperature of 550 °C, the optimal pyrolysis temperature for maximum overall energy recovery was 500 °C. The absence of carrier gas in the process may be responsible for the high HHV of the gas products. This work could provide technical support for microwave-assisted system scale-up and sewage sludge utilization. Copyright © 2018 Elsevier Ltd. All rights reserved.

  11. Experimental Observations for Determining the Maximum Torque Values to Apply to Composite Components Mechanically Joined With Fasteners (MSFC Center Director's Discretionary Fund Final Report, Proj. 03-13)

    NASA Technical Reports Server (NTRS)

    Thomas, F. P.

    2006-01-01

    Aerospace structures utilize innovative, lightweight composite materials for exploration activities. These structural components, due to various reasons including size limitations, manufacturing facilities, contractual obligations, or particular design requirements, will have to be joined. The common methodologies for joining composite components are the adhesively bonded and mechanically fastened joints and, in certain instances, both methods are simultaneously incorporated into the design. Guidelines and recommendations exist for engineers to develop design criteria and analyze and test composites. However, there are no guidelines or recommendations based on analysis or test data to specify a torque or torque range to apply to metallic mechanical fasteners used to join composite components. Utilizing the torque tension machine at NASA's Marshall Space Flight Center, an initial series of tests was conducted to determine the maximum torque that could be applied to a composite specimen. Acoustic emissions were used to nondestructively assess the specimens during the tests and thermographic imaging after the tests.

  12. Trajectory Dispersed Vehicle Process for Space Launch System

    NASA Technical Reports Server (NTRS)

    Statham, Tamara; Thompson, Seth

    2017-01-01

    The Space Launch System (SLS) vehicle is part of NASA's deep space exploration plans that includes manned missions to Mars. Manufacturing uncertainties in design parameters are key considerations throughout SLS development as they have significant effects on focus parameters such as lift-off-thrust-to-weight, vehicle payload, maximum dynamic pressure, and compression loads. This presentation discusses how the SLS program captures these uncertainties by utilizing a 3 degree of freedom (DOF) process called Trajectory Dispersed (TD) analysis. This analysis biases nominal trajectories to identify extremes in the design parameters for various potential SLS configurations and missions. This process utilizes a Design of Experiments (DOE) and response surface methodologies (RSM) to statistically sample uncertainties, and develop resulting vehicles using a Maximum Likelihood Estimate (MLE) process for targeting uncertainties bias. These vehicles represent various missions and configurations which are used as key inputs into a variety of analyses in the SLS design process, including 6 DOF dispersions, separation clearances, and engine out failure studies.

  13. Utilization of different waste proteins to create a novel PGPR-containing bio-organic fertilizer

    NASA Astrophysics Data System (ADS)

    Huang, Yan; Sun, Li; Zhao, Jianshu; Huang, Rong; Li, Rong; Shen, Qirong

    2015-01-01

    High-quality bio-organic fertilizers (BIOs) cannot be produced without the addition of some proteins, yet many waste proteins are disposed of haphazardly, causing serious environmental pollution. In this study, several waste proteins were used as additives to assist the reproduction of a functional microbe (Bacillus amyloliquefaciens SQR9) inoculated into matured composts to produce BIOs. An optimized composition of solid-state fermentation (SSF) raw materials was predicted by response surface methodology and experimental validation. The results showed that 7.61% (w/w, DW, the same below) rapeseed meal, 8.85% expanded feather meal, 6.47% dewatered blue algal sludge and 77.07% chicken compost resulted in the maximum biomass of strain SQR9 and the maximum amount of lipopeptides 7 days after SSF. Spectroscopy experiments showed that the inner material structural changes in the novel SSF differed from the control, and the novel BIO had higher dissolved organic matter. This study offers a high value-added utilization of waste proteins for producing economical but high-quality BIOs.

  14. Trust in the Health Care System and the Use of Preventive Health Services by Older Black and White Adults

    PubMed Central

    Schulz, Richard; Harris, Roderick; Silverman, Myrna; Thomas, Stephen B.

    2009-01-01

    Objectives. We sought to identify racial differences in the effects of trust in the health care system on preventive health service use among older adults. Methods. We conducted a telephone survey with 1681 Black and White older adults. Survey questions explored respondents' trust in physicians, medical research, and health information sources. We used logistic regression, controlling for covariates, to assess the effects of race and trust on the use of preventive health services. Results. We identified 4 types of trust through factor analysis: trust in one's own personal physician, trust in the competence of physicians' care, trust in formal health information sources, and trust in informal health information sources. Blacks had significantly less trust in their own physicians and greater trust in informal health information sources than did Whites. Greater trust in one's own physician was associated with utilization of routine checkups, prostate-specific antigen tests, and mammograms, but not with flu shots. Greater trust in informal information sources was associated with utilization of mammograms. Conclusions. Trust in one's own personal physician is associated with utilization of preventive health services. Blacks' relatively high distrust of their physicians likely contributes to health disparities by reducing utilization of preventive services. Health information disseminated to Blacks through informal means is likely to increase Blacks' utilization of preventive health services. PMID:18923129

  15. A Spiking Neural Network Methodology and System for Learning and Comparative Analysis of EEG Data From Healthy Versus Addiction Treated Versus Addiction Not Treated Subjects.

    PubMed

    Doborjeh, Maryam Gholami; Wang, Grace Y; Kasabov, Nikola K; Kydd, Robert; Russell, Bruce

    2016-09-01

    This paper introduces a method utilizing spiking neural networks (SNN) for learning, classification, and comparative analysis of brain data. As a case study, the method was applied to electroencephalography (EEG) data collected during a GO/NOGO cognitive task performed by untreated opiate addicts, those undergoing methadone maintenance treatment (MMT) for opiate dependence, and a healthy control group. The method is based on an SNN architecture called NeuCube, trained on spatiotemporal EEG data. NeuCube was used to classify EEG data across subject groups and across GO versus NOGO trials, but it also facilitated a deeper comparative analysis of the dynamic brain processes. This analysis results in a better understanding of human brain functioning across subject groups when performing a cognitive task. In terms of EEG data classification, a NeuCube model obtained better results (maximum obtained accuracy: 90.91%) than traditional statistical and artificial intelligence methods (maximum obtained accuracy: 50.55%). More importantly, new information about the effects of MMT on cognitive brain functions is revealed through the analysis of the SNN model connectivity and its dynamics. This paper presents a new method for EEG data modeling and reveals new knowledge on brain functions associated with mental activity, which differs from the brain activity observed in a resting state of the same subjects.

  16. Algorithm for pose estimation based on objective function with uncertainty-weighted measuring error of feature point cling to the curved surface.

    PubMed

    Huo, Ju; Zhang, Guiyang; Yang, Ming

    2018-04-20

    This paper is concerned with the anisotropic and non-identical gray distribution of feature points clinging to a curved surface, for which a high-precision, uncertainty-resistant algorithm for pose estimation is proposed. The weighted contribution of uncertainty to the objective function of feature-point measuring error is analyzed. A novel error objective function based on the spatial collinearity error is then constructed by transforming the uncertainty into a covariance-weighted matrix, which is suitable for practical applications. Further, the optimized generalized orthogonal iterative (GOI) algorithm is utilized for the iterative solution, so that it avoids poor convergence and significantly resists the uncertainty. Hence, the optimized GOI algorithm extends the field-of-view applications and improves the accuracy and robustness of the measuring results through redundant information. Finally, simulation and practical experiments show that the maximum error of the re-projected image coordinates of the target is less than 0.110 pixels. Within a 3000 mm × 3000 mm × 4000 mm space, the maximum estimation errors of static and dynamic measurement of rocket nozzle motion are within 0.065° and 0.128°, respectively. The results verify the high accuracy and uncertainty-attenuation performance of the proposed approach, which should therefore have potential for engineering applications.
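
    The covariance-weighted objective idea can be illustrated with a toy computation. The residuals and per-point variances below are invented; the point is only that dividing each squared reprojection error by its variance down-weights uncertain feature points in the overall cost.

```python
# 2-D toy example with diagonal covariances; all numbers are invented.
residuals = [(0.10, -0.05), (0.30, 0.20), (-0.02, 0.01)]   # reprojection errors
variances = [(0.01, 0.01), (0.25, 0.25), (0.04, 0.04)]     # per-point sigma^2

def weighted_cost(res, var):
    # sum_i r_i^T * diag(1/sigma_i^2) * r_i : large-variance points count less
    return sum(rx * rx / vx + ry * ry / vy
               for (rx, ry), (vx, vy) in zip(res, var))

print(weighted_cost(residuals, variances))
```

    In the paper's full formulation the weight is a covariance matrix estimated from the gray distribution of each feature point and the cost is minimized iteratively by the optimized GOI algorithm; this sketch shows only the weighting itself.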

  17. Numerical simulation and experimental research on interaction of micro-defects and laser ultrasonic signal

    NASA Astrophysics Data System (ADS)

    Guo, Hualing; Zheng, Bin; Liu, Hui

    2017-11-01

    In the present research, the mechanism governing the interaction between laser-generated ultrasonic waves and micro-defects on an aluminum plate has been studied through numerical simulation as well as practical experiments. Simulation results indicate that the broadband ultrasonic signal is dominated by surface waves, and that the surface waves produced by micro-defects can be utilized for their detection because these waves carry the most information about the defects. A laser-generated ultrasonic testing system with a surface wave probe was established for the detection of micro-defects, and the surface waves produced by defects of different depths on an aluminum plate were measured with the system. The relationships between defect depth and the maximum amplitude of the surface wave, and between defect depth and the center frequency of the surface wave, have also been analyzed in detail. The results indicate that, when the defect depth is less than half the wavelength of the surface wave, the maximum amplitude and the center frequency of the surface wave are linearly proportional to the defect depth. The close agreement of the experimental results with the theoretical simulation indicates that the system established in the present research could be adopted for the quantitative detection of micro-defects.

  18. Implementation of bayesian model averaging on the weather data forecasting applications utilizing open weather map

    NASA Astrophysics Data System (ADS)

    Rahmat, R. F.; Nasution, F. R.; Seniman; Syahputra, M. F.; Sitompul, O. S.

    2018-02-01

    Weather is the condition of the air in a certain region over a relatively short period of time, measured with various parameters such as temperature, air pressure, wind velocity, humidity, and other atmospheric phenomena. Extreme weather due to global warming can lead to drought, flood, hurricanes, and other weather events, which directly affect social and economic activities. Hence, a forecasting technique is needed to predict weather with distinctive output, particularly a GIS-based mapping process that reports the current weather status at the coordinates of each region and can forecast seven days ahead. Data used in this research are retrieved in real time from the OpenWeatherMap server and BMKG. To obtain a low error rate and high forecasting accuracy, the authors use the Bayesian Model Averaging (BMA) method. The results show that the BMA method has good accuracy. Forecasting error is calculated as the mean square error (MSE). The error is 0.28 for minimum temperature and 0.15 for maximum temperature, while the error is 0.38 for minimum humidity and 0.04 for maximum humidity. Finally, the forecasting error for wind speed is 0.076. The lower the forecasting error rate, the better the accuracy.
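
    As a rough illustration of the BMA idea, the sketch below combines two invented member forecasts with weights derived from their training error and scores the result by MSE. A full BMA implementation would estimate the weights from the predictive likelihood (e.g. via EM), so the inverse-MSE weighting here is only an approximation.

```python
# All data and member models are made up for illustration.
observed = [24.0, 25.5, 23.8, 26.1, 25.0]          # e.g. daily max temperature
model_a  = [24.5, 25.0, 24.0, 26.5, 24.8]
model_b  = [23.0, 26.5, 23.0, 25.0, 25.5]

def mse(pred, obs):
    return sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs)

# Approximate BMA weights: inverse-MSE on a training window, normalized
w_a, w_b = 1 / mse(model_a, observed), 1 / mse(model_b, observed)
total = w_a + w_b
w_a, w_b = w_a / total, w_b / total

# Weighted average of the member forecasts is the BMA point forecast
bma_forecast = [w_a * a + w_b * b for a, b in zip(model_a, model_b)]
print(mse(bma_forecast, observed))
```

    On this toy data the averaged forecast scores a lower MSE than either member alone, which is the behavior the study relies on.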

  19. Induction Bonding of Prepreg Tape and Titanium Foil

    NASA Technical Reports Server (NTRS)

    Messier, Bernadette C.; Hinkley, Jeffrey A.; Johnston, Norman J.

    1998-01-01

    Hybrid structural laminates made of titanium foil and carbon-fiber-reinforced polymer composite offer potential for improved performance in aircraft structural applications. To obtain information needed for the automated fabrication of hybrid laminates, a series of bench-scale tests of the magnetic induction bonding of titanium foil and thermoplastic prepreg tape was conducted. Foil and prepreg specimens were placed in the gap of a toroid magnet mounted in a bench press. Several magnet power supplies were used to study power levels from 0.5 to 1.75 kW and frequencies from 50 to 120 kHz. Sol-gel surface-treated titanium foil, 0.0125 cm thick, and PIXA/IM7 prepreg tape were used in several lay-up configurations. Data were obtained on wedge peel bond strength, heating rate, and temperature ramp over a range of magnet power levels and frequencies at different "power-on" times for several magnet gap dimensions. These data will be utilized in assessing the potential for automated processing. Peel strengths of foil-tape bonds depended on the maximum temperature reached during heating and on the applied pressure. Maximum peel strengths were achieved at 1.25 kW and 80 kHz. Induction heating of the foil appears to be capable of good bonding through up to 10 plies of tape. Heat transfer calculations indicate that a 20-40 C temperature difference exists across the tape thickness during heat-up.

  20. Nature and Utilization of Civil Commitment for Substance Abuse in the United States.

    PubMed

    Christopher, Paul P; Pinals, Debra A; Stayton, Taylor; Sanders, Kellie; Blumberg, Lester

    2015-09-01

    Substance abuse is a leading cause of morbidity and mortality in the United States. Although civil commitment has been used to address substance abuse for more than a century, little is known today about the nature and use of substance-related commitment laws in the United States. We examined statutes between July 2010 and October 2012 from all 50 states and the District of Columbia for provisions authorizing civil commitment of adults for substance abuse and recorded the criteria and evidentiary standard for commitment and the location and the maximum duration of commitment orders. High-level state representatives evaluated these data and provided information on the use of commitment. Thirty-three states have statutory provisions for the civil commitment of persons because of substance abuse. The application of these statutes ranged from a few commitment cases to thousands annually. Although dangerousness was the most common basis for commitment, many states permitted it in other contexts. The maximum duration of treatment ranged from less than 1 month to more than 1 year for both initial and subsequent civil commitment orders. These findings show wide variability in the nature and application of civil commitment statutes for substance abuse in the United States. Such diversity reflects a lack of consensus on the role that civil commitment should play in managing substance abuse and the problems associated with it. © 2015 American Academy of Psychiatry and the Law.

  1. Expected Utility Based Decision Making under Z-Information and Its Application.

    PubMed

    Aliev, Rashad R; Mraiziq, Derar Atallah Talal; Huseynov, Oleg H

    2015-01-01

    Real-world decision-relevant information is often only partially reliable. The reasons include partial reliability of the source of information, misperceptions, psychological biases, incompetence, and so forth. Z-number-based formalization of information (Z-information) represents a natural language (NL) based value of a variable of interest together with the related NL-based reliability. What is important is that Z-information not only is the most general representation of real-world imperfect information but also has the highest descriptive power from a human perception point of view as compared to a fuzzy number. In this study, we present an approach to decision making under Z-information based on direct computation over Z-numbers. This approach utilizes the expected utility paradigm and is applied to a benchmark decision problem in the field of economics.

  2. [Utilities: a solution of a decision problem?].

    PubMed

    Koller, Michael; Ohmann, Christian; Lorenz, Wilfried

    2008-01-01

    Utility is a concept that originates from utilitarianism, a highly influential philosophical school in the Anglo-American world. The cornerstone of utilitarianism is the principle of maximum happiness or utility. In the medical sciences, this utility approach has been adopted and developed within the field of medical decision making. On an operational level, utility is the evaluation of a health state or an outcome on a one-dimensional scale ranging from 0 (death) to 1 (perfect health). By adding the concept of expectancy, the graphic representation of both concepts in a decision tree results in the specification of expected utilities and helps to resolve complex medical decision problems. Criticism of the utility approach relates to the rational perspective on humans (which is rejected by a considerable fraction of research in psychology) and to the artificial methods used in the evaluation of utility, such as Standard Gamble or Time Trade Off. These may well be the reason why the utility approach has never been accepted in Germany. Nevertheless, innovative concepts for defining goals in health care are urgently required, as the current debate in Germany on "Nutzen" (interestingly translated as 'benefit' instead of as 'utility') and integrated outcome models indicates. It remains to be seen whether this discussion will lead to a re-evaluation of the utility approach.

  3. A random utility model of delay discounting and its application to people with externalizing psychopathology.

    PubMed

    Dai, Junyi; Gunn, Rachel L; Gerst, Kyle R; Busemeyer, Jerome R; Finn, Peter R

    2016-10-01

    Previous studies have demonstrated that working memory capacity plays a central role in delay discounting in people with externalizing psychopathology. These studies used a hyperbolic discounting model, and its single parameter, a measure of delay discounting, was estimated using the standard method of searching for indifference points between intertemporal options. However, there are several problems with this approach. First, the deterministic perspective on delay discounting underlying the indifference-point method might be inappropriate. Second, the estimation procedure using the R2 measure often leads to poor model fit. Third, when parameters are estimated using indifference points only, much of the information collected in a delay discounting decision task is wasted. To overcome these problems, this article proposes a random utility model of delay discounting. The proposed model has 2 parameters: 1 for delay discounting and 1 for choice variability. It was fit to choice data obtained from a recently published data set using both maximum-likelihood and Bayesian parameter estimation. As in previous studies, the delay discounting parameter was significantly associated with both externalizing problems and working memory capacity. Furthermore, choice variability was also found to be significantly associated with both variables. This finding suggests that randomness in decisions may be a mechanism by which externalizing problems and low working memory capacity are associated with poor decision making. The random utility model thus has the advantage of disclosing the role of choice variability, which had been masked by the traditional deterministic model. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
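
    A minimal version of such a two-parameter random utility model might look like the sketch below: hyperbolic discounting supplies the subjective values, a logistic choice rule with a temperature parameter supplies the choice variability, and both parameters are fit by a crude grid-search maximum likelihood. The choice data and parameter grids are invented for illustration.

```python
import math

choices = [  # (immediate amount, delayed amount, delay in days, chose_delayed)
    (50, 100, 30, 1), (80, 100, 30, 0), (40, 100, 90, 1),
    (70, 100, 90, 0), (60, 100, 180, 0), (30, 100, 180, 1),
]

def discounted(amount, delay, k):
    return amount / (1.0 + k * delay)          # hyperbolic discounting

def neg_log_lik(k, temp):
    nll = 0.0
    for imm, del_amt, delay, chose_delayed in choices:
        diff = discounted(del_amt, delay, k) - imm
        p_delayed = 1.0 / (1.0 + math.exp(-diff / temp))  # logistic choice rule
        p = p_delayed if chose_delayed else 1.0 - p_delayed
        nll -= math.log(max(p, 1e-12))
    return nll

# Crude maximum-likelihood fit by grid search over both parameters
best = min(((k, t) for k in [0.001 * i for i in range(1, 100)]
                   for t in [1, 2, 5, 10, 20]),
           key=lambda kt: neg_log_lik(*kt))
print(best)
```

    The published model was fit with proper maximum-likelihood and Bayesian estimation rather than a grid search; the sketch only shows how the two parameters separate discounting from choice randomness.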

  4. 78 FR 40096 - Information Collection Activity; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-07-03

    ... DEPARTMENT OF AGRICULTURE Rural Utilities Service Information Collection Activity; Comment Request AGENCY: Rural Utilities Service, USDA. ACTION: Notice and request for comments. SUMMARY: In accordance... Service (RUS) invites comments on this information collection for which approval from the Office of...

  5. 42 CFR 480.101 - Scope and definitions.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) General... governing— (1) Disclosure of information collected, acquired or generated by a Utilization and Quality... of the problem and follow-up. Quality review study information means all documentation related to the...

  6. 42 CFR 480.101 - Scope and definitions.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) General... governing— (1) Disclosure of information collected, acquired or generated by a Utilization and Quality... of the problem and follow-up. Quality review study information means all documentation related to the...

  7. 42 CFR 480.101 - Scope and definitions.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... ORGANIZATION INFORMATION Utilization and Quality Control Quality Improvement Organizations (QIOs) General... governing— (1) Disclosure of information collected, acquired or generated by a Utilization and Quality... of the problem and follow-up. Quality review study information means all documentation related to the...

  8. Theoretical Calculations on the Feasibility of Microalgal Biofuels: Utilization of Marine Resources Could Help Realizing the Potential of Microalgae

    PubMed Central

    Park, Hanwool

    2016-01-01

    Abstract Microalgae have long been considered one of the most promising feedstocks for biofuel production, with better characteristics than conventional energy crops. There has been a wide range of estimates of the feasibility of microalgal biofuels based on various productivity assumptions and data from different scales. The theoretical maximum algal biofuel productivity, however, can be calculated from the amount of solar irradiance and the photosynthetic efficiency (PE), assuming other conditions are within the optimal range. Using actual surface solar irradiance data from around the world and the PE of algal culture systems, maximum algal biomass and biofuel productivities were calculated, and the feasibility of algal biofuel was assessed from these estimates. The results revealed that biofuel production would not easily meet the economic break-even point and may not be sustainable at large scale with current algal biotechnology. Substantial reductions in production cost, improvements in lipid productivity, recycling of resources, and utilization of non-conventional resources will be necessary for feasible mass production of algal biofuel. Among the emerging technologies, cultivation of microalgae in the ocean shows great potential to meet the resource requirements and economic feasibility of algal biofuel production by utilizing various marine resources. PMID:27782372

  9. Theoretical Calculations on the Feasibility of Microalgal Biofuels: Utilization of Marine Resources Could Help Realizing the Potential of Microalgae.

    PubMed

    Park, Hanwool; Lee, Choul-Gyun

    2016-11-01

    Microalgae have long been considered one of the most promising feedstocks for biofuel production, with better characteristics than conventional energy crops. There has been a wide range of estimates of the feasibility of microalgal biofuels based on various productivity assumptions and data from different scales. The theoretical maximum algal biofuel productivity, however, can be calculated from the amount of solar irradiance and the photosynthetic efficiency (PE), assuming other conditions are within the optimal range. Using actual surface solar irradiance data from around the world and the PE of algal culture systems, maximum algal biomass and biofuel productivities were calculated, and the feasibility of algal biofuel was assessed from these estimates. The results revealed that biofuel production would not easily meet the economic break-even point and may not be sustainable at large scale with current algal biotechnology. Substantial reductions in production cost, improvements in lipid productivity, recycling of resources, and utilization of non-conventional resources will be necessary for feasible mass production of algal biofuel. Among the emerging technologies, cultivation of microalgae in the ocean shows great potential to meet the resource requirements and economic feasibility of algal biofuel production by utilizing various marine resources. © 2016 The Authors. Biotechnology Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
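
    The theoretical-maximum calculation has the simple form irradiance × PE / biomass energy content. A back-of-envelope sketch, with all numbers assumed for illustration rather than taken from the paper:

```python
# Illustrative assumptions, not the paper's values:
irradiance = 180.0      # mean surface solar irradiance, W/m^2 (assumed)
pe = 0.05               # photosynthetic efficiency of the culture system (assumed)
energy_content = 20e6   # J per kg of dry algal biomass (assumed)

seconds_per_year = 365 * 24 * 3600

# Upper bound: all captured solar energy ends up stored in biomass
biomass_kg_m2_yr = irradiance * pe * seconds_per_year / energy_content
print(biomass_kg_m2_yr)   # kg dry biomass per m^2 per year
```

    With these assumed inputs the bound comes out near 14 kg/m²/yr; lipid fraction, harvesting losses, and conversion efficiency then shrink the achievable biofuel yield well below this ceiling, which is the gap the paper quantifies.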

  10. 78 FR 55238 - Notice of Request for Extension of a Currently Approved Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-10

    ... performance of the functions of the agency, including whether the information will have practical utility; (b..., utility, and clarity of the information to be collected; and (d) ways to minimize the burden of the..., electronic, mechanical, or other technological collection techniques or other forms of information technology...

  11. 75 FR 13073 - Notice of Request for Extension of a Currently Approved Information Collection

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-18

    ... performance of the functions of the agency, including whether the information will have practical utility; (b..., utility, and clarity of the information to be collected; and (d) ways to minimize the burden of the..., electronic, mechanical, or other technological collection techniques or other forms of information technology...

  12. 78 FR 59049 - 60-Day Notice of Proposed Information Collection: Public Housing Energy Audits and Utility...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-25

    ... Information Collection: Public Housing Energy Audits and Utility Allowances AGENCY: Office of the Assistant.... Overview of Information Collection Title of Information Collection: Public Housing Energy Audits and... proposed use: 24 CFR 965.301, Subpart C, Energy Audit and Energy Conservation Measures, requires PHAs to...

  13. Bayesian or Laplacien inference, entropy and information theory and information geometry in data and signal processing

    NASA Astrophysics Data System (ADS)

    Mohammad-Djafari, Ali

    2015-01-01

    The main object of this tutorial article is first to review the main inference tools using the Bayesian approach, entropy, information theory, and their corresponding geometries. This review focuses on the ways these tools have been used in data, signal, and image processing. After a short introduction of the different quantities related to the Bayes rule, entropy and the Maximum Entropy Principle (MEP), relative entropy and the Kullback-Leibler divergence, and Fisher information, we study their use in different fields of data and signal processing, such as entropy in source separation, Fisher information in model order selection, different maximum-entropy-based methods in time-series spectral estimation and, finally, general linear inverse problems.
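
    Two of the quantities named above, Shannon entropy and the Kullback-Leibler divergence, can be computed directly for small discrete distributions; the distributions below are invented for illustration.

```python
import math

p = [0.5, 0.3, 0.2]   # a discrete distribution (invented)
q = [0.4, 0.4, 0.2]   # a second distribution to compare against (invented)

# Shannon entropy of p, in bits
entropy = -sum(pi * math.log2(pi) for pi in p)

# Kullback-Leibler divergence KL(p || q), in bits; always >= 0,
# zero only when p and q coincide
kl = sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q))

print(entropy, kl)
```

    Note that KL divergence is asymmetric (KL(p||q) ≠ KL(q||p) in general), which is why the tutorial treats the associated information geometries separately from ordinary metric distances.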

  14. 77 FR 76169 - Increase in Maximum Tuition and Fee Amounts Payable under the Post-9/11 GI Bill

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-26

    .... Correspondence $9,324.89. Post 9/11 Entitlement Charge Amount for Tests Licensing and Certification Tests... DEPARTMENT OF VETERANS AFFAIRS Increase in Maximum Tuition and Fee Amounts Payable under the Post... this notice is to inform the public of the increase in the Post-9/11 GI Bill maximum tuition and fee...

  15. Nutritional information and health warnings on wine labels: Exploring consumer interest and preferences.

    PubMed

    Annunziata, A; Pomarici, E; Vecchio, R; Mariani, A

    2016-11-01

    This paper aims to contribute to the current debate on the inclusion of nutritional information and health warnings on wine labels by exploring consumers' interest and preferences. The results of a survey conducted on a sample of Italian wine consumers (N = 300) show the strong interest of respondents in the inclusion of such information on the label. Conjoint analysis reveals that consumers assign greater utility to health warnings, followed by nutritional information. Cluster analysis shows the existence of three different consumer segments. The first cluster, which included mainly female consumers (over 55) and those with high wine involvement, revealed greater awareness of the links between wine and health and better knowledge of wine's nutritional properties, preferring a more detailed nutritional label, such as a panel with GDA%. By contrast, the other two clusters, consisting of individuals who generally find it more difficult to understand nutritional labels, preferred the less detailed label of a glass showing calories. The second and largest cluster, comprising mainly younger men (under 44), showed the highest interest in health warnings, while the third cluster, with a relatively low level of education, preferred the specification of the number of glasses not to exceed. Our results support the idea that policy makers should consider introducing a mandatory nutritional label in the easier-to-implement and not-too-costly form of a glass with calories, rotating health warnings, and the maximum number of glasses not to exceed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Assessment of precursory information in seismo-electromagnetic phenomena

    NASA Astrophysics Data System (ADS)

    Han, P.; Hattori, K.; Zhuang, J.

    2017-12-01

    Previous statistical studies have shown correlations between seismo-electromagnetic phenomena and sizeable earthquakes in Japan. In this study, utilizing Molchan's error diagram, we evaluate whether these phenomena contain precursory information and discuss how they can be used in short-term forecasting of large earthquake events. In practice, for a given series of precursory signals and related earthquake events, each prediction strategy is characterized by the leading time of alarms, the length of the alarm window, and the alarm radius (area) and magnitude. The leading time is the interval between a detected anomaly and its following alarm, and the alarm window is the duration for which an alarm lasts. The alarm radius and magnitude are the maximum predictable distance and minimum predictable magnitude of earthquake events, respectively. We introduce the modified probability gain (PG') and the probability difference (D') to quantify forecasting performance and to explore the optimal prediction parameters for a given electromagnetic observation. The methodology is first applied to ULF magnetic data and GPS-TEC data. The results show that earthquake predictions based on electromagnetic anomalies are significantly better than random guesses, indicating that the data contain potentially useful precursory information. Meanwhile, we reveal the optimal prediction parameters for both observations. The methodology proposed in this study could also be applied to other pre-earthquake phenomena to determine whether they contain precursory information and then, on this basis, to explore the optimal alarm parameters for practical short-term forecasting.
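
    The Molchan-diagram bookkeeping can be sketched as follows: invented alarm windows and event times yield the fraction of time on alarm (tau), the miss rate (nu), and a simple probability gain, defined here as the fraction of events predicted divided by the fraction of time covered by alarms. (PG' and D' in the study are refinements of this basic quantity.)

```python
# Alarm windows and event times are invented for illustration.
total_days = 1000
alarms = [(100, 110), (400, 420), (700, 705)]    # (start_day, end_day) windows
events = [105, 415, 650, 702]                    # earthquake days

alarm_days = sum(end - start for start, end in alarms)
tau = alarm_days / total_days                    # fraction of time on alarm

hits = sum(any(s <= e < t for s, t in alarms) for e in events)
nu = 1 - hits / len(events)                      # miss rate (Molchan y-axis)

# Probability gain: predicted-event fraction relative to time covered;
# a random-guess strategy has gain ~ 1
prob_gain = (hits / len(events)) / tau
print(tau, nu, prob_gain)
```

    Plotting nu against tau as the alarm parameters vary traces out the Molchan error diagram; strategies below the nu = 1 - tau diagonal beat random guessing.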

  17. Options for Martian propellant production

    NASA Technical Reports Server (NTRS)

    Dowler, Warren; French, James; Ramohalli, Kumar

    1991-01-01

    A quantitative evaluation methodology is presented for utilizing in-situ resources on Mars to produce useful substances. The emphasis is on the chemical processes. Various options considering different feedstocks (mostly carbon dioxide, water, and iron oxides) are carefully examined for their product mix and energy needs. Oxygen, carbon monoxide, alcohols, and other chemicals are the end products. The chemical processes involve electrolysis, methanation, and variations thereof. It is shown that maximizing the product utility is more important than the production of oxygen, methane, or alcohols per se. An important factor is the storage of the chemicals produced. The product utility is dependent, to some extent, upon the mission. A combination of the stability, the enthalpy of formation, and the mass fraction of the products is seen to yield a fairly good quantitative feel for the overall utility and maximum mission impact.

  18. Teach It, Don’t Preach It: The Differential Effects of Directly-communicated and Self-generated Utility Value Information

    PubMed Central

    Canning, Elizabeth A.; Harackiewicz, Judith M.

    2015-01-01

    Social-psychological interventions in education have used a variety of “self-persuasion” or “saying-is-believing” techniques to encourage students to articulate key intervention messages. These techniques are used in combination with more overt strategies, such as the direct communication of messages in order to promote attitude change. However, these different strategies have rarely been systematically compared, particularly in controlled laboratory settings. We focus on one intervention based in expectancy-value theory designed to promote perceptions of utility value in the classroom and test different intervention techniques to promote interest and performance. Across three laboratory studies, we used a mental math learning paradigm in which we varied whether students wrote about utility value for themselves or received different forms of directly-communicated information about the utility value of a novel mental math technique. In Study 1, we examined the difference between directly-communicated and self-generated utility-value information and found that directly-communicated utility-value information undermined performance and interest for individuals who lacked confidence, but that self-generated utility had positive effects. However, Study 2 suggests that these negative effects of directly-communicated utility value can be ameliorated when participants are also given the chance to generate their own examples of utility value, revealing a synergistic effect of directly-communicated and self-generated utility value. In Study 3, we found that individuals who lacked confidence benefited more when everyday examples of utility value were communicated, rather than career and school examples. PMID:26495326

  19. High-Lift Capability of Low Aspect Ratio Wings Utilizing Circulation Control and Upper Surface Blowing

    DTIC Science & Technology

    1980-07-01

    span, ft (m) CD Drag coefficient, D/qS CD Drag coefficient at zero lift CL Lift coefficient, L/qS CL Lift curve slope, ∂CL/∂α CL Maximum lift...recording on magnetic tape utilizing a Beckman 210 high-speed acquisition system. The wing-fuselage model was mounted in the test section such that...6, 7, and 8 show the tip sails have little impact on the zero- or low-lift drag, but these sails definitely influence the induced drag that is deve

  20. Martian resource utilization. 1: Plant design and transportation selection criteria

    NASA Technical Reports Server (NTRS)

    Kaloupis, Peter; Nolan, Peter E.; Cutler, Andrew H.

    1992-01-01

    Indigenous Space Materials Utilization (ISMU) provides an opportunity to make Mars exploration mission scenarios more affordable by reducing the initial mass necessary in Low Earth Orbit (LEO). Martian propellant production is discussed in terms of simple design and economic tradeoffs. Fuel and oxidizer combinations included are H2/O2, CH4/O2, and CO/O2. Process flow diagrams with power and mass flow requirements are presented for a variety of processes, and some design requirements are derived. Maximum allowable plant masses for single use amortization are included.

  1. Martian resource utilization. 1: Plant design and transportation selection criteria

    NASA Astrophysics Data System (ADS)

    Kaloupis, Peter; Nolan, Peter E.; Cutler, Andrew H.

    Indigenous Space Materials Utilization (ISMU) provides an opportunity to make Mars exploration mission scenarios more affordable by reducing the initial mass necessary in Low Earth Orbit (LEO). Martian propellant production is discussed in terms of simple design and economic tradeoffs. Fuel and oxidizer combinations included are H2/O2, CH4/O2, and CO/O2. Process flow diagrams with power and mass flow requirements are presented for a variety of processes, and some design requirements are derived. Maximum allowable plant masses for single use amortization are included.

  2. Flight and Preflight Tests of a Ram Jet Burning Magnesium Slurry Fuel and Utilizing a Solid-propellant Gas Generator for Fuel Expulsion

    NASA Technical Reports Server (NTRS)

    Bartlett, Walter A., Jr.; Hagginbotham, William K., Jr.

    1955-01-01

    Data obtained from the first flight test of a ram jet utilizing a magnesium slurry fuel are presented. The ram jet accelerated from a Mach number of 1.75 to a Mach number of 3.48 in 15.5 seconds. During this period, the maximum values of air specific impulse and gross thrust coefficient were calculated to be 151 seconds and 0.658, respectively. The rocket gas generator used as a fuel-pumping system operated successfully.

  3. Maneuvering strategies using CMGs

    NASA Technical Reports Server (NTRS)

    Oh, H. S.; Vadali, S. R.

    1988-01-01

    This paper considers control strategies for maneuvering spacecraft using Single-Gimbal Control Momentum Gyros (CMGs). A pyramid configuration of four gyros is utilized. Preferred initial gimbal angles for maximum utilization of CMG momentum are obtained for some known torque commands. Feedback control laws are derived from the stability point of view by using Liapunov's second theorem. The gyro rates are obtained by the pseudo-inverse technique. The effect of gimbal rate bounds on controllability is studied for an example maneuver. Singularity avoidance is based on limiting the gyro rates depending on a singularity index.

  4. Baseline tests of the power-train electric delivery van

    NASA Technical Reports Server (NTRS)

    Lumannick, S.; Dustin, M. O.; Bozek, J. M.

    1977-01-01

    Vehicle maximum speed, range at constant speed, range over stop-and-go driving schedules, maximum acceleration, gradeability, gradeability limit, road energy consumption, road power, indicated energy consumption, braking capability, battery charger efficiency, and battery characteristics were determined for a modified utility van powered by sixteen 6-volt batteries connected in series. A chopper controller actuated by a foot accelerator pedal changes the voltage applied to the 22-kilowatt (30-hp) series-wound drive motor. In addition to the conventional hydraulic braking system, the vehicle has hydraulic regenerative braking. Cycle tests and acceleration tests were conducted with and without hydraulic regeneration.

  5. Impact of Parameter Uncertainty Assessment of Critical SWAT Output Simulations

    USDA-ARS?s Scientific Manuscript database

    Watershed models are increasingly being utilized to evaluate alternate management scenarios for improving water quality. The concern for using these tools in extensive programs such as the National Total Maximum Daily Load (TMDL) program is that the certainty of model results and efficacy of managem...

  6. 14 CFR 23.71 - Glide: Single-engine airplanes.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Glide: Single-engine airplanes. 23.71... AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Flight Performance § 23.71 Glide: Single-engine airplanes. The maximum horizontal distance traveled in still air, in nautical miles...

  7. 14 CFR 23.71 - Glide: Single-engine airplanes.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Glide: Single-engine airplanes. 23.71... AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Flight Performance § 23.71 Glide: Single-engine airplanes. The maximum horizontal distance traveled in still air, in nautical miles...

  8. 14 CFR 23.71 - Glide: Single-engine airplanes.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 1 2013-01-01 2013-01-01 false Glide: Single-engine airplanes. 23.71... AIRWORTHINESS STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Flight Performance § 23.71 Glide: Single-engine airplanes. The maximum horizontal distance traveled in still air, in nautical miles...

  9. 14 CFR 23.3 - Airplane categories.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 1 2012-01-01 2012-01-01 false Airplane categories. 23.3 Section 23.3... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES General § 23.3 Airplane categories... airplanes that have a seating configuration, excluding pilot seats, of nine or less, a maximum certificated...

  10. 24 CFR 965.505 - Standards for allowances for utilities.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... PHA installs air conditioning, it shall provide, to the maximum extent economically feasible, systems... systems that offer each resident the option to choose air conditioning shall include retail meters or... allowances. For systems that offer residents the option to choose air conditioning but cannot be checkmetered...

  11. 14 CFR 23.683 - Operation tests.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Design and Construction Control Systems § 23.683 Operation tests. (a) It must be shown by operation tests that, when the controls are... controls, loads not less than those corresponding to the maximum pilot effort established under § 23.405...

  12. 14 CFR 23.683 - Operation tests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Design and Construction Control Systems § 23.683 Operation tests. (a) It must be shown by operation tests that, when the controls are... controls, loads not less than those corresponding to the maximum pilot effort established under § 23.405...

  13. 14 CFR 23.683 - Operation tests.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Design and Construction Control Systems § 23.683 Operation tests. (a) It must be shown by operation tests that, when the controls are... controls, loads not less than those corresponding to the maximum pilot effort established under § 23.405...

  14. 14 CFR 23.683 - Operation tests.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Design and Construction Control Systems § 23.683 Operation tests. (a) It must be shown by operation tests that, when the controls are... controls, loads not less than those corresponding to the maximum pilot effort established under § 23.405...

  15. 14 CFR 23.683 - Operation tests.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES Design and Construction Control Systems § 23.683 Operation tests. (a) It must be shown by operation tests that, when the controls are... controls, loads not less than those corresponding to the maximum pilot effort established under § 23.405...

  16. 78 FR 67465 - Loan Guaranty: Maximum Allowable Attorney Fees

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-12

    ... foreclosure attorney fee. This fee recognizes the additional work required to resume the foreclosure action, while also accounting for the expectation that some work from the previous action may be utilized in... for legal fees in connection with the termination of single-family housing loans, including...

  17. 14 CFR 23.77 - Balked landing.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... of more than 6,000 pounds maximum weight and each normal, utility, and acrobatic category turbine... movement of the power controls from minimum flight-idle position; (2) The landing gear extended; (3) The... of movement of the power controls from the minimum flight idle position; (2) Landing gear extended...

  18. 29 CFR 515.1 - Definitions.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... maximum-hour or State minimum-wage regulations. (f) Official forms. The term official forms means forms... 29 Labor 3 2010-07-01 2010-07-01 false Definitions. 515.1 Section 515.1 Labor Regulations Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR REGULATIONS UTILIZATION OF STATE...

  19. Distillation Time as Tool for Improved Antimalarial Activity and Differential Oil Composition of Cumin Seed Oil.

    PubMed

    Zheljazkov, Valtcho D; Gawde, Archana; Cantrell, Charles L; Astatkie, Tess; Schlegel, Vicki

    2015-01-01

    A steam distillation extraction kinetics experiment was conducted to estimate the essential oil yield, composition, antimalarial activity, and antioxidant capacity of cumin (Cuminum cyminum L.) seed (fruits). Furthermore, regression models were developed to predict essential oil yield and composition for a given duration of the steam distillation time (DT). Ten DT durations were tested in this study: 5, 7.5, 15, 30, 60, 120, 240, 360, 480, and 600 min. Oil yields increased with increasing DT. The maximum oil yield (2.3 g/100 g seed) was achieved at 480 min; longer DT did not increase oil yields. The concentrations of the major oil constituents α-pinene (0.14-0.5% range), β-pinene (3.7-10.3% range), γ-cymene (5-7.3% range), γ-terpinene (1.8-7.2% range), cumin aldehyde (50-66% range), α-terpinen-7-al (3.8-16% range), and β-terpinen-7-al (12-20% range) varied as a function of the DT. The concentrations of α-pinene, β-pinene, γ-cymene, and γ-terpinene in the oil increased with increasing DT: α-pinene was highest in the oil obtained at 600 min DT; β-pinene and γ-terpinene reached maximum concentrations at 360 min DT; and γ-cymene reached a maximum at 60 min DT. Cumin aldehyde was high in the oils obtained at 5-60 min DT and low in those obtained at 240-600 min DT; α-terpinen-7-al reached a maximum in the oils obtained at 480 or 600 min DT, whereas β-terpinen-7-al reached a maximum concentration at 60 min DT. The yield of individual oil constituents (calculated from the oil yields and the concentration of a given compound at a particular DT) increased and reached a maximum at 480 or 600 min DT. The antimalarial activity of the cumin seed oil obtained during the 0-5 and 5-7.5 min DT timeframes was twice as high as that of the oils obtained at the other DT. The antioxidant capacity was highest in the oil obtained at 30 min DT and lowest in the oil from 360 min DT. The Michaelis-Menten and power nonlinear regression models developed in this study can be utilized to predict the essential oil yield and composition of cumin seed at any given DT and may also be useful for comparing previous reports on cumin oil yield and composition. DT can be utilized to obtain cumin seed oil with improved antimalarial activity, improved antioxidant capacity, and various compositions. This study opens the possibility of distinct marketing and utilization for these improved oils.
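    As a rough illustration of how a Michaelis-Menten-type model predicts yield from distillation time, the sketch below fits y = Vmax·t/(Km + t) to synthetic yield data via the Lineweaver-Burk linearisation 1/y = (Km/Vmax)(1/t) + 1/Vmax. The data points, and the use of this particular linearisation, are assumptions for illustration; the study itself used nonlinear regression.

```python
# Illustrative fit of the Michaelis-Menten form y = Vmax*t/(Km + t) to
# yield-vs-distillation-time data via the Lineweaver-Burk linearisation.
# The data below are synthetic stand-ins, not the study's measurements.
def fit_michaelis_menten(t, y):
    x = [1.0 / ti for ti in t]          # 1/t
    z = [1.0 / yi for yi in y]          # 1/y
    n = len(t)
    mx, mz = sum(x) / n, sum(z) / n
    slope = sum((xi - mx) * (zi - mz) for xi, zi in zip(x, z)) / \
            sum((xi - mx) ** 2 for xi in x)
    intercept = mz - slope * mx
    vmax = 1.0 / intercept              # asymptotic (maximum) yield
    km = slope * vmax                   # time at which yield reaches vmax/2
    return vmax, km

t = [5, 15, 30, 60, 120, 240, 480]                 # min (synthetic)
y = [0.10, 0.27, 0.46, 0.71, 0.98, 1.22, 1.38]     # g/100 g seed (synthetic)
vmax, km = fit_michaelis_menten(t, y)
print(round(vmax, 2), round(km, 1))
```

    For these synthetic points (generated near Vmax = 1.6, Km = 75 min), the recovered parameters land close to the generating values, which is the sense in which such a model can interpolate yield at any DT.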

  20. Ensemble-based methods for forecasting census in hospital units

    PubMed Central

    2013-01-01

    Background The ability to accurately forecast census counts in hospital departments has considerable implications for hospital resource allocation. In recent years several different methods have been proposed for forecasting census counts; however, many of these approaches do not use available patient-specific information. Methods In this paper we present an ensemble-based methodology for forecasting the census under a framework that simultaneously incorporates both (i) arrival trends over time and (ii) patient-specific baseline and time-varying information. The proposed model for predicting census has three components, namely: the current census count, the number of daily arrivals, and the number of daily departures. To model the number of daily arrivals, we use a seasonality-adjusted Poisson Autoregressive (PAR) model whose parameter estimates are obtained via conditional maximum likelihood. The number of daily departures is predicted by modeling the probability of departure from the census using logistic regression models that are adjusted for the amount of time spent in the census and incorporate both patient-specific baseline and time-varying covariate information. We illustrate our approach using neonatal intensive care unit (NICU) data collected at Women & Infants Hospital, Providence, RI, comprising 1001 consecutive NICU admissions between April 1, 2008 and March 31, 2009. Results Our results demonstrate statistically significantly improved prediction accuracy for 3-, 5-, and 7-day census forecasts and increased precision of our forecasting model compared with a forecasting approach that ignores patient-specific information. Conclusions Forecasting models that utilize patient-specific baseline and time-varying information make the most of data typically available and have the capacity to substantially improve census forecasts. PMID:23721123
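    A minimal sketch of the arrivals component described above: a seasonality-adjusted Poisson autoregressive model whose conditional log-likelihood would be maximized over its parameters. The functional form, parameter names, and toy data are assumptions for illustration, not the authors' code.

```python
import math

# Minimal sketch (not the authors' code) of a seasonality-adjusted Poisson
# autoregressive (PAR) model for daily arrivals: conditional on yesterday's
# count y[t-1] and the day of week d[t],
#   lambda_t = exp(b0 + b1*log(1 + y[t-1]) + s[d[t]]),
# and parameters are chosen to maximise the conditional Poisson log-likelihood.
def conditional_loglik(y, dow, b0, b1, s):
    """Conditional Poisson log-likelihood of counts y[1:], given y[0]."""
    ll = 0.0
    for t in range(1, len(y)):
        lam = math.exp(b0 + b1 * math.log(1 + y[t - 1]) + s[dow[t]])
        ll += y[t] * math.log(lam) - lam - math.lgamma(y[t] + 1)
    return ll

# Toy data: two weeks of arrival counts and day-of-week indices (illustrative).
y = [4, 6, 5, 7, 3, 2, 2, 5, 6, 4, 8, 3, 1, 2]
dow = [t % 7 for t in range(len(y))]
s = [0.0, 0.1, 0.1, 0.2, 0.0, -0.4, -0.5]   # weekend dip (illustrative)
print(round(conditional_loglik(y, dow, b0=1.2, b1=0.2, s=s), 2))
```

    Conditional maximum likelihood then amounts to maximizing this function over (b0, b1, s) with a generic optimizer.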

  1. Ensemble-based methods for forecasting census in hospital units.

    PubMed

    Koestler, Devin C; Ombao, Hernando; Bender, Jesse

    2013-05-30

    The ability to accurately forecast census counts in hospital departments has considerable implications for hospital resource allocation. In recent years several different methods have been proposed for forecasting census counts; however, many of these approaches do not use available patient-specific information. In this paper we present an ensemble-based methodology for forecasting the census under a framework that simultaneously incorporates both (i) arrival trends over time and (ii) patient-specific baseline and time-varying information. The proposed model for predicting census has three components, namely: the current census count, the number of daily arrivals, and the number of daily departures. To model the number of daily arrivals, we use a seasonality-adjusted Poisson Autoregressive (PAR) model whose parameter estimates are obtained via conditional maximum likelihood. The number of daily departures is predicted by modeling the probability of departure from the census using logistic regression models that are adjusted for the amount of time spent in the census and incorporate both patient-specific baseline and time-varying covariate information. We illustrate our approach using neonatal intensive care unit (NICU) data collected at Women & Infants Hospital, Providence, RI, comprising 1001 consecutive NICU admissions between April 1, 2008 and March 31, 2009. Our results demonstrate statistically significantly improved prediction accuracy for 3-, 5-, and 7-day census forecasts and increased precision of our forecasting model compared with a forecasting approach that ignores patient-specific information. Forecasting models that utilize patient-specific baseline and time-varying information make the most of data typically available and have the capacity to substantially improve census forecasts.

  2. Electric utility companies and geothermal power

    NASA Technical Reports Server (NTRS)

    Pivirotto, D. S.

    1976-01-01

    The requirements of the electric utility industry, as the primary potential market for geothermal energy, are analyzed on the basis of a series of structured interviews with utility company and financial institution executives. The interviews were designed to determine what information and technologies would be required before utilities would make investment decisions in favor of geothermal energy, the time frame in which the information and technologies would have to be available, and the influence of governmental policies. The paper describes geothermal resources, the electric utility industry and its structure, the forces influencing utility companies, and their relationship to geothermal energy. A strategy for federal stimulation of utility investment in geothermal energy is suggested. Possibilities are discussed for stimulating utility investment through financial incentives, amelioration of institutional barriers, and technological improvements.

  3. On the statistical equivalence of restrained-ensemble simulations with the maximum entropy method

    PubMed Central

    Roux, Benoît; Weare, Jonathan

    2013-01-01

    An issue of general interest in computer simulations is to incorporate information from experiments into a structural model. An important caveat in pursuing this goal is to avoid corrupting the resulting model with spurious and arbitrary biases. While the problem of biasing thermodynamic ensembles can be formulated rigorously using the maximum entropy method introduced by Jaynes, the approach can be cumbersome in practical applications with the need to determine multiple unknown coefficients iteratively. A popular alternative strategy to incorporate the information from experiments is to rely on restrained-ensemble molecular dynamics simulations. However, the fundamental validity of this computational strategy remains in question. Here, it is demonstrated that the statistical distribution produced by restrained-ensemble simulations is formally consistent with the maximum entropy method of Jaynes. This clarifies the underlying conditions under which restrained-ensemble simulations will yield results that are consistent with the maximum entropy method. PMID:23464140
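    The maximum-entropy construction referred to above can be stated compactly in the standard Jaynes form (notation here is generic, not the paper's):

```latex
% Among all ensembles p(x) that reproduce the experimental averages
% \langle f_i \rangle = f_i^{\mathrm{exp}}, the one with minimal relative
% entropy to the unbiased ensemble p_0(x) has the exponential form
p(x) \;=\; \frac{1}{Z(\lambda)}\, p_0(x)\,
      \exp\!\Big(-\sum_i \lambda_i f_i(x)\Big),
\qquad
Z(\lambda) \;=\; \int p_0(x)\,
      \exp\!\Big(-\sum_i \lambda_i f_i(x)\Big)\, dx .
```

    The Lagrange multipliers λ_i are fixed by requiring ⟨f_i⟩_p = f_i^exp; their iterative determination is the "multiple unknown coefficients" burden the abstract mentions, which restrained-ensemble simulation sidesteps.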

  4. Modeling of depth to base of Last Glacial Maximum and seafloor sediment thickness for the California State Waters Map Series, eastern Santa Barbara Channel, California

    USGS Publications Warehouse

    Wong, Florence L.; Phillips, Eleyne L.; Johnson, Samuel Y.; Sliter, Ray W.

    2012-01-01

    Models of the depth to the base of Last Glacial Maximum and sediment thickness over the base of Last Glacial Maximum for the eastern Santa Barbara Channel are a key part of the maps of shallow subsurface geology and structure for offshore Refugio to Hueneme Canyon, California, in the California State Waters Map Series. A satisfactory interpolation of the two datasets that accounted for regional geologic structure was developed using geographic information systems modeling and graphics software tools. Regional sediment volumes were determined from the model. Source data files suitable for geographic information systems mapping applications are provided.

  5. A 15 kWe (nominal) solar thermal-electric power conversion concept definition study: Steam Rankine reciprocator system

    NASA Technical Reports Server (NTRS)

    Wingenback, W.; Carter, J., Jr.

    1979-01-01

    A conceptual design of a 3600-rpm reciprocating expander was developed for a maximum thermal input power of 80 kW. The conceptual design covered two engine configurations: a single-cylinder design for simple-cycle operation and a two-cylinder design for reheat-cycle operation. The reheat expander contains a high-pressure cylinder and a low-pressure cylinder, with steam being reheated to the initial inlet temperature after expansion in the high-pressure cylinder. Power generation is accomplished with a three-phase induction motor coupled directly to the expander and connected electrically to the public utility power grid. The expander, generator, water pump, and control system weigh 297 kg and are dish mounted. The steam condenser, water tank, and accessory pumps are ground based. Maximum heat engine efficiency is 33 percent; maximum power conversion efficiency is 30 percent. Total cost is $3,307, or $138 per kW of maximum output power.
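    The quoted figures are mutually consistent, as the short calculation below shows (an illustrative check, not part of the report): 80 kW thermal input at 30 percent conversion efficiency gives the maximum electrical output, and dividing the total cost by that output reproduces the quoted cost per kilowatt.

```python
# Consistency check of the quoted figures (illustrative, not from the report).
thermal_input_kw = 80.0
conversion_efficiency = 0.30
total_cost_usd = 3307.0

max_output_kw = thermal_input_kw * conversion_efficiency   # 24 kW electrical
cost_per_kw = total_cost_usd / max_output_kw               # ~138 $/kW
print(max_output_kw, round(cost_per_kw, 1))
```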

  6. Potential role of motion for enhancing maximum output energy of triboelectric nanogenerator

    NASA Astrophysics Data System (ADS)

    Byun, Kyung-Eun; Lee, Min-Hyun; Cho, Yeonchoo; Nam, Seung-Geol; Shin, Hyeon-Jin; Park, Seongjun

    2017-07-01

    Although the triboelectric nanogenerator (TENG) has been explored as one of the possible candidates for the auxiliary power source of portable and wearable devices, the output energy of a TENG is still insufficient to charge such devices with daily motion. Moreover, the fundamental aspects of the maximum possible energy of a TENG related to human motion are not understood systematically. Here, we confirmed the possibility of charging commercial portable and wearable devices such as smart phones and smart watches by utilizing the mechanical energy generated by human motion. We confirmed by theoretical extraction that the maximum possible energy is related to specific form factors of a TENG. Furthermore, we experimentally demonstrated the effect of human motion, in terms of kinetic energy and impulse, by varying velocity and elasticity, and clarified how to improve the maximum possible energy of a TENG. This study gives insight into the design of a TENG to obtain a large amount of energy in a limited space.

  7. Drivers of Intra-Summer Seasonality and Daily Variability of Coastal Low Cloudiness in California Subregions

    NASA Astrophysics Data System (ADS)

    Schwartz, R. E.; Iacobellis, S.; Gershunov, A.; Williams, P.; Cayan, D. R.

    2014-12-01

    Summertime low cloud intrusion into the terrestrial west coast of North America impacts human, ecological, and logistical systems. Over a broad region of the West Coast, summer (May-September) coastal low cloudiness (CLC) varies coherently on interannual to interdecadal timescales and has been found to be organized by North Pacific sea surface temperature. Broad-scale studies of low stratiform cloudiness over ocean basins also find that the season of maximum low stratus corresponds to the season of maximum lower-tropospheric stability (LTS) or estimated inversion strength. We utilize an 18-summer record of CLC derived from NASA/NOAA Geostationary Operational Environmental Satellite (GOES) data at 4-km resolution over California (CA) to make a more nuanced spatial and temporal examination of intra-summer variability in CLC and its drivers. We find that uniform spatial coherency over CA is not apparent for intra-summer variability in CLC. On monthly to daily timescales, at least two distinct subregions of coastal CA can be identified, where relationships between meteorology and stratus variability appear to change throughout summer in each subregion. While north of Point Conception and offshore the timing of maximum CLC closely coincides with maximum LTS, in the Southern CA Bight and northern Baja region maximum CLC occurs up to about a month before maximum LTS. It appears that summertime CLC in this southern region is not as strongly related to LTS as in the northern region. In particular, although the relationship is strong in May and June, starting in July the daily relationship between LTS and CLC in the south begins to deteriorate. Preliminary results indicate a moderate association between decreased CLC in the south and increased precipitable water content above 850 hPa on daily time scales beginning in July.
Relationships between daily CLC variability and meteorological variables including winds, inland temperatures, relative humidity, and geopotential heights within and above the marine boundary layer are investigated and dissected by month, CA subregion, and cloud height. The rich spatial detail of the satellite derived CLC record is utilized to examine the propagation in time and space of CLC on synoptic scales within and among subregions.

  8. A data fusion-based methodology for optimal redesign of groundwater monitoring networks

    NASA Astrophysics Data System (ADS)

    Hosseini, Marjan; Kerachian, Reza

    2017-09-01

    In this paper, a new data fusion-based methodology is presented for the spatio-temporal (S-T) redesign of Groundwater Level Monitoring Networks (GLMNs). Kriged maps of three different criteria (i.e., marginal entropy of water table levels, estimation error variances of mean water table levels, and estimated long-term changes in water level) are combined to delineate monitoring sub-areas of high and low priority, so that a different spatial pattern can be considered for each sub-area. The best spatial sampling scheme is selected by applying a new method in which a regular hexagonal gridding pattern and the Thiessen polygon approach are utilized in the sub-areas of high and low monitoring priority, respectively. An Artificial Neural Network (ANN) and a S-T kriging model are used to simulate water level fluctuations. To improve the accuracy of the predictions, the results of the ANN and S-T kriging models are combined using a data fusion technique. The concept of Value of Information (VOI) is utilized to determine the two stations with maximum information value in the sub-areas of high and low monitoring priority. The observed groundwater level data of these two stations are used to assess the power of trend detection and to estimate the periodic fluctuations and mean values of the stationary components, which in turn determine non-uniform sampling frequencies for the sub-areas. The proposed methodology is applied to the Dehgolan plain in northwestern Iran. The results show that a new sampling configuration with 35 and 7 monitoring stations and sampling intervals of 20 and 32 days, respectively, in the sub-areas of high and low monitoring priority leads to a more efficient monitoring network than the existing one of 52 monitoring stations with monthly sampling.
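    The fusion step can be illustrated with inverse-variance weighting, a common data-fusion rule for combining two unbiased predictions; whether the paper uses exactly this rule is an assumption here, and the numbers are invented.

```python
# Hypothetical sketch of fusing two water-level predictions by
# inverse-variance weighting -- a common data-fusion rule; the paper's
# exact fusion scheme may differ.
def fuse(pred_a, var_a, pred_b, var_b):
    """Combine two unbiased estimates; weights are inverse error variances."""
    w_a, w_b = 1.0 / var_a, 1.0 / var_b
    fused = (w_a * pred_a + w_b * pred_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)
    return fused, fused_var

# ANN says 12.4 m (var 0.25), S-T kriging says 12.9 m (var 0.75): illustrative.
level, var = fuse(12.4, 0.25, 12.9, 0.75)
print(f"fused level = {level:.3f} m, variance = {var:.3f}")
```

    The fused variance is always smaller than either input variance, which is the sense in which fusion "improves the accuracy of the predictions."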

  9. Factors shaping effective utilization of health information technology in urban safety-net clinics.

    PubMed

    George, Sheba; Garth, Belinda; Fish, Allison; Baker, Richard

    2013-09-01

    Urban safety-net clinics are considered prime targets for the adoption of health information technology innovations; however, little is known about its utilization in such safety-net settings. Current scholarship provides limited guidance on the implementation of health information technology in safety-net settings, as it typically assumes that adopting institutions have sufficient basic resources. This study addresses this gap by exploring the unique challenges that urban, resource-poor safety-net clinics must consider when adopting and utilizing health information technology. In-depth interviews (N = 15) were conducted with key stakeholders (clinic chief executive officers, medical directors, nursing directors, chief financial officers, and information technology directors) at four clinics to explore (a) non-health-information-technology-related clinic needs, (b) how health information technology may provide solutions, and (c) perceptions of and experiences with health information technology. Participants identified several challenges, some of which appear amenable to health information technology solutions. Also identified were requirements for effective utilization of health information technology, including physical infrastructure improvements, funding for equipment and training, creation of user groups to share health information technology knowledge and experiences, and specially tailored electronic billing guidelines. We found that, despite the potential benefits of health information technologies, the unplanned and uninformed introduction of these tools into such settings might actually create more problems than it solves. From these data, we identified a set of factors that should be considered when integrating health information technology into the existing workflows of low-resourced urban safety-net clinics in order to maximize utilization and enhance the quality of health care in such settings.

  10. Communication methods, systems, apparatus, and devices involving RF tag registration

    DOEpatents

    Burghard, Brion J [W. Richland, WA; Skorpik, James R [Kennewick, WA

    2008-04-22

    One technique of the present invention includes a number of Radio Frequency (RF) tags that each have a different identifier. Information is broadcast to the tags from an RF tag interrogator. This information corresponds to a maximum quantity of tag response time slots that are available. This maximum quantity may be less than the total number of tags. The tags each select one of the time slots as a function of the information and a random number provided by each respective tag. The different identifiers are transmitted to the interrogator from at least a subset of the RF tags.
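    The slot-selection behavior described above can be sketched as follows; the function names, the retry remark, and the use of a seeded generator are illustrative, not from the patent.

```python
import random

# Sketch of the slot-selection idea described above: the interrogator
# broadcasts the number of available response slots, and each tag picks a
# slot using its own random draw. Names and seeding are illustrative.
def assign_slots(tag_ids, num_slots, seed=None):
    rng = random.Random(seed)
    # Each tag independently selects one of the advertised slots.
    return {tag: rng.randrange(num_slots) for tag in tag_ids}

tags = ["tag-%02d" % i for i in range(10)]
slots = assign_slots(tags, num_slots=4, seed=7)
# Tags that chose the same slot collide and would respond in a later round,
# which is why the advertised slot count may be less than the tag count.
```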

  11. An adaptive Fuzzy C-means method utilizing neighboring information for breast tumor segmentation in ultrasound images.

    PubMed

    Feng, Yuan; Dong, Fenglin; Xia, Xiaolong; Hu, Chun-Hong; Fan, Qianmin; Hu, Yanle; Gao, Mingyuan; Mutic, Sasa

    2017-07-01

    Ultrasound (US) imaging has been widely used in breast tumor diagnosis and treatment intervention. Automatic delineation of the tumor is a crucial first step, especially for computer-aided diagnosis (CAD) and US-guided breast procedures. However, intrinsic properties of US images such as low contrast and blurry boundaries pose challenges to automatic segmentation of the breast tumor. Therefore, the purpose of this study is to propose a segmentation algorithm that can contour the breast tumor in US images. To utilize the neighbor information of each pixel, a Hausdorff-distance-based fuzzy c-means (FCM) method was adopted. The size of the neighbor region was adaptively updated by comparing the mutual information between them. The objective function of the clustering process was updated using a combination of the Euclidean distance and the adaptively calculated Hausdorff distance. Segmentation results were evaluated by comparison with three experts' manual segmentations. The results were also compared with a kernel-induced-distance-based FCM with spatial constraints, the method without adaptive region selection, and conventional FCM. Results from segmenting 30 patient images showed that the adaptive method had sensitivity, specificity, Jaccard similarity, and Dice coefficient values of 93.60 ± 5.33%, 97.83 ± 2.17%, 86.38 ± 5.80%, and 92.58 ± 3.68%, respectively. The region-based metrics of average symmetric surface distance (ASSD), root mean square symmetric distance (RMSD), and maximum symmetric surface distance (MSSD) were 0.03 ± 0.04 mm, 0.04 ± 0.03 mm, and 1.18 ± 1.01 mm, respectively. All the metrics except sensitivity were better than those of the non-adaptive algorithm and the conventional FCM. Only the three region-based metrics were better than those of the kernel-induced-distance-based FCM with spatial constraints. Adaptively including pixel neighbor information in segmenting US images improved the segmentation performance.
The results demonstrate the potential application of the method in breast tumor CAD and other US-guided procedures. © 2017 American Association of Physicists in Medicine.
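The clustering step underlying the method above can be sketched with a plain fuzzy c-means loop. This is a hypothetical minimal illustration on scalar intensities using the Euclidean distance only; the paper's adaptive Hausdorff-distance term and neighbor-region selection are omitted, and all data values are invented.

```python
def fcm(data, c=2, m=2.0, iters=50):
    """Plain fuzzy c-means on scalar pixel intensities with the Euclidean
    distance only; the paper's adaptively weighted Hausdorff term and
    neighbor-region selection are omitted from this sketch."""
    lo, hi = min(data), max(data)
    centers = [lo + (hi - lo) * k / (c - 1) for k in range(c)]
    u = []
    for _ in range(iters):
        u = []
        for x in data:
            # Membership update: u_ik = 1 / sum_j (d_ik / d_jk)^(2/(m-1))
            d = [abs(x - v) + 1e-12 for v in centers]
            u.append([1.0 / sum((d[k] / d[j]) ** (2.0 / (m - 1.0))
                                for j in range(c)) for k in range(c)])
        for k in range(c):
            # Center update: v_k = sum_i u_ik^m x_i / sum_i u_ik^m
            num = sum((u[i][k] ** m) * data[i] for i in range(len(data)))
            den = sum(u[i][k] ** m for i in range(len(data)))
            centers[k] = num / den
    return centers, u

# Invented intensities: a dark background population and a bright one.
pixels = [0.10, 0.12, 0.09, 0.11, 0.80, 0.82, 0.79, 0.85]
centers, memberships = fcm(pixels)
labels = [max(range(2), key=lambda k: row[k]) for row in memberships]
```

With well-separated intensity populations the centers converge near the two group means and each pixel's hard label follows its largest membership.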

  12. 78 FR 21711 - Proposed Information Collection (Regulation for Reconsideration of Denied Claims) Activity...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-11

    ... reduces both formal appeals and allows decision making to be more responsive to veterans using the VA... information will have practical utility; (2) the accuracy of VHA's estimate of the burden of the proposed collection of information; (3) ways to enhance the quality, utility, and clarity of the information to be...

  13. A method of classification for multisource data in remote sensing based on interval-valued probabilities

    NASA Technical Reports Server (NTRS)

    Kim, Hakil; Swain, Philip H.

    1990-01-01

An axiomatic approach to interval-valued (IV) probabilities is presented, where the IV probability is defined by a pair of set-theoretic functions satisfying pre-specified axioms. On the basis of this approach, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means for representing and combining evidential information, they make the decision process more complicated and call for more intelligent decision-making strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the high-dimensional data into smaller, more manageable pieces based on global statistical correlation information. Through this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
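Interval-valued probabilities of this kind are closely related to Dempster-Shafer belief functions, where the interval [belief, plausibility] brackets the probability of a class. A minimal sketch of combining two bodies of evidence with Dempster's rule follows; this is not necessarily the paper's exact combination rule, and the class names and mass values are invented.

```python
def dempster_combine(m1, m2):
    """Dempster's rule: m(C) is proportional to the sum over A∩B=C of
    m1(A)*m2(B) for non-empty C, renormalized by the non-conflicting mass."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + wa * wb
            else:
                conflict += wa * wb
    return {c: w / (1.0 - conflict) for c, w in combined.items()}

# Frame of ground-cover classes; two sensors act as separate evidence bodies.
forest = frozenset({"forest"})
water = frozenset({"water"})
either = frozenset({"forest", "water"})         # mass assigned to ignorance
m_mss = {forest: 0.6, either: 0.4}              # evidence from MSS data
m_sar = {forest: 0.5, water: 0.3, either: 0.2}  # evidence from SAR data
m = dempster_combine(m_mss, m_sar)
bel, pl = m[forest], m[forest] + m[either]      # interval [bel, pl] for forest
```

The resulting [bel, pl] pair is the interval-valued assessment of the "forest" hypothesis after combining both sources.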

  14. Assessment of MODIS-EVI, MODIS-NDVI and VEGETATION-NDVI composite data using agricultural measurements: an example at corn fields in western Mexico.

    PubMed

    Chen, Pei-Yu; Fedosejevs, Gunar; Tiscareño-López, Mario; Arnold, Jeffrey G

    2006-08-01

    Although several types of satellite data provide temporal information of the land use at no cost, digital satellite data applications for agricultural studies are limited compared to applications for forest management. This study assessed the suitability of vegetation indices derived from the TERRA-Moderate Resolution Imaging Spectroradiometer (MODIS) sensor and SPOT-VEGETATION (VGT) sensor for identifying corn growth in western Mexico. Overall, the Normalized Difference Vegetation Index (NDVI) composites from the VGT sensor based on bi-directional compositing method produced vegetation information most closely resembling actual crop conditions. The NDVI composites from the MODIS sensor exhibited saturated signals starting 30 days after planting, but corresponded to green leaf senescence in April. The temporal NDVI composites from the VGT sensor based on the maximum value method had a maximum plateau for 80 days, which masked the important crop transformation from vegetative stage to reproductive stage. The Enhanced Vegetation Index (EVI) composites from the MODIS sensor reached a maximum plateau 40 days earlier than the occurrence of maximum leaf area index (LAI) and maximum intercepted fraction of photosynthetic active radiation (fPAR) derived from in-situ measurements. The results of this study showed that the 250-m resolution MODIS data did not provide more accurate vegetation information for corn growth description than the 500-m and 1000-m resolution MODIS data.
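The maximum-value compositing mentioned above can be illustrated in a few lines: per pixel, NDVI is computed from red and near-infrared reflectance and the highest value over the compositing period is kept. The reflectance values below are invented for illustration.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red reflectance."""
    return (nir - red) / (nir + red)

def max_value_composite(ndvi_series):
    """Maximum-value compositing: keep each pixel's highest NDVI over the
    compositing period, a standard way to suppress cloud contamination."""
    return [max(obs) for obs in ndvi_series]

# Three acquisition dates (rows) for two pixels (columns), as (nir, red).
acquisitions = [[(0.50, 0.10), (0.30, 0.20)],
                [(0.20, 0.15), (0.60, 0.10)],
                [(0.55, 0.10), (0.40, 0.30)]]
per_pixel = [[ndvi(n, r) for (n, r) in (row[p] for row in acquisitions)]
             for p in range(2)]
composite = max_value_composite(per_pixel)
```

As the abstract notes, a long maximum plateau in such composites can mask the transition between crop growth stages.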

  15. Urban Underground Pipelines Mapping Using Ground Penetrating Radar

    NASA Astrophysics Data System (ADS)

Jaw, S. W.; Hashim, M.

    2014-02-01

Underground space is increasingly being exploited for transportation, utilities, and public usage, and the underground has become a spider's web of utility networks. Mapping underground utility pipelines is therefore a challenging and difficult task; in practice it is often a "hit-and-miss" affair that results in many catastrophic damages, particularly in urban areas. This study was conducted to extract locational information on urban underground utility pipelines using a trenchless measuring tool, namely ground penetrating radar (GPR). The focus of this study was underground utility pipeline mapping for retrieval of the geometric properties of the pipelines using GPR. A series of tests was first conducted at a test site and in a real-life experiment, followed by field-based modeling using the Finite-Difference Time-Domain (FDTD) method. Results provide the locational information of underground utility pipelines together with its mapping accuracy. This locational information is beneficial to civil infrastructure management and maintenance, which in the long term is time-saving and critically important for the development of metropolitan areas.

  16. Earthquake Early Warning in Japan - Result of recent two years -

    NASA Astrophysics Data System (ADS)

    Shimoyama, T.; Doi, K.; Kiyomoto, M.; Hoshiba, M.

    2009-12-01

Japan Meteorological Agency (JMA) started to provide Earthquake Early Warning (EEW) to the general public in October 2007, following the provision of EEW, from August 2006, to a limited number of users who understand its technical limits and can utilize it for automatic control. Earthquake Early Warning in Japan means information on the estimated amplitude and arrival time of strong ground motion after a fault rupture has occurred. In other words, the EEW provided by JMA is defined as a forecast of strong ground motion before the strong motion arrives. JMA's EEW is intended to enable advance countermeasures against disasters caused by strong ground motions by providing a warning message before the S-wave arrival. However, because the available lead time is very short, measures are needed to issue EEW rapidly and to utilize it properly. - EEW is issued to the general public when a maximum seismic intensity of 5 lower (JMA scale) or greater is expected. - The EEW message contains the origin time, the epicentral region name, and the names of areas (each about 1/3 to 1/4 of a prefecture) where seismic intensity 4 or greater is expected. The expected arrival time is not included because it differs substantially even within one unit area. - EEW is broadcast through the broadcasting media (TV, radio, and City Administrative Disaster Management Radio) and is delivered to cellular phones through the cell broadcast system. For those who would like more precise estimates and information on smaller earthquakes at their own properties, JMA allows designated private companies to provide forecasts of strong ground motion, containing the estimated seismic intensity as well as the arrival time of the S-wave, at arbitrary places under JMA's technical assurance.
From October 2007 to August 2009, JMA issued 11 warnings to the general public expecting seismic intensity "5 lower" or greater, including the M=7.2 inland earthquake in the Tohoku district (Iwate-Miyagi-nairiku earthquake; June 14, 2008) and the M=6.5 earthquake at Suruga Bay (August 11, 2009). In 7 of the 11 cases, seismic intensity "5 lower" or greater was actually observed; in 3 cases the observed maximum seismic intensity was 4; and 1 case was a false alarm. During this period, 10 earthquakes occurred for which the observed maximum seismic intensity was "5 lower" or greater. For 7 of these 10, JMA issued warnings to the general public; for the other 3, warnings were not issued because the expected seismic intensity was 4. The false alarm, which occurred on August 25, 2009 due to a software bug, raised discussion about how a false warning should be cancelled. In this study, we summarize the performance of the system and introduce some examples of actual issuance.

  17. Utility-preserving anonymization for health data publishing.

    PubMed

    Lee, Hyukki; Kim, Soohyung; Kim, Jong Wook; Chung, Yon Dohn

    2017-07-11

Publishing raw electronic health records (EHRs) may be considered a breach of individual privacy because they usually contain sensitive information. A common practice for privacy-preserving data publishing is to anonymize the data before publishing so that they satisfy privacy models such as k-anonymity. Among various anonymization techniques, generalization is the most commonly used in medical/health data processing. Generalization inevitably causes information loss, and various methods have been proposed to reduce it. However, existing generalization-based data anonymization methods cannot avoid excessive information loss and thus fail to preserve data utility. We propose a utility-preserving anonymization for privacy-preserving data publishing (PPDP). To preserve data utility, the proposed method comprises three parts: (1) a utility-preserving model, (2) counterfeit record insertion, and (3) a catalog of the counterfeit records. We also propose an anonymization algorithm using the proposed method, which applies a full-domain generalization algorithm. We evaluate our method against an existing method on two aspects: information loss measured through various quality metrics, and the error rate of analysis results. For all quality metrics, the proposed method shows lower information loss than the existing method. In real-world EHR analysis, the results show only a small error between the data anonymized by the proposed method and the original data. Through experiments on various datasets, we show that the utility of EHRs anonymized by the proposed method is significantly better than that of data anonymized by previous approaches.
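A toy sketch of the generalization and k-anonymity check that underlie such methods is below. The paper's counterfeit-record insertion and catalog are omitted; the records, bin width, and quasi-identifiers are invented.

```python
from collections import Counter

def generalize_age(age, width=10):
    """Full-domain style generalization: replace an exact age with its bin."""
    lo = (age // width) * width
    return f"{lo}-{lo + width - 1}"

def is_k_anonymous(records, quasi_ids, k):
    """True if every quasi-identifier combination occurs at least k times."""
    groups = Counter(tuple(rec[q] for q in quasi_ids) for rec in records)
    return min(groups.values()) >= k

raw = [{"age": 23, "zip": "13053", "dx": "flu"},
       {"age": 27, "zip": "13053", "dx": "cold"},
       {"age": 25, "zip": "13053", "dx": "flu"},
       {"age": 34, "zip": "13067", "dx": "cold"},
       {"age": 38, "zip": "13067", "dx": "flu"},
       {"age": 31, "zip": "13067", "dx": "flu"}]

# Generalize the age quasi-identifier; the sensitive diagnosis stays untouched.
anon = [{**r, "age": generalize_age(r["age"])} for r in raw]
```

The raw table is not 3-anonymous (every exact age is unique), while the generalized table is; the information loss is exactly the age precision given up.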

  18. Report: Total Maximum Daily Load Program Needs Better Data and Measures to Demonstrate Environmental Results

    EPA Pesticide Factsheets

    Report #2007-P-00036, September 19, 2007. EPA does not have comprehensive information on the outcomes of the Total Maximum Daily Load (TMDL) program nationwide, nor national data on TMDL implementation activities.

  19. Maximum-Entropy Inference with a Programmable Annealer

    PubMed Central

    Chancellor, Nicholas; Szoke, Szilard; Vinci, Walter; Aeppli, Gabriel; Warburton, Paul A.

    2016-01-01

Optimisation problems typically involve finding the ground state (i.e. the minimum energy configuration) of a cost function with respect to many variables. If the variables are corrupted by noise, then finding this ground state maximises the likelihood that the solution is correct. The maximum entropy solution on the other hand takes the form of a Boltzmann distribution over the ground and excited states of the cost function to correct for noise. Here we use a programmable annealer for the information decoding problem which we simulate as a random Ising model in a field. We show experimentally that finite temperature maximum entropy decoding can give slightly better bit-error-rates than the maximum likelihood approach, confirming that useful information can be extracted from the excited states of the annealer. Furthermore we introduce a bit-by-bit analytical method which is agnostic to the specific application and use it to show that the annealer samples from a highly Boltzmann-like distribution. Machines of this kind are therefore candidates for use in a variety of machine learning applications which exploit maximum entropy inference, including language processing and image recognition. PMID:26936311
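The two decoding rules can be illustrated by exhaustive enumeration on a tiny Ising chain: maximum likelihood takes the ground state, while maximum-entropy decoding takes the sign of each spin's Boltzmann-averaged magnetisation. The couplings, fields, and temperature below are invented, and brute-force enumeration stands in for the annealer.

```python
import math
from itertools import product

def energy(spins, J, h):
    """Ising energy E = -sum_(i,j) J_ij s_i s_j - sum_i h_i s_i."""
    e = -sum(h[i] * s for i, s in enumerate(spins))
    for (i, j), jij in J.items():
        e -= jij * spins[i] * spins[j]
    return e

def decode(J, h, n, beta=None):
    """beta=None: maximum-likelihood decoding (ground state of the cost).
    Finite beta: maximum-entropy decoding, the sign of each spin's
    Boltzmann-averaged magnetisation at inverse temperature beta."""
    configs = list(product((-1, 1), repeat=n))
    if beta is None:
        return min(configs, key=lambda s: energy(s, J, h))
    w = [math.exp(-beta * energy(s, J, h)) for s in configs]
    z = sum(w)
    mags = [sum(wk * s[i] for wk, s in zip(w, configs)) / z for i in range(n)]
    return tuple(1 if m >= 0 else -1 for m in mags)

n = 4
J = {(i, i + 1): 1.0 for i in range(n - 1)}  # ferromagnetic chain couplings
h = [0.5, -0.1, 0.4, 0.3]                    # noisy received signal as fields
ml = decode(J, h, n)
me = decode(J, h, n, beta=2.0)
```

On this small instance both rules recover the all-ones codeword; the paper's point is that at realistic noise levels the finite-temperature average can correct bits the ground state gets wrong.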

  20. Decision-Support Tools and Databases to Inform Regional Stormwater Utility Development in New England

    EPA Science Inventory

    Development of stormwater utilities requires information on existing stormwater infrastructure and impervious cover as well as costs and benefits of stormwater management options. US EPA has developed a suite of databases and tools that can inform decision-making by regional sto...

  1. Applications of flood depth from rapid post-event footprint generation

    NASA Astrophysics Data System (ADS)

    Booth, Naomi; Millinship, Ian

    2015-04-01

Immediately following large flood events, an indication of the area flooded (i.e. the flood footprint) can be extremely useful for evaluating potential impacts on exposed property and infrastructure. Specifically, such information can help insurance companies estimate overall potential losses, deploy claims adjusters and ultimately assist the timely payment of due compensation to the public. Developing these datasets from remotely sensed products seems like an obvious choice. However, there are a number of important drawbacks which limit their utility in the context of flood risk studies. For example, external agencies have no control over the region that is surveyed, the time at which it is surveyed (which is important as the maximum extent would ideally be captured), and how freely accessible the outputs are. Moreover, the spatial resolution of these datasets can be low, and considerable uncertainties in the flood extents exist where dry surfaces give similar return signals to water. Most importantly of all, flood depths are required to estimate potential damages, but generally cannot be estimated from satellite imagery alone. In response to these problems, we have developed an alternative methodology for developing high-resolution footprints of maximum flood extent which do contain depth information. For a particular event, once reports of heavy rainfall are received, we begin monitoring real-time flow data and extracting peak values across affected areas. Next, using statistical extreme value analyses of historic flow records at the same measured locations, the return periods of the maximum event flow at each gauged location are estimated. These return periods are then interpolated along each river and matched to JBA's high-resolution hazard maps, which already exist for a series of design return periods. The extent and depth of flooding associated with the event flow is extracted from the hazard maps to create a flood footprint.
Georeferenced ground, aerial and satellite images are used to establish defence integrity, highlight breach locations and validate our footprint. We have implemented this method to create seven flood footprints, including river flooding in central Europe and coastal flooding associated with Storm Xaver in the UK (both in 2013). The inclusion of depth information allows damages to be simulated and compared to actual damage and resultant loss which become available after the event. In this way, we can evaluate depth-damage functions used in catastrophe models and reduce their associated uncertainty. In further studies, the depth data could be used at an individual property level to calibrate property type specific depth-damage functions.
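The return-period step can be sketched with a Gumbel fit to annual maxima by the method of moments, one common choice for extreme value analysis of river flows (the abstract does not specify the distribution used, and the gauge record below is invented).

```python
import math
from statistics import mean, stdev

def gumbel_return_period(annual_maxima, event_flow):
    """Return period T = 1 / (1 - F(x)) of an event flow, with F a Gumbel
    CDF fitted to historic annual-maximum flows by the method of moments."""
    m, s = mean(annual_maxima), stdev(annual_maxima)
    beta = s * math.sqrt(6) / math.pi        # Gumbel scale parameter
    mu = m - 0.5772156649 * beta             # location (Euler-Mascheroni const.)
    f = math.exp(-math.exp(-(event_flow - mu) / beta))
    return 1.0 / (1.0 - f)

# Invented 20-year record of annual maximum flows (m^3/s) at one gauge.
record = [120, 135, 150, 110, 160, 145, 130, 170, 125, 140,
          155, 115, 165, 138, 128, 148, 158, 122, 132, 142]
t_event = gumbel_return_period(record, 200.0)  # peak flow during the event
```

The estimated return period is then matched to the nearest pre-computed design return period in the hazard maps to look up extent and depth.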

  2. Kinetic modeling of lactic acid production from batch submerged fermentation of cheese whey

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tango, M.S.A.; Ghaly, A.E.

    1999-12-01

A kinetic model for the production of lactic acid through batch submerged fermentation of cheese whey using Lactobacillus helveticus was developed. The model accounts for the effects of substrate limitation, substrate inhibition, lactic acid inhibition, maintenance energy, and cell death on cell growth, substrate utilization, and lactic acid production during the fermentation process. The model was evaluated using experimental data from Tango and Ghaly (1999); the predicted results compared well with the experimental data (R² = 0.92-0.98). The model was also used to investigate the effect of the initial substrate concentration on the lag period, fermentation time, specific growth rate, and cell productivity during batch fermentation. The maximum specific growth rate (μm), the saturation constant (KS), the substrate inhibition constant (KIS), and the lactic acid inhibition constant (KIP) were found to be 0.25 h⁻¹, 0.9 g/L, 250.0 g/L, and 60.0 g/L, respectively. High initial lactose concentration in cheese whey reduced both the specific growth rate and the substrate utilization rate due to substrate inhibition. The maximum lactic acid production occurred at about 100 g/L initial lactose concentration after 40 h of fermentation. The maximum lactic acid concentration above which Lactobacillus helveticus did not grow was found to be 80.0 g/L.
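The abstract does not give the exact rate law, but a common formulation combining Haldane-type substrate inhibition with noncompetitive product inhibition, using the fitted constants reported above, might look like the following sketch (the paper's actual equation may differ).

```python
MU_M, K_S, K_IS, K_IP = 0.25, 0.9, 250.0, 60.0  # constants fitted in the study

def growth_rate(s, p):
    """Specific growth rate (1/h) at lactose concentration s and lactic acid
    concentration p (both g/L): Haldane substrate inhibition combined with
    noncompetitive product inhibition. The exact functional form used in the
    paper may differ; this is one common textbook formulation."""
    return MU_M * s / (K_S + s + s * s / K_IS) * K_IP / (K_IP + p)

mu_early = growth_rate(100.0, 0.0)   # ~100 g/L lactose, no product yet
mu_late = growth_rate(100.0, 40.0)   # after lactic acid has accumulated
```

The sketch reproduces the two qualitative findings: growth slows at high initial lactose (substrate inhibition) and as lactic acid accumulates (product inhibition).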

  3. Tannic acid degradation by Klebsiella strains isolated from goat feces

    PubMed Central

    Tahmourespour, Arezoo; Tabatabaee, Nooroldin; Khalkhali, Hossein; Amini, Imane

    2016-01-01

Background and Objectives: Tannins are toxic polyphenols that either bind and precipitate or condense proteins. The high tannin content of some plants is the primary limitation on using them as a ruminant feed. The aim of this study was the isolation and characterization of tannic acid degrading bacterial strains from goat feces before and after feeding on Pistachio-Soft Hulls as a tannin rich diet (TRD). Materials and Methods: Bacterial strains capable of utilizing tannic acid as the sole carbon and energy source were isolated and characterized from goat feces before and after feeding on TRD. Tannase activity, maximum tolerable concentration, and biodegradation potential were assessed. Results: Four tannase positive isolates were identified as Klebsiella pneumoniae. Isolated strains showed a maximum tolerable concentration of 64 g/L of tannin. The tannic acid degradation percentage at a concentration of 15.0 g/L reached a maximum of 68% after 24 h incubation, and more than 98% after 72 h incubation. The pH of the medium also decreased along with tannic acid utilization. Conclusions: It is evident that the TRD induced adaptive responses: while the bacteria were able to degrade and detoxify tannic acid, they had to adapt to the presence of high concentrations of it. These isolates therefore have considerable potential for application in bioremediation and wastewater treatment, as well as in reducing the antinutritional effects of tannins in animal feeds. PMID:27092220

  4. Evaluation of glued-diaphragm fibre optic pressure sensors in a shock tube

    NASA Astrophysics Data System (ADS)

    Sharifian, S. Ahmad; Buttsworth, David R.

    2007-02-01

Glued-diaphragm fibre optic pressure sensors based on Fabry-Perot interferometry that utilize standard telecommunications components are appealing in a number of respects. Principally, they have high spatial and temporal resolution and are low in cost. These features potentially make them well suited to operation in the extreme environments produced in short-duration high-enthalpy wind tunnel facilities, where spatial and temporal resolution are essential but attrition rates for sensors are typically very high. The sensors we consider utilize a zirconia ferrule substrate and a thin copper foil which are bonded together using an adhesive. The sensors show a fast response and can measure fluctuations with a frequency up to 250 kHz. The sensors also have a high spatial resolution on the order of 0.1 mm. However, with the interrogation and calibration processes adopted in this work, apparent errors of up to 30% of the maximum pressure have been observed. Such errors are primarily caused by mechanical hysteresis and adhesive viscoelasticity. If a dynamic calibration is adopted, the maximum measurement error can be limited to about 10% of the maximum pressure. However, a better approach is to eliminate the adhesive from the construction process or to design the diaphragm and substrate in a way that does not require the adhesive to carry a significant fraction of the mechanical loading.

  5. Highly efficient blue and warm white organic light-emitting diodes with a simplified structure

    NASA Astrophysics Data System (ADS)

    Li, Xiang-Long; Ouyang, Xinhua; Chen, Dongcheng; Cai, Xinyi; Liu, Ming; Ge, Ziyi; Cao, Yong; Su, Shi-Jian

    2016-03-01

Two blue fluorescent emitters were utilized to construct simplified organic light-emitting diodes (OLEDs) and the remarkable difference in device performance was carefully illustrated. A maximum current efficiency of 4.84 cd A-1 (corresponding to a quantum efficiency of 4.29%) with a Commission Internationale de l'Eclairage (CIE) coordinate of (0.144, 0.127) was achieved by using N,N-diphenyl-4''-(1-phenyl-1H-benzo[d]imidazol-2-yl)-[1,1':4',1''-terphenyl]-4-amine (BBPI) as a non-doped emission layer of the simplified blue OLEDs without carrier-transport layers. In addition, simplified fluorescent/phosphorescent (F/P) hybrid warm white OLEDs without carrier-transport layers were fabricated by utilizing BBPI as (1) the blue emitter and (2) the host of a complementary yellow phosphorescent emitter (PO-01). A maximum current efficiency of 36.8 cd A-1 and a maximum power efficiency of 38.6 lm W-1 were achieved as a result of efficient energy transfer from the host to the guest and good triplet exciton confinement on the phosphorescent molecules. The blue and white OLEDs are among the most efficient simplified fluorescent blue and F/P hybrid white devices, and their performance is even comparable to that of most previously reported complicated multi-layer devices with carrier-transport layers.

  6. Model Legislation: Gifted and Talented.

    ERIC Educational Resources Information Center

    Foster, Andrew H.; And Others

    This report presents a model state legislative bill to provide for the special needs of gifted and talented students. The model bill utilizes a "best practices" framework and attempts to be fiscally responsible and provide maximum flexibility while meeting the needs of gifted and talented students. The model legislation itself begins with a…

  7. WVR-EMAP A SMALL WATERSHED CHARACTERIZATION, CLASSIFICATION, AND ASSESSMENT FOR WEST VIRGINIA UTILIZING EMAP DESIGN AND TOOLS

    EPA Science Inventory

    Nationwide, there is a strong need to streamline methods for assessing impairment of surface waters (305b listings), diagnosing cause of biological impairment (303d listings), estimating total maximum daily loads (TMDLs), and/or prioritizing watershed restoration activities (Unif...

  8. Satellite retrievals of leaf chlorophyll and photosynthetic capacity for improved modeling of GPP

    USDA-ARS?s Scientific Manuscript database

    This study investigates the utility of in-situ and satellite-based leaf chlorophyll (Chl) estimates for quantifying leaf photosynthetic capacity and for constraining model simulations of Gross Primary Productivity (GPP) over a corn field in Maryland, U.S.A. The maximum rate of carboxylation (Vmax) r...

  9. EXTRACTION AND SPECIATION OF ARSENIC CONTAINING DRINKING WATER TREATMENT SOLIDS BY IC-ICP-MS

    EPA Science Inventory

In 2001, the U.S. Environmental Protection Agency (EPA) passed the Arsenic Rule, which established a maximum contaminant level of 10 μg/L. Compliance with this regulation has caused a number of drinking water utilities to investigate potential treatment options. The adsorption o...

  10. 77 FR 66585 - Atlantic Coastal Fisheries Cooperative Management Act Provisions; General Provisions for Domestic...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-11-06

    ... 11 commercial fishing vessels from the following Federal American lobster regulations: (1) Gear... patterns of larval dispersal and settlement in the offshore Lobster Management Area 3 (Area 3), 11 federally permitted vessels would utilize a maximum combined total of 50 modified lobster traps to target...

  11. Fluorescent lamp with static magnetic field generating means

    DOEpatents

    Moskowitz, Philip E.; Maya, Jakob

    1987-01-01

    A fluorescent lamp wherein magnetic field generating means (e.g., permanent magnets) are utilized to generate a static magnetic field across the respective electrode structures of the lamp such that maximum field strength is located at the electrode's filament. An increase in efficacy during operation has been observed.

  12. Significance of Life Skills Education

    ERIC Educational Resources Information Center

    Prajapati, Ravindra K.; Sharma, Bosky; Sharma, Dharmendra

    2017-01-01

Adolescence is a period when intellectual, physical, social, and emotional capabilities are at their peak, but, unfortunately, most adolescents are unable to utilize their potential to the maximum for various reasons. They face many emerging issues such as global warming, famines, poverty, suicide, population explosion as well as…

  13. Applied Missing Data Analysis. Methodology in the Social Sciences Series

    ERIC Educational Resources Information Center

    Enders, Craig K.

    2010-01-01

    Walking readers step by step through complex concepts, this book translates missing data techniques into something that applied researchers and graduate students can understand and utilize in their own research. Enders explains the rationale and procedural details for maximum likelihood estimation, Bayesian estimation, multiple imputation, and…

  14. Sharing Teaching Ideas.

    ERIC Educational Resources Information Center

    Touval, Ayana

    1992-01-01

    Introduces the concept of maximum and minimum function values as turning points on the function's graphic representation and presents a method for finding these values without using calculus. The process of utilizing transformations to find the turning point of a quadratic function is extended to find the turning points of cubic functions. (MDH)
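The calculus-free idea for the quadratic case is completing the square: the turning point sits where the squared term vanishes. A small sketch with invented coefficients:

```python
def quadratic_vertex(a, b, c):
    """Turning point of f(x) = a*x^2 + b*x + c without calculus: completing
    the square gives f(x) = a*(x + b/(2a))^2 + (c - b^2/(4a)), so the extreme
    value occurs where the squared term vanishes, at x = -b/(2a)."""
    x = -b / (2.0 * a)
    return x, a * x * x + b * x + c

x_star, y_star = quadratic_vertex(1.0, -4.0, 1.0)  # f(x) = x^2 - 4x + 1
```

The article's extension to cubics works similarly: a substitution transforms the cubic into a form whose turning points can be read off without derivatives.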

  15. Notes on Discounting

    ERIC Educational Resources Information Center

    Rachlin, Howard

    2006-01-01

    In general, if a variable can be expressed as a function of its own maximum value, that function may be called a discount function. Delay discounting and probability discounting are commonly studied in psychology, but memory, matching, and economic utility also may be viewed as discounting processes. When they are so viewed, the discount function…
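For instance, the widely used hyperbolic discount function expresses a delayed reward's value as a fraction of its maximum (undelayed) value, V = A / (1 + kD). A small sketch with invented amount, delay, and discount rate k:

```python
def hyperbolic_discount(amount, delay, k=0.1):
    """Hyperbolic discounting: the present value of a delayed reward as a
    fraction of its maximum (undelayed) value, V = A / (1 + k*D)."""
    return amount / (1.0 + k * delay)

v_now = hyperbolic_discount(100.0, 0.0)     # no delay: the full amount
v_month = hyperbolic_discount(100.0, 30.0)  # 30-day delay with k per day
```

At zero delay the function returns its maximum value, matching the article's framing of discounting as a variable expressed relative to its own maximum.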

  16. Nursing and therapy: partnering for successful niche programs.

    PubMed

    Samson, Barbara; Anderson, Lisa

    2007-02-01

    Changing market environment, increased patient expectations, and emphasis on improving functional outcomes led to the development of orthopedic and cardiac niche programs at one agency. Through these programs, it was learned how to best utilize the strengths of nursing and therapy to achieve maximum success for both patients and the organization.

  17. 14 CFR 23.65 - Climb: All engines operating.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 1 2014-01-01 2014-01-01 false Climb: All engines operating. 23.65 Section... Climb: All engines operating. (a) Each normal, utility, and acrobatic category reciprocating engine... than maximum continuous power on each engine; (2) The landing gear retracted; (3) The wing flaps in the...

  18. 14 CFR 23.65 - Climb: All engines operating.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Climb: All engines operating. 23.65 Section... Climb: All engines operating. (a) Each normal, utility, and acrobatic category reciprocating engine... than maximum continuous power on each engine; (2) The landing gear retracted; (3) The wing flaps in the...

  19. 47 CFR 80.37 - One authorization for a plurality of stations.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... authorize a designated maximum number of marine utility stations operating at temporary unspecified... 47 Telecommunication 5 2014-10-01 2014-10-01 false One authorization for a plurality of stations... SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Applications and Licenses § 80.37 One...

  20. 47 CFR 80.37 - One authorization for a plurality of stations.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... authorize a designated maximum number of marine utility stations operating at temporary unspecified... 47 Telecommunication 5 2012-10-01 2012-10-01 false One authorization for a plurality of stations... SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Applications and Licenses § 80.37 One...

  1. 47 CFR 80.37 - One authorization for a plurality of stations.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... authorize a designated maximum number of marine utility stations operating at temporary unspecified... 47 Telecommunication 5 2010-10-01 2010-10-01 false One authorization for a plurality of stations... SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Applications and Licenses § 80.37 One...

  2. 47 CFR 80.37 - One authorization for a plurality of stations.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... authorize a designated maximum number of marine utility stations operating at temporary unspecified... 47 Telecommunication 5 2013-10-01 2013-10-01 false One authorization for a plurality of stations... SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Applications and Licenses § 80.37 One...

  3. 47 CFR 80.37 - One authorization for a plurality of stations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... authorize a designated maximum number of marine utility stations operating at temporary unspecified... 47 Telecommunication 5 2011-10-01 2011-10-01 false One authorization for a plurality of stations... SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Applications and Licenses § 80.37 One...

  4. Fluorescent lamp with static magnetic field generating means

    DOEpatents

    Moskowitz, P.E.; Maya, J.

    1987-09-08

    A fluorescent lamp wherein magnetic field generating means (e.g., permanent magnets) are utilized to generate a static magnetic field across the respective electrode structures of the lamp such that maximum field strength is located at the electrode's filament. An increase in efficacy during operation has been observed. 2 figs.

  5. 14 CFR 23.73 - Reference landing approach speed.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Reference landing approach speed. 23.73... Reference landing approach speed. (a) For normal, utility, and acrobatic category reciprocating engine-powered airplanes of 6,000 pounds or less maximum weight, the reference landing approach speed, VREF, must...

  6. 14 CFR 23.73 - Reference landing approach speed.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 1 2011-01-01 2011-01-01 false Reference landing approach speed. 23.73... Reference landing approach speed. (a) For normal, utility, and acrobatic category reciprocating engine-powered airplanes of 6,000 pounds or less maximum weight, the reference landing approach speed, VREF, must...

  7. Evolutionary Scheduler for the Deep Space Network

    NASA Technical Reports Server (NTRS)

    Guillaume, Alexandre; Lee, Seungwon; Wang, Yeou-Fang; Zheng, Hua; Chau, Savio; Tung, Yu-Wen; Terrile, Richard J.; Hovden, Robert

    2010-01-01

A computer program assists human schedulers in satisfying, to the maximum extent possible, competing demands from multiple spacecraft missions for utilization of the transmitting/receiving Earth stations of NASA's Deep Space Network. The program embodies a concept of optimal scheduling to attain multiple objectives in the presence of multiple constraints.
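
The evolutionary approach described in this record can be illustrated with a minimal sketch. The code below is a generic genetic algorithm for a toy slot-assignment problem, not the actual DSN scheduler; the fitness function, population size, and mutation rate are all illustrative assumptions.

```python
import random

# Toy evolutionary scheduler: assign each of REQUESTS mission requests to
# one of SLOTS station time slots, rewarding distinct-slot placements and
# penalizing conflicts. All parameters here are illustrative.
REQUESTS, SLOTS, POP, GENS = 8, 5, 30, 200

def fitness(assign):
    # +1 for each request landing in a not-yet-used slot, -1 per conflict.
    used, score = set(), 0
    for slot in assign:
        score += 1 if slot not in used else -1
        used.add(slot)
    return score

def evolve():
    random.seed(0)
    pop = [[random.randrange(SLOTS) for _ in range(REQUESTS)] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:POP // 2]          # elitist selection
        children = []
        for _ in range(POP - len(survivors)):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(REQUESTS)  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:         # occasional mutation
                child[random.randrange(REQUESTS)] = random.randrange(SLOTS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
```

A real multi-objective scheduler would replace the single fitness score with a vector of objectives and constraint penalties, but the select/crossover/mutate loop has the same shape.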

  8. Utilization of satellite imagery by in-flight aircraft. [for weather information

    NASA Technical Reports Server (NTRS)

    Luers, J. K.

    1976-01-01

    Present and future utilization of satellite weather data by commercial aircraft while in flight was assessed. Weather information of interest to aviation that is available or will become available with future geostationary satellites includes the following: severe weather areas, jet stream location, weather observation at destination airport, fog areas, and vertical temperature profiles. Utilization of this information by in-flight aircraft is especially beneficial for flights over the oceans or over remote land areas where surface-based observations and communications are sparse and inadequate.

  9. Enriching step-based product information models to support product life-cycle activities

    NASA Astrophysics Data System (ADS)

    Sarigecili, Mehmet Ilteris

The representation and management of product information across its life-cycle requires standardized data exchange protocols. The Standard for the Exchange of Product Model Data (STEP) is such a standard and has been used widely by industry. Even though STEP-based product models are well defined and syntactically correct, populating product data according to these models is difficult because the models are large and loosely organized. Data exchange specifications (DEXs) and templates provide reorganized information models required in the data exchange of specific activities for various businesses. DEXs show that STEP-based product models can be organized to support different engineering activities at various stages of the product life-cycle. In this study, STEP-based models are enriched and organized to support two engineering activities: materials information declaration and tolerance analysis. Due to new environmental regulations, the substance and materials information in products has to be screened closely by manufacturing industries. This requires fast, unambiguous, and complete product information exchange between the members of a supply chain. Tolerance analysis, on the other hand, is used to verify the functional requirements of an assembly under worst-case (i.e., maximum and minimum) conditions for the part/assembly dimensions. Another issue with STEP-based product models is that the semantics of product data are represented implicitly. Hence, it is difficult to interpret the semantics of data for different product life-cycle phases in various application domains. OntoSTEP, developed at NIST, provides semantically enriched product models in OWL. This thesis presents how GD&T specifications in STEP can be interpreted for tolerance analysis by utilizing OntoSTEP.
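
The worst-case tolerance analysis mentioned in this abstract reduces to a stack-up calculation: nominal dimensions add, and tolerances always accumulate in the worst case. The dimension chain below is hypothetical, not drawn from any STEP model.

```python
# Worst-case (min/max) tolerance stack-up for a linear dimension chain.
def worst_case_stackup(dims):
    """dims: list of (nominal, plus_minus_tolerance) tuples."""
    nominal = sum(n for n, _ in dims)
    tol = sum(t for _, t in dims)  # tolerances always add in the worst case
    return nominal - tol, nominal + tol  # (minimum, maximum) resulting size

# Hypothetical three-part chain: 10 ± 0.1, 20 ± 0.2, 5 ± 0.05
lo, hi = worst_case_stackup([(10, 0.1), (20, 0.2), (5, 0.05)])
print(round(lo, 2), round(hi, 2))  # 34.65 35.35
```

Statistical (e.g., root-sum-square) stack-ups give tighter bounds, but the worst-case form shown here is the one the abstract refers to.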

  10. Towards the understanding of network information processing in biology

    NASA Astrophysics Data System (ADS)

    Singh, Vijay

    Living organisms perform incredibly well in detecting a signal present in the environment. This information processing is achieved near optimally and quite reliably, even though the sources of signals are highly variable and complex. The work in the last few decades has given us a fair understanding of how individual signal processing units like neurons and cell receptors process signals, but the principles of collective information processing on biological networks are far from clear. Information processing in biological networks, like the brain, metabolic circuits, cellular-signaling circuits, etc., involves complex interactions among a large number of units (neurons, receptors). The combinatorially large number of states such a system can exist in makes it impossible to study these systems from the first principles, starting from the interactions between the basic units. The principles of collective information processing on such complex networks can be identified using coarse graining approaches. This could provide insights into the organization and function of complex biological networks. Here I study models of biological networks using continuum dynamics, renormalization, maximum likelihood estimation and information theory. Such coarse graining approaches identify features that are essential for certain processes performed by underlying biological networks. We find that long-range connections in the brain allow for global scale feature detection in a signal. These also suppress the noise and remove any gaps present in the signal. Hierarchical organization with long-range connections leads to large-scale connectivity at low synapse numbers. Time delays can be utilized to separate a mixture of signals with temporal scales. Our observations indicate that the rules in multivariate signal processing are quite different from traditional single unit signal processing.

  11. Personal utility in genomic testing: is there such a thing?

    PubMed

    Bunnik, Eline M; Janssens, A Cecile J W; Schermer, Maartje H N

    2015-04-01

In ethical and regulatory discussions on new applications of genomic testing technologies, the notion of 'personal utility' has been mentioned repeatedly. It has been used to justify direct access to commercially offered genomic testing or feedback of individual research results to research or biobank participants. Sometimes research participants or consumers claim a right to genomic information with an appeal to personal utility. As of yet, no systematic account of the umbrella notion of personal utility has been given. This paper offers a definition of personal utility that places it in the middle of the spectrum between clinical utility and personal perceptions of utility, and that acknowledges its normative charge. The paper discusses two perspectives on personal utility, the healthcare perspective and the consumer perspective, and argues that these are too narrow and too wide, respectively. Instead, it proposes a normative definition of personal utility that postulates information and potential use as necessary conditions of utility. This definition entails that perceived utility does not equal personal utility, and that expert judgment may be necessary to help determine whether a genomic test can have personal utility for someone. Two examples of genomic tests are presented to illustrate the discrepancies between perceived utility and our proposed definition of personal utility. The paper concludes that while there is room for the notion of personal utility in the ethical evaluation and regulation of genomic tests, the justificatory role of personal utility is not unlimited. For in the absence of clinical validity and reasonable potential use of information, there is no personal utility.

  12. Voltage Impacts of Utility-Scale Distributed Wind

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Allen, A.

    2014-09-01

Although most utility-scale wind turbines in the United States are added at the transmission level in large wind power plants, distributed wind power offers an alternative that could increase the overall wind power penetration without the need for additional transmission. This report examines the distribution feeder-level voltage issues that can arise when adding utility-scale wind turbines to the distribution system. Four of the Pacific Northwest National Laboratory taxonomy feeders were examined in detail to study the voltage issues associated with adding wind turbines at different distances from the substation. General rules relating feeder resistance up to the point of turbine interconnection to the expected maximum voltage change levels were developed. Additional analysis examined line and transformer overvoltage conditions.
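
The link between feeder resistance and voltage change that this report studies is commonly captured by the first-order approximation ΔV ≈ (R·P + X·Q)/V². The sketch below uses that textbook estimate with illustrative numbers, not values from the PNNL taxonomy feeders.

```python
# First-order per-unit voltage change at a distributed generator's
# interconnection point: dV ≈ (R*P + X*Q) / V^2.
# All parameter values below are illustrative assumptions.
def voltage_rise_pu(r_ohm, x_ohm, p_watt, q_var, v_volt):
    """Estimate the per-unit voltage rise caused by injecting real power P
    and reactive power Q through a feeder impedance R + jX at voltage V."""
    return (r_ohm * p_watt + x_ohm * q_var) / v_volt**2

# 1.5 MW turbine at unity power factor, 2-ohm feeder resistance, 7.2 kV line
dv = voltage_rise_pu(r_ohm=2.0, x_ohm=1.0, p_watt=1.5e6, q_var=0.0, v_volt=7200.0)
print(f"{dv:.3f} pu")
```

At unity power factor the reactive term drops out, which is why the report's "general rules" can be stated in terms of feeder resistance alone; here the estimate comes out near a typical 5% voltage-change limit.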

  13. An Examination of the Utilization of Electronic Government Services by Minority Small Businesses

    ERIC Educational Resources Information Center

    Ford, Wendy G.

    2010-01-01

    There are a wide variety of e-government information and services that small business owners and managers can utilize. However, in spite of all of the service incentives and initiatives to promote e-government, research studies have shown that this information is not widely accessed. Studies that explore the utilization of e-government information…

  14. Which information resources are used by general practitioners for updating knowledge regarding diabetes?

    PubMed

    Tabatabaei-Malazy, Ozra; Nedjat, Saharnaz; Majdzadeh, Reza

    2012-04-01

Little is known about the degree of utilization of information resources on diabetes by general practitioners (GPs) and its impact on their clinical behavior in developing countries. Such information is vital if GPs' diabetes knowledge is to be improved. This cross-sectional study recruited 319 GPs in the summer of 2008. Questions were about the updates on diabetes knowledge in the previous two years, utilization of information resources (domestic and foreign journals, congresses, the Internet, reference books, mass media, and peers), attitude toward the importance of each resource, and impact of each resource on clinical behavior. A total of 62% of GPs had used information resources for improving their knowledge on diabetes in the previous two years. Domestic journals accounted for the highest utilization (30%) and the highest importance score (83 points out of 100), with the importance score not being affected by sex, years elapsed after graduation, or number of diabetic visits. Clinical behavior was not influenced by the information resources listed, whereas knowledge upgrade, irrespective of the sources utilized, had a significantly positive correlation with clinical behavior. Domestic journals constituted the main information resource utilized by the GPs; this resource, however, in tandem with the other information resources on diabetes, exerted no significant impact on the GPs' clinical behavior. In contrast to developed countries, clinical guidelines do not have any place as a source of information and/or practice. Indubitably, the improvement of diabetes knowledge transfer requires serious interventions to improve information resources as well as the structure of scientific gatherings and collaborations.

  15. A discrete choice experiment to obtain a tariff for valuing informal care situations measured with the CarerQol instrument.

    PubMed

    Hoefman, Renske J; van Exel, Job; Rose, John M; van de Wetering, E J; Brouwer, Werner B F

    2014-01-01

Economic evaluations adopting a societal perspective need to include informal care whenever relevant. However, in practice, informal care is often neglected, because there are few validated instruments to measure and value informal care for inclusion in economic evaluations. The CarerQol, which is such an instrument, measures the impact of informal care on 7 important burden dimensions (CarerQol-7D) and values this in terms of general quality of life (CarerQol-VAS). The objective of the study was to calculate utility scores based on relative utility weights for the CarerQol-7D. These tariffs will facilitate inclusion of informal care in economic evaluations. The CarerQol-7D tariff was derived with a discrete choice experiment conducted as an Internet survey among the general adult population in the Netherlands (N = 992). The choice set contained 2 unlabeled alternatives described in terms of the 7 CarerQol-7D dimensions (level range: "no," "some," and "a lot"). An efficient experimental design with priors obtained from a pilot study (N = 104) was used. Data were analyzed with a panel mixed multinomial parameter model including main and interaction effects of the attributes. The utility attached to informal care situations was significantly higher when this situation was more attractive in terms of fewer problems and more fulfillment or support. The interaction term between the CarerQol-7D dimensions physical health and mental health problems also significantly explained this utility. The tariff was constructed by adding up the relative utility weights per category of all CarerQol-7D dimensions and the interaction term. We obtained a tariff providing standard utility scores for caring situations described with the CarerQol-7D. This facilitates the inclusion of informal care in economic evaluations.
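
The tariff construction described here — summing a relative utility weight per level of each dimension, plus an interaction term — can be sketched as follows. All weights below are hypothetical placeholders, not the published CarerQol-7D tariff.

```python
# Sketch of a CarerQol-7D-style tariff: the utility score for a caring
# situation is the sum of the relative utility weights for each dimension's
# chosen level, plus any interaction term. Weights are hypothetical.
WEIGHTS = {
    "fulfilment":       {"no": 0.0, "some": 3.0, "a lot": 6.0},
    "relational":       {"no": 5.0, "some": 2.0, "a lot": 0.0},
    "mental_health":    {"no": 7.0, "some": 3.0, "a lot": 0.0},
    "daily_activities": {"no": 6.0, "some": 3.0, "a lot": 0.0},
    "financial":        {"no": 4.0, "some": 2.0, "a lot": 0.0},
    "support":          {"no": 0.0, "some": 2.0, "a lot": 4.0},
    "physical_health":  {"no": 8.0, "some": 4.0, "a lot": 0.0},
}

def carerqol_score(levels, interaction=0.0):
    """levels: dict mapping each dimension to 'no', 'some', or 'a lot'."""
    return sum(WEIGHTS[dim][lvl] for dim, lvl in levels.items()) + interaction

# Best attainable situation under these hypothetical weights
best = {dim: max(w, key=w.get) for dim, w in WEIGHTS.items()}
print(carerqol_score(best))  # prints 40.0
```

In the actual tariff the weights come from the discrete choice model's coefficients, and the interaction term applies only to specific combinations of physical and mental health levels.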

  16. The Associations between Health Literacy, Reasons for Seeking Health Information, and Information Sources Utilized by Taiwanese Adults

    ERIC Educational Resources Information Center

    Wei, Mi-Hsiu

    2014-01-01

    Objective: To determine the associations between health literacy, the reasons for seeking health information, and the information sources utilized by Taiwanese adults. Method: A cross-sectional survey of 752 adults residing in rural and urban areas of Taiwan was conducted via questionnaires. Chi-squared tests and logistic regression were used for…

  17. 76 FR 73648 - Agency Information Collection Activities: Proposed Collection; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-29

    ... burden; (3) ways to enhance the quality, utility, and clarity of the information to be collected; and (4... collection; Title of Information Collection: Request for Certification as a Rural Health Clinic Form and... of Rural Health Clinic (RHC) Services under the Medicare/ Medicaid Program, is utilized as an...

  18. Introduction to the Graduation Tracking System (GTS)

    ERIC Educational Resources Information Center

    Alabama Department of Education, 2011

    2011-01-01

    This guide is a training and supportive tool for use by local education agencies (LEAs) in the state of Alabama that are utilizing the Science, Technology and Innovation (STI) Information-INow-INFocus information system software. The Graduation Tracking System (GTS) utilizes existing STI technology to capture student information pertaining to…

  19. The ICCB Computer Based Facilities Inventory & Utilization Management Information Subsystem.

    ERIC Educational Resources Information Center

    Lach, Ivan J.

    The Illinois Community College Board (ICCB) Facilities Inventory and Utilization subsystem, a part of the ICCB management information system, was designed to provide decision makers with needed information to better manage the facility resources of Illinois community colleges. This subsystem, dependent upon facilities inventory data and course…

  20. 75 FR 58389 - Federal Acquisition Regulation; Information Collection; American Recovery and Reinvestment Act...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-09-24

    ... proper performance of functions of the FAR, and whether it will have practical utility; whether our... and methodology; ways to enhance the quality, utility, and clarity of the information to be collected... INFORMATION CONTACT: Mr. Ernest Woodson, Procurement Analyst, Contract Policy Branch, at telephone (202) 501...
