Noel, Jonathan K; Xuan, Ziming; Babor, Thomas F
2017-07-03
Beer marketing in the United States is controlled through self-regulation, whereby the beer industry has created a marketing code and enforces its use. We performed a thematic content analysis on beer ads broadcast during a U.S. college athletic event and determined which themes are associated with violations of a self-regulated alcohol marketing code. A total of 289 beer ads broadcast during the U.S. NCAA Men's and Women's 1999-2008 basketball tournaments were assessed for the presence of 23 thematic content areas. Associations between themes and violations of the U.S. Beer Institute's Marketing and Advertising Code were determined using generalized linear models. Humor (61.3%), taste (61.0%), masculinity (49.2%), and enjoyment (36.5%) were the most prevalent content areas. Nine content areas (i.e., conformity, ethnicity, sensation seeking, sociability, romance, special occasions, text responsibility messages, tradition, and individuality) were positively associated with code violations (p values ranging from <0.001 to 0.042). There were significantly more content areas positively associated with code violations than negatively associated with them (p < 0.001). Several thematic content areas were positively associated with code violations. The results can inform existing efforts to revise self-regulated alcohol marketing codes to ensure better protection of vulnerable populations. The use of several of these themes is concerning in relation to adolescent alcohol use and health disparities.
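The association analysis described in this abstract can be illustrated with a minimal stand-in. The toy data are invented, and a simple 2x2 odds ratio is used here in place of the authors' generalized linear models, purely to show the direction-of-association idea: a positive log odds ratio means a theme appears more often in violating ads.

```python
import math

def odds_ratio(ads):
    """Crude stand-in for the study's GLMs: odds ratio of a code
    violation given the presence of a thematic content area.
    `ads` is a list of (theme_present, violation) 0/1 pairs."""
    a = sum(1 for t, v in ads if t and v)           # theme & violation
    b = sum(1 for t, v in ads if t and not v)       # theme, no violation
    c = sum(1 for t, v in ads if not t and v)       # no theme, violation
    d = sum(1 for t, v in ads if not t and not v)   # neither
    return (a * d) / (b * c)

# Hypothetical data: 8 ads coded for one theme and any code violation.
ads = [(1, 1), (1, 1), (1, 0), (0, 0), (0, 0), (1, 1), (0, 0), (0, 1)]
log_or = math.log(odds_ratio(ads))  # > 0: theme positively associated
```

A real analysis would fit a GLM with the violation indicator as the outcome and theme indicators as predictors, yielding the p-values reported in the abstract.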
Predicting Regulatory Compliance in Beer Advertising on Facebook.
Noel, Jonathan K; Babor, Thomas F
2017-11-01
The prevalence of alcohol advertising has been growing on social media platforms. The purpose of this study was to evaluate alcohol advertising on Facebook for regulatory compliance and thematic content. A total of 50 Budweiser and Bud Light ads posted on Facebook within 1 month of the 2015 NFL Super Bowl were evaluated for compliance with a self-regulated alcohol advertising code and for thematic content. An exploratory sensitivity/specificity analysis was conducted to determine if thematic content could predict code violations. The code violation rate was 82%, with violations prevalent in guidelines prohibiting the association of alcohol with success (Guideline 5) and health benefits (Guideline 3). Overall, 21 thematic content areas were identified. Displaying the product (62%) and adventure/sensation seeking (52%) were the most prevalent. There was perfect specificity (100%) for 10 content areas for detecting any code violation (animals, negative emotions, positive emotions, games/contests/promotions, female characters, minorities, party, sexuality, night-time, sunrise) and high specificity (>80%) for 10 content areas for detecting violations of guidelines intended to protect minors (animals, negative emotions, famous people, friendship, games/contests/promotions, minorities, responsibility messages, sexuality, sunrise, video games). The high prevalence of code violations indicates a failure of self-regulation to prevent potentially harmful content from appearing in alcohol advertising, including explicit code violations (e.g. sexuality). Routine violations indicate an unwillingness to restrict advertising content for public health purposes, and statutory restrictions may be necessary to sufficiently deter alcohol producers from repeatedly violating marketing codes. Violations of a self-regulated alcohol advertising code are prevalent in a sample of beer ads published on Facebook near the US National Football League's Super Bowl. 
Overall, 16 thematic content areas demonstrated high specificity for code violations. Alcohol advertising codes should be updated to expressly prohibit the use of such content.
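The sensitivity/specificity analysis mentioned in this abstract can be sketched as follows. The function and toy data are hypothetical; the point is that a content area has perfect specificity when it never appears in ads that are free of violations.

```python
def specificity(content_flags, violation_flags):
    """Specificity of a thematic content area as a predictor of code
    violations: the fraction of non-violating ads in which the
    content area is absent (true negatives / all actual negatives)."""
    tn = sum(1 for c, v in zip(content_flags, violation_flags)
             if not c and not v)
    fp = sum(1 for c, v in zip(content_flags, violation_flags)
             if c and not v)
    return tn / (tn + fp)

# Toy example: the content area appears only in violating ads,
# so its specificity for detecting a violation is perfect.
content   = [1, 1, 0, 0, 1, 0]
violation = [1, 1, 0, 0, 1, 0]
spec = specificity(content, violation)
```

High specificity, as reported for 10 content areas above, means that flagging an ad on the presence of that content would rarely flag a compliant ad.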
Smith, Katherine C; Cukier, Samantha; Jernigan, David H
2014-10-01
We analyzed beer, spirits, and alcopop magazine advertisements to determine adherence to federal and voluntary advertising standards. We assessed the efficacy of these standards in curtailing potentially damaging content and protecting public health. We obtained data from a content analysis of a census of 1795 unique advertising creatives for beer, spirits, and alcopops placed in nationally available magazines between 2008 and 2010. We coded creatives for manifest content and adherence to federal regulations and industry codes. Advertisements largely adhered to existing regulations and codes. We assessed only 23 ads as noncompliant with federal regulations and 38 with industry codes. Content consistent with the codes was, however, often culturally positive in terms of aspirational depictions. In addition, creatives included degrading and sexualized images, promoted risky behavior, and made health claims associated with low-calorie content. Existing codes and regulations are largely followed regarding content but do not adequately protect against content that promotes unhealthy and irresponsible consumption and degrades potentially vulnerable populations in its depictions. Our findings suggest further limitations and enhanced federal oversight may be necessary to protect public health.
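The noncompliance figures reported above (23 of 1795 unique creatives against federal regulations, 38 against industry codes) correspond to quite small rates, which can be checked directly:

```python
def noncompliance_rate(noncompliant: int, total: int) -> float:
    """Share of unique advertising creatives judged noncompliant."""
    return noncompliant / total

federal  = noncompliance_rate(23, 1795)  # roughly 1.3%
industry = noncompliance_rate(38, 1795)  # roughly 2.1%
```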
Compression performance of HEVC and its format range and screen content coding extensions
NASA Astrophysics Data System (ADS)
Li, Bin; Xu, Jizheng; Sullivan, Gary J.
2015-09-01
This paper presents a comparison-based test of the objective compression performance of the High Efficiency Video Coding (HEVC) standard, its format range extensions (RExt), and its draft screen content coding extensions (SCC). The current dominant standard, H.264/MPEG-4 AVC, is used as an anchor reference in the comparison. The conditions used for the comparison tests were designed to reflect relevant application scenarios and to enable a fair comparison to the maximum extent feasible, i.e., using comparable quantization settings, reference frame buffering, intra refresh periods, rate-distortion optimization decision processing, etc. It is noted that such PSNR-based objective comparisons generally provide more conservative estimates of HEVC benefit than are found in subjective studies. The experimental results show that, when compared with H.264/MPEG-4 AVC, HEVC version 1 provides a bit rate savings for equal PSNR of about 23% for all-intra coding, 34% for random access coding, and 38% for low-delay coding. This is consistent with prior studies and the general characterization that HEVC can provide a bit rate savings of about 50% for equal subjective quality for most applications. The HEVC format range extensions provide a similar bit rate savings of about 13-25% for all-intra coding, 28-33% for random access coding, and 32-38% for low-delay coding at different bit rate ranges. For lossy coding of screen content, the HEVC screen content coding extensions achieve a bit rate savings of about 66%, 63%, and 61% for all-intra coding, random access coding, and low-delay coding, respectively. For lossless coding, the corresponding bit rate savings are about 40%, 33%, and 32%, respectively.
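The bit rate savings percentages quoted above follow from an equal-quality comparison against the anchor codec. The sketch below shows the single-rate-point form of that calculation; it is a simplification, since published comparisons of this kind typically average over rate-distortion curves (Bjøntegaard-style metrics) rather than a single point.

```python
def bitrate_savings_percent(anchor_kbps: float, test_kbps: float) -> float:
    """Bit rate savings of a test codec relative to an anchor codec
    at (approximately) equal PSNR, expressed as a percentage."""
    return 100.0 * (anchor_kbps - test_kbps) / anchor_kbps

# Hypothetical rates: if H.264/AVC needs 1000 kbps and HEVC needs
# 660 kbps for the same PSNR, the savings is 34%, matching the
# random-access figure reported above.
savings = bitrate_savings_percent(1000.0, 660.0)
```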
Quality Scalability Aware Watermarking for Visual Content.
Bhowmik, Deepayan; Abhayaratne, Charith
2016-11-01
Scalable coding-based content adaptation poses serious challenges to traditional watermarking algorithms, which do not consider the scalable coding structure and hence cannot guarantee correct watermark extraction along the media consumption chain. In this paper, we propose a novel concept of scalable blind watermarking that ensures more robust watermark extraction at various compression ratios while not affecting the visual quality of the host media. The proposed algorithm generates a scalable and robust watermarked image code-stream that allows the user to constrain embedding distortion for target content adaptations. The watermarked image code-stream consists of hierarchically nested joint distortion-robustness coding atoms. The code-stream is generated by a new wavelet domain blind watermarking algorithm guided by a quantization-based binary tree. The code-stream can be truncated at any distortion-robustness atom to generate the watermarked image with the desired distortion-robustness requirements. A blind extractor is capable of extracting watermark data from the watermarked images. The algorithm is further extended to incorporate a bit-plane discarding-based quantization model used in scalable coding-based content adaptation, e.g., JPEG2000. This improves the robustness against quality scalability of JPEG2000 compression. The simulation results verify the feasibility of the proposed concept, its applications, and its improved robustness against quality scalable content adaptation. Our proposed algorithm also outperforms existing methods, showing a 35% improvement. In terms of robustness to quality scalable video content adaptation using Motion JPEG2000 and wavelet-based scalable video coding, the proposed method shows major improvement for video watermarking.
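The bit-plane discarding mentioned in this abstract can be modeled schematically as dropping the least-significant bit-planes of a coefficient magnitude, which is what quality-scalable truncation of an embedded code-stream effectively does. This is a generic model for illustration, not the authors' exact formulation.

```python
def discard_bitplanes(coeff: int, n: int) -> int:
    """Schematic bit-plane discarding model for quality-scalable
    coding: zero out the n least-significant bit-planes of a
    coefficient's magnitude, preserving its sign."""
    sign = -1 if coeff < 0 else 1
    return sign * ((abs(coeff) >> n) << n)
```

A watermark embedder aware of this model can place watermark bits in bit-planes that survive the expected truncation depth, which is the intuition behind the improved robustness reported above.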
Cukier, Samantha; Jernigan, David H.
2014-01-01
Objectives. We analyzed beer, spirits, and alcopop magazine advertisements to determine adherence to federal and voluntary advertising standards. We assessed the efficacy of these standards in curtailing potentially damaging content and protecting public health. Methods. We obtained data from a content analysis of a census of 1795 unique advertising creatives for beer, spirits, and alcopops placed in nationally available magazines between 2008 and 2010. We coded creatives for manifest content and adherence to federal regulations and industry codes. Results. Advertisements largely adhered to existing regulations and codes. We assessed only 23 ads as noncompliant with federal regulations and 38 with industry codes. Content consistent with the codes was, however, often culturally positive in terms of aspirational depictions. In addition, creatives included degrading and sexualized images, promoted risky behavior, and made health claims associated with low-calorie content. Conclusions. Existing codes and regulations are largely followed regarding content but do not adequately protect against content that promotes unhealthy and irresponsible consumption and degrades potentially vulnerable populations in its depictions. Our findings suggest further limitations and enhanced federal oversight may be necessary to protect public health. PMID:24228667
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2007-08-15
The CH-TRU Waste Content Codes (CH-TRUCON) document describes the inventory of the U.S. Department of Energy (DOE) CH-TRU waste within the transportation parameters specified by the Contact-Handled Transuranic Waste Authorized Methods for Payload Control (CH-TRAMPAC). The CH-TRAMPAC defines the allowable payload for the Transuranic Package Transporter-II (TRUPACT-II) and HalfPACT packagings. This document is a catalog of TRUPACT-II and HalfPACT authorized contents and a description of the methods utilized to demonstrate compliance with the CH-TRAMPAC. A summary of currently approved content codes by site is presented in Table 1. The CH-TRAMPAC describes "shipping categories" that are assigned to each payload container. Multiple shipping categories may be assigned to a single content code. A summary of approved content codes and corresponding shipping categories is provided in Table 2, which consists of Tables 2A, 2B, and 2C. Table 2A provides a summary of approved content codes and corresponding shipping categories for the "General Case," which reflects the assumption of a 60-day shipping period as described in the CH-TRAMPAC and Appendix 3.4 of the CH-TRU Payload Appendices. For shipments to be completed within an approximately 1,000-mile radius, a shorter shipping period of 20 days is applicable as described in the CH-TRAMPAC and Appendix 3.5 of the CH-TRU Payload Appendices. For shipments to WIPP from Los Alamos National Laboratory (LANL), the Nevada Test Site, and the Rocky Flats Environmental Technology Site, a 20-day shipping period is applicable. Table 2B provides a summary of approved content codes and corresponding shipping categories for "Close-Proximity Shipments" (20-day shipping period). For shipments implementing the controls specified in the CH-TRAMPAC and Appendix 3.6 of the CH-TRU Payload Appendices, a 10-day shipping period is applicable. Table 2C provides a summary of approved content codes and corresponding shipping categories for "Controlled Shipments" (10-day shipping period).
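The three shipping periods summarized in this abstract can be captured in a small lookup. The string keys below are illustrative labels, not official CH-TRUCON or CH-TRAMPAC codes.

```python
def shipping_period_days(shipment_type: str) -> int:
    """Shipping periods described in the CH-TRUCON summary:
    General Case, Close-Proximity, and Controlled shipments."""
    periods = {
        "general": 60,          # Table 2A; CH-TRU Payload Appendix 3.4
        "close_proximity": 20,  # Table 2B; ~1,000-mile radius, Appendix 3.5
        "controlled": 10,       # Table 2C; controls per Appendix 3.6
    }
    return periods[shipment_type]
```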
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2007-06-15
(Abstract identical to the 2007-08-15 CH-TRUCON record above.)
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2007-09-20
(Abstract identical to the 2007-08-15 CH-TRUCON record above.)
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2006-06-20
(Abstract identical to the 2007-08-15 CH-TRUCON record above.)
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2006-01-18
(Abstract identical to the 2007-08-15 CH-TRUCON record above.)
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2006-08-15
(Abstract identical to the 2007-08-15 CH-TRUCON record above.)
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2006-12-20
(Abstract identical to the 2007-08-15 CH-TRUCON record above.)
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2007-02-15
(Abstract identical to the 2007-08-15 CH-TRUCON record above.)
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2006-09-15
(Abstract identical to the 2007-08-15 CH-TRUCON record above.)
CH-TRU Waste Content Codes (CH-TRUCON)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Washington TRU Solutions LLC
2008-01-16
(Abstract identical to the 2007-08-15 CH-TRUCON record above.)
Xuan, Ziming; Damon, Donna; Noel, Jonathan
2013-01-01
Objectives. We evaluated advertising code violations using the US Beer Institute guidelines for responsible advertising. Methods. We applied the Delphi rating technique to all beer ads (n = 289) broadcast in national markets between 1999 and 2008 during the National Collegiate Athletic Association basketball tournament games. Fifteen public health professionals completed ratings using quantitative scales measuring the content of alcohol advertisements (e.g., perceived actor age, portrayal of excessive drinking) according to 1997 and 2006 versions of the Beer Institute Code. Results. Depending on the code version, exclusion criteria, and scoring method, expert raters found that between 35% and 74% of the ads had code violations. There were significant differences among producers in the frequency with which ads with violations were broadcast, but not in the proportions of unique ads with violations. Guidelines most likely to be violated included the association of beer drinking with social success and the use of content appealing to persons younger than 21 years. Conclusions. The alcohol industry’s current self-regulatory framework is ineffective at preventing content violations but could be improved by the use of new rating procedures designed to better detect content code violations. PMID:23947318
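The 35% to 74% violation-rate range reported above depends on how individual rater judgments are aggregated into a per-ad verdict. A minimal Python sketch of one such scoring rule, a simple majority rule over rater flags; the function name and threshold are illustrative assumptions, not the paper's exact method:

```python
def violation_rate(ratings_by_ad, majority=0.5):
    """Percentage of ads counted as code violations, where an ad is a
    violation if more than `majority` of its raters flag it (illustrative)."""
    flagged = sum(1 for ratings in ratings_by_ad
                  if sum(ratings) / len(ratings) > majority)
    return 100.0 * flagged / len(ratings_by_ad)
```

Varying the threshold (or the exclusion criteria applied before scoring) is exactly the kind of choice that moves the headline rate within a wide band.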
Lewkowitz, Adam K; O'Donnell, Betsy E; Nakagawa, Sanae; Vargas, Juan E; Zlatnik, Marya G
2016-03-01
Text4baby is the only free text-messaging program available for pregnancy. Our objective was to determine whether content differed between Text4baby and popular pregnancy smart phone applications (apps). Researchers enrolled in Text4baby in 2012 and downloaded the four most popular free pregnancy smart phone apps in July 2013; content was re-extracted in February 2014. Messages were assigned thematic codes. Two researchers coded messages independently before reviewing all the codes jointly to ensure consistency. Logistic regression modeling determined statistical differences between Text4baby and smart phone apps. In total, 1399 messages were delivered. Of these, 333 messages had content related to more than one theme and were coded as such, resulting in 1820 codes analyzed. Compared to smart phone apps, Text4baby was significantly more likely to have content regarding Postpartum Planning, Seeking Care, Recruitment and Prevention, and significantly less likely to mention Normal Pregnancy Symptoms. No messaging program included content regarding postpartum contraception. To improve content without increasing text message number, Text4baby could replace messages on recruitment with messages regarding normal pregnancy symptoms, fetal development and postpartum contraception.
Health and nutrition content claims on Australian fast-food websites.
Wellard, Lyndal; Koukoumas, Alexandra; Watson, Wendy L; Hughes, Clare
2017-03-01
To determine the extent that Australian fast-food websites contain nutrition content and health claims, and whether these claims are compliant with the new provisions of the Australia New Zealand Food Standards Code ('the Code'). Systematic content analysis of all web pages to identify nutrition content and health claims. Nutrition information panels were used to determine whether products with claims met Nutrient Profiling Scoring Criteria (NPSC) and qualifying criteria, and to compare them with the Code to determine compliance. Australian websites of forty-four fast-food chains including meals, bakery, ice cream, beverage and salad chains. Any products marketed on the websites using health or nutrition content claims. Of the forty-four fast-food websites, twenty (45 %) had at least one claim. A total of 2094 claims were identified on 371 products, including 1515 nutrition content (72 %) and 579 health claims (28 %). Five fast-food products with health (5 %) and 157 products with nutrition content claims (43 %) did not meet the requirements of the Code to allow them to carry such claims. New provisions in the Code came into effect in January 2016 after a 3-year transition. Food regulatory agencies should review fast-food websites to ensure compliance with the qualifying criteria for nutrition content and health claim regulations. This would prevent consumers from viewing unhealthy foods as healthier choices. Healthy choices could be facilitated by applying NPSC to nutrition content claims. Fast-food chains should be educated on the requirements of the Code regarding claims.
Toward Developing a Universal Code of Ethics for Adult Educators.
ERIC Educational Resources Information Center
Siegel, Irwin H.
2000-01-01
Presents conflicting viewpoints on a universal code of ethics for adult educators. Suggests objectives of a code (guidance for practice, policymaking direction, common reference point, shared values). Outlines content and methods for implementing a code. (SK)
Use of Code-Switching in Multilingual Content Subject and Language Classrooms
ERIC Educational Resources Information Center
Gwee, Susan; Saravanan, Vanithamani
2018-01-01
Research literature has shown that teachers code-switched to a language which is not the medium of instruction to help students understand subject matter and establish interpersonal relations with them. However, little is known about the extent to which teachers code-switch in content subject classrooms compared to language classrooms. Using…
Verification testing of the compression performance of the HEVC screen content coding extensions
NASA Astrophysics Data System (ADS)
Sullivan, Gary J.; Baroncini, Vittorio A.; Yu, Haoping; Joshi, Rajan L.; Liu, Shan; Xiu, Xiaoyu; Xu, Jizheng
2017-09-01
This paper reports on verification testing of the coding performance of the screen content coding (SCC) extensions of the High Efficiency Video Coding (HEVC) standard (Rec. ITU-T H.265 | ISO/IEC 23008-2 MPEG-H Part 2). The coding performance of HEVC screen content model (SCM) reference software is compared with that of the HEVC test model (HM) without the SCC extensions, as well as with the Advanced Video Coding (AVC) joint model (JM) reference software, for both lossy and mathematically lossless compression using All-Intra (AI), Random Access (RA), and Low-delay B (LB) encoding structures and using similar encoding techniques. Video test sequences in 1920×1080 RGB 4:4:4, YCbCr 4:4:4, and YCbCr 4:2:0 colour sampling formats with 8 bits per sample are tested in two categories: "text and graphics with motion" (TGM) and "mixed" content. For lossless coding, the encodings are evaluated in terms of relative bit-rate savings. For lossy compression, subjective testing was conducted at 4 quality levels for each coding case, and the test results are presented through mean opinion score (MOS) curves. The relative coding performance is also evaluated in terms of Bjøntegaard-delta (BD) bit-rate savings for equal PSNR quality. The perceptual tests and objective metric measurements showed a very substantial benefit in coding efficiency for the SCC extensions and provided consistent results with a high degree of confidence. For TGM video, the estimated bit-rate savings ranged from 60-90% relative to the JM and 40-80% relative to the HM, depending on the AI/RA/LB configuration category and colour sampling format.
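The Bjøntegaard-delta bit-rate metric used in the study above can be sketched in a few lines of Python. This is an illustrative version of the standard cubic-fit BD-rate calculation, not the committee's actual test harness; the function and variable names are assumptions:

```python
import numpy as np

def bd_rate(rate_anchor, psnr_anchor, rate_test, psnr_test):
    """Bjontegaard-delta bit-rate: average percentage bit-rate change of
    the test codec versus the anchor at equal PSNR (negative = savings)."""
    # Fit log(rate) as a cubic polynomial of PSNR for each codec.
    p_anchor = np.polyfit(psnr_anchor, np.log(rate_anchor), 3)
    p_test = np.polyfit(psnr_test, np.log(rate_test), 3)
    # Average each fit over the overlapping PSNR interval.
    lo = max(min(psnr_anchor), min(psnr_test))
    hi = min(max(psnr_anchor), max(psnr_test))
    int_anchor = np.polyint(p_anchor)
    int_test = np.polyint(p_test)
    avg_anchor = (np.polyval(int_anchor, hi) - np.polyval(int_anchor, lo)) / (hi - lo)
    avg_test = (np.polyval(int_test, hi) - np.polyval(int_test, lo)) / (hi - lo)
    # Convert the average log-rate difference back to a percentage.
    return (np.exp(avg_test - avg_anchor) - 1.0) * 100.0
```

As a sanity check, a test codec that halves the bit rate at every quality point yields a BD-rate of about -50%.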
Siminoff, Laura A.; Step, Mary M.
2011-01-01
Many observational coding schemes have been offered to measure communication in health care settings. These schemes fall short of capturing multiple functions of communication among providers, patients, and other participants. After a brief review of observational communication coding, the authors present a comprehensive scheme for coding communication that is (a) grounded in communication theory, (b) accounts for instrumental and relational communication, and (c) captures important contextual features with tailored coding templates: the Siminoff Communication Content & Affect Program (SCCAP). To test SCCAP reliability and validity, the authors coded data from two communication studies. The SCCAP provided reliable measurement of communication variables including tailored content areas and observer ratings of speaker immediacy, affiliation, confirmation, and disconfirmation behaviors. PMID:21213170
Sorimachi, Kenji; Okayasu, Teiji
2015-01-01
The complete vertebrate mitochondrial genome consists of 13 coding genes. We used this genome to investigate the existence of natural selection in vertebrate evolution. From the complete mitochondrial genomes, we calculated nucleotide contents and then separated these values into coding and non-coding regions. When nucleotide contents of a coding or non-coding region were plotted against the nucleotide content of the complete mitochondrial genomes, we obtained linear regression lines only between homonucleotides and their analogs. In every plot using purine (G or A) content, G content in aquatic vertebrates was higher than that in terrestrial vertebrates, while A content in aquatic vertebrates was lower than that in terrestrial vertebrates. Based on these relationships, vertebrates were separated into two groups, terrestrial and aquatic. However, using pyrimidine (C or T) content, clear separation between these two groups was not obtained. The hagfish (Eptatretus burgeri) was further separated from both terrestrial and aquatic vertebrates. Based on these results, nucleotide content relationships predicted from the complete vertebrate mitochondrial genomes reveal the existence of natural selection based on evolutionary separation between terrestrial and aquatic vertebrate groups. In addition, we propose that separation of the two groups might be linked to ammonia detoxification based on high G and low A contents, which encode Glu-rich and Lys-poor proteins.
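The analysis above rests on two simple computations: per-base nucleotide fractions and ordinary least-squares regression between content values. A minimal, illustrative Python sketch (not the authors' code):

```python
def nucleotide_content(seq):
    """Fraction of each base A, C, G, T in a genome sequence."""
    seq = seq.upper()
    n = len(seq)
    return {base: seq.count(base) / n for base in "ACGT"}

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return slope, mean_y - slope * mean_x
```

Plotting, say, coding-region G content against whole-genome G content for many species and fitting such a line is the kind of homonucleotide regression the abstract describes.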
Coding For Compression Of Low-Entropy Data
NASA Technical Reports Server (NTRS)
Yeh, Pen-Shu
1994-01-01
Improved method of encoding digital data provides for efficient lossless compression of partially or even mostly redundant data from low-information-content source. Method of coding implemented in relatively simple, high-speed arithmetic and logic circuits. Also increases coding efficiency beyond that of established Huffman coding method in that average number of bits per code symbol can be less than 1, which is the lower bound for Huffman code.
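The claim that 1 bit per symbol is the lower bound for symbol-by-symbol Huffman coding, while block-based methods can go below it, can be illustrated directly. The sketch below is illustrative and not the method of the paper; it computes expected Huffman code length using the standard fact that it equals the sum of the merged-node probabilities:

```python
import heapq

def huffman_avg_length(probs):
    """Expected Huffman code length in bits per symbol for `probs`.

    Each merge of the two least-probable nodes adds one bit to every
    leaf beneath it, so the expected length is the sum of all merged
    node probabilities.
    """
    heap = [(p, i) for i, p in enumerate(probs)]  # ids break probability ties
    heapq.heapify(heap)
    next_id = len(probs)
    expected = 0.0
    while len(heap) > 1:
        p1, _ = heapq.heappop(heap)
        p2, _ = heapq.heappop(heap)
        expected += p1 + p2
        heapq.heappush(heap, (p1 + p2, next_id))
        next_id += 1
    return expected

# A heavily biased binary source: Huffman on single symbols needs 1 bit/symbol...
single = huffman_avg_length([0.95, 0.05])
# ...but Huffman over 2-symbol blocks drops below 1 bit per original symbol.
pairs = [0.95 * 0.95, 0.95 * 0.05, 0.05 * 0.95, 0.05 * 0.05]
blocked = huffman_avg_length(pairs) / 2
```

Here `single` is exactly 1.0 bit per symbol, while `blocked` is roughly 0.57 bits per original symbol, already below the single-symbol Huffman bound, which is the effect the low-entropy coding method above exploits.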
Code of Federal Regulations, 2010 CFR
2010-04-01
... CONSUMPTION INFANT FORMULA QUALITY CONTROL PROCEDURES Quality Control Procedures for Assuring Nutrient Content of Infant Formulas § 106.90 Coding. The manufacturer shall code all infant formulas in conformity...
An audit of alcohol brand websites.
Gordon, Ross
2011-11-01
The study investigated the nature and content of alcohol brand websites in the UK. The research involved an audit of the websites of the 10 leading alcohol brands by sales in the UK across four categories: lager, spirits, Flavoured Alcoholic Beverages and cider/perry. Each site was visited twice over a 1-month period with site features and content recorded using a pro-forma. The content of websites was then reviewed against the regulatory codes governing broadcast advertising of alcohol. It was found that 27 of 40 leading alcohol brands had a dedicated website. Sites featured sophisticated content, including sports and music sections, games, downloads and competitions. Case studies of two brand websites demonstrate the range of content features on such sites. A review of the application of regulatory codes covering traditional advertising found some content may breach the codes. Study findings illustrate the sophisticated range of content accessible on alcohol brand websites. When applying regulatory codes covering traditional alcohol marketing channels it is apparent that some content on alcohol brand websites would breach the codes. This suggests the regulation of alcohol brand websites may be an issue requiring attention from policymakers. Further research in this area would help inform this process. © 2010 Australasian Professional Society on Alcohol and other Drugs.
A Content Analysis of Testosterone Websites: Sex, Muscle, and Male Age-Related Thematic Differences
Ivanov, Nicholas; Vuong, Jimmy; Gray, Peter B.
2017-01-01
Male testosterone supplementation is a large and growing industry. How is testosterone marketed to male consumers online? The present exploratory study entailed a content coding analysis of the home pages of 49 websites focused on testosterone supplementation for men in the United States. Four hypotheses concerning anticipated age-related differences in content coding were also tested: more frequent longevity content toward older men, and more frequent social dominance/physical formidability, muscle, and sex content toward younger men. Codes were created based on inductive observations and drawing upon the medical, life history, and human behavioral endocrinology literatures. Approximately half (n = 24) of websites were oriented toward younger men (estimated audience of men 40 years of age or younger) and half (n = 25) toward older men (estimated audience over 40 years of age). Results indicated that the most frequent content codes concerned online sales (e.g., product and purchasing information). Apart from sales information, the most frequent codes concerned, in order, muscle, sex/sexual functioning, low T, energy, fat, strength, aging, and well-being, with all four hypotheses also supported. These findings are interpreted in the light of medical, evolutionary life history, and human behavioral endocrinology approaches. PMID:29025355
Kenne, Deric; Wolfram, Taylor M; Abram, Jenica K; Fleming, Michael
2016-01-01
Background. Given the high penetration of social media use, social media has been proposed as a method for the dissemination of information to health professionals and patients. This study explored the potential for social media dissemination of the Academy of Nutrition and Dietetics Evidence-Based Nutrition Practice Guideline (EBNPG) for Heart Failure (HF). Objectives. The objectives were to (1) describe the existing social media content on HF, including message content, source, and target audience, and (2) describe the attitude of physicians and registered dietitian nutritionists (RDNs) who care for outpatient HF patients toward the use of social media as a method to obtain information for themselves and to share this information with patients. Methods. The methods were divided into 2 parts. Part 1 involved conducting a content analysis of tweets related to HF, which were downloaded from Twitonomy and assigned codes for message content (19 codes), source (9 codes), and target audience (9 codes); code frequency was described. A comparison of the popularity of tweets (those marked as favorites or retweeted) based on applied codes was made using t tests. Part 2 involved conducting phone interviews with RDNs and physicians to describe health professionals' attitude toward the use of social media to communicate general health information and information specifically related to the HF EBNPG. Interviews were transcribed and coded; exemplar quotes representing frequent themes are presented. Results. The sample included 294 original tweets with the hashtag "#heartfailure." The most frequent message content codes were "HF awareness" (166/294, 56.5%) and "patient support" (97/294, 33.0%). The most frequent source codes were "professional, government, patient advocacy organization, or charity" (112/277, 40.4%) and "patient or family" (105/277, 37.9%). The most frequent target audience codes were "unable to identify" (111/277, 40.1%) and "other" (55/277, 19.9%).
Tweets with the content code "HF research" were significantly more popular than those without it (mean 1.0, SD 1.3 favorites vs mean 0.7, SD 1.3 favorites; P=.049). Tweets with the source code "professional, government, patient advocacy organizations, or charities" were significantly more likely to be marked as a favorite (mean 1.2, SD 1.4 vs mean 0.8, SD 1.2; P=.03) and retweeted (mean 1.5, SD 1.8 vs mean 0.9, SD 2.0; P=.03) than those without this source code. Interview participants believed that social media was a useful way to gather professional information. They did not believe that social media was useful for communicating with patients because of privacy concerns, the need to keep information general rather than tailored to a specific patient, and the belief that their patients did not use social media or technology. Conclusions. Existing Twitter content related to HF comes from a combination of patients and evidence-based organizations; however, there is little nutrition content. That gap may present an opportunity for EBNPG dissemination. Health professionals use social media to gather information for themselves but are skeptical of its value when communicating with patients, particularly due to privacy concerns and misconceptions about the characteristics of social media users. PMID:27847349
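The tweet-popularity comparisons above are independent-samples t tests. A minimal Welch's t statistic in plain Python, shown for illustration only (the study presumably used a statistics package, and its exact test variant is not stated):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    n_a, n_b = len(sample_a), len(sample_b)
    mean_a = sum(sample_a) / n_a
    mean_b = sum(sample_b) / n_b
    # Unbiased sample variances.
    var_a = sum((x - mean_a) ** 2 for x in sample_a) / (n_a - 1)
    var_b = sum((x - mean_b) ** 2 for x in sample_b) / (n_b - 1)
    return (mean_a - mean_b) / math.sqrt(var_a / n_a + var_b / n_b)
```

Comparing the favorite counts of tweets with and without a given content code is a direct application of this statistic.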
Wong, Alex W K; Lau, Stephen C L; Fong, Mandy W M; Cella, David; Lai, Jin-Shei; Heinemann, Allen W
2018-04-03
To determine the extent to which the content of the Quality of Life in Neurological Disorders (Neuro-QoL) covers the International Classification of Functioning, Disability and Health (ICF) Core Sets for multiple sclerosis (MS), stroke, spinal cord injury (SCI), and traumatic brain injury (TBI) using summary linkage indicators. Content analysis by linking content of the Neuro-QoL to corresponding ICF codes of each Core Set for MS, stroke, SCI, and TBI. Three academic centers. None. None. Four summary linkage indicators proposed by MacDermid et al were estimated to compare the content coverage between Neuro-QoL and the ICF codes of Core Sets for MS, stroke, SCI, and TBI. Neuro-QoL represented 20% to 30% of Core Set codes for different conditions, in which more codes in Core Sets for MS (29%), stroke (28%), and TBI (28%) were covered than those for SCI in the long-term (20%) and early postacute (19%) contexts. Neuro-QoL represented nearly half of the unique Activity and Participation codes (43%-49%) and less than one third of the unique Body Function codes (12%-32%). It represented fewer Environmental Factors codes (2%-6%) and no Body Structures codes. Absolute linkage indicators found that at least 60% of Neuro-QoL items were linked to Core Set codes (63%-95%), but many items covered the same codes as revealed by unique linkage indicators (7%-13%), suggesting high concept redundancy among items. The Neuro-QoL links more closely to ICF Core Sets for stroke, MS, and TBI than to those for SCI, and primarily covers activity and participation ICF domains. Other instruments are needed to address concepts not measured by the Neuro-QoL when a comprehensive health assessment is needed. Copyright © 2018 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
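An absolute coverage indicator of the kind reported above can be sketched as a set intersection. This is an illustrative simplification, not MacDermid et al's exact formulas, and the ICF codes shown are hypothetical examples:

```python
def core_set_coverage(core_set_codes, linked_codes):
    """Percentage of ICF Core Set codes that an instrument's linked
    codes cover (a simple absolute coverage indicator)."""
    core = set(core_set_codes)
    covered = core & set(linked_codes)
    return 100.0 * len(covered) / len(core)
```

The complementary "unique" indicators would additionally penalize many items linking to the same code, which is how the abstract detects concept redundancy.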
Siderits, Richard; Yates, Stacy; Rodriguez, Arelis; Lee, Tina; Rimmer, Cheryl; Roche, Mark
2011-01-01
Quick Response (QR) Codes are standard in supply management and seen with increasing frequency in advertisements. They are now present regularly in healthcare informatics and education. These 2-dimensional square bar codes, originally designed by the Toyota car company, are free of license and have a published international standard. The codes can be generated by free online software and the resulting images incorporated into presentations. The images can be scanned by "smart" phones and tablets using either the iOS or Android platforms, which link the device with the information represented by the QR code (uniform resource locator or URL, online video, text, v-calendar entries, short message service [SMS] and formatted text). Once linked to the device, the information can be viewed at any time after the original presentation, saved in the device or to a Web-based "cloud" repository, printed, or shared with others via email or Bluetooth file transfer. This paper describes how we use QR codes in our tumor board presentations, discusses the benefits, explains how QR codes differ from Web links, and shows how QR codes facilitate the distribution of educational content.
Spatial transform coding of color images.
NASA Technical Reports Server (NTRS)
Pratt, W. K.
1971-01-01
The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.
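The spatial transform-coding principle described above, concentrating signal energy into a few transform coefficients and discarding the rest, can be shown in miniature with an orthonormal DCT. This is a simple 1-D illustrative sketch, not the paper's system:

```python
import math

def dct(x):
    """Orthonormal DCT-II of a 1-D signal."""
    n = len(x)
    out = []
    for k in range(n):
        s = sum(x[i] * math.cos(math.pi * (i + 0.5) * k / n) for i in range(n))
        scale = math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)
        out.append(scale * s)
    return out

def idct(coeffs):
    """Inverse (DCT-III) of the orthonormal DCT-II above."""
    n = len(coeffs)
    out = []
    for i in range(n):
        s = coeffs[0] * math.sqrt(1.0 / n)
        s += sum(coeffs[k] * math.sqrt(2.0 / n) * math.cos(math.pi * (i + 0.5) * k / n)
                 for k in range(1, n))
        out.append(s)
    return out

# Transform coding in miniature: keep only the 2 lowest-frequency
# coefficients of a smooth 8-sample "chrominance" signal.
signal = [10.0, 10.5, 11.0, 11.5, 12.0, 12.5, 13.0, 13.5]
coeffs = dct(signal)
truncated = coeffs[:2] + [0.0] * 6
approx = idct(truncated)
```

Because smooth chrominance data concentrates its energy in the low-frequency coefficients, keeping 2 of 8 coefficients reconstructs the signal to within a small fraction of its range, which is why chrominance can be coded at about 1 bit per element.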
ERIC Educational Resources Information Center
Hau, Goh Bak; Siraj, Saedah; Alias, Norlidah; Rauf, Rose Amnah Abd.; Zakaria, Abd. Razak; Darusalam, Ghazali
2013-01-01
This study provides a content analysis of selected articles in the field of QR code and its application in educational context that were published in journals and proceedings of international conferences and workshops from 2006 to 2011. These articles were cross analysed by published years, journal, and research topics. Further analysis was…
Babor, Thomas F; Xuan, Ziming; Damon, Donna
2013-10-01
This study evaluated the use of a modified Delphi technique in combination with a previously developed alcohol advertising rating procedure to detect content violations in the U.S. Beer Institute Code. A related aim was to estimate the minimum number of raters needed to obtain reliable evaluations of code violations in television commercials. Six alcohol ads selected for their likelihood of having code violations were rated by community and expert participants (N = 286). Quantitative rating scales were used to measure the content of alcohol advertisements based on alcohol industry self-regulatory guidelines. The community group participants represented vulnerability characteristics that industry codes were designed to protect (e.g., age <21); experts represented various health-related professions, including public health, human development, alcohol research, and mental health. Alcohol ads were rated on 2 occasions separated by 1 month. After completing Time 1 ratings, participants were randomized to receive feedback from 1 group or the other. Findings indicate that (i) ratings at Time 2 had generally reduced variance, suggesting greater consensus after feedback, (ii) feedback from the expert group was more influential than that of the community group in developing group consensus, (iii) the expert group found significantly fewer violations than the community group, (iv) experts representing different professional backgrounds did not differ among themselves in the number of violations identified, and (v) a rating panel composed of at least 15 raters is sufficient to obtain reliable estimates of code violations. The Delphi technique facilitates consensus development around code violations in alcohol ad content and may enhance the ability of regulatory agencies to monitor the content of alcoholic beverage advertising when combined with psychometric-based rating procedures. Copyright © 2013 by the Research Society on Alcoholism.
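The finding that a panel of about 15 raters suffices reflects how pooled-rater reliability grows with panel size. The standard Spearman-Brown prophecy formula illustrates the idea; the paper does not state that it used this exact formula, and the 0.30 single-rater reliability below is an assumed figure:

```python
def spearman_brown(single_rater_reliability, n_raters):
    """Projected reliability of the mean rating of n_raters judges
    (Spearman-Brown prophecy formula)."""
    r = single_rater_reliability
    return n_raters * r / (1.0 + (n_raters - 1.0) * r)
```

With a modest single-rater reliability of 0.30, a 15-rater panel projects to a pooled reliability of about 0.87, and adding further raters yields diminishing returns.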
Chroma sampling and modulation techniques in high dynamic range video coding
NASA Astrophysics Data System (ADS)
Dai, Wei; Krishnan, Madhu; Topiwala, Pankaj
2015-09-01
High Dynamic Range and Wide Color Gamut (HDR/WCG) Video Coding is an area of intense research interest in the engineering community, for potential near-term deployment in the marketplace. HDR greatly enhances the dynamic range of video content (up to 10,000 nits), as well as broadens the chroma representation (BT.2020). The resulting content offers new challenges in its coding and transmission. The Moving Picture Experts Group (MPEG) of the International Standards Organization (ISO) is currently exploring coding efficiency and/or the functionality enhancements of the recently developed HEVC video standard for HDR and WCG content. FastVDO has developed an advanced approach to coding HDR video, based on splitting the HDR signal into a smoothed luminance (SL) signal, and an associated base signal (B). Both signals are then chroma downsampled to YFbFr 4:2:0 signals, using advanced resampling filters, and coded using the Main10 High Efficiency Video Coding (HEVC) standard, which has been developed jointly by ISO/IEC MPEG and ITU-T WP3/16 (VCEG). Our proposal offers both efficient coding, and backwards compatibility with the existing HEVC Main10 Profile. That is, an existing Main10 decoder can produce a viewable standard dynamic range video, suitable for existing screens. Subjective tests show visible improvement over the anchors. Objective tests show a sizable gain of over 25% in PSNR (RGB domain) on average, for a key set of test clips selected by the ISO/MPEG committee.
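The pipeline above downsamples both signals to 4:2:0 before HEVC Main10 coding. As a rough sketch of the subsampling step only, here is a minimal 2x2 box-filter version; the proposal itself uses more advanced resampling filters, so the filter choice here is an assumption for illustration:

```python
# Minimal sketch of 4:4:4 -> 4:2:0 chroma downsampling with a plain
# 2x2 box filter. The FastVDO proposal uses more advanced resampling
# filters; this only illustrates the subsampling structure.

def downsample_420(chroma):
    """Average each 2x2 block of a full-resolution chroma plane."""
    h, w = len(chroma), len(chroma[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            block = [chroma[y][x], chroma[y][x + 1],
                     chroma[y + 1][x], chroma[y + 1][x + 1]]
            row.append(sum(block) / 4.0)
        out.append(row)
    return out

plane = [[10, 20, 30, 40],
         [10, 20, 30, 40],
         [50, 60, 70, 80],
         [50, 60, 70, 80]]
half = downsample_420(plane)  # 2x2 plane: [[15.0, 35.0], [55.0, 75.0]]
```

Each chroma plane shrinks to a quarter of its samples, which is what makes 4:2:0 coding cheaper than full-resolution chroma.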
Treatment provider's knowledge of the Health and Disability Commissioner's Code of Consumer Rights.
Townshend, Philip L; Sellman, J Douglas
2002-06-01
The Health and Disability Commissioner's (HDC) Code of Health and Disability Consumers' Rights (the Code) defines in law the rights of consumers of health and disability services in New Zealand. In the first few years after its publication, health educators, service providers and the HDC promoted the Code extensively. Providers of health and disability services would be expected to be knowledgeable about the areas covered by the Code if it is routinely used in the development and monitoring of treatment plans. In this study, knowledge of the Code was tested in a random sample of 217 clinical staff, including medical staff, psychologists and counsellors, working in Alcohol and Drug (A&D) treatment centres in New Zealand. Any response showing awareness of a right, regardless of wording, was taken as a positive response, as it was the areas covered by the rights, rather than their exact wording, that was considered the important knowledge for providers. The main finding of this research was that 23% of staff surveyed were aware of none of the ten rights in the Code, and only 6% were aware of more than five of the ten rights. Relating these data to results from a wider sample of treatment providers raises the possibility that A&D treatment providers are slightly more aware of the content of the Code than a general sample of health and disability service providers; however, overall awareness of the Code's content among health providers is very low. These results imply that consumer rights issues are not prominent in the minds of providers, perhaps indicating an ethical blind spot on their part. Ignorance of the content of the Code may indicate that the treatment community does not find it a useful working document, or alternatively that clinicians are content to rely on their own good intentions to preserve the rights of their patients.
Further research will be required to explain this lack of knowledge; in the meantime, consumers cannot rely on clinicians being aware of consumers' rights in health and disability services.
Content-Based Multi-Channel Network Coding Algorithm in the Millimeter-Wave Sensor Network
Lin, Kai; Wang, Di; Hu, Long
2016-01-01
With the development of wireless technology, the widespread use of 5G is already an irreversible trend, and millimeter-wave sensor networks are becoming more and more common. However, due to the high degree of complexity and bandwidth bottlenecks, the millimeter-wave sensor network still faces numerous problems. In this paper, we propose a novel content-based multi-channel network coding algorithm, which uses the functions of data fusion, multi-channel and network coding to improve the data transmission; the algorithm is referred to as content-based multi-channel network coding (CMNC). The CMNC algorithm provides a fusion-driven model based on the Dempster-Shafer (D-S) evidence theory to classify the sensor nodes into different classes according to the data content. By using the result of the classification, the CMNC algorithm also provides the channel assignment strategy and uses network coding to further improve the quality of data transmission in the millimeter-wave sensor network. Extensive simulations are carried out and compared to other methods. Our simulation results show that the proposed CMNC algorithm can effectively improve the quality of data transmission and has better performance than the compared methods. PMID:27376302
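The abstract does not detail CMNC beyond its building blocks. As a hedged illustration of the network-coding ingredient alone, the classic two-packet XOR relay primitive can be sketched as follows; this is the textbook building block, not the CMNC algorithm itself, which adds Dempster-Shafer fusion and channel assignment:

```python
# Toy XOR network coding at a relay: two packets are combined into one
# coded transmission; each receiver recovers the missing packet by
# XOR-ing the coded packet with the one it already holds.

def xor_packets(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

pkt_a = b"sensor-A"
pkt_b = b"sensor-B"
coded = xor_packets(pkt_a, pkt_b)        # relay broadcasts one packet
recovered_b = xor_packets(coded, pkt_a)  # node holding A recovers B
recovered_a = xor_packets(coded, pkt_b)  # node holding B recovers A
```

The relay sends one transmission instead of two, which is the bandwidth saving network coding contributes in a constrained millimeter-wave link.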
Mobile Code: The Future of the Internet
1999-01-01
… code (mobile agents) to multiple proxies or servers; "customization" (e.g., re-formatting, filtering, metasearch); information overload; diversified … Mobile code is necessary, rather than client-side code, since many customization features (such as information monitoring) do not work if the … economic foundation for Web sites, many Web sites earn money solely from advertisements. If these sites allow mobile agents to easily access the content …
Code of Ethics for Electrical Engineers
NASA Astrophysics Data System (ADS)
Matsuki, Junya
The Institute of Electrical Engineers of Japan (IEEJ) recently established rules of practice for its members, based on its code of ethics enacted in 1998. This paper first explains the characteristics of the 1998 IEEJ ethical code in detail, in comparison with ethical codes from other fields of engineering. Secondly, the contents that should be included in a modern code of ethics for electrical engineers are discussed. Thirdly, the newly established rules of practice and the modified code of ethics are presented. Finally, the paper reports the results of a questionnaire on the new ethical code and rules, answered on May 23, 2007, by 51 electrical and electronic engineering students at the University of Fukui.
Coding visual features extracted from video sequences.
Baroffio, Luca; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2014-05-01
Visual features are successfully exploited in several applications (e.g., visual search, object recognition and tracking, etc.) due to their ability to efficiently represent image content. Several visual analysis tasks require features to be transmitted over a bandwidth-limited network, thus calling for coding techniques to reduce the required bit budget, while attaining a target level of efficiency. In this paper, we propose, for the first time, a coding architecture designed for local features (e.g., SIFT, SURF) extracted from video sequences. To achieve high coding efficiency, we exploit both spatial and temporal redundancy by means of intraframe and interframe coding modes. In addition, we propose a coding mode decision based on rate-distortion optimization. The proposed coding scheme can be conveniently adopted to implement the analyze-then-compress (ATC) paradigm in the context of visual sensor networks. That is, sets of visual features are extracted from video frames, encoded at remote nodes, and finally transmitted to a central controller that performs visual analysis. This is in contrast to the traditional compress-then-analyze (CTA) paradigm, in which video sequences acquired at a node are compressed and then sent to a central unit for further processing. In this paper, we compare these coding paradigms using metrics that are routinely adopted to evaluate the suitability of visual features in the context of content-based retrieval, object recognition, and tracking. Experimental results demonstrate that, thanks to the significant coding gains achieved by the proposed coding scheme, ATC outperforms CTA with respect to all evaluation metrics.
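The intra/inter mode decision above is rate-distortion optimized. A minimal, hypothetical sketch of Lagrangian mode selection follows; the candidate distortion/rate numbers are invented for illustration, while a real encoder would measure them per descriptor:

```python
# Sketch of Lagrangian rate-distortion mode selection: pick the coding
# mode minimizing J = D + lambda * R. The (distortion, rate) values are
# made-up placeholders, not measurements from the paper's codec.

def best_mode(candidates, lam):
    """candidates: dict mapping mode -> (distortion, rate_bits)."""
    return min(candidates,
               key=lambda m: candidates[m][0] + lam * candidates[m][1])

modes = {
    "intra": (4.0, 120.0),   # low distortion, expensive in bits
    "inter": (5.5, 40.0),    # exploits temporal redundancy
    "skip":  (9.0, 1.0),
}
# At a small lambda distortion dominates; at a large one, rate does.
low_lam = best_mode(modes, 0.001)   # -> "intra"
high_lam = best_mode(modes, 1.0)    # -> "skip"
```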
Content Coding of Psychotherapy Transcripts Using Labeled Topic Models.
Gaut, Garren; Steyvers, Mark; Imel, Zac E; Atkins, David C; Smyth, Padhraic
2017-03-01
Psychotherapy represents a broad class of medical interventions received by millions of patients each year. Unlike most medical treatments, its primary mechanisms are linguistic; i.e., the treatment relies directly on a conversation between a patient and provider. However, the evaluation of patient-provider conversation suffers from critical shortcomings, including intensive labor requirements, coder error, nonstandardized coding systems, and inability to scale up to larger data sets. To overcome these shortcomings, psychotherapy analysis needs a reliable and scalable method for summarizing the content of treatment encounters. We used a publicly available psychotherapy corpus from Alexander Street press comprising a large collection of transcripts of patient-provider conversations to compare coding performance for two machine learning methods. We used the labeled latent Dirichlet allocation (L-LDA) model to learn associations between text and codes, to predict codes in psychotherapy sessions, and to localize specific passages of within-session text representative of a session code. We compared the L-LDA model to a baseline lasso regression model using predictive accuracy and model generalizability (measured by calculating the area under the curve (AUC) from the receiver operating characteristic curve). The L-LDA model outperforms the lasso logistic regression model at predicting session-level codes with average AUC scores of 0.79, and 0.70, respectively. For fine-grained level coding, L-LDA and logistic regression are able to identify specific talk-turns representative of symptom codes. However, model performance for talk-turn identification is not yet as reliable as human coders. We conclude that the L-LDA model has the potential to be an objective, scalable method for accurate automated coding of psychotherapy sessions that perform better than comparable discriminative methods at session-level coding and can also predict fine-grained codes.
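Model comparison in this study rests on AUC from the ROC curve. A minimal sketch of AUC via its rank-based (Mann-Whitney) formulation, with toy scores standing in for model output:

```python
# AUC as the probability that a randomly chosen positive instance
# scores higher than a randomly chosen negative one, ties counting 0.5.
# Scores and labels below are toy values, not L-LDA output.

def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.6, 0.4, 0.3]
labels = [1,   1,   0,   1,   0]
print(auc(scores, labels))  # 5/6 = 0.8333...
```

An AUC of 0.79 versus 0.70, as reported for L-LDA versus the lasso baseline, means the topic model ranks a true session code above a false one noticeably more often.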
ERIC Educational Resources Information Center
Du, Jie; Wimmer, Hayden; Rada, Roy
2018-01-01
This study investigates the delivery of the "Hour of Code" tutorials to college students. The college students who participated in this study were surveyed about their opinion of the Hour of Code. First, the students' comments were discussed. Next, a content analysis of the offered tutorials highlights their reliance on visual…
Bohlin, Jon; Eldholm, Vegard; Pettersson, John H O; Brynildsrud, Ola; Snipen, Lars
2017-02-10
The core genome consists of genes shared by the vast majority of a species and is therefore assumed to have been subjected to substantially stronger purifying selection than the more mobile elements of the genome, also known as the accessory genome. Here we examine intragenic base composition differences in core genomes and corresponding accessory genomes in 36 species, represented by the genomes of 731 bacterial strains, to assess the impact of selective forces on base composition in microbes. We also explore, in turn, how these results compare with findings for whole genome intragenic regions. We found that GC content in coding regions is significantly higher in core genomes than accessory genomes and whole genomes. Likewise, GC content variation within coding regions was significantly lower in core genomes than in accessory genomes and whole genomes. Relative entropy in coding regions, measured as the difference between observed and expected trinucleotide frequencies estimated from mononucleotide frequencies, was significantly higher in the core genomes than in accessory and whole genomes. Relative entropy was positively associated with coding region GC content within the accessory genomes, but not within the corresponding coding regions of core or whole genomes. The higher intragenic GC content and relative entropy, as well as the lower GC content variation, observed in the core genomes is most likely associated with selective constraints. It is unclear whether the positive association between GC content and relative entropy in the more mobile accessory genomes constitutes signatures of selection or selective neutral processes.
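The two quantities this abstract relies on, GC content and relative entropy between observed trinucleotide frequencies and those expected from mononucleotide frequencies alone, can be sketched as follows (toy sequence, not real genomic data):

```python
# GC content and relative entropy (Kullback-Leibler divergence) of
# observed trinucleotide frequencies versus the product of the
# mononucleotide frequencies, as described in the abstract.
from collections import Counter
from math import log

def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def relative_entropy(seq):
    mono = Counter(seq)
    tri = Counter(seq[i:i + 3] for i in range(len(seq) - 2))
    n_tri = sum(tri.values())
    n_mono = len(seq)
    d = 0.0
    for t, count in tri.items():
        obs = count / n_tri
        exp = (mono[t[0]] / n_mono) * (mono[t[1]] / n_mono) * (mono[t[2]] / n_mono)
        d += obs * log(obs / exp, 2)
    return d  # bits; near 0 when trinucleotides carry no extra signal

seq = "ATGGCGTGCAATGGCGTGCA"  # toy sequence
print(gc_content(seq))  # 0.6
```

Higher relative entropy in core genomes then means their trinucleotide usage departs further from what mononucleotide composition alone would predict.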
NASA Technical Reports Server (NTRS)
Barry, Matthew R.; Osborne, Richard N.
2005-01-01
The RoseDoclet computer program extends the capability of Java doclet software to automatically synthesize Unified Modeling Language (UML) content from Java language source code. [Doclets are Java-language programs that use the doclet application programming interface (API) to specify the content and format of the output of Javadoc. Javadoc is a program, originally designed to generate API documentation from Java source code, now also useful as an extensible engine for processing Java source code.] RoseDoclet takes advantage of Javadoc comments and tags already in the source code to produce a UML model of that code. RoseDoclet applies the doclet API to create a doclet passed to Javadoc. The Javadoc engine applies the doclet to the source code, emitting the output format specified by the doclet. RoseDoclet emits a Rose model file and populates it with fully documented packages, classes, methods, variables, and class diagrams identified in the source code. The way in which UML models are generated can be controlled by use of new Javadoc comment tags that RoseDoclet provides. The advantage of using RoseDoclet is that Javadoc documentation becomes leveraged for two purposes: documenting the as-built API and keeping the design documentation up to date.
Assessment of Self-Regulatory Code Violations in Brazilian Television Beer Advertisements
Vendrame, Alan; Pinsky, Ilana; Souza E Silva, Rebeca; Babor, Thomas
2010-01-01
Objective: Research suggests that alcoholic beverage advertisements may have an adverse effect on teenagers and young adults, owing to their vulnerability to suggestive message content. This study was designed to evaluate perceived violations of the content guidelines of the Brazilian alcohol marketing self-regulation code, based on ratings of the five most popular beer advertisements broadcast on television in the summer of 2005–2006 and during the 2006 FIFA (Fédération Internationale de Football Association) World Cup games. Method: Five beer advertisements were selected from a previous study showing that they were perceived to be highly appealing to a sample of Brazilian teenagers. These advertisements were evaluated by a sample of Brazilian high school students using a rating procedure designed to measure the content of alcohol advertisements covered in industry self-regulation codes. Results: All five advertisements were found to violate multiple guidelines of the Brazilian code of marketing self-regulation. The advertisement with the greatest number of violations was Antarctica's “Male Repellent,” which was perceived to violate 11 of the 16 guidelines in the code. Two advertisements had nine violations, and one had eight. The guidelines most likely to be violated by these advertisements were Guideline 1, which is aimed at protecting children and teenagers, and Guideline 2, which prohibits content encouraging excessive and irresponsible alcoholic beverage consumption. Conclusions: The five beer advertisements rated as most appealing to Brazilian teenagers were perceived by a sample of the same population to have violated numerous principles of the Brazilian self-regulation code governing the marketing of alcoholic beverages. 
Because of these numerous perceived code violations, it now seems important for regulatory authorities to submit industry marketing content to more systematic evaluation by young people and public health experts and for researchers to focus more on the ways in which alcohol advertising influences early onset of drinking and excessive alcohol consumption. PMID:20409439
The "Motherese" of Mr. Rogers: A Description of the Dialogue of Educational Television Programs.
ERIC Educational Resources Information Center
Rice, Mabel L.; Haight, Patti L.
Dialogue from 30-minute samples from "Sesame Street" and "Mr. Rogers' Neighborhood" was coded for grammar, content, and discourse. Grammatical analysis used the LINGQUEST computer-assisted language assessment program (Mordecai, Palen, and Palmer 1982). Content coding was based on categories developed by Rice (1984) and…
Prescription Drug Abuse Information in D.A.R.E.
ERIC Educational Resources Information Center
Morris, Melissa C.; Cline, Rebecca J. Welch; Weiler, Robert M.; Broadway, S. Camille
2006-01-01
This investigation was designed to examine prescription drug-related content and learning objectives in Drug Abuse Resistance Education (D.A.R.E.) for upper elementary and middle schools. Specific prescription-drug topics and context associated with content and objectives were coded. The coding system for topics included 126 topics organized…
Analyzing Prosocial Content on T.V.
ERIC Educational Resources Information Center
Davidson, Emily S.; Neale, John M.
To enhance knowledge of television content, a prosocial code was developed by watching a large number of potentially prosocial television programs and making notes on all the positive acts. The behaviors were classified into a workable number of categories. The prosocial code is largely verbal and contains seven categories which fall into two…
Content Analysis Coding Schemes for Online Asynchronous Discussion
ERIC Educational Resources Information Center
Weltzer-Ward, Lisa
2011-01-01
Purpose: Researchers commonly utilize coding-based analysis of classroom asynchronous discussion contributions as part of studies of online learning and instruction. However, this analysis is inconsistent from study to study with over 50 coding schemes and procedures applied in the last eight years. The aim of this article is to provide a basis…
Dong, Lu; Zhao, Xin; Ong, Stacie L; Harvey, Allison G
2017-10-01
The current study examined whether and which specific contents of patients' memory for cognitive therapy (CT) were associated with treatment adherence and outcome. Data were drawn from a pilot RCT of forty-eight depressed adults, who received either CT plus Memory Support Intervention (CT + Memory Support) or CT-as-usual. Patients' memory for treatment was measured using the Patient Recall Task and responses were coded into cognitive behavioral therapy (CBT) codes, such as CBT Model and Cognitive Restructuring, and non-CBT codes, such as individual coping strategies and no code. Treatment adherence was measured using therapist and patient ratings during treatment. Depression outcomes included treatment response, remission, and recurrence. Total number of CBT codes recalled was not significantly different comparing CT + Memory Support to CT-as-usual. Total CBT codes recalled were positively associated with adherence, while non-CBT codes recalled were negatively associated with adherence. Treatment responders (vs. non-responders) exhibited a significant increase in their recall of Cognitive Restructuring from session 7 to posttreatment. Greater recall of Cognitive Restructuring was marginally significantly associated with remission. Greater total number of CBT codes recalled (particularly CBT Model) was associated with non-recurrence of depression. Results highlight the important relationships between patients' memory for treatment and treatment adherence and outcome.
Does the Genetic Code Have A Eukaryotic Origin?
Zhang, Zhang; Yu, Jun
2013-01-01
In the RNA world, RNA is assumed to be the dominant macromolecule performing most, if not all, core “house-keeping” functions. The ribo-cell hypothesis suggests that the genetic code and the translation machinery may both be born of the RNA world, and the introduction of DNA to ribo-cells may take over the informational role of RNA gradually, such as a mature set of genetic code and mechanism enabling stable inheritance of sequence and its variation. In this context, we modeled the genetic code in two content variables—GC and purine contents—of protein-coding sequences and measured the purine content sensitivities for each codon when the sensitivity (% usage) is plotted as a function of GC content variation. The analysis leads to a new pattern—the symmetric pattern—where the sensitivity of purine content variation shows diagonally symmetry in the codon table more significantly in the two GC content invariable quarters in addition to the two existing patterns where the table is divided into either four GC content sensitivity quarters or two amino acid diversity halves. The most insensitive codon sets are GUN (valine) and CAN (CAR for asparagine and CAY for aspartic acid) and the most biased amino acid is valine (always over-estimated) followed by alanine (always under-estimated). The unique position of valine and its codons suggests its key roles in the final recruitment of the complete codon set of the canonical table. The distinct choice may only be attributable to sequence signatures or signals of splice sites for spliceosomal introns shared by all extant eukaryotes. PMID:23402863
Babor, Thomas F; Xuan, Ziming; Proctor, Dwayne
2008-03-01
The purposes of this study were to develop reliable procedures to monitor the content of alcohol advertisements broadcast on television and in other media, and to detect violations of the content guidelines of the alcohol industry's self-regulation codes. A set of rating-scale items was developed to measure the content guidelines of the 1997 version of the U.S. Beer Institute Code. Six focus groups were conducted with 60 college students to evaluate the face validity of the items and the feasibility of the procedure. A test-retest reliability study was then conducted with 74 participants, who rated five alcohol advertisements on two occasions separated by 1 week. Average correlations across all advertisements using three reliability statistics (r, rho, and kappa) were almost all statistically significant and the kappas were good for most items, which indicated high test-retest agreement. We also found high interrater reliabilities (intraclass correlations) among raters for item-level and guideline-level violations, indicating that regardless of the specific item, raters were consistent in their general evaluations of the advertisements. Naïve (untrained) raters can provide consistent (reliable) ratings of the main content guidelines proposed in the U.S. Beer Institute Code. The rating procedure may have future applications for monitoring compliance with industry self-regulation codes and for conducting research on the ways in which alcohol advertisements are perceived by young adults and other vulnerable populations.
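Test-retest agreement in this study is summarized with r, rho, and kappa. As a sketch of the kappa component alone, here is Cohen's kappa for two rating occasions on a binary violation judgment, with ratings invented for illustration:

```python
# Cohen's kappa: chance-corrected agreement between two rating
# occasions. The binary "violates guideline / does not" ratings below
# are toy data, not the study's actual ratings.

def cohens_kappa(r1, r2):
    n = len(r1)
    cats = set(r1) | set(r2)
    p_obs = sum(a == b for a, b in zip(r1, r2)) / n
    p_exp = sum((r1.count(c) / n) * (r2.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

time1 = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]
time2 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(time1, time2), 2))  # 0.58
```

Unlike raw percent agreement (0.8 here), kappa discounts the agreement expected by chance, which is why it is the preferred statistic for categorical test-retest data.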
Proposal for a new content model for the Austrian Procedure Catalogue.
Neururer, Sabrina B; Pfeiffer, Karl P
2013-01-01
The Austrian Procedure Catalogue is used for procedure coding in Austria. Its architecture and content have some major weaknesses. The aim of this study is to present a potential new content model for this classification system, consisting of the main characteristics of health interventions and visualized as a UML class diagram. Based on this proposal, an implementation of an ontology for procedure coding is planned.
NAEYC Code of Ethical Conduct. Revised = Codigo de Conducta Etica. Revisada
ERIC Educational Resources Information Center
National Association of Elementary School Principals (NAESP), 2005
2005-01-01
This document presents a code of ethics for early childhood educators that offers guidelines for responsible behavior and sets forth a common basis for resolving ethical dilemmas encountered in early education. It represents the English and Spanish versions of the revised code. Its contents were approved by the NAEYC Governing Board in April 2005…
ERIC Educational Resources Information Center
Foster, Catherine; McMenemy, David
2012-01-01
Thirty-six ethical codes from national professional associations were studied, the aim to test whether librarians have global shared values or if political and cultural contexts have significantly influenced the codes' content. Gorman's eight core values of stewardship, service, intellectual freedom, rationalism, literacy and learning, equity of…
van der Mei, Sijrike F; Dijkers, Marcel P J M; Heerkens, Yvonne F
2011-12-01
To examine to what extent the concept and the domains of participation as defined in the International Classification of Functioning, Disability and Health (ICF) are represented in general cancer-specific health-related quality of life (HRQOL) instruments. Using the ICF linking rules, two coders independently extracted the meaningful concepts of ten instruments and linked these to ICF codes. The proportion of concepts that could be linked to ICF codes ranged from 68 to 95%. Although all instruments contained concepts linked to Participation (Chapters d7-d9 of the classification of 'Activities and Participation'), the instruments covered only a small part of all available ICF codes. The proportion of ICF codes in the instruments that were participation related ranged from 3 to 35%. 'Major life areas' (d8) was the most frequently used Participation Chapter, with d850 'remunerative employment' as the most used ICF code. The number of participation-related ICF codes covered in the instruments is limited. General cancer-specific HRQOL instruments only assess social life of cancer patients to a limited degree. This study's information on the content of these instruments may guide researchers in selecting the appropriate instrument for a specific research purpose.
Using a Mixed Methods Content Analysis to Analyze Mission Statements from Colleges of Engineering
ERIC Educational Resources Information Center
Creamer, Elizabeth G.; Ghoston, Michelle
2013-01-01
A mixed method design was used to conduct a content analysis of the mission statements of colleges of engineering, to map inductively derived codes onto the EC 2000 outcomes and to test whether any of the codes were significantly associated with institutions with reasonably strong representation of women. Most institutions' (25 of 48) mission statement…
Detecting well-being via computerized content analysis of brief diary entries.
Tov, William; Ng, Kok Leong; Lin, Han; Qiu, Lin
2013-12-01
Two studies evaluated the correspondence between self-reported well-being and codings of emotion and life content by the Linguistic Inquiry and Word Count (LIWC; Pennebaker, Booth, & Francis, 2011). Open-ended diary responses were collected from 206 participants daily for 3 weeks (Study 1) and from 139 participants twice a week for 8 weeks (Study 2). LIWC negative emotion consistently correlated with self-reported negative emotion. LIWC positive emotion correlated with self-reported positive emotion in Study 1 but not in Study 2. No correlations were observed with global life satisfaction. Using a co-occurrence coding method to combine LIWC emotion codings with life-content codings, we estimated the frequency of positive and negative events in 6 life domains (family, friends, academics, health, leisure, and money). Domain-specific event frequencies predicted self-reported satisfaction in all domains in Study 1 but not consistently in Study 2. We suggest that the correspondence between LIWC codings and self-reported well-being is affected by the number of writing samples collected per day as well as the target period (e.g., past day vs. past week) assessed by the self-report measure. Extensions and possible implications for the analyses of similar types of open-ended data (e.g., social media messages) are discussed.
Ahmad, Muneer; Jung, Low Tan; Bhuiyan, Al-Amin
2017-10-01
Digital signal processing techniques commonly employ fixed-length window filters to process the signal contents. DNA signals differ in characteristics from common digital signals since their contents are nucleotides. The nucleotides carry genetic code context and exhibit fuzzy behavior due to their special structure and order in the DNA strand. Employing conventional fixed-length window filters for DNA signal processing produces spectral leakage and hence introduces signal noise. A biological-context-aware adaptive window filter is required to process DNA signals. This paper introduces a biologically inspired fuzzy adaptive window median filter (FAWMF), which computes the fuzzy membership strength of nucleotides in each sliding window and filters nucleotides based on median filtering with a combination of s-shaped and z-shaped filters. Since coding regions exhibit 3-base periodicity arising from an unbalanced nucleotide distribution and a relatively high bias in nucleotide usage, this fundamental characteristic has been exploited in FAWMF to suppress the signal noise. Along with the adaptive response of FAWMF, a strong correlation between median nucleotides and the Π-shaped filter was observed, which produced enhanced discrimination between coding and non-coding regions compared to fixed-length conventional window filters. The proposed FAWMF attains a significant enhancement in coding-region identification, i.e., 40% to 125%, compared to other conventional window filters tested over more than 250 benchmarked and randomly selected DNA datasets of different organisms. This study shows that conventional fixed-length window filters applied to DNA signals do not achieve significant results since the nucleotides carry genetic code context. The proposed FAWMF algorithm is adaptive and significantly outperforms them in processing DNA signal contents.
Applied to a variety of DNA datasets, the algorithm produced noteworthy discrimination between coding and non-coding regions, in contrast to fixed-length conventional window filters. Copyright © 2017 Elsevier B.V. All rights reserved.
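The paper's FAWMF itself is not published as code, but the baseline it argues against, a fixed-length window filter applied to a numeric nucleotide indicator signal, can be sketched in a few lines. The Voss binary mapping and the window length below are illustrative assumptions, not the authors' exact setup:

```python
def binary_indicator(seq, base):
    """Voss mapping: 1 where the nucleotide matches `base`, else 0."""
    return [1 if c == base else 0 for c in seq]

def median_filter(signal, window=9):
    """Fixed-length sliding median; edges use a shrunken window.
    This is the conventional, non-adaptive filter the paper critiques."""
    half = window // 2
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - half), min(len(signal), i + half + 1)
        sorted_window = sorted(signal[lo:hi])
        out.append(sorted_window[len(sorted_window) // 2])
    return out

sig = binary_indicator("ATGGCGGCGATAA", "G")
print(median_filter(sig, window=5))
```

Because the window length is fixed, short coding/non-coding transitions are smeared, which is precisely the spectral-leakage problem the adaptive FAWMF is designed to avoid.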
High-Content Optical Codes for Protecting Rapid Diagnostic Tests from Counterfeiting.
Gökçe, Onur; Mercandetti, Cristina; Delamarche, Emmanuel
2018-06-19
Warnings and reports on counterfeit diagnostic devices are released several times a year by regulators and public health agencies. Unfortunately, mishandling, altering, and counterfeiting point-of-care diagnostics (POCDs) and rapid diagnostic tests (RDTs) are lucrative, relatively simple, and can lead to devastating consequences. Here, we demonstrate how to implement optical security codes in silicon- and nitrocellulose-based flow paths for device authentication using a smartphone. The codes are created by inkjet spotting inks directly on nitrocellulose or on micropillars. Codes containing up to 32 elements per mm² and 8 colors can encode as many as 10⁴⁵ combinations. Codes on silicon micropillars can be erased by setting a continuous flow path across the entire array of code elements, or, for nitrocellulose, by simply wicking a liquid across the code. Static or labile code elements can further be formed on nitrocellulose to create a hidden code using poly(ethylene glycol) (PEG) or glycerol additives to the inks. More advanced codes having a specific deletion sequence can also be created in silicon microfluidic devices using an array of passive routing nodes, which activate in a particular, programmable sequence. Such codes are simple to fabricate, easy to view, and efficient in coding information; they can ideally be used in combination with information on a package to protect diagnostic devices from counterfeiting.
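The encoding capacity quoted above follows from simple combinatorics: n code elements, each taking one of k distinguishable colors, give kⁿ distinct codes. A quick sanity check of the scaling (illustrative arithmetic only, not the device's exact element count or layout):

```python
import math

def code_capacity(colors, elements):
    """Number of distinct codes with `colors` choices per element."""
    return colors ** elements

# How many 8-color elements are needed to reach the ~1e45
# combinations the abstract cites?
needed = math.ceil(45 / math.log10(8))
print(needed)                              # → 50
print(code_capacity(8, needed) >= 10**45)  # → True
```

So a code area of a few mm² at 32 elements per mm² comfortably exceeds the quoted capacity.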
Towards a European code of medical ethics. Ethical and legal issues.
Patuzzo, Sara; Pulice, Elisabetta
2017-01-01
The feasibility of a common European code of medical ethics is discussed, with consideration and evaluation of the difficulties such a project is going to face, from both the legal and ethical points of view. On the one hand, the analysis will underline the limits of a common European code of medical ethics as an instrument for harmonising national professional rules in the European context; on the other hand, we will highlight some of the potentials of this project, which could be increased and strengthened through a proper rulemaking process and through adequate and careful choice of content. We will also stress specific elements and devices that should be taken into consideration during the establishment of the code, from both procedural and content perspectives. Regarding methodological issues, the limits and potentialities of a common European code of medical ethics will be analysed from an ethical point of view and then from a legal perspective. The aim of this paper is to clarify the framework for the potential but controversial role of the code in the European context, showing the difficulties in enforcing and harmonising national ethical rules into a European code of medical ethics. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
An Examination of the Reliability of the Organizational Assessment Package (OAP).
1981-07-01
Reactivity or pretest sensitization (Bracht and Glass, 1968) may occur; in this case, the change from pretest to posttest can be caused just by the… content items. The blocks for supervisor's code were left blank, work group code was coded as all ones, and each person's seminar number was coded in… [A garbled reliability-coefficient table for OAP factors, including Work Group Effectiveness, Job Related Satisfaction, and Job Related Satisfaction subscales, is not recoverable from the scanned text.]
Proceedings of the 14th International Conference on the Numerical Simulation of Plasmas
NASA Astrophysics Data System (ADS)
Partial Contents are as follows: Numerical Simulations of the Vlasov-Maxwell Equations by Coupled Particle-Finite Element Methods on Unstructured Meshes; Electromagnetic PIC Simulations Using Finite Elements on Unstructured Grids; Modelling Travelling Wave Output Structures with the Particle-in-Cell Code CONDOR; SST--A Single-Slice Particle Simulation Code; Graphical Display and Animation of Data Produced by Electromagnetic, Particle-in-Cell Codes; A Post-Processor for the PEST Code; Gray Scale Rendering of Beam Profile Data; A 2D Electromagnetic PIC Code for Distributed Memory Parallel Computers; 3-D Electromagnetic PIC Simulation on the NRL Connection Machine; Plasma PIC Simulations on MIMD Computers; Vlasov-Maxwell Algorithm for Electromagnetic Plasma Simulation on Distributed Architectures; MHD Boundary Layer Calculation Using the Vortex Method; and Eulerian Codes for Plasma Simulations.
Applying a rateless code in content delivery networks
NASA Astrophysics Data System (ADS)
Suherman; Zarlis, Muhammad; Parulian Sitorus, Sahat; Al-Akaidi, Marwan
2017-09-01
Content delivery network (CDN) allows internet providers to locate their services and to map their coverage into networks without necessarily owning them. CDN is part of the current internet infrastructure, supporting multi-server applications, especially social media. Various works have been proposed to improve CDN performance. Since accesses on social media servers tend to be short but frequent, adding redundancy to the transmitted packets, so that lost packets do not degrade the information integrity, may improve service performance. This paper examines the implementation of a rateless code in the CDN infrastructure. The NS-2 evaluations show that the rateless code is able to reduce packet loss by up to 50%.
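A rateless (fountain) code of the kind evaluated here can be sketched with XOR combinations: the sender streams random XORs of the source blocks, and the receiver peels them apart, so any sufficiently large subset of received symbols suffices regardless of which packets were lost. This is a minimal LT-style toy with a uniform degree distribution, not the paper's NS-2 configuration:

```python
import random

def encode(blocks, n_symbols, seed=0):
    """Emit n_symbols rateless symbols; each is the XOR of a random
    subset of the source blocks, tagged with the subset's indices."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_symbols):
        degree = rng.randint(1, len(blocks))
        idxs = rng.sample(range(len(blocks)), degree)
        val = 0
        for i in idxs:
            val ^= blocks[i]
        out.append((set(idxs), val))
    return out

def decode(symbols, n_blocks):
    """Peeling decoder: substitute recovered blocks into the remaining
    symbols until degree-1 symbols reveal new blocks; unrecovered
    positions come back as None."""
    known = {}
    syms = [(set(idxs), val) for idxs, val in symbols]
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        for j, (idxs, val) in enumerate(syms):
            for i in list(idxs):
                if i in known:
                    idxs.discard(i)
                    val ^= known[i]
            if len(idxs) == 1:
                known[idxs.pop()] = val
                progress = True
            syms[j] = (idxs, val)
    return [known.get(i) for i in range(n_blocks)]

blocks = [0x12, 0x34, 0x56, 0x78]
received = encode(blocks, 16, seed=1)  # extra symbols absorb losses
print(decode(received, len(blocks)))
```

Because decoding needs only "enough" symbols rather than specific ones, short, frequent transfers can tolerate loss without retransmission round trips, which is the mechanism behind the reported loss reduction.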
Abrahams, Sheryl W
2012-08-01
The advent of social networking sites and other online communities presents new opportunities and challenges for the promotion, protection, and support of breastfeeding. This study examines the presence of infant formula marketing on popular US social media sites, using the World Health Organization International Code of Marketing of Breast-milk Substitutes (the Code) as a framework. We examined to what extent each of 11 infant formula brands that are widely available in the US had established a social media presence in popular social media venues likely to be visited by expectant parents and families with young children. We then examined current marketing practices, using the Code as a basis for ethical marketing. Infant formula manufacturers have established a social media presence primarily through Facebook pages, interactive features on their own Web sites, mobile apps for new and expecting parents, YouTube videos, sponsored reviews on parenting blogs, and other financial relationships with parenting blogs. Violations of the Code as well as promotional practices unforeseen by the Code were identified. These practices included enabling user-generated content that promotes the use of infant formula, financial relationships between manufacturers and bloggers, and creation of mobile apps for use by parents. An additional concern identified for Code enforcement is lack of transparency in social media-based marketing. The use of social media for formula marketing may demand new strategies for monitoring and enforcing the Code in light of emerging challenges, including suggested content for upcoming consideration for World Health Assembly resolutions.
Practical guide to bar coding for patient medication safety.
Neuenschwander, Mark; Cohen, Michael R; Vaida, Allen J; Patchett, Jeffrey A; Kelly, Jamie; Trohimovich, Barbara
2003-04-15
Bar coding for the medication administration step of the drug-use process is discussed. FDA will propose a rule in 2003 that would require bar-code labels on all human drugs and biologicals. Even with an FDA mandate, manufacturer procrastination and possible shifts in product availability are likely to slow progress. Such delays should not preclude health systems from adopting bar-code-enabled point-of-care (BPOC) systems to achieve gains in patient safety. Bar-code technology is a replacement for traditional keyboard data entry. The elements of bar coding are content, which determines the meaning; data format, which refers to the embedded data; and symbology, which describes the "font" in which the machine-readable code is written. For a BPOC system to deliver an acceptable level of patient protection, the hospital must first establish reliable processes for a patient identification band, caregiver badge, and medication bar coding. Medications can have either drug-specific or patient-specific bar codes. Both varieties result in the desired code that supports the patient's five rights of drug administration. When medications are not available from the manufacturer in immediate-container bar-coded packaging, other means of applying the bar code must be devised, including the use of repackaging equipment, overwrapping, manual bar coding, and outsourcing. Virtually all medications should be bar coded, the bar code on the label should be easily readable, and appropriate policies, procedures, and checks should be in place. Bar coding has the potential not only to be cost-effective but also to produce a return on investment. By bar coding patient identification tags, caregiver badges, and immediate-container medications, health systems can substantially increase patient safety during medication administration.
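As a concrete illustration of the "content" element, GS1 bar codes (UPC/EAN/GTIN) embed a mod-10 check digit that catches most scanning and keying errors. The standard weighting rule is sketched below; this is the generic GS1 algorithm, not anything specific to the article's BPOC systems:

```python
def gs1_check_digit(payload: str) -> int:
    """Mod-10 check digit for a GS1 payload (a GTIN without its final
    digit): weight 3 on the rightmost payload digit, alternating 3/1."""
    total = 0
    for i, ch in enumerate(reversed(payload)):
        weight = 3 if i % 2 == 0 else 1
        total += int(ch) * weight
    return (10 - total % 10) % 10

# Well-known EAN-13 example: 4006381333931 ends in check digit 1.
print(gs1_check_digit("400638133393"))  # → 1
```

A scanner that recomputes this digit rejects misread labels before they reach the medication record.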
Green, Nancy
2005-04-01
We developed a Bayesian network coding scheme for annotating biomedical content in layperson-oriented clinical genetics documents. The coding scheme supports the representation of probabilistic and causal relationships among concepts in this domain, at a high enough level of abstraction to capture commonalities among genetic processes and their relationship to health. We are using the coding scheme to annotate a corpus of genetic counseling patient letters as part of the requirements analysis and knowledge acquisition phase of a natural language generation project. This paper describes the coding scheme and presents an evaluation of intercoder reliability for its tag set. In addition to giving examples of use of the coding scheme for analysis of discourse and linguistic features in this genre, we suggest other uses for it in analysis of layperson-oriented text and dialogue in medical communication.
NASA Astrophysics Data System (ADS)
Hanhart, Philippe; Řeřábek, Martin; Ebrahimi, Touradj
2015-09-01
This paper reports the details and results of the subjective evaluations conducted at EPFL to evaluate the responses to the Call for Evidence (CfE) for High Dynamic Range (HDR) and Wide Color Gamut (WCG) Video Coding issued by the Moving Picture Experts Group (MPEG). The CfE on HDR/WCG Video Coding aims to explore whether the coding efficiency and/or the functionality of the current version of the HEVC standard can be significantly improved for HDR and WCG content. In total, nine submissions, five for Category 1 and four for Category 3a, were compared to the HEVC Main 10 Profile based Anchor. In particular, five HDR video contents, compressed at four bit rates by each proponent responding to the CfE, were used in the subjective evaluations. Further, the side-by-side presentation methodology was used in the subjective experiment to discriminate small differences between the Anchor and proponents. Subjective results show that the proposals provide evidence that the coding efficiency can be improved in a statistically noticeable way over the MPEG CfE Anchors in terms of perceived quality within the investigated content. The paper further benchmarks the selected objective metrics based on their correlations with the subjective ratings. It is shown that PSNR-DE1000, HDR-VDP-2, and PSNR-Lx can reliably detect visible differences between the proposed encoding solutions and the current HEVC standard.
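Of the objective metrics benchmarked above, the PSNR family reduces to the textbook formula; a minimal version (plain PSNR over sample arrays, not the HDR-specific PSNR-DE1000 or PSNR-Lx variants) is:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two equal-length
    sample sequences (e.g., flattened 8-bit frames)."""
    mse = sum((a - b) ** 2 for a, b in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical signals
    return 10.0 * math.log10(peak ** 2 / mse)

print(round(psnr([50, 100, 150], [52, 98, 151]), 2))
```

The HDR variants differ mainly in the transfer function applied to the samples before the error is measured, not in this core computation.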
Development of a Coding Instrument to Assess the Quality and Content of Anti-Tobacco Video Games.
Alber, Julia M; Watson, Anna M; Barnett, Tracey E; Mercado, Rebeccah; Bernhardt, Jay M
2015-07-01
Previous research has shown the use of electronic video games as an effective method for increasing content knowledge about the risks of drugs and alcohol use for adolescents. Although best practice suggests that theory, health communication strategies, and game appeal are important characteristics for developing games, no instruments are currently available to examine the quality and content of tobacco prevention and cessation electronic games. This study presents the systematic development of a coding instrument to measure the quality, use of theory, and health communication strategies of tobacco cessation and prevention electronic games. Using previous research and expert review, a content analysis coding instrument measuring 67 characteristics was developed with three overarching categories: type and quality of games, theory and approach, and type and format of messages. Two trained coders applied the instrument to 88 games on four platforms (personal computer, Nintendo DS, iPhone, and Android phone) to field test the instrument. Cohen's kappa for each item ranged from 0.66 to 1.00, with an average kappa value of 0.97. Future research can adapt this coding instrument to games addressing other health issues. In addition, the instrument questions can serve as a useful guide for evidence-based game development.
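The inter-coder agreement statistic reported above, Cohen's kappa, corrects raw agreement for agreement expected by chance; a minimal stdlib implementation (illustrative, not the authors' exact computation):

```python
from collections import Counter

def cohens_kappa(coder1, coder2):
    """Cohen's kappa for two coders' nominal labels of the same items.
    Undefined (division by zero) if chance agreement is exactly 1."""
    n = len(coder1)
    p_obs = sum(a == b for a, b in zip(coder1, coder2)) / n
    c1, c2 = Counter(coder1), Counter(coder2)
    p_exp = sum(c1[k] * c2[k] for k in c1.keys() | c2.keys()) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Perfect agreement on a balanced two-category item:
print(cohens_kappa(["yes", "no", "yes", "no"],
                   ["yes", "no", "yes", "no"]))  # → 1.0
```

Values of 0.66-1.00 per item, as reported, indicate substantial to perfect agreement on conventional benchmarks.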
What does music express? Basic emotions and beyond.
Juslin, Patrik N
2013-01-01
Numerous studies have investigated whether music can reliably convey emotions to listeners, and, if so, what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of "multiple layers" of musical expression of emotions. The "core" layer is constituted by iconically coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this "core" layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions, though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions.
Christophel, Thomas B; Allefeld, Carsten; Endisch, Christian; Haynes, John-Dylan
2018-06-01
Traditional views of visual working memory postulate that memorized contents are stored in dorsolateral prefrontal cortex using an adaptive and flexible code. In contrast, recent studies proposed that contents are maintained by posterior brain areas using codes akin to perceptual representations. An important question is whether this reflects a difference in the level of abstraction between posterior and prefrontal representations. Here, we investigated whether neural representations of visual working memory contents are view-independent, as indicated by rotation-invariance. Using functional magnetic resonance imaging and multivariate pattern analyses, we show that when subjects memorize complex shapes, both posterior and frontal brain regions maintain the memorized contents using a rotation-invariant code. Importantly, we found the representations in frontal cortex to be localized to the frontal eye fields rather than dorsolateral prefrontal cortices. Thus, our results give evidence for the view-independent storage of complex shapes in distributed representations across posterior and frontal brain regions.
Code inspection instructional validation
NASA Technical Reports Server (NTRS)
Orr, Kay; Stancil, Shirley
1992-01-01
The Shuttle Data Systems Branch (SDSB) of the Flight Data Systems Division (FDSD) at Johnson Space Center contracted with Southwest Research Institute (SwRI) to validate the effectiveness of an interactive video course on the code inspection process. The purpose of this project was to determine if this course could be effective for teaching NASA analysts the process of code inspection. In addition, NASA was interested in the effectiveness of this unique type of instruction (Digital Video Interactive), for providing training on software processes. This study found the Carnegie Mellon course, 'A Cure for the Common Code', effective for teaching the process of code inspection. In addition, analysts prefer learning with this method of instruction, or this method in combination with other methods. As is, the course is definitely better than no course at all; however, findings indicate changes are needed. Following are conclusions of this study. (1) The course is instructionally effective. (2) The simulation has a positive effect on student's confidence in his ability to apply new knowledge. (3) Analysts like the course and prefer this method of training, or this method in combination with current methods of training in code inspection, over the way training is currently being conducted. (4) Analysts responded favorably to information presented through scenarios incorporating full motion video. (5) Some course content needs to be changed. (6) Some content needs to be added to the course. SwRI believes this study indicates interactive video instruction combined with simulation is effective for teaching software processes. Based on the conclusions of this study, SwRI has outlined seven options for NASA to consider. SwRI recommends the option which involves creation of new source code and data files, but uses much of the existing content and design from the current course. 
Although this option involves a significant software development effort, SwRI believes this option will produce the most effective results.
Embed dynamic content in your poster.
Hutchins, B Ian
2013-01-29
A new technology has emerged that will facilitate the presentation of dynamic or otherwise inaccessible data on posters at scientific meetings. Video, audio, or other digital files hosted on mobile-friendly sites can be linked to through a quick response (QR) code, a two-dimensional barcode that can be scanned by smartphones, which then display the content. This approach is more affordable than acquiring tablet computers for playing dynamic content and can reach many users at large conferences. This resource details how to host videos, generate QR codes, and view the associated files on mobile devices.
Reliability in Cross-National Content Analysis.
ERIC Educational Resources Information Center
Peter, Jochen; Lauf, Edmund
2002-01-01
Investigates how coder characteristics such as language skills, political knowledge, coding experience, and coding certainty affected inter-coder and coder-training reliability. Shows that language skills influenced both reliability types. Suggests that cross-national researchers should pay more attention to cross-national assessments of…
Michalaki, M; Oulis, C J; Pandis, N; Eliades, G
2016-12-01
The aim of this in vitro study was to classify questionable-for-caries occlusal surfaces (QCOS) of permanent teeth according to ICDAS codes 1, 2, and 3 and to compare them, in terms of enamel mineral composition, with areas of sound tissue of the same tooth. Sixty partially impacted human molars with QCOS, extracted for therapeutic reasons, were used in the study; they were photographed via a polarised light microscope and classified according to ICDAS II (into codes 1, 2, or 3). The crowns were embedded in clear self-cured acrylic resin, longitudinally sectioned at the levels of the characterised lesions, and studied by SEM/EDX to assess the enamel mineral composition of the QCOS. Univariate and multivariate random-effect regressions were used for Ca (wt%), P (wt%), and Ca/P (wt%). The EDX analysis indicated changes in the Ca and P contents that were more prominent in ICDAS-II code 3 lesions compared to code 1 and 2 lesions. In these lesions, Ca (wt%) and P (wt%) concentrations were significantly decreased (p = 0.01) in comparison with sound areas. Ca and P (wt%) contents were significantly lower (p = 0.02 and p = 0.01, respectively) for code 3 areas in comparison with code 1 and 2 areas. Significantly higher (p = 0.01) Ca (wt%) and P (wt%) contents were found on sound areas compared to the lesion areas. The enamel of occlusal surfaces of permanent teeth with ICDAS 1, 2, and 3 lesions was found to have different Ca/P compositions, necessitating further investigation into whether these altered surfaces might behave differently on etching preparation before fissure sealant placement, compared to sound surfaces.
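The Ca/P weight ratios obtained from EDX wt% measurements are often converted to molar ratios for comparison with stoichiometric hydroxyapatite, whose molar Ca/P is 10/6 ≈ 1.67. The conversion is one line of arithmetic; the wt% values below are illustrative enamel-like numbers, not the study's data:

```python
CA_MASS, P_MASS = 40.078, 30.974  # atomic masses, g/mol

def ca_p_molar_ratio(ca_wt_pct, p_wt_pct):
    """Molar Ca/P ratio from EDX weight-percent measurements."""
    return (ca_wt_pct / CA_MASS) / (p_wt_pct / P_MASS)

# Hypothetical enamel-like values:
print(round(ca_p_molar_ratio(39.9, 18.5), 2))  # → 1.67
```

A lesion area with depleted Ca and P would shift this ratio away from the stoichiometric value, which is the kind of difference the regressions above test for.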
Louder than words: power and conflict in interprofessional education articles, 1954–2013
Paradis, Elise; Whitehead, Cynthia R
2015-01-01
Context Interprofessional education (IPE) aspires to enable collaborative practice. Current IPE offerings, although rapidly proliferating, lack evidence of efficacy and theoretical grounding. Objectives Our research aimed to explore the historical emergence of the field of IPE and to analyse the positioning of this academic field of inquiry. In particular, we sought to investigate the extent to which power and conflict – elements central to interprofessional care – figure in the IPE literature. Methods We used a combination of deductive and inductive automated coding and manual coding to explore the contents of 2191 articles in the IPE literature published between 1954 and 2013. Inductive coding focused on the presence and use of the sociological (rather than statistical) version of power, which refers to hierarchies and asymmetries among the professions. Articles found to be centrally about power were then analysed using content analysis. Results Publications on IPE have grown exponentially in the past decade. Deductive coding of identified articles showed an emphasis on students, learning, programmes and practice. Automated inductive coding of titles and abstracts identified 129 articles potentially about power, but manual coding found that only six articles put power and conflict at the centre. Content analysis of these six articles revealed that two provided tentative explorations of power dynamics, one skirted around this issue, and three explicitly theorised and integrated power and conflict. Conclusions The lack of attention to power and conflict in the IPE literature suggests that many educators do not foreground these issues. Education programmes are expected to transform individuals into effective collaborators, without heed to structural, organisational and institutional factors. In so doing, current constructions of IPE veil the problems that IPE attempts to solve. PMID:25800300
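The automated inductive screen described above can be approximated with a keyword-dictionary pass over titles and abstracts; the regular expression and sample texts here are illustrative assumptions, not the authors' actual code list:

```python
import re

# Hypothetical "power/conflict" keyword dictionary for a first-pass screen.
POWER_TERMS = re.compile(
    r"\b(power|hierarch\w*|conflict\w*|asymmetr\w*|dominan\w*)\b",
    re.IGNORECASE,
)

def flag_power(texts):
    """True for each text mentioning any power/conflict term."""
    return [bool(POWER_TERMS.search(t)) for t in texts]

abstracts = [
    "Interprofessional hierarchies shape team communication.",
    "Students rated the simulation workshop highly.",
]
print(flag_power(abstracts))  # → [True, False]
```

As the article's results show, such a screen over-includes (129 candidates vs. 6 truly central articles), so manual coding of the flagged set remains necessary.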
Coding Local and Global Binary Visual Features Extracted From Video Sequences.
Baroffio, Luca; Canclini, Antonio; Cesana, Matteo; Redondi, Alessandro; Tagliasacchi, Marco; Tubaro, Stefano
2015-11-01
Binary local features represent an effective alternative to real-valued descriptors, leading to comparable results for many visual analysis tasks while being characterized by significantly lower computational complexity and memory requirements. When dealing with large collections, a more compact representation based on global features is often preferred, which can be obtained from local features by means of, e.g., the bag-of-visual word model. Several applications, including, for example, visual sensor networks and mobile augmented reality, require visual features to be transmitted over a bandwidth-limited network, thus calling for coding techniques that aim at reducing the required bit budget while attaining a target level of efficiency. In this paper, we investigate a coding scheme tailored to both local and global binary features, which aims at exploiting both spatial and temporal redundancy by means of intra- and inter-frame coding. In this respect, the proposed coding scheme can conveniently be adopted to support the analyze-then-compress (ATC) paradigm. That is, visual features are extracted from the acquired content, encoded at remote nodes, and finally transmitted to a central controller that performs the visual analysis. This is in contrast with the traditional approach, in which visual content is acquired at a node, compressed and then sent to a central unit for further processing, according to the compress-then-analyze (CTA) paradigm. In this paper, we experimentally compare the ATC and the CTA by means of rate-efficiency curves in the context of two different visual analysis tasks: 1) homography estimation and 2) content-based retrieval. Our results show that the novel ATC paradigm based on the proposed coding primitives can be competitive with the CTA, especially in bandwidth limited scenarios.
Scene-aware joint global and local homographic video coding
NASA Astrophysics Data System (ADS)
Peng, Xiulian; Xu, Jizheng; Sullivan, Gary J.
2016-09-01
Perspective motion commonly appears in video content captured and compressed for various applications, including cloud gaming and vehicle and aerial monitoring. Existing approaches based on an eight-parameter homography motion model cannot handle this efficiently, due either to low prediction accuracy or to excessive bit rate overhead. In this paper, we consider the camera motion model and scene structure in such video content and propose a joint global and local homography motion coding approach for video with perspective motion. The camera motion is estimated by a computer vision approach, and the camera intrinsic and extrinsic parameters are coded globally at the frame level. The scene is modeled as piecewise planes, and three plane parameters are coded at the block level. Fast gradient-based approaches are employed to search for the plane parameters for each block region. In this way, improved prediction accuracy and low bit costs are achieved. Experimental results based on the HEVC test model show that up to 9.1% bit rate savings can be achieved (at equal PSNR quality) on test video content with perspective motion. Test sequences for the example applications showed bit rate savings ranging from 3.7 to 9.1%.
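The idea of coding camera motion globally and plane parameters locally can be sketched with the standard plane-induced homography. The diagonal intrinsics model and all numbers below are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the plane-induced homography behind joint global/local coding:
# camera intrinsics (f, cx, cy) and motion (R, t) are coded once per frame,
# and each block contributes only its three plane parameters n/d, which
# together induce a full homography H = K (R - t n^T / d) K^{-1}.
# The simple intrinsics model here is an illustrative assumption.

def matmul(A, B):
    """3x3 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def plane_homography(f, cx, cy, R, t, n, d):
    """Homography induced by the plane n.x = d under camera motion (R, t)."""
    K = [[f, 0, cx], [0, f, cy], [0, 0, 1]]
    Kinv = [[1 / f, 0, -cx / f], [0, 1 / f, -cy / f], [0, 0, 1]]
    M = [[R[i][j] - t[i] * n[j] / d for j in range(3)] for i in range(3)]
    return matmul(matmul(K, M), Kinv)

def warp(H, x, y):
    """Map pixel (x, y) through homography H."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)
```

With identity rotation and zero translation the induced homography reduces to the identity, so every pixel maps to itself; a decoder would reconstruct each block's prediction by warping the reference frame through its block-specific H.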
Puckett, Larry; Hitt, Kerie; Alexander, Richard
1998-01-01
names that correspond to the FIPS codes. 2. Tabular component - Nine tab-delimited ASCII lookup tables of animal counts and nutrient estimates organized by 5-digit state/county FIPS (Federal Information Processing Standards) code. Another table lists the county names that correspond to the FIPS codes. The use of trade names is for identification purposes only and does not constitute endorsement by the U.S. Geological Survey.
Optimum Vessel Performance in Evolving Nonlinear Wave Fields
2012-11-01
TEMPEST, the new, nonlinear, time-domain ship motion code being developed by the Navy. The radiation and diffraction forces in the level 3.0 version of TEMPEST will be computed by the body-exact strip theory. Nonlinear responses of a ship to a seaway are being incorporated into version 3 of TEMPEST.
The feasibility of QR-code prescription in Taiwan.
Lin, C-H; Tsai, F-Y; Tsai, W-L; Wen, H-W; Hu, M-L
2012-12-01
An ideal health care service is a service system that focuses on patients. Patients in Taiwan are free to fill their prescriptions at any pharmacy contracted with National Health Insurance. Each of these pharmacies uses its own computer system; so far, there are at least ten different systems on the market in Taiwan. Transmitting prescription information from the hospital to the pharmacy accurately and efficiently therefore presents a considerable challenge. This study used a two-dimensional QR-code to capture the patient's identification and prescription information from the hospitals, together with a webcam to read the QR-code and transfer all data to the pharmacy computer system. Two hospitals and 85 community pharmacies participated in the study. During the trial, all participating pharmacies spoke highly of the accurate transmission of the prescription information. The contents of QR-code prescriptions issued in the Taipei area were picked up efficiently and accurately by pharmacies in the Taichung area (central Taiwan), regardless of software system or location. The QR-code device received a patent (No. M376844, March 2010) from the Intellectual Property Office, Ministry of Economic Affairs, R.O.C. Our trial has proven that QR-code prescriptions give community pharmacists an efficient, accurate and inexpensive way to digitalize prescription contents. Consequently, pharmacists can offer patients a better quality of pharmacy service. © 2012 Blackwell Publishing Ltd.
Kassam, Aliya; Sharma, Nishan; Harvie, Margot; O’Beirne, Maeve; Topps, Maureen
2016-01-01
Abstract Objective To conduct a thematic analysis of the College of Family Physicians of Canada’s (CFPC’s) Red Book accreditation standards and the Triple C Competency-based Curriculum objectives with respect to patient safety principles. Design Thematic content analysis of the CFPC’s Red Book accreditation standards and the Triple C curriculum. Setting Canada. Main outcome measures Coding frequency of the patient safety principles (ie, patient engagement; respectful, transparent relationships; complex systems; a just and trusting culture; responsibility and accountability for actions; and continuous learning and improvement) found in the analyzed CFPC documents. Results Within the analyzed CFPC documents, the most commonly found patient safety principle was patient engagement (n = 51 coding references); the least commonly found patient safety principles were a just and trusting culture (n = 5 coding references) and complex systems (n = 5 coding references). Other patient safety principles that were uncommon included responsibility and accountability for actions (n = 7 coding references) and continuous learning and improvement (n = 12 coding references). Conclusion Explicit inclusion of patient safety content such as the use of patient safety principles is needed for residency training programs across Canada to ensure the full spectrum of care is addressed, from community-based care to acute hospital-based care. This will ensure a patient safety culture can be cultivated from residency and sustained into primary care practice. PMID:27965349
NASA Astrophysics Data System (ADS)
Mylnikova, Anna; Yasyukevich, Yury; Yasyukevich, Anna
2017-04-01
We have developed a technique for estimating vertical total electron content (TEC) and differential code biases (DCBs) using data from a single GPS/GLONASS station. The algorithm is based on TEC expansion into a Taylor series in space and time (TayAbsTEC). We validated the technique against Global Ionospheric Maps (GIM) computed by the Center for Orbit Determination in Europe (CODE) and the Jet Propulsion Laboratory (JPL). We compared the differences between absolute vertical TEC (VTEC) from GIM and VTEC evaluated by TayAbsTEC for 2009 (solar activity minimum, sunspot number about 0) and for 2014 (solar activity maximum, sunspot number about 110). Since VTEC from CODE differs from VTEC from JPL, we compared TayAbsTEC VTEC with both. We found that TayAbsTEC VTEC is closer to CODE VTEC than to JPL VTEC. The difference between TayAbsTEC VTEC and GIM VTEC is more noticeable at solar activity maximum (2014) than at solar activity minimum (2009), for both CODE and JPL. The distribution of VTEC differences is close to Gaussian, so we conclude that the TayAbsTEC results are in agreement with GIM VTEC. We also compared DCBs evaluated by TayAbsTEC with DCBs from GIM computed by CODE. The TayAbsTEC DCBs are in good agreement with CODE DCBs for GPS satellites, but differ noticeably for GLONASS. We used the DCBs to correct slant TEC to find out which set gives better results: correction with CODE DCBs produces negative, nonphysical TEC values, whereas correction with TayAbsTEC DCBs does not produce such artifacts. The technique we developed calculates VTEC and DCBs from local GPS/GLONASS network data alone, and the evaluated VTEC data are delivered in the GIM framework, which is convenient for further data analysis.
Document image retrieval through word shape coding.
Lu, Shijian; Li, Linlin; Tan, Chew Lim
2008-11-01
This paper presents a document retrieval technique that is capable of searching document images without OCR (optical character recognition). The proposed technique retrieves document images by a new word shape coding scheme, which captures the document content through annotating each word image by a word shape code. In particular, we annotate word images by using a set of topological shape features including character ascenders/descenders, character holes, and character water reservoirs. With the annotated word shape codes, document images can be retrieved by either query keywords or a query document image. Experimental results show that the proposed document image retrieval technique is fast, efficient, and tolerant to various types of document degradation.
Dual coding theory, word abstractness, and emotion: a critical review of Kousta et al. (2011).
Paivio, Allan
2013-02-01
Kousta, Vigliocco, Del Campo, Vinson, and Andrews (2011) questioned the adequacy of dual coding theory and the context availability model as explanations of representational and processing differences between concrete and abstract words. They proposed an alternative approach that focuses on the role of emotional content in the processing of abstract concepts. Their dual coding critique is, however, based on impoverished and, in some respects, incorrect interpretations of the theory and its implications. This response corrects those gaps and misinterpretations and summarizes research findings that show predicted variations in the effects of dual coding variables in different tasks and contexts. Especially emphasized is an empirically supported dual coding theory of emotion that goes beyond the Kousta et al. emphasis on emotion in abstract semantics. 2013 APA, all rights reserved
VizieR Online Data Catalog: Radiative forces for stellar envelopes (Seaton, 1997)
NASA Astrophysics Data System (ADS)
Seaton, M. J.; Yan, Y.; Mihalas, D.; Pradhan, A. K.
2000-02-01
(1) Primary data files, stages.zz These files give data for the calculation of radiative accelerations, GRAD, for elements with nuclear charge zz. Data are available for zz=06, 07, 08, 10, 11, 12, 13, 14, 16, 18, 20, 24, 25, 26 and 28. Calculations are made using data from the Opacity Project (see papers SYMP and IXZ). The data are given for each ionisation stage, j. They are tabulated on a mesh of (T, Ne, CHI) where T is temperature, Ne electron density and CHI is abundance multiplier. The files include data for ionisation fractions, for each (T, Ne). The file contents are described in the paper ACC and as comments in the code add.f. (2) Code add.f This reads a file stages.zz and creates a file acc.zz giving radiative accelerations averaged over ionisation stages. The code prompts for names of input and output files. The code, as provided, gives equal weights (as defined in the paper ACC) to all stages. The weights are set in SUBROUTINE WEIGHTS, which could be changed to give any weights preferred by the user. The dependence of diffusion coefficients on ionisation stage is given by a function ZET, which is defined in SUBROUTINE ZETA. The expressions used for ZET are as given in the paper. The user can change that subroutine if other expressions are preferred. The output file contains values, ZETBAR, of ZET, averaged over ionisation stages. (3) Files acc.zz Radiative accelerations computed using add.f as provided. The user will need to run the code add.f only if it is required to change the subroutines WEIGHTS or ZETA. The contents of the files acc.zz are described in the paper ACC and in comments contained in the code add.f. (4) Code accfit.f This code gives radiative accelerations, and some related data, for a stellar model. Methods used to interpolate data to the values of (T, RHO) for the stellar model are based on those used in the code opfit.for (see the paper OPF). The executable file accfit.com runs accfit.f.
It uses a list of files given in accfit.files (see that file for further description). The mesh used for the abundance-multiplier CHI on the output file will generally be finer than that used in the input files acc.zz. The mesh to be used is specified on a file chi.dat. For a test run, the stellar model used is given in the file 10000_4.2 (Teff=10000 K, LOG10(g)=4.2). The output file from that test run is acc100004.2. The contents of the output file are described in the paper ACC and as comments in the code accfit.f. (5) The code diff.f This code reads the output file (e.g. acc100004.2) created by accfit.f. For any specified depth point in the model and value of CHI, it gives values of radiative accelerations, the quantity ZETBAR required for calculation of diffusion coefficients, and Rosseland-mean opacities. The code prompts for input data. It creates a file recording all data calculated. The code diff.f is intended for incorporation, as a set of subroutines, in codes for diffusion calculations. (1 data file).
On the Information Content of Program Traces
NASA Technical Reports Server (NTRS)
Frumkin, Michael; Hood, Robert; Yan, Jerry; Saini, Subhash (Technical Monitor)
1998-01-01
Program traces are used for analysis of program performance, memory utilization, and communications, as well as for program debugging. The trace contains records of execution events generated by monitoring units inserted into the program. The trace size limits the resolution of execution events and restricts the user's ability to analyze the program execution. We present a study of the information content of program traces and develop a coding scheme which reduces the trace size to the limit given by the trace entropy. We apply the coding to the traces of AIMS-instrumented programs executed on the IBM SP2 and the SGI Power Challenge and compare it with other coding methods. Our technique shows that the size of the trace can be reduced by more than a factor of 5.
Sanctions Connected to Dress Code Violations in Secondary School Handbooks
ERIC Educational Resources Information Center
Workman, Jane E.; Freeburg, Elizabeth W.; Lentz-Hees, Elizabeth S.
2004-01-01
This study identifies and evaluates sanctions for dress code violations in secondary school handbooks. Sanctions, or consequences for breaking rules, vary along seven interrelated dimensions: source, formality, retribution, obtrusiveness, magnitude, severity, and pervasiveness. A content analysis of handbooks from 155 public secondary schools…
National Geocoding Converter File 1 : Volume 1. Structure & Content.
DOT National Transportation Integrated Search
1974-01-01
This file contains a record for each county, county equivalent (as defined by the Census Bureau), SMSA county segment and SPLC county segment in the U.S. A record identifies for an area all major county codes and the associated county aggregate codes
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION INFANT FORMULA QUALITY CONTROL PROCEDURES Quality Control Procedures for Assuring Nutrient Content...
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION INFANT FORMULA QUALITY CONTROL PROCEDURES Quality Control Procedures for Assuring Nutrient Content...
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD AND DRUG ADMINISTRATION, DEPARTMENT OF HEALTH AND HUMAN SERVICES (CONTINUED) FOOD FOR HUMAN CONSUMPTION INFANT FORMULA QUALITY CONTROL PROCEDURES Quality Control Procedures for Assuring Nutrient Content...
Vinje, Kristine Hansen; Phan, Linh Thi Hong; Nguyen, Tuan Thanh; Henjum, Sigrun; Ribe, Lovise Omoijuanfo; Mathisen, Roger
2017-06-01
To review regulations and to perform a media audit of promotion of products under the scope of the International Code of Marketing of Breast-milk Substitutes ('the Code') in South-East Asia. We reviewed national regulations relating to the Code and 800 clips of editorial content, 387 advertisements and 217 Facebook posts from January 2015 to January 2016. We explored the ecological association between regulations and market size, and between the number of advertisements and market size and growth of milk formula. Cambodia, Indonesia, Myanmar, Thailand and Vietnam. Regulations on the child's age for inappropriate marketing of products are all below the Code's updated recommendation of 36 months (i.e. 12 months in Thailand and Indonesia; 24 months in the other three countries) and are voluntary in Thailand. Although the advertisements complied with the national regulations on the age limit, they had content (e.g. stages of milk formula; messages about the benefit; pictures of a child) that confused audiences. Market size and growth of milk formula were positively associated with the number of newborns and the number of advertisements, and were not affected by the current level of implementation of breast-milk substitute laws and regulations. The present media audit reveals inappropriate promotion and insufficient national regulation of products under the scope of the Code in South-East Asia. Strengthened implementation of regulations aligned with the Code's updated recommendation should be part of comprehensive strategies to minimize the harmful effects of advertisements of breast-milk substitutes on maternal and child nutrition and health.
Program ratings do not predict negative content in commercials on children's channels.
Dale, Lourdes P; Klein, Jordana; DiLoreto, James; Pidano, Anne E; Borto, Jolanta W; McDonald, Kathleen; Olson, Heather; Neace, William P
2011-01-01
The aim of this study was to determine the presence of negative content in commercials airing on 3 children's channels (Disney Channel, Nickelodeon, and Cartoon Network). The 1681 commercials were coded with a reliable coding system and content comparisons were made. Although the majority of the commercials were coded as neutral, negative content was present in 13.5% of commercials. This rate was significantly more than the predicted value of zero and more similar to the rates cited in previous research examining content during sporting events. The rate of negative content was less than, but not significantly different from, the rate of positive content. Thus, our findings did not support our hypothesis that there would be more commercials with positive content than with negative content. Logistic regression analysis indicated that channel, and not rating, was a better predictor of the presence of overall negative content and the presence of violent behaviors. Commercials airing on the Cartoon Network had significantly more negative content, and those airing on Disney Channel had significantly less negative content than the other channels. Within the individual channels, program ratings did not relate to the presence of negative content. Parents cannot assume the content of commercials will be consistent with the program rating or label. Pediatricians and psychologists should educate parents about the potential for negative content in commercials and advocate for a commercials rating system to ensure that there is greater parity between children's programs and the corresponding commercials.
What does music express? Basic emotions and beyond
Juslin, Patrik N.
2013-01-01
Numerous studies have investigated whether music can reliably convey emotions to listeners, and—if so—what musical parameters might carry this information. Far less attention has been devoted to the actual contents of the communicative process. The goal of this article is thus to consider what types of emotional content are possible to convey in music. I will argue that the content is mainly constrained by the type of coding involved, and that distinct types of content are related to different types of coding. Based on these premises, I suggest a conceptualization in terms of “multiple layers” of musical expression of emotions. The “core” layer is constituted by iconically-coded basic emotions. I attempt to clarify the meaning of this concept, dispel the myths that surround it, and provide examples of how it can be heuristic in explaining findings in this domain. However, I also propose that this “core” layer may be extended, qualified, and even modified by additional layers of expression that involve intrinsic and associative coding. These layers enable listeners to perceive more complex emotions—though the expressions are less cross-culturally invariant and more dependent on the social context and/or the individual listener. This multiple-layer conceptualization of expression in music can help to explain both similarities and differences between vocal and musical expression of emotions. PMID:24046758
Assessing Teachers' Science Content Knowledge: A Strategy for Assessing Depth of Understanding
NASA Astrophysics Data System (ADS)
McConnell, Tom J.; Parker, Joyce M.; Eberhardt, Jan
2013-06-01
One of the characteristics of effective science teachers is a deep understanding of science concepts. The ability to identify, explain and apply concepts is critical in designing, delivering and assessing instruction. Because some teachers have not completed extensive courses in some areas of science, especially in middle and elementary grades, many professional development programs attempt to strengthen teachers' content knowledge. Assessing this content knowledge is challenging. Concept inventories are reliable and efficient, but do not reveal depth of knowledge. Interviews and observations are time-consuming. The Problem Based Learning Project for Teachers implemented a strategy that includes pre-post instruments in eight content strands that permits blind coding of responses and comparison across teachers and groups of teachers. The instruments include two types of open-ended questions that assess both general knowledge and the ability to apply Big Ideas related to specific science topics. The coding scheme is useful in revealing patterns in prior knowledge and learning, and identifying ideas that are challenging or not addressed by learning activities. The strengths and limitations of the scoring scheme are identified through comparison of the findings to case studies of four participating teachers from middle and elementary schools. The cases include examples of coded pre- and post-test responses to illustrate some of the themes seen in teacher learning. The findings raise questions for future investigation that can be conducted using analyses of the coded responses.
Khrustalev, Vladislav Victorovich
2009-01-01
We showed that GC-content of nucleotide sequences coding for linear B-cell epitopes of herpes simplex virus type 1 (HSV1) glycoprotein B (gB) is higher than GC-content of sequences coding for epitope-free regions of this glycoprotein (G + C = 73 and 64%, respectively). Linear B-cell epitopes have been predicted in HSV1 gB by BepiPred algorithm ( www.cbs.dtu.dk/services/BepiPred ). Proline is an acrophilic amino acid residue (it is usually situated on the surface of protein globules, and so included in linear B-cell epitopes). Indeed, the level of proline is much higher in predicted epitopes of gB than in epitope-free regions (17.8% versus 1.8%). This amino acid is coded by GC-rich codons (CCX) that can be produced due to nucleotide substitutions caused by mutational GC-pressure. GC-pressure will also lead to disappearance of acrophobic phenylalanine, isoleucine, methionine and tyrosine coded by GC-poor codons. Results of our "in-silico directed mutagenesis" showed that single nonsynonymous substitutions in AT to GC direction in two long epitope-free regions of gB will cause formation of new linear epitopes or elongation of previously existing epitopes flanking these regions in 25% of 539 possible cases. The calculations of GC-content and amino acid content have been performed by CodonChanges algorithm ( www.barkovsky.hotmail.ru ).
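The two quantities the study compares, GC-content of a nucleotide sequence and the proline fraction of its codons, are straightforward to compute. A minimal sketch (generic sequence utilities, not the authors' CodonChanges algorithm):

```python
# GC-content and proline codon fraction for an in-frame coding sequence.

def gc_content(seq: str) -> float:
    """Percentage of G and C bases in a nucleotide sequence."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(b) for b in "GC") / len(seq)

# Proline is encoded by the GC-rich CCX codon family, which is why
# mutational GC-pressure tends to raise proline content, as noted above.
PROLINE_CODONS = {"CCA", "CCC", "CCG", "CCT"}

def proline_fraction(coding_seq: str) -> float:
    """Fraction of codons encoding proline in an in-frame coding sequence."""
    codons = [coding_seq[i:i + 3].upper()
              for i in range(0, len(coding_seq) - 2, 3)]
    return sum(c in PROLINE_CODONS for c in codons) / len(codons)
```

Applied to epitope versus epitope-free regions, these two functions are enough to reproduce the kind of comparison reported above (73% vs. 64% GC; 17.8% vs. 1.8% proline).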
How to Link to Official Documents from the Government Publishing Office (GPO)
The most consistent way to present up-to-date content from the Federal Register, US Code, Code of Federal Regulations (CFR), and so on is to link to the official version of the document on the GPO's Federal Digital System (FDSys) website.
Side information in coded aperture compressive spectral imaging
NASA Astrophysics Data System (ADS)
Galvis, Laura; Arguello, Henry; Lau, Daniel; Arce, Gonzalo R.
2017-02-01
Coded aperture compressive spectral imagers sense a three-dimensional cube by using two-dimensional projections of the coded and spectrally dispersed source. These imaging systems often rely on focal plane array (FPA) detectors, spatial light modulators (SLMs), digital micromirror devices (DMDs), and dispersive elements. Using DMDs to implement the coded apertures facilitates the capture of multiple projections, each admitting a different coded aperture pattern. The DMD makes it possible not only to collect a sufficient number of measurements for spectrally rich or spatially detailed scenes, but also to design the spatial structure of the coded apertures to maximize the information content of the compressive measurements. Although sparsity is the only signal characteristic usually assumed for reconstruction in compressive sensing, other forms of prior information, such as side information, have been included as a way to improve the quality of the reconstructions. This paper presents the coded aperture design in a compressive spectral imager with side information in the form of RGB images of the scene. Using RGB images as side information in the compressive sensing architecture has two main advantages: the RGB image not only improves reconstruction quality but also serves to optimally design the coded apertures for the sensing process. The coded aperture design is based on the RGB scene, so the coded aperture structure exploits key features such as scene edges. Real reconstructions of noisy compressed measurements demonstrate the benefit of the designed coded apertures, in addition to the improvement in reconstruction quality obtained by using side information.
Comparison of computer codes for calculating dynamic loads in wind turbines
NASA Technical Reports Server (NTRS)
Spera, D. A.
1977-01-01
Seven computer codes for analyzing performance and loads in large, horizontal axis wind turbines were used to calculate blade bending moment loads for two operational conditions of the 100 kW Mod-0 wind turbine. Results were compared with test data on the basis of cyclic loads, peak loads, and harmonic contents. Four of the seven codes include rotor-tower interaction and three were limited to rotor analysis. With a few exceptions, all calculated loads were within 25 percent of nominal test data.
Team interaction during surgery: a systematic review of communication coding schemes.
Tiferes, Judith; Bisantz, Ann M; Guru, Khurshid A
2015-05-15
Communication problems have been systematically linked to human errors in surgery and a deep understanding of the underlying processes is essential. Although a number of tools exist to assess nontechnical skills, methods to study communication and other team-related processes are far from being standardized, making comparisons challenging. We conducted a systematic review to analyze methods used to study events in the operating room (OR) and to develop a synthesized coding scheme for OR team communication. Six electronic databases were accessed to search for articles that collected individual events during surgery and included detailed coding schemes. Additional articles were added based on cross-referencing. That collection was then classified based on type of events collected, environment type (real or simulated), number of procedures, type of surgical task, team characteristics, method of data collection, and coding scheme characteristics. All dimensions within each coding scheme were grouped based on emergent content similarity. Categories drawn from articles, which focused on communication events, were further analyzed and synthesized into one common coding scheme. A total of 34 of 949 articles met the inclusion criteria. The methodological characteristics and coding dimensions of the articles were summarized. A priori coding was used in nine studies. The synthesized coding scheme for OR communication included six dimensions as follows: information flow, period, statement type, topic, communication breakdown, and effects of communication breakdown. The coding scheme provides a standardized coding method for OR communication, which can be used to develop a priori codes for future studies especially in comparative effectiveness research. Copyright © 2015 Elsevier Inc. All rights reserved.
GPS receiver CODE bias estimation: A comparison of two methods
NASA Astrophysics Data System (ADS)
McCaffrey, Anthony M.; Jayachandran, P. T.; Themens, D. R.; Langley, R. B.
2017-04-01
The Global Positioning System (GPS) is a valuable tool in the measurement and monitoring of ionospheric total electron content (TEC). To obtain accurate GPS-derived TEC, satellite and receiver hardware biases, known as differential code biases (DCBs), must be estimated and removed. The Center for Orbit Determination in Europe (CODE) provides monthly averages of receiver DCBs for a significant number of stations in the International Global Navigation Satellite Systems Service (IGS) network. A comparison of the monthly receiver DCBs provided by CODE with DCBs estimated using the minimization of standard deviations (MSD) method on both daily and monthly time intervals, is presented. Calibrated TEC obtained using CODE-derived DCBs, is accurate to within 0.74 TEC units (TECU) in differenced slant TEC (sTEC), while calibrated sTEC using MSD-derived DCBs results in an accuracy of 1.48 TECU.
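The MSD idea can be sketched compactly: treat the receiver DCB as the candidate bias which, once removed from the measured slant TEC and mapped to vertical, minimizes the spread of VTEC across simultaneously observed satellites. The simple sine mapping function and the search grid below are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of minimization-of-standard-deviations (MSD) receiver DCB estimation:
# pick the bias that makes vertical TEC most consistent across satellites.
import math
from statistics import pstdev

def vertical(stec, elev_deg, bias):
    """Map bias-corrected slant TEC to vertical with a simple sine mapping
    (an illustrative assumption; real work uses an ionospheric shell model)."""
    return (stec - bias) * math.sin(math.radians(elev_deg))

def estimate_receiver_dcb(obs, candidates):
    """obs: list of (slant_tec, elevation_deg) pairs from one epoch.
    Returns the candidate bias minimizing the spread of VTEC over satellites."""
    def spread(bias):
        return pstdev(vertical(s, e, bias) for s, e in obs)
    return min(candidates, key=spread)
```

In a synthetic test where every satellite sees the same true VTEC and the receiver carries a known bias, the grid search recovers that bias exactly, since the correct bias collapses the inter-satellite VTEC spread to zero.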
Spoken Narrative Assessment: A Supplementary Measure of Children's Creativity
ERIC Educational Resources Information Center
Wong, Miranda Kit-Yi; So, Wing Chee
2016-01-01
This study developed a spoken narrative (i.e., storytelling) assessment as a supplementary measure of children's creativity. Both spoken and gestural contents of children's spoken narratives were coded to assess their verbal and nonverbal creativity. The psychometric properties of the coding system for the spoken narrative assessment were…
2001-01-01
from airports and hotels, Internet cafes, libraries, and even cellular phones. This unmatched versatility has made e-mail the preferred method of… …contention, however, was not that the Franchise Tax Board applied the wrong section of the code; it was that the code "unfairly taxed the wife's
Ethical Challenges in the Teaching of Multicultural Course Work
ERIC Educational Resources Information Center
Fier, Elizabeth Boyer; Ramsey, MaryLou
2005-01-01
The authors explore the ethical issues and challenges frequently encountered by counselor educators of multicultural course work. Existing ethics codes are examined, and the need for greater specificity with regard to teaching courses of multicultural content is addressed. Options for revising existing codes to better address the challenges of…
Detecting Runtime Anomalies in AJAX Applications through Trace Analysis
2011-08-10
statements by adding the instrumentation to the GWT UI classes, leaving the user code untouched. Some content management frameworks such as Drupal [12]… "Google web toolkit." http://code.google.com/webtoolkit/. [12] "Form generation – drupal api." http://api.drupal.org/api/group/form_api/6.
Standards for Evaluation of Instructional Materials with Respect to Social Content. 1986 Edition.
ERIC Educational Resources Information Center
California State Dept. of Education, Sacramento. Curriculum Framework and Textbook Development Unit.
The California Legislature recognized the significant place of instructional materials in the formation of a child's attitudes and beliefs when it adopted "Education Code" sections 60040 through 60044. The "Education Code" sections referred to in this document are intended to help dispel negative stereotypes by emphasizing…
Turbomachinery Heat Transfer and Loss Modeling for 3D Navier-Stokes Codes
NASA Technical Reports Server (NTRS)
DeWitt, Kenneth; Ameri, Ali
2005-01-01
This report focuses on the use of NASA Glenn's on-site computational facilities to develop, validate, and apply models for use in advanced 3D Navier-Stokes Computational Fluid Dynamics (CFD) codes, enhancing the capability to compute heat transfer and losses in turbomachinery.
A Coding Scheme to Analyse the Online Asynchronous Discussion Forums of University Students
ERIC Educational Resources Information Center
Biasutti, Michele
2017-01-01
The current study describes the development of a content analysis coding scheme to examine transcripts of online asynchronous discussion groups in higher education. The theoretical framework comprises the theories regarding knowledge construction in computer-supported collaborative learning (CSCL) based on a sociocultural perspective. The coding…
CACTI: free, open-source software for the sequential coding of behavioral interactions.
Glynn, Lisa H; Hallgren, Kevin A; Houck, Jon M; Moyers, Theresa B
2012-01-01
The sequential analysis of client and clinician speech in psychotherapy sessions can help to identify and characterize potential mechanisms of treatment and behavior change. Previous studies required coding systems that were time-consuming, expensive, and error-prone. Existing software can be expensive and inflexible, and furthermore, no single package allows for pre-parsing, sequential coding, and assignment of global ratings. We developed a free, open-source, and adaptable program to meet these needs: The CASAA Application for Coding Treatment Interactions (CACTI). Without transcripts, CACTI facilitates the real-time sequential coding of behavioral interactions using WAV-format audio files. Most elements of the interface are user-modifiable through a simple XML file, and can be further adapted using Java through the terms of the GNU Public License. Coding with this software yields interrater reliabilities comparable to previous methods, but at greatly reduced time and expense. CACTI is a flexible research tool that can simplify psychotherapy process research, and has the potential to contribute to the improvement of treatment content and delivery.
NASA Astrophysics Data System (ADS)
Qin, Yi; Wang, Hongjuan; Wang, Zhipeng; Gong, Qiong; Wang, Danchen
2016-09-01
In optical interference-based encryption (IBE) schemes, the currently available methods have to employ iterative algorithms in order to encrypt two images and retrieve cross-talk-free decrypted images. In this paper, we show that this goal can be achieved via an analytical process if one of the two images is a QR code. For decryption, the QR code is decrypted in the conventional architecture and the decrypted result has a noisy appearance. Nevertheless, the robustness of the QR code against noise enables accurate acquisition of its content from the noisy retrieval, as a result of which the primary QR code can be exactly regenerated. Thereafter, a novel optical architecture is proposed to recover the grayscale image with the aid of the QR code. In addition, the proposal totally eliminates the silhouette problem existing in previous IBE schemes, and its effectiveness and feasibility have been demonstrated by numerical simulations.
Weisberg, Jill; McCullough, Stephen; Emmorey, Karen
2018-01-01
Code-blends (simultaneous words and signs) are a unique characteristic of bimodal bilingual communication. Using fMRI, we investigated code-blend comprehension in hearing native ASL-English bilinguals who made a semantic decision (edible?) about signs, audiovisual words, and semantically equivalent code-blends. English and ASL recruited a similar fronto-temporal network with expected modality differences: stronger activation for English in auditory regions of bilateral superior temporal cortex, and stronger activation for ASL in bilateral occipitotemporal visual regions and left parietal cortex. Code-blend comprehension elicited activity in a combination of these regions, and no cognitive control regions were additionally recruited. Furthermore, code-blends elicited reduced activation relative to ASL presented alone in bilateral prefrontal and visual extrastriate cortices, and relative to English alone in auditory association cortex. Consistent with behavioral facilitation observed during semantic decisions, the findings suggest that redundant semantic content induces more efficient neural processing in language and sensory regions during bimodal language integration. PMID:26177161
[Preliminary application of content analysis to qualitative nursing data].
Liang, Shu-Yuan; Chuang, Yeu-Hui; Wu, Shu-Fang
2012-10-01
Content analysis is a methodology for objectively and systematically studying the content of communication in various formats. Content analysis in nursing research and nursing education is called qualitative content analysis. Qualitative content analysis is frequently applied to nursing research, as it allows researchers to determine categories inductively and deductively. This article examines qualitative content analysis in nursing research from theoretical and practical perspectives. We first describe how content analysis concepts such as unit of analysis, meaning unit, code, category, and theme are used. Next, we describe the basic steps involved in using content analysis, including data preparation, data familiarization, analysis unit identification, creating tentative coding categories, category refinement, and establishing category integrity. Finally, this paper introduces the concept of content analysis rigor, including dependability, confirmability, credibility, and transferability. This article elucidates the content analysis method in order to help professionals conduct systematic research that generates data that are informative and useful in practical application.
Genomics dataset of unidentified disclosed isolates.
Rekadwad, Bhagwan N
2016-09-01
Analysis of DNA sequences is necessary for higher hierarchical classification of organisms. It gives clues about the characteristics of organisms and their taxonomic position. This dataset was chosen to find complexities in the unidentified DNA in the disclosed patents. A total of 17 unidentified DNA sequences were thoroughly analyzed. Quick response (QR) codes were generated, and the AT/GC content of the DNA sequences was analyzed. The QR codes are helpful for quick identification of isolates, and the AT/GC content is helpful for studying their stability at different temperatures. Additionally, a dataset on cleavage codes and enzyme codes from the restriction digestion study is reported, which is helpful for performing studies using short DNA sequences. The dataset disclosed here provides new data for the exploration of unique DNA sequences for evaluation, identification, comparison, and analysis.
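The AT/GC computation described in this record can be sketched in a few lines (a minimal sketch assuming simple base counting over an unambiguous sequence; real analyses would also handle ambiguous bases such as N):

```python
def gc_content(seq: str) -> float:
    """Fraction of G and C bases in a DNA sequence (simple base counting)."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def at_content(seq: str) -> float:
    """Fraction of A and T bases; GC-rich sequences are generally more thermostable."""
    seq = seq.upper()
    return (seq.count("A") + seq.count("T")) / len(seq)

# Example with a short test sequence: 4 of 6 bases are G or C.
print(round(gc_content("ATGCGC"), 2))  # 0.67
```

Because G-C pairs form three hydrogen bonds versus two for A-T, the GC fraction is the quantity linked to stability at different temperatures.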
Ortwein, Heiderose; Benz, Alexander; Carl, Petra; Huwendiek, Sören; Pander, Tanja; Kiessling, Claudia
2017-02-01
To investigate whether the Verona Coding Definitions of Emotional Sequences to code health providers' responses (VR-CoDES-P) can be used for assessment of medical students' responses to patients' cues and concerns provided in written case vignettes. Student responses in direct speech to patient cues and concerns were analysed in 21 different case scenarios using VR-CoDES-P. A total of 977 student responses were available for coding, and 857 responses were codable with the VR-CoDES-P. In 74.6% of responses, the students used either a "reducing space" statement only or a "providing space" statement immediately followed by a "reducing space" statement. Overall, the most frequent response was explicit information advice (ERIa), followed by content exploring (EPCEx) and content acknowledgement (EPCAc). The VR-CoDES-P were applicable to written responses of medical students when they were phrased in direct speech. The application of the VR-CoDES-P is reliable and feasible when using the differentiation between "providing space" and "reducing space" responses. Communication strategies described by students in non-direct speech were difficult to code and produced many missing values. The VR-CoDES-P are useful for analysing medical students' written responses when focusing on emotional issues. Students need precise instructions for their responses in the given test format. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
HEVC for high dynamic range services
NASA Astrophysics Data System (ADS)
Kim, Seung-Hwan; Zhao, Jie; Misra, Kiran; Segall, Andrew
2015-09-01
Displays capable of showing a greater range of luminance values can render content containing high dynamic range information in a way that gives viewers a more immersive experience. This paper introduces the design aspects of a high dynamic range (HDR) system and examines the performance of the HDR processing chain in terms of compression efficiency. Specifically, it examines the relation between the recently introduced Society of Motion Picture and Television Engineers (SMPTE) ST 2084 transfer function and the High Efficiency Video Coding (HEVC) standard. SMPTE ST 2084 is designed to cover the full range of an HDR signal from 0 to 10,000 nits; however, in many situations the valid signal range of actual video may be smaller than the range supported by SMPTE ST 2084. This restricted signal range results in a restricted range of code values for input video data and adversely impacts compression efficiency. In this paper, we propose a code value remapping method that extends the restricted-range code values into full-range code values so that existing standards such as HEVC may better compress the video content. The paper also identifies related non-normative encoder-only changes that are required for the remapping method, for a fair comparison with the anchor. Results are presented comparing the efficiency of the current approach versus the proposed remapping method for HM-16.2.
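The remapping idea can be illustrated with a minimal linear stretch of restricted-range code values to the full code-value range (an illustrative sketch only; the mapping evaluated in the paper may differ):

```python
def remap_code_value(v: int, valid_min: int, valid_max: int, bit_depth: int = 10) -> int:
    """Linearly stretch a code value from [valid_min, valid_max] to [0, 2**bit_depth - 1]."""
    full_max = (1 << bit_depth) - 1          # 1023 for 10-bit video
    v = min(max(v, valid_min), valid_max)    # clamp to the valid signal range
    return round((v - valid_min) * full_max / (valid_max - valid_min))
```

An encoder would apply such a mapping before compression and invert it after decoding, so that no code values are wasted on the unused portion of the signal range.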
Technical Support Document for Version 3.4.0 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2007-09-14
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards.
2012-01-01
We have entered a new era in agricultural and biomedical science made possible by remarkable advances in DNA sequencing technologies. The complete sequence of an individual’s set of chromosomes (collectively, its genome) provides a primary genetic code for what makes that individual unique, just as the contents of every personal computer reflect the unique attributes of its owner. But a second code, composed of “epigenetic” layers of information, affects the accessibility of the stored information and the execution of specific tasks. Nature’s second code is enigmatic and must be deciphered if we are to fully understand and optimize the genetic potential of crop plants. The goal of the Epigenomics of Plants International Consortium is to crack this second code, and ultimately master its control, to help catalyze a new green revolution. PMID:22751210
A Struggle for Dominance: Relational Communication Messages in Television Programming.
ERIC Educational Resources Information Center
Barbatsis, Gretchen S.; And Others
Television's messages about sex role behavior were analyzed by collecting and coding spot samples of the ten top ranked programs in prime viewing time and proportionate numbers of daytime soap operas and Saturday morning children's programs. The content analysis was based on a relational coding system developed to assess interpersonal…
ERIC Educational Resources Information Center
Bantum, Erin O'Carroll; Owen, Jason E.
2009-01-01
Psychological interventions provide linguistic data that are particularly useful for testing mechanisms of action and improving intervention methodologies. For this study, emotional expression in an Internet-based intervention for women with breast cancer (n = 63) was analyzed via rater coding and 2 computerized coding methods (Linguistic Inquiry…
Teachers' Code-Switching in Bilingual Classrooms: Exploring Pedagogical and Sociocultural Functions
ERIC Educational Resources Information Center
Cahyani, Hilda; de Courcy, Michele; Barnett, Jenny
2018-01-01
The pedagogical and sociocultural functions of teachers' code-switching are an important factor in achieving the dual goals of content learning and language learning in bilingual programmes. This paper reports on an ethnographic case study investigating how and why teachers switched between languages in tertiary bilingual classrooms in Indonesia,…
ERIC Educational Resources Information Center
Piehler, Timothy F.; Dishion, Thomas J.
2014-01-01
In a sample of 711 ethnically diverse adolescents, the observed interpersonal dynamics of dyadic adolescent friendship interactions were coded to predict early adulthood tobacco, alcohol, and marijuana use. Deviant discussion content within the interactions was coded along with dyadic coregulation (i.e., interpersonal coordination, attention…
Server-Side Includes Made Simple.
ERIC Educational Resources Information Center
Fagan, Jody Condit
2002-01-01
Describes server-side include (SSI) codes which allow Webmasters to insert content into Web pages without programming knowledge. Explains how to enable the codes on a Web server, provides a step-by-step process for implementing them, discusses tags and syntax errors, and includes examples of their use on the Web site for Southern Illinois…
FOG: Fighting the Achilles' Heel of Gossip Protocols with Fountain Codes
NASA Astrophysics Data System (ADS)
Champel, Mary-Luc; Kermarrec, Anne-Marie; Le Scouarnec, Nicolas
Gossip protocols are well known to provide reliable and robust dissemination in highly dynamic systems. Yet they suffer from high redundancy in the last phase of the dissemination. In this paper, we combine fountain codes (rateless erasure-correcting codes) with gossip protocols for robust and fast content dissemination in large-scale dynamic systems. The use of fountain codes eliminates the unnecessary redundancy of gossip protocols. We propose the design of FOG, which fully exploits the first exponential growth phase (where the data is disseminated exponentially fast) of gossip protocols while avoiding the need for the shrinking phase by using fountain codes. FOG voluntarily increases the number of disseminations but limits those disseminations to the exponential growth phase. In addition, FOG creates a split-graph overlay that splits the peers between encoders and forwarders. Forwarder peers become encoders as soon as they have received the whole content. To benefit further and more quickly from encoders, FOG biases the dissemination towards the most advanced peers to make them complete earlier.
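The rateless-coding idea behind fountain codes can be sketched as follows (a toy illustration using uniform random subsets and a peeling decoder; real LT/fountain codes use a tuned degree distribution such as the robust soliton, which this sketch omits):

```python
import random

def fountain_encode(blocks, n_symbols, rng=random):
    """Each coded symbol is the XOR of a random nonempty subset of source blocks."""
    symbols = []
    for _ in range(n_symbols):
        degree = rng.randint(1, len(blocks))
        idxs = rng.sample(range(len(blocks)), degree)
        data = 0
        for i in idxs:
            data ^= blocks[i]
        symbols.append((set(idxs), data))
    return symbols

def fountain_decode(symbols, n_blocks):
    """Peeling decoder: resolve degree-1 symbols, substitute them back, repeat."""
    known = {}
    pending = [[set(idxs), data] for idxs, data in symbols]
    progress = True
    while progress and len(known) < n_blocks:
        progress = False
        for sym in pending:
            idxs = sym[0]
            for i in [i for i in idxs if i in known]:
                idxs.discard(i)      # substitute already-recovered blocks
                sym[1] ^= known[i]
            if len(idxs) == 1:       # degree-1 symbol: one block recovered
                i = idxs.pop()
                if i not in known:
                    known[i] = sym[1]
                    progress = True
    return known

# Deterministic example: three blocks, three coded symbols.
blocks = [5, 9, 12]
symbols = [({0}, 5), ({0, 1}, 5 ^ 9), ({1, 2}, 9 ^ 12)]
print(fountain_decode(symbols, 3))  # {0: 5, 1: 9, 2: 12}
```

Because a receiver can decode from any sufficiently large set of coded symbols, it no longer matters which particular gossip messages arrive, which is what removes the redundant shrinking phase.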
Size principle and information theory.
Senn, W; Wyler, K; Clamann, H P; Kleinle, J; Lüscher, H R; Müller, L
1997-01-01
The motor units of a skeletal muscle may be recruited according to different strategies. From all possible recruitment strategies nature selected the simplest one: in most actions of vertebrate skeletal muscles the recruitment of its motor units is by increasing size. This so-called size principle permits a high precision in muscle force generation since small muscle forces are produced exclusively by small motor units. Larger motor units are activated only if the total muscle force has already reached certain critical levels. We show that this recruitment by size is not only optimal in precision but also optimal in an information theoretical sense. We consider the motoneuron pool as an encoder generating a parallel binary code from a common input to that pool. The generated motoneuron code is sent down through the motoneuron axons to the muscle. We establish that an optimization of this motoneuron code with respect to its information content is equivalent to the recruitment of motor units by size. Moreover, maximal information content of the motoneuron code is equivalent to a minimal expected error in muscle force generation.
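Recruitment by increasing size can be sketched directly (hypothetical motor unit forces; the paper's information-theoretic argument concerns why this ordering is optimal, which the sketch does not reproduce):

```python
def recruit_by_size(unit_forces, target_force):
    """Recruit motor units smallest-first until the summed force reaches the target.

    Returns the list of recruited unit forces. Under the size principle, small
    total forces are produced exclusively by small units, giving high precision;
    large units join only once the force has reached higher levels.
    """
    recruited, total = [], 0.0
    for f in sorted(unit_forces):
        if total >= target_force:
            break
        recruited.append(f)
        total += f
    return recruited

# Hypothetical pool: many small units, a few large ones.
print(recruit_by_size([10, 1, 5, 2, 20], 7))  # [1, 2, 5]
```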
The agents of natural genome editing.
Witzany, Guenther
2011-06-01
The DNA serves as a stable information storage medium, and every protein needed by the cell is produced from this blueprint via an RNA intermediate code. More recently it was found that an abundance of various RNA elements cooperate in a variety of steps and substeps as regulatory and catalytic units with multiple competencies to act on RNA transcripts. Natural genome editing on one side is the competent agent-driven generation and integration of meaningful DNA nucleotide sequences into pre-existing genomic content arrangements, and the ability to (re-)combine and (re-)regulate them according to context-dependent (i.e. adaptational) purposes of the host organism. Natural genome editing on the other side designates the integration of all RNA activities acting on RNA transcripts without altering DNA-encoded genes. If we take the genetic code seriously as a natural code, there must be agents that are competent to act on this code, because no natural code codes itself, just as no natural language speaks itself. Viral and subviral agents have been suggested as code-editing agents, because several indicators demonstrate that viruses are competent in both RNA and DNA natural genome editing.
Cui, Laizhong; Lu, Nan; Chen, Fu
2014-01-01
Most large-scale peer-to-peer (P2P) live streaming systems use a mesh to organize peers and leverage pull scheduling to transmit packets, providing robustness in dynamic environments. Pull scheduling, however, brings large packet delay. Network coding makes push scheduling feasible in mesh P2P live streaming and improves efficiency, but it may also introduce extra delays and coding computational overhead. To improve packet delay, streaming quality, and coding overhead, we propose a QoS-driven push scheduling approach in this paper. The main contributions of this paper are: (i) we introduce a new network coding method to increase content diversity and reduce the complexity of scheduling; (ii) we formulate push scheduling as an optimization problem and transform it to a min-cost flow problem, solving it in polynomial time; (iii) we propose a push scheduling algorithm to reduce the coding overhead, and we conduct extensive experiments to validate the effectiveness of our approach. Compared with previous approaches, the simulation results demonstrate that the packet delay, continuity index, and coding ratio of our system can be significantly improved, especially in dynamic environments. PMID:25114968
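The benefit of network coding in push scheduling comes from combining packets before forwarding; the simplest XOR form (an illustration of the general idea, not the specific coding method proposed in the paper) looks like:

```python
def xor_encode(p1: bytes, p2: bytes) -> bytes:
    """Combine two equal-length packets into a single coded packet."""
    return bytes(a ^ b for a, b in zip(p1, p2))

# A peer that already holds p2 recovers p1 from the coded packet,
# because XOR is its own inverse.
p1, p2 = b"hello", b"world"
coded = xor_encode(p1, p2)
print(xor_encode(coded, p2))  # b'hello'
```

Pushing such combinations increases content diversity: one coded packet can be useful to different neighbors that are each missing a different original.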
Code and codeless ionospheric measurements with NASA's Rogue GPS Receiver
NASA Technical Reports Server (NTRS)
Srinivasan, Jeff M.; Meehan, Tom K.; Young, Lawrence E.
1989-01-01
The NASA/JPL Rogue Receiver is an 8-satellite, non-multiplexed, highly digital global positioning system (GPS) receiver that can obtain dual frequency data either with or without knowledge of the P-code. In addition to its applications for high accuracy geodesy and orbit determination, the Rogue uses GPS satellite signals to measure the total electron content (TEC) of the ionosphere along the lines of sight from the receiver to the satellites. These measurements are used by JPL's Deep Space Network (DSN) for calibrating radiometric data. This paper will discuss Rogue TEC measurements, emphasizing the advantages of a receiver that can use the P-code, when available, but can also obtain reliable dual frequency data when the code is encrypted.
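Dual-frequency TEC measurement rests on the dispersive ionosphere: the first-order group delay scales as 40.3·TEC/f², so differencing pseudoranges at the two GPS frequencies isolates TEC. A sketch under that standard first-order model (the constants are the GPS carrier frequencies; the Rogue's actual processing is more involved):

```python
F1 = 1575.42e6  # GPS L1 carrier frequency, Hz
F2 = 1227.60e6  # GPS L2 carrier frequency, Hz

def tec_from_pseudoranges(p1: float, p2: float) -> float:
    """Slant TEC (electrons/m^2) from dual-frequency pseudoranges p1, p2 (meters).

    Under the first-order model, P_i = rho + 40.3 * TEC / F_i**2, so the
    geometric range rho cancels in the difference p2 - p1.
    """
    return (p2 - p1) * (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
```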
A Brief Report on the Ethical and Legal Guides For Technology Use in Marriage and Family Therapy.
Pennington, Michael; Patton, Rikki; Ray, Amber; Katafiasz, Heather
2017-10-01
Marriage and family therapists (MFTs) use ethical codes and state licensure laws/rules as guidelines for best clinical practice. It is important that professional codes reflect the potential exponential use of technology in therapy. However, current standards regarding technology use lack clarity. To explore this gap, a summative content analysis was conducted on state licensure laws/rules and professional ethical codes to find themes and subthemes among the many aspects of therapy in which technology can be utilized. Findings from the content analysis indicated that while there have been efforts by both state and professional organizations to incorporate guidance for technology use in therapy, a clear and comprehensive "roadmap" is still missing. Future scholarship is needed that develops clearer guidelines for therapists. © 2017 American Association for Marriage and Family Therapy.
Technical Support Document for Version 3.9.0 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2011-09-01
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC is no longer included, but those sections remain in this document for reference purposes.
Technical Support Document for Version 3.9.1 of the COMcheck Software
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bartlett, Rosemarie; Connell, Linda M.; Gowri, Krishnan
2012-09-01
COMcheck provides an optional way to demonstrate compliance with commercial and high-rise residential building energy codes. Commercial buildings include all use groups except single family and multifamily not over three stories in height. COMcheck was originally based on ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989) requirements and is intended for use with various codes based on Standard 90.1, including the Codification of ASHRAE/IES Standard 90.1-1989 (90.1-1989 Code) (ASHRAE 1989a, 1993b) and ASHRAE/IESNA Standard 90.1-1999 (Standard 90.1-1999). This includes jurisdictions that have adopted the 90.1-1989 Code, Standard 90.1-1989, Standard 90.1-1999, or their own code based on one of these. We view Standard 90.1-1989 and the 90.1-1989 Code as having equivalent technical content and have used both as source documents in developing COMcheck. This technical support document (TSD) is designed to explain the technical basis for the COMcheck software as originally developed based on the ANSI/ASHRAE/IES Standard 90.1-1989 (Standard 90.1-1989). Documentation for other national model codes and standards and specific state energy codes supported in COMcheck has been added to this report as appendices. These appendices are intended to provide technical documentation for features specific to the supported codes and for any changes made for state-specific codes that differ from the standard features that support compliance with the national model codes and standards. Beginning with COMcheck version 3.8.0, support for 90.1-1989, 90.1-1999, and the 1998 IECC was removed, and beginning with version 3.9.0, support for the 2000 and 2001 IECC was removed; those sections remain in this document for reference purposes.
2014-01-01
Background The genome is pervasively transcribed but most transcripts do not code for proteins, constituting non-protein-coding RNAs. Despite increasing numbers of functional reports of individual long non-coding RNAs (lncRNAs), assessing the extent of functionality among the non-coding transcriptional output of mammalian cells remains intricate. In the protein-coding world, transcripts differentially expressed in the context of processes essential for the survival of multicellular organisms have been instrumental in the discovery of functionally relevant proteins and their deregulation is frequently associated with diseases. We therefore systematically identified lncRNAs expressed differentially in response to oncologically relevant processes and cell-cycle, p53 and STAT3 pathways, using tiling arrays. Results We found that up to 80% of the pathway-triggered transcriptional responses are non-coding. Among these we identified very large macroRNAs with pathway-specific expression patterns and demonstrated that these are likely continuous transcripts. MacroRNAs contain elements conserved in mammals and sauropsids, which in part exhibit conserved RNA secondary structure. Comparing evolutionary rates of a macroRNA to adjacent protein-coding genes suggests a local action of the transcript. Finally, in different grades of astrocytoma, a tumor disease unrelated to the initially used cell lines, macroRNAs are differentially expressed. Conclusions It has been shown previously that the majority of expressed non-ribosomal transcripts are non-coding. We now conclude that differential expression triggered by signaling pathways gives rise to a similar abundance of non-coding content. It is thus unlikely that the prevalence of non-coding transcripts in the cell is a trivial consequence of leaky or random transcription events. PMID:24594072
Making Homes Healthy: International Code Council Processes and Patterns.
Coyle, Edward C; Isett, Kimberley R; Rondone, Joseph; Harris, Rebecca; Howell, M Claire Batten; Brandus, Katherine; Hughes, Gwendolyn; Kerfoot, Richard; Hicks, Diana
2016-01-01
Americans spend more than 90% of their time indoors, so it is important that homes are healthy environments. Yet many homes contribute to preventable illnesses via poor air quality, pests, safety hazards, and other factors. Efforts have been made to promote healthy housing through code changes, but results have been mixed. In support of such efforts, we analyzed the International Code Council's (ICC) building code change process to uncover patterns of content and context that may contribute to successful adoptions of model codes. Our objective was to discover patterns of facilitators and barriers to code amendment proposals through a mixed-methods study of ICC records of past code change proposals (N = 2660). There were 4 possible outcomes for each code proposal studied: accepted as submitted, accepted as modified, accepted as modified by public comment, and denied. We found numerous correlates for final adoption of model codes proposed to the ICC. The number of proponents listed on a proposal was inversely correlated with success. Organizations that submitted more than 15 proposals had a higher chance of success than those that submitted fewer than 15. Proposals submitted by federal agencies correlated with a higher chance of success. Public comments in favor of a proposal correlated with an increased chance of success, while negative public comment had an even stronger negative correlation. To increase the chance of success, public health officials should submit their code changes through internal ICC committees or a federal agency, limit the number of cosponsors of the proposal, work with (or become) an active proposal submitter, and encourage public comment in favor of passage through their broader coalition.
SMT-Aware Instantaneous Footprint Optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Roy, Probir; Liu, Xu; Song, Shuaiwen
Modern architectures employ simultaneous multithreading (SMT) to increase thread-level parallelism. SMT threads share many functional units and the whole memory hierarchy of a physical core. Without a careful code design, SMT threads can easily contend with each other for these shared resources, causing severe performance degradation. Minimizing SMT thread contention for HPC applications running on dedicated platforms is very challenging, because they usually spawn threads within Single Program Multiple Data (SPMD) models. To address this important issue, we introduce a simple scheme for SMT-aware code optimization, which aims to reduce the memory contention across SMT threads.
User's manual for the FLORA equilibrium and stability code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Freis, R.P.; Cohen, B.I.
1985-04-01
This document provides a user's guide to the content and use of the two-dimensional axisymmetric equilibrium and stability code FLORA. FLORA addresses the low-frequency MHD stability of long-thin axisymmetric tandem mirror systems with finite pressure and finite-Larmor-radius effects. FLORA solves an initial-value problem for interchange, rotational, and ballooning stability.
Square One TV: Coding of Segments.
ERIC Educational Resources Information Center
McNeal, Betsy; Singer, Karen
This report describes the system used to code each segment of Square One TV for content analysis of all four seasons of production. The analysis is intended to aid in the assessment of how well Square One is meeting its three goals: (1) to promote positive attitudes toward, and enthusiasm for, mathematics; (2) to encourage the use and application…
Codes, Costs, and Critiques: The Organization of Information in "Library Quarterly", 1931-2004
ERIC Educational Resources Information Center
Olson, Hope A.
2006-01-01
This article reports the results of a quantitative and thematic content analysis of the organization of information literature in the "Library Quarterly" ("LQ") between its inception in 1931 and 2004. The majority of articles in this category were published in the first half of "LQ's" run. Prominent themes have included cataloging codes and the…
Dual Coding Theory, Word Abstractness, and Emotion: A Critical Review of Kousta et al. (2011)
ERIC Educational Resources Information Center
Paivio, Allan
2013-01-01
Kousta, Vigliocco, Del Campo, Vinson, and Andrews (2011) questioned the adequacy of dual coding theory and the context availability model as explanations of representational and processing differences between concrete and abstract words. They proposed an alternative approach that focuses on the role of emotional content in the processing of…
ERIC Educational Resources Information Center
Subba Rao, G. M.; Vijayapushapm, T.; Venkaiah, K.; Pavarala, V.
2012-01-01
Objective: To assess quantity and quality of nutrition and food safety information in science textbooks prescribed by the Central Board of Secondary Education (CBSE), India for grades I through X. Design: Content analysis. Methods: A coding scheme was developed for quantitative and qualitative analyses. Two investigators independently coded the…
User's Guide for RESRAD-OFFSITE
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gnanapragasam, E.; Yu, C.
2015-04-01
The RESRAD-OFFSITE code can be used to model the radiological dose or risk to an offsite receptor. This User’s Guide for RESRAD-OFFSITE Version 3.1 is an update of the User’s Guide for RESRAD-OFFSITE Version 2 contained in the Appendix A of the User’s Manual for RESRAD-OFFSITE Version 2 (ANL/EVS/TM/07-1, DOE/HS-0005, NUREG/CR-6937). This user’s guide presents the basic information necessary to use Version 3.1 of the code. It also points to the help file and other documents that provide more detailed information about the inputs, the input forms and features/tools in the code; two of the features (overriding the source term and computing area factors) are discussed in the appendices to this guide. Section 2 describes how to download and install the code and then verify the installation of the code. Section 3 shows ways to navigate through the input screens to simulate various exposure scenarios and to view the results in graphics and text reports. Section 4 has screen shots of each input form in the code and provides basic information about each parameter to increase the user’s understanding of the code. Section 5 outlines the contents of all the text reports and the graphical output. It also describes the commands in the two output viewers. Section 6 deals with the probabilistic and sensitivity analysis tools available in the code. Section 7 details the various ways of obtaining help in the code.
Nang, Roberto N; Monahan, Felicia; Diehl, Glendon B; French, Daniel
2015-04-01
Many institutions collect reports in databases to make important lessons-learned available to their members. The Uniformed Services University of the Health Sciences collaborated with the Peacekeeping and Stability Operations Institute to conduct a descriptive and qualitative analysis of global health engagements (GHEs) contained in the Stability Operations Lessons Learned and Information Management System (SOLLIMS). This study used a summative qualitative content analysis approach involving six steps: (1) a comprehensive search; (2) a two-stage reading and screening process to identify first-hand, health-related records; (3) qualitative and quantitative data analysis using MAXQDA, a software program; (4) a word cloud to illustrate word frequencies and interrelationships; (5) coding of individual themes and validation of the coding scheme; and (6) identification of relationships in the data and overarching lessons-learned. The individual codes with the greatest number of coded text segments included: planning, personnel, interorganizational coordination, communication/information sharing, and resources/supplies. When compared to the Department of Defense's (DoD's) evolving GHE principles and capabilities, the SOLLIMS coding scheme appeared to align well with the list of GHE capabilities developed by the Department of Defense Global Health Working Group. The results of this study will inform practitioners of global health and encourage additional qualitative analysis of other lessons-learned databases. Reprint & Copyright © 2015 Association of Military Surgeons of the U.S.
Describing the content of primary care: limitations of Canadian billing data.
Katz, Alan; Halas, Gayle; Dillon, Michael; Sloshower, Jordan
2012-02-15
Primary health care systems are designed to provide comprehensive patient care. However, the ICD 9 coding system used for billing purposes in Canada neither characterizes nor captures the scope of clinical practice or the complexity of physician-patient interactions. This study aims to describe the content of primary care clinical encounters and examine the limitations of using administrative data to capture the content of these visits. Although a number of U.S. studies have described the content of primary care encounters, this is the first Canadian study to do so. Study-specific data collection forms were completed by 16 primary care physicians in community health and family practice clinics in Winnipeg, Manitoba, Canada. The data collection forms were completed immediately following the patient encounter and included patient and visit characteristics, such as primary reason for visit, topics discussed, actions taken, and degree of complexity, as well as diagnosis and ICD-9 codes. Data were collected for 760 patient encounters. The diagnostic codes often did not reflect the dominant topic of the visit or the topic requiring the most time. Physicians often address multiple problems and provide numerous services, thus increasing the complexity of care. This is one of the first Canadian studies to critically analyze the content of primary care clinical encounters. The data allowed a greater understanding of primary care clinical encounters and attest to the deficiencies of singular ICD-9 coding, which fails to capture the comprehensiveness and complexity of the primary care encounter. As primary care reform initiatives in the U.S. and Canada attempt to transform the way family physicians deliver care, it becomes increasingly important that other tools for structuring primary care data are considered in order to help physicians, researchers and policy makers understand the breadth and complexity of primary care.
Genomics dataset on unclassified published organism (patent US 7547531).
Khan Shawan, Mohammad Mahfuz Ali; Hasan, Md Ashraful; Hossain, Md Mozammel; Hasan, Md Mahmudul; Parvin, Afroza; Akter, Salina; Uddin, Kazi Rasel; Banik, Subrata; Morshed, Mahbubul; Rahman, Md Nazibur; Rahman, S M Badier
2016-12-01
Nucleotide (DNA) sequence analysis provides important clues regarding the characteristics and taxonomic position of an organism, making it crucial for establishing an organism's hierarchical classification. This dataset (patent US 7547531) was chosen to make the complex raw data buried in undisclosed DNA sequences accessible and to open doors for new collaborations. A total of 48 unidentified DNA sequences from patent US 7547531 were selected and their complete sequences retrieved from the NCBI BioSample database. Quick response (QR) codes of the DNA sequences were constructed with the DNA BarID tool; a QR code is useful for identifying an isolate and comparing it with other organisms. The AT/GC content of the DNA sequences, which indicates their stability at different temperatures, was determined using the ENDMEMO GC Content Calculator. The highest GC content was observed in GP445188 (62.5%), followed by GP445198 (61.8%) and GP445189 (59.44%), while the lowest was in GP445178 (24.39%). In addition, the New England BioLabs (NEB) database was used to identify cleavage codes indicating 5′, 3′ and blunt ends, and enzyme codes indicating the methylation sites of the DNA sequences. These data will be helpful for constructing the organisms' hierarchical classification, determining their phylogenetic and taxonomic position, and revealing their molecular characteristics.
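The GC figures reported above follow directly from base counts. As a minimal sketch (not the ENDMEMO tool itself, and using a made-up fragment rather than a sequence from the patent), GC content can be computed as:

```python
def gc_content(seq: str) -> float:
    """Percent G+C in a DNA sequence (case-insensitive)."""
    seq = seq.upper()
    gc = seq.count("G") + seq.count("C")
    return 100.0 * gc / len(seq)

# Hypothetical fragment for illustration only (not from patent US 7547531):
fragment = "ATGCGCGCTA"
print(round(gc_content(fragment), 1))  # 60.0
```

A higher GC fraction generally raises the duplex melting temperature, which is the stability property the dataset's GC figures speak to.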
DOE Office of Scientific and Technical Information (OSTI.GOV)
Simonen, F.A.; Khaleel, M.A.
This paper describes a statistical evaluation of the through-thickness copper variation for welds in reactor pressure vessels, and reviews the historical basis for the static and arrest fracture toughness (K{sub Ic} and K{sub Ia}) equations used in the VISA-II code. Copper variability in welds is due to fabrication procedures, with copper contents being randomly distributed and variable from one location to another through the thickness of the vessel. The VISA-II procedure of sampling the copper content from a statistical distribution for every 6.35- to 12.7-mm (1/4- to 1/2-in.) layer through the thickness was found to be consistent with the statistical observations. However, the parameters of the VISA-II distribution and statistical limits required further investigation. Copper contents at a few locations through the thickness were found to exceed the 0.4% upper limit of the VISA-II code. The data also suggest that the mean copper content varies systematically through the thickness. While the assumption of normality is not clearly supported by the available data, a statistical evaluation based on all the available data results in means and standard deviations within the VISA-II code limits.
NASA Electronic Library System (NELS) optimization
NASA Technical Reports Server (NTRS)
Pribyl, William L.
1993-01-01
This is a compilation of NELS (NASA Electronic Library System) Optimization progress/problem, interim, and final reports for all phases. The NELS database was examined, particularly its memory use, disk contention, and CPU load, to discover bottlenecks. Methods to increase the speed of NELS code were investigated. The tasks included restructuring the existing code to interact with other components more effectively. Error reporting code was added to help detect and remove bugs in NELS. Report writing tools were recommended to integrate with the ASV3 system. The Oracle database management system and tools were to be installed on a Sun workstation, intended for demonstration purposes.
Group delay variations of GPS transmitting and receiving antennas
NASA Astrophysics Data System (ADS)
Wanninger, Lambert; Sumaya, Hael; Beer, Susanne
2017-09-01
GPS code pseudorange measurements exhibit group delay variations at the transmitting and the receiving antenna. We calibrated C1 and P2 delay variations with respect to dual-frequency carrier phase observations and obtained nadir-dependent corrections for 32 satellites of the GPS constellation in early 2015 as well as elevation-dependent corrections for 13 receiving antenna models. The combined delay variations reach up to 1.0 m (3.3 ns) in the ionosphere-free linear combination for specific pairs of satellite and receiving antennas. Applying these corrections to the code measurements improves code/carrier single-frequency precise point positioning, ambiguity fixing based on the Melbourne-Wübbena linear combination, and determination of ionospheric total electron content. It also affects fractional cycle biases and differential code biases.
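The ionosphere-free linear combination referred to above weights the two code pseudoranges by their squared carrier frequencies so that the first-order ionospheric delay, which scales with 1/f², cancels. A minimal sketch with the standard GPS L1/L2 frequencies (the pseudorange values are illustrative, not data from the study):

```python
# Standard GPS L1/L2 carrier frequencies in Hz
F1 = 1575.42e6
F2 = 1227.60e6

def ionosphere_free(p1: float, p2: float) -> float:
    """Ionosphere-free combination of dual-frequency code pseudoranges (m).

    The weights sum to 1, so a common (non-dispersive) range is preserved,
    while the 1/f^2 ionospheric terms cancel. Antenna group delay
    variations do NOT cancel here and must be corrected separately.
    """
    a = F1**2 / (F1**2 - F2**2)
    b = F2**2 / (F1**2 - F2**2)
    return a * p1 - b * p2

# Illustrative values: P2 carries a few extra metres of ionospheric delay.
# The combination pulls the result slightly below P1, removing that delay.
print(ionosphere_free(20_000_000.0, 20_000_003.0))
```

Because the combination amplifies noise by roughly a factor of three, the antenna-dependent delay variations of up to 1.0 m quoted above matter for precise applications.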
Malt Beverage Brand Popularity Among Youth and Youth-Appealing Advertising Content.
Xuan, Ziming; DeJong, William; Siegel, Michael; Babor, Thomas F
2017-11-01
This study examined whether alcohol brands more popular among youth are more likely to have aired television advertisements that violated the alcohol industry's voluntary code by including youth-appealing content. We obtained a complete list of 288 brand-specific beer advertisements broadcast during the National Collegiate Athletic Association (NCAA) men's and women's basketball tournaments from 1999 to 2008. All ads were rated by a panel of health professionals using a modified Delphi method to assess the presence of youth-appealing content in violation of the alcohol industry's voluntary code. The ads represented 23 alcohol brands. The popularity of these brands was operationalized as the brand-specific popularity of youth alcohol consumption in the past 30 days, as determined by a 2011 to 2012 national survey of underage drinkers. Brand-level popularity was used as the exposure variable to predict the odds of having advertisements with youth-appealing content violations. Accounting for other covariates and the clustering of advertisements within brands, increased brand popularity among underage youth was associated with significantly increased odds of having youth-appeal content violations in ads televised during the NCAA basketball tournament games (adjusted odds ratio = 1.70, 95% CI: 1.38, 2.09). Alcohol brands popular among underage drinkers are more likely to air television advertising that violates the industry's voluntary code which proscribes youth-appealing content. Copyright © 2017 by the Research Society on Alcoholism.
NASA Astrophysics Data System (ADS)
Yang, Qianli; Pitkow, Xaq
2015-03-01
Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.
Kivisalu, Trisha M; Lewey, Jennifer H; Shaffer, Thomas W; Canfield, Merle L
2016-01-01
The Rorschach Performance Assessment System (R-PAS) aims to provide an evidence-based approach to administration, coding, and interpretation of the Rorschach Inkblot Method (RIM). R-PAS analyzes individualized communications given by respondents to each card to code a wide pool of possible variables. Due to the large number of possible codes that can be assigned to these responses, it is important to consider the concordance rates among different assessors. This study investigated interrater reliability for R-PAS protocols. Data were analyzed from a nonpatient convenience sample of 50 participants who were recruited through networking, local marketing, and advertising efforts from January 2013 through October 2014. Blind recoding was used and discrepancies between the initial and blind coders' ratings were analyzed for each variable with SPSS, yielding percent agreement and intraclass correlation values. Data for Location, Space, Contents, Synthesis, Vague, Pairs, Form Quality, Populars, Determinants, and Cognitive and Thematic codes are presented. Rates of agreement for 1,168 responses were higher for simpler codes (e.g., Location), whereas agreement was lower for more complex codes (e.g., Cognitive and Thematic codes). Overall, concordance rates achieved good to excellent agreement. Results suggest R-PAS is an effective method with high interrater reliability supporting its empirical basis.
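Percent agreement, one of the two statistics reported above, is simply the proportion of responses to which the initial and blind coders assigned the same code. A minimal sketch (the Location codes shown are hypothetical, not study data; the intraclass correlation computation is omitted):

```python
def percent_agreement(coder_a, coder_b):
    """Percent of responses on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must rate the same set of responses")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return 100.0 * matches / len(coder_a)

# Hypothetical Location codes for five responses:
initial = ["W", "D", "Dd", "W", "D"]
blind = ["W", "D", "D", "W", "D"]
print(percent_agreement(initial, blind))  # 80.0
```

Percent agreement ignores chance agreement, which is one reason the study also reports intraclass correlations.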
ERIC Educational Resources Information Center
Horwitz, Amy Beth
2010-01-01
The purpose of this investigation is to describe the outcomes of a multi-state study of written discipline policies in a high school setting. This study examines discipline codes of conduct and analyzes the content for behaviors ranging in severity (mild, moderate, and severe) while specifically examining the use of suspension as a punitive…
"You Can Speak German, Sir": On the Complexity of Teachers' L1 Use in CLIL
ERIC Educational Resources Information Center
Gierlinger, Erwin
2015-01-01
Classroom code switching in foreign language teaching is still a controversial issue whose status as a tool of both despair and desire continues to be hotly debated. As the teaching of content and language integrated learning (CLIL) is, by definition, concerned with the learning of a foreign language, one would expect the value of code switching…
ERIC Educational Resources Information Center
Martin, Peter W.; Espiritu, Clemencia C
1996-01-01
Examines how the teacher incorporates elements of both "Bahasa Melayu" and Brunei Malay into content lessons and views code switching in the primary classroom within the wider framework of community language norms and the linguistic pressures on students and teachers. Espiritu shares Martin's concern regarding the quantity and quality of…
The Multitheoretical List of Therapeutic Interventions - 30 items (MULTI-30).
Solomonov, Nili; McCarthy, Kevin S; Gorman, Bernard S; Barber, Jacques P
2018-01-16
To develop a brief version of the Multitheoretical List of Therapeutic Interventions (MULTI-60) in order to decrease completion time burden by approximately half, while maintaining content coverage. Study 1 aimed to select 30 items. Study 2 aimed to examine the reliability and internal consistency of the MULTI-30. Study 3 aimed to validate the MULTI-30 and ensure content coverage. In Study 1, the sample included 186 therapist and 255 patient MULTI ratings, and 164 ratings of sessions coded by trained observers. Internal consistency (Cronbach's alpha and McDonald's omega) was calculated and confirmatory factor analysis was conducted. Psychotherapy experts rated content relevance. Study 2 included a sample of 644 patient and 522 therapist ratings, and 793 codings of psychotherapy sessions. In Study 3, the sample included 33 codings of sessions. A series of regression analyses was conducted to examine replication of previously published findings using the MULTI-30. The MULTI-30 was found valid, reliable, and internally consistent across the 2564 ratings examined in the three studies presented. The MULTI-30 is a brief and reliable process measure. Future studies are required for further validation.
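Cronbach's alpha, one of the internal-consistency statistics used above, can be computed from the item variances and the variance of the total score. A minimal self-contained sketch (the ratings are toy values, not MULTI-30 data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    items: list of equal-length lists, one list of scores per item.
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    using sample (n-1) variances throughout.
    """
    k = len(items)
    n = len(items[0])

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Two toy items rated by four respondents; identical items give alpha = 1:
print(round(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]), 3))  # 1.0
```

In practice alpha is computed per subscale; McDonald's omega, also reported above, relaxes alpha's assumption of equal item loadings.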
Kullback Leibler divergence in complete bacterial and phage genomes
Akhter, Sajia; Aziz, Ramy K.; Kashef, Mona T.; Ibrahim, Eslam S.; Bailey, Barbara; Edwards, Robert A.
2017-01-01
The amino acid content of the proteins encoded by a genome may predict the coding potential of that genome and may reflect lifestyle restrictions of the organism. Here, we calculated the Kullback–Leibler divergence from the mean amino acid content as a metric to compare the amino acid composition for a large set of bacterial and phage genome sequences. Using these data, we demonstrate that (i) there is a significant difference between amino acid utilization in different phylogenetic groups of bacteria and phages; (ii) many of the bacteria with the most skewed amino acid utilization profiles, or the bacteria that host phages with the most skewed profiles, are endosymbionts or parasites; (iii) the skews in the distribution are not restricted to certain metabolic processes but are common across all bacterial genomic subsystems; (iv) amino acid utilization profiles strongly correlate with GC content in bacterial genomes but very weakly correlate with the G+C percent in phage genomes. These findings might be exploited to distinguish coding from non-coding sequences in large data sets, such as metagenomic sequence libraries, to help in prioritizing subsequent analyses. PMID:29204318
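The metric used in the abstract above is the standard discrete Kullback-Leibler divergence, D(p‖q) = Σᵢ pᵢ log(pᵢ/qᵢ), applied to amino-acid frequency vectors. A minimal sketch over a toy 4-symbol alphabet (a real analysis would use the 20 amino-acid frequencies; the numbers here are illustrative, not genome data):

```python
import math

def kl_divergence(p, q):
    """Kullback-Leibler divergence D(p || q) in bits for two discrete
    distributions given as aligned frequency lists. Terms with p_i == 0
    contribute nothing; q is assumed nonzero wherever p is nonzero."""
    return sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy example: a skewed composition vs. a uniform "mean" composition.
genome = [0.40, 0.30, 0.20, 0.10]
mean = [0.25, 0.25, 0.25, 0.25]
print(round(kl_divergence(genome, mean), 3))  # 0.154
```

A divergence of zero means the genome's composition matches the mean exactly; the endosymbionts and parasites mentioned above would show large values.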
Improved inter-layer prediction for light field content coding with display scalability
NASA Astrophysics Data System (ADS)
Conti, Caroline; Ducla Soares, Luís.; Nunes, Paulo
2016-09-01
Light field imaging based on microlens arrays - also known as plenoptic, holoscopic and integral imaging - has recently emerged as a feasible and promising technology due to its ability to support functionalities not straightforwardly available in conventional imaging systems, such as post-production refocusing and depth-of-field changing. However, to gradually reach the consumer market and to provide interoperability with current 2D and 3D representations, a display scalable coding solution is essential. In this context, this paper proposes an improved display scalable light field codec comprising a three-layer hierarchical coding architecture (previously proposed by the authors) that provides interoperability with 2D (Base Layer) and 3D stereo and multiview (First Layer) representations, while the Second Layer supports the complete light field content. To further improve the compression performance, novel exemplar-based inter-layer coding tools are proposed here for the Second Layer, namely: (i) an inter-layer reference picture construction relying on an exemplar-based optimization algorithm for texture synthesis, and (ii) a direct prediction mode based on exemplar texture samples from lower layers. Experimental results show that the proposed solution performs better than the tested benchmark solutions, including the authors' previous scalable codec.
NASA Astrophysics Data System (ADS)
Boonsong, S.; Siharak, S.; Srikanok, V.
2018-02-01
The purpose of this research was to develop a learning management approach for enhancing students' moral ethics and code of ethics at Rajamangala University of Technology Thanyaburi (RMUTT). The contextual study and the ideas for developing the learning management were derived from document study, the focus group method, and content analysis of documents on the moral ethics and code of ethics of the teaching profession in the Graduate Diploma for Teaching Profession Program. The main research tools were summary and analysis papers. The results showed that the developed learning management could promote the desired moral ethics and code of ethics of the teaching profession among Graduate Diploma for Teaching Profession students through integrated learning techniques consisting of Service Learning, Contract System, Value Clarification, Role Playing, and Concept Mapping. The learning management was presented in three steps.
NASA Astrophysics Data System (ADS)
Liu, Deyang; An, Ping; Ma, Ran; Yang, Chao; Shen, Liquan; Li, Kai
2016-07-01
Three-dimensional (3-D) holoscopic imaging, also known as integral imaging, light field imaging, or plenoptic imaging, can provide natural and fatigue-free 3-D visualization. However, a large amount of data is required to represent the 3-D holoscopic content. Therefore, efficient coding schemes for this particular type of image are needed. A 3-D holoscopic image coding scheme with kernel-based minimum mean square error (MMSE) estimation is proposed. In the proposed scheme, the coding block is predicted by an MMSE estimator under statistical modeling. In order to obtain the signal statistical behavior, kernel density estimation (KDE) is utilized to estimate the probability density function of the statistical modeling. As bandwidth estimation (BE) is a key issue in the KDE problem, we also propose a BE method based on kernel trick. The experimental results demonstrate that the proposed scheme can achieve a better rate-distortion performance and a better visual rendering quality.
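Kernel density estimation as used above places a kernel at each sample and sums the contributions; the bandwidth controls the smoothing. A minimal 1-D Gaussian-kernel sketch using Silverman's rule-of-thumb bandwidth (a common baseline estimator, not the kernel-trick bandwidth estimation method the paper proposes; the sample values are illustrative):

```python
import math

def silverman_bandwidth(samples):
    """Silverman's rule-of-thumb bandwidth for a 1-D Gaussian KDE:
    h = 1.06 * std * n^(-1/5)."""
    n = len(samples)
    mean = sum(samples) / n
    std = math.sqrt(sum((x - mean) ** 2 for x in samples) / (n - 1))
    return 1.06 * std * n ** (-1 / 5)

def kde(x, samples, h):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    norm = 1.0 / (len(samples) * h * math.sqrt(2 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in samples)

# Two clusters of illustrative values; density is higher near a cluster
# (around 1.0) than in the gap between clusters (around 2.0):
samples = [1.0, 1.2, 0.9, 1.1, 3.0, 3.2, 2.9]
h = silverman_bandwidth(samples)
print(kde(1.0, samples, h) > kde(2.0, samples, h))  # True
```

In the paper's setting, the density estimated this way models the statistical behavior of the coding block so that an MMSE predictor can be formed from it.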
Turbulence dissipation challenge: particle-in-cell simulations
NASA Astrophysics Data System (ADS)
Roytershteyn, V.; Karimabadi, H.; Omelchenko, Y.; Germaschewski, K.
2015-12-01
We discuss application of three particle in cell (PIC) codes to the problems relevant to turbulence dissipation challenge. VPIC is a fully kinetic code extensively used to study a variety of diverse problems ranging from laboratory plasmas to astrophysics. PSC is a flexible fully kinetic code offering a variety of algorithms that can be advantageous to turbulence simulations, including high order particle shapes, dynamic load balancing, and ability to efficiently run on Graphics Processing Units (GPUs). Finally, HYPERS is a novel hybrid (kinetic ions+fluid electrons) code, which utilizes asynchronous time advance and a number of other advanced algorithms. We present examples drawn both from large-scale turbulence simulations and from the test problems outlined by the turbulence dissipation challenge. Special attention is paid to such issues as the small-scale intermittency of inertial range turbulence, mode content of the sub-proton range of scales, the formation of electron-scale current sheets and the role of magnetic reconnection, as well as numerical challenges of applying PIC codes to simulations of astrophysical turbulence.
NASA Astrophysics Data System (ADS)
Fang, Yi; Huang, Yahong
2017-12-01
Estimating sand liquefaction according to design codes is an important part of geotechnical design. However, the estimates sometimes fail to match the damage observed in actual earthquakes. Based on the damage from the Tangshan earthquake and on engineering geological conditions, three typical sites were chosen, the sand liquefaction probability at each site was evaluated using the method in the Code for Seismic Design of Buildings, and the results were compared with the liquefaction actually observed in the earthquake. The comparison shows that the difference between code-based liquefaction estimates and actual earthquake damage is mainly attributable to two aspects. The primary reasons include the disparity between the seismic fortification intensity and the actual ground motion, changes in the groundwater level, the thickness of the overlying non-liquefied soil layer, local site effects, and personal error. In addition, although the judgment methods in the codes are broadly applicable, the limitations of their underlying data and the qualitative anomalies of their judgment formulas also contribute to the difference.
Conceptual-driven classification for coding advise in health insurance reimbursement.
Li, Sheng-Tun; Chen, Chih-Chuan; Huang, Fernando
2011-01-01
With the non-stop increases in medical treatment fees, the economic survival of a hospital in Taiwan relies on the reimbursements received from the Bureau of National Health Insurance, which in turn depend on the accuracy and completeness of the content of the discharge summaries as well as the correctness of their International Classification of Diseases (ICD) codes. The purpose of this research is to enforce the entire disease classification framework by supporting disease classification specialists in the coding process. This study developed an ICD code advisory system (ICD-AS) that performed knowledge discovery from discharge summaries and suggested ICD codes. Natural language processing and information retrieval techniques based on Zipf's Law were applied to process the content of discharge summaries, and fuzzy formal concept analysis was used to analyze and represent the relationships between the medical terms identified by MeSH. In addition, a certainty factor used as reference during the coding process was calculated to account for uncertainty and strengthen the credibility of the outcome. Two sets of 360 and 2579 textual discharge summaries of patients suffering from cerebrovascular disease were processed to build up ICD-AS and to evaluate the prediction performance. A number of experiments were conducted to investigate the impact of system parameters on accuracy and compare the proposed model to traditional classification techniques including linear-kernel support vector machines. The comparison results showed that the proposed system achieves better overall performance in terms of several measures. In addition, some useful implication rules were obtained, which improve comprehension of the field of cerebrovascular disease and give insights into the relationships between relevant medical terms.
Our system contributes valuable guidance to disease classification specialists in the process of coding discharge summaries, which consequently brings benefits in aspects of patient, hospital, and healthcare system. Copyright © 2010 Elsevier B.V. All rights reserved.
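Zipf's Law, which the ICD-AS term-processing step relies on, says that in natural-language text a term's frequency falls off roughly as the inverse of its frequency rank, so very common terms carry little discriminative content and very rare ones little statistical support. A crude sketch of rank-frequency counting and mid-frequency term filtering (the thresholds and tokens are illustrative, not those used by ICD-AS):

```python
from collections import Counter

def rank_frequency(tokens):
    """Term frequencies in rank order; per Zipf's Law, frequency * rank
    is roughly constant in natural text."""
    return Counter(tokens).most_common()

def mid_frequency_terms(tokens, low, high):
    """Keep terms whose frequency lies in [low, high], discarding the
    very common and the very rare. Thresholds are illustrative."""
    counts = Counter(tokens)
    return sorted(t for t, f in counts.items() if low <= f <= high)

# Toy token stream standing in for a tokenized discharge summary:
tokens = "the patient the the stroke stroke scan".split()
print(rank_frequency(tokens)[0])          # ('the', 3)
print(mid_frequency_terms(tokens, 2, 2))  # ['stroke']
```

Filtering to the mid-frequency band is a classic heuristic for isolating content-bearing index terms before a concept analysis stage like the one described above.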
Digitized forensics: retaining a link between physical and digital crime scene traces using QR-codes
NASA Astrophysics Data System (ADS)
Hildebrandt, Mario; Kiltz, Stefan; Dittmann, Jana
2013-03-01
The digitization of physical traces from crime scenes in forensic investigations in effect creates a digital chain-of-custody and entails the challenge of creating a link between the two or more representations of the same trace. In order to be forensically sound, the two security aspects of integrity and authenticity in particular need to be maintained at all times. Ensuring authenticity by technical means proves to be especially challenging at the boundary between the physical object and its digital representations. In this article we propose a new method of linking physical objects with their digital counterparts using two-dimensional bar codes and additional meta-data accompanying the acquired data for integration in the conventional documentation of collection of items of evidence (bagging and tagging process). Using the exemplarily chosen QR-code as a particular implementation of a bar code and a model of the forensic process, we also supply a means to integrate our suggested approach into forensically sound proceedings as described by Holder et al. We use the example of digital dactyloscopy as a forensic discipline, where progress is currently being made by digitizing some of the processing steps. We show an exemplary demonstrator of the suggested approach using a smartphone as a mobile device for the verification of the physical trace to extend the chain-of-custody from the physical to the digital domain. Our evaluation of the demonstrator focuses on the readability of the bar code and the verification of its contents. Using various devices, we can read the bar code despite its limited size of 42 x 42 mm and the rather large amount of embedded data. Furthermore, the QR-code's error correction features help to recover the contents of damaged codes. Subsequently, our appended digital signature allows for detecting malicious manipulations of the embedded data.
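The integrity/authenticity link described above can be sketched as a signed metadata payload embedded in the bar code. The toy version below uses an HMAC over a JSON body as a simpler stand-in for the article's appended digital signature; the key, field names, and payload format are hypothetical, not taken from the paper:

```python
import hashlib
import hmac
import json

# Hypothetical shared key; the paper appends a digital signature rather
# than an HMAC, so this is only a stand-in for the same integrity check.
SECRET = b"lab-custody-key"

def make_payload(meta: dict) -> str:
    """Serialize evidence metadata and append an authentication tag,
    yielding the string that would be encoded into the QR code."""
    body = json.dumps(meta, sort_keys=True)
    tag = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return body + "|" + tag

def verify_payload(payload: str) -> bool:
    """Check that the embedded metadata was not manipulated."""
    body, _, tag = payload.rpartition("|")
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

payload = make_payload({"item": "latent print 7", "scene": "A-12"})
print(verify_payload(payload))                          # True
print(verify_payload(payload.replace("A-12", "B-1")))   # False
```

The QR code's built-in error correction protects against accidental damage; the authentication tag is what detects deliberate manipulation of the embedded data.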
Rusu, Adina C; Hallner, Dirk
2018-06-06
Depression is a common feature of chronic pain, but there is only limited research into the content and frequency of depressed cognitions in pain patients. This study describes the development of the Sentence Completion Test for Chronic Pain (SCP), an idiographic measure for assessing depressive thinking in chronic pain patients. The sentence completion task requires participants to finish incomplete sentences in their own words to a set of predefined stems that include negatively, positively and neutrally valenced self-referenced words. In addition, the stems include past, future and world stems, which reflect the theoretical negative triad typical of depression. Complete responses are coded by valence (negative, positive and neutral) and by pain- and health-related content. A total of 89 participants were included in this study. Forty-seven adult outpatients formed the depressed pain group and were compared to a non-clinical control sample of 42 healthy control participants. This study comprised several phases: (1) theory-driven generation of coding rules; (2) the development of a coding manual by a panel of experts; (3) a comparison of the reliability of coding by expert raters with and without the use of the coding manual; and (4) preliminary analyses of the construct validity of the SCP. The internal consistency of the SCP was tested using the Kuder-Richardson coefficient (KR-20). Inter-rater agreement was assessed by intra-class correlations (ICC). The content and construct validity of the SCP were investigated by correlation coefficients between SCP negative completions, the Hospital Anxiety and Depression Scale (HADS) depression scores and the number of symptoms on the Structured Clinical Interview for DSM-IV-TR (SCID). As predicted for content validity, the number of SCP negative statements was significantly greater in the depressed pain group, and this group also produced significantly fewer positive statements compared to the healthy control group.
The number of negative pain completions and negative health completions was significantly greater in the depressed pain group. As expected, in the depressed pain group, the correlation between SCP negatives and the HADS Depression score was r=0.60 and the correlation between SCP negatives and the number of symptoms on the SCID was r=0.56. The SCP demonstrated good content validity, internal consistency and inter-rater reliability. Uses for this measure, such as complementing questionnaire measures by an idiographic assessment of depressive thinking and generating hypotheses about key problems within a cognitive-behavioural case-formulation, are suggested.
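For reference, the KR-20 coefficient used above for internal consistency can be computed directly from dichotomous item responses. The following sketch implements the standard formula on toy data, not the study's data.

```python
def kr20(responses):
    """Kuder-Richardson 20 internal consistency for 0/1 item responses.
    responses: one list per respondent, one 0/1 entry per item."""
    k = len(responses[0])                      # number of items
    n = len(responses)                         # number of respondents
    p = [sum(r[i] for r in responses) / n for i in range(k)]
    item_var = sum(pi * (1 - pi) for pi in p)  # sum of item variances
    totals = [sum(r) for r in responses]
    mean = sum(totals) / n
    total_var = sum((t - mean) ** 2 for t in totals) / n
    return (k / (k - 1)) * (1 - item_var / total_var)

# Toy data: 4 respondents, 3 dichotomous items.
toy = [[1, 1, 0], [1, 0, 0], [0, 0, 0], [1, 1, 1]]
assert abs(kr20(toy) - 0.75) < 1e-9
```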
ICD-10 codes used to identify adverse drug events in administrative data: a systematic review.
Hohl, Corinne M; Karpov, Andrei; Reddekopp, Lisa; Doyle-Waters, Mimi; Stausberg, Jürgen
2014-01-01
Adverse drug events, the unintended and harmful effects of medications, are important outcome measures in health services research. Yet no universally accepted set of International Classification of Diseases (ICD) revision 10 codes or coding algorithms exists to ensure their consistent identification in administrative data. Our objective was to synthesize a comprehensive set of ICD-10 codes used to identify adverse drug events. We developed a systematic search strategy and applied it to five electronic reference databases. We searched relevant medical journals, conference proceedings, electronic grey literature and bibliographies of relevant studies, and contacted content experts for unpublished studies. One author reviewed the titles and abstracts for inclusion and exclusion criteria. Two authors reviewed eligible full-text articles and abstracted data in duplicate. Data were synthesized in a qualitative manner. Of 4241 titles identified, 41 were included. We found a total of 827 ICD-10 codes that have been used in the medical literature to identify adverse drug events. The median number of codes used to search for adverse drug events was 190 (IQR 156-289) with a large degree of variability between studies in the numbers and types of codes used. Authors commonly used external injury (Y40.0-59.9) and disease manifestation codes. Only two papers reported on the sensitivity of their code set. Substantial variability exists in the methods used to identify adverse drug events in administrative data. Our work may serve as a point of reference for future research and consensus building in this area.
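As an illustration of how such code sets are applied to administrative data, the sketch below screens an ICD-10 code against the external-injury block Y40.0-Y59.9 cited above. The helper name and the simple string parsing are assumptions; real studies match against full lists of several hundred codes rather than a single range.

```python
def in_external_injury_block(code):
    """True if an ICD-10 code falls in the drug-related external-cause
    block Y40.0-Y59.9 (hypothetical helper for illustration)."""
    if not code.startswith("Y"):
        return False
    try:
        value = float(code[1:])
    except ValueError:
        return False
    return 40.0 <= value <= 59.9

assert in_external_injury_block("Y40.0")
assert in_external_injury_block("Y57.5")
assert not in_external_injury_block("Y60.0")
assert not in_external_injury_block("T36.0")
```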
Hearing sounds, understanding actions: action representation in mirror neurons.
Kohler, Evelyne; Keysers, Christian; Umiltà, M Alessandra; Fogassi, Leonardo; Gallese, Vittorio; Rizzolatti, Giacomo
2002-08-02
Many object-related actions can be recognized by their sound. We found neurons in monkey premotor cortex that discharge when the animal performs a specific action and when it hears the related sound. Most of the neurons also discharge when the monkey observes the same action. These audiovisual mirror neurons code actions independently of whether these actions are performed, heard, or seen. This discovery in the monkey homolog of Broca's area might shed light on the origin of language: audiovisual mirror neurons code abstract contents, the meaning of actions, and have the auditory access to these contents that is typical of human language.
The complete mitochondrial genome of Chrysopa pallens (Insecta, Neuroptera, Chrysopidae).
He, Kun; Chen, Zhe; Yu, Dan-Na; Zhang, Jia-Yong
2012-10-01
The complete mitochondrial genome of Chrysopa pallens (Neuroptera, Chrysopidae) was sequenced. It consists of 13 protein-coding genes, 22 transfer RNA genes, 2 ribosomal RNA (rRNA) genes, and a control region (AT-rich region). The total length of C. pallens mitogenome is 16,723 bp with 79.5% AT content, and the length of control region is 1905 bp with 89.1% AT content. The non-coding regions of C. pallens include control region between 12S rRNA and trnI genes, and a 75-bp space region between trnI and trnQ genes.
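AT-content figures like those reported above are straightforward to compute from a raw sequence; a minimal sketch, run here on a toy sequence rather than the actual mitogenome:

```python
def at_content(seq):
    """Percentage of A and T bases in a nucleotide sequence."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(b) for b in "AT") / len(seq)

# Toy sequence: 4 of its 6 bases are A or T.
assert abs(at_content("atatgc") - 100.0 * 4 / 6) < 1e-9
```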
Zelingher, Julian; Ash, Nachman
2013-05-01
The Israeli healthcare system has undergone major processes for the adoption of health information technologies (HIT), and enjoys high levels of utilization in hospital and ambulatory care. Coding is an essential infrastructure component of HIT, and its purpose is to represent data in a simplified and common format, enhancing its manipulation by digital systems. Proper coding of data enables efficient identification, storage, retrieval and communication of data. Utilization of uniform coding systems by different organizations enables data interoperability between them, facilitating communication and integrating data elements originating in different information systems from various organizations. Current needs in Israel for health data coding include recording and reporting of diagnoses for hospitalized patients, outpatients and visitors of the Emergency Department, coding of procedures and operations, coding of pathology findings, reporting of discharge diagnoses and causes of death, billing codes, organizational data warehouses and national registries. New national projects for clinical data integration, obligatory reporting of quality indicators and new Ministry of Health (MOH) requirements for HIT necessitate a high level of interoperability that can be achieved only through the adoption of uniform coding. Additional pressures were introduced by the USA decision to stop the maintenance of the ICD-9-CM codes that are also used by Israeli healthcare, and the adoption of ICD-10-CM and ICD-10-PCS as the main coding systems for billing purposes. The USA has also mandated utilization of SNOMED-CT as the coding terminology for the Electronic Health Record problem list, and for reporting quality indicators to the CMS. Hence, the Israeli MOH has recently decided that discharge diagnoses will be reported using ICD-10-CM codes, and SNOMED-CT will be used to code the clinical information in the EHR.
We reviewed the characteristics, strengths and weaknesses of these two coding systems. In summary, the adoption of ICD-10-CM is in line with the USA decision to abandon ICD-9-CM, and the Israeli healthcare system could benefit from USA healthcare efforts in this direction. The large content of SNOMED-CT and its sophisticated hierarchical data structure will enable advanced clinical decision support and quality improvement applications.
Multimodal Discriminative Binary Embedding for Large-Scale Cross-Modal Retrieval.
Wang, Di; Gao, Xinbo; Wang, Xiumei; He, Lihuo; Yuan, Bo
2016-10-01
Multimodal hashing, which conducts effective and efficient nearest neighbor search across heterogeneous data on large-scale multimedia databases, has been attracting increasing interest, given the explosive growth of multimedia content on the Internet. Recent multimodal hashing research mainly aims at learning compact binary codes that preserve the semantic information given by labels. The overwhelming majority of these methods are similarity-preserving approaches which approximate a pairwise similarity matrix with Hamming distances between the to-be-learnt binary hash codes. However, these methods ignore the discriminative property in the hash learning process, which leaves hash codes from different classes indistinguishable and therefore reduces the accuracy and robustness of nearest neighbor search. To this end, we present a novel multimodal hashing method, named multimodal discriminative binary embedding (MDBE), which focuses on learning discriminative hash codes. First, the proposed method formulates hash function learning in terms of classification, where the binary codes generated by the learned hash functions are expected to be discriminative. Then, it exploits the label information to discover the shared structures inside heterogeneous data. Finally, the learned structures are preserved for hash codes to produce similar binary codes in the same class. Hence, the proposed MDBE can preserve both discriminability and similarity for hash codes, and will enhance retrieval accuracy. Thorough experiments on benchmark data sets demonstrate that the proposed method achieves excellent accuracy and competitive computational efficiency compared with the state-of-the-art methods for large-scale cross-modal retrieval tasks.
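Hamming-distance search over binary codes, the operation these hash codes are designed for, can be sketched in a few lines. This is a generic illustration of the retrieval step, not the MDBE method itself.

```python
def hamming(a, b):
    """Hamming distance between two binary codes stored as Python ints."""
    return bin(a ^ b).count("1")

def nearest(query, database):
    """Index of the database code with the smallest Hamming distance."""
    return min(range(len(database)), key=lambda i: hamming(query, database[i]))

# Toy 4-bit codes: the second entry differs from the query in one bit.
codes = [0b0000, 0b1111, 0b1000]
assert nearest(0b1110, codes) == 1
```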
Direct Sequence Spread Spectrum (DSSS) Receiver, User Manual
2008-01-01
sampled data is clocked into correlator data registers and a comparison is made between the code and data register contents, producing a correlation ... symbol (equal to the processing gain Gp) but need not be otherwise synchronised with the spreading codes. This allows a very long and noise-like PRBS ... and Q channels are independently but synchronously sampled.
The "Wow! signal" of the terrestrial genetic code
NASA Astrophysics Data System (ADS)
shCherbak, Vladimir I.; Makukov, Maxim A.
2013-05-01
It has been repeatedly proposed to expand the scope for SETI, and one of the suggested alternatives to radio is the biological media. Genomic DNA is already used on Earth to store non-biological information. Though smaller in capacity, the genetic code is stronger in noise immunity. The code is a flexible mapping between codons and amino acids, and this flexibility allows modifying the code artificially. But once fixed, the code might stay unchanged over cosmological timescales; in fact, it is the most durable construct known. Therefore it represents an exceptionally reliable storage for an intelligent signature, if that conforms to biological and thermodynamic requirements. As the actual scenario for the origin of terrestrial life is far from being settled, the proposal that it might have been seeded intentionally cannot be ruled out. A statistically strong intelligent-like "signal" in the genetic code is then a testable consequence of such a scenario. Here we show that the terrestrial code displays a thorough precision-type orderliness matching the criteria to be considered an informational signal. Simple arrangements of the code reveal an ensemble of arithmetical and ideographical patterns of the same symbolic language. Accurate and systematic, these underlying patterns appear as a product of precision logic and nontrivial computing rather than of stochastic processes (the null hypothesis that they are due to chance coupled with presumable evolutionary pathways is rejected with P-value < 10^-13). The patterns are profound to the extent that the code mapping itself is uniquely deduced from their algebraic representation. The signal displays readily recognizable hallmarks of artificiality, among which are the symbol of zero, the privileged decimal syntax and semantical symmetries. Besides, extraction of the signal involves logically straightforward but abstract operations, making the patterns essentially irreducible to any natural origin.
Plausible ways of embedding the signal into the code and possible interpretation of its content are discussed. Overall, while the code is nearly optimized biologically, its limited capacity is used extremely efficiently to pass non-biological information.
Autosophy information theory provides lossless data and video compression based on the data content
NASA Astrophysics Data System (ADS)
Holtz, Klaus E.; Holtz, Eric S.; Holtz, Diana
1996-09-01
A new autosophy information theory provides an alternative to the classical Shannon information theory. Using the new theory in communication networks provides both a high degree of lossless compression and virtually unbreakable encryption codes for network security. The bandwidth in a conventional Shannon communication is determined only by the data volume and the hardware parameters, such as image size, resolution, or frame rates in television. The data content, or what is shown on the screen, is irrelevant. In contrast, the bandwidth in autosophy communication is determined only by data content, such as novelty and movement in television images. It is the data volume and hardware parameters that become irrelevant. Basically, the new communication methods use prior 'knowledge' of the data, stored in a library, to encode subsequent transmissions. The more 'knowledge' stored in the libraries, the higher the potential compression ratio. 'Information' is redefined as that which is not already known by the receiver. Everything already known is redundant and need not be re-transmitted. In a perfect communication, each transmission code, called a 'tip,' creates a new 'engram' of knowledge in the library, in which each tip transmission can represent any amount of data. Autosophy theories provide six separate learning modes, or omni-dimensional networks, all of which can be used for data compression. The new information theory reveals the theoretical flaws of other data compression methods, including the Huffman, Ziv-Lempel, and LZW codes, and commercial compression codes such as V.42bis and MPEG-2.
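The dictionary-based coders the abstract criticizes, such as LZW, likewise replace content the receiver already "knows" with short codes, which makes them a useful point of comparison for the library idea described here. A minimal LZW encoder (the standard algorithm, not the autosophy method):

```python
def lzw_encode(text):
    """Classic LZW: grow a phrase dictionary while encoding, so content
    the receiver already knows is replaced by a single code."""
    table = {chr(i): i for i in range(256)}   # start with all single bytes
    phrase, out = "", []
    for ch in text:
        if phrase + ch in table:
            phrase += ch                      # extend the known phrase
        else:
            out.append(table[phrase])         # emit code for known prefix
            table[phrase + ch] = len(table)   # learn the new phrase
            phrase = ch
    if phrase:
        out.append(table[phrase])
    return out

# Repetition is absorbed into dictionary entries: 6 chars -> 4 codes.
assert lzw_encode("ababab") == [97, 98, 256, 256]
```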
Public sentiment and discourse about Zika virus on Instagram.
Seltzer, E K; Horst-Martz, E; Lu, M; Merchant, R M
2017-09-01
Social media have strongly influenced the awareness and perceptions of public health emergencies, and a considerable amount of social media content is now shared through images, rather than text alone. This content can impact preparedness and response due to the popularity and real-time nature of social media platforms. We sought to explore how the image-sharing platform Instagram is used for information dissemination and conversation during the current Zika outbreak. This was a retrospective review of publicly posted images about Zika on Instagram. Using the keyword '#zika' we identified 500 images posted on Instagram from May to August 2016. Images were coded by three reviewers and contextual information was collected for each image about sentiment, image type, content, audience, geography, reliability, and engagement. Of 500 images tagged with #zika, 342 (68%) contained content actually related to Zika. Of the 342 Zika-specific images, 299 were coded as 'health' and 193 were coded 'public interest'. Some images had multiple 'health' and 'public interest' codes. Health images tagged with #zika were primarily related to transmission (43%, 129/299) and prevention (48%, 145/299). Transmission-related posts were more often about mosquito-human transmission (73%, 94/129) than human-human transmission (27%, 35/129). Mosquito bite prevention posts outnumbered safe sex prevention posts, (84%, 122/145) and (16%, 23/145) respectively. Images with a target audience were primarily aimed at women (95%, 36/38). Many posts (60%, 61/101) included misleading, incomplete, or unclear information about the virus. Additionally, many images expressed fear and negative sentiment (51%, 79/156). Instagram can be used to characterize public sentiment and highlight areas of focus for public health, such as correcting misleading or incomplete information or expanding messages to reach diverse audiences.
Crucial steps to life: From chemical reactions to code using agents.
Witzany, Guenther
2016-02-01
The concepts of the origin of the genetic code and the definitions of life changed dramatically after the RNA world hypothesis. Main narratives in molecular biology and genetics, such as the "central dogma," "one gene-one protein" and "non-coding DNA is junk," have meanwhile been falsified. RNA moved from being a transitional intermediate molecule to centre stage. Additionally, the abundance of empirical data concerning non-random genetic change operators, such as the variety of mobile genetic elements, persistent viruses and defectives, does not fit with the dominant narrative of error replication events (mutations) as the main driving forces creating genetic novelty and diversity. The reductionistic and mechanistic views on the physico-chemical properties of the genetic code are no longer convincing as appropriate descriptions of the abundance of non-random genetic content operators which are active in natural genetic engineering and natural genome editing.
A Radiation Shielding Code for Spacecraft and Its Validation
NASA Technical Reports Server (NTRS)
Shinn, J. L.; Cucinotta, F. A.; Singleterry, R. C.; Wilson, J. W.; Badavi, F. F.; Badhwar, G. D.; Miller, J.; Zeitlin, C.; Heilbronn, L.; Tripathi, R. K.
2000-01-01
The HZETRN code, which uses a deterministic approach pioneered at NASA Langley Research Center, has been developed over the past decade to evaluate the local radiation fields within sensitive materials (electronic devices and human tissue) on spacecraft in the space environment. The code describes the interactions of shield materials with the incident galactic cosmic rays, trapped protons, or energetic protons from solar particle events in free space and low Earth orbit. The content of incident radiations is modified by atomic and nuclear reactions with the spacecraft and radiation shield materials. High-energy heavy ions are fragmented into less massive reaction products, and reaction products are produced by direct knockout of shield constituents or from de-excitation products. An overview of the computational procedures and database which describe these interactions is given. Validation of the code with recent Monte Carlo benchmarks, and laboratory and flight measurement is also included.
Sub-Selective Quantization for Learning Binary Codes in Large-Scale Image Search.
Li, Yeqing; Liu, Wei; Huang, Junzhou
2018-06-01
Recently with the explosive growth of visual content on the Internet, large-scale image search has attracted intensive attention. It has been shown that mapping high-dimensional image descriptors to compact binary codes can lead to considerable efficiency gains in both storage and performing similarity computation of images. However, most existing methods still suffer from expensive training devoted to large-scale binary code learning. To address this issue, we propose a sub-selection based matrix manipulation algorithm, which can significantly reduce the computational cost of code learning. As case studies, we apply the sub-selection algorithm to several popular quantization techniques including cases using linear and nonlinear mappings. Crucially, we can justify the resulting sub-selective quantization by proving its theoretic properties. Extensive experiments are carried out on three image benchmarks with up to one million samples, corroborating the efficacy of the sub-selective quantization method in terms of image retrieval.
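As a point of comparison for binary code learning, the classic random-hyperplane baseline maps descriptors to bits via projection signs. This generic, learning-free sketch is not the paper's sub-selective quantization; function names and parameters are illustrative.

```python
import random

def make_hash(dim, bits, seed=0):
    """Random-hyperplane hashing: each bit is the sign of one random
    projection of the descriptor (a generic learning-free baseline)."""
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(bits)]

    def encode(vec):
        code = 0
        for plane in planes:
            dot = sum(p * v for p, v in zip(plane, vec))
            code = (code << 1) | (1 if dot >= 0 else 0)
        return code

    return encode

encode = make_hash(dim=8, bits=16, seed=42)
v = [0.3, -1.2, 0.7, 0.1, -0.5, 2.0, -0.9, 0.4]
# Scaling a vector preserves every projection sign, hence the code.
assert encode(v) == encode([2 * x for x in v])
```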
Physical Model for the Evolution of the Genetic Code
NASA Astrophysics Data System (ADS)
Yamashita, Tatsuro; Narikiyo, Osamu
2011-12-01
Using the shape space of codons and tRNAs, we give a physical description of genetic code evolution on the basis of the codon capture and ambiguous intermediate scenarios in a consistent manner. In the lowest dimensional version of our description, a physical quantity, the codon level, is introduced. In terms of codon levels, the two scenarios are classified into two different routes of the evolutionary process. In the case of the ambiguous intermediate scenario, we perform an evolutionary simulation implementing cost selection of amino acids and confirm a rapid transition of the code change. Such rapidity mitigates the weakness of the scenario, namely the non-unique translation of the code in the intermediate state. In the case of the codon capture scenario, survival against mutations under mutational pressure minimizing GC content in genomes is simulated, and it is demonstrated that cells which experience only neutral mutations survive.
Patient Health Goals Elicited During Home Care Admission: A Categorization.
Sockolow, Paulina; Radhakrishnan, Kavita; Chou, Edgar Y; Wojciechowicz, Christine
2017-11-01
Home care agencies are initiating "patient health goal elicitation" activities as part of home care admission planning. We categorized elicited goals and identified "clinically informative" goals at a home care agency. We examined patient goals that admitting clinicians documented in the point-of-care electronic health record; conducted content analysis on patient goal data to develop a coding scheme; grouped goal themes into codes; assigned codes to each goal; and identified goals that were in the patient voice. Of the 1,763 patient records, 16% lacked a goal; only 15 goals were in a patient's voice. Nurse and physician experts identified 12 of the 20 codes as clinically important accounting for 82% of goal occurrences. The most frequent goal documented was safety/falls (23%). Training and consistent communication of the intent and operationalization of patient goal elicitation may address the absence of patient voice and the less than universal recording of home care patients' goals.
An Analysis of Defense Information and Information Technology Articles: A Sixteen-Year Perspective
2009-03-01
exploratory,” or “subjective” (Denzin & Lincoln, 2000). Existing Research: This research is based on content analysis methodologies utilized by Carter ... same codes (Denzin & Lincoln, 2000). Different analysts should code the same text in a similar manner (Weber, 1990). Typically, researchers compute ... chosen. Krippendorff recommends an agreement level of at least .70 (Krippendorff, 2004). Some scholars use a cut-off rate of .80 (Denzin & Lincoln
NASA Astrophysics Data System (ADS)
Chan, V. S.; Wong, C. P. C.; McLean, A. G.; Luo, G. N.; Wirth, B. D.
2013-10-01
The Xolotl code under development by PSI-SciDAC will enhance predictive modeling capability of plasma-facing materials under burning plasma conditions. The availability and application of experimental data to compare to code-calculated observables are key requirements to validate the breadth and content of physics included in the model and ultimately gain confidence in its results. A dedicated effort has been in progress to collect and organize a) a database of relevant experiments and their publications as previously carried out at sample exposure facilities in US and Asian tokamaks (e.g., DIII-D DiMES, and EAST MAPES), b) diagnostic and surface analysis capabilities available at each device, and c) requirements for future experiments with code validation in mind. The content of this evolving database will serve as a significant resource for the plasma-material interaction (PMI) community. Work supported in part by the US Department of Energy under GA-DE-SC0008698, DE-AC52-07NA27344 and DE-AC05-00OR22725.
Low-complexity transcoding algorithm from H.264/AVC to SVC using data mining
NASA Astrophysics Data System (ADS)
Garrido-Cantos, Rosario; De Cock, Jan; Martínez, Jose Luis; Van Leuven, Sebastian; Cuenca, Pedro; Garrido, Antonio
2013-12-01
Nowadays, networks and terminals with diverse characteristics of bandwidth and capabilities coexist. To ensure a good quality of experience, this diverse environment demands adaptability of the video stream. In general, video content is compressed to save storage capacity and to reduce the bandwidth required for its transmission. If these compressed video streams were compressed using scalable video coding schemes, they would be able to adapt to those heterogeneous networks and a wide range of terminals. However, since the majority of multimedia content is compressed using H.264/AVC, it cannot benefit from that scalability. This paper proposes a low-complexity algorithm to convert an H.264/AVC bitstream without scalability to scalable bitstreams with temporal scalability in the baseline and main profiles, by accelerating the mode decision task of the scalable video coding encoding stage using machine learning tools. The results show that when our technique is applied, the complexity is reduced by 87% while maintaining coding efficiency.
Overview of codes and tools for nuclear engineering education
NASA Astrophysics Data System (ADS)
Yakovlev, D.; Pryakhin, A.; Medvedeva, L.
2017-01-01
The recent world trends in nuclear education have developed in the direction of social education, networking, virtual tools and codes. MEPhI, as a global leader in the world education market, implements new advanced technologies for distance and online learning and for student research work. MEPhI has produced special codes, tools and web resources based on an internet platform to support education in the field of nuclear technology. At the same time, MEPhI actively uses codes and tools from third parties. Several types of tools are considered: calculation codes, nuclear data visualization tools, virtual labs, PC-based educational simulators for nuclear power plants (NPP), CLP4NET, education web platforms, and distance courses (MOOCs and controlled and managed content systems). The university pays special attention to integrated products such as CLP4NET, which is not a learning course but serves to automate the process of learning through distance technologies. CLP4NET organizes all tools in the same information space. Up to now, MEPhI has achieved significant results in the field of distance education and online system implementation.
Up to code: does your company's conduct meet world-class standards?
Paine, Lynn; Deshpandé, Rohit; Margolis, Joshua D; Bettcher, Kim Eric
2005-12-01
Codes of conduct have long been a feature of corporate life. Today, they are arguably a legal necessity--at least for public companies with a presence in the United States. But the issue goes beyond U.S. legal and regulatory requirements. Sparked by corruption and excess of various types, dozens of industry, government, investor, and multisector groups worldwide have proposed codes and guidelines to govern corporate behavior. These initiatives reflect an increasingly global debate on the nature of corporate legitimacy. Given the legal, organizational, reputational, and strategic considerations, few companies will want to be without a code. But what should it say? Apart from a handful of essentials spelled out in Sarbanes-Oxley regulations and NYSE rules, authoritative guidance is sorely lacking. In search of some reference points for managers, the authors undertook a systematic analysis of a select group of codes. In this article, they present their findings in the form of a "codex," a reference source on code content. The Global Business Standards Codex contains a set of overarching principles as well as a set of conduct standards for putting those principles into practice. The GBS Codex is not intended to be adopted as is, but is meant to be used as a benchmark by those wishing to create their own world-class code. The provisions of the codex must be customized to a company's specific business and situation; individual companies' codes will include their own distinctive elements as well. What the codex provides is a starting point grounded in ethical fundamentals and aligned with an emerging global consensus on basic standards of corporate behavior.
Moore, Brian C J
2003-03-01
To review how the properties of sounds are "coded" in the normal auditory system and to discuss the extent to which cochlear implants can and do represent these codes. Data are taken from published studies of the response of the cochlea and auditory nerve to simple and complex stimuli, in both the normal and the electrically stimulated ear. REVIEW CONTENT: The review describes: 1) the coding in the normal auditory system of overall level (which partly determines perceived loudness), spectral shape (which partly determines perceived timbre and the identity of speech sounds), periodicity (which partly determines pitch), and sound location; 2) the role of the active mechanism in the cochlea, and particularly the fast-acting compression associated with that mechanism; 3) the neural response patterns evoked by cochlear implants; and 4) how the response patterns evoked by implants differ from those observed in the normal auditory system in response to sound. A series of specific issues is then discussed, including: 1) how to compensate for the loss of cochlear compression; 2) the effective number of independent channels in a normal ear and in cochlear implantees; 3) the importance of independence of responses across neurons; 4) the stochastic nature of normal neural responses; 5) the possible role of across-channel coincidence detection; and 6) potential benefits of binaural implantation. Current cochlear implants do not adequately reproduce several aspects of the neural coding of sound in the normal auditory system. Improved electrode arrays and coding systems may lead to improved coding and, it is hoped, to better performance.
Lorencatto, Fabiana; West, Robert; Seymour, Natalie; Michie, Susan
2013-06-01
There is a difference between interventions as planned and as delivered in practice. Unless we know what was actually delivered, we cannot understand "what worked" in effective interventions. This study aimed to (a) assess whether an established taxonomy of 53 smoking cessation behavior change techniques (BCTs) may be applied or adapted as a method for reliably specifying the content of smoking cessation behavioral support consultations and (b) develop an effective method for training researchers and practitioners in the reliable application of the taxonomy. Fifteen transcripts of audio-recorded consultations delivered by England's Stop Smoking Services were coded into component BCTs using the taxonomy. Interrater reliability and potential adaptations to the taxonomy to improve coding were discussed following 3 coding waves. A coding training manual was developed through expert consensus and piloted on 10 trainees, assessing coding reliability and self-perceived competence before and after training. An average of 33 BCTs from the taxonomy were identified at least once across sessions and coding waves. Consultations contained on average 12 BCTs (range = 8-31). Average interrater reliability was high (88% agreement). The taxonomy was adapted to simplify coding by merging co-occurring BCTs and refining BCT definitions. Coding reliability and self-perceived competence significantly improved posttraining for all trainees. It is possible to apply a taxonomy to reliably identify and classify BCTs in smoking cessation behavioral support delivered in practice, and train inexperienced coders to do so reliably. This method can be used to investigate variability in provision of behavioral support across services, monitor fidelity of delivery, and identify training needs.
Nihilism, relativism, and Engelhardt.
Wreen, M
1998-01-01
This paper is a critical analysis of Tristram Engelhardt's attempts to avoid unrestricted nihilism and relativism. The focus of attention is his recent book, The Foundations of Bioethics (Oxford University Press, 1996). No substantive or "content-full" bioethics (e.g., that of Roman Catholicism or the Samurai) has an intersubjectively verifiable and universally binding foundation, Engelhardt thinks, for unaided secular reason cannot show that any particular substantive morality (or moral code) is correct. He thus seems to be committed to either nihilism or relativism. The first is the view that there is not even one true or valid moral code, and the second is the view that there is a plurality of true or valid moral codes. However, Engelhardt rejects both nihilism and relativism, at least in unrestricted form. Strictly speaking, he himself is a universalist, someone who believes that there is a single true moral code. Two argumentative strategies are employed by him to fend off unconstrained nihilism and relativism. The first argues that although all attempts to establish a content-full morality on the basis of secular reason fail, secular reason can still establish a content-less, purely procedural morality. Although not content-full and incapable of providing positive direction in life, much less a meaning of life, such a morality does limit the range of relativism and nihilism. The second argues that there is a single true, content-full morality. Grace and revelation, however, are needed to make it available to us; secular reason alone is not up to the task. This second line of argument is not pursued in The Foundations at any length, but it does crop up at times, and if it is sound, nihilism and relativism can be much more thoroughly routed than the first line of argument has it. Engelhardt's position and argumentative strategies are exposed at length and accorded a detailed critical examination. 
In the end, it is concluded that neither strategy will do, and that Engelhardt is probably committed to some form of relativism.
14 CFR 201.4 - General provisions concerning contents.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 14 Aeronautics and Space 4 2010-01-01 2010-01-01 false General provisions concerning contents. 201.4 Section 201.4 Aeronautics and Space OFFICE OF THE SECRETARY, DEPARTMENT OF TRANSPORTATION... UNITED STATES CODE-[AMENDED] Application Procedures § 201.4 General provisions concerning contents. (a...
Information quality measurement of medical encoding support based on usability.
Puentes, John; Montagner, Julien; Lecornu, Laurent; Cauvin, Jean-Michel
2013-12-01
Medical encoding support systems for diagnoses and medical procedures are an emerging technology that is beginning to play a key role in billing, reimbursement, and health policy decisions. A significant problem in exploiting these systems is how to measure the appropriateness of any automatically generated list of codes, in terms of fitness for use, i.e. their quality. Until now, only information retrieval performance measurements have been applied to estimate the accuracy of codes lists as a quality indicator. Such measurements do not give the value of codes lists for practical medical encoding, and cannot be used to globally compare the quality of multiple codes lists. This paper defines and validates a new encoding information quality measure that addresses the problem of measuring the quality of medical codes lists. It is based on a usability study of how expert coders and physicians apply computer-assisted medical encoding. The proposed measure, named ADN, evaluates the Accuracy, Dispersion and Noise of a codes list, and is adapted to the variable length and content of generated codes lists, coping with limitations of previous measures. According to the ADN measure, the information quality of a codes list is fully represented by a single point within a suitably constrained feature space. Using one scheme, our approach reliably measures and compares the information quality of hundreds of codes lists, showing their practical value for medical encoding. Its pertinence is demonstrated by simulation and application to real data corresponding to 502 inpatient stays in four clinic departments. Results are compared to the consensus of three expert coders who also coded this anonymized database of discharge summaries, and to five information retrieval measures. Information quality assessment applying the ADN measure showed the degree of encoding-support system variability from one clinic department to another, providing a global evaluation of quality measurement trends.
Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
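The abstract does not spell out the ADN formulas, but the idea of mapping a codes list to a single (Accuracy, Dispersion, Noise) point can be sketched with hypothetical definitions. All three definitions below are illustrative assumptions, not the paper's actual measure:

```python
def adn_point(generated, reference):
    """Toy (accuracy, dispersion, noise) point for a generated codes list.

    Hypothetical definitions, for illustration only:
      accuracy   - fraction of reference codes that appear in the list
      dispersion - normalized rank spread of the correctly retrieved codes
      noise      - fraction of list entries not in the reference
    """
    hits = [i for i, code in enumerate(generated) if code in reference]
    accuracy = len(hits) / len(reference) if reference else 0.0
    if len(hits) > 1 and len(generated) > 1:
        dispersion = (hits[-1] - hits[0]) / (len(generated) - 1)
    else:
        dispersion = 0.0
    noise = sum(1 for code in generated if code not in reference) / len(generated)
    return accuracy, dispersion, noise

# A list of four generated ICD-style codes against two reference codes.
a, d, n = adn_point(["I10", "E11", "J45", "Z99"], {"I10", "E11"})
```

Because each list collapses to one point, lists of different lengths and contents become directly comparable, which is the property the paper exploits to rank hundreds of codes lists.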
[Bioethical analysis of the Brazilian Dentistry Code of Ethics].
Pyrrho, Monique; do Prado, Mauro Machado; Cordón, Jorge; Garrafa, Volnei
2009-01-01
The Brazilian Dentistry Code of Ethics (DCE), Resolution CFO-71 of May 2006, is an instrument created to guide dentists' behavior in relation to the ethical aspects of professional practice. The purpose of this study is to analyze the above-mentioned code, comparing the deontological and bioethical focuses. To do so, an interpretative analysis was made of the code and of twelve selected texts, six on bioethics and six on deontology, through the methodological classification of the context units, textual paragraphs, and items from the code into the following categories: the referentials of bioethical principlism (autonomy, beneficence, nonmaleficence and justice), technical aspects, and moral virtues related to the profession. Together, the four principles represented 22.9%, 39.8% and 54.2% of the content of the DCE, of the deontological texts, and of the bioethical texts, respectively. In the DCE, 42% of the items referred to virtues, 40.2% were associated with technical aspects, and just 22.9% referred to principles. The virtues related to the professionals and the technical aspects together amounted to 70.1% of the code. Instead of focusing on the patient as the subject of the process of oral health care, the DCE focuses on the professional, and it is predominantly oriented toward legalistic and corporate aspects.
Spacecraft-plasma interaction codes: NASCAP/GEO, NASCAP/LEO, POLAR, DynaPAC, and EPSAT
NASA Technical Reports Server (NTRS)
Mandell, M. J.; Jongeward, G. A.; Cooke, D. L.
1992-01-01
Development of a computer code to simulate interactions between the surfaces of a geometrically complex spacecraft and the space plasma environment involves: (1) defining the relevant physical phenomena and formulating them in appropriate levels of approximation; (2) defining a representation for the 3-D space external to the spacecraft and a means for defining the spacecraft surface geometry and embedding it in the surrounding space; (3) packaging the code so that it is easy and practical to use, interpret, and present the results; and (4) validating the code by continual comparison with theoretical models, ground test data, and spaceflight experiments. The physical content, geometrical capabilities, and application of five S-CUBED developed spacecraft plasma interaction codes are discussed. The NASA Charging Analyzer Program/geosynchronous earth orbit (NASCAP/GEO) is used to illustrate the role of electrostatic barrier formation in daylight spacecraft charging. NASCAP/low Earth orbit (LEO) applications to the CHARGE-2 and Space Power Experiment Aboard Rockets (SPEAR)-1 rocket payloads are shown. DynaPAC application to the SPEAR-2 rocket payloads is described. Environment Power System Analysis Tool (EPSAT) is illustrated by application to Tethered Satellite System 1 (TSS-1), SPEAR-3, and Sundance. A detailed description and application of the Potentials of Large Objects in the Auroral Region (POLAR) Code are presented.
Developing an ethical code for engineers: the discursive approach.
Lozano, J Félix
2006-04-01
From the Hippocratic Oath on, deontological codes and other professional self-regulation mechanisms have been used to legitimize and identify professional groups. New technological challenges and, above all, changes in the socioeconomic environment require adaptable codes which can respond to new demands. We assume that ethical codes for professionals should not simply focus on regulative functions, but must also consider ideological and educative functions. Any adaptations should take into account both contents (values, norms and recommendations) and the drafting process itself. In this article we propose a process for developing a professional ethical code for an official professional association (Colegio Oficial de Ingenieros Industriales de Valencia, COIIV), starting from the philosophical assumptions of discursive ethics but adapting them to critical hermeneutics. Our proposal is based on the Integrity Approach rather than the Compliance Approach. A process aiming to achieve an effective ethical document that fulfils regulative and ideological functions requires a participative, dialogical and reflexive methodology. This process must respond to moral exigencies and demands for efficiency and professional effectiveness. In addition to the methodological proposal we present our experience of producing an ethical code for the industrial engineers' association in Valencia (Spain) where this methodology was applied, and we evaluate the detected problems and future potential.
Clayman, Marla L.; Makoul, Gregory; Harper, Maya M.; Koby, Danielle G.; Williams, Adam R.
2012-01-01
Objectives: Describe the development and refinement of a scheme, Detail of Essential Elements and Participants in Shared Decision Making (DEEP-SDM), for coding shared decision making (SDM) while reporting on the characteristics of decisions in a sample of patients with metastatic breast cancer. Methods: The Evidence-Based Patient Choice instrument was modified to reflect Makoul and Clayman's Integrative Model of SDM. Coding was conducted on video recordings of 20 women at the first visit with their medical oncologists after suspicion of disease progression. Noldus Observer XT v.8, a video coding software platform, was used for coding. Results: The sample contained 80 decisions (range: 1-11), divided into 150 decision-making segments. Most decisions were physician-led, although patients and physicians initiated similar numbers of decision-making conversations. Conclusion: DEEP-SDM facilitates content analysis of encounters between women with metastatic breast cancer and their medical oncologists. Despite the fractured nature of decision making, it is possible to identify decision points and to code each of the essential elements of shared decision making. Further work should include application of DEEP-SDM to non-cancer encounters. Practice Implications: A better understanding of how decisions unfold in the medical encounter can help inform the relationship of SDM to patient-reported outcomes. PMID:22784391
Hoelzer, Simon; Schweiger, Ralf K; Liu, Raymond; Rudolf, Dirk; Rieger, Joerg; Dudeck, Joachim
2005-01-01
With the introduction of the ICD-10 as the standard for diagnosis, the development of an electronic representation of its complete content, inherent semantics, and coding rules is necessary. Our concept refers to current efforts of the CEN/TC 251 to establish a European standard for hierarchical classification systems in healthcare. We have developed an electronic representation of the ICD-10 with the Extensible Markup Language (XML) that facilitates integration into current information systems or coding software, taking into account different languages and versions. In this context, XML offers a complete framework of related technologies and standard tools for processing that helps to develop interoperable applications.
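As a rough sketch of what such a representation enables (the element and attribute names below are invented for illustration, not those of the actual CEN/TC 251 or ICD-10 schema), a hierarchical classification fragment can be encoded and queried with standard XML tooling:

```python
import xml.etree.ElementTree as ET

# Hypothetical markup; the real ICD-10 XML representation differs.
ICD_FRAGMENT = """
<classification name="ICD-10" version="2005">
  <chapter code="IX" title="Diseases of the circulatory system">
    <block code="I10-I15" title="Hypertensive diseases">
      <category code="I10" title="Essential (primary) hypertension"/>
    </block>
  </chapter>
</classification>
"""

root = ET.fromstring(ICD_FRAGMENT)
# Resolve a category code to its rubric with a simple XPath-style query.
category = root.find(".//category[@code='I10']")
title = category.get("title")
# The enclosing block carries the code range the category belongs to.
block = root.find(".//block[@code='I10-I15']")
```

Because the hierarchy is explicit in the markup, coding software can walk from a category up through its block and chapter, which is the kind of interoperability the authors describe.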
Iterative categorization (IC): a systematic technique for analysing qualitative data
2016-01-01
The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. PMID:26806155
New procedures to evaluate visually lossless compression for display systems
NASA Astrophysics Data System (ADS)
Stolitzka, Dale F.; Schelkens, Peter; Bruylants, Tim
2017-09-01
Visually lossless image coding in isochronous display streaming or plesiochronous networks reduces link complexity and power consumption and increases available link bandwidth. A new set of codecs developed within the last four years promises a new level of coding quality, but requires new evaluation techniques that are sufficiently sensitive to the small artifacts or color variations induced by this new breed of codecs. This paper begins with a summary of ISO/IEC 29170-2, a procedure for the evaluation of visually lossless coding, and reports new work by JPEG to extend the procedure in two important ways: for HDR content and for evaluating the differences between still images, panning images, and image sequences. ISO/IEC 29170-2 relies on processing test images through a well-defined process chain for subjective, forced-choice psychophysical experiments. The procedure sets an acceptable quality level equal to one just noticeable difference. Traditional image and video coding evaluation techniques, such as those used for television evaluation, have not proven sufficiently sensitive to the small artifacts that may be induced by this breed of codecs. In 2015, JPEG received new requirements to expand the evaluation of visually lossless coding to high dynamic range images, slowly moving images (i.e., panning), and image sequences. These requirements are the basis for the new amendments of the ISO/IEC 29170-2 procedures described in this paper. These amendments promise to be highly useful for the new content in television and cinema mezzanine networks. The amendments passed the final ballot in April 2017 and are on track to be published in 2018.
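The forced-choice logic behind such a procedure can be caricatured in a few lines: if observers cannot pick the coded image out at a rate meaningfully above chance, the artifact is treated as falling below one just noticeable difference. The margin used below is an arbitrary stand-in; ISO/IEC 29170-2 defines the actual statistical criterion:

```python
def artifact_visible(correct, trials, chance=0.5, margin=0.125):
    """Crude read-out of a two-alternative forced-choice run.

    `margin` is an illustrative cutoff, not the standard's criterion:
    the observed proportion correct must exceed chance by more than
    `margin` before the artifact counts as reliably visible.
    """
    return correct / trials > chance + margin

# Two hypothetical observer runs of 32 trials each.
visible = artifact_visible(correct=24, trials=32)      # 75% correct
not_visible = artifact_visible(correct=18, trials=32)  # about 56% correct
```

The point of the sensitivity requirements discussed in the paper is precisely that, for these codecs, observed proportions hover near chance unless the test protocol is tuned to expose very small artifacts.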
Delineating slowly and rapidly evolving fractions of the Drosophila genome.
Keith, Jonathan M; Adams, Peter; Stephen, Stuart; Mattick, John S
2008-05-01
Evolutionary conservation is an important indicator of function and a major component of bioinformatic methods to identify non-protein-coding genes. We present a new Bayesian method for segmenting pairwise alignments of eukaryotic genomes while simultaneously classifying segments into slowly and rapidly evolving fractions. We also describe an information criterion similar to the Akaike Information Criterion (AIC) for determining the number of classes. Working with pairwise alignments enables detection of differences in conservation patterns among closely related species. We analyzed three whole-genome and three partial-genome pairwise alignments among eight Drosophila species. Three distinct classes of conservation level were detected. Sequences comprising the most slowly evolving component were consistent across a range of species pairs, and constituted approximately 62-66% of the D. melanogaster genome. Almost all (>90%) of the aligned protein-coding sequence is in this fraction, suggesting much of it (comprising the majority of the Drosophila genome, including approximately 56% of non-protein-coding sequences) is functional. The size and content of the most rapidly evolving component was species dependent, and varied from 1.6% to 4.8%. This fraction is also enriched for protein-coding sequence (while containing significant amounts of non-protein-coding sequence), suggesting it is under positive selection. We also classified segments according to conservation and GC content simultaneously. This analysis identified numerous sub-classes of those identified on the basis of conservation alone, but was nevertheless consistent with that classification. Software, data, and results available at www.maths.qut.edu.au/~keithj/. Genomic segments comprising the conservation classes available in BED format.
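The Bayesian segmentation model itself is beyond the scope of an abstract, but the underlying notion of assigning alignment windows to conservation classes can be illustrated with fixed percent-identity thresholds. This is a deliberate simplification: the paper infers classes with a Bayesian model, and the window size and cutoffs below are arbitrary:

```python
def classify_windows(seq_a, seq_b, window=10, slow=0.9, fast=0.5):
    """Bin aligned sequence windows into crude conservation classes.

    Simplified stand-in for illustration: the paper uses a Bayesian
    segmentation model, not fixed identity thresholds.
    """
    classes = []
    for start in range(0, min(len(seq_a), len(seq_b)) - window + 1, window):
        a = seq_a[start:start + window]
        b = seq_b[start:start + window]
        identity = sum(x == y for x, y in zip(a, b)) / window
        if identity >= slow:
            classes.append("slow")          # slowly evolving, highly conserved
        elif identity <= fast:
            classes.append("fast")          # rapidly evolving
        else:
            classes.append("intermediate")
    return classes

# A toy alignment: a perfectly conserved window followed by a divergent one.
labels = classify_windows("ACGTACGTAC" * 2, "ACGTACGTAC" + "TTTTTTTTTT")
```

A real analysis would additionally let the data determine the number of classes, which is what the paper's AIC-like information criterion is for.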
Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks.
Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly
2014-02-01
Hybrid mobile applications (apps) combine the features of Web applications and "native" mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources: file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies "bridges" that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources (the ability to read and write the contacts list, local files, etc.) to dozens of potentially malicious Web domains.
We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content and explain why they are ineffectual. We then present NoFrak, a capability-based defense against fracking attacks. NoFrak is platform-independent, compatible with any framework and embedded browser, requires no changes to the code of the existing hybrid apps, and does not break their advertising-supported business model.
Complete mitochondrial genome of the agarophyte red alga Gelidium vagum (Gelidiales).
Yang, Eun Chan; Kim, Kyeong Mi; Boo, Ga Hun; Lee, Jung-Hyun; Boo, Sung Min; Yoon, Hwan Su
2014-08-01
We describe the first complete mitochondrial genome of Gelidium vagum (Gelidiales) (24,901 bp, 30.4% GC content), an agar-producing red alga. The circular mitochondrial genome contains 43 genes, including 23 protein-coding, 18 tRNA and 2 rRNA genes. All the protein-coding genes have a typical ATG start codon. No introns were found. Two genes, secY and rps12, were overlapped by 41 bp.
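The quoted figures correspond to simple sequence computations. A minimal sketch of both checks follows; the fragment below is an invented example, not G. vagum sequence:

```python
def gc_content(seq):
    """Percent of bases in a nucleotide sequence that are G or C."""
    seq = seq.upper()
    return 100.0 * sum(base in "GC" for base in seq) / len(seq)

def has_atg_start(cds):
    """True if a coding sequence begins with the canonical ATG start codon."""
    return cds[:3].upper() == "ATG"

fragment = "ATGGCATTTAACGC"  # invented 14-base fragment for illustration
```

Applied genome-wide, the first function yields the overall GC fraction (30.4% for this mitochondrial genome), and the second is the check behind the statement that all 23 protein-coding genes open with ATG.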
Theory of Coding Informational Simulation.
1981-04-06
reach the value of several thousands; single-progression representation of this value is little attractive due to the unwieldiness. Here we approached a... the moment when the contents of the location counter must be changed to the larger or smaller side. Value and direction of change are assigned by the... the register of transition is formed by the algebraic addition of the contents of the location counter and the value of a change in the code of the latter (step
Coding Gains for Rank Decoding
1990-02-01
Approved for public release; distribution unlimited. U.S. Army Laboratory Command, Ballistic Research Laboratory, Aberdeen Proving Ground, Maryland 21005-5066. Contents: 1. Soft Decision Concepts; 2. Coding Gain; 3.
Are Military and Medical Ethics Necessarily Incompatible? A Canadian Case Study.
Rochon, Christiane; Williams-Jones, Bryn
2016-12-01
Military physicians are often perceived to be in a position of 'dual loyalty' because they have responsibilities towards their patients but also towards their employer, the military institution. Further, they subscribe to and are bound by two distinct codes of ethics (i.e., medical and military), each with its own set of values and duties, which could at first glance be considered very different or even incompatible. How, then, can military physicians reconcile these two codes of ethics and their distinct professional/institutional values, and assume their responsibilities towards both their patients and the military institution? To clarify this situation, and to show how such a reconciliation might be possible, we compared the history and content of two national professional codes of ethics: the Defence Ethics of the Canadian Armed Forces and the Code of Ethics of the Canadian Medical Association. Interestingly, even if the medical code is more focused on duties and responsibility while the military code is more focused on core values and is supported by a comprehensive ethical training program, they also have many elements in common. Further, both are based on the same core values of loyalty and integrity, and they are broad in scope but relatively flexible in application. While there are still important sources of tension between and limits within these two codes of ethics, there are fewer differences than may appear at first glance because the core values and principles of military and medical ethics are not so different.
Hjerpe, Per; Boström, Kristina Bengtsson; Lindblad, Ulf; Merlo, Juan
2012-12-01
Objective: To investigate the impact on ICD coding behaviour of a new case-mix reimbursement system based on coded patient diagnoses. The main hypothesis was that after the introduction of the new system the coding of chronic diseases like hypertension and cancer would increase and the variance in propensity for coding would decrease at both the physician and health care centre (HCC) levels. Design: Cross-sectional multilevel logistic regression analyses were performed in periods covering the time before and after the introduction of the new reimbursement system. Setting: Skaraborg primary care, Sweden. Subjects: All patients (n = 76 546 to 79 826) 50 years of age and older visiting 468 to 627 physicians at the 22 public HCCs in five consecutive time periods of one year each. Main outcome measures: Registered codes for hypertension and cancer diseases in the Skaraborg primary care database (SPCD). Results: After the introduction of the new reimbursement system the adjusted prevalence of hypertension and cancer in the SPCD increased from 17.4% to 32.2% and from 0.79% to 2.32%, respectively, probably partly due to increased diagnosis coding of indirect patient contacts. The total variance in the propensity for coding declined simultaneously at the physician level for both diagnosis groups. Conclusions: Changes in the healthcare reimbursement system may directly influence the contents of a research database that retrieves data from clinical practice. This should be taken into account when using such a database for research purposes, and the data should be validated for each diagnosis.
Industry self-regulation of alcohol marketing: a systematic review of content and exposure research.
Noel, Jonathan K; Babor, Thomas F; Robaina, Katherine
2017-01-01
With governments relying increasingly upon the alcohol industry's self-regulated marketing codes to restrict alcohol marketing activity, there is a need to summarize the findings of research relevant to alcohol marketing controls. This paper provides a systematic review of studies investigating the content of, and exposure to, alcohol marketing in relation to self-regulated guidelines. Peer-reviewed papers were identified through four literature search engines: SCOPUS, Web of Science, PubMed and PsychINFO. Non-peer-reviewed reports produced by public health agencies, alcohol research centers, non-governmental organizations and government research centers were also identified. Ninety-six publications met the inclusion criteria. Of the 19 studies evaluating a specific marketing code and 25 content analysis studies reviewed, all detected content that could be considered potentially harmful to children and adolescents, including themes that appeal strongly to young men. Of the 57 studies of alcohol advertising exposure, high levels of youth exposure and high awareness of alcohol advertising were found for television, radio, print, digital and outdoor advertisements. Youth exposure to alcohol advertising has increased over time, even as greater compliance with exposure thresholds has been documented. Violations of the content guidelines within self-regulated alcohol marketing codes are highly prevalent in certain media. Exposure to alcohol marketing, particularly among youth, is also prevalent. Taken together, the findings suggest that the current self-regulatory systems that govern alcohol marketing practices are not meeting their intended goal of protecting vulnerable populations. © 2016 Society for the Study of Addiction.
Jeong, Dahn; Presseau, Justin; ElChamaa, Rima; Naumann, Danielle N; Mascaro, Colin; Luconi, Francesca; Smith, Karen M; Kitto, Simon
2018-04-10
This scoping review explored the barriers and facilitators that influence engagement in and implementation of self-directed learning (SDL) in continuing professional development (CPD) for physicians in Canada. This review followed the six-stage scoping review framework of Arksey and O'Malley and of Daudt et al. In 2015, the authors searched eight online databases for English-language Canadian articles published January 2005-December 2015. To chart and analyze the data from the 17 included studies, they employed a two-step analysis process of conventional content analysis followed by directed coding guided by the Theoretical Domains Framework (TDF). Conventional content analysis generated five categories of barriers and facilitators: individual, program, technological, environmental, and workplace/organizational. Directed coding guided by the TDF allowed analysis of barriers and facilitators to behavior change according to two key groups: physicians engaging in SDL and SDL developers designing and implementing SDL programs. Of the 318 total barriers and facilitators coded, 290 (91.2%) were coded for physicians and 28 (8.8%) for SDL developers. The majority (209; 65.7%) were coded in four key TDF domains: environmental context and resources, social influences, beliefs about consequences, and behavioral regulation. This scoping review identified five categories of barriers and facilitators in the literature and four key TDF domains where most factors related to behavior change of physicians and SDL developers regarding SDL programs in CPD were coded. There was a significant gap in the literature about factors that may contribute to SDL developers' capacity to design and implement SDL programs in CPD.
Changing the Latitudes and Attitudes about Content Analysis Research
ERIC Educational Resources Information Center
Brank, Eve M.; Fox, Kathleen A.; Youstin, Tasha J.; Boeppler, Lee C.
2008-01-01
The current research employs the use of content analysis to teach research methods concepts among students enrolled in an upper division research methods course. Students coded and analyzed Jimmy Buffett song lyrics rather than using a downloadable database or collecting survey data. Students' knowledge of content analysis concepts increased after…
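The coding exercise described reduces to keyword-based frequency counting over text. A minimal sketch of how students might tally themes in lyric lines follows; the theme lexicon and the lyric line are invented for illustration, not taken from the study:

```python
from collections import Counter

# Hypothetical coding scheme, not the one used in the course.
THEMES = {
    "escape": {"island", "sail", "away", "latitude"},
    "leisure": {"beach", "drink", "sunset"},
}

def code_lyrics(text):
    """Count how many words in `text` fall under each theme lexicon."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    counts = Counter()
    for theme, lexicon in THEMES.items():
        counts[theme] = sum(w in lexicon for w in words)
    return counts

# An invented lyric line standing in for the students' coding material.
coded = code_lyrics("Sail away to the island, watch the sunset from the beach.")
```

Hand-coding against an agreed lexicon like this, then comparing counts across coders, is the core mechanic that makes content analysis teachable with any familiar text corpus.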
Atkinson, Mark J; Lohs, Jan; Kuhagen, Ilka; Kaufman, Julie; Bhaidani, Shamsu
2006-01-01
Objectives: This proof of concept (POC) study was designed to evaluate the use of an Internet-based bulletin board technology to aid parallel cross-cultural development of thematic content for a new set of patient-reported outcome measures (PROs). Methods: The POC study, conducted in Germany and the United States, utilized Internet Focus Groups (IFGs) to assure the validity of new PRO items across the two cultures; all items were designed to assess the impact of excess facial oil on individuals' lives. The on-line IFG activities were modeled after traditional face-to-face focus groups and organized by a common 'Topic' Guide designed with input from thought leaders in dermatology and health outcomes research. The two sets of IFGs were professionally moderated in the native language of each country. IFG moderators coded the thematic content of transcripts, and a frequency analysis of code endorsement was used to identify areas of content similarity and difference between the two countries. Based on this information, draft PRO items were designed and a majority (80%) of the original participants returned to rate the relative importance of the newly designed questions. Findings: The use of parallel cross-cultural content analysis of IFG transcripts permitted identification of the major content themes in each country as well as exploration of the possible reasons for any observed differences between the countries. Results from coded frequency counts and transcript reviews informed the design and wording of the test questions for the future PRO instrument(s). Subsequent ratings of item importance also deepened our understanding of potential areas of cross-cultural difference, differences that would be explored over the course of future validation studies involving these PROs. Conclusion: The use of IFGs for cross-cultural content development received positive reviews from participants and was found to be both cost and time effective.
The novel thematic coding methodology provided an empirical platform on which to develop culturally sensitive questionnaire content using the natural language of participants. Overall, the IFG responses and thematic analyses provided a thorough evaluation of similarities and differences in cross-cultural themes, which in turn acted as a sound base for the development of new PRO questionnaires. PMID:16995935
Combating Drug Abuse by Targeting Toll-Like Receptor 4 (TLR4)
2014-10-01
Observer Based Compensators for Nonlinear Systems
1989-03-31
Survey Probe Infrared Celestial Experiment (SPICE).
1985-01-01
ERIC Educational Resources Information Center
Rajagopalan, Kanavillil
2006-01-01
The objective of this response article is to think through some of what I see as the far-reaching implications of a recent paper by Eric Hauser (2005) entitled "Coding 'corrective recasts': the maintenance of meaning and more fundamental problems". Hauser makes a compelling, empirically-backed case for his contention that, contrary to widespread…
The complete mitochondrial genome of the Giant Manta ray, Manta birostris.
Hinojosa-Alvarez, Silvia; Díaz-Jaimes, Pindaro; Marcet-Houben, Marina; Gabaldón, Toni
2015-01-01
The complete mitochondrial genome of the giant manta ray (Manta birostris) consists of 18,075 bp and is A+T rich with a low G+C content. Gene organization and length are similar to those of other ray species. It comprises 13 protein-coding genes, 2 rRNA genes, 23 tRNA genes, and 1 non-coding sequence, the control region. We identified an AT tandem repeat region, similar to that reported in Mobula japanica.
Deciphering Neural Codes of Memory during Sleep
Chen, Zhe; Wilson, Matthew A.
2017-01-01
Memories of experiences are stored in the cerebral cortex. Sleep is critical for consolidating hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms on memory consolidation, and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here, we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches for analyzing sleep-associated neural codes (SANC). We focus on two analysis paradigms for sleep-associated memory, and propose a new unsupervised learning framework (“memory first, meaning later”) for unbiased assessment of SANC. PMID:28390699
Self-regulation of motor vehicle advertising: is it working in Australia?
Donovan, Robert J; Fielder, Lynda J; Ouschan, Robyn; Ewing, Michael
2011-05-01
There is growing concern that certain content within motor vehicle advertising may have a negative influence on the driving attitudes and behaviours of viewers, particularly young people, and hence a negative impact on road safety. In response, many developed countries have adopted a self-regulatory approach to motor vehicle advertising. However, it appears that many motor vehicle advertisements in Australia and elsewhere are not compliant with self-regulatory codes. Using standard commercial advertising methods, we exposed three motor vehicle ads that had been the subject of complaints to the Australian Advertising Standards Board (ASB) to N = 463 people aged 14-55 years, to assess the extent to which their perceptions of the content of the ads communicated themes that were contrary to the Australian self-regulatory code. All three ads were found to communicate messages contrary to the code (such as the vehicle's speed and acceleration capabilities). However, the ASB had upheld complaints about only one of the ads. Where motor vehicle advertising regulatory frameworks exist to guide motor vehicle advertisers as to what is and what is not acceptable in their advertising, greater efforts are needed to ensure compliance with these codes. One way may be to make it mandatory for advertisers to report consumer pre-testing of their advertising to ensure that undesirable messages are not being communicated to viewers. Copyright © 2010 Elsevier Ltd. All rights reserved.
Introduction to the Natural Anticipator and the Artificial Anticipator
NASA Astrophysics Data System (ADS)
Dubois, Daniel M.
2010-11-01
This short communication introduces the concept of the anticipator, one who anticipates, within the framework of computing anticipatory systems. The definition of anticipation deals with the concept of program. Indeed, the word program comes from "pro-gram", meaning "to write before" by anticipation, and denotes a plan for the programming of a mechanism, a sequence of coded instructions that can be inserted into a mechanism, or a sequence of coded instructions, such as genes or behavioural responses, that is part of an organism. Any natural or artificial programs are thus related to anticipatory rewriting systems, as shown in this paper. All the cells in the body, and the neurons in the brain, are programmed by the anticipatory genetic code, DNA, in a low-level language with four signs. The programs in computers are also computing anticipatory systems. It will be shown, on the one hand, that the genetic code DNA is a natural anticipator. As demonstrated by Nobel laureate McClintock [8], genomes are programmed. The fundamental program deals with the DNA genetic code. The properties of DNA consist of self-replication and self-modification. The self-replicating process leads to reproduction of the species, while the self-modifying process leads to new species, or to evolution and adaptation in existing ones. The genetic code DNA keeps its instructions in memory in the DNA coding molecule. The genetic code DNA is a rewriting system, from the DNA coding molecule to the DNA template molecule. The DNA template molecule is in turn a rewriting system to the messenger RNA molecule. The information is not destroyed during the execution of the rewriting program. On the other hand, it will be demonstrated that the Turing machine is an artificial anticipator. The Turing machine is a rewriting system. The head reads and writes, modifying the content of the tape. The information is destroyed during the execution of the program. This is an irreversible process. The input data are lost.
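The contrast the abstract draws between the two rewriting systems can be sketched in a few lines of code. This is a minimal illustration only, not the author's formal model; the mapping tables and function names are invented for the example.

```python
# Natural anticipator: DNA template -> mRNA rewriting.
# The template is read, not modified, so its information is preserved.
COMPLEMENT = {"A": "U", "T": "A", "G": "C", "C": "G"}

def transcribe(dna_template: str) -> str:
    """Rewrite a DNA template strand into messenger RNA (template survives)."""
    return "".join(COMPLEMENT[base] for base in dna_template)

# Artificial anticipator: a Turing-machine-like head that overwrites
# the tape in place, destroying the original content (irreversible).
def overwrite_tape(tape: list, rule: dict) -> list:
    for i, symbol in enumerate(tape):
        tape[i] = rule.get(symbol, symbol)  # the original symbol is lost
    return tape

mrna = transcribe("TACGGT")  # -> "AUGCCA"; the DNA string is untouched
tape = overwrite_tape(list("TACGGT"), {"T": "A", "A": "U", "C": "G", "G": "C"})
```

Both rewrites produce the same output here, but only the first keeps its input recoverable, which is the distinction the paper makes between reversible and irreversible rewriting.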
Tsien, Joe Z.
2013-01-01
Mapping and decoding brain activity patterns underlying learning and memory is of both great interest and immense challenge. At present, very little is known regarding many of the very basic questions about the neural codes of memory: are fear memories retrieved during the freezing state or non-freezing state of the animals? How do individual memory traces give rise to a holistic, real-time associative memory engram? How are memory codes regulated by synaptic plasticity? Here, by applying high-density electrode arrays and dimensionality-reduction decoding algorithms, we investigate hippocampal CA1 activity patterns of the trace fear conditioning memory code in inducible NMDA receptor knockout mice and their control littermates. Our analyses showed that the conditioned tone (CS) and unconditioned foot-shock (US) can evoke hippocampal ensemble responses in control and mutant mice. Yet, the temporal formats and contents of CA1 fear memory engrams differ significantly between the genotypes. The mutant mice with disabled NMDA receptor plasticity failed to generate CS-to-US or US-to-CS associative memory traces. Moreover, the mutant CA1 region lacked memory traces for "what at when" information that predicts the timing relationship between the conditioned tone and the foot shock. The degraded associative fear memory engram is further manifested in its lack of the intertwined and alternating temporal association between CS and US memory traces that is characteristic of holistic memory recall in the wild-type animals. Therefore, our study has decoded real-time memory contents, the timing relationship between CS and US, and the temporal organizing patterns of fear memory engrams, and demonstrated how hippocampal memory codes are regulated by NMDA receptor synaptic plasticity. PMID:24302990
Ciliates learn to diagnose and correct classical error syndromes in mating strategies
Clark, Kevin B.
2013-01-01
Preconjugal ciliates learn classical repetition error-correction codes to safeguard mating messages and replies from corruption by “rivals” and local ambient noise. Because individual cells behave as memory channels with Szilárd engine attributes, these coding schemes also might be used to limit, diagnose, and correct mating-signal errors due to noisy intracellular information processing. The present study, therefore, assessed whether heterotrich ciliates effect fault-tolerant signal planning and execution by modifying engine performance, and consequently entropy content of codes, during mock cell–cell communication. Socially meaningful serial vibrations emitted from an ambiguous artificial source initiated ciliate behavioral signaling performances known to advertise mating fitness with varying courtship strategies. Microbes, employing calcium-dependent Hebbian-like decision making, learned to diagnose then correct error syndromes by recursively matching Boltzmann entropies between signal planning and execution stages via “power” or “refrigeration” cycles. All eight serial contraction and reversal strategies incurred errors in entropy magnitude by the execution stage of processing. Absolute errors, however, subtended expected threshold values for single bit-flip errors in three-bit replies, indicating coding schemes protected information content throughout signal production. Ciliate preparedness for vibrations selectively and significantly affected the magnitude and valence of Szilárd engine performance during modal and non-modal strategy corrective cycles. But entropy fidelity for all replies mainly improved across learning trials as refinements in engine efficiency. Fidelity neared maximum levels for only modal signals coded in resilient three-bit repetition error-correction sequences. Together, these findings demonstrate microbes can elevate survival/reproductive success by learning to implement classical fault-tolerant information processing in social contexts. 
PMID:23966987
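The classical three-bit repetition error-correction code that the ciliate study refers to is easy to make concrete. The sketch below is a generic textbook implementation, not code from the study itself: each bit is sent three times, and a majority vote on the received block corrects any single bit-flip.

```python
def encode(bit: int) -> list:
    """Repeat each message bit three times (the repetition code)."""
    return [bit, bit, bit]

def decode(codeword: list) -> int:
    """Majority vote: corrects any single bit-flip in a three-bit block."""
    return 1 if sum(codeword) >= 2 else 0

# A single bit-flip in transmission is corrected on decoding.
sent = encode(1)        # [1, 1, 1]
received = [1, 0, 1]    # middle bit corrupted by noise
recovered = decode(received)  # -> 1
```

This is exactly the sense in which the abstract's "single bit-flip errors in three-bit replies" remain below the correction threshold: one corrupted bit out of three never changes the majority.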
Hjerpe, Per; Boström, Kristina Bengtsson; Lindblad, Ulf; Merlo, Juan
2012-01-01
Objective To investigate the impact on ICD coding behaviour of a new case-mix reimbursement system based on coded patient diagnoses. The main hypothesis was that after the introduction of the new system the coding of chronic diseases like hypertension and cancer would increase and the variance in propensity for coding would decrease on both physician and health care centre (HCC) levels. Design Cross-sectional multilevel logistic regression analyses were performed in periods covering the time before and after the introduction of the new reimbursement system. Setting Skaraborg primary care, Sweden. Subjects All patients (n = 76 546 to 79 826) 50 years of age and older visiting 468 to 627 physicians at the 22 public HCCs in five consecutive time periods of one year each. Main outcome measures Registered codes for hypertension and cancer diseases in Skaraborg primary care database (SPCD). Results After the introduction of the new reimbursement system the adjusted prevalence of hypertension and cancer in SPCD increased from 17.4% to 32.2% and from 0.79% to 2.32%, respectively, probably partly due to an increased diagnosis coding of indirect patient contacts. The total variance in the propensity for coding declined simultaneously at the physician level for both diagnosis groups. Conclusions Changes in the healthcare reimbursement system may directly influence the contents of a research database that retrieves data from clinical practice. This should be taken into account when using such a database for research purposes, and the data should be validated for each diagnosis. PMID:23130878
Yatawara, Lalani; Wickramasinghe, Susiji; Rajapakse, R P V J; Agatsuma, Takeshi
2010-09-01
In the present study, we determined the complete mitochondrial (mt) genome sequence (13,839 bp) of the parasitic nematode Setaria digitata and compared its structure and organization with Onchocerca volvulus, Dirofilaria immitis and Brugia malayi. The mt genome of S. digitata is slightly larger than the mt genomes of other filarial nematodes. The S. digitata mt genome contains 36 genes (12 protein-coding genes, 22 transfer RNAs and 2 ribosomal RNAs) that are typically found in metazoans. This genome has a high A+T content (75.1%) and a low G+C content (24.9%). The mt gene order for S. digitata is the same as those for O. volvulus, D. immitis and B. malayi, but it is distinctly different from the other nematodes compared. The start codons inferred in the mt genome of S. digitata are TTT, ATT, TTG, ATG, GTT and ATA. Interestingly, the initiation codon TTT is unique to the S. digitata mt genome, and four protein-coding genes use this codon as a translation initiation codon. Five protein-coding genes use TAG as a stop codon, whereas three genes use TAA and four genes use T as a termination codon. Of the 64 possible codons, only 57 are used in the mitochondrial protein-coding genes of S. digitata. T-rich codons such as TTT (18.9%), GTT (7.9%), TTG (7.8%), TAT (7%), ATT (5.7%), TCT (4.8%) and TTA (4.1%) are used more frequently. This pattern of codon usage reflects the strong bias for T in the mt genome of S. digitata. In conclusion, the present investigation provides new molecular data for future studies of the comparative mitochondrial genomics and systematics of parasitic nematodes of socio-economic importance. 2010 Elsevier B.V. All rights reserved.
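The codon-usage and base-composition statistics reported above amount to simple counting over the coding sequence. A minimal sketch, run here on a hypothetical T-rich fragment rather than the actual 13,839 bp S. digitata genome:

```python
from collections import Counter

def codon_usage(coding_seq: str) -> Counter:
    """Count non-overlapping codons in a protein-coding sequence."""
    codons = [coding_seq[i:i + 3] for i in range(0, len(coding_seq) - 2, 3)]
    return Counter(codons)

def at_content(seq: str) -> float:
    """Fraction of A and T bases (the study reports 75.1% for S. digitata)."""
    return (seq.count("A") + seq.count("T")) / len(seq)

usage = codon_usage("TTTGTTTTGTAT")  # hypothetical fragment: TTT, GTT, TTG, TAT
fraction_at = at_content("TTTGTTTTGTAT")
```

Dividing each codon's count by the total number of codons gives the percentages quoted in the abstract (e.g. TTT at 18.9%).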
Code of Federal Regulations, 2010 CFR
... 49 U.S.C. United States Code, 2009 Edition Title 49 - TRANSPORTATION SUBTITLE V - RAIL PROGRAMS PART B - ASSISTANCE CHAPTER 227 - STATE RAIL PLANS Sec. 22705 - Content §22705. Content (a) In General .—Each State rail plan shall, at a minimum, contain the following: (1) An inventory of the existing overall rail transportation system an...
Code of Federal Regulations, 2010 CFR
... 49 U.S.C. United States Code, 2011 Edition Title 49 - TRANSPORTATION SUBTITLE V - RAIL PROGRAMS PART B - ASSISTANCE CHAPTER 227 - STATE RAIL PLANS Sec. 22705 - Content §22705. Content (a) In General .—Each State rail plan shall, at a minimum, contain the following: (1) An inventory of the existing overall rail transportation system an...
Code of Federal Regulations, 2010 CFR
... 49 U.S.C. United States Code, 2014 Edition Title 49 - TRANSPORTATION SUBTITLE V - RAIL PROGRAMS PART B - ASSISTANCE CHAPTER 227 - STATE RAIL PLANS Sec. 22705 - Content §22705. Content (a) In General .—Each State rail plan shall, at a minimum, contain the following: (1) An inventory of the existing overall rail transportation system an...
ONR Far East Scientific Bulletin, Volume 7, Number 2, April-June 1982,
1982-01-01
A Turnover Analysis for Department of Defense Physicians
1988-06-01
Content Representation in the Human Medial Temporal Lobe
Liang, Jackson C.; Wagner, Anthony D.
2013-01-01
Current theories of medial temporal lobe (MTL) function focus on event content as an important organizational principle that differentiates MTL subregions. Perirhinal and parahippocampal cortices may play content-specific roles in memory, whereas hippocampal processing is alternately hypothesized to be content specific or content general. Despite anatomical evidence for content-specific MTL pathways, empirical data for content-based MTL subregional dissociations are mixed. Here, we combined functional magnetic resonance imaging with multiple statistical approaches to characterize MTL subregional responses to different classes of novel event content (faces, scenes, spoken words, sounds, visual words). Univariate analyses revealed that responses to novel faces and scenes were distributed across the anterior–posterior axis of MTL cortex, with face responses distributed more anteriorly than scene responses. Moreover, multivariate pattern analyses of perirhinal and parahippocampal data revealed spatially organized representational codes for multiple content classes, including nonpreferred visual and auditory stimuli. In contrast, anterior hippocampal responses were content general, with less accurate overall pattern classification relative to MTL cortex. Finally, posterior hippocampal activation patterns consistently discriminated scenes more accurately than other forms of content. Collectively, our findings indicate differential contributions of MTL subregions to event representation via a distributed code along the anterior–posterior axis of MTL that depends on the nature of event content. PMID:22275474
Breaking and Fixing Origin-Based Access Control in Hybrid Web/Mobile Application Frameworks
Georgiev, Martin; Jana, Suman; Shmatikov, Vitaly
2014-01-01
Hybrid mobile applications (apps) combine the features of Web applications and “native” mobile apps. Like Web applications, they are implemented in portable, platform-independent languages such as HTML and JavaScript. Like native apps, they have direct access to local device resources—file system, location, camera, contacts, etc. Hybrid apps are typically developed using hybrid application frameworks such as PhoneGap. The purpose of the framework is twofold. First, it provides an embedded Web browser (for example, WebView on Android) that executes the app's Web code. Second, it supplies “bridges” that allow Web code to escape the browser and access local resources on the device. We analyze the software stack created by hybrid frameworks and demonstrate that it does not properly compose the access-control policies governing Web code and local code, respectively. Web code is governed by the same origin policy, whereas local code is governed by the access-control policy of the operating system (for example, user-granted permissions in Android). The bridges added by the framework to the browser have the same local access rights as the entire application, but are not correctly protected by the same origin policy. This opens the door to fracking attacks, which allow foreign-origin Web content included into a hybrid app (e.g., ads confined in iframes) to drill through the layers and directly access device resources. Fracking vulnerabilities are generic: they affect all hybrid frameworks, all embedded Web browsers, all bridge mechanisms, and all platforms on which these frameworks are deployed. We study the prevalence of fracking vulnerabilities in free Android apps based on the PhoneGap framework. Each vulnerability exposes sensitive local resources—the ability to read and write contacts list, local files, etc.—to dozens of potentially malicious Web domains. 
We also analyze the defenses deployed by hybrid frameworks to prevent resource access by foreign-origin Web content and explain why they are ineffectual. We then present NoFrak, a capability-based defense against fracking attacks. NoFrak is platform-independent, compatible with any framework and embedded browser, requires no changes to the code of the existing hybrid apps, and does not break their advertising-supported business model. PMID:25485311
Mu-Like Prophage in Serogroup B Neisseria meningitidis Coding for Surface-Exposed Antigens
Masignani, Vega; Giuliani, Marzia Monica; Tettelin, Hervé; Comanducci, Maurizio; Rappuoli, Rino; Scarlato, Vincenzo
2001-01-01
Sequence analysis of the genome of Neisseria meningitidis serogroup B revealed the presence of an ∼35-kb region inserted within a putative gene coding for an ABC-type transporter. The region contains 46 open reading frames, 29 of which are colinear and homologous to the genes of Escherichia coli Mu phage. Two prophages with similar organizations were also found in serogroup A meningococcus, and one was found in Haemophilus influenzae. Early and late phage functions are well preserved in this family of Mu-like prophages. Several regions of atypical nucleotide content were identified. These likely represent genes acquired by horizontal transfer. Three of the acquired genes are shown to code for surface-associated antigens, and the encoded proteins are able to induce bactericidal antibodies. PMID:11254622
Iterative categorization (IC): a systematic technique for analysing qualitative data.
Neale, Joanne
2016-06-01
The processes of analysing qualitative data, particularly the stage between coding and publication, are often vague and/or poorly explained within addiction science and research more broadly. A simple but rigorous and transparent technique for analysing qualitative textual data, developed within the field of addiction, is described. The technique, iterative categorization (IC), is suitable for use with inductive and deductive codes and can support a range of common analytical approaches, e.g. thematic analysis, Framework, constant comparison, analytical induction, content analysis, conversational analysis, discourse analysis, interpretative phenomenological analysis and narrative analysis. Once the data have been coded, the only software required is a standard word processing package. Worked examples are provided. © 2016 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
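The core sorting step of iterative categorization, regrouping coded excerpts under each code so that all data on a topic can be read together, can be illustrated in a few lines. This is an illustrative sketch only (the technique itself requires nothing beyond a word processor, and IC additionally specifies iterative refinement of the resulting categories); the codes and excerpts are invented.

```python
from collections import defaultdict

# (code, excerpt) pairs as they might emerge from qualitative coding
coded_data = [
    ("stigma", "I never told my GP about it."),
    ("treatment access", "The waiting list was over six months."),
    ("stigma", "People assume the worst when they find out."),
]

# Regroup excerpts by code so each category can be reviewed as a whole
categories = defaultdict(list)
for code, excerpt in coded_data:
    categories[code].append(excerpt)
```

Each category now holds every excerpt carrying that code, ready for within-category comparison, which is the point in the analysis where IC's iterative refinement begins.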
A new framework for interactive quality assessment with application to light field coding
NASA Astrophysics Data System (ADS)
Viola, Irene; Ebrahimi, Touradj
2017-09-01
In recent years, light field has experienced a surge of popularity, mainly due to recent advances in acquisition and rendering technologies that have made it more accessible to the public. Thanks to image-based rendering techniques, light field contents can be rendered in real time on common 2D screens, allowing virtual navigation through the captured scenes in an interactive fashion. However, this richer representation of the scene poses the problem of reliable quality assessment for light field contents. In particular, while subjective methodologies that enable interaction have already been proposed, no work has been done on assessing how users interact with light field contents. In this paper, we propose a new framework to subjectively assess the quality of light field contents in an interactive manner and simultaneously track users' behaviour. The framework is successfully used to perform subjective assessment of two coding solutions. Moreover, statistical analysis performed on the results shows an interesting correlation between subjective scores and average interaction time.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dr. George L Mesina
Our ultimate goal is to create and maintain RELAP5-3D as the best software tool available to analyze nuclear power plants. This begins with writing excellent code and requires thorough testing. This document covers development of RELAP5-3D software, the behavior of the RELAP5-3D program that must be maintained, and code testing. RELAP5-3D must perform in a manner consistent with previous code versions, with backward compatibility for the sake of the users. Thus file operations, code termination, input and output must remain consistent in form and content while adding appropriate new files, input and output as new features are developed. As computer hardware, operating systems, and other software change, RELAP5-3D must adapt and maintain performance. The code must be thoroughly tested to ensure that it continues to perform robustly on the supported platforms. The coding must be written in a consistent manner that makes the program easy to read, to reduce the time and cost of development, maintenance and error resolution. The programming guidelines presented here are intended to institutionalize a consistent way of writing FORTRAN code for the RELAP5-3D computer program that will minimize errors and rework. A common format and organization of program units creates a unifying look and feel to the code. This in turn increases readability and reduces the time required for maintenance, development and debugging. It also aids new programmers in reading and understanding the program. Therefore, when undertaking development of the RELAP5-3D computer program, the programmer must write computer code that follows these guidelines. This set of programming guidelines creates a framework of good programming practices, such as initialization, structured programming, and vector-friendly coding. It sets out formatting rules for lines of code, such as indentation, capitalization, spacing, etc. It creates limits on program units, such as subprograms, functions, and modules. It establishes documentation guidance on internal comments. The guidelines apply to both existing and new subprograms. They are written for both FORTRAN 77 and FORTRAN 95. The guidelines are not so rigorous as to inhibit a programmer's unique style, but do restrict the variations in acceptable coding to create sufficient commonality that new readers will find the coding in each new subroutine familiar. It is recognized that this is a "living" document and must be updated as languages, compilers, and computer hardware and software evolve.
A novel multiple description scalable coding scheme for mobile wireless video transmission
NASA Astrophysics Data System (ADS)
Zheng, Haifeng; Yu, Lun; Chen, Chang Wen
2005-03-01
We propose in this paper a novel multiple description scalable coding (MDSC) scheme based on the in-band motion-compensated temporal filtering (IBMCTF) technique, in order to achieve high video coding performance and robust video transmission. The input video sequence is first split into equal-sized groups of frames (GOFs). Within a GOF, each frame is hierarchically decomposed by the discrete wavelet transform. Since there is a direct relationship between wavelet coefficients and what they represent in the image content after wavelet decomposition, we are able to reorganize the spatial orientation trees to generate multiple bit-streams, and we employ the SPIHT algorithm to achieve high coding efficiency. We have shown that multiple bit-stream transmission is very effective in combating error propagation in both Internet video streaming and mobile wireless video. Furthermore, we adopt the IBMCTF scheme to remove redundancy across inter-frames along the temporal direction using motion-compensated temporal filtering; thus, high coding performance and flexible scalability can be provided by this scheme. In order to make compressed video resilient to channel errors and to guarantee robust video transmission over mobile wireless channels, we add redundancy to each bit-stream and apply an error concealment strategy for lost motion vectors. Unlike traditional multiple description schemes, the integration of these techniques enables us to generate more than two bit-streams, which may be more appropriate for multiple-antenna transmission of compressed video. Simulation results on standard video sequences have shown that the proposed scheme provides a flexible tradeoff between coding efficiency and error resilience.
Portelli, Geoffrey; Barrett, John M; Hilgen, Gerrit; Masquelier, Timothée; Maccione, Alessandro; Di Marco, Stefano; Berdondini, Luca; Kornprobst, Pierre; Sernagor, Evelyne
2016-01-01
How a population of retinal ganglion cells (RGCs) encodes the visual scene remains an open question. Going beyond individual RGC coding strategies, results in salamander suggest that the relative latencies of a RGC pair encode spatial information. Thus, a population code based on this concerted spiking could be a powerful mechanism to transmit visual information rapidly and efficiently. Here, we tested this hypothesis in mouse by recording simultaneous light-evoked responses from hundreds of RGCs, at pan-retinal level, using a new generation of large-scale, high-density multielectrode array consisting of 4096 electrodes. Interestingly, we did not find any RGCs exhibiting a clear latency tuning to the stimuli, suggesting that in mouse, individual RGC pairs may not provide sufficient information. We show that a significant amount of information is encoded synergistically in the concerted spiking of large RGC populations. Thus, the RGC population response described with relative activities, or ranks, provides more relevant information than classical independent spike count- or latency- based codes. In particular, we report for the first time that when considering the relative activities across the whole population, the wave of first stimulus-evoked spikes is an accurate indicator of stimulus content. We show that this coding strategy coexists with classical neural codes, and that it is more efficient and faster. Overall, these novel observations suggest that already at the level of the retina, concerted spiking provides a reliable and fast strategy to rapidly transmit new visual scenes.
Hoelzer, Simon; Schweiger, Ralf K; Dudeck, Joachim
2003-01-01
With the introduction of ICD-10 as the standard for diagnostics, it becomes necessary to develop an electronic representation of its complete content, inherent semantics, and coding rules. The authors' design relates to the current efforts by the CEN/TC 251 to establish a European standard for hierarchical classification systems in health care. The authors have developed an electronic representation of ICD-10 with the eXtensible Markup Language (XML) that facilitates integration into current information systems and coding software, taking different languages and versions into account. In this context, XML provides a complete processing framework of related technologies and standard tools that helps develop interoperable applications. XML provides semantic markup. It allows domain-specific definition of tags and hierarchical document structure. The idea of linking and thus combining information from different sources is a valuable feature of XML. In addition, XML topic maps are used to describe relationships between different sources, or "semantically associated" parts of these sources. The issue of achieving a standardized medical vocabulary becomes more and more important with the stepwise implementation of diagnostically related groups, for example. The aim of the authors' work is to provide a transparent and open infrastructure that can be used to support clinical coding and to develop further software applications. The authors are assuming that a comprehensive representation of the content, structure, inherent semantics, and layout of medical classification systems can be achieved through a document-oriented approach. PMID:12807813
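A document-oriented, hierarchical representation of a classification like ICD-10 can be sketched with the standard library's XML tools. The element and attribute names below are illustrative assumptions, not the CEN/TC 251 schema the authors describe.

```python
import xml.etree.ElementTree as ET

# Build a tiny ICD-10-style fragment: chapter -> block -> category,
# mirroring the hierarchical document structure discussed above.
chapter = ET.Element("chapter", code="IX", title="Diseases of the circulatory system")
block = ET.SubElement(chapter, "block", code="I10-I15", title="Hypertensive diseases")
category = ET.SubElement(block, "category", code="I10")
category.text = "Essential (primary) hypertension"

# Serialize to a string, as a coding application might exchange it.
xml_text = ET.tostring(chapter, encoding="unicode")
```

Because the markup is semantic (codes and titles as attributes, hierarchy as nesting), the same document can drive coding software, cross-language lookups, or topic-map links without a separate data model.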
Idaho National Engineering Laboratory code assessment of the Rocky Flats transuranic waste
DOE Office of Scientific and Technical Information (OSTI.GOV)
NONE
1995-07-01
This report is an assessment of the content codes associated with transuranic waste shipped from the Rocky Flats Plant in Golden, Colorado, to INEL. The primary objective of this document is to characterize and describe the transuranic wastes shipped to INEL from Rocky Flats by item description code (IDC). This information will aid INEL in determining if the waste meets the waste acceptance criteria (WAC) of the Waste Isolation Pilot Plant (WIPP). The waste covered by this content code assessment was shipped from Rocky Flats between 1985 and 1989. These years coincide with the dates for information available in the Rocky Flats Solid Waste Information Management System (SWIMS). The majority of waste shipped during this time was certified to the existing WIPP WAC. This waste is referred to as precertified waste. Reassessment of these precertified waste containers is necessary because of changes in the WIPP WAC. To accomplish this assessment, the analytical and process knowledge available on the various IDCs used at Rocky Flats was evaluated. Rocky Flats sources for this information include employee interviews, SWIMS, the Transuranic Waste Certification Program, the Transuranic Waste Inspection Procedure, Backlog Waste Baseline Books, the WIPP Experimental Waste Characterization Program (headspace analysis), and other related documents, procedures, and programs. Summaries are provided of: (a) certification information, (b) waste description, (c) generation source, (d) recovery method, (e) waste packaging and handling information, (f) container preparation information, (g) assay information, (h) inspection information, (i) analytical data, and (j) RCRA characterization.
2 CFR 1.100 - Content of this title.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 2 Grants and Agreements 1 2012-01-01 2012-01-01 false Content of this title. 1.100 Section 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of...
2 CFR 1.100 - Content of this title.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 2 Grants and Agreements 1 2014-01-01 2014-01-01 false Content of this title. 1.100 Section 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of...
2 CFR 1.100 - Content of this title.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 2 Grants and Agreements 1 2013-01-01 2013-01-01 false Content of this title. 1.100 Section 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of...
2 CFR 1.105 - Organization and subtitle content.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false Organization and subtitle content. 1.105 Section 1.105 Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.105 Organization and subtitle content. (a) This title is organized into...
2 CFR 1.100 - Content of this title.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 2 Grants and Agreements 1 2011-01-01 2011-01-01 false Content of this title. 1.100 Section 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of...
2 CFR 1.100 - Content of this title.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 2 Grants and Agreements 1 2010-01-01 2010-01-01 false Content of this title. 1.100 Section 1.100 Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of this title. This title contains— (a) Office of Management and...
Content Analysis of the "Professional School Counseling" Journal: The First Ten Years
ERIC Educational Resources Information Center
Falco, Lia D.; Bauman, Sheri; Sumnicht, Zachary; Engelstad, Alicia
2011-01-01
The authors conducted a content analysis of the articles published in the first 10 volumes of the "Professional School Counseling" (PSC) journal, beginning in October 1997 when "The School Counselor" merged with "Elementary School Counseling and Guidance". The analysis coded a total of 571 articles into 20 content categories. Findings address the…
A study of thematic content in hospital mission statements: a question of values.
Williams, Jaime; Smythe, William; Hadjistavropoulos, Thomas; Malloy, David C; Martin, Ronald
2005-01-01
We examined the content of Canadian hospital mission statements using thematic content analysis. The mission statements that we studied varied in terms of both content and length. Although there was some content related to goals designed to ensure organizational visibility, survival, and competitiveness, the domain of values predominated over our entire coding structure. The primary value-related theme that emerged concerned the importance of patient care.
Wavelet-based reversible watermarking for authentication
NASA Astrophysics Data System (ADS)
Tian, Jun
2002-04-01
In the digital information age, digital content (audio, image, and video) can be easily copied, manipulated, and distributed. Copyright protection and content authentication of digital content have become urgent problems for content owners and distributors. Digital watermarking has provided a valuable solution to this problem. Based on its application scenario, most digital watermarking methods can be divided into two categories: robust watermarking and fragile watermarking. As a special subset of fragile watermarks, a reversible watermark (also called a lossless, invertible, or erasable watermark) enables the recovery of the original, unwatermarked content after the watermarked content has been detected to be authentic. Such reversibility to get back unwatermarked content is highly desired in sensitive imagery, such as military data and medical data. In this paper we present a reversible watermarking method based on an integer wavelet transform. We look into the binary representation of each wavelet coefficient and embed an extra bit into each expandable wavelet coefficient. The location map of all expanded coefficients is coded by JBIG2 compression and these coefficient values are losslessly compressed by arithmetic coding. Besides these two compressed bit streams, an SHA-256 hash of the original image is also embedded for authentication purposes.
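The bit-expansion idea behind this kind of reversible watermarking can be sketched in a few lines: an "expandable" integer coefficient c carries a payload bit b as 2c + b, and both are recovered exactly on extraction. This illustrates only the embedding rule, not the paper's JBIG2/arithmetic-coded location-map machinery.

```python
def embed_bit(coeff, bit):
    """Expand integer coefficient c to 2*c + b, reversibly carrying payload bit b (0 or 1)."""
    return 2 * coeff + bit

def extract_bit(expanded):
    """Recover (original coefficient, payload bit) losslessly from an expanded value."""
    return expanded >> 1, expanded & 1  # arithmetic shift handles negative coefficients
</```

Reversibility is exact even for negative coefficients, which is why the scheme suits medical and military imagery where the unwatermarked original must be recoverable bit-for-bit.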
3D video coding: an overview of present and upcoming standards
NASA Astrophysics Data System (ADS)
Merkle, Philipp; Müller, Karsten; Wiegand, Thomas
2010-07-01
An overview of existing and upcoming 3D video coding standards is given. Various different 3D video formats are available, each with individual pros and cons. The 3D video formats can be separated into two classes: video-only formats (such as stereo and multiview video) and depth-enhanced formats (such as video plus depth and multiview video plus depth). Since all these formats exist of at least two video sequences and possibly additional depth data, efficient compression is essential for the success of 3D video applications and technologies. For the video-only formats the H.264 family of coding standards already provides efficient and widely established compression algorithms: H.264/AVC simulcast, H.264/AVC stereo SEI message, and H.264/MVC. For the depth-enhanced formats standardized coding algorithms are currently being developed. New and specially adapted coding approaches are necessary, as the depth or disparity information included in these formats has significantly different characteristics than video and is not displayed directly, but used for rendering. Motivated by evolving market needs, MPEG has started an activity to develop a generic 3D video standard within the 3DVC ad-hoc group. Key features of the standard are efficient and flexible compression of depth-enhanced 3D video representations and decoupling of content creation and display requirements.
Turco, Gina; Schnable, James C.; Pedersen, Brent; Freeling, Michael
2013-01-01
Conserved non-coding sequences (CNS) are islands of non-coding sequence that, like protein coding exons, show less divergence in sequence between related species than functionless DNA. Several CNSs have been demonstrated experimentally to function as cis-regulatory regions. However, the specific functions of most CNSs remain unknown. Previous searches for CNS in plants have either anchored on exons and only identified nearby sequences or required years of painstaking manual annotation. Here we present an open source tool that can accurately identify CNSs between any two related species with sequenced genomes, including both those immediately adjacent to exons and distal sequences separated by >12 kb of non-coding sequence. We have used this tool to characterize new motifs, associate CNSs with additional functions, and identify previously undetected genes encoding RNA and protein in the genomes of five grass species. We provide a list of 15,363 orthologous CNSs conserved across all grasses tested. We were also able to identify regulatory sequences present in the common ancestor of grasses that have been lost in one or more extant grass lineages. Lists of orthologous gene pairs and associated CNSs are provided for reference inbred lines of arabidopsis, Japonica rice, foxtail millet, sorghum, brachypodium, and maize. PMID:23874343
Antúnez, Lucía; Giménez, Ana; Maiche, Alejandro; Ares, Gastón
2015-01-01
To study the influence of 2 interpretational aids of front-of-package (FOP) nutrition labels (color code and text descriptors) on attentional capture and consumers' understanding of nutritional information. A full factorial design was used to assess the influence of color code and text descriptors using visual search and eye tracking. Ten trained assessors participated in the visual search study and 54 consumers completed the eye-tracking study. In the visual search study, assessors were asked to indicate whether there was a label high in fat within sets of mayonnaise labels with different FOP labels. In the eye-tracking study, participants answered a set of questions about the nutritional content of labels. The researchers used logistic regression to evaluate the influence of interpretational aids of FOP nutrition labels on the percentage of correct answers. Analyses of variance were used to evaluate the influence of the studied variables on attentional measures and participants' response times. Response times were significantly higher for monochromatic FOP labels compared with color-coded ones (3,225 vs 964 ms; P < .001), which suggests that color codes increase attentional capture. The highest number and duration of fixations and visits were recorded on labels that did not include color codes or text descriptors (P < .05). The lowest percentage of incorrect answers was observed when the nutrient level was indicated using color code and text descriptors (P < .05). The combination of color codes and text descriptors seems to be the most effective alternative to increase attentional capture and understanding of nutritional information. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Seligmann, Hervé
2013-03-01
Usual DNA→RNA transcription exchanges T→U. Assuming different systematic symmetric nucleotide exchanges during translation, some GenBank RNAs match exactly human mitochondrial sequences (exchange rules listed in decreasing transcript frequencies): C↔U, A↔U, A↔U+C↔G (two nucleotide pairs exchanged), G↔U, A↔G, C↔G, none for A↔C, A↔G+C↔U, and A↔C+G↔U. Most unusual transcripts involve exchanging uracil. Independent measures of rates of rare replicational enzymatic DNA nucleotide misinsertions predict frequencies of RNA transcripts systematically exchanging the corresponding misinserted nucleotides. Exchange transcripts self-hybridize less than other gene regions, self-hybridization increases with length, suggesting endoribonuclease-limited elongation. Blast detects stop codon depleted putative protein coding overlapping genes within exchange-transcribed mitochondrial genes. These align with existing GenBank proteins (mainly metazoan origins, prokaryotic and viral origins underrepresented). These GenBank proteins frequently interact with RNA/DNA, are membrane transporters, or are typical of mitochondrial metabolism. Nucleotide exchange transcript frequencies increase with overlapping gene densities and stop densities, indicating finely tuned counterbalancing regulation of expression of systematic symmetric nucleotide exchange-encrypted proteins. Such expression necessitates combined activities of suppressor tRNAs matching stops, and nucleotide exchange transcription. Two independent properties confirm predicted exchanged overlap coding genes: discrepancy of third codon nucleotide contents from replicational deamination gradients, and codon usage according to circular code predictions. Predictions from both properties converge, especially for frequent nucleotide exchange types. Nucleotide exchanging transcription apparently increases coding densities of protein coding genes without lengthening genomes, revealing unsuspected functional DNA coding potential. 
Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
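A systematic symmetric nucleotide exchange such as the C↔U rule described above amounts to swapping two letters everywhere in a transcript, which `str.translate` expresses compactly. This is a minimal sketch of the exchange operation itself, not of the GenBank matching pipeline.

```python
def exchange_transcript(rna, pair):
    """Swap the two nucleotides in `pair` (e.g. "CU") everywhere in the transcript."""
    a, b = pair
    return rna.translate(str.maketrans(a + b, b + a))
```

Applying the same exchange twice returns the original sequence, which is what makes these rules symmetric; the two-pair rules (e.g. A↔U together with C↔G) can be modeled by composing two such swaps.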
Social media use by community-based organizations conducting health promotion: a content analysis.
Ramanadhan, Shoba; Mendez, Samuel R; Rao, Megan; Viswanath, Kasisomayajula
2013-12-05
Community-based organizations (CBOs) are critical channels for the delivery of health promotion programs. Much of their influence comes from the relationships they have with community members and other key stakeholders and they may be able to harness the power of social media tools to develop and maintain these relationships. There are limited data describing if and how CBOs are using social media. This study assesses the extent to which CBOs engaged in health promotion use popular social media channels, the types of content typically shared, and the extent to which the interactive aspects of social media tools are utilized. We assessed the social media presence and patterns of usage of CBOs engaged in health promotion in Boston, Lawrence, and Worcester, Massachusetts. We coded content on three popular channels: Facebook, Twitter, and YouTube. We used content analysis techniques to quantitatively summarize posts, tweets, and videos on these channels, respectively. For each organization, we coded all content put forth by the CBO on the three channels in a 30-day window. Two coders were trained and conducted the coding. Data were collected between November 2011 and January 2012. A total of 166 organizations were included in our census. We found that 42% of organizations used at least one of the channels of interest. Across the three channels, organization promotion was the most common theme for content (66% of posts, 63% of tweets, and 93% of videos included this content). Most organizations updated Facebook and Twitter content at rates close to recommended frequencies. We found limited interaction/engagement with audience members. Much of the use of social media tools appeared to be uni-directional, a flow of information from the organization to the audience. By better leveraging opportunities for interaction and user engagement, these organizations can reap greater benefits from the non-trivial investment required to use social media well. 
Future research should assess links between use patterns and organizational characteristics, staff perspectives, and audience engagement.
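The quantitative summarization step in a content analysis like this one can be sketched as a tally of theme codes per channel. The posts and code names below are invented examples, not the study's data or codebook.

```python
from collections import Counter

# Each record carries the theme codes assigned by trained coders (illustrative data).
coded_posts = [
    {"channel": "Facebook", "codes": {"organization_promotion", "event"}},
    {"channel": "Facebook", "codes": {"organization_promotion"}},
    {"channel": "Twitter",  "codes": {"health_information"}},
]

def theme_prevalence(posts, channel):
    """Percent of a channel's posts containing each theme code."""
    subset = [p for p in posts if p["channel"] == channel]
    counts = Counter(code for p in subset for code in p["codes"])
    return {code: 100 * n / len(subset) for code, n in counts.items()}
```

Because a post may carry several codes, the percentages are per-theme prevalences and need not sum to 100, matching how the abstract reports theme frequencies per channel.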
Banna, Jinan C; Gilliland, Betsy; Keefe, Margaret; Zheng, Dongping
2016-09-26
Understanding views about what constitutes a healthy diet in diverse populations may inform design of culturally tailored behavior change interventions. The objective of this study was to describe perspectives on healthy eating among Chinese and American young adults and identify similarities and differences between these groups. Chinese (n = 55) and American (n = 57) undergraduate students in Changsha, Hunan, China and Honolulu, Hawai'i, U.S.A. composed one- to two-paragraph responses to the following prompt: "What does the phrase 'a healthy diet' mean to you?" Researchers used content analysis to identify predominant themes using Dedoose (version 5.2.0, SocioCultural Research Consultants, LLC, Los Angeles, CA, 2015). Three researchers independently coded essays and grouped codes with similar content. The team then identified themes and sorted them in discussion. Two researchers then deductively coded the entire data set using eight codes developed from the initial coding and calculated total code counts for each group of participants. Chinese students mentioned physical outcomes, such as maintaining immunity and digestive health. Timing of eating, with regular meals and greater intake during day than night, was emphasized. American students described balancing among food groups and balancing consumption with exercise, with physical activity considered essential. Students also stated that food components such as sugar, salt and fat should be avoided in large quantities. Similarities included principles such as moderation and fruits and vegetables as nutritious, and differences included foods to be restricted and meal timing. While both groups emphasized specific foods and guiding dietary principles, several distinctions in viewpoints emerged. The diverse views may reflect food-related messages to which participants are exposed both through the media and educational systems in their respective countries. 
Future studies may further examine themes that may not typically be addressed in nutrition education programs in diverse populations of young adults. Gaining greater knowledge of the ways in which healthy eating is viewed will allow for development of interventions that are sensitive to the traditional values and predominant views of health in various groups.
Auditory-neurophysiological responses to speech during early childhood: Effects of background noise
White-Schwoch, Travis; Davies, Evan C.; Thompson, Elaine C.; Carr, Kali Woodruff; Nicol, Trent; Bradlow, Ann R.; Kraus, Nina
2015-01-01
Early childhood is a critical period of auditory learning, during which children are constantly mapping sounds to meaning. But learning rarely occurs under ideal listening conditions—children are forced to listen against a relentless din. This background noise degrades the neural coding of these critical sounds, in turn interfering with auditory learning. Despite the importance of robust and reliable auditory processing during early childhood, little is known about the neurophysiology underlying speech processing in children so young. To better understand the physiological constraints these adverse listening scenarios impose on speech sound coding during early childhood, auditory-neurophysiological responses were elicited to a consonant-vowel syllable in quiet and background noise in a cohort of typically-developing preschoolers (ages 3–5 yr). Overall, responses were degraded in noise: they were smaller, less stable across trials, slower, and there was poorer coding of spectral content and the temporal envelope. These effects were exacerbated in response to the consonant transition relative to the vowel, suggesting that the neural coding of spectrotemporally-dynamic speech features is more tenuous in noise than the coding of static features—even in children this young. Neural coding of speech temporal fine structure, however, was more resilient to the addition of background noise than coding of temporal envelope information. Taken together, these results demonstrate that noise places a neurophysiological constraint on speech processing during early childhood by causing a breakdown in neural processing of speech acoustics. These results may explain why some listeners have inordinate difficulties understanding speech in noise. 
Speech-elicited auditory-neurophysiological responses offer objective insight into listening skills during early childhood by reflecting the integrity of neural coding in quiet and noise; this paper documents typical response properties in this age group. These normative metrics may be useful clinically to evaluate auditory processing difficulties during early childhood. PMID:26113025
Quality improvement utilizing in-situ simulation for a dual-hospital pediatric code response team.
Yager, Phoebe; Collins, Corey; Blais, Carlene; O'Connor, Kathy; Donovan, Patricia; Martinez, Maureen; Cummings, Brian; Hartnick, Christopher; Noviski, Natan
2016-09-01
Given the rarity of in-hospital pediatric emergency events, identification of gaps and inefficiencies in the code response can be difficult. In-situ, simulation-based medical education programs can identify unrecognized systems-based challenges. We hypothesized that developing an in-situ, simulation-based pediatric emergency response program would identify latent inefficiencies in a complex, dual-hospital pediatric code response system and allow rapid intervention testing to improve performance before implementation at an institutional level. Pediatric leadership from two hospitals with a shared pediatric code response team employed the Institute for Healthcare Improvement's (IHI) Breakthrough Model for Collaborative Improvement to design a program consisting of Plan-Do-Study-Act cycles occurring in a simulated environment. The objectives of the program were to 1) identify inefficiencies in our pediatric code response; 2) correlate to current workflow; 3) employ an iterative process to test quality improvement interventions in a safe environment; and 4) measure performance before actual implementation at the institutional level. Twelve dual-hospital, in-situ, simulated, pediatric emergencies occurred over one year. The initial simulated event allowed identification of inefficiencies including delayed provider response, delayed initiation of cardiopulmonary resuscitation (CPR), and delayed vascular access. These gaps were linked to process issues including unreliable code pager activation, slow elevator response, and lack of responder familiarity with layout and contents of code cart. From first to last simulation with multiple simulated process improvements, code response time for secondary providers coming from the second hospital decreased from 29 to 7 min, time to CPR initiation decreased from 90 to 15 s, and vascular access obtainment decreased from 15 to 3 min. 
Some of these simulated process improvements were adopted into the institutional response while others continue to be trended over time for evidence that observed changes represent a true new state of control. Utilizing the IHI's Breakthrough Model, we developed a simulation-based program to 1) successfully identify gaps and inefficiencies in a complex, dual-hospital, pediatric code response system and 2) provide an environment in which to safely test quality improvement interventions before institutional dissemination. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Improvements of MCOR: A Monte Carlo depletion code system for fuel assembly reference calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tippayakul, C.; Ivanov, K.; Misu, S.
2006-07-01
This paper presents the improvements of MCOR, a Monte Carlo depletion code system for fuel assembly reference calculations. The improvements of MCOR were initiated by the cooperation between the Penn State Univ. and AREVA NP to enhance the original Penn State Univ. MCOR version in order to be used as a new Monte Carlo depletion analysis tool. Essentially, a new depletion module using KORIGEN is utilized to replace the existing ORIGEN-S depletion module in MCOR. Furthermore, the online burnup cross section generation by the Monte Carlo calculation is implemented in the improved version instead of using the burnup cross section library pre-generated by a transport code. Other code features have also been added to make the new MCOR version easier to use. This paper, in addition, presents the result comparisons of the original and the improved MCOR versions against CASMO-4 and OCTOPUS. It was observed in the comparisons that there were quite significant improvements of the results in terms of k-inf, fission rate distributions and isotopic contents. (authors)
Performance comparison of AV1, HEVC, and JVET video codecs on 360 (spherical) video
NASA Astrophysics Data System (ADS)
Topiwala, Pankaj; Dai, Wei; Krishnan, Madhu; Abbas, Adeel; Doshi, Sandeep; Newman, David
2017-09-01
This paper compares the coding efficiency performance on 360 videos of three software codecs: (a) the AV1 video codec from the Alliance for Open Media (AOM); (b) the HEVC Reference Software HM; and (c) the JVET JEM Reference SW. Note that 360 video is especially challenging content, in that one codes at full resolution globally but typically looks locally (in a viewport), which magnifies errors. These are tested in two different projection formats, ERP and RSP, to check consistency. Performance is tabulated for 1-pass encoding on two fronts: (1) objective performance based on end-to-end (E2E) metrics such as SPSNR-NN and WS-PSNR, currently developed in the JVET committee; and (2) informal subjective assessment of static viewports. Constant quality encoding is performed with all three codecs for an unbiased comparison of the core coding tools. Our general conclusion is that under constant quality coding, AV1 underperforms HEVC, which underperforms JVET. We also test with rate control, where AV1 currently underperforms the open source x265 HEVC codec. Objective and visual evidence is provided.
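The WS-PSNR metric mentioned above weights squared errors by latitude so that oversampled polar rows of an equirectangular (ERP) frame count less. The sketch below uses the cosine weighting commonly given for ERP in JVET test conditions; it is an illustration, not the committee's reference implementation.

```python
import math

def ws_psnr(ref, test, max_val=255.0):
    """Weighted-spherical PSNR for ERP frames given as equal-sized 2D lists (rows = height)."""
    height = len(ref)
    num = den = 0.0
    for j in range(height):
        # Row weight: cosine of the latitude at the center of row j.
        w = math.cos((j + 0.5 - height / 2) * math.pi / height)
        for a, b in zip(ref[j], test[j]):
            num += w * (a - b) ** 2
            den += w
    wmse = num / den
    if wmse == 0:
        return float("inf")
    return 10 * math.log10(max_val ** 2 / wmse)
```

Compared with plain PSNR, this down-weights distortion near the poles, which better reflects what a viewer sees through a viewport on the sphere.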
NASA Astrophysics Data System (ADS)
Lam, Meng Chun; Nizam, Siti Soleha Muhammad; Arshad, Haslina; A'isyah Ahmad Shukri, Saidatul; Hashim, Nurhazarifah Che; Putra, Haekal Mozzia; Abidin, Rimaniza Zainal
2017-10-01
This article discusses the usability of an interactive application for halal products using Optical Character Recognition (OCR) and Augmented Reality (AR) technologies. Among the problems identified in this study is that consumers have little knowledge about the E-Code. Therefore, users often have doubts about the halal status of a product. Nowadays, the integrity of halal status can be doubtful due to the actions of some irresponsible people spreading false information about a product. Therefore, the application developed in this study, which uses OCR and AR technology, will help users to identify the information content of a product by scanning the E-Code label and to determine the halal status of the product by scanning the product's brand. In this application, the E-Code on a product's label is scanned using OCR technology to display information about the E-Code, and the product's brand is scanned using AR technology to display the halal status of the product. The findings reveal that users are satisfied with this application and find it useful and easy to use.
Comparison of simple sequence repeats in 19 Archaea.
Trivedi, S
2006-12-05
All organisms that have been studied until now have been found to have differential distribution of simple sequence repeats (SSRs), with more SSRs in intergenic than in coding sequences. SSR distribution was investigated in Archaea genomes where complete chromosome sequences of 19 Archaea were analyzed with the program SPUTNIK to find di- to penta-nucleotide repeats. The number of repeats was determined for the complete chromosome sequences and for the coding and non-coding sequences. Different from what has been found for other groups of organisms, there is an abundance of SSRs in coding regions of the genome of some Archaea. Dinucleotide repeats were rare and CG repeats were found in only two Archaea. In general, trinucleotide repeats are the most abundant SSR motifs; however, pentanucleotide repeats are abundant in some Archaea. Some of the tetranucleotide and pentanucleotide repeat motifs are organism specific. In general, repeats are short and CG-rich repeats are present in Archaea having a CG-rich genome. Among the 19 Archaea, SSR density was not correlated with genome size or with optimum growth temperature. Pentanucleotide density had an inverse correlation with the CG content of the genome.
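The kind of scan performed by SPUTNIK in this study can be sketched in miniature. The regex-based version below finds only perfect tandem repeats of di- to penta-nucleotide units and is an illustration of the task, not SPUTNIK's actual algorithm:

```python
import re

def find_ssrs(seq, min_unit=2, max_unit=5, min_repeats=3):
    """Scan a DNA sequence for perfect tandem repeats of short units.

    Returns (start, unit, copy_count) tuples. Unit sizes are scanned
    independently; imperfect (mismatch-containing) repeats are not detected.
    """
    hits = []
    for k in range(min_unit, max_unit + 1):
        # a captured k-mer followed by at least (min_repeats - 1) exact copies
        pattern = r"([ACGT]{%d})\1{%d,}" % (k, min_repeats - 1)
        for m in re.finditer(pattern, seq.upper()):
            hits.append((m.start(), m.group(1), len(m.group(0)) // k))
    return hits
```

Counting hits separately over annotated coding and non-coding regions would reproduce the coding/intergenic comparison described above.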
MPEG4: coding for content, interactivity, and universal accessibility
NASA Astrophysics Data System (ADS)
Reader, Cliff
1996-01-01
MPEG4 is a natural extension of audiovisual coding, and yet from many perspectives breaks new ground as a standard. New coding techniques are being introduced, of course, but they will work on new data structures. The standard itself has a new architecture, and will use a new operational model when implemented on equipment that is likely to have innovative system architecture. The author introduces the background developments in technology and applications that are driving or enabling the standard, introduces the focus of MPEG4, and enumerates the new functionalities to be supported. Key applications in interactive TV and heterogeneous environments are discussed. The architecture of MPEG4 is described, followed by a discussion of the multiphase MPEG4 communication scenarios, and issues of practical implementation of MPEG4 terminals. The paper concludes with a description of the MPEG4 workplan. In summary, MPEG4 has two fundamental attributes. First, it is the coding of audiovisual objects, which may be natural or synthetic data in two or three dimensions. Second, the heart of MPEG4 is its syntax: the MPEG4 Syntactic Descriptive Language -- MSDL.
Swimsuit issues: promoting positive body image in young women's magazines.
Boyd, Elizabeth Reid; Moncrieff-Boyd, Jessica
2011-08-01
This preliminary study reviews the promotion of healthy body image to young Australian women, following the 2009 introduction of the voluntary Industry Code of Conduct on Body Image. The Code includes using diverse sized models in magazines. A qualitative content analysis of the 2010 annual 'swimsuit issues' was conducted on 10 Australian young women's magazines. Pictorial and/or textual editorial evidence of promoting diverse body shapes and sizes was regarded as indicative of the magazines' upholding aspects of the voluntary Code of Conduct for Body Image. Diverse sized models were incorporated in four of the seven magazines with swimsuit features sampled. Body size differentials were presented as part of the swimsuit features in three of the magazines sampled. Tips for diverse body type enhancement were included in four of the magazines. All magazines met at least one criterion. One magazine displayed evidence of all three criteria. Preliminary examination suggests that more than half of young women's magazines are upholding elements of the voluntary Code of Conduct for Body Image, through representation of diverse-sized women in their swimsuit issues.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Garnier, Ch.; Mailhe, P.; Sontheimer, F.
2007-07-01
Fuel performance is a key factor for minimizing operating costs in nuclear plants. One of the important aspects of fuel performance is fuel rod design, based upon reliable tools able to verify the safety of current fuel solutions, prevent potential issues in new core managements, and guide the invention of tomorrow's fuels. AREVA is developing its future global fuel rod code COPERNIC3, which is able to calculate the thermal-mechanical behavior of advanced fuel rods in nuclear plants. Some of the best practices to achieve this goal are described by reviewing the three pillars of a fuel rod code: the database, the modelling, and the computer and numerical aspects. First, the COPERNIC3 database content is described, accompanied by the tools developed to effectively exploit the data. An overview of the main modelling aspects is then given, emphasizing the thermal, fission gas release and mechanical sub-models. In the last part, numerical solutions are detailed in order to increase the computational performance of the code, with a presentation of software configuration management solutions. (authors)
Informal caregivers of patients with disorders of consciousness: experience of ambiguous loss.
Giovannetti, A M; Černiauskaitė, M; Leonardi, M; Sattin, D; Covelli, V
2015-01-01
This study aimed at a better understanding of the complex psychological process underlying the demanding situation of taking care of a relative with a disorder of consciousness (DOC). This is a qualitative study based on the grounded theory constant comparative method. Narratives of informal caregivers were collected through in-depth interviews with a psychologist. A three-step coding scheme was applied: coding of narratives to label the specific contents; organization of codes into sub-categories and categories; and theoretical coding to describe the relation between categories. Twenty informal caregivers participated in one in-depth interview between December 2011 and May 2012. Four major themes emerged: Another person with past in common; Losing and finding myself; Old and new ways of being in relationship; and Dealing with concerns. These themes represent caregivers' efforts to deal with the situation in which their relative is at the same time present and absent. The core salient feature emerging from all these themes is the experience of ambiguous loss. Features of ambiguous loss that emerged in this study could guide clinicians' interventions to support adjustment of caregivers of patients with DOCs.
Surface reflectance retrieval from imaging spectrometer data using three atmospheric codes
NASA Astrophysics Data System (ADS)
Staenz, Karl; Williams, Daniel J.; Fedosejevs, Gunar; Teillet, Phil M.
1994-12-01
Surface reflectance retrieval from imaging spectrometer data has become important for quantitative information extraction in many application areas. In order to calculate surface reflectance from remotely measured radiance, radiative transfer codes play an important role for removal of the scattering and gaseous absorption effects of the atmosphere. The present study evaluates surface reflectances retrieved from airborne visible/infrared imaging spectrometer (AVIRIS) data using three radiative transfer codes: modified 5S (M5S), 6S, and MODTRAN2. Comparisons of the retrieved surface reflectance with ground-based reflectance were made for different target types such as asphalt, gravel, grass/soil mixture (soccer field), and water (Sooke Lake). The results indicate that the estimation of the atmospheric water vapor content is important for an accurate surface reflectance retrieval regardless of the radiative transfer code used. For the present atmospheric conditions, a difference of 0.1 in aerosol optical depth had little impact on the retrieved surface reflectance. The performance of MODTRAN2 is superior in the gas absorption regions compared to M5S and 6S.
Codon usage patterns in Nematoda: analysis based on over 25 million codons in thirty-two species
2006-01-01
Background Codon usage has direct utility in molecular characterization of species and is also a marker for molecular evolution. To understand codon usage within the diverse phylum Nematoda, we analyzed a total of 265,494 expressed sequence tags (ESTs) from 30 nematode species. The full genomes of Caenorhabditis elegans and C. briggsae were also examined. A total of 25,871,325 codons were analyzed and a comprehensive codon usage table for all species was generated. This is the first codon usage table available for 24 of these organisms. Results Codon usage similarity in Nematoda usually persists over the breadth of a genus but then rapidly diminishes even within each clade. Globodera, Meloidogyne, Pristionchus, and Strongyloides have the most highly derived patterns of codon usage. The major factor affecting differences in codon usage between species is the coding sequence GC content, which varies in nematodes from 32% to 51%. Coding GC content (measured as GC3) also explains much of the observed variation in the effective number of codons (R = 0.70), which is a measure of codon bias, and it even accounts for differences in amino acid frequency. Codon usage is also affected by neighboring nucleotides (N1 context). Coding GC content correlates strongly with estimated noncoding genomic GC content (R = 0.92). On examining abundant clusters in five species, candidate optimal codons were identified that may be preferred in highly expressed transcripts. Conclusion Evolutionary models indicate that total genomic GC content, probably the product of directional mutation pressure, drives codon usage rather than the converse, a conclusion that is supported by examination of nematode genomes. PMID:26271136
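The GC3 measure that drives much of the variation reported above is simple to compute from an in-frame coding sequence. A minimal sketch (function name illustrative):

```python
def gc3(cds):
    """GC content at third codon positions (GC3) of an in-frame coding sequence.

    Takes every third base starting from index 2 and returns the fraction
    that is G or C.
    """
    third_positions = cds.upper()[2::3]
    return sum(base in "GC" for base in third_positions) / len(third_positions)
```

Computing this per species and correlating it with the effective number of codons would mirror the R = 0.70 relationship reported in the abstract.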
Compression performance comparison in low delay real-time video for mobile applications
NASA Astrophysics Data System (ADS)
Bivolarski, Lazar
2012-10-01
This article compares the performance of several current video coding standards under low-delay, real-time conditions in a resource-constrained environment. The comparison is performed using the same content and a mix of objective and perceptual quality metrics. The metric results for the different coding schemes are analyzed from the point of view of user perception and quality of service. Multiple standards are compared: MPEG-2, MPEG-4 and MPEG-4 AVC, as well as H.263. The metrics used in the comparison include SSIM, VQM and DVQ. Subjective evaluation and quality of service are discussed from the point of view of perceptual metrics and their incorporation in the coding scheme development process. The performance and the correlation of results are presented as a predictor of the performance of video compression schemes.
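Of the metrics named here, SSIM is the easiest to sketch. The version below evaluates the standard SSIM formula over the whole signal as a single window; real SSIM is computed over local windows and averaged, so this is a simplification for illustration only:

```python
def ssim_global(x, y, max_val=255.0):
    """Single-window (global) SSIM between two equal-length pixel lists.

    Uses the standard constants C1 = (0.01 L)^2 and C2 = (0.03 L)^2.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    c1 = (0.01 * max_val) ** 2
    c2 = (0.03 * max_val) ** 2
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical signals score 1.0; a black signal compared against a white one scores near 0, matching the intuition that SSIM rewards structural agreement rather than raw pixel closeness.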
Deciphering Neural Codes of Memory during Sleep.
Chen, Zhe; Wilson, Matthew A
2017-05-01
Memories of experiences are stored in the cerebral cortex. Sleep is critical for the consolidation of hippocampal memory of wake experiences into the neocortex. Understanding representations of neural codes of hippocampal-neocortical networks during sleep would reveal important circuit mechanisms in memory consolidation and provide novel insights into memory and dreams. Although sleep-associated ensemble spike activity has been investigated, identifying the content of memory in sleep remains challenging. Here we revisit important experimental findings on sleep-associated memory (i.e., neural activity patterns in sleep that reflect memory processing) and review computational approaches to the analysis of sleep-associated neural codes (SANCs). We focus on two analysis paradigms for sleep-associated memory and propose a new unsupervised learning framework ('memory first, meaning later') for unbiased assessment of SANCs.
Social Work Science and Knowledge Utilization
ERIC Educational Resources Information Center
Marsh, Jeanne C.; Reed, Martena
2016-01-01
Objective: This article advances understanding of social work science by examining the content and methods of highly utilized or cited journal articles in social work. Methods: A data base of the 100 most frequently cited articles from 79 social work journals was coded and categorized into three primary domains: content, research versus…
Minnesota Academic Standards: Kindergarten
ERIC Educational Resources Information Center
Minnesota Department of Education, 2017
2017-01-01
This document contains all of the Minnesota kindergarten academic standards in the content areas of Arts, English Language Arts, Mathematics, Science and Social Studies. For each content area there is a short overview followed by a coding diagram of how the standards are organized and displayed. This document is adapted from the official versions…
Map Feature Content and Text Recall of Good and Poor Readers.
ERIC Educational Resources Information Center
Amlund, Jeanne T.; And Others
1985-01-01
Reports two experiments evaluating the effect of map feature content on text recall by subjects of varying reading skill levels. Finds that both experiments support the conjoint retention hypothesis, in which dual-coding of spatial and verbal information and their interaction in memory enhance recall. (MM)
7 CFR 1485.13 - Application process and strategic plan.
Code of Federal Regulations, 2010 CFR
2010-01-01
... affiliated organizations; (D) A description of management and administrative capability; (E) A description of... code and the percentage of U.S. origin content by weight, exclusive of added water; (B) A description... and the percentage of U.S. origin content by weight, exclusive of added water; (C) A description of...
Race Socialization Messages across Historical Time
ERIC Educational Resources Information Center
Brown, Tony N.; Lesane-Brown, Chase L.
2006-01-01
In this study we investigated whether the content of race socialization messages varied by birth cohort, using data from a national probability sample. Most respondents recalled receiving messages about what it means to be black from their parents or guardians; these messages were coded into five mutually exclusive content categories: individual…
The processing of IMU data in ENTREE implementation and preliminary results
NASA Technical Reports Server (NTRS)
Heck, M. L.
1980-01-01
It is demonstrated that the shuttle entry trajectory can be accurately represented in ENTREE with IMU data available postflight. The IMU data consist of platform to body quaternions, and accumulated sensed velocities in mean of fifty (M50) coordinates approximately every second. The preprocessing software required to incorporate the IMU data in ENTREE is described as well as the relatively minor code changes to the ENTREE program itself required to process the IMU data. Code changes to the ENTREE program and input tape data format and content changes are described.
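The platform-to-body quaternions mentioned above rotate sensed-velocity vectors between frames. A hedged sketch of that core operation (the (w, x, y, z) convention and function name are illustrative assumptions, not taken from the ENTREE documentation):

```python
def quat_rotate(q, v):
    """Rotate 3-vector v by unit quaternion q = (w, x, y, z), i.e. v' = q v q*.

    Uses the expansion t = 2 (q_vec x v); v' = v + w t + q_vec x t,
    which avoids building the full rotation matrix.
    """
    w, x, y, z = q
    qv = (x, y, z)

    def cross(a, b):
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    t = tuple(2.0 * c for c in cross(qv, v))
    u = cross(qv, t)
    return tuple(v[i] + w * t[i] + u[i] for i in range(3))
```

For example, a 90-degree rotation about the z-axis maps the x unit vector onto the y unit vector.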
A Summary of Important Documents in the Field of Research Ethics
Fischer, Bernard A
2006-01-01
Today's researchers are obligated to conduct their studies ethically. However, it often seems a daunting task to become familiar with the important ethical codes required to do so. The purpose of this article is to examine the content of those ethical documents most relevant to the biomedical researcher. Documents examined include the Nuremberg Code, the Declaration of Helsinki, Henry Beecher's landmark paper, the Belmont Report, the U.S. Common Rule, the Guideline for Good Clinical Practice, and the National Bioethics Advisory Commission's report on research protections for the mentally ill. PMID:16192409
Analytical ice shape predictions for flight in natural icing conditions
NASA Technical Reports Server (NTRS)
Berkowitz, Brian M.; Riley, James T.
1988-01-01
LEWICE is an analytical ice prediction code that has been evaluated against icing tunnel data, but on a more limited basis against flight data. Ice shapes predicted by LEWICE are compared with experimental ice shapes accreted on the NASA Lewis Icing Research Aircraft. The flight data selected for comparison include liquid water content recorded using a hot-wire device and droplet distribution data from a laser spectrometer; the ice shape is recorded using stereo photography. The main findings are as follows: (1) An equivalent sand grain roughness correlation different from that used for LEWICE tunnel comparisons must be employed to obtain satisfactory results for flight; (2) Using this correlation and making no other changes in the code, the comparisons to ice shapes accreted in flight are in general as good as the comparisons to ice shapes accreted in the tunnel (as in the case of tunnel ice shapes, agreement is least reliable for large glaze ice shapes at high angles of attack); (3) In some cases comparisons can be somewhat improved by utilizing the code so as to take account of the variation of parameters such as liquid water content, which may vary significantly in flight.
Sequence Polishing Library (SPL) v10.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oberortner, Ernst
The Sequence Polishing Library (SPL) is a suite of software tools that automate "Design for Synthesis and Assembly" workflows. Specifically: the SPL "Converter" tool converts files among the following sequence data exchange formats: CSV, FASTA, GenBank, and Synthetic Biology Open Language (SBOL). The SPL "Juggler" tool optimizes the codon usage of DNA coding sequences according to an optimization strategy, a user-specified codon usage table, and genetic code; in addition, the "Juggler" can translate amino acid sequences into DNA sequences. The SPL "Polisher" verifies DNA sequences against DNA synthesis constraints, such as GC content, repeating k-mers, and restriction sites; in case of violations, the "Polisher" reports them in a comprehensive manner and can also modify the violating regions according to an optimization strategy, a user-specified codon usage table, and genetic code. The SPL "Partitioner" decomposes large DNA sequences into smaller building blocks with partial overlaps that enable efficient assembly; the "Partitioner" lets the user configure the characteristics of the overlaps, which are mostly determined by the assembly protocol used, such as length, GC content, or melting temperature.
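A "Polisher"-style constraint check can be sketched as follows. The thresholds and the function name are illustrative assumptions, not SPL's actual defaults or API:

```python
def check_synthesis_constraints(seq, gc_lo=0.25, gc_hi=0.65, kmer=10):
    """Report DNA synthesis constraint violations: out-of-range GC content
    and the first repeated k-mer found.

    Thresholds are illustrative placeholders, not SPL's real defaults.
    """
    violations = []
    gc = sum(base in "GC" for base in seq) / len(seq)
    if not gc_lo <= gc <= gc_hi:
        violations.append(("gc_content", round(gc, 3)))
    seen = set()
    for i in range(len(seq) - kmer + 1):
        mer = seq[i:i + kmer]
        if mer in seen:
            violations.append(("repeated_kmer", mer))
            break
        seen.add(mer)
    return violations
```

A pure AT repeat trips both checks, while a short balanced sequence passes cleanly.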
Optimized bit extraction using distortion modeling in the scalable extension of H.264/AVC.
Maani, Ehsan; Katsaggelos, Aggelos K
2009-09-01
The newly adopted scalable extension of H.264/AVC video coding standard (SVC) demonstrates significant improvements in coding efficiency in addition to an increased degree of supported scalability relative to the scalable profiles of prior video coding standards. Due to the complicated hierarchical prediction structure of the SVC and the concept of key pictures, content-aware rate adaptation of SVC bit streams to intermediate bit rates is a nontrivial task. The concept of quality layers has been introduced in the design of the SVC to allow for fast content-aware prioritized rate adaptation. However, existing quality layer assignment methods are suboptimal and do not consider all network abstraction layer (NAL) units from different layers for the optimization. In this paper, we first propose a technique to accurately and efficiently estimate the quality degradation resulting from discarding an arbitrary number of NAL units from multiple layers of a bitstream by properly taking drift into account. Then, we utilize this distortion estimation technique to assign quality layers to NAL units for a more efficient extraction. Experimental results show that a significant gain can be achieved by the proposed scheme.
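The baseline idea behind quality-layer assignment can be sketched as a greedy utility ranking. This is only the naive scheme the paper improves upon (it ignores drift and inter-layer dependencies, which are the paper's contribution), and the dict-based NAL unit representation is an assumption for illustration:

```python
def assign_quality_layers(nal_units, num_layers):
    """Greedy baseline: rank NAL units by distortion reduction per bit and
    bucket them into quality layers (layer 0 = highest extraction priority).

    Each unit is a dict with "id", "dist_reduction", and "bits" keys.
    """
    ranked = sorted(nal_units,
                    key=lambda u: u["dist_reduction"] / u["bits"],
                    reverse=True)
    per_layer = -(-len(ranked) // num_layers)  # ceiling division
    return {u["id"]: idx // per_layer for idx, u in enumerate(ranked)}
```

An extractor targeting an intermediate bit rate would then drop layers from the highest index downward.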
Acquired Codes of Meaning in Data Visualization and Infographics: Beyond Perceptual Primitives.
Byrne, Lydia; Angus, Daniel; Wiles, Janet
2016-01-01
While information visualization frameworks and heuristics have traditionally been reluctant to include acquired codes of meaning, designers are making use of them in a wide variety of ways. Acquired codes leverage a user's experience to understand the meaning of a visualization. They range from figurative visualizations which rely on the reader's recognition of shapes, to conventional arrangements of graphic elements which represent particular subjects. In this study, we used content analysis to codify acquired meaning in visualization. We applied the content analysis to a set of infographics and data visualizations which are exemplars of innovative and effective design. 88% of the infographics and 71% of data visualizations in the sample contain at least one use of figurative visualization. Conventions on the arrangement of graphics are also widespread in the sample. In particular, a comparison of representations of time and other quantitative data showed that conventions can be specific to a subject. These results suggest that there is a need for information visualization research to expand its scope beyond perceptual channels, to include social and culturally constructed meaning. Our paper demonstrates a viable method for identifying figurative techniques and graphic conventions and integrating them into heuristics for visualization design.
Hu, Guang Fu; Liu, Xiang Jiang; Zou, Gui Wei; Li, Zhong; Liang, Hong-Wei; Hu, Shao-Na
2016-01-01
We sequenced the complete mitogenomes of Cyprinus carpio haematopterus and Russian scattered scale mirror carp (Cyprinus carpio carpio). Comparison of these two mitogenomes revealed that the two common carp strains were remarkably similar in genome length, gene order and content, and AT content. There were only 55 bp of variation in 16,581 nucleotides: 1 bp located in rRNAs, 2 bp in tRNAs, 9 bp in the control region, and 43 bp in protein-coding genes. The forty-three variable nucleotides in the protein-coding genes of the two strains led to four variable amino acids, located in the ND2, ATPase 6, ND5 and ND6 genes, respectively.
Impact of packet losses in scalable 3D holoscopic video coding
NASA Astrophysics Data System (ADS)
Conti, Caroline; Nunes, Paulo; Ducla Soares, Luís.
2014-05-01
Holoscopic imaging has become a prospective glasses-free 3D technology to provide more natural 3D viewing experiences to the end user. Additionally, holoscopic systems also allow new post-production degrees of freedom, such as controlling the plane of focus or the viewing angle presented to the user. However, to successfully introduce this technology into the consumer market, a display-scalable coding approach is essential to achieve backward compatibility with legacy 2D and 3D displays. Moreover, to effectively transmit 3D holoscopic content over error-prone networks, e.g., wireless networks or the Internet, error resilience techniques are required to mitigate the impact of data impairments on the user's quality perception. Therefore, it is essential to deeply understand the impact of packet losses in terms of decoded video quality for the specific case of 3D holoscopic content, notably when a scalable approach is used. In this context, this paper studies the impact of packet losses when using a previously proposed three-layer display-scalable 3D holoscopic video coding architecture, where each layer represents a different level of display scalability (i.e., L0 - 2D, L1 - stereo or multiview, and L2 - full 3D holoscopic). For this, a simple error concealment algorithm is used, which makes use of inter-layer redundancy between multiview and 3D holoscopic content and the inherent correlation of the 3D holoscopic content to estimate lost data. Furthermore, a study of the influence of the 2D view generation parameters used in lower layers on the performance of the error concealment algorithm is also presented.
Quality of outpatient clinical notes: a stakeholder definition derived through qualitative research.
Hanson, Janice L; Stephens, Mark B; Pangaro, Louis N; Gimbel, Ronald W
2012-11-19
There are no empirically-grounded criteria or tools to define or benchmark the quality of outpatient clinical documentation. Outpatient clinical notes document care, communicate treatment plans and support patient safety, medical education, medico-legal investigations and reimbursement. Accurately describing and assessing quality of clinical documentation is a necessary improvement in an increasingly team-based healthcare delivery system. In this paper we describe the quality of outpatient clinical notes from the perspective of multiple stakeholders. Using purposeful sampling for maximum diversity, we conducted focus groups and individual interviews with clinicians, nursing and ancillary staff, patients, and healthcare administrators at six federal health care facilities between 2009 and 2011. All sessions were audio-recorded, transcribed and qualitatively analyzed using open, axial and selective coding. The 163 participants included 61 clinicians, 52 nurse/ancillary staff, 31 patients and 19 administrative staff. Three organizing themes emerged: 1) characteristics of quality in clinical notes, 2) desired elements within the clinical notes and 3) system supports to improve the quality of clinical notes. We identified 11 codes to describe characteristics of clinical notes, 20 codes to describe desired elements in quality clinical notes and 11 codes to describe clinical system elements that support quality when writing clinical notes. While there was substantial overlap between the aspects of quality described by the four stakeholder groups, only clinicians and administrators identified ease of translation into billing codes as an important characteristic of a quality note. Only patients rated prioritization of their medical problems as an aspect of quality. Nurses included care and education delivered to the patient, information added by the patient, interdisciplinary information, and infection alerts as important content. 
Perspectives of these four stakeholder groups provide a comprehensive description of quality in outpatient clinical documentation. The resulting description of characteristics and content necessary for quality notes provides a research-based foundation for assessing the quality of clinical documentation in outpatient health care settings.
A Mixed Methods Approach to Code Stakeholder Beliefs in Urban Water Governance
NASA Astrophysics Data System (ADS)
Bell, E. V.; Henry, A.; Pivo, G.
2017-12-01
What is a reliable way to code policies to represent belief systems? The Advocacy Coalition Framework posits that public policy may be viewed as manifestations of belief systems. Belief systems include both ontological beliefs about cause-and-effect relationships and policy effectiveness, as well as normative beliefs about appropriate policy instruments and the relative value of different outcomes. The idea that belief systems are embodied in public policy is important for urban water governance because it trains our focus on belief conflict; this can help us understand why many water-scarce cities do not adopt innovative technology despite available scientific information. To date, there has been very little research on systematic, rigorous methods to measure the belief system content of public policies. We address this by testing the relationship between beliefs and policy participation to develop an innovative coding framework. With a focus on urban water governance in Tucson, Arizona, we analyze grey literature on local water management. Mentioned policies are coded into a typology of common approaches identified in urban water governance literature, which include regulation, education, price and non-price incentives, green infrastructure and other types of technology. We then survey local water stakeholders about their perceptions of these policies. Urban water governance requires coordination of organizations from multiple sectors, and we cannot assume that belief development and policy participation occur in a vacuum. Thus, we use a generalized exponential random graph model to test the relationship between perceptions and policy participation in the Tucson water governance network. We measure policy perceptions for organizations by averaging across their respective, affiliated respondents and generating a belief distance matrix of coordinating network participants. 
Similarly, we generate a distance matrix of these actors based on the frequency of their participation in each of the aforementioned policy types. By linking these perceptions and policies, we develop a coding frame that can supplement future content analysis when survey methods are not viable.
NASA Astrophysics Data System (ADS)
Durmaz, Murat; Karslioglu, Mahmut Onur
2015-04-01
There are various global and regional methods that have been proposed for the modeling of ionospheric vertical total electron content (VTEC). Global distribution of VTEC is usually modeled by spherical harmonic expansions, while tensor products of compactly supported univariate B-splines can be used for regional modeling. In these empirical parametric models, the coefficients of the basis functions as well as differential code biases (DCBs) of satellites and receivers can be treated as unknown parameters which can be estimated from geometry-free linear combinations of global positioning system observables. In this work we propose a new semi-parametric multivariate adaptive regression B-splines (SP-BMARS) method for the regional modeling of VTEC together with satellite and receiver DCBs, where the parametric part of the model is related to the DCBs as fixed parameters and the non-parametric part adaptively models the spatio-temporal distribution of VTEC. The latter is based on multivariate adaptive regression B-splines which is a non-parametric modeling technique making use of compactly supported B-spline basis functions that are generated from the observations automatically. This algorithm takes advantage of an adaptive scale-by-scale model building strategy that searches for best-fitting B-splines to the data at each scale. The VTEC maps generated from the proposed method are compared numerically and visually with the global ionosphere maps (GIMs) which are provided by the Center for Orbit Determination in Europe (CODE). The VTEC values from SP-BMARS and CODE GIMs are also compared with VTEC values obtained through calibration using local ionospheric model. The estimated satellite and receiver DCBs from the SP-BMARS model are compared with the CODE distributed DCBs. The results show that the SP-BMARS algorithm can be used to estimate satellite and receiver DCBs while adaptively and flexibly modeling the daily regional VTEC.
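The compactly supported B-spline basis functions underlying such regional VTEC models follow the standard Cox-de Boor recursion; a minimal sketch (the knot vector and function name are illustrative, not the SP-BMARS implementation):

```python
def bspline_basis(i, p, t, knots):
    """Cox-de Boor recursion for the i-th B-spline basis function of degree p
    over the given knot vector, evaluated at t."""
    if p == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    value = 0.0
    if knots[i + p] != knots[i]:
        value += ((t - knots[i]) / (knots[i + p] - knots[i])
                  * bspline_basis(i, p - 1, t, knots))
    if knots[i + p + 1] != knots[i + 1]:
        value += ((knots[i + p + 1] - t) / (knots[i + p + 1] - knots[i + 1])
                  * bspline_basis(i + 1, p - 1, t, knots))
    return value
```

A regional VTEC surface is then a weighted sum of tensor products such as `bspline_basis(i, p, lat, lat_knots) * bspline_basis(j, p, lon, lon_knots)`, with the weights (and the DCBs) estimated from the observations.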
NASA Astrophysics Data System (ADS)
Reato, Thomas; Demir, Begüm; Bruzzone, Lorenzo
2017-10-01
This paper presents a novel class-sensitive hashing technique in the framework of large-scale content-based remote sensing (RS) image retrieval. The proposed technique aims at representing each image with multi-hash codes, each of which corresponds to a primitive (i.e., land cover class) present in the image. To this end, the proposed method consists of a three-step algorithm. The first step is devoted to characterizing each image by primitive class descriptors. These descriptors are obtained through a supervised approach, which initially extracts the image regions and their descriptors that are then associated with primitives present in the images. This step requires a set of annotated training regions to define primitive classes. A correspondence between the regions of an image and the primitive classes is built based on the probability of each primitive class to be present at each region. All the regions belonging to a specific primitive class with a probability higher than a given threshold are highly representative of that class; thus, the average value of the descriptors of these regions is used to characterize that primitive. In the second step, the descriptors of primitive classes are transformed into multi-hash codes to represent each image. This is achieved by adapting the kernel-based supervised locality sensitive hashing method to multi-code hashing problems. The first two steps of the proposed technique, unlike the standard hashing methods, allow one to represent each image by a set of primitive-class-sensitive descriptors and their hash codes. Then, in the last step, the images in the archive that are very similar to a query image are retrieved based on a multi-hash-code-matching scheme. Experimental results obtained on an archive of aerial images confirm the effectiveness of the proposed technique in terms of retrieval accuracy when compared to the standard hashing methods.
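The abstract does not detail the multi-hash-code-matching scheme, but a generic version can be sketched as nearest-code Hamming aggregation (purely an assumption for illustration, not the paper's actual scheme):

```python
def hamming(a, b):
    """Hamming distance between two equal-length binary hash codes (as ints)."""
    return bin(a ^ b).count("1")

def multi_hash_match(query_codes, image_codes):
    """Aggregate dissimilarity between two images, each described by several
    per-primitive hash codes: match every query code to its nearest archive
    code and sum the distances (lower total = more similar)."""
    return sum(min(hamming(q, c) for c in image_codes) for q in query_codes)
```

Ranking the archive by this total and returning the smallest values yields the retrieval step.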
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2011 CFR
2011-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2010 CFR
2010-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2012 CFR
2012-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2014 CFR
2014-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
10 CFR 602.19 - Records and data.
Code of Federal Regulations, 2013 CFR
2013-01-01
... software used to compile, manage, and analyze data; (2) Define all technical characteristics necessary for reading or processing the records; (3) Define file and record content and codes; (4) Describe update...
Seligmann, Hervé
2013-05-07
GenBank's EST database includes RNAs matching exactly human mitochondrial sequences assuming systematic asymmetric nucleotide exchange during transcription, along the exchange rules A→G→C→U/T→A (12 ESTs), A→U/T→C→G→A (4 ESTs), C→G→U/T→C (3 ESTs), and A→C→G→U/T→A (1 EST); no RNAs correspond to other potential asymmetric exchange rules. Hypothetical polypeptides translated from nucleotide-exchanged human mitochondrial protein-coding genes align with numerous GenBank proteins, and their predicted secondary structures resemble those of their putative GenBank homologues. Two independent methods designed to detect overlapping genes (one based on nucleotide content analyses in relation to replicative deamination gradients at third codon positions, the other on circular code analyses of codon contents based on frame redundancy) confirm nucleotide-exchange-encrypted overlapping genes. The methods converge on which genes are most probably active and which are not, for each of the exchange rules. Mean EST lengths produced by different nucleotide exchanges are proportional to (a) the extent to which various bioinformatics analyses confirm the protein-coding status of putative overlapping genes; (b) known kinetic chemistry parameters of the corresponding nucleotide substitutions by the human mitochondrial DNA polymerase gamma (nucleotide DNA misinsertion rates); and (c) stop codon densities in predicted overlapping genes (stop codon readthrough and exchanging polymerization regulate gene expression by counterbalancing each other). Numerous rarely expressed proteins seem to be encoded within regular mitochondrial genes through asymmetric nucleotide exchange, avoiding lengthening of the genome. Intersecting evidence from several independent approaches confirms the working-hypothesis status of gene encryption by systematic nucleotide exchanges.
de Porcellinis, Alice J; Klähn, Stephan; Rosgaard, Lisa; Kirsch, Rebekka; Gutekunst, Kirstin; Georg, Jens; Hess, Wolfgang R; Sakuragi, Yumiko
2016-10-01
Carbohydrate metabolism is a tightly regulated process in photosynthetic organisms. In the cyanobacterium Synechocystis sp. PCC 6803, the photomixotrophic growth protein A (PmgA) is involved in the regulation of glucose and storage carbohydrate (i.e. glycogen) metabolism, while its biochemical activity and possible factors acting downstream of PmgA are unknown. Here, a genome-wide microarray analysis of a ΔpmgA strain identified the expression of 36 protein-coding genes and 42 non-coding transcripts as significantly altered. From these, the non-coding RNA Ncr0700 was identified as the transcript most strongly reduced in abundance. Ncr0700 is widely conserved among cyanobacteria. In Synechocystis its expression is inversely correlated with light intensity. Similarly to a ΔpmgA mutant, a Δncr0700 deletion strain showed an approximately 2-fold increase in glycogen content under photoautotrophic conditions and wild-type-like growth. Moreover, its growth was arrested by 38 h after a shift to photomixotrophic conditions. Ectopic expression of Ncr0700 in Δncr0700 and ΔpmgA restored the glycogen content and photomixotrophic growth to wild-type levels. These results indicate that Ncr0700 is required for photomixotrophic growth and the regulation of glycogen accumulation, and acts downstream of PmgA. Hence Ncr0700 is renamed here as PmgR1 for photomixotrophic growth RNA 1.
NASA Astrophysics Data System (ADS)
Boumehrez, Farouk; Brai, Radhia; Doghmane, Noureddine; Mansouri, Khaled
2018-01-01
Recently, video streaming has attracted much attention and interest due to its capability to process and transmit large data. We propose a quality of experience (QoE) model relying on a high efficiency video coding (HEVC) encoder adaptation scheme, in turn based on multiple description coding (MDC), for video streaming. The main contributions of the paper are: (1) a performance evaluation of the new and emerging video coding standard HEVC/H.265, based on varying the quantization parameter (QP) over different video contents to deduce its influence on the transmitted sequence; (2) an investigation of QoE support for multimedia applications in wireless networks, in which we inspect the impact of packet loss on the QoE of transmitted video sequences; and (3) an HEVC encoder parameter adaptation scheme based on MDC, modeled by linking the encoder parameters to an objective QoE model. A comparative study revealed that the proposed MDC approach is effective for improving transmission, with a peak signal-to-noise ratio (PSNR) gain of about 2 to 3 dB. Results show that a good choice of QP value can compensate for transmission channel effects and improve received video quality, although HEVC/H.265 is also sensitive to packet loss. The obtained results show the efficiency of the proposed method in terms of PSNR and mean opinion score (MOS).
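The PSNR figures reported above follow the standard definition; a minimal sketch (frames represented as flat pixel lists for simplicity):

```python
import math

def psnr(reference, degraded, max_val=255):
    """Peak signal-to-noise ratio (dB) between a reference frame and a
    degraded frame, each given as a flat sequence of pixel values."""
    mse = sum((r - d) ** 2 for r, d in zip(reference, degraded)) / len(reference)
    if mse == 0:
        return float("inf")  # identical frames
    return 10 * math.log10(max_val ** 2 / mse)
```

Since 10·log10(2) ≈ 3 dB, the 2 to 3 dB gain reported for the MDC scheme corresponds to roughly halving the mean squared error.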
Conserved pattern of embryonic actin gene expression in several sea urchins and a sand dollar.
Bushman, F D; Crain, W R
1983-08-01
An examination of the size and relative abundance of actin-coding RNA in embryos of four sea urchins (Strongylocentrotus purpuratus, Strongylocentrotus droebachiensis, Arbacia punctulata, Lytechinus variegatus) and one sand dollar (Echinarachnius parma) reveals a generally conserved program of expression. In each species the relative abundance of these sequences is low in early embryos and begins to rise during late cleavage or blastula stages. In the four sea urchins, actin-coding RNAs increase between approximately 9- and 35-fold by pluteus or an earlier stage, and in the sand dollar about 5.5-fold by blastula. A major actin-coding RNA class of 2.0-2.2 kilobases (kb) is found in each species. A smaller actin-coding RNA class, which accumulates during embryogenesis, is also present in S. purpuratus (1.8 kb), S. droebachiensis (1.9 kb), and A. punctulata (1.6 kb), but apparently absent in L. variegatus and E. parma. In S. droebachiensis, actin-coding RNA is relatively abundant in unfertilized eggs and drops sharply by the 16-cell stage. This is in contrast to the other sea urchins where the actin message content is relatively low in eggs and does not change substantially in the embryos throughout early cleavage. The observations in this study suggest that the pattern of embryonic expression of at least some members of this gene family is ancient and conserved.
Xia, Yidong; Lou, Jialin; Luo, Hong; ...
2015-02-09
Here, an OpenACC directive-based graphics processing unit (GPU) parallel scheme is presented for solving the compressible Navier–Stokes equations on 3D hybrid unstructured grids with a third-order reconstructed discontinuous Galerkin method. The developed scheme requires the minimum code intrusion and algorithm alteration for upgrading a legacy solver with the GPU computing capability at very little extra effort in programming, which leads to a unified and portable code development strategy. A face coloring algorithm is adopted to eliminate the memory contention because of the threading of internal and boundary face integrals. A number of flow problems are presented to verify the implementation of the developed scheme. Timing measurements were obtained by running the resulting GPU code on one Nvidia Tesla K20c GPU card (Nvidia Corporation, Santa Clara, CA, USA) and compared with those obtained by running the equivalent Message Passing Interface (MPI) parallel CPU code on a compute node (consisting of two AMD Opteron 6128 eight-core CPUs (Advanced Micro Devices, Inc., Sunnyvale, CA, USA)). Speedup factors of up to 24× and 1.6× for the GPU code were achieved with respect to one and 16 CPU cores, respectively. The numerical results indicate that this OpenACC-based parallel scheme is an effective and extensible approach to port unstructured high-order CFD solvers to GPU computing.
NASA Astrophysics Data System (ADS)
Zhang, Baocheng; Teunissen, Peter J. G.; Yuan, Yunbin; Zhang, Hongxing; Li, Min
2018-04-01
Vertical total electron content (VTEC) parameters estimated using global navigation satellite system (GNSS) data are of great interest for ionosphere sensing. Satellite differential code biases (SDCBs) account for one source of error which, if left uncorrected, can deteriorate the performance of positioning, timing and other applications. The customary approach to estimate VTEC along with SDCBs from dual-frequency GNSS data, hereinafter referred to as the DF approach, consists of two sequential steps. The first step seeks to retrieve ionospheric observables through the carrier-to-code leveling technique. This observable, related to the slant total electron content (STEC) along the satellite-receiver line-of-sight, is biased also by the SDCBs and the receiver differential code biases (RDCBs). By means of a thin-layer ionospheric model, in the second step one is able to isolate the VTEC, the SDCBs and the RDCBs from the ionospheric observables. In this work, we present a single-frequency (SF) approach, enabling the joint estimation of VTEC and SDCBs using low-cost receivers; this approach is also based on two steps and it differs from the DF approach only in the first step, where we turn to the precise point positioning technique to retrieve from the single-frequency GNSS data the ionospheric observables, interpreted as the combination of the STEC, the SDCBs and the biased receiver clocks at the pivot epoch. Our numerical analyses clarify how the SF approach performs when applied to GPS L1 data collected by a single receiver under both calm and disturbed ionospheric conditions. The daily time series of zenith VTEC estimates has an accuracy ranging from a few tenths of a TEC unit (TECU) to approximately 2 TECU. For 73-96% of GPS satellites in view, the daily estimates of SDCBs do not deviate, in absolute value, more than 1 ns from their ground truth values published by the Centre for Orbit Determination in Europe.
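The ionospheric observable in the DF approach rests on the geometry-free combination of dual-frequency code measurements. A minimal sketch under simplifying assumptions (raw pseudoranges rather than the carrier-to-code leveling the paper applies; the DCB terms that bias the estimate are ignored, since separating them is the role of the second step):

```python
F1, F2 = 1575.42e6, 1227.60e6  # GPS L1/L2 carrier frequencies (Hz)

def stec_from_code(p1, p2):
    """Slant TEC (TECU, 1 TECU = 1e16 el/m^2) from the geometry-free
    combination of dual-frequency pseudoranges p1, p2 (metres). The
    ionospheric code delay on frequency f is 40.3 * TEC / f**2, so the
    P2 - P1 difference isolates the TEC term."""
    metres_per_tecu = 40.3e16 * (1.0 / F2**2 - 1.0 / F1**2)
    return (p2 - p1) / metres_per_tecu
```

For the GPS L1/L2 pair the conversion factor is about 0.105 m of P2−P1 difference per TECU, which is why sub-metre code noise translates into a few TECU of scatter before leveling.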
The Geometric Organizer: A Study Technique.
ERIC Educational Resources Information Center
Derr, Alice M.; Peters, Chris L.
1986-01-01
The geometric organizer, a multisensory technique using visual mnemonic devices that key information to color-coded geometric shapes, can help learning disabled students read, organize, and study information in content subject textbooks. (CL)
2 CFR 1.105 - Organization and subtitle content.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 2 Grants and Agreements 1 2013-01-01 2013-01-01 false Organization and subtitle content. 1.105 Section 1.105 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1...
2 CFR 1.105 - Organization and subtitle content.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 2 Grants and Agreements 1 2014-01-01 2014-01-01 false Organization and subtitle content. 1.105 Section 1.105 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1...
2 CFR 1.105 - Organization and subtitle content.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 2 Grants and Agreements 1 2012-01-01 2012-01-01 false Organization and subtitle content. 1.105 Section 1.105 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1...
A Content Analysis of 10 Years of Clinical Supervision Articles in Counseling
ERIC Educational Resources Information Center
Bernard, Janine M.; Luke, Melissa
2015-01-01
This content analysis follows Borders's (2005) review of counseling supervision literature and includes 184 counselor supervision articles published over the past 10 years. Articles were coded as representing 1 of 3 research types or 1 of 3 conceptual types. Articles were then analyzed for main topics producing 11 topic categories.
Discovering Genres of Online Discussion Threads via Text Mining
ERIC Educational Resources Information Center
Lin, Fu-Ren; Hsieh, Lu-Shih; Chuang, Fu-Tai
2009-01-01
As course management systems (CMS) gain popularity in facilitating teaching, forums have become a key component for facilitating interactions among students and teachers. Content analysis is the most popular way to study a discussion forum, but it is a labor-intensive process; for example, the coding process relies heavily on manual…
78 FR 69793 - Voluntary Remedial Actions and Guidelines for Voluntary Recall Notices
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-21
... (Commission, CPSC, or we) proposes an interpretive rule to set forth principles and guidelines for the content... setting forth the Commission's principles and guidelines regarding the content of voluntary recall notices..., ``Principles and Guidelines for Voluntary Recall Notices,'' in part 1115 of title 16 of the Code of Federal...
Looking Back and Moving Ahead: A Content Analysis of Two Teacher Education Journals
ERIC Educational Resources Information Center
Rock, Marcia L.; Cheek, Aftynne E.; Sullivan, Melissa E.; Jones, Jennie L.; Holden, Kara B.; Kang, Jeongae
2016-01-01
We conducted a content analysis to examine trends in articles published between 1996 and 2014 in two journals--"Teacher Education and Special Education" ("TESE") and the "Journal of Teacher Education" ("JTE"). Across both journals, we coded 1,062 articles categorically based on multiple attributes (e.g.,…
Do Special Education Interventions Improve Learning of Secondary Content? A Meta-Analysis
ERIC Educational Resources Information Center
Scruggs, Thomas E.; Mastropieri, Margo A.; Berkeley, Sheri; Graetz, Janet E.
2010-01-01
The authors describe findings from a research synthesis on content area instruction for students with disabilities. Seventy studies were identified from a comprehensive literature search, examined, and coded for a number of variables, including weighted standardized mean-difference effect sizes. More than 2,400 students were participants in these…
Comprehending News Videotexts: The Influence of the Visual Content
ERIC Educational Resources Information Center
Cross, Jeremy
2011-01-01
Informed by dual coding theory, this study explores the role of the visual content in L2 listeners' comprehension of news videotexts. L1 research into the visual characteristics and comprehension of news videotexts is outlined, subsequently informing the quantitative analysis of audiovisual correspondence in the news videotexts used. In each of…
Princess Picture Books: Content and Messages
ERIC Educational Resources Information Center
Dale, Lourdes P.; Higgins, Brittany E.; Pinkerton, Nick; Couto, Michelle; Mansolillo, Victoria; Weisinger, Nica; Flores, Marci
2016-01-01
Because many girls develop their understanding of what it means to be a girl from books about princesses, the researchers coded the messages and content in 58 princess books (picture, fairy tales, and fractured fairy tales). Results indicate that gender stereotypes are present in the books--the princesses were more likely to be nurturing, in…
NASA Astrophysics Data System (ADS)
Liu, Siwei; Li, Qi; Yu, Hong; Kong, Lingfeng
2017-02-01
Glycogen is important not only for the energy supply of oysters, but also for human consumption. High glycogen content can improve the stress survival of oysters. A key enzyme in glycogenesis is glycogen synthase, which is encoded by the glycogen synthase gene GYS. In this study, the relationship between single nucleotide polymorphisms (SNPs) in the coding regions of Crassostrea gigas GYS (Cg-GYS) and individual glycogen content was investigated in 321 individuals from five full-sib families. A single-strand conformation polymorphism (SSCP) procedure was combined with sequencing to confirm individual SNP genotypes of Cg-GYS. Least-squares analysis of variance was performed to assess the relationship of variation in glycogen content of C. gigas with single SNP genotypes and SNP haplotypes. Six SNPs in the coding regions were found to be significantly associated with glycogen content (P < 0.01), from which four main haplotypes were constructed due to linkage disequilibrium. Furthermore, the most effective haplotype, H2 (GAGGAT), had an extremely significant relationship with high glycogen content (P < 0.0001). These findings reveal the potential influence of Cg-GYS polymorphism on glycogen content and provide molecular biological information for the selective breeding of quality traits of C. gigas.
Estimation of climate change impact on dead fuel moisture at local scale by using weather generators
NASA Astrophysics Data System (ADS)
Pellizzaro, Grazia; Bortolu, Sara; Dubrovsky, Martin; Arca, Bachisio; Ventura, Andrea; Duce, Pierpaolo
2015-04-01
The moisture content of dead fuel is an important variable in fire ignition and fire propagation. Moisture exchange in dead materials is controlled by physical processes, and is clearly dependent on atmospheric changes. According to projections of future climate in Southern Europe, changes in temperature, precipitation and extreme events are expected. More prolonged drought seasons could influence fuel moisture content and, consequently, the number of days characterized by high ignition danger in Mediterranean ecosystems. The low resolution of the climate data provided by the general circulation models (GCMs) represents a limitation for evaluating climate change impacts at local scale. For this reason, the climate research community has called to develop appropriate downscaling techniques. One of the downscaling approaches, which transform the raw outputs from the climate models (GCMs or RCMs) into data with more realistic structure, is based on linking a stochastic weather generator with the climate model outputs. Weather generators linked to climate change scenarios can therefore be used to create synthetic weather series (air temperature and relative humidity, wind speed and precipitation) representing present and future climates at local scale. The main aims of this work are to identify useful tools to determine potential impacts of expected climate change on dead fuel status in Mediterranean shrubland and, in particular, to estimate the effect of climate changes on the number of days characterized by critical values of dead fuel moisture. Measurements of dead fuel moisture content (FMC) in Mediterranean shrubland were performed by using humidity sensors in North Western Sardinia (Italy) for six years. Meteorological variables were also recorded. Data were used to determine the accuracy of the Canadian Fine Fuels Moisture Code (FFM code) in modelling moisture dynamics of dead fuel in Mediterranean vegetation. 
Critical threshold values of the FFM code for the Mediterranean climate were identified by percentile analysis, and new fuel moisture code classes were also defined. A stochastic weather generator (M&Rfi), linked to climate change scenarios derived from 17 available General Circulation Models (GCMs), was used to produce synthetic weather series, representing present and future climates, for four selected sites located in North Western Sardinia, Italy. The number of days with critical FFM code values under present and future climates was calculated, and the potential impact of future climate change was analysed.
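The percentile analysis used to set critical FFM code thresholds can be sketched as follows (a generic linear-interpolation percentile; the study's exact percentile convention and threshold values are not specified here):

```python
def percentile(values, p):
    """Percentile (0-100) of a list of values, with linear
    interpolation between adjacent sorted ranks."""
    xs = sorted(values)
    k = (len(xs) - 1) * p / 100.0
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

def critical_days(ffm_series, threshold):
    """Count days whose FFM code value reaches the critical threshold."""
    return sum(v >= threshold for v in ffm_series)
```

Applying `critical_days` to synthetic series for present and future climates gives the change in the number of high-danger days that the study quantifies.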
2 CFR 1.100 - Content of this title.
Code of Federal Regulations, 2009 CFR
2009-01-01
... 2 Grants and Agreements 1 2009-01-01 2009-01-01 false Content of this title. 1.100 Section 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of this title. This title contains— (a) Office...
2 CFR 1.100 - Content of this title.
Code of Federal Regulations, 2008 CFR
2008-01-01
... 2 Grants and Agreements 1 2008-01-01 2008-01-01 false Content of this title. 1.100 Section 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of this title. This title contains— (a) Office...
2 CFR 1.100 - Content of this title.
Code of Federal Regulations, 2005 CFR
2005-01-01
... 2 Grants and Agreements 1 2005-01-01 2005-01-01 false Content of this title. 1.100 Section 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of this title. This title contains— (a) Office...
2 CFR 1.100 - Content of this title.
Code of Federal Regulations, 2006 CFR
2006-01-01
... 2 Grants and Agreements 1 2006-01-01 2006-01-01 false Content of this title. 1.100 Section 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of this title. This title contains— (a) Office...
2 CFR 1.100 - Content of this title.
Code of Federal Regulations, 2007 CFR
2007-01-01
... 2 Grants and Agreements 1 2007-01-01 2007-01-01 false Content of this title. 1.100 Section 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of this title. This title contains— (a) Office...
A qualitative study on physicians' perceptions of specialty characteristics.
Park, Kwi Hwa; Jun, Soo-Koung; Park, Ie Byung
2016-09-01
There has been limited research on physicians' perceptions of the specialty characteristics needed to sustain a successful career in the medical specialties in Korea. Instruments such as the Medical Specialty Preference Inventory in the United States and the SCI59 (specialty choice inventory) in the United Kingdom are used to help medical students plan their careers. The purpose of this study was to explore the characteristics of the major specialties in Korea. Twelve physicians from different specialties participated in an exploratory study consisting of qualitative interviews about the personal ability and emotional characteristics and job attributes of each specialty. The collected data were analysed with content analysis methods. Twelve codes were extracted for ability & skill attributes, 23 codes for emotion & attitude attributes, and 12 codes for job attributes. Each specialty shows a distinct profile in terms of its characteristic attributes. The findings have implications for the design of career planning programs for medical students.
Selected DOE Headquarters publications, October 1977-September 1979
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
1979-11-01
This sixth issue of cumulative listings of DOE Headquarters publications covers the first two years of the Department's operation (October 1, 1977 - September 30, 1979). It lists two groups of publications issued by then-existing Headquarters organizations and provides an index to their title keywords. The two groups of publications are publications assigned a DOE/XXX-type report number code and Headquarters contractor reports prepared by contractors (and published by DOE) to describe research and development work they have performed for the Department. Certain publications are omitted. They include such items as pamphlets, fact sheets, bulletins, newsletters, and telephone directories, as well as headquarters publications issued under the DOE-tr (DOE translation) and CONF (conference proceedings) codes, and technical reports from the Jet Propulsion Laboratory and NASA issued under DOE/JPL and DOE/NASA codes. The contents of this issue will not be repeated in subsequent issues of DOE/AD-0010. (RWR)
Power Balance and Impurity Studies in TCS
NASA Astrophysics Data System (ADS)
Grossnickle, J. A.; Pietrzyk, Z. A.; Vlases, G. C.
2003-10-01
A "zero-dimension" power balance model was developed based on measurements of absorbed power, radiated power, absolute D_α, temperature, and density for the TCS device. Radiation was determined to be the dominant source of power loss for medium- to high-density plasmas. The total radiated power was strongly correlated with oxygen line radiation, suggesting that oxygen is the dominant radiating species; this was confirmed by doping studies, which also extrapolate to a carbon content below 1.5%. Determining the source of the impurities is an important question that must be answered for the TCS upgrade. Preliminary indications are that the primary sources of oxygen are the stainless steel end cones. A Ti gettering system is being installed to reduce this oxygen source. A field line code has been developed for use in tracking where open field lines terminate on the walls. Output from this code is also used to generate grids for an impurity tracking code.
A study on multiresolution lossless video coding using inter/intra frame adaptive prediction
NASA Astrophysics Data System (ADS)
Nakachi, Takayuki; Sawabe, Tomoko; Fujii, Tetsuro
2003-06-01
Lossless video coding is required in the fields of archiving and editing digital cinema or digital broadcasting contents. This paper combines a discrete wavelet transform and adaptive inter/intra-frame prediction in the wavelet transform domain to create multiresolution lossless video coding. The multiresolution structure offered by the wavelet transform facilitates interchange among several video source formats such as Super High Definition (SHD) images, HDTV, SDTV, and mobile applications. Adaptive inter/intra-frame prediction is an extension of JPEG-LS, a state-of-the-art lossless still image compression standard. Based on the image statistics of the wavelet transform domains in successive frames, inter/intra frame adaptive prediction is applied to the appropriate wavelet transform domain. This adaptation offers superior compression performance. This is achieved with low computational cost and no increase in additional information. Experiments on digital cinema test sequences confirm the effectiveness of the proposed algorithm.
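The intra-frame side of the prediction builds on JPEG-LS, whose core is the median edge detector (MED) predictor. A minimal sketch of that standard predictor (the paper's inter/intra adaptation in the wavelet domain is not reproduced here):

```python
def med_predict(a, b, c):
    """JPEG-LS median edge detector: predict a pixel from its left (a),
    above (b) and above-left (c) neighbours. When c suggests an edge it
    clamps to min(a, b) or max(a, b); otherwise it uses the planar
    estimate a + b - c."""
    if c >= max(a, b):
        return min(a, b)
    if c <= min(a, b):
        return max(a, b)
    return a + b - c
```

In a lossless codec only the prediction residual (actual minus predicted value) is entropy-coded, so a better predictor directly shrinks the compressed size without any loss.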
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sesigur, Haluk; Cili, Feridun
Seismic isolation is an effective design strategy to mitigate seismic hazard, wherein the structure and its contents are protected from the damaging effects of an earthquake. This paper presents the Hangar Project at Sabiha Goekcen Airport, located in Istanbul, Turkey. A seismic isolation system in which the isolation layer is arranged at the top of the columns was selected. The seismic hazard analysis, superstructure design, and isolator design and testing were based on the Uniform Building Code (1997) and met all requirements of the Turkish Earthquake Code (2007). The substructure, which has steel vertical trusses on the facades and RC H-shaped columns along the middle axis of the building, was designed with an R factor limited to 2.0 in accordance with the Turkish Earthquake Code. In order to verify the effectiveness of the isolation system, nonlinear static and dynamic analyses were performed. The analyses revealed that the isolated building has a base shear approximately one quarter that of the non-isolated structure.
The Consolidated Human Activity Database — Master Version (CHAD-Master) Technical Memorandum
This technical memorandum contains information about the Consolidated Human Activity Database -- Master version, including CHAD contents, inventory of variables: Questionnaire files and Event files, CHAD codes, and references.
The full mitochondrial genome sequence of Raillietina tetragona from chicken (Cestoda: Davaineidae).
Liang, Jian-Ying; Lin, Rui-Qing
2016-11-01
In the present study, the complete mitochondrial DNA (mtDNA) sequence of Raillietina tetragona was determined, and its gene content and genome organization were compared with those of other tapeworms. The complete mt genome sequence of R. tetragona is 14,444 bp in length. It contains 12 protein-coding genes, two ribosomal RNA genes, 22 transfer RNA genes, and two non-coding regions. All genes are transcribed in the same direction and have a nucleotide composition high in A and T; the A + T content of the complete mt genome is 71.4%. The R. tetragona mt genome sequence provides a novel mtDNA marker for studying the molecular epidemiology and population genetics of Raillietina and has implications for the molecular diagnosis of chicken cestodosis caused by Raillietina.
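The A + T content figure quoted above is a straightforward base count; a minimal sketch:

```python
def at_content(seq: str) -> float:
    """Fraction of A and T bases in a DNA sequence (case-insensitive)."""
    s = seq.upper()
    return (s.count("A") + s.count("T")) / len(s)
```

Run over the full 14,444 bp genome sequence, this would return approximately 0.714, matching the reported 71.4%.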
Female Sex Offenders' Relationship Experiences
Lawson, Louanne
2010-01-01
Interventions for child sexual abusers should take into account their perspectives on the context of their offenses, but no descriptions of everyday life from the offender's point of view have been published. This study therefore explored female offenders' views of their strengths and challenges. Documented risk assessments of 20 female offenders were analyzed using inductive content analysis (Cavanagh, 1997; Priest, Roberts & Woods, 2002; Woods, Priest & Roberts, 2002). The Good Lives Model provided the initial coding framework and Atlas/ti software (Muhr, 1997) was used for simultaneous data collection and analysis. The content analysis yielded 999 coding decisions organized in three themes. The global theme was relationship experiences. Offenders described the quality of their relationship experiences, including their personal perspectives, intimate relationships and social lives. These descriptions have implications for treatment planning and future research with women who have molested children. PMID:18624098
Hu, Guang-Fu; Liu, Xiang-Jiang; Li, Zhong; Liang, Hong-Wei; Hu, Shao-Na; Zou, Gui-Wei
2016-01-01
The complete mitochondrial genomes of Xingguo red carp (Cyprinus carpio var. singuonensis) and purse red carp (Cyprinus carpio var. wuyuanensis) were sequenced. Comparison of these two mitochondrial genomes revealed that the mtDNAs of the two common carp varieties are remarkably similar in genome length, gene order and content, and AT content. However, the two genomes differed at 39 sites: 2 in rRNAs, 3 in tRNAs, 3 in the control region, and 31 in protein-coding genes. The 31 variable bases in the protein-coding regions led to three variable amino acids, located mainly in the proteins ND5 and ND4.
NASA Astrophysics Data System (ADS)
Dimopoulos, Kostas; Koulaidis, Vasilis; Sklaveniti, Spyridoula
2003-04-01
This paper aims at presenting the application of a grid for the analysis of the pedagogic functions of visual images included in school science textbooks and daily press articles about science and technology. The analysis is made using the dimensions of content specialisation (classification) and social-pedagogic relationships (framing) promoted by the images as well as the elaboration and abstraction of the corresponding visual code (formality), thus combining pedagogical and socio-semiotic perspectives. The grid is applied to the analysis of 2819 visual images collected from school science textbooks and another 1630 visual images additionally collected from the press. The results show that the science textbooks in comparison to the press material: a) use ten times more images, b) use more images so as to familiarise their readers with the specialised techno-scientific content and codes, and c) tend to create a sense of higher empowerment for their readers by using the visual mode. Furthermore, as the educational level of the school science textbooks (i.e., from primary to lower secondary level) rises, the content specialisation projected by the visual images and the elaboration and abstraction of the corresponding visual code also increases. The above results have implications for the terms and conditions for the effective exploitation of visual material as the educational level rises as well as for the effective incorporation of visual images from press material into science classes.
NullSeq: A Tool for Generating Random Coding Sequences with Desired Amino Acid and GC Contents.
Liu, Sophia S; Hockenberry, Adam J; Lancichinetti, Andrea; Jewett, Michael C; Amaral, Luís A N
2016-11-01
The existence of over- and under-represented sequence motifs in genomes provides evidence of selective evolutionary pressures on biological mechanisms such as transcription, translation, ligand-substrate binding, and host immunity. In order to accurately identify motifs and other genome-scale patterns of interest, it is essential to be able to generate accurate null models that are appropriate for the sequences under study. While many tools have been developed to create random nucleotide sequences, protein coding sequences are subject to a unique set of constraints that complicates the process of generating appropriate null models. There are currently no tools available that allow users to create random coding sequences with specified amino acid composition and GC content for the purpose of hypothesis testing. Using the principle of maximum entropy, we developed a method that generates unbiased random sequences with pre-specified amino acid and GC content, which we have developed into a python package. Our method is the simplest way to obtain maximally unbiased random sequences that are subject to GC usage and primary amino acid sequence constraints. Furthermore, this approach can easily be expanded to create unbiased random sequences that incorporate more complicated constraints such as individual nucleotide usage or even di-nucleotide frequencies. The ability to generate correctly specified null models will allow researchers to accurately identify sequence motifs which will lead to a better understanding of biological processes as well as more effective engineering of biological systems.
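As a rough illustration of the sampling problem NullSeq addresses, the sketch below draws synonymous codons for a fixed protein with weights exp(lam * GC(codon)). The codon table here is deliberately partial and the fixed lam is arbitrary; a maximum-entropy method such as NullSeq's solves for the weights so that the expected GC content matches the target exactly:

```python
import math
import random

# Partial codon table for illustration only; a real tool uses the full
# standard genetic code.
CODONS = {
    "M": ["ATG"],
    "K": ["AAA", "AAG"],
    "G": ["GGA", "GGC", "GGG", "GGT"],
    "L": ["TTA", "TTG", "CTA", "CTC", "CTG", "CTT"],
}

def gc_count(codon: str) -> int:
    """Number of G or C bases in a codon."""
    return sum(base in "GC" for base in codon)

def sample_sequence(protein: str, lam: float, seed: int = 0) -> str:
    """Sample one synonymous codon per residue with weight exp(lam * GC(codon)).
    lam > 0 biases toward GC-rich codons, lam < 0 toward AT-rich ones."""
    rng = random.Random(seed)
    out = []
    for aa in protein:
        codons = CODONS[aa]
        weights = [math.exp(lam * gc_count(c)) for c in codons]
        out.append(rng.choices(codons, weights=weights)[0])
    return "".join(out)

seq = sample_sequence("MKGL", lam=2.0)
print(len(seq))  # → 12: one codon per residue
```

With lam tuned (rather than fixed by hand), repeated sampling yields an unbiased null ensemble of coding sequences at the desired GC content.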
Cranwell, Jo; Britton, John; Bains, Manpreet
2017-02-01
The purpose of the present study is to describe the portrayal of alcohol content in popular YouTube music videos. We used inductive thematic analysis to explore the lyrics and visual imagery in 49 UK Top 40 songs and music videos previously found to contain alcohol content and watched by many British adolescents aged between 11 and 18 years, and to examine whether branded content contravened alcohol industry advertising codes of practice. The analysis generated three themes. First, alcohol content was associated with sexualised imagery or lyrics and the objectification of women. Second, alcohol was associated with image, lifestyle and sociability. Finally, some videos showed alcohol overtly encouraging excessive drinking and drunkenness, including those containing branding, with no negative consequences to the drinker. Our results suggest that YouTube music videos promote positive associations with alcohol use. Further, several alcohol companies adopt marketing strategies in the video medium that are entirely inconsistent with their own or others' agreed advertising codes of practice. We conclude that, as a harm reduction measure, policies should change to prevent adolescent exposure to the positive promotion of alcohol and alcohol branding in music videos.
Generating code adapted for interlinking legacy scalar code and extended vector code
Gschwind, Michael K
2013-06-04
Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.
Quantum internet using code division multiple access
Zhang, Jing; Liu, Yu-xi; Özdemir, Şahin Kaya; Wu, Re-Bing; Gao, Feifei; Wang, Xiang-Bin; Yang, Lan; Nori, Franco
2013-01-01
A crucial open problem in large-scale quantum networks is how to efficiently transmit quantum data among many pairs of users via a common data-transmission medium. We propose a solution by developing a quantum code division multiple access (q-CDMA) approach in which quantum information is chaotically encoded to spread its spectral content, and then decoded via chaos synchronization to separate different sender-receiver pairs. In comparison to other existing approaches, such as frequency division multiple access (FDMA), the proposed q-CDMA can greatly increase the information rates per channel used, especially for very noisy quantum channels. PMID:23860488
FLUSH: A tool for the design of slush hydrogen flow systems
NASA Technical Reports Server (NTRS)
Hardy, Terry L.
1990-01-01
As part of the National Aerospace Plane Project an analytical model was developed to perform calculations for in-line transfer of solid-liquid mixtures of hydrogen. This code, called FLUSH, calculates pressure drop and solid fraction loss for the flow of slush hydrogen through pipe systems. The model solves the steady-state, one-dimensional equation of energy to obtain slush loss estimates. A description of the code is provided as well as a guide for users of the program. Preliminary results are also presented showing the anticipated degradation of slush hydrogen solid content for various piping systems.
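FLUSH itself solves the steady-state, one-dimensional energy equation for a solid-liquid hydrogen mixture. As a hedged illustration of the kind of relation such a flow code builds on, here is the classical single-phase Darcy-Weisbach pressure drop, with made-up numbers rather than slush-hydrogen properties from FLUSH:

```python
def pressure_drop(f: float, length: float, diameter: float,
                  density: float, velocity: float) -> float:
    """Darcy-Weisbach pressure drop (Pa) for steady flow in a pipe.
    f: Darcy friction factor (dimensionless); length, diameter in m;
    density in kg/m^3; velocity in m/s."""
    return f * (length / diameter) * density * velocity**2 / 2.0

# Illustrative inputs only (not slush-hydrogen data).
dp = pressure_drop(f=0.02, length=10.0, diameter=0.05, density=80.0, velocity=3.0)
print(round(dp, 1))  # → 1440.0 Pa
```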
Austin, Christopher M; Tan, Mun Hua; Lee, Yin Peng; Croft, Laurence J; Meekan, Mark G; Pierce, Simon J; Gan, Han Ming
2016-01-01
The complete mitochondrial genome of the parasitic copepod Pandarus rhincodonicus was obtained from a partial genome scan using the HiSeq sequencing system. The Pandarus rhincodonicus mitogenome has 14,480 base pairs (62% A+T content) made up of 12 protein-coding genes, 2 ribosomal subunit genes, 22 transfer RNAs, and a putative 384 bp non-coding AT-rich region. This Pandarus mitogenome sequence is the first for the family Pandaridae, the second for the order Siphonostomatoida and the sixth for the Copepoda.
Building an effective corporate compliance plan.
Ryan, E
1997-09-01
Corporate compliance plans are essential for healthcare organizations to cope with, and perhaps even stave off, investigations arising from allegations of illegal business practices. Initial development and implementation of a corporate compliance plan can be facilitated through four steps: determining the content of the code of conduct, determining how the code will be distributed, assigning responsibility for implementing the plan, and appointing a compliance task force to guide the implementation process. Special attention should be paid to education requirements of the United States Sentencing Guidelines to see that all employees understand and can apply provisions of the plan.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hirayama, Hideo; Namito, Yoshihito; /KEK, Tsukuba
2005-12-20
In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application brings new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial invocation than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet include as much as possible the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary.
With the release of the EGS4 version, a deliberate attempt was made to present example problems in order to help the user "get started", and we follow that spirit in this report. A series of elementary tutorial user codes are presented in Chapter 3, with more sophisticated sample user codes described in Chapter 4. Novice EGS users will find it helpful to read through the initial sections of the EGS5 User Manual (provided in Appendix B of this report), proceeding then to work through the tutorials in Chapter 3. The User Manuals and other materials found in the appendices contain detailed flow charts, variable lists, and subprogram descriptions of EGS5 and PEGS. Included are step-by-step instructions for developing basic EGS5 user codes and for accessing all of the physics options available in EGS5 and PEGS. Once acquainted with the basic structure of EGS5, users should find the appendices the most frequently consulted sections of this report.
Dual coding: a cognitive model for psychoanalytic research.
Bucci, W
1985-01-01
Four theories of mental representation derived from current experimental work in cognitive psychology have been discussed in relation to psychoanalytic theory. These are: verbal mediation theory, in which language determines or mediates thought; perceptual dominance theory, in which imagistic structures are dominant; common code or propositional models, in which all information, perceptual or linguistic, is represented in an abstract, amodal code; and dual coding, in which nonverbal and verbal information are each encoded, in symbolic form, in separate systems specialized for such representation, and connected by a complex system of referential relations. The weight of current empirical evidence supports the dual code theory. However, psychoanalysis has implicitly accepted a mixed model-perceptual dominance theory applying to unconscious representation, and verbal mediation characterizing mature conscious waking thought. The characterization of psychoanalysis, by Schafer, Spence, and others, as a domain in which reality is constructed rather than discovered, reflects the application of this incomplete mixed model. The representations of experience in the patient's mind are seen as without structure of their own, needing to be organized by words, thus vulnerable to distortion or dissolution by the language of the analyst or the patient himself. In these terms, hypothesis testing becomes a meaningless pursuit; the propositions of the theory are no longer falsifiable; the analyst is always more or less "right." This paper suggests that the integrated dual code formulation provides a more coherent theoretical framework for psychoanalysis than the mixed model, with important implications for theory and technique. In terms of dual coding, the problem is not that the nonverbal representations are vulnerable to distortion by words, but that the words that pass back and forth between analyst and patient will not affect the nonverbal schemata at all. 
Using the dual code formulation, and applying an investigative methodology derived from experimental cognitive psychology, a new approach to the verification of interpretations is possible. Some constructions of a patient's story may be seen as more accurate than others, by virtue of their linkage to stored perceptual representations in long-term memory. We can demonstrate that such linking has occurred in functional or operational terms--through evaluating the representation of imagistic content in the patient's speech.
Patient health record on a smart card.
Naszlady, A; Naszlady, J
1998-02-01
A validated health questionnaire has been used for the documentation of a patient's history (826 items) and of the findings from physical examination (591 items) in our clinical ward for 25 years. This computerized patient record has been completed in EUCLIDES code (CEN TC/251) for laboratory tests and an ATC and EAN code listing for the names of the drugs permanently required by the patient. In addition, emergency data were also included on an EEPROM chipcard with a 24 kb capacity. The program is written in FOX-PRO language. A group of 5000 chronically ill in-patients received these cards which contain their health data. For security reasons the contents of the smart card are only accessible by a doctor's PIN coded key card. The personalization of each card was carried out in our health center and the depersonalized alphanumeric data were collected for further statistical evaluation. This information served as a basis for a real need assessment of health care and for the calculation of its cost. Combined with an optical card, a completely paperless electronic patient record system has been developed containing all three information carriers in medicine: Texts, Curves and Pictures.
Face Coding Is Bilateral in the Female Brain
Proverbio, Alice Mado; Riva, Federica; Martin, Eleonora; Zani, Alberto
2010-01-01
Background: It is currently believed that face processing predominantly activates the right hemisphere in humans, but available literature is very inconsistent. Methodology/Principal Findings: In this study, ERPs were recorded in 50 right-handed women and men in response to 390 faces (of different age and sex), and 130 technological objects. Results showed no sex difference in the amplitude of N170 to objects; a much larger face-specific response over the right hemisphere in men, and a bilateral response in women; a lack of face-age coding effect over the left hemisphere in men, with no differences in N170 to faces as a function of age; a significant bilateral face-age coding effect in women. Conclusions/Significance: LORETA reconstruction showed a significant left and right asymmetry in the activation of the fusiform gyrus (BA19), in women and men, respectively. The present data reveal a lesser degree of lateralization of brain functions related to face coding in women than in men. In this light, they may provide an explanation of the inconsistencies in the available literature concerning the asymmetric activity of left and right occipito-temporal cortices devoted to face perception during processing of face identity, structure, familiarity or affective content. PMID:20574528
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder.
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed self-supervised video hashing (SSVH), which is able to capture the temporal nature of videos in an end-to-end learning to hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary auto-encoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world data sets show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the current best performance on the task of unsupervised video retrieval.
Face coding is bilateral in the female brain.
Proverbio, Alice Mado; Riva, Federica; Martin, Eleonora; Zani, Alberto
2010-06-21
It is currently believed that face processing predominantly activates the right hemisphere in humans, but available literature is very inconsistent. In this study, ERPs were recorded in 50 right-handed women and men in response to 390 faces (of different age and sex), and 130 technological objects. Results showed no sex difference in the amplitude of N170 to objects; a much larger face-specific response over the right hemisphere in men, and a bilateral response in women; a lack of face-age coding effect over the left hemisphere in men, with no differences in N170 to faces as a function of age; a significant bilateral face-age coding effect in women. LORETA reconstruction showed a significant left and right asymmetry in the activation of the fusiform gyrus (BA19), in women and men, respectively. The present data reveal a lesser degree of lateralization of brain functions related to face coding in women than men. In this light, they may provide an explanation of the inconsistencies in the available literature concerning the asymmetric activity of left and right occipito-temporal cortices devoted to face perception during processing of face identity, structure, familiarity or affective content.
Self-Supervised Video Hashing With Hierarchical Binary Auto-Encoder
NASA Astrophysics Data System (ADS)
Song, Jingkuan; Zhang, Hanwang; Li, Xiangpeng; Gao, Lianli; Wang, Meng; Hong, Richang
2018-07-01
Existing video hash functions are built on three isolated stages: frame pooling, relaxed learning, and binarization, which have not adequately explored the temporal order of video frames in a joint binary optimization model, resulting in severe information loss. In this paper, we propose a novel unsupervised video hashing framework dubbed Self-Supervised Video Hashing (SSVH), that is able to capture the temporal nature of videos in an end-to-end learning-to-hash fashion. We specifically address two central problems: 1) how to design an encoder-decoder architecture to generate binary codes for videos; and 2) how to equip the binary codes with the ability of accurate video retrieval. We design a hierarchical binary autoencoder to model the temporal dependencies in videos with multiple granularities, and embed the videos into binary codes with less computations than the stacked architecture. Then, we encourage the binary codes to simultaneously reconstruct the visual content and neighborhood structure of the videos. Experiments on two real-world datasets (FCVID and YFCC) show that our SSVH method can significantly outperform the state-of-the-art methods and achieve the currently best performance on the task of unsupervised video retrieval.
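The retrieval side of binary video hashing can be illustrated independently of SSVH's learned encoder: binarize embeddings and rank candidates by Hamming distance. The random embeddings below merely stand in for an encoder's output; SSVH learns the codes end-to-end rather than thresholding after the fact like this sketch does:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy real-valued "video embeddings" (5 videos, 16 dimensions) standing in
# for an encoder's output.
embeddings = rng.standard_normal((5, 16))

# Binarize by sign into 0/1 codes.
codes = (embeddings > 0).astype(np.uint8)

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two equal-length binary codes."""
    return int(np.count_nonzero(a != b))

# Rank the database by Hamming distance to a query (item 0 here).
query = codes[0]
dists = [hamming(query, c) for c in codes]
order = np.argsort(dists)
print(int(order[0]))  # → 0: the query itself, at distance 0
```

Binary codes make large-scale retrieval cheap because Hamming distance reduces to bitwise operations, which is the practical motivation for hashing video content at all.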
Schiff, Elad; Ben-Arye, Eran; Shilo, Margalit; Levy, Moti; Schachter, Leora; Weitchner, Na'ama; Golan, Ofra; Stone, Julie
2011-02-01
Recently, ethical guidelines regarding safe touch in CAM were developed in Israel. Publishing ethical codes does not imply that they will actually help practitioners to meet ethical care standards. The effectiveness of ethical rules depends on familiarity with the code and its content. In addition, critical self-examination of the code by individual members of the profession is required to reflect on the moral commitments encompassed in the code. For the purpose of dynamic self-appraisal, we devised a survey to assess how CAM practitioners view the suggested ethical guidelines for safe touch. We surveyed 781 CAM practitioners regarding their perspectives on the safe-touch code. There was a high level of agreement with general statements regarding ethics pertaining to safe touch, with a mean rate of agreement of 4.61 out of a maximum of 5. Practitioners concurred substantially with practice guidelines for appropriate touch, with a mean rate of agreement of 4.16 out of a maximum of 5. Attitudes toward the necessity to touch intimate areas for treatment purposes varied: 78.6% of respondents strongly disagreed with any notion of a need to touch intimate areas during treatment, 7.9% neither disagreed nor agreed, 7.9% slightly agreed, and 7.6% strongly agreed with the need for touching intimate areas during treatment. There was a direct correlation between disagreement with touching intimate areas for therapeutic purposes and agreement with general statements regarding the ethics of safe touch (Spearman r=0.177, p<0.0001), and practice guidelines for appropriate touch (r=0.092, p=0.012). A substantial number of practitioners agreed with the code, although some findings regarding the perceived need to touch intimate areas during treatment were disturbing. Our findings can serve as a basis for ethical code development and implementation, as well as for educating CAM practitioners on the ethics of touch. Copyright © 2010 Elsevier Ltd. All rights reserved.
Gutiérrez, Verónica; Rego, Natalia; Naya, Hugo; García, Graciela
2015-10-28
Among teleosts, the South American genus Austrolebias (Cyprinodontiformes: Rivulidae) includes 42 taxa of annual fishes divided into five different species groups. It is a monophyletic genus, but morphological and molecular data do not resolve the relationships among intrageneric clades, and high rates of substitution have been previously described in some mitochondrial genes. In this work, the complete mitogenome of a species of the genus was determined for the first time. We determined its structure, gene order and peculiar evolutionary features, which will allow us to evaluate the performance of mitochondrial genes in phylogenetic resolution at different taxonomic levels. Regarding gene content and order, the circular mitogenome of A. charrua (17,271 bp) presents the typical pattern of vertebrate mitogenomes. It contains the full complement of 13 protein-coding genes, 22 tRNAs, 2 rRNAs and one non-coding control region. Notably, the tRNA-Cys was only 57 bp in length and lacks the D-loop arm. In three full-sibling individuals, a heteroplasmic condition was detected due to a total of 12 variable sites in seven protein-coding genes. Among cyprinodontiforms, the mitogenome of A. charrua exhibits the lowest G+C content (37%) and GC skew, as well as the highest strand asymmetry, with a net difference of T over A at 1st and 3rd codon positions. Considering the 12 coding genes of the H strand, correspondence analyses of nucleotide composition and codon usage show that A and T at 1st and 3rd codon positions have the highest weight in the first axis, and segregate annual species from the other cyprinodontiforms analyzed. Given the annual life-style, their mitogenomes could be under different selective pressures. All 13 protein-coding genes are under strong purifying selection, and we did not find any significant evidence of nucleotide sites showing episodic selection (dN > dS) in annual lineages.
When fast evolving third codon positions were removed from alignments, the "supergene" tree recovers our reference species phylogeny as well as the Cytb, ND4L and ND6 genes. Therefore, third codon positions seem to be saturated in the aforementioned coding regions at intergeneric Cyprinodontiformes comparisons. The complete mitogenome obtained in present work, offers relevant data for further comparative studies on molecular phylogeny and systematics of this taxonomic controversial endemic genus of annual fishes.
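The G+C content and GC-skew statistics quoted above reduce to simple base counts. A generic sketch using the standard definition skew = (G − C)/(G + C), not this study's own analysis code:

```python
def gc_stats(seq: str):
    """Return (G+C fraction, GC skew) for a nucleotide sequence,
    where skew = (G - C) / (G + C)."""
    seq = seq.upper()
    g, c = seq.count("G"), seq.count("C")
    gc_frac = (g + c) / len(seq)
    skew = (g - c) / (g + c) if (g + c) else 0.0
    return gc_frac, skew

# Toy sequence: 2 G, 1 C out of 6 bases.
frac, skew = gc_stats("ATGGCA")
print(round(frac, 2), round(skew, 2))
```

A negative skew indicates an excess of C over G on the strand analyzed; codon-position-specific counts like those in the abstract simply apply the same tally to every first or third base of each codon.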
Portal of medical data models: information infrastructure for medical research and healthcare.
Dugas, Martin; Neuhaus, Philipp; Meidt, Alexandra; Doods, Justin; Storck, Michael; Bruland, Philipp; Varghese, Julian
2016-01-01
Information systems are a key success factor for medical research and healthcare. Currently, most of these systems apply heterogeneous and proprietary data models, which impede data exchange and integrated data analysis for scientific purposes. Due to the complexity of medical terminology, the overall number of medical data models is very high. At present, the vast majority of these models are not available to the scientific community. The objective of the Portal of Medical Data Models (MDM, https://medical-data-models.org) is to foster sharing of medical data models. MDM is a registered European information infrastructure. It provides a multilingual platform for exchange and discussion of data models in medicine, both for medical research and healthcare. The system is developed in collaboration with the University Library of Münster to ensure sustainability. A web front-end enables users to search, view, download and discuss data models. Eleven different export formats are available (ODM, PDF, CDA, CSV, MACRO-XML, REDCap, SQL, SPSS, ADL, R, XLSX). MDM contents were analysed with descriptive statistics. MDM contains 4387 current versions of data models (in total 10,963 versions). 2475 of these models belong to oncology trials. The most common keyword (n = 3826) is 'Clinical Trial'; most frequent diseases are breast cancer, leukemia, lung and colorectal neoplasms. Most common languages of data elements are English (n = 328,557) and German (n = 68,738). Semantic annotations (UMLS codes) are available for 108,412 data items, 2453 item groups and 35,361 code list items. Overall 335,087 UMLS codes are assigned with 21,847 unique codes. Few UMLS codes are used several thousand times, but there is a long tail of rarely used codes in the frequency distribution. 
Expected benefits of the MDM portal are improved and accelerated design of medical data models by sharing best practice, more standardised data models with semantic annotation and better information exchange between information systems, in particular Electronic Data Capture (EDC) and Electronic Health Records (EHR) systems. Contents of the MDM portal need to be further expanded to reach broad coverage of all relevant medical domains. Database URL: https://medical-data-models.org. © The Author(s) 2016. Published by Oxford University Press.
Tucker, Carole A; Escorpizo, Reuben; Cieza, Alarcos; Lai, Jin Shei; Stucki, Gerold; Ustun, T. Bedirhan; Kostanjsek, Nenad; Cella, David; Forrest, Christopher B.
2014-01-01
Background: The Patient Reported Outcomes Measurement Information System (PROMIS®) is a U.S. National Institutes of Health initiative that has produced self-reported item banks for physical, mental, and social health. Objective: To describe the content of PROMIS at the item level using the World Health Organization's International Classification of Functioning, Disability and Health (ICF). Methods: All PROMIS adult items (publicly available as of 2012) were assigned to relevant ICF concepts. The content of the PROMIS adult item banks was then described using the mapped ICF code descriptors. Results: The 1006 items in the PROMIS instruments could all be mapped to ICF concepts at the second level of classification, with the exception of 3 items of global or general health that mapped across the first-level classification of the ICF activity and participation component (d categories). Individual PROMIS item banks mapped to 1 to 5 separate ICF codes, indicating one-to-one, one-to-many and many-to-one mappings between PROMIS item banks and ICF second-level classification codes. PROMIS supports measurement of the majority of major concepts in the ICF Body Functions (b) and Activity & Participation (d) components using PROMIS item banks or subsets of PROMIS items that could, with care, be used to develop customized instruments. Given that the focus of PROMIS is on the measurement of person health outcomes, concepts in body structures (s) and some body functions (b), as well as many ICF environmental factors, have minimal coverage in PROMIS. Discussion: The PROMIS-ICF mapped items provide a basis for users to evaluate the ICF-related content of specific PROMIS instruments, and to select PROMIS instruments in ICF-based measurement applications. PMID:24760532
Patient complaints in healthcare systems: a systematic review and coding taxonomy
Reader, Tom W; Gillespie, Alex; Roberts, Jane
2014-01-01
Background: Patient complaints have been identified as a valuable resource for monitoring and improving patient safety. This article critically reviews the literature on patient complaints, and synthesises the research findings to develop a coding taxonomy for analysing patient complaints. Methods: The PubMed, Science Direct and Medline databases were systematically investigated to identify patient complaint research studies. Publications were included if they reported primary quantitative data on the content of patient-initiated complaints. Data were extracted and synthesised on (1) basic study characteristics; (2) methodological details; and (3) the issues patients complained about. Results: 59 studies, reporting 88,069 patient complaints, were included. Patient complaint coding methodologies varied considerably (eg, in attributing single or multiple causes to complaints). In total, 113,551 issues were found to underlie the patient complaints. These were analysed using 205 different analytical codes which, when combined, represented 29 subcategories of complaint issue. The most common issues complained about were 'treatment' (15.6%) and 'communication' (13.7%). To develop a patient complaint coding taxonomy, the subcategories were thematically grouped into seven categories, and then three conceptually distinct domains. The first domain related to complaints on the safety and quality of clinical care (representing 33.7% of complaint issues), the second to the management of healthcare organisations (35.1%) and the third to problems in healthcare staff-patient relationships (29.1%). Conclusions: Rigorous analyses of patient complaints will help to identify problems in patient safety. To achieve this, it is necessary to standardise how patient complaints are analysed and interpreted. Through synthesising data from 59 patient complaint studies, we propose a coding taxonomy for supporting future research and practice in the analysis of patient complaint data. PMID:24876289
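Rolling analytical codes up into higher-level taxonomy categories, as done at much larger scale in this review, reduces to counting. The category mapping below is purely hypothetical for illustration, not the published 29-subcategory taxonomy:

```python
from collections import Counter

# Hypothetical code-to-category mapping; the real taxonomy groups 29
# subcategories into 7 categories across 3 domains.
CATEGORY = {
    "delay": "institutional issues",
    "treatment": "clinical",
    "communication": "relationships",
    "billing": "institutional issues",
}

# Toy list of coded complaint issues.
complaints = ["treatment", "communication", "treatment", "billing", "delay"]

tally = Counter(CATEGORY[c] for c in complaints)
total = sum(tally.values())
for cat, n in tally.most_common():
    print(f"{cat}: {100 * n / total:.1f}%")
```

The same roll-up, applied to 113,551 coded issues, yields the domain percentages reported in the abstract.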
Tanana, Michael; Hallgren, Kevin A; Imel, Zac E; Atkins, David C; Srikumar, Vivek
2016-06-01
Motivational interviewing (MI) is an efficacious treatment for substance use disorders and other problem behaviors. Studies on MI fidelity and mechanisms of change typically use human raters to code therapy sessions, which requires considerable time, training, and financial costs. Natural language processing and machine learning techniques have recently been utilized for coding MI sessions in place of human coders, and preliminary results suggest these methods hold promise. The current study extends this previous work by introducing two natural language processing models for automatically coding MI sessions via computer. The two models differ in the way they semantically represent session content, utilizing either (1) simple discrete sentence features (DSF model) or (2) more complex recursive neural networks (RNN model). Utterance- and session-level predictions from these models were compared to ratings provided by human coders using a large sample of MI sessions (N=341 sessions; 78,977 clinician and client talk turns) from 6 MI studies. Results show that the DSF model generally had slightly better performance than the RNN model. The DSF model had "good" or higher utterance-level agreement with human coders (Cohen's kappa>0.60) for open and closed questions, affirm, giving information, and follow/neutral (all therapist codes); considerably higher agreement was obtained for session-level indices, and many estimates were competitive with human-to-human agreement. However, there was poor agreement for client change talk, client sustain talk, and therapist MI-inconsistent behaviors. Natural language processing methods provide accurate representations of human-derived behavioral codes and could offer substantial improvements to the efficiency and scale with which MI mechanisms of change research and fidelity monitoring are conducted. Copyright © 2016 Elsevier Inc. All rights reserved.
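The utterance-level agreement statistic cited in this abstract, Cohen's kappa, can be illustrated with a short sketch (illustrative only; the coder labels and data below are invented, not drawn from the study):

```python
# Illustrative computation of Cohen's kappa, the chance-corrected
# agreement statistic used for the utterance-level comparisons above.
# The labels and data below are invented, not the study's.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters' label sequences."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected if both raters labelled independently at these rates.
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n)
                   for lab in set(rater_a) | set(rater_b))
    return (observed - expected) / (1 - expected)

# A human coder vs. a hypothetical model, over six talk turns:
human = ["open_q", "affirm", "open_q", "closed_q", "affirm", "open_q"]
model = ["open_q", "affirm", "closed_q", "closed_q", "affirm", "open_q"]
print(round(cohens_kappa(human, model), 2))  # → 0.75
```

Here five of six turns agree (observed agreement 0.83), but kappa discounts the 0.33 agreement expected by chance, yielding 0.75, above the >0.60 "good" threshold the abstract mentions.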
Hu, Jianwei; Gauld, Ian C.
2014-12-01
The U.S. Department of Energy’s Next Generation Safeguards Initiative Spent Fuel (NGSI-SF) project is nearing the final phase of developing several advanced nondestructive assay (NDA) instruments designed to measure spent nuclear fuel assemblies for the purpose of improving nuclear safeguards. Current efforts are focusing on calibrating several of these instruments with spent fuel assemblies at two international spent fuel facilities. Modelling and simulation is expected to play an important role in predicting nuclide compositions, neutron and gamma source terms, and instrument responses in order to inform the instrument calibration procedures. As part of the NGSI-SF project, this work was carried out to assess the impacts of uncertainties in the nuclear data used in the calculations of spent fuel content, radiation emissions and instrument responses. Nuclear data is an essential part of nuclear fuel burnup and decay codes and nuclear transport codes. Such codes are routinely used for analysis of spent fuel and NDA safeguards instruments. Hence, the uncertainties existing in the nuclear data used in these codes affect the accuracies of such analysis. In addition, nuclear data uncertainties represent the limiting (smallest) uncertainties that can be expected from nuclear code predictions, and therefore define the highest attainable accuracy of the NDA instrument. This work studies the impacts of nuclear data uncertainties on calculated spent fuel nuclide inventories and the associated NDA instrument response. Recently developed methods within the SCALE code system are applied in this study. The Californium Interrogation with Prompt Neutron instrument was selected to illustrate the impact of these uncertainties on NDA instrument response.
Booth, Chelsea L
2014-09-01
The Research Prioritization Task Force of the National Action Alliance for Suicide Prevention conducted a stakeholder survey including 716 respondents from 49 U.S. states and 18 foreign countries. The aim was to conduct a qualitative analysis of responses from individuals representing four main stakeholder groups: attempt and loss survivors, researchers, providers, and policy/administrators. This article presents a qualitative analysis of the early-round, open-ended responses collected in a modified online Delphi process and, as an illustration of the research method, focuses on respondents' views of the role of life and emotional skills in suicide prevention. Content analysis was performed using both inductive and deductive code and category development and systematic qualitative methods. After the inductive coding was completed, the same data set was re-coded using the 12 Aspirational Goals (AGs) identified by the Delphi process. Codes and thematic categories produced from the inductive coding process were, in some cases, very similar or identical to the 12 AGs (i.e., those dealing with risk and protective factors, provider training, preventing reattempts, and stigma). Other codes highlighted areas that were not identified as important in the Delphi process (e.g., cultural/social factors of suicide, substance use). Qualitative and mixed-methods research are essential to the future of suicide prevention work. By design, qualitative research is explorative and appropriate for complex, culturally embedded social issues such as suicide. Such research can be used to generate hypotheses for testing and, as in this analysis, illuminate areas that would be missed in an approach that imposed predetermined categories on data. Published by Elsevier Inc.
Yancey, Antronette K; Cole, Brian L; Brown, Rochelle; Williams, Jerome D; Hillier, Amy; Kline, Randolph S; Ashe, Marice; Grier, Sonya A; Backman, Desiree; McCarthy, William J
2009-03-01
Commercial marketing is a critical but understudied element of the sociocultural environment influencing Americans' food and beverage preferences and purchases. This marketing also likely influences the utilization of goods and services related to physical activity and sedentary behavior. A growing literature documents the targeting of racial/ethnic and income groups in commercial advertisements in magazines, on billboards, and on television that may contribute to sociodemographic disparities in obesity and chronic disease risk and protective behaviors. This article examines whether African Americans, Latinos, and people living in low-income neighborhoods are disproportionately exposed to advertisements for high-calorie, low nutrient-dense foods and beverages and for sedentary entertainment and transportation and are relatively underexposed to advertising for nutritious foods and beverages and goods and services promoting physical activities. Outdoor advertising density and content were compared in zip code areas selected to offer contrasts by area income and ethnicity in four cities: Los Angeles, Austin, New York City, and Philadelphia. Large variations were observed in the amount, type, and value of advertising in the selected zip code areas. Living in an upper-income neighborhood, regardless of its residents' predominant ethnicity, is generally protective against exposure to most types of obesity-promoting outdoor advertising (food, fast food, sugary beverages, sedentary entertainment, and transportation). The density of advertising varied by zip code area race/ethnicity, with African American zip code areas having the highest advertising densities, Latino zip code areas having slightly lower densities, and white zip code areas having the lowest densities. The potential health and economic implications of differential exposure to obesity-related advertising are substantial. 
Although substantive legal questions remain about the government's ability to regulate advertising, the success of limiting tobacco advertising offers lessons for reducing the marketing contribution to the obesigenicity of urban environments.
Chen, Zhi-Teng; Du, Yu-Zhou
2015-03-01
The complete mitochondrial genome of the stonefly, Sweltsa longistyla Wu (Plecoptera: Chloroperlidae), was sequenced in this study. The mitogenome of S. longistyla is 16,151bp and contains 37 genes including 13 protein-coding genes (PCGs), 22 tRNA genes, two rRNA genes, and a large non-coding region. S. longistyla, Pteronarcys princeps Banks, Kamimuria wangi Du and Cryptoperla stilifera Sivec belong to the Plecoptera, and the gene order and orientation of their mitogenomes were similar. The overall AT content for the four stoneflies was below 72%, and the AT content of tRNA genes was above 69%. The four genomes were compact and contained only 65-127bp of non-coding intergenic DNAs. Overlapping nucleotides existed in all four genomes and ranged from 24 (P. princeps) to 178bp (K. wangi). There was a 7-bp motif ('ATGATAA') of overlapping DNA and an 8-bp motif (AAGCCTTA) conserved in three stonefly species (P. princeps, K. wangi and C. stilifera). The control regions of four stoneflies contained a stem-loop structure. Four conserved sequence blocks (CSBs) were present in the A+T-rich regions of all four stoneflies. Copyright © 2014 Elsevier B.V. All rights reserved.
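The AT-content figures reported in this abstract come from a simple base-frequency calculation, which can be sketched as follows (illustrative only; the example input is the 7-bp overlap motif quoted above, not a full mitogenome):

```python
# Minimal sketch of an AT-content calculation like the one reported
# above (illustrative; not the authors' analysis pipeline).
def at_content(seq):
    """Percentage of A and T bases in a nucleotide sequence."""
    seq = seq.upper()
    return 100.0 * sum(base in "AT" for base in seq) / len(seq)

# The 7-bp overlap motif quoted in the abstract:
print(round(at_content("ATGATAA"), 1))  # → 85.7
```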
Living Organisms Author Their Read-Write Genomes in Evolution.
Shapiro, James A
2017-12-06
Evolutionary variations generating phenotypic adaptations and novel taxa resulted from complex cellular activities altering genome content and expression: (i) Symbiogenetic cell mergers producing the mitochondrion-bearing ancestor of eukaryotes and chloroplast-bearing ancestors of photosynthetic eukaryotes; (ii) interspecific hybridizations and genome doublings generating new species and adaptive radiations of higher plants and animals; and, (iii) interspecific horizontal DNA transfer encoding virtually all of the cellular functions between organisms and their viruses in all domains of life. Consequently, assuming that evolutionary processes occur in isolated genomes of individual species has become an unrealistic abstraction. Adaptive variations also involved natural genetic engineering of mobile DNA elements to rewire regulatory networks. In the most highly evolved organisms, biological complexity scales with "non-coding" DNA content more closely than with protein-coding capacity. Coincidentally, we have learned how so-called "non-coding" RNAs that are rich in repetitive mobile DNA sequences are key regulators of complex phenotypes. Both biotic and abiotic ecological challenges serve as triggers for episodes of elevated genome change. The intersections of cell activities, biosphere interactions, horizontal DNA transfers, and non-random Read-Write genome modifications by natural genetic engineering provide a rich molecular and biological foundation for understanding how ecological disruptions can stimulate productive, often abrupt, evolutionary transformations.
Gilinsky, Alyssa Sara; Dale, Hannah; Robinson, Clare; Hughes, Adrienne R; McInnes, Rhona; Lavallee, David
2015-01-01
This systematic review and meta-analysis reports the efficacy of post-natal physical activity change interventions with content coding of behaviour change techniques (BCTs). Electronic databases (MEDLINE, CINAHL and PsychINFO) were searched for interventions published from January 1980 to July 2013. Inclusion criteria were: (i) interventions including ≥1 BCT designed to change physical activity behaviour, (ii) studies reporting ≥1 physical activity outcome, (iii) interventions commencing later than four weeks after childbirth and (iv) studies including participants who had given birth within the last year. Controlled trials were included in the meta-analysis. Interventions were coded using the 40-item Coventry, Aberdeen & London - Refined (CALO-RE) taxonomy of BCTs and study quality assessment was conducted using Cochrane criteria. Twenty studies were included in the review (meta-analysis: n = 14). Seven were interventions conducted with healthy inactive post-natal women. Nine were post-natal weight management studies. Two studies included women with post-natal depression. Two studies focused on improving general well-being. Studies in healthy populations but not for weight management successfully changed physical activity. Interventions increased frequency but not volume of physical activity or walking behaviour. Efficacious interventions always included the BCTs 'goal setting (behaviour)' and 'prompt self-monitoring of behaviour'.
Social media use by community-based organizations conducting health promotion: a content analysis
2013-01-01
Background Community-based organizations (CBOs) are critical channels for the delivery of health promotion programs. Much of their influence comes from the relationships they have with community members and other key stakeholders and they may be able to harness the power of social media tools to develop and maintain these relationships. There are limited data describing if and how CBOs are using social media. This study assesses the extent to which CBOs engaged in health promotion use popular social media channels, the types of content typically shared, and the extent to which the interactive aspects of social media tools are utilized. Methods We assessed the social media presence and patterns of usage of CBOs engaged in health promotion in Boston, Lawrence, and Worcester, Massachusetts. We coded content on three popular channels: Facebook, Twitter, and YouTube. We used content analysis techniques to quantitatively summarize posts, tweets, and videos on these channels, respectively. For each organization, we coded all content put forth by the CBO on the three channels in a 30-day window. Two coders were trained and conducted the coding. Data were collected between November 2011 and January 2012. Results A total of 166 organizations were included in our census. We found that 42% of organizations used at least one of the channels of interest. Across the three channels, organization promotion was the most common theme for content (66% of posts, 63% of tweets, and 93% of videos included this content). Most organizations updated Facebook and Twitter content at rates close to recommended frequencies. We found limited interaction/engagement with audience members. Conclusions Much of the use of social media tools appeared to be uni-directional, a flow of information from the organization to the audience. 
By better leveraging opportunities for interaction and user engagement, these organizations can reap greater benefits from the non-trivial investment required to use social media well. Future research should assess links between use patterns and organizational characteristics, staff perspectives, and audience engagement. PMID:24313999
What Information is Stored in DNA: Does it Contain Digital Error Correcting Codes?
NASA Astrophysics Data System (ADS)
Liebovitch, Larry
1998-03-01
The longest term correlations in living systems are the information stored in DNA, which reflects the evolutionary history of an organism. The 4 bases (A,T,G,C) encode sequences of amino acids as well as locations of binding sites for proteins that regulate DNA. The fidelity of this important information is maintained by ANALOG error check mechanisms. When a single strand of DNA is replicated, the complementary base is inserted in the new strand. Sometimes the wrong base is inserted, sticking out and disrupting the phosphate backbone. The new base is not yet methylated, so repair enzymes that slide along the DNA can tear out the wrong base and replace it with the right one. The bases in DNA form a sequence of 4 different symbols, and so the information is encoded in a DIGITAL form. All the digital codes in our society (ISBN book numbers, UPC product codes, bank account numbers, airline ticket numbers) use error checking codes, where some digits are functions of other digits to maintain the fidelity of transmitted information. Does DNA also utilize a DIGITAL error checking code to maintain the fidelity of its information and increase the accuracy of replication? That is, are some bases in DNA functions of other bases upstream or downstream? This raises an interesting mathematical problem: How does one determine whether some symbols in a sequence of symbols are a function of other symbols? It also bears on the issue of determining algorithmic complexity: What is the function that generates the shortest algorithm for reproducing the symbol sequence? The error checking codes most used in our technology are linear block codes. We developed an efficient method to test for the presence of such codes in DNA. We coded the 4 bases as (0,1,2,3) and used Gaussian elimination, modified for modulus 4, to test if some bases are linear combinations of other bases. We used this method to analyze the base sequence in the genes from the lac operon and cytochrome C.
We did not find evidence for such error correcting codes in these genes. However, we analyzed only a small amount of DNA, and if digital error correcting schemes are present in DNA, they may be more subtle than such simple linear block codes. The basic issue we raise here is how information is stored in DNA, and an appreciation that digital symbol sequences, such as DNA, admit of interesting schemes to store and protect the fidelity of their information content. Liebovitch, Tao, Todorov, Levine. 1996. Biophys. J. 71:1539-1544. Supported by NIH grant EY6234.
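The test described in this abstract, checking whether some bases are linear combinations (mod 4) of other bases, can be illustrated with a small sketch. For simplicity the sketch brute-forces the coefficients rather than performing the authors' Gaussian elimination modified for modulus 4, and the aligned "codeword" sequences are synthetic:

```python
# Sketch of the test described above: are some bases linear combinations
# (mod 4) of other bases? Bases are coded A=0, T=1, G=2, C=3. This
# brute-forces the coefficients instead of using the authors' modified
# Gaussian elimination, and the aligned sequences below are synthetic.
from itertools import product

BASE = {"A": 0, "T": 1, "G": 2, "C": 3}

def encode(seqs):
    """Aligned sequences -> rows of integers mod 4."""
    return [[BASE[b] for b in s] for s in seqs]

def find_parity_relation(rows, target_col, source_cols):
    """Search for coefficients c (mod 4) such that
    target = sum(c_i * source_i) mod 4 holds in every row.
    Returns the coefficient tuple, or None if no relation exists."""
    for coeffs in product(range(4), repeat=len(source_cols)):
        if all(sum(c * row[j] for c, j in zip(coeffs, source_cols)) % 4
               == row[target_col] for row in rows):
            return coeffs
    return None

# Synthetic data built so position 2 equals (pos0 + 2*pos1) mod 4:
rows = encode(["ATG", "TAT", "GGG", "CTT"])
print(find_parity_relation(rows, 2, [0, 1]))  # → (1, 2)
```

On real DNA, as the authors note, absence of a detected relation (a None result over many windows) is evidence against this simple class of linear block codes, not against digital error checking in general.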
78 FR 60137 - Shipping and Transportation; Technical, Organizational, and Conforming Amendments
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-30
... INFORMATION: Table of Contents for Preamble I. Abbreviations II. Regulatory History III. Background and... States Code VCDOA Vice Commandant Decision on Appeal II. Regulatory History We did not publish a notice...
48 CFR 4.803 - Contents of contract files.
Code of Federal Regulations, 2011 CFR
2011-10-01
... the type and extent of market research conducted. (7) Government estimate of contract price. (8) A... documentation— (i) Of the type and extent of market research; and (ii) That the NAICS code assigned to the...
2 CFR § 1.100 - Content of this title.
Code of Federal Regulations, 2012 CFR
2017-01-01
... 2 Grants and Agreements 1 2017-01-01 2017-01-01 false Content of this title. § 1.100 Section § 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of this title. This title contains— (a)...
2 CFR § 1.100 - Content of this title.
Code of Federal Regulations, 2013 CFR
2016-01-01
... 2 Grants and Agreements 1 2016-01-01 2016-01-01 false Content of this title. § 1.100 Section § 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of this title. This title contains— (a)...
2 CFR § 1.100 - Content of this title.
Code of Federal Regulations, 2010 CFR
2018-01-01
... 2 Grants and Agreements 1 2018-01-01 2018-01-01 false Content of this title. § 1.100 Section § 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of this title. This title contains— (a)...
2 CFR § 1.100 - Content of this title.
Code of Federal Regulations, 2011 CFR
2015-01-01
... 2 Grants and Agreements 1 2015-01-01 2015-01-01 false Content of this title. § 1.100 Section § 1.100 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1.100 Content of this title. This title contains— (a)...
Writing in the Secondary-Level Disciplines: A Systematic Review of Context, Cognition, and Content
ERIC Educational Resources Information Center
Miller, Diane M.; Scott, Chyllis E.; McTigue, Erin M.
2018-01-01
Situated within the historical and current state of writing and adolescent literacy research, this systematic literature review screened 3504 articles to determine the prevalent themes in current research on writing tasks in content-area classrooms. Each of the 3504 studies was evaluated and coded using seven methodological quality indicators. The…
2 CFR 1.105 - Organization and subtitle content.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 2 Grants and Agreements 1 2011-01-01 2011-01-01 false Organization and subtitle content. 1.105 Section 1.105 Grants and Agreements Office of Management and Budget Guidance for Grants and Agreements ABOUT TITLE 2 OF THE CODE OF FEDERAL REGULATIONS AND SUBTITLE A Introduction to Title 2 of the CFR § 1...
ERIC Educational Resources Information Center
Egmir, Eray; Erdem, Cahit; Koçyigit, Mehmet
2017-01-01
The aim of this study is to analyse the studies published in "International Journal of Instruction" ["IJI"] in the last ten years. This study is a qualitative, descriptive literature review study. The data was collected through document analysis, coded using constant comparison and analysed using content analysis. Frequencies…
Forest value orientations in Australia: an application of computer content analysis
Trevor J. Webb; David N. Bengston; David P. Fan
2008-01-01
This article explores the expression of three forest value orientations that emerged from an analysis of Australian news media discourse about the management of Australian native forests from August 1, 1997 through December 31, 2004. Computer-coded content analysis was used to measure and track the relative importance of commodity, ecological and moral/spiritual/...
Content and Language Integrated Learning through an Online Game in Primary School: A Case Study
ERIC Educational Resources Information Center
Dourda, Kyriaki; Bratitsis, Tharrenos; Griva, Eleni; Papadopoulou, Penelope
2014-01-01
In this paper an educational design proposal is presented which combines two well established teaching approaches, that of Game-based Learning (GBL) and Content and Language Integrated Learning (CLIL). The context of the proposal was the design of an educational geography computer game, utilizing QR Codes and Google Earth for teaching English…
An Open Source Agenda for Research Linking Text and Image Content Features.
ERIC Educational Resources Information Center
Goodrum, Abby A.; Rorvig, Mark E.; Jeong, Ki-Tai; Suresh, Chitturi
2001-01-01
Proposes methods to utilize image primitives to support term assignment for image classification. Proposes to release code for image analysis in a common tool set for other researchers to use. Of particular focus is the expansion of work by researchers in image indexing to include image content-based feature extraction capabilities in their work.…
Pretty Risky Behavior: A Content Analysis of Youth Risk Behaviors in "Pretty Little Liars"
ERIC Educational Resources Information Center
Hall, Cougar; West, Joshua; Herbert, Patrick C.
2015-01-01
Adolescent consumption of screen media continues to increase. A variety of theoretical constructs hypothesize the impact of media content on health-related attitudes, beliefs, and behaviors. This study uses a coding instrument based on the Centers for Disease Control and Prevention Youth Risk Behavior Survey to analyze health behavior contained in…
Supports for Vocabulary Instruction in Early Language and Literacy Methods Textbooks
ERIC Educational Resources Information Center
Wright, Tanya S.; Peltier, Marliese R.
2016-01-01
The goal of this study was to examine the extent to which the content and recommendations in recently published early language and literacy methods textbooks may support early childhood teachers in learning to provide vocabulary instruction for young children. We completed a content analysis of 9 textbooks with coding at the sentence level.…
The Challenges of Planning Language Objectives in Content-Based ESL Instruction
ERIC Educational Resources Information Center
Baecher, Laura; Farnsworth, Tim; Ediger, Anne
2014-01-01
The purpose of this research was to investigate the major patterns in content-based instruction (CBI) lesson plans among practicum teachers at the final stage of an MA TESOL program. One hundred and seven lesson plans were coded according to a typology developed to evaluate clarity and identify areas of potential difficulty in the design of…
ERIC Educational Resources Information Center
Jones, Dustin; Hollas, Victoria; Klespis, Mark
2017-01-01
This article presents an overview of the ways technology is presented in textbooks written for mathematics content courses for prospective elementary teachers. Six popular textbooks comprising a total of more than 5,000 pages were examined, and 1,055 distinct references to technology were identified. These references are coded according to…
Children's Literature on Recycling: What Does It Contribute to Environmental Literacy?
ERIC Educational Resources Information Center
Christenson, Mary A.
2008-01-01
This article addresses the content and use of children's literature to teach about recycling. Twenty children's books were examined using a coding system created to compare and generalize about the content of the books. It was found that although the books can be useful for providing basic information they fall short in asking children to think…
ERIC Educational Resources Information Center
Scott, Wendy; Suh, Yonghee
2015-01-01
This content analysis explored how Civics and Government textbooks and the Virginia Standards of Learning for Civics and Government courses reflect citizenship outcomes, specifically deconstructing the unique needs of marginalized students. The coding frame was constructed by using themes and categories from previous literature, specifically…
21 CFR 172.340 - Fish protein isolate.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 3 2014-04-01 2014-04-01 false Fish protein isolate. 172.340 Section 172.340 Food.../federal_register/code_of_federal_regulations/ibr_locations.html. (1) Protein content, as N × 6.25, shall... described in section 24.003, Air Drying (1)—Official First Action. (3) Fat content shall not be more than 0...
ERIC Educational Resources Information Center
Erdogan, Ibrahim; Campbell, Todd; Abd-Hamid, Nor Hashidah
2011-01-01
This study describes the development of an instrument to investigate the extent to which student-centered actions are occurring in science classrooms. The instrument was developed through the following five stages: (1) student action identification, (2) use of both national and international content experts to establish content validity, (3)…
ERIC Educational Resources Information Center
Sims, Judy R.; Giordano, Joseph
A research study assessed the amount of front page newspaper coverage allotted to "character/competence/image" issues versus "platform/political" issues in the 1992 presidential campaign. Using textual analysis, methodology of content analysis, researchers coded the front page of the following 5 newspapers between August 1 and…
Berry, Nina J; Gribble, Karleen D
2017-10-01
The use of health and nutrition content claims in infant formula advertising is restricted by many governments in response to WHO policies and WHA resolutions. The purpose of this study was to determine whether such prohibited claims could be observed in Australian websites that advertise infant formula products. A comprehensive internet search was conducted to identify websites that advertise infant formula available for purchase in Australia. Content analysis was used to identify prohibited claims. The coding frame was closely aligned with the provisions of the Australian and New Zealand Food Standard Code, which prohibits these claims. The outcome measures were the presence of health claims, nutrition content claims, or references to the nutritional content of human milk. Web pages advertising 25 unique infant formula products available for purchase in Australia were identified. Every advertisement (100%) contained at least one health claim. Eighteen (72%) also contained at least one nutrition content claim. Three web pages (12%) advertising brands associated with infant formula products referenced the nutritional content of human milk. All of these claims appear in spite of national regulations prohibiting them indicating a failure of monitoring and/or enforcement. Where countries have enacted instruments to prohibit health and other claims in infant formula advertising, the marketing of infant formula must be actively monitored to be effective. © 2016 John Wiley & Sons Ltd.
Bandwidth efficient coding for satellite communications
NASA Technical Reports Server (NTRS)
Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.
1992-01-01
An error control coding scheme was devised to achieve large coding gain and high reliability by using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain and moderate reliability, the decoding complexity is quite modest. In fact, to achieve a 3 dB coding gain, the decoding complexity is quite simple, no matter whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is proposed. A good short bandwidth-efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only can achieve large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.
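The inner/outer structure described in this abstract can be sketched with toy stand-ins (assumptions: a Hamming(7,4) code stands in for the short bandwidth-efficient inner modulation code, and 3x repetition with majority vote stands in for the Reed-Solomon outer code; neither is the actual scheme proposed):

```python
# Toy sketch of a concatenated coding pipeline (stand-in codes only):
# inner code = Hamming(7,4), outer code = repetition-3 with majority vote.
# The real scheme described above uses coded modulation inside and
# Reed-Solomon outside; this only illustrates the layered structure.

def hamming74_encode(d):
    """4 data bits -> 7-bit codeword with parity at positions 1, 2, 4."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """Correct up to one bit error, then return the 4 data bits."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based index of the flipped bit
    if syndrome:
        c[syndrome - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

def concat_encode(data_bits):
    """Outer repetition-3, then inner Hamming(7,4) on each copy."""
    return [hamming74_encode(data_bits) for _ in range(3)]

def concat_decode(blocks):
    """Inner-decode each block, then majority-vote bit by bit."""
    decoded = [hamming74_decode(b) for b in blocks]
    return [1 if sum(col) >= 2 else 0 for col in zip(*decoded)]

data = [1, 0, 1, 1]
blocks = concat_encode(data)
blocks[2][1] ^= 1
blocks[2][2] ^= 1  # two errors in one inner block: Hamming alone fails here
print(concat_decode(blocks))  # → [1, 0, 1, 1]
```

The demo shows the point of concatenation: the inner code cleans up light errors cheaply, and the outer code recovers blocks whose error count exceeds the inner code's correction capability, which is the division of labor between the modulation code and the Reed-Solomon code in the proposed scheme.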
How do gut feelings feature in tutorial dialogues on diagnostic reasoning in GP traineeship?
Stolper, C F; Van de Wiel, M W J; Hendriks, R H M; Van Royen, P; Van Bokhoven, M A; Van der Weijden, T; Dinant, G J
2015-05-01
Diagnostic reasoning is considered to be based on the interaction between analytical and non-analytical cognitive processes. Gut feelings, a specific form of non-analytical reasoning, play a substantial role in diagnostic reasoning by general practitioners (GPs) and may activate analytical reasoning. In GP traineeships in the Netherlands, trainees mostly see patients alone but regularly consult with their supervisors to discuss patients and problems, receive feedback, and improve their competencies. In the present study, we examined the discussions of supervisors and their trainees about diagnostic reasoning in these so-called tutorial dialogues and how gut feelings feature in these discussions. 17 tutorial dialogues focussing on diagnostic reasoning were video-recorded and transcribed and the protocols were analysed using a detailed bottom-up and iterative content analysis and coding procedure. The dialogues were segmented into quotes. Each quote received a content code and a participant code. The number of words per code was used as a unit of analysis to quantitatively compare the contributions to the dialogues made by supervisors and trainees, and the attention given to different topics. The dialogues were usually analytical reflections on a trainee's diagnostic reasoning. A hypothetico-deductive strategy was often used, by listing differential diagnoses and discussing what information guided the reasoning process and might confirm or exclude provisional hypotheses. Gut feelings were discussed in seven dialogues. They were used as a tool in diagnostic reasoning, inducing analytical reflection, sometimes on the entire diagnostic reasoning process. The emphasis in these tutorial dialogues was on analytical components of diagnostic reasoning. Discussing gut feelings in tutorial dialogues seems to be a good educational method to familiarize trainees with non-analytical reasoning. 
Supervisors need specialised knowledge about these aspects of diagnostic reasoning and how to deal with them in medical education.
Empirical evaluation of H.265/HEVC-based dynamic adaptive video streaming over HTTP (HEVC-DASH)
NASA Astrophysics Data System (ADS)
Irondi, Iheanyi; Wang, Qi; Grecos, Christos
2014-05-01
Real-time HTTP streaming has gained global popularity for delivering video content over the Internet. In particular, the recent MPEG-DASH (Dynamic Adaptive Streaming over HTTP) standard enables on-demand, live, and adaptive Internet streaming in response to network bandwidth fluctuations. Meanwhile, the emerging new-generation video coding standard, H.265/HEVC (High Efficiency Video Coding), promises to reduce the bandwidth requirement by 50% at the same video quality when compared with the current H.264/AVC standard. However, little existing work has addressed the integration of the DASH and HEVC standards, let alone empirical performance evaluation of such systems. This paper presents an experimental HEVC-DASH system, which is a pull-based adaptive streaming solution that delivers HEVC-coded video content through conventional HTTP servers, where the client switches to its desired quality, resolution or bitrate based on the available network bandwidth. Previous studies in DASH have focused on H.264/AVC, whereas we present an empirical evaluation of the HEVC-DASH system by implementing a real-world test bed, which consists of an Apache HTTP Server with GPAC, an MP4Client (GPAC) with an openHEVC-based DASH client and a NETEM box in the middle emulating different network conditions. We investigate and analyze the performance of HEVC-DASH by exploring the impact of various network conditions such as packet loss, bandwidth and delay on video quality. Furthermore, we compare the Intra and Random Access profiles of HEVC coding with the Intra profile of H.264/AVC when the correspondingly encoded video is streamed with DASH. Finally, we explore the correlation among the quality metrics and network conditions, and empirically establish under which conditions the different codecs can provide satisfactory performance.
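The client-side switching the abstract describes reduces, at its core, to a throughput-based selection rule over the representations advertised in the manifest. The representation ladder and safety margin below are illustrative assumptions, not values from the test bed; real players such as GPAC's MP4Client add buffer-occupancy heuristics and throughput smoothing.

```python
# Minimal sketch of pull-based DASH rate adaptation: pick the highest
# advertised representation whose bitrate fits the measured throughput.

REPRESENTATIONS_KBPS = [350, 700, 1500, 3000, 6000]  # hypothetical MPD ladder

def select_representation(throughput_kbps, reps=REPRESENTATIONS_KBPS, safety=0.9):
    # Apply a safety margin, then choose the best representation that
    # fits; fall back to the lowest one if even that does not fit.
    budget = throughput_kbps * safety
    fitting = [r for r in reps if r <= budget]
    return max(fitting) if fitting else min(reps)

print(select_representation(2000))  # -> 1500
print(select_representation(100))   # -> 350
```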
Effects of nutrition label format and product assortment on the healthfulness of food choice.
Aschemann-Witzel, Jessica; Grunert, Klaus G; van Trijp, Hans C M; Bialkova, Svetlana; Raats, Monique M; Hodgkins, Charo; Wasowicz-Kirylo, Grazyna; Koenigstorfer, Joerg
2013-12-01
This study aims to find out whether front-of-pack nutrition label formats influence the healthfulness of consumers' food choices and important predictors of healthful choices, depending on the size of the choice set that is made available to consumers. The predictors explored were health motivation and perceived capability of making healthful choices. One thousand German and Polish consumers participated in the study that manipulated the format of nutrition labels. All labels referred to the content of calories and four negative nutrients and were presented on savoury and sweet snacks. The different formats included the percentage of guideline daily amount, colour coding schemes, and text describing low, medium and high content of each nutrient. Participants first chose from a set of 10 products and then from a set of 20 products, which was, on average, more healthful than the first choice set. The results showed that food choices were more healthful in the extended 20-product (vs. 10-product) choice set and that this effect is stronger than a random choice would produce. The formats colour coding and texts, particularly colour coding in Germany, increased the healthfulness of product choices when consumers were asked to choose a healthful product, but not when they were asked to choose according to their preferences. The formats did not influence consumers' motivation to choose healthful foods. Colour coding, however, increased consumers' perceived capability of making healthful choices. While the results revealed no consistent differences in the effects between the formats, they indicate that manipulating choice sets by including healthier options is an effective strategy to increase the healthfulness of food choices. Copyright © 2013 Elsevier Ltd. All rights reserved.
A genetic scale of reading frame coding.
Michel, Christian J
2014-08-21
The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). Otherwise, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code. 
Copyright © 2014 Elsevier Ltd. All rights reserved.
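The comma-free property invoked above can be checked directly from its definition, as paraphrased here from the abstract's description (a codeword must never appear in the two shifted reading frames of any codeword concatenation); this is a sketch, not the paper's formal development.

```python
# A code of trinucleotides is comma-free if no concatenation of two
# codewords contains a codeword in a shifted reading frame.

def is_comma_free(code):
    code = set(code)
    for w1 in code:
        for w2 in code:
            pair = w1 + w2
            # Trinucleotides read at frame shifts +1 and +2.
            if pair[1:4] in code or pair[2:5] in code:
                return False
    return True

# Gly codons: GGG|GGG contains GGG in a shifted frame, so the set is
# not comma-free, consistent with its sub-maximal RFC probability of 2/3.
print(is_comma_free({"GGA", "GGC", "GGG", "GGT"}))  # -> False
print(is_comma_free({"ACG"}))                       # -> True
```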
Telepharmacy and bar-code technology in an i.v. chemotherapy admixture area.
O'Neal, Brian C; Worden, John C; Couldry, Rick J
2009-07-01
A program using telepharmacy and bar-code technology to increase the presence of the pharmacist at a critical risk point during chemotherapy preparation is described. Telepharmacy hardware and software were acquired, and an inspection camera was placed in a biological safety cabinet to allow the pharmacy technician to take digital photographs at various stages of the chemotherapy preparation process. Once the pharmacist checks the medication vials' agreement with the work label, the technician takes the product into the biological safety cabinet, where the appropriate patient is selected from the pending work list, a queue of patient orders sent from the pharmacy information system. The technician then scans the bar code on the vial. Assuming the bar code matches, the technician photographs the work label, vials, diluents and fluids to be used, and the syringe (before injecting the contents into the bag) along with the vial. The pharmacist views all images as a part of the final product-checking process. This process allows the pharmacist to verify that the correct quantity of medication was transferred from the primary source to a secondary container without being physically present at the time of transfer. Telepharmacy and bar coding provide a means to improve the accuracy of chemotherapy preparation by decreasing the likelihood of using the incorrect product or quantity of drug. The system facilitates the reading of small product labels and removes the need for a pharmacist to handle contaminated syringes and vials when checking the final product.
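The bar-code gate described above (scan must match a product on the selected patient's pending order before preparation proceeds) can be sketched as a hypothetical check. All identifiers, field names, and the order structure below are invented for illustration and are not drawn from the system described.

```python
# Hypothetical bar-code verification step for an i.v. admixture queue.

PENDING_ORDERS = {
    "patient-042": {"ndc": "00069-0587-10", "drug": "cisplatin"},  # invented
}

def verify_scan(patient_id, scanned_ndc, orders=PENDING_ORDERS):
    # Compare the scanned vial code against the pending order's product.
    order = orders.get(patient_id)
    if order is None:
        return "no pending order"
    return "match" if order["ndc"] == scanned_ndc else "MISMATCH - stop"

print(verify_scan("patient-042", "00069-0587-10"))  # -> match
print(verify_scan("patient-042", "00069-0587-99"))  # -> MISMATCH - stop
```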
When Content Matters: The Role of Processing Code in Tactile Display Design.
Ferris, Thomas K; Sarter, Nadine
2010-01-01
The distribution of tasks and stimuli across multiple modalities has been proposed as a means to support multitasking in data-rich environments. Recently, the tactile channel and, more specifically, communication via the use of tactile/haptic icons have received considerable interest. Past research has examined primarily the impact of concurrent task modality on the effectiveness of tactile information presentation. However, it is not well known to what extent the interpretation of iconic tactile patterns is affected by another attribute of information: the information processing codes of concurrent tasks. In two driving simulation studies (n = 25 for each), participants decoded icons composed of either spatial or nonspatial patterns of vibrations (engaging spatial and nonspatial processing code resources, respectively) while concurrently interpreting spatial or nonspatial visual task stimuli. As predicted by Multiple Resource Theory, performance was significantly worse (approximately 5-10 percent worse) when the tactile icons and visual tasks engaged the same processing code, with the overall worst performance in the spatial-spatial task pairing. The findings from these studies contribute to an improved understanding of information processing and can serve as input to multidimensional quantitative models of timesharing performance. From an applied perspective, the results suggest that competition for processing code resources warrants consideration, alongside other factors such as the naturalness of signal-message mapping, when designing iconic tactile displays. Nonspatially encoded tactile icons may be preferable in environments which already rely heavily on spatial processing, such as car cockpits.
75 FR 77007 - Notice of Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-10
... works with K-12 teachers to provide content and curricular support selected as the best from among the... Parker, NASA Clearance Officer. [FR Doc. 2010-31038 Filed 12-9-10; 8:45 am] BILLING CODE 7510-13-P ...
QRAC-the-Code: a comprehension monitoring strategy for middle school social studies textbooks.
Berkeley, Sheri; Riccomini, Paul J
2013-01-01
Requirements for reading and ascertaining information from text increase as students advance through the educational system, especially in content-rich classes; hence, monitoring comprehension is especially important. However, this is a particularly challenging skill for many students who struggle with reading comprehension, including students with learning disabilities. A randomized pre-post experimental design was employed to investigate the effectiveness of a comprehension monitoring strategy (QRAC-the-Code) for improving the reading comprehension of 323 students in grades 6 and 7 in inclusive social studies classes. Findings indicated that both general education students and students with learning disabilities who were taught a simple comprehension monitoring strategy improved their comprehension of textbook content compared to students who read independently and noted important points. In addition, students in the comprehension monitoring condition reported using more reading strategies after the intervention. Implications for research and practice are discussed.
Error-correction coding for digital communications
NASA Astrophysics Data System (ADS)
Clark, G. C., Jr.; Cain, J. B.
This book is written for the design engineer who must build the coding and decoding equipment and for the communication system engineer who must incorporate this equipment into a system. It is also suitable as a senior-level or first-year graduate text for an introductory one-semester course in coding theory. Fundamental concepts of coding are discussed along with group codes, taking into account basic principles, practical constraints, performance computations, coding bounds, generalized parity check codes, polynomial codes, and important classes of group codes. Other topics explored are related to simple nonalgebraic decoding techniques for group codes, soft decision decoding of block codes, algebraic techniques for multiple error correction, the convolutional code structure and Viterbi decoding, syndrome decoding techniques, and sequential decoding techniques. System applications are also considered, giving attention to concatenated codes, coding for the white Gaussian noise channel, interleaver structures for coded systems, and coding for burst noise channels.
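The block-code fundamentals surveyed above (generator matrices, parity checks, syndrome decoding) can be illustrated with the standard Hamming(7,4) code, a textbook example rather than one taken from this book's text.

```python
import numpy as np

# Hamming(7,4) in systematic form: G = [I4 | P], H = [P^T | I3] over GF(2).
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(msg):
    return (msg @ G) % 2

def decode(word):
    # A nonzero syndrome equals the column of H at the flipped position.
    syndrome = (H @ word) % 2
    if syndrome.any():
        err = next(i for i in range(7) if (H[:, i] == syndrome).all())
        word = word.copy()
        word[err] ^= 1
    return word[:4]

msg = np.array([1, 0, 1, 1])
cw = encode(msg)
cw[2] ^= 1          # single-bit channel error
print(decode(cw))   # -> [1 0 1 1]
```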
A novel concatenated code based on the improved SCG-LDPC code for optical transmission systems
NASA Astrophysics Data System (ADS)
Yuan, Jian-guo; Xie, Ya; Wang, Lin; Huang, Sheng; Wang, Yong
2013-01-01
Based on the optimization and improvement of the construction method for the systematically constructed Gallager (SCG) (4, k) code, a novel SCG low-density parity-check (SCG-LDPC) (3969,3720) code suitable for optical transmission systems is constructed. The novel SCG-LDPC (6561,6240) code, with a code rate of 95.1%, is constructed by increasing the length of the SCG-LDPC (3969,3720) code, so that the code rate of LDPC codes can better meet the high requirements of optical transmission systems. The novel concatenated code is then constructed by concatenating the SCG-LDPC (6561,6240) code and the BCH (127,120) code, with an overall code rate of 94.5%. The simulation results and analyses show that the net coding gain (NCG) of the BCH(127,120)+SCG-LDPC(6561,6240) concatenated code is 2.28 dB and 0.48 dB more than those of the classic RS(255,239) code and the SCG-LDPC(6561,6240) code, respectively, at a bit error rate (BER) of 10^-7.
Trace-shortened Reed-Solomon codes
NASA Technical Reports Server (NTRS)
Mceliece, R. J.; Solomon, G.
1994-01-01
Reed-Solomon (RS) codes have been part of standard NASA telecommunications systems for many years. RS codes are character-oriented error-correcting codes, and their principal use in space applications has been as outer codes in concatenated coding systems. However, for a given character size, say m bits, RS codes are limited to a length of, at most, 2^m. It is known in theory that longer character-oriented codes would be superior to RS codes in concatenation applications, but until recently no practical class of 'long' character-oriented codes had been discovered. In 1992, however, Solomon discovered an extensive class of such codes, which are now called trace-shortened Reed-Solomon (TSRS) codes. In this article, we will continue the study of TSRS codes. Our main result is a formula for the dimension of any TSRS code, as a function of its error-correcting power. Using this formula, we will give several examples of TSRS codes, some of which look very promising as candidate outer codes in high-performance coded telecommunications systems.
Error Control Coding Techniques for Space and Satellite Communications
NASA Technical Reports Server (NTRS)
Lin, Shu
2000-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
An Interactive Concatenated Turbo Coding System
NASA Technical Reports Server (NTRS)
Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc
1999-01-01
This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
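The decoder interaction described above reduces, schematically, to a stop-early loop: run inner iterations until the outer decode succeeds or an iteration budget is exhausted. The stubs below stand in for the real turbo and Reed-Solomon decoders, which are of course far more involved.

```python
# Schematic control flow of the outer/inner decoder interaction.

def decode_concatenated(rx, inner_iteration, outer_decode, max_iters=8):
    soft = rx
    for i in range(1, max_iters + 1):
        soft = inner_iteration(soft)   # one turbo iteration (stubbed)
        ok, data = outer_decode(soft)  # reliability-based RS decoding (stubbed)
        if ok:
            return data, i             # outer success stops the inner loop early
    return None, max_iters

# Toy demo: each inner iteration "cleans" the estimate a bit; the outer
# decoder succeeds once the estimate crosses a threshold.
inner = lambda q: q + 1
outer = lambda q: (q >= 3, "frame" if q >= 3 else None)
print(decode_concatenated(0, inner, outer))  # -> ('frame', 3)
```

The early-exit on outer success is exactly the decoding-delay reduction the abstract claims: good frames terminate after few inner iterations, and only bad frames consume the full budget.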
Performance Analysis of New Binary User Codes for DS-CDMA Communication
NASA Astrophysics Data System (ADS)
Usha, Kamle; Jaya Sankar, Kottareddygari
2016-03-01
This paper analyzes new binary spreading codes through their correlation properties and also presents their performance over an additive white Gaussian noise (AWGN) channel. The proposed codes are constructed using Gray and inverse Gray codes. In this paper, 2n-length binary user codes constructed by appending an n-bit Gray code with its n-bit inverse Gray code are discussed. Like Walsh codes, these binary user codes are available in sizes of powers of two; additionally, code sets of length 6 and their even multiples are available. The simple construction technique and the generation of code sets of different sizes are the salient features of the proposed codes. Walsh codes and Gold codes are considered for comparison in this paper, as these are popularly used for synchronous and asynchronous multi-user communications, respectively. In the current work the auto- and cross-correlation properties of the proposed codes are compared with those of Walsh codes and Gold codes. The performance of the proposed binary user codes for both synchronous and asynchronous direct-sequence CDMA communication over the AWGN channel is also discussed. The proposed binary user codes are found to be suitable for both synchronous and asynchronous DS-CDMA communication.
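The Walsh-code baseline used for comparison can be reproduced from the Sylvester Hadamard construction, whose distinct rows have zero cross-correlation at zero lag, which is what makes them suitable for synchronous DS-CDMA. (The paper's own Gray/inverse-Gray construction is not reproduced here.)

```python
# Walsh codes from the Sylvester Hadamard construction.

def hadamard(n):
    # n must be a power of two; rows of the result are mutually orthogonal.
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-x for x in row] for row in H])
    return H

def cross_correlation(a, b):
    # Zero-lag correlation of two +/-1 chip sequences.
    return sum(x * y for x, y in zip(a, b))

W = hadamard(8)  # eight 8-chip Walsh codes
print(cross_correlation(W[1], W[2]))  # -> 0 (orthogonal users)
print(cross_correlation(W[1], W[1]))  # -> 8 (autocorrelation peak)
```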
A Picture is Worth 1,000 Words. The Use of Clinical Images in Electronic Medical Records.
Ai, Angela C; Maloney, Francine L; Hickman, Thu-Trang; Wilcox, Allison R; Ramelson, Harley; Wright, Adam
2017-07-12
To understand how clinicians utilize image uploading tools in a homegrown electronic health records (EHR) system. A content analysis of patient notes containing non-radiological images from the EHR was conducted. Images from 4,000 random notes from July 1, 2009 to June 30, 2010 were reviewed and manually coded. Codes were assigned to four properties of the image: (1) image type, (2) role of image uploader (e.g. MD, NP, PA, RN), (3) practice type (e.g. internal medicine, dermatology, ophthalmology), and (4) image subject. 3,815 images from image-containing notes stored in the EHR were reviewed and manually coded. Of those images, 32.8% were clinical and 66.2% were non-clinical. The most common types of the clinical images were photographs (38.0%), diagrams (19.1%), and scanned documents (14.4%). MDs uploaded 67.9% of clinical images, followed by RNs with 10.2%, and genetic counselors with 6.8%. Dermatology (34.9%), ophthalmology (16.1%), and general surgery (10.8%) uploaded the most clinical images. The content of clinical images referencing body parts varied, with 49.8% of those images focusing on the head and neck region, 15.3% focusing on the thorax, and 13.8% focusing on the lower extremities. The diversity of image types, content, and uploaders within a homegrown EHR system reflected the versatility and importance of the image uploading tool. Understanding how users utilize image uploading tools in a clinical setting highlights important considerations for designing better EHR tools and the importance of interoperability between EHR systems and other health technology.
ERIC Educational Resources Information Center
Campbell, Rebecca P.; Saltonstall, Margot; Buford, Betsy
2013-01-01
In recognition of 25th anniversary of the "Journal of The First-Year Experience & Students in Transition" (the "Journal"), a content analysis and review of "Journal" citations in other works was conducted. A team of three researchers coded type of research, type of intervention, target population, and topics of…
Enacting FAIR Education: Approaches to Integrating LGBT Content in the K-12 Curriculum
ERIC Educational Resources Information Center
Vecellio, Shawn
2012-01-01
The FAIR Education Act (SB 48) was signed into law in California in July of 2011, amending the Education Code by requiring representation of lesbian, gay, bisexual, and transgender persons in the social sciences. In this article, the author uses James Banks' model of the Four Levels of Integration of Multicultural Content to suggest ways in which…
Disability in Physical Education Textbooks: An Analysis of Image Content
ERIC Educational Resources Information Center
Taboas-Pais, Maria Ines; Rey-Cao, Ana
2012-01-01
The aim of this paper is to show how images of disability are portrayed in physical education textbooks for secondary schools in Spain. The sample was composed of 3,316 images published in 36 textbooks by 10 publishing houses. A content analysis was carried out using a coding scheme based on categories employed in other similar studies and adapted…
ERIC Educational Resources Information Center
Woo, Hongryun; Mulit, Cynthia J.; Visalli, Kelsea M.
2016-01-01
Counselor Education (CE) program websites play a role in program fit by helping prospective students learn about the profession, search for programs and apply for admission. Using the 2014 "ACA Code of Ethics'" nine categories of orientation content as its framework, this study explored the information provided on the 63…
ERIC Educational Resources Information Center
Kang, Namjun
If content analysis is to satisfy the requirement of objectivity, measures and procedures must be reliable. Reliability is usually measured by the proportion of agreement of all categories identically coded by different coders. For such data to be empirically meaningful, a high degree of inter-coder reliability must be demonstrated. Researchers in…
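The proportion-of-agreement measure described above, together with the chance-corrected Cohen's kappa that reliability work usually reports alongside it, can be computed as follows. This is a generic sketch of the standard formulas, not code or data from the study.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    # Proportion of units both coders placed in the same category.
    return sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    # Corrects observed agreement p_o for chance agreement p_e.
    n = len(coder_a)
    p_o = percent_agreement(coder_a, coder_b)
    ca, cb = Counter(coder_a), Counter(coder_b)
    p_e = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

a = ["pos", "pos", "neg", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg"]
print(round(percent_agreement(a, b), 3))  # -> 0.833
print(round(cohens_kappa(a, b), 3))       # -> 0.667
```

Kappa is the more defensible reliability figure precisely because raw agreement can look high even when coders agree largely by chance.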
Approaches to Fungal Genome Annotation
Haas, Brian J.; Zeng, Qiandong; Pearson, Matthew D.; Cuomo, Christina A.; Wortman, Jennifer R.
2011-01-01
Fungal genome annotation is the starting point for analysis of genome content. This generally involves the application of diverse methods to identify features on a genome assembly such as protein-coding and non-coding genes, repeats and transposable elements, and pseudogenes. Here we describe tools and methods leveraged for eukaryotic genome annotation with a focus on the annotation of fungal nuclear and mitochondrial genomes. We highlight the application of the latest technologies and tools to improve the quality of predicted gene sets. The Broad Institute eukaryotic genome annotation pipeline is described as one example of how such methods and tools are integrated into a sequencing center’s production genome annotation environment. PMID:22059117
Ogneva, I V; Maximova, M V; Larina, I M
2014-01-01
The aim of this study was to determine the transversal stiffness of the cortical cytoskeleton and the cytoskeletal protein desmin content in left ventricle cardiomyocytes and in fibers of the mouse soleus and tibialis anterior muscles after a 30-day space flight on board the "BION-M1" biosatellite (Russia, 2013). The dissection was made 13-16.5 h after landing. The transversal stiffness was measured in the relaxed and calcium-activated states by atomic force microscopy. The desmin content was estimated by western blotting, and the expression level of the desmin-coding gene was detected using real-time PCR. The results indicate that the transversal stiffness of the left ventricle cardiomyocytes and fibers of the soleus muscle in the relaxed and activated states did not differ from the control. The transversal stiffness of the tibialis anterior muscle fibers in the relaxed and activated states was increased in the group of mice after space flight. At the same time, in all types of studied tissues the desmin content and the expression level of the desmin-coding gene did not differ from the control level.
Diehl, William E.; Johnson, Welkin E.; Hunter, Eric
2013-01-01
All genes in the TRIM6/TRIM34/TRIM5/TRIM22 locus are type I interferon inducible, with TRIM5 and TRIM22 possessing antiviral properties. Evolutionary studies involving the TRIM6/34/5/22 locus have predominantly focused on the coding sequence of the genes, finding that TRIM5 and TRIM22 have undergone high rates of both non-synonymous nucleotide replacements and in-frame insertions and deletions. We sought to understand if divergent evolutionary pressures on TRIM6/34/5/22 coding regions have selected for modifications in the non-coding regions of these genes and explore whether such non-coding changes may influence the biological function of these genes. The transcribed genomic regions, including the introns, of TRIM6, TRIM34, TRIM5, and TRIM22 from ten Haplorhini primates and one prosimian species were analyzed for transposable element content. In Haplorhini species, TRIM5 displayed an exaggerated interspecies variability, predominantly resulting from changes in the composition of transposable elements in the large first and fourth introns. Multiple lineage-specific endogenous retroviral long terminal repeats (LTRs) were identified in the first intron of TRIM5 and TRIM22. In the prosimian genome, we identified a duplication of TRIM5 with a concomitant loss of TRIM22. The transposable element content of the prosimian TRIM5 genes appears to largely represent the shared Haplorhini/prosimian ancestral state for this gene. Furthermore, we demonstrated that one such differentially fixed LTR provides for species-specific transcriptional regulation of TRIM22 in response to p53 activation. Our results identify a previously unrecognized source of species-specific variation in the antiviral TRIM genes, which can lead to alterations in their transcriptional regulation. These observations suggest that there has existed long-term pressure for exaptation of retroviral LTRs in the non-coding regions of these genes. 
This likely resulted from serial viral challenges and provided a mechanism for rapid alteration of transcriptional regulation. To our knowledge, this represents the first report of persistent evolutionary pressure for the capture of retroviral LTR insertions. PMID:23516500
Multiple component codes based generalized LDPC codes for high-speed optical transport.
Djordjevic, Ivan B; Wang, Ting
2014-07-14
A class of generalized low-density parity-check (GLDPC) codes suitable for optical communications is proposed, which consists of multiple local codes. It is shown that Hamming, BCH, and Reed-Muller codes can be used as local codes, and that the maximum a posteriori probability (MAP) decoding of these local codes by the Ashikhmin-Lytsin algorithm is feasible in terms of complexity and performance. We demonstrate that record coding gains can be obtained from properly designed GLDPC codes derived from multiple component codes. We then show that several recently proposed classes of LDPC codes, such as convolutional and spatially coupled codes, can be described using the concept of GLDPC coding, which indicates that GLDPC coding can be used as a unified platform for advanced FEC enabling ultra-high-speed optical transport. The proposed class of GLDPC codes is also suitable for code-rate adaptation, to adjust the error correction strength depending on the optical channel conditions.
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun
1996-01-01
This paper is concerned with the construction of multilevel concatenated block modulation codes using a multilevel concatenation scheme for the frequency non-selective Rayleigh fading channel. In the construction of multilevel concatenated modulation codes, block modulation codes are used as the inner codes. Various types of codes (block or convolutional, binary or nonbinary) are being considered as the outer codes. In particular, we focus on the special case in which Reed-Solomon (RS) codes are used as the outer codes. For this special case, a systematic algebraic technique for constructing q-level concatenated block modulation codes is proposed. Codes have been constructed for certain specific values of q and compared with single-level concatenated block modulation codes using the same inner codes. A multilevel closest coset decoding scheme for these codes is proposed.
NASA Astrophysics Data System (ADS)
Juhler, Martin Vogt
2016-08-01
Recent research, both internationally and in Norway, has clearly expressed concerns about missing connections between subject-matter knowledge, pedagogical competence and real-life practice in schools. This study addresses this problem within the domain of field practice in teacher education, studying pre-service teachers' planning of a Physics lesson. Two means of intervention were introduced. The first was lesson study, which is a method for planning, carrying out and reflecting on a research lesson in detail with a learner- and content-centered focus. This was used in combination with a second means, content representations, which is a systematic tool that connects overall teaching aims with pedagogical prompts. Changes in teaching were assessed through the construct of pedagogical content knowledge (PCK). A deductive coding analysis was carried out for this purpose. Transcripts of pre-service teachers' planning of a Physics lesson were coded into four main PCK categories, which were thereafter divided into 16 PCK sub-categories. The results showed that the intervention affected the pre-service teachers' potential to start developing PCK. First, they focused much more on categories concerning the learners. Second, they focused far more uniformly in all of the four main categories comprising PCK. Consequently, these differences could affect their potential to start developing PCK.
NASA Astrophysics Data System (ADS)
Brooks, J. N.; Hassanein, A.; Sizyuk, T.
2013-07-01
Plasma interactions with mixed-material surfaces are being analyzed using advanced modeling of time-dependent surface evolution/erosion. Simulations use the REDEP/WBC erosion/redeposition code package coupled to the HEIGHTS package ITMC-DYN mixed-material formation/response code, with plasma parameter input from codes and data. We report here on analysis for a DIII-D Mo/C containing tokamak divertor. A DIII-D/DiMES probe experiment simulation predicts that sputtered molybdenum from a 1 cm diameter central spot quickly saturates (˜4 s) in the 5 cm diameter surrounding carbon probe surface, with subsequent re-sputtering and transport to off-probe divertor regions, and with high (˜50%) redeposition on the Mo spot. Predicted Mo content in the carbon agrees well with post-exposure probe data. We discuss implications and mixed-material analysis issues for Be/W mixing at the ITER outer divertor, and Li, C, Mo mixing at an NSTX divertor.
Subjective quality evaluation of low-bit-rate video
NASA Astrophysics Data System (ADS)
Masry, Mark; Hemami, Sheila S.; Osberger, Wilfried M.; Rohaly, Ann M.
2001-06-01
A subjective quality evaluation was performed to quantify viewer responses to visual defects that appear in low-bit-rate video at full and reduced frame rates. The stimuli were eight sequences compressed by three motion-compensated encoders - Sorenson Video, H.263+ and a wavelet-based coder - operating at five bit/frame rate combinations. The stimulus sequences exhibited obvious coding artifacts whose nature differed across the three coders. The subjective evaluation was performed using the Single Stimulus Continuous Quality Evaluation method of ITU-R Rec. BT.500-8. Viewers watched concatenated coded test sequences and continuously registered the perceived quality using a slider device. Data from 19 viewers were collected. An analysis of their responses to the presence of various artifacts across the range of possible coding conditions and content is presented. The effects of blockiness and blurriness on perceived quality are examined. The effects of changes in frame rate on perceived quality are found to be related to the nature of the motion in the sequence.
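The reduction step behind continuous quality evaluation (slider traces averaged into per-sequence scores) can be sketched as follows; the viewer traces below are invented toy numbers, not the study's data.

```python
# Toy reduction of continuous slider traces to a per-sequence quality score.
traces = {                       # viewer -> sampled slider values (0-100 scale)
    "v1": [62, 64, 60, 58],
    "v2": [70, 68, 66, 72],
}

def sequence_score(traces):
    # Time-average each viewer's trace, then average across viewers.
    per_viewer = [sum(t) / len(t) for t in traces.values()]
    return sum(per_viewer) / len(per_viewer)

score = sequence_score(traces)
```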
Quasi-experimental study designs series-paper 9: collecting data from quasi-experimental studies.
Aloe, Ariel M; Becker, Betsy Jane; Duvendack, Maren; Valentine, Jeffrey C; Shemilt, Ian; Waddington, Hugh
2017-09-01
To identify variables that must be coded when synthesizing primary studies that use quasi-experimental designs. All quasi-experimental (QE) designs. When designing a systematic review of QE studies, potential sources of heterogeneity, both theory-based and methodological, must be identified. We outline key components of inclusion criteria for syntheses of quasi-experimental studies. We provide recommendations for coding content-relevant and methodological variables and outline the distinction between bivariate effect sizes and partial (i.e., adjusted) effect sizes. The designs and controls used are viewed as of greatest importance. Potential sources of bias and confounding are also addressed. Careful consideration must be given to inclusion criteria and the coding of theoretical and methodological variables during the design phase of a synthesis of quasi-experimental studies. The success of a meta-regression analysis relies on the data available to the meta-analyst. Omission of critical moderator variables (i.e., effect modifiers) will undermine the conclusions of a meta-analysis. Copyright © 2017 Elsevier Inc. All rights reserved.
The complete mitochondrial genome of Pomacea canaliculata (Gastropoda: Ampullariidae).
Zhou, Xuming; Chen, Yu; Zhu, Shanliang; Xu, Haigen; Liu, Yan; Chen, Lian
2016-01-01
The mitochondrial genome of Pomacea canaliculata (Gastropoda: Ampullariidae) is the first complete mtDNA sequence reported in the genus Pomacea. The total length of the mtDNA is 15,707 bp, containing 13 protein-coding genes, 2 ribosomal RNAs, 22 transfer RNAs, and a 359 bp non-coding region. The A + T content of the overall base composition of the H-strand is 71.7% (T: 41%, C: 12.7%, A: 30.7%, G: 15.6%). The ATP6, ATP8, CO1, CO2, ND1-3, ND5, ND6, ND4L and Cyt b genes begin with ATG as the start codon; CO3 and ND4 begin with ATA. The ATP8, CO2-3, ND4L, ND2-6 and Cyt b genes terminate with TAA as the stop codon; ATP6, ND1, and CO1 end with TAG. A long non-coding region is found, in which a 23 bp repeat unit is repeated 11 times.
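Base-composition figures such as the A + T content reported above come from simple counting; the sketch below uses a short made-up fragment, not the actual 15,707 bp genome.

```python
# Compute per-base composition (%) and A+T content of a DNA strand.
from collections import Counter

def base_composition(seq):
    counts = Counter(seq.upper())
    total = sum(counts[b] for b in "ACGT")
    return {b: 100.0 * counts[b] / total for b in "ACGT"}

demo = "ATGACCTTAATTGCA"          # toy fragment for illustration
comp = base_composition(demo)
at_content = comp["A"] + comp["T"]  # A + T percentage of the strand
```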
Dynamic Monte Carlo simulations of radiatively accelerated GRB fireballs
NASA Astrophysics Data System (ADS)
Chhotray, Atul; Lazzati, Davide
2018-05-01
We present a novel Dynamic Monte Carlo code (DynaMo code) that self-consistently simulates the Compton-scattering-driven dynamic evolution of a plasma. We use the DynaMo code to investigate the time-dependent expansion and acceleration of dissipationless gamma-ray burst fireballs by varying their initial opacities and baryonic content. We study the opacity and energy density evolution of an initially optically thick, radiation-dominated fireball across its entire phase space, in particular during the R_ph < R_sat regime. Our results reveal new phases of fireball evolution: a transition phase with a radial extent of several orders of magnitude, during which the fireball transitions from Γ ∝ R to Γ ∝ R^0; a post-photospheric acceleration phase, in which fireballs accelerate beyond the photosphere; and a Thomson-dominated acceleration phase, characterized by slow acceleration of optically thick, matter-dominated fireballs due to Thomson scattering. We quantify the new phases by providing analytical expressions for the Lorentz factor evolution, which will be useful for deriving jet parameters.
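The Γ ∝ R to Γ ∝ R^0 transition mentioned above follows the classic fireball scaling, which can be sketched minimally; the reference radius R0 and the baryon-loading ceiling η are assumed illustrative values, not DynaMo outputs.

```python
# Classic fireball scaling sketch: Γ grows linearly with radius until the
# baryon-set ceiling η = E/(M c^2) is reached, after which the flow coasts.
def lorentz_factor(R, R0=1.0, eta=100.0):
    return min(R / R0, eta)

gammas = [lorentz_factor(R) for R in (1.0, 10.0, 100.0, 1000.0)]
```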
Catona, Danielle; Greene, Kathryn; Magsamen-Conrad, Kate
2015-01-01
People living with HIV/AIDS must make decisions about how, where, when, what, and to whom to disclose their HIV status. This study explores their perceptions of benefits and drawbacks of various HIV disclosure strategies. The authors interviewed 53 people living with HIV/AIDS from a large AIDS service organization in a northeastern U.S. state and used a combination of deductive and inductive coding to analyze disclosure strategies and advantages and disadvantages of disclosure strategies. Deductive codes consisted of eight strategies subsumed under three broad categories: mode (face-to-face, non-face-to-face, and third-party disclosure), context (setting, bringing a companion, and planning a time), and content (practicing and incremental disclosure). Inductive coding identified benefits and drawbacks for enacting each specific disclosure strategy. The discussion focuses on theoretical explanations for the reasons for and against disclosure strategy enactment and the utility of these findings for practical interventions concerning HIV disclosure practices and decision making.
Resident challenges with daily life in Chinese long-term care facilities: A qualitative pilot study.
Song, Yuting; Scales, Kezia; Anderson, Ruth A; Wu, Bei; Corazzini, Kirsten N
As traditional family-based care in China declines, the demand for residential care increases. Knowledge of residents' experiences with long-term care (LTC) facilities is essential to improving quality of care. This pilot study aimed to describe residents' experiences in LTC facilities, particularly as it related to physical function. Semi-structured open-ended interviews were conducted in two facilities with residents stratified by three functional levels (n = 5). Directed content analysis was guided by the Adaptive Leadership Framework. A two-cycle coding approach was used with a first-cycle descriptive coding and second-cycle dramaturgical coding. Interviews provided examples of challenges faced by residents in meeting their daily care needs. Five themes emerged: staff care, care from family members, physical environment, other residents in the facility, and personal strategies. Findings demonstrate the significance of organizational context for care quality and reveal foci for future research. Copyright © 2017 Elsevier Inc. All rights reserved.
Pharmaceutical advertisements in prescribing software: an analysis.
Harvey, Ken J; Vitry, Agnes I; Roughead, Elizabeth; Aroni, Rosalie; Ballenden, Nicola; Faggotter, Ralph
2005-07-18
To assess pharmaceutical advertisements in prescribing software, their adherence to code standards, and the opinions of general practitioners regarding the advertisements. Content analysis of advertisements displayed by Medical Director version 2.81 (Health Communication Network, Sydney, NSW) in early 2005; thematic analysis of a debate on this topic held on the General Practice Computer Group email forum (GPCG_talk) during December 2004. Placement, frequency and type of advertisements; their compliance with the Medicines Australia Code of Conduct, and the views of GPs. 24 clinical functions in Medical Director contained advertisements. These included 79 different advertisements for 41 prescription products marketed by 17 companies, including one generic manufacturer. 57 of 60 (95%) advertisements making a promotional claim appeared noncompliant with one or more requirements of the Code. 29 contributors, primarily GPs, posted 174 emails to GPCG_talk; there was little support for these advertisements, but some concern that the price of software would increase if they were removed. We suggest that pharmaceutical promotion in prescribing software should be banned, and inclusion of independent therapeutic information be mandated.
Kahler, Christopher W; Caswell, Amy J; Laws, M Barton; Walthers, Justin; Magill, Molly; Mastroleo, Nadine R; Howe, Chanelle J; Souza, Timothy; Wilson, Ira; Bryant, Kendall; Monti, Peter M
2016-10-01
To elucidate patient language that supports changing a health behavior (change talk) or sustaining the behavior (sustain talk). We developed a novel coding system to characterize topics of patient speech in a motivational intervention targeting alcohol and HIV/sexual risk in 90 Emergency Department patients. We further coded patient language as change or sustain talk. For both alcohol and sex, discussions focusing on benefits of behavior change or change planning were most likely to involve change talk, and these topics comprised a large portion of all change talk. Greater discussion of barriers and facilitators of change also was associated with more change talk. For alcohol use, benefits of drinking behavior was the most common topic of sustain talk. For sex risk, benefits of sexual behavior were rarely discussed, and sustain talk centered more on patterns and contexts, negations of drawbacks, and drawbacks of sexual risk behavior change. Topic coding provided unique insights into the content of patient change and sustain talk. Patients are most likely to voice change talk when conversation focuses on behavior change rather than ongoing behavior. Interventions addressing multiple health behaviors should address the unique motivations for maintaining specific risky behaviors. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Side-information-dependent correlation channel estimation in hash-based distributed video coding.
Deligiannis, Nikos; Barbarien, Joeri; Jacobs, Marc; Munteanu, Adrian; Skodras, Athanassios; Schelkens, Peter
2012-04-01
In the context of low-cost video encoding, distributed video coding (DVC) has recently emerged as a potential candidate for uplink-oriented applications. This paper builds on a concept of correlation channel (CC) modeling, which expresses the correlation noise as being statistically dependent on the side information (SI). Compared with classical side-information-independent (SII) noise modeling adopted in current DVC solutions, it is theoretically proven that side-information-dependent (SID) modeling improves the Wyner-Ziv coding performance. Anchored in this finding, this paper proposes a novel algorithm for online estimation of the SID CC parameters based on already decoded information. The proposed algorithm enables bit-plane-by-bit-plane successive refinement of the channel estimation leading to progressively improved accuracy. Additionally, the proposed algorithm is included in a novel DVC architecture that employs a competitive hash-based motion estimation technique to generate high-quality SI at the decoder. Experimental results corroborate our theoretical gains and validate the accuracy of the channel estimation algorithm. The performance assessment of the proposed architecture shows remarkable and consistent coding gains over a germane group of state-of-the-art distributed and standard video codecs, even under strenuous conditions, i.e., large groups of pictures and highly irregular motion content.
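The SII-versus-SID distinction can be illustrated with a toy Laplacian-scale estimate on synthetic noise whose spread depends on an SI class; this sketches the modeling idea only, not the paper's online bit-plane estimator.

```python
# SII vs SID correlation-noise modeling sketch: one global Laplacian scale
# versus a separate scale per side-information class.
import random

random.seed(0)
classes = [random.randint(0, 1) for _ in range(5000)]   # SI "reliability" class
noise = [random.gauss(0.0, 1.0 if c == 0 else 4.0) for c in classes]

def laplacian_scale(values):
    # ML estimate of the Laplacian scale parameter (location assumed ~0 here).
    return sum(abs(v) for v in values) / len(values)

sii_scale = laplacian_scale(noise)                      # one global model (SII)
sid_scales = {c: laplacian_scale([v for cc, v in zip(classes, noise) if cc == c])
              for c in (0, 1)}                          # per-class models (SID)
```

The per-class scales bracket the global one, which is why the SID model tracks the true correlation noise more faithfully than a single SII fit.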
Comparison of Predicted and Measured Attenuation of Turbine Noise from a Static Engine Test
NASA Technical Reports Server (NTRS)
Chien, Eugene W.; Ruiz, Marta; Yu, Jia; Morin, Bruce L.; Cicon, Dennis; Schwieger, Paul S.; Nark, Douglas M.
2007-01-01
Aircraft noise has become an increasing concern for commercial airlines. Worldwide demand for quieter aircraft is increasing, making the prediction of engine noise suppression one of the most important fields of research. The Low-Pressure Turbine (LPT) can be an important noise source during the approach condition for commercial aircraft. The National Aeronautics and Space Administration (NASA), Pratt & Whitney (P&W), and Goodrich Aerostructures (Goodrich) conducted a joint program to validate a method for predicting turbine noise attenuation. The method includes noise-source estimation, acoustic treatment impedance prediction, and in-duct noise propagation analysis. Two noise propagation prediction codes, Eversman Finite Element Method (FEM) code [1] and the CDUCT-LaRC [2] code, were used in this study to compare the predicted and the measured turbine noise attenuation from a static engine test. In this paper, the test setup, test configurations and test results are detailed in Section II. A description of the input parameters, including estimated noise modal content (in terms of acoustic potential), and acoustic treatment impedance values are provided in Section III. The prediction-to-test correlation study results are illustrated and discussed in Section IV and V for the FEM and the CDUCT-LaRC codes, respectively, and a summary of the results is presented in Section VI.
Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Felix; Quach, Tu-Thach; Wheeler, Jason
File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.
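The byte n-gram extraction step that the learned dictionaries operate on can be sketched as below; the fragment is hypothetical, and the dictionary-learning stage itself is omitted.

```python
# Count byte n-grams in a file fragment by sliding a length-n window.
from collections import Counter

def ngram_counts(fragment: bytes, n: int) -> Counter:
    return Counter(fragment[i:i + n] for i in range(len(fragment) - n + 1))

top = ngram_counts(b"abababra", 2).most_common(2)
```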
Robertson, Helen E; Lapraz, François; Egger, Bernhard; Telford, Maximilian J; Schiffer, Philipp H
2017-05-12
Acoels are small, ubiquitous - but understudied - marine worms with a very simple body plan. Their internal phylogeny is still not fully resolved, and the position of their proposed phylum Xenacoelomorpha remains debated. Here we describe mitochondrial genome sequences from the acoels Paratomella rubra and Isodiametra pulchra, and the complete mitochondrial genome of the acoel Archaphanostoma ylvae. The P. rubra and A. ylvae sequences are typical for metazoans in size and gene content. The larger I. pulchra mitochondrial genome contains both ribosomal genes and 21 tRNAs, but only 11 protein-coding genes. We find evidence suggesting a duplicated sequence in the I. pulchra mitochondrial genome. The P. rubra, I. pulchra and A. ylvae mitochondria have a unique genome organisation in comparison to other metazoan mitochondrial genomes. We found a large degree of protein-coding gene and tRNA overlap with little non-coding sequence in the compact P. rubra genome. Conversely, the A. ylvae and I. pulchra genomes have many long non-coding sequences between genes, likely driving genome size expansion in the latter. Phylogenetic trees inferred from mitochondrial genes retrieve Xenacoelomorpha as an early branching taxon in the deuterostomes. Sequence divergence analysis between P. rubra sampled in England and Spain indicates cryptic diversity.
Sparse Coding for N-Gram Feature Extraction and Training for File Fragment Classification
Wang, Felix; Quach, Tu-Thach; Wheeler, Jason; ...
2018-04-05
File fragment classification is an important step in the task of file carving in digital forensics. In file carving, files must be reconstructed based on their content as a result of their fragmented storage on disk or in memory. Existing methods for classification of file fragments typically use hand-engineered features such as byte histograms or entropy measures. In this paper, we propose an approach using sparse coding that enables automated feature extraction. Sparse coding, or sparse dictionary learning, is an unsupervised learning algorithm, and is capable of extracting features based simply on how well those features can be used to reconstruct the original data. With respect to file fragments, we learn sparse dictionaries for n-grams, continuous sequences of bytes, of different sizes. These dictionaries may then be used to estimate n-gram frequencies for a given file fragment, but for significantly larger n-gram sizes than are typically found in existing methods, which suffer from combinatorial explosion. To demonstrate the capability of our sparse coding approach, we used the resulting features to train standard classifiers such as support vector machines (SVMs) over multiple file types. Experimentally, we achieved significantly better classification results with respect to existing methods, especially when the features were used in supplement to existing hand-engineered features.
[Orthopedic and trauma surgery in the German DRG system. Recent developments].
Franz, D; Schemmann, F; Selter, D D; Wirtz, D C; Roeder, N; Siebert, H; Mahlke, L
2012-07-01
Orthopedics and trauma surgery are subject to continuous medical advancement. The correct and performance-based case allocation by German diagnosis-related groups (G-DRG) is a major challenge. This article analyzes and assesses current developments in orthopedics and trauma surgery in the areas of coding of diagnoses and medical procedures and the development of the 2012 G-DRG system. The relevant diagnoses, medical procedures and G-DRGs in the versions 2011 and 2012 were analyzed based on the publications of the German DRG Institute (InEK) and the German Institute of Medical Documentation and Information (DIMDI). Changes were made for the International Classification of Diseases (ICD) coding of complex cases with medical complications, the procedure coding for spinal surgery and for hand and foot surgery. The G-DRG structures were modified for endoprosthetic surgery on ankle, shoulder and elbow joints. The definition of modular structured endoprostheses was clarified. The G-DRG system for orthopedic and trauma surgery appears to be largely consolidated. The current phase of the evolution of the G-DRG system is primarily aimed at developing most exact descriptions and definitions of the content and mutual delimitation of operation and procedures coding (OPS). This is an essential prerequisite for a correct and performance-based case allocation in the G-DRG system.
Prediction of Acoustic Loads Generated by Propulsion Systems
NASA Technical Reports Server (NTRS)
Perez, Linamaria; Allgood, Daniel C.
2011-01-01
NASA Stennis Space Center is one of the nation's premier facilities for conducting large-scale rocket engine testing. As liquid rocket engines vary in size, so do the acoustic loads that they produce. When these acoustic loads reach very high levels, they can harm people and damage structures surrounding the testing area. To prevent such damage, prediction tools are used to estimate the spectral content and levels of the acoustics generated by the rocket engine plumes and to model their propagation through the surrounding atmosphere. Prior to the current work, two different acoustic prediction tools were implemented at Stennis Space Center, each having its own advantages and disadvantages depending on the application. Therefore, a new prediction tool was created, using the NASA SP-8072 handbook as a guide, which would replicate the same prediction methods as the previous codes but eliminate the drawbacks the individual codes had. Aside from replicating the previous modeling capability in a single framework, additional modeling functions were added, thereby expanding the current modeling capability. To verify that the new code could reproduce the same predictions as the previous codes, two verification test cases were defined. These verification test cases also served as validation cases, as the predicted results were compared to actual test data.
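A first-cut estimate in the spirit of the SP-8072 approach converts plume mechanical power to acoustic power via an assumed acoustic efficiency; the sketch below uses an illustrative 0.5% efficiency and made-up engine numbers, not the handbook's full source-allocation method.

```python
# Rough rocket-plume acoustic power level: acoustic power taken as a small
# efficiency fraction of mechanical power, expressed in dB re 1 pW.
import math

def acoustic_power_level(thrust_N, exhaust_velocity_mps, efficiency=0.005):
    mechanical_power_W = 0.5 * thrust_N * exhaust_velocity_mps
    acoustic_power_W = efficiency * mechanical_power_W
    return 10.0 * math.log10(acoustic_power_W / 1e-12)  # dB re 1 pW

level = acoustic_power_level(2.0e6, 2500.0)  # illustrative engine-scale inputs
```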
Evaluation of Cueing Innovation for Pressure Ulcer Prevention Using Staff Focus Groups.
Yap, Tracey L; Kennerly, Susan; Corazzini, Kirsten; Porter, Kristie; Toles, Mark; Anderson, Ruth A
2014-07-25
The purpose of the manuscript is to describe long-term care (LTC) staff perceptions of a music cueing intervention designed to improve staff integration of pressure ulcer (PrU) prevention guidelines regarding consistent and regular movement of LTC residents a minimum of every two hours. The Diffusion of Innovation (DOI) model guided staff interviews about their perceptions of the intervention's characteristics, outcomes, and sustainability. This was a qualitative, observational study of staff perceptions of the PrU prevention intervention conducted in Midwestern U.S. LTC facilities (N = 45 staff members). One focus group was held in each of eight intervention facilities using a semi-structured interview protocol. Transcripts were analyzed using thematic content analysis, and summaries for each category were compared across groups. The a priori codes (observability, trialability, compatibility, relative advantage and complexity) described the innovation characteristics, and the sixth code, sustainability, was identified in the data. Within each code, two themes emerged as a positive or negative response regarding characteristics of the innovation. Moreover, within the sustainability code, a third theme emerged that was labeled "brainstormed ideas", focusing on strategies for improving the innovation. Cueing LTC staff using music offers a sustainable potential to improve PrU prevention practices, to increase resident movement, which can subsequently lead to a reduction in PrUs.
NASA Astrophysics Data System (ADS)
You, Minli; Lin, Min; Wang, Shurui; Wang, Xuemin; Zhang, Ge; Hong, Yuan; Dong, Yuqing; Jin, Guorui; Xu, Feng
2016-05-01
Medicine counterfeiting is a serious issue worldwide, involving potentially devastating health repercussions. Advanced anti-counterfeit technology for drugs has therefore aroused intensive interest. However, existing anti-counterfeit technologies are associated with drawbacks such as the high cost, complex fabrication process, sophisticated operation and incapability in authenticating drug ingredients. In this contribution, we developed a smart phone recognition based upconversion fluorescent three-dimensional (3D) quick response (QR) code for tracking and anti-counterfeiting of drugs. We firstly formulated three colored inks incorporating upconversion nanoparticles with RGB (i.e., red, green and blue) emission colors. Using a modified inkjet printer, we printed a series of colors by precisely regulating the overlap of these three inks. Meanwhile, we developed a multilayer printing and splitting technology, which significantly increases the information storage capacity per unit area. As an example, we directly printed the upconversion fluorescent 3D QR code on the surface of drug capsules. The 3D QR code consisted of three different color layers with each layer encoded by information of different aspects of the drug. A smart phone APP was designed to decode the multicolor 3D QR code, providing the authenticity and related information of drugs. The developed technology possesses merits in terms of low cost, ease of operation, high throughput and high information capacity, thus holds great potential for drug anti-counterfeiting. Electronic supplementary information (ESI) available: calculating details of UCNP content per 3D QR code and decoding process of the 3D QR code. See DOI: 10.1039/c6nr01353h
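The multilayer idea can be illustrated with a toy sketch: three binary data layers packed into one RGB pixel array and recovered by splitting channels. The upconversion imaging and actual QR decoding steps are not modeled.

```python
# Pack three binary layers into RGB pixels, then recover them per channel.
layers = {
    "red":   [[1, 0], [0, 1]],
    "green": [[0, 1], [1, 0]],
    "blue":  [[1, 1], [0, 0]],
}

# Pack: each pixel holds one bit from each layer as its (R, G, B) value.
packed = [[(layers["red"][r][c], layers["green"][r][c], layers["blue"][r][c])
           for c in range(2)] for r in range(2)]

# Split: recover each layer from its own channel.
recovered = {name: [[packed[r][c][i] for c in range(2)] for r in range(2)]
             for i, name in enumerate(("red", "green", "blue"))}
```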
Unaligned instruction relocation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.
In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.
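What "relocating a target" amounts to can be illustrated with a toy byte-patching sketch; the 4-byte little-endian absolute addresses here are hypothetical, and real linkers drive this from relocation records and ISA-specific encodings.

```python
# Toy relocation: patch an address operand embedded in assembled bytes.
import struct

def relocate(code: bytearray, offset: int, new_addr: int) -> None:
    # Overwrite the 4-byte little-endian address stored at `offset`.
    struct.pack_into("<I", code, offset, new_addr)

# Two no-op bytes, a 4-byte address operand, one trailing no-op byte.
blob = bytearray(b"\x90\x90" + struct.pack("<I", 0x1000) + b"\x90")
relocate(blob, 2, 0x2000)   # point the operand at the object's new location
```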
Unaligned instruction relocation
Bertolli, Carlo; O'Brien, John K.; Sallenave, Olivier H.; Sura, Zehra N.
2018-01-23
In one embodiment, a computer-implemented method includes receiving source code to be compiled into an executable file for an unaligned instruction set architecture (ISA). Aligned assembled code is generated, by a computer processor. The aligned assembled code complies with an aligned ISA and includes aligned processor code for a processor and aligned accelerator code for an accelerator. A first linking pass is performed on the aligned assembled code, including relocating a first relocation target in the aligned accelerator code that refers to a first object outside the aligned accelerator code. Unaligned assembled code is generated in accordance with the unaligned ISA and includes unaligned accelerator code for the accelerator and unaligned processor code for the processor. A second linking pass is performed on the unaligned assembled code, including relocating a second relocation target outside the unaligned accelerator code that refers to an object in the unaligned accelerator code.
Improved Helicopter Rotor Performance Prediction through Loose and Tight CFD/CSD Coupling
NASA Astrophysics Data System (ADS)
Ickes, Jacob C.
Helicopters and other Vertical Take-Off and Landing (VTOL) vehicles exhibit an interesting combination of structural dynamic and aerodynamic phenomena which together drive the rotor performance. The combination of factors involved makes simulating the rotor a challenging and multidisciplinary effort, and one which is still an active area of interest in the industry because of the money and time it could save during design. Modern tools allow the prediction of rotorcraft physics from first principles. Analysis of the rotor system with this level of accuracy provides the understanding necessary to improve its performance. There has historically been a divide between the comprehensive codes which perform aeroelastic rotor simulations using simplified aerodynamic models, and the very computationally intensive Navier-Stokes Computational Fluid Dynamics (CFD) solvers. As computer resources become more available, efforts have been made to replace the simplified aerodynamics of the comprehensive codes with the more accurate results from a CFD code. The objective of this work is to perform aeroelastic rotorcraft analysis using first-principles simulations for both fluids and structural predictions using tools available at the University of Toledo. Two separate codes are coupled together in both loose coupling (data exchange on a periodic interval) and tight coupling (data exchange each time step) schemes. To allow the coupling to be carried out in a reliable and efficient way, a Fluid-Structure Interaction code was developed which automatically performs primary functions of loose and tight coupling procedures. Flow phenomena such as transonics, dynamic stall, locally reversed flow on a blade, and Blade-Vortex Interaction (BVI) were simulated in this work. Results of the analysis show aerodynamic load improvement due to the inclusion of the CFD-based airloads in the structural dynamics analysis of the Computational Structural Dynamics (CSD) code.
Improvements came in the form of improved peak/trough magnitude prediction, better phase prediction of these locations, and a predicted signal with a frequency content more like the flight test data than the CSD code acting alone. Additionally, a tight coupling analysis was performed as a demonstration of the capability and unique aspects of such an analysis. This work shows that away from the center of the flight envelope, the aerodynamic modeling of the CSD code can be replaced with a more accurate set of predictions from a CFD code with an improvement in the aerodynamic results. The better predictions come at substantially increased computational costs between 1,000 and 10,000 processor-hours.
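The loose-versus-tight exchange schedules described above can be sketched schematically; the two "solvers" below are stand-in relaxation updates, not CFD or CSD codes, chosen only so both schedules converge to the same coupled state.

```python
# Schematic fluid-structure coupling: loose coupling exchanges data once per
# period, tight coupling exchanges every time step.
def fluid_update(load, deflection):
    # Stand-in "CFD": airload relaxes toward (1 - deflection).
    return 0.5 * load + 0.5 * (1.0 - deflection)

def structure_update(deflection, load):
    # Stand-in "CSD": deflection relaxes toward the current airload.
    return 0.5 * deflection + 0.5 * load

def couple(steps_per_exchange, n_exchanges):
    load, deflection = 1.0, 0.0
    for _ in range(n_exchanges):
        for _ in range(steps_per_exchange):   # advance the "fluid" solver
            load = fluid_update(load, deflection)
        deflection = structure_update(deflection, load)  # exchange data
    return load, deflection

tight = couple(1, 200)   # data exchanged every time step
loose = couple(10, 20)   # data exchanged once per 10-step "period"
```

Both schedules settle on the same coupled solution here; in practice the trade-off is between the cost of frequent exchanges and the accuracy of resolving transient interaction.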
Linear chirp phase perturbing approach for finding binary phased codes
NASA Astrophysics Data System (ADS)
Li, Bing C.
2017-05-01
Binary phased codes have many applications in communication and radar systems. These applications require binary phased codes to have low sidelobes in order to reduce interference and false detections. Barker codes satisfy these requirements and have the lowest maximum sidelobes. However, Barker codes have very limited code lengths (13 or less), while many applications, including low-probability-of-intercept radar and spread spectrum communication, require much greater code lengths. The conventional techniques for finding binary phased codes in the literature include exhaustive search, neural networks, and evolutionary methods, all of which require very expensive computation for large code lengths. These techniques are therefore limited to finding binary phased codes with small code lengths (less than 100). In this paper, by analyzing Barker code, linear chirp, and P3 phases, we propose a new approach to finding binary phased codes. Experiments show that the proposed method is able to find long, low-sidelobe binary phased codes (code length >500) at reasonable computational cost.
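The sidelobe property that makes Barker codes the benchmark here is easy to check directly: the aperiodic autocorrelation of any Barker code has sidelobe magnitudes of at most 1. A minimal sketch for the length-13 code:

```python
# Aperiodic autocorrelation of the length-13 Barker code. Barker codes
# have peak sidelobe magnitude 1 (the lowest possible), which is why
# they are the reference point for binary phased code design.

def autocorrelation(code):
    """Aperiodic autocorrelation at all non-negative lags."""
    n = len(code)
    return [sum(code[i] * code[i + k] for i in range(n - k)) for k in range(n)]

barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
acf = autocorrelation(barker13)
peak = acf[0]                              # mainlobe equals the code length
max_sidelobe = max(abs(v) for v in acf[1:])
print(peak, max_sidelobe)                  # 13 1
```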
Maximum likelihood decoding analysis of Accumulate-Repeat-Accumulate Codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
Repeat-Accumulate (RA) codes are the simplest turbo-like codes that achieve good performance. However, they cannot compete with turbo codes or low-density parity-check (LDPC) codes as far as performance is concerned. Accumulate-Repeat-Accumulate (ARA) codes, a subclass of LDPC codes, are obtained by adding a pre-coder in front of RA codes with puncturing, where an accumulator is chosen as the precoder. These codes are not only very simple but also achieve excellent performance with iterative decoding. In this paper, the performance of these codes with maximum likelihood (ML) decoding is analyzed and compared to random codes using very tight bounds. The weight distribution of some simple ARA codes is obtained, and through the existing tightest bounds we show that the ML SNR threshold of ARA codes approaches the performance of random codes very closely. We also show that the use of the precoder improves the SNR threshold, while the interleaving gain remains unchanged with respect to the RA code with puncturing.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, A.; Divsalar, D.; Yao, K.
2004-01-01
In this paper we propose an innovative channel coding scheme called Accumulate-Repeat-Accumulate (ARA) codes. This class of codes can be viewed as turbo-like codes, namely a double serial concatenation of a rate-1 accumulator as an outer code, a regular or irregular repetition as a middle code, and a punctured accumulator as an inner code.
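The three-stage chain described above can be sketched as follows. Puncturing and interleaving are omitted and the repetition factor is an arbitrary choice, so this shows the encoder structure rather than any particular ARA code from the paper:

```python
# Structural sketch of an accumulate-repeat-accumulate encoder chain:
# outer accumulator (the pre-coder) -> repetition -> inner accumulator.
# Assumption: the accumulator is a running mod-2 prefix sum; puncturing
# and interleaving, which the real construction uses, are not shown.

def accumulate(bits):
    """Rate-1 accumulator: running XOR of the input bits."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def repeat(bits, q=3):
    """Regular repetition code: each bit repeated q times."""
    return [b for b in bits for _ in range(q)]

def ara_encode(info_bits, q=3):
    return accumulate(repeat(accumulate(info_bits), q))

print(ara_encode([1, 0, 1]))   # [1, 0, 1, 0, 1, 0, 0, 0, 0]
```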
Ensemble coding of face identity is not independent of the coding of individual identity.
Neumann, Markus F; Ng, Ryan; Rhodes, Gillian; Palermo, Romina
2018-06-01
Information about a group of similar objects can be summarized into a compressed code, known as ensemble coding. Ensemble coding of simple stimuli (e.g., groups of circles) can occur in the absence of detailed exemplar coding, suggesting dissociable processes. Here, we investigate whether a dissociation would still be apparent when coding facial identity, where individual exemplar information is much more important. We examined whether ensemble coding can occur when exemplar coding is difficult, as a result of large sets or short viewing times, or whether the two types of coding are positively associated. We found a positive association, whereby both ensemble and exemplar coding were reduced for larger groups and shorter viewing times. There was no evidence for ensemble coding in the absence of exemplar coding. At longer presentation times, there was an unexpected dissociation, where exemplar coding increased yet ensemble coding decreased, suggesting that robust information about face identity might suppress ensemble coding. Thus, for face identity, we did not find the classic dissociation (access to ensemble information in the absence of detailed exemplar information) that has been used to support claims of distinct mechanisms for ensemble and exemplar coding.
Performance and structure of single-mode bosonic codes
NASA Astrophysics Data System (ADS)
Albert, Victor V.; Noh, Kyungjoo; Duivenvoorden, Kasper; Young, Dylan J.; Brierley, R. T.; Reinhold, Philip; Vuillot, Christophe; Li, Linshu; Shen, Chao; Girvin, S. M.; Terhal, Barbara M.; Jiang, Liang
2018-03-01
The early Gottesman, Kitaev, and Preskill (GKP) proposal for encoding a qubit in an oscillator has recently been followed by cat- and binomial-code proposals. Numerically optimized codes have also been proposed, and we introduce codes of this type here. These codes have yet to be compared using the same error model; we provide such a comparison by determining the entanglement fidelity of all codes with respect to the bosonic pure-loss channel (i.e., photon loss) after the optimal recovery operation. We then compare achievable communication rates of the combined encoding-error-recovery channel by calculating the channel's hashing bound for each code. Cat and binomial codes perform similarly, with binomial codes outperforming cat codes at small loss rates. Despite not being designed to protect against the pure-loss channel, GKP codes significantly outperform all other codes for most values of the loss rate. We show that the performance of GKP and some binomial codes increases monotonically with increasing average photon number of the codes. In order to corroborate our numerical evidence of the cat-binomial-GKP order of performance occurring at small loss rates, we analytically evaluate the quantum error-correction conditions of those codes. For GKP codes, we find an essential singularity in the entanglement fidelity in the limit of vanishing loss rate. In addition to comparing the codes, we draw parallels between binomial codes and discrete-variable systems. First, we characterize one- and two-mode binomial as well as multiqubit permutation-invariant codes in terms of spin-coherent states. Such a characterization allows us to introduce check operators and error-correction procedures for binomial codes. Second, we introduce a generalization of spin-coherent states, extending our characterization to qudit binomial codes and yielding a multiqudit code.
LDPC Codes with Minimum Distance Proportional to Block Size
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Jones, Christopher; Dolinar, Samuel; Thorpe, Jeremy
2009-01-01
Low-density parity-check (LDPC) codes characterized by minimum Hamming distances proportional to block sizes have been demonstrated. Like the codes mentioned in the immediately preceding article, the present codes are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. The previously mentioned codes have low decoding thresholds and reasonably low error floors. However, the minimum Hamming distances of those codes do not grow linearly with code-block sizes. Codes that have this minimum-distance property exhibit very low error floors. Examples of such codes include regular LDPC codes with variable degrees of at least 3. Unfortunately, the decoding thresholds of regular LDPC codes are high. Hence, there is a need for LDPC codes characterized by both low decoding thresholds and, in order to obtain acceptably low error floors, minimum Hamming distances that are proportional to code-block sizes. The present codes were developed to satisfy this need. The minimum Hamming distances of the present codes have been shown, through consideration of ensemble-average weight enumerators, to be proportional to code block sizes. As in the cases of irregular ensembles, the properties of these codes are sensitive to the proportion of degree-2 variable nodes. A code having too few such nodes tends to have an iterative decoding threshold that is far from the capacity threshold. A code having too many such nodes tends not to exhibit a minimum distance that is proportional to block size. Results of computational simulations have shown that the decoding thresholds of codes of the present type are lower than those of regular LDPC codes. 
Included in the simulations were a few examples from a family of codes characterized by rates ranging from low to high and by thresholds that adhere closely to their respective channel capacity thresholds; the simulation results from these examples showed that the codes in question have low error floors as well as low decoding thresholds. As an example, the illustration shows the protograph (which represents the blueprint for overall construction) of one proposed code family for code rates greater than or equal to 1/2. An LDPC code of any size can be obtained by copying the protograph structure N times, then permuting the edges. The illustration also provides Field Programmable Gate Array (FPGA) hardware performance simulations for this code family. In addition, the illustration provides minimum signal-to-noise ratios (Eb/No) in decibels (decoding thresholds) to achieve zero error rates as the code block size goes to infinity for various code rates. In comparison with the codes mentioned in the preceding article, these codes have slightly higher decoding thresholds.
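The copy-and-permute ("lifting") construction mentioned above can be sketched with circulant permutations, one common choice of edge permutation. The base matrix and shift values here are hypothetical illustrations, not the protograph of the code family described in the article:

```python
# Protograph lifting sketch: copy a small base matrix N times and replace
# each edge with an N x N cyclic-shift permutation matrix, yielding the
# full parity-check matrix H. Base matrix and shifts are made up for
# illustration; real designs choose them to optimize threshold and girth.

def circulant(n, shift):
    """N x N permutation matrix: identity cyclically shifted by `shift`."""
    return [[1 if (c - r) % n == shift else 0 for c in range(n)] for r in range(n)]

def lift(base, shifts, n):
    rows, cols = len(base) * n, len(base[0]) * n
    H = [[0] * cols for _ in range(rows)]
    for i, row in enumerate(base):
        for j, edge in enumerate(row):
            if edge:
                P = circulant(n, shifts[i][j])
                for r in range(n):
                    for c in range(n):
                        H[i * n + r][j * n + c] = P[r][c]
    return H

base = [[1, 1, 0], [0, 1, 1]]      # 2x3 protograph (hypothetical)
shifts = [[0, 1, 0], [0, 2, 1]]    # one cyclic shift per edge
H = lift(base, shifts, 4)          # lift factor N = 4 -> 8x12 matrix
print(len(H), len(H[0]))           # 8 12
```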
76 FR 77549 - Lummi Nation-Title 20-Code of Laws-Liquor Code
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-13
... DEPARTMENT OF THE INTERIOR Bureau of Indian Affairs Lummi Nation--Title 20--Code of Laws--Liquor... amendment to Lummi Nation's Title 20--Code of Laws--Liquor Code. The Code regulates and controls the... this amendment to Title 20--Lummi Nation Code of Laws--Liquor Code by Resolution 2011-038 on March 1...
NASA Technical Reports Server (NTRS)
Hinds, Erold W. (Principal Investigator)
1996-01-01
This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
Beam-dynamics codes used at DARHT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekdahl, Jr., Carl August
Several beam simulation codes are used to help gain a better understanding of beam dynamics in the DARHT LIAs. The most notable of these fall into the following categories: for beam production, the Tricomp Trak orbit tracking code and the LSP particle-in-cell (PIC) code; for beam transport and acceleration, the XTR static envelope and centroid code, the LAMDA time-resolved envelope and centroid code, and the LSP-Slice PIC code; and for coasting-beam transport to target, the LAMDA time-resolved envelope code and the LSP-Slice PIC code. These codes are also being used to inform the design of Scorpius.
Amino acid codes in mitochondria as possible clues to primitive codes
NASA Technical Reports Server (NTRS)
Jukes, T. H.
1981-01-01
Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.
Some partial-unit-memory convolutional codes
NASA Technical Reports Server (NTRS)
Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.
1991-01-01
The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.
NASA Technical Reports Server (NTRS)
Lee, L.-N.
1977-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively modest coding complexity, it is proposed to concatenate a byte-oriented unit-memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real-time minimal-byte-error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
NASA Technical Reports Server (NTRS)
Lee, L. N.
1976-01-01
Concatenated coding systems utilizing a convolutional code as the inner code and a Reed-Solomon code as the outer code are considered. In order to obtain very reliable communications over a very noisy channel with relatively small coding complexity, it is proposed to concatenate a byte oriented unit memory convolutional code with an RS outer code whose symbol size is one byte. It is further proposed to utilize a real time minimal byte error probability decoding algorithm, together with feedback from the outer decoder, in the decoder for the inner convolutional code. The performance of the proposed concatenated coding system is studied, and the improvement over conventional concatenated systems due to each additional feature is isolated.
You've Written a Cool Astronomy Code! Now What Do You Do with It?
NASA Astrophysics Data System (ADS)
Allen, Alice; Accomazzi, A.; Berriman, G. B.; DuPrie, K.; Hanisch, R. J.; Mink, J. D.; Nemiroff, R. J.; Shamir, L.; Shortridge, K.; Taylor, M. B.; Teuben, P. J.; Wallin, J. F.
2014-01-01
Now that you've written a useful astronomy code for your soon-to-be-published research, you have to figure out what you want to do with it. Our suggestion? Share it! This presentation highlights the means and benefits of sharing your code. Make your code citable -- submit it to the Astrophysics Source Code Library and have it indexed by ADS! The Astrophysics Source Code Library (ASCL) is a free online registry of source codes of interest to astronomers and astrophysicists. With over 700 codes, it is continuing its rapid growth, with an average of 17 new codes a month. The editors seek out codes for inclusion; indexing by ADS improves the discoverability of codes and provides a way to cite codes as separate entries, especially codes without papers that describe them.
Relapse and Craving in Alcohol-Dependent Individuals: A Comparison of Self-Reported Determinants.
Snelleman, Michelle; Schoenmakers, Tim M; van de Mheen, Dike
2018-06-07
Negative affective states and alcohol-related stimuli increase risk of relapse in alcohol dependence. In research and in clinical practice, craving is often used as another important indicator of relapse, but this lacks a firm empirical foundation. The goal of the present study is to explore and compare determinants for relapse and craving, using Marlatt's (1996) taxonomy of high risk situations as a template. We conducted semi-structured interviews with 20 alcohol-dependent patients about their most recent relapse and craving episodes. Interview transcripts were carefully reviewed for their thematic content, and codes capturing the thematic content were formulated. In total, we formulated 42 relapse-related codes and 33 craving-related codes. Descriptions of craving episodes revealed that these episodes vary in frequency and intensity. The presence of alcohol-related stimuli (n = 11) and experiencing a negative emotional state (n = 11) were frequently occurring determinants of craving episodes. Both negative emotional states (n = 17) and testing personal control (n = 11) were viewed as important determinants of relapses. Craving was seldom mentioned as a determinant for relapse. Additionally, participants reported multiple determinants preceding a relapse, whereas craving episodes were preceded by only one determinant. Patient reports do not support the claim that craving by itself is an important proximal determinant for relapse. In addition, multiple determinants were present before a relapse. Therefore, future research should focus on the complex interplay of different determinants.
Izadi, Sonya; Pachur, Thorsten; Wheeler, Courtney; McGuire, Jaclyn; Waters, Erika A
2017-10-01
To gain insight into patients' medical decisions by exploring the content of laypeople's spontaneous mental associations with the term "side effect." An online cross-sectional survey asked 144 women aged 40-74, "What are the first three things you think of when you hear the words 'side effect'?" Data were analyzed using content analysis, chi-square, and Fisher's exact tests. 17 codes emerged and were grouped into 4 themes and a Miscellaneous category: Health Problems (70.8% of participants), Decision-Relevant Evaluations (52.8%), Negative Affect (30.6%), Practical Considerations (18.1%) and Miscellaneous (9.7%). The 4 most frequently identified codes were: Risk (36.1%), Health Problems-Specific Symptoms (35.4%), Health Problems-General Terms (32.6%), and Negative Affect-Strong (19.4%). Code and theme frequencies were generally similar across demographic groups (ps>0.05). The term "side effect" spontaneously elicited comments related to identifying health problems and expressing negative emotions. This might explain why the mere possibility of side effects triggers negative affect for people making medical decisions. Some respondents also mentioned decision-relevant evaluations and practical considerations in response to side effects. Addressing commonly held associations and acknowledging the negative affect provoked by side effects are first steps healthcare providers can take towards improving informed and shared patient decision making. Copyright © 2017 Elsevier B.V. All rights reserved.
Ou, Jing; Liu, Jin-Bo; Yao, Fu-Jiao; Wang, Xin-Guo; Wei, Zhao-Ming
2016-01-01
Flour beetles of the genus Tribolium are all pests of stored products and cause severe economic losses every year. The American black flour beetle Tribolium audax is one of the important pest species of flour beetles, and it is also an important quarantine insect. Here we sequenced and characterized the complete mitochondrial genome of T. audax, which was intercepted by Huangpu Custom in maize from America. The complete circular mitochondrial genome (mitogenome) of T. audax was 15,924 bp in length, containing 37 typical coding genes and one non-coding AT-rich region. The mitogenome of T. audax exhibits a gene arrangement and content identical to the most common type in insects. All protein-coding genes (PCGs) start with a typical ATN initiation codon, except for cox1, which uses AAC as its start codon instead of ATN. Eleven genes use a standard complete termination codon (nine TAA, two TAG), whereas the nad4 and nad5 genes end with a single T. Except for trnS1 (AGN), all tRNA genes display the typical cloverleaf secondary structure found in other insects. The sizes of the large and small ribosomal RNA genes are 1288 and 780 bp, respectively. The AT content of the AT-rich region is 81.36%. The 5 bp conserved motif TACTA was found in the intergenic region between trnS2 (UCN) and nad1.
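Base-composition figures such as the 81.36% AT content reported for the AT-rich region are straightforward to reproduce from a sequence. A minimal sketch (the sequence below is a toy example, not T. audax data):

```python
# AT content of a DNA sequence as a percentage, as reported for
# mitogenome regions in the abstract. Toy input, not real sequence data.

def at_content(seq):
    """Percentage of A and T bases in a DNA sequence."""
    seq = seq.upper()
    return 100.0 * sum(seq.count(base) for base in "AT") / len(seq)

print(at_content("ATATGCAT"))   # 75.0 for this toy sequence
```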
Trends in newspaper coverage of mental illness in Canada: 2005-2010.
Whitley, Rob; Berry, Sarah
2013-02-01
Much research suggests that the general public relies on the popular media as a primary source of information about mental illness. We assessed the broad content of articles relating to mental illness in major Canadian newspapers over a 6-year period. We also sought to assess if such content has changed over time. We conducted a retrospective analysis of Canadian newspaper coverage from 2005 to 2010. Research assistants used a standardized guide to code 11,263 newspaper articles that mention the terms mental health, mental illness, schizophrenia, or schizophrenic. Once the articles were coded, descriptive statistics were produced for overarching themes and time trend analyses from 2005 to 2010. Danger, violence, and criminality were direct themes in 40% of newspaper articles. Treatment for a mental illness was discussed in only 19% of newspaper articles, and in only 18% was recovery or rehabilitation a significant theme. Eighty-three per cent of articles coded lacked a quotation from someone with a mental illness. We did not observe any significant changes over time from 2005 to 2010 in any domain measured. There is scope for more balanced, accurate, and informative coverage of mental health issues in Canada. Newspaper articles infrequently reflect the common realities of mental illness phenomenology, course, and outcome. Currently, clinicians may direct patients and family members to other resources for more comprehensive and accurate information about mental illness.
Li, Hu; Liu, Hui; Shi, Aimin; Štys, Pavel; Zhou, Xuguo; Cai, Wanzhi
2012-01-01
Many true bugs are important insect pests of cultivated crops and some are important vectors of human diseases, but few cladistic analyses have addressed relationships among the seven infraorders of Heteroptera. The Enicocephalomorpha and Nepomorpha are considered the basal groups of Heteroptera, but the basal-most lineage remains unresolved. Here we report the mitochondrial genome of the unique-headed bug Stenopirates sp., the first mitochondrial genome sequenced from Enicocephalomorpha. The Stenopirates sp. mitochondrial genome is a typical circular DNA molecule of 15,384 bp in length, and contains 37 genes and a large non-coding fragment. The gene order differs substantially from other known insect mitochondrial genomes, with rearrangements of both tRNA genes and protein-coding genes. The overall AT content (82.5%) of Stenopirates sp. is the highest among all the known heteropteran mitochondrial genomes. The strand bias is consistent with other true bugs, with negative GC-skew and positive AT-skew for the J-strand. The heteropteran mitochondrial atp8 exhibits the highest evolutionary rate, whereas cox1 appears to have the lowest rate. Furthermore, a negative correlation was observed between the variation of nucleotide substitutions and the GC content of each protein-coding gene. A microsatellite was identified in the putative control region. Finally, phylogenetic reconstruction suggests that Enicocephalomorpha is the sister group to all the remaining Heteroptera. PMID:22235294
Code of Federal Regulations, 2011 CFR
2011-01-01
... lien accommodation or subordination and including the amount and maturity of the proposed loan, a... provisions of applicable model codes for any buildings to be constructed, as required by 7 CFR 1792.104; and...
Performance comparison of leading image codecs: H.264/AVC Intra, JPEG2000, and Microsoft HD Photo
NASA Astrophysics Data System (ADS)
Tran, Trac D.; Liu, Lijie; Topiwala, Pankaj
2007-09-01
This paper provides a detailed rate-distortion performance comparison between JPEG2000, Microsoft HD Photo, and H.264/AVC High Profile 4:4:4 I-frame coding for high-resolution still images and high-definition (HD) 1080p video sequences. This work is an extension of our previous comparative studies published in earlier SPIE conferences [1, 2]. Here we further optimize all three codecs for compression performance. Coding simulations are performed on a set of large-format color images captured from mainstream digital cameras and 1080p HD video sequences commonly used for H.264/AVC standardization work. Overall, our experimental results show that all three codecs offer very similar coding performance at the high-quality, high-resolution setting. Differences tend to be data-dependent: JPEG2000, with its wavelet technology, tends to be the best performer on smooth spatial data; H.264/AVC High Profile, with its advanced spatial prediction modes, tends to cope best with more complex visual content; Microsoft HD Photo tends to be the most consistent across the board. For the still-image data sets, JPEG2000 offers the best R-D performance gains (around 0.2 to 1 dB in peak signal-to-noise ratio) over H.264/AVC High Profile intra coding and Microsoft HD Photo. For the 1080p video data set, all three codecs offer very similar coding performance. As in [1, 2], we consider neither scalability nor complexity in this study (JPEG2000 operates in its non-scalable, but optimal-performance, mode).
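The rate-distortion comparisons above are reported in PSNR; for 8-bit data it is 10·log10(255²/MSE). A minimal sketch on flat sample lists (real evaluations operate on full image planes):

```python
# Peak signal-to-noise ratio in dB for 8-bit samples: the distortion
# metric used in the codec comparison. Toy inputs for illustration.
import math

def psnr(original, decoded, peak=255):
    """PSNR in dB; returns infinity for identical signals."""
    mse = sum((a - b) ** 2 for a, b in zip(original, decoded)) / len(original)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

# An error of one gray level everywhere gives MSE = 1, about 48.13 dB.
print(round(psnr([10, 20, 30], [11, 21, 31]), 2))   # 48.13
```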
Zayed, Amro; Whitfield, Charles W.
2008-01-01
Apis mellifera originated in Africa and extended its range into Eurasia in two or more ancient expansions. In 1956, honey bees of African origin were introduced into South America, their descendents admixing with previously introduced European bees, giving rise to the highly invasive and economically devastating “Africanized” honey bee. Here we ask whether the honey bee's out-of-Africa expansions, both ancient and recent (invasive), were associated with a genome-wide signature of positive selection, detected by contrasting genetic differentiation estimates (FST) between coding and noncoding SNPs. In native populations, SNPs in protein-coding regions had significantly higher FST estimates than those in noncoding regions, indicating adaptive evolution in the genome driven by positive selection. This signal of selection was associated with the expansion of honey bees from Africa into Western and Northern Europe, perhaps reflecting adaptation to temperate environments. We estimate that positive selection acted on a minimum of 852–1,371 genes or ≈10% of the bee's coding genome. We also detected positive selection associated with the invasion of African-derived honey bees in the New World. We found that introgression of European-derived alleles into Africanized bees was significantly greater for coding than noncoding regions. Our findings demonstrate that Africanized bees exploited the genetic diversity present from preexisting introductions in an adaptive way. Finally, we found a significant negative correlation between FST estimates and the local GC content surrounding coding SNPs, suggesting that AT-rich genes play an important role in adaptive evolution in the honey bee. PMID:18299560
Wang, Pei; Song, Fan; Cai, Wanzhi
2014-01-01
Insect mitochondrial genomes are very important for understanding molecular evolution, as well as for phylogenetic and phylogeographic studies of insects. The Miridae are the largest family of Heteroptera, encompassing more than 11,000 described species, and are of great economic importance. To better understand the diversity and evolution of plant bugs, we sequenced five new mitochondrial genomes and present the first comparative analysis of the nine mitochondrial genomes of mirids available to date. Our results showed that gene content, gene arrangement, base composition and the sequences of the mitochondrial transcription termination factor were conserved in plant bugs. Congeneric species shared more conserved genomic characteristics, such as the nucleotide and amino acid composition of protein-coding genes, the secondary structure and anticodon mutations of tRNAs, and non-coding sequences. The control region possessed several distinct characteristics, including variable size, abundant tandem repetitions, and intra-genus conservation, and was useful in evolutionary and population genetic studies. AGG codon reassignments between serine and lysine were investigated in the genus Adelphocoris and other cimicomorphans. Our analysis revealed correlated evolution between reassignments of the AGG codon and specific point mutations at the anticodons of tRNALys and tRNASer(AGN). Phylogenetic analysis indicated that mitochondrial genome sequences were useful in resolving family-level relationships of Cimicomorpha. Comparative evolutionary analysis of plant bug mitochondrial genomes allowed the identification of previously neglected coding genes or non-coding regions as potential molecular markers. The finding of the AGG codon reassignments between serine and lysine indicated the parallel evolution of the genetic code in Hemiptera mitochondrial genomes. PMID:24988409
Zayed, Amro; Whitfield, Charles W
2008-03-04
Apis mellifera originated in Africa and extended its range into Eurasia in two or more ancient expansions. In 1956, honey bees of African origin were introduced into South America, their descendents admixing with previously introduced European bees, giving rise to the highly invasive and economically devastating "Africanized" honey bee. Here we ask whether the honey bee's out-of-Africa expansions, both ancient and recent (invasive), were associated with a genome-wide signature of positive selection, detected by contrasting genetic differentiation estimates (F(ST)) between coding and noncoding SNPs. In native populations, SNPs in protein-coding regions had significantly higher F(ST) estimates than those in noncoding regions, indicating adaptive evolution in the genome driven by positive selection. This signal of selection was associated with the expansion of honey bees from Africa into Western and Northern Europe, perhaps reflecting adaptation to temperate environments. We estimate that positive selection acted on a minimum of 852-1,371 genes or approximately 10% of the bee's coding genome. We also detected positive selection associated with the invasion of African-derived honey bees in the New World. We found that introgression of European-derived alleles into Africanized bees was significantly greater for coding than noncoding regions. Our findings demonstrate that Africanized bees exploited the genetic diversity present from preexisting introductions in an adaptive way. Finally, we found a significant negative correlation between F(ST) estimates and the local GC content surrounding coding SNPs, suggesting that AT-rich genes play an important role in adaptive evolution in the honey bee.
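The coding-versus-noncoding contrast above rests on per-SNP F(ST) estimates. For a single biallelic SNP in two populations, a simple Nei-style estimator is F(ST) = 1 − HS/HT; this is a hedged sketch of that textbook form, and the study's actual estimator may differ:

```python
# Nei-style FST sketch for one biallelic SNP in two equal-sized
# populations. Illustrative only; not the estimator used in the paper.

def fst_biallelic(p1, p2):
    """p1, p2: frequency of the same allele in populations 1 and 2."""
    h1 = 2 * p1 * (1 - p1)         # expected heterozygosity, population 1
    h2 = 2 * p2 * (1 - p2)         # expected heterozygosity, population 2
    hs = (h1 + h2) / 2             # mean within-population diversity
    p_bar = (p1 + p2) / 2
    ht = 2 * p_bar * (1 - p_bar)   # total (pooled) diversity
    return 1 - hs / ht if ht > 0 else 0.0

print(fst_biallelic(0.2, 0.8))     # strongly differentiated SNP, near 0.36
print(fst_biallelic(0.5, 0.5))     # identical frequencies: 0.0
```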
Jamoulle, Marc; Resnick, Melissa; Grosjean, Julien; Ittoo, Ashwin; Cardillo, Elena; Vander Stichele, Robert; Darmoni, Stefan; Vanmeerbeek, Marc
2018-12-01
While documentation of the clinical aspects of General Practice/Family Medicine (GP/FM) is assured by the International Classification of Primary Care (ICPC), there is no taxonomy for the professional aspects (context and management) of GP/FM. To present the development, dissemination, applications, and resulting face validity of the Q-Codes taxonomy specifically designed to describe contextual features of GP/FM, proposed as an extension to the ICPC. The Q-Codes taxonomy was developed from Lamberts' seminal idea for indexing contextual content (1987) by a multi-disciplinary team of knowledge engineers, linguists and general practitioners, through a qualitative and iterative analysis of 1702 abstracts from six GP/FM conferences using Atlas.ti software. A total of 182 concepts, called Q-Codes, representing professional aspects of GP/FM were identified and organized in a taxonomy. Dissemination: The taxonomy is published as an online terminological resource, using semantic web techniques and web ontology language (OWL) ( http://www.hetop.eu/Q ). Each Q-Code is identified by a unique resource identifier (URI) and provided with preferred terms and scope notes in ten languages (Portuguese, Spanish, English, French, Dutch, Korean, Vietnamese, Turkish, Georgian, German), as well as search filters for MEDLINE and web searches. This taxonomy has already been used to support queries in bibliographic databases (e.g., MEDLINE), to facilitate the indexing of grey literature in GP/FM such as congress abstracts, master's theses and websites, and as an educational tool in vocational teaching. Conclusions: The rapidly growing list of practical applications provides face validity for the usefulness of this freely available new terminological resource.
Alcohol industry self-regulation: who is it really protecting?
Noel, Jonathan; Lazzarini, Zita; Robaina, Katherine; Vendrame, Alan
2017-01-01
Self-regulation has been promoted by the alcohol industry as a sufficient means of regulating alcohol marketing activities. However, evidence suggests that the guidelines of self-regulated alcohol marketing codes are violated routinely, resulting in excessive alcohol marketing exposure to youth and the use of content that is potentially harmful to youth and other vulnerable populations. If the alcohol industry does not adhere to its own regulations, the purpose and design of these codes should be questioned. Indeed, implementation of alcohol marketing self-regulation in Brazil, the United Kingdom and the United States was likely to delay statutory regulation rather than to promote public health. Moreover, current self-regulation codes suffer from vague language that may allow the industry to circumvent the guidelines, loopholes that may obstruct the implementation of the codes, lax exposure guidelines that can allow excessive youth exposure even if properly followed, and a standard of review that may be inappropriate for protecting vulnerable populations. Greater public health benefits might be realized if legislative restrictions were applied to alcohol marketing, and strict statutory alcohol marketing regulations have been implemented and defended successfully in the European Union, with European courts declaring that restrictions on alcohol marketing are proportional to the benefits to public health. In contrast, attempts to restrict alcohol marketing activities in the United States have occurred through private litigation and have been unsuccessful. Nonetheless, repeated violations of industry codes may provide legislators with sufficient justification to pass new legislation and for such legislation to withstand constitutional review in the United States and elsewhere. © 2016 Society for the Study of Addiction.
Box codes of lengths 48 and 72
NASA Technical Reports Server (NTRS)
Solomon, G.; Jin, Y.
1993-01-01
A self-dual code of length 48, dimension 24, with Hamming distance essentially equal to 12 is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF(64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than for the first code constructed above. Finally, an (8,4;5) RS code over GF(512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, all the rest having weights greater than or equal to 16.
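The weight and distance properties quoted above are the kind of thing one checks by enumerating codewords. A sketch on a much smaller stand-in, the (7,4) Hamming code, since exhaustively enumerating a dimension-24 code means 2^24 words:

```python
from itertools import product

# Generator matrix of the (7,4) Hamming code, a small stand-in example
G = [
    [1, 0, 0, 0, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 1, 1, 0],
    [0, 0, 0, 1, 1, 1, 1],
]

def min_weight(G):
    """Minimum nonzero codeword weight, which for a linear code
    equals its minimum Hamming distance."""
    best = len(G[0])
    for msg in product([0, 1], repeat=len(G)):
        if not any(msg):
            continue  # skip the all-zero codeword
        # Codeword = message times G over GF(2)
        cw = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
        best = min(best, sum(cw))
    return best

print(min_weight(G))  # 3
```

The same enumeration, applied to a generator of the length-48 code, would verify that all nonzero weights are multiples of four and that only six codewords have weight eight.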
Error control techniques for satellite and space communications
NASA Technical Reports Server (NTRS)
Costello, Daniel J., Jr.
1989-01-01
Two aspects of the work for NASA are examined: the construction of multi-dimensional phase modulation trellis codes and a performance analysis of these codes. A complete list of all the best trellis codes for use with phase modulation is given. LxMPSK signal constellations are included for M = 4, 8, and 16 and L = 1, 2, 3, and 4. Spectral efficiencies range from 1 bit/channel symbol (equivalent to rate 1/2 coded QPSK) to 3.75 bits/channel symbol (equivalent to 15/16 coded 16-PSK). The parity check polynomials, rotational invariance properties, free distance, path multiplicities, and coding gains are given for all codes. These codes are considered to be the best candidates for implementation in a high speed decoder for satellite transmission. The design of a hardware decoder for one of these codes, viz., the 16-state 3x8-PSK code with free distance 4.0 and coding gain 3.75 dB, is discussed. An exhaustive simulation study of the multi-dimensional phase modulation trellis codes is also included. This study was motivated by the fact that coding gains quoted for almost all codes found in the literature are in fact only asymptotic coding gains, i.e., the coding gain at very high signal to noise ratios (SNRs) or very low BER. These asymptotic coding gains can be obtained directly from a knowledge of the free distance of the code. On the other hand, real coding gains at BERs in the range of 10(exp -2) to 10(exp -6), where these codes are most likely to operate in a concatenated system, must be determined by simulation.
Accumulate repeat accumulate codes
NASA Technical Reports Server (NTRS)
Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung
2004-01-01
In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate' (ARA) codes. This class of codes can be viewed as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes; thus belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where simply an accumulator is chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for maximum variable node degree 5, a minimum bit SNR as low as 0.08 dB from channel capacity for rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, the ARA threshold outperforms not only the RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate codes close to code rate 1 can be obtained with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph or protograph representation that allows for high speed decoder implementation.
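The accumulator referred to above is simply a running mod-2 sum (a rate-1 1/(1+D) convolutional code). A toy repeat-accumulate encoder, omitting the interleaver and the ARA precoding accumulator for brevity:

```python
def accumulate(bits):
    """1/(1+D) accumulator: each output bit is the running XOR of the input."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def repeat_accumulate(info, q=3):
    """Toy repeat-accumulate encoder: repeat each bit q times, then accumulate.
    A real ARA code would interleave the repeated bits and precode the
    information bits with a second accumulator."""
    repeated = [b for b in info for _ in range(q)]
    return accumulate(repeated)

print(repeat_accumulate([1, 0, 1], q=3))  # [1, 0, 1, 1, 1, 1, 0, 1, 0]
```

Because the accumulator has a one-bit state and the repetition is memoryless, encoding is linear time, which is the "simple and very fast encoder structure" the abstract refers to.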
Combinatorial neural codes from a mathematical coding theory perspective.
Curto, Carina; Itskov, Vladimir; Morrison, Katherine; Roth, Zachary; Walker, Judy L
2013-07-01
Shannon's seminal 1948 work gave rise to two distinct areas of research: information theory and mathematical coding theory. While information theory has had a strong influence on theoretical neuroscience, ideas from mathematical coding theory have received considerably less attention. Here we take a new look at combinatorial neural codes from a mathematical coding theory perspective, examining the error correction capabilities of familiar receptive field codes (RF codes). We find, perhaps surprisingly, that the high levels of redundancy present in these codes do not support accurate error correction, although the error-correcting performance of receptive field codes catches up to that of random comparison codes when a small tolerance to error is introduced. However, receptive field codes are good at reflecting distances between represented stimuli, while the random comparison codes are not. We suggest that a compromise in error-correcting capability may be a necessary price to pay for a neural code whose structure serves not only error correction, but must also reflect relationships between stimuli.
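The error-correction notion used above can be made concrete: decoding on a binary symmetric channel returns the codeword nearest in Hamming distance to the received word. A sketch with a hypothetical three-word code:

```python
def hamming(a, b):
    """Number of positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

def decode(received, codebook):
    """Maximum-likelihood decoding on a binary symmetric channel:
    return the codeword closest to the received word in Hamming distance."""
    return min(codebook, key=lambda cw: hamming(received, cw))

# Hypothetical three-word code; a single bit flip is pulled back to a codeword
codebook = [(0, 0, 0, 0, 0), (1, 1, 1, 0, 0), (0, 0, 1, 1, 1)]
print(decode((1, 1, 0, 0, 0), codebook))  # (1, 1, 1, 0, 0)
```

Measuring how often this nearest-codeword rule recovers the transmitted word, for codebooks built from receptive fields versus random codebooks, is the style of comparison the abstract describes.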
Joint design of QC-LDPC codes for coded cooperation system with joint iterative decoding
NASA Astrophysics Data System (ADS)
Zhang, Shunwai; Yang, Fengfan; Tang, Lei; Ejaz, Saqib; Luo, Lin; Maharaj, B. T.
2016-03-01
In this paper, we investigate the joint design of quasi-cyclic low-density parity-check (QC-LDPC) codes for a coded cooperation system with joint iterative decoding at the destination. First, QC-LDPC codes based on the base matrix and exponent matrix are introduced, and then we describe two types of girth-4 cycles in the QC-LDPC codes employed by the source and relay. In the equivalent parity-check matrix corresponding to the jointly designed QC-LDPC codes employed by the source and relay, all girth-4 cycles, of both type I and type II, are cancelled. Theoretical analysis and numerical simulations show that the jointly designed QC-LDPC coded cooperation effectively combines the cooperation gain and channel coding gain, and outperforms coded non-cooperation under the same conditions. Furthermore, the bit error rate performance of coded cooperation employing the jointly designed QC-LDPC codes is better than that of random LDPC codes and separately designed QC-LDPC codes over AWGN channels.
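In any binary parity-check matrix, a girth-4 cycle in the Tanner graph exists exactly when two rows both have 1s in the same two columns. A minimal check of that condition (on a plain binary H, not the QC exponent-matrix conditions the paper works with):

```python
def has_girth4(H):
    """True if the Tanner graph of parity-check matrix H contains a
    4-cycle, i.e., two rows share 1s in two or more columns."""
    rows = [set(j for j, v in enumerate(r) if v) for r in H]
    return any(len(rows[i] & rows[j]) >= 2
               for i in range(len(rows))
               for j in range(i + 1, len(rows)))

H_bad = [[1, 1, 0, 0],
         [1, 1, 1, 0]]            # rows share columns 0 and 1 -> 4-cycle
H_good = [[1, 1, 0, 0],
          [0, 1, 1, 0],
          [1, 0, 1, 1]]           # every row pair overlaps in at most one column
print(has_girth4(H_bad), has_girth4(H_good))  # True False
```

Cancelling such short cycles matters because belief propagation assumes locally tree-like graphs; 4-cycles make the passed messages correlated after a single iteration.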
Exploring the Content of Shared Mental Models in Project Teams
2005-09-30
FINAL REPORT Grant Title: EXPLORING THE CONTENT OF SHARED MENTAL MODELS IN PROJECT TEAMS Office of Naval Research Award Number: N000140210535... Research Laboratory, Attn: Code 5227, 4555 Overlook Ave., SW, Washington, DC ...satisfaction. 2.0 PROJECT SUMMARY No consensus among researchers studying shared cognition exists regarding the identification of what should be
ERIC Educational Resources Information Center
Fulmer, Gavin W.; Liang, Ling L.; Liu, Xiufeng
2014-01-01
This exploratory study applied a proposed force and motion learning progression (LP) to high-school and university students and to content involving both one- and two-dimensional force and motion situations. The Force Concept Inventory (FCI) was adapted, based on a previous content analysis and coding of the questions in the FCI in terms of the…
Accumulate-Repeat-Accumulate-Accumulate Codes
NASA Technical Reports Server (NTRS)
Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy
2007-01-01
Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbo-like codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and a low maximum variable node degree.
NASA Astrophysics Data System (ADS)
Karriem, Veronica V.
Nuclear reactor design incorporates the study and application of nuclear physics, nuclear thermal hydraulics, and nuclear safety. Theoretical models and numerical methods implemented in computer programs are utilized to analyze and design nuclear reactors. The focus of this PhD study is the development of an advanced high-fidelity multi-physics code system to perform reactor core analysis for design and safety evaluations of research TRIGA-type reactors. The fuel management and design code system TRIGSIMS was further developed to fulfill the function of a reactor design and analysis code system for the Pennsylvania State Breazeale Reactor (PSBR). TRIGSIMS, which is currently in use at the PSBR, is a fuel management tool which incorporates the depletion code ORIGEN-S (part of the SCALE system) and the Monte Carlo neutronics solver MCNP. The diffusion theory code ADMARC-H is used within TRIGSIMS to accelerate the MCNP calculations. It manages the fuel isotopic content data and stores them for future burnup calculations. The contribution of this work is the development of an improved version of TRIGSIMS, named TRIGSIMS-TH. TRIGSIMS-TH incorporates a thermal hydraulic module based on the advanced sub-channel code COBRA-TF (CTF). CTF provides the temperature feedback needed in the multi-physics calculations as well as the thermal hydraulic modeling capability for the reactor core. The temperature feedback model uses the CTF-provided local moderator and fuel temperatures in the cross-section modeling for the ADMARC-H and MCNP calculations. To perform efficient critical control rod calculations, a methodology for applying a control rod position was implemented in TRIGSIMS-TH, making this code system a modeling and design tool for future core loadings. The new TRIGSIMS-TH is a computer program that interlinks various other functional reactor analysis tools. It consists of MCNP5, ADMARC-H, ORIGEN-S, and CTF.
CTF was coupled with both MCNP and ADMARC-H to provide the heterogeneous temperature distribution throughout the core. Each of these codes is written in its own computer language, performs its own function, and outputs its own set of data. TRIGSIMS-TH provides effective data manipulation and transfer between the different codes. With the implementation of the feedback and control-rod-position modeling methodologies, the TRIGSIMS-TH calculations are more accurate and in better agreement with measured data. The PSBR is unique in many ways, and there are no "off-the-shelf" codes which can model this design in its entirety. In particular, the PSBR has an open core design, which is cooled by natural convection. Combining several codes into a unique system brings many challenges. It also requires substantial knowledge of both the operation and the core design of the PSBR. This reactor has been in operation for decades, and there is a fair amount of prior study and development in both PSBR thermal hydraulics and neutronics. Measured data are also available for various core loadings and can be used for validation activities. The previous studies and developments in PSBR modeling also serve as a guide for assessing the findings of the work herein. In order to incorporate new methods and codes into the existing TRIGSIMS, a re-evaluation of various components of the code was performed to assure the accuracy and efficiency of the existing CTF/MCNP5/ADMARC-H multi-physics coupling. A new set of ADMARC-H diffusion coefficients and cross sections was generated using the SERPENT code. This was needed because the previous data had not been generated with thermal hydraulic feedback, and the all-rods-out (ARO) position had been used as the critical rod position. The B4C data were re-evaluated for this update. The data exchange between ADMARC-H and MCNP5 was modified. The basic core model was given flexibility to allow for various changes within the core model, and this feature was implemented in TRIGSIMS-TH.
The PSBR core in the new code model can be expanded and changed. This allows the new code to be used as a modeling tool for the design and analysis of future core loadings.
Presseau, Justin; Ivers, Noah M; Newham, James J; Knittle, Keegan; Danko, Kristin J; Grimshaw, Jeremy M
2015-04-23
Methodological guidelines for intervention reporting emphasise describing intervention content in detail. Despite this, systematic reviews of quality improvement (QI) implementation interventions continue to be limited by a lack of clarity and detail regarding the intervention content being evaluated. We aimed to apply the recently developed Behaviour Change Techniques Taxonomy version 1 (BCTTv1) to trials of implementation interventions for managing diabetes to assess the capacity and utility of this taxonomy for characterising active ingredients. Three psychologists independently coded a random sample of 23 trials of healthcare system, provider- and/or patient-focused implementation interventions from a systematic review that included 142 such studies. Intervention content was coded using the BCTTv1, which describes 93 behaviour change techniques (BCTs) grouped within 16 categories. We supplemented the generic coding instructions within the BCTTv1 with decision rules and examples from this literature. Less than a quarter of possible BCTs within the BCTTv1 were identified. For implementation interventions targeting providers, the most commonly identified BCTs included the following: adding objects to the environment, prompts/cues, instruction on how to perform the behaviour, credible source, goal setting (outcome), feedback on outcome of behaviour, and social support (practical). For implementation interventions also targeting patients, the most commonly identified BCTs included the following: prompts/cues, instruction on how to perform the behaviour, information about health consequences, restructuring the social environment, adding objects to the environment, social support (practical), and goal setting (behaviour). The BCTTv1 mapped well onto implementation interventions directly targeting clinicians and patients and could also be used to examine the impact of system-level interventions on clinician and patient behaviour. 
The BCTTv1 can be used to characterise the active ingredients in trials of implementation interventions and provides specificity of content beyond what is given by broader intervention labels. Identification of BCTs may provide a more helpful means of accumulating knowledge on the content used in trials of implementation interventions, which may help to better inform replication efforts. In addition, prospective use of a behaviour change techniques taxonomy for developing and reporting intervention content would further aid in building a cumulative science of effective implementation interventions.
1975-09-01
This report assumes a familiarity with the GIFT and MAGIC computer codes. The EDIT-COMGEOM code is a FORTRAN computer code that converts the target description data used in the MAGIC computer code to the target description data that can be used in the GIFT computer code.
Impacts of Model Building Energy Codes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Athalye, Rahul A.; Sivaraman, Deepak; Elliott, Douglas B.
The U.S. Department of Energy (DOE) Building Energy Codes Program (BECP) periodically evaluates national and state-level impacts associated with energy codes in residential and commercial buildings. Pacific Northwest National Laboratory (PNNL), funded by DOE, conducted an assessment of the prospective impacts of national model building energy codes from 2010 through 2040. A previous PNNL study evaluated the impact of the Building Energy Codes Program; this study looked more broadly at overall code impacts. This report describes the methodology used for the assessment and presents the impacts in terms of energy savings, consumer cost savings, and reduced CO2 emissions at the state level and at aggregated levels. This analysis does not represent all potential savings from energy codes in the U.S. because it excludes several states which have codes that are fundamentally different from the national model energy codes or which do not have state-wide codes. Energy codes follow a three-phase cycle that starts with the development of a new model code, proceeds with the adoption of the new code by states and local jurisdictions, and finishes when buildings comply with the code. The development of new model code editions creates the potential for increased energy savings. After a new model code is adopted, potential savings are realized in the field when new buildings (or additions and alterations) are constructed to comply with the new code. Delayed adoption of a model code and incomplete compliance with the code's requirements erode potential savings. The contributions of all three phases are crucial to the overall impact of codes, and are considered in this assessment.
NASA Technical Reports Server (NTRS)
Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu
1997-01-01
The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on the phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels, as is shown in the second part of this paper.
Barriers to healthy eating among food pantry clients
USDA-ARS?s Scientific Manuscript database
This study explored perspectives on barriers of eating healthy among food pantry clients. Food pantry clients participated in focus groups/interviews. Qualitative data were coded and analyzed using content analyses and grounded theory approach. Themes were then identified. Quantitative data were ana...
Code of Federal Regulations, 2012 CFR
2012-10-01
... authorized native organization, that has zoning and building code jurisdiction over a particular area having... of each claim (including building and contents payments) exceeding $5,000, and with the cumulative... (building payments only) have been made under such coverage, with cumulative amount of such claims exceeding...
Code of Federal Regulations, 2014 CFR
2014-10-01
... authorized native organization, that has zoning and building code jurisdiction over a particular area having... of each claim (including building and contents payments) exceeding $5,000, and with the cumulative... (building payments only) have been made under such coverage, with cumulative amount of such claims exceeding...
NUSC Technical Publications Guide.
1985-05-01
Facility personnel, especially A. Castelluzzo, E. Deland, J. Gesel, and E. Szlosek (all of Code 4343). Reviewed and Approved: 14 July 1980 ...their technical content and format. Review and approve the manual outline, the review manuscript, and the final camera-reproducible copy. Conduct in
Yancey, Antronette K; Cole, Brian L; Brown, Rochelle; Williams, Jerome D; Hillier, Amy; Kline, Randolph S; Ashe, Marice; Grier, Sonya A; Backman, Desiree; McCarthy, William J
2009-01-01
Context: Commercial marketing is a critical but understudied element of the sociocultural environment influencing Americans' food and beverage preferences and purchases. This marketing also likely influences the utilization of goods and services related to physical activity and sedentary behavior. A growing literature documents the targeting of racial/ethnic and income groups in commercial advertisements in magazines, on billboards, and on television that may contribute to sociodemographic disparities in obesity and chronic disease risk and protective behaviors. This article examines whether African Americans, Latinos, and people living in low-income neighborhoods are disproportionately exposed to advertisements for high-calorie, low nutrient-dense foods and beverages and for sedentary entertainment and transportation and are relatively underexposed to advertising for nutritious foods and beverages and goods and services promoting physical activities. Methods: Outdoor advertising density and content were compared in zip code areas selected to offer contrasts by area income and ethnicity in four cities: Los Angeles, Austin, New York City, and Philadelphia. Findings: Large variations were observed in the amount, type, and value of advertising in the selected zip code areas. Living in an upper-income neighborhood, regardless of its residents' predominant ethnicity, is generally protective against exposure to most types of obesity-promoting outdoor advertising (food, fast food, sugary beverages, sedentary entertainment, and transportation). The density of advertising varied by zip code area race/ethnicity, with African American zip code areas having the highest advertising densities, Latino zip code areas having slightly lower densities, and white zip code areas having the lowest densities. Conclusions: The potential health and economic implications of differential exposure to obesity-related advertising are substantial.
Although substantive legal questions remain about the government's ability to regulate advertising, the success of limiting tobacco advertising offers lessons for reducing the contribution of marketing to the obesogenicity of urban environments. PMID:19298419
Genomic Sequence around Butterfly Wing Development Genes: Annotation and Comparative Analysis
Conceição, Inês C.; Long, Anthony D.; Gruber, Jonathan D.; Beldade, Patrícia
2011-01-01
Background Analysis of genomic sequence allows characterization of genome content and organization, and access beyond gene-coding regions for identification of functional elements. BAC libraries, where relatively large genomic regions are made readily available, are especially useful for species without a fully sequenced genome and can increase genomic coverage of phylogenetic and biological diversity. For example, no butterfly genome is yet available despite the unique genetic and biological properties of this group, such as diversified wing color patterns. The evolution and development of these patterns is being studied in a few target species, including Bicyclus anynana, where a whole-genome BAC library allows targeted access to large genomic regions. Methodology/Principal Findings We characterize ∼1.3 Mb of genomic sequence around 11 selected genes expressed in B. anynana developing wings. Extensive manual curation of in silico predictions, also making use of a large dataset of expressed genes for this species, identified repetitive elements and protein coding sequence, and highlighted an expansion of Alcohol dehydrogenase genes. Comparative analysis with orthologous regions of the lepidopteran reference genome allowed assessment of conservation of fine-scale synteny (with detection of new inversions and translocations) and of DNA sequence (with detection of high levels of conservation of non-coding regions around some, but not all, developmental genes). Conclusions The general properties and organization of the available B. anynana genomic sequence are similar to the lepidopteran reference, despite the more than 140 MY divergence. Our results lay the groundwork for further studies of new interesting findings in relation to both coding and non-coding sequence: 1) the Alcohol dehydrogenase expansion with higher similarity between the five tandemly-repeated B. anynana paralogs than with the corresponding B. 
mori orthologs, and 2) the high conservation of non-coding sequence around the genes wingless and Ecdysone receptor, both involved in multiple developmental processes including wing pattern formation. PMID:21909358
Shi, Jiaqin; Huang, Shunmou; Fu, Donghui; Yu, Jinyin; Wang, Xinfa; Hua, Wei; Liu, Shengyi; Liu, Guihua; Wang, Hanzhong
2013-01-01
Despite their ubiquity and functional importance, microsatellites have been largely ignored in comparative genomics, mostly due to the lack of genomic information. In the current study, microsatellite distribution was characterized and compared in the whole genomes and in both the coding and non-coding DNA sequences of the sequenced Brassica, Arabidopsis and other angiosperm species to investigate their evolutionary dynamics in plants. The variation in the microsatellite frequencies of these angiosperm species was much smaller than that in their microsatellite numbers and genome sizes, suggesting that microsatellite frequency may be relatively stable in plants. The microsatellite frequencies of these angiosperm species were significantly negatively correlated with both their genome sizes and transposable element contents. The pattern of microsatellite distribution may differ according to genomic region (such as coding versus non-coding sequences). The observed differences in many important microsatellite characteristics (especially the distribution with respect to motif length, type and repeat number) of these angiosperm species were generally accordant with their phylogenetic distance, which suggested that the evolutionary dynamics of microsatellite distribution may be generally consistent with plant divergence/evolution. Importantly, by comparing these microsatellite characteristics (especially the distribution with respect to motif type), the angiosperm species (aside from a few species) all clustered into two obviously different groups that were largely represented by monocots and dicots, suggesting a complex and generally dichotomous evolutionary pattern of microsatellite distribution in angiosperms.
Polyploidy may lead to a slight increase in microsatellite frequency in the coding sequences and a significant decrease in microsatellite frequency in the whole genome/non-coding sequences, but have little effect on the microsatellite distribution with respect to motif length, type and repeat number. Interestingly, several microsatellite characteristics seemed to be constant in plant evolution, which can be well explained by the general biological rules. PMID:23555856
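Microsatellite characterization of the kind described above starts from perfect-repeat detection. A sketch for dinucleotide motifs using a regex backreference; the motif length and the five-repeat threshold are illustrative choices, not the study's criteria:

```python
import re

def find_ssrs(seq, min_repeats=5):
    """Find perfect dinucleotide microsatellites (SSRs): a 2-bp motif
    repeated at least min_repeats times. Returns (start, motif, n_repeats)."""
    pattern = r"([ACGT]{2})\1{%d,}" % (min_repeats - 1)
    return [(m.start(), m.group(1), len(m.group(0)) // 2)
            for m in re.finditer(pattern, seq)]

# Hypothetical sequence: a (CA)7 repeat flanked by non-repetitive DNA
seq = "GGAT" + "CA" * 7 + "TTGC" + "AG" * 3
print(find_ssrs(seq))  # [(4, 'CA', 7)] -- the (AG)3 run is below threshold
```

Frequencies like those compared in the abstract are then obtained by normalizing such counts by the length of the genomic region (whole genome, coding, or non-coding) being scanned.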
Nedelcu, Aurora M.; Lee, Robert W.; Lemieux, Claude; Gray, Michael W.; Burger, Gertraud
2000-01-01
Two distinct mitochondrial genome types have been described among the green algal lineages investigated to date: a reduced-derived, Chlamydomonas-like type and an ancestral, Prototheca-like type. To determine if this unexpected dichotomy is real or is due to insufficient or biased sampling, and to define trends in the evolution of the green algal mitochondrial genome, we sequenced and analyzed the mitochondrial DNA (mtDNA) of Scenedesmus obliquus. This genome is 42,919 bp in size and encodes 42 conserved genes (i.e., large and small subunit rRNA genes, 27 tRNA and 13 respiratory protein-coding genes), four additional free-standing open reading frames with no known homologs, and an intronic reading frame with endonuclease/maturase similarity. No 5S rRNA or ribosomal protein-coding genes have been identified in Scenedesmus mtDNA. The standard protein-coding genes feature a deviant genetic code characterized by the use of UAG (normally a stop codon) to specify leucine, and the unprecedented use of UCA (normally a serine codon) as a signal for termination of translation. The mitochondrial genome of Scenedesmus combines features of both green algal mitochondrial genome types: the presence of a more complex set of protein-coding and tRNA genes is shared with the ancestral type, whereas the lack of 5S rRNA and ribosomal protein-coding genes as well as the presence of fragmented and scrambled rRNA genes are shared with the reduced-derived type of mitochondrial genome organization. Furthermore, the gene content and the fragmentation pattern of the rRNA genes suggest that this genome represents an intermediate stage in the evolutionary process of mitochondrial genome streamlining in green algae. [The sequence data described in this paper have been submitted to the GenBank data library under accession no. AF204057.] PMID:10854413
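The two codon reassignments reported above (UAG read as leucine, UCA read as stop) can be expressed as a modified translation table. The table below is a deliberately tiny subset covering only the codons in the toy mRNA, not a full genetic code:

```python
# Partial standard table ('*' = stop); only the codons used below are included.
STANDARD = {"UAG": "*", "UCA": "S", "UUU": "F", "AUG": "M", "CUU": "L"}
# Scenedesmus obliquus mtDNA reassignments from the abstract:
SCENEDESMUS_MT = dict(STANDARD, UAG="L", UCA="*")

def translate(mrna, table):
    """Translate codon by codon, stopping at the table's stop codon(s)."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        aa = table[mrna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

mrna = "AUGUAGUUUUCACUU"
print(translate(mrna, STANDARD))        # 'M'   -- UAG read as stop
print(translate(mrna, SCENEDESMUS_MT))  # 'MLF' -- UAG -> Leu, UCA -> stop
```

The same mRNA yields different proteins under the two tables, which is exactly why deviant codes must be recognized before annotating protein-coding genes in such genomes.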
User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.
1982-01-01
This User's Manual contains a complete description of the computer code known as the AXISYMMETRIC DIFFUSER DUCT (ADD) code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculations with experimental flows. The input/output and general use of the code are described in the first volume. The second volume contains a detailed description of the code, including the global structure of the code, a list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code, which generates coordinate systems for arbitrary axisymmetric ducts.
Patel, Mehul D; Rose, Kathryn M; Owens, Cindy R; Bang, Heejung; Kaufman, Jay S
2012-03-01
Occupational data are a common source of workplace exposure and socioeconomic information in epidemiologic research. We compared the performance of two occupation coding methods, an automated software and a manual coder, using occupation and industry titles from U.S. historical records. We collected parental occupational data from 1920-40s birth certificates, Census records, and city directories on 3,135 deceased individuals in the Atherosclerosis Risk in Communities (ARIC) study. Unique occupation-industry narratives were assigned codes by a manual coder and the Standardized Occupation and Industry Coding software program. We calculated agreement between coding methods of classification into major Census occupational groups. Automated coding software assigned codes to 71% of occupations and 76% of industries. Of this subset coded by software, 73% of occupation codes and 69% of industry codes matched between automated and manual coding. For major occupational groups, agreement improved to 89% (kappa = 0.86). Automated occupational coding is a cost-efficient alternative to manual coding. However, some manual coding is required to code incomplete information. We found substantial variability between coders in the assignment of occupations although not as large for major groups.
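Inter-coder agreement of the kind reported above (percent agreement, and Cohen's kappa for major groups) can be computed as follows. The category labels in the test are hypothetical stand-ins, not actual Census group codes.

```python
# Sketch: percent agreement and Cohen's kappa between two coders, as used to
# compare automated vs. manual occupation coding.
from collections import Counter

def agreement_and_kappa(codes_a, codes_b):
    n = len(codes_a)
    observed = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected chance agreement from each coder's marginal distribution
    ca, cb = Counter(codes_a), Counter(codes_b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return observed, (observed - expected) / (1 - expected)
```

Perfect agreement gives kappa = 1.0, while agreement no better than chance gives kappa = 0.0, matching the paper's use of kappa to discount chance matches among major occupational groups.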
Convolutional coding techniques for data protection
NASA Technical Reports Server (NTRS)
Massey, J. L.
1975-01-01
Results of research on the use of convolutional codes in data communications are presented. Convolutional coding fundamentals are discussed along with modulation and coding interaction. Concatenated coding systems and data compression with convolutional codes are described.
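As a minimal illustration of the convolutional coding fundamentals the report discusses, here is a textbook rate-1/2 encoder with constraint length 3 and octal generators (7, 5). This is a generic classroom example, not a code taken from the report.

```python
# Sketch: rate-1/2 convolutional encoder, constraint length 3, generators
# (7, 5) in octal. Each input bit produces two output bits, one per generator.
def conv_encode(bits, gens=(0b111, 0b101)):
    state = 0  # shift-register contents
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111  # keep a 3-bit window
        for g in gens:
            out.append(bin(state & g).count("1") % 2)  # parity of tapped bits
    return out
```

Encoding [1, 0, 1] yields [1, 1, 1, 0, 0, 0]: two coded bits per input bit, i.e. rate 1/2.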
Alternative Fuels Data Center: Codes and Standards Resources
Codes and standards resources, including vehicle and infrastructure codes and standards charts for biodiesel, electric, ethanol, natural gas, and propane vehicles.
The study on dynamic cadastral coding rules based on kinship relationship
NASA Astrophysics Data System (ADS)
Xu, Huan; Liu, Nan; Liu, Renyi; Lu, Jingfeng
2007-06-01
Cadastral coding rules are an important supplement to the existing national and local standard specifications for building cadastral databases. After analyzing the course of cadastral change, especially parcel change, with the method of object-oriented analysis, a set of dynamic cadastral coding rules based on kinship relationships corresponding to cadastral change is put forward, and a coding format composed of a street code, block code, father parcel code, child parcel code and grandchild parcel code is worked out within the county administrative area. The coding rules have been applied in the development of an urban cadastral information system called "ReGIS", which is not only able to figure out the cadastral code automatically according to both the type of parcel change and the coding rules, but is also capable of checking whether the code is spatiotemporally unique before the parcel is stored in the database. The system has been used in several cities of Zhejiang Province and has received a favorable response, which verifies the feasibility and effectiveness of the coding rules to some extent.
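The five-part code format described above can be sketched as a compose/parse pair. The field widths below are hypothetical; the actual specification defines its own lengths.

```python
# Sketch of the hierarchical cadastral code format: street, block, father
# parcel, child parcel, grandchild parcel. Widths are illustrative only.
FIELDS = [("street", 3), ("block", 3), ("father", 4), ("child", 4), ("grandchild", 4)]

def compose(**parts):
    """Zero-pad each field to its width and concatenate in order."""
    return "".join(str(parts.get(name, 0)).zfill(width) for name, width in FIELDS)

def parse(code):
    """Split a fixed-width code string back into its named fields."""
    out, pos = {}, 0
    for name, width in FIELDS:
        out[name] = int(code[pos:pos + width])
        pos += width
    return out
```

A fixed-width scheme like this makes the spatiotemporal-uniqueness check mentioned above a simple string comparison against codes already in the database.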
NASA Technical Reports Server (NTRS)
Lin, Shu; Rhee, Dojun; Rajpal, Sandeep
1993-01-01
This report presents a low-complexity and high performance concatenated coding scheme for high-speed satellite communications. In this proposed scheme, the NASA Standard Reed-Solomon (RS) code over GF(2(exp 8)) is used as the outer code and the second-order Reed-Muller (RM) code of Hamming distance 8 is used as the inner code. The RM inner code has a very simple trellis structure and is decoded with the soft-decision Viterbi decoding algorithm. It is shown that the proposed concatenated coding scheme achieves an error performance which is comparable to that of the NASA TDRS concatenated coding scheme in which the NASA Standard rate-1/2 convolutional code of constraint length 7 and d sub free = 10 is used as the inner code. However, the proposed RM inner code has much smaller decoding complexity, less decoding delay, and much higher decoding speed. Consequently, the proposed concatenated coding scheme is suitable for reliable high-speed satellite communications, and it may be considered as an alternate coding scheme for the NASA TDRS system.
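The outer/inner structure of concatenated coding can be illustrated with deliberately weak toy components: an outer single-parity check and an inner repetition-3 code. This sketch shows only the general concatenation idea and is unrelated to the much stronger RS/RM scheme proposed in the report.

```python
# Toy concatenated code: outer even-parity check, inner repetition-3.
def outer_encode(msg):
    return msg + [sum(msg) % 2]  # append even-parity bit

def inner_encode(bits):
    return [b for b in bits for _ in range(3)]  # repeat each bit 3 times

def inner_decode(chans):
    # Majority vote over each group of three received bits
    return [int(sum(chans[i:i + 3]) >= 2) for i in range(0, len(chans), 3)]

def concatenated_encode(msg):
    return inner_encode(outer_encode(msg))
```

With a single channel bit flip, the inner majority vote recovers the outer codeword, whose parity bit then checks clean; real schemes replace these toys with strong inner/outer codes but keep the same layering.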
Construction of self-dual codes in the Rosenbloom-Tsfasman metric
NASA Astrophysics Data System (ADS)
Krisnawati, Vira Hari; Nisa, Anzi Lina Ukhtin
2017-12-01
Linear codes are among the most basic and useful objects in coding theory. Generally, a linear code is a code over a finite field equipped with the Hamming metric. Among the most interesting families of codes, the family of self-dual codes is particularly important because it includes some of the best known error-correcting codes. The Hamming metric has been generalized to the Rosenbloom-Tsfasman metric (RT-metric). The inner product in the RT-metric differs from the Euclidean inner product used to define duality in the Hamming metric, and most codes that are self-dual in the Hamming metric are not self-dual in the RT-metric. The generator matrix is central to constructing a code because it contains a basis of the code. In this paper, we therefore give some theorems and methods for constructing self-dual codes in the RT-metric by considering properties of the inner product and the generator matrix, and we illustrate every kind of construction with examples.
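A minimal sketch of the notions involved, assuming the commonly used "reversed" RT inner product over GF(2); conventions for the RT inner product vary in the literature, so this is an illustration rather than the paper's exact definition.

```python
# Sketch: RT weight and a brute-force self-duality check over GF(2) under
# the reversed inner product <x, y> = sum_i x_i * y_{n-1-i} (mod 2).
from itertools import product

def rt_weight(x):
    """RT weight: 1-based position of the last nonzero coordinate (0 if none)."""
    return max((i + 1 for i, xi in enumerate(x) if xi), default=0)

def rt_inner(x, y):
    n = len(x)
    return sum(x[i] * y[n - 1 - i] for i in range(n)) % 2

def dual(code, n):
    """Brute-force dual: all vectors orthogonal to every codeword (tiny n only)."""
    return {v for v in product((0, 1), repeat=n)
            if all(rt_inner(v, c) == 0 for c in code)}
```

For the tiny code C = {00, 11}, the dual under this inner product is C itself, so C is self-dual in the RT sense, and the RT weight records the last nonzero position rather than the number of nonzeros.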
Two Perspectives on the Origin of the Standard Genetic Code
NASA Astrophysics Data System (ADS)
Sengupta, Supratim; Aggarwal, Neha; Bandhu, Ashutosh Vishwa
2014-12-01
The origin of a genetic code made it possible to create ordered sequences of amino acids. In this article we provide two perspectives on code origin by carrying out simulations of code-sequence coevolution in finite populations with the aim of examining how the standard genetic code may have evolved from more primitive code(s) encoding a small number of amino acids. We determine the efficacy of the physico-chemical hypothesis of code origin in the absence and presence of horizontal gene transfer (HGT) by allowing a diverse collection of code-sequence sets to compete with each other. We find that in the absence of horizontal gene transfer, natural selection between competing codes distinguished by differences in the degree of physico-chemical optimization is unable to explain the structure of the standard genetic code. However, for certain probabilities of the horizontal transfer events, a universal code emerges having a structure that is consistent with the standard genetic code.
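One standard way to quantify a code's "degree of physico-chemical optimization" is its mutational error load: the mean squared change in some amino acid property over all single-nucleotide codon substitutions that stay within the code. The sketch below uses an entirely hypothetical two-codon code and made-up property values; it illustrates the scoring idea only, not the paper's coevolution simulations.

```python
# Sketch: mutational error load of a (toy) genetic code with respect to an
# amino acid property table. Codes and property values are hypothetical.
from itertools import product

BASES = "ACGU"

def error_load(code, prop):
    """Mean squared property change over single-base codon substitutions."""
    total, count = 0.0, 0
    for codon, aa in code.items():
        for pos, b in product(range(3), BASES):
            if b == codon[pos]:
                continue
            mutant = codon[:pos] + b + codon[pos + 1:]
            if mutant in code:  # only mutations landing on an assigned codon
                total += (prop[aa] - prop[code[mutant]]) ** 2
                count += 1
    return total / count
```

Lower load means single-base mutations tend to swap in physico-chemically similar amino acids, which is the optimization criterion that competing codes in such simulations are scored on.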
NASA Astrophysics Data System (ADS)
Jellali, Nabiha; Najjar, Monia; Ferchichi, Moez; Rezig, Houria
2017-07-01
In this paper, a new two-dimensional spectral/spatial code family, named two-dimensional dynamic cyclic shift (2D-DCS) codes, is introduced. The 2D-DCS codes are derived from the dynamic cyclic shift code for spectral and spatial coding. The proposed system can fully eliminate multiple access interference (MAI) by using the MAI cancellation property. The effects of shot noise, phase-induced intensity noise and thermal noise are considered in analyzing code performance. In comparison with existing two-dimensional (2D) codes, such as 2D perfect difference (2D-PD), 2D Extended Enhanced Double Weight (2D-Extended-EDW) and 2D hybrid (2D-FCC/MDW) codes, the numerical results show that the proposed codes have the best performance. By keeping the same code length and increasing the spatial code, the performance of the 2D-DCS system is enhanced: it provides higher data rates while using lower transmitted power and a smaller spectral width.
Investigation of water droplet trajectories within the NASA icing research tunnel
NASA Technical Reports Server (NTRS)
Reehorst, Andrew; Ibrahim, Mounir
1995-01-01
Water droplet trajectories within the NASA Lewis Research Center's Icing Research Tunnel (IRT) were studied through computer analysis. Of interest was the influence of the wind tunnel contraction and wind tunnel model blockage on the water droplet trajectories. The computer analysis was carried out with a program package consisting of a three-dimensional potential panel code and a three-dimensional droplet trajectory code. The wind tunnel contraction was found to influence the droplet size distribution and liquid water content distribution across the test section from that at the inlet. The wind tunnel walls were found to have negligible influence upon the impingement of water droplets upon a wing model.
The Model 9977 Radioactive Material Packaging Primer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abramczyk, G.
2015-10-09
The Model 9977 Packaging is a single containment drum style radioactive material (RAM) shipping container designed, tested and analyzed to meet the performance requirements of Title 10 of the Code of Federal Regulations Part 71. A radioactive material shipping package, in combination with its contents, must perform three functions (please note that the performance criteria specified in the Code of Federal Regulations have alternate limits for normal operations and after accident conditions): Containment, the package must "contain" the radioactive material within it; Shielding, the packaging must limit its users and the public to radiation doses within specified limits; and Subcriticality, the package must maintain its radioactive material as subcritical.
The complete mitochondrial genome of Pholis nebulosus (Perciformes: Pholidae).
Wang, Zhongquan; Qin, Kaili; Liu, Jingxi; Song, Na; Han, Zhiqiang; Gao, Tianxiang
2016-11-01
In this study, the complete mitochondrial genome (mitogenome) sequence of Pholis nebulosus has been determined by long polymerase chain reaction and primer-walking methods. The mitogenome is a circular molecule of 16 524 bp in length, including the typical structure of 13 protein-coding genes, 2 ribosomal RNA genes, 22 transfer RNA genes and 2 non-coding regions (L-strand replication origin and control region), the gene contents of which are identical to those observed in most bony fishes. Within the control region, we identified the termination-associated sequence domain (TAS), and the conserved sequence block domain (CSB-F, CSB-E, CSB-D, CSB-C, CSB-B, CSB-A, CSB-1, CSB-2, CSB-3).
Chen, Chien P; Braunstein, Steve; Mourad, Michelle; Hsu, I-Chow J; Haas-Kogan, Daphne; Roach, Mack; Fogh, Shannon E
2015-01-01
Accurate International Classification of Diseases (ICD) diagnosis coding is critical for patient care, billing purposes, and research endeavors. In this single-institution study, we evaluated our baseline ICD-9 (9th revision) diagnosis coding accuracy, identified the most common errors contributing to inaccurate coding, and implemented a multimodality strategy to improve radiation oncology coding. We prospectively studied ICD-9 coding accuracy in our radiation therapy-specific electronic medical record system. Baseline ICD-9 coding accuracy was obtained from chart review targeting ICD-9 coding accuracy of all patients treated at our institution between March and June of 2010. To improve performance, an educational session highlighted common coding errors, and a user-friendly software tool, RadOnc ICD Search, version 1.0, for coding radiation oncology-specific diagnoses was implemented. We then prospectively analyzed ICD-9 coding accuracy for all patients treated from July 2010 to June 2011, with the goal of maintaining 80% or higher coding accuracy. Data on coding accuracy were analyzed and fed back monthly to individual providers. Baseline coding accuracy for physicians was 463 of 661 (70%) cases. Only 46% of physicians had coding accuracy above 80%. The most common errors involved metastatic cases, whereby primary or secondary site ICD-9 codes were either incorrect or missing, and special procedures such as stereotactic radiosurgery cases. After implementing our project, overall coding accuracy rose to 92% (range, 86%-96%). The median accuracy for all physicians was 93% (range, 77%-100%), with only 1 attending having accuracy below 80%. Incorrect primary and secondary ICD-9 codes in metastatic cases showed the most significant improvement (10% vs 2% after intervention). Identifying common coding errors and implementing both education and systems changes led to significantly improved coding accuracy.
This quality assurance project highlights the potential problem of ICD-9 coding accuracy by physicians and offers an approach to effectively address this shortcoming. Copyright © 2015. Published by Elsevier Inc.
Spread Spectrum Visual Sensor Network Resource Management Using an End-to-End Cross-Layer Design
2011-02-01
In this work, rate-compatible punctured convolutional (RCPC) codes are used for channel coding [11]. Using RCPC codes allows the use of Viterbi decoding, and the cross-layer design assigns a source coding rate, a channel coding rate, and a power level to all nodes in the network. [11] J. Hagenauer, "Rate-compatible punctured convolutional codes (RCPC codes) and their applications," IEEE Trans. Commun., vol. 36, no. 4, pp. 389…
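Puncturing itself is simple to sketch: a rate-1/2 mother code emits 4 coded bits per 2 input bits, and deleting 1 of every 4 according to a puncturing pattern yields rate 2/3. The pattern below is a generic textbook choice, not necessarily the one used in the cited work.

```python
# Sketch: puncturing a rate-1/2 mother code's output up to rate 2/3 by
# dropping every 4th coded bit (pattern period 4, keep 3 of 4).
PATTERN = [1, 1, 1, 0]

def puncture(coded_bits, pattern=PATTERN):
    """Keep only the coded bits where the repeating pattern has a 1."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]
```

Rate compatibility in RCPC codes comes from nesting such patterns: every bit kept at a higher rate is also kept at all lower rates, so one Viterbi decoder serves the whole rate family.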
New quantum codes derived from a family of antiprimitive BCH codes
NASA Astrophysics Data System (ADS)
Liu, Yang; Li, Ruihu; Lü, Liangdong; Guo, Luobin
The Bose-Chaudhuri-Hocquenghem (BCH) codes have been studied for more than 57 years and have found wide application in classical communication systems and quantum information theory. In this paper, we study the construction of quantum codes from a family of q2-ary BCH codes with length n=q2m+1 (also called antiprimitive BCH codes in the literature), where q≥4 is a power of 2 and m≥2. By a detailed analysis of some useful properties of q2-ary cyclotomic cosets modulo n, Hermitian dual-containing conditions for a family of non-narrow-sense antiprimitive BCH codes are presented, which are similar to those of q2-ary primitive BCH codes. Consequently, via the Hermitian Construction, a family of new quantum codes can be derived from these dual-containing BCH codes. Some of these new antiprimitive quantum BCH codes are comparable with those derived from primitive BCH codes.
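The q²-ary cyclotomic cosets modulo n that underpin the dual-containing analysis can be enumerated directly. The parameters in the test (q = 4, m = 2, so q² = 16 and n = q^(2m) + 1 = 257) follow the stated length formula; the enumeration itself is standard bookkeeping, not the paper's conditions.

```python
# Sketch: cyclotomic cosets modulo n under multiplication by q2 (= q squared).
def cyclotomic_coset(s, q2, n):
    """Orbit of s under repeated multiplication by q2 modulo n."""
    coset, x = set(), s % n
    while x not in coset:
        coset.add(x)
        x = (x * q2) % n
    return frozenset(coset)

def all_cosets(q2, n):
    """Partition {0, ..., n-1} into cyclotomic cosets."""
    seen, cosets = set(), []
    for s in range(n):
        if s not in seen:
            c = cyclotomic_coset(s, q2, n)
            cosets.append(c)
            seen |= c
    return cosets
```

For q² = 16 and n = 257, the multiplicative order of 16 mod 257 is 4, so the 256 nonzero residues split into 64 cosets of size 4 plus the singleton {0}; dual-containing conditions are then phrased in terms of which cosets the code's defining set may contain.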
Constructions for finite-state codes
NASA Technical Reports Server (NTRS)
Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.
1987-01-01
A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a d sub free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastropic error propagation (CEP) for FS codes is also investigated.
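The coset-partition idea behind the construction can be shown on a toy binary example: partition the [4, 3] even-weight code into cosets of the subcode spanned by the all-ones word. This is a generic illustration of coset partitioning, not one of the paper's actual FS code constructions.

```python
# Sketch: partitioning a linear block code into cosets of one of its subcodes.
from itertools import product

def span(generators, n):
    """All GF(2) linear combinations of the generator rows."""
    vecs = set()
    for coeffs in product((0, 1), repeat=len(generators)):
        vecs.add(tuple(sum(c * g[i] for c, g in zip(coeffs, generators)) % 2
                       for i in range(n)))
    return vecs

def coset_partition(code, subcode):
    """Group codewords into cosets c + subcode."""
    def add(a, b):
        return tuple((x + y) % 2 for x, y in zip(a, b))
    cosets, covered = [], set()
    for c in sorted(code):
        if c not in covered:
            coset = {add(c, s) for s in subcode}
            cosets.append(coset)
            covered |= coset
    return cosets
```

In an FS-style construction, the encoder's state would select which coset the next output symbol is drawn from, so the partition structure directly shapes the achievable free distance.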
NASA Astrophysics Data System (ADS)
Kong, Gyuyeol; Choi, Sooyong
2017-09-01
An enhanced 2/3 four-ary modulation code using soft-decision Viterbi decoding is proposed for four-level holographic data storage systems. While previous four-ary modulation codes focus on preventing maximum two-dimensional intersymbol interference patterns, the proposed four-ary modulation code aims at maximizing the coding gain for better bit error rate performance. To achieve significant coding gains from the four-ary modulation codes, we design a new 2/3 four-ary modulation code that enlarges the free distance on the trellis, found through extensive simulation. The free distance of the proposed four-ary modulation code is extended from 1.21 to 2.04 compared with that of the conventional four-ary modulation code. The simulation results show that the proposed four-ary modulation code has more than 1 dB gain compared with the conventional four-ary modulation code.
Aiello, Francesco A; Judelson, Dejah R; Messina, Louis M; Indes, Jeffrey; FitzGerald, Gordon; Doucet, Danielle R; Simons, Jessica P; Schanzer, Andres
2016-08-01
Vascular surgery procedural reimbursement depends on accurate procedural coding and documentation. Despite the critical importance of correct coding, there has been a paucity of research focused on the effect of direct physician involvement. We hypothesize that direct physician involvement in procedural coding will lead to improved coding accuracy, increased work relative value unit (wRVU) assignment, and increased physician reimbursement. This prospective observational cohort study evaluated procedural coding accuracy of fistulograms at an academic medical institution (January-June 2014). All fistulograms were coded by institutional coders (traditional coding) and by a single vascular surgeon whose codes were verified by two institution coders (multidisciplinary coding). The coding methods were compared, and differences were translated into revenue and wRVUs using the Medicare Physician Fee Schedule. Comparison between traditional and multidisciplinary coding was performed for three discrete study periods: baseline (period 1), after a coding education session for physicians and coders (period 2), and after a coding education session with implementation of an operative dictation template (period 3). The accuracy of surgeon operative dictations during each study period was also assessed. An external validation at a second academic institution was performed during period 1 to assess and compare coding accuracy. During period 1, traditional coding resulted in a 4.4% (P = .004) loss in reimbursement and a 5.4% (P = .01) loss in wRVUs compared with multidisciplinary coding. During period 2, no significant difference was found between traditional and multidisciplinary coding in reimbursement (1.3% loss; P = .24) or wRVUs (1.8% loss; P = .20). During period 3, traditional coding yielded a higher overall reimbursement (1.3% gain; P = .26) than multidisciplinary coding. 
This increase, however, was due to errors by institution coders, with six inappropriately used codes resulting in a higher overall reimbursement that was subsequently corrected. Assessment of physician documentation showed improvement, with decreased documentation errors at each period (11% vs 3.1% vs 0.6%; P = .02). Overall, between period 1 and period 3, multidisciplinary coding resulted in a significant increase in additional reimbursement ($17.63 per procedure; P = .004) and wRVUs (0.50 per procedure; P = .01). External validation at a second academic institution was performed to assess coding accuracy during period 1. Similar to institution 1, traditional coding revealed an 11% loss in reimbursement ($13,178 vs $14,630; P = .007) and a 12% loss in wRVU (293 vs 329; P = .01) compared with multidisciplinary coding. Physician involvement in the coding of endovascular procedures leads to improved procedural coding accuracy, increased wRVU assignments, and increased physician reimbursement. Copyright © 2016 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.
Gnjidic, Danijela; Pearson, Sallie-Anne; Hilmer, Sarah N; Basilakis, Jim; Schaffer, Andrea L; Blyth, Fiona M; Banks, Emily
2015-03-30
Increasingly, automated methods are being used to code free-text medication data, but evidence on the validity of these methods is limited. To examine the accuracy of automated coding of previously keyed in free-text medication data compared with manual coding of original handwritten free-text responses (the 'gold standard'). A random sample of 500 participants (475 with and 25 without medication data in the free-text box) enrolled in the 45 and Up Study was selected. Manual coding involved medication experts keying in free-text responses and coding using Anatomical Therapeutic Chemical (ATC) codes (i.e. chemical substance 7-digit level; chemical subgroup 5-digit; pharmacological subgroup 4-digit; therapeutic subgroup 3-digit). Using keyed-in free-text responses entered by non-experts, the automated approach coded entries using the Australian Medicines Terminology database and assigned corresponding ATC codes. Based on manual coding, 1377 free-text entries were recorded and, of these, 1282 medications were coded to ATCs manually. The sensitivity of automated coding compared with manual coding was 79% (n = 1014) for entries coded at the exact ATC level, and 81.6% (n = 1046), 83.0% (n = 1064) and 83.8% (n = 1074) at the 5, 4 and 3-digit ATC levels, respectively. The sensitivity of automated coding for blank responses was 100% compared with manual coding. Sensitivity of automated coding was highest for prescription medications and lowest for vitamins and supplements, compared with the manual approach. Positive predictive values for automated coding were above 95% for 34 of the 38 individual prescription medications examined. Automated coding for free-text prescription medication data shows very high to excellent sensitivity and positive predictive values, indicating that automated methods can potentially be useful for large-scale, medication-related research.
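Matching codes at progressively coarser ATC levels, as in the sensitivity figures above, amounts to prefix truncation. A sketch, with hypothetical code pairs in the test; "digit level" here means character-prefix length in the ATC hierarchy (7 characters = exact chemical substance, then 5, 4 and 3).

```python
# Sketch: sensitivity of automated vs. manual ATC assignment at a given
# hierarchy level, computed by comparing code prefixes.
def sensitivity_at_level(auto_codes, manual_codes, level):
    """Fraction of pairs whose first `level` characters agree."""
    pairs = list(zip(auto_codes, manual_codes))
    hits = sum(a[:level] == m[:level] for a, m in pairs)
    return hits / len(pairs)
```

Because ATC is hierarchical, sensitivity can only rise (or stay flat) as the level coarsens, which matches the monotone 79% to 83.8% progression reported above.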
Investigation of Near Shannon Limit Coding Schemes
NASA Technical Reports Server (NTRS)
Kwatra, S. C.; Kim, J.; Mo, Fan
1999-01-01
Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes. Both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction, which discusses fundamental knowledge about coding, block coding and convolutional coding. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors such as the generator polynomial, the interleaver and the puncturing pattern are examined. A criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail, and different puncturing patterns are compared for each high rate. For most of the high rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system and the calculation of extrinsic values are discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhengqiu, C.; Penaflor, C.; Kuehl, J.V.
2006-06-01
The magnoliids represent the largest basal angiosperm clade with four orders, 19 families and 8,500 species. Although several recent angiosperm molecular phylogenies have supported the monophyly of magnoliids and suggested relationships among the orders, the limited number of genes examined resulted in only weak support, and these issues remain controversial. Furthermore, considerable incongruence has resulted in phylogenies supporting three different sets of relationships among magnoliids and the two large angiosperm clades, monocots and eudicots. This is one of the most important remaining issues concerning relationships among basal angiosperms. We sequenced the chloroplast genomes of three magnoliids, Drimys (Canellales), Liriodendron (Magnoliales), and Piper (Piperales), and used these data in combination with 32 other completed angiosperm chloroplast genomes to assess phylogenetic relationships among magnoliids. The Drimys and Piper chloroplast genomes are nearly identical in size at 160,606 and 160,624 bp, respectively. The genomes include a pair of inverted repeats of 26,649 bp (Drimys) and 27,039 bp (Piper), separated by a small single copy region of 18,621 bp (Drimys) and 18,878 bp (Piper) and a large single copy region of 88,685 bp (Drimys) and 87,666 bp (Piper). The gene order of both taxa is nearly identical to many other unrearranged angiosperm chloroplast genomes, including Calycanthus, the other published magnoliid genome. Comparisons of angiosperm chloroplast genomes indicate that GC content is not uniformly distributed across the genome. Overall GC content ranges from 34-39%, and coding regions have a substantially higher GC content than non-coding regions (both intergenic spacers and introns). Among protein-coding genes, GC content varies by codon position with 1st codon > 2nd codon > 3rd codon, and it varies by functional group, with photosynthetic genes having the highest percentage and NADH genes the lowest.
Across the genome, GC content is highest in the inverted repeat due to the presence of rRNA genes and lowest in the small single copy region where most NADH genes are located. Phylogenetic analyses using maximum parsimony and maximum likelihood methods were performed on DNA sequences of 61 protein-coding genes. Trees from both analyses provided strong support for the monophyly of magnoliids, and two strongly supported groups were identified, the Canellales/Piperales and the Laurales/Magnoliales. The phylogenies also provided moderate to strong support for the basal position of Amborella and a sister relationship of magnoliids to a clade that includes monocots and eudicots. The complete sequences of three magnoliid chloroplast genomes provide new data from the largest basal angiosperm clade. Evolutionary comparisons of these new genome sequences, combined with other published angiosperm genomes, confirm that GC content is unevenly distributed across the genome by location, codon position, and functional group. Furthermore, phylogenetic analyses provide the strongest support so far for the hypothesis that the magnoliids are sister to a large clade that includes both monocots and eudicots.
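The quantities compared above, overall GC content and GC content by codon position, reduce to simple counting on an in-frame coding sequence:

```python
# Sketch: GC content overall and at each codon position of an in-frame CDS.
def gc_fraction(seq):
    """Fraction of G and C bases in a nucleotide string."""
    seq = seq.upper()
    return sum(seq.count(b) for b in "GC") / len(seq)

def gc_by_codon_position(cds):
    """GC fraction at codon positions 1, 2 and 3 (slices with stride 3)."""
    return tuple(gc_fraction(cds[i::3]) for i in range(3))
```

Applied gene by gene, per-position fractions like these are what reveal the 1st > 2nd > 3rd codon position pattern reported above, since the 3rd position is least constrained by the amino acid sequence.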
Audit of Clinical Coding of Major Head and Neck Operations
Mitra, Indu; Malik, Tass; Homer, Jarrod J; Loughran, Sean
2009-01-01
INTRODUCTION Within the NHS, operations are coded using the Office of Population Censuses and Surveys (OPCS) classification system. These codes, together with diagnostic codes, are used to generate Healthcare Resource Group (HRG) codes, which correlate to a payment bracket. The aim of this study was to determine whether allocated procedure codes for major head and neck operations were correct and reflective of the work undertaken. HRG codes generated were assessed to determine accuracy of remuneration. PATIENTS AND METHODS The coding of consecutive major head and neck operations undertaken in a tertiary referral centre over a retrospective 3-month period were assessed. Procedure codes were initially ascribed by professional hospital coders. Operations were then recoded by the surgical trainee in liaison with the head of clinical coding. The initial and revised procedure codes were compared and used to generate HRG codes, to determine whether the payment banding had altered. RESULTS A total of 34 cases were reviewed. The number of procedure codes generated initially by the clinical coders was 99, whereas the revised codes generated 146. Of the original codes, 47 of 99 (47.4%) were incorrect. In 19 of the 34 cases reviewed (55.9%), the HRG code remained unchanged, thus resulting in the correct payment. Six cases were never coded, equating to £15,300 loss of payment. CONCLUSIONS These results highlight the inadequacy of this system to reward hospitals for the work carried out within the NHS in a fair and consistent manner. The current coding system was found to be complicated, ambiguous and inaccurate, resulting in loss of remuneration. PMID:19220944
Reviewing the Challenges and Opportunities Presented by Code Switching and Mixing in Bangla
ERIC Educational Resources Information Center
Hasan, Md. Kamrul; Akhand, Mohd. Moniruzzaman
2015-01-01
This paper investigates the issues related to code-switching/code-mixing in an ESL context. Some preliminary data on Bangla-English code-switching/code-mixing has been analyzed in order to determine which structural pattern of code-switching/code-mixing is predominant in different social strata. This study also explores the relationship of…