Science.gov

Sample records for da vinci code

  1. Leonardo da Vinci and the Downburst.

    NASA Astrophysics Data System (ADS)

    Gedzelman, Stanley David

    1990-05-01

Evidence from the drawings, experiments, and writings of Leonardo da Vinci is presented to demonstrate that da Vinci recognized and, possibly, discovered the downburst and understood its associated airflow. Other early references to vortex flows resembling downbursts are also mentioned.

  2. Media Images Abbott and Costello Meet the End of the World: Who Is the Enemy in "The Da Vinci Code" and "An Inconvenient Truth?"

    ERIC Educational Resources Information Center

    Beck, Bernard

    2007-01-01

    Popular culture requires readily identifiable villains. Subcultural groups often serve this role, creating controversies. Controversies based on religion are especially bitter. As a rule, religion in the movies is inoffensively sentimental, but "The Da Vinci Code" is both popular and provocative, treading on the dangerous ground of Jesus's…

  3. Hidden sketches by Leonardo da Vinci revealed

    NASA Astrophysics Data System (ADS)

    Dumé, Belle

    2009-02-01

    Three drawings on the back of Leonardo da Vinci's The Virgin and Child with St Anne (circa 1508) have been discovered by researchers led by Michel Menu from the Centre de Recherche et de Restauration des Musées de France (C2RMF) and the Louvre Museum in Paris.

  4. How to Think Like Leonardo da Vinci

    ERIC Educational Resources Information Center

    Caouette, Ralph

    2008-01-01

    To be effective and relevant in twenty-first-century learning, art needs to be more inclusive. In this article, the author discusses how teachers can find a good example in Leonardo da Vinci for building an art program. His art, design, and curiosity are the perfect foundation for any art program, at any level. (Contains 3 resources and 3 online…

  6. The Case: Bunche-Da Vinci Learning Partnership Academy

    ERIC Educational Resources Information Center

    Eisenberg, Nicole; Winters, Lynn; Alkin, Marvin C.

    2005-01-01

    The Bunche-Da Vinci case described in this article presents a situation at Bunche Elementary School that four theorists were asked to address in their evaluation designs (see EJ791771, EJ719772, EJ791773, and EJ792694). The Bunche-Da Vinci Learning Partnership Academy, an elementary school located between an urban port city and a historically…

  7. Depth asymmetry in da Vinci stereopsis.

    PubMed

    Häkkinen, J; Nyman, G

    1996-12-01

We investigated the processes that determine the depth localization of monocular points that have no unambiguous depth. It is known that horizontally adjacent binocular objects are used in depth localization and that, for distances of 25-40 min arc, monocular points localize to the leading edge of a depth constraint zone, the area bounded by the visibility lines between which the points must lie in the real world. We demonstrate that this rule is not valid in complex depth scenes. Adding other disparate objects to the scene changes the localization of the monocular point in a way that cannot be explained by the da Vinci account of monocular-binocular integration. The effect of additional disparate objects is asymmetric in depth: a crossed object does not affect the da Vinci effect, but an uncrossed object biases the depth localization of monocular objects in the uncrossed direction. We conclude that a horizontally adjacent binocular plane does not completely determine the depth localization of a monocular point and that depth spreading from other binocular elements biases the localization process. PMID:8994582

  8. The role of transparency in da Vinci stereopsis.

    PubMed

    Zannoli, Marina; Mamassian, Pascal

    2011-10-15

The majority of natural scenes contain zones that are visible to one eye only. Past studies have shown that these monocular regions can be seen at a precise depth even though there are no binocular disparities that uniquely constrain their locations in depth. In the so-called da Vinci stereopsis configuration, the monocular region is a vertical line placed next to a binocular rectangular occluder. The opacity of the occluder has been suggested to be a necessary condition for da Vinci stereopsis, but this opacity constraint has never been tested empirically. In the present study, we tested whether da Vinci stereopsis and perceptual transparency can interact, using a classical da Vinci configuration in which the opacity of the occluder varied. We used two different monocular objects: a line and a disk. We found no effect of the opacity of the occluder on the perceived depth of the monocular object. A careful analysis of the distribution of perceived depth revealed that the monocular object was perceived at a depth that increased with the distance between the object and the occluder. The skewness of the distributions was not consistent with a double-fusion explanation, favoring a role for occlusion geometry in da Vinci stereopsis. A simple model that includes the geometry of the scene could account for the results. In summary, the mechanism responsible for locating monocular regions in depth is not sensitive to the material properties of objects, suggesting that da Vinci stereopsis is solved at relatively early stages of disparity processing. PMID:21906614

  9. Leonardo da Vinci (1452-1519)

    NASA Astrophysics Data System (ADS)

    Murdin, P.

    2000-11-01

Painter, inventor and polymath, born in Vinci (near Empoli), Italy. Although astronomy does not figure prominently in Leonardo's works, he realized the possibility of constructing a telescope (`making glasses to see the Moon enlarged'). He suggested that `… in order to observe the nature of the planets, open the roof and bring the image of a single planet onto the base of a concave mirror. The image o...

  10. Women and Technical Professions. Leonardo da Vinci Series: Good Practices.

    ERIC Educational Resources Information Center

    Commission of the European Communities, Brussels (Belgium). Directorate-General for Education and Culture.

    This document profiles programs for women in technical professions that are offered through the European Commission's Leonardo da Vinci program. The following programs are profiled: (1) Artemis and Diana (vocational guidance programs to help direct girls toward technology-related careers); (2) CEEWIT (an Internet-based information and…

  11. Studying and Working Abroad. Leonardo da Vinci Series: Good Practices.

    ERIC Educational Resources Information Center

    Commission of the European Communities, Brussels (Belgium). Directorate-General for Education and Culture.

    This document profiles recent successful examples of students studying and working abroad as part of the European Commission's Leonardo da Vinci program, which is designed to give students across the European Union the opportunity to experience vocational training in a foreign country. The following examples are presented: (1) 3 Finnish students…

  12. Training and Health. Leonardo da Vinci Series: Good Practices.

    ERIC Educational Resources Information Center

    Commission of the European Communities, Brussels (Belgium). Directorate-General for Education and Culture.

    This document profiles programs in the fields of health and medicine that are offered through the European Commission's Leonardo da Vinci program. The following programs are profiled: (1) CYTOTRAIN (a transnational vocational training program in cervical cancer screening); (2) Apollo (a program of open and distance learning for paramedical…

  13. The Potential da Vinci in All of Us

    ERIC Educational Resources Information Center

    Petto, Sarah; Petto, Andrew

    2009-01-01

The study of the human form is fundamental to both science and art curricula. For vertebrates, perhaps no feature is more important than the skeleton in determining observable form and function. As Leonardo da Vinci's famous Proportions of the Human Figure (Vitruvian Man) illustrates, the size, shape, and proportions of the human body are defined by…

  15. The DaVinci Project: Multimedia in Art and Chemistry.

    ERIC Educational Resources Information Center

    Simonson, Michael; Schlosser, Charles

    1998-01-01

    Provides an overview of the DaVinci Project, a collaboration of students, teachers, and researchers in chemistry and art to develop multimedia materials for grades 3-12 visualizing basic concepts in chemistry and visual art. Topics addressed include standards in art and science; the conceptual framework for the project; and project goals,…

  16. The da Vinci robot in right adrenalectomy: considerations on technique.

    PubMed

    D'Annibale, Annibale; Fiscon, Valentino; Trevisan, Paolo; Pozzobon, Maurizia; Gianfreda, Valeria; Sovernigo, Gianna; Morpurgo, Emilio; Orsini, Camillo; Del Monte, Daniele

    2004-02-01

    The da Vinci Robotic System (Intuitive Surgical, Mountain View, CA) became available at the General Surgery Department of Camposampiero Hospital in May 2001. From May 2001 to October 2002, 139 robotic operations were performed, one of which was a right adrenalectomy for a right adrenal mass. The progressive growth of the mass was the indication for surgical excision. Surgical adrenalectomy was successfully completed with da Vinci Robotic System using 5 ports (3 for the robotic system, 2 as service trocars). The wrist-like movements of the instrument's tip easily enabled the detachment of the right hepatic lobe from the gland and vessel isolation, while the 3-dimensional vision facilitated dissection of the veins from the vena cava. PMID:15259586

  17. [Lobectomy for lung cancer using the Da Vinci surgical system].

    PubMed

    Nakamura, Hiroshige

    2014-05-01

Robot-assisted surgery using the da Vinci surgical system has attracted attention in lung cancer surgery because of its excellent operability: jointed forceps, free of tremor, working under the clear vision of a three-dimensional high-definition camera. Although this form of advanced medical care is not yet approved for insurance coverage, it is at the clinical-research stage and is expected to be useful in hilar exposure, lymph node dissection, and suturing of the lung parenchyma or bronchus. Lung cancer surgery with the da Vinci system offers the advantage of combining the operability of open thoracotomy with the minimal invasiveness of video-assisted thoracic surgery. However, safety management, education, and significant cost remain problems to be resolved. Several important issues, such as sharing the knowledge and technology of robotic surgery, education and training, development of new instruments, and acquisition of advanced medical insurance coverage, are discussed with a view to the future development of robotic surgical systems. PMID:24946522

  18. Visual tracking of da Vinci instruments for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Speidel, S.; Kuhn, E.; Bodenstedt, S.; Röhl, S.; Kenngott, H.; Müller-Stich, B.; Dillmann, R.

    2014-03-01

Intraoperative tracking of laparoscopic instruments is a prerequisite for realizing further assistance functions. Since endoscopic images are always available, this sensor input can be used to localize the instruments without special devices or robot kinematics. In this paper, we present an image-based markerless 3D tracking of different da Vinci instruments in near real-time without an explicit model. The method uses different visual cues to segment the instrument tip, calculates a tip point, and applies a multiple-object particle filter for tracking. The accuracy and robustness are evaluated with in vivo data.
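The authors' multi-object tracker and segmentation cues are not detailed in the abstract; the skeleton of any such approach is a particle filter over tip positions. A minimal single-object sketch, assuming hypothetical 2D tip measurements from a segmentation stage and a simple random-walk motion model (not the paper's implementation):

```python
import math
import random

def particle_filter_step(particles, measurement, motion_std=5.0, meas_std=10.0):
    """One predict-update-resample cycle for a single 2D tip point.

    particles: list of (x, y) hypotheses for the instrument tip position.
    measurement: (x, y) tip point delivered by the segmentation stage.
    """
    # Predict: random-walk motion model (no explicit instrument model).
    particles = [(x + random.gauss(0, motion_std), y + random.gauss(0, motion_std))
                 for x, y in particles]
    # Update: weight each hypothesis by a Gaussian likelihood of the measurement.
    weights = [math.exp(-((x - measurement[0]) ** 2 + (y - measurement[1]) ** 2)
                        / (2 * meas_std ** 2)) for x, y in particles]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw particles in proportion to their weights.
    return random.choices(particles, weights=weights, k=len(particles))

def estimate(particles):
    """Posterior mean over the particle set."""
    n = len(particles)
    return (sum(x for x, _ in particles) / n, sum(y for _, y in particles) / n)

random.seed(0)
particles = [(0.0, 0.0)] * 200          # start far from the true tip
for _ in range(20):
    particles = particle_filter_step(particles, measurement=(50.0, 50.0))
tip = estimate(particles)               # drifts toward the measured tip point
```

Tracking multiple instruments, as in the paper, would extend this with one filter (or one particle sub-population) per instrument and a data-association step between segmented tips and filters.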

  19. Leonardo da Vinci and the origin of semen

    PubMed Central

    Noble, Denis; DiFrancesco, Dario; Zancani, Diego

    2014-01-01

    It is well known that Leonardo da Vinci made several drawings of the human male anatomy. The early drawings (before 1500) were incorrect in identifying the origin of semen, where he followed accepted teaching of his time. It is widely thought that he did not correct this mistake, a view that is reflected in several biographies. In fact, he made a later drawing (after 1500) in which the description of the anatomy is remarkably accurate and must have been based on careful dissection. In addition to highlighting this fact, acknowledged previously in only one other source, this article reviews the background to Leonardo's knowledge of the relevant anatomy.

  20. A Creative Approach to the Common Core Standards: The Da Vinci Curriculum

    ERIC Educational Resources Information Center

    Chaucer, Harry

    2012-01-01

    "A Creative Approach to the Common Core Standards: The Da Vinci Curriculum" challenges educators to design programs that boldly embrace the Common Core State Standards by imaginatively drawing from the genius of great men and women such as Leonardo da Vinci. A central figure in the High Renaissance, Leonardo made extraordinary contributions as a…

  2. Leonardo da Vinci: the search for the soul.

    PubMed

    Del Maestro, R F

    1998-11-01

    The human race has always contemplated the question of the anatomical location of the soul. During the Renaissance the controversy crystallized into those individuals who supported the heart ("cardiocentric soul") and others who supported the brain ("cephalocentric soul") as the abode for this elusive entity. Leonardo da Vinci (1452-1519) joined a long list of other explorers in the "search for the soul." The method he used to resolve this anatomical problem involved the accumulation of information from ancient and contemporary sources, careful notetaking, discussions with acknowledged experts, and his own personal search for the truth. Leonardo used a myriad of innovative methods acquired from his knowledge of painting, sculpture, and architecture to define more clearly the site of the "senso comune"--the soul. In this review the author examines the sources of this ancient question, the knowledge base tapped by Leonardo for his personal search for the soul, and the views of key individuals who followed him. PMID:9817431

  3. [Regarding the Manuscript D " Dell' occhio " of Leonardo da Vinci].

    PubMed

    Heitz, Robert F

    2009-01-01

    Leonardo da Vinci's Manuscript D consists of five double pages sheets, which, folded in two, comprise ten folios. This document, in the old Tuscan dialect and mirror writing, reveals the ideas of Leonardo on the anatomy of the eye in relation to the formation of images and visual perception. Leonardo explains in particular the behavior of the rays in the eye in terms of refraction and reflection, and is very mechanistic in his conception of the eye and of the visual process. The most significant innovations found in these folios are the concept of the eye as a camera obscura and the intersection of light rays in the interior of the eye. His texts nevertheless show hesitation, doubts and a troubled confusion, reflecting the ideas and uncertainties of his era. He did not share his results in his lifetime, despite both printing and etching being readily available to him. PMID:19852385

  4. Thinking like Leonardo da Vinci and its implications for the modern doctor.

    PubMed

    Baum, Neil

    2013-01-01

    Most people when asked to name the most creative, innovative, and multidimensional people in history would agree that Leonardo da Vinci is either at the top or very close to the number one position on that list. Wouldn't it be nice to think like da Vinci? This article shares the seven unique principles of thinking that da Vinci used that enabled him to be the greatest painter, sculptor, architect, musician, mathematician, engineer, inventor, anatomist, geologist, cartographer, botanist, and writer of his (if not of all) time. This article will take you deep into the notebooks and codices of da Vinci, and suggest ways his ideas can be used by anyone in the healthcare profession to make them a better healthcare provider. PMID:24228380

  5. Load evaluation of the da Vinci surgical system for transoral robotic surgery.

    PubMed

    Fujiwara, Kazunori; Fukuhara, Takahiro; Niimi, Koji; Sato, Takahiro; Kitano, Hiroya

    2015-12-01

Transoral robotic surgery, performed with the da Vinci surgical system (da Vinci), is a surgical approach for benign and malignant lesions of the oral cavity and laryngopharynx. It provides several unique advantages, including a 3-dimensional magnified view and the ability to see and work around curves or angles. However, the current da Vinci surgical system does not provide haptic feedback. This is problematic because the potential risks specific to transoral use of the da Vinci include tooth injury, mucosal laceration, ocular injury and mandibular fracture. To assess the potential for intraoperative injuries, we measured the load exerted by the endoscope and the instrument of the da Vinci Si surgical system. We pressed the endoscope and instrument against a load cell six times each and measured the dynamic load and the time-to-maximum load. We also struck the endoscope and instrument against the load cell six times each and measured the impact load. The maximum dynamic load was 7.27 ± 1.31 kg for the endoscope and 1.90 ± 0.72 kg for the instrument. The corresponding times-to-maximum load were 1.72 ± 0.22 and 1.29 ± 0.34 s; the impact loads were significantly lower than the dynamic loads. It remains possible that a major load is exerted on adjacent structures by continuous contact with the endoscope and instrument of the da Vinci Si. However, because there is a minor delay before the maximum load is reached, careful monitoring by an on-site assistant may help prevent injury to contiguous structures. PMID:26530845

  6. Robotic Partial Nephrectomy with the Da Vinci Xi

    PubMed Central

    Kallingal, George J. S.; Swain, Sanjaya; Darwiche, Fadi; Punnen, Sanoj; Manoharan, Murugesan; Gonzalgo, Mark L.; Parekh, Dipen J.

    2016-01-01

Purpose. The surgical expertise to perform robotic partial nephrectomy is heavily dependent on technology. The Da Vinci Xi (XI) is the latest robotic surgical platform, with significant advancements over its predecessor. We describe our operative technique and experience with the XI system for robotic partial nephrectomy (RPN). Materials and Methods. Patients with clinical T1 renal masses were offered RPN with the XI. We used laser targeting, autopositioning, and a novel “in-line” port placement to perform RPN. Results. 15 patients underwent RPN with the XI. There were no intraoperative complications and no operative conversions. Mean console time was 101.3 minutes (range 44–176 minutes). Mean ischemia time was 17.5 minutes and estimated blood loss was 120 mL. 12 of 15 patients had renal cell carcinoma. Two patients had oncocytoma and one had benign cystic disease. All patients had negative surgical margins and pathologic T1 disease. Two postoperative complications were encountered: one patient developed a pseudoaneurysm and one was readmitted for presumed urinary tract infection. Conclusions. RPN with the XI system can be safely performed. Combining our surgical technique with the technological advancements of the XI offers patients acceptable pathologic and perioperative outcomes. PMID:26977144

  7. Leonardo da Vinci and the first hemodynamic observations.

    PubMed

    Martins e Silva, J

    2008-02-01

Leonardo da Vinci was a genius whose accomplishments and ideas come down to us today, five centuries later, with the freshness of innovation and the fascination of discovery. This brief review begins with a summary of Leonardo's life and a description of the most important works of art that he bequeathed us, and then concentrates on his last great challenge. There was a point at which Leonardo's passion for art gave way to the study of human anatomy, not only to improve his drawing but to go beyond what had been simply a representation of form and understand the underlying functioning. Among his many interests, we focus on his study of the heart and blood vessels, which he observed carefully in animals and in human autopsies, and reproduced in drawings of great quality with annotations of astonishing acuteness. The experience he had acquired from observing the flow of water in currents and around obstacles, and the conclusions he drew concerning hydrodynamics, were central to his interpretation of the mechanisms of the heart and of blood flow, to which he devoted much of his time between 1508 and 1513. From these studies, immortalized in drawings of great clarity, come what are acknowledged to be the first hemodynamic records, in which Leonardo demonstrates the characteristics of blood flow in the aorta and great vessels, the importance of blood reflux, and the formation of eddies in the sinuses of the aortic valve. Through his assiduous and careful observations and his subsequent deductions, Leonardo put forward detailed findings on hemodynamic questions that advanced technology has only recently enabled us to confirm. PMID:18488922

  8. Evolution of robots throughout history from Hephaestus to Da Vinci Robot.

    PubMed

    Iavazzo, Christos; Gkegke, Xanthi-Ekaterini D; Iavazzo, Paraskevi-Evangelia; Gkegkes, Ioannis D

    2014-01-01

The da Vinci robot is increasingly used for operations, bringing the advantages of robotics to medicine. This historical article presents the evolution of robots in medicine from the time of ancient myth through the Renaissance to today's revolutionary applications. We have collected several elegant narratives on the topic; a little imagination helps the reader find the similarities. The journey runs from the Greek myths of Hephaestus through Aristotle and Leonardo da Vinci to the robots of Karel Capek and Isaac Asimov, and finally to the invention of the medical robots. PMID:25811686

  9. Leonardo da Vinci, One Year on...a Different Look at Vocational Training in Europe.

    ERIC Educational Resources Information Center

    Le Magazine, 1996

    1996-01-01

    Discusses the success of the Leonardo da Vinci program, a European laboratory of innovation in vocational training, a priority focus of investment in human resources and intelligence, and a way to mobilize innovative forces beyond national boundaries. Trends identified by the program focus on new information and communication technologies. (JOW)

  10. Transparency of Vocational Qualifications: The Leonardo da Vinci Approach. CEDEFOP Panorama Series.

    ERIC Educational Resources Information Center

    Bjornavold, Jens; Pettersson, Sten

    This report gives an overview of the situation of transparency of vocational qualifications by presenting measures introduced at the European Community level and by drawing attention to projects within the Leonardo da Vinci Program dealing with the issue. A 16-page executive summary appears first. Chapter 1 provides general background and aims.…

  11. Visual degradation in Leonardo da Vinci's iconic self-portrait: A nanoscale study

    NASA Astrophysics Data System (ADS)

    Conte, A. Mosca; Pulci, O.; Misiti, M. C.; Lojewska, J.; Teodonio, L.; Violante, C.; Missori, M.

    2014-06-01

The discoloration of ancient paper, due to the development of oxidized groups acting as chromophores in its chief component, cellulose, is responsible for severe visual degradation in ancient artifacts. By adopting a non-destructive approach based on the combination of optical reflectance measurements and time-dependent density functional theory ab-initio calculations, we describe and quantify the chromophores affecting Leonardo da Vinci's iconic self-portrait. Their relative concentrations are very similar to those measured in modern and ancient samples aged in humid environments. This analysis quantifies the present level of optical degradation of Leonardo da Vinci's self-portrait and, compared with future measurements, will make it possible to assess its degradation rate, information that is fundamental for planning appropriate conservation strategies.

  12. [Freud's identification with men who had 2 mothers: Oedipus, Leonardo da Vinci, Michelangelo and Moses].

    PubMed

    Harsch, H E

    1994-02-01

    In view of the fact that as a child Sigmund Freud was looked after by two mothers--his actual mother and a nursemaid--it is hardly surprising that traces of this pre-oedipal situation, fraught as it was with traumatisation and loss, should be discernible in the works of the creator of psychoanalysis. Freud's continued preoccupation with the Oedipus myth, his interest in "great men" like da Vinci and Michelangelo, and finally his identification with the figure of Moses are pointers not only to the paternal dimension (as long suggested by Freud's biographers) but also to the maternal dimension and its significance for Freud's life and work. The author demonstrates that those mythical and historical figures which Freud identified with--Oedipus, da Vinci, Michelangelo, Moses--themselves all had two mothers and sublimated this traumatic experience into outstanding achievements, the same being true of Freud himself "who solved the famous riddle and was a most powerful man". PMID:8153361

  13. The Handedness of Leonardo da Vinci: A Tale of the Complexities of Lateralisation

    ERIC Educational Resources Information Center

    McManus, I. C.; Drury, Helena

    2004-01-01

    The handedness of Leonardo da Vinci is controversial. Although there is little doubt that many of his well-attributed drawings were drawn with the left hand, the hatch marks of the shading going downwards from left to right, it is not clear that he was a natural left-hander, there being some suggestion that he may have become left-handed as the…

  15. Comparing the da Vinci Si Single Console and Dual Console in Teaching Novice Surgeons Suturing Techniques

    PubMed Central

    Jackson, Tiffany; Advincula, Arnold

    2014-01-01

    Background and Objectives: Robot-assisted laparoscopic surgery is often taught with the surgical mentor at the surgeon console and the trainee at the patient's bedside. The da Vinci dual console (Intuitive Surgical, Sunnyvale, California) allows a surgical mentor to teach with both the mentor and the trainee working at a surgeon console simultaneously. The purpose of this study is to evaluate the effectiveness of the dual console versus the single console for teaching medical students robotic tasks. Methods: Forty novice medical students were randomized to either the da Vinci single-console or dual-console group and were taught 4 knot-tying techniques by a surgical mentor. The students were timed while performing the tasks. Results: No statistically significant differences in mean task times were observed between the single- and dual-console groups: interrupted stitch with a 2-handed knot (300 seconds for single vs 294 seconds for dual, P = .59), interrupted stitch with a 1-handed knot (198 seconds for single vs 212 seconds for dual, P = .88), figure-of-8 stitch with a 2-handed knot (261 seconds for single vs 219 seconds for dual, P = .20), and figure-of-8 stitch with a 1-handed knot (200 seconds for single vs 199 seconds for dual, P = .53). Conclusion: No significant difference was observed in performance time when teaching knot-tying techniques to medical students using the da Vinci dual console compared with the single console. More research needs to be performed on the utility of the da Vinci dual console in surgical training. PMID:25392618
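The single- versus dual-console comparison above reduces to a two-sample test on mean task times per technique. The abstract does not state which test the authors used; a minimal sketch of Welch's t statistic for unequal-variance samples, with hypothetical summary numbers (the study reports means and P values but not standard deviations):

```python
import math

def welch_t(mean1, sd1, n1, mean2, sd2, n2):
    """Welch's t statistic and degrees of freedom for two independent
    samples with possibly unequal variances."""
    se1, se2 = sd1 ** 2 / n1, sd2 ** 2 / n2        # squared standard errors
    t = (mean1 - mean2) / math.sqrt(se1 + se2)
    df = (se1 + se2) ** 2 / (se1 ** 2 / (n1 - 1) + se2 ** 2 / (n2 - 1))
    return t, df

# Hypothetical: 20 students per arm, mean times 300 s vs 294 s, sd 40 s each.
t, df = welch_t(300, 40, 20, 294, 40, 20)
# A |t| this small against ~38 degrees of freedom is far from significance,
# consistent with the reported P = .59 for the 2-handed interrupted stitch.
```

Converting t and df to a P value requires the t-distribution CDF (e.g. `scipy.stats.t.sf`), omitted here to keep the sketch dependency-free.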

  16. Potential applications of the da Vinci minimally invasive surgical robotic system in otolaryngology.

    PubMed

    McLeod, Ian K; Mair, Eric A; Melder, Patrick C

    2005-08-01

Anatomic constraints and instrumentation design characteristics have limited the exploitation of endoscopic surgery in otolaryngology. The move toward less invasive and less morbid procedures has paved the way for the development and application of robotic and computer-assisted systems in surgery. Surgical robotics allows for the use of new instrumentation in our field. We review the operative advantages, limitations, and possible surgical applications of the da Vinci Surgical System in otolaryngology. In the laboratory setting, we explored the setup and use of the da Vinci system in porcine and cadaveric head and neck airway models; the setup was configured for optimal airway surgery. Endoscopic cautery, manipulation, and suturing of supraglottic tissues were performed in both the porcine and cadaveric models. We found that the da Vinci system provided the advantages of the lower morbidity associated with endoscopic surgery, more freedom of movement, and three-dimensional open-surgical viewing. We also observed that the system has several limitations to its use in otolaryngology. PMID:16220853

  17. da Vinci robot-assisted keyhole neurosurgery: a cadaver study on feasibility and safety.

    PubMed

    Marcus, Hani J; Hughes-Hallett, Archie; Cundy, Thomas P; Yang, Guang-Zhong; Darzi, Ara; Nandi, Dipankar

    2015-04-01

    The goal of this cadaver study was to evaluate the feasibility and safety of da Vinci robot-assisted keyhole neurosurgery. Several keyhole craniotomies were fashioned including supraorbital subfrontal, retrosigmoid and supracerebellar infratentorial. In each case, a simple durotomy was performed, and the flap was retracted. The da Vinci surgical system was then used to perform arachnoid dissection towards the deep-seated intracranial cisterns. It was not possible to simultaneously pass the 12-mm endoscope and instruments through the keyhole craniotomy in any of the approaches performed, limiting visualization. The articulated instruments provided greater dexterity than existing tools, but the instrument arms could not be placed in parallel through the keyhole craniotomy and, therefore, could not be advanced to the deep cisterns without significant clashing. The da Vinci console offered considerable ergonomic advantages over the existing operating room arrangement, allowing the operating surgeon to remain non-sterile and seated comfortably throughout the procedure. However, the lack of haptic feedback was a notable limitation. In conclusion, while robotic platforms have the potential to greatly enhance the performance of transcranial approaches, there is strong justification for research into next-generation robots, better suited to keyhole neurosurgery. PMID:25516094

  18. An efficient floating-point to fixed-point conversion process for biometric algorithm on DaVinci DSP architecture

    NASA Astrophysics Data System (ADS)

    Konvalinka, Ira; Quddus, Azhar; Asraf, Daniel

    2009-05-01

    Today there is no direct path for the conversion of a floating-point algorithm implementation to an optimized fixed-point implementation. This paper proposes a novel and efficient methodology for Floating-point to Fixed-point Conversion (FFC) of a biometric Fingerprint Algorithm Library (FAL) on the fixed-point DaVinci processor. A general FFC research task is streamlined into smaller tasks which can be accomplished with lower effort and higher certainty. The optimization target in FFC is formally specified in this paper: to preserve floating-point accuracy and to reduce execution time, while preserving the majority of the algorithm code base. A comprehensive eight-point strategy is formulated to achieve that target. Both a local optimization flow (focused on the most time-consuming routines) and a global one (to optimize across multiple routines) are used. Characteristic phases in the FFC activity are presented using data from applying the proposed FFC methodology to FAL, starting with the target optimization specification, through speed optimization breakthroughs, and ending with validation of FAL accuracy after the execution-time optimization. The FAL implementation resulted in a reduction of biometric verification time by more than a factor of 5, with negligible impact on accuracy. Any algorithm developer facing the task of implementing a floating-point algorithm on the DaVinci DSP is expected to benefit from this presentation.
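
    The abstract does not reproduce the FFC methodology itself. Purely as a hedged illustration of the core operation in any float-to-fixed port, the sketch below quantizes values to a Q-format representation and performs a rescaled fixed-point multiply; the Q15 format, function names, and tolerances are assumptions for illustration, not details taken from the paper.

```python
# Hedged sketch of Q-format fixed-point arithmetic, the basic
# building block of a floating-point to fixed-point conversion.
# Q15 (15 fractional bits) is chosen for illustration only.

def to_q(x, frac_bits=15):
    """Quantize a float to a fixed-point integer (round to nearest)."""
    return int(round(x * (1 << frac_bits)))

def from_q(q, frac_bits=15):
    """Convert a fixed-point integer back to a float."""
    return q / (1 << frac_bits)

def q_mul(a, b, frac_bits=15):
    """Fixed-point multiply: the double-width product is shifted back."""
    return (a * b) >> frac_bits

# An accuracy check in the spirit of the paper's validation phase:
# compare the fixed-point result against the floating-point reference.
x, y = 0.5, 0.25
fixed_result = from_q(q_mul(to_q(x), to_q(y)))
error = abs(fixed_result - x * y)
```

    In a real port, the number of fractional bits per routine would be chosen from the observed dynamic range of that routine's intermediates, which is presumably the kind of decision the paper's eight-point strategy systematizes.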

  19. [The Vitruvian Man: an anatomical drawing for proportions by Leonardo Da Vinci].

    PubMed

    Le Floch-Prigent, P

    2008-12-01

    The aim of the study was to find and analyse the text by Vitruvius which inspired the famous drawing by Leonardo da Vinci (circa 1490) kept in the Galleria dell'Accademia, in Venezia, Italy: the man inscribed in one circle and in one square. The book "de Architectura" by Marcus Vitruvius Pollio has been printed many times since the Renaissance, when both the Roman architecture of antiquity and this text became very popular. Following the French translation by Claude Perrault in 1684, it became easy to find a French translation with the original text in Latin (Paris, 2003, Les Belles Lettres, French text by Pierre Gros). The drawing by Leonardo da Vinci illustrates with great accuracy and fidelity the quotation of Vitruvius (with the exception of two of the 12 main relationships). The genius of Leonardo da Vinci was to keep only one trunk, head and neck for the two pairs of limbs, scapular and pelvic; to make the circle tangent to the lower edge of the square; to adjust a few features of the quotation for the equilibrium of the whole figure; and of course to bring his incredible skill as a draughtsman (one of the best of his century). The drawing was made on a sheet of paper 344x245 mm, in black ink which has become dark brown with time; several lines complete the figure above and below; a short caption and a horizontal scale appear just under the drawing. The celebrity of the drawing, a symbol of the Renaissance, of the equilibrium of man and mankind, and of the universality of the artists and intellectuals of the time (Humanism), has made it iconic, and it has been constantly reproduced and adapted, especially for advertisements and logos, not only in the medical field. PMID:18951824

  20. OCT structural examination of Madonna dei Fusi by Leonardo da Vinci

    NASA Astrophysics Data System (ADS)

    Targowski, Piotr; Iwanicka, Magdalena; Sylwestrzak, Marcin; Kaszewska, Ewa A.; Frosinini, Cecilia

    2013-05-01

    Madonna dei Fusi (`Madonna of the Yarnwinder') is a spectacular example of Italian Renaissance painting, attributed to Leonardo da Vinci. The aim of this study is to give an account of past restoration procedures. The evidence of a former retouching campaign will be presented with cross-sectional images obtained non-invasively with Optical Coherence Tomography (OCT). Specifically, the locations of overpaintings/retouchings with respect to the original paint layer and secondary varnishes will be given. Additionally, evidence of a former transfer of the pictorial layer to a new canvas support will be shown by detecting the presence of the canvas structure incised into the paint layer.

  1. [The beginnings of robotic surgery--from the roots up to the da Vinci telemanipulator system].

    PubMed

    Dervaderics, János

    2007-12-01

    The history of robotic surgery is only 22 years old. The article gives a short overview of the history of robotics, surgical robots, the da Vinci telemanipulator system and some further commercial and experimental surgical robots; the importance of surgical simulation is also emphasized. Robotic surgery has its own place within the following concepts: 1. computer-assisted surgery (CAS), 2. computer-integrated surgery (CIS), 3. surgical automation, 4. surgical system integration and 5. artificial intelligence (AI). At the end of the paper some important sources of information regarding robotic surgery are given. PMID:18048110

  2. Associated Da Vinci and magellan robotic systems for successful treatment of nutcracker syndrome.

    PubMed

    Thaveau, Fabien; Nicolini, Philippe; Lucereau, Benoit; Georg, Yannick; Lejay, Anne; Chakfe, Nabil

    2015-01-01

    Here, we report the case of a 26-year-old woman suffering from nutcracker syndrome with concurrent disabling pelvic congestion syndrome. She was given the minimally invasive treatment of left renal vein transposition with the Da Vinci(®) robotic system (Intuitive Surgical, Sunnyvale, CA), followed the next day by a gonadal vein and pelvic varicose embolization using a robotic intraluminal navigation with the Magellan™ robotic system (Hansen Medical, Mountain View, CA). The procedure was uneventful, and the patient had good results at 6 months of follow-up, including a patent left renal vein and complete relief of symptoms. PMID:25531954

  3. The Da Vinci European BioBank: A Metabolomics-Driven Infrastructure

    PubMed Central

    Carotenuto, Dario; Luchinat, Claudio; Marcon, Giordana; Rosato, Antonio; Turano, Paola

    2015-01-01

    We present here the organization of the recently-constituted da Vinci European BioBank (daVEB, https://www.davincieuropeanbiobank.org/it). The biobank was created as an infrastructure to support the activities of the Fiorgen Foundation (http://www.fiorgen.net/), a nonprofit organization that promotes research in the field of pharmacogenomics and personalized medicine. The way operating procedures concerning samples and data have been developed at daVEB largely stems from the strong metabolomics connotation of Fiorgen and from the involvement of the scientific collaborators of the foundation in international/European projects aimed to tackle the standardization of pre-analytical procedures and the promotion of data standards in metabolomics. PMID:25913579

  4. A grip force model for the da Vinci end-effector to predict a compensation force.

    PubMed

    Lee, Chiwon; Park, Yong Hyun; Yoon, Chiyul; Noh, Seungwoo; Lee, Choonghee; Kim, Youdan; Kim, Hee Chan; Kim, Hyeon Hoe; Kim, Sungwan

    2015-03-01

    A torque transfer system (TTS) that measures grip forces was developed to resolve a potential drawback of the current da Vinci robot system, whose grip forces vary according to the different postures of its EndoWrist. A preliminary EndoWrist Inner Mechanism Model (EIMM) was also developed and validated with real grip force measurements. The EndoWrist's grip forces, posture angles, and transferred torque were measured using the TTS. The mean measured grip forces of three different EndoWrists for 27 different postures were very diverse: an EndoWrist exerted between 1.84 and 3.37 times more grip force in specific postures, even when the surgeon exerted the same amount of force. Using the posture angles as input and the grip forces as output, the EIMM was constructed. Expected grip force values obtained from the EIMM were then compared with actual measurements of the da Vinci EndoWrist to validate the proposed model. From these results, surgeons will benefit from an understanding of the actual grip force being applied to tissue and of the mechanical properties of the robotic system. The EIMM could provide a baseline for designing a force-feedback system for a surgical robot. This is significantly important for preventing serious injury by maintaining a proper force on tissue. PMID:25432526
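
    The abstract gives no equations for the EIMM. Purely as a hypothetical sketch of the posture-to-grip-force idea, the code below fits a linear model from posture angles to measured grip force and derives a compensation factor for a target force; the linear form, all names, and the synthetic data are assumptions, not the authors' model.

```python
import numpy as np

def fit_grip_model(angles, forces):
    """Least-squares linear model: force ~ [angles, 1] @ w."""
    X = np.hstack([angles, np.ones((len(angles), 1))])
    w, *_ = np.linalg.lstsq(X, forces, rcond=None)
    return w

def compensation_factor(w, angle, target_force):
    """Scale factor on the surgeon's input so the grip force
    predicted at this posture matches the target force."""
    predicted = float(np.append(angle, 1.0) @ w)
    return target_force / predicted

# Synthetic stand-in for TTS measurements: grip force depending
# linearly on two posture angles plus a bias term.
angles = np.array([[a, b] for a in range(0, 90, 10)
                   for b in range(0, 90, 10)], dtype=float)
forces = 0.02 * angles[:, 0] + 0.01 * angles[:, 1] + 1.0
w = fit_grip_model(angles, forces)
factor = compensation_factor(w, np.array([45.0, 45.0]), target_force=2.5)
```

    A lookup table or nonlinear regression over the 27 measured postures would fill the same role; the point is only that a posture-indexed model yields a per-posture correction.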

  5. [History of robotics: from Archytas of Tarentum until da Vinci robot. (Part I)].

    PubMed

    Sánchez Martín, F M; Millán Rodríguez, F; Salvador Bayarri, J; Palou Redorta, J; Rodríguez Escovar, F; Esquena Fernández, S; Villavicencio Mavrich, H

    2007-02-01

    Robotic surgery is the newest technological option in urology. To understand how new robots work, it is interesting to know their history. The desire to design machines imitating humans has persisted for more than 4000 years. There are references to King-su Tse (classical China) building automata around 500 BC. Archytas of Tarentum (around 400 BC) is considered the father of mechanical engineering and one of the classic reference points of Western robotics. Heron of Alexandria, Hsieh-Fec, Al-Jazari, Roger Bacon, Juanelo Turriano, Leonardo da Vinci, Vaucanson and von Kempelen were inventors of automata in the Middle Ages, the Renaissance and classicism. In the 19th century, automaton production reached a peak and all branches of engineering underwent great development. In 1942 Asimov published the three laws of robotics, based on advances in mechanics, electronics and informatics. In the 20th century, robots able to perform very complex self-governing tasks were developed, such as the da Vinci Surgical System (Intuitive Surgical Inc, Sunnyvale, CA, USA), a very sophisticated robot to assist surgeons. PMID:17645084

  6. Leonardo da Vinci and Andreas Vesalius; the shoulder girdle and the spine, a comparison.

    PubMed

    Ganseman, Y; Broos, P

    2008-01-01

    Leonardo da Vinci and Andreas Vesalius were two important Renaissance figures; Vesalius was a surgeon-anatomist who delivered innovative work on the study of the human body, while Leonardo da Vinci was an artist who delivered strikingly accurate and beautiful drawings of the human body. We compare both masters with regard to their knowledge of the working of the muscles, their method and system of dissection, and their system and presentation of the drawings. The investigation consisted of a comparison between both anatomists, in particular concerning their studies of the shoulder girdle and spine, by reviewing their original work as well as the existing literature on this subject. The investigation led to the conclusion that the drawings in question marked a change in history and were of high quality, centuries ahead of their time. Both were anatomists, both were revolutionary; only one changed history in his own time, while the other changed history centuries later. Leonardo made beautiful drawings that match, or even surpass, the drawings of today. Vesalius laid the foundation for medicine as a science, as it remains to this day. Their lives differed as strongly as their impact. In the light of their time, their achievement was extraordinary. PMID:18807610

  7. The uncatchable smile in Leonardo da Vinci's La Bella Principessa portrait.

    PubMed

    Soranzo, Alessandro; Newberry, Michelle

    2015-08-01

    A portrait of uncertain origin recently came to light which, after extensive research and examination, was shown to be that rarest of things: a newly discovered Leonardo da Vinci painting entitled La Bella Principessa. This research presents a new illusion which is similar to that identified in the Mona Lisa; La Bella Principessa's mouth appears to change slant depending on both the Viewing Distance and the Level of Blur applied to a digital version of the portrait. Through a series of psychophysics experiments, it was found that a perceived change in the slant of the La Bella Principessa's mouth influences her expression of contentment thus generating an illusion that we have coined the "uncatchable smile". The elusive quality of the Mona Lisa's smile has been previously reported (Science, 290 (2000) 1299) and so the existence of a similar illusion in a portrait painted prior to the Mona Lisa becomes more interesting. The question remains whether Leonardo da Vinci intended this illusion. In any case, it can be argued that the ambiguity created adds to the portrait's allure. PMID:26049039

  8. Virtual Mobility in Reality: A Study of the Use of ICT in Finnish Leonardo da Vinci Mobility Projects.

    ERIC Educational Resources Information Center

    Valjus, Sonja

    An e-mail survey and interviews collected data on use of information and communications technology (ICT) in Finnish Leonardo da Vinci mobility projects from 2000-02. Findings showed that the most common ICT tools used were e-mail, digital tools, and the World Wide Web; ICT was used during all project phases; the most common problems concerned…

  9. Educating in the Design and Construction of Built Environments Accessible to Disabled People: The Leonardo da Vinci AWARD Project

    ERIC Educational Resources Information Center

    Frattari, Antonio; Dalpra, Michela; Bernardi, Fabio

    2013-01-01

    An interdisciplinary partnership within an European Leonardo da Vinci project has developed a new approach aimed at educating secondary school students in the creation of built environments accessible to disabled people and at sensitizing them towards the inclusion of people with disabilities in all realms of social life. The AWARD (Accessible…

  11. Bell's palsy: the answer to the riddle of Leonardo da Vinci's 'Mona Lisa'.

    PubMed

    Maloney, W J

    2011-05-01

    The smile of the famed portrait 'The Mona Lisa' has perplexed both art historians and researchers for the past 500 years. There has been a multitude of theories expounded to explain the nature of the model's enigmatic smile. The origin of the model's wry smile can be demonstrated through a careful analysis of both documented facts concerning the portrait--some gathered only recently through the use of modern technology--and a knowledge of the clinical presentation of Bell's palsy. Bell's palsy is more prevalent in women who are either pregnant or who have recently given birth. This paper postulates that the smile of the portrait's model was due to Leonardo da Vinci's anatomically precise representation of a new mother affected by Bell's palsy subsequent to her recent pregnancy. PMID:20929717

  12. Microbiological Analysis of Surfaces of Leonardo Da Vinci's Atlantic Codex: Biodeterioration Risk

    PubMed Central

    Moroni, Catia; Pasquariello, Giovanna; Maggi, Oriana

    2014-01-01

    Following the discovery of discoloration on some pages of the Atlantic Codex (AC) of Leonardo da Vinci kept in the Biblioteca Ambrosiana in Milan, investigations were carried out to verify the presence of microorganisms such as bacteria and fungi. A noninvasive sampling method was used that proved efficient and allowed us to highlight the microbial facies of the material, which was then examined using conventional microbiological techniques. The microclimatic conditions in the storage room as well as the water content of the volume were also assessed. The combined observations allowed the conclusion that the discoloration of suspected biological origin on some pages of the AC is not related to the presence of, or a current attack by, microbial agents. PMID:25574171

  13. Understanding the adoption dynamics of medical innovations: affordances of the da Vinci robot in the Netherlands.

    PubMed

    Abrishami, Payam; Boer, Albert; Horstman, Klasien

    2014-09-01

    This study explored the rather rapid adoption of a new surgical device - the da Vinci robot - in the Netherlands despite the high costs and its controversial clinical benefits. We used the concept 'affordances' as a conceptual-analytic tool to refer to the perceived promises, symbolic meanings, and utility values of an innovation constructed in the wider social context of use. This concept helps us empirically understand robot adoption. Data from 28 in-depth interviews with diverse purposively-sampled stakeholders, and from medical literature, policy documents, Health Technology Assessment reports, congress websites and patients' weblogs/forums between April 2009 and February 2014 were systematically analysed from the perspective of affordances. We distinguished five interrelated affordances of the robot that accounted for shaping and fulfilling its rapid adoption: 'characteristics-related' affordances such as smart nomenclature and novelty, symbolising high-tech clinical excellence; 'research-related' affordances offering medical-technical scientific excellence; 'entrepreneurship-related' affordances for performing better-than-the-competition; 'policy-related' affordances indicating the robot's liberalised provision and its reduced financial risks; and 'communication-related' affordances of the robot in shaping patients' choices and the public's expectations by resonating promising discourses while pushing uncertainties into the background. These affordances make the take-up and use of the da Vinci robot sound perfectly rational and inevitable. This Dutch case study demonstrates the fruitfulness of the affordances approach to empirically capturing the contextual dynamics of technology adoption in health care: exploring in-depth actors' interaction with the technology while considering the interpretative spaces created in situations of use. This approach can best elicit real-life value of innovations, values as defined through the eyes of (potential) users. PMID:25063968

  14. The experience of totally endoscopic coronary bypass grafting with the robotic system «Da Vinci» in Russia

    NASA Astrophysics Data System (ADS)

    Efendiev, V. U.; Alsov, S. A.; Ruzmatov, T. M.; Mikheenko, I. L.; Chernyavsky, A. M.; Malakhov, E. S.

    2015-11-01

    The experience of the NRICP with a new technology in Russia, thoracoscopic coronary bypass grafting using the Da Vinci robotic system, is presented. The technology was introduced in Russia in 2011. Overall, one hundred endoscopic coronary artery bypass procedures were performed. We compared and analyzed the results of coronary artery stenting versus minimally invasive coronary artery bypass grafting. According to the results, totally endoscopic coronary artery bypass grafting has several advantages over alternative treatment strategies.

  15. Realization of a single image haze removal system based on DaVinci DM6467T processor

    NASA Astrophysics Data System (ADS)

    Liu, Zhuang

    2014-10-01

    Video monitoring systems (VMS) have been extensively applied in the domains of target recognition, traffic management, remote sensing, auto navigation and national defence. However, a VMS depends strongly on the weather: in foggy weather, for instance, the quality of the images received by the VMS is distinctly degraded and the effective range of the VMS is decreased. In short, a VMS performs poorly in bad weather. Research on the enhancement of fog-degraded images therefore has high theoretical and practical value. A design scheme for a fog-degraded image enhancement system based on the TI DaVinci processor is presented in this paper. The main function of the system is to capture images from digital cameras and execute image enhancement processing to obtain a clear image. The processor used in this system is the dual-core TI DaVinci DM6467T (ARM@500MHz + DSP@1GHz). A MontaVista Linux operating system runs on the ARM subsystem, which handles I/O and application processing. The DSP handles signal processing, and the results are made available to the ARM subsystem in shared memory. The system benefits from the DaVinci processor in that, with lower power cost and smaller volume, it provides image processing capability equivalent to an x86 computer. The outcome shows that the system described in this paper can process images at 25 frames per second at D1 resolution.
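
    The abstract does not name the dehazing algorithm it implements. A common choice for single-image haze removal is the dark channel prior, sketched below purely as an assumption; the patch size, omega, t0, and the simplified per-pixel transmission estimate are illustrative, not values taken from the paper.

```python
import numpy as np

def dehaze_dark_channel(img, patch=15, omega=0.95, t0=0.1):
    """Minimal dark-channel-prior dehazing sketch.
    img: HxWx3 float array with values in [0, 1]."""
    h, w, _ = img.shape
    # Dark channel: per-pixel channel minimum, then a patch minimum filter.
    dark = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(dark, pad, mode='edge')
    dc = np.empty_like(dark)
    for i in range(h):
        for j in range(w):
            dc[i, j] = padded[i:i + patch, j:j + patch].min()
    # Atmospheric light: colour of the brightest dark-channel pixels.
    n = max(1, int(h * w * 0.001))
    idx = np.argsort(dc.ravel())[-n:]
    A = img.reshape(-1, 3)[idx].max(axis=0)
    # Transmission estimate (simplified to a per-pixel minimum) and recovery.
    t = np.maximum(1 - omega * (img / A).min(axis=2), t0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)

# Synthetic check: add uniform haze to a scene, then remove it.
rng = np.random.default_rng(0)
clear = rng.uniform(0.0, 0.6, (32, 32, 3))
t_true, A_true = 0.6, 0.9
hazy = clear * t_true + A_true * (1 - t_true)
restored = dehaze_dark_channel(hazy)
```

    On a DM6467T-style split, the per-pixel loops here would map to the DSP core, with the ARM side handling capture, display and I/O via shared memory, matching the architecture the paper describes.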

  16. Maintenance mechanisms of the pipe model relationship and Leonardo da Vinci's rule in the branching architecture of Acer rufinerve trees.

    PubMed

    Sone, Kosei; Suzuki, Alata Antonio; Miyazawa, Shin-Ichi; Noguchi, Ko; Terashima, Ichiro

    2009-01-01

    The pipe model relationship (constancy of branch cross-sectional area/leaf area) and Leonardo da Vinci's rule (equality of the total cross-sectional area of the daughter branches and the cross-sectional area of their mother branch) are empirical rules of tree branching. Effects of branch manipulation on the pipe model relationships were examined using five Acer rufinerve trees. Half the branches in each tree were untreated (control branches, CBs), and, for the others (manipulated branches, MBs), either light intensity or leaf area (both relating to photosynthetic source activity), or shoot elongation (source + sink activities), was reduced, and responses of the pipe model relationships were followed for 2 years. The pipe model relationship in MBs changed under suppression of source activity, but not under simultaneous suppression of source + sink activities. The manipulations also affected CBs in the year of manipulation and both branch types in the next year. Branch diameter growth was most affected by light, followed by shoot elongation and leaf area, in that order. Because of the decussate phyllotaxis of A. rufinerve, one branching node can potentially bear one main and two lateral branches. Analysis of 295 branching nodes from 13 untreated trees revealed that da Vinci's rule held in branching nodes having one shed branch but not in nodes without branch shedding, indicating that natural shedding of branches is necessary for da Vinci's rule to hold. These analyses highlight the importance of the source-sink balance and branch shedding in the maintenance of these empirical rules. PMID:18690411
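
    Da Vinci's rule as stated above is a simple area identity: the summed cross-sectional area of the daughter branches equals that of the mother (equivalently, the squared diameters sum). A small numerical sketch, with hypothetical diameters rather than the paper's measurements:

```python
import math

def xsect_area(diameter):
    """Cross-sectional area of a branch of the given diameter."""
    return math.pi * (diameter / 2.0) ** 2

def davinci_ratio(mother_d, daughter_ds):
    """Summed daughter area divided by mother area; da Vinci's rule
    predicts a value near 1 (i.e. d_m^2 = sum of d_i^2)."""
    return sum(xsect_area(d) for d in daughter_ds) / xsect_area(mother_d)

# A 10 cm mother splitting into 6 cm and 8 cm daughters obeys the
# rule exactly, since 6^2 + 8^2 = 10^2.
ratio = davinci_ratio(10.0, [6.0, 8.0])
```

    In the study's terms, a node that retains all three potential branches would push this ratio above 1, while shedding one branch can restore it toward 1.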

  17. A midline sagittal brain view depicted in Da Vinci's "Saint Jerome in the wilderness".

    PubMed

    Valença, M M; Aragão, M de F V Vasco; Castillo, M

    2013-01-01

    It is estimated that around the year 1480 Leonardo da Vinci painted Saint Jerome in the Wilderness, representing the saint during his years of retreat in the Syrian desert where he lived the life of a hermit. One may interpret Leonardo's Saint Jerome in the Wilderness as St. Jerome practicing self-chastisement with a stone in his right hand, seemingly punching his chest repeatedly. The stone, the lion and a cardinal's hat are conventionally linked to the saint. A skull was also almost always present with the image of the saint, symbolically representing penance. With careful analysis of the painting one can identify the skull, which is hidden in an arc represented as a lion's tail. The image is of a hemicranium (midline sagittal view) showing the intracranial dura, including the falx and tentorium, and the venous system with the sinuses and major deep veins. This may have been the first time the intracranial sinuses and the major deep venous vessels were illustrated. PMID:23971176

  18. Robot-Assisted Cardiac Surgery Using the Da Vinci Surgical System: A Single Center Experience

    PubMed Central

    Kim, Eung Re; Lim, Cheong; Kim, Dong Jin; Kim, Jun Sung; Park, Kay Hyun

    2015-01-01

    Background We report our initial experiences of robot-assisted cardiac surgery using the da Vinci Surgical System. Methods Between February 2010 and March 2014, 50 consecutive patients underwent minimally invasive robot-assisted cardiac surgery. Results Robot-assisted cardiac surgery was employed in two cases of minimally invasive direct coronary artery bypass, 17 cases of mitral valve repair, 10 cases of cardiac myxoma removal, 20 cases of atrial septal defect repair, and one isolated CryoMaze procedure. Average cardiopulmonary bypass time and average aorta cross-clamping time were 194.8±48.6 minutes and 126.1±22.6 minutes in mitral valve repair operations and 132.0±32.0 minutes and 76.1±23.1 minutes in myxoma removal operations, respectively. During atrial septal defect closure operations, the average cardiopulmonary bypass time was 128.3±43.1 minutes. The median length of stay was between five and seven days. The only complication was that one patient needed reoperation to address bleeding. There were no hospital mortalities. Conclusion Robot-assisted cardiac surgery is safe and effective for mitral valve repair, atrial septal defect closure, and cardiac myxoma removal surgery. Reducing operative time depends heavily on the experience of the entire robotic surgical team. PMID:25883892

  19. Side docking of the da Vinci robotic system for radical prostatectomy: advantages over traditional docking.

    PubMed

    Cestari, Andrea; Ferrari, Matteo; Zanoni, Matteo; Sangalli, Mattia; Ghezzi, Massimo; Fabbri, Fabio; Sozzi, Francesco; Rigatti, Patrizio

    2015-09-01

    The standard low lithotomy position used during robot-assisted radical prostatectomy (RARP), with prolonged positioning in stirrups together with steep Trendelenburg, may expose the patient to neurapraxia phenomena of the lower limbs and can rarely be used in patients with problems of hip abduction. To overcome these hurdles, we evaluated the clinical benefits of "side docking" (SD) of the da Vinci(®) robotic system in comparison to "traditional docking" (TD). A cohort of 120 patients submitted to RARP were prospectively randomized into two groups by docking approach: SD, with the patient supine and the lower limbs slightly abducted on the operating table, and TD. Docking time, intraoperative number of collisions between the robotic arms and postoperative neurological problems in the lower limbs were noted. Descriptive statistics were used to analyze outcomes. Docking time was shorter for the SD group [SD: median 13 min (range 10-18); TD: median 21 min (range 15-34)]. None in the SD group and six of 60 patients (10%) in the TD group suffered from temporary (<30 days) unilateral neurological deficits of the lower limbs. In both groups no collisions between the robotic arms occurred. The SD approach is technically feasible. It does not cause collisions between the robotic arms, and it is a reliable method for reducing the setup time of RARP. The supine position of the patient may prevent neurological complications of the lower limbs. Based on these results, SD has become the standard docking technique used by our department. PMID:26531205

  20. Single-Port Surgery: Laboratory Experience with the daVinci Single-Site Platform

    PubMed Central

    Haber, Georges-Pascal; Kaouk, Jihad; Kroh, Matthew; Chalikonda, Sricharan; Falcone, Tommaso

    2011-01-01

    Background and Objectives: The purpose of this study was to evaluate the feasibility and validity of a dedicated da Vinci single-port platform in the porcine model in the performance of gynecologic surgery. Methods: This pilot study was conducted in 4 female pigs. All pigs had a general anesthetic and were placed in the supine and flank position. A 2-cm umbilical incision was made, through which a robotic single-port device was placed and pneumoperitoneum obtained. A data set was collected for each procedure and included port placement time, docking time, operative time, blood loss, and complications. Operative times were compared between cases and procedures by use of the Student t test. Results: A total of 28 surgical procedures (8 oophorectomies, 4 hysterectomies, 8 pelvic lymph node dissections, 4 aorto-caval nodal dissections, 2 bladder repairs, 1 uterine horn anastomosis, and 1 radical cystectomy) were performed. There was no statistically significant difference in operating times for symmetrical procedures among animals (P=0.3215). Conclusions: This animal study demonstrates that single-port robotic surgery using a dedicated single-site platform allows performing technically challenging procedures within acceptable operative times and without complications or insertion of additional trocars. PMID:21902962

  1. AB139. Web promotion of da Vinci robotic prostatectomy exhibits varying sexual health information

    PubMed Central

    Matsushita, Kazuhito; Endo, Fumiyasu; Shimbo, Masaki; Kyono, Yoko; Anan, Go; Komatsu, Kenji; Muraishi, Osamu; Hattori, Kazunori

    2015-01-01

    Objective The surgical robot has been widely adopted in Japan in spite of its high cost and the controversy surrounding its benefit. Accordingly, robotic prostatectomy has become one of the most common treatment options for prostate cancer. The evidence isn’t strong enough to determine whether or not a robot is better than traditional minimally invasive surgery, but the evidence indicates that it is better compared to open surgery. The da Vinci surgical system continues to be the shiny new toy that hospitals boast about on websites. Clinical experience shows that patients are often given unrealistic expectations, especially about the long-term outcomes for erectile function, as well as the time to recovery of erectile function. Sexual dysfunctions other than ED, e.g., orgasmic pain, reduced orgasmic intensity, orgasm-associated incontinence, and penile volume changes with the development of Peyronie’s disease, are rarely mentioned to a patient before radical prostatectomy. Hospitals advertise their da Vinci machines in part as a response to perceived consumer demand. The purpose of this analysis was to survey websites for robotic prostatectomy to evaluate the quality of the information found there as it pertains to the outcome for ED. Methods We identified 168 centers in Japan that were using robotic prostatectomy. Their websites were reviewed for the following factors: Has information been copied directly from the Intuitive Surgical website? Is ED mentioned as a complication of robotic prostatectomy? Is it suggested that the robotic prostatectomy approach is better than the other technique? Are there references to support the values mentioned? Did the sites give realistic expectations about time to recovery and overall recovery? Are the ED rates cited within the published rates? Are the ED rates cited specific to the individual site? Did the site mention ED treatment options? Did the site mention sexual dysfunction other than ED?
Results Of the 168 centers' websites reviewed, 63 (38%) were academic, while 105 (62%) were community-based. On their websites, 59% of the hospitals posted only robotic prostatectomy as a treatment option for prostate cancer. Over half (55%) of the centers' websites suggested that functional outcomes were better for robotic prostatectomy than for the other approach; the rates were 58.7% for academic and 53.3% for community-based centers (P=0.525). ED was mentioned by only 15%; 28.6% of academic centers mentioned ED compared with 7.6% of community-based centers (P<0.05). Realistic expectations about time to recovery for erectile function and overall recovery were mentioned by only 4%. Only 2% offered ED rates specific to the individual site. Conclusions To make informed decisions about their medical care, patients need unbiased, evidence-based information about the benefits and risks of different treatment options. The Internet is a major source of information for prostate cancer patients. However, our study of how hospitals talk about robotic surgery found that many copied directly from Intuitive’s marketing materials. Many centers claimed benefits that were unsupported by evidence and only a minority pointed to potential risks. A balanced presentation of outcomes expected after robotic prostatectomy is necessary to allow patients to make informed treatment decisions and to help them set realistic expectations, which will improve their satisfaction and minimize any regret.

  2. The LEONARDO-DA-VINCI pilot project "e-learning-assistant" - Situation-based learning in nursing education.

    PubMed

    Pfefferle, Petra Ina; Van den Stock, Etienne; Nauerth, Annette

    2010-07-01

    E-learning will play an important role in the training portfolio of students in higher and vocational education. Within the LEONARDO-DA-VINCI action programme, transnational pilot projects were funded by the European Union with the aim of improving the usage and quality of e-learning tools in education and professional training. The overall aim of the LEONARDO-DA-VINCI pilot project "e-learning-assistant" was to create new didactical and technical e-learning tools for Europe-wide use in nursing education. Based on a new situation-oriented learning approach, nursing teachers enrolled in the project were instructed to adapt, develop and implement e-learning and blended-learning units. In line with the training contents, nursing modules were developed by teachers from partner institutions, implemented in the project centers and evaluated by students. The user package "e-learning-assistant", the product of the project, includes two teacher-training units, the authoring tool "synapse" for creating situation-based e-learning units, a students' learning platform containing blended-learning modules in nursing, and an open-source web-based communication centre. PMID:19883959

  3. Leonardo da Vinci's drapery studies: characterization of lead white pigments by µ-XRD and 2D scanning XRF

    NASA Astrophysics Data System (ADS)

    Gonzalez, Victor; Calligaro, Thomas; Pichon, Laurent; Wallez, Gilles; Mottin, Bruno

    2015-11-01

    This work focuses on the composition and microstructure of the lead white pigment employed in a set of artworks, using a combination of µ-XRD and 2D scanning XRF applied directly to five drapery studies attributed to Leonardo da Vinci (1452-1519) and conserved in the Département des Arts Graphiques, Musée du Louvre, and in the Musée des Beaux-Arts de Rennes. Trace elements present in the composition as well as in the lead white highlights were imaged by 2D scanning XRF. Mineral phases were determined in a fully noninvasive way using a special µ-XRD diffractometer. Phase proportions were estimated by Rietveld refinement. The analytical results obtained will help differentiate lead white qualities and highlight the artist's technique.

  4. Realization and optimization of AES algorithm on the TMS320DM6446 based on DaVinci technology

    NASA Astrophysics Data System (ADS)

    Jia, Wen-bin; Xiao, Fu-hai

    2013-03-01

    The application of the AES algorithm in a digital cinema system protects video data from illegal theft or malicious tampering, solving its security problems. To meet the requirements for real-time, scene-based and transparent encryption of high-speed audio and video data streams in the information security field, this paper analyzes the AES algorithm in depth and, based on the TMS320DM6446 hardware platform and the DaVinci software framework, proposes concrete implementation methods for the AES algorithm in a digital video system together with optimization solutions. The test results show that digital movies encrypted with AES128 cannot be played normally, which ensures the security of digital movies. Comparing the performance of the AES128 algorithm before and after optimization verifies the correctness and validity of the improved algorithm.
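The abstract describes transparent, symmetric stream encryption of video data. The paper's DSP implementation is not shown; as a hedged stand-in (this is not real AES and is not cryptographically secure), a hash-based CTR-style keystream illustrates the transparent-encryption idea, where the same XOR operation both encrypts and decrypts a frame:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    # CTR-style keystream: hash(key || counter), block by block.
    # Stand-in for the AES-128 block cipher used in the paper; NOT secure.
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_crypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice restores the plaintext,
    # which is what makes transparent en/decryption of a stream possible
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

frame = b"fake video frame payload"       # hypothetical frame payload
key = b"movie-session-key"                # hypothetical session key
ciphertext = xor_crypt(key, frame)        # an encrypted frame will not "play"
recovered = xor_crypt(key, ciphertext)    # same operation decrypts
```

In a CTR-mode design like this, only the block primitive changes between the sketch and a real system; swapping the hash for AES-128 on the DM6446's accelerator would preserve the surrounding stream logic.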

  5. How did Leonardo perceive himself? Metric iconography of da Vinci's self-portraits

    NASA Astrophysics Data System (ADS)

    Tyler, Christopher W.

    2010-02-01

    Some eighteen portraits of Leonardo in old age are now recognized, consolidating the impression from his best-established self-portrait of an old man with long white hair and beard. However, his appearance when younger is generally regarded as unknown, although he was described as very beautiful as a youth. Applying the principles of metric iconography, the quantitative analysis of painted images, provides an avenue for identifying other works that may be proposed as valid portraits of Leonardo at various stages of his life, by himself and by his contemporaries. Overall, this approach identifies portraits of Leonardo by Verrocchio, Raphael, Botticelli, and others. Beyond this physiognomic analysis, Leonardo's first known drawing provides further insight into his core motivations. Topographic considerations make clear that the drawing is of the hills behind Vinci, with a view overlooking the rocky promontory of the town and the plain stretching out before it. The outcroppings in the foreground bear a striking resemblance to those of his unique composition, 'The Virgin of the Rocks', suggesting a deep childhood appreciation of this wild terrain, and an identification with that religious man of the mountains, John the Baptist, who was also the topic of Leonardo's last known painting. Following this trail leads to a line of possible self-portraits continuing the age-regression concept back to a self-view at about two years of age.

  6. 2D and 3D optical diagnostic techniques applied to Madonna dei Fusi by Leonardo da Vinci

    NASA Astrophysics Data System (ADS)

    Fontana, R.; Gambino, M. C.; Greco, M.; Marras, L.; Materazzi, M.; Pampaloni, E.; Pelagotti, A.; Pezzati, L.; Poggi, P.; Sanapo, C.

    2005-06-01

    3D measurement and modelling have traditionally been applied to statues, buildings, archeological sites and similar large structures, but rarely to paintings. Recently, however, 3D measurements have also been performed successfully on easel paintings, making it possible to detect and document the painting's surface. We used 3D models to integrate the results of various 2D imaging techniques on a common reference frame. These applications show how 3D shape information, complemented with 2D colour maps as well as other types of sensory data, provides the most interesting information. The 3D data acquisition was carried out by means of two devices: a high-resolution laser micro-profilometer, composed of a commercial distance meter mounted on a scanning device, and a laser-line scanner. The 2D data acquisitions were carried out using a scanning device for simultaneous RGB colour imaging and IR reflectography, and a UV fluorescence multispectral image acquisition system. We present here the results of the techniques described, applied to the analysis of an important painting of the Italian Renaissance: 'Madonna dei Fusi', attributed to Leonardo da Vinci.

  7. The oldest anatomical handmade skull of the world c. 1508: 'the ugliness of growing old' attributed to Leonardo da Vinci.

    PubMed

    Missinne, Stefaan J

    2014-06-01

    The author discusses a previously unknown early sixteenth-century Renaissance handmade anatomical miniature skull. The small, naturalistic skull, made from an agate (calcedonia) stone mixture (mistioni), shows remarkable osteologic details. Dr. Saban was the first to link the skull to Leonardo. The three-dimensional perspective and the search for the senso comune are discussed. Anatomical errors both in Leonardo's drawings and in this skull are presented. The article ends with the issues of physiognomy, his grotesque faces, the Perspectiva Communis, and his experimenting c. 1508 with the stone mixture and the human skull. Evidence is presented, including the Italian scale based on Crazie and Braccia, a chemical analysis leading to a mine in Volterra, and Leonardo's search for the soul in the skull. Written references in the inventory of Salai (1524), the inventory of the Villa Riposo (Raffaello Borghini 1584) and Don Ambrogio Mazenta (1635) are reviewed. The author attributes the skull, c. 1508, to Leonardo da Vinci. PMID:24853982

  8. [Telemanipulatory application of a magnetic coupler on the beating heart with the daVinci surgical system].

    PubMed

    Stein, H; Ikeda, M; Jacobs, St; Lilagan, P; Walther, C; Rastan, A; Mohr, F W; Falk, V

    2003-09-01

    The construction of a coronary anastomosis on the beating heart under totally endoscopic conditions is technically demanding. In this study, the potential benefits of an endoscopic magnetic vascular coupler (MVP, Ventrica, Inc., Fremont, CA) designed to facilitate construction of a coronary anastomosis with the help of the daVinci telemanipulator (Intuitive Surgical Inc., Sunnyvale, CA) were evaluated in totally endoscopic coronary artery bypass (TECAB) operations on the beating heart in eight dogs. The telemanipulated instruments were used to guide and place the endoscopic MVP application platform (prototype). All animals underwent angiography, and gross inspection of the anastomotic site was performed after excision of the hearts. The procedure was accomplished in 169 minutes (range 155-190). With the exception of one premature deployment, all MVP anastomoses were accomplished in 3 minutes (range 1-28). The following adverse events were encountered: bleeding from the right ventricle caused by the occlusion tape (1), anastomotic leakage upon reperfusion requiring repair stitches (2), and anastomotic occlusion due to a thrombus (1). All animals survived the procedure except one, which died on reperfusion despite a patent graft and anastomosis. Overall patency was 7 out of 8. The combination of telemanipulator technology, which allows increased manipulation dexterity in a totally endoscopic environment, with the effective and time-saving magnetic technique for anastomotic coupling has the potential to facilitate TECAB on the beating heart. PMID:14526450

  9. The mother relationship and artistic inhibition in the lives of Leonardo da Vinci and Erik H. Erikson.

    PubMed

    Capps, Donald

    2008-12-01

    In four earlier articles, I focused on the theme of the relationship of melancholia and the mother, and suggested that the melancholic self may experience humor (Capps, 2007a), play (Capps, 2007b), dreams (Capps, 2008a), and art (Capps, 2008b) as restorative resources. I argued that Erik H. Erikson found these resources to be valuable remedies for his own melancholic condition, which had its origins in the fact that he was illegitimate and was raised solely by his mother until he was three years old, when she remarried. In this article, I focus on two themes in Freud's Leonardo da Vinci and a memory of his childhood (1964): Leonardo's relationship with his mother in early childhood and his inhibitions as an artist. I relate these two themes to Erikson's own early childhood and his failure to achieve his goal as an aspiring artist in his early twenties. The article concludes with a discussion of Erikson's frustrated aspirations to become an artist and his emphasis, in his psychoanalytic work, on children's play. PMID:19093682

  10. Amid the possible causes of a very famous foxing: molecular and microscopic insight into Leonardo da Vinci's self-portrait.

    PubMed

    Piñar, Guadalupe; Tafer, Hakim; Sterflinger, Katja; Pinzari, Flavia

    2015-12-01

    Leonardo da Vinci's self-portrait is affected by foxing spots. The portrait has no fungal or bacterial infection in place, but it is contaminated with airborne spores and fungal material that could play a role in its disfigurement. Knowing the nature of the stains is of great concern because future conservation treatments should be derived from scientific investigations. The lack of reliable scientific data, due to the non-culturability of the microorganisms inhabiting the portrait, prompted the investigation of the drawing using non-invasive and micro-invasive sampling, in combination with scanning electron microscope (SEM) imaging and molecular techniques. The fungus Eurotium halophilicum was found in foxing spots using SEM analyses. Oxalates of fungal origin were also documented. Both findings are consistent with the hypothesis that tonophilic fungi germinate on paper by metabolizing organic acids, oligosaccharides and proteic compounds, which react chemically with the material at low water activity, forming brown products, with oxidative reactions resulting in foxing spots. Additionally, molecular techniques enabled a screening of the fungi inhabiting the portrait and showed differences when different sampling techniques were employed. Swab samples showed a high abundance of lichenized Ascomycota, while the membrane filters showed a dominance of Acremonium sp. colonizing the drawing. PMID:26111623

  11. Michelangelo in Florence, Leonardo in Vinci.

    ERIC Educational Resources Information Center

    Herberholz, Barbara

    2003-01-01

    Provides background information on the lives and works of Michelangelo and Leonardo da Vinci. Focuses on the artwork of the artists and the museums where their work is displayed. Includes museum photographs of their work. (CMK)

  12. Peri-operative comparison between daVinci-assisted radical prostatectomy and open radical prostatectomy in obese patients

    NASA Astrophysics Data System (ADS)

    Le, Carter Q.; Ho, Khai-Linh V.; Slezak, Jeffrey M.; Blute, Michael L.; Gettman, Matthew T.

    2007-02-01

    Introduction: While the effects of increasing body mass index on prostate cancer epidemiology and surgical approach have recently been studied, its effects on surgical outcomes are less clear. We studied the perioperative outcomes of obese (BMI >= 30) men treated with daVinci-assisted laparoscopic radical prostatectomy (DLP) and compared them to those of men treated with open radical retropubic prostatectomy (RRP) in a contemporary time frame. Method: After Institutional Review Board approval, we used the Mayo Clinic Radical Prostatectomy database to identify patients who had undergone DLP by a single surgeon and those who had undergone open RRP by a single surgeon between December 2002 and March 2005. Baseline demographics, peri- and post-operative courses, and complications were collected by retrospective chart review, and variables from the two cohorts were compared using the chi-square method and the least-squares method of linear regression where appropriate. Results: 59 patients who underwent DLP and 76 who underwent RRP were available for study. Baseline demographics were not statistically different between the two cohorts. Although the DLP cohort had a significantly lower clinical stage than the RRP cohort (p=0.02), pathological stage was not statistically different (p=0.10). Transfusion rates, hospital stay, overall complications, and pathological Gleason score were also not significantly different, nor were PSA progression, positive margin rate, or continence at 1 year. After bilateral nerve-sparing, the rate of erections suitable for intercourse with or without therapy at 1 year was 88.5% (23/26) for DLP and 61.2% (30/49) for RRP (p=0.01). Follow-up time was similar. Conclusion: For obese patients, DLP appears to have similar perioperative as well as short-term oncologic and functional outcomes when compared to open RRP.
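The two cohorts above are compared with the chi-square method. As a rough numerical check of the reported potency difference (23/26 for DLP vs 30/49 for RRP, p=0.01), a minimal Pearson chi-square for a 2x2 table can be sketched in closed form (no continuity correction; an illustrative re-computation, not the authors' code):

```python
def chi2_2x2(a: int, b: int, c: int, d: int) -> float:
    # Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# potency at 1 year: DLP 23 of 26, RRP 30 of 49 (figures from the abstract)
stat = chi2_2x2(23, 26 - 23, 30, 49 - 30)
# stat comes out near 6.08, above 3.84 (the 5% critical value for 1 df),
# so the difference is significant at the 0.05 level, in line with p=0.01
```

The same helper reproduces the non-significant comparisons in the abstract when fed the corresponding counts.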

  13. Physics-based stereoscopic suturing simulation with force feedback and continuous multipoint interactions for training on the da Vinci surgical system.

    PubMed

    Deo, Dhanannjay; De, Suvranu; Singh, Tejinder P

    2007-01-01

    In this paper we present a 3D stereoscopic, bimanual surgical suturing system with a realistic thread model and physics-based force feedback, used for training surgical residents at Albany Medical College in the use of the da Vinci surgical system. A novel algorithm is developed to calculate tissue deformations at multiple points due to both the frictional pull resulting from the passage of the thread through the tissue and the additional forces applied by both hands of the user on the thread or tissues, enabling, for the first time, continuous sutures and knot tying with force feedback. Two Phantom Premium 1.0 devices (Sensable Technologies) are used for force feedback. Planar Systems' dual-monitor stereo vision system is used for simultaneous rendering of left- and right-eye views, facilitating 3D rendering. PMID:17377247

  14. A psychoanalytic understanding of the desire for knowledge as reflected in Freud's Leonardo da Vinci and a memory of his childhood.

    PubMed

    Blass, Rachel B

    2006-10-01

    The author offers an understanding of the psychoanalytic notion of the desire for knowledge and the possibility of attaining it as it finds expression in Freud's Leonardo da Vinci and a memory of his childhood. This understanding has not been explicitly articulated by Freud but may be considered integral to psychoanalysis' Weltanschauung as shaped by Freud's legacy. It emerges through an attempt to explain basic shifts, contradictions, inconsistencies and tensions that become apparent from a close reading of the text of Leonardo. Articulating this implicit understanding of knowledge provides the grounds for a stance on epistemology that is integral to psychoanalysis and relevant to contemporary psychoanalytic concerns on this topic. This epistemology focuses on the necessary involvement of passion, rather than detachment, in the search for knowledge and views the psychoanalytic aim of self-knowledge as a derivative, and most immediate expression, of a broader and more basic human drive to know. PMID:16997725

  15. Elastography Using Multi-Stream GPU: An Application to Online Tracked Ultrasound Elastography, In-Vivo and the da Vinci Surgical System

    PubMed Central

    Deshmukh, Nishikant P.; Kang, Hyun Jae; Billings, Seth D.; Taylor, Russell H.; Hager, Gregory D.; Boctor, Emad M.

    2014-01-01

    A system for real-time ultrasound (US) elastography will advance interventions for the diagnosis and treatment of cancer by advancing methods such as thermal monitoring of tissue ablation. A multi-stream graphics processing unit (GPU) based accelerated normalized cross-correlation (NCC) elastography, with a maximum frame rate of 78 frames per second, is presented in this paper. A study of NCC window size is undertaken to determine the effect on frame rate and the quality of output elastography images. This paper also presents a novel system for Online Tracked Ultrasound Elastography (O-TRuE), which extends prior work on an offline method. By tracking the US probe with an electromagnetic (EM) tracker, the system selects in-plane radio frequency (RF) data frames for generating high quality elastograms. A novel method for evaluating the quality of an elastography output stream is presented, suggesting that O-TRuE generates more stable elastograms than those generated by untracked, free-hand palpation. Since EM tracking cannot be used in all systems, an integration of real-time elastography and the da Vinci Surgical System is presented and evaluated for elastography stream quality based on our metric. The da Vinci surgical robot is outfitted with a laparoscopic US probe, and palpation motions are autonomously generated by customized software. It is found that a stable output stream can be achieved, which is affected by both the frequency and amplitude of palpation. The GPU framework is validated using data from in-vivo pig liver ablation; the generated elastography images identify the ablated region, outlined more clearly than in the corresponding B-mode US images. PMID:25541954
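NCC elastography estimates tissue displacement by finding, for each window of pre-compression RF data, the shift that maximizes normalized cross-correlation in the post-compression frame. A minimal pure-Python sketch of that core step (the paper's GPU version parallelizes this over many windows and streams; the names and synthetic signal here are illustrative):

```python
import math

def ncc(a, b):
    # normalized cross-correlation of two equal-length RF windows
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db)

def estimate_shift(pre_win, post, start, search):
    # displacement estimate: the lag maximizing NCC within +/- search samples
    w = len(pre_win)
    scores = {s: ncc(pre_win, post[start + s : start + s + w])
              for s in range(-search, search + 1)}
    return max(scores, key=scores.get)

# synthetic RF line shifted by 3 samples between pre and post frames
sig = [math.sin(0.3 * i) for i in range(100)]
post = [0.0, 0.0, 0.0] + sig
shift = estimate_shift(sig[40:50], post, 40, 5)
```

Repeating this over a grid of windows yields the displacement field whose gradient is displayed as the elastogram; the window size trade-off studied in the paper corresponds to `w` here.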

  17. Da Vinci Coding? Using Renaissance Artists' Depictions of the Brain to Engage Student Interest in Neuroanatomy

    PubMed Central

    Watson, Todd D.

    2013-01-01

    This report describes a pair of brief, interactive classroom exercises utilizing Renaissance artists' depictions of the brain to help increase student interest in learning basic neuroanatomy. Undergraduate students provided anonymous quantitative evaluations of both exercises. The feedback data suggest that students found both exercises engaging. The data also suggest that the first exercise increased student interest in learning more about neuroanatomy in general, while the second provided useful practice in identifying major neuroanatomical structures. Overall, the data suggest that these exercises may be a useful addition to courses that introduce or review neuroanatomical concepts. PMID:23805058

  18. Chemical characterization and source apportionment of fine and coarse particulate matter inside the refectory of Santa Maria Delle Grazie Church, home of Leonardo Da Vinci's "Last Supper".

    PubMed

    Daher, Nancy; Ruprecht, Ario; Invernizzi, Giovanni; De Marco, Cinzia; Miller-Schulze, Justin; Heo, Jong Bae; Shafer, Martin M; Schauer, James J; Sioutas, Constantinos

    2011-12-15

    The association between exposure to indoor particulate matter (PM) and damage to cultural assets has been of primary relevance to museum conservators. PM-induced damage to the "Last Supper" painting, one of Leonardo da Vinci's most famous artworks, has been a major concern, given the location of this masterpiece inside a refectory in the city center of Milan, one of Europe's most polluted cities. To assess this risk, a one-year sampling campaign was conducted at indoor and outdoor sites of the painting's location, where time-integrated fine and coarse PM (PM(2.5) and PM(2.5-10)) samples were simultaneously collected. Findings showed that PM(2.5) and PM(2.5-10) concentrations were reduced indoors by 88 and 94% on a yearly average basis, respectively. This large reduction is mainly attributed to the efficacy of the deployed ventilation system in removing particles. Furthermore, PM(2.5) dominated indoor particle levels, with organic matter as the most abundant species. Next, the chemical mass balance model was applied to apportion primary and secondary sources to monthly indoor fine organic carbon (OC) and PM mass. Results revealed that gasoline vehicles, urban soil, and wood-smoke only contributed to an annual average of 11.2 ± 3.7% of OC mass. Tracers for these major sources had minimal infiltration factors. On the other hand, fatty acids and squalane had high indoor-to-outdoor concentration ratios with fatty acids showing a good correlation with indoor OC, implying a common indoor source. PMID:22070580
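The chemical mass balance model mentioned above fits ambient species concentrations as a linear combination of fixed source profiles, typically by least squares. A toy two-source, three-species version sketches the idea (all profiles and numbers are hypothetical, not the study's data):

```python
def solve_2x2(m, v):
    # Cramer's rule for a 2x2 linear system m @ x = v
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(v[0] * m[1][1] - v[1] * m[0][1]) / det,
            (m[0][0] * v[1] - m[1][0] * v[0]) / det]

def cmb(F, c):
    # ordinary least squares via the normal equations F^T F x = F^T c
    FtF = [[sum(F[k][i] * F[k][j] for k in range(len(F))) for j in range(2)]
           for i in range(2)]
    Ftc = [sum(F[k][i] * c[k] for k in range(len(F))) for i in range(2)]
    return solve_2x2(FtF, Ftc)

# hypothetical profiles: mass fraction of 3 tracer species per unit source mass
F = [[0.5, 0.1],
     [0.2, 0.4],
     [0.1, 0.3]]
true_contrib = [10.0, 5.0]    # source contributions in ug/m3 (made up)
c = [sum(F[k][j] * true_contrib[j] for j in range(2)) for k in range(3)]
est = cmb(F, c)               # recovers the contributions from the mixture
```

Real CMB applications weight each species by its measurement uncertainty and use many more tracers, but the linear-algebra core is the same.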

  19. Avoiding Steric Congestion in Dendrimer Growth through Proportionate Branching. A Twist on da Vinci's Rule of Tree Branching

    SciTech Connect

    Yue, Xuyi; Taraban, Marc B.; Hyland, Laura L.; Yu, Yihua Bruce

    2012-10-05

    In making defect-free macromolecules, the challenge lies in the chemical synthesis. This challenge is especially pronounced in dendrimer synthesis, where exponential growth quickly leads to steric congestion. To overcome this difficulty, proportionate branching in dendrimer growth is proposed. In proportionate branching, the number of branches increases exponentially while their length changes geometrically in the opposite direction, mimicking tree growth. The effectiveness of this strategy is demonstrated through the synthesis of a fluorocarbon dendron containing 243 chemically identical fluorine atoms with a MW of 9082 Da. Monodispersity is confirmed by nuclear magnetic resonance spectroscopy, mass spectrometry, and small-angle X-ray scattering. Moreover, growing different parts proportionately, as nature does, could be a general strategy to achieve defect-free synthesis of macromolecules.
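As a numerical companion, the 243 identical fluorine atoms reported are consistent with an ideal three-fold dendron carried through five generations (3^5 = 243). The sketch below also illustrates a geometric length-scaling rule in the spirit of the tree analogy; the scaling rule and names are illustrative assumptions, not the paper's model:

```python
def termini(branching: int, generations: int) -> int:
    # terminal-group count of an ideal, defect-free dendron: b ** g
    return branching ** generations

def branch_length(base: float, ratio: float, generation: int) -> float:
    # proportionate branching: branch length scales geometrically with
    # generation, opposite in direction to the branch count (assumed rule)
    return base * ratio ** generation

count = termini(3, 5)                  # 243, matching the reported count
outer = branch_length(12.0, 0.5, 5)    # hypothetical units
```

The exponential growth of `termini` is exactly what makes steric congestion the limiting factor in conventional dendrimer synthesis.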

  20. Reforming Upper Secondary Education in Europe. The Leonardo da Vinci Project Post-16 Strategies. Surveys of Strategies for Post-16 Education To Improve the Parity of Esteem for Initial Vocational Education in Eight European Educational Systems. Theory into Practice 92. Institute for Educational Research Publication Series B.

    ERIC Educational Resources Information Center

    Lasonen, Johanna, Ed.

    This book contains the following papers on the Leonardo da Vinci project: "Looking for Post-16 Education Strategies for Parity of Esteem in Europe" (Lasonen); "Improving Parity of Esteem as a Policy Goal" (Makinen, Volanen); "Alternative Strategies for Parity of Esteem between General/Academic and Vocational Education in Europe" (Kamarainen);…

  1. What Is the Moral Imperative of Workplace Learning: Unlocking the DaVinci Code of Human Resource Development?

    ERIC Educational Resources Information Center

    Short, Tom

    2006-01-01

    In the course of the author's doctoral study, he is exploring the strategic linkages between learning activities in the modern workplace and the long-term success they bring to organisations. For many years, this challenge has been the Holy Grail of human resource (HR) development practitioners, who invest heavily on training and professional…

  2. VINCI: the VLT Interferometer commissioning instrument

    NASA Astrophysics Data System (ADS)

    Kervella, Pierre; Coudé du Foresto, Vincent; Glindemann, Andreas; Hofmann, Reiner

    2000-07-01

    The Very Large Telescope Interferometer (VLTI) is a complex system made of a large number of separate elements. To prepare for an early successful operation, it will require a period of extensive testing and verification to ensure that the many devices involved work properly together and can produce meaningful data. This paper describes the concept chosen for the VLTI commissioning instrument, LEONARDO da VINCI, and details its functionalities. It is a fiber-based two-way beam combiner, associated with an artificial star and an alignment verification unit. The technical commissioning of the VLTI is foreseen as a stepwise process: fringes will first be obtained with the commissioning instrument in an autonomous mode (no other parts of the VLTI involved); then the VLTI telescopes and optical trains will be tested in autocollimation; finally, fringes will be observed on the sky.
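Commissioning success in each of the steps above is judged by obtaining fringes, whose quality is commonly quantified by the fringe contrast (visibility) V = (Imax - Imin) / (Imax + Imin). A minimal sketch on a simulated two-beam intensity scan (the scan model is illustrative, not VINCI's actual data path):

```python
import math

def fringe_visibility(intensities):
    # fringe contrast V = (Imax - Imin) / (Imax + Imin)
    imax, imin = max(intensities), min(intensities)
    return (imax - imin) / (imax + imin)

# simulated scan through the fringe pattern of two equal beams: V approaches 1
scan = [2.0 + 2.0 * math.cos(0.1 * i) for i in range(200)]
v = fringe_visibility(scan)
```

Unequal beam intensities or instrumental losses reduce V below 1, which is why a combiner is characterized on an artificial star before fringes are attempted on the sky.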

  3. Tourism. Leonardo da Vinci Series: Good Practices.

    ERIC Educational Resources Information Center

    Commission of the European Communities, Brussels (Belgium). Directorate-General for Education and Culture.

    This brochure, part of a series about good practices in vocational training in the European Union, describes 10 projects that have promoted investment in human resources through training in the tourism sector to promote sustainable, or responsible, tourism. The projects and their countries of origin are as follows: (1) BEEFT, training of mobility…

  4. [Studies of vision by Leonardo da Vinci].

    PubMed

    Berggren, L

    2001-01-01

    Leonardo was an advocate of the intromission theory of vision. Light rays from the object to the eye caused visual perceptions, which were transported to the brain ventricles via a hollow optic nerve. Leonardo introduced wax injections to explore the ventricular system. Perceptions were assumed to go to the "senso comune" in the middle (3rd) ventricle, also the seat of the soul. The processing station "imprensiva" in the anterior lateral horns, together with memory ("memoria") in the posterior (4th) ventricle, integrated the visual perceptions into visual experience. - Leonardo's sketches with circular lenses in the center of the eye reveal that his dependence on medieval optics prevailed over anatomical observation. Drawings of the anatomy of the sectioned eye are missing, although Leonardo had invented a new embedding technique: in order to dissect the eye without spilling its contents, the eye was first boiled in egg white and then cut. When the procedure was repeated, it showed that the ovoid lens had become spherical after boiling. - Leonardo described how light rays were refracted and reflected in the eye, but his imperfect anatomy prevented a development of physiological optics. He was, however, the first to compare the eye with a pin-hole camera (camera obscura). Leonardo's drawings of the inverted pictures on the back wall of a camera obscura inspired its use as an instrument for artistic practice. The camera obscura was for centuries a model for explaining human vision. PMID:11824410

  5. Leonardo Da Vinci, the genius and the monsters. Casual encounters?

    PubMed

    Ciseri, Lorenzo Montemagno

    2014-01-01

    This article analyses Leonardo's interest in monsters and deformed reality, one of the lesser known aspects of his vast and multifaceted output. With the possible exception of his studies of physiognomy, relevant drawings, sketches and short stories represent a marginal aspect of his work, but they are nevertheless significant for historians of teratology. The purpose of this study is to provide a broad overview of the relationship between Leonardo and both the literature on mythological monsters and the reports on monstrous births that he either read about or witnessed personally. While aspects of his appreciation and attention to beauty and the pursuit of perfection and good proportions are the elements most emphasised in Leonardo's work, other no less interesting aspects related to deformity have been considered of marginal importance. My analysis will demonstrate that Leonardo approached the realm of monstrosity as if he considered abnormality a mirror of normality, deformity a mirror of harmony, and disease a mirror of health, as if to emphasise that, ultimately, it is the monster that gives the world the gift of normality. Two special cases of monstrosity are analysed: the famous monster of Ravenna, whose image was found among his papers, and a very rare case of parasitic conjoined twins (thoracopagus parasiticus) portrayed for the first time alive, probably in Florence, by Leonardo himself. PMID:25702382

  6. Possible role of DaVinci Robot in uterine transplantation

    PubMed Central

    Iavazzo, Christos; Gkegkes, Ioannis D.

    2015-01-01

    Minimally invasive surgery, specifically robotic surgery, has become a common technique used by gynecological surgeons over the last decade. The realization of the first human uterine transplantation opened new perspectives in the treatment of uterine agenesis or infertility in women with a history of hysterectomy at a young age. A robot-assisted technique may enhance the safety of the procedure by facilitating the microvascular anastomosis, the vaginal anastomosis, and the ligaments' fixation. This study proposes the formation of a multicenter collaboration group to organize a protocol with the aim of clarifying the possible role of robotic surgery in uterine transplantation. PMID:26401113

  7. Scientific Aspects of Leonardo da Vinci's Drawings: An Interdisciplinary Model.

    ERIC Educational Resources Information Center

    Struthers, Sally A.

    While interdisciplinary courses can help demonstrate the relevance of learning to students and reinforce education from different fields, they can be difficult to implement and are often not cost effective. An interdisciplinary art history course at Ohio's Sinclair Community College incorporates science into the art history curriculum, making use…

  8. Distance Learning. Leonardo da Vinci Series: Good Practices.

    ERIC Educational Resources Information Center

    Commission of the European Communities, Brussels (Belgium). Directorate-General for Education and Culture.

    This brochure, part of a series about good practices in vocational training in the European Union, describes 12 projects that use distance learning to promote lifelong learning in adults. The projects and their countries of origin are as follows: (1) 3D Project, training in the use of IT tools for 3D simulation and animation and practical…

  9. MONA, LISA and VINCI Soon Ready to Travel to Paranal

    NASA Astrophysics Data System (ADS)

    2000-11-01

    First Instruments for the VLT Interferometer Summary: A few months from now, light from celestial objects will be directed for the first time towards ESO's Very Large Telescope Interferometer (VLTI) at the Paranal Observatory (Chile). During this "First Light" event and the subsequent test phase, the light will be recorded with a special test instrument, VINCI (VLT INterferometer Commissioning Instrument). The main components of this high-tech instrument are aptly named MONA (a system that combines the light beams from several telescopes by means of optical fibers) and LISA (the infrared camera). VINCI was designed and constructed within a fruitful collaboration between ESO and several research institutes and industrial companies in France and Germany. It is now being assembled at the ESO Headquarters in Garching (Germany) and will soon be ready for installation at the telescope on Paranal. With the VLTI and VINCI, Europe's astronomers are now entering the first, crucial phase of an exciting scientific and technology venture that will ultimately put the world's most powerful optical/IR interferometric facility in their hands. PR Photo 31/00: VINCI during tests at the ESO Headquarters in Garching. PR Photo 31/00 shows the various components of the complex VINCI instrument for the VLT Interferometer, during the current tests in the Optical Laboratory at the ESO Headquarters in Garching (Germany). It will later be installed under "clean-room" conditions in the Interferometric Laboratory at the Paranal Observatory. This electronic photo was obtained for documentary purposes. VINCI (VLT INterferometer Commissioning Instrument) is the "First Light" instrument for the Very Large Telescope Interferometer (VLTI) at the Paranal Observatory (Chile). 
Early in 2001, it will be used for the first tests of this very complex system. Subsequently, it will serve to tune this key research facility to the highest possible performance. The VLTI is based on the combination of light (beams) from the telescopes at Paranal. Of these, the four 8.2-m Unit Telescopes are already in operation - they will soon be joined by three 1.8-m telescopes that can be relocated on rails, cf. PR Photo 43b/99. By means of a system of mirrors, the light from two or more of these telescopes will be guided to the central Interferometric Laboratory, at the center of the observing platform on Paranal. Information about the heart of this complex system, the Delay Lines that are located in the underground Interferometric Tunnel, is available with the recent ESO PR Photos 26a-e/00. The VLTI will later receive other front-line instruments, e.g. AMBER, MIDI and PRIMA. When fully ready some years from now, the VLTI will produce extremely sharp images. This will have a major impact on different types of exciting astronomical observations, e.g.:
    * the direct discovery and imaging of extra-solar planets comparable to Jupiter,
    * the discovery and imaging of low-mass stars such as brown dwarfs,
    * observations of star-forming regions, to better understand the physical processes that give birth to stars,
    * spectral analysis of the atmospheres of nearby stars, and
    * imaging of the very core of our Galaxy and the detection of black holes in active nuclei of galaxies.
The VINCI test instrument: The new instrument, VINCI, will soon be delivered to Paranal by the Département de Recherche Spatiale (Department for Space Research), a joint unit of the Centre National de la Recherche Scientifique (French National Centre for Scientific Research) and the Paris Observatory. VINCI is a functional copy of the FLUOR instrument - now at the IOTA (Infrared Optical Telescope Array) interferometer - that has been upgraded and adapted to the needs of the VLTI. 
FLUOR was developed by the Département de Recherche Spatiale (DESPA) of the Paris Observatory. It was used in 1991 at the Kitt Peak National Observatory (Arizona, USA) for the first (coherent) combination of the light beams from two independent telescopes by means of optical fibers of fluoride glass. It has since been in operation for five years as a focal instrument at the IOTA Interferometer (Mount Hopkins, Arizona, USA), within a collaboration with the Harvard-Smithsonian Center for Astrophysics, producing a rich harvest of scientific data.
The VINCI partners: The VINCI instrument is being constructed in a collaboration between ESO (which also finances it) and the following laboratories and institutes:
    * DESPA (Paris Observatory) provides the expertise, the general concept, and the development and integration of the optomechanics (with the exception of the camera) and the electronics,
    * Observatoire Midi-Pyrénées produces the control software,
    * the LISA infrared camera is developed by the Max-Planck-Institut für Extraterrestrische Physik (Garching, Germany), and
    * ESO provides the IR camera electronics and the overall observational software, and is also responsible for the final integration.
DESPA delivered VINCI to ESO in Garching on September 27, 2000, and is now assembling the instrument in the ESO optical workshop. It will remain there for three months, until it has been fully integrated and thoroughly tested. It will then be shipped to Paranal at the beginning of next year. After set-up and further tests, the first observations on the sky are expected in late March 2001.
Fluoride fibers guide the light: The heart of VINCI - named MONA - is a fiber-optics beam combiner unit. It is the outcome of a fertile, 10-year research partnership between science (DESPA) and industry ("Le Verre Fluoré" [2]). Optical fibers will be used to combine the light from two telescopes inside VINCI. 
Since the instrument will be working in the near-infrared region of the spectrum (wavelength 2-2.5 µm), it is necessary to use optical fibers made of a special type of glass that is transparent at these wavelengths. By far the best material for this is fluoride glass. It was invented by one of the co-founders of the company "Le Verre Fluoré", the only manufacturer of this kind of highly specialized material in the world. Optical fibers of fluoride glass from this company are therefore used in VINCI. They are of a special type ("monomode") with a very narrow core measuring only 6.5 µm (0.0065 mm) across. Light that is collected by one of the telescopes in the VLTI array (e.g., by the 50 m² mirror of a VLT Unit Telescope) is guided through the VLTI system of optics and finally enters this core. The fibers guide the light and at the same time "clean" the light beam by eliminating the errors introduced by atmospheric turbulence, thereby improving the accuracy of the measurements by a factor of 10. DESPA has shown that this is indeed possible by means of real astronomical observations with the FLUOR experiment. Following this positive demonstration, it has been decided to equip the instrumentation of all interferometers currently under construction with fibers or equivalent systems.
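The sharpness an interferometer can reach scales as wavelength over baseline. A back-of-the-envelope estimate (assumed values: K-band wavelength 2.2 µm and a ~130 m baseline between Unit Telescopes; these figures are illustrative, not quoted in this press release) can be computed as:

```python
import math

# Rough diffraction-limited resolution estimate lambda/B for an interferometer
# (assumed values: K-band wavelength 2.2 um, ~130 m baseline)
wavelength = 2.2e-6          # metres
baseline = 130.0             # metres
MAS_PER_RAD = 180 / math.pi * 3600 * 1000   # milliarcseconds per radian

resolution_rad = wavelength / baseline
resolution_mas = resolution_rad * MAS_PER_RAD
print(f"angular resolution ~ {resolution_mas:.1f} mas")   # a few milliarcseconds
```

The same formula with a single 8.2-m mirror instead of the 130 m baseline gives roughly 55 mas, which is why combining the telescopes yields such a dramatic gain in resolving power.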

  10. Adiabatic properties of pulsating DA white dwarfs. III - A finite-element code for solving nonradial pulsation equations

    NASA Astrophysics Data System (ADS)

    Brassard, P.; Pelletier, C.; Fontaine, G.; Wesemael, F.

    1992-06-01

    A new numerical code to solve the system of ordinary differential equations which describes the linear, adiabatic, nonradial pulsations of stellar models is presented. This code, based on the Galerkin finite-element method of weighted residuals, is characterized by its stability, speed, accuracy, high-order convergence, and ease of use. Its performance is illustrated in tests carried out both with a homogeneous, polytropic stellar model and with a realistic stellar evolution model. It is also contrasted with the performance of two other codes, previously used in adiabatic studies of pulsating DA white dwarfs, which are based on either relaxation or shooting methods. The finite-element code outperforms both of them in terms of accuracy, effectiveness, and stability.
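The Galerkin weighted-residuals idea underlying such a code can be illustrated on a toy problem. The sketch below is not the authors' pulsation code; it is a minimal 1-D example (assumed problem: u'' = -1 on [0,1] with zero boundary values, linear elements) showing the assemble-and-solve pattern that finite-element solvers share.

```python
import numpy as np

def galerkin_fem(n_elems=50):
    """Solve u'' = -1, u(0)=u(1)=0 with linear finite elements (Galerkin)."""
    n = n_elems + 1                 # number of nodes
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]                 # uniform element size
    K = np.zeros((n, n))            # global stiffness matrix
    f = np.zeros(n)                 # global load vector
    # Element stiffness for linear "hat" basis functions: (1/h)*[[1,-1],[-1,1]]
    ke = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
    fe = np.array([0.5, 0.5]) * h   # element load from the constant source term
    for e in range(n_elems):
        idx = [e, e + 1]
        K[np.ix_(idx, idx)] += ke   # scatter element matrix into global system
        f[idx] += fe
    # Impose Dirichlet BCs u(0)=u(1)=0 by solving only for interior nodes
    u = np.zeros(n)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], f[1:-1])
    return x, u

x, u = galerkin_fem()
exact = x * (1.0 - x) / 2.0         # analytic solution of u'' = -1
print("max nodal error:", np.abs(u - exact).max())
```

For this constant-source problem, linear elements happen to reproduce the exact solution at the nodes, so the error printed is at round-off level; a realistic pulsation code assembles the same kind of system from the linearized stellar structure equations instead.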

  11. 78 FR 58376 - American Asset Development, Inc., aVinci Media Corp., Ceragenix Pharmaceuticals, Inc., Marshall...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-23

    ... From the Federal Register Online via the Government Publishing Office SECURITIES AND EXCHANGE COMMISSION American Asset Development, Inc., aVinci Media Corp., Ceragenix Pharmaceuticals, Inc., Marshall Holdings International, Inc., MedCom USA, Incorporated, and Millenium Holding Group, Inc., Order of Suspension of Trading September 19, 2013....

  12. Back to the Drawing Board Reconstructing DaVinci's Vitruvian Man to Teach Anatomy

    ERIC Educational Resources Information Center

    Babaian, C.

    2009-01-01

    In today's high tech world, one hardly expects to see the original chalkboard or blackboard utilized in research, teaching, or scientific communication, but having spent an equal number of years doing both art and biology and dabbling in computer graphics, the author has found the simple technology of the chalkboard and chalk to have incredible…

  13. Moving towards Optimising Demand-Led Learning: The 2005-2007 ECUANET Leonardo Da Vinci Project

    ERIC Educational Resources Information Center

    Dealtry, Richard; Howard, Keith

    2008-01-01

    Purpose: The purpose of this paper is to present the key project learning points and outcomes as a guideline for the future quality management of demand-led learning and development. Design/methodology/approach: The research methodology was based upon a corporate university blueprint architecture and browser toolkit developed by a member of the…

  15. Social and Occupational Integration of Disadvantaged People. Leonardo da Vinci Good Practices Series.

    ERIC Educational Resources Information Center

    Commission of the European Communities, Brussels (Belgium). Directorate-General for Education and Culture.

    This document profiles nine European programs that exemplify good practice in social and occupational integration of disadvantaged people. The programs profiled are as follows: (1) Restaurant Venezia (a CD-ROM program to improve the reading and writing skills of young people in Luxembourg who have learning difficulties); (2) an integrated…

  17. Building Skills and Qualifications among SME Employees. Leonardo da Vinci Good Practices Series.

    ERIC Educational Resources Information Center

    Commission of the European Communities, Brussels (Belgium). Directorate-General for Education and Culture.

    This document profiles 10 European programs that exemplify good practice in building skills and qualifications among employees of small and medium enterprises (SMEs). The programs profiled are as follows: (1) TRICTSME (a program providing World Wide Web-based information and communication technologies training for SMEs in manufacturing); (2)…

  18. Complex repair of a Barlow's valve using the Da Vinci robotic surgical system.

    PubMed

    Masroor, Saqib; Plambeck, Christopher; Dahnert, Melissa

    2010-09-01

    Robotic assistance is increasingly being used for mitral valve repair. However, the repair of a bileaflet prolapse (especially Barlow's type) is difficult and not often considered suitable for a robotic-assisted approach. The case is reported of a successful robotic-assisted repair of a Barlow's valve, including posterior leaflet resection, chordal transfer, cleft repair, construction of Gore-Tex neo-chords, bilateral commissuroplasties, and a flexible/partial annuloplasty. The total cardiopulmonary bypass and cross-clamp times were 231 and 183 min, respectively. The patient was discharged home on the third postoperative day and is doing well one year later, with no residual mitral regurgitation. PMID:21053737

  19. [From Leonardo Da Vinci to present days; from the history of antiplague costume].

    PubMed

    Kalmykov, A A; Aminev, R M; Korneev, A G; Polyakov, V S; Artebyakin, S V

    2016-01-01

    The special clothing that physicians in medieval Europe wore for protection in plague foci can be considered the prototype of the antiplague costume. The inventor of the first antiplague costume is considered to be the French doctor Charles de Lorme (1619). Much later, in 1878, the Russian professor V.V. Pashutin proposed a costume that looked like a hermetically sealed "bag" with a special breathing device, aimed at protecting medical staff. Later, professor O.I. Dogel's respirator became well known (1889). At the beginning of the 20th century, a charcoal filter mask invented by N.D. Zelinsky was used as part of the antiplague costume. The requirements for the use of modern means of individual protection when working in foci of especially dangerous infections are defined by sanitary-epidemiological rules, which address laboratory workers' working and protective clothing, respiratory protection, and the types, operation, and procedures for donning, removing, and disinfecting antiplague costumes, pneumocostumes, pneumohelmets, isolation suits, gas-protection boxes, etc. PMID:27120957

  20. The rare DAT coding variant Val559 perturbs DA neuron function, changes behavior, and alters in vivo responses to psychostimulants

    PubMed Central

    Mergy, Marc A.; Gowrishankar, Raajaram; Gresch, Paul J.; Gantz, Stephanie C.; Williams, John; Davis, Gwynne L.; Wheeler, C. Austin; Stanwood, Gregg D.; Hahn, Maureen K.; Blakely, Randy D.

    2014-01-01

    Despite the critical role of the presynaptic dopamine (DA) transporter (DAT, SLC6A3) in DA clearance and psychostimulant responses, evidence that DAT dysfunction supports risk for mental illness is indirect. Recently, we identified a rare, nonsynonymous Slc6a3 variant that produces the DAT substitution Ala559Val in two male siblings who share a diagnosis of attention-deficit hyperactivity disorder (ADHD), with other studies identifying the variant in subjects with bipolar disorder (BPD) and autism spectrum disorder (ASD). Previously, using transfected cell studies, we observed that although DAT Val559 displays normal total and surface DAT protein levels, and normal DA recognition and uptake, the variant transporter exhibits anomalous DA efflux (ADE) and lacks capacity for amphetamine (AMPH)-stimulated DA release. To pursue the significance of these findings in vivo, we engineered DAT Val559 knock-in mice, and here we demonstrate in this model the presence of elevated extracellular DA levels, altered somatodendritic and presynaptic D2 DA receptor (D2R) function, a blunted ability of DA terminals to support depolarization and AMPH-evoked DA release, and disruptions in basal and psychostimulant-evoked locomotor behavior. Together, our studies demonstrate an in vivo functional impact of the DAT Val559 variant, providing support for the ability of DAT dysfunction to impact risk for mental illness. PMID:25331903

  1. The L-coding region of the DA strain of Theiler's murine encephalomyelitis virus causes dysfunction and death of myelin-synthesizing cells.

    PubMed

    Ghadge, G D; Wollmann, R; Baida, G; Traka, M; Roos, R P

    2011-09-01

    The DA strain and other members of the TO subgroup of Theiler's murine encephalomyelitis virus (TMEV) induce an early transient subclinical neuronal disease followed by a chronic progressive inflammatory demyelination, with persistence of the virus in the central nervous system (CNS) for the life of the mouse. Although TMEV-induced demyelinating disease (TMEV-IDD) is thought to be immune mediated, there is also evidence that supports a role for the virus in directly inducing demyelination. In order to clarify the function of DA virus genes, we generated a transgenic mouse that had tamoxifen-inducible expression of the DA L-coding region in oligodendrocytes (and Schwann cells), a cell type in which the virus is known to persist. Tamoxifen-treated young transgenic mice usually developed an acute progressive fatal paralysis, with abnormalities of the oligodendrocytes and Schwann cells and demyelination, but without significant lymphocytic infiltration; later treatment led to transient weakness with demyelination and persistent expression of the recombined transgene. These findings demonstrate that a high level of expression of DA L can cause the death of myelin-synthesizing cells and death of the mouse, while a lower level of L expression (which can persist) can lead to cellular dysfunction with survival. The results suggest that expression of DA L plays an important role in the pathogenesis of TMEV-IDD. Virus-induced infection and death of oligodendrocytes may play a part in the demyelination of other diseases in which an immune-mediated mechanism has been stressed, including multiple sclerosis. PMID:21752920

  2. Molecular cloning and sequence analysis of the gene coding for the 57-kDa major soluble antigen of the salmonid fish pathogen Renibacterium salmoninarum.

    PubMed

    Chien, M S; Gilbert, T L; Huang, C; Landolt, M L; O'Hara, P J; Winton, J R

    1992-09-15

    The complete sequence coding for the 57-kDa major soluble antigen of the salmonid fish pathogen, Renibacterium salmoninarum, was determined. The gene contained an open reading frame of 1671 nucleotides coding for a protein of 557 amino acids with a calculated M(r) value of 57,190. The first 26 amino acids constituted a signal peptide. The deduced sequence for amino acid residues 27-61 was in agreement with the 35 N-terminal amino acid residues determined by microsequencing, suggesting the protein is synthesized as a 557-amino acid precursor and processed to produce a mature protein of M(r) 54,505. Two regions of the protein contained imperfect direct repeats. The first region contained two copies of an 81-residue repeat, the second contained five copies of an unrelated 25-residue repeat. Also, a perfect inverted repeat (including three in-frame UAA stop codons) was observed at the carboxyl-terminus of the gene. PMID:1383085
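The reading-frame bookkeeping in this abstract can be checked with a short script. The sketch below is illustrative only (the demo sequence is hypothetical, not the R. salmoninarum gene): it scans one strand for ATG-to-stop open reading frames and verifies the 1671-nt / 557-codon arithmetic.

```python
def orf_lengths(dna):
    """Scan one strand in all three frames for ORFs (ATG..stop).
    Returns coding lengths in nucleotides, stop codon excluded."""
    stops = {"TAA", "TAG", "TGA"}
    lengths = []
    for frame in range(3):
        start = None
        for i in range(frame, len(dna) - 2, 3):
            codon = dna[i:i + 3]
            if codon == "ATG" and start is None:
                start = i
            elif codon in stops and start is not None:
                lengths.append(i - start)   # length from ATG up to the stop
                start = None
    return lengths

# The 1671-nt ORF corresponds to 1671/3 = 557 codons, i.e. a 557-residue
# precursor; removing the 26-residue signal peptide leaves 531 residues.
assert 1671 // 3 == 557
assert 557 - 26 == 531

# Hypothetical mini-sequence for illustration: ATG + two codons + TAA
demo = "GGATGAAACCCTAAGG"
print(orf_lengths(demo))   # -> [9]
```

The same scan run on the reverse complement covers the other strand; real ORF finders also handle alternative start codons, which this sketch omits.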

  3. Leonardo da Vinci, visual perspective and the crystalline sphere (lens): if only Leonardo had had a freezer.

    PubMed

    Hilloowala, Rumy

    2004-06-01

    This study confirms Leonardo's claim to have experimented on the bovine eye to determine the internal anatomy of the eye. The experiment, as described by Leonardo, was repeated in our laboratory. The study further discusses Leonardo's primary interest in the study of the eye (especially the lens), to determine how the image of an object which enters the eye in an inverted form is righted. The study shows the evolution of Leonardo's understanding of the anatomy and the physiology of vision. Initially, in keeping with his reading of the literature, the lens was placed in the centre but he made it globular. Later he promulgated two theories, reflection from the uvea and refraction within the lens to explain reversal of the image in the eye. Subsequently he rejected the first theory and, putting credence in the second theory, experimented (1509) to show that the lens is globular and is centrally placed. The fact that the present knowledge about the lens is at variance from his findings is not because he did not carry out the experiment, as suggested by some modern authors, but because of the limitation of the techniques available to him at the time. PMID:15386876

  4. Cloning and sequencing of the genes coding for the 10- and 60-kDa heat shock proteins from Pseudomonas aeruginosa and mapping of a species-specific epitope.

    PubMed Central

    Sipos, A; Klocke, M; Frosch, M

    1991-01-01

    A genomic library of Pseudomonas aeruginosa DNA was screened with a monoclonal antibody (MAb 2528) specific for the P. aeruginosa 60-kDa heat shock protein. A positive clone, pAS-1, was isolated. The gene coding for P. aeruginosa chaperonin (hsp60) was localized to a 2-kb EcoRI fragment subcloned in pAS-2. A sequence analysis of pAS-2 and parts of pAS-1 identified two open reading frames that encoded proteins with calculated molecular masses of 10 and 57 kDa. In amino acid sequence comparison studies the sequences of these proteins, which were designated GroES and GroEL, exhibited up to 78% homology with known prokaryotic sequences of 10- and 60-kDa heat shock proteins (hsp10 and hsp60). In order to map the epitope recognized by MAb 2528, a series of GroEL nested carboxy-terminal deletion clones were tested with MAb 2528. We identified the clone with the shortest insertion that was still recognized by MAb 2528 and the clone with the largest insertion that was not recognized by MAb 2528. The 3' ends of the insertions were determined by sequencing and were found to delimit a region that encoded 25 amino acid residues. Synthetic oligonucleotides that coded for peptides possibly resembling the epitope within this region were ligated into expression vector pGEX-3X, and fusion proteins expressed by these clones were tested for reactivity with MAb 2528. By using this method we determined that the decapeptide QADIEARVLQ (positions 339 to 348 on GroEL) was responsible for the binding of P. aeruginosa-specific MAb 2528. PMID:1715325

  5. Speech coding

    NASA Astrophysics Data System (ADS)

    Gersho, Allen

    1990-05-01

    Recent advances in algorithms and techniques for speech coding now permit high-quality voice reproduction at remarkably low bit rates. The advent of powerful single-chip signal processors has made it cost effective to implement these new and sophisticated speech coding algorithms for many important applications in voice communication and storage. Some of the main ideas underlying the algorithms of major interest today are reviewed. The concept of removing redundancy by linear prediction is reviewed, first in the context of predictive quantization or DPCM. Then linear predictive coding, adaptive predictive coding, and vector quantization are discussed. The concepts of excitation coding via analysis-by-synthesis, vector sum excitation codebooks, and adaptive postfiltering are explained. The main ideas of vector excitation coding (VXC), or code-excited linear prediction (CELP), are presented. Finally, low-delay VXC coding and phonetic segmentation for VXC are described.
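The redundancy-removal idea behind linear prediction can be made concrete. The sketch below is a generic autocorrelation-method LPC analysis via the Levinson-Durbin recursion (not any specific coder from this review): it fits a 2nd-order predictor to a synthetic resonant signal and shows that the prediction residual carries far less energy than the signal itself, which is what makes low-bit-rate coding possible.

```python
import numpy as np

def lpc(signal, order):
    """Linear prediction coefficients via the autocorrelation method
    (Levinson-Durbin recursion), as used in LPC/DPCM-style coders."""
    r = np.correlate(signal, signal, mode="full")[len(signal) - 1:]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        # Reflection coefficient for this order
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err
        # Update predictor polynomial: a_j += k * a_{i-j}
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= (1.0 - k * k)
    return a, err

rng = np.random.default_rng(0)
# Synthetic "voiced" signal: white noise through a resonant AR(2) filter
x = np.zeros(4000)
e = rng.standard_normal(4000)
for n in range(2, 4000):
    x[n] = 1.8 * x[n - 1] - 0.9 * x[n - 2] + e[n]

a, _ = lpc(x, order=2)
# Prediction residual: x[n] + a1*x[n-1] + a2*x[n-2]
resid = x[2:] + a[1] * x[1:-1] + a[2] * x[:-2]
print("energy ratio (residual/signal):", resid.var() / x.var())
```

A DPCM-style coder quantizes this small residual instead of the raw waveform; CELP-style coders go further and select the excitation from a codebook by analysis-by-synthesis.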

  6. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Pollara, Fabrizio; Hamkins, Jon; Dolinar, Sam; Andrews, Ken; Divsalar, Dariush

    2006-01-01

    This viewgraph presentation reviews uplink coding. The purpose and goals of the briefing are (1) Show a plan for using uplink coding and describe benefits (2) Define possible solutions and their applicability to different types of uplink, including emergency uplink (3) Concur with our conclusions so we can embark on a plan to use proposed uplink system (4) Identify the need for the development of appropriate technology and infusion in the DSN (5) Gain advocacy to implement uplink coding in flight projects Action Item EMB04-1-14 -- Show a plan for using uplink coding, including showing where it is useful or not (include discussion of emergency uplink coding).

  7. Image coding.

    PubMed

    Kunt, M

    1988-01-01

    The digital representation of an image requires a very large number of bits. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a saturation level around 10:1 a couple of years ago. Recent progress in the study of the brain mechanism of vision and scene analysis has opened new vistas in picture coding. Directional sensitivity of the neurones in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 100:1. PMID:3072645

  8. Verification of relationships between anthropometric variables among ureteral stents recipients and ureteric lengths: a challenge for Vitruvian-da Vinci theory

    PubMed Central

    Acelam, Philip A

    2015-01-01

    Objective To determine and verify how anthropometric variables correlate to ureteric lengths and how well statistical models approximate the actual ureteric lengths. Materials and methods In this work, 129 charts of endourological patients (71 females and 58 males) were studied retrospectively. Data were gathered from various research centers from North and South America. Continuous data were studied using descriptive statistics. Anthropometric variables (age, body surface area, body weight, obesity, and stature) were utilized as predictors of ureteric lengths. Linear regressions and correlations were used to study relationships between the predictors and the outcome variables (ureteric lengths); the P-value was set at 0.05. To assess how well statistical models were capable of predicting the actual ureteric lengths, percentages (or ratios of matched to mismatched results) were employed. Results The results of the study show that anthropometric variables do not correlate well to ureteric lengths. Statistical models can only partially estimate ureteric lengths. Of the five anthropometric variables studied, three (body frame, stature, and weight, each with P<0.0001) were significant. Two of the variables, age (R2=0.01; P=0.20) and obesity (R2=0.03; P=0.06), were found to be poor estimators of ureteric lengths. None of the predictors reached the expected (match:above:below) ratio of 1:0:0 to qualify as reliable predictors of ureteric lengths. Conclusion There is not sufficient evidence to conclude that anthropometric variables can reliably predict ureteric lengths. These variables appear to lack adequate specificity, as they failed to reach the expected (match:above:below) ratio of 1:0:0. Consequently, selection of ureteral stents remains a challenge. However, height (R2=0.68), with a (match:above:below) ratio of 3:3:4, appears suited for use as an estimator, but on the basis of a decision rule. 
Additional research is recommended for stent improvements and ureteric length determinations. PMID:26317082
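The regression-and-R² methodology used in this record can be sketched as follows. The data below are entirely hypothetical (randomly generated stature/length pairs, not the study's charts); the point is only to show how a single-predictor R² like the study's height figure is computed.

```python
import numpy as np

def r_squared(x, y):
    """Coefficient of determination for a simple linear fit y ~ a + b*x."""
    b, a = np.polyfit(x, y, 1)               # slope, intercept
    resid = y - (a + b * x)
    ss_res = np.sum(resid ** 2)              # unexplained variation
    ss_tot = np.sum((y - y.mean()) ** 2)     # total variation
    return 1.0 - ss_res / ss_tot

rng = np.random.default_rng(1)
height = rng.uniform(150, 200, 200)          # hypothetical statures, cm
# Hypothetical lengths: weakly height-dependent plus wide individual scatter
length = 0.1 * height + 5.0 + rng.normal(0, 1.5, 200)
print("R^2:", round(r_squared(height, length), 3))
```

An R² well below 1, as here, is exactly the situation the authors describe: the predictor explains only part of the variation, so a fit alone cannot reliably size a stent for an individual patient.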

  9. Speech coding

    SciTech Connect

    Ravishankar, C., Hughes Network Systems, Germantown, MD

    1998-05-08

    Speech is the predominant means of communication between human beings, and since the invention of the telephone by Alexander Graham Bell in 1876, speech services have remained the core service in almost all telecommunication systems. Original analog methods of telephony had the disadvantage of the speech signal getting corrupted by noise, cross-talk and distortion. Long haul transmissions, which use repeaters to compensate for the loss in signal strength on transmission links, also increase the associated noise and distortion. On the other hand, digital transmission is relatively immune to noise, cross-talk and distortion, primarily because of the capability to faithfully regenerate the digital signal at each repeater purely based on a binary decision. Hence, end-to-end performance of the digital link essentially becomes independent of the length and operating frequency bands of the link, and from a transmission point of view digital transmission has been the preferred approach due to its higher immunity to noise. The need to carry digital speech became extremely important from a service provision point of view as well. Modern requirements have introduced the need for robust, flexible and secure services that can carry a multitude of signal types (such as voice, data and video) without a fundamental change in infrastructure. Such a requirement could not have been easily met without the advent of digital transmission systems, thereby requiring speech to be coded digitally. The term speech coding often refers to techniques that represent or code speech signals either directly as a waveform or as a set of parameters obtained by analyzing the speech signal. In either case, the codes are transmitted to the distant end, where speech is reconstructed or synthesized using the received set of codes. A more generic term that is often used interchangeably with speech coding is voice coding. 
This term is more generic in the sense that the coding techniques are equally applicable to any voice signal, whether or not it carries any intelligible information, as the term speech implies. Other terms that are commonly used are speech compression and voice compression, since the fundamental idea behind speech coding is to reduce (compress) the transmission rate (or equivalently the bandwidth) and/or reduce storage requirements. In this document the terms speech and voice shall be used interchangeably.
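A minimal concrete instance of the waveform coding described above is logarithmic companding, long used in digital telephony. The sketch below implements the continuous mu-law characteristic (mu = 255) and an 8-bit quantization round trip; it illustrates the idea only and is not a full G.711 codec (which uses a segmented approximation).

```python
import numpy as np

MU = 255.0  # mu-law parameter used in North American/Japanese telephony

def mu_law_encode(x):
    """Compress samples in [-1, 1] with the mu-law characteristic."""
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_law_decode(y):
    """Invert the mu-law characteristic."""
    return np.sign(y) * np.expm1(np.abs(y) * np.log1p(MU)) / MU

x = np.linspace(-1, 1, 101)
y = mu_law_encode(x)
# Quantize the companded value to 256 uniform levels, then expand
q = np.round((y + 1) * 127.5) / 127.5 - 1
xr = mu_law_decode(q)
print("max abs error after 8-bit companded quantization:",
      np.abs(xr - x).max())
```

Companding spends the 8 bits logarithmically, so small (quiet) samples are represented much more finely than large ones, matching the statistics of speech.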

  10. TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION

    SciTech Connect

    Yang, L.

    2011-03-28

    Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when the computing power permits. It can include various realistic errors and is closer to reality than theoretical estimates. In this approach, a fast, parallel tracking code can be very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has shown promising performance.
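The tracking-based DA estimation described in this record can be caricatured with a toy model. The sketch below is not TESLA: it tracks a 2-D Hénon-like map (linear betatron rotation plus a sextupole-like kick; the tune of 0.205 and all other numbers are assumed for illustration) and bisects for the largest launch amplitude that survives a fixed number of turns.

```python
import math

def survives(x0, tune=0.205, turns=1000, limit=10.0):
    """Track one particle through a 2D Henon-like map (linear rotation plus
    a sextupole-like kick) and report whether it stays bounded."""
    c, s = math.cos(2 * math.pi * tune), math.sin(2 * math.pi * tune)
    x, p = x0, 0.0
    for _ in range(turns):
        p_kick = p + x * x          # thin sextupole-like nonlinear kick
        x, p = c * x + s * p_kick, -s * x + c * p_kick
        if abs(x) > limit or abs(p) > limit:
            return False            # particle lost
    return True

def dynamic_aperture(lo=0.0, hi=2.0, iters=40):
    """Bisect for the largest launch amplitude that survives tracking."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if survives(mid):
            lo = mid
        else:
            hi = mid
    return lo

da = dynamic_aperture()
print(f"estimated dynamic aperture: {da:.4f}")
```

A production code does the same survival test in 6-D with realistic lattice errors over many angles and momenta, which is why the parallelism the abstract emphasizes matters.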

  11. QR Codes

    ERIC Educational Resources Information Center

    Lai, Hsin-Chih; Chang, Chun-Yen; Li, Wen-Shiane; Fan, Yu-Lin; Wu, Ying-Tien

    2013-01-01

    This study presents an m-learning method that incorporates Integrated Quick Response (QR) codes. This learning method not only achieves the objectives of outdoor education, but it also increases applications of Cognitive Theory of Multimedia Learning (CTML) (Mayer, 2001) in m-learning for practical use in a diverse range of outdoor locations. When…

  12. Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the objectives, meeting goals and overall NASA goals for the NASA Data Standards Working Group. The presentation includes information on the technical progress surrounding the objective, short LDPC codes, and the general results on the Pu-Pw tradeoff.

  13. Codes with special correlation.

    NASA Technical Reports Server (NTRS)

    Baumert, L. D.

    1964-01-01

    Uniform binary codes with special correlation, including transorthogonality and simplex codes, Hadamard matrices, and difference sets.

  14. Optimizing ATLAS code with different profilers

    NASA Astrophysics Data System (ADS)

    Kama, S.; Seuster, R.; Stewart, G. A.; Vitillo, R. A.

    2014-06-01

    After the current maintenance period, the LHC will provide higher energy collisions with increased luminosity. In order to keep up with these higher rates, ATLAS software needs to speed up substantially. However, ATLAS code is composed of approximately 6M lines, written by many different programmers with different backgrounds, which makes code optimisation a challenge. To help with this effort, different profiling tools and techniques are being used. These include well-known tools, such as the Valgrind suite and Intel Amplifier; less common tools like Pin, PAPI, and GOoDA; as well as techniques such as library interposing. In this paper we will mainly focus on Pin tools and GOoDA. Pin is a dynamic binary instrumentation tool which can obtain statistics such as call counts and instruction counts, and can interrogate functions' arguments. It has been used to obtain CLHEP Matrix profiles, operations and vector sizes for linear algebra calculations, which has provided the insight necessary to achieve significant performance improvements. Complementing this, GOoDA, an in-house performance tool built in collaboration with Google, which is based on hardware performance monitoring unit events, is used to identify hot-spots in the code for different types of hardware limitations, such as CPU resources, caches, or memory bandwidth. GOoDA has been used to improve the performance of the new magnetic field code and to identify potential vectorization targets in several places, such as the Runge-Kutta propagation code.

  15. Error-correction coding

    NASA Technical Reports Server (NTRS)

    Hinds, Erold W. (Principal Investigator)

    1996-01-01

    This report describes the progress made towards the completion of a specific task on error-correcting coding. The proposed research consisted of investigating the use of modulation block codes as the inner code of a concatenated coding system in order to improve the overall space link communications performance. The study proposed to identify and analyze candidate codes that will complement the performance of the overall coding system which uses the interleaved RS (255,223) code as the outer code.
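
    The RS(255,223) outer code and the candidate inner modulation codes are too large to sketch here, but the block error-correction principle the study builds on can be shown with the tiny Hamming(7,4) code, a textbook example that corrects any single flipped bit per 7-bit word:

```python
def hamming74_encode(d):
    """Encode 4 data bits into a 7-bit Hamming codeword (parity at 1, 2, 4)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4   # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4   # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(r):
    """Correct up to one flipped bit, then return the 4 data bits."""
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based error position, 0 if none
    if syndrome:
        r = r[:]
        r[syndrome - 1] ^= 1
    return [r[2], r[4], r[5], r[6]]
```

A concatenated system applies the same idea twice: the inner code cleans up channel errors, and the outer RS code mops up the inner decoder's residual (often bursty) mistakes.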

  16. [The "myologie dynamique" by Girolamo Fabrizi da Aquapendente in the scientific language in the Renaissance age (XVI-XVII)].

    PubMed

    Stroppiana, L

    1989-01-01

    Beginning in the XV century, mechanical materialism evolved within scientific doctrine into a "biological mechanics". Among the greatest exponents of this new current were two Italians, Leonardo da Vinci (1452-1519) and Girolamo Fabrizi da Acquapendente (1533-1619). Following the direction set by Leonardo, myology ceased to be a static science and took on a dynamic meaning and valence. Later, Fabrizi resumed and investigated the subject, above all in its less known aspects, elaborating an original theory. With Acquapendente, anatomy lost its merely descriptive character and evolved into an analysis of structure in relation to function. Moreover, he set mechanical language and mathematical formulation against the syllogism. A new scientific path would later be marked out by Galileo Galilei in physics and by Giovanni Alfonso Borelli in biology. PMID:11640090

  17. Cryptographer

    ERIC Educational Resources Information Center

    Sullivan, Megan

    2005-01-01

    For the general public, the field of cryptography has recently become famous as the method used to uncover secrets in Dan Brown's fictional bestseller, The Da Vinci Code. But the science of cryptography has been popular for centuries--secret hieroglyphics discovered in Egypt suggest that code-making dates back almost 4,000 years. In today's…

  18. Homological stabilizer codes

    SciTech Connect

    Anderson, Jonas T.

    2013-03-15

    In this paper we define homological stabilizer codes on qubits which encompass codes such as Kitaev's toric code and the topological color codes. These codes are defined solely by the graphs they reside on. This feature allows us to use properties of topological graph theory to determine the graphs which are suitable as homological stabilizer codes. We then show that all toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that the topological color codes and toric codes correspond to two distinct classes of graphs. We define the notion of label set equivalencies and show that under a small set of constraints the only homological stabilizer codes without local logical operators are equivalent to Kitaev's toric code or to the topological color codes. Highlights: We show that Kitaev's toric codes are equivalent to homological stabilizer codes on 4-valent graphs. We show that toric codes and color codes correspond to homological stabilizer codes on distinct graphs. We find and classify all 2D homological stabilizer codes. We find optimal codes among the homological stabilizer codes.

  19. Accumulate repeat accumulate codes

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative channel coding scheme called 'Accumulate Repeat Accumulate codes' (ARA). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, so belief propagation can be used for iterative decoding of ARA codes on a graph. The encoder for this class can be viewed as a precoded Repeat Accumulate (RA) code or as a precoded Irregular Repeat Accumulate (IRA) code, where an accumulator is simply chosen as the precoder. Thus ARA codes have a simple and very fast encoder structure when represented as LDPC codes. Based on density evolution for LDPC codes, we show through some examples of ARA codes that for a maximum variable node degree of 5, a minimum bit SNR as low as 0.08 dB from channel capacity at rate 1/2 can be achieved as the block size goes to infinity. Thus, for a fixed low maximum variable node degree, the threshold outperforms not only RA and IRA codes but also the best known LDPC codes with the same maximum node degree. Furthermore, by puncturing the accumulators, any desired high-rate code close to rate 1 can be obtained, with thresholds that stay uniformly close to the channel capacity thresholds. Iterative decoding simulation results are provided. The ARA codes also have a projected graph, or protograph, representation that allows for high-speed decoder implementation.
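
    The encoder structure described above can be sketched as a toy repeat-accumulate (RA) chain with an arbitrary pseudo-random interleaver. An ARA encoder would additionally pass the information bits through a precoding accumulator, and practical designs use carefully structured interleavers rather than the random one assumed here:

```python
import random

def repeat(bits, q=3):
    """Repeat every information bit q times."""
    return [b for b in bits for _ in range(q)]

def interleave(bits, perm):
    """Reorder bits according to a fixed permutation."""
    return [bits[i] for i in perm]

def accumulate(bits):
    """Running XOR: out[i] = in[0] ^ in[1] ^ ... ^ in[i]."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ra_encode(info, q=3, seed=0):
    """Toy rate-1/q repeat-accumulate encoder."""
    rep = repeat(info, q)
    perm = list(range(len(rep)))
    random.Random(seed).shuffle(perm)
    return accumulate(interleave(rep, perm))
```

The accumulator is what makes the code's parity-check graph sparse, which is why RA-family codes can be decoded as LDPC codes by belief propagation.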

  20. Concatenated Coding Using Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Thompson, Michael W.

    1997-01-01

    In the late seventies and early eighties a technique known as Trellis Coded Modulation (TCM) was developed for providing spectrally efficient error correction coding. Instead of adding redundant information in the form of parity bits, redundancy is added at the modulation stage, thereby increasing bandwidth efficiency. A digital communications system can be designed to use bandwidth-efficient multilevel/phase modulation such as Amplitude Shift Keying (ASK), Phase Shift Keying (PSK), Differential Phase Shift Keying (DPSK) or Quadrature Amplitude Modulation (QAM). Performance gain can be achieved by increasing the number of signals over the corresponding uncoded system to compensate for the redundancy introduced by the code. A considerable amount of research and development has been devoted toward developing good TCM codes for severely bandlimited applications. More recently, the use of TCM for satellite and deep space communications applications has received increased attention. This report describes the general approach of using a concatenated coding scheme that features TCM and Reed-Solomon (RS) coding. Results have indicated that substantial (6-10 dB) performance gains can be achieved with this approach with comparatively little bandwidth expansion. Since all of the bandwidth expansion is due to the RS code, we see that TCM-based concatenated coding results in roughly 10-50% bandwidth expansion, compared to 70-150% expansion for a similar concatenated scheme which uses convolutional codes. We stress that combined coding and modulation optimization is important for achieving performance gains while maintaining spectral efficiency.

  1. Coset Codes Viewed as Terminated Convolutional Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1996-01-01

    In this paper, coset codes are considered as terminated convolutional codes. Based on this approach, three new general results are presented. First, it is shown that the iterative squaring construction can equivalently be defined from a convolutional code whose trellis terminates. This convolutional code determines a simple encoder for the coset code considered, and the state and branch labelings of the associated trellis diagram become straightforward. Also, from the generator matrix of the code in its convolutional code form, much information about the trade-off between the state connectivity and complexity at each section, and the parallel structure of the trellis, is directly available. Based on this generator matrix, it is shown that the parallel branches in the trellis diagram of the convolutional code represent the same coset code C(sub 1), of smaller dimension and shorter length. Utilizing this fact, a two-stage optimum trellis decoding method is devised. The first stage decodes C(sub 1), while the second stage decodes the associated convolutional code, using the branch metrics delivered by stage 1. Finally, a bidirectional decoding of each received block starting at both ends is presented. If about the same number of computations is required, this approach remains very attractive from a practical point of view as it roughly doubles the decoding speed. This fact is particularly interesting whenever the second half of the trellis is the mirror image of the first half, since the same decoder can be implemented for both parts.
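
    The terminated-convolutional-code viewpoint can be made concrete with a standard rate-1/2, constraint-length-3 feedforward encoder whose trellis is driven back to the zero state by two tail bits. The generators (7 and 5 octal) are a generic textbook choice, not the specific coset construction of the paper:

```python
def conv_encode(bits, generators=(0b111, 0b101)):
    """Rate-1/2 convolutional encoder, constraint length 3 (generators 7, 5 octal).

    Two zero tail bits are appended so the trellis terminates in the zero state,
    turning the convolutional code into a block code.
    """
    state = 0
    out = []
    for b in bits + [0, 0]:                 # tail bits terminate the trellis
        state = ((state << 1) | b) & 0b111  # shift the new bit into the register
        for poly in generators:
            out.append(bin(state & poly).count('1') & 1)  # parity of the taps
    return out
```

Termination is what makes trellis-based block decoding (and the bidirectional decoding discussed in the paper) possible: both ends of the trellis are pinned to a known state.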

  2. Discussion on LDPC Codes and Uplink Coding

    NASA Technical Reports Server (NTRS)

    Andrews, Ken; Divsalar, Dariush; Dolinar, Sam; Moision, Bruce; Hamkins, Jon; Pollara, Fabrizio

    2007-01-01

    This slide presentation reviews the progress of the workgroup on Low-Density Parity-Check (LDPC) codes for space link coding. The workgroup is tasked with developing and recommending new error correcting codes for near-Earth, Lunar, and deep space applications. Included in the presentation is a summary of the technical progress of the workgroup. Charts that show the LDPC decoder sensitivity to symbol scaling errors are reviewed, as well as a chart showing the performance of several frame synchronizer algorithms compared to that of some good codes, and LDPC decoder tests at ESTL. Also reviewed is a study on Coding, Modulation, and Link Protocol (CMLP), and the recommended codes. A design for the Pseudo-Randomizer with LDPC Decoder and CRC is also reviewed. A chart that summarizes the three proposed coding systems is also presented.

  3. Binary primitive alternant codes

    NASA Technical Reports Server (NTRS)

    Helgert, H. J.

    1975-01-01

    In this note we investigate the properties of two classes of binary primitive alternant codes that are generalizations of the primitive BCH codes. For these codes we establish certain equivalence and invariance relations and obtain values of d and d*, the minimum distances of the prime and dual codes.

  4. Manually operated coded switch

    DOEpatents

    Barnette, Jon H.

    1978-01-01

    The disclosure relates to a manually operated recodable coded switch in which a code may be inserted, tried and used to actuate a lever controlling an external device. After attempting a code, the switch's code wheels must be returned to their zero positions before another try is made.

  5. QR Codes 101

    ERIC Educational Resources Information Center

    Crompton, Helen; LaFrance, Jason; van 't Hooft, Mark

    2012-01-01

    A QR (quick-response) code is a two-dimensional scannable code, similar in function to a traditional bar code that one might find on a product at the supermarket. The main difference between the two is that, while a traditional bar code can hold a maximum of only 20 digits, a QR code can hold up to 7,089 characters, so it can contain much more…

  6. ARA type protograph codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush (Inventor); Abbasfar, Aliazam (Inventor); Jones, Christopher R. (Inventor); Dolinar, Samuel J. (Inventor); Thorpe, Jeremy C. (Inventor); Andrews, Kenneth S. (Inventor); Yao, Kung (Inventor)

    2008-01-01

    An apparatus and method for encoding low-density parity check codes. Together with a repeater, an interleaver and an accumulator, the apparatus comprises a precoder, thus forming accumulate-repeat-accumulate (ARA) codes. Protographs representing various types of ARA codes, including AR3A, AR4A and ARJA codes, are described. High performance is obtained when compared to the performance of current repeat-accumulate (RA) or irregular-repeat-accumulate (IRA) codes.

  7. Asymmetric quantum convolutional codes

    NASA Astrophysics Data System (ADS)

    La Guardia, Giuliano G.

    2016-01-01

    In this paper, we construct the first families of asymmetric quantum convolutional codes (AQCCs). These new AQCCs are constructed by means of the CSS-type construction applied to suitable families of classical convolutional codes, which are also constructed here. The new codes have non-catastrophic generator matrices, and they have great asymmetry. Since our constructions are performed algebraically, i.e. we develop general algebraic methods and properties to perform the constructions, it is possible to derive several families of such codes and not only codes with specific parameters. Additionally, several different types of such codes are obtained.

  8. Unfolding the color code

    NASA Astrophysics Data System (ADS)

    Kubica, Aleksander; Yoshida, Beni; Pastawski, Fernando

    2015-08-01

    The topological color code and the toric code are two leading candidates for realizing fault-tolerant quantum computation. Here we show that the color code on a d-dimensional closed manifold is equivalent to multiple decoupled copies of the d-dimensional toric code up to local unitary transformations and adding or removing ancilla qubits. Our result not only generalizes the proven equivalence for d = 2, but also provides an explicit recipe of how to decouple independent components of the color code, highlighting the importance of colorability in the construction of the code. Moreover, for the d-dimensional color code with d+1 boundaries of d+1 distinct colors, we find that the code is equivalent to multiple copies of the d-dimensional toric code which are attached along a (d-1)-dimensional boundary. In particular, for d = 2, we show that the (triangular) color code with boundaries is equivalent to the (folded) toric code with boundaries. We also find that the d-dimensional toric code admits logical non-Pauli gates from the dth level of the Clifford hierarchy, and thus saturates the bound by Bravyi and König. In particular, we show that the logical d-qubit control-Z gate can be fault-tolerantly implemented on the stack of d copies of the toric code by a local unitary transformation.

  9. Mathematical Fiction for Senior Students and Undergraduates: Novels, Plays, and Film

    ERIC Educational Resources Information Center

    Padula, Janice

    2006-01-01

    Mathematical fiction has probably existed since ideas have been written down and certainly as early as 414 BC (Kasman, 2000). Mathematical fiction is a recently rediscovered and growing literature, as sales of the novels: "The Curious Incident of the Dog in the Night-time" (Haddon, 2003) and "The Da Vinci Code" (Brown, 2004) attest. Science…

  10. Cellulases and coding sequences

    SciTech Connect

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2001-02-20

    The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.

  11. Cellulases and coding sequences

    SciTech Connect

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2001-01-01

    The present invention provides three fungal cellulases, their coding sequences, recombinant DNA molecules comprising the cellulase coding sequences, recombinant host cells and methods for producing same. The present cellulases are from Orpinomyces PC-2.

  12. QR Code Mania!

    ERIC Educational Resources Information Center

    Shumack, Kellie A.; Reilly, Erin; Chamberlain, Nik

    2013-01-01

    space, has error-correction capacity, and can be read from any direction. These codes are used in manufacturing, shipping, and marketing, as well as in education. QR codes can be created to produce…

  13. DIANE multiparticle transport code

    NASA Astrophysics Data System (ADS)

    Caillaud, M.; Lemaire, S.; Ménard, S.; Rathouit, P.; Ribes, J. C.; Riz, D.

    2014-06-01

    DIANE is the general Monte Carlo code developed at CEA-DAM. DIANE is a 3D multiparticle multigroup code. DIANE includes automated biasing techniques and is optimized for massive parallel calculations.
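
    DIANE itself is not documented here, but the kernel of any Monte Carlo transport code, sampling free path lengths from an exponential distribution, can be sketched for the simplest possible problem: uncollided transmission through a homogeneous slab. The cross-section and geometry below are arbitrary illustrative values:

```python
import math
import random

def slab_transmission(mu, depth, n=100_000, seed=1):
    """Monte Carlo estimate of uncollided transmission through a slab.

    mu    -- total macroscopic cross-section (inverse mean free path)
    depth -- slab thickness, in the same length units as 1/mu
    """
    rng = random.Random(seed)
    # A particle crosses the slab untouched if its sampled free path
    # (exponentially distributed with mean 1/mu) exceeds the slab depth.
    crossed = sum(rng.expovariate(mu) > depth for _ in range(n))
    return crossed / n
```

The estimate converges to the analytic answer exp(-mu*depth); production codes add scattering, multigroup energy treatment, and the biasing techniques mentioned in the abstract to reduce variance.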

  14. STEEP32 computer code

    NASA Technical Reports Server (NTRS)

    Goerke, W. S.

    1972-01-01

    A manual is presented as an aid in using the STEEP32 code. The code is the EXEC 8 version of the STEEP code (STEEP is an acronym for shock two-dimensional Eulerian elastic plastic). The major steps in a STEEP32 run are illustrated in a sample problem. There is a detailed discussion of the internal organization of the code, including a description of each subroutine.

  15. Morse Code Activity Packet.

    ERIC Educational Resources Information Center

    Clinton, Janeen S.

    This activity packet offers simple directions for setting up a Morse Code system appropriate to interfacing with any of several personal computer systems. Worksheets are also included to facilitate teaching Morse Code to persons with visual or other disabilities including blindness, as it is argued that the code is best learned auditorily. (PB)
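
    The software side of such an interface can be a simple lookup-table encoder; the sketch below handles letters only (digits, punctuation, and prosigns are omitted):

```python
# International Morse code for the letters A-Z.
MORSE = {'A': '.-', 'B': '-...', 'C': '-.-.', 'D': '-..', 'E': '.',
         'F': '..-.', 'G': '--.', 'H': '....', 'I': '..', 'J': '.---',
         'K': '-.-', 'L': '.-..', 'M': '--', 'N': '-.', 'O': '---',
         'P': '.--.', 'Q': '--.-', 'R': '.-.', 'S': '...', 'T': '-',
         'U': '..-', 'V': '...-', 'W': '.--', 'X': '-..-', 'Y': '-.--',
         'Z': '--..'}

def to_morse(text):
    """Encode letters as Morse: one space between letters, ' / ' between words."""
    words = text.upper().split()
    return ' / '.join(' '.join(MORSE[c] for c in w if c in MORSE) for w in words)
```

For auditory training, each '.' and '-' would then be mapped to a short or long tone with fixed inter-symbol gaps.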

  16. Clip and Save.

    ERIC Educational Resources Information Center

    Hubbard, Guy

    2003-01-01

    Focuses on the facial expression in the "Mona Lisa" by Leonardo da Vinci. Offers background information on da Vinci as well as learning activities for students. Includes a reproduction of the "Mona Lisa" and information about the painting. (CMK)

  17. Software Certification - Coding, Code, and Coders

    NASA Technical Reports Server (NTRS)

    Havelund, Klaus; Holzmann, Gerard J.

    2011-01-01

    We describe a certification approach for software development that has been adopted at our organization. JPL develops robotic spacecraft for the exploration of the solar system. The flight software that controls these spacecraft is considered to be mission critical. We argue that the goal of a software certification process cannot be the development of "perfect" software, i.e., software that can be formally proven to be correct under all imaginable and unimaginable circumstances. More realistically, the goal is to guarantee a software development process that is conducted by knowledgeable engineers, who follow generally accepted procedures to control known risks, while meeting agreed upon standards of workmanship. We target three specific issues that must be addressed in such a certification procedure: the coding process, the code that is developed, and the skills of the coders. The coding process is driven by standards (e.g., a coding standard) and tools. The code is mechanically checked against the standard with the help of state-of-the-art static source code analyzers. The coders, finally, are certified in on-site training courses that include formal exams.

  18. To code, or not to code?

    PubMed

    Parman, Cindy C

    2003-01-01

    In summary, it is also important to remember the hidden rules: 1) Just because there is a code in the manual, it doesn't mean it can be billed to insurance, or that once billed, it will be reimbursed. 2) Just because a code was paid once, doesn't mean it will ever be paid again--or that you get to keep the money! 3) The healthcare provider is responsible for knowing all the rules, but then it is impossible to know all the rules! And not knowing all the rules can lead to fines, penalties or worse! New codes are added annually (quarterly for OPPS), definitions of existing codes are changed, and it is the responsibility of healthcare providers to keep abreast of all coding updates and changes. In addition, the federal regulations are constantly updated and changed, making compliant billing a moving target. All healthcare entities should focus on complete documentation, the adherence to authoritative coding guidance and the provision of detailed explanations and specialty education to the payor, as necessary. PMID:14619987

  19. Coding for Electronic Mail

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Lee, J. J.

    1986-01-01

    Scheme for coding facsimile messages promises to reduce data transmission requirements to one-tenth current level. Coding scheme paves way for true electronic mail in which handwritten, typed, or printed messages or diagrams sent virtually instantaneously - between buildings or between continents. Scheme, called Universal System for Efficient Electronic Mail (USEEM), uses unsupervised character recognition and adaptive noiseless coding of text. Image quality of resulting delivered messages improved over messages transmitted by conventional coding. Coding scheme compatible with direct-entry electronic mail as well as facsimile reproduction. Text transmitted in this scheme automatically translated to word-processor form.

  20. XSOR codes users manual

    SciTech Connect

    Jow, Hong-Nian; Murfin, W.B.; Johnson, J.D.

    1993-11-01

    This report describes the source term estimation codes, XSORs. The codes are written for three pressurized water reactors (Surry, Sequoyah, and Zion) and two boiling water reactors (Peach Bottom and Grand Gulf). The ensemble of codes has been named "XSOR". The purpose of XSOR codes is to estimate the source terms which would be released to the atmosphere in severe accidents. A source term includes the release fractions of several radionuclide groups, the timing and duration of releases, the rates of energy release, and the elevation of releases. The codes have been developed by Sandia National Laboratories for the US Nuclear Regulatory Commission (NRC) in support of the NUREG-1150 program. The XSOR codes are fast running parametric codes and are used as surrogates for detailed mechanistic codes. The XSOR codes also provide the capability to explore the phenomena and their uncertainty which are not currently modeled by the mechanistic codes. The uncertainty distributions of input parameters may be used by an XSOR code to estimate the uncertainty of source terms.

  1. Permutation codes for sources.

    NASA Technical Reports Server (NTRS)

    Berger, T.; Jelinek, F.; Wolf, J. K.

    1972-01-01

    Source encoding techniques based on permutation codes are investigated. For a broad class of distortion measures it is shown that optimum encoding of a source permutation code is easy to instrument even for very long block lengths. Also, the nonparametric nature of permutation encoding is well suited to situations involving unknown source statistics. For the squared-error distortion measure a procedure for generating good permutation codes of a given rate and block length is described. The performance of such codes for a memoryless Gaussian source is compared both with the rate-distortion function bound and with the performance of various quantization schemes. The comparison reveals that permutation codes are asymptotically ideal for small rates and perform as well as the best entropy-coded quantizers presently known for intermediate rates. They can be made to compare favorably at high rates, too, provided the coding delay associated with extremely long block lengths is tolerable.
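
    The central trick of permutation source coding, transmitting only the rank ordering of each block and reconstructing with a fixed sorted set of amplitude levels, can be sketched as follows. The levels here are arbitrary; in the paper they are chosen to minimize distortion for the source statistics:

```python
def perm_encode(block):
    """Describe a block by the permutation that sorts it (its rank pattern)."""
    return sorted(range(len(block)), key=lambda i: block[i])

def perm_decode(perm, levels):
    """Reconstruct an approximation: place the sorted amplitude levels,
    one per rank, back into the original sample positions."""
    out = [0.0] * len(perm)
    for rank, pos in enumerate(perm):
        out[pos] = levels[rank]
    return out
```

Only the permutation crosses the channel, so the rate is fixed by the block length regardless of the sample amplitudes, and no knowledge of the source statistics is needed at the encoder.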

  2. DLLExternalCode

    SciTech Connect

    Greg Flach, Frank Smith

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.
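
    The linking pattern the abstract describes (serialize inputs to a file, launch the external application, parse its output files) looks roughly like this in Python. The file formats, argument order, and executable below are placeholder assumptions; the real DLL follows GoldSim's calling conventions, not these:

```python
import os
import subprocess
import tempfile

def run_external(inputs, exe, workdir=None):
    """Write `inputs` to a file, run `exe`, and read numeric outputs back.

    Assumes the external tool is invoked as `exe <infile> <outfile>` and that
    both files use simple `name = value` lines -- purely illustrative.
    """
    workdir = workdir or tempfile.mkdtemp()
    infile = os.path.join(workdir, "model.in")
    outfile = os.path.join(workdir, "model.out")
    with open(infile, "w") as f:
        for name, value in inputs.items():
            f.write(f"{name} = {value}\n")
    subprocess.run([exe, infile, outfile], check=True)  # launch the external code
    with open(outfile) as f:
        return [float(line.split("=")[1]) for line in f if "=" in line]
```

As a smoke test, a file-copying command such as `cp` can stand in for a real solver, echoing the inputs straight back as outputs.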

  3. DLLExternalCode

    Energy Science and Technology Software Center (ESTSC)

    2014-05-14

    DLLExternalCode is a general dynamic-link library (DLL) interface for linking GoldSim (www.goldsim.com) with external codes. The overall concept is to use GoldSim as top level modeling software with interfaces to external codes for specific calculations. The DLLExternalCode DLL that performs the linking function is designed to take a list of code inputs from GoldSim, create an input file for the external application, run the external code, and return a list of outputs, read from files created by the external application, back to GoldSim. Instructions for creating the input file, running the external code, and reading the output are contained in an instructions file that is read and interpreted by the DLL.

  4. Adaptive entropy coded subband coding of images.

    PubMed

    Kim, Y H; Modestino, J W

    1992-01-01

    The authors describe a design approach, called 2-D entropy-constrained subband coding (ECSBC), based upon recently developed 2-D entropy-constrained vector quantization (ECVQ) schemes. The output indexes of the embedded quantizers are further compressed by use of noiseless entropy coding schemes, such as Huffman or arithmetic codes, resulting in variable-rate outputs. Depending upon the specific configurations of the ECVQ and the ECPVQ over the subbands, many different types of SBC schemes can be derived within the generic 2-D ECSBC framework. Among these, the authors concentrate on three representative types of 2-D ECSBC schemes and provide relative performance evaluations. They also describe an adaptive buffer instrumented version of 2-D ECSBC, called 2-D ECSBC/AEC, for use with fixed-rate channels which completely eliminates buffer overflow/underflow problems. This adaptive scheme achieves performance quite close to the corresponding ideal 2-D ECSBC system. PMID:18296138
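
    The noiseless entropy-coding stage mentioned above can be illustrated with a self-contained Huffman code builder (arithmetic coding, which the authors also allow, is more involved):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman codebook (symbol -> bit string) from a symbol sequence."""
    freq = Counter(symbols)
    if len(freq) == 1:                      # degenerate one-symbol source
        return {next(iter(freq)): '0'}
    # Heap entries: (weight, unique tiebreak, partial codebook).
    heap = [(n, i, {s: ''}) for i, (s, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    while len(heap) > 1:
        w1, _, book1 = heapq.heappop(heap)  # two lightest subtrees
        w2, tb, book2 = heapq.heappop(heap)
        merged = {s: '0' + c for s, c in book1.items()}
        merged.update({s: '1' + c for s, c in book2.items()})
        heapq.heappush(heap, (w1 + w2, tb, merged))
    return heap[0][2]
```

Applied to quantizer output indexes, frequent indexes get short bit strings and rare ones get long strings, which is exactly what makes the subband coder's output rate variable.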

  5. Overview of Code Verification

    NASA Technical Reports Server (NTRS)

    1983-01-01

    The verified code for the SIFT Executive is not the code that executes on the SIFT system as delivered. The running versions of the SIFT Executive contain optimizations and special code relating to the messy interface to the hardware broadcast interface and to packing of data to conserve space in the store of the BDX930 processors. The running code was in fact developed prior to and without consideration of any mechanical verification. This was regarded as necessary experimentation with the SIFT hardware and special purpose Pascal compiler. The Pascal code sections cover: the selection of a schedule from the global executive broadcast, scheduling, dispatching, three way voting, and error reporting actions of the SIFT Executive. Not included in these sections of Pascal code are: the global executive, five way voting, clock synchronization, interactive consistency, low level broadcasting, and program loading, initialization, and schedule construction.

  6. Mechanical code comparator

    DOEpatents

    Peter, Frank J.; Dalton, Larry J.; Plummer, David W.

    2002-01-01

    A new class of mechanical code comparators is described which have broad potential for application in safety, surety, and security applications. These devices can be implemented as micro-scale electromechanical systems that isolate a secure or otherwise controlled device until an access code is entered. This access code is converted into a series of mechanical inputs to the mechanical code comparator, which compares the access code to a pre-input combination, entered previously into the mechanical code comparator by an operator at the system security control point. These devices provide extremely high levels of robust security. Being totally mechanical in operation, an access control system properly based on such devices cannot be circumvented by software attack alone.

  7. Industrial Code Development

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1991-01-01

    The industrial codes will consist of modules of 2-D and simplified 2-D or 1-D codes, intended for expeditious parametric studies, analysis, and design of a wide variety of seals. Integration into a unified system is accomplished by the industrial Knowledge Based System (KBS), which will also provide user friendly interaction, contact sensitive and hypertext help, design guidance, and an expandable database. The types of analysis to be included with the industrial codes are interfacial performance (leakage, load, stiffness, friction losses, etc.), thermoelastic distortions, and dynamic response to rotor excursions. Among the first three codes to be completed and presently being incorporated into the KBS are the incompressible cylindrical code, ICYL, and the compressible cylindrical code, GCYL.

  8. Generating code adapted for interlinking legacy scalar code and extended vector code

    SciTech Connect

    Gschwind, Michael K

    2013-06-04

    Mechanisms for intermixing code are provided. Source code is received for compilation using an extended Application Binary Interface (ABI) that extends a legacy ABI and uses a different register configuration than the legacy ABI. First compiled code is generated based on the source code, the first compiled code comprising code for accommodating the difference in register configurations used by the extended ABI and the legacy ABI. The first compiled code and second compiled code are intermixed to generate intermixed code, the second compiled code being compiled code that uses the legacy ABI. The intermixed code comprises at least one call instruction that is one of a call from the first compiled code to the second compiled code or a call from the second compiled code to the first compiled code. The code for accommodating the difference in register configurations is associated with the at least one call instruction.

  9. Doubled Color Codes

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey

    Combining protection from noise and computational universality is one of the biggest challenges in fault-tolerant quantum computing. Topological stabilizer codes such as the 2D surface code can tolerate a high level of noise, but implementing logical gates, especially non-Clifford ones, requires a prohibitively large overhead due to the need for state distillation. In this talk I will describe a new family of 2D quantum error correcting codes that enable a transversal implementation of all logical gates required for universal quantum computing. Transversal logical gates (TLGs) are encoded operations that can be realized by applying some single-qubit rotation to each physical qubit. TLGs are highly desirable since they introduce no overhead and do not spread errors. It has been known before that a quantum code can have only a finite number of TLGs, which rules out computational universality. Our scheme circumvents this no-go result by combining TLGs of two different quantum codes using the gauge-fixing method pioneered by Paetznick and Reichardt. The first code, closely related to the 2D color code, enables a transversal implementation of all single-qubit Clifford gates such as the Hadamard gate and the π/2 phase shift. The second code, which we call a doubled color code, provides a transversal T-gate, where T is the π/4 phase shift. The Clifford+T gate set is known to be computationally universal. The two codes can be laid out on the honeycomb lattice with two qubits per site such that the code conversion requires parity measurements for six-qubit Pauli operators supported on faces of the lattice. I will also describe numerical simulations of logical Clifford+T circuits encoded by the distance-3 doubled color code. Based on joint work with Andrew Cross.

  10. Phonological coding during reading

    PubMed Central

    Leinenger, Mallorie

    2014-01-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound-based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early (pre-lexical) or that phonological codes come online late (post-lexical)) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model (Van Orden, 1987), dual-route model (e.g., Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001), parallel distributed processing model (Seidenberg & McClelland, 1989)) are discussed. PMID:25150679

  11. Phonological coding during reading.

    PubMed

    Leinenger, Mallorie

    2014-11-01

    The exact role that phonological coding (the recoding of written, orthographic information into a sound based code) plays during silent reading has been extensively studied for more than a century. Despite the large body of research surrounding the topic, varying theories as to the time course and function of this recoding still exist. The present review synthesizes this body of research, addressing the topics of time course and function in tandem. The varying theories surrounding the function of phonological coding (e.g., that phonological codes aid lexical access, that phonological codes aid comprehension and bolster short-term memory, or that phonological codes are largely epiphenomenal in skilled readers) are first outlined, and the time courses that each maps onto (e.g., that phonological codes come online early [prelexical] or that phonological codes come online late [postlexical]) are discussed. Next the research relevant to each of these proposed functions is reviewed, discussing the varying methodologies that have been used to investigate phonological coding (e.g., response time methods, reading while eye-tracking or recording EEG and MEG, concurrent articulation) and highlighting the advantages and limitations of each with respect to the study of phonological coding. In response to the view that phonological coding is largely epiphenomenal in skilled readers, research on the use of phonological codes in prelingually, profoundly deaf readers is reviewed. Finally, implications for current models of word identification (activation-verification model, Van Orden, 1987; dual-route model, e.g., M. Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001; parallel distributed processing model, Seidenberg & McClelland, 1989) are discussed. PMID:25150679

  12. Industrial Computer Codes

    NASA Technical Reports Server (NTRS)

    Shapiro, Wilbur

    1996-01-01

    This is an overview of new and updated industrial codes for seal design and testing. GCYLT (gas cylindrical seals -- turbulent), SPIRALI (spiral-groove seals -- incompressible), KTK (knife to knife) Labyrinth Seal Code, and DYSEAL (dynamic seal analysis) are covered. GCYLT uses G-factors for Poiseuille and Couette turbulence coefficients. SPIRALI is updated to include turbulence and inertia, but maintains the narrow-groove theory. The KTK labyrinth seal code handles straight or stepped seals. DYSEAL provides dynamics for the seal geometry.

  13. Tokamak Systems Code

    SciTech Connect

    Reid, R.L.; Barrett, R.J.; Brown, T.G.; Gorker, G.E.; Hooper, R.J.; Kalsi, S.S.; Metzler, D.H.; Peng, Y.K.M.; Roth, K.E.; Spampinato, P.T.

    1985-03-01

    The FEDC Tokamak Systems Code calculates tokamak performance, cost, and configuration as a function of plasma engineering parameters. This version of the code models experimental tokamaks. It does not currently consider tokamak configurations that generate electrical power or incorporate breeding blankets. The code has a modular (or subroutine) structure to allow independent modeling for each major tokamak component or system. A primary benefit of modularization is that a component module may be updated without disturbing the remainder of the systems code as long as the input to or output from the module remains unchanged.

  14. Topological subsystem codes

    SciTech Connect

    Bombin, H.

    2010-03-15

    We introduce a family of two-dimensional (2D) topological subsystem quantum error-correcting codes. The gauge group is generated by two-local Pauli operators, so that two-local measurements are enough to recover the error syndrome. We study the computational power of code deformation in these codes and show that boundaries cannot be introduced in the usual way. In addition, we give a general mapping connecting suitable classical statistical mechanical models to optimal error correction in subsystem stabilizer codes that suffer from depolarizing noise.

  15. MORSE Monte Carlo code

    SciTech Connect

    Cramer, S.N.

    1984-01-01

    The MORSE code is a large general-use multigroup Monte Carlo code system. Although no claims can be made regarding its superiority in either theoretical details or Monte Carlo techniques, MORSE has been, since its inception at ORNL in the late 1960s, the most widely used Monte Carlo radiation transport code. The principal reason for this popularity is that MORSE is relatively easy to use, independent of any installation or distribution center, and it can be easily customized to fit almost any specific need. Features of the MORSE code are described.

  16. Expander chunked codes

    NASA Astrophysics Data System (ADS)

    Tang, Bin; Yang, Shenghao; Ye, Baoliu; Yin, Yitong; Lu, Sanglu

    2015-12-01

    Chunked codes are efficient random linear network coding (RLNC) schemes with low computational cost, where the input packets are encoded into small chunks (i.e., subsets of the coded packets). During the network transmission, RLNC is performed within each chunk. In this paper, we first introduce a simple transfer matrix model to characterize the transmission of chunks and derive some basic properties of the model to facilitate the performance analysis. We then focus on the design of overlapped chunked codes, a class of chunked codes whose chunks are non-disjoint subsets of input packets, which are of special interest since they can be encoded with negligible computational cost and in a causal fashion. We propose expander chunked (EC) codes, the first class of overlapped chunked codes that have an analyzable performance, where the construction of the chunks makes use of regular graphs. Numerical and simulation results show that in some practical settings, EC codes can achieve rates within 91 to 97 % of the optimum and outperform the state-of-the-art overlapped chunked codes significantly.
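    The chunk-level mechanics behind such schemes can be illustrated with a toy RLNC sketch over GF(2). Note this is an illustrative assumption, not the EC construction itself: the paper's overlapping chunks are built from regular expander graphs and RLNC is typically done over larger fields, whereas the sketch below encodes and decodes a single chunk, with packets modeled as small integers and coefficient vectors as bitmasks.

```python
import random

def encode_chunk(packets, rng):
    """One RLNC-coded symbol over GF(2): XOR of a random nonzero subset
    of the packets in a single chunk (packets modeled as integers)."""
    mask = rng.randrange(1, 1 << len(packets))   # nonzero coefficient vector
    coded = 0
    for i, pkt in enumerate(packets):
        if (mask >> i) & 1:
            coded ^= pkt
    return mask, coded

def decode_chunk(received, k):
    """Gaussian elimination over GF(2). `received` holds (mask, value)
    pairs; returns the k original packets, or None if rank < k."""
    basis = {}                                   # pivot bit -> (mask, value)
    for mask, val in received:
        while mask:
            p = mask.bit_length() - 1
            if p not in basis:                   # new pivot: store reduced row
                basis[p] = (mask, val)
                break
            m, v = basis[p]                      # else eliminate pivot bit
            mask ^= m
            val ^= v
    if len(basis) < k:
        return None                              # chunk not yet decodable
    for bit in sorted(basis):                    # back-substitute to identity
        m, v = basis[bit]
        for b2 in basis:
            if b2 != bit and (basis[b2][0] >> bit) & 1:
                basis[b2] = (basis[b2][0] ^ m, basis[b2][1] ^ v)
    return [basis[i][1] for i in range(k)]
```

    In an overlapped chunked code, two chunks share some input packets, so decoding one chunk lets the shared packets be substituted into the other; the paper's contribution is choosing those overlaps via regular graphs so that this chain reaction is analyzable.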

  17. FAA Smoke Transport Code

    SciTech Connect

    2006-10-27

    FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.

  18. Code-Switching or Code-Mixing?

    ERIC Educational Resources Information Center

    Thelander, Mats

    1976-01-01

    An attempt to apply Blom's and Gumperz' model of code-switching to a small Swedish community in northern Sweden, Burtrask. The informants spoke standard Swedish, the Burtrask dialect, and a third variety which was a combination of the two. (CFM)

  19. Fast Coding Unit Encoding Mechanism for Low Complexity Video Coding

    PubMed Central

    Wu, Yueying; Jia, Kebin; Gao, Guandong

    2016-01-01

    In high efficiency video coding (HEVC), coding tree contributes to excellent compression performance. However, coding tree brings extremely high computational complexity. Innovative works for improving coding tree to further reduce encoding time are stated in this paper. A novel low complexity coding tree mechanism is proposed for HEVC fast coding unit (CU) encoding. Firstly, this paper makes an in-depth study of the relationship among CU distribution, quantization parameter (QP) and content change (CC). Secondly, a CU coding tree probability model is proposed for modeling and predicting CU distribution. Eventually, a CU coding tree probability update is proposed, aiming to address probabilistic model distortion problems caused by CC. Experimental results show that the proposed low complexity CU coding tree mechanism significantly reduces encoding time by 27% for lossy coding and 42% for visually lossless coding and lossless coding. The proposed low complexity CU coding tree mechanism devotes to improving coding performance under various application conditions. PMID:26999741
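    The idea of an online CU-split probability model with an update for content change can be sketched as follows. Everything here is a hypothetical illustration: the class name, the running-mean update, and the decision thresholds are invented for exposition and are not the authors' model.

```python
class CuSplitModel:
    """Hypothetical sketch: per-depth probability that a CU splits.
    Confident predictions skip the full rate-distortion search."""

    def __init__(self, depths=4, prior=0.5, decay=0.9):
        self.p = [prior] * depths      # P(split | depth)
        self.n = [1.0] * depths        # effective sample counts
        self.decay = decay             # forgetting factor on content change

    def predict(self, depth, hi=0.9, lo=0.1):
        """Return 'split', 'no-split', or 'full-rdo' when the model is unsure."""
        p = self.p[depth]
        if p >= hi:
            return 'split'
        if p <= lo:
            return 'no-split'
        return 'full-rdo'

    def update(self, depth, did_split):
        """Running mean over observed CU decisions at this depth."""
        self.n[depth] += 1.0
        self.p[depth] += (float(did_split) - self.p[depth]) / self.n[depth]

    def content_change(self):
        """Shrink sample counts so new statistics dominate after a scene change,
        countering the probabilistic-model distortion the paper addresses."""
        self.n = [max(1.0, n * self.decay) for n in self.n]
```

    The shape of the mechanism matches the abstract's three pieces: a probability model over CU distribution, a prediction step that saves encoding time, and an update step triggered by content change.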

  20. Dress Codes for Teachers?

    ERIC Educational Resources Information Center

    Million, June

    2004-01-01

    In this article, the author discusses an e-mail survey of principals from across the country regarding whether or not their school had a formal staff dress code. The results indicate that most did not have a formal dress code, but agreed that professional dress for teachers was not only necessary, but showed respect for the school and had a…

  1. Synthesizing Certified Code

    NASA Technical Reports Server (NTRS)

    Whalen, Michael; Schumann, Johann; Fischer, Bernd

    2002-01-01

    Code certification is a lightweight approach to demonstrate software quality on a formal level. Its basic idea is to require producers to provide formal proofs that their code satisfies certain quality properties. These proofs serve as certificates which can be checked independently. Since code certification uses the same underlying technology as program verification, it also requires many detailed annotations (e.g., loop invariants) to make the proofs possible. However, manually adding these annotations to the code is time-consuming and error-prone. We address this problem by combining code certification with automatic program synthesis. We propose an approach to generate simultaneously, from a high-level specification, code and all annotations required to certify the generated code. Here, we describe a certification extension of AUTOBAYES, a synthesis tool which automatically generates complex data analysis programs from compact specifications. AUTOBAYES contains sufficient high-level domain knowledge to generate detailed annotations. This allows us to use a general-purpose verification condition generator to produce a set of proof obligations in first-order logic. The obligations are then discharged using the automated theorem prover E-SETHEO. We demonstrate our approach by certifying operator safety for a generated iterative data classification program without manual annotation of the code.

  2. Legacy Code Modernization

    NASA Technical Reports Server (NTRS)

    Hribar, Michelle R.; Frumkin, Michael; Jin, Haoqiang; Waheed, Abdul; Yan, Jerry; Saini, Subhash (Technical Monitor)

    1998-01-01

    Over the past decade, high performance computing has evolved rapidly; systems based on commodity microprocessors have been introduced in quick succession from at least seven vendors/families. Porting codes to every new architecture is a difficult problem; in particular, here at NASA, there are many large CFD applications that are very costly to port to new machines by hand. The LCM ("Legacy Code Modernization") Project is the development of an integrated parallelization environment (IPE) which performs the automated mapping of legacy CFD (Fortran) applications to state-of-the-art high performance computers. While most projects to port codes focus on the parallelization of the code, we consider porting to be an iterative process consisting of several steps: 1) code cleanup, 2) serial optimization,3) parallelization, 4) performance monitoring and visualization, 5) intelligent tools for automated tuning using performance prediction and 6) machine specific optimization. The approach for building this parallelization environment is to build the components for each of the steps simultaneously and then integrate them together. The demonstration will exhibit our latest research in building this environment: 1. Parallelizing tools and compiler evaluation. 2. Code cleanup and serial optimization using automated scripts 3. Development of a code generator for performance prediction 4. Automated partitioning 5. Automated insertion of directives. These demonstrations will exhibit the effectiveness of an automated approach for all the steps involved with porting and tuning a legacy code application for a new architecture.

  3. Lichenase and coding sequences

    SciTech Connect

    Li, Xin-Liang; Ljungdahl, Lars G.; Chen, Huizhong

    2000-08-15

    The present invention provides a fungal lichenase, i.e., an endo-1,3-1,4-β-D-glucanohydrolase, its coding sequence, recombinant DNA molecules comprising the lichenase coding sequences, recombinant host cells and methods for producing same. The present lichenase is from Orpinomyces PC-2.

  4. Insurance billing and coding.

    PubMed

    Napier, Rebecca H; Bruelheide, Lori S; Demann, Eric T K; Haug, Richard H

    2008-07-01

    The purpose of this article is to highlight the importance of understanding various numeric and alpha-numeric codes for accurately billing dental and medically related services to private pay or third-party insurance carriers. In the United States, common dental terminology (CDT) codes are most commonly used by dentists to submit claims, whereas current procedural terminology (CPT) and International Classification of Diseases, Ninth Revision, Clinical Modification (ICD.9.CM) codes are more commonly used by physicians to bill for their services. The CPT and ICD.9.CM coding systems complement each other in that CPT codes provide the procedure and service information and ICD.9.CM codes provide the reason or rationale for a particular procedure or service. These codes are more commonly used for "medical necessity" determinations, and general dentists and specialists who routinely perform care, including trauma-related care, biopsies, and dental treatment as a result of or in anticipation of a cancer-related treatment, are likely to use these codes. Claim submissions for care provided can be completed electronically or by means of paper forms. PMID:18501731

  5. Code blue: seizures.

    PubMed

    Hoerth, Matthew T; Drazkowski, Joseph F; Noe, Katherine H; Sirven, Joseph I

    2011-06-01

    Eyewitnesses frequently perceive seizures as life threatening. If an event occurs on the hospital premises, a "code blue" can be called, which consumes considerable resources. The purpose of this study was to determine the frequency and characteristics of code blue calls for seizures and seizure mimickers. A retrospective review of a code blue log from 2001 through 2008 identified 50 seizure-like events, representing 5.3% of all codes. Twenty-eight (56%) occurred in inpatients; the other 22 (44%) events involved visitors or employees on the hospital premises. Eighty-six percent of the events were epileptic seizures. Seizure mimickers, particularly psychogenic nonepileptic seizures, were more common in the nonhospitalized group. Only five (17.9%) inpatients had a known diagnosis of epilepsy, compared with 17 (77.3%) of the nonhospitalized patients. This retrospective survey provides insights into how code blues are called on hospitalized versus nonhospitalized patients for seizure-like events. PMID:21546315

  6. Combustion chamber analysis code

    NASA Technical Reports Server (NTRS)

    Przekwas, A. J.; Lai, Y. G.; Krishnan, A.; Avva, R. K.; Giridharan, M. G.

    1993-01-01

    A three-dimensional, time dependent, Favre averaged, finite volume Navier-Stokes code has been developed to model compressible and incompressible flows (with and without chemical reactions) in liquid rocket engines. The code has a non-staggered formulation with generalized body-fitted-coordinates (BFC) capability. Higher order differencing methodologies such as MUSCL and Osher-Chakravarthy schemes are available. Turbulent flows can be modeled using any of the five turbulent models present in the code. A two-phase, two-liquid, Lagrangian spray model has been incorporated into the code. Chemical equilibrium and finite rate reaction models are available to model chemically reacting flows. The discrete ordinate method is used to model effects of thermal radiation. The code has been validated extensively against benchmark experimental data and has been applied to model flows in several propulsion system components of the SSME and the STME.

  7. Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; DuPrie, K.; Berriman, B.; Hanisch, R. J.; Mink, J.; Teuben, P. J.

    2013-10-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, is a free on-line registry for source codes of interest to astronomers and astrophysicists. The library is housed on the discussion forum for Astronomy Picture of the Day (APOD) and can be accessed at http://ascl.net. The ASCL has a comprehensive listing that covers a significant number of the astrophysics source codes used to generate results published in or submitted to refereed journals and continues to grow. The ASCL currently has entries for over 500 codes; its records are citable and are indexed by ADS. The editors of the ASCL and members of its Advisory Committee were on hand at a demonstration table in the ADASS poster room to present the ASCL, accept code submissions, show how the ASCL is starting to be used by the astrophysics community, and take questions on and suggestions for improving the resource.

  8. Pyramid image codes

    NASA Technical Reports Server (NTRS)

    Watson, Andrew B.

    1990-01-01

    All vision systems, both human and machine, transform the spatial image into a coded representation. Particular codes may be optimized for efficiency or to extract useful image features. Researchers explored image codes based on primary visual cortex in man and other primates. Understanding these codes will advance the art in image coding, autonomous vision, and computational human factors. In cortex, imagery is coded by features that vary in size, orientation, and position. Researchers have devised a mathematical model of this transformation, called the Hexagonal oriented Orthogonal quadrature Pyramid (HOP). In a pyramid code, features are segregated by size into layers, with fewer features in the layers devoted to large features. Pyramid schemes provide scale invariance, and are useful for coarse-to-fine searching and for progressive transmission of images. The HOP Pyramid is novel in three respects: (1) it uses a hexagonal pixel lattice, (2) it uses oriented features, and (3) it accurately models most of the prominent aspects of primary visual cortex. The transform uses seven basic features (kernels), which may be regarded as three oriented edges, three oriented bars, and one non-oriented blob. Application of these kernels to non-overlapping seven-pixel neighborhoods yields six oriented, high-pass pyramid layers, and one low-pass (blob) layer.
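    The generic pyramid idea described above, with detail layers plus a coarse low-pass "blob" layer and fewer coefficients at coarser scales, can be sketched on an ordinary square lattice. This stand-in is an assumption for illustration only: the HOP transform itself uses a hexagonal lattice and seven oriented kernels, which are not reproduced here.

```python
def downsample(img):
    """2x2 block average: one low-pass pyramid step (image = list of rows)."""
    h, w = len(img), len(img[0])
    return [[(img[y][x] + img[y][x + 1] + img[y + 1][x] + img[y + 1][x + 1]) / 4.0
             for x in range(0, w, 2)] for y in range(0, h, 2)]

def build_pyramid(img, levels):
    """Laplacian-style pyramid: per-level detail layers plus a final
    low-pass residue (the 'blob' layer in the cortical analogy)."""
    layers = []
    for _ in range(levels):
        lo = downsample(img)
        # detail layer = image minus up-sampled low-pass (nearest-neighbour)
        detail = [[img[y][x] - lo[y // 2][x // 2] for x in range(len(img[0]))]
                  for y in range(len(img))]
        layers.append(detail)
        img = lo
    layers.append(img)          # coarsest layer has the fewest coefficients
    return layers
```

    Because each detail layer is exactly the difference between a level and its up-sampled low-pass, the pyramid is invertible: up-sample the coarsest layer and add the detail layers back in to recover the image, which is what makes such codes usable for progressive transmission.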

  9. Report number codes

    SciTech Connect

    Nelson, R.N.

    1985-05-01

    This publication lists all report number codes processed by the Office of Scientific and Technical Information. The report codes are substantially based on the American National Standards Institute, Standard Technical Report Number (STRN)-Format and Creation Z39.23-1983. The Standard Technical Report Number (STRN) provides one of the primary methods of identifying a specific technical report. The STRN consists of two parts: The report code and the sequential number. The report code identifies the issuing organization, a specific program, or a type of document. The sequential number, which is assigned in sequence by each report issuing entity, is not included in this publication. Part I of this compilation is alphabetized by report codes followed by issuing installations. Part II lists the issuing organization followed by the assigned report code(s). In both Parts I and II, the names of issuing organizations appear for the most part in the form used at the time the reports were issued. However, for some of the more prolific installations which have had name changes, all entries have been merged under the current name.
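    The two-part STRN structure described above can be illustrated with a small parser. The separator heuristic, splitting at the final hyphen when the trailing field contains a digit, is a simplifying assumption for illustration, not the full ANSI Z39.23 grammar.

```python
def split_strn(strn):
    """Split a Standard Technical Report Number into its two parts:
    the report code and the sequential number (heuristic sketch)."""
    code, _, seq = strn.rpartition('-')
    if code and any(ch.isdigit() for ch in seq):
        return code, seq       # e.g. issuing-organization code + number
    return strn, ''            # no sequential number found
```

    For example, a hypothetical STRN like "NASA-TM-87654" splits into the report code "NASA-TM" (identifying the issuing organization and document type) and the sequential number "87654".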

  10. Reflections on Post-16 Strategies in European Countries. Interim Report of the Leonardo da Vinci/Multiplier Effect Project III.3.a. Priority 2: Forging Links between Educational Establishments and Enterprises (1997-2000) ID 27009. Working Papers, No. 9.

    ERIC Educational Resources Information Center

    Stenstrom, Marja-Leena, Ed.

    This four-part publication contains 19 papers on educational practices and promises for post-16 education in European countries. Part I, the introduction, contains these three papers: "Sharpening Post-16 Education Strategies: Building on the Results of the Previous Projects" (Johanna Lasonen); "'Parity of Esteem' and 'Integrated…

  11. FORTRAN code-evaluation system

    NASA Technical Reports Server (NTRS)

    Capps, J. D.; Kleir, R.

    1977-01-01

    An automated code-evaluation system can be used to detect coding errors and unsound coding practices in any ANSI FORTRAN IV source code before they can cause execution-time malfunctions. The system concentrates on acceptable FORTRAN code features that are likely to produce undesirable results.

  12. Compressible Astrophysics Simulation Code

    Energy Science and Technology Software Center (ESTSC)

    2007-07-18

    This is an astrophysics simulation code involving a radiation diffusion module developed at LLNL coupled to compressible hydrodynamics and adaptive mesh infrastructure developed at LBNL. One intended application is neutrino diffusion in core-collapse supernovae.

  13. ORECA CODE ASSESSMENT.

    SciTech Connect

    KROEGER,P.G.

    1980-07-01

    Results of an assessment of the ORECA code are presented. In particular, it was found that, in the case of loss of forced flow circulation, the predicted peak core temperatures are very sensitive to the mean gas temperatures used in the evaluation of the pressure drop terms. Some potential shortcomings of the conduction algorithm for specific applications are discussed. The results of these efforts have been taken into consideration in the current version of the ORECA code.

  14. Seals Flow Code Development

    NASA Technical Reports Server (NTRS)

    1991-01-01

    In recognition of a deficiency in the current modeling capability for seals, an effort was established by NASA to develop verified computational fluid dynamic concepts, codes, and analyses for seals. The objectives were to develop advanced concepts for the design and analysis of seals, to effectively disseminate the information to potential users by way of annual workshops, and to provide experimental verification for the models and codes under a wide range of operating conditions.

  15. Robust Nonlinear Neural Codes

    NASA Astrophysics Data System (ADS)

    Yang, Qianli; Pitkow, Xaq

    2015-03-01

    Most interesting natural sensory stimuli are encoded in the brain in a form that can only be decoded nonlinearly. But despite being a core function of the brain, nonlinear population codes are rarely studied and poorly understood. Interestingly, the few existing models of nonlinear codes are inconsistent with known architectural features of the brain. In particular, these codes have information content that scales with the size of the cortical population, even if that violates the data processing inequality by exceeding the amount of information entering the sensory system. Here we provide a valid theory of nonlinear population codes by generalizing recent work on information-limiting correlations in linear population codes. Although these generalized, nonlinear information-limiting correlations bound the performance of any decoder, they also make decoding more robust to suboptimal computation, allowing many suboptimal decoders to achieve nearly the same efficiency as an optimal decoder. Although these correlations are extremely difficult to measure directly, particularly for nonlinear codes, we provide a simple, practical test by which one can use choice-related activity in small populations of neurons to determine whether decoding is suboptimal or optimal and limited by correlated noise. We conclude by describing an example computation in the vestibular system where this theory applies. QY and XP were supported by a grant from the McNair Foundation.

  16. Coded aperture compressive temporal imaging.

    PubMed

    Llull, Patrick; Liao, Xuejun; Yuan, Xin; Yang, Jianbo; Kittle, David; Carin, Lawrence; Sapiro, Guillermo; Brady, David J

    2013-05-01

    We use mechanical translation of a coded aperture for code division multiple access compression of video. We discuss the compressed video's temporal resolution and present experimental results for reconstructions of > 10 frames of temporal data per coded snapshot. PMID:23669910

  17. Prioritized LT Codes

    NASA Technical Reports Server (NTRS)

    Woo, Simon S.; Cheng, Michael K.

    2011-01-01

    The original Luby Transform (LT) coding scheme is extended to account for data transmissions where some information symbols in a message block are more important than others. Prioritized LT codes provide unequal error protection (UEP) of data on an erasure channel by modifying the original LT encoder. The prioritized algorithm improves high-priority data protection without penalizing low-priority data recovery. Moreover, low-latency decoding is also obtained for high-priority data due to fast encoding. Prioritized LT codes only require a slight change in the original encoding algorithm, and no changes at all at the decoder. Hence, with a small complexity increase in the LT encoder, an improved UEP and low-decoding latency performance for high-priority data can be achieved. LT encoding partitions a data stream into fixed-sized message blocks each with a constant number of information symbols. To generate a code symbol from the information symbols in a message, the Robust-Soliton probability distribution is first applied in order to determine the number of information symbols to be used to compute the code symbol. Then, the specific information symbols are chosen uniform randomly from the message block. Finally, the selected information symbols are XORed to form the code symbol. The Prioritized LT code construction includes an additional restriction that code symbols formed by a relatively small number of XORed information symbols select some of these information symbols from the pool of high-priority data. Once high-priority data are fully covered, encoding continues with the conventional LT approach where code symbols are generated by selecting information symbols from the entire message block including all different priorities. Therefore, if code symbols derived from high-priority data experience an unusual high number of erasures, Prioritized LT codes can still reliably recover both high- and low-priority data. This hybrid approach decides not only "how to encode" but also "what to encode" to achieve UEP. Another advantage of the priority encoding process is that the majority of high-priority data can be decoded sooner since only a small number of code symbols are required to reconstruct high-priority data. This approach increases the likelihood that high-priority data is decoded first over low-priority data. The Prioritized LT code scheme achieves an improvement in high-priority data decoding performance as well as overall information recovery without penalizing the decoding of low-priority data, assuming high-priority data is no more than half of a message block. The cost is in the additional complexity required in the encoder. If extra computation resource is available at the transmitter, image, voice, and video transmission quality in terrestrial and space communications can benefit from accurate use of redundancy in protecting data with varying priorities.
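    The encoding steps described above can be sketched as follows. The Robust-Soliton parameters, the `low_degree` threshold, and the simplified "always draw one high-priority symbol" rule are illustrative assumptions; the actual scheme switches back to conventional LT selection once high-priority data are fully covered.

```python
import math
import random

def robust_soliton(k, c=0.1, delta=0.5):
    """Robust-Soliton degree distribution over degrees 1..k (normalised)."""
    s = c * math.log(k / delta) * math.sqrt(k)
    rho = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
    tau = [s / (k * d) if d < k / s else 0.0 for d in range(1, k + 1)]
    pivot = int(round(k / s))
    if 1 <= pivot <= k:
        tau[pivot - 1] = s * math.log(s / delta) / k
    mu = [r + t for r, t in zip(rho, tau)]
    z = sum(mu)
    return [m / z for m in mu]

def lt_encode_symbol(block, dist, rng, high_priority=0, low_degree=3):
    """One LT code symbol as (sorted index list, XOR value). Prioritized
    variant: symbols of degree <= low_degree draw at least one information
    symbol from the first `high_priority` positions of the message block."""
    k = len(block)
    d = rng.choices(range(1, k + 1), weights=dist)[0]   # sample the degree
    if high_priority and d <= low_degree:
        first = rng.randrange(high_priority)            # guaranteed high-priority pick
        rest = rng.sample([i for i in range(k) if i != first], d - 1)
        idx = [first] + rest
    else:
        idx = rng.sample(range(k), d)                   # conventional LT selection
    value = 0
    for i in idx:                                       # XOR selected symbols
        value ^= block[i]
    return sorted(idx), value
```

    Because every low-degree symbol touches the high-priority prefix, high-priority data accumulate coverage from the cheapest-to-decode symbols, which is what yields both the UEP and the low-latency decoding claimed above.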

  18. Error coding simulations

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1993-01-01

    There are various elements such as radio frequency interference (RFI) which may induce errors in data being transmitted via a satellite communication link. When a transmission is affected by interference or other error-causing elements, the transmitted data becomes indecipherable. It becomes necessary to implement techniques to recover from these disturbances. The objective of this research is to develop software which simulates error control circuits and evaluate the performance of these modules in various bit error rate environments. The results of the evaluation provide the engineer with information which helps determine the optimal error control scheme. The Consultative Committee for Space Data Systems (CCSDS) recommends the use of Reed-Solomon (RS) and convolutional encoders and Viterbi and RS decoders for error correction. The use of forward error correction techniques greatly reduces the received signal to noise needed for a certain desired bit error rate. The use of concatenated coding, e.g. inner convolutional code and outer RS code, provides even greater coding gain. The 16-bit cyclic redundancy check (CRC) code is recommended by CCSDS for error detection.
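    The 16-bit CRC recommended by CCSDS for error detection uses the CCITT generator polynomial 0x1021 with the register preset to all ones; a minimal bitwise sketch (table-driven implementations are used in practice for speed):

```python
def crc16_ccitt(data, poly=0x1021, init=0xFFFF):
    """Bitwise CRC-16/CCITT-FALSE (polynomial 0x1021, register preset to
    all ones), as recommended by CCSDS for frame error detection."""
    crc = init
    for byte in data:
        crc ^= byte << 8                     # fold the next byte into the top
        for _ in range(8):
            if crc & 0x8000:                 # top bit set: shift and reduce
                crc = ((crc << 1) ^ poly) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc
```

    Appending the CRC big-endian to the message makes the extended frame check to zero, which is how the receiver detects transmission errors.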

  19. Coded source neutron imaging

    SciTech Connect

    Bingham, Philip R; Santos-Villalobos, Hector J

    2011-01-01

    Coded aperture techniques have been applied to neutron radiography to address limitations in neutron flux and resolution of neutron detectors in a system labeled coded source imaging (CSI). By coding the neutron source, a magnified imaging system is designed with small-spot-size aperture holes (10 and 100 μm) for improved resolution beyond the detector limits and with many holes in the aperture (50% open) to account for flux losses due to the small pinhole size. An introduction to neutron radiography and coded aperture imaging is presented. A system design is developed for a CSI system, with equations developed for the limitations on the system based on the coded image requirements and the neutron source characteristics of size and divergence. Simulation has been applied to the design using McStas to provide qualitative measures of performance with simulations of pinhole array objects, followed by a quantitative measure through simulation of a tilted edge and calculation of the modulation transfer function (MTF) from the line spread function. MTF results for both 100 μm and 10 μm aperture hole diameters show resolutions matching the hole diameters.
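
The final step, computing the MTF from a line spread function, amounts to a normalized Fourier magnitude; a generic sketch:

```python
import numpy as np

def mtf_from_lsf(lsf):
    """Modulation transfer function as the magnitude of the Fourier
    transform of the line spread function, normalized to 1 at DC."""
    m = np.abs(np.fft.rfft(lsf))
    return m / m[0]
```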

  20. Phase-coded pulse aperiodic transmitter coding

    NASA Astrophysics Data System (ADS)

    Virtanen, I. I.; Vierinen, J.; Lehtinen, M. S.

    2009-07-01

    Both ionospheric and weather radar communities have already adopted the method of transmitting radar pulses in an aperiodic manner when measuring moderately overspread targets. Among the users of the ionospheric radars, this method is called Aperiodic Transmitter Coding (ATC), whereas the weather radar users have adopted the term Simultaneous Multiple Pulse-Repetition Frequency (SMPRF). When probing the ionosphere at the carrier frequencies of the EISCAT Incoherent Scatter Radar facilities, the range extent of the detectable target is typically of the order of one thousand kilometers - about seven milliseconds - whereas the characteristic correlation time of the scattered signal varies from a few milliseconds in the D-region to only tens of microseconds in the F-region. If one is interested in estimating the scattering autocorrelation function (ACF) at time lags shorter than the F-region correlation time, the D-region must be considered as a moderately overspread target, whereas the F-region is a severely overspread one. Given the technical restrictions of the radar hardware, a combination of ATC and phase-coded long pulses is advantageous for this kind of target. We evaluate such an experiment under infinitely low signal-to-noise ratio (SNR) conditions using lag profile inversion. In addition, a qualitative evaluation under high-SNR conditions is performed by analysing simulated data. The results show that an acceptable estimation accuracy and a very good lag resolution in the D-region can be achieved with a pulse length long enough for simultaneous E- and F-region measurements with a reasonable lag extent. The new experiment design is tested with the EISCAT Tromsø VHF (224 MHz) radar. An example of a full D/E/F-region ACF from the test run is shown at the end of the paper.

  1. FAA Smoke Transport Code

    Energy Science and Technology Software Center (ESTSC)

    2006-10-27

    FAA Smoke Transport Code, a physics-based Computational Fluid Dynamics tool, which couples heat, mass, and momentum transfer, has been developed to provide information on smoke transport in cargo compartments with various geometries and flight conditions. The software package contains a graphical user interface for specification of geometry and boundary conditions, an analysis module for solving the governing equations, and a post-processing tool. The current code was produced by making substantial improvements and additions to a code obtained from a university. The original code was able to compute steady, uniform, isothermal turbulent pressurization. In addition, a preprocessor and postprocessor were added to arrive at the current software package.

  2. Adaptation and visual coding

    PubMed Central

    Webster, Michael A.

    2011-01-01

    Visual coding is a highly dynamic process, continuously adapting to the current viewing context. The perceptual changes that result from adaptation to recently viewed stimuli remain a powerful and popular tool for analyzing sensory mechanisms and plasticity. Over the last decade, the footprints of this adaptation have been tracked to both higher and lower levels of the visual pathway and over a wider range of timescales, revealing that visual processing is much more adaptable than previously thought. This work has also revealed that the pattern of aftereffects is similar across many stimulus dimensions, pointing to common coding principles in which adaptation plays a central role. However, why visual coding adapts has yet to be fully answered. PMID:21602298

  3. Seals Code Development Workshop

    NASA Technical Reports Server (NTRS)

    Hendricks, Robert C. (Compiler); Liang, Anita D. (Compiler)

    1996-01-01

    The 1995 Seals Workshop industrial code (INDSEAL) release includes ICYL, GCYLT, IFACE, GFACE, SPIRALG, SPIRALI, DYSEAL, and KTK. The scientific code (SCISEAL) release includes conjugate heat transfer and multidomain with rotordynamic capability. Several seals and bearings codes (e.g., HYDROFLEX, HYDROTRAN, HYDROB3D, FLOWCON1, FLOWCON2) are presented and results compared. Current computational and experimental emphasis includes multiple connected cavity flows with goals of reducing parasitic losses and gas ingestion. Labyrinth seals continue to play a significant role in sealing, with face, honeycomb, and new sealing concepts under investigation for advanced engine concepts in view of strict environmental constraints. The clean sheet approach to engine design is advocated, with program directions and anticipated percentage SFC reductions cited. Future activities center on engine applications with coupled seal/power/secondary flow streams.

  4. Code inspection instructional validation

    NASA Technical Reports Server (NTRS)

    Orr, Kay; Stancil, Shirley

    1992-01-01

    The Shuttle Data Systems Branch (SDSB) of the Flight Data Systems Division (FDSD) at Johnson Space Center contracted with Southwest Research Institute (SwRI) to validate the effectiveness of an interactive video course on the code inspection process. The purpose of this project was to determine if this course could be effective for teaching NASA analysts the process of code inspection. In addition, NASA was interested in the effectiveness of this unique type of instruction (Digital Video Interactive), for providing training on software processes. This study found the Carnegie Mellon course, 'A Cure for the Common Code', effective for teaching the process of code inspection. In addition, analysts prefer learning with this method of instruction, or this method in combination with other methods. As is, the course is definitely better than no course at all; however, findings indicate changes are needed. Following are conclusions of this study. (1) The course is instructionally effective. (2) The simulation has a positive effect on student's confidence in his ability to apply new knowledge. (3) Analysts like the course and prefer this method of training, or this method in combination with current methods of training in code inspection, over the way training is currently being conducted. (4) Analysts responded favorably to information presented through scenarios incorporating full motion video. (5) Some course content needs to be changed. (6) Some content needs to be added to the course. SwRI believes this study indicates interactive video instruction combined with simulation is effective for teaching software processes. Based on the conclusions of this study, SwRI has outlined seven options for NASA to consider. SwRI recommends the option which involves creation of new source code and data files, but uses much of the existing content and design from the current course. 
Although this option involves a significant software development effort, SwRI believes this option will produce the most effective results.

  5. Highly overcomplete sparse coding

    NASA Astrophysics Data System (ADS)

    Olshausen, Bruno A.

    2013-03-01

    This paper explores sparse coding of natural images in the highly overcomplete regime. We show that as the overcompleteness ratio approaches 10x, new types of dictionary elements emerge beyond the classical Gabor function shape obtained from complete or only modestly overcomplete sparse coding. These more diverse dictionaries allow images to be approximated with lower L1 norm (for a fixed SNR), and the coefficients exhibit steeper decay. We also evaluate the learned dictionaries in a denoising task, showing that higher degrees of overcompleteness yield modest gains in performance.
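
Sparse coefficients for a fixed overcomplete dictionary are typically inferred by minimizing a quadratic reconstruction error plus an L1 penalty; a generic iterative shrinkage-thresholding (ISTA) sketch, not the paper's learning code:

```python
import numpy as np

def ista(D, x, lam=0.1, n_iter=200):
    """Iterative shrinkage-thresholding (ISTA): infer coefficients a
    minimizing 0.5*||x - D a||^2 + lam*||a||_1 for a fixed, possibly
    highly overcomplete dictionary D."""
    L = np.linalg.norm(D, 2) ** 2      # Lipschitz constant of the smooth term
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        a = a - D.T @ (D @ a - x) / L                          # gradient step
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # soft-threshold
    return a
```

Raising the overcompleteness ratio here means widening D; the L1 penalty is what drives the steeper coefficient decay the abstract reports.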

  6. Aeroacoustic Prediction Codes

    NASA Technical Reports Server (NTRS)

    Gliebe, P; Mani, R.; Shin, H.; Mitchell, B.; Ashford, G.; Salamah, S.; Connell, S.; Huff, Dennis (Technical Monitor)

    2000-01-01

    This report describes work performed on Contract NAS3-27720AoI 13 as part of the NASA Advanced Subsonic Transport (AST) Noise Reduction Technology effort. Computer codes were developed to provide quantitative prediction, design, and analysis capability for several aircraft engine noise sources. The objective was to provide improved, physics-based tools for exploration of noise-reduction concepts and understanding of experimental results. Methods and codes focused on fan broadband and 'buzz saw' noise and on low-emissions combustor noise and complement work done by other contractors under the NASA AST program to develop methods and codes for fan harmonic tone noise and jet noise. The methods and codes developed and reported herein employ a wide range of approaches, from the strictly empirical to the completely computational, with some being semiempirical analytical, and/or analytical/computational. Emphasis was on capturing the essential physics while still considering method or code utility as a practical design and analysis tool for everyday engineering use. Codes and prediction models were developed for: (1) an improved empirical correlation model for fan rotor exit flow mean and turbulence properties, for use in predicting broadband noise generated by rotor exit flow turbulence interaction with downstream stator vanes; (2) fan broadband noise models for rotor and stator/turbulence interaction sources including 3D effects, noncompact-source effects, directivity modeling, and extensions to the rotor supersonic tip-speed regime; (3) fan multiple-pure-tone in-duct sound pressure prediction methodology based on computational fluid dynamics (CFD) analysis; and (4) low-emissions combustor prediction methodology and computer code based on CFD and actuator disk theory. In addition, the relative importance of dipole and quadrupole source mechanisms was studied using direct CFD source computation for a simple cascade/gust interaction problem, and an empirical combustor-noise correlation model was developed from engine acoustic test results. This work provided several insights on potential approaches to reducing aircraft engine noise. Code development is described in this report, and those insights are discussed.

  7. Securing mobile code.

    SciTech Connect

    Link, Hamilton E.; Schroeppel, Richard Crabtree; Neumann, William Douglas; Campbell, Philip LaRoche; Beaver, Cheryl Lynn; Pierson, Lyndon George; Anderson, William Erik

    2004-10-01

    If software is designed so that the software can issue functions that will move that software from one computing platform to another, then the software is said to be 'mobile'. There are two general areas of security problems associated with mobile code. The 'secure host' problem involves protecting the host from malicious mobile code. The 'secure mobile code' problem, on the other hand, involves protecting the code from malicious hosts. This report focuses on the latter problem. We have found three distinct camps of opinions regarding how to secure mobile code. There are those who believe special distributed hardware is necessary, those who believe special distributed software is necessary, and those who believe neither is necessary. We examine all three camps, with a focus on the third. In the distributed software camp we examine some commonly proposed techniques including Java, D'Agents and Flask. For the specialized hardware camp, we propose a cryptographic technique for 'tamper-proofing' code over a large portion of the software/hardware life cycle by careful modification of current architectures. This method culminates by decrypting/authenticating each instruction within a physically protected CPU, thereby protecting against subversion by malicious code. Our main focus is on the camp that believes that neither specialized software nor hardware is necessary. We concentrate on methods of code obfuscation to render an entire program or a data segment on which a program depends incomprehensible. The hope is to prevent or at least slow down reverse engineering efforts and to prevent goal-oriented attacks on the software and execution. The field of obfuscation is still in a state of development with the central problem being the lack of a basis for evaluating the protection schemes. We give a brief introduction to some of the main ideas in the field, followed by an in depth analysis of a technique called 'white-boxing'. 
We put forth some new attacks and improvements on this method as well as demonstrating its implementation for various algorithms. We also examine cryptographic techniques to achieve obfuscation including encrypted functions and offer a new application to digital signature algorithms. To better understand the lack of security proofs for obfuscation techniques, we examine in detail general theoretical models of obfuscation. We explain the need for formal models in order to obtain provable security and the progress made in this direction thus far. Finally we tackle the problem of verifying remote execution. We introduce some methods of verifying remote exponentiation computations and some insight into generic computation checking.

  8. Extended quantum color coding

    NASA Astrophysics Data System (ADS)

    Hayashi, A.; Hashimoto, T.; Horibe, M.

    2005-01-01

    The quantum color coding scheme proposed by Korff and Kempe [e-print quant-ph/0405086] is easily extended so that the color coding quantum system is allowed to be entangled with an extra auxiliary quantum system. It is shown that in the extended scheme we need only ˜2N quantum colors to order N objects in the large-N limit, whereas ˜N/e quantum colors are required in the original nonextended version. The maximum success probability has asymptotics expressed by the Tracy-Widom distribution of the largest eigenvalue of a random Gaussian unitary ensemble (GUE) matrix.

  9. Accumulate Repeat Accumulate Coded Modulation

    NASA Technical Reports Server (NTRS)

    Abbasfar, Aliazam; Divsalar, Dariush; Yao, Kung

    2004-01-01

    In this paper we propose an innovative coded modulation scheme called 'Accumulate Repeat Accumulate Coded Modulation' (ARA coded modulation). This class of codes can be viewed as serial turbo-like codes, or as a subclass of Low Density Parity Check (LDPC) codes, combined with high-level modulation. Thus, at the decoder, belief propagation can be used for iterative decoding of ARA coded modulation on a graph, provided a demapper transforms the received in-phase and quadrature samples into bit reliabilities.

  10. Video Coding for ESL.

    ERIC Educational Resources Information Center

    King, Kevin

    1992-01-01

    Coding tasks, a valuable technique for teaching English as a Second Language, are presented that enable students to look at patterns and structures of marital communication as well as objectively evaluate the degree of happiness or distress in the marriage. (seven references) (JL)

  11. The Redox Code

    PubMed Central

    Jones, Dean P.

    2015-01-01

    Abstract Significance: The redox code is a set of principles that defines the positioning of the nicotinamide adenine dinucleotide (NAD, NADP) and thiol/disulfide and other redox systems as well as the thiol redox proteome in space and time in biological systems. The code is richly elaborated in an oxygen-dependent life, where activation/deactivation cycles involving O2 and H2O2 contribute to spatiotemporal organization for differentiation, development, and adaptation to the environment. Disruption of this organizational structure during oxidative stress represents a fundamental mechanism in system failure and disease. Recent Advances: Methodology in assessing components of the redox code under physiological conditions has progressed, permitting insight into spatiotemporal organization and allowing for identification of redox partners in redox proteomics and redox metabolomics. Critical Issues: Complexity of redox networks and redox regulation is being revealed step by step, yet much still needs to be learned. Future Directions: Detailed knowledge of the molecular patterns generated from the principles of the redox code under defined physiological or pathological conditions in cells and organs will contribute to understanding the redox component in health and disease. Ultimately, there will be a scientific basis to a modern redox medicine. Antioxid. Redox Signal. 23, 734–746. PMID:25891126

  12. Electrical Circuit Simulation Code

    Energy Science and Technology Software Center (ESTSC)

    2001-08-09

    Massively-Parallel Electrical Circuit Simulation Code. CHILESPICE is a massively-parallel, distributed-memory electrical circuit simulation tool that contains many enhanced radiation, time-based, and thermal features and models. It supports large-scale electronic circuit simulation with shared-memory parallel processing, enhanced convergence, and Sandia-specific device models.

  13. Odor Coding Sensor

    NASA Astrophysics Data System (ADS)

    Hayashi, Kenshi

    Odor is one of the important sensing parameters for human life. However, odor has not been quantified by measuring instruments because of its vagueness. In this paper, the measurement of odor by odor coding, which represents plural odor molecular information as vector quantities, and its applications are described.

  14. Sharing the Code.

    ERIC Educational Resources Information Center

    Olsen, Florence

    2003-01-01

    Colleges and universities are beginning to consider collaborating on open-source-code projects as a way to meet critical software and computing needs. Points out the attractive features of noncommercial open-source software and describes some examples in use now, especially for the creation of Web infrastructure. (SLD)

  15. Environmental Fluid Dynamics Code

    EPA Science Inventory

    The Environmental Fluid Dynamics Code (EFDC)is a state-of-the-art hydrodynamic model that can be used to simulate aquatic systems in one, two, and three dimensions. It has evolved over the past two decades to become one of the most widely used and technically defensible hydrodyn...

  16. Student Dress Codes.

    ERIC Educational Resources Information Center

    Uerling, Donald F.

    School officials see a need for regulations that prohibit disruptive and inappropriate forms of expression and attire; students see these regulations as unwanted restrictions on their freedom. This paper reviews court litigation involving constitutional limitations on school authority, dress and hair codes, state law constraints, and school…

  17. Multiple trellis coded modulation

    NASA Technical Reports Server (NTRS)

    Simon, Marvin K. (Inventor); Divsalar, Dariush (Inventor)

    1990-01-01

    A technique for designing trellis codes to minimize bit error performance for a fading channel. The invention provides a criterion which may be used in the design of such codes which is significantly different from that used for additive white Gaussian noise channels. The method of multiple trellis coded modulation of the present invention comprises the steps of: (a) coding b bits of input data into s intermediate outputs; (b) grouping said s intermediate outputs into k groups of s_i intermediate outputs each, where the sum of all the s_i is equal to s and k is equal to at least 2; (c) mapping each of said k groups of intermediate outputs into one of a plurality of symbols in accordance with a plurality of modulation schemes, one for each group, such that the first group is mapped in accordance with a first modulation scheme and the second group is mapped in accordance with a second modulation scheme; and (d) outputting each of said symbols to provide k output symbols for each b bits of input data.
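
Steps (b)-(d) of the claimed method, partitioning the s intermediate outputs into k groups and mapping each group with its own modulation scheme, can be sketched as follows; the group sizes and PSK mappers are illustrative choices, not values from the patent:

```python
import cmath, math

def psk(M):
    """Return a mapper from an integer index to a unit-energy M-PSK symbol."""
    return lambda idx: cmath.exp(2j * math.pi * idx / M)

def mtcm_map(outputs, group_sizes, mappers):
    """Steps (b)-(d): split the s intermediate outputs into k groups and
    map each group to one symbol using that group's modulation scheme."""
    symbols, i = [], 0
    for size, mapper in zip(group_sizes, mappers):
        chunk = outputs[i:i + size]
        i += size
        index = int("".join(map(str, chunk)), 2)   # group bits -> symbol index
        symbols.append(mapper(index))
    return symbols
```

For example, s = 5 intermediate outputs could be split into a group of 2 mapped to QPSK and a group of 3 mapped to 8PSK, yielding k = 2 output symbols per b input bits.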

  18. The revised genetic code

    NASA Astrophysics Data System (ADS)

    Ninio, Jacques

    1990-03-01

    Recent findings on the genetic code are reviewed, including selenocysteine usage, deviations in the assignments of sense and nonsense codons, RNA editing, natural ribosomal frameshifts and non-orthodox codon-anticodon pairings. A multi-stage codon reading process is presented.

  19. Code of Ethics.

    ERIC Educational Resources Information Center

    Association of College Unions-International, Bloomington, IN.

    The code of ethics for the college union and student activities professional is presented by the Association of College Unions-International. The preamble identifies the objectives of the college union as providing campus community centers and social programs that enhance the quality of life for members of the academic community. Ethics for…

  20. Dress Codes and Uniforms.

    ERIC Educational Resources Information Center

    Lumsden, Linda; Miller, Gabriel

    2002-01-01

    Students do not always make choices that adults agree with in their choice of school dress. Dress-code issues are explored in this Research Roundup, and guidance is offered to principals seeking to maintain a positive school climate. In "Do School Uniforms Fit?" Kerry White discusses arguments for and against school uniforms and summarizes the…

  1. Splicing code modeling.

    PubMed

    Barash, Yoseph; Vaquero-Garcia, Jorge

    2014-01-01

    How do cis and trans elements involved in pre-mRNA splicing come together to form a splicing "code"? This question has been a driver of much of the research involving RNA biogenesis. The variability of splicing outcome across developmental stages and between tissues coupled with association of splicing defects with numerous diseases highlights the importance of such a code. However, the sheer number of elements involved in splicing regulation and the context-specific manner of their operation have made the derivation of such a code challenging. Recently, machine learning-based methods have been developed to infer computational models for a splicing code. These methods use high-throughput experiments measuring mRNA expression at exonic resolution and binding locations of RNA-binding proteins (RBPs) to infer which regulatory elements control the inclusion of a given pre-mRNA segment. The inferred regulatory models can then be applied to genomic sequences or experimental conditions that have not been measured to predict splicing outcome. Moreover, the models themselves can be interrogated to identify new regulatory mechanisms, which can be subsequently tested experimentally. In this chapter, we survey the current state of this technology, and illustrate how it can be applied by non-computational or RNA splicing experts to study regulation of specific exons by using the AVISPA web tool. PMID:25201114

  2. Code Optimization Techniques

    SciTech Connect

    MAGEE,GLEN I.

    2000-08-03

    Computers transfer data in a number of different ways. Whether through a serial port, a parallel port, over a modem, over an ethernet cable, or internally from a hard disk to memory, some data will be lost. To compensate for that loss, numerous error detection and correction algorithms have been developed. One of the most common error correction codes is the Reed-Solomon code, which is a special subset of BCH (Bose-Chaudhuri-Hocquenghem) linear cyclic block codes. In the AURA project, an unmanned aircraft sends the data it collects back to earth so it can be analyzed during flight and possible flight modifications made. To counter possible data corruption during transmission, the data is encoded using a multi-block Reed-Solomon implementation with a possibly shortened final block. In order to maximize the amount of data transmitted, it was necessary to reduce the computation time of a Reed-Solomon encoding to three percent of the processor's time. To achieve such a reduction, many code optimization techniques were employed. This paper outlines the steps taken to reduce the processing time of a Reed-Solomon encoding and the insight into modern optimization techniques gained from the experience.
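
A classic example of the kind of optimization involved: replacing GF(256) polynomial multiplication, the inner operation of a Reed-Solomon encoder, with log/antilog table lookups. This is a generic illustration of the technique, not the AURA project's code:

```python
# Build GF(256) log/antilog tables (primitive polynomial 0x11D) so each
# field multiply becomes two lookups and an add instead of a shift-and-
# reduce loop -- a typical table-driven Reed-Solomon speedup.
EXP = [0] * 512
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11D          # reduce modulo the field polynomial
for i in range(255, 512):
    EXP[i] = EXP[i - 255]   # doubled table avoids a modulo in gf_mul

def gf_mul(a, b):
    """Multiply two GF(256) elements via the log/antilog tables."""
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]
```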

  3. Wire Transport Code

    SciTech Connect

    Caporaso, G.J.; Cole, A.G.

    1983-03-01

    The Wire Transport Code was developed to study the dynamics of relativistic-electron-beam propagation in the transport tube in which a wire-conditioning zone is present. In order for the beam to propagate successfully in the transport section it must be matched onto the wire by focusing elements. The beam must then be controlled by strong lenses as it exits the wire zone. The wire transport code was developed to model this process in substantial detail. It is able to treat axially symmetric problems as well as those in which the beam is transversely displaced from the axis of the transport tube. The focusing effects of foils and various beamline lenses are included in the calculations.

  4. Confocal coded aperture imaging

    DOEpatents

    Tobin, Jr., Kenneth William; Thomas, Jr., Clarence E.

    2001-01-01

    A method for imaging a target volume comprises the steps of: radiating a small bandwidth of energy toward the target volume; focusing the small bandwidth of energy into a beam; moving the target volume through a plurality of positions within the focused beam; collecting a beam of energy scattered from the target volume with a non-diffractive confocal coded aperture; generating a shadow image of said aperture from every point source of radiation in the target volume; and, reconstructing the shadow image into a 3-dimensional image of the every point source by mathematically correlating the shadow image with a digital or analog version of the coded aperture. The method can comprise the step of collecting the beam of energy scattered from the target volume with a Fresnel zone plate.
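
The correlation step of the reconstruction, in which the shadow (approximately the object convolved with the aperture pattern) is correlated against a copy of the aperture to recover the object up to autocorrelation sidelobes, can be sketched with FFTs. This is a generic illustration of correlation decoding, not the patented apparatus:

```python
import numpy as np

def correlate_decode(shadow, aperture):
    """Reconstruct a coded-aperture image by circular cross-correlation
    of the shadow image with the aperture pattern, computed via FFTs."""
    F = np.fft.fft2
    return np.real(np.fft.ifft2(F(shadow) * np.conj(F(aperture))))
```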

  5. Reading a neural code.

    PubMed

    Bialek, W; Rieke, F; de Ruyter van Steveninck, R R; Warland, D

    1991-06-28

    Traditional approaches to neural coding characterize the encoding of known stimuli in average neural responses. Organisms face nearly the opposite task--extracting information about an unknown time-dependent stimulus from short segments of a spike train. Here the neural code was characterized from the point of view of the organism, culminating in algorithms for real-time stimulus estimation based on a single example of the spike train. These methods were applied to an identified movement-sensitive neuron in the fly visual system. Such decoding experiments determined the effective noise level and fault tolerance of neural computation, and the structure of the decoding algorithms suggested a simple model for real-time analog signal processing with spiking neurons. PMID:2063199
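
The real-time read-out such decoding arrives at is linear: place a fixed kernel at each spike time and sum. A sketch of just that read-out step; in practice the kernel is fit to data, and its shape here is a placeholder:

```python
import numpy as np

def decode_stimulus(spikes, kernel):
    """Linear stimulus estimate s_est(t) = sum_i K(t - t_i):
    convolve the binary spike train with a decoding kernel."""
    return np.convolve(spikes, kernel, mode="same")
```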

  6. Coding isotropic images

    NASA Technical Reports Server (NTRS)

    Oneal, J. B., Jr.; Natarajan, T. R.

    1976-01-01

    Rate distortion functions for two-dimensional homogeneous isotropic images are compared with the performance of five source encoders designed for such images. Both unweighted and frequency-weighted mean square error distortion measures are considered. The coders considered are: differential PCM (DPCM) using six previous samples in the prediction, herein called 6-pel (picture element) DPCM; simple DPCM using single-sample prediction; 6-pel DPCM followed by entropy coding; an 8 x 8 discrete cosine transform coder; and a 4 x 4 Hadamard transform coder. Other transform coders were studied and found to have about the same performance as the two transform coders above. With the mean square error distortion measure, DPCM with entropy coding performed best. The relative performance of the coders changes slightly when the distortion measure is frequency-weighted mean square error. The performance of all the coders was separated by only about 4 dB.
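
The "simple DPCM" coder with single-sample prediction transmits the difference between each sample and its predecessor; a minimal sketch (no quantizer, hence lossless):

```python
def dpcm_encode(samples):
    """Single-sample-prediction DPCM: transmit the difference between
    each sample and the previous one. Smooth signals yield small
    residuals, which is what makes entropy coding of them effective."""
    pred, residuals = 0, []
    for s in samples:
        residuals.append(s - pred)
        pred = s
    return residuals

def dpcm_decode(residuals):
    """Invert DPCM by accumulating the transmitted differences."""
    pred, out = 0, []
    for r in residuals:
        pred += r
        out.append(pred)
    return out
```

The 6-pel variant replaces the single-sample predictor with a weighted sum of six previous samples; the encode/decode symmetry is the same.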

  7. Adaptive compression coding

    NASA Astrophysics Data System (ADS)

    Nasiopoulos, Panos; Ward, Rabab K.; Morse, Daryl J.

    1991-08-01

    A compression technique which preserves edges in compressed pictures is developed. The proposed compression algorithm adapts itself to the local nature of the image. Smooth regions are represented by their averages and edges are preserved using quad trees. Textured regions are encoded using BTC (block truncation coding) and a modification of BTC using look-up tables. The threshold is applied to the range, i.e., the difference between the maximum and minimum grey levels in a 4 x 4 pixel quadrant. At the recommended value of the threshold (equal to 18), the quality of the compressed texture regions is very high, the same as that of AMBTC (absolute moment block truncation coding), but the edge preservation quality is far superior to that of AMBTC. Compression levels below 0.5-0.8 b/pixel may be achieved.
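
A minimal sketch of the range-thresholded block decision described above: blocks whose grey-level range falls below the threshold (the recommended value is 18) are sent as their average, while the rest get a two-level BTC representation. The exact bitstream layout here is an illustrative assumption:

```python
def btc_block(block, threshold=18):
    """Encode one block of grey levels. If the range (max - min) is
    below the threshold, send only the mean ('smooth' case); otherwise
    send a two-level BTC triple: low mean, high mean, and a bitmap."""
    lo, hi = min(block), max(block)
    if hi - lo < threshold:
        return ('flat', sum(block) // len(block))
    mean = sum(block) / len(block)
    lows = [p for p in block if p < mean]
    highs = [p for p in block if p >= mean]
    a = round(sum(lows) / len(lows)) if lows else 0    # low reconstruction level
    b = round(sum(highs) / len(highs))                 # high reconstruction level
    bitmap = [1 if p >= mean else 0 for p in block]    # 1 bit per pixel
    return ('btc', a, b, bitmap)
```

The bitmap is what preserves edge position inside the block, which is why this family of coders keeps edges sharper than pure averaging at the same bit rate.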

  8. Finite Element Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    2006-03-08

    MAPVAR-KD is designed to transfer solution results from one finite element mesh to another. MAPVAR-KD draws heavily from the structure and coding of MERLIN II, but it employs a new finite element data base, EXODUS II, and offers enhanced speed and new capabilities not available in MERLIN II. In keeping with the MERLIN II documentation, the computational algorithms used in MAPVAR-KD are described. User instructions are presented. Example problems are included to demonstrate the operation of the code and the effects of various input options. MAPVAR-KD is a modification of MAPVAR in which the search algorithm was replaced by a kd-tree-based search for better performance on large problems.

  9. On quantum network coding

    NASA Astrophysics Data System (ADS)

    Jain, Avinash; Franceschetti, Massimo; Meyer, David A.

    2011-03-01

    We study the problem of error-free multiple unicast over directed acyclic networks in a quantum setting. We provide a new information-theoretic proof of the known result that network coding does not achieve a larger quantum information flow than what can be achieved by routing for two-pair communication on the butterfly network. We then consider a k-pair multiple unicast problem and for all k ⩾ 2 we show that there exists a family of networks where quantum network coding achieves k-times larger quantum information flow than what can be achieved by routing. Finally, we specify a graph-theoretic sufficient condition for the quantum information flow of any multiple unicast problem to be bounded by the capacity of any sparsest multicut of the network.

  10. VAC: Versatile Advection Code

    NASA Astrophysics Data System (ADS)

    Tóth, Gábor; Keppens, Rony

    2012-07-01

    The Versatile Advection Code (VAC) is a freely available general hydrodynamic and magnetohydrodynamic simulation software that works in 1, 2 or 3 dimensions on Cartesian and logically Cartesian grids. VAC runs on any Unix/Linux system with a Fortran 90 (or 77) compiler and Perl interpreter. VAC can run on parallel machines using either the Message Passing Interface (MPI) library or a High Performance Fortran (HPF) compiler.

  11. REEDS computer code

    NASA Technical Reports Server (NTRS)

    Bjork, C.

    1981-01-01

    The REEDS (rocket exhaust effluent diffusion single layer) computer code is used for the estimation of certain rocket exhaust effluent concentrations and dosages and their distributions near the Earth's surface following a rocket launch event. Output from REEDS is used in producing near real time air quality and environmental assessments of the effects of certain potentially harmful effluents, namely HCl, Al2O3, CO, and NO.

  12. Status of MARS Code

    SciTech Connect

    N.V. Mokhov

    2003-04-09

    Status and recent developments of the MARS 14 Monte Carlo code system for simulation of hadronic and electromagnetic cascades in shielding, accelerator and detector components in the energy range from a fraction of an electronvolt up to 100 TeV are described. These include physics models in both the strong and electromagnetic interaction sectors, variance reduction techniques, residual dose, geometry, tracking, histogramming, the MAD-MARS Beam Line Builder, and the graphical user interface.

  13. Bar coded retroreflective target

    DOEpatents

    Vann, Charles S.

    2000-01-01

    This small, inexpensive, non-contact laser sensor can detect the location of a retroreflective target in a relatively large volume and up to six degrees of position. The tracker's laser beam is formed into a plane of light which is swept across the space of interest. When the beam illuminates the retroreflector, some of the light returns to the tracker. The intensity, angle, and time of the return beam is measured to calculate the three dimensional location of the target. With three retroreflectors on the target, the locations of three points on the target are measured, enabling the calculation of all six degrees of target position. Until now, devices for three-dimensional tracking of objects in a large volume have been heavy, large, and very expensive. Because of the simplicity and unique characteristics of this tracker, it is capable of three-dimensional tracking of one to several objects in a large volume, yet it is compact, light-weight, and relatively inexpensive. Alternatively, a tracker produces a diverging laser beam which is directed towards a fixed position, and senses when a retroreflective target enters the fixed field of view. An optically bar coded target can be read by the tracker to provide information about the target. The target can be formed of a ball lens with a bar code on one end. As the target moves through the field, the ball lens causes the laser beam to scan across the bar code.

  14. Moment code BEDLAM

    SciTech Connect

    Channell, P.J.; Healy, L.M.; Lysenko, W.P.

    1985-01-01

    BEDLAM is a fourth-order moment simulation code. The beam at the input to a linear accelerator is specified as a collection of moments of the phase-space distribution. Then the moment equations, which describe the time evolution of the moments, are numerically integrated. No particles are traced in this approach. The accuracy of the computed distribution, the external forces, and the space-charge forces are computed consistently to a given order. Although BEDLAM includes moments to fourth order only, it could be systematically extended to any order. Another feature of this method is that physically interesting and intuitive quantities, such as beam sizes and rms emittances, are computed directly. This paper describes the status of BEDLAM and presents the results of some tests. We simulated a section of radio-frequency quadrupole (RFQ) linac, neglecting space charge, to test the new code. Agreement with a Particle-In-Cell (PIC) simulation was excellent. We also verified that the fourth-order solution is more accurate than the second-order solution, which indicates the convergence of the method. We believe these results justify the continued development of moment simulation codes.
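    The "physically interesting and intuitive quantities" a moment code reports can be illustrated with a short sketch. The function below computes the standard rms emittance from central second moments of a sampled phase-space distribution; this is illustrative only, since a moment code like BEDLAM evolves the moments directly without ever sampling particles:

```python
def rms_emittance(xs, xps):
    """Rms emittance sqrt(<x^2><x'^2> - <x x'>^2) from central second moments
    of a particle ensemble (positions xs, divergences xps)."""
    n = len(xs)
    mx, mxp = sum(xs) / n, sum(xps) / n
    sxx = sum((x - mx) ** 2 for x in xs) / n           # <x^2>
    spp = sum((p - mxp) ** 2 for p in xps) / n         # <x'^2>
    sxp = sum((x - mx) * (p - mxp)
              for x, p in zip(xs, xps)) / n            # <x x'>
    return (sxx * spp - sxp * sxp) ** 0.5
```

The same three second moments are among the state variables a second-order moment simulation integrates in time.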

  15. Orthopedics coding and funding.

    PubMed

    Baron, S; Duclos, C; Thoreux, P

    2014-02-01

    The French tarification à l'activité (T2A) prospective payment system is a financial system in which a health-care institution's resources are based on performed activity. Activity is described via the PMSI medical information system (programme de médicalisation du système d'information). The PMSI classifies hospital cases by clinical and economic categories known as diagnosis-related groups (DRG), each with an associated price tag. Coding a hospital case involves giving as realistic a description as possible so as to categorize it in the right DRG and thus ensure appropriate payment. For this, it is essential to understand what determines the pricing of inpatient stay: namely, the code for the surgical procedure, the patient's principal diagnosis (reason for admission), codes for comorbidities (everything that adds to management burden), and the management of the length of inpatient stay. The PMSI is used to analyze the institution's activity and dynamism: change on previous year, relation to target, and comparison with competing institutions based on indicators such as the mean length of stay performance indicator (MLS PI). The T2A system improves overall care efficiency. Quality of care, however, is not presently taken into account in the payment made to the institution, as there are no indicators for this; work needs to be done on this topic. PMID:24461230

  16. MELCOR computer code manuals

    SciTech Connect

    Summers, R.M.; Cole, R.K. Jr.; Smith, R.C.; Stuart, D.S.; Thompson, S.L.; Hodge, S.A.; Hyman, C.R.; Sanders, R.L.

    1995-03-01

    MELCOR is a fully integrated, engineering-level computer code that models the progression of severe accidents in light water reactor nuclear power plants. MELCOR is being developed at Sandia National Laboratories for the U.S. Nuclear Regulatory Commission as a second-generation plant risk assessment tool and the successor to the Source Term Code Package. A broad spectrum of severe accident phenomena in both boiling and pressurized water reactors is treated in MELCOR in a unified framework. These include: thermal-hydraulic response in the reactor coolant system, reactor cavity, containment, and confinement buildings; core heatup, degradation, and relocation; core-concrete attack; hydrogen production, transport, and combustion; fission product release and transport; and the impact of engineered safety features on thermal-hydraulic and radionuclide behavior. Current uses of MELCOR include estimation of severe accident source terms and their sensitivities and uncertainties in a variety of applications. This publication of the MELCOR computer code manuals corresponds to MELCOR 1.8.3, released to users in August 1994. Volume 1 contains a primer that describes MELCOR's phenomenological scope, organization (by package), and documentation. The remainder of Volume 1 contains the MELCOR Users Guides, which provide the input instructions and guidelines for each package. Volume 2 contains the MELCOR Reference Manuals, which describe the phenomenological models that have been implemented in each package.

  17. Preliminary Assessment of Turbomachinery Codes

    NASA Technical Reports Server (NTRS)

    Mazumder, Quamrul H.

    2007-01-01

    This report assesses different CFD codes developed and currently being used at Glenn Research Center to predict turbomachinery fluid flow and heat transfer behavior. This report will consider the following codes: APNASA, TURBO, GlennHT, H3D, and SWIFT. Each code will be described separately in the following section with their current modeling capabilities, level of validation, pre/post processing, and future development and validation requirements. This report addresses only previously published validations of the codes; the codes have, however, since been further developed to extend their capabilities.

  18. Polynomial weights and code constructions.

    NASA Technical Reports Server (NTRS)

    Massey, J. L.; Costello, D. J., Jr.; Justesen, J.

    1973-01-01

    Study of certain polynomials with the 'weight-retaining' property that any linear combination of these polynomials with coefficients in a general finite field has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is used in applications to Reed-Muller codes, a new class of 'repeated-root' binary cyclic codes, two new classes of binary convolutional codes derived from binary cyclic codes, and two new classes of binary convolutional codes derived from Reed-Solomon codes.
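    The weight-retaining property above can be checked numerically for small cases. In the sketch below, a GF(2) polynomial is stored as an integer bitmask (coefficient of x^i in bit i), so addition of polynomials is bitwise XOR and Hamming weight is a popcount; the particular polynomials used in the test are arbitrary examples, not taken from the paper:

```python
from itertools import combinations


def hamming_weight(poly: int) -> int:
    """Number of nonzero coefficients of a GF(2) polynomial stored as a bitmask."""
    return bin(poly).count("1")


def combination_weights(polys):
    """Hamming weight of every nonzero GF(2)-linear combination of `polys`.

    Over GF(2) the only coefficients are 0 and 1, so every nonzero linear
    combination is the XOR of some nonempty subset of the polynomials.
    """
    weights = []
    for r in range(1, len(polys) + 1):
        for subset in combinations(polys, r):
            acc = 0
            for p in subset:
                acc ^= p  # polynomial addition over GF(2)
            weights.append(hamming_weight(acc))
    return weights
```

Comparing `min(combination_weights(polys))` against the weight of the minimum-degree polynomial included gives a direct numerical check of the property for a candidate set.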

  19. Anatomy Of A Code Block

    NASA Astrophysics Data System (ADS)

    Cortez, Edward

    1981-12-01

    The purpose of this paper is to present a short definition of a MIL-STD-782 code block, to introduce the advantages of using a code block, to discuss the generation of a code block, and to identify two problems that have limited the widespread use of code block data annotation: the rate of information retrieval throughput, and the error rate associated with this information retrieval. Finally, an automatic code block reader, useful in alleviating these two problems, is identified. With the introduction of the automatic reader, the use of the code block, long held in its infancy, can grow to full maturity.

  20. ENSDF ANALYSIS AND UTILITY CODES.

    SciTech Connect

    BURROWS, T.

    2005-04-04

    The ENSDF analysis and checking codes are briefly described, along with their uses with various types of ENSDF datasets. For more information on the programs see the "Read Me" entries and other documentation associated with each code.

  1. Open code for open science?

    NASA Astrophysics Data System (ADS)

    Easterbrook, Steve M.

    2014-11-01

    Open source software is often seen as a path to reproducibility in computational science. In practice there are many obstacles, even when the code is freely available, but open source policies should at least lead to better quality code.

  2. Dual Coding, Reasoning and Fallacies.

    ERIC Educational Resources Information Center

    Hample, Dale

    1982-01-01

    Develops the theory that a fallacy is not a comparison of a rhetorical text to a set of definitions but a comparison of one person's cognition with another's. Reviews Paivio's dual coding theory, relates nonverbal coding to reasoning processes, and generates a limited fallacy theory based on dual coding theory. (PD)

  3. Authorship Attribution of Source Code

    ERIC Educational Resources Information Center

    Tennyson, Matthew F.

    2013-01-01

    Authorship attribution of source code is the task of deciding who wrote a program, given its source code. Applications include software forensics, plagiarism detection, and determining software ownership. A number of methods for the authorship attribution of source code have been presented in the past. A review of those existing methods is…

  4. Coding Issues in Grounded Theory

    ERIC Educational Resources Information Center

    Moghaddam, Alireza

    2006-01-01

    This paper discusses grounded theory as one of the qualitative research designs. It describes how grounded theory generates from data. Three phases of grounded theory--open coding, axial coding, and selective coding--are discussed, along with some of the issues which are the source of debate among grounded theorists, especially between its…

  5. Energy Codes and Standards: Facilities

    SciTech Connect

    Bartlett, Rosemarie; Halverson, Mark A.; Shankle, Diana L.

    2007-01-01

    Energy codes and standards play a vital role in the marketplace by setting minimum requirements for energy-efficient design and construction. They outline uniform requirements for new buildings as well as additions and renovations. This article covers basic knowledge of codes and standards; development processes of each; adoption, implementation, and enforcement of energy codes and standards; and voluntary energy efficiency programs.

  6. Validation of the BEPLATE code

    SciTech Connect

    Giles, G.E.; Bullock, J.S.

    1997-11-01

    The electroforming simulation code BEPLATE (Boundary Element-PLATE) has been developed and validated for specific applications at Oak Ridge. New areas of application are opening up and more validations are being performed. This paper reports the validation experience of the BEPLATE code on two types of electroforms and describes some recent applications of the code.

  7. SAR image coding

    NASA Astrophysics Data System (ADS)

    Tourtier, P.

    1989-10-01

    Synthetic Aperture Radar imagery produces a very large data flow, to the extent that recording it is problematic. Image coding reduces the data rate while preserving the original image quality, which permits a reduction of the required transmission channel capacity and eases recording. Different data-compression techniques are presented; the best-performing one, based on the cosine transform, is described in detail. Results obtained by Thomson-CSF show that a compression ratio on the order of 4 or 5 is achievable without visible image degradation.
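    A minimal sketch of the transform at the heart of such schemes: the (unnormalized) type-II discrete cosine transform and its inverse. This is a generic illustration of cosine-transform coding, not the Thomson-CSF implementation; compression comes from quantizing or discarding the small transform coefficients before transmission:

```python
import math


def dct_ii(x):
    """Unnormalized type-II discrete cosine transform of a real sequence."""
    n = len(x)
    return [sum(x[m] * math.cos(math.pi / n * (m + 0.5) * k) for m in range(n))
            for k in range(n)]


def idct_ii(X):
    """Inverse of the unnormalized DCT-II above (a scaled DCT-III)."""
    n = len(X)
    return [X[0] / n
            + (2.0 / n) * sum(X[k] * math.cos(math.pi / n * (m + 0.5) * k)
                              for k in range(1, n))
            for m in range(n)]
```

For smooth image blocks the DCT concentrates most of the energy in the first few coefficients, which is why truncating the rest degrades the image so little.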

  8. Finite Element Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    2005-05-07

    CONEX is a code for joining sequentially in time multiple exodusll database files which all represent the same base mesh topology and geometry. It is used to create a single results or restart file from multiple results or restart files which typically arise as the result of multiple restarted analyses. CONEX is used to postprocess the results from a series of finite element analyses. It can join sequentially the data from multiple results databases into a single database which makes it easier to postprocess the results data.

  9. Finite Element Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    2005-06-26

    Exotxt is an analysis code that reads finite element results data stored in an exodusII file and generates a file in a structured text format. The text file can be edited or modified via a number of text formatting tools. Exotxt is used by analysts to translate data from the binary exodusII format into a structured text format which can then be edited or modified and then either translated back to exodusII format or to another format.

  10. Structured error recovery for code-word-stabilized quantum codes

    SciTech Connect

    Li Yunfan; Dumer, Ilya; Grassl, Markus; Pryadko, Leonid P.

    2010-05-15

    Code-word-stabilized (CWS) codes are, in general, nonadditive quantum codes that can correct errors by an exhaustive search of different error patterns, similar to the way that we decode classical nonlinear codes. For an n-qubit quantum code correcting errors on up to t qubits, this brute-force approach consecutively tests different errors of weight t or less and employs a separate n-qubit measurement in each test. In this article, we suggest an error grouping technique that allows one to simultaneously test large groups of errors in a single measurement. This structured error recovery technique exponentially reduces the number of measurements by about 3^t times. While it still leaves exponentially many measurements for a generic CWS code, the technique is equivalent to syndrome-based recovery for the special case of additive CWS codes.

  11. Low Density Parity Check Codes: Bandwidth Efficient Channel Coding

    NASA Technical Reports Server (NTRS)

    Fong, Wai; Lin, Shu; Maki, Gary; Yeh, Pen-Shu

    2003-01-01

    Low Density Parity Check (LDPC) Codes provide near-Shannon-capacity performance for NASA missions. These codes have high coding rates, R = 0.82 and 0.875, with moderate code lengths, n = 4096 and 8176. Their decoders have inherently parallel structures which allow for high-speed implementation. Two codes based on Euclidean Geometry (EG) were selected for flight ASIC implementation. These codes are cyclic and quasi-cyclic in nature and therefore have a simple encoder structure, which yields power and size benefits. These codes also have a large minimum distance, as much as dmin = 65, giving them powerful error-correcting capabilities and very low error floors. This paper will present the development of the LDPC flight encoder and decoder, its applications and status.
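    The defining structure of any parity-check code, LDPC codes included, is a matrix H whose checks every valid codeword must satisfy. The sketch below shows the syndrome computation on a tiny stand-in matrix, the (7,4) Hamming code's parity-check matrix; a real LDPC matrix is much larger and sparse, and the iterative decoders the abstract alludes to are beyond this sketch:

```python
def syndrome(H, word):
    """Syndrome of `word` under parity-check matrix H over GF(2).

    The syndrome is all-zero exactly when `word` satisfies every parity
    check, i.e. when it is a codeword.
    """
    return [sum(h * w for h, w in zip(row, word)) % 2 for row in H]


# (7,4) Hamming parity-check matrix -- a small illustrative stand-in,
# not one of the EG-LDPC matrices described in the paper.
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]
```

Flipping any single bit of a codeword produces a nonzero syndrome, which is the starting point for error correction.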

  12. Genetic code for sine

    NASA Astrophysics Data System (ADS)

    Abdullah, Alyasa Gan; Wah, Yap Bee

    2015-02-01

    The computation of approximate values of the trigonometric sine was discovered by Bhaskara I (c. 600-c. 680), a seventh-century Indian mathematician, and is known as Bhaskara I's sine approximation formula. The formula is given in his treatise titled Mahabhaskariya. In the 14th century, Madhava of Sangamagrama, a Kerala mathematician-astronomer, constructed a table of trigonometric sines of various angles. Madhava's table gives the measure of angles in arcminutes, arcseconds and sixtieths of an arcsecond. The search for more accurate formulas led to the discovery of the power series expansion by Madhava of Sangamagrama (c. 1350-c. 1425), the founder of the Kerala school of astronomy and mathematics. In 1715, the Taylor series was introduced by Brook Taylor, an English mathematician. If the Taylor series is centered at zero, it is called a Maclaurin series, named after the Scottish mathematician Colin Maclaurin. Some of the important Maclaurin series expansions include trigonometric functions. This paper introduces the genetic code of the sine of an angle without using power series expansion. The genetic code using the square root approach reveals the pattern in the signs (plus, minus) and sequence of numbers in the sine of an angle. The square root approach complements the Pythagoras method, provides a better understanding of calculating an angle and will be useful for teaching the concepts of angles in trigonometry.
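    Two of the historical approximations mentioned above are easy to compare numerically. Bhaskara I's rational formula, sin(x) ≈ 16x(π − x) / (5π² − 4x(π − x)) on [0, π], and a truncated Maclaurin series can be sketched as follows (the paper's own "genetic code" square-root approach is not reproduced here):

```python
import math


def bhaskara_sine(x: float) -> float:
    """Bhaskara I's 7th-century rational approximation, valid for 0 <= x <= pi."""
    return 16 * x * (math.pi - x) / (5 * math.pi ** 2 - 4 * x * (math.pi - x))


def maclaurin_sine(x: float, terms: int = 6) -> float:
    """Partial sum of the Maclaurin series sin(x) = x - x^3/3! + x^5/5! - ..."""
    return sum((-1) ** n * x ** (2 * n + 1) / math.factorial(2 * n + 1)
               for n in range(terms))
```

Bhaskara's formula is exact at 0, π/6, π/2, 5π/6 and π, and its absolute error on [0, π] stays below about 0.002, which is remarkable for a single rational expression.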

  13. Practices in Code Discoverability: Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, A.; Teuben, P.; Nemiroff, R. J.; Shamir, L.

    2012-09-01

    Here we describe the Astrophysics Source Code Library (ASCL), which takes an active approach to sharing astrophysics source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL now has over 340 codes in it and continues to grow. In 2011, the ASCL has on average added 19 codes per month. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This paper provides the history and description of the ASCL. It lists the requirements for including codes, examines the advantages of the ASCL, and outlines some of its future plans.

  14. Making your code citable with the Astrophysics Source Code Library

    NASA Astrophysics Data System (ADS)

    Allen, Alice; DuPrie, Kimberly; Schmidt, Judy; Berriman, G. Bruce; Hanisch, Robert J.; Mink, Jessica D.; Nemiroff, Robert J.; Shamir, Lior; Shortridge, Keith; Taylor, Mark B.; Teuben, Peter J.; Wallin, John F.

    2016-01-01

    The Astrophysics Source Code Library (ASCL, ascl.net) is a free online registry of codes used in astronomy research. With nearly 1,200 codes, it is the largest indexed resource for astronomy codes in existence. Established in 1999, it offers software authors a path to citation of their research codes even without publication of a paper describing the software, and offers scientists a way to find codes used in refereed publications, thus improving the transparency of the research. It also provides a method to quantify the impact of source codes in a fashion similar to the science metrics of journal articles. Citations using ASCL IDs are accepted by major astronomy journals and if formatted properly are tracked by ADS and other indexing services. The number of citations to ASCL entries increased sharply from 110 citations in January 2014 to 456 citations in September 2015. The percentage of code entries in ASCL that were cited at least once rose from 7.5% in January 2014 to 17.4% in September 2015. The ASCL's mid-2014 infrastructure upgrade added an easy entry submission form, more flexible browsing, search capabilities, and an RSS feeder for updates. A Changes/Additions form added this past fall lets authors submit links for papers that use their codes for addition to the ASCL entry even if those papers don't formally cite the codes, thus increasing the transparency of that research and capturing the value of their software to the community.

  15. Electromagnetic particle simulation codes

    NASA Technical Reports Server (NTRS)

    Pritchett, P. L.

    1985-01-01

    Electromagnetic particle simulations solve the full set of Maxwell's equations. They thus include the effects of self-consistent electric and magnetic fields, magnetic induction, and electromagnetic radiation. The algorithms for an electromagnetic code which works directly with the electric and magnetic fields are described. The fields and current are separated into transverse and longitudinal components. The transverse E and B fields are integrated in time using a leapfrog scheme applied to the Fourier components. The particle pushing is performed via the relativistic Lorentz force equation for the particle momentum. As an example, simulation results are presented for the electron cyclotron maser instability which illustrate the importance of relativistic effects on the wave-particle resonance condition and on wave dispersion.
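    The leapfrog time-staggering described above (positions on integer steps, velocities on half-integer steps) can be sketched in a few lines. This is a generic illustration applied to a simple acceleration function, not the code's relativistic Lorentz-force push:

```python
def leapfrog(x0, v_half, accel, dt, steps):
    """Leapfrog integration: velocity lives on half-integer time steps and
    position on integer steps, giving second-order accuracy and the long-term
    stability prized by particle simulation codes. `accel(x)` returns the
    acceleration at position x."""
    x, v = x0, v_half
    xs = [x]
    for _ in range(steps):
        v = v + accel(x) * dt  # kick: v(n-1/2) -> v(n+1/2) using a(x(n))
        x = x + v * dt         # drift: x(n) -> x(n+1) using v(n+1/2)
        xs.append(x)
    return xs
```

For a harmonic oscillator, `leapfrog(1.0, 0.0, lambda x: -x, dt, steps)` stays bounded indefinitely, whereas forward Euler would spiral outward; this (symplectic) behavior is why the scheme is the workhorse of particle pushing.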

  16. Peripheral coding of taste

    PubMed Central

    Liman, Emily R.; Zhang, Yali V.; Montell, Craig

    2014-01-01

    Five canonical tastes, bitter, sweet, umami (amino acid), salty and sour (acid) are detected by animals as diverse as fruit flies and humans, consistent with a near universal drive to consume fundamental nutrients and to avoid toxins or other harmful compounds. Surprisingly, despite this strong conservation of basic taste qualities between vertebrates and invertebrates, the receptors and signaling mechanisms that mediate taste in each are highly divergent. The identification over the last two decades of receptors and other molecules that mediate taste has led to stunning advances in our understanding of the basic mechanisms of transduction and coding of information by the gustatory systems of vertebrates and invertebrates. In this review, we discuss recent advances in taste research, mainly from the fly and mammalian systems, and we highlight principles that are common across species, despite stark differences in receptor types. PMID:24607224

  17. Surface acoustic wave coding for orthogonal frequency coded devices

    NASA Technical Reports Server (NTRS)

    Malocha, Donald (Inventor); Kozlovski, Nikolai (Inventor)

    2011-01-01

    Methods and systems for coding SAW OFC devices to mitigate code collisions in a wireless multi-tag system. Each device produces plural stepped frequencies as an OFC signal with a chip offset delay to increase code diversity. A method for assigning a different OFC to each device includes using a matrix based on the number of OFCs needed and the number of chips per code, populating each matrix cell with an OFC chip, and assigning the codes from the matrix to the devices. The asynchronous passive multi-tag system includes plural surface acoustic wave devices each producing a different OFC signal having the same number of chips and including a chip offset time delay, an algorithm for assigning OFCs to each device, and a transceiver to transmit an interrogation signal and receive OFC signals in response with minimal code collisions during transmission.

  18. Transionospheric Propagation Code (TIPC)

    SciTech Connect

    Roussel-Dupre, R.; Kelley, T.A.

    1990-10-01

    The Transionospheric Propagation Code is a computer program developed at Los Alamos National Lab to perform certain tasks related to the detection of VHF signals following propagation through the ionosphere. The code is written in Fortran 77, runs interactively and was designed to be as machine independent as possible. A menu format in which the user is prompted to supply appropriate parameters for a given task has been adopted for the input while the output is primarily in the form of graphics. The user has the option of selecting from five basic tasks, namely transionospheric propagation, signal filtering, signal processing, DTOA study, and DTOA uncertainty study. For the first task a specified signal is convolved against the impulse response function of the ionosphere to obtain the transionospheric signal. The user is given a choice of four analytic forms for the input pulse or of supplying a tabular form. The option of adding Gaussian-distributed white noise or spectral noise to the input signal is also provided. The deterministic ionosphere is characterized to first order in terms of a total electron content (TEC) along the propagation path. In addition, a scattering model parameterized in terms of a frequency coherence bandwidth is available. In the second task, detection is simulated by convolving a given filter response against the transionospheric signal. The user is given a choice of a wideband filter or a narrowband Gaussian filter. It is also possible to input a filter response. The third task provides for quadrature detection, envelope detection, and three different techniques for time-tagging the arrival of the transionospheric signal at specified receivers. The latter algorithms can be used to determine a TEC and thus take out the effects of the ionosphere to first order. Task four allows the user to construct a table of delta-times-of-arrival (DTOAs) vs TECs for a specified pair of receivers.
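    Both the propagation task (signal against the ionosphere's impulse response) and the detection task (signal against a filter response) above reduce to the same operation: discrete convolution. A minimal direct implementation, offered as a generic sketch rather than TIPC's own routine, looks like this:

```python
def convolve(signal, kernel):
    """Full discrete convolution of `signal` with `kernel`.

    Returns a sequence of length len(signal) + len(kernel) - 1, as produced
    when a pulse is passed through an impulse response or filter response.
    """
    n, m = len(signal), len(kernel)
    out = [0.0] * (n + m - 1)
    for i, s in enumerate(signal):
        for j, k in enumerate(kernel):
            out[i + j] += s * k  # accumulate each shifted, scaled copy
    return out
```

For the long signals a propagation code handles, the same operation would normally be done via FFTs, but the direct form shows the mechanics.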

  19. NeuCode Labels for Relative Protein Quantification *

    PubMed Central

    Merrill, Anna E.; Hebert, Alexander S.; MacGilvray, Matthew E.; Rose, Christopher M.; Bailey, Derek J.; Bradley, Joel C.; Wood, William W.; El Masri, Marwan; Westphall, Michael S.; Gasch, Audrey P.; Coon, Joshua J.

    2014-01-01

    We describe a synthesis strategy for the preparation of lysine isotopologues that differ in mass by as little as 6 mDa. We demonstrate that incorporation of these molecules into the proteomes of actively growing cells does not affect cellular proliferation, and we discuss how to use the embedded mass signatures (neutron encoding (NeuCode)) for multiplexed proteome quantification by means of high-resolution mass spectrometry. NeuCode SILAC amalgamates the quantitative accuracy of SILAC with the multiplexing of isobaric tags and, in doing so, offers up new opportunities for biological investigation. We applied NeuCode SILAC to examine the relationship between transcript and protein levels in yeast cells responding to environmental stress. Finally, we monitored the time-resolved responses of five signaling mutants in a single 18-plex experiment. PMID:24938287

  20. Diagonal Eigenvalue Unity (DEU) code for spectral amplitude coding-optical code division multiple access

    NASA Astrophysics Data System (ADS)

    Ahmed, Hassan Yousif; Nisar, K. S.

    2013-08-01

    Code with ideal in-phase cross correlation (CC) and practical code length to support a high number of users is required in spectral amplitude coding-optical code division multiple access (SAC-OCDMA) systems. SAC systems are becoming more attractive in the field of OCDMA because of their ability to eliminate the influence of multiple access interference (MAI) and also suppress the effect of phase induced intensity noise (PIIN). In this paper, we have proposed new Diagonal Eigenvalue Unity (DEU) code families with ideal in-phase CC based on the Jordan block matrix, constructed in simple algebraic ways. Four sets of DEU code families based on the code weight W and number of users N for the combinations (even, even), (even, odd), (odd, odd) and (odd, even) are constructed. This combination gives the DEU code more flexibility in the selection of code weight and number of users. These features make this code a compelling candidate for future optical communication systems. Numerical results show that the proposed DEU system outperforms reported codes. In addition, simulation results taken from a commercial optical systems simulator, Virtual Photonic Instrument (VPI™), show that, using point-to-multipoint transmission in a passive optical network (PON), DEU has better performance and can support long spans at high data rates.

  1. Explosive Formulation Code Naming SOP

    SciTech Connect

    Martz, H. E.

    2014-09-19

    The purpose of this SOP is to provide a procedure for assigning code names to individual HME formulations. A code name for an individual HME formulation consists of an explosive family code, given by the classified guide, followed by a dash (-) and a number. If the formulation requires preparation such as packing or aging, additional groups of symbols are appended to the X-ray specimen name.
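    The family-dash-number convention described above can be sketched as a small helper. Everything here is hypothetical: the pattern, the helper name, and the placeholder family code "FAM" are invented for illustration, since the actual family codes are given by a classified guide and are not reproduced in the SOP abstract:

```python
import re

# Hypothetical shape of a formulation code name: a family code, a dash,
# a sequence number, and optional dash-separated preparation groups
# (e.g. for packing or aging). Family codes here are placeholders.
CODE_NAME = re.compile(r"^[A-Z]+-\d+(?:-[A-Z0-9]+)*$")


def make_code_name(family: str, number: int, *prep_groups: str) -> str:
    """Assemble a code name such as 'FAM-3' or 'FAM-3-AGED'."""
    return "-".join([family, str(number), *prep_groups])
```

A validation pattern like `CODE_NAME` lets downstream tooling reject malformed specimen names before they enter a database.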

  2. Golay and other box codes

    NASA Technical Reports Server (NTRS)

    Solomon, G.

    1992-01-01

    The (24,12;8) extended Golay Code can be generated as a 6 x 4 binary matrix from the (15,11;3) BCH-Hamming Code, represented as a 5 x 3 matrix, by adding a row and a column, both of odd or even parity. The odd-parity case provides the additional 12th dimension. Furthermore, any three columns and five rows of the 6 x 4 Golay form a BCH-Hamming (15,11;3) Code. Similarly a (80,58;8) code can be generated as a 10 x 8 binary matrix from the (63,57;3) BCH-Hamming Code represented as a 9 x 7 matrix by adding a row and a column, both of odd or even parity. Furthermore, any seven columns along with the top nine rows form a BCH-Hamming (63,57;3) Code. A (80,40;16) 10 x 8 matrix binary code with weight structure identical to the extended (80,40;16) Quadratic Residue Code is generated from a (63,39;7) binary cyclic code represented as a 9 x 7 matrix, by adding a row and a column, both of odd or even parity.
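    The even-parity version of the row-and-column extension used throughout the construction above can be sketched for an arbitrary binary matrix (the odd-parity variant, which supplies the extra dimension, follows the same pattern with the appended bits complemented):

```python
def extend_even_parity(matrix):
    """Append an even-parity bit to each row, then an even-parity row,
    so every row and every column of the result sums to 0 mod 2."""
    # Each row gains a bit making its own parity even.
    rows = [row + [sum(row) % 2] for row in matrix]
    # The appended row makes every column's parity even; since each row
    # above already has even parity, this row does too.
    parity_row = [sum(col) % 2 for col in zip(*rows)]
    return rows + [parity_row]
```

Applied to a 5 x 3 representation of the (15,11;3) BCH-Hamming code, this is the mechanical step that yields a 6 x 4 array of the kind the abstract describes.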

  4. Astrophysics Source Code Library Enhancements

    NASA Astrophysics Data System (ADS)

    Hanisch, R. J.; Allen, A.; Berriman, G. B.; DuPrie, K.; Mink, J.; Nemiroff, R. J.; Schmidt, J.; Shamir, L.; Shortridge, K.; Taylor, M.; Teuben, P. J.; Wallin, J.

    2015-09-01

    The Astrophysics Source Code Library (ASCL)1 is a free online registry of codes used in astronomy research; it currently contains over 900 codes and is indexed by ADS. The ASCL has recently moved a new infrastructure into production. The new site provides a true database for the code entries and integrates the WordPress news and information pages and the discussion forum into one site. Previous capabilities are retained and permalinks to ascl.net continue to work. This improvement offers more functionality and flexibility than the previous site, is easier to maintain, and offers new possibilities for collaboration. This paper covers these recent changes to the ASCL.

  5. Understanding perception through neural "codes".

    PubMed

    Freeman, Walter J

    2011-07-01

    A major challenge for cognitive scientists is to deduce and explain the neural mechanisms of the rapid transposition between stimulus energy and recalled memory-between the specific (sensation) and the generic (perception)-in both material and mental aspects. Researchers are attempting three explanations in terms of neural codes. The microscopic code: cellular neurobiologists correlate stimulus properties with the rates and frequencies of trains of action potentials induced by stimuli and carried by topologically organized axons. The mesoscopic code: cognitive scientists formulate symbolic codes in trains of action potentials from feature-detector neurons of phonemes, lines, odorants, vibrations, faces, etc., that object-detector neurons bind into representations of stimuli. The macroscopic code: neurodynamicists extract neural correlates of stimuli and associated behaviors in spatial patterns of oscillatory fields of dendritic activity, which self-organize and evolve on trajectories through high-dimensional brain state space. This multivariate code is expressed in landscapes of chaotic attractors. Unlike other scientific codes, such as DNA and the periodic table, these neural codes have no alphabet or syntax. They are epistemological metaphors that experimentalists need to measure neural activity and engineers need to model brain functions. My aim is to describe the main properties of the macroscopic code and the grand challenge it poses: how do very large patterns of textured synchronized oscillations form in cortex so quickly? PMID:21134811

  6. The FLUKA code: an overview

    NASA Astrophysics Data System (ADS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fassò, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; Lantz, M.; Liotta, M.; Mairani, A.; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P. R.; Scannicchio, D.; Trovati, S.; Villari, R.; Wilson, T.; Zapp, N.; Vlachoudis, V.

    2006-05-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models, with particular emphasis on the hadronic and nuclear sector.

  7. The FLUKA Code: an Overview

    SciTech Connect

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M.V.; Lantz, M.; Liotta, M.; Mairani, A.; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P.R.; /Milan U. /INFN, Milan /Pavia U. /INFN, Pavia /CERN /Siegen U. /Houston U. /SLAC /Frascati /NASA, Houston /ENEA, Frascati

    2005-11-09

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models, with particular emphasis on the hadronic and nuclear sector.

  8. The FLUKA Code: An Overview

    NASA Technical Reports Server (NTRS)

    Ballarini, F.; Battistoni, G.; Campanella, M.; Carboni, M.; Cerutti, F.; Empl, A.; Fasso, A.; Ferrari, A.; Gadioli, E.; Garzelli, M. V.; Lantz, M.; Liotta, M.; Mairani, A.; Mostacci, A.; Muraro, S.; Ottolenghi, A.; Pelliccioni, M.; Pinsky, L.; Ranft, J.; Roesler, S.; Sala, P. R.; Scannicchio, D.; Trovati, S.; Villari, R.; Wilson, T.

    2006-01-01

    FLUKA is a multipurpose Monte Carlo code which can transport a variety of particles over a wide energy range in complex geometries. The code is a joint project of INFN and CERN: part of its development is also supported by the University of Houston and NASA. FLUKA is successfully applied in several fields, including, but not limited to, particle physics, cosmic ray physics, dosimetry, radioprotection, hadron therapy, space radiation, accelerator design and neutronics. The code is the standard tool used at CERN for dosimetry, radioprotection and beam-machine interaction studies. Here we give a glimpse into the code physics models, with particular emphasis on the hadronic and nuclear sector.

  9. High Order Modulation Protograph Codes

    NASA Technical Reports Server (NTRS)

    Nguyen, Thuy V. (Inventor); Nosratinia, Aria (Inventor); Divsalar, Dariush (Inventor)

    2014-01-01

    Digital communication coding methods for designing protograph-based bit-interleaved coded modulation that are general and apply to any modulation. The general coding framework can support not only multiple rates but also adaptive modulation. The method is a two-stage lifting approach. In the first stage, an original protograph is lifted to a slightly larger intermediate protograph. The intermediate protograph is then lifted via a circulant matrix to the expected codeword length to form a protograph-based low-density parity-check code.
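    The second lifting stage can be sketched as follows. The base matrix, shift values, and lift size Z below are illustrative placeholders, not values from the patent: each 1 in the protograph is expanded into a Z x Z circulant permutation matrix (a cyclically shifted identity), and each 0 into a Z x Z zero block.

```python
# Sketch of circulant lifting: expand a small 0/1 protograph base matrix
# into a binary LDPC parity-check matrix. Base matrix, shifts, and Z are
# hypothetical, chosen only to illustrate the mechanism.

def circulant(shift, Z):
    # Z x Z permutation matrix: row r has its single 1 at column (r+shift) mod Z.
    return [[1 if (r + shift) % Z == c else 0 for c in range(Z)]
            for r in range(Z)]

def lift(base, shifts, Z):
    m, n = len(base), len(base[0])
    H = [[0] * (n * Z) for _ in range(m * Z)]
    for i in range(m):
        for j in range(n):
            if base[i][j]:
                block = circulant(shifts[i][j], Z)
                for r in range(Z):
                    for c in range(Z):
                        H[i * Z + r][j * Z + c] = block[r][c]
    return H

base = [[1, 1, 1, 0],
        [0, 1, 1, 1]]      # hypothetical 2 x 4 protograph
shifts = [[0, 1, 2, 0],
          [0, 2, 1, 3]]    # hypothetical circulant shifts
Z = 4                      # hypothetical lift size
H = lift(base, shifts, Z)
print(len(H), len(H[0]))   # 8 16
```

    Each row of the lifted matrix keeps the weight of its base-matrix row, which is why the lifted code inherits the protograph's degree distribution.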

  10. Implementation issues in source coding

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Chen, Yun-Chung; Hadenfeldt, A. C.

    1989-01-01

    An edge-preserving image coding scheme which can be operated in both a lossy and a lossless manner was developed. The technique is an extension of the lossless encoding algorithm developed for the Mars Observer spectral data. It can also be viewed as a modification of the DPCM algorithm. A packet video simulator was also developed from an existing modified packet network simulator. The coding scheme for this system is a modification of the mixture block coding (MBC) scheme described in the last report. Coding algorithms for packet video were also investigated.
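    As a reference point for the DPCM connection, here is a minimal previous-sample DPCM codec (not the paper's edge-preserving scheme): the encoder transmits residuals against the previous sample, and the decoder accumulates them, so reconstruction is exactly lossless.

```python
# Minimal DPCM sketch: previous-sample predictor, integer residuals,
# exact (lossless) reconstruction on decode.

def dpcm_encode(samples):
    prev = 0
    residuals = []
    for s in samples:
        residuals.append(s - prev)  # residual vs. previous-sample prediction
        prev = s
    return residuals

def dpcm_decode(residuals):
    prev = 0
    out = []
    for r in residuals:
        prev += r                   # accumulate residuals to rebuild samples
        out.append(prev)
    return out

data = [100, 102, 101, 105, 110, 110, 90]
res = dpcm_encode(data)
print(res)                          # [100, 2, -1, 4, 5, 0, -20]
assert dpcm_decode(res) == data     # lossless roundtrip
```

    On smooth image rows the residuals cluster near zero, which is what makes them cheap to entropy-code.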

  11. Telescope Adaptive Optics Code

    Energy Science and Technology Software Center (ESTSC)

    2005-07-28

    The Telescope AO Code has general adaptive optics capabilities plus specialized models for three telescopes with either adaptive optics or active optics systems. It has the capability to generate either single-layer or distributed Kolmogorov turbulence phase screens using the FFT. Missing low-order spatial frequencies are added using the Karhunen-Loeve expansion. The phase structure curve is extremely close to the theoretical. Secondly, it has the capability to simulate an adaptive optics control system. The default parameters are those of the Keck II adaptive optics system. Thirdly, it has a general wave optics capability to model the science camera halo due to scintillation from atmospheric turbulence and the telescope optics. Although this capability was implemented for the Gemini telescopes, the only default parameter specific to the Gemini telescopes is the primary mirror diameter. Finally, it has a model for the LSST active optics alignment strategy. This last model is highly specific to the LSST.

  12. Patched Conic Trajectory Code

    NASA Technical Reports Server (NTRS)

    Park, Brooke Anderson; Wright, Henry

    2012-01-01

    PatCon code was developed to help mission designers run trade studies on launch and arrival times for any given planet. Initially developed in Fortran, the required inputs included launch date, arrival date, and other orbital parameters of the launch planet and arrival planets at the given dates. These parameters include the position of the planets, the eccentricity, semi-major axes, argument of periapsis, ascending node, and inclination of the planets. With these inputs, a patched conic approximation is used to determine the trajectory. The patched conic approximation divides the planetary mission into three parts: (1) the departure phase, in which the two relevant bodies are Earth and the spacecraft, and where the trajectory is a departure hyperbola with Earth at the focus; (2) the cruise phase, in which the two bodies are the Sun and the spacecraft, and where the trajectory is a transfer ellipse with the Sun at the focus; and (3) the arrival phase, in which the two bodies are the target planet and the spacecraft, where the trajectory is an arrival hyperbola with the planet as the focus.
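    The cruise phase described in part (2) can be sketched with a textbook Hohmann transfer ellipse. The constants below are standard approximate values, and circular, coplanar planetary orbits are assumed; this is an illustration of the patched-conic idea, not PatCon's actual implementation.

```python
# Back-of-envelope patched-conic cruise phase: Hohmann transfer ellipse
# from Earth's orbit to Mars's orbit with the Sun at the focus.
import math

MU_SUN = 1.32712e11   # km^3/s^2, solar gravitational parameter
R_EARTH = 1.496e8     # km, Earth's mean orbital radius
R_MARS = 2.279e8      # km, Mars's mean orbital radius

a = (R_EARTH + R_MARS) / 2.0                     # transfer ellipse semi-major axis
v_earth = math.sqrt(MU_SUN / R_EARTH)            # Earth's circular orbital speed
v_perihelion = math.sqrt(MU_SUN * (2.0 / R_EARTH - 1.0 / a))  # vis-viva at departure
v_inf = v_perihelion - v_earth                   # hyperbolic excess speed for phase (1)

print(round(v_earth, 2))   # 29.78 km/s
print(round(v_inf, 2))     # 2.94 km/s
```

    The departure hyperbola of phase (1) would then be sized so that its excess speed matches this `v_inf` at the edge of Earth's sphere of influence.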

  13. The GLUT4 code.

    PubMed

    Larance, Mark; Ramm, Georg; James, David E

    2008-02-01

    Despite being one of the first recognized targets of insulin action, the acceleration of glucose transport into muscle and fat tissue remains one of the most enigmatic processes in the insulin action cascade. Glucose transport is accomplished by a shift in the distribution of the insulin-responsive glucose transporter GLUT4 from intracellular compartments to the plasma membrane in the presence of insulin. The complexity in deciphering the molecular blueprint of insulin regulation of glucose transport arises because it represents a convergence of two convoluted biological systems-vesicular transport and signal transduction. Whereas more than 60 molecular players have been implicated in this orchestral performance, it has been difficult to distinguish between mainly passive participants vs. those that are clearly driving the process. The maze-like nature of the endosomal system makes it almost impossible to dissect the anatomical nature of what appears to be a medley of many overlapping and rapidly changing transitions. A major limitation is technology. It is clear that further progress in teasing apart the GLUT4 code will require the development and application of novel and advanced technologies that can discriminate one molecule from another in the living cell and to superimpose this upon a system in which the molecular environment can be carefully manipulated. Many are now taking on this challenge. PMID:17717074

  14. Error coding simulations in C

    NASA Technical Reports Server (NTRS)

    Noble, Viveca K.

    1994-01-01

    When data is transmitted through a noisy channel, errors are produced within the data, rendering it indecipherable. Through the use of error control coding techniques, the bit error rate can be reduced to any desired level without sacrificing the transmission data rate. The Astrionics Laboratory at Marshall Space Flight Center has decided to use a modular, end-to-end telemetry data simulator to simulate the transmission of data from flight to ground and various methods of error control. The simulator includes modules for random data generation, data compression, Consultative Committee for Space Data Systems (CCSDS) transfer frame formation, error correction/detection, error generation and error statistics. The simulator utilizes a concatenated coding scheme which includes the CCSDS standard (255,223) Reed-Solomon (RS) code over GF(2(exp 8)) with interleave depth of 5 as the outermost code, a (7, 1/2) convolutional code as an inner code and the CCSDS recommended (n, n-16) cyclic redundancy check (CRC) code as the innermost code, where n is the number of information bits plus 16 parity bits. The received signal-to-noise ratio required for a desired bit error rate is greatly reduced through the use of forward error correction techniques. Even greater coding gain is provided through the use of a concatenated coding scheme. Interleaving/deinterleaving is necessary to randomize burst errors which may appear at the input of the RS decoder. The burst correction capability length is increased in proportion to the interleave depth. The modular nature of the simulator allows for inclusion or exclusion of modules as needed. This paper describes the development and operation of the simulator, the verification of a C-language Reed-Solomon code, and the possibility of using Comdisco SPW(tm) as a tool for determining optimal error control schemes.
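    The innermost (n, n-16) CRC of the concatenated scheme can be sketched bit by bit. This uses the CCSDS-recommended 16-bit generator polynomial x^16 + x^12 + x^5 + 1 (0x1021) with an all-ones initial register; the driver value is the conventional CRC check string.

```python
# Bitwise CCSDS-style CRC-16: polynomial 0x1021, register preset to all ones,
# no reflection, no final XOR.

def crc16_ccsds(data: bytes) -> int:
    crc = 0xFFFF                     # all-ones preset
    for byte in data:
        crc ^= byte << 8             # fold next byte into the top of the register
        for _ in range(8):
            if crc & 0x8000:
                crc = ((crc << 1) ^ 0x1021) & 0xFFFF
            else:
                crc = (crc << 1) & 0xFFFF
    return crc

print(hex(crc16_ccsds(b"123456789")))   # 0x29b1, the standard check value
```

    Appending these 16 bits to the n-16 information bits gives the (n, n-16) codeword described in the abstract.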

  15. Coding in pigeons: Multiple-coding versus single-code/default strategies.

    PubMed

    Pinto, Carlos; Machado, Armando

    2015-05-01

    To investigate the coding strategies that pigeons may use in temporal discrimination tasks, pigeons were trained on a matching-to-sample procedure with three sample durations (2s, 6s and 18s) and two comparisons (red and green hues). One comparison was correct following 2-s samples and the other was correct following both 6-s and 18-s samples. Tests were then run to contrast the predictions of two hypotheses concerning the pigeons' coding strategies, the multiple-coding and the single-code/default. According to the multiple-coding hypothesis, three response rules are acquired, one for each sample. According to the single-code/default hypothesis, only two response rules are acquired, one for the 2-s sample and a "default" rule for any other duration. In retention interval tests, pigeons preferred the "default" key, a result predicted by the single-code/default hypothesis. In no-sample tests, pigeons preferred the key associated with the 2-s sample, a result predicted by multiple-coding. Finally, in generalization tests, when the sample duration equaled 3.5s, the geometric mean of 2s and 6s, pigeons preferred the key associated with the 6-s and 18-s samples, a result predicted by the single-code/default hypothesis. The pattern of results suggests the need for models that take into account multiple sources of stimulus control. PMID:25894105

  16. Cracking the bioelectric code

    PubMed Central

    Tseng, AiSun; Levin, Michael

    2013-01-01

    Patterns of resting potential in non-excitable cells of living tissue are now known to be instructive signals for pattern formation during embryogenesis, regeneration and cancer suppression. The development of molecular-level techniques for tracking ion flows and functionally manipulating the activity of ion channels and pumps has begun to reveal the mechanisms by which voltage gradients regulate cell behaviors and the assembly of complex large-scale structures. A recent paper demonstrated that a specific voltage range is necessary for demarcation of eye fields in the frog embryo. Remarkably, artificially setting other somatic cells to the eye-specific voltage range resulted in formation of eyes in aberrant locations, including tissues that are not in the normal anterior ectoderm lineage: eyes could be formed in the gut, on the tail, or in the lateral plate mesoderm. These data challenge the existing models of eye fate restriction and tissue competence maps, and suggest the presence of a bioelectric code—a mapping of physiological properties to anatomical outcomes. This Addendum summarizes the current state of knowledge in developmental bioelectricity, proposes three possible interpretations of the bioelectric code that functionally maps physiological states to anatomical outcomes, and highlights the biggest open questions in this field. We also suggest a speculative hypothesis at the intersection of cognitive science and developmental biology: that bioelectrical signaling among non-excitable cells coupled by gap junctions simulates neural network-like dynamics, and underlies the information processing functions required by complex pattern formation in vivo. Understanding and learning to control the information stored in physiological networks will have transformative implications for developmental biology, regenerative medicine and synthetic bioengineering. PMID:23802040

  17. Simplified Correlator For Ranging Codes

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.; Smith, J. R.

    1990-01-01

    Improved correlating subsystem of pseudorandom-code ranging system made possible by advent of fast, custom-made, very-large-scale integrated circuits. Performs far fewer arithmetical operations, contains much less specialized analog and digital circuitry, and can be used with a large number of different codes.

  18. Breaking the Code of Silence.

    ERIC Educational Resources Information Center

    Halbig, Wolfgang W.

    2000-01-01

    Schools and communities must break the adolescent code of silence concerning threats of violence. Schools need character education stressing courage, caring, and responsibility; regular discussions of the school discipline code; formal security discussions with parents; 24-hour hotlines; and protocols for handling reports of potential violence.…

  19. Accelerator Physics Code Web Repository

    SciTech Connect

    Zimmermann, F.; Basset, R.; Bellodi, G.; Benedetto, E.; Dorda, U.; Giovannozzi, M.; Papaphilippou, Y.; Pieloni, T.; Ruggiero, F.; Rumolo, G.; Schmidt, F.; Todesco, E.; Zotter, B.W.; Payet, J.; Bartolini, R.; Farvacque, L.; Sen, T.; Chin, Y.H.; Ohmi, K.; Oide, K.; Furman, M.; /LBL, Berkeley /Oak Ridge /Pohang Accelerator Lab. /SLAC /TRIUMF /Tech-X, Boulder /UC, San Diego /Darmstadt, GSI /Rutherford /Brookhaven

    2006-10-24

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  20. ACCELERATION PHYSICS CODE WEB REPOSITORY.

    SciTech Connect

    WEI, J.

    2006-06-26

    In the framework of the CARE HHH European Network, we have developed a web-based dynamic accelerator-physics code repository. We describe the design, structure and contents of this repository, illustrate its usage, and discuss our future plans, with emphasis on code benchmarking.

  1. Computer algorithm for coding gain

    NASA Technical Reports Server (NTRS)

    Dodd, E. E.

    1974-01-01

    Development of a computer algorithm for coding gain for use in an automated communications link design system. Using an empirical formula which defines coding gain as used in space communications engineering, an algorithm is constructed on the basis of available performance data for nonsystematic convolutional encoding with soft-decision (eight-level) Viterbi decoding.
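    Dodd's empirical formula is not given in the abstract, but the underlying definition of coding gain, the reduction in required Eb/N0 at a target bit error rate, can be sketched. The uncoded BPSK requirement BER = Q(sqrt(2 Eb/N0)) is solved by bisection; the coded requirement below is a hypothetical placeholder, not a value from the paper.

```python
# Coding-gain arithmetic sketch: solve the uncoded BPSK Eb/N0 requirement
# for a target BER, then subtract a (hypothetical) coded requirement in dB.
import math

def q_func(x):
    # Gaussian tail probability Q(x).
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def required_ebn0_db(target_ber):
    # Bisect on linear Eb/N0; BER is monotonically decreasing in Eb/N0.
    lo, hi = 0.0, 100.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if q_func(math.sqrt(2.0 * mid)) > target_ber:
            lo = mid
        else:
            hi = mid
    return 10.0 * math.log10(hi)

uncoded = required_ebn0_db(1e-5)   # about 9.6 dB for uncoded BPSK
coded = 4.4                        # hypothetical requirement of a coded link, dB
gain = uncoded - coded
print(round(uncoded, 2), "dB uncoded;", round(gain, 2), "dB coding gain")
```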

  2. LFSC - Linac Feedback Simulation Code

    SciTech Connect

    Ivanov, Valentin; /Fermilab

    2008-05-01

    The computer program LFSC is a numerical tool for simulating beam-based feedback in high-performance linacs. The code LFSC is based on the earlier version developed by a collective of authors at SLAC (L. Hendrickson, R. McEwen, T. Himel, H. Shoaee, S. Shah, P. Emma, P. Schultz) during 1990-2005. That code was successively used in simulation of the SLC, TESLA, CLIC and NLC projects. It can simulate both pulse-to-pulse feedback, on a timescale corresponding to 5-100 Hz, and slower feedbacks operating in the 0.1-1 Hz range in the Main Linac and Beam Delivery System. The code LFSC runs under Matlab for the MS Windows operating system. It contains about 30,000 lines of source code in more than 260 subroutines. The code uses LIAR ('Linear Accelerator Research code') for particle tracking under ground motion and technical noise perturbations. It uses the Guinea Pig code to simulate the luminosity performance. A set of input files includes the lattice description (XSIF format) and plain text files with numerical parameters, wake fields, ground motion data, etc. The Matlab environment provides a flexible system for graphical output.

  3. Best practices for code release

    NASA Astrophysics Data System (ADS)

    Berriman, G. Bruce

    2016-01-01

    In this talk, I want to describe what I think are the best practices for releasing code and having it adopted by end users. Make sure your code is licensed, so users will know how the software can be used and modified, and place your code in a public repository (making sure that you follow institutional policies in doing this). Yet licensing and releasing code are not enough: the code must be organized and documented so users can understand what it does, what its limitations are, and how to build and use it. I will describe what I think are best practices in developing the content to support release, including tutorials, design documents, specifications of interfaces and so on. Much of what I have learned is based on ten years of experience in supporting releases of the Montage Image Mosaic Engine.

  4. National Transport Code Collaboration (NTCC)

    NASA Astrophysics Data System (ADS)

    Kritz, A. H.; Bateman, G.; Kinsey, J.; Wiley, J.; Cary, J. R.; Luetkemeyer, K. G.; Cohen, R.; Jong, R.; Lodestro, L.; Yang, T. B.; Houlberg, W.; Greenwood, D.; McCune, D.; Mikkelsen, D.; Pletzer, A.; St. John, H.; Fredian, T.; Sugiyama, L.

    1999-11-01

    Progress continues toward achieving NTCC goals, which include the development of -- a library of modules which satisfy clearly defined standards, a framework using modern computer languages to write transport codes, a Web-invocable data server and demonstration code, and an education program to utilize modern computational tools. The development of a flexible framework (using five programming languages -- FORTRAN, C++, PYTHON, CORBA and JAVA, and modern software engineering) allows the design of new customizable, user-friendly, easily maintained transport codes that can address major physics issues facing the fusion program. The demo code runs on up to three computers simultaneously; the GUI Client runs on a local computer, the Physics server advances the transport equations, and the Data server accesses experimental data. The evolutions of plasma discharges using different transport models from the module library are compared using the demo code.

  5. Portable code development in C

    SciTech Connect

    Brown, S.A.

    1990-11-06

    With a new generation of high performance computers appearing around us on a time scale of months, a new challenge for developers of simulation codes is to write and maintain production codes that are both highly portable and maximally efficient. My contention is that C is the language that is both best suited to that goal and is widely available today. GLF is a new code written mainly in C which is intended to have all of the XRASER physics and run on any platform of interest. It demonstrates the power of the C paradigm for code developers and flexibility and ease of use for the users. Three fundamental problems are discussed: the C/UNIX development environment; the supporting tools and libraries which handle data and graphics portability issues; and the advantages of C in numerical simulation code development.

  6. BASS Code Development

    NASA Technical Reports Server (NTRS)

    Sawyer, Scott

    2004-01-01

    The BASS computational aeroacoustic code solves the fully nonlinear Euler equations in the time domain in two dimensions. The acoustic response of the stator is determined simultaneously for the first three harmonics of the convected vortical gust of the rotor. The spatial mode generation, propagation and decay characteristics are predicted by assuming the acoustic field away from the stator can be represented as a uniform flow with small harmonic perturbations superimposed. The computed field is then decomposed using a joint temporal-spatial transform to determine the wave amplitudes as a function of rotor harmonic and spatial mode order. This report details the following technical aspects of the computations and analysis: 1) the BASS computational technique; 2) the application of periodic time-shifted boundary conditions; 3) the linear theory aspects unique to rotor-stator interactions; and 4) the joint spatial-temporal transform. The computational results presented herein are twofold. In each case, the acoustic response of the stator is determined simultaneously for the first three harmonics of the convected vortical gust of the rotor. The fan under consideration here, like modern fans, is cut off at BPF, so propagating acoustic waves are only expected at 2BPF and 3BPF. In the first case, the computations showed excellent agreement with linear theory predictions. The frequency and spatial mode order of the acoustic field were computed and found consistent with linear theory. Further, the propagation of the generated modes was also correctly predicted. The upstream-going waves propagated from the domain without reflection from the inflow boundary. However, reflections from the outflow boundary were noticed. The amplitude of the reflected wave was approximately 5% of the incident wave. The second set of computations was used to determine the influence of steady loading on the generated noise. Toward this end, the acoustic response was determined with three steady loading conditions: design, low loading, and high loading. The overall trend showed significant (approximately 10 dB) increases in the generated noise for the highly loaded stator.

  7. ETR/ITER systems code

    SciTech Connect

    Barr, W.L.; Bathke, C.G.; Brooks, J.N.; Bulmer, R.H.; Busigin, A.; DuBois, P.F.; Fenstermacher, M.E.; Fink, J.; Finn, P.A.; Galambos, J.D.; Gohar, Y.; Gorker, G.E.; Haines, J.R.; Hassanein, A.M.; Hicks, D.R.; Ho, S.K.; Kalsi, S.S.; Kalyanam, K.M.; Kerns, J.A.; Lee, J.D.; Miller, J.R.; Miller, R.L.; Myall, J.O.; Peng, Y-K.M.; Perkins, L.J.; Spampinato, P.T.; Strickler, D.J.; Thomson, S.L.; Wagner, C.E.; Willms, R.S.; Reid, R.L.

    1988-04-01

    A tokamak systems code capable of modeling experimental test reactors has been developed and is described in this document. The code, named TETRA (for Tokamak Engineering Test Reactor Analysis), consists of a series of modules, each describing a tokamak system or component, controlled by an optimizer/driver. This code development was a national effort in that the modules were contributed by members of the fusion community and integrated into a code by the Fusion Engineering Design Center. The code has been checked out on the Cray computers at the National Magnetic Fusion Energy Computing Center and has satisfactorily simulated the Tokamak Ignition/Burn Experimental Reactor II (TIBER) design. A feature of this code is the ability to perform optimization studies through the use of a numerical software package, which iterates prescribed variables to satisfy a set of prescribed equations or constraints. This code will be used to perform sensitivity studies for the proposed International Thermonuclear Experimental Reactor (ITER). 22 figs., 29 tabs.

  8. Surface code implementation of block code state distillation

    PubMed Central

    Fowler, Austin G.; Devitt, Simon J.; Jones, Cody

    2013-01-01

    State distillation is the process of taking a number of imperfect copies of a particular quantum state and producing fewer better copies. Until recently, the lowest overhead method of distilling states produced a single improved |A⟩ state given 15 input copies. New block code state distillation methods can produce k improved |A⟩ states given 3k + 8 input copies, potentially significantly reducing the overhead associated with state distillation. We construct an explicit surface code implementation of block code state distillation and quantitatively compare the overhead of this approach to the old. We find that, using the best available techniques, for parameters of practical interest, block code state distillation does not always lead to lower overhead, and, when it does, the overhead reduction is typically less than a factor of three. PMID:23736868
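    The raw input-copy comparison in the abstract reduces to simple arithmetic, sketched here (ignoring the surface-code layout costs the paper actually analyzes):

```python
# Input copies needed per protocol, straight from the abstract:
# 15-to-1 distillation uses 15 inputs per improved state;
# block code distillation uses 3k + 8 inputs for k improved states.

def copies_15_to_1(k):
    return 15 * k

def copies_block(k):
    return 3 * k + 8

for k in (1, 4, 16):
    print(k, copies_15_to_1(k), copies_block(k))
# Per-state cost of the block protocol approaches 3 as k grows, versus a
# fixed 15 for the older protocol; the paper shows the full overhead gap
# is much smaller once the surface-code implementation is costed.
```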

  9. PANEL CODE FOR PLANAR CASCADES

    NASA Technical Reports Server (NTRS)

    Mcfarland, E. R.

    1994-01-01

    The Panel Code for Planar Cascades was developed as an aid for the designer of turbomachinery blade rows. The effective design of turbomachinery blade rows relies on the use of computer codes to model the flow on blade-to-blade surfaces. Most of the currently used codes model the flow as inviscid, irrotational, and compressible with solutions being obtained by finite difference or finite element numerical techniques. While these codes can yield very accurate solutions, they usually require an experienced user to manipulate input data and control parameters. Also, they often limit a designer in the types of blade geometries, cascade configurations, and flow conditions that can be considered. The Panel Code for Planar Cascades accelerates the design process and gives the designer more freedom in developing blade shapes by offering a simple blade-to-blade flow code. Panel, or integral equation, solution techniques have been used for several years by external aerodynamicists who have developed and refined them into a primary design tool of the aircraft industry. The Panel Code for Planar Cascades adapts these same techniques to provide a versatile, stable, and efficient calculation scheme for internal flow. The code calculates the compressible, inviscid, irrotational flow through a planar cascade of arbitrary blade shapes. Since the panel solution technique is for incompressible flow, a compressibility correction is introduced to account for compressible flow effects. The analysis is limited to flow conditions in the subsonic and shock-free transonic range. Input to the code consists of inlet flow conditions, blade geometry data, and simple control parameters. Output includes flow parameters at selected control points. This program is written in FORTRAN IV for batch execution and has been implemented on an IBM 370 series computer with a central memory requirement of approximately 590K of 8 bit bytes. This program was developed in 1982.
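    The abstract does not name the compressibility correction used. A standard choice for subsonic, shock-free flow is the Prandtl-Glauert rule, sketched here as an assumption rather than as the code's actual method: the incompressible pressure coefficient is scaled by 1/sqrt(1 - M^2).

```python
# Prandtl-Glauert compressibility correction (assumed, illustrative):
# scales an incompressible pressure coefficient for subsonic Mach number.
import math

def prandtl_glauert(cp_incompressible, mach):
    if not 0.0 <= mach < 1.0:
        raise ValueError("correction is valid only for subsonic Mach numbers")
    return cp_incompressible / math.sqrt(1.0 - mach ** 2)

print(prandtl_glauert(-0.5, 0.6))   # about -0.625: suction peaks strengthen with Mach
```

    The correction blows up as M approaches 1, which is consistent with the abstract's restriction to subsonic and shock-free transonic conditions.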

  10. MACCS versus GENII: Code comparison

    SciTech Connect

    Foster, J.; Chanin, D.I.

    1991-09-30

    The computer codes GENII and MACCS, utilized for computing radiation doses, are discussed. The codes are compared using input from the source term in LANL file HW101-SY.INP, run dated 2/19/91. The release of radionuclides was assumed to be from a point source at ground level with a 10 minute release duration. Doses were calculated at a distance of 660 meters with an exposure duration of 2 hours. It was found that the two codes differed in how wind direction was treated.

  11. Comparison of linac simulation codes

    SciTech Connect

    Nath, S.; Ryne, Robert D.; Stovall, J.; Takeda, H.; Xiang, J.; Young, L.; Pichoff, N.; Uriot, D.; Crandall, K.

    2001-01-25

    The Spallation Neutron Source (SNS) project is a collaborative effort between Brookhaven, Argonne, Jefferson, Lawrence Berkeley, Los Alamos and Oak Ridge National Laboratories. Los Alamos is responsible for the design of the linac for this accelerator complex. The code PARMILA, developed at Los Alamos, is widely used for proton linac design and beam dynamics studies. The most updated version includes superconducting structures, among others. In recent years, some other codes have also been developed which focus primarily on beam dynamics studies. In this paper, we compare the simulation results and discuss physics aspects of the different linac design and beam dynamics simulation codes.

  12. State building energy codes status

    SciTech Connect

    1996-09-01

    This document contains the State Building Energy Codes Status prepared by Pacific Northwest National Laboratory for the U.S. Department of Energy under Contract DE-AC06-76RL01830 and dated September 1996. The U.S. Department of Energy's Office of Codes and Standards has developed this document to provide an information resource for individuals interested in energy efficiency of buildings and the relevant building energy codes in each state and U.S. territory. This is considered to be an evolving document and will be updated twice a year. In addition, special state updates will be issued as warranted.

  13. Iterative nonlinear unfolding code: TWOGO

    SciTech Connect

    Hajnal, F.

    1981-03-01

    A new iterative unfolding code, TWOGO, was developed to analyze Bonner sphere neutron measurements. The code includes two different unfolding schemes which alternate on successive iterations. The iterative process can be terminated either when the ratio of the coefficients of variation of the measured and calculated responses is unity, or when the percentage difference between the measured and evaluated sphere responses is less than the average measurement error. The code was extensively tested with various known spectra and real multisphere neutron measurements performed inside the containments of pressurized water reactors.
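
    The iterate-and-compare loop such codes rely on can be sketched with a generic multiplicative (SAND-II-style) update; this is an assumption for illustration, not either of TWOGO's two actual schemes:

```python
# Toy multiplicative unfolding of a spectrum phi from sphere responses
# m = R @ phi: at each iteration, compare measured to calculated
# responses and scale each spectrum bin by a response-weighted ratio.
def unfold(R, m, phi, iters=200):
    n_det, n_bins = len(R), len(R[0])
    for _ in range(iters):
        # calculated responses for the current spectrum estimate
        c = [sum(R[d][j] * phi[j] for j in range(n_bins)) for d in range(n_det)]
        for j in range(n_bins):
            w = sum(R[d][j] for d in range(n_det))
            if w > 0:
                corr = sum(R[d][j] * (m[d] / c[d]) for d in range(n_det)) / w
                phi[j] *= corr
    return phi

R = [[1.0, 0.2], [0.3, 1.0]]          # toy 2-sphere response matrix
true_phi = [2.0, 5.0]
m = [sum(R[d][j] * true_phi[j] for j in range(2)) for d in range(2)]
phi = unfold(R, m, [1.0, 1.0])        # approaches [2.0, 5.0]
```

A real unfolding problem is underdetermined (a few spheres, many energy bins), which is why termination criteria like those described above matter.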

  14. Understanding the Code: upholding dignity.

    PubMed

    Griffith, Richard

    2015-04-01

    The Nursing and Midwifery Council, the statutory professional regulator for registered district nurses, has introduced a revised code of standards that came into effect on 31 March 2015. The Code makes clear that while district nurses can interpret the values and principles for use in community settings, the standards are not negotiable or discretionary. They must be applied, otherwise the district nurse's fitness to practice will be called into question. In the second of a series of articles analysing the legal implications of the Code on district nurse practice, the author considers the first standard, which requires district nurses to treat people as individuals and to uphold their dignity. PMID:25839879

  15. Combinatorial coding of Drosophila muscle shape by Collier and Nautilus.

    PubMed

    Enriquez, Jonathan; de Taffin, Mathilde; Crozatier, Michèle; Vincent, Alain; Dubois, Laurence

    2012-03-01

    The diversity of Drosophila muscles correlates with the expression of combinations of identity transcription factors (iTFs) in muscle progenitors. Here, we address the question of when and how a combinatorial code is translated into muscle specific properties, by studying the roles of the Collier and Nautilus iTFs that are expressed in partly overlapping subsets of muscle progenitors. We show that the three dorso-lateral (DL) progenitors which express Nautilus and Collier are specified in a fixed temporal sequence and that each expresses additionally other, distinct iTFs. Removal of Collier leads to changes in expression of some of these iTFs and mis-orientation of several DL muscles, including the dorsal acute DA3 muscle which adopts a DA2 morphology. Detailed analysis of this transformation revealed the existence of two steps in the attachment of elongating muscles to specific tendon cells: transient attachment to alternate tendon cells, followed by a resolution step selecting the final sites. The multiple cases of triangular-shaped muscles observed in col mutant embryos indicate that transient binding of elongating muscle to exploratory sites could be a general feature of the developing musculature. In nau mutants, the DA3 muscle randomly adopts the attachment sites of the DA3 or DO5 muscles that derive from the same progenitor, resulting in a DA3, DO5-like or bifid DA3-DO5 orientation. In addition, nau mutant embryos display thinner muscle fibres. Together, our data show that the sequence of expression and combinatorial activities of Col and Nau control the pattern and morphology of DL muscles. PMID:22200594

  16. Bandwidth efficient coding for satellite communications

    NASA Technical Reports Server (NTRS)

    Lin, Shu; Costello, Daniel J., Jr.; Miller, Warner H.; Morakis, James C.; Poland, William B., Jr.

    1992-01-01

    An error control coding scheme was devised to achieve large coding gain and high reliability using coded modulation with reduced decoding complexity. To achieve a 3 to 5 dB coding gain with moderate reliability, the decoding complexity is quite modest; in fact, a 3 dB coding gain requires only simple decoding, whether trellis coded modulation or block coded modulation is used. However, to achieve coding gains exceeding 5 dB, the decoding complexity increases drastically, and the implementation of the decoder becomes very expensive and impractical. The use of coded modulation in conjunction with concatenated (or cascaded) coding is therefore proposed. A good short bandwidth efficient modulation code is used as the inner code and a relatively powerful Reed-Solomon code is used as the outer code. With properly chosen inner and outer codes, a concatenated coded modulation scheme not only achieves large coding gains and high reliability with good bandwidth efficiency but also can be practically implemented. This combination of coded modulation and concatenated coding offers a way of achieving the best of three worlds: reliability and coding gain, bandwidth efficiency, and decoding complexity.
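
    The inner/outer structure can be illustrated with a toy concatenation, assuming a Hamming(7,4) inner code and a trivial repetition-3 outer code standing in for the Reed-Solomon outer code (illustration of the layering only, not the proposed scheme):

```python
# Inner code: Hamming(7,4), corrects any single bit error per block.
def hamming74_encode(d):               # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p3 = d[1] ^ d[2] ^ d[3]
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def hamming74_decode(c):
    c = c[:]
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    pos = s1 + 2 * s2 + 4 * s3         # 1-based error position, 0 = clean
    if pos:
        c[pos - 1] ^= 1
    return [c[2], c[4], c[5], c[6]]

# Outer code: repetition-3 with majority vote (RS stand-in).
def cc_encode(nibble):
    return [hamming74_encode(nibble) for _ in range(3)]

def cc_decode(blocks):
    votes = [hamming74_decode(b) for b in blocks]
    return [1 if sum(v[i] for v in votes) >= 2 else 0 for i in range(4)]

msg = [1, 0, 1, 1]
tx = cc_encode(msg)
tx[0][3] ^= 1                          # single bit error: inner code fixes it
tx[1] = [b ^ 1 for b in tx[1]]         # one copy ruined: outer vote rejects it
rx = cc_decode(tx)
print(rx == msg)                       # True
```

The division of labor is the point: the inner decoder cleans up scattered channel errors, and the outer code mops up the (bursty) inner-decoder failures.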

  17. Facilitating Internet-Scale Code Retrieval

    ERIC Educational Resources Information Center

    Bajracharya, Sushil Krishna

    2010-01-01

    Internet-Scale code retrieval deals with the representation, storage, and access of relevant source code from a large amount of source code available on the Internet. Internet-Scale code retrieval systems support common emerging practices among software developers related to finding and reusing source code. In this dissertation we focus on some…

  18. Efficient codes and balanced networks.

    PubMed

    Denève, Sophie; Machens, Christian K

    2016-02-23

    Recent years have seen a growing interest in inhibitory interneurons and their circuits. A striking property of cortical inhibition is how tightly it balances excitation. Inhibitory currents not only match excitatory currents on average, but track them on a millisecond time scale, whether they are caused by external stimuli or spontaneous fluctuations. We review, together with experimental evidence, recent theoretical approaches that investigate the advantages of such tight balance for coding and computation. These studies suggest a possible revision of the dominant view that neurons represent information with firing rates corrupted by Poisson noise. Instead, tight excitatory/inhibitory balance may be a signature of a highly cooperative code, orders of magnitude more precise than a Poisson rate code. Moreover, tight balance may provide a template that allows cortical neurons to construct high-dimensional population codes and learn complex functions of their inputs. PMID:26906504

  19. Property Control through Bar Coding.

    ERIC Educational Resources Information Center

    Kingma, Gerben J.

    1984-01-01

    A public utility company uses laser wands to read bar-coded labels on furniture and equipment. The system allows an 80 percent savings of the time required to create reports for inventory control. (MLF)

  20. Improvements to the NASAP code

    NASA Technical Reports Server (NTRS)

    Perel, D.

    1980-01-01

    The FORTRAN code NASAP was modified and improved to provide the capability of transforming CAD-generated NASTRAN input data for DESAP II and/or DESAP I. The latter programs were developed for structural optimization.

  1. Research synthesis. Coding and conjectures.

    PubMed

    Stock, W A; Goméz Benito, J; Balluerka Lasa, N

    1996-03-01

    In meta-analyses, the extraction and coding of information from primary research reports must be done competently, because these tasks involve most of the decisions that determine the usefulness of the final product. The authors offer guidelines that make it more likely that high-quality information is reliably extracted and coded from primary research reports. These guidelines address issues ranging from the selection of items and construction of coding materials to sustaining reliability and vigilance across extended periods of coding. Thereafter, the authors note how the methodology of meta-analysis results in pressure to change the type of information that appears in primary research reports, and close by offering a few conjectures about the future of meta-analysis. PMID:10186898

  2. Seals Flow Code Development 1993

    NASA Technical Reports Server (NTRS)

    Liang, Anita D. (Compiler); Hendricks, Robert C. (Compiler)

    1994-01-01

    Seals Workshop of 1993 code releases include SPIRALI for spiral grooved cylindrical and face seal configurations; IFACE for face seals with pockets, steps, tapers, turbulence, and cavitation; GFACE for gas face seals with 'lift pad' configurations; and SCISEAL, a CFD code for research and design of seals of cylindrical configuration. GUI (graphical user interface) and code usage were discussed, with hands-on use of the codes, discussions, comparisons, and industry feedback. Other highlights of the Seals Workshop-93 include environmental and customer-driven seal requirements; 'what's coming'; and brush seal developments including flow visualization, numerical analysis, bench testing, T-700 engine testing, tribological pairing and ceramic configurations, and cryogenic and hot gas facility brush seal results. Also discussed are seals for hypersonic engines and dynamic results for spiral groove and smooth annular seals.

  3. PARAVT: Parallel Voronoi Tessellation code

    NASA Astrophysics Data System (ADS)

    Gonzalez, Roberto E.

    2016-01-01

    We present a new open-source code for massively parallel computation of Voronoi tessellations (VT hereafter) in large data sets. The code is aimed at astrophysical applications, where VT densities and neighbors are widely used. Several serial Voronoi tessellation codes exist; however, no open-source, parallel implementation has been available to handle the large numbers of particles/galaxies in current N-body simulations and sky surveys. Parallelization is implemented under MPI, and the VT is computed using the Qhull library. The domain decomposition takes into account consistent boundary computation between tasks and supports periodic conditions. In addition, the code computes neighbor lists, the Voronoi density, and the Voronoi cell volume for each particle, and can compute the density on a regular grid.
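
    What "Voronoi density" means can be sketched serially with a brute-force grid assignment (stdlib only; PARAVT itself builds the exact tessellation with Qhull under MPI):

```python
# Estimate 2D Voronoi cell volumes by assigning each grid cell to its
# nearest seed, then take density = mass / cell volume (unit masses).
import random

def voronoi_density(seeds, box=1.0, n=64):
    cell = box / n
    vol = [0.0] * len(seeds)
    for ix in range(n):
        for iy in range(n):
            x, y = (ix + 0.5) * cell, (iy + 0.5) * cell
            # the grid cell belongs to the Voronoi region of the nearest seed
            k = min(range(len(seeds)),
                    key=lambda i: (seeds[i][0] - x) ** 2 + (seeds[i][1] - y) ** 2)
            vol[k] += cell * cell
    return [1.0 / v if v > 0 else 0.0 for v in vol]

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(20)]
dens = voronoi_density(pts)            # high density = tightly packed seeds
```

The serial cost here is O(grid cells x seeds); the point of a parallel, Qhull-based code is to make the exact equivalent feasible for billions of particles.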

  4. Correlated algebraic-geometric codes

    NASA Astrophysics Data System (ADS)

    Guruswami, Venkatesan; Patthak, Anindya C.

    2008-03-01

    We define a new family of error-correcting codes based on algebraic curves over finite fields, and develop efficient list decoding algorithms for them. Our codes extend the class of algebraic-geometric (AG) codes via a (nonobvious) generalization of the approach in the recent breakthrough work of Parvaresh and Vardy (2005). Our work shows that the PV framework applies to fairly general settings by elucidating the key algebraic concepts underlying it. Also, more importantly, AG codes of arbitrary block length exist over fixed alphabets Σ, thus enabling us to establish new trade-offs between the list decoding radius and rate over a bounded alphabet size. The work of Parvaresh and Vardy (2005) was extended in Guruswami and Rudra (2006) to give explicit codes that achieve the list decoding capacity (optimal trade-off between rate and fraction of errors corrected) over large alphabets. A similar extension of this work along the lines of Guruswami and Rudra could have substantial impact. Indeed, it could give better trade-offs than currently known over a fixed alphabet (say, GF(2^12)), which in turn, upon concatenation with a fixed, well-understood binary code, could take us closer to the list decoding capacity for binary codes. This may also be a promising way to address the significant complexity drawback of the result of Guruswami and Rudra, and to enable approaching capacity with bounded list size independent of the block length (the list size and decoding complexity in their work are both n^Ω(1/ε), where ε is the distance to capacity). Similar to algorithms for AG codes from Guruswami and Sudan (1999) and (2001), our encoding/decoding algorithms run in polynomial time assuming a natural polynomial-size representation of the code. For codes based on a specific "optimal" algebraic curve, we also present an expected polynomial time algorithm to construct the requisite representation. This in turn fills an important void in the literature by presenting an efficient construction of the representation often assumed in the list decoding algorithms for AG codes.
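
    As a concrete anchor: Reed-Solomon evaluation codes are the genus-zero special case of AG codes. A toy [7,3] RS code over GF(31), with erasure decoding by Lagrange interpolation (list decoding from errors, the subject above, is far more involved):

```python
P = 31                              # arithmetic in the prime field GF(31)

def rs_encode(msg, points):
    """Encode k coefficients by evaluating the message polynomial
    (degree < k) at n distinct field points."""
    return [sum(c * pow(x, i, P) for i, c in enumerate(msg)) % P
            for x in points]

def rs_decode_erasures(pairs, k):
    """Recover the k coefficients from any k surviving (x, y) positions
    by Lagrange interpolation over GF(P)."""
    coeffs = [0] * k
    for j, (xj, yj) in enumerate(pairs):
        basis, denom = [1], 1       # basis_j(x) = prod_{m != j} (x - xm)
        for m, (xm, _) in enumerate(pairs):
            if m != j:
                new = [0] * (len(basis) + 1)
                for i, b in enumerate(basis):
                    new[i] = (new[i] - xm * b) % P
                    new[i + 1] = (new[i + 1] + b) % P
                basis = new
                denom = (denom * (xj - xm)) % P
        scale = yj * pow(denom, P - 2, P) % P   # Fermat inverse mod P
        for i, b in enumerate(basis):
            coeffs[i] = (coeffs[i] + scale * b) % P
    return coeffs

msg = [3, 0, 7]                     # f(x) = 3 + 7x^2 over GF(31)
points = list(range(1, 8))          # n = 7 evaluation points
cw = rs_encode(msg, points)
survivors = [(points[i], cw[i]) for i in (0, 3, 6)]   # any k = 3 positions
print(rs_decode_erasures(survivors, 3))  # [3, 0, 7]
```

AG codes replace the evaluation points on a line with points on a higher-genus curve, which is what lets the block length exceed the alphabet size.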

  5. electromagnetics, eddy current, computer codes

    Energy Science and Technology Software Center (ESTSC)

    2002-03-12

    TORO Version 4 is designed for finite element analysis of steady, transient and time-harmonic, multi-dimensional, quasi-static problems in electromagnetics. The code allows simulation of electrostatic fields, steady current flows, magnetostatics and eddy current problems in plane or axisymmetric, two-dimensional geometries. TORO is easily coupled to heat conduction and solid mechanics codes to allow multi-physics simulations to be performed.

  6. Computer-Access-Code Matrices

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr.

    1990-01-01

    Authorized users respond to changing challenges with changing passwords. Scheme for controlling access to computers defeats eavesdroppers and "hackers". Based on password system of challenge and password or sign, challenge, and countersign correlated with random alphanumeric codes in matrices of two or more dimensions. Codes stored on floppy disk or plug-in card and changed frequently. For even higher security, matrices of four or more dimensions used, just as cubes compounded into hypercubes in concurrent processing.
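
    The challenge/countersign idea for a two-dimensional matrix might be sketched as follows (a seeded generator stands in for distributing identical matrices on a floppy disk or plug-in card; names and sizes are illustrative):

```python
# Host and user hold the same random alphanumeric matrix. The host
# challenges with coordinates; the user answers with the entry there,
# so every session effectively uses a different password.
import random
import string

def make_matrix(seed, rows=8, cols=8):
    rng = random.Random(seed)       # both sides derive the same matrix
    chars = string.ascii_uppercase + string.digits
    return [[rng.choice(chars) for _ in range(cols)] for _ in range(rows)]

host = make_matrix(seed=42)         # e.g. loaded from the plug-in card
user = make_matrix(seed=42)
r, c = random.randrange(8), random.randrange(8)   # host's challenge
assert user[r][c] == host[r][c]     # user's countersign is accepted
```

A four-dimensional variant simply indexes `matrix[a][b][c][d]`, the "hypercube" extension described above; an eavesdropper who captures one exchange learns only a single cell.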

  7. Training course on code implementation.

    PubMed

    Allain, A; De Arango, R

    1992-01-01

    The International Baby Food Action Network (IBFAN) is a coalition of over 40 citizen groups in 70 countries. IBFAN monitors the progress worldwide of the implementation of the International Code of Marketing of Breastmilk Substitutes. The Code is intended to regulate the advertising and promotional techniques used to sell infant formula. The 1991 IBFAN report shows that 75 countries have taken some action to implement the International Code. During 1992, the IBFAN Code Documentation Center in Malaysia conducted 2 training courses to help countries draft legislation to implement and monitor compliance with the International Code. In April, government officials from 19 Asian and African countries attended the first course in Malaysia; the second course was conducted in Spanish in Guatemala and attended by officials from 15 Latin American and Caribbean countries. The resource people included representatives from NGOs in Africa, Asia, Latin America, Europe and North America with experience in Code implementation and monitoring at the national level. The main purpose of each course was to train government officials to use the International Code as a starting point for national legislation to protect breastfeeding. Participants reviewed recent information on lactation management, the advantages of breastfeeding, current trends in breastfeeding and the marketing practices of infant formula manufacturers. The participants studied the terminology contained in the International Code and terminology used by infant formula manufacturers to include breastmilk supplements such as follow-on formulas and cereal-based baby foods. Relevant World Health Assembly resolutions such as the one adopted in 1986 on the need to ban free and low-cost supplies to hospitals were examined. The legal aspects of the current Baby Friendly Hospital Initiative (BFHI) and the progress in the 12 BFHI test countries concerning the elimination of supplies were also examined. 
International Labor Organization conventions on maternity legislation also need to be implemented to support breastfeeding. PMID:12288850

  8. The Integrated TIGER Series Codes

    SciTech Connect

    Kensek, Ronald P.; Franke, Brian C.; Laub, Thomas W.

    2006-01-15

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.
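
    The Monte Carlo approach itself can be illustrated with a deliberately tiny example: photon transmission through a slab, sampling exponential free paths between collisions (a toy model, nowhere near ITS's coupled electron/photon physics):

```python
import math
import random

def transmit_fraction(mu_t, thickness, absorb_prob, n=20000, seed=0):
    """Fraction of photons crossing a slab of given optical thickness.
    mu_t: total interaction coefficient (1/length)."""
    rng = random.Random(seed)
    through = 0
    for _ in range(n):
        x, c = 0.0, 1.0                  # position, direction cosine
        while True:
            # sample an exponentially distributed free flight
            x += c * (-math.log(rng.random()) / mu_t)
            if x >= thickness:
                through += 1
                break
            if x < 0 or rng.random() < absorb_prob:
                break                    # escaped backward or absorbed
            c = 2.0 * rng.random() - 1.0 # isotropic scatter
    return through / n

# Sanity check against the analytic answer for a pure absorber:
# uncollided transmission = exp(-mu_t * thickness).
est = transmit_fraction(mu_t=1.0, thickness=1.0, absorb_prob=1.0)
print(est, math.exp(-1.0))
```

Production codes like ITS differ in degree, not kind: far richer physics and geometry, plus variance reduction, wrapped around the same sample-a-history loop.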

  9. Multiple-Trellis-Coded Modulation

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.

    1990-01-01

    Theoretical gain over simple multiple-phase-shift keying is at least 2 to 3 decibels. A multiple-trellis-coded modulation scheme combined with M-ary modulation is shown theoretically to yield asymptotic performance gains over uncoded multiple-phase-shift keying, while employing symmetric multiple-phase-shift signal constellations and avoiding catastrophic codes. Suitable for satellite and terrestrial-mobile/satellite communications or other communications requiring burst-error correction. Extends to higher-dimensional modulations such as quadrature amplitude modulation.

  10. The Integrated TIGER Series Codes

    Energy Science and Technology Software Center (ESTSC)

    2006-01-15

    ITS is a powerful and user-friendly software package permitting state-of-the-art Monte Carlo solution of linear time-independent coupled electron/photon radiation transport problems, with or without the presence of macroscopic electric and magnetic fields of arbitrary spatial dependence. Our goal has been to simultaneously maximize operational simplicity and physical accuracy. Through a set of preprocessor directives, the user selects one of the many ITS codes. The ease with which the makefile system is applied combines with an input scheme based on order-independent descriptive keywords that makes maximum use of defaults and internal error checking to provide experimentalists and theorists alike with a method for the routine but rigorous solution of sophisticated radiation transport problems. Physical rigor is provided by employing accurate cross sections, sampling distributions, and physical models for describing the production and transport of the electron/photon cascade from 1.0 GeV down to 1.0 keV. The availability of source code permits the more sophisticated user to tailor the codes to specific applications and to extend the capabilities of the codes to more complex applications. Version 5.0, the latest version of ITS, contains (1) improvements to the ITS 3.0 continuous-energy codes, (2) multigroup codes with adjoint transport capabilities, (3) parallel implementations of all ITS codes, (4) a general purpose geometry engine for linking with CAD or other geometry formats, and (5) the Cholla facet geometry library. Moreover, the general user friendliness of the software has been enhanced through increased internal error checking and improved code portability.

  11. UNIX code management and distribution

    SciTech Connect

    Hung, T.; Kunz, P.F.

    1992-09-01

    We describe a code management and distribution system based on tools freely available for the UNIX systems. At the master site, version control is managed with CVS, which is a layer on top of RCS, and distribution is done via NFS mounted file systems. At remote sites, small modifications to CVS provide for interactive transactions with the CVS system at the master site such that remote developers are true peers in the code development process.

  12. Edge equilibrium code for tokamaks

    SciTech Connect

    Li, Xujing; Drozdov, Vladimir V.

    2014-01-15

    The edge equilibrium code (EEC) described in this paper is developed for simulations of the near-edge plasma using the finite element method. It solves the Grad-Shafranov equation in toroidal coordinates and uses adaptive grids aligned with magnetic field lines. Hermite finite elements are chosen for the numerical scheme. A fast Newton scheme, the same as that implemented in the equilibrium and stability code (ESC), is applied here to adjust the grids.

  13. Coding assignments of the genome of adult diarrhea rotavirus.

    PubMed

    Fang, Z Y; Monroe, S S; Dong, H; Penaranda, M; Wen, L; Gouvea, V; Allen, J R; Hung, T; Glass, R I

    1992-01-01

    Adult diarrhea rotavirus (ADRV) has caused epidemics of diarrhea in China since 1982 and remains the only group B rotavirus associated with widespread disease in humans. We recently characterized the proteins of ADRV and have now proceeded to identify the gene segments encoding each protein. Viral RNA transcripts were synthesized in vitro with the endogenous viral RNA polymerase and separated by electrophoresis in agarose. The individual transcripts were translated in a cell-free system using nuclease-treated rabbit reticulocyte lysates. The translation products were compared with polypeptides found in purified virus and were characterized by SDS-PAGE, immunoprecipitation, and Western blot analysis using antisera to double- and single-shelled virions, virus cores, and monoclonal antibodies. Furthermore, individual RNA transcripts were hybridized to total dsRNA to determine their genomic origin. Based on this analysis, the core polypeptides VP1, VP2 and VP3 are encoded by segments 1, 2, and 3, respectively. The main polypeptides in the inner capsid, VP6, and the outer capsid, VP4 and VP7, are encoded by segments 6, 4, and 8 respectively. Segments 5, 7, and 9 code for 60, 45, and 30 kDa nonstructural polypeptides. Two other nonstructural polypeptides (24 and 25 kDa) are derived from gene segment 11. Gene segment 10 codes for a 26 kDa polypeptide that is precipitated with serum to ADRV and may be a structural protein VP9. With this exception, gene coding assignments of ADRV are comparable to those of the group A rotaviruses. Our results have clear implications for further work in cloning, sequencing, and expression genes of ADRV and can provide direction towards understanding the origin and the evolution of this virus. PMID:1322659

  14. The ATLAS PanDA Monitoring System and its Evolution

    NASA Astrophysics Data System (ADS)

    Klimentov, A.; Nevski, P.; Potekhin, M.; Wenaus, T.

    2011-12-01

    The PanDA (Production and Distributed Analysis) Workload Management System is used for ATLAS distributed production and analysis worldwide. The needs of ATLAS global computing imposed challenging requirements on the design of PanDA in areas such as scalability, robustness, automation, diagnostics, and usability for both production shifters and analysis users. Through a system-wide job database, the PanDA monitor provides a comprehensive and coherent view of the system and job execution, from high level summaries to detailed drill-down job diagnostics. It is (like the rest of PanDA) an Apache-based Python application backed by Oracle. The presentation layer is HTML code generated on the fly in the Python application which is also responsible for managing database queries. However, this approach is lacking in user interface flexibility, simplicity of communication with external systems, and ease of maintenance. A decision was therefore made to migrate the PanDA monitor server to Django Web Application Framework and apply JSON/AJAX technology in the browser front end. This allows us to greatly reduce the amount of application code, separate data preparation from presentation, leverage open source for tools such as authentication and authorization mechanisms, and provide a richer and more dynamic user experience. We describe our approach, design and initial experience with the migration process.

  15. Rotating-Pump Design Code

    NASA Technical Reports Server (NTRS)

    Walker, James F.; Chen, Shu-Cheng; Scheer, Dean D.

    2006-01-01

    Pump Design (PUMPDES) is a computer program for designing a rotating pump for liquid hydrogen, liquid oxygen, liquid nitrogen, water, methane, or ethane. Using realistic properties of these fluids provided by another program called GASPAK, this code performs a station-by-station, mean-line analysis along the pump flow path, obtaining thermodynamic properties of the pumped fluid at each station and evaluating hydraulic losses along the flow path. The variables at each station are obtained under constraints that are consistent with the underlying physical principles. The code evaluates the performance of each stage and the overall pump. In addition, by judiciously choosing the givens and the unknowns, the code can perform a geometric inverse design function: that is, it can compute a pump geometry that yields a close approximation to a given design point. The code contains two major parts: one for an axial-rotor/inducer and one for a multistage centrifugal pump. The inducer and the centrifugal pump are functionally integrated. The code can be used in designing and/or evaluating the inducer/centrifugal-pump combination or the centrifugal pump alone. The code is written in standard Fortran 77.
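
    At each rotor, a mean-line analysis of this kind rests on the Euler pump equation for the ideal head rise; a sketch with illustrative numbers (assumed for this example, not taken from PUMPDES):

```python
# Euler pump equation: ideal (loss-free) head rise across a rotor,
# H = (U2*Cu2 - U1*Cu1) / g, where U is blade speed and Cu the
# tangential (swirl) component of the absolute fluid velocity.
G = 9.81  # gravitational acceleration, m/s^2

def euler_head(u1, cu1, u2, cu2):
    """Ideal head rise (m) from inlet (1) to outlet (2) of a rotor."""
    return (u2 * cu2 - u1 * cu1) / G

# Axial inflow with no pre-swirl (cu1 = 0) into a centrifugal stage:
H = euler_head(u1=15.0, cu1=0.0, u2=60.0, cu2=45.0)
print(f"ideal head rise: {H:.1f} m")
```

A station-by-station code applies this relation per stage and then subtracts the hydraulic losses it has evaluated along the flow path to obtain the actual head.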

  16. Spaceflight Validation of Hzetrn Code

    NASA Technical Reports Server (NTRS)

    Wilson, J. W.; Shinn, J. L.; Singleterry, R. C.; Badavi, F. F.; Badhwar, G. D.; Reitz, G.; Beaujean, R.; Cucinotta, F. A.

    1999-01-01

    HZETRN is being developed as a fast deterministic radiation transport code applicable to neutrons, protons, and multiply charged ions in the space environment. It was recently applied to 50 hours of IMP8 data measured during the August 4, 1972 solar event to map the hourly exposures within the human body under several shield configurations. This calculation required only 18 hours on a VAX 4000 machine. A similar calculation using the Monte Carlo method would have required two years of dedicated computer time. The code has been benchmarked against well documented and tested Monte Carlo proton transport codes with good success. The code will allow important trade studies to be made with relative ease due to the computational speed and will be useful in assessing design alternatives in an integrated system software environment. Since there are no well tested Monte Carlo codes for HZE particles, we have been engaged in flight validation of the HZETRN results. To date we have made comparisons with TEPC, CR-39, charge particle telescopes, and Bonner spheres. This broad range of detectors allows us to test a number of functions related to the differing physical processes that contribute to the complicated radiation fields within a spacecraft or the human body; these functions can be calculated by the HZETRN code system. In the present report we review these results.

  17. Sensorimotor transformation via sparse coding

    PubMed Central

    Takiyama, Ken

    2015-01-01

    Sensorimotor transformation is indispensable to the accurate motion of the human body in daily life. For instance, when we grasp an object, the distance from our hands to an object needs to be calculated by integrating multisensory inputs, and our motor system needs to appropriately activate the arm and hand muscles to minimize the distance. The sensorimotor transformation is implemented in our neural systems, and recent advances in measurement techniques have revealed an important property of neural systems: a small percentage of neurons exhibits extensive activity while a large percentage shows little activity, i.e., sparse coding. However, we do not yet know the functional role of sparse coding in sensorimotor transformation. In this paper, I show that sparse coding enables complete and robust learning in sensorimotor transformation. In general, if a neural network is trained to maximize the performance on training data, the network shows poor performance on test data. Nevertheless, sparse coding renders compatible the performance of the network on both training and test data. Furthermore, sparse coding can reproduce reported neural activities. Thus, I conclude that sparse coding is necessary and a biologically plausible factor in sensorimotor transformation. PMID:25923980

  18. The Proteomic Code: a molecular recognition code for proteins

    PubMed Central

    Biro, Jan C

    2007-01-01

    Background The Proteomic Code is a set of rules by which information in genetic material is transferred into the physico-chemical properties of amino acids. It determines how individual amino acids interact with each other during folding and in specific protein-protein interactions. The Proteomic Code is part of the redundant Genetic Code. Review The 25-year-old history of this concept is reviewed from the first independent suggestions by Biro and Mekler, through the works of Blalock, Root-Bernstein, Siemion, Miller and others, followed by the discovery of a Common Periodic Table of Codons and Nucleic Acids in 2003 and culminating in the recent conceptualization of partial complementary coding of interacting amino acids as well as the theory of the nucleic acid-assisted protein folding. Methods and conclusions A novel cloning method for the design and production of specific, high-affinity-reacting proteins (SHARP) is presented. This method is based on the concept of proteomic codes and is suitable for large-scale, industrial production of specifically interacting peptides. PMID:17999762

  19. Cleanup MAC and MBA code ATP

    SciTech Connect

    Russell, V.K.

    1994-10-17

    The K Basins Materials Accounting (MAC) and Material Balance (MBA) database system underwent some minor code cleanup. This ATP describes how the code was to be tested to verify its correctness.

  20. Entanglement-assisted codeword stabilized quantum codes

    SciTech Connect

    Shin, Jeonghwan; Heo, Jun; Brun, Todd A.

    2011-12-15

    Entangled qubits can increase the capacity of quantum error-correcting codes based on stabilizer codes. In addition, by using entanglement, quantum stabilizer codes can be constructed from classical linear codes that do not satisfy the dual-containing constraint. We show that it is possible to construct both additive and nonadditive quantum codes using the codeword stabilized quantum code framework. Nonadditive codes may offer improved performance over the more common stabilizer codes. Like other entanglement-assisted codes, the encoding procedure acts only on the qubits on Alice's side, and only these qubits are assumed to pass through the channel. However, errors in the codeword stabilized quantum code framework give rise to effective Z errors on Bob's side. We use this scheme to construct entanglement-assisted nonadditive quantum codes, in particular, ((5,16,2;1)) and ((7,4,5;4)) codes.

  1. Number of minimum-weight code words in a product code

    NASA Technical Reports Server (NTRS)

    Miller, R. L.

    1978-01-01

    Consideration is given to the number of minimum-weight code words in a product code. The code is considered as a tensor product of linear codes over a finite field. Complete theorems and proofs are presented.
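The counting problem can be checked by brute force on a small binary example (an illustration of the setting, not the paper's field-general proofs): for the tensor product of the [3,1] repetition code (d1 = 3, one minimum-weight word) and the [3,2] even-parity code (d2 = 2, three minimum-weight words), the product code should have minimum distance d1·d2 = 6 attained by exactly 1·3 = 3 codewords.

```python
from itertools import product

G1 = [[1, 1, 1]]                  # [3,1] repetition code, d1 = 3, A1 = 1
G2 = [[1, 0, 1], [0, 1, 1]]       # [3,2] even-parity code, d2 = 2, A2 = 3

k1, n1 = len(G1), len(G1[0])
k2, n2 = len(G2), len(G2[0])

def codeword(M):
    """Tensor-product codeword C = G1^T * M * G2 (mod 2), for a k1 x k2 message M."""
    T = [[sum(M[i][k] * G2[k][j] for k in range(k2)) % 2 for j in range(n2)]
         for i in range(k1)]                          # T = M * G2
    return [[sum(G1[i][r] * T[i][j] for i in range(k1)) % 2 for j in range(n2)]
            for r in range(n1)]                       # C = G1^T * T

# Enumerate all 2^(k1*k2) messages and tally codeword weights.
counts = {}
for bits in product((0, 1), repeat=k1 * k2):
    M = [list(bits[i * k2:(i + 1) * k2]) for i in range(k1)]
    w = sum(sum(row) for row in codeword(M))
    counts[w] = counts.get(w, 0) + 1

dmin = min(w for w in counts if w > 0)
assert dmin == 6 and counts[dmin] == 3   # d1*d2 = 6, A1*A2 = 3
```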

  2. System Design Description for the TMAD Code

    SciTech Connect

    Finfrock, S.H.

    1995-09-28

    This document serves as the System Design Description (SDD) for the TMAD Code System, which includes the TMAD code and the LIBMAKR code. The SDD provides a detailed description of the theory behind the code, and the implementation of that theory. It is essential for anyone who is attempting to review or modify the code or who otherwise needs to understand the internal workings of the code. In addition, this document includes, in Appendix A, the System Requirements Specification for the TMAD System.

  3. NASA Rotor 37 CFD Code Validation: Glenn-HT Code

    NASA Technical Reports Server (NTRS)

    Ameri, Ali A.

    2010-01-01

    In order to advance the goals of NASA aeronautics programs, it is necessary to continuously evaluate and improve the computational tools used for research and design at NASA. One such code is the Glenn-HT code which is used at NASA Glenn Research Center (GRC) for turbomachinery computations. Although the code has been thoroughly validated for turbine heat transfer computations, it has not been utilized for compressors. In this work, Glenn-HT was used to compute the flow in a transonic compressor and comparisons were made to experimental data. The results presented here are in good agreement with this data. Most of the measures of performance are well within the measurement uncertainties and the exit profiles of interest agree with the experimental measurements.

  4. Arithmetic algorithms for error-coded operands.

    NASA Technical Reports Server (NTRS)

    Avizienis, A.

    1973-01-01

    Arithmetic algorithms for separate and nonseparate codes are considered. The nonseparate AN code is formed when an uncoded operand X is multiplied by the check modulus A to give the coded operand AX. The separate codes are the residue code, and the inverse-residue code, which has significant advantages in fault detection of repeated-use faults. A set of algorithms for low-cost AN-coded operands is discussed together with questions of their implementation in a byte-organized arithmetic processor. Algorithms for inverse-residue coded operands of the STAR computer are also examined.
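The nonseparate AN-code arithmetic can be sketched in a few lines (a generic illustration with check modulus A = 3, not Avizienis's byte-organized processor): addition preserves the code, and any fault that shifts a coded value by a non-multiple of A is detected by a residue check.

```python
A = 3  # check modulus

def encode(x):
    """Nonseparate AN code: the coded operand is A*x."""
    return A * x

def check(coded):
    """A zero residue modulo A means no detected error."""
    return coded % A == 0

cx, cy = encode(5), encode(7)
s = cx + cy                       # A*x + A*y = A*(x + y): addition preserves the code
assert check(s) and s // A == 12  # correct sum decodes to 5 + 7
assert not check(s + 1)           # a corruption by a non-multiple of A is detected
```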

  5. MOSES code Version 1. 0

    SciTech Connect

Wilkinson, K.J.; Summers, K.V.; Mills, W.B.; Gherini, S.A.

    1991-04-01

The MOSES code Version 1.0 is an interactive, menu-driven code for microcomputers. The code estimates the probability that spills of mineral oil or other fluids from substations or above ground storage tanks will or will not reach nearby surface waters. A range of spills can be simulated (i.e., from small leaks to failure of an entire above ground storage unit or the maximum volume of one unit). The code can also be used to help design substations by predicting the effect of changes in substation features (e.g., increasing the size of the on-site gravel bed or adding additional on-site sumps). Output of the model includes the probability that a spill will remain on-site or reach surface water and graphs showing selected characteristics of spills which reached surface water (e.g., spill size, volume reaching surface water, and distance to river). The distribution of the spill sizes contained on-site can also be plotted. The processes considered in the simulation include on-site and off-site storage, infiltration, evaporation, off-site retention, transport by overland flow, and the effect of rainfall (i.e., decrease in available storage and increase in transport by surface water runoff). The code uses a Monte Carlo approach for selecting input parameter values from user-defined ranges. The values used in the simulations for the cases where mineral oil reaches surface water can be saved as output files for later review. The code operates on IBM-compatible microcomputers equipped with a color monitor and an EGA or VGA video card, at least 640K memory, and a math coprocessor. The code is distributed in an already compiled/linked form and is available on one 5-1/4 in. high density diskette, on three 5-1/4 in. double density diskettes, or on one 3-1/2 in. high density diskette. This manual includes instructions for installing and using the code, example applications, methods for estimating the input data, and the theoretical basis for the code. 4 refs., 19 figs., 3 tabs.
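The Monte Carlo approach described above can be sketched as follows; every parameter name and range here is hypothetical, chosen for illustration, not taken from MOSES:

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def simulate(trials=10_000):
    """Estimate the probability that a spill overflows on-site containment."""
    reached = 0
    for _ in range(trials):
        spill = random.uniform(0, 5000)        # spill volume, gallons (hypothetical range)
        storage = random.uniform(500, 3000)    # on-site storage: sump plus gravel bed
        infiltration = random.uniform(0, 500)  # volume lost to infiltration
        if spill > storage + infiltration:     # excess volume leaves the site
            reached += 1
    return reached / trials

p = simulate()
assert 0.0 < p < 1.0   # a fraction of sampled spills reaches surface water
```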

  6. International assessment of PCA codes

    SciTech Connect

    Neymotin, L.; Lui, C.; Glynn, J.; Archarya, S.

    1993-11-01

    Over the past three years (1991-1993), an extensive international exercise for intercomparison of a group of six Probabilistic Consequence Assessment (PCA) codes was undertaken. The exercise was jointly sponsored by the Commission of European Communities (CEC) and OECD Nuclear Energy Agency. This exercise was a logical continuation of a similar effort undertaken by OECD/NEA/CSNI in 1979-1981. The PCA codes are currently used by different countries for predicting radiological health and economic consequences of severe accidents at nuclear power plants (and certain types of non-reactor nuclear facilities) resulting in releases of radioactive materials into the atmosphere. The codes participating in the exercise were: ARANO (Finland), CONDOR (UK), COSYMA (CEC), LENA (Sweden), MACCS (USA), and OSCAAR (Japan). In parallel with this inter-code comparison effort, two separate groups performed a similar set of calculations using two of the participating codes, MACCS and COSYMA. Results of the intercode and inter-MACCS comparisons are presented in this paper. The MACCS group included four participants: GREECE: Institute of Nuclear Technology and Radiation Protection, NCSR Demokritos; ITALY: ENEL, ENEA/DISP, and ENEA/NUC-RIN; SPAIN: Universidad Politecnica de Madrid (UPM) and Consejo de Seguridad Nuclear; USA: Brookhaven National Laboratory, US NRC and DOE.

  7. Driver Code for Adaptive Optics

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2007-01-01

A special-purpose computer code for a deformable-mirror adaptive-optics control system transmits pixel-registered control from (1) a personal computer running software that generates the control data to (2) a circuit board with 128 digital-to-analog converters (DACs) that generate voltages to drive the deformable-mirror actuators. This program reads control-voltage codes from a text file, then sends them, via the computer's parallel port, to a circuit board with four AD5535 (or equivalent) chips. Whereas a similar prior computer program was capable of transmitting data to only one chip at a time, this program can send data to four chips simultaneously. This program is in the form of C-language code that can be compiled and linked into an adaptive-optics software system. The program as supplied includes source code for integration into the adaptive-optics software, documentation, and a component that provides a demonstration of loading DAC codes from a text file. On a standard Windows desktop computer, the software can update 128 channels in 10 ms. On Real-Time Linux with a digital I/O card, the software can update 1024 channels (8 boards in parallel) every 8 ms.

  8. AEST: Adaptive Eigenvalue Stability Code

    NASA Astrophysics Data System (ADS)

    Zheng, L.-J.; Kotschenreuther, M.; Waelbroeck, F.; van Dam, J. W.; Berk, H.

    2002-11-01

An adaptive eigenvalue linear stability code is developed. The aim is, on one hand, to include non-ideal MHD effects in the global MHD stability calculation for both low- and high-n modes and, on the other hand, to resolve the numerical difficulty posed by the MHD singularity on rational surfaces at marginal stability. Our code follows, in part, the philosophy of DCON by abandoning relaxation methods based on radial finite-element expansion in favor of an efficient shooting procedure with adaptive gridding. The δW criterion is replaced by the shooting procedure and a subsequent matrix eigenvalue problem. Since the technique of expanding a general solution into a summation of independent solutions is employed, the rank of the matrices involved is only a few hundred. This makes it easier to solve the eigenvalue problem with non-ideal MHD effects, such as FLR or even full kinetic effects, as well as plasma rotation, taken into account. To include kinetic effects, the approach of solving for the distribution function as a local eigenvalue (ω) problem, as in the GS2 code, will be employed in the future. Comparison of the ideal MHD version of the code with DCON, PEST, and GATO will be discussed. The non-ideal MHD version of the code will be employed to study, as an application, transport-barrier physics in tokamak discharges.
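The shooting idea can be illustrated on a model eigenproblem (a generic sketch, unrelated to AEST's actual MHD equations): integrate -y'' = λy with y(0) = 0, y'(0) = 1 across the domain and adjust λ until the boundary condition y(1) = 0 is satisfied. The exact eigenvalues are (nπ)².

```python
import math

def y_end(lam, n=400):
    """RK4 integration of y' = v, v' = -lam*y over [0, 1]; returns y(1)."""
    h = 1.0 / n
    y, v = 0.0, 1.0
    for _ in range(n):
        k1y, k1v = v, -lam * y
        k2y, k2v = v + h/2*k1v, -lam * (y + h/2*k1y)
        k3y, k3v = v + h/2*k2v, -lam * (y + h/2*k2y)
        k4y, k4v = v + h*k3v, -lam * (y + h*k3y)
        y += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        v += h/6 * (k1v + 2*k2v + 2*k3v + k4v)
    return y

# Bisection on lam: y(1) changes sign between lam = 5 and lam = 15.
lo, hi = 5.0, 15.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if y_end(lo) * y_end(mid) <= 0:
        hi = mid
    else:
        lo = mid
lam = 0.5 * (lo + hi)
assert abs(lam - math.pi ** 2) < 1e-4   # fundamental eigenvalue pi^2
```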

  9. A genetic scale of reading frame coding.

    PubMed

    Michel, Christian J

    2014-08-21

The reading frame coding (RFC) of codes (sets) of trinucleotides is a genetic concept which has been largely ignored during the last 50 years. A first objective is the definition of a new and simple statistical parameter PrRFC for analysing the probability (efficiency) of reading frame coding (RFC) of any trinucleotide code. A second objective is to reveal different classes and subclasses of trinucleotide codes involved in reading frame coding: the circular codes of 20 trinucleotides and the bijective genetic codes of 20 trinucleotides coding the 20 amino acids. This approach allows us to propose a genetic scale of reading frame coding which ranges from 1/3 with the random codes (RFC probability identical in the three frames) to 1 with the comma-free circular codes (RFC probability maximal in the reading frame and null in the two shifted frames). This genetic scale shows, in particular, the reading frame coding probabilities of the 12,964,440 circular codes (PrRFC=83.2% on average), the 216 C(3) self-complementary circular codes (PrRFC=84.1% on average) including the code X identified in eukaryotic and prokaryotic genes (PrRFC=81.3%) and the 339,738,624 bijective genetic codes (PrRFC=61.5% on average) including the 52 codes without permuted trinucleotides (PrRFC=66.0% on average). Otherwise, the reading frame coding probabilities of each trinucleotide code coding an amino acid with the universal genetic code are also determined. The four amino acids Gly, Lys, Phe and Pro are coded by codes (not circular) with RFC probabilities equal to 2/3, 1/2, 1/2 and 2/3, respectively. The amino acid Leu is coded by a circular code (not comma-free) with an RFC probability equal to 18/19. The 15 other amino acids are coded by comma-free circular codes, i.e. with RFC probabilities equal to 1. The identification of coding properties in some classes of trinucleotide codes studied here may bring new insights into the origin and evolution of the genetic code. PMID:24698943
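The comma-free property that yields an RFC probability of 1 is easy to test directly: a codon set X is comma-free if no codeword of X ever appears in a shifted frame of any two-codeword concatenation. A short sketch (the codon sets here are illustrative, not the codes studied in the paper):

```python
from itertools import product

def is_comma_free(X):
    """True if no codeword of X appears in frame 1 or 2 of any pair a+b."""
    for a, b in product(X, repeat=2):
        w = a + b
        if w[1:4] in X or w[2:5] in X:
            return False
    return True

assert is_comma_free({"AAT", "ATT"})       # every shifted read falls outside X
assert not is_comma_free({"AAA"})          # AAA|AAA reads AAA in every frame
assert not is_comma_free({"ACG", "CGA"})   # ACG|ACG reads CGA in frame 1
```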

  10. Constructions for finite-state codes

    NASA Technical Reports Server (NTRS)

    Pollara, F.; Mceliece, R. J.; Abdel-Ghaffar, K.

    1987-01-01

A class of codes called finite-state (FS) codes is defined and investigated. These codes, which generalize both block and convolutional codes, are defined by their encoders, which are finite-state machines with parallel inputs and outputs. A family of upper bounds on the free distance of a given FS code is derived from known upper bounds on the minimum distance of block codes. A general construction for FS codes is then given, based on the idea of partitioning a given linear block code into cosets of one of its subcodes, and it is shown that in many cases the FS codes constructed in this way have a free distance d_free which is as large as possible. These codes are found without the need for lengthy computer searches, and have potential applications for future deep-space coding systems. The issue of catastrophic error propagation (CEP) for FS codes is also investigated.
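The coset-partition step of the construction can be made concrete with a toy binary example (our own illustration; the paper works with larger codes): the [4,3] even-parity code C splits into |C|/|S| = 4 disjoint cosets of a subcode S.

```python
def span(gens):
    """All GF(2) linear combinations of the generator tuples."""
    words = {(0,) * len(gens[0])}
    for g in gens:
        words |= {tuple((w[i] + g[i]) % 2 for i in range(len(g))) for w in words}
    return words

C = span([(1, 1, 0, 0), (0, 1, 1, 0), (0, 0, 1, 1)])  # [4,3] even-parity code
S = span([(1, 1, 1, 1)])                              # subcode spanned by all-ones

# Translate S by each codeword of C; distinct translates are the cosets.
cosets = {frozenset(tuple((c[i] + s[i]) % 2 for i in range(4)) for s in S)
          for c in C}
assert len(C) == 8 and len(S) == 2 and len(cosets) == 4
assert set().union(*cosets) == C      # the cosets partition C
```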

  11. Some partial-unit-memory convolutional codes

    NASA Technical Reports Server (NTRS)

    Abdel-Ghaffar, K.; Mceliece, R. J.; Solomon, G.

    1991-01-01

The results of a study on a class of error correcting codes called partial unit memory (PUM) codes are presented. This class of codes, though not entirely new, has until now remained relatively unexplored. The possibility of using the well developed theory of block codes to construct a large family of promising PUM codes is shown. The performance of several specific PUM codes is compared with that of the Voyager standard (2, 1, 6) convolutional code. It was found that these codes can outperform the Voyager code with little or no increase in decoder complexity. This suggests that there may very well be PUM codes that can be used for deep space telemetry that offer both increased performance and decreased implementational complexity over current coding systems.

  12. Image coding of SAR imagery

    NASA Technical Reports Server (NTRS)

    Chang, C. Y.; Kwok, R.; Curlander, J. C.

    1987-01-01

    Five coding techniques in the spatial and transform domains have been evaluated for SAR image compression: linear three-point predictor (LTPP), block truncation coding (BTC), microadaptive picture sequencing (MAPS), adaptive discrete cosine transform (ADCT), and adaptive Hadamard transform (AHT). These techniques have been tested with Seasat data. Both LTPP and BTC spatial domain coding techniques provide very good performance at rates of 1-2 bits/pixel. The two transform techniques, ADCT and AHT, demonstrate the capability to compress the SAR imagery to less than 0.5 bits/pixel without visible artifacts. Tradeoffs such as the rate distortion performance, the computational complexity, the algorithm flexibility, and the controllability of compression ratios are also discussed.
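Of the techniques listed, block truncation coding is simple enough to sketch in a few lines (a generic one-block BTC illustration, not the authors' Seasat pipeline): each block is reduced to its mean, standard deviation, and a 1-bit-per-pixel mask, and reconstructed with two levels chosen to preserve those first two moments.

```python
import math

def btc_encode(block):
    """Encode one block as (bit mask, low level, high level)."""
    n = len(block)
    mean = sum(block) / n
    std = math.sqrt(sum((p - mean) ** 2 for p in block) / n)
    mask = [1 if p >= mean else 0 for p in block]
    q = sum(mask)                       # number of pixels at or above the mean
    if q in (0, n):
        return mask, mean, mean         # flat block: a single level suffices
    lo = mean - std * math.sqrt(q / (n - q))
    hi = mean + std * math.sqrt((n - q) / q)
    return mask, lo, hi

def btc_decode(mask, lo, hi):
    return [hi if b else lo for b in mask]

block = [12, 14, 200, 13, 210, 15, 205, 11, 13, 198, 12, 207, 14, 13, 202, 12]
mask, lo, hi = btc_encode(block)
rec = btc_decode(mask, lo, hi)
# The block mean is preserved exactly by construction of lo and hi.
assert abs(sum(rec) / len(rec) - sum(block) / len(block)) < 1e-9
```

At one bit per pixel plus two levels per 4x4 block, this lands in the 1-2 bits/pixel regime the abstract reports for the spatial-domain coders.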

  13. Code-multiplexed optical scanner

    NASA Astrophysics Data System (ADS)

    Riza, Nabeel A.; Arain, Muzammil A.

    2003-03-01

A three-dimensional (3-D) optical-scanning technique is proposed based on spatial optical phase code activation on an input beam. This code-multiplexed optical scanner (C-MOS) relies on holographically stored 3-D beam-forming information. Proof-of-concept C-MOS experimental results, obtained by use of a photorefractive crystal as the holographic medium, demonstrate eight beams representing a basic 3-D voxel element generated via a binary-code matrix of the Hadamard type. The experiment demonstrates the C-MOS features of no moving parts, beam-forming flexibility, and large centimeter-size apertures. A novel application of the C-MOS as an optical security lock is highlighted.

  14. ASME Code Efforts Supporting HTGRs

    SciTech Connect

    D.K. Morton

    2012-09-01

In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  15. FLOWTRAN-TF code description

    SciTech Connect

    Flach, G.P.

    1991-09-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss of Coolant Accident (LOCA). This report provides a brief description of the physical models in the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit. This document is viewed as an interim report and should ultimately be superseded by a comprehensive user/programmer manual. In general, only high level discussions of governing equations and constitutive laws are presented. Numerical implementation of these models, code architecture and user information are not generally covered. A companion document describing code benchmarking is available.

  16. A coded tracking telemetry system

    USGS Publications Warehouse

    Howey, P.W.; Seegar, W.S.; Fuller, M.R.; Titus, K.

    1989-01-01

We describe the general characteristics of an automated radio telemetry system designed to operate for prolonged periods on a single frequency. Each transmitter sends a unique coded signal to a receiving system that decodes and records only the appropriate, pre-programmed codes. A record of the time of each reception is stored on diskettes in a microcomputer. This system enables continuous monitoring of infrequent signals (e.g., one per minute or one per hour), thus extending operating life or allowing size reduction of the transmitter compared to conventional wildlife telemetry. Furthermore, when using unique codes transmitted on a single frequency, biologists can monitor many individuals without exceeding the radio frequency allocations for wildlife.
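The receiver-side filtering this describes can be sketched in a few lines (a toy illustration; the ID codes and timestamps below are made up): many transmitters share one frequency, and the receiver timestamps only pre-programmed codes.

```python
# Pre-programmed transmitter codes loaded before deployment (hypothetical values).
REGISTERED = {0b101101, 0b110010}

log = []  # timestamped receptions, as the system stores on diskette

def on_reception(code, timestamp):
    """Record a reception only if the decoded ID is one being tracked."""
    if code in REGISTERED:
        log.append((timestamp, code))

on_reception(0b101101, 12.0)   # tracked transmitter: logged
on_reception(0b111111, 12.5)   # unknown transmitter on the same frequency: ignored
on_reception(0b110010, 13.0)
assert log == [(12.0, 0b101101), (13.0, 0b110010)]
```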

  17. Pulse code modulated signal synchronizer

    NASA Technical Reports Server (NTRS)

    Kobayashi, H. S. (Inventor)

    1974-01-01

    A bit synchronizer for a split phase PCM transmission is reported that includes three loop circuits which receive incoming phase coded PCM signals. In the first loop, called a Q-loop, a generated, phase coded, PCM signal is multiplied with the incoming signals, and the frequency and phase of the generated signal are nulled to that of the incoming subcarrier signal. In the second loop, called a B-loop, a circuit multiplies a generated signal with incoming signals to null the phase of the generated signal in a bit phase locked relationship to the incoming signal. In a third loop, called the I-loop, a phase coded PCM signal is multiplied with the incoming signals for decoding the bit information from the PCM signal. A counter means is used for timing of the generated signals and timing of sample intervals for each bit period.

  18. ASME Code Efforts Supporting HTGRs

    SciTech Connect

    D.K. Morton

    2011-09-01

In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  19. ASME Code Efforts Supporting HTGRs

    SciTech Connect

    D.K. Morton

    2010-09-01

In 1999, an international collaborative initiative for the development of advanced (Generation IV) reactors was started. The idea behind this effort was to bring nuclear energy closer to the needs of sustainability, to increase proliferation resistance, and to support concepts able to produce energy (both electricity and process heat) at competitive costs. The U.S. Department of Energy has supported this effort by pursuing the development of the Next Generation Nuclear Plant, a high temperature gas-cooled reactor. This support has included research and development of pertinent data, initial regulatory discussions, and engineering support of various codes and standards development. This report discusses the various applicable American Society of Mechanical Engineers (ASME) codes and standards that are being developed to support these high temperature gas-cooled reactors during construction and operation. ASME is aggressively pursuing these codes and standards to support an international effort to build the next generation of advanced reactors so that all can benefit.

  20. Verification of FANTASTIC integrated code

    NASA Technical Reports Server (NTRS)

    Chauhan, Rajinder Singh

    1987-01-01

    FANTASTIC is an acronym for Failure Analysis Nonlinear Thermal and Structural Integrated Code. This program was developed by Failure Analysis Associates, Palo Alto, Calif., for MSFC to improve the accuracy of solid rocket motor nozzle analysis. FANTASTIC has three modules: FACT - thermochemical analysis; FAHT - heat transfer analysis; and FAST - structural analysis. All modules have keywords for data input. Work is in progress for the verification of the FAHT module, which is done by using data for various problems with known solutions as inputs to the FAHT module. The information obtained is used to identify problem areas of the code and passed on to the developer for debugging purposes. Failure Analysis Associates have revised the first version of the FANTASTIC code and a new improved version has been released to the Thermal Systems Branch.

  1. Prediction matching for video coding

    NASA Astrophysics Data System (ADS)

    Zheng, Yunfei; Yin, Peng; Divorra Escoda, Òscar; Solé, Joel; Gomila, Cristina

    2010-01-01

Modern video coding schemes such as H.264/AVC employ multi-hypothesis motion compensation for improved coding efficiency. However, an additional cost has to be paid for the improved prediction performance in these schemes. Based on the observed high correlation among the multiple hypotheses in H.264/AVC, in this paper we propose a new method (Prediction Matching) to jointly combine explicit and implicit prediction approaches. The first motion hypothesis on a predicted block is explicitly coded, while any additional hypotheses are implicitly derived at the decoder based on the first one and the available data from previously decoded frames. Thus, the overhead to indicate motion information is reduced, while prediction accuracy may be better with respect to fully implicit multi-hypothesis prediction. Proof-of-concept simulation results show that up to 7.06% bitrate saving with respect to state-of-the-art H.264/AVC can be achieved using our Prediction Matching.

  2. FLOWTRAN-TF code description

    SciTech Connect

    Flach, G.P.

    1990-12-01

    FLOWTRAN-TF is a two-component (air-water), two-phase thermal-hydraulics code designed for performing accident analyses of SRS reactor fuel assemblies during the Emergency Cooling System (ECS) phase of a Double Ended Guillotine Break (DEGB) Loss of Coolant Accident (LOCA). This report provides a brief description of the physical models in the version of FLOWTRAN-TF used to compute the Recommended K-Reactor Restart ECS Power Limit. This document is viewed as an interim report and should ultimately be superseded by a comprehensive user/programmer manual. In general, only high level discussions of governing equations and constitutive laws are presented. Numerical implementation of these models, code architecture and user information are not generally covered. A companion document describing code benchmarking is available.

  3. Experimental evaluation of photoacoustic coded excitation using unipolar Golay codes.

    PubMed

    Mienkina, Martin P; Friedrich, Claus-Stefan; Gerhardt, Nils C; Wilkening, Wilko G; Hofmann, Martin R; Schmitz, Georg

    2010-07-01

    Q-switched Nd:YAG lasers are commonly used as light sources for photoacoustic imaging. However, laser diodes are attractive as an alternative to Nd:YAG lasers because they are less expensive and more compact. Although laser diodes deliver about three orders of magnitude less light pulse energy than Nd:YAG lasers (tens of microjoules compared with tens of millijoules), their pulse repetition frequency (PRF) is four to five orders of magnitude higher (up to 1 MHz compared with tens of hertz); this enables the use of averaging to improve SNR without compromising the image acquisition rate. In photoacoustic imaging, the PRF is limited by the maximum acoustic time-of-flight. This limit can be overcome by using coded excitation schemes in which the coding eliminates ambiguities between echoes induced by subsequent pulses. To evaluate the benefits of photoacoustic coded excitation (PACE), the performance of unipolar Golay codes is investigated analytically and validated experimentally. PACE imaging of a copper slab using laser diodes at a PRF of 1 MHz and a modified clinical ultrasound scanner is successfully demonstrated. Considering laser safety regulations and taking into account a comparison between a laser diode system and Nd:YAG systems with respect to SNR, we conclude that PACE is feasible for small animal imaging. PMID:20639152
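The complementary-autocorrelation property behind Golay-coded excitation, and the unipolar split needed for intensity-only sources such as laser diodes, can be sketched as follows (a generic illustration, not the authors' exact scheme):

```python
def autocorr(x):
    """One-sided aperiodic autocorrelation of a sequence."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(n)]

a = [1, 1, 1, -1]          # a length-4 Golay complementary pair
b = [1, 1, -1, 1]
s = [ra + rb for ra, rb in zip(autocorr(a), autocorr(b))]
# The summed sidelobes cancel exactly: a delta of height 2N, zeros elsewhere.
assert s[0] == 2 * len(a) and all(v == 0 for v in s[1:])

# Unipolar variant: a laser diode can only emit "on", so transmit the
# positive and negative parts separately and recombine at the receiver.
a_pos = [(v + 1) // 2 for v in a]   # [1, 1, 1, 0]
a_neg = [(1 - v) // 2 for v in a]   # [0, 0, 0, 1]
assert [p - q for p, q in zip(a_pos, a_neg)] == a
```

The receiver compresses each of the two (or four, for the unipolar case) coded transmissions and sums the results, which removes range ambiguity between echoes of successive pulses.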

  4. Signal Processing Expert Code (SPEC)

    SciTech Connect

    Ames, H.S.

    1985-12-01

The purpose of this paper is to describe a prototype expert system called SPEC, which was developed to demonstrate the utility of providing an intelligent interface for users of SIG, a general-purpose signal processing code. The expert system is written in NIL, runs on a VAX 11/750, and consists of a backward-chaining inference engine and an English-like parser. The inference engine uses knowledge encoded as rules about the formats of SIG commands and about how to perform frequency analyses using SIG. The system demonstrated that an expert system can be used to control existing codes.

  5. Hybrid codes: Methods and applications

    SciTech Connect

Winske, D.; Omidi, N.

    1991-01-01

In this chapter we discuss "hybrid" algorithms used in the study of low-frequency electromagnetic phenomena, where one or more ion species are treated kinetically via standard PIC methods used in particle codes and the electrons are treated as a single charge-neutralizing massless fluid. Other types of hybrid models are possible, as discussed in Winske and Quest, but hybrid codes with particle ions and massless fluid electrons have become the most common for simulating space plasma physics phenomena in the last decade, as we discuss in this paper.

  6. Software for universal noiseless coding

    NASA Technical Reports Server (NTRS)

    Rice, R. F.; Schlutsmeyer, A. P.

    1981-01-01

    An overview is provided of the universal noiseless coding algorithms as well as their relationship to the now available FORTRAN implementations. It is suggested that readers considering investigating the utility of these algorithms for actual applications should consult both NASA's Computer Software Management and Information Center (COSMIC) and descriptions of coding techniques provided by Rice (1979). Examples of applying these techniques have also been given by Rice (1975, 1979, 1980). Attention is given to reversible preprocessing, general implementation instructions, naming conventions, and calling arguments. A general applicability of the considered algorithms to solving practical problems is obtained because most real data sources can be simply transformed into the required form by appropriate preprocessing.
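A minimal instance of this coding family (a sketch of the Rice/Golomb idea, not COSMIC's FORTRAN routines): each nonnegative integer is split by a parameter k into a unary-coded quotient and a k-bit remainder.

```python
def rice_encode(n, k):
    """Rice code of nonnegative integer n: unary quotient, '0', k-bit remainder."""
    q, r = n >> k, n & ((1 << k) - 1)
    rem = format(r, f"0{k}b") if k else ""
    return "1" * q + "0" + rem

def rice_decode(bits, k):
    """Inverse of rice_encode for a single codeword."""
    q = bits.index("0")                      # count leading ones
    r = int(bits[q + 1:], 2) if k else 0
    return (q << k) + r

# Round-trip check over a range of values and parameters.
for n in range(64):
    for k in range(4):
        assert rice_decode(rice_encode(n, k), k) == n
```

In the universal scheme, the coder tries several values of k per block of preprocessed samples and transmits the winning k as side information, which is what makes the method adapt to varying source statistics.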

  7. COLAcode: COmoving Lagrangian Acceleration code

    NASA Astrophysics Data System (ADS)

    Tassev, Svetlin V.

    2016-02-01

COLAcode is a serial particle mesh-based N-body code illustrating the COLA (COmoving Lagrangian Acceleration) method; it solves for Large Scale Structure (LSS) in a frame that is comoving with observers following trajectories calculated in Lagrangian Perturbation Theory (LPT). It differs from standard N-body codes by trading accuracy at small scales to gain computational speed without sacrificing accuracy at large scales. This is useful for generating the large ensembles of accurate mock halo catalogs required to study galaxy clustering and weak lensing; such catalogs are needed to perform detailed error analysis for ongoing and future surveys of LSS.

  8. Sensor Authentication: Embedded Processor Code

    SciTech Connect

    Svoboda, John

    2012-09-25

Described is the C code running on the embedded Microchip 32-bit PIC32MX575F256H located on the INL-developed noise analysis circuit board. The code performs the following functions: controls the noise analysis circuit board preamplifier voltage gains of 1, 10, 100, 000; initializes the analog-to-digital conversion hardware, input channel selection, Fast Fourier Transform (FFT) function, USB communications interface, and internal memory allocations; initiates high-resolution 4096-point 200 kHz data acquisition; computes the complex 2048-point FFT and FFT magnitude; services the host command set; transfers raw data to the host; transfers the FFT result to the host; and performs communication error checking.
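The FFT-magnitude step is the mathematical core of the firmware; a standard-library sketch of it (generic radix-2 code, not the PIC32 implementation) is:

```python
import cmath

def fft(x):
    """Recursive radix-2 decimation-in-time FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def fft_magnitude(x):
    return [abs(v) for v in fft(x)]

# A pure tone at bin 1 of an 8-point transform concentrates all energy there.
x = [cmath.exp(2j * cmath.pi * k / 8) for k in range(8)]
mags = fft_magnitude(x)
assert abs(mags[1] - 8) < 1e-9
assert all(m < 1e-9 for i, m in enumerate(mags) if i != 1)
```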

  9. Migration of ATLAS PanDA to CERN

    NASA Astrophysics Data System (ADS)

    Stewart, Graeme Andrew; Klimentov, Alexei; Koblitz, Birger; Lamanna, Massimo; Maeno, Tadashi; Nevski, Pavel; Nowak, Marcin; Emanuel De Castro Faria Salgado, Pedro; Wenaus, Torre

    2010-04-01

    The ATLAS Production and Distributed Analysis System (PanDA) is a key component of the ATLAS distributed computing infrastructure. All ATLAS production jobs, and a substantial amount of user and group analysis jobs, pass through the PanDA system, which manages their execution on the grid. PanDA also plays a key role in production task definition and the data set replication request system. PanDA has recently been migrated from Brookhaven National Laboratory (BNL) to the European Organization for Nuclear Research (CERN), a process we describe here. We discuss how the new infrastructure for PanDA, which relies heavily on services provided by CERN IT, was introduced in order to make the service as reliable as possible and to allow it to be scaled to ATLAS's increasing need for distributed computing. The migration involved changing the backend database for PanDA from MySQL to Oracle, which impacted upon the database schemas. The process by which the client code was optimised for the new database backend is discussed. We describe the procedure by which the new database infrastructure was tested and commissioned for production use. Operations during the migration had to be planned carefully to minimise disruption to ongoing ATLAS offline computing. All parts of the migration were fully tested before commissioning the new infrastructure and the gradual migration of computing resources to the new system allowed any problems of scaling to be addressed.

  10. Improving Security in the ATLAS PanDA System

    NASA Astrophysics Data System (ADS)

    Caballero, J.; Maeno, T.; Nilsson, P.; Stewart, G.; Potekhin, M.; Wenaus, T.

    2011-12-01

The security challenges faced by users of the grid are considerably different to those faced in previous environments. The adoption of pilot job systems by LHC experiments has mitigated many of the problems associated with the inhomogeneities found on the grid and has greatly improved job reliability; however, pilot job systems themselves must then address many security issues, including the execution of multiple users' code under a common 'grid' identity. In this paper we describe the improvements and evolution of the security model in the ATLAS PanDA (Production and Distributed Analysis) system. We describe the security in the PanDA server which is in place to ensure that only authorized members of the VO are allowed to submit work into the system and that jobs are properly audited and monitored. We discuss the security in place between the pilot code itself and the PanDA server, ensuring that only properly authenticated workload is delivered to the pilot for execution. When the code to be executed is from a 'normal' ATLAS user, as opposed to the production system or other privileged actor, then the pilot may use an EGEE-developed identity-switching tool called gLExec. This changes the grid proxy available to the job and also switches the UNIX user identity to protect the privileges of the pilot code proxy. We describe the problems in using this system and how they are overcome. Finally, we discuss security drills which have been run using PanDA and show how these improved our operational security procedures.

  11. On the Grammar of Code-Switching.

    ERIC Educational Resources Information Center

    Bhatt, Rakesh M.

    1996-01-01

    Explores an Optimality-Theoretic approach to account for observed cross-linguistic patterns of code switching that assumes that code switching strives for well-formedness. Optimization of well-formedness in code switching is shown to follow from (violable) ranked constraints. An argument is advanced that code-switching patterns emerge from…

  12. 21 CFR 106.90 - Coding.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 2 2010-04-01 2010-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD... of Infant Formulas § 106.90 Coding. The manufacturer shall code all infant formulas in conformity with the coding requirements that are applicable to thermally processed low-acid foods packaged...

  13. 21 CFR 106.90 - Coding.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 2 2013-04-01 2013-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD... of Infant Formulas § 106.90 Coding. The manufacturer shall code all infant formulas in conformity with the coding requirements that are applicable to thermally processed low-acid foods packaged...

  14. 21 CFR 106.90 - Coding.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 2 2014-04-01 2014-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD... Assuring Nutrient Content of Infant Formulas § 106.90 Coding. The manufacturer shall code all infant formulas in conformity with the coding requirements that are applicable to thermally processed...

  15. 21 CFR 106.90 - Coding.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 2 2012-04-01 2012-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD... of Infant Formulas § 106.90 Coding. The manufacturer shall code all infant formulas in conformity with the coding requirements that are applicable to thermally processed low-acid foods packaged...

  16. 21 CFR 106.90 - Coding.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 2 2011-04-01 2011-04-01 false Coding. 106.90 Section 106.90 Food and Drugs FOOD... of Infant Formulas § 106.90 Coding. The manufacturer shall code all infant formulas in conformity with the coding requirements that are applicable to thermally processed low-acid foods packaged...

  17. An Interactive Concatenated Turbo Coding System

    NASA Technical Reports Server (NTRS)

    Liu, Ye; Tang, Heng; Lin, Shu; Fossorier, Marc

    1999-01-01

    This paper presents a concatenated turbo coding system in which a Reed-Solomon outer code is concatenated with a binary turbo inner code. In the proposed system, the outer code decoder and the inner turbo code decoder interact to achieve both good bit error and frame error performances. The outer code decoder helps the inner turbo code decoder to terminate its decoding iteration while the inner turbo code decoder provides soft-output information to the outer code decoder to carry out a reliability-based soft-decision decoding. In the case that the outer code decoding fails, the outer code decoder instructs the inner code decoder to continue its decoding iterations until the outer code decoding is successful or a preset maximum number of decoding iterations is reached. This interaction between outer and inner code decoders reduces decoding delay. Also presented in the paper are an effective criterion for stopping the iteration process of the inner code decoder and a new reliability-based decoding algorithm for nonbinary codes.
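
    The interaction described above is essentially a control loop: run one inner iteration, let the outer decoder attempt a decode, and stop early on success. A minimal Python sketch of that loop follows; the stand-in decoders are illustrative toys, not the paper's actual turbo or Reed-Solomon decoders.

```python
def concatenated_decode(inner_step, outer_decode, max_iters=8):
    """Control-flow skeleton of the interactive scheme: one inner (turbo)
    iteration at a time, with the outer (Reed-Solomon) decoder deciding
    when to stop. Real soft-output decoders would be plugged in here."""
    for it in range(1, max_iters + 1):
        soft = inner_step()               # one more inner decoding iteration
        ok, decoded = outer_decode(soft)  # outer decoding attempt
        if ok:
            return decoded, it            # early termination saves delay
    return decoded, max_iters

# Toy stand-ins: an "inner decoder" whose output improves on each call,
# and an "outer decoder" that succeeds once the word is fully corrected.
target = [1, 0, 1, 1, 0, 1]
state = {"fixed": 0}

def inner_step():
    state["fixed"] = min(state["fixed"] + 2, len(target))
    return target[:state["fixed"]] + [0] * (len(target) - state["fixed"])

def outer_decode(word):
    return (word == target, word)

decoded, iters = concatenated_decode(inner_step, outer_decode)
```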

  18. Coded aperture spectroscopy with denoising through sparsity.

    PubMed

    Mrozack, Alex; Marks, Daniel L; Brady, David J

    2012-01-30

    We compare noise and classification metrics for three aperture codes in dispersive spectroscopy. In contrast with previous theory, we show that multiplex codes may be advantageous even in systems dominated by Poisson noise. Furthermore, ill-conditioned codes with a regularized estimation strategy are shown to perform competitively with well-conditioned codes. PMID:22330469
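
    A multiplex aperture code measures sums of several spectral channels per reading instead of one channel at a time; the spectrum is then recovered by inverting the code matrix. A self-contained sketch using a cyclic 7x7 S-matrix and exact Gaussian elimination (the matrix and recovery routine are generic illustrations, not the particular codes compared in the paper):

```python
from fractions import Fraction

# Cyclic S-matrix of order 7: each row, a cyclic shift of the first,
# describes one aperture mask (1 = channel open, 0 = channel blocked).
first_row = [1, 1, 1, 0, 1, 0, 0]
S = [first_row[-i:] + first_row[:-i] for i in range(7)]

def measure(spectrum):
    """Multiplexed measurements: each reading sums the open channels."""
    return [sum(m * x for m, x in zip(row, spectrum)) for row in S]

def solve(S, y):
    """Recover the spectrum by Gaussian elimination (exact, via Fractions)."""
    n = len(S)
    A = [[Fraction(v) for v in row] + [Fraction(y[i])] for i, row in enumerate(S)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] != 0)
        A[col], A[piv] = A[piv], A[col]
        piv_val = A[col][col]
        A[col] = [v / piv_val for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col]
                A[r] = [a - f * b for a, b in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]

spectrum = [3, 0, 5, 1, 0, 2, 4]
y = measure(spectrum)
recovered = solve(S, y)
```

    With noisy readings one would replace the exact solve by a regularized least-squares estimate, which is the estimation strategy the abstract refers to.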

  19. Constructions of Asymmetric Quantum Alternant Codes

    NASA Astrophysics Data System (ADS)

    Fan, Jihao; Chen, Hanwu; Xu, Juan

    2016-01-01

    Asymmetric quantum error-correcting codes (AQCs) have been proposed to deal with the significant asymmetry in many quantum channels, and they may offer more flexibility than general quantum error-correcting codes (QECs). In this paper, we construct AQCs based on Alternant codes. Firstly, we propose a new subclass of Alternant codes and combine them with BCH codes to construct AQCs. Then we construct AQCs based on series of nested pairs of subclasses of Alternant codes such as nested Goppa codes. As an illustrative example, we get three [[55, 6, 19/4

  20. Entanglement-assisted quantum convolutional coding

    SciTech Connect

    Wilde, Mark M.; Brun, Todd A.

    2010-04-15

    We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.

  1. Generating Constant Weight Binary Codes

    ERIC Educational Resources Information Center

    Knight, D.G.

    2008-01-01

    The determination of bounds for A(n, d, w), the maximum possible number of binary vectors of length n, weight w, and pairwise Hamming distance no less than d, is a classic problem in coding theory. Such sets of vectors have many applications. A description is given of how the problem can be used in a first-year undergraduate computational…
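
    A first computational exercise of the kind the article describes is a greedy search, which yields a lower bound on A(n, d, w). A sketch in pure Python (the lexicographic greedy order is one arbitrary choice; greedy search bounds A from below and need not find an optimal code):

```python
from itertools import combinations

def greedy_constant_weight(n, d, w):
    """Greedy lower bound for A(n, d, w): scan all weight-w words in
    lexicographic order of their supports and keep each word whose
    Hamming distance to every previously kept word is at least d."""
    code = []
    for support in combinations(range(n), w):
        word = set(support)
        # two weight-w words with overlap t are at Hamming distance 2*(w - t)
        if all(2 * (w - len(word & other)) >= d for other in code):
            code.append(word)
    return code

# All C(4,2) = 6 weight-2 words of length 4 are pairwise at distance >= 2,
# so here the greedy bound already meets A(4, 2, 2) = 6.
```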

  2. Visual communication with retinex coding.

    PubMed

    Huck, F O; Fales, C L; Davis, R E; Alter-Gartenberg, R

    2000-04-10

    Visual communication with retinex coding seeks to suppress the spatial variation of the irradiance (e.g., shadows) across natural scenes and preserve only the spatial detail and the reflectance (or the lightness) of the surface itself. The separation of reflectance from irradiance begins with nonlinear retinex coding that sharply and clearly enhances edges and preserves their contrast, and it ends with a Wiener filter that restores images from this edge and contrast information. An approximate small-signal model of image gathering with retinex coding is found to consist of the familiar difference-of-Gaussian bandpass filter and a locally adaptive automatic-gain control. A linear representation of this model is used to develop expressions within the small-signal constraint for the information rate and the theoretical minimum data rate of the retinex-coded signal and for the maximum-realizable fidelity of the images restored from this signal. Extensive computations and simulations demonstrate that predictions based on these figures of merit correlate closely with perceptual and measured performance. Hence these predictions can serve as a general guide for the design of visual communication channels that produce images with a visual quality that consistently approaches the best possible sharpness, clarity, and reflectance constancy, even for nonuniform irradiances. The suppression of shadows in the restored image is found to be constrained inherently more by the sharpness of their penumbra than by their depth. PMID:18345070
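
    The small-signal behaviour described above, a difference-of-Gaussian bandpass filter, is easy to demonstrate in one dimension: a uniform irradiance level cancels exactly while an edge produces a response. A sketch with illustrative parameters (not the article's full image-gathering model):

```python
import math

def gaussian_kernel(sigma, radius):
    """Normalised 1-D Gaussian kernel of the given half-width."""
    k = [math.exp(-0.5 * (x / sigma) ** 2) for x in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]

def convolve(signal, kernel):
    """Convolution with edge samples clamped at the borders."""
    r, n = len(kernel) // 2, len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, kv in enumerate(kernel):
            idx = min(max(i + j - r, 0), n - 1)
            acc += kv * signal[idx]
        out.append(acc)
    return out

def dog(signal, sigma_c=1.0, sigma_s=3.0, radius=9):
    """Difference-of-Gaussian bandpass: centre minus surround. A constant
    irradiance cancels exactly (both kernels sum to 1); edges survive."""
    centre = convolve(signal, gaussian_kernel(sigma_c, radius))
    surround = convolve(signal, gaussian_kernel(sigma_s, radius))
    return [c - s for c, s in zip(centre, surround)]

flat = [5.0] * 32               # uniform irradiance: response ~ 0
step = [0.0] * 16 + [1.0] * 16  # an edge: response peaks near the edge
```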

  3. AEDS Property Classification Code Manual.

    ERIC Educational Resources Information Center

    Association for Educational Data Systems, Washington, DC.

    The control and inventory of property items using data processing machines requires a form of numerical description or code which will allow a maximum of description in a minimum of space on the data card. An adaptation of a standard industrial classification system is given to cover any expendable warehouse item or non-expendable piece of…

  4. Coded continuous wave meteor radar

    NASA Astrophysics Data System (ADS)

    Vierinen, Juha; Chau, Jorge L.; Pfeffer, Nico; Clahsen, Matthias; Stober, Gunter

    2016-03-01

    The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmissions maximize pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products.
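
    The core signal-processing step, pulse compression of a pseudorandom phase code, can be sketched in a few lines: correlate the received samples against the known chip sequence and read the target delay off the correlation peak. Toy parameters, no noise or Doppler:

```python
import random

# Pseudorandom binary phase code (+1/-1 chips), standing in for the
# continuously transmitted waveform; seed fixed for reproducibility.
random.seed(42)
N = 127
code = [random.choice((-1, 1)) for _ in range(N)]

# Simulated echo: the transmitted code delayed by `true_delay` samples
# inside a longer receive window (zero elsewhere).
true_delay = 40
rx = [0] * (2 * N)
for i, c in enumerate(code):
    rx[true_delay + i] = c

# Pulse compression: correlate the receive window against the known code.
# The pseudorandom sequence's sharp autocorrelation peaks at the true delay.
corr = [sum(code[i] * rx[lag + i] for i in range(N)) for lag in range(N)]
est_delay = max(range(N), key=lambda lag: corr[lag])
```

    With independent pseudorandom codes per transmitter, the same correlation separates the transmitters, which is what allows the multi-static operation the abstract describes.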

  5. Generating Constant Weight Binary Codes

    ERIC Educational Resources Information Center

    Knight, D.G.

    2008-01-01

    The determination of bounds for A(n, d, w), the maximum possible number of binary vectors of length n, weight w, and pairwise Hamming distance no less than d, is a classic problem in coding theory. Such sets of vectors have many applications. A description is given of how the problem can be used in a first-year undergraduate computational

  6. Corrections to the Vienna Code

    Technology Transfer Automated Retrieval System (TEKTRAN)

    Since the publication of the Vienna Code, several errors have been noticed. Most are minor punctuation or cross-referencing errors, or, in the Appendices, inconsistencies in abbreviation, but there was one important omission from Art. 37, the misspelling of two specific epithets and the transpositio...

  7. QR Codes: Taking Collections Further

    ERIC Educational Resources Information Center

    Ahearn, Caitlin

    2014-01-01

    With some thought and direction, QR (quick response) codes are a great tool to use in school libraries to enhance access to information. From March through April 2013, Caitlin Ahearn interned at Sanborn Regional High School (SRHS) under the supervision of Pam Harland. As a result of Harland's un-Deweying of the nonfiction collection at SRHS,…

  8. Tri-Coding of Information.

    ERIC Educational Resources Information Center

    Simpson, Timothy J.

    Paivio's Dual Coding Theory has received widespread recognition for its connection between visual and aural channels of internal information processing. The use of only two channels, however, cannot satisfactorily explain the effects witnessed every day. This paper presents a study suggesting the presence of a third, kinesthetic channel, currently…

  9. Research Synthesis: Coding and Conjectures.

    ERIC Educational Resources Information Center

    Stock, William A.; And Others

    1996-01-01

    Guidelines are offered that make it more likely that high-quality information will be extracted and coded from primary research reports in meta-analyses. It is also noted that the methodology of meta-analysis results in pressure to change the type of information that appears in primary research reports. (SLD)

  10. Three-dimensional stellarator codes

    PubMed Central

    Garabedian, P. R.

    2002-01-01

    Three-dimensional computer codes have been used to develop quasisymmetric stellarators with modular coils that are promising candidates for a magnetic fusion reactor. The mathematics of plasma confinement raises serious questions about the numerical calculations. Convergence studies have been performed to assess the best configurations. Comparisons with recent data from large stellarator experiments serve to validate the theory. PMID:12140367

  11. FORTRAN Static Source Code Analyzer

    NASA Technical Reports Server (NTRS)

    Merwarth, P.

    1982-01-01

    FORTRAN Static Source Code Analyzer program (SAP) automatically gathers and reports statistics on occurrences of statements and structures within FORTRAN program. Provisions are made for weighting each statistic, providing user with overall figure of complexity. Statistics, as well as figures of complexity, are gathered on module-by-module basis. Overall summed statistics are accumulated for complete input source file.
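
    A miniature of this kind of analyzer, counting statement keywords and folding the counts into a weighted complexity figure, might look as follows. The keyword set and weights are hypothetical stand-ins, not SAP's actual tables:

```python
import re

# Illustrative weights: each statistic contributes to an overall
# figure of complexity, as SAP's weighting scheme does.
WEIGHTS = {"IF": 2.0, "GOTO": 3.0, "DO": 1.5, "CALL": 1.0}

def analyze(source):
    """Count statement keywords in FORTRAN source and return the counts
    plus a single weighted complexity figure."""
    counts = {kw: 0 for kw in WEIGHTS}
    for raw in source.splitlines():
        if raw[:1].upper() in ("C", "*", "!"):   # fixed-form comment line
            continue
        for kw in WEIGHTS:
            counts[kw] += len(re.findall(r"\b%s\b" % kw, raw.upper()))
    complexity = sum(WEIGHTS[kw] * n for kw, n in counts.items())
    return counts, complexity

src = """      DO 10 I = 1, N
C     COMMENT LINES ARE IGNORED
      IF (A(I) .GT. 0) CALL FIX(A(I))
   10 CONTINUE
"""
counts, complexity = analyze(src)
```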

  12. FORTRAN Static Source Code Analyzer

    NASA Technical Reports Server (NTRS)

    Merwarth, P.

    1984-01-01

    FORTRAN Static Source Code Analyzer program, SAP (DEC VAX version), automatically gathers statistics on occurrences of statements and structures within FORTRAN program and provides reports of those statistics. Provisions made for weighting each statistic and provide an overall figure of complexity.

  13. Multichannel error correction code decoder

    NASA Technical Reports Server (NTRS)

    Wagner, Paul K.; Ivancic, William D.

    1993-01-01

    A brief overview of a processing satellite for a mesh very-small-aperture (VSAT) communications network is provided. The multichannel error correction code (ECC) decoder system, the uplink signal generation and link simulation equipment, and the time-shared decoder are described. The testing is discussed. Applications of the time-shared decoder are recommended.

  14. Accumulate-Repeat-Accumulate-Accumulate Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Samuel; Thorpe, Jeremy

    2007-01-01

    Accumulate-repeat-accumulate-accumulate (ARAA) codes have been proposed, inspired by the recently proposed accumulate-repeat-accumulate (ARA) codes. These are error-correcting codes suitable for use in a variety of wireless data-communication systems that include noisy channels. ARAA codes can be regarded as serial turbolike codes or as a subclass of low-density parity-check (LDPC) codes, and, like ARA codes, they have projected graph or protograph representations; these characteristics make it possible to design high-speed iterative decoders that utilize belief-propagation algorithms. The objective in proposing ARAA codes as a subclass of ARA codes was to enhance the error-floor performance of ARA codes while maintaining simple encoding structures and low maximum variable node degree.
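
    The "accumulate" stage that gives these codes their name is a running XOR (a rate-1 1/(1+D) convolutional code). A sketch of a plain repeat-accumulate encoder built from it; the seeded-shuffle interleaver is for illustration only, not a protograph-optimised permutation:

```python
import random

def accumulate(bits):
    """The 'accumulate' stage: a running XOR, so each output bit is the
    XOR of all input bits seen so far."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def ra_encode(info, repeat=3, seed=0):
    """Toy repeat-accumulate encoder (the basic building block that ARA
    and ARAA codes extend): repeat each information bit, permute with a
    fixed pseudorandom interleaver, then accumulate."""
    repeated = [b for b in info for _ in range(repeat)]
    idx = list(range(len(repeated)))
    random.Random(seed).shuffle(idx)
    interleaved = [repeated[i] for i in idx]
    return accumulate(interleaved)
```

    ARAA codes add a second accumulator after this chain, which is what improves the error floor relative to ARA codes.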

  15. Recent Improvements in the ATLAS PanDA Pilot

    NASA Astrophysics Data System (ADS)

    Nilsson, P.; Caballero Bejar, J.; Compostella, G.; Contreras, C.; De, K.; Dos Santos, T.; Maeno, T.; Potekhin, M.; Wenaus, T.

    2012-12-01

    The Production and Distributed Analysis system (PanDA) in the ATLAS experiment uses pilots to execute submitted jobs on the worker nodes. The pilots are designed to deal with different runtime conditions and failure scenarios, and support many storage systems. This talk will give a brief overview of the PanDA pilot system and will present major features and recent improvements including CernVM File System integration, the job retry mechanism, advanced job monitoring including JEM technology, and validation of new pilot code using the HammerCloud stress-testing system. PanDA is used for all ATLAS distributed production and is the primary system for distributed analysis. It is currently used at over 130 sites worldwide. We analyze the performance of the pilot system in processing LHC data on the OSG, EGI and Nordugrid infrastructures used by ATLAS, and describe plans for its further evolution.

  16. Fluid Film Bearing Code Development

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The next generation of rocket engine turbopumps is being developed by industry through Government-directed contracts. These turbopumps will use fluid film bearings because they eliminate the life and shaft-speed limitations of rolling-element bearings, increase turbopump design flexibility, and reduce the need for turbopump overhauls and maintenance. The design of the fluid film bearings for these turbopumps, however, requires sophisticated analysis tools to model the complex physical behavior characteristic of fluid film bearings operating at high speeds with low viscosity fluids. State-of-the-art analysis and design tools are being developed at the Texas A&M University under a grant guided by the NASA Lewis Research Center. The latest version of the code, HYDROFLEXT, is a thermohydrodynamic bulk flow analysis with fluid compressibility, full inertia, and fully developed turbulence models. It can predict the static and dynamic force response of rigid and flexible pad hydrodynamic bearings and of rigid and tilting pad hydrostatic bearings. The Texas A&M code is a comprehensive analysis tool, incorporating key fluid phenomena pertinent to bearings that operate at high speeds with low-viscosity fluids typical of those used in rocket engine turbopumps. Specifically, the energy equation was implemented into the code to enable fluid properties to vary with temperature and pressure. This is particularly important for cryogenic fluids because their properties are sensitive to temperature as well as pressure. As shown in the figure, predicted bearing mass flow rates vary significantly depending on the fluid model used. Because cryogens are semicompressible fluids and the bearing dynamic characteristics are highly sensitive to fluid compressibility, fluid compressibility effects are also modeled. The code contains fluid properties for liquid hydrogen, liquid oxygen, and liquid nitrogen as well as for water and air.
Other fluids can be handled by the code provided that the user inputs information that relates the fluid transport properties to the temperature.

  17. Error Correcting Codes on Algebraic Surfaces

    NASA Astrophysics Data System (ADS)

    Lomont, Chris

    2003-09-01

    Error correcting codes are defined and important parameters for a code are explained. Parameters of new codes constructed on algebraic surfaces are studied. In particular, codes resulting from blowing up points in P^2 are briefly studied, then codes resulting from ruled surfaces are covered. Codes resulting from ruled surfaces over curves of genus 0 are completely analyzed, and some codes are discovered that are better than direct product Reed-Solomon codes of similar length. Ruled surfaces over genus 1 curves are also studied, but not all classes are completely analyzed. However, in this case a family of codes is found that is comparable in performance to the direct product code of a Reed-Solomon code and a Goppa code. Some further work is done on surfaces from higher genus curves, but there remains much work to be done in this direction to understand fully the resulting codes. Codes resulting from blowing up points on surfaces are also studied, obtaining necessary parameters for constructing infinite families of such codes. Also included is a paper giving explicit formulas for curves with more F_q-rational points than were previously known for certain combinations of field size and genus. Some upper bounds are now known to be optimal from these examples.
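
    The direct-product baseline used for comparison has easily computed parameters: the product of an [n1, k1, d1] code and an [n2, k2, d2] code is an [n1*n2, k1*k2, d1*d2] code, and an (n, k) Reed-Solomon code, being MDS, has distance n - k + 1. A quick sketch (helper names are illustrative):

```python
def rs_params(n, k):
    """Parameters of an (n, k) Reed-Solomon code: MDS, so d = n - k + 1."""
    return n, k, n - k + 1

def product_params(c1, c2):
    """Parameters of the direct product of two linear codes:
    lengths, dimensions, and minimum distances all multiply."""
    (n1, k1, d1), (n2, k2, d2) = c1, c2
    return n1 * n2, k1 * k2, d1 * d2

# Product of two (8, 4) Reed-Solomon codes: a [64, 16, 25] code,
# the kind of baseline the surface-constructed codes are measured against.
baseline = product_params(rs_params(8, 4), rs_params(8, 4))
```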

  18. Amino acid codes in mitochondria as possible clues to primitive codes

    NASA Technical Reports Server (NTRS)

    Jukes, T. H.

    1981-01-01

    Differences between mitochondrial codes and the universal code indicate that an evolutionary simplification has taken place, rather than a return to a more primitive code. However, these differences make it evident that the universal code is not the only code possible, and therefore earlier codes may have differed markedly from the present code. The present universal code is probably a 'frozen accident.' The change in CUN codons from leucine to threonine (Neurospora vs. yeast mitochondria) indicates that neutral or near-neutral changes occurred in the corresponding proteins when this code change took place, caused presumably by a mutation in a tRNA gene.
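
    The CUN reassignment cited above is easy to state as a lookup table: the universal code reads the CUN family (CUU, CUC, CUA, CUG) as leucine, while yeast mitochondria read the same family as threonine. A minimal illustration, with the tables restricted to this codon family for brevity:

```python
# Universal code vs. the yeast mitochondrial reassignment of the
# CUN codon family (table restricted to these four codons).
CUN = ("CUU", "CUC", "CUA", "CUG")
UNIVERSAL = {c: "Leu" for c in CUN}
YEAST_MITO = {c: "Thr" for c in CUN}

# Every codon in the family is reassigned, which is why the change had
# to be neutral or near-neutral for the affected proteins.
reassigned = [c for c in UNIVERSAL if UNIVERSAL[c] != YEAST_MITO[c]]
```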

  19. Genetic coding and gene expression - new Quadruplet genetic coding model

    NASA Astrophysics Data System (ADS)

    Shankar Singh, Rama

    2012-07-01

    Successful demonstration of human genome project has opened the door not only for developing personalized medicine and cure for genetic diseases, but it may also answer the complex and difficult question of the origin of life. It may lead to making 21st century, a century of Biological Sciences as well. Based on the central dogma of Biology, genetic codons in conjunction with tRNA play a key role in translating the RNA bases forming sequence of amino acids leading to a synthesized protein. This is the most critical step in synthesizing the right protein needed for personalized medicine and curing genetic diseases. So far, only triplet codons involving three bases of RNA, transcribed from DNA bases, have been used. Since this approach has several inconsistencies and limitations, even the promise of personalized medicine has not been realized. The new Quadruplet genetic coding model proposed and developed here involves all four RNA bases which in conjunction with tRNA will synthesize the right protein. The transcription and translation process used will be the same, but the Quadruplet codons will help overcome most of the inconsistencies and limitations of the triplet codes. Details of this new Quadruplet genetic coding model and its subsequent potential applications including relevance to the origin of life will be presented.

  20. Box codes of lengths 48 and 72

    NASA Technical Reports Server (NTRS)

    Solomon, G.; Jin, Y.

    1993-01-01

    A self-dual code of length 48 and dimension 24, with Hamming distance essentially equal to 12, is constructed here. There are only six code words of weight eight. All the other code words have weights that are multiples of four and have a minimum weight equal to 12. This code may be encoded systematically and arises from a strict binary representation of the (8,4;5) Reed-Solomon (RS) code over GF (64). The code may be considered as six interrelated (8,7;2) codes. The Mattson-Solomon representation of the cyclic decomposition of these codes and their parity sums are used to detect an odd number of errors in any of the six codes. These may then be used in a correction algorithm for hard or soft decision decoding. A (72,36;15) box code was constructed from a (63,35;8) cyclic code. The theoretical justification is presented herein. A second (72,36;15) code is constructed from an inner (63,27;16) Bose-Chaudhuri-Hocquenghem (BCH) code and expanded to length 72 using box code algorithms for extension. This code was simulated and verified to have a minimum distance of 15 with even weight words congruent to zero modulo four. The decoding for hard and soft decision is still more complex than the first code constructed above. Finally, an (8,4;5) RS code over GF (512) in the binary representation of the (72,36;15) box code gives rise to a (72,36;16*) code with nine words of weight eight, and all the rest have weights greater than or equal to 16.

  1. The Mystery Behind the Code: Differentiated Instruction with Quick Response Codes in Secondary Physical Education

    ERIC Educational Resources Information Center

    Adkins, Megan; Wajciechowski, Misti R.; Scantling, Ed

    2013-01-01

    Quick response codes, better known as QR codes, are small barcodes scanned to receive information about a specific topic. This article explains QR code technology and the utility of QR codes in the delivery of physical education instruction. Consideration is given to how QR codes can be used to accommodate learners of varying ability levels as…

  2. A tutorial on convolution coding for M-ary signals-Trellis-coded modulation

    NASA Astrophysics Data System (ADS)

    Sklar, Bernard

    Trellis-coded modulation (TCM) is defined and described with respect to code-to-signal mapping and coding for QAM. Also summarized are the evolution and underlying principles of TCM and how TCM schemes achieve coding gains without bandwidth expansion. An example is used to illustrate a simple implementation of such a coding scheme.
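
    The ingredients are a convolutional encoder plus a mapping of the coded bit pairs onto a denser constellation, so redundancy is added without widening the bandwidth. A sketch using the textbook rate-1/2, constraint-length-3 encoder with octal generators (7, 5) and a natural (not set-partition) 4-PSK labelling, for illustration only:

```python
def conv_encode(bits):
    """Rate-1/2 convolutional encoder, constraint length 3,
    generators 111 and 101 (octal 7 and 5)."""
    s1 = s2 = 0  # two memory elements, initially zero
    out = []
    for u in bits:
        out.append((u ^ s1 ^ s2, u ^ s2))  # the two generator outputs
        s1, s2 = u, s1                     # shift the register
    return out

def map_qpsk(pairs):
    """Map each coded bit pair to a 4-PSK symbol index. Natural mapping
    is shown for simplicity; actual TCM uses set-partition labelling to
    maximise intra-subset distance."""
    return [2 * a + b for a, b in pairs]

coded = conv_encode([1, 0, 1, 1])
symbols = map_qpsk(coded)
```

    One information bit per 4-PSK symbol is transmitted, the same rate as uncoded BPSK, which is how the coding gain comes without bandwidth expansion.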

  3. Biological Information Transfer Beyond the Genetic Code: The Sugar Code

    NASA Astrophysics Data System (ADS)

    Gabius, H.-J.

    In the era of genetic engineering, cloning, and genome sequencing the focus of research on the genetic code has received an even further accentuation in the public eye. In attempting, however, to understand intra- and intercellular recognition processes comprehensively, the two biochemical dimensions established by nucleic acids and proteins are not sufficient to satisfactorily explain all molecular events in, for example, cell adhesion or routing. The consideration of further code systems is essential to bridge this gap. A third biochemical alphabet forming code words with an information storage capacity second to no other substance class in rather small units (words, sentences) is established by monosaccharides (letters). As hardware oligosaccharides surpass peptides by more than seven orders of magnitude in the theoretical ability to build isomers, when the total of conceivable hexamers is calculated. In addition to the sequence complexity, the use of magnetic resonance spectroscopy and molecular modeling has been instrumental in discovering that even small glycans can often reside in not only one but several distinct low-energy conformations (keys). Intriguingly, conformers can display notably different capacities to fit snugly into the binding site of nonhomologous receptors (locks). This process, experimentally verified for two classes of lectins, is termed "differential conformer selection." It adds potential for shifts of the conformer equilibrium to modulate ligand properties dynamically and reversibly to the well-known changes in sequence (including anomeric positioning and linkage points) and in pattern of substitution, for example, by sulfation. In the intimate interplay with sugar receptors (lectins, enzymes, and antibodies) the message of coding units of the sugar code is deciphered. Their recognition will trigger postbinding signaling and the intended biological response. 
Knowledge about the driving forces for the molecular rendezvous, i.e., contributions of bidentate or cooperative hydrogen bonds, dispersion forces, stacking, and solvent rearrangement, will enable the design of high-affinity ligands or mimetics thereof. They embody clinical applications reaching from receptor localization in diagnostic pathology to cell type-selective targeting of drugs and inhibition of undesired cell adhesion in bacterial/viral infections, inflammation, or metastasis.

  4. Maximal dinucleotide and trinucleotide circular codes.

    PubMed

    Michel, Christian J; Pellegrini, Marco; Pirillo, Giuseppe

    2016-01-21

    We determine here the number and the list of maximal dinucleotide and trinucleotide circular codes. We prove that there is no maximal dinucleotide circular code having strictly less than 6 elements (maximum size of dinucleotide circular codes). On the other hand, a computer calculation shows that there are maximal trinucleotide circular codes with fewer than 20 elements (maximum size of trinucleotide circular codes). More precisely, there are maximal trinucleotide circular codes with 14, 15, 16, 17, 18 and 19 elements and no maximal trinucleotide circular code having fewer than 14 elements. We give the same information for the maximal self-complementary dinucleotide and trinucleotide circular codes. The amino acid distribution of maximal trinucleotide circular codes is also determined. PMID:26382231
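
    For dinucleotide codes, circularity can be tested with the known graph characterisation of circular codes: draw an edge b1 -> b2 for each word b1b2; the code is circular exactly when this graph has no directed cycle. This also explains the maximum size 6 quoted above, since an acyclic orientation of the complete graph on the 4 letters has at most C(4,2) = 6 edges. A sketch under that characterisation:

```python
def is_circular_dinucleotide(code):
    """Circularity test for a dinucleotide code via its associated graph:
    one edge b1 -> b2 per word b1b2; circular iff the graph is acyclic.
    Cycle detection is a standard three-colour DFS."""
    graph = {}
    for w in code:
        graph.setdefault(w[0], []).append(w[1])
    WHITE, GRAY, BLACK = 0, 1, 2
    colour = {}
    def has_cycle(v):
        colour[v] = GRAY
        for u in graph.get(v, []):
            if colour.get(u, WHITE) == GRAY:
                return True
            if colour.get(u, WHITE) == WHITE and has_cycle(u):
                return True
        colour[v] = BLACK
        return False
    return not any(colour.get(v, WHITE) == WHITE and has_cycle(v) for v in graph)

# A maximal 6-element dinucleotide circular code: the edges of an acyclic
# orientation of K4 over {A, C, G, T}. Adding the reversed word "CA"
# creates the cycle A -> C -> A and destroys circularity.
```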

  5. Dual-code quantum computation model

    NASA Astrophysics Data System (ADS)

    Choi, Byung-Soo

    2015-08-01

    In this work, we propose the dual-code quantum computation model, a fault-tolerant quantum computation scheme which alternates between two different quantum error-correction codes. Since the chosen two codes have different sets of transversal gates, we can implement a universal set of gates transversally, thereby reducing the overall cost. We use code teleportation to convert between quantum states in different codes. The overall cost is decreased if code teleportation requires fewer resources than the fault-tolerant implementation of the non-transversal gate in a specific code. To analyze the cost reduction, we investigate two cases with different base codes, namely the Steane and Bacon-Shor codes. For the Steane code, neither the proposed dual-code model nor another variation of it achieves any cost reduction since the conventional approach is simple. For the Bacon-Shor code, the three proposed variations of the dual-code model reduce the overall cost. However, as the encoding level increases, the cost reduction decreases and becomes negative. Therefore, the proposed dual-code model is advantageous only when the encoding level is low and the cost of the non-transversal gate is relatively high.

  6. GeoPhysical Analysis Code

    SciTech Connect

    2011-05-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, and fluid-mediated fracturing, including resolution of contact both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems and problems involving hydraulic fracturing, where the mesh topology is dynamically changed. The key application domain is for low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release.

  7. Multidimensional Fuel Performance Code: BISON

    Energy Science and Technology Software Center (ESTSC)

    2014-09-03

    BISON is a finite element based nuclear fuel performance code applicable to a variety of fuel forms including light water reactor fuel rods, TRISO fuel particles, and metallic rod and plate fuel (Refs. [a, b, c]). It solves the fully-coupled equations of thermomechanics and species diffusion and includes important fuel physics such as fission gas release and material property degradation with burnup. BISON is based on the MOOSE framework (Ref. [d]) and can therefore efficiently solve problems on 1-, 2- or 3-D meshes using standard workstations or large high performance computers. BISON is also coupled to a MOOSE-based mesoscale phase field material property simulation capability (Refs. [e, f]). As described here, BISON includes the code library named FOX, which was developed concurrent with BISON. FOX contains material and behavioral models that are specific to oxide fuels.

  8. Visual analysis of code security

    SciTech Connect

    Goodall, John R; Radwan, Hassan; Halseth, Lenny

    2010-01-01

    To help increase the confidence that software is secure, researchers and vendors have developed different kinds of automated software security analysis tools. These tools analyze software for weaknesses and vulnerabilities, but the individual tools catch different vulnerabilities and produce voluminous data with many false positives. This paper describes a system that brings together the results of disparate software analysis tools into a visual environment to support the triage and exploration of code vulnerabilities. Our system allows software developers to explore vulnerability results to uncover hidden trends, triage the most important code weaknesses, and show who is responsible for introducing software vulnerabilities. By correlating and normalizing multiple software analysis tools' data, the overall vulnerability detection coverage of software is increased. A visual overview and powerful interaction allows the user to focus attention on the most pressing vulnerabilities within huge volumes of data, and streamlines the secure software development workflow through integration with development tools.
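    The correlation and normalization step described above can be sketched in Python. The field names, tool output shapes, and weakness identifiers below are invented for illustration; the paper does not specify its data model:

    ```python
    # Findings from different analysis tools are mapped to a common record and
    # de-duplicated on (file, line, weakness), so overlapping detections from
    # multiple tools merge into one entry with a higher-confidence attribution.
    tool_a = [{"path": "auth.c", "ln": 42, "cwe": "CWE-120", "tool": "A"}]
    tool_b = [{"file": "auth.c", "line": 42, "weakness": "CWE-120", "tool": "B"},
              {"file": "io.c", "line": 7, "weakness": "CWE-476", "tool": "B"}]

    def normalize(finding):
        # map each tool's native field names onto one shared key
        return (finding.get("path") or finding["file"],
                finding.get("ln") or finding["line"],
                finding.get("cwe") or finding["weakness"])

    merged = {}
    for f in tool_a + tool_b:
        merged.setdefault(normalize(f), []).append(f["tool"])

    assert len(merged) == 2                            # duplicate detection collapsed
    assert merged[("auth.c", 42, "CWE-120")] == ["A", "B"]
    ```

    A finding reported by several tools ends up as one record listing all reporting tools, which is one simple way the combined coverage can exceed any single tool's.
    
    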

  9. The EGS5 Code System

    SciTech Connect

    Hirayama, Hideo; Namito, Yoshihito; Bielajew, Alex F.; Wilderman, Scott J.; U., Michigan; Nelson, Walter R.; /SLAC

    2005-12-20

    In the nineteen years since EGS4 was released, it has been used in a wide variety of applications, particularly in medical physics, radiation measurement studies, and industrial development. Every new user and every new application bring new challenges for Monte Carlo code designers, and code refinements and bug fixes eventually result in a code that becomes difficult to maintain. Several of the code modifications represented significant advances in electron and photon transport physics, and required a more substantial overhaul than code patching. Moreover, the arcane MORTRAN3[48] computer language of EGS4 was highest on the complaint list of the users of EGS4. The size of the EGS4 user base is difficult to measure, as there never existed a formal user registration process. However, some idea of the numbers may be gleaned from the number of EGS4 manuals that were produced and distributed at SLAC: almost three thousand. Consequently, the EGS5 project was undertaken. It was decided to employ the FORTRAN 77 compiler, yet retain, as much as possible, the structural beauty and power of MORTRAN3. This report consists of four chapters and several appendices. Chapter 1 is an introduction to EGS5 and to this report in general. We suggest that you read it. Chapter 2 is a major update of similar chapters in the old EGS4 report[126] (SLAC-265) and the old EGS3 report[61] (SLAC-210), in which all the details of the old physics (i.e., models which were carried over from EGS4) and the new physics are gathered together. The descriptions of the new physics are extensive, and not for the faint of heart. Detailed knowledge of the contents of Chapter 2 is not essential in order to use EGS, but sophisticated users should be aware of its contents. In particular, details of the restrictions on the range of applicability of EGS are dispersed throughout the chapter. First-time users of EGS should skip Chapter 2 and come back to it later if necessary.
With the release of the EGS4 version, a deliberate attempt was made to present example problems in order to help the user ''get started'', and we follow that spirit in this report. A series of elementary tutorial user codes are presented in Chapter 3, with more sophisticated sample user codes described in Chapter 4. Novice EGS users will find it helpful to read through the initial sections of the EGS5 User Manual (provided in Appendix B of this report), proceeding then to work through the tutorials in Chapter 3. The User Manuals and other materials found in the appendices contain detailed flow charts, variable lists, and subprogram descriptions of EGS5 and PEGS. Included are step-by-step instructions for developing basic EGS5 user codes and for accessing all of the physics options available in EGS5 and PEGS. Once acquainted with the basic structure of EGS5, users should find the appendices the most frequently consulted sections of this report.

  10. Sensor Authentication: Embedded Processor Code

    Energy Science and Technology Software Center (ESTSC)

    2012-09-25

    Described is the C code running on the embedded Microchip 32-bit PIC32MX575F256H located on the INL-developed noise analysis circuit board. The code performs the following functions: controls the noise analysis circuit board preamplifier voltage gains of 1, 10, 100, and 1000; initializes the analog-to-digital conversion hardware, input channel selection, Fast Fourier Transform (FFT) function, USB communications interface, and internal memory allocations; initiates high-resolution 4096-point 200 kHz data acquisition; computes the complex 2048-point FFT and FFT magnitude; services the Host command set; transfers raw data and FFT results to the Host; and performs communication error checking.

  11. GeoPhysical Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    2011-05-21

    GPAC is a code that integrates open source libraries for element formulations, linear algebra, and I/O with two main LLNL-written components: (i) a set of standard finite element physics solvers for resolving Darcy fluid flow, explicit mechanics, implicit mechanics, and fluid-mediated fracturing, including resolution of contact both implicitly and explicitly, and (ii) an MPI-based parallelization implementation for use on generic HPC distributed memory architectures. The resultant code can be used alone for linearly elastic problems and problems involving hydraulic fracturing, where the mesh topology is dynamically changed. The key application domain is low-rate stimulation and fracture control in subsurface reservoirs (e.g., enhanced geothermal sites and unconventional shale gas stimulation). GPAC also has interfaces to call external libraries for, e.g., material models and equations of state; however, LLNL-developed EOS and material models will not be part of the current release.

  12. Local intensity adaptive image coding

    NASA Technical Reports Server (NTRS)

    Huck, Friedrich O.

    1989-01-01

    The objective of preprocessing for machine vision is to extract intrinsic target properties. The most important properties ordinarily are structure and reflectance. Illumination in space, however, is a significant problem as the extreme range of light intensity, stretching from deep shadow to highly reflective surfaces in direct sunlight, impairs the effectiveness of standard approaches to machine vision. To overcome this critical constraint, an image coding scheme is being investigated which combines local intensity adaptivity, image enhancement, and data compression. It is very effective under the highly variant illumination that can exist within a single frame or field of view, and it is very robust to noise at low illuminations. Some of the theory and salient features of the coding scheme are reviewed. Its performance is characterized in a simulated space application, and the research and development activities are described.

  13. Secure Communication with Network Coding

    NASA Astrophysics Data System (ADS)

    Cao, Zhanghua; Tang, Yuansheng; Luo, Jinquan

    In this paper, we consider the problem of secure communication over wiretap multicast networks. Noticing that network coding renders the intermediate nodes to mix information from different data flows, we propose a secure communication scheme based on cryptographic means and network coding. Specifically, we employ a confidential cryptosystem to encrypt the source message packets, then treat the secret key as a message packet and mix the key with the obtained cryptograms. Furthermore, we can prove that, under suitable conditions, the wiretapper is unable to gain the secret key. Meanwhile, the confidential cryptosystem prohibits the wiretapper from extracting meaningful information from the obtained cryptograms. Our scheme doesn't need a private channel to transmit the secret key and enables the utilization of network capacity to reach (n-1)/n.
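    The key-mixing idea can be sketched in Python. This is a toy, not the authors' construction: a SHA-256-derived keystream stands in for the unspecified confidential cryptosystem, and four fixed invertible XOR combinations stand in for random linear network coding over GF(2):

    ```python
    import hashlib
    import secrets

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def keystream(key, i, n):
        # stand-in "confidential cryptosystem": SHA-256-derived keystream (assumption)
        return hashlib.sha256(key + bytes([i])).digest()[:n]

    # three 16-byte source message packets and a random 16-byte secret key
    plain = [b"packet-one-data!", b"packet-two-data!", b"packet-3-payload"]
    key = secrets.token_bytes(16)

    # encrypt the message packets, then treat the key itself as a fourth packet
    cipher = [xor_bytes(p, keystream(key, i, 16)) for i, p in enumerate(plain)]

    # mix the key with the cryptograms using invertible XOR combinations,
    # as the intermediate network-coding nodes would
    coded = [
        xor_bytes(cipher[0], key),
        xor_bytes(cipher[1], key),
        xor_bytes(cipher[2], key),
        xor_bytes(xor_bytes(cipher[0], cipher[1]), cipher[2]),
    ]

    # a receiver holding all four combinations can invert the mixing:
    # the three ciphertexts cancel pairwise, leaving the key
    k = xor_bytes(xor_bytes(coded[0], coded[1]), xor_bytes(coded[2], coded[3]))
    assert k == key
    recovered = [xor_bytes(xor_bytes(coded[i], k), keystream(k, i, 16))
                 for i in range(3)]
    assert recovered == plain
    ```

    One of the four coded packets carries only key material, so three of four packets carry message payload, matching the (n-1)/n capacity utilization claimed above for n = 4.
    
    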

  14. CBP PHASE I CODE INTEGRATION

    SciTech Connect

    Smith, F.; Brown, K.; Flach, G.; Sarkar, S.

    2011-09-30

    The goal of the Cementitious Barriers Partnership (CBP) is to develop a reasonable and credible set of software tools to predict the structural, hydraulic, and chemical performance of cement barriers used in nuclear applications over extended time frames (greater than 100 years for operating facilities and greater than 1000 years for waste management). The simulation tools will be used to evaluate and predict the behavior of cementitious barriers used in near surface engineered waste disposal systems including waste forms, containment structures, entombments, and environmental remediation. These cementitious materials are exposed to dynamic environmental conditions that cause changes in material properties via (i) aging, (ii) chloride attack, (iii) sulfate attack, (iv) carbonation, (v) oxidation, and (vi) primary constituent leaching. A set of state-of-the-art software tools has been selected as a starting point to capture these important aging and degradation phenomena. Integration of existing software developed by the CBP partner organizations was determined to be the quickest method of meeting the CBP goal of providing a computational tool that improves the prediction of the long-term behavior of cementitious materials. These partner codes were selected based on their maturity and ability to address the problems outlined above. The GoldSim Monte Carlo simulation program (GTG 2010a, GTG 2010b) was chosen as the code integration platform (Brown & Flach 2009b). GoldSim (current Version 10.5) is a Windows based graphical object-oriented computer program that provides a flexible environment for model development (Brown & Flach 2009b). The linking of GoldSim to external codes has previously been successfully demonstrated (Eary 2007, Mattie et al. 2007). GoldSim is capable of performing deterministic and probabilistic simulations and of modeling radioactive decay and constituent transport. 
As part of the CBP project, a general Dynamic Link Library (DLL) interface was developed to link GoldSim with external codes (Smith III et al. 2010). The DLL uses a list of code inputs provided by GoldSim to create an input file for the external application, runs the external code, and returns a list of outputs (read from files created by the external application) back to GoldSim. In this way GoldSim provides: (1) a unified user interface to the applications, (2) the capability of coupling selected codes in a synergistic manner, and (3) the capability of performing probabilistic uncertainty analysis with the codes. GoldSim is made available by the GoldSim Technology Group as a free 'Player' version that allows running but not editing GoldSim models. The player version makes the software readily available to a wider community of users who wish to use the CBP application but do not have a license for GoldSim.
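    The write-inputs/run/read-outputs round trip of the DLL bridge can be sketched in Python. The file formats, field names, and the doubling stub used in place of a real external code are all invented for illustration:

    ```python
    import pathlib
    import subprocess
    import sys
    import tempfile

    def run_external(inputs, command):
        # write the input list to a file, run the external code,
        # and read its outputs back -- the same round trip the DLL performs
        with tempfile.TemporaryDirectory() as td:
            inp = pathlib.Path(td) / "model.in"
            out = pathlib.Path(td) / "model.out"
            inp.write_text("\n".join(f"{k} {v}" for k, v in inputs.items()))
            subprocess.run(command + [str(inp), str(out)], check=True)
            return [float(v) for v in out.read_text().split()]

    # stand-in "external code": a script that doubles every input value
    stub = [sys.executable, "-c",
            "import sys;"
            "vals=[float(l.split()[1]) for l in open(sys.argv[1])];"
            "open(sys.argv[2],'w').write(' '.join(str(2*v) for v in vals))"]

    result = run_external({"porosity": 0.12, "depth": 3.0}, stub)
    assert result == [0.24, 6.0]
    ```

    Wrapping each partner code behind one such call is what lets a driver like GoldSim sample inputs, invoke the code repeatedly, and collect outputs for probabilistic analysis.
    
    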

  15. Multidimensional Fuel Performance Code: BISON

    SciTech Connect

    2014-09-03

    BISON is a finite element based nuclear fuel performance code applicable to a variety of fuel forms including light water reactor fuel rods, TRISO fuel particles, and metallic rod and plate fuel (Refs. [a, b, c]). It solves the fully-coupled equations of thermomechanics and species diffusion and includes important fuel physics such as fission gas release and material property degradation with burnup. BISON is based on the MOOSE framework (Ref. [d]) and can therefore efficiently solve problems on 1-, 2- or 3-D meshes using standard workstations or large high performance computers. BISON is also coupled to a MOOSE-based mesoscale phase field material property simulation capability (Refs. [e, f]). As described here, BISON includes the code library named FOX, which was developed concurrent with BISON. FOX contains material and behavioral models that are specific to oxide fuels.

  16. Dopamine reward prediction error coding

    PubMed Central

    Schultz, Wolfram

    2016-01-01

    Reward prediction errors consist of the differences between received and predicted rewards. They are crucial for basic forms of learning about rewards and make us strive for more rewards, an evolutionarily beneficial trait. Most dopamine neurons in the midbrain of humans, monkeys, and rodents signal a reward prediction error; they are activated by more reward than predicted (positive prediction error), remain at baseline activity for fully predicted rewards, and show depressed activity with less reward than predicted (negative prediction error). The dopamine signal increases nonlinearly with reward value and codes formal economic utility. Drugs of addiction generate, hijack, and amplify the dopamine reward signal and induce exaggerated, uncontrolled dopamine effects on neuronal plasticity. The striatum, amygdala, and frontal cortex also show reward prediction error coding, but only in subpopulations of neurons. Thus, the important concept of reward prediction errors is implemented in neuronal hardware. PMID:27069377
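    The error signal described above follows a simple update rule. The sketch below uses a Rescorla-Wagner-style value update with an illustrative learning rate; it is a textbook simplification, not a model from the review:

    ```python
    # Minimal reward-prediction-error learning rule:
    # delta = received reward - predicted reward
    alpha = 0.2          # learning rate (illustrative)
    V = 0.0              # predicted reward value

    deltas = []
    for reward in [1.0] * 20:        # the same reward delivered repeatedly
        delta = reward - V           # prediction error
        deltas.append(delta)
        V += alpha * delta           # the prediction improves with each pairing

    assert deltas[0] == 1.0          # unexpected reward: large positive error
    assert abs(deltas[-1]) < 0.02    # fully predicted reward: error near zero
    assert (0.0 - V) < 0             # omitting the reward now yields a negative error
    ```

    This reproduces the three response patterns described: activation for surprising reward, baseline for predicted reward, and a negative error when a predicted reward is omitted.
    
    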

  17. Computer access security code system

    NASA Technical Reports Server (NTRS)

    Collins, Earl R., Jr. (Inventor)

    1990-01-01

    A security code system for controlling access to computer and computer-controlled entry situations comprises a plurality of subsets of alpha-numeric characters disposed in random order in matrices of at least two dimensions forming theoretical rectangles, cubes, etc., such that when access is desired, at least one pair of previously unused character subsets not found in the same row or column of the matrix is chosen at random and transmitted by the computer. The proper response to gain access is transmittal of the subsets which complete the rectangle, and/or a parallelepiped whose opposite corners were defined by the first group of code. Once used, subsets are not used again, to defeat unauthorized access by eavesdropping and the like.
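    The two-dimensional rectangle rule can be sketched in Python. The 4x4 size, the two-character subsets, and the alphabet are invented for illustration; the patent does not fix them:

    ```python
    import random
    import string

    # hypothetical 4x4 matrix of distinct two-character subsets
    random.seed(7)
    pool = random.sample(string.ascii_uppercase + string.digits, 32)
    matrix = [[pool[2 * (4 * r + c)] + pool[2 * (4 * r + c) + 1] for c in range(4)]
              for r in range(4)]

    def challenge(used):
        # pick two previously unused subsets sharing neither a row nor a column
        while True:
            r1, r2 = random.sample(range(4), 2)
            c1, c2 = random.sample(range(4), 2)
            pair = (matrix[r1][c1], matrix[r2][c2])
            if pair not in used:
                used.add(pair)
                # correct response: the two subsets completing the rectangle
                return pair, {matrix[r1][c2], matrix[r2][c1]}

    def respond(pair):
        # a legitimate user locates the challenge subsets in the shared
        # secret matrix and names the other two corners
        pos = {matrix[r][c]: (r, c) for r in range(4) for c in range(4)}
        (r1, c1), (r2, c2) = pos[pair[0]], pos[pair[1]]
        return {matrix[r1][c2], matrix[r2][c1]}

    used = set()
    pair, expected = challenge(used)
    assert respond(pair) == expected
    ```

    Because each challenge pair is retired after one use (the `used` set), an eavesdropper who records a session learns nothing reusable, which is the claimed defense.
    
    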

  18. Investigation of Near Shannon Limit Coding Schemes

    NASA Technical Reports Server (NTRS)

    Kwatra, S. C.; Kim, J.; Mo, Fan

    1999-01-01

    Turbo codes can deliver performance that is very close to the Shannon limit. This report investigates algorithms for convolutional turbo codes and block turbo codes. Both coding schemes can achieve performance near the Shannon limit. The performance of the schemes is obtained using computer simulations. There are three sections in this report. The first section is the introduction. The fundamental knowledge about coding, block coding, and convolutional coding is discussed. In the second section, the basic concepts of convolutional turbo codes are introduced and the performance of turbo codes, especially high rate turbo codes, is provided from the simulation results. After introducing all the parameters that help turbo codes achieve such a good performance, it is concluded that the output weight distribution should be the main consideration in designing turbo codes. Based on the output weight distribution, the performance bounds for turbo codes are given. Then, the relationships between the output weight distribution and factors like the generator polynomial, interleaver, and puncturing pattern are examined. The criterion for the best selection of system components is provided. The puncturing pattern algorithm is discussed in detail. Different puncturing patterns are compared for each high rate. For most of the high rate codes, the puncturing pattern does not show any significant effect on the code performance if a pseudo-random interleaver is used in the system. For some special rate codes with poor performance, an alternative puncturing algorithm is designed which restores their performance close to the Shannon limit. Finally, in section three, for iterative decoding of block codes, the method of building a trellis for block codes, the structure of the iterative decoding system, and the calculation of extrinsic values are discussed.
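    Puncturing, the mechanism by which the high rates above are obtained, can be sketched in Python. The specific pattern below is illustrative, not one of the patterns evaluated in the report:

    ```python
    # A rate-1/2 encoder emits 6 coded bits per 3 information bits; keeping
    # 4 of every 6 with the pattern below raises the code rate to 3/4.
    PATTERN = [1, 1, 0, 1, 1, 0]   # 1 = transmit, 0 = puncture

    def puncture(coded):
        return [b for i, b in enumerate(coded) if PATTERN[i % 6]]

    def depuncture(received, n_coded):
        # reinsert erasures (None) at the punctured positions before decoding
        it = iter(received)
        return [next(it) if PATTERN[i % 6] else None for i in range(n_coded)]

    coded = [1, 0, 1, 1, 0, 0, 0, 1, 1, 0, 1, 0]   # 12 coded bits (6 info bits)
    sent = puncture(coded)
    assert len(sent) == 8                           # 6 info bits / 8 sent = rate 3/4
    back = depuncture(sent, len(coded))
    assert all(b is None or b == coded[i] for i, b in enumerate(back))
    ```

    The decoder treats the reinserted `None` positions as erasures with zero channel information, which is why the choice of which positions to puncture interacts with the output weight distribution discussed above.
    
    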

  19. Future trends in image coding

    NASA Astrophysics Data System (ADS)

    Habibi, Ali

    1993-01-01

    The objective of this article is to present a discussion on the future of image data compression in the next two decades. It is virtually impossible to predict with any degree of certainty the breakthroughs in theory and developments, the milestones in advancement of technology, and the success of the upcoming commercial products in the marketplace, which will be the main factors in shaping the future of image coding. What we propose to do, instead, is look back at the progress in image coding during the last two decades and assess the state of the art in image coding today. Then, by observing the trends in developments of theory, software, and hardware, coupled with the future needs for use and dissemination of imagery data and the constraints on the bandwidth and capacity of various networks, predict the future state of image coding. What seems to be certain today is the growing need for bandwidth compression. Television uses a technology that is half a century old and is ready to be replaced by high-definition television with an extremely high digital bandwidth. Smart telephones coupled with personal computers and TV monitors accommodating both printed and video data will be common in homes and businesses within the next decade. Efficient and compact digital processing modules using developing technologies will make bandwidth-compressed imagery the cheap and preferred alternative in satellite and on-board applications. In view of the above needs, we expect increased activities in development of theory, software, special purpose chips, and hardware for image bandwidth compression in the next two decades. The following sections summarize the future trends in these areas.

  20. Source-Code-Analyzing Program

    NASA Technical Reports Server (NTRS)

    Manteufel, Thomas; Jun, Linda

    1991-01-01

    FORTRAN Static Source Code Analyzer program, SAP, developed to gather statistics automatically on occurrences of statements and structures within FORTRAN program and provide for reporting of those statistics. Provisions made to weight each statistic and provide overall figure of complexity. Statistics, as well as figures of complexity, gathered on module-by-module basis. Overall summed statistics also accumulated for complete input source file. Written in FORTRAN IV.

  1. Anelastic Strain Recovery Analysis Code

    Energy Science and Technology Software Center (ESTSC)

    1995-04-05

    ASR4 is a nonlinear least-squares regression of Anelastic Strain Recovery (ASR) data for the purpose of determining in situ stress orientations and magnitudes. ASR4 fits the viscoelastic model of Warpinski and Teufel to measured ASR data, calculates the stress orientations directly, and calculates stress magnitudes if sufficient input data are available. The code also calculates the stress orientation using strain-rosette equations, and it calculates stress magnitudes using Blanton's approach, assuming sufficient input data are available.

  2. Medical imaging with coded apertures

    SciTech Connect

    Keto, E.; Libby, S.

    1995-06-16

    New algorithms were investigated for image reconstruction in emission tomography which could incorporate complex instrumental effects such as might be obtained with a coded aperture system. The investigation focused on possible uses of the wavelet transform to handle non-stationary instrumental effects and analytic continuation of the Radon transform to handle self-absorption. Neither investigation was completed during the funding period and whether such algorithms will be useful remains an open question.

  3. SLINGSHOT - a Coilgun Design Code

    SciTech Connect

    MARDER, BARRY M.

    2001-09-01

    The Sandia coilgun [1,2,3,4,5] is an inductive electromagnetic launcher. It consists of a sequence of powered, multi-turn coils surrounding a flyway of circular cross-section through which a conducting armature passes. When the armature is properly positioned with respect to a coil, a charged capacitor is switched into the coil circuit. The rising coil currents induce a current in the armature, producing a repulsive accelerating force. The basic numerical tool for modeling the coilgun is the SLINGSHOT code, an expanded, user-friendly successor to WARP-10 [6]. SLINGSHOT computes the currents in the coils and armature, finds the forces produced by those currents, and moves the armature through the array of coils. In this approach, the cylindrically symmetric coils and armature are subdivided into concentric hoops with rectangular cross-section, in each of which the current is assumed to be uniform. The ensemble of hoops are treated as coupled circuits. The specific heats and resistivities of the hoops are found as functions of temperature and used to determine the resistive heating. The code calculates the resistances and inductances for all hoops, and the mutual inductances for all hoop pairs. Using these, it computes the hoop currents from their circuit equations, finds the forces from the products of these currents and the mutual inductance gradient, and moves the armature. Treating the problem as a set of coupled circuits is a fast and accurate approach compared to solving the field equations. Its use, however, is restricted to problems in which the symmetry dictates the current paths. This paper is divided into three parts. The first presents a demonstration of the code. The second describes the input and output. The third part describes the physical models and numerical methods used in the code. It is assumed that the reader is familiar with coilguns.
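    The coupled-circuit force calculation described above can be sketched for a single coil-armature hoop pair. The dipole-approximation mutual inductance and the loop radii and currents below are illustrative stand-ins; SLINGSHOT computes exact mutual inductances for all hoop pairs:

    ```python
    import math

    MU0 = 4e-7 * math.pi   # vacuum permeability, H/m

    def mutual(z, a=0.05, b=0.04):
        # far-field (dipole) approximation to the mutual inductance of two
        # coaxial loops of radii a and b at axial separation z (assumption;
        # the exact expression involves elliptic integrals)
        return MU0 * math.pi * a**2 * b**2 / (2.0 * (a**2 + z**2) ** 1.5)

    def force(z, i_coil, i_arm, dz=1e-6):
        # axial force on the armature hoop: F = I_coil * I_arm * dM/dz,
        # with the mutual-inductance gradient taken by central difference
        dMdz = (mutual(z + dz) - mutual(z - dz)) / (2.0 * dz)
        return i_coil * i_arm * dMdz

    # an armature hoop just past the coil, carrying the opposing induced
    # current, is repelled forward (currents and geometry are illustrative)
    f = force(0.02, 50e3, -40e3)
    assert f > 0
    ```

    Summing such products over every coil-hoop/armature-hoop pair, with currents obtained from the coupled circuit equations, gives the total accelerating force the code applies to the armature.
    
    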

  4. Code blue: what to do?

    PubMed

    Porteous, Joan

    2009-09-01

    Cardiac arrest may occur intraoperatively at any time. The purpose of this article is to help the reader recognize and assist in the management of an intraoperative cardiac arrest. Patients who are at risk for cardiac arrest in the OR are identified, and the different types of pulseless arrhythmias are described. Roles of perioperative personnel are suggested and documentation during the code is discussed. PMID:19830990

  5. Coded continuous wave meteor radar

    NASA Astrophysics Data System (ADS)

    Vierinen, J.; Chau, J. L.; Pfeffer, N.; Clahsen, M.; Stober, G.

    2015-07-01

    The concept of coded continuous wave meteor radar is introduced. The radar uses a continuously transmitted pseudo-random waveform, which has several advantages: coding avoids range aliased echoes, which are often seen with commonly used pulsed specular meteor radars (SMRs); continuous transmissions maximize pulse compression gain, allowing operation with significantly lower peak transmit power; the temporal resolution can be changed after performing a measurement, as it does not depend on pulse spacing; and the low signal-to-noise ratio allows multiple geographically separated transmitters to be used in the same frequency band without significantly interfering with each other. The latter allows the same receiver antennas to be used to receive multiple transmitters. The principles of the signal processing are discussed, along with several practical ways to increase computation speed and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large scale multi-static network of meteor radar transmitters and receivers. This would, for example, provide higher spatio-temporal resolution for mesospheric wind field measurements.
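    The pulse-compression gain of a pseudo-random waveform can be sketched in Python. The code length, echo amplitude, delay, and noise level are invented for illustration; the point is that matched filtering recovers a delay from an echo well below the noise floor:

    ```python
    import random

    random.seed(1)
    N = 1000
    # pseudo-random binary (BPSK) waveform, transmitted continuously
    code = [random.choice((-1, 1)) for _ in range(N)]

    # periodic echo delayed by 217 samples, buried well below the noise floor
    delay = 217
    echo = [0.3 * code[(i - delay) % N] + random.gauss(0.0, 1.0) for i in range(N)]

    # matched filter: cross-correlate the received signal with the known code;
    # pulse compression raises a peak at the true delay out of the noise
    corr = [sum(echo[(i + lag) % N] * code[i] for i in range(N)) for lag in range(N)]
    assert corr.index(max(corr)) == delay
    ```

    The per-sample SNR here is about -10 dB, yet the correlation peak stands roughly a factor of N above the per-sample signal power, which is why coded CW operation tolerates low peak transmit power and lets co-channel transmitters with different codes coexist.
    
    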

  6. Computer Code for Nanostructure Simulation

    NASA Technical Reports Server (NTRS)

    Filikhin, Igor; Vlahovic, Branislav

    2009-01-01

    Due to their small size, nanostructures can have stress and thermal gradients that are larger than any macroscopic analogue. These gradients can lead to specific regions that are susceptible to failure via processes such as plastic deformation by dislocation emission, chemical debonding, and interfacial alloying. A program has been developed that rigorously simulates and predicts optoelectronic properties of nanostructures of virtually any geometrical complexity and material composition. It can be used in simulations of energy level structure, wave functions, density of states of spatially configured phonon-coupled electrons, excitons in quantum dots, quantum rings, quantum ring complexes, and more. The code can be used to calculate stress distributions and thermal transport properties for a variety of nanostructures and interfaces, transport and scattering at nanoscale interfaces and surfaces under various stress states, and alloy compositional gradients. The code allows users to perform modeling of charge transport processes through quantum-dot (QD) arrays as functions of inter-dot distance, array order versus disorder, QD orientation, shape, size, and chemical composition for applications in photovoltaics and physical properties of QD-based biochemical sensors. The code can be used to study the hot exciton formation/relation dynamics in arrays of QDs of different shapes and sizes at different temperatures. It also can be used to understand the relation among the deposition parameters and inherent stresses, strain deformation, heat flow, and failure of nanostructures.

  7. Sensitivity of coded mask telescopes.

    PubMed

    Skinner, Gerald K

    2008-05-20

    Simple formulas are often used to estimate the sensitivity of coded mask x-ray or gamma-ray telescopes, but these are strictly applicable only if a number of basic assumptions are met. Complications arise, for example, if a grid structure is used to support the mask elements, if the detector spatial resolution is not good enough to completely resolve all the detail in the shadow of the mask, or if any of a number of other simplifying conditions are not fulfilled. We derive more general expressions for the Poisson-noise-limited sensitivity of astronomical telescopes using the coded mask technique, noting explicitly in what circumstances they are applicable. The emphasis is on using nomenclature and techniques that result in simple and revealing results. Where no convenient expression is available a procedure is given that allows the calculation of the sensitivity. We consider certain aspects of the optimization of the design of a coded mask telescope and show that when the detector spatial resolution and the mask to detector separation are fixed, the best source location accuracy is obtained when the mask elements are equal in size to the detector pixels. PMID:18493279

  8. Sensitivity of coded mask telescopes

    SciTech Connect

    Skinner, Gerald K

    2008-05-20

    Simple formulas are often used to estimate the sensitivity of coded mask x-ray or gamma-ray telescopes, but these are strictly applicable only if a number of basic assumptions are met. Complications arise, for example, if a grid structure is used to support the mask elements, if the detector spatial resolution is not good enough to completely resolve all the detail in the shadow of the mask, or if any of a number of other simplifying conditions are not fulfilled. We derive more general expressions for the Poisson-noise-limited sensitivity of astronomical telescopes using the coded mask technique, noting explicitly in what circumstances they are applicable. The emphasis is on using nomenclature and techniques that result in simple and revealing results. Where no convenient expression is available a procedure is given that allows the calculation of the sensitivity. We consider certain aspects of the optimization of the design of a coded mask telescope and show that when the detector spatial resolution and the mask to detector separation are fixed, the best source location accuracy is obtained when the mask elements are equal in size to the detector pixels.

  9. User instructions for the CIDER Dose Code

    SciTech Connect

    Eslinger, P.W.; Lessor, K.S.; Ouderkirk, S.J.

    1994-05-01

    This document provides user instructions for the CIDER (Calculation of Individual Doses from Environmental Radionuclides) computer code. The CIDER code computes estimates of annual doses for reference individuals with a known residence and food consumption history. This document also provides user instructions for four utility codes used to build input data libraries for CIDER. These utility codes are ENVFAC (environmental factors), FOOFAC (food factors), LIFFAC (lifestyle factors), and ORGFAC (organ factors). Finally, this document provides user instructions for the EXPAND utility code. The EXPAND code processes a result file from CIDER and extracts a summary of the dose information for reporting or plotting purposes.

  10. Tandem Mirror Reactor Systems Code (Version I)

    SciTech Connect

    Reid, R.L.; Finn, P.A.; Gohar, M.Y.; Barrett, R.J.; Gorker, G.E.; Spampinaton, P.T.; Bulmer, R.H.; Dorn, D.W.; Perkins, L.J.; Ghose, S.

    1985-09-01

    A computer code was developed to model a Tandem Mirror Reactor. This is the first Tandem Mirror Reactor model to couple, in detail, the highly linked physics, magnetics, and neutronic analysis into a single code. This report describes the code architecture, provides a summary description of the modules comprising the code, and includes an example execution of the Tandem Mirror Reactor Systems Code. Results from this code for two sensitivity studies are also included. These studies are: (1) to determine the impact of center cell plasma radius, length, and ion temperature on reactor cost and performance at constant fusion power; and (2) to determine the impact of reactor power level on cost.

  11. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications was analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if either the inner code decoder fails to make a successful decoding or the outer code decoder detects the presence of errors after the inner code decoding. The probability of undetected error of the proposed scheme is derived, and an efficient method for computing this probability is presented. The throughput efficiency of the proposed error control scheme, incorporated with a selective-repeat ARQ retransmission strategy, is analyzed.
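    The accept/retransmit logic of such a scheme can be sketched with toy component codes. A (3,1) repetition code stands in for the inner code and a single parity bit for the outer detector; the report's actual codes are not specified here:

    ```python
    import random

    def inner_encode(bits):            # (3,1) repetition code as a toy inner code
        return [b for bit in bits for b in (bit, bit, bit)]

    def inner_decode(coded):           # majority vote corrects single errors per triple
        return [int(sum(coded[i:i + 3]) >= 2) for i in range(0, len(coded), 3)]

    def outer_detect_ok(bits, parity): # single parity bit as a toy outer detector
        return sum(bits) % 2 == parity

    random.seed(3)
    data = [random.randint(0, 1) for _ in range(32)]
    parity = sum(data) % 2

    attempts = 0
    while True:                        # retransmit until the outer check passes
        attempts += 1
        # binary symmetric channel flipping each coded bit with probability 0.05
        received = [b ^ (random.random() < 0.05) for b in inner_encode(data)]
        decoded = inner_decode(received)
        if outer_detect_ok(decoded, parity):
            break

    assert outer_detect_ok(decoded, parity) and attempts >= 1
    ```

    A single parity bit misses even-weight residual error patterns, so acceptance does not guarantee correctness; that residual probability of undetected error is exactly the quantity the report derives for its (stronger) outer code.
    
    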

  12. Combined trellis coding with asymmetric modulations

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Simon, M. K.

    1985-01-01

    The use of asymmetric signal constellations combined with optimized trellis coding to improve the performance of coded systems without increasing the average or peak power, or changing the bandwidth constraints of a system is discussed. The trellis code, asymmetric signal set, and Viterbi decoder of the system model are examined. The procedures for assigning signals to state transitions of the trellis code are described; the performance of the trellis coding system is evaluated. Examples of AM, QAM, and MPSK modulations with short memory trellis codes are presented.

  13. Minimizing correlation effect using zero cross correlation code in spectral amplitude coding optical code division multiple access

    NASA Astrophysics Data System (ADS)

    Safar, Anuar Mat; Aljunid, Syed Alwee; Arief, Amir Razif; Nordin, Junita; Saad, Naufal

    2012-01-01

    The use of minimal multiple access interference (MAI) in code design is investigated. Applying projection and mapping techniques, a code that has zero cross correlation (ZCC) between users in optical code division multiple access (OCDMA) is presented in this paper. The system is based on an incoherent light source (LED), spectral amplitude coding (SAC), and direct detection techniques at the receiver. Using the power spectral density (PSD) function and a Gaussian approximation, we obtain the signal-to-noise ratio (SNR) and the bit-error rate (BER) to measure the code performance. Making a comparison with other existing codes, e.g., Hadamard, MFH and MDW codes, we show that our code performs better at a BER of 10^-9 in terms of the number of simultaneous users. We also demonstrate the comparison between the theoretical and simulation analyses, where the results are close to one another.
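
As an illustration of the Gaussian-approximation step, a BER can be computed from an SNR with the relation BER = (1/2)erfc(sqrt(SNR/8)) commonly used in SAC-OCDMA analyses; the paper's actual SNR expression depends on its PSD model and is not reproduced here:

```python
import math

def ber_gaussian(snr):
    """Bit-error rate from SNR under the Gaussian approximation
    often used in SAC-OCDMA performance studies."""
    return 0.5 * math.erfc(math.sqrt(snr / 8.0))

# An SNR around 160 (~22 dB) drives the BER below the 1e-9 benchmark
# at which the paper compares code families.
```

Sweeping `snr` over the number of simultaneous users (via the code's SNR formula) reproduces the kind of user-capacity comparison the abstract describes.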

  14. An implicit Smooth Particle Hydrodynamic code

    SciTech Connect

    Charles E. Knapp

    2000-04-01

    An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless and use particles to model the fluids and solids. The implicit code makes use of the Krylov iterative techniques for solving large linear systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix. It uses sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of test cases are discussed, including a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single gas jet problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. For the single gas jet case, the implicit code has been demonstrated to complete the problem in much less time than the explicit code. That problem was, however, very unphysical; nevertheless it demonstrates the potential of the implicit code and is a first step toward a useful implicit SPH code.
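
The Newton-Raphson outer iteration with a numerically differenced Jacobian can be sketched as follows. This toy version solves the linear system directly by Gaussian elimination; the actual code replaces that step with a Krylov iteration (e.g. GMRES) for large sparse systems:

```python
def solve_linear(A, b):
    """Gaussian elimination with partial pivoting (adequate for tiny systems;
    a real implicit SPH code would use a Krylov method here)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= m * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

def newton_fd(f, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Newton-Raphson with a Jacobian built from forward differences,
    mirroring the paper's use of numerical derivatives."""
    x = list(x0)
    for _ in range(max_iter):
        fx = f(x)
        if max(abs(v) for v in fx) < tol:
            return x
        n = len(x)
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):           # one perturbed evaluation per column
            xp = list(x)
            xp[j] += eps
            fp = f(xp)
            for i in range(n):
                J[i][j] = (fp[i] - fx[i]) / eps
        dx = solve_linear(J, [-v for v in fx])
        x = [xi + di for xi, di in zip(x, dx)]
    return x
```

For example, solving x^2 + y^2 = 4 with x = y converges to (sqrt(2), sqrt(2)) in a handful of iterations.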

  15. Accumulate-Repeat-Accumulate-Accumulate-Codes

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Thorpe, Jeremy

    2004-01-01

    Inspired by recently proposed Accumulate-Repeat-Accumulate (ARA) codes [15], in this paper we propose a channel coding scheme called Accumulate-Repeat-Accumulate-Accumulate (ARAA) codes. These codes can be seen as serial turbo-like codes or as a subclass of Low Density Parity Check (LDPC) codes, and they have a projected graph or protograph representation; this allows for a high-speed iterative decoder implementation using belief propagation. An ARAA code can be viewed as a precoded Repeat-and-Accumulate (RA) code with puncturing in concatenation with another accumulator, where simply an accumulator is chosen as the precoder; thus ARAA codes have a very fast encoder structure. Using density evolution on their associated protographs, we find examples of rate-1/2 ARAA codes with maximum variable node degree 4 for which a minimum bit-SNR as low as 0.21 dB from the channel capacity limit can be achieved as the block size goes to infinity. Such a low threshold cannot be achieved by RA or Irregular RA (IRA) or unstructured irregular LDPC codes with the same constraint on the maximum variable node degree. Furthermore, by puncturing the accumulators we can construct families of higher rate ARAA codes with thresholds that stay uniformly close to their respective channel capacity thresholds. Iterative decoding simulation results show comparable performance with the best-known LDPC codes but with very low error floor even at moderate block sizes.
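
The accumulate and repeat building blocks are simple to state. The sketch below chains them in the ARAA order (accumulate, repeat, accumulate, accumulate) but omits the interleavers and puncturing that the real construction requires, so it is only a structural illustration:

```python
def accumulate(bits):
    """Accumulator: running mod-2 sum, y[i] = x[0] ^ x[1] ^ ... ^ x[i]."""
    out, s = [], 0
    for b in bits:
        s ^= b
        out.append(s)
    return out

def repeat(bits, q):
    """Repeat each bit q times."""
    return [b for b in bits for _ in range(q)]

def araa_encode(bits, q=2):
    """Toy ARAA chain: precoder accumulator -> repeat -> accumulator ->
    accumulator. Interleavers and puncturing are deliberately omitted."""
    return accumulate(accumulate(repeat(accumulate(bits), q)))
```

Each stage is a linear-time pass over the bit stream, which is why the abstract can claim a very fast encoder structure.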

  16. Damage predictions for wind turbine components using the LIFE2 computer code

    NASA Astrophysics Data System (ADS)

    Sutherland, Herbert J.

    The LIFE2 computer code is a fatigue/fracture analysis code that is specialized to the analysis of wind turbine components. It is a PC-compatible FORTRAN code that is written in a top-down modular format. The service lifetime of a component can be divided into three phases: crack initiation, growth and coalescence of micro-cracks, and growth of a macro-crack. In the LIFE2 formulation, an S-n fatigue analysis is used to describe the first two phases and a linear, da/dn fracture analysis is used to describe the third phase. The code is divided into five main sections. The first four describe the wind resource, the constitutive properties of the turbine material, the stress state in which the turbine operates and operational parameters for the turbine system. The fifth uses the data files written by the first four sections to calculate the service lifetime of a turbine component. In addition to the main sections, auxiliary sections are included to permit the storage of data and code calculations and to permit the plotting of results. This report describes the computational framework used in the LIFE2 code to evaluate the damage rules cited above. An example problem is presented here to illustrate the capabilities of the code.
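
The macro-crack phase rests on a da/dn growth law. A minimal cycle-count integration of a Paris-type law (a standard form for such analyses; the constants below are purely illustrative, not LIFE2 material data) looks like this:

```python
import math

def paris_cycles(a0, af, dsigma, C, m, Y=1.0, steps=100_000):
    """Integrate the Paris-type crack-growth law da/dN = C * dK**m,
    with dK = Y * dsigma * sqrt(pi * a), from crack length a0 to af.
    Returns the number of stress cycles. All constants are illustrative."""
    N = 0.0
    da = (af - a0) / steps
    a = a0
    for _ in range(steps):
        dK = Y * dsigma * math.sqrt(math.pi * (a + 0.5 * da))  # midpoint rule
        N += da / (C * dK ** m)
        a += da
    return N
```

For m != 2 the integral has a closed form, which makes the routine easy to verify against an exact answer.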

  17. Some optimal partial-unit-memory codes. [time-invariant binary convolutional codes

    NASA Technical Reports Server (NTRS)

    Lauer, G. S.

    1979-01-01

    A class of time-invariant binary convolutional codes is defined, called partial-unit-memory codes. These codes are optimal in the sense of having maximum free distance for given values of R, k (the number of encoder inputs), and mu (the number of encoder memory cells). Optimal codes are given for rates R = 1/4, 1/3, 1/2, and 2/3, with mu not greater than 4 and k not greater than mu + 3, whenever such a code is better than previously known codes. An infinite class of optimal partial-unit-memory codes is also constructed based on equidistant block codes.

  18. User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 1: General ADD code description

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.

    1982-01-01

    This User's Manual contains a complete description of the computer codes known as the AXISYMMETRIC DIFFUSER DUCT code or ADD code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculation with experimental flows. The input/output and general use of the code are described in the first volume. The second volume contains a detailed description of the code including the global structure of the code, list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code which generates coordinate systems for arbitrary axisymmetric ducts.

  19. On the trail of Leonardo

    PubMed Central

    Johnston, P

    1998-01-01

    A night course taken almost 25 years ago sparked an interest in Leonardo da Vinci that has become a passion for a London, Ont., neurosurgeon. Dr. Rolando Del Maestro now boasts one of the largest collections of da Vinci artifacts in North America. PMID:9538858

  20. Physical Activity Monitoring: Gadgets and Uses. Article #6 in a 6-Part Series

    ERIC Educational Resources Information Center

    Mears, Derrick

    2010-01-01

    An early 15th century drawing by Leonardo da Vinci depicted a device that used gears and a pendulum that moved in synchronization with the wearer as he or she walked. This is believed to be the early origins of today's physical activity monitoring devices. Today's devices have vastly expanded on da Vinci's ancient concept with a myriad of options…

  2. Modular optimization code package: MOZAIK

    NASA Astrophysics Data System (ADS)

    Bekar, Kursat B.

    This dissertation addresses the development of a modular optimization code package, MOZAIK, for geometric shape optimization problems in nuclear engineering applications. MOZAIK's first mission, determining the optimal shape of the D2O moderator tank for the current and new beam tube configurations for the Penn State Breazeale Reactor's (PSBR) beam port facility, is used to demonstrate its capabilities and test its performance. MOZAIK was designed as a modular optimization sequence including three primary independent modules: the initializer, the physics and the optimizer, each having a specific task. By using fixed interface blocks among the modules, the code attains its two most important characteristics: generic form and modularity. The benefit of this modular structure is that the contents of the modules can be switched depending on the requirements of accuracy, computational efficiency, or compatibility with the other modules. Oak Ridge National Laboratory's discrete ordinates transport code TORT was selected as the transport solver in the physics module of MOZAIK, and two different optimizers, Min-max and Genetic Algorithms (GA), were implemented in the optimizer module of the code package. A distributed memory parallelism was also applied to MOZAIK via MPI (Message Passing Interface) to execute the physics module concurrently on a number of processors for various states in the same search. Moreover, dynamic scheduling was enabled to enhance load balance among the processors while running MOZAIK's physics module thus improving the parallel speedup and efficiency. In this way, the total computation time consumed by the physics module is reduced by a factor close to M, where M is the number of processors. This capability also encourages the use of MOZAIK for shape optimization problems in nuclear applications because many traditional codes related to radiation transport do not have parallel execution capability. 
A set of computational models based on the existing beam port configuration of the Penn State Breazeale Reactor (PSBR) was designed to test and validate the code package in its entirety, as well as its modules separately. The selected physics code, TORT, and the requisite data such as source distribution, cross-sections, and angular quadratures were comprehensively tested with these computational models. The modular feature and the parallel performance of the code package were also examined using these computational models. Another outcome of these computational models is to provide the necessary background information for determining the optimal shape of the D2O moderator tank for the new beam tube configurations for the PSBR's beam port facility. The first mission of the code package was completed successfully by determining the optimal tank shape which was sought for the current beam tube configuration and two new beam tube configurations for the PSBR's beam port facility. The performance of the new beam tube configurations and the current beam tube configuration were evaluated with the new optimal tank shapes determined by MOZAIK. Furthermore, the performance of the code package with the two different optimization strategies was analyzed, showing that while GA is capable of achieving higher thermal beam intensity for a given beam tube setup, Min-max produces an optimal shape that is more amenable to machining and manufacturing. The optimal D2O moderator tank shape determined by MOZAIK with the current beam port configuration improves the thermal neutron beam intensity at the beam port exit end by 9.5%. Similarly, the new tangential beam port configuration (beam port near the core interface) with the optimal moderator tank shape determined by MOZAIK improves the thermal neutron beam intensity by a factor of 1.4 compared to the existing beam port configuration (with the existing D2O moderator tank).
Another new beam port configuration, radial beam tube configuration, with the optimal moderator tank shape increases the thermal neutron beam intensity at the beam tube exit by a factor of 1.8. All these results indicate that MOZAIK is viable and effective and is ready for deployment to address shape optimization problems involving radiation transport in nuclear engineering applications.
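
The benefit of dynamic scheduling over a static task split can be illustrated with a small makespan simulation. This is a sketch of the load-balancing idea only, not MOZAIK's MPI implementation:

```python
import heapq

def makespan_dynamic(costs, workers):
    """Dynamic (pull) scheduling: each worker takes the next task
    as soon as it becomes idle."""
    free = [0.0] * workers            # next-free time of each worker
    heapq.heapify(free)
    for c in costs:
        t = heapq.heappop(free)       # earliest-idle worker gets the task
        heapq.heappush(free, t + c)
    return max(free)

def makespan_static(costs, workers):
    """Static block scheduling: tasks pre-split into contiguous chunks."""
    n = len(costs)
    chunk = -(-n // workers)          # ceiling division
    return max(sum(costs[i:i + chunk]) for i in range(0, n, chunk))
```

When task costs are uneven (as transport sweeps for different search states are), the dynamic makespan is never worse than the static one, which is why dynamic scheduling improves the parallel speedup.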

  3. Search for optimal distance spectrum convolutional codes

    NASA Technical Reports Server (NTRS)

    Connor, Matthew C.; Perez, Lance C.; Costello, Daniel J., Jr.

    1993-01-01

    In order to communicate reliably and to reduce the required transmitter power, NASA uses coded communication systems on most of its deep space satellites and probes (e.g. Pioneer, Voyager, Galileo, and the TDRSS network). These communication systems use binary convolutional codes. Better codes make the system more reliable and require less transmitter power. However, there are no good construction techniques for convolutional codes. Thus, finding good convolutional codes requires an exhaustive search over the ensemble of all possible codes. In this paper, an efficient convolutional code search algorithm was implemented on an IBM RS6000 Model 580. The combination of algorithm efficiency and computational power enabled us to find, for the first time, the optimal rate 1/2, memory 14, convolutional code.
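
The exhaustive-search idea can be shown at toy scale: encode every short nonzero input with a small rate-1/2 encoder and take the minimum output Hamming weight as a free-distance estimate. The (7, 5) octal generators below are the classic memory-2 textbook example, not one of the paper's memory-14 codes:

```python
def conv_encode(bits, g1=0b111, g2=0b101, m=2):
    """Rate-1/2 feedforward convolutional encoder with generators
    (7, 5) in octal, terminated with m zero-tail bits."""
    state, out = 0, []
    for b in bits + [0] * m:
        state = ((state << 1) | b) & ((1 << (m + 1)) - 1)
        out.append(bin(state & g1).count("1") % 2)
        out.append(bin(state & g2).count("1") % 2)
    return out

def free_distance_estimate(max_len=8):
    """Brute-force minimum codeword weight over all short inputs that
    start with a 1 -- a toy version of an exhaustive ensemble search."""
    best = None
    for n in range(1, max_len + 1):
        for v in range(1 << (n - 1)):
            bits = [1] + [(v >> i) & 1 for i in range(n - 1)]
            w = sum(conv_encode(bits))
            best = w if best is None else min(best, w)
    return best
```

The (7, 5) code has free distance 5; searching memory-14 codes this way is vastly more expensive, which is why the paper's algorithmic efficiency mattered.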

  4. Benchmarking of ECH Codes for ITER

    NASA Astrophysics Data System (ADS)

    Prater, R.

    2005-10-01

    Many computer codes have been developed for wave propagation, absorption, and current drive using electron cyclotron waves. These codes include ray tracing codes, like BANDIT-3D, GENRAY, TORAY-GA, and TORAY-FOM, and Gaussian beam codes like ECWGB (now GRAY), OGRAY, and TORBEAM. For absorption, codes may use analytic models or Fokker-Planck calculations, as in BANDIT-3D, CQL3D, and OGRAY. Detailed comparisons of the codes have been made (with the active participation of their authors) for a projected ITER discharge (Scenario 2). To better test the propagation part of the codes, a discharge with higher density and greater refraction was also used. Several code problems were fixed due to these studies, and the resulting profiles of power density and current density lie within a narrow range. Some issues needing further work will be discussed.

  5. Spatial transform coding of color images.

    NASA Technical Reports Server (NTRS)

    Pratt, W. K.

    1971-01-01

    The application of the transform-coding concept to the coding of color images represented by three primary color planes of data is discussed. The principles of spatial transform coding are reviewed and the merits of various methods of color-image representation are examined. A performance analysis is presented for the color-image transform-coding system. Results of a computer simulation of the coding system are also given. It is shown that, by transform coding, the chrominance content of a color image can be coded with an average of 1.0 bits per element or less without serious degradation. If luminance coding is also employed, the average rate reduces to about 2.0 bits per element or less.

  6. A Better Handoff for Code Officials

    SciTech Connect

    Conover, David R.; Yerkes, Sara

    2010-09-24

    The U.S. Department of Energy's Building Energy Codes Program has partnered with ICC to release the new Building Energy Codes Resource Guide: Code Officials Edition. We created this binder of practical materials for a simple reason: code officials are busy learning and enforcing several codes at once for the diverse buildings across their jurisdictions. This doesn’t leave much time to search www.energycodes.gov, www.iccsafe.org, or the range of other helpful web-based resources for the latest energy codes tools, support, and information. So, we decided to bring the most relevant materials to code officials in a way that works best with their daily routine, and point to where they can find even more. Like a coach’s game plan, the Resource Guide is an "energy playbook" for code officials.

  7. TDRSS telecommunication system PN code analysis

    NASA Technical Reports Server (NTRS)

    Gold, R.

    1977-01-01

    The pseudonoise (PN) code library for the Tracking and Data Relay Satellite System (TDRSS) Services was defined and described. The code library was chosen to minimize user transponder hardware requirements and optimize system performance. Special precautions were taken to insure sufficient code phase separation to minimize cross-correlation sidelobes, and to avoid the generation of spurious code components which would interfere with system performance.

  8. gyrfalcON: N-body code

    NASA Astrophysics Data System (ADS)

    Dehnen, Walter

    2014-02-01

    gyrfalcON (GalaxY simulatoR using falcON) is a full-fledged N-body code using Dehnen's force algorithm of complexity O(N) (falcON); this algorithm is approximately 10 times faster than an optimally coded tree code. The code features individual adaptive time steps and individual (but fixed) softening lengths. gyrfalcON is included in and requires NEMO to run.

  9. Design of additive quantum codes via the code-word-stabilized framework

    SciTech Connect

    Kovalev, Alexey A.; Pryadko, Leonid P.; Dumer, Ilya

    2011-12-15

    We consider design of the quantum stabilizer codes via a two-step, low-complexity approach based on the framework of codeword-stabilized (CWS) codes. In this framework, each quantum CWS code can be specified by a graph and a binary code. For codes that can be obtained from a given graph, we give several upper bounds on the distance of a generic (additive or nonadditive) CWS code, and the lower Gilbert-Varshamov bound for the existence of additive CWS codes. We also consider additive cyclic CWS codes and show that these codes correspond to a previously unexplored class of single-generator cyclic stabilizer codes. We present several families of simple stabilizer codes with relatively good parameters.

  10. Power System Optimization Codes Modified

    NASA Technical Reports Server (NTRS)

    Juhasz, Albert J.

    1999-01-01

    A major modification of and addition to existing Closed Brayton Cycle (CBC) space power system optimization codes was completed. These modifications relate to the global minimum mass search driver programs containing three nested iteration loops comprising iterations on cycle temperature ratio, and three separate pressure ratio iteration loops--one loop for maximizing thermodynamic efficiency, one for minimizing radiator area, and a final loop for minimizing overall power system mass. Using the method of steepest ascent, the code sweeps through the pressure ratio space repeatedly, each time with smaller iteration step sizes, so that the three optimum pressure ratios can be obtained to any desired accuracy for each of the objective functions referred to above (i.e., maximum thermodynamic efficiency, minimum radiator area, and minimum system mass). Two separate options for the power system heat source are available: 1. A nuclear fission reactor can be used. It is provided with a radiation shield (composed of a lithium hydride (LiH) neutron shield and tungsten (W) gamma shield). Suboptions can be used to select the type of reactor (i.e., fast spectrum liquid metal cooled or epithermal high-temperature gas reactor (HTGR)). 2. A solar heat source can be used. This option includes a parabolic concentrator and heat receiver for raising the temperature of the recirculating working fluid. A useful feature of the code modifications is that key cycle parameters are displayed, including the overall system specific mass in kilograms per kilowatt and the system specific power in watts per kilogram, as the results for each temperature ratio are computed. As the minimum mass temperature ratio is encountered, a message is printed out. Several levels of detailed information on cycle state points, subsystem mass results, and radiator temperature profiles are stored for this temperature ratio condition and can be displayed or printed by users.
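
The repeated-sweep, shrinking-step strategy can be sketched in one dimension. This is a generic refinement loop under stated assumptions (a unimodal objective), not the actual CBC driver code:

```python
def refine_maximize(f, lo, hi, passes=6, points=11):
    """Sweep [lo, hi] on a coarse grid, then repeatedly re-sweep a
    narrower window around the best point with a smaller step --
    an illustrative stand-in for the code's shrinking pressure-ratio
    iterations. Assumes f is unimodal on the interval."""
    for _ in range(passes):
        step = (hi - lo) / (points - 1)
        xs = [lo + i * step for i in range(points)]
        best = max(xs, key=f)
        # Narrow the window to one step either side of the best point.
        lo, hi = max(lo, best - step), min(hi, best + step)
    return best
```

Each pass shrinks the step by roughly a factor of (points - 1)/2, so a handful of passes reaches any desired accuracy; running three such loops, one per objective, mirrors the nested structure the abstract describes.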

  11. A Mathematical Representation of the Genetic Code

    NASA Astrophysics Data System (ADS)

    Hill, Vanessa J.; Rowlands, Peter

    Algebraic and geometric representations of the genetic code are used to show how they function in coding for amino acids. The algebra is a 64-part vector quaternion combination, and the geometry is based on the structure of the regular icosidodecahedron. An almost perfect pattern suggests that this is a biologically significant way of representing the genetic code.

  12. Permutation codes for the Laplacian source

    NASA Technical Reports Server (NTRS)

    Townes, S. A.; Oneal, J. B., Jr.

    1984-01-01

    Permutation codes for the Laplacian source are developed. The performance of these codes is evaluated and compared with other quantizers and the rate-distortion function. It is shown that there is a bit-rate region in which the permutation codes outperform certain single-sample quantizers.

  13. Production code control system for hydrodynamics simulations

    SciTech Connect

    Slone, D.M.

    1997-08-18

    We describe how the Production Code Control System (PCCS), written in Perl, has been used to control and monitor the execution of a large hydrodynamics simulation code in a production environment. We have been able to integrate new, disparate, and often independent, applications into the PCCS framework without the need to modify any of our existing application codes. Both users and code developers see a consistent interface to the simulation code and associated applications regardless of the physical platform, whether an MPP, SMP, server, or desktop workstation. We will also describe our use of Perl to develop a configuration management system for the simulation code, as well as a code usage database and report generator. We used Perl to write a backplane that allows us to plug in preprocessors, the hydrocode, postprocessors, visualization tools, persistent storage requests, and other codes. We need only teach PCCS a minimal amount about any new tool or code to essentially plug it in and make it usable to the hydrocode. PCCS has made it easier to link together disparate codes, since using Perl has removed the need to learn the idiosyncrasies of system or RPC programming. The text handling in Perl makes it easy to teach PCCS about new codes, or changes to existing codes.

  14. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection; the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
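
The X.25 outer generator polynomial X^16+X^12+X^5+1 can be exercised directly with bitwise long division over GF(2). The helper names below are illustrative:

```python
def gf2_remainder(bits, poly=0b1_0001_0000_0010_0001, width=16):
    """Remainder of the bit stream (MSB first) divided by the X.25
    generator g(X) = X^16 + X^12 + X^5 + 1, via long division over GF(2)."""
    reg = 0
    for b in bits:
        reg = (reg << 1) | b
        if reg >> width:          # degree reached: subtract (XOR) g(X)
            reg ^= poly
    return reg

def crc16_x25_style(msg_bits):
    """CRC = remainder of msg(X) * X^16 divided by g(X); appending these
    16 check bits to the message yields a multiple of g(X)."""
    return gf2_remainder(msg_bits + [0] * 16)
```

Because message-plus-CRC is a multiple of g(X), the receiver detects errors simply by checking that the remainder of the received word is zero.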

  15. Bar code instrumentation for nuclear safeguards

    SciTech Connect

    Bieber, A.M. Jr.

    1984-01-01

    This paper presents a brief overview of the basic principles of bar codes and the equipment used to make and to read bar code labels, and a summary of some of the more important factors that need to be considered in integrating bar codes into an information system.

  16. SPINK, A Thin Elements Spin Tracking Code

    SciTech Connect

    Luccio, Alfredo U.

    2009-08-04

    Spink is a spin tracking code for spin polarized particles. The code tracks both trajectories in 3D and spin. It works using thick element modeling from MAD and thin element modeling based on the BMT equation to track spin. The code is written in Fortran and typically runs on a Linux platform, either sequentially or MPI-parallel.

  17. The general theory of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Stanley, R. P.

    1993-01-01

    This article presents a self-contained introduction to the algebraic theory of convolutional codes. This introduction is partly a tutorial, but at the same time contains a number of new results which will prove useful for designers of advanced telecommunication systems. Among the new concepts introduced here are the Hilbert series for a convolutional code and the class of compact codes.

  18. A Peer Helpers Code of Behavior.

    ERIC Educational Resources Information Center

    de Rosenroll, David A.

    This document presents a guide for developing a peer helpers code of behavior. The first section discusses issues relevant to the trainers. These issues include whether to give a model directly to the group or whether to engender "ownership" of the code by the group; timing of introduction of the code; and addressing the issue of consequences for…

  19. Ultra-narrow bandwidth voice coding

    DOEpatents

    Holzrichter, John F.; Ng, Lawrence C.

    2007-01-09

    A system of removing excess information from a human speech signal and coding the remaining signal information, transmitting the coded signal, and reconstructing the coded signal. The system uses one or more EM wave sensors and one or more acoustic microphones to determine at least one characteristic of the human speech signal.

  20. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, D.; Kwatra, S. C.

    1992-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.

  1. Subband coding for image data archiving

    NASA Technical Reports Server (NTRS)

    Glover, Daniel; Kwatra, S. C.

    1993-01-01

    The use of subband coding on image data is discussed. An overview of subband coding is given. Advantages of subbanding for browsing and progressive resolution are presented. Implementations for lossless and lossy coding are discussed. Algorithm considerations and simple implementations of subband systems are given.
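
A one-level integer Haar split illustrates the lossless subband idea: pairwise averages form the low band (a browsable half-resolution version) and differences form the high band. The filter choice here is an illustrative assumption, not necessarily the paper's:

```python
def haar_split(signal):
    """One level of an integer Haar subband split (lossless) for an
    even-length signal: low band = floor pairwise means, high band =
    pairwise differences."""
    low, high = [], []
    for i in range(0, len(signal) - 1, 2):
        d = signal[i] - signal[i + 1]
        low.append(signal[i + 1] + d // 2)   # == floor of the pair mean
        high.append(d)
    return low, high

def haar_merge(low, high):
    """Exact inverse of haar_split (integer lifting makes it lossless)."""
    out = []
    for s, d in zip(low, high):
        x1 = s - d // 2
        out.extend([x1 + d, x1])
    return out
```

The low band alone supports browsing; transmitting the high band afterwards restores the original exactly, which is the progressive-resolution property the abstracts describe.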

  2. Mosaic of coded aperture arrays

    DOEpatents

    Fenimore, Edward E.; Cannon, Thomas M.

    1980-01-01

    The present invention pertains to a mosaic of coded aperture arrays which is capable of imaging off-axis sources with minimum detector size. Mosaics of the basic array pattern create a circular (periodic) correlation of the object on a section of the picture plane. This section consists of elements of the central basic pattern as well as elements from neighboring patterns and is a cyclic version of the basic pattern. Since all object points contribute a complete cyclic version of the basic pattern, a section of the picture, which is the size of the basic aperture pattern, contains all the information necessary to image the object with no artifacts.
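
A 1-D toy analogue shows why a cyclic correlation of the picture recovers the object without artifacts. The (7, 3, 1) quadratic-residue difference-set aperture below is an illustrative stand-in for the patented 2-D mosaic, and the flat sidelobe it produces can be subtracted exactly:

```python
def cyclic_convolve(obj, a):
    """Detector picture: object cyclically convolved with the aperture."""
    n = len(a)
    return [sum(obj[k] * a[(j - k) % n] for k in range(n)) for j in range(n)]

def cyclic_correlate(p, g):
    """Cyclic cross-correlation of the picture with the decoding array."""
    n = len(g)
    return [sum(p[(j + k) % n] * g[k] for k in range(n)) for j in range(n)]

# Aperture from the (7, 3, 1) difference set {1, 2, 4} (quadratic residues
# mod 7); its cyclic autocorrelation is 3 at lag 0 and -1 elsewhere under
# balanced (+1/-1) decoding, i.e. delta-like up to a flat offset.
aperture = [0, 1, 1, 0, 1, 0, 0]
decoder = [1 if v else -1 for v in aperture]

def image(obj):
    picture = cyclic_convolve(obj, aperture)   # what the detector records
    d = cyclic_correlate(picture, decoder)     # correlation decoding
    s = sum(picture) // 3                      # total flux (3 open elements)
    return [(v + s) // 4 for v in d]           # d[j] = 4*obj[j] - s exactly
```

Because every object point contributes a complete cyclic version of the pattern, the reconstruction is exact in integer arithmetic, mirroring the "no artifacts" claim.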

  3. Princeton spectral equilibrium code: PSEC

    SciTech Connect

    Ling, K.M.; Jardin, S.C.

    1984-03-01

    A fast computer code has been developed to calculate free-boundary solutions to the plasma equilibrium equation that are consistent with the currents in external coils and conductors. The free-boundary formulation is based on the minimization of a mean-square error epsilon while the fixed-boundary solution is based on a variational principle and spectral representation of the coordinates x(psi,theta) and z(psi,theta). Specific calculations using the Columbia University Torus II, the Poloidal Divertor Experiment (PDX), and the Tokamak Fusion Test Reactor (TFTR) geometries are performed.

  4. Probabilistic coding of quantum states

    SciTech Connect

    Grudka, Andrzej; Wojcik, Antoni; Czechlewski, Mikolaj

    2006-07-15

    We discuss the properties of probabilistic coding of two qubits to one qutrit and generalize the scheme to higher dimensions. We show that the protocol preserves the entanglement between the qubits to be encoded and the environment and can also be applied to mixed states. We present a protocol that enables encoding of n qudits to one qudit of dimension smaller than the Hilbert space of the original system and then allows probabilistic but error-free decoding of any subset of k qudits. We give a formula for the probability of successful decoding.

  5. Pump CFD code validation tests

    NASA Technical Reports Server (NTRS)

    Brozowski, L. A.

    1993-01-01

    Pump CFD code validation tests were accomplished by obtaining nonintrusive flow characteristic data at key locations in generic current liquid rocket engine turbopump configurations. Data were obtained with a laser two-focus (L2F) velocimeter at scaled design flow. Three components were surveyed: a 1970's-designed impeller, a 1990's-designed impeller, and a four-bladed unshrouded inducer. Two-dimensional velocities were measured upstream and downstream of the two impellers. Three-dimensional velocities were measured upstream, downstream, and within the blade row of the unshrouded inducer.

  6. GUIS for scientific code usage

    NASA Astrophysics Data System (ADS)

    Dionne, N.

    1993-12-01

    To achieve high-level functionality, an unadorned GUI based upon an enhanced version of the MIT X11 graphic routines has been integrated into SAIC's MASK code for keystroke-controlled, fully interactive scientific application. Featured run-time capabilities include: a) buffered plot animation, b) mouse-driven data extraction, c) menu-driven parameter editing, d) postscript-based hard copy prints, e) run-state save, f) numerous plot display selections, and g) optional GUI exit/return. A 400-line Fortran-to-X-library interface (written in C) lies at the core of this utility, permitting either serial or concurrent keystroke control.

  7. Using the DEWSBR computer code

    SciTech Connect

    Cable, G.D.

    1989-09-01

    A computer code is described which is designed to determine the fraction of time during which a given ground location is observable from one or more members of a satellite constellation in earth orbit. Ground visibility parameters are determined from the orientation and strength of an appropriate ionized cylinder (used to simulate a beam experiment) at the selected location. Satellite orbits are computed in a simplified two-body approximation computation. A variety of printed and graphical outputs is provided. 9 refs., 50 figs., 2 tabs.

  8. Reconstruction of coded aperture images

    NASA Technical Reports Server (NTRS)

    Bielefeld, Michael J.; Yin, Lo I.

    1987-01-01

    The balanced correlation method and the Maximum Entropy Method (MEM) were implemented to reconstruct a laboratory X-ray source as imaged by a Uniformly Redundant Array (URA) system. Although the MEM method has advantages over the balanced correlation method, it is computationally time consuming because of the iterative nature of its solution. The Massively Parallel Processor (MPP), with its parallel array structure, is ideally suited for such computations. These preliminary results indicate that it is possible to use the MEM method in future coded-aperture experiments with the help of the MPP.

  9. [Olfactory receptors and odour coding].

    PubMed

    Pernollet, Jean-Claude; Sanz, Guenhaël; Briand, Loïc

    2006-09-01

    The first step of olfactory detection involves interactions between odorant molecules and neuronal protein receptors. Odour coding results from the combinatory activation of a set of receptors and rests on their clonal expression and olfactory neuron connection, which lead to the formation of a specific sensory map in the cortex. This system, sufficient to discriminate myriads of odorants with a mere 350 different receptors, allows humans to smell molecules that are not natural (new cooking flavours, synthetic chemicals...). The extreme olfactory genome diversity explains the absence of odour semantics. Olfactory receptors are also involved in cellular chemotaxis. PMID:16945834

  10. High rate concatenated coding systems using multidimensional bandwidth-efficient trellis inner codes

    NASA Technical Reports Server (NTRS)

    Deng, Robert H.; Costello, Daniel J., Jr.

    1989-01-01

    A concatenated coding system using two-dimensional trellis-coded MPSK inner codes and Reed-Solomon outer codes for application in high-speed satellite communication systems was proposed previously by the authors (1989). The authors extend their results to systems using symbol-oriented, multidimensional, trellis-coded MPSK inner codes. The concatenated coding systems are divided into two classes according to their achievable effective information rates. The first class uses multidimensional trellis-coded 8-PSK inner codes and achieves effective information rates around 1 b/dimension (spectral efficiency 2 b/s/Hz). The second class employs multidimensional trellis-coded 16-PSK inner codes and provides effective information rates around 1.5 b/dimension (spectral efficiency 3 b/s/Hz). Both classes provide significant coding gains over an uncoded reference system with the same effective information rate as the coded system. The results show that the symbol-oriented nature of multidimensional inner codes can provide an improvement of up to 1 dB in the overall performance of a concatenated coding system when these codes replace bit-oriented two-dimensional codes.

  11. Identification of a cDNA coding for a fifth form of myelin basic protein in mouse.

    PubMed Central

    Newman, S; Kitamura, K; Campagnoni, A T

    1987-01-01

    The primary sequences of four molecular mass variants (14, 17, 18.5, and 21.5 kDa) of the mouse myelin basic protein (MBP) have recently been determined through analysis of cDNA clones of their mRNAs. The mRNAs coding for the four MBP variants are thought to arise by differential splicing of two exons (exons 2 and 6) from a single gene. In contrast, exons 2 and 5 may be spliced out in the posttranscriptional processing of the human MBP gene. To investigate the possibility that a third exon (exon 5) may also be differentially spliced out in the processing of the mouse MBP gene transcript, a mouse cDNA library was screened to search for cDNAs missing exon 5. An MBP cDNA was isolated whose coding region specified a fifth mouse MBP variant with a molecular mass of approximately 17 kDa. The mass of this variant (17,257 Da) is so close to that of the other 17-kDa mouse MBP (17,224 Da) that the two would be indistinguishable on NaDodSO4/polyacrylamide gels. Analysis of the sequence of the cDNA clone indicates that excision of exons 2 and 5 of the mouse MBP gene would produce the mRNA encoding this newly described 17-kDa MBP, whereas excision of exon 6 would produce the mRNA for the other 17-kDa MBP variant. Thus, the "17-kDa" mouse MBP consists of at least two molecular forms with very similar molecular masses but markedly different primary sequences. Of five full-length or near full-length cDNAs representing 17-kDa MBPs, one was missing exons 2 and 5 and four were missing exon 6. PMID:2433693

  12. Comparison of two computer codes for crack growth analysis: NASCRAC versus NASA/FLAGRO

    NASA Technical Reports Server (NTRS)

    Stallworth, Roderick; Meyers, Charles A.; Stinson, Helen C.

    1988-01-01

    The service life calculations of two computer codes, NASCRAC and NASA/FLAGRO, are compared. The analysis technique is based on linear elastic fracture mechanics (LEFM), in which stresses remain below the yield strength of an elastic/plastic material. To perform service life calculations, a relationship expressing incremental crack growth, da/dN, as a function of loading, geometry, and material properties is necessary. Load and geometry are expressed in terms of the cyclic stress intensity factor, delta K. The crack growth rate as a function of delta K is then determined by material tests, plotting da/dN versus delta K for the given material, loading condition, and environment. Crack growth rate equations such as the Paris, Walker, and modified Forman equations are used to obtain a best-fit curve to the laboratory da/dN versus delta K data.
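The service-life integration the abstract outlines can be sketched with the Paris equation, da/dN = C (delta K)^m: cycles accumulate as the crack grows from its initial to its critical depth. The material constants, geometry factor, and crack sizes below are hypothetical illustration values, not data from NASCRAC or NASA/FLAGRO.

```python
import math

# Paris-law life estimate with hypothetical values (illustration only):
# da/dN = C * (delta K)^m.
C, m = 1.0e-12, 3.0          # da/dN in m/cycle for delta K in MPa*sqrt(m)
dsigma = 100.0               # cyclic stress range, MPa
Y = 1.12                     # geometry factor, assumed constant here
a0, ac = 1.0e-3, 1.0e-2      # initial and critical crack depths, m

def delta_K(a):
    """Cyclic stress-intensity-factor range at crack depth a."""
    return Y * dsigma * math.sqrt(math.pi * a)

# Service life: integrate dN = da / (C * delta_K(a)**m) from a0 to ac.
steps = 100_000
da = (ac - a0) / steps
N, a = 0.0, a0
for _ in range(steps):
    N += da / (C * delta_K(a) ** m)
    a += da
print(f"estimated service life: {N:.2e} cycles")
```

A production code replaces the constant geometry factor with a configuration-dependent stress-intensity solution and the simple Paris fit with the Walker or modified Forman forms, which is where the two codes being compared can diverge.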

  13. The Clawpack Community of Codes

    NASA Astrophysics Data System (ADS)

    Mandli, K. T.; LeVeque, R. J.; Ketcheson, D.; Ahmadia, A. J.

    2014-12-01

    Clawpack, the Conservation Laws Package, has long been one of the standards for solving hyperbolic conservation laws, but over the years it has extended well beyond this role. Today a community of open-source codes has developed that addresses a multitude of different needs, including non-conservative balance laws, high-order accurate methods, and parallelism, while remaining extensible and easy to use, largely through the judicious use of Python around the original Fortran codes that it wraps. This talk will present some of the recent developments in projects under the Clawpack umbrella, notably the GeoClaw and PyClaw projects. GeoClaw was originally developed as a tool for simulating tsunamis using adaptive mesh refinement but has since encompassed a large number of other geophysically relevant flows, including storm surge and debris flows. PyClaw originated as a Python version of the original Clawpack algorithms but has since become both a testing ground for new algorithmic advances in the Clawpack framework and an easily extensible framework for solving hyperbolic balance laws. Some of these extensions include the addition of WENO high-order methods, massively parallel capabilities, and adaptive mesh refinement technologies, made possible largely by the flexibility of the Python language and community libraries such as NumPy and PETSc. Because of the tight integration with Python technologies, both packages have also benefited from the focus on reproducibility in the Python community, notably IPython notebooks.

  14. Transform coding for space applications

    NASA Technical Reports Server (NTRS)

    Glover, Daniel

    1993-01-01

    Data compression coding requirements for aerospace applications differ somewhat from the compression requirements for entertainment systems. On the one hand, entertainment applications are bit rate driven, with the goal of getting the best quality possible with a given bandwidth. Science applications are quality driven, with the goal of getting the lowest bit rate for a given level of reconstruction quality. In the past, the required quality level has been nothing less than perfect, allowing only the use of lossless compression methods (if that). With the advent of better, faster, cheaper missions, an opportunity has arisen for lossy data compression methods to find a use in science applications as requirements for perfect-quality reconstruction run into cost constraints. This paper presents a review of the data compression problem from the space application perspective. Transform coding techniques are described and some simple, integer transforms are presented. The application of these transforms to space-based data compression problems is discussed. Integer transforms have an advantage over conventional transforms in computational complexity. Space applications differ from broadcast or entertainment in that it is desirable to have a simple encoder (in space) and tolerate a more complicated decoder (on the ground) rather than vice versa. Energy compaction with the new transforms is compared with the Walsh-Hadamard (WHT), Discrete Cosine (DCT), and Integer Cosine (ICT) transforms.
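The Walsh-Hadamard transform mentioned above is the archetypal integer transform: it needs only additions and subtractions, and on correlated data it packs most of the signal energy into the low-order coefficients, which is what makes subsequent quantization and coding cheap. A minimal sketch (the 8-sample ramp is an arbitrary stand-in for smooth sensor data):

```python
import numpy as np

def wht(x):
    """Fast Walsh-Hadamard transform: only integer adds and subtracts."""
    x = np.asarray(x, dtype=np.int64).copy()
    n = len(x)
    h = 1
    while h < n:                           # log2(n) butterfly stages
        for i in range(0, n, 2 * h):
            for j in range(i, i + h):
                x[j], x[j + h] = x[j] + x[j + h], x[j] - x[j + h]
        h *= 2
    return x

# A smooth 8-sample ramp standing in for highly correlated sensor data.
signal = [10, 12, 14, 16, 18, 20, 22, 24]
coeffs = wht(signal)
energy = coeffs.astype(float) ** 2
dc_share = energy[0] / energy.sum()        # energy compaction into DC term
print(coeffs, dc_share)
```

Here over 90% of the (unnormalized) transform energy lands in the first coefficient; a lossy coder would quantize the small high-order coefficients coarsely or drop them, with a simple all-integer encoder suitable for flight hardware.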

  15. Segmentation-based video coding

    SciTech Connect

    Lades, M.; Wong, Yiu-fai; Li, Qi

    1995-10-01

    Low bit rate video coding is gaining attention through a current wave of consumer-oriented multimedia applications which aim, e.g., at video conferencing over telephone lines or at wireless communication. In this work we describe a new segmentation-based approach to video coding which belongs to a class of paradigms that appears very promising among the various proposed methods. Our method uses a nonlinear measure of local variance to identify the smooth areas in an image in a more indicative and robust fashion: first, the local minima in the variance image are identified. These minima then serve as seeds for the segmentation of the image with a watershed algorithm. Regions and their contours are extracted. Motion compensation is used to predict the change of regions between previous frames and the current frame. The error signal is then quantized. To reduce the number of regions and contours, we use the motion information to assist the segmentation process and merge regions, resulting in a further reduction in bit rate. Our scheme has been tested and good results have been obtained.
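The first stage of such a scheme, a variance map whose minima seed the watershed, can be sketched as follows. Ordinary 5x5 local variance stands in here for the paper's nonlinear measure, and the synthetic two-region image is an arbitrary test input.

```python
import numpy as np

# Synthetic two-region image with additive noise.
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, 32:] = 100.0                       # step edge between two flat regions
img += rng.normal(0.0, 1.0, img.shape)    # sensor noise

# Local variance over all 5x5 windows (valid region, shape 60x60).
windows = np.lib.stride_tricks.sliding_window_view(img, (5, 5))
var = windows.var(axis=(-2, -1))

# Smooth interiors give near-noise variance; windows straddling the edge
# give variance orders of magnitude larger, so variance minima mark the
# smooth areas that seed the watershed segmentation.
smooth = var[:, 10:20].mean()
edge = var[:, 28:36].mean()
print(smooth, edge)
```

The large separation between interior and edge variance is what makes the minima robust seeds; the watershed then grows regions outward until they meet along the high-variance ridges.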

  16. Development of the Code RITRACKS

    NASA Technical Reports Server (NTRS)

    Plante, Ianik; Cucinotta, Francis A.

    2013-01-01

    A document discusses the code RITRACKS (Relativistic Ion Tracks), which was developed to simulate heavy ion track structure at the microscopic and nanoscopic scales. It is a Monte-Carlo code that simulates the production of radiolytic species in water, event by event, and may be used to simulate tracks and to calculate dose in targets and voxels of different sizes. The dose deposited by the radiation can be calculated in nanovolumes (voxels). RITRACKS allows simulation of radiation tracks without extensive knowledge of computer programming or Monte-Carlo simulations. It is installed as a regular application on Windows systems. The main input parameters entered by the user are the type and energy of the ion, the length and size of the irradiated volume, the number of ions impacting the volume, and the number of histories. The simulation can be started after the input parameters are entered in the GUI. The number of each kind of interaction for each track is shown in the result details window. The tracks can be visualized in 3D after the simulation is complete. It is also possible to see the time evolution of the tracks and to zoom in on specific parts of the tracks. The software RITRACKS can be very useful for radiation scientists investigating various problems in the fields of radiation physics, radiation chemistry, and radiation biology. For example, it can be used to simulate electron ejection experiments (radiation physics).

  17. Neural Coding for Effective Rehabilitation

    PubMed Central

    2014-01-01

    Successful neurological rehabilitation depends on accurate diagnosis, effective treatment, and quantitative evaluation. Neural coding, a technology for interpretation of functional and structural information of the nervous system, has contributed to advancements in neuroimaging, brain-machine interfaces (BMI), and the design of training devices for rehabilitation purposes. In this review, we summarize the latest breakthroughs in neuroimaging from the microscale to the macroscale level with potential diagnostic applications for rehabilitation. We also review the achievements in electrocorticography (ECoG) coding with both animal models and human beings for BMI design, electromyography (EMG) interpretation for interaction with external robotic systems, and robot-assisted quantitative evaluation of the progress of rehabilitation programs. Future rehabilitation will be more home-based, automated, and self-administered by patients. Further investigations and breakthroughs are mainly needed in improving the computational efficiency of neuroimaging and multichannel ECoG by selection of localized neuroinformatics, validating the effectiveness of BMI-guided rehabilitation programs, and simplifying the system operation of training devices. PMID:25258708

  18. Fundamentals of the DIGES code

    SciTech Connect

    Simos, N.; Philippacopoulos, A.J.

    1994-08-01

    Recently the authors have completed the development of the DIGES code (Direct GEneration of Spectra) for the US Nuclear Regulatory Commission. This paper presents the fundamental theoretical aspects of the code. The basic modeling involves a representation of typical building-foundation configurations as multi-degree-of-freedom dynamic systems which are subjected to dynamic inputs in the form of applied forces or pressures at the superstructure or in the form of ground motions. Both the deterministic as well as the probabilistic aspects of DIGES are described. Alternate ways of defining the seismic input for the estimation of in-structure spectra, and their consequences in terms of realistically appraising the variability of the structural response, are discussed in detail. These include definitions of the seismic input by ground acceleration time histories, ground response spectra, Fourier amplitude spectra or power spectral densities. Conversions of one of these forms to another, due to requirements imposed by certain analysis techniques, have been shown to lead, in certain cases, to controversial results. Further considerations include the definition of the seismic input as the excitation which is directly applied at the foundation of a structure or as the ground motion of the site of interest at a given point. In the latter case issues related to the transferring of this motion to the foundation through convolution/deconvolution and generally through kinematic interaction approaches are considered.

  19. DNA: Polymer and molecular code

    NASA Astrophysics Data System (ADS)

    Shivashankar, G. V.

    1999-10-01

    The thesis work focuses upon two aspects of DNA, the polymer and the molecular code. Our approach was to bring single-molecule micromanipulation methods to the study of DNA. It included a home-built optical microscope combined with an atomic force microscope and an optical tweezer. This combined approach led to a novel method to graft a single DNA molecule onto a force cantilever using the optical tweezer and local heating. With this method, a force versus extension assay of double-stranded DNA was realized. The resolution was about 10 picoN. To improve on this force measurement resolution, a simple light backscattering technique was developed and used to probe the DNA polymer flexibility and its fluctuations. It combined the optical tweezer to trap a DNA-tethered bead and laser backscattering to detect the bead's Brownian fluctuations. With this technique the resolution was about 0.1 picoN with a millisecond access time, and the whole entropic part of the DNA force-extension curve was measured. With this experimental strategy, we measured the polymerization of the protein RecA on an isolated double-stranded DNA. We observed the progressive decoration of RecA on the λ DNA molecule, which results in the extension of λ due to unwinding of the double helix. The dynamics of polymerization, the resulting change in the DNA entropic elasticity and the role of ATP hydrolysis were the main parts of the study. A simple model for RecA assembly on DNA was proposed. This work presents a first step in the study of genetic recombination. Recently we have started a study of equilibrium binding which utilizes fluorescence polarization methods to probe the polymerization of RecA on single-stranded DNA. In addition to the study of material properties of DNA and DNA-RecA, we have developed experiments for which the code of the DNA is central. We studied one aspect of DNA as a molecular code, using different techniques.
    In particular, the programmatic use of template specificity makes gene expression a prime example of a biological code. We developed a novel method of making DNA micro-arrays, the so-called DNA chip. Using the optical tweezer concept, we were able to pattern biomolecules on a solid substrate, developing a new type of sub-micron laser lithography. A laser beam is focused onto a thin gold film on a glass substrate. Laser ablation of the gold results in local aggregation of nanometer-scale beads conjugated with small DNA oligonucleotides, with sub-micron resolution. This leads to specific detection of cDNA and RNA molecules. We built a simple micro-array fabrication and detection setup in the laboratory, based on this method, to probe addressable pools (genes, proteins or antibodies). We have lately used molecular beacons (single-stranded DNA with a stem-loop structure containing a fluorophore and quencher) for the direct detection of unlabelled mRNA. As a first step towards a study of the dynamics of the biological code, we have begun to examine the patterns of gene expression during virus (T7 phage) infection of E. coli bacteria.

  20. Protograph LDPC Codes Over Burst Erasure Channels

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Jones, Christopher

    2006-01-01

    In this paper we design high-rate protograph-based LDPC codes suitable for binary erasure channels. To simplify the encoder and decoder implementation for high-data-rate transmission, the structure of the codes is based on protographs and circulants. These LDPC codes can improve data link and network layer protocols in support of communication networks. Two classes of codes were designed. One class is designed for large block sizes with an iterative decoding threshold that approaches the capacity of binary erasure channels. The other class is designed for short block sizes based on maximizing the minimum stopping set size. For high code rates and short blocks the second class outperforms the first class.
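On a binary erasure channel, iterative LDPC decoding reduces to "peeling": repeatedly find a parity check containing exactly one erased bit and solve for it, which is why the minimum stopping set size governs short-block performance. The sketch below runs this peeling decoder on a small (7,4) Hamming parity-check matrix, a stand-in for an actual protograph-based matrix.

```python
import numpy as np

# Parity-check matrix of a (7,4) Hamming code, standing in for a
# protograph LDPC matrix built from circulants.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def peel(received):
    """Resolve erasures (None) via checks containing exactly one unknown."""
    word = list(received)
    progress = True
    while progress:
        progress = False
        for row in H:
            unknown = [i for i in range(len(word))
                       if row[i] and word[i] is None]
            if len(unknown) == 1:
                i = unknown[0]
                # The erased bit equals the XOR of the known bits in the check.
                word[i] = sum(word[j] for j in range(len(word))
                              if row[j] and j != i) % 2
                progress = True
    return word

# Codeword [1,0,1,1,0,1,0] with a 2-symbol erasure burst in positions 1-2.
decoded = peel([1, None, None, 1, 0, 1, 0])
print(decoded)
```

Decoding stalls exactly when every check touching the remaining erasures has two or more unknowns, i.e., when the erasures cover a stopping set; maximizing the minimum stopping set size therefore maximizes the guaranteed-correctable erasure pattern.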

  1. Bar Coding and Tracking in Pathology.

    PubMed

    Hanna, Matthew G; Pantanowitz, Liron

    2015-06-01

    Bar coding and specimen tracking are intricately linked to pathology workflow and efficiency. In the pathology laboratory, bar coding facilitates many laboratory practices, including specimen tracking, automation, and quality management. Data obtained from bar coding can be used to identify, locate, standardize, and audit specimens to achieve maximal laboratory efficiency and patient safety. Variables that need to be considered when implementing and maintaining a bar coding and tracking system include assets to be labeled, bar code symbologies, hardware, software, workflow, and laboratory and information technology infrastructure as well as interoperability with the laboratory information system. This article addresses these issues, primarily focusing on surgical pathology. PMID:26065787

  2. Bar Coding and Tracking in Pathology.

    PubMed

    Hanna, Matthew G; Pantanowitz, Liron

    2016-03-01

    Bar coding and specimen tracking are intricately linked to pathology workflow and efficiency. In the pathology laboratory, bar coding facilitates many laboratory practices, including specimen tracking, automation, and quality management. Data obtained from bar coding can be used to identify, locate, standardize, and audit specimens to achieve maximal laboratory efficiency and patient safety. Variables that need to be considered when implementing and maintaining a bar coding and tracking system include assets to be labeled, bar code symbologies, hardware, software, workflow, and laboratory and information technology infrastructure as well as interoperability with the laboratory information system. This article addresses these issues, primarily focusing on surgical pathology. PMID:26851661

  3. A code of professional conduct for members.

    PubMed

    2006-09-01

    In light of new legislation and changing practice, together with the impending legal status of members who practise clinical photography and/or clinical videography, the Institute of Medical Illustrators (IMI) has revised and brought together A Code of Responsible Practice and its Code of Conduct. The new document, A Code of Professional Conduct for Members, details the standards required to maintain professional practice. Within the text, the Code refers to members, and where specifically appropriate, to clinical photographers. The title, 'clinical photographer', is used where the code applies to members practising clinical photography and/or videography. PMID:17162339

  4. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly effective, distortionless coding of source ensembles.
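The idea can be made concrete with a Hamming (7,4) code: a sparse 7-bit source block containing at most one 1 is treated as an "error pattern" and compresses losslessly to its 3-bit syndrome, exactly as a Hamming decoder would locate a single bit error. This toy example is ours, not the paper's construction.

```python
import numpy as np

# Hamming (7,4) parity-check matrix: column i (1-based) is i in binary,
# so the syndrome of a weight-1 pattern names the position of its 1.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])

def compress(block):
    """7 source bits -> 3 syndrome bits (lossless iff at most one 1)."""
    return tuple(int(b) for b in H @ block % 2)

def decompress(syndrome):
    """The syndrome is the 1-based binary index of the single set bit."""
    idx = 4 * syndrome[0] + 2 * syndrome[1] + syndrome[2]
    block = [0] * 7
    if idx:
        block[idx - 1] = 1
    return block

# A sparse source block with a single 1 in position 5 (column 6 = 110).
src = np.array([0, 0, 0, 0, 0, 1, 0])
syn = compress(src)
print(syn, decompress(syn))
```

Here 7 source digits compress to 3, close to the entropy of a source that emits a 1 rarely; blocks with two or more 1s would decompress incorrectly, which is the (small) distortion the theorem drives to zero with longer codes.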

  5. National Agenda for Hydrogen Codes and Standards

    SciTech Connect

    Blake, C.

    2010-05-01

    This paper provides an overview of hydrogen codes and standards with an emphasis on the national effort supported and managed by the U.S. Department of Energy (DOE). With the help and cooperation of standards and model code development organizations, industry, and other interested parties, DOE has established a coordinated national agenda for hydrogen and fuel cell codes and standards. With the adoption of the Research, Development, and Demonstration Roadmap and with its implementation through the Codes and Standards Technical Team, DOE helps strengthen the scientific basis for requirements incorporated in codes and standards that, in turn, will facilitate international market receptivity for hydrogen and fuel cell technologies.

  6. Code of Ethics: Principles for Ethical Leadership

    PubMed Central

    Flite, Cathy A.; Harman, Laurinda B.

    2013-01-01

    The code of ethics for a professional association incorporates values, principles, and professional standards. A review and comparative analysis of a 1934 pledge and codes of ethics from 1957, 1977, 1988, 1998, 2004, and 2011 for a health information management association was conducted. Highlights of some changes in the healthcare delivery system are identified as a general context for the codes of ethics. The codes of ethics are examined in terms of professional values and changes in the language used to express the principles of the various codes. PMID:23346028

  7. High dynamic range coding imaging system

    NASA Astrophysics Data System (ADS)

    Wu, Renfan; Huang, Yifan; Hou, Guangqi

    2014-10-01

    We present a high dynamic range (HDR) imaging system design scheme based on the coded aperture technique. This scheme can help us obtain HDR images which have extended depth of field. We adopt a sparse coding algorithm to design the coded patterns. Then we utilize the sensor unit to acquire coded images under different exposure settings. With the guidance of the multiple exposure parameters, a series of low dynamic range (LDR) coded images is reconstructed. We use some existing algorithms to fuse those LDR images into an HDR image for display. We build an optical simulation model and obtain simulation images to verify the novel system.

  8. Optimal Codes for the Burst Erasure Channel

    NASA Technical Reports Server (NTRS)

    Hamkins, Jon

    2010-01-01

    Deep space communications over noisy channels lead to certain packets that are not decodable. These packets leave gaps, or bursts of erasures, in the data stream. Burst erasure correcting codes overcome this problem. These are forward erasure correcting codes that allow one to recover the missing gaps of data. Much of the recent work on this topic concentrated on Low-Density Parity-Check (LDPC) codes. These are more complicated to encode and decode than Single Parity Check (SPC) codes or Reed-Solomon (RS) codes, and so far have not been able to achieve the theoretical limit for burst erasure protection. A block interleaved maximum distance separable (MDS) code (e.g., an SPC or RS code) offers near-optimal burst erasure protection, in the sense that no other scheme of equal total transmission length and code rate could improve the guaranteed correctible burst erasure length by more than one symbol. The optimality does not depend on the length of the code, i.e., a short MDS code block interleaved to a given length would perform as well as a longer MDS code interleaved to the same overall length. As a result, this approach offers lower decoding complexity with better burst erasure protection compared to other recent designs for the burst erasure channel (e.g., LDPC codes). A limitation of the design is its lack of robustness to channels that have impairments other than burst erasures (e.g., additive white Gaussian noise), making its application best suited for correcting data erasures in layers above the physical layer. The efficiency of a burst erasure code is the length of its burst erasure correction capability divided by the theoretical upper limit on this length. The inefficiency is one minus the efficiency. 
The illustration compares the inefficiency of interleaved RS codes to Quasi-Cyclic (QC) LDPC codes, Euclidean Geometry (EG) LDPC codes, extended Irregular Repeat Accumulate (eIRA) codes, array codes, and random LDPC codes previously proposed for burst erasure protection. As can be seen, the simple interleaved RS codes have substantially lower inefficiency over a wide range of transmission lengths.
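The block-interleaving idea can be sketched directly: with single-parity-check codewords interleaved to depth I, a burst of up to I erasures touches each codeword at most once, and each codeword repairs its single erasure from parity. The (8,7) SPC code, depth 4, and burst position below are arbitrary illustration choices.

```python
import numpy as np

# Depth-4 block interleaving of (8,7) single-parity-check codewords.
rng = np.random.default_rng(1)
depth, n = 4, 8
data = rng.integers(0, 2, size=(depth, n - 1))
parity = data.sum(axis=1, keepdims=True) % 2
codewords = np.hstack([data, parity])      # every row sums to 0 mod 2

tx = codewords.T.flatten().astype(float)   # transmit column by column
tx[10:14] = np.nan                         # a 4-symbol erasure burst

rx = tx.reshape(n, depth).T                # de-interleave into codewords
for row in rx:
    gap = np.isnan(row)                    # at most one erasure per codeword
    if gap.any():
        row[gap] = np.nansum(row) % 2      # SPC repair: XOR of the known bits
recovered = rx.astype(int)
print((recovered == codewords).all())
```

Replacing the SPC code with an interleaved Reed-Solomon code extends the same column-by-column dispersal to multi-symbol correction per codeword, which is what yields the near-optimal guaranteed burst length the abstract describes.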

  9. OVERAERO-MPI: Parallel Overset Aeroelasticity Code

    NASA Technical Reports Server (NTRS)

    Gee, Ken; Rizk, Yehia M.

    1999-01-01

    An overset modal structures analysis code was integrated with a parallel overset Navier-Stokes flow solver to obtain a code capable of static aeroelastic computations. The new code was used to compute the static aeroelastic deformation of an arrow-wing-body geometry and a complex, full aircraft configuration. For the simple geometry, the results were similar to the results obtained with the ENSAERO code and the PVM version of OVERAERO. The full potential of this code suite was illustrated in the complex, full aircraft computations.

  10. Coding region: the neglected post-transcriptional code.

    PubMed

    Lee, Eun Kyung; Gorospe, Myriam

    2011-01-01

    The control of mammalian mRNA turnover and translation has been linked almost exclusively to specific cis-elements within the 5'- and 3'-untranslated regions (UTRs) of the mature mRNA. However, instances of regulated turnover and translation via cis-elements within the coding region (CR) of mRNAs are accumulating. Here, we describe the regulation of post-transcriptional fate through trans-binding factors (RNA-binding proteins and microRNAs) that function via CR sequences. We discuss how the CR enriches the post-transcriptional control of gene expression, and predict that new high-throughput technologies will enable a more mainstream study of CR-governed gene regulation. PMID:21289484

  11. Understanding Mixed Code and Classroom Code-Switching: Myths and Realities

    ERIC Educational Resources Information Center

    Li, David C. S.

    2008-01-01

    Background: Cantonese-English mixed code is ubiquitous in Hong Kong society, and yet using mixed code is widely perceived as improper. This paper presents evidence of mixed code being socially constructed as bad language behavior. In the education domain, an EDB guideline bans mixed code in the classroom. Teachers are encouraged to stick to…

  12. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 24 Housing and Urban Development 2 2010-04-01 2010-04-01 false Model code provisions for use in... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code..., those portions of one of the model codes with which the property must comply. Schedule for Model...

  13. Using Coding Apps to Support Literacy Instruction and Develop Coding Literacy

    ERIC Educational Resources Information Center

    Hutchison, Amy; Nadolny, Larysa; Estapa, Anne

    2016-01-01

    In this article the authors present the concept of Coding Literacy and describe the ways in which coding apps can support the development of Coding Literacy and disciplinary and digital literacy skills. Through detailed examples, we describe how coding apps can be integrated into literacy instruction to support learning of the Common Core English…

  14. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 2 2013-04-01 2013-04-01 false Model code provisions for use in... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code..., those portions of one of the model codes with which the property must comply. Schedule for Model...

  15. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 2 2012-04-01 2012-04-01 false Model code provisions for use in... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code..., those portions of one of the model codes with which the property must comply. Schedule for Model...

  16. 24 CFR 200.926c - Model code provisions for use in partially accepted code jurisdictions.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 2 2014-04-01 2014-04-01 false Model code provisions for use in... Minimum Property Standards § 200.926c Model code provisions for use in partially accepted code..., those portions of one of the model codes with which the property must comply. Schedule for Model...

  17. Breeding quantum error-correcting codes

    SciTech Connect

    Dong Ying; Hu Dan; Yu Sixia

    2010-02-15

    The stabilizer code, one major family of quantum error-correcting codes (QECC), is specified by the joint eigenspace of a commuting set of Pauli observables. It turns out that noncommuting sets of Pauli observables can be used to construct more efficient QECCs, such as the entanglement-assisted QECCs, which are built directly from any linear classical codes whose detailed properties are needed to determine the parameters of the resulting quantum codes. Here we propose another family of QECCs, namely, the breeding QECCs, that also employ noncommuting sets of Pauli observables and can be built from any classical additive codes, either linear or nonlinear, with the advantage that their parameters can be read off directly from the corresponding classical codes. Moreover, since nonlinear codes are generally more efficient than linear codes, our breeding codes have better parameters than codes built from linear codes. The terminology is justified by the fact that our QECCs are related to ordinary QECCs in exactly the same way that breeding protocols are related to hashing protocols in entanglement purification.

  18. On the design of turbo codes

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    In this article, we design new turbo codes that can achieve near-Shannon-limit performance. The design criterion for random interleavers is based on maximizing the effective free distance of the turbo code, i.e., the minimum output weight of codewords due to weight-2 input sequences. An upper bound on the effective free distance of a turbo code is derived. This upper bound can be achieved if the feedback connection of the convolutional codes uses primitive polynomials. We review multiple turbo codes (parallel concatenation of q convolutional codes), which increase the so-called 'interleaving gain' as q and the interleaver size increase, and a suitable decoder structure derived from an approximation to the maximum a posteriori probability decision rule. We develop new rate 1/3, 2/3, 3/4, and 4/5 constituent codes to be used in the turbo encoder structure. These codes, with from 2 to 32 states, are designed by using primitive polynomials. The resulting turbo codes have rates b/n (b = 1, 2, 3, 4 and n = 2, 3, 4, 5, 6), and include random interleavers for better asymptotic performance. These codes are suitable for deep-space communications with low throughput and for near-Earth communications where high throughput is desirable. The performance of these codes is within 1 dB of the Shannon limit at a bit-error rate of 10(exp -6) for throughputs from 1/15 up to 4 bits/s/Hz.
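The role of recursive feedback can be illustrated with the classic 4-state recursive systematic constituent encoder with octal generators (1, 5/7), a minimal sketch in the spirit of, though not taken from, the article's constructions. Because the encoder is recursive, a single 1 at the input drives the parity stream indefinitely; only weight-2 inputs whose 1s are spaced by the feedback period return it to the all-zero state, which is why the weight-2 response (the effective free distance) is the design criterion.

```python
def rsc_encode(bits):
    """Rate-1/2 recursive systematic encoder, octal generators (1, 5/7)."""
    s1 = s2 = 0                  # two memory cells -> 4 states
    out = []
    for u in bits:
        fb = u ^ s1 ^ s2         # feedback polynomial 7 (octal): 1 + D + D^2
        p = fb ^ s2              # feedforward polynomial 5 (octal): 1 + D^2
        out.append((u, p))       # (systematic bit, parity bit)
        s1, s2 = fb, s1
    return out

# A weight-2 input whose 1s are spaced by the feedback period (3) is the
# lowest-weight input that returns the encoder to the all-zero state.
out = rsc_encode([1, 0, 0, 1, 0, 0])
print(out)
```

Here the weight-2 input produces parity [1, 1, 1, 1, 0, 0]; the random interleaver's job in a turbo code is to ensure that an input which terminates one constituent encoder early like this is scrambled into one that keeps the other encoder's parity stream running, raising the overall codeword weight.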

  19. The Astrophysics Source Code Library: An Update

    NASA Astrophysics Data System (ADS)

    Allen, Alice; Nemiroff, R. J.; Shamir, L.; Teuben, P. J.

    2012-01-01

    The Astrophysics Source Code Library (ASCL), founded in 1999, takes an active approach to sharing astrophysical source code. ASCL's editor seeks out both new and old peer-reviewed papers that describe methods or experiments that involve the development or use of source code, and adds entries for the found codes to the library. This approach ensures that source codes are added without requiring authors to actively submit them, resulting in a comprehensive listing that covers a significant number of the astrophysics source codes used in peer-reviewed studies. The ASCL moved to a new location in 2010; it now holds over 300 codes and continues to grow. In 2011, the ASCL (http://asterisk.apod.com/viewforum.php?f=35) added an average of 19 new codes per month; we encourage scientists to submit their codes for inclusion. An advisory committee has been established to provide input and guide the development and expansion of the new site, and a marketing plan has been developed and is being executed. All ASCL source codes have been used to generate results published in or submitted to a refereed journal and are freely available either via a download site or from an identified source. This presentation covers the history of the ASCL, examines its current state and benefits, describes the means of and requirements for including codes, and outlines future plans.

  20. Advanced coding and modulation schemes for TDRSS

    NASA Technical Reports Server (NTRS)

    Harrell, Linda; Kaplan, Ted; Berman, Ted; Chang, Susan

    1993-01-01

    This paper describes the performance of the Ungerboeck and pragmatic 8-Phase Shift Key (PSK) Trellis Code Modulation (TCM) coding techniques with and without a (255,223) Reed-Solomon outer code as they are used for Tracking and Data Relay Satellite System (TDRSS) S-Band and Ku-Band return services. The performance of these codes at high data rates is compared to uncoded Quadrature PSK (QPSK) and rate 1/2 convolutionally coded QPSK in the presence of Radio Frequency Interference (RFI), self-interference, and hardware distortions. This paper shows that the outer Reed-Solomon code is necessary to achieve a 10^-5 Bit Error Rate (BER) with an acceptable level of degradation in the presence of RFI. This paper also shows that the TCM codes with or without the Reed-Solomon outer code do not perform well in the presence of self-interference. In fact, the uncoded QPSK signal performs better than the TCM coded signal in the self-interference situation considered in this analysis. Finally, this paper shows that the E_b/N_0 degradation due to TDRSS hardware distortions is approximately 1.3 dB with a TCM coded signal or a rate 1/2 convolutionally coded QPSK signal and is 3.2 dB with an uncoded QPSK signal.

  1. Maximal dinucleotide comma-free codes.

    PubMed

    Fimmel, Elena; Strüngmann, Lutz

    2016-01-21

    The problem of retrieval and maintenance of the correct reading frame plays a significant role in RNA transcription. Circular codes, and especially comma-free codes, can help to understand the underlying mechanisms of error-detection in this process. In recent years much attention has been paid to the investigation of trinucleotide circular codes (see, for instance, Fimmel et al., 2014; Fimmel and Strüngmann, 2015a; Michel and Pirillo, 2012; Michel et al., 2012, 2008), while dinucleotide codes had been touched on only marginally, even though dinucleotides are associated with important biological functions. Recently, all maximal dinucleotide circular codes were classified (Fimmel et al., 2015; Michel and Pirillo, 2013). The present paper studies maximal dinucleotide comma-free codes and their close connection to maximal dinucleotide circular codes. We give a construction principle for such codes and provide a graphical representation that allows them to be visualized geometrically. Moreover, we compare the results for dinucleotide codes with the corresponding situation for trinucleotide maximal self-complementary C(3)-codes. Finally, the results obtained are discussed with respect to Crick's hypothesis about frame-shift-detecting codes without commas. PMID:26562635

  2. Trellis Decoding Complexity of Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Kiely, A. B.; McEliece, R. J.; Lin, W.; Ekroot, L.; Dolinar, S.

    1995-01-01

    We consider the problem of finding a trellis for a linear block code that minimizes one or more measures of trellis complexity. The domain of optimization may be different permutations of the same code, or different codes with the same parameters. Constraints on trellises, including relationships between the minimal trellis of a code and that of the dual code, are used to derive bounds on complexity. We define a partial ordering on trellises: if a trellis is optimum with respect to this partial ordering, it has the desirable property that it simultaneously minimizes all of the complexity measures examined. We examine properties of such optimal trellises and give examples of optimal permutations of codes, most notably the (48,24,12) quadratic residue code.
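
    The minimal trellis of a linear block code can be read off a generator matrix in minimal span form, in which all row starts and all row ends are distinct; the state dimension at each depth then counts the rows whose span crosses that boundary. A rough sketch under those standard definitions (not the paper's own code; assumes a full-rank generator matrix):

```python
def minimal_span_form(G):
    """Bring a full-rank binary generator matrix to minimal span
    (trellis-oriented) form: all row starts and all row ends distinct."""
    G = [row[:] for row in G]
    k, n = len(G), len(G[0])
    r = 0                                    # forward pass: distinct starts (RREF)
    for c in range(n):
        piv = next((i for i in range(r, k) if G[i][c]), None)
        if piv is None:
            continue
        G[r], G[piv] = G[piv], G[r]
        for i in range(k):
            if i != r and G[i][c]:
                G[i] = [a ^ b for a, b in zip(G[i], G[r])]
        r += 1
    start = lambda row: min(i for i, b in enumerate(row) if b)
    end = lambda row: max(i for i, b in enumerate(row) if b)
    changed = True                           # backward pass: make ends distinct
    while changed:
        changed = False
        for i in range(k):
            for j in range(k):
                if i != j and end(G[i]) == end(G[j]) and start(G[i]) < start(G[j]):
                    G[i] = [a ^ b for a, b in zip(G[i], G[j])]
                    changed = True
    return G

def state_profile(G):
    """log2 of the state-space size at each depth of the minimal trellis:
    count the rows whose span crosses each section boundary."""
    G = minimal_span_form(G)
    n = len(G[0])
    spans = [(min(i for i, b in enumerate(r) if b),
              max(i for i, b in enumerate(r) if b)) for r in G]
    return [sum(1 for s, e in spans if s < t <= e) for t in range(n + 1)]

# standard-form generator matrix of the (7,4) Hamming code
G_hamming = [[1, 0, 0, 0, 1, 1, 0],
             [0, 1, 0, 0, 0, 1, 1],
             [0, 0, 1, 0, 1, 1, 1],
             [0, 0, 0, 1, 1, 0, 1]]
```

    For the (7,4) Hamming code in its standard coordinate order this gives the profile [0, 1, 2, 3, 3, 2, 1, 0], i.e. at most 8 states; permuting coordinates, as the abstract discusses, changes this profile.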

  3. CFD Code Development for Combustor Flows

    NASA Technical Reports Server (NTRS)

    Norris, Andrew

    2003-01-01

    During the lifetime of this grant, work has been performed in the areas of model development, code development, code validation and code application. For model development, this has included the PDF combustion module, chemical kinetics based on thermodynamics, neural network storage of chemical kinetics, ILDM chemical kinetics and assumed PDF work. Many of these models were then implemented in the code, and in addition many improvements were made to the code, including the addition of new chemistry integrators, property evaluation schemes, new chemistry models and turbulence-chemistry interaction methodology. Validation of all new models and code improvements were also performed, while application of the code to the ZCET program and also the NPSS GEW combustor program were also performed. Several important items remain under development, including the NOx post processing, assumed PDF model development and chemical kinetic development. It is expected that this work will continue under the new grant.

  4. Evaluation of help model replacement codes

    SciTech Connect

    Whiteside, Tad; Hang, Thong; Flach, Gregory

    2009-07-01

    This work evaluates the computer codes that are proposed to be used to predict percolation of water through the closure cap and into the waste containment zone at Department of Energy closure sites. It compares the currently used water-balance code (HELP) with newly developed computer codes that solve for unsaturated flow (Richards' equation). A literature review of the HELP model and the proposed codes resulted in two codes being recommended for further evaluation: HYDRUS-2D3D and VADOSE/W. This further evaluation involved performing simulations on a simple model and comparing the results of those simulations to those obtained with the HELP code and to field data. From the results of this work, we conclude that the two new codes perform nearly the same; moving forward, we recommend HYDRUS-2D3D.

  5. Decoder for 3-D color codes

    NASA Astrophysics Data System (ADS)

    Hsu, Kung-Chuan; Brun, Todd

    Transversal circuits are important components of fault-tolerant quantum computation. Several classes of quantum error-correcting codes are known to have transversal implementations of any logical Clifford operation. However, to achieve universal quantum computation, it would be helpful to have high-performance error-correcting codes that have a transversal implementation of some logical non-Clifford operation. The 3-D color codes are a class of topological codes that permit transversal implementation of the logical π/8 gate. The decoding problem of a 3-D color code can be understood as a graph-matching problem on a three-dimensional lattice. Whether this class of codes will be useful in terms of performance is still an open question. We investigate the decoding problem of 3-D color codes and analyze the performance of some possible decoders.

  6. From Verified Models to Verifiable Code

    NASA Technical Reports Server (NTRS)

    Lensink, Leonard; Munoz, Cesar A.; Goodloe, Alwyn E.

    2009-01-01

    Declarative specifications of digital systems often contain parts that can be automatically translated into executable code. Automated code generation may reduce or eliminate the kinds of errors typically introduced through manual code writing. For this approach to be effective, the generated code should be reasonably efficient and, more importantly, verifiable. This paper presents a prototype code generator for the Prototype Verification System (PVS) that translates a subset of PVS functional specifications into an intermediate language and subsequently to multiple target programming languages. Several case studies are presented to illustrate the tool's functionality. The generated code can be analyzed by software verification tools such as verification condition generators, static analyzers, and software model-checkers to increase the confidence that the generated code is correct.

  7. Computer codes for fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    McMaster, W. H.

    1984-08-01

    Three fluid dynamics codes, PELE-IC, PELE 3D and MAUI, are available for the analysis of fluid-structure interactions where the fluid motion is drastic enough to cause difficulties in the Lagrangian formulation. The fluid-structure-interaction algorithms in these codes have been developed for the analysis of the dynamic response of coupled fluid-structure systems. The fluid motion is computed using an Eulerian differencing formulation and the structural motion is determined from a finite element code. The coupling is effected across the Lagrangian boundary defining the structure. These codes can be used over a wide range of fluid and structural motions. The incompressible fluid approach in the PELE codes is used when the effects of compressibility are minimal and the acoustic approximation suffices. The explicit algorithm of the MAUI code can handle those situations where compressible effects are important, and it can also accommodate shock waves. All three codes are in the public domain.

  8. Turbo codes for deep-space communications

    NASA Technical Reports Server (NTRS)

    Divsalar, D.; Pollara, F.

    1995-01-01

    Turbo codes were recently proposed by Berrou, Glavieux, and Thitimajshima, and it has been claimed these codes achieve near-Shannon-limit error correction performance with relatively simple component codes and large interleavers. A required E_b/N_0 of 0.7 dB was reported for a bit error rate of 10^-5, using a rate 1/2 turbo code. However, some important details that are necessary to reproduce these results were omitted. This article confirms the accuracy of these claims, and presents a complete description of an encoder/decoder pair that could be suitable for deep-space applications, where lower rate codes can be used. We describe a new simple method for trellis termination, analyze the effect of interleaver choice on the weight distribution of the code, and introduce the use of unequal rate component codes, which yield better performance.

  9. Coordinated design of coding and modulation systems

    NASA Technical Reports Server (NTRS)

    Massey, J. L.

    1976-01-01

    Work on partial unit memory codes continued; it was shown that, for a given virtual state complexity, the maximum free distance over the class of all convolutional codes is achieved within the class of unit memory codes. The effect of phase-lock loop (PLL) tracking error on coding system performance was studied by using the channel cut-off rate as the measure of quality of a modulation system. Optimum modulation signal sets for a non-white Gaussian channel were considered, using a heuristic selection rule based on a water-filling argument. The use of error-correcting codes to perform data compression by the technique of syndrome source coding was researched, and a weight-and-error-locations scheme was developed that is closely related to LDSC coding.

  10. Genetic code, hamming distance and stochastic matrices.

    PubMed

    He, Matthew X; Petoukhov, Sergei V; Ricci, Paolo E

    2004-09-01

    In this paper we use the Gray code representation of the genetic code C=00, U=10, G=11 and A=01 (C pairs with G, A pairs with U) to generate a sequence of genetic code-based matrices. In connection with these code-based matrices, we use the Hamming distance to generate a sequence of numerical matrices. We then further investigate the properties of the numerical matrices and show that they are doubly stochastic and symmetric. We determine the frequency distributions of the Hamming distances, building blocks of the matrices, decomposition and iterations of matrices. We present an explicit decomposition formula for the genetic code-based matrix in terms of permutation matrices, which provides a hypercube representation of the genetic code. It is also observed that there is a Hamiltonian cycle in a genetic code-based hypercube. PMID:15294430
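
    The construction described, Gray-coded nucleotides compared by Hamming distance, is easy to reproduce for single nucleotides. A minimal sketch (variable names are illustrative, not the authors'):

```python
GRAY = {'C': '00', 'U': '10', 'G': '11', 'A': '01'}   # C pairs with G, A with U

def hamming(a, b):
    """Number of positions at which two equal-length strings differ."""
    return sum(x != y for x, y in zip(a, b))

bases = ['C', 'U', 'G', 'A']
# 4x4 matrix of Hamming distances between the Gray codes of the bases
M = [[hamming(GRAY[r], GRAY[c]) for c in bases] for r in bases]
row_sums = [sum(row) for row in M]
col_sums = [sum(col) for col in zip(*M)]
```

    The matrix is symmetric and every row and column sums to 4, so dividing by 4 yields a doubly stochastic matrix, consistent with the abstract; the same construction extends to dinucleotides and codons by concatenating the 2-bit codes.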

  11. New asymmetric quantum codes over Fq

    NASA Astrophysics Data System (ADS)

    Ma, Yuena; Feng, Xiaoyi; Xu, Gen

    2016-04-01

    Two families of new asymmetric quantum codes are constructed in this paper. The first family consists of asymmetric quantum codes with length n = q^m - 1 over F_q, where q ≥ 5 is a prime power. The second consists of asymmetric quantum codes with length n = 3^m - 1. These asymmetric quantum codes are derived from the CSS construction and pairs of nested BCH codes. Moreover, when the defining sets satisfy T_1 = T_2^{-q}, the true Z-distance of our asymmetric quantum codes is much larger than δ_max + 1, where δ_max is the maximal designed distance of the dual-containing narrow-sense BCH code, and the parameters presented here are better than those available in the literature.

  12. Quantum codes from cyclic codes over F3 + vF3

    NASA Astrophysics Data System (ADS)

    Ashraf, Mohammad; Mohammad, Ghulam

    2014-11-01

    Let R = F3 + vF3 be a finite commutative ring, where v^2 = 1. It is a finite semi-local ring, not a chain ring. In this paper, we give a construction for quantum codes from cyclic codes over R. We derive self-orthogonal codes over F3 as Gray images of linear and cyclic codes over R. In particular, we use two codes associated with a cyclic code over R of arbitrary length to determine the parameters of the corresponding quantum code.

  13. User's manual for Axisymmetric Diffuser Duct (ADD) code. Volume 3: ADD code coordinate generator

    NASA Technical Reports Server (NTRS)

    Anderson, O. L.; Hankins, G. B., Jr.; Edwards, D. E.

    1982-01-01

    This User's Manual contains a complete description of the computer codes known as the Axisymmetric Diffuser Duct (ADD) code. It includes a list of references which describe the formulation of the ADD code and comparisons of calculation with experimental flows. The input/output and general use of the code is described in the first volume. The second volume contains a detailed description of the code including the global structure of the code, list of FORTRAN variables, and descriptions of the subroutines. The third volume contains a detailed description of the CODUCT code which generates coordinate systems for arbitrary axisymmetric ducts.

  14. All-optical code-division multiple-access applications: 2^n extended-prime codes.

    PubMed

    Zhang, J G; Kwong, W C; Mann, S

    1997-09-10

    A new family of 2^n codes, called 2^n extended-prime codes, is proposed for all-optical code-division multiple-access networks. Such 2^n codes are derived from so-called extended-prime codes so that their cross-correlation functions are not greater than 1, as opposed to 2 for recently proposed 2^n prime codes. As a result, a larger number of active users can now be supported by the new codes for a given bit-error rate than can be by 2^n prime codes, while power-efficient, waveguide-integrable all-serial coding and correlating configurations proposed for the 2^n prime codes can still be employed. PMID:18259529
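
    The correlation constraints cited here can be illustrated with the underlying prime sequences. The sketch below builds ordinary prime sequences of length p^2 (one pulse per block of p chips) and checks the familiar cross-correlation bound of 2 that the extended-prime construction improves to 1; the helper names are illustrative:

```python
def prime_sequence(p, m):
    """Binary prime sequence of length p*p: in block i, a single pulse at
    chip position (i*m) mod p."""
    seq = [0] * (p * p)
    for i in range(p):
        seq[i * p + (i * m) % p] = 1
    return seq

def max_cross_correlation(a, b):
    """Maximum periodic cross-correlation over all cyclic shifts."""
    n = len(a)
    return max(sum(a[i] * b[(i + s) % n] for i in range(n)) for s in range(n))

p = 5                                   # any prime works
codes = [prime_sequence(p, m) for m in range(p)]
peak = max(max_cross_correlation(codes[i], codes[j])
           for i in range(p) for j in range(p) if i != j)
```

    Each codeword has weight p, and the cross-correlation peak between distinct codewords never exceeds 2, which is the bound the extended-prime codes tighten at the cost of longer sequences.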

  15. Properties of even length Barker codes and specific polyphase codes with Barker type autocorrelation functions

    NASA Astrophysics Data System (ADS)

    Gabbay, S.

    1982-07-01

    Properties of even-length Barker codes, if they exist, are derived. The analysis leads to the analysis of polyphase codes. Similar properties are derived for specific types of polyphase codes, with Barker type autocorrelation functions. The analysis is done in the time and frequency domains (including linear algebra and Z transform treatments), and suggests a procedure to search for codes with Barker type autocorrelation functions. The search problem is reduced by using the properties of such codes.
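
    The Barker property itself, aperiodic autocorrelation sidelobes of magnitude at most 1, is straightforward to verify numerically. A minimal sketch using the known length-13 code (function and variable names are illustrative):

```python
def aperiodic_autocorrelation(code):
    """Aperiodic autocorrelation of a +/-1 sequence at nonnegative lags."""
    n = len(code)
    return [sum(code[i] * code[i + k] for i in range(n - k)) for k in range(n)]

# length-13 Barker code: mainlobe 13, all sidelobe magnitudes at most 1
barker13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]
acf = aperiodic_autocorrelation(barker13)
```

    A search for new codes with Barker-type autocorrelation, as the abstract suggests, amounts to restricting candidates by derived properties and then applying exactly this test.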

  16. Strict optical orthogonal codes for purely asynchronous code-division multiple-access applications.

    PubMed

    Zhang, J G

    1996-12-10

    Strict optical orthogonal codes are presented for purely asynchronous optical code-division multiple-access (CDMA) applications. The proposed code can strictly guarantee the peaks of its cross-correlation functions and the sidelobes of any of its autocorrelation functions to have a value of 1 in purely asynchronous data communications. The basic theory of the proposed codes is given. An experiment on optical CDMA systems is also demonstrated to verify the characteristics of the proposed code. PMID:21151299

  17. Quantum coding with finite resources.

    PubMed

    Tomamichel, Marco; Berta, Mario; Renes, Joseph M

    2016-01-01

    The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances. PMID:27156995

  18. Redundancy reduction in image coding

    NASA Technical Reports Server (NTRS)

    Rahman, Zia-Ur; Alter-Gartenberg, Rachel; Fales, Carl L.; Huck, Friedrich O.

    1993-01-01

    We assess redundancy reduction in image coding in terms of the information acquired by the image-gathering process and the amount of data required to convey this information. A clear distinction is made between the theoretically minimum rate of data transmission, as measured by the entropy of the completely decorrelated data, and the actual rate of data transmission, as measured by the entropy of the encoded (incompletely decorrelated) data. It is shown that the information efficiency of the visual communication channel depends not only on the characteristics of the radiance field and the decorrelation algorithm, as is generally perceived, but also on the design of the image-gathering device, as is commonly ignored.
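
    The distinction drawn between completely and incompletely decorrelated data can be illustrated by comparing first-order entropies before and after a simple decorrelation step. A toy sketch, not the authors' model (the signal is synthetic):

```python
import math
from collections import Counter

def entropy(data):
    """First-order entropy of a symbol sequence, in bits per sample."""
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# a smooth, correlated "scanline": previous-sample differencing concentrates
# the distribution, so the decorrelated data has lower first-order entropy
signal = [int(50 + 40 * math.sin(i / 10)) for i in range(1000)]
diffs = [signal[0]] + [b - a for a, b in zip(signal, signal[1:])]
```

    The entropy of the differenced data is well below that of the raw samples, and the gap between the two is exactly the redundancy a coder can remove without information loss.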

  19. Quantum coding with finite resources

    PubMed Central

    Tomamichel, Marco; Berta, Mario; Renes, Joseph M.

    2016-01-01

    The quantum capacity of a memoryless channel determines the maximal rate at which we can communicate reliably over asymptotically many uses of the channel. Here we illustrate that this asymptotic characterization is insufficient in practical scenarios where decoherence severely limits our ability to manipulate large quantum systems in the encoder and decoder. In practical settings, we should instead focus on the optimal trade-off between three parameters: the rate of the code, the size of the quantum devices at the encoder and decoder, and the fidelity of the transmission. We find approximate and exact characterizations of this trade-off for various channels of interest, including dephasing, depolarizing and erasure channels. In each case, the trade-off is parameterized by the capacity and a second channel parameter, the quantum channel dispersion. In the process, we develop several bounds that are valid for general quantum channels and can be computed for small instances. PMID:27156995

  20. Noiseless coding for the magnetometer

    NASA Technical Reports Server (NTRS)

    Rice, Robert F.; Lee, Jun-Ji

    1987-01-01

    Future unmanned space missions will continue to seek a full understanding of magnetic fields throughout the solar system. Severely constrained data rates during certain portions of these missions could limit the possible science return. This publication investigates the application of universal noiseless coding techniques to more efficiently represent magnetometer data without any loss in data integrity. Performance results indicated that compression factors of 2:1 to 6:1 can be expected. Feasibility for general deep space application was demonstrated by implementing a microprocessor breadboard coder/decoder using the Intel 8086 processor. The Comet Rendezvous Asteroid Flyby mission will incorporate these techniques in a buffer feedback, rate-controlled configuration. The characteristics of this system are discussed.
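
    The universal noiseless coding techniques referenced here are in the family of Rice (Golomb power-of-two) codes, which encode prediction residuals as a unary quotient plus k remainder bits. A toy sketch on synthetic magnetometer-like data; the data model, the zigzag mapping, and the parameter choice are illustrative, not the mission's actual coder:

```python
import random

def zigzag(v):
    """Map signed residuals to nonnegative integers: 0, -1, 1, -2, 2 -> 0, 1, 2, 3, 4."""
    return 2 * v if v >= 0 else -2 * v - 1

def rice_encode(values, k):
    """Golomb-Rice coding: each value becomes a unary quotient, a '0'
    terminator, and k remainder bits."""
    out = []
    for v in values:
        m = zigzag(v)
        q, r = m >> k, m & ((1 << k) - 1)
        out.append('1' * q + '0' + format(r, f'0{k}b'))
    return ''.join(out)

# synthetic magnetometer trace: slowly drifting field, small first differences
random.seed(1)
field = [1000]
for _ in range(999):
    field.append(field[-1] + random.choice([-2, -1, 0, 1, 2]))
residuals = [0] + [b - a for a, b in zip(field, field[1:])]
coded = rice_encode(residuals, k=1)
ratio = (16 * len(residuals)) / len(coded)   # vs. raw 16-bit samples
```

    On this synthetic trace the compression factor lands inside the 2:1 to 6:1 range the abstract reports, with no loss of data integrity since the code is exactly invertible.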

  1. Multichannel Error Correction Code Decoder

    NASA Technical Reports Server (NTRS)

    1996-01-01

    NASA Lewis Research Center's Digital Systems Technology Branch has an ongoing program in modulation, coding, onboard processing, and switching. Recently, NASA completed a project to incorporate a time-shared decoder into the very-small-aperture terminal (VSAT) onboard-processing mesh architecture. The primary goal was to demonstrate a time-shared decoder for a regenerative satellite that uses asynchronous, frequency-division multiple access (FDMA) uplink channels, thereby identifying hardware and power requirements and fault-tolerant issues that would have to be addressed in an operational system. A secondary goal was to integrate and test, in a system environment, two NASA-sponsored, proof-of-concept hardware deliverables: the Harris Corp. high-speed Bose Chaudhuri-Hocquenghem (BCH) codec and the TRW multichannel demultiplexer/demodulator (MCDD). A beneficial byproduct of this project was the development of flexible, multichannel-uplink signal-generation equipment.

  2. The CALOR93 code system

    SciTech Connect

    Gabriel, T.A.

    1993-12-31

    The purpose of this paper is to describe a program package, CALOR93, that has been developed to design and analyze different detector systems, in particular, calorimeters which are used in high energy physics experiments to determine the energy of particles. One's ability to design a calorimeter to perform a certain task can have a strong influence upon the validity of experimental results. The validity of the results obtained with CALOR93 has been verified many times by comparison with experimental data. The codes (HETC93, SPECT93, LIGHT, EGS4, MORSE, and MICAP) are quite generalized and detailed enough so that any experimental calorimeter setup can be studied. Due to this generalization, some software development is necessary because of the wide diversity of calorimeter designs.

  3. Terrain-Responsive Atmospheric Code

    Energy Science and Technology Software Center (ESTSC)

    1991-11-20

    The Terrain-Responsive Atmospheric Code (TRAC) is a real-time emergency response modeling capability designed to advise Emergency Managers of the path, timing, and projected impacts from an atmospheric release. TRAC evaluates the effects of both radiological and non-radiological hazardous substances, gases and particulates. Using available surface and upper air meteorological information, TRAC realistically treats complex sources and atmospheric conditions, such as those found in mountainous terrain. TRAC calculates atmospheric concentration, deposition, and dose for more than 25,000 receptor locations within 80 km of the release point. Human-engineered output products support critical decisions on the type, location, and timing of protective actions for workers and the public during an emergency.

  4. Numerical classification of coding sequences

    NASA Technical Reports Server (NTRS)

    Collins, D. W.; Liu, C. C.; Jukes, T. H.

    1992-01-01

    DNA sequences coding for protein may be represented by counts of nucleotides or codons. A complete reading frame may be abbreviated by its base count, e.g. A76C158G121T74, or with the corresponding codon table, e.g. (AAA)0(AAC)1(AAG)9 ... (TTT)0. We propose that these numerical designations be used to augment current methods of sequence annotation. Because base counts and codon tables do not require revision as knowledge of function evolves, they are well-suited to act as cross-references, for example to identify redundant GenBank entries. These descriptors may be compared, in place of DNA sequences, to extract homologous genes from large databases. This approach permits rapid searching with good selectivity.
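
    The proposed descriptors are trivial to compute, which is the point of the paper. A minimal sketch of both abbreviations (function names are illustrative):

```python
from collections import Counter

def base_count(seq):
    """Abbreviate a reading frame by its base count, e.g. A76C158G121T74."""
    c = Counter(seq.upper())
    return ''.join(f'{b}{c[b]}' for b in 'ACGT')

def codon_table(seq):
    """In-frame codon counts, the longer numerical descriptor."""
    return dict(Counter(seq[i:i + 3] for i in range(0, len(seq) - 2, 3)))
```

    Because two entries with identical base counts are strong candidates for redundancy, comparing these short descriptors in place of full sequences gives the rapid, selective database screening the abstract describes.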

  5. Multichannel Coding of Applause Signals

    NASA Astrophysics Data System (ADS)

    Hotho, Gerard; van de Par, Steven; Breebaart, Jeroen

    2007-12-01

    We develop a parametric multichannel audio codec dedicated to coding signals consisting of a dense series of transient-type events. These signals, of which applause is a typical example, are known to be problematic for such audio codecs. The codec design is based on preservation of both timbre and transient-type event density. It combines a very low complexity and a low parameter bit rate (0.2 kbps). In a formal listening test, we compared the proposed codec to the recently standardised MPEG Surround multichannel codec, with an associated parameter bit rate of 9 kbps. We found the new codec to have a significantly higher audio quality than the MPEG Surround codec for the two multichannel applause signals under test. Though this seems promising, the technique presented is not fully mature; for example, issues related to integration of the proposed codec into the MPEG Surround codec were not addressed.

  6. Inlet Performance Analysis Code Developed

    NASA Technical Reports Server (NTRS)

    Jules, Kenol; Barnhart, Paul J.

    1998-01-01

    The design characteristics of an inlet very much depend on whether the inlet is to be flown at subsonic, supersonic, or hypersonic speed. Whichever the case, the primary function of an inlet is to deliver free-stream air to the engine face at the highest stagnation pressure possible and with the lowest possible variation in both stagnation pressure and temperature. At high speeds, this is achieved by a system of oblique and/or normal shock waves, and possibly some isentropic compression. For both subsonic and supersonic flight, current design practice indicates that the inlet should deliver the air to the engine face at approximately Mach 0.45. As a result, even for flight in the high subsonic regime, the inlet must retard (or diffuse) the air substantially. Second, the design of an inlet is influenced largely by the compromise between high performance and low weight. This compromise involves tradeoffs between the mission requirements, flight trajectory, airframe aerodynamics, engine performance, and weight, all of which, in turn, influence each other. Therefore, to study the effects of some of these influential factors, the Propulsion System Analysis Office of the NASA Lewis Research Center developed the Inlet Performance Analysis Code (IPAC). This code uses oblique shock and Prandtl-Meyer expansion theory to predict inlet performance. It can be used to predict performance for a given inlet geometric design such as pitot, axisymmetric, and two-dimensional. IPAC also can be used to design preliminary inlet systems and to make subsequent performance analyses. It computes the total pressure recovery, the airflow, and the drag coefficients. The pressure recovery includes losses associated with normal and oblique shocks, internal and external friction, the sharp lip, and diffuser components. Flow rate includes captured, engine, spillage, bleed, and bypass flows. The aerodynamic drag calculation includes drags associated with spillage, cowl lip suction, wave, bleed, and bypass.
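
    The shock-loss and expansion calculations described rest on standard compressible-flow relations, which can be sketched directly; the formulas below are the textbook normal-shock total-pressure ratio and Prandtl-Meyer function, not IPAC's own implementation:

```python
import math

def normal_shock_recovery(M, gamma=1.4):
    """Stagnation-pressure ratio across a normal shock at upstream Mach M."""
    a = ((gamma + 1) * M**2 / ((gamma - 1) * M**2 + 2)) ** (gamma / (gamma - 1))
    b = ((gamma + 1) / (2 * gamma * M**2 - (gamma - 1))) ** (1 / (gamma - 1))
    return a * b

def prandtl_meyer(M, gamma=1.4):
    """Prandtl-Meyer function nu(M), in radians, for isentropic expansion."""
    g = (gamma + 1) / (gamma - 1)
    return (math.sqrt(g) * math.atan(math.sqrt((M**2 - 1) / g))
            - math.atan(math.sqrt(M**2 - 1)))
```

    At Mach 2 with γ = 1.4 the normal-shock recovery is about 0.721 and ν(2) ≈ 26.4°, matching standard gas tables; a multi-shock inlet model multiplies one such recovery factor per shock.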

  7. Novel coding, translation, and gene expression of a replicating covalently closed circular RNA of 220 nt.

    PubMed

    AbouHaidar, Mounir Georges; Venkataraman, Srividhya; Golshani, Ashkan; Liu, Bolin; Ahmad, Tauqeer

    2014-10-01

    The highly structured (64% GC) covalently closed circular (CCC) RNA (220 nt) of the virusoid associated with rice yellow mottle virus codes for a 16-kDa highly basic protein using novel modalities for coding, translation, and gene expression. This CCC RNA is the smallest among all known viroids and virusoids and the only one that codes for a protein. Its sequence possesses an internal ribosome entry site and is directly translated through two (or three) completely overlapping ORFs (shifting to a new reading frame at the end of each round). The initiation and termination codons overlap in the sequence UGAUGA (the embedded AUG serves as the initiation codon within this combined initiation-termination sequence). Termination codons can be ignored to obtain larger read-through proteins. This circular RNA with no noncoding sequences is a unique natural supercompact "nanogenome." PMID:25253891

  8. The Code of Ethics for Nurses

    PubMed Central

    Zahedi, F; Sanjari, M; Aala, M; Peymani, M; Aramesh, K; Parsapour, A; Maddah, SS Bagher; Cheraghi, MA; Mirzabeigi, GH; Larijani, B; Dastgerdi, M Vahid

    2013-01-01

    Nurses are increasingly confronted with complex concerns in their practice. Codes of ethics are fundamental guidance for nursing, as for many other professions. Although there are authoritative international codes of ethics for nurses, a national code provides additional assistance to clinical nurses in their complex roles in patient care, education, research, and management of parts of the health care system in the country. A national code can provide nurses with culturally adapted guidance and help them to make ethical decisions that are closer to the Iranian-Islamic background. Given the general acknowledgement of this need, the National Code of Ethics for Nurses was compiled as a joint project (2009–2011). The Code was approved by the Health Policy Council of the Ministry of Health and Medical Education and communicated to all universities, healthcare centers, hospitals and research centers early in 2011. The focus of this article is on the course of action through which the Code was compiled, amended and approved. The main concepts of the code are also presented here. No doubt, development of such codes should be considered an ongoing process. There is an overall responsibility to keep the codes current, updated with new advances in science and emerging challenges, and pertinent to nursing practice. PMID:23865008

  9. Bitplane Image Coding With Parallel Coefficient Processing.

    PubMed

    Auli-Llinas, Francesc; Enfedaque, Pablo; Moure, Juan C; Sanchez, Victor

    2016-01-01

    Image coding systems have traditionally been tailored for multiple instruction, multiple data (MIMD) computing. In general, they partition the (transformed) image into codeblocks that can be coded in the cores of MIMD-based processors. Each core executes a sequential flow of instructions to process the coefficients in its codeblock, independently and asynchronously from the other cores. Bitplane coding is a common strategy for coding such data, but most of its mechanisms require sequential processing of the coefficients. Recent years have seen the rise of processing accelerators with enhanced computational performance and power efficiency whose architecture is based mainly on the single instruction, multiple data (SIMD) principle. SIMD computing refers to the execution of the same instruction on multiple data in a lockstep, synchronous way. Unfortunately, current bitplane coding strategies cannot fully profit from such processors because of their inherently sequential coding tasks. This paper presents bitplane image coding with parallel coefficient processing (BPC-PaCo), a coding method that can process many coefficients within a codeblock in parallel and synchronously. To this end, the scanning order, the context formation, the probability model, and the arithmetic coder of the coding engine have been reformulated. The experimental results suggest that the coding-performance penalty of BPC-PaCo with respect to the traditional strategies is almost negligible. PMID:26441420
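
    As a loose illustration of the SIMD-style processing described above, the sketch below extracts and reassembles the bitplanes of a tiny codeblock with vectorized NumPy operations, touching every coefficient in lockstep rather than one at a time (the codeblock values and bit depth are illustrative, not from the paper):

```python
import numpy as np

# Sketch of the vectorized (SIMD-style) view of bitplane coding: one
# operation per plane processes all coefficients of a codeblock at once.
# Codeblock values and bit depth are illustrative.

codeblock = np.array([[5, 3], [12, 7]], dtype=np.uint8)
NBITS = 4

# Extract bitplanes, most significant first; each line is one vectorized op.
planes = [(codeblock >> b) & 1 for b in range(NBITS - 1, -1, -1)]

# Reassembling the planes, most significant first, recovers the block.
recovered = np.zeros_like(codeblock)
for plane in planes:
    recovered = (recovered << 1) | plane
```

    A real bitplane coder also forms contexts and entropy-codes each plane; this sketch only shows why the per-plane operations are natural lockstep candidates.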

  10. Spin glasses and error-correcting codes

    NASA Technical Reports Server (NTRS)

    Belongie, M. L.

    1994-01-01

    In this article, we study a model for error-correcting codes that comes from spin glass theory and leads to both new codes and a new decoding technique. Using the theory of spin glasses, it has been proven that a simple construction yields a family of binary codes whose performance asymptotically approaches the Shannon bound for the Gaussian channel. The limit is approached as the number of information bits per codeword approaches infinity while the rate of the code approaches zero. Thus, the codes rapidly become impractical. We present simulation results that show the performance of a few manageable examples of these codes. In the correspondence that exists between spin glasses and error-correcting codes, the concept of a thermal average leads to a method of decoding that differs from the standard method of finding the most likely information sequence for a given received codeword. Whereas the standard method corresponds to calculating the thermal average at temperature zero, calculating the thermal average at a certain optimum temperature results instead in the sequence of most likely information bits. Since linear block codes and convolutional codes can be viewed as examples of spin glasses, this new decoding method can be used to decode these codes in a way that minimizes the bit error rate instead of the codeword error rate. We present simulation results that show a small improvement in bit error rate by using the thermal average technique.
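
    The contrast between the two decoding rules can be made concrete on a toy example. The sketch below uses a (3,2) single-parity-check code, a BPSK mapping, and an illustrative received vector (none of which are from the article): the standard rule picks the single most likely codeword, the zero-temperature limit, while the bitwise rule forms a posterior-weighted "thermal" average for each bit:

```python
import math
from itertools import product

# Toy contrast of the two decoding rules: codeword-MAP (most likely
# sequence, i.e. thermal average at temperature zero) versus bitwise-MAP
# (posterior-weighted average over all codewords). Code, mapping, and
# received values are illustrative, not from the article.

codewords = [c for c in product((0, 1), repeat=3) if sum(c) % 2 == 0]
received = [0.9, -0.2, 0.1]   # noisy observations of +/-1 symbols

def correlation(c):
    """Log-likelihood up to a constant: bit 0 -> +1, bit 1 -> -1."""
    return sum(y * (1 - 2 * b) for y, b in zip(received, c))

# Codeword-MAP: the single most likely codeword.
seq_map = max(codewords, key=correlation)

# Bitwise-MAP: posterior-weighted ("thermal") average of each bit.
weights = {c: math.exp(correlation(c)) for c in codewords}
total = sum(weights.values())
bitwise = [int(sum(w for c, w in weights.items() if c[i] == 1) / total > 0.5)
           for i in range(3)]
```

    On this example the two rules agree; in general the bitwise rule minimizes the bit error rate, while the codeword rule minimizes the codeword error rate, which is the distinction the article exploits.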

  11. Speech coding research at Bell Laboratories

    NASA Astrophysics Data System (ADS)

    Atal, Bishnu S.

    2001-05-01

    The field of speech coding is now over 70 years old. It started from the desire to transmit voice signals over telegraph cables. The availability of digital computers in the mid 1960s made it possible to test complex speech coding algorithms rapidly. The introduction of linear predictive coding (LPC) started a new era in speech coding. The fundamental philosophy of speech coding went through a major shift, resulting in a new generation of low bit rate speech coders, such as multi-pulse and code-excited LPC. The semiconductor revolution produced faster and faster DSP chips and made linear predictive coding practical. Code-excited LPC has become the method of choice for low bit rate speech coding applications and is used in most voice transmission standards for cell phones. Digital speech communication is rapidly evolving from circuit-switched to packet-switched networks to provide integrated transmission of voice, data, and video signals. The new communication environment is also moving the focus of speech coding research from compression to low cost, reliable, and secure transmission of voice signals on digital networks, and provides the motivation for creating a new class of speech coders suitable for future applications.
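
    The core idea of linear predictive coding can be sketched in a few lines: model each sample as a weighted sum of the previous samples and fit the weights by least squares. The test signal and predictor order below are illustrative:

```python
import numpy as np

# Sketch of the core of linear predictive coding (LPC): each sample is
# modeled as a weighted sum of the previous `order` samples, with the
# weights fit by least squares (covariance-method style). The signal and
# the order are illustrative choices.

def lpc(signal, order):
    """Least-squares linear-prediction coefficients."""
    target = signal[order:]
    lagged = np.column_stack([signal[order - k: len(signal) - k]
                              for k in range(1, order + 1)])
    coeffs, *_ = np.linalg.lstsq(lagged, target, rcond=None)
    return coeffs

# A pure sinusoid satisfies x[n] = 2*cos(w)*x[n-1] - x[n-2], so a
# second-order predictor reconstructs it almost exactly.
x = np.sin(0.3 * np.arange(200))
a = lpc(x, 2)
residual = x[2:] - (a[0] * x[1:-1] + a[1] * x[:-2])
```

    In a speech coder it is the small residual (plus the few coefficients), not the waveform itself, that is quantized and transmitted.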

  12. The trellis complexity of convolutional codes

    NASA Technical Reports Server (NTRS)

    Mceliece, R. J.; Lin, W.

    1995-01-01

    It has long been known that convolutional codes have a natural, regular trellis structure that facilitates the implementation of Viterbi's algorithm. It has gradually become apparent that linear block codes also have a natural, though not in general a regular, 'minimal' trellis structure, which allows them to be decoded with a Viterbi-like algorithm. In both cases, the complexity of the Viterbi decoding algorithm can be accurately estimated by the number of trellis edges per encoded bit. It would, therefore, appear that we are in a good position to make a fair comparison of the Viterbi decoding complexity of block and convolutional codes. Unfortunately, however, this comparison is somewhat muddled by the fact that some convolutional codes, the punctured convolutional codes, are known to have trellis representations that are significantly less complex than the conventional trellis. In other words, the conventional trellis representation for a convolutional code may not be the minimal trellis representation. Thus, ironically, at present we seem to know more about the minimal trellis representation for block than for convolutional codes. In this article, we provide a remedy, by developing a theory of minimal trellises for convolutional codes. (A similar theory has recently been given by Sidorenko and Zyablov). This allows us to make a direct performance-complexity comparison for block and convolutional codes. A by-product of our work is an algorithm for choosing, from among all generator matrices for a given convolutional code, what we call a trellis-minimal generator matrix, from which the minimal trellis for the code can be directly constructed. Another by-product is that, in the new theory, punctured convolutional codes no longer appear as a special class, but simply as high-rate convolutional codes whose trellis complexity is unexpectedly small.
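
    The regular trellis structure in question is what makes Viterbi decoding practical. The sketch below decodes the standard rate-1/2, constraint-length-3 convolutional code (generators 7 and 5 in octal) on its conventional trellis; it is a generic textbook example, not the article's minimal-trellis construction:

```python
# Sketch of Viterbi decoding on the conventional regular trellis of a
# convolutional code. Rate 1/2, memory 2, generators (7, 5) octal; a
# generic example, not the article's minimal-trellis construction.

G = [0b111, 0b101]   # generator polynomials
MEMORY = 2

def encode(bits):
    """Encode an input bit sequence; two output bits per input bit."""
    state, out = 0, []
    for b in bits:
        reg = (b << MEMORY) | state          # (current bit, previous two bits)
        out += [bin(reg & g).count("1") % 2 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Hard-decision Viterbi decoding over the regular trellis."""
    nstates = 1 << MEMORY
    INF = float("inf")
    metric = [0.0] + [INF] * (nstates - 1)   # start in the all-zero state
    paths = [[] for _ in range(nstates)]
    for i in range(0, len(received), 2):
        new_metric = [INF] * nstates
        new_paths = [None] * nstates
        for s in range(nstates):
            if metric[s] == INF:
                continue
            for b in (0, 1):                 # the two trellis edges leaving s
                reg = (b << MEMORY) | s
                branch = [bin(reg & g).count("1") % 2 for g in G]
                ns = reg >> 1
                m = metric[s] + sum(x != y for x, y in zip(branch, received[i:i + 2]))
                if m < new_metric[ns]:
                    new_metric[ns], new_paths[ns] = m, paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(nstates), key=lambda s: metric[s])]

# One channel error among 16 coded bits is corrected (free distance 5).
message = [1, 0, 1, 1, 0, 0, 0, 0]           # last two bits terminate the trellis
coded = encode(message)
coded[3] ^= 1
decoded = viterbi(coded)
```

    The work per decoded bit is proportional to the number of trellis edges explored, which is exactly the complexity measure the article uses to compare block and convolutional codes.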

  13. Serial-Turbo-Trellis-Coded Modulation with Rate-1 Inner Code

    NASA Technical Reports Server (NTRS)

    Divsalar, Dariush; Dolinar, Sam; Pollara, Fabrizio

    2004-01-01

    Serially concatenated turbo codes have been proposed to satisfy requirements for low bit- and word-error rates and for low complexity (in comparison with related previous codes) of coding and decoding algorithms, and thus low complexity of coding and decoding circuitry. These codes are applicable to such high-level modulations as octonary phase-shift keying (8PSK) and 16-state quadrature amplitude modulation (16QAM); the signal product obtained by applying one of these codes to one of these modulations is denoted, generally, as serially concatenated trellis-coded modulation (SCTCM). These codes could be particularly beneficial for communication systems that must be designed and operated subject to limitations on bandwidth and power. Some background information is prerequisite to a meaningful summary of this development. Trellis-coded modulation (TCM) is now a well-established technique in digital communications. A turbo code combines binary component codes (which typically include trellis codes) with interleaving. A turbo code of the type that has been studied prior to this development is composed of parallel concatenated convolutional codes (PCCCs) implemented by two or more constituent systematic encoders joined through one or more interleavers. The input information bits feed the first encoder and, after having been scrambled by the interleaver, enter the second encoder. A code word of a parallel concatenated code consists of the input bits to the first encoder followed by the parity check bits of both encoders. The suboptimal iterative decoding structure for such a code is modular, and consists of a set of concatenated decoding modules, one for each constituent code, connected through an interleaver identical to the one on the encoder side. Each decoder performs weighted soft decoding of the input sequence. PCCCs yield very large coding gains at the cost of a reduction in the data rate and/or an increase in bandwidth.
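
    The PCCC structure described above can be sketched compactly: two recursive systematic convolutional (RSC) encoders see the same input bits, the second through an interleaver, and the code word is the systematic bits followed by both parity streams. The tiny RSC (feedback 1+D+D^2, feedforward 1+D^2) and the fixed interleaver are illustrative choices, not from the article:

```python
# Sketch of a parallel concatenated convolutional code (PCCC): the same
# input feeds two recursive systematic convolutional (RSC) encoders, the
# second through an interleaver; the code word is the systematic bits
# followed by both parity streams. Taps and interleaver are illustrative.

def rsc_parity(bits):
    """Parity stream of a rate-1/2 RSC (feedback 1+D+D^2, feedforward 1+D^2)."""
    s1 = s2 = 0
    parity = []
    for b in bits:
        d = b ^ s1 ^ s2        # feedback into the shift register
        parity.append(d ^ s2)  # feedforward output
        s1, s2 = d, s1
    return parity

def turbo_encode(bits, perm):
    """Systematic bits + parity of both constituent encoders (rate 1/3)."""
    interleaved = [bits[i] for i in perm]
    return bits + rsc_parity(bits) + rsc_parity(interleaved)

message = [1, 0, 1, 1, 0, 0, 1, 0]
perm = [3, 0, 6, 1, 7, 4, 2, 5]          # a fixed illustrative interleaver
codeword = turbo_encode(message, perm)   # 24 coded bits for 8 input bits
```

    The serial concatenation of the article replaces this parallel arrangement with an outer encoder feeding an inner rate-1 encoder through the interleaver, but the building blocks are the same.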

  14. The future of PanDA in ATLAS distributed computing

    NASA Astrophysics Data System (ADS)

    De, K.; Klimentov, A.; Maeno, T.; Nilsson, P.; Oleynik, D.; Panitkin, S.; Petrosyan, A.; Schovancova, J.; Vaniachine, A.; Wenaus, T.

    2015-12-01

    Experiments at the Large Hadron Collider (LHC) face unprecedented computing challenges. Heterogeneous resources are distributed worldwide at hundreds of sites, thousands of physicists analyse the data remotely, the volume of processed data is beyond the exabyte scale, while data processing requires more than a few billion hours of computing usage per year. The PanDA (Production and Distributed Analysis) system was developed to meet the scale and complexity of LHC distributed computing for the ATLAS experiment. In the process, the old batch job paradigm of locally managed computing in HEP was discarded in favour of a far more automated, flexible and scalable model. The success of PanDA in ATLAS is leading to widespread adoption and testing by other experiments. PanDA is the first exascale workload management system in HEP, already operating at more than a million computing jobs per day, and processing over an exabyte of data in 2013. There are many new challenges that PanDA will face in the near future, in addition to new challenges of scale, heterogeneity and increasing user base. PanDA will need to handle rapidly changing computing infrastructure, will require factorization of code for easier deployment, will need to incorporate additional information sources including network metrics in decision making, be able to control network circuits, handle dynamically sized workload processing, provide improved visualization, and face many other challenges. In this talk we will focus on the new features, planned or recently implemented, that are relevant to the next decade of distributed computing workload management using PanDA.

  15. Turbo Codes with Modified Code Matched Interleaver for Coded-Cooperation in Half-Duplex Wireless Relay Networks

    NASA Astrophysics Data System (ADS)

    Ejaz, Saqib; Yang, Feng-Fan

    2015-03-01

    The parallel encoding and decoding structure of turbo codes makes them a natural candidate for coded-cooperative scenarios. In this paper, we focus on one of the key components of turbo codes, the interleaver, and analyze its effect on the performance of coded-cooperative communication. The impact of an interleaver on the overall performance of cooperative systems depends on the type of interleaver and its location in the cooperative encoding scheme. We consider the code matched interleaver (CMI) an optimum choice and present its role in a coded-cooperation scenario. The search for and convergence of a CMI at long interleaver sizes is an issue; therefore, a modification of the search conditions is included without any compromise in the performance of the CMI. We also present an analytical method to determine the maximum S-constraint length for a CMI design. Further, we analyze the performance of two different encoding schemes of turbo codes, the distributed turbo code (DTC) and the distributed multiple turbo code (DMTC), after inclusion of the CMI. Monte Carlo simulations show that the CMI increases the diversity gain relative to conventional interleavers such as the uniform random interleaver. The channel is assumed to be Rayleigh fading among all communication nodes.
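
    The S-constraint the abstract refers to is the spread condition used in S-random interleavers: any two positions within S of each other must be mapped to values more than S apart. The sketch below implements the generic S-random construction with random restarts; it is not the article's code matched interleaver, only the underlying spread idea:

```python
import random

# Sketch of the S-random spread constraint: positions within S of each
# other must map more than S apart. Generic construction with restarts,
# not the article's code matched interleaver. Sizes are illustrative.

def s_random_interleaver(n, s, rng, max_restarts=1000):
    for _ in range(max_restarts):
        remaining = list(range(n))
        perm = []
        while remaining:
            # Candidates must differ by more than s from the last s picks.
            ok = [v for v in remaining
                  if all(abs(v - p) > s for p in perm[-s:])]
            if not ok:
                break                      # dead end; restart the search
            pick = rng.choice(ok)
            perm.append(pick)
            remaining.remove(pick)
        if not remaining:
            return perm
    raise RuntimeError("no S-random permutation found; reduce s")

perm = s_random_interleaver(32, 3, random.Random(1))
```

    Spreading nearby positions far apart breaks up low-weight input patterns, which is the same goal the code matched search conditions pursue more aggressively.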

  16. Multidimensional Trellis Coded Phase Modulation Using a Multilevel Concatenation Approach. Part 1; Code Design

    NASA Technical Reports Server (NTRS)

    Rajpal, Sandeep; Rhee, Do Jun; Lin, Shu

    1997-01-01

    The first part of this paper presents a simple and systematic technique for constructing multidimensional M-ary phase shift keying (MPSK) trellis coded modulation (TCM) codes. The construction is based on a multilevel concatenation approach in which binary convolutional codes with good free branch distances are used as the outer codes and block MPSK modulation codes are used as the inner codes (or the signal spaces). Conditions on phase invariance of these codes are derived and a multistage decoding scheme for these codes is proposed. The proposed technique can be used to construct good codes for both the additive white Gaussian noise (AWGN) and fading channels, as is shown in the second part of this paper.

  17. Mechanism on brain information processing: Energy coding

    NASA Astrophysics Data System (ADS)

    Wang, Rubin; Zhang, Zhikang; Jiao, Xianfa

    2006-09-01

    Motivated by the experimental finding that signal transmission and neuronal energetic demands are tightly coupled to information coding in the cerebral cortex, the authors present a new theory that offers a distinctive mechanism for brain information processing. They demonstrate that the neural coding produced by the activity of the brain is well described by the theory of energy coding. Because the energy coding model reveals mechanisms of brain information processing from known biophysical properties, it can not only reproduce various experimental results of neuroelectrophysiology but also quantitatively explain, by the principle of energy coding, recent experimental results from neuroscientists at Yale University. Because the theory of energy coding bridges the gap between functional connections within a biological neural network and energetic consumption, the authors expect it to have important consequences for quantitative research on cognitive function.

  18. The Proteus Navier-Stokes code

    NASA Technical Reports Server (NTRS)

    Towne, Charles E.; Bui, Trong T.; Cavicchi, Richard H.; Conley, Julianne M.; Molls, Frank B.; Schwab, John R.

    1992-01-01

    An effort is currently underway at NASA Lewis to develop two- and three-dimensional Navier-Stokes codes, called Proteus, for aerospace propulsion applications. The emphasis in the development of Proteus is not algorithm development or research on numerical methods, but rather the development of the code itself. The objective is to develop codes that are user-oriented, easily modified, and well documented. Well-proven, state-of-the-art solution algorithms are being used. Code readability, documentation (both internal and external), and validation are being emphasized. This paper is a status report on the Proteus development effort. The analysis and solution procedure are described briefly, and the various features in the code are summarized. The results from some of the validation cases that have been run are presented for both the two- and three-dimensional codes.

  19. Logical operator tradeoff for local quantum codes

    NASA Astrophysics Data System (ADS)

    Haah, Jeongwan; Preskill, John

    2011-03-01

    We study the structure of logical operators in local D-dimensional quantum codes, considering both subsystem codes with geometrically local gauge generators and codes defined by geometrically local commuting projectors. We show that if the code distance is d, then any logical operator can be supported on a set of specified geometry containing d̃ qubits, where d̃·d^(1/(D-1)) = O(n) and n is the code length. Our results place limitations on partially self-correcting quantum memories, in which at least some logical operators are protected by energy barriers that grow with system size. We also show that two-dimensional codes defined by local commuting projectors admit logical "string" operators and are not self-correcting. Supported by NSF PHY-0803371, DOE DE-FG03-92-ER40701, NSA/ARO W911NF-09-1-0442, and KFAS.

  20. An Experiment in Scientific Code Semantic Analysis

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.

    1998-01-01

    This paper concerns a procedure that analyzes aspects of the meaning, or semantics, of scientific and engineering code. The procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, distributed expert parsers. These semantic parsers are designed to recognize formulae in different disciplines, including physical and mathematical formulae and geometrical position in a numerical scheme. The parsers automatically recognize and document some static semantic concepts and locate some program semantic errors. Results are shown for a subroutine test case and a collection of combustion code routines. The ability to locate some semantic errors and document semantic concepts in scientific and engineering code should reduce the time, risk, and effort of developing and using these codes.
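
    A toy version of such semantic analysis is dimensional checking: declare physical units for primitive variables, then propagate them through parsed expressions and flag incompatible additions as static semantic errors. The variable declarations and the tiny expression grammar below are illustrative, not the paper's parsers:

```python
import ast

# Toy semantic analysis in the spirit of the paper: primitive variables
# carry declared units (exponents of length, time, mass), and a small
# parser propagates them through expressions to flag dimensional errors.
# Names and grammar are illustrative, not the paper's.

DECLARED = {
    "dx":  (1, 0, 0),    # length
    "dt":  (0, 1, 0),    # time
    "u":   (1, -1, 0),   # velocity = length / time
    "rho": (-3, 0, 1),   # density = mass / length^3
}

def units_of(node):
    if isinstance(node, ast.Name):
        return DECLARED[node.id]
    if isinstance(node, ast.BinOp):
        left, right = units_of(node.left), units_of(node.right)
        if isinstance(node.op, (ast.Add, ast.Sub)):
            if left != right:
                raise TypeError("adding quantities with incompatible units")
            return left
        if isinstance(node.op, ast.Mult):
            return tuple(a + b for a, b in zip(left, right))
        if isinstance(node.op, ast.Div):
            return tuple(a - b for a, b in zip(left, right))
    raise ValueError("unsupported construct")

def check(expr):
    """Return the units of an expression over declared variables, or raise."""
    return units_of(ast.parse(expr, mode="eval").body)
```

    Here check("u + dx / dt") passes with velocity units, while check("u + dt") raises, which is the kind of static semantic error the paper's parsers are meant to locate.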